Monday, December 30, 2019

windows server 2016 - Cross-Realm-Trust between Active Directory and MIT Kerberos

I am currently in the process of extending my development environment, which so far has only run Linux servers, by adding machines running Windows Server 2016. Authentication is currently handled by MIT Kerberos. For the new Windows machines, I am planning on using Active Directory. Since I don't want to manage users in two systems, I am setting up a cross-realm trust between the Windows AD and the existing MIT Kerberos installation.



To do that, I have followed this guide: https://bluedata.zendesk.com/hc/en-us/articles/115007484067-How-To-Establish-Cross-Realm-Trust-from-MIT-KDC-to-AD.



Now, I have noticed that I can obtain a ticket from the Windows AD for an AD user on a Linux machine just fine: running kinit Administrator@AD.DOMAIN.LOCAL completes without any errors and gives me a ticket as expected.



On the other hand, I cannot log in to any of the Windows machines using an account from the MIT Kerberos setup. Trying to log in with my test account (test@DOMAIN.LOCAL from the MIT realm DOMAIN.LOCAL) throws the following error:



"The security database on the server does not have a computer account for this workstation trust relationship".




Another thing I am noticing is that when I try to verify the trust relationship using the command netdom trust DOMAIN.LOCAL /Domain:AD.DOMAIN.LOCAL /Kerberos /verbose /verify, I am getting the following error message:



"Unable to contact the domain DOMAIN.LOCAL. The command failed to complete successfully."



It seems like the Windows AD is unable to communicate with the MIT Kerberos installation, which is odd, because it evidently does work the other way around. I have already double-checked that all the DNS records (domain.local, ad.domain.local and the FQDNs for the KDCs) resolve to the correct IP addresses. While researching the problem, I stumbled across this post https://stackoverflow.com/questions/45236577/using-mit-kerberos-as-account-domain-for-windows-ad-domain, which seemed promising at first but didn't fix my problem. Any help is greatly appreciated!
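

For reference, Windows machines typically only learn about an MIT realm through ksetup, and interactive logons with MIT principals additionally need a principal-to-account mapping. A sketch of both, with kdc1.domain.local standing in as a placeholder for the real KDC hostname:

ksetup /addkdc DOMAIN.LOCAL kdc1.domain.local
REM map the MIT principal to an account so interactive logon can work
ksetup /mapuser test@DOMAIN.LOCAL test
REM running ksetup with no arguments prints the current realm configuration
ksetup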

Sunday, December 29, 2019

Configure more than two logical drives in an HP ProLiant Gen8 server?

I have an HP ProLiant DL380 Gen8 server. I have two RAID 1 logical drives (each a mirror of two 300 GB HDDs). I have now put 4 extra physical disks into the server, but it won't let me create the logical drives for them. I want to create another new logical drive from those 4 physical disks.



When I try to add them from the RAID utility (F8 during boot) I get a message saying ORCA can't handle any more logical drives, and that I should use the Array Configuration Utility to add them. I tried using the ACU to add them, but I can't see how to do it. The disks are all picked up and labeled "un-allocated", yet I can't find any way to allocate them.
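

(For reference, the same operation can also be attempted from the OS with HP's command-line tool hpacucli; the slot number and drive IDs below are placeholders you'd read from the show output:)

hpacucli ctrl all show config                 # identify the controller slot and unassigned drives
hpacucli ctrl slot=0 create type=ld drives=2I:1:5,2I:1:6,2I:1:7,2I:1:8 raid=5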

Friday, December 27, 2019

How can I delete SPAM 'Delivery Failure' emails at the server?



A friend of mine is having problems with large numbers of "Your EMail was Undeliverable" type messages coming into his inbox.



The original messages contain a 'Please Send Money' scam and were not sent from my friend's account; someone is just spoofing the 'From' address.



The problem is that the spam-bot is sending out a massive number of emails, but many are to accounts that do not exist - hence the undeliverable errors.

He is using Outlook Web Access (provided by his ISP) and does not have the ability to mark messages as Junk or to create a rule that scans the body of the email (only the subject and sender). I have been able to create some basic rules to move many of the messages to the Deleted Items folder (based on a common subject), but I'm worried about being too generic in case I end up deleting genuine messages.



How can I stop these messages from filling up his Inbox/Deleted Items?



Thanks in Advance


Answer



Just something to consider: if the ISP isn't doing proper spam filtering, you might want another provider for your email. If you sign your friend up for a Gmail account, he'd still have a web interface plus POP3/IMAP access, and he can just forget about his ISP-provided mail account.



I hate it too, though. I've had a CompuServe account since 1993, and around 1999/2000 I started having serious spam problems with it. Fortunately, I've always had multiple mail accounts, so I stopped using the CompuServe address, which would just end up being flooded. I added rules to that account to forward emails from people on my whitelist to my new account; all other emails were just trashed. (I did check their senders and titles before trashing them, though.)




I stopped using CompuServe in 2005. I wasn't receiving any more important emails on that account and eventually stopped checking it for new mail. It was all spam anyway, and CompuServe didn't bother to do anything about it, so I had no use for it.



Complain to the ISP, telling them to take action against this flood of spam. They should be able to recognize it and block it even before it reaches your mailbox.
Or just forget that mailbox and use Gmail instead, adding rules to your old mailbox to forward any important messages to the new account.


For businesses, there are also the Google Postini services, which aren't free but might be useful for people who run their own business with their own domain name, or for people who can't switch providers.

Wednesday, December 25, 2019

scripting - Changing PF rules on the fly to mitigate damage of DDoS (OpenBSD 6.4)

This is a two-part question, really. Keep in mind that I am a developer, not a system admin, but being the only employee in the company, I wear ALL the hats.



I have deployed my server with two firewalls running CARP for load balancing/redundancy, plus about 40 computers for database and other backend application needs. As a startup I want to save some money by mitigating the damage of a DDoS attack without paying my ISP for dedicated business internet on top of DDoS protection. I KNOW YOU CAN'T TOTALLY protect against DDoS. I just want to mitigate damage until my app starts making money, and then I can let the ISP deal with the headaches.




In that spirit, I was wondering if anybody has ever implemented a solution where a script (maybe run through cron) changes the PF rules based on current usage. For example, if there are too many half-open connections from millions of IP addresses, I would like to tell PF to go into SYN-cookies mode, and then when the attack is over (or some time has passed) to go back to normal.
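

A minimal sketch of what I have in mind (the threshold, the rule-set paths, and the use of SYN_SENT states as a proxy for half-open connections are all placeholders):

#!/bin/sh
# run from cron every minute: switch PF to a synproxy rule set when the
# number of embryonic TCP states crosses a threshold, and back when it drops
THRESHOLD=5000
HALF_OPEN=$(pfctl -ss | grep -c 'SYN_SENT')
if [ "$HALF_OPEN" -gt "$THRESHOLD" ]; then
    pfctl -f /etc/pf.synproxy.conf    # same rules, but with "synproxy state" on the public services
else
    pfctl -f /etc/pf.normal.conf
fi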



I cannot use Cloudflare because I am running a backend for an app and 99% of the content is not static. I could use Cloudflare for the app's website, but that's about it.



To reiterate, money IS AN ISSUE. I am currently using FIOS Business and Verizon will not provide DDoS protection on that type of line.



Last thing: has anybody experienced drastic issues after enabling SYN-COOKIES/SYN-PROXYING? Give me a real story. Please.



PS I do not want to start a debate about SYN-PROXYING vs SYN-COOKIES!

Tuesday, December 24, 2019

nameserver - Domain is reporting incorrect Name Server information



I am running a VPS that is hosting five different domains. Everything has been fine until I wanted to use our inactive domain to set up Google Apps for Business. I am unable to verify the domain because the DNS on that one domain is really messed up. To me the setup looks no different from the others that are working fine. This is an unmanaged VPS, so I'm hoping that someone here may see what is wrong.



The server uses its own name servers, which are correctly set at the registrar. They are like so:



[screenshot: the name server entries as set at the registrar]




My first domain, plangator.com, is mostly reporting OK at intodns. Here is its zone file:



; Zone file for plangator.com
$TTL 14400
plangator.com. 86400 IN SOA ns1.lamardesigngroup.com. rlamar4088.aol.com. (
2016020105 ;Serial Number
86400 ;refresh
7200 ;retry
3600000 ;expire

86400 ;minimum
)
plangator.com. 86400 IN NS ns1.lamardesigngroup.com.
plangator.com. 86400 IN NS ns2.lamardesigngroup.com.
plangator.com. 14400 IN A 212.1.213.8
localhost 14400 IN A 127.0.0.1
plangator.com. 14400 IN MX 0 plangator.com.
mail 14400 IN CNAME plangator.com.
www 14400 IN CNAME plangator.com.
ftp 14400 IN A 212.1.213.8

cpanel 14400 IN A 212.1.213.8
webmail 14400 IN A 212.1.213.8
plangator.com. 14400 IN TXT "v=spf1 mx a ip4:212.1.213.8 include:plangator.com ~all"


One thing that I notice is that it doesn't report the correct IPs for the name servers. 212.1.213.8 is the IP of the server.




Nameserver records returned by the parent servers are:




ns1.lamardesigngroup.com. ['212.1.213.8'] [TTL=172800]



ns2.lamardesigngroup.com. ['212.1.213.8'] [TTL=172800]




My problem domain is gator.digital. Here is its zone file:



; Zone file for gator.digital
$TTL 14400
gator.digital. 86400 IN SOA ns1.lamardesigngroup.com. rlamar4088.aol.com. (

2015101316 ;Serial Number
86400 ;refresh
7200 ;retry
3600000 ;expire
86400 ;minimum
)
gator.digital. 86400 IN NS ns1.lamardesigngroup.com.
gator.digital. 86400 IN NS ns2.lamardesigngroup.com.
gator.digital. 14400 IN A 212.1.213.8
www 14400 IN CNAME gator.digital.

cpanel 14400 IN A 212.1.213.8
gator.digital. 14400 IN TXT google-site-verification=l5pn02kvh4kCGScCaA-IUIb7toL82RnLdiuXdHw0dB8
gator.digital. 3600 IN MX 1 aspmx.l.google.com.
gator.digital. 3600 IN MX 5 alt1.aspmx.l.google.com.
gator.digital. 3600 IN MX 5 alt2.aspmx.l.google.com.
gator.digital. 3600 IN MX 10 alt3.aspmx.l.google.com.
gator.digital. 3600 IN MX 10 alt4.aspmx.l.google.com.
gator.digital. 14400 IN TXT "'v=spf1 include:_spf.google.com ~all'"
localhost 14400 IN A 127.0.0.1



Here is how the name servers are seen for gator.digital.




Nameserver records returned by the parent servers are:



ns2.lamardesigngroup.com. ['198.20.251.114'] (NO GLUE) [TTL=86400]



ns1.lamardesigngroup.com. ['198.20.251.113'] (NO GLUE) [TTL=86400]





And then all of the errors:




NS records from your nameservers NS records got from your nameservers listed at the parent NS are:
Oups! I could not get any nameservers from your nameservers (the ones listed at the parent server). Please verify that they are not lame nameservers and are configured properly.



Same Glue Hmm,I do not consider this to be an error yet, since I did not detect any nameservers at your nameservers.



Glue for NS records OK. Your nameservers (the ones reported by the parent server) have no ideea who your nameservers are so this will be a pass since you already have a lot of errors!




Mismatched NS records WARNING: One or more of your nameservers did not return any of your NS records.



DNS servers responded ERROR: One or more of your nameservers did not respond:
The ones that did not respond are:
198.20.251.114 198.20.251.113



Multiple Nameservers ERROR: Looks like you have less than 2 nameservers. According to RFC2182 section 5 you must have at least 3 nameservers, and no more than 7. Having 2 nameservers is also ok by me.



Missing nameservers reported by your nameservers You should already know that your NS records at your nameservers are missing, so here it is again:




ns2.lamardesigngroup.com.
ns1.lamardesigngroup.com.




It seems that although these are both set up to use the same name servers, the DNS check is looking in two different places.


Answer



Focusing on the actual problem with your domains:



Following the chain of delegations for lamardesigngroup.com you'll see a delegation to ns1.lamardesigngroup.com and ns2.lamardesigngroup.com with glue referring to 212.1.213.8.




lamardesigngroup.com.   172800  IN      NS      ns1.lamardesigngroup.com.
lamardesigngroup.com. 172800 IN NS ns2.lamardesigngroup.com.
ns1.lamardesigngroup.com. 172800 IN A 212.1.213.8
ns2.lamardesigngroup.com. 172800 IN A 212.1.213.8


However, the authoritative records served by 212.1.213.8 are:



lamardesigngroup.com.   86400   IN      NS      ns2.lamardesigngroup.com.

lamardesigngroup.com. 86400 IN NS ns1.lamardesigngroup.com.
ns1.lamardesigngroup.com. 14400 IN A 198.20.251.113
ns2.lamardesigngroup.com. 14400 IN A 198.20.251.114


There's clearly an inconsistency between the glue and authoritative address records for the nameserver names, leading to different addresses being used in different situations.
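

You can observe the mismatch yourself by comparing the glue the .com servers hand out with the records the host itself serves (a sketch using one of the gtld servers):

dig +norecurse @a.gtld-servers.net ns1.lamardesigngroup.com. A    # glue: 212.1.213.8
dig @212.1.213.8 ns1.lamardesigngroup.com. A +short               # authoritative: 198.20.251.113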



This in turn also affects your other domains that use ns1.lamardesigngroup.com and ns2.lamardesigngroup.com as nameservers.


Monday, December 23, 2019

cron - Failover tmpfs mirroring. Am I doing it right?



My goal is to have a certain directory available as tmpfs.
There will be modifications in this directory during server uptime, and those modifications must be synced to a non-tmpfs persistent directory on the HDD via rsync.



After a server boot, the latest version from the non-tmpfs persistent directory must be copied to tmpfs and the rsync syncing started.



I'm afraid that rsync will erase the non-tmpfs backup if the tmpfs dir is empty.




I'm doing it in this way right now:




  1. create tmpfs partition in /etc/fstab

  2. cat /etc/rc.local (pseudocode)



    delete "tmpfs rsync" cronjob from /var/spool/cron/crontabs if there is any



    cp -r /path/to/non-tmpfs-backup /path/to/tmpfs/dir




    append /var/spool/cron/crontabs with "tmpfs rsync" cronjob




What do you think?


Answer



Create some sort of seed file deep in your non-tmpfs directory and only rsync back to non-tmpfs if it exists (meaning the "boot" copy worked), so something like:



BOOT



mount /path/tmpfs

rsync -aq --delete /path/non-tmpfs/ /path/tmpfs/


CRON



if [ -f /path/tmpfs/some/deep/location/filesgood.txt ]; then
rsync -aq --delete /path/tmpfs/ /path/non-tmpfs/
fi
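

(One-time setup for the seed file itself; the path is purely illustrative:)

mkdir -p /path/non-tmpfs/some/deep/location
touch /path/non-tmpfs/some/deep/location/filesgood.txt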



It's not perfect but, if you enhance it (e.g. by checking for several "cookie" files in different directories during the cron run), it should be pretty safe.


Saturday, December 21, 2019

proxy - What is the purpose of netcat's "-w timeout" option when ssh tunneling?



I am in the exact same situation as the person who posted another question: I am trying to tunnel ssh connections through a gateway server instead of having to ssh into the gateway and manually ssh again to the destination server from there. I am trying to set up the solution given in the accepted answer there, a ~/.ssh/config that includes:



host foo
User webby
ProxyCommand ssh a nc -w 3 %h %p


host a
User johndoe


However, when I try to ssh foo, my connection stays alive for 3 seconds and then dies with a Write failed: Broken pipe error. Removing the -w 3 option solves the problem. What is the purpose of that -w 3 in the original solution, and why is it causing a Broken pipe error when I use it? What is the harm in omitting it?


Answer




What is the purpose of that -w 3 in the original solution





It avoids leaving orphaned nc processes running on the remote host when the ssh session is closed improperly.




and why is it causing a Broken pipe error when I use it?




Try increasing the timeout for nc to 90 and setting ServerAliveInterval to 30 to see if your problem goes away:



host foo

User webby
ServerAliveInterval 30
ProxyCommand ssh a nc -w 90 %h %p
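

As a side note, if the gateway's OpenSSH is 5.4 or newer, the -W option can replace nc entirely, which sidesteps the timeout question altogether (a sketch of the same host block):

host foo
User webby
ProxyCommand ssh -W %h:%p a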

linux - "POSSIBLE BREAK-IN ATTEMPT!" in /var/log/secure — what does this mean?




I've got a CentOS 5.x box running on a VPS platform. My VPS host misinterpreted a support inquiry I had about connectivity and effectively flushed some iptables rules. This resulted in ssh listening on the standard port and acknowledging port connectivity tests. Annoying.



The good news is that I require SSH Authorized keys. As far as I can tell, I don't think there was any successful breach. I'm still very concerned about what I'm seeing in /var/log/secure though:






Apr 10 06:39:27 echo sshd[22297]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:27 echo sshd[22298]: Received disconnect from 222.237.78.139: 11: Bye Bye
Apr 10 06:39:31 echo sshd[22324]: Invalid user edu1 from 222.237.78.139
Apr 10 06:39:31 echo sshd[22324]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!

Apr 10 13:39:31 echo sshd[22330]: input_userauth_request: invalid user edu1
Apr 10 13:39:31 echo sshd[22330]: Received disconnect from 222.237.78.139: 11: Bye Bye
Apr 10 06:39:35 echo sshd[22336]: Invalid user test1 from 222.237.78.139
Apr 10 06:39:35 echo sshd[22336]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:35 echo sshd[22338]: input_userauth_request: invalid user test1
Apr 10 13:39:35 echo sshd[22338]: Received disconnect from 222.237.78.139: 11: Bye Bye
Apr 10 06:39:39 echo sshd[22377]: Invalid user test from 222.237.78.139
Apr 10 06:39:39 echo sshd[22377]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:39 echo sshd[22378]: input_userauth_request: invalid user test
Apr 10 13:39:39 echo sshd[22378]: Received disconnect from 222.237.78.139: 11: Bye Bye






What exactly does "POSSIBLE BREAK-IN ATTEMPT" mean? That it was successful? Or that it didn't like the IP the request was coming from?


Answer



Unfortunately this is now a very common occurrence. It is an automated attack on SSH which uses 'common' usernames to try to break into your system. The message means exactly what it says: it does not mean that you have been hacked, just that someone tried.
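

The specific wording comes from sshd's reverse-DNS sanity check: the PTR record for the client's IP names a host that does not resolve back to the same IP. You can reproduce the mismatch by hand:

dig -x 222.237.78.139 +short              # IP -> name (the PTR record)
dig 222-237-78-139.tongkni.co.kr +short   # name -> IP; a mismatch triggers the warning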


ubuntu - Nginx and NSD3 don't start on boot because they cannot use the assigned IP

The server is a Xen VPS running Ubuntu 12.04, and neither nginx nor NSD3 comes up after a reboot. The apparent reason is that they are unable to bind to their assigned IP addresses right after boot.



from /var/log/boot.log



* Starting configure network device                                     [ OK ]

* Stopping save kernel messages [ OK ]
* Starting MTA [ OK ]
nginx: [emerg] bind() to [2a01:1b0:removed:1c9c]:80 failed (99: Cannot assign requested address)
* Starting nsd3... [ OK ]
[...]
* Starting configure virtual network devices [ OK ]
* Stopping configure virtual network devices [ OK ]


from /var/log/nsd.log




[1351715473] nsd[956]: error: can't bind udp socket: Cannot assign requested address
[1351715473] nsd[956]: error: server initialization failed, nsd could not be started


Everything works fine after a couple of seconds, and both nginx and NSD3 can be started.



It seems to me that the problem is the boot order: nginx and NSD3 are started before the network configuration has fully taken place. I worked around it by putting



# nginx and nsd boot fix

sleep 4
/etc/init.d/nsd3 start
/etc/init.d/nginx start


in /etc/rc.local but that's not a proper solution. What is the right way to handle this issue?



Here's my basic network configuration, from /etc/network/interfaces
auto eth0




iface eth0 inet static
address 89.removed.121
gateway 89.removed.1
netmask 255.255.255.0

iface eth0 inet6 static
up echo 0 > /proc/sys/net/ipv6/conf/all/autoconf
up echo 0 > /proc/sys/net/ipv6/conf/default/autoconf
netmask 64
gateway 2a01:removed:0001

address 2a01:removed:7c3b
up ip addr add 2a01:removed:62bd dev eth0 preferred_lft 0
up ip addr add 2a01:removed:ce6d dev eth0 preferred_lft 0
up ip addr add 2a01:removed:3e13 dev eth0 preferred_lft 0
up ip addr add 2a01:removed:1c9c dev eth0 preferred_lft 0

auto lo
iface lo inet loopback



Those awkward up ip addr lines are there because I wanted to add additional IPs but still use the first one for all traffic originating from the server.
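

(A slightly more robust variant of my rc.local workaround would poll for the address instead of sleeping a fixed time, though it is still a workaround rather than the proper fix I'm asking for:)

# wait up to 15 seconds for the IPv6 prefix to appear on eth0, then start the daemons
for i in $(seq 1 15); do
    ip -6 addr show dev eth0 | grep -q '2a01:1b0' && break
    sleep 1
done
/etc/init.d/nsd3 start
/etc/init.d/nginx start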

solaris - Compiling Apache mod_ssl for different target hardware (hardware capability unsupported SSE2 error)



I am building and packaging the following on one machine (the "build" machine) and attempting to install and use on other machines ("target" machines) some of which have different processors.




  • OpenSSL 0.9.8l


  • Apache 2.2.14

  • Tomcat Connectors 1.2.28



The problem, as far as I can tell, is that the build machine has more CPU capabilities than the target machine, resulting in binaries that are not executable on the target machine. I have attempted to use configure and compiler flags to disable use of the offending instructions, without luck.
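

(For reference, the hardware-capability requirement baked into a library can be inspected with Solaris's elfdump; the path below is the one from the error message:)

# SSE2 in the capabilities output means the runtime linker will refuse
# to load the library on a CPU without SSE2
elfdump -H /usr/local/openssl/lib/libssl.so.0.9.8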



Ultimately I get this error:



$ ./apachectl start 




httpd: Syntax error on line 58 of /usr/local/apache-2.2.14/conf/httpd.conf:
Cannot load /usr/local/apache2/modules/mod_ssl.so into server: ld.so.1: httpd:
fatal: /usr/local/openssl/lib/libssl.so.0.9.8: hardware capability unsupported:
0x1000 [ SSE2 ]


Here is my complete build process. Full output from each command can be viewed here. I can't link to them each directly since I don't have enough SF rep.




The Build Machine



$ echo $PATH
/usr/bin:/usr/ccs/bin:/usr/sfw/bin:/opt/sfw/bin:/usr/sbin

$ isainfo -v
32-bit i386 applications
pause sse2 sse fxsr mmx cmov sep cx8 tsc fpu

$ uname -a

SunOS bsiausstgdb02 5.10 Generic_120012-14 i86pc i386 i86pc


The Target Machine



$ isainfo -v
32-bit i386 applications
sse fxsr mmx cmov sep cx8 tsc fpu

$ uname -a

SunOS bsiausdevweb01 5.10 Generic_120012-14 i86pc i386 i86pc


Compile OpenSSL 0.9.8l



$ CC=/usr/bin/cc
$ export CC

$ CFLAGS="-xarch=sse"
$ export CFLAGS


$ ./Configure \
solaris-x86-cc \
shared \
no-asm \
no-sse2 \
-xarch=sse \
--openssldir=/usr/local/openssl-0.9.8l



view full output:
openssl-configure.txt



$ make && make test


view full output:
openssl-make-and-test.txt



$ sudo make install



view full output:
openssl-make-install.txt



Compile Apache 2.2.14



$ CC=/usr/bin/cc
$ export CC


$ CFLAGS="-xarch=sse"
$ export CFLAGS

$ ./configure \
--prefix=/usr/local/apache-2.2.14 \
--with-mpm=prefork \
--enable-so \
--enable-unique-id=shared \
--enable-rewrite=shared \
--enable-spelling=shared \

--enable-info=shared \
--enable-headers=shared \
--enable-deflate=shared \
--enable-expires=shared \
--enable-unique-id=shared \
--enable-speling=shared \
--enable-ssl=shared \
--with-ssl=/usr/local/openssl



view full output:
apache-configure.txt



$ make


view full output:
apache-make.txt



$ sudo make install



view full output:
apache-make-install.txt



Compile Tomcat Connectors 1.2.28



$ CC=/usr/bin/cc
$ export CC


$ CFLAGS="-xarch=sse"
$ export CFLAGS

$ cd native
$ ./configure \
--with-apxs=/usr/local/apache2/bin/apxs


view full output:
tomcat-connector-configure.txt




$ make


view full output:
tomcat-connector-make.txt



$ sudo make install



view full output:
tomcat-connector-make-install.txt



Testing



At this point everything will work on the build machine. Once I package these files and install them on the target machine, I get this error when Apache is started with mod_ssl enabled.



$ ./apachectl start




httpd: Syntax error on line 58 of /usr/local/apache-2.2.14/conf/httpd.conf:
Cannot load /usr/local/apache2/modules/mod_ssl.so into server: ld.so.1: httpd:
fatal: /usr/local/openssl/lib/libssl.so.0.9.8: hardware capability unsupported:
0x1000 [ SSE2 ]

Answer



I worked around this problem by building the packages on a machine with equivalent hardware to the target machine and using the Sun Studio CC compiler instead of gcc.


Thursday, December 19, 2019

redirect - Apache returning wrong Location header

The issue happens when you:





  1. issue a request with the header "Host" including the port, e.g. "Host: www.example.com:80", which is legal as per https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.23. You can do it for instance with curl: curl -v -H "Host: www.example.com:80" -X GET -i http://www.example.com

  2. the server issues a redirect to https for that request, in my case using the following RewriteRule




RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]



I noticed that the "Location" header of the response also includes the port, and it's the same of that specified in the "Host" header of the request. So the server would respond with "Location: https://www.example.com:80", which is wrong.



This happens to me with "Apache/2.4.7 (Ubuntu)", but I noticed the issue also with Varnish cache server. Why does it behave this way? Is there a way to correct this?
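

One way to work around it is to strip any port from the Host header before building the redirect (a sketch of the modified rule):

RewriteCond %{HTTPS} off
# capture the hostname without any trailing :port; %1 refers to the
# parenthesized match of the last matched RewriteCond
RewriteCond %{HTTP_HOST} ^([^:]+)
RewriteRule (.*) https://%1%{REQUEST_URI} [R=301,L]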

routing - preferential active directory one-way trust targets



If you have two domains and forests, Domain + Forest A and Domain + Forest B, and you are creating a one-way trust so that Domain + Forest B will implicitly trust A, is there a way to make sure all the trust-related traffic from the DCs in A goes through only ONE preselected DC in Domain B?




All the domains and forests are at Windows Server 2003 functional level. Upgrading B is an option.



Totally stumped. Update the root hints maybe? Having this restriction will make certain routing issues (avoiding setting up more IPSEC tunnels) MUCH easier with regard to trust traffic encryption.


Answer



You'd do this by ensuring name resolution queries by Domain A for Domain B return just the DC of interest. If you are forwarding DNS traffic to Domain B for Domain B queries, that means getting the DCs of Domain B not to register certain records by using DNS mnemonics ( http://support.microsoft.com/kb/267855 ). Probably not what you want.



The alternative is to host your own version of the DNS zone(s) for Domain B on the Domain A side, with just the detail required, so that when queries like _kerberos._tcp.dc._msdcs.domainb.com or _ldap._tcp.dc._msdcs.domainb.com are issued, they return just the DC of interest.
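

A minimal sketch of such a shadow zone (names and address purely illustrative, with dc1.domainb.com as the chosen DC):

; only the records the Domain A side needs, all pointing at one DC
_ldap._tcp.dc._msdcs.domainb.com.     600 IN SRV 0 100 389 dc1.domainb.com.
_kerberos._tcp.dc._msdcs.domainb.com. 600 IN SRV 0 100 88  dc1.domainb.com.
dc1.domainb.com.                      600 IN A   192.0.2.10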



I also take it you are not concerned with the single point of failure created by choosing one Domain B DC.



setting IPv6 address for domain



I am quite new to the server business, and I was wondering about a thing with IPv6 addresses:



When I assign an IPv6 address for my domain as an AAAA record, do I assign the /64 or a single complete address out of the /64 that I got from my provider?



The thing is that I only got a /64, so I somehow divided it amongst my domains, but I get the impression that I am doing this wrong...




Thanks in advance!


Answer



You assign a full address (/128) in a quad-A record. The /64 is a range of addresses for you to allocate from.



For example:



2604:4301:a:103::/64 is my range, and I can use any address between 2604:4301:a:103:: (the :: is shorthand for all-zeros) and 2604:4301:a:103:FFFF:FFFF:FFFF:FFFF.
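

In zone-file terms, a single AAAA record out of that range might look like this (illustrative values):

; one full /128 address picked from the /64
www.example.com. 3600 IN AAAA 2604:4301:a:103::80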


Is it possible to set up a web server on the same domain used by Active Directory?

I'm a web developer at a large-ish organization and our web site is hosted by our IT department. Our website will not load without the "www" subdomain in front of it, and IT says that it's because Active Directory must use the primary domain, so the web server must use a subdomain. They say it's not possible to fix. I'm highly skeptical of this claim because this hasn't been a problem anywhere else I've worked or heard of, but I'm not familiar enough with the technologies in question to argue the point.



So my question is, does this sound reasonable? Is it not possible to use AD on the same domain as a web server? Thanks!

Monday, December 16, 2019

windows - Can't access the internet when connected to OpenVPN server

I have recently installed OpenVPN on my Windows 2003 server.
Once someone is connected to the server, they do not have internet access.





  • My network is on 192.168.1.1

  • my server is on 192.168.1.110

  • I am using the dd-wrt firmware

  • I have enabled port 1194 for 192.168.1.110 on the router

  • Routing and Remote Access is disabled

  • I have 2 Tap-Win32 Adapter V8(s) on my windows 2003 server

  • I have tried setting this line to 192.168.1.1 and also my isp's dns servers
    push "dhcp-option DNS 192.168.1.1" # Replace the Xs with the IP address of the DNS for your
    home network (usually your ISP's DNS)


  • I have created an advanced routing Gateway in dd-wrt



     Destination LAN NET: 192.168.10.0
    Subnet Mask: 255.255.255.252
    Gateway: 192.168.1.110
    Interface: Lan & WLAN



I have followed this website exactly: http://www.itsatechworld.com/2006/01/29/how-to-configure-openvpn/




EDIT: I just tried to connect through the cmd prompt and get the following subnet error - potential route subnet conflict between local LAN [192.168.1.0/255.255.255.0] and remote VPN [192.168.1.0/255.255.255.0]



My server file looks as follows:



local 192.168.1.110 # This is the IP address of the real network interface on the server connected to the router

port 1194 # This is the port OpenVPN is running on - make sure the router is port forwarding this port to the above IP

proto udp # UDP tends to perform better than TCP for VPN


mssfix 1400 # This setting fixed problems I was having with apps like Remote Desktop

push "dhcp-option DNS 192.168.1.1" # Replace the Xs with the IP address of the DNS for your home network (usually your ISP's DNS)

#push "dhcp-option DNS X.X.X.X" # A second DNS server if you have one

dev tap

#dev-node MyTAP #If you renamed your TAP interface or have more than one TAP interface then remove the # at the beginning and change "MyTAP" to its name


ca "ca.crt"

cert "server.crt"

key "server.key" # This file should be kept secret

dh "dh1024.pem"

server 192.168.10.0 255.255.255.128 # This assigns the virtual IP address and subnet to the server's OpenVPN connection. Make sure the Routing Table entry matches this.


ifconfig-pool-persist ipp.txt

push "redirect-gateway def1" # This will force the clients to use the home network's internet connection

keepalive 10 120

cipher BF-CBC # Blowfish (default) encryption

comp-lzo


max-clients 100 # Assign the maximum number of clients here

persist-key

persist-tun

status openvpn-status.log

verb 1 # This sets how detailed the log file will be. 0 causes problems and higher numbers can give you more detail for troubleshooting



My client1 file is as follows:



client

dev tap

#dev-node MyTAP #If you renamed your TAP interface or have more than one TAP interface then remove the # at the beginning and change "MyTAP" to its name


proto udp

remote my-dyna-dns.com 1194 #You will need to enter you dyndns account or static IP address here. The number following it is the port you set in the server's config

route 192.168.1.0 255.255.255.0 vpn_gateway 3 #This it the IP address scheme and subnet of your normal network your server is on. Your router would usually be 192.168.1.1

resolv-retry infinite

nobind


persist-key

persist-tun

ca "ca.crt"

cert "client1.crt" # Change the next two lines to match the files in the keys directory. This should be be different for each client.

key "client1.key" # This file should be kept secret


ns-cert-type server

cipher BF-CBC # Blowfish (default) encryption

comp-lzo

verb 1


Thanks in advance!

windows server 2008 r2 - Replace wildcard certificate on multiple sites at once (using command line) on IIS 7.5



I have 3 websites: aaa.my-domain.com, bbb.my-domain.com and ccc.my-domain.com all using a single wildcard certificate *.my-domain.com on IIS 7.5 Windows Server 2008R2 64-bit. That certificate expires in a month and I have a new wildcard certificate *.my-domain.com on my server ready.



I want all those domains to use the new wildcard certificate without noticeable downtime.




I tried the usual approach through the UI, starting with replacing the certificate for aaa.my-domain.com in the "Edit Site Binding" window in IIS 7.5.



But when I press OK, I get the following error:




--------------------------- Edit Site Binding ---------------------------



At least one other site is using the same HTTPS binding and the binding is configured with a different certificate. Are you sure that you want to reuse this HTTPS binding and reassign the other site or sites to use the new certificate?




--------------------------- Yes No ---------------------------




When I click Yes, I get the following message:




--------------------------- Edit Site Binding ---------------------------



The certificate associated with this binding is also assigned to another site's binding. Editing this binding will cause the HTTPS binding of the other site to be unusable. Do you still want to continue?




--------------------------- Yes No ---------------------------




This message tells me that https://bbb.my-domain.com and https://ccc.my-domain.com will become unusable, and I will have downtime for those at least until I'm done replacing the certificate for those 2 domains too, right?



I was thinking that there must be a smarter way of doing this, possibly through the command line, that replaces the wildcard certificate with a new one for all websites at once. I couldn't find any resources online on how to do that. Any ideas?




Answer



The context of the answer is that IIS 7 doesn't actually care about the certificate binding. IIS 7 only ties websites to one or more sockets, each socket being a combination of IP + port. Source: IIS7 add certificate to site from command line



So what we want to do is re-bind the certificate at the OS layer. The OS takes control of the SSL part, so you use netsh to associate a certificate with a particular socket. This is done with netsh http add sslcert.




When we bind a (new) certificate to a socket (ip + port), all sites using that socket will use the new certificate.



The command to bind a certificate to a socket is:
netsh http add sslcert ipport=10.100.0.12:443 certhash=1234567890123456789012345678901234567890 appid={12345678-1234-1234-1234-999999999999}





This part explains how to proceed step-by-step. It assumes you have some websites (aaa.my-domain.com, bbb.my-domain.com) running a *.my-domain.com certificate that is about to expire. You already have a new certificate that you already installed on the server but not yet applied to the websites on IIS.



First, we need to find out 2 things. The certhash of your new certificate and the appid.





  • certhash Specifies the SHA hash of the certificate. This hash is 20 bytes long and specified as a hexadecimal string.

  • appid Specifies the GUID to identify the owning application, which is IIS itself.



Find the certhash



Execute the certutil command to get all certificates on the machine:




certutil -store My



I don't need all the information, so I filter:



certutil -store My | findstr /R "sha1 my-domain.com ===="



Among the output you should find your new certificate ready on your server:



================ Certificate 5 ================
Subject: CN=*.my-domain.com, OU=PositiveSSL Wildcard, OU=Domain Control Validated

Cert Hash(sha1): 12 34 56 78 90 12 34 56 78 90 12 34 56 78 90 12 34 56 78 90



1234567890123456789012345678901234567890 is the certhash we were looking for. It's the Cert Hash(sha1) without the spaces.



Find the appid



Let's start off by looking at all certificate-socket bindings:



netsh http show sslcert




Or one socket in particular



netsh http show sslcert ipport=10.100.0.12:443



Output:



SSL Certificate bindings:
----------------------
IP:port : 10.100.0.12:443

Certificate Hash : 1111111111111111111111111111111111111111
Application ID : {12345678-1234-1234-1234-123456789012}
Certificate Store Name : MY
Verify Client Certificate Revocation : Enabled
Verify Revocation Using Cached Client Certificate Only : Disabled
Usage Check : Enabled
Revocation Freshness Time : 0
URL Retrieval Timeout : 0
Ctl Identifier : (null)
Ctl Store Name : (null)

DS Mapper Usage : Disabled
Negotiate Client Certificate : Disabled


{12345678-1234-1234-1234-123456789012} is the appid we were looking for. It's the Application ID of IIS itself. Here you see the socket 10.100.0.12:443 is currently still bound to the old certificate (Hash 111111111...)



bind a (new) certificate to a socket



Open a command prompt and run it as an administrator. If you don't run it as administrator, you'll get an error like: "The requested operation requires elevation (Run as administrator)."




First remove the current certificate-socket binding using this command



netsh http delete sslcert ipport=10.100.0.12:443



You should get:



SSL Certificate successfully deleted



Then use this command (found here) to add the new certificate-socket binding with the appid and the certhash (without spaces) that you found earlier:




netsh http add sslcert ipport=10.100.0.12:443 certhash=1234567890123456789012345678901234567890 appid={12345678-1234-1234-1234-123456789012}



You should get:



SSL Certificate successfully added



DONE. You just replaced the certificate of all websites that are bound to this IP + port (socket).


smtp - Postfix mail server refuses connections from outside mail servers




I have a Postfix server with SMTP listening on port 587 which cannot be reached by outside mail servers like Gmail, and hence I receive this mail delivery failure when sending an email from Gmail to useraccount@mydomain.tld:



The recipient server did not accept our requests to connect. Learn more at https://support.google.com/mail/answer/7720
[mail.mydomain.tld MailServerIP:(It is interesting that there is no port here!) socket error]



----- Original message -----

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=gmail.com; s=20120113;
h=mime-version:in-reply-to:references:from:date:message-id:subject:to;

bh=pEP+FUpQu4YrUIJfRtRY72qvieH+prFPrjpP+XncC+A=;
b=xWURH+CuLyCB2dCkDZTmlncHMmvAaP24KwgoqUxur1FxRye7cJ4qAHYDjEQLGoecJO
U3ka/qkBSwcDnCsrBZc+I4YL7sN6pRJvBatv/EXbYdwoczq8LoizXWuYKxprCgSiVKu5
3eFdaFN8dCBXJncp4mMMOzKwonqe1fO+zuV5fI3ef7TCgThEBiCwZrEFUlPb64MCkQzY
wKu/gwKVS5yvO2MvD3IJQJeqmaj2kegC9zIIQo5w9w/HeS4wasyVU9bIAAuCG9azdiL6
wR9CzV95xHJYWv/3YUcB0CBMuL7vrelDlVlRddhrhJRV4jkzOHOYlgvDVhd0GPj7/Mib
KqOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20130820;
h=x-gm-message-state:mime-version:in-reply-to:references:from:date

:message-id:subject:to;
bh=pEP+FUpQu4YrUIJfRtRY72qvieH+prFPrjpP+XncC+A=;
b=lSA5HbBTMeKoIOp7/ZuktmhmO67v/oN4gAlk6kJDlPj2ue9yCDx8s0IdBlF4QENiae
HQqug+EqwxQItawgwYO8ZGmQDs1nPPjxLJdymIGHCdIF4G149fk0GSkbE3+yhwvGvTXj
JPYFZpDeQvnLBy293t2lIkxk5GGvaC2w7gZvP3Pt6qZAFZvbVxGTOoKwqp+zJ7valQhr
xvmImfSJAw2fzIzTXE4Or4XXsPXpP5i1rcmRwDwGk8qQnXoCVfZLoyaQBPq2J5ChWPR0
w5nLlVSVB7IFfwmRZEfVwVxjOvHCMbXtu1Eeyl1JZ88vfD0OvbSeWn7RwBSoLWZoOiVl
EuYg==
X-Gm-Message-State: AD7BkJJ4ZaGY+7wGDmRTWxi4nvS2OwcKWPrcxB9LMV0I1cD9DTnaAiMAC+1nFhQx0/W8no4EPXCNk7rU7gk8Eg==
X-Received: by 10.28.44.9 with SMTP id s9mr11997524wms.96.1459775140100; Mon,

04 Apr 2016 06:05:40 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.28.53.66 with HTTP; Mon, 4 Apr 2016 06:05:00 -0700 (PDT)
In-Reply-To: <8b4794cab4431ff4cc910761b74fb544@mydomain.tld>
References: <8b4794cab4431ff4cc910761b74fb544@mydomain.tld>
From: Name Family
Date: Mon, 4 Apr 2016 17:35:00 +0430
Message-ID:
Subject: Re: test
To: Name

Content-Type: multipart/alternative; boundary=001a113d9e02ad7f4f052fa86217


Also, digging from an external ISP to check DNS records results:



dig MX mydomain.tld:



;; ANSWER SECTION:
mydomain.tld. 21599 IN MX 10 mail.mydomain.tld.



And then, dig A mail.mydomain.tld results:



;; ANSWER SECTION:
mail.mydomain.tld. 21599 IN A proper.ip.address


I have been able to send and receive email within the mail server between local accounts, and also to send to outside mail servers like Gmail, but I cannot receive from outside.



my postfix config is:




# See /usr/share/postfix/main.cf.dist for a commented, more complete version


# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)

biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no
# TLS parameters

smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_un$

myhostname = mydomain.tld
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = mydomain.tld, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +


inet_interfaces = loopback-only
inet_protocols = all


Master.cf content:



#
# Postfix master process configuration file. For details on the format
# of the file, see the master(5) manual page (command: "man 5 master" or
# on-line: http://www.postfix.org/master.5.html).

#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)
# ==========================================================================
587 inet n - - - - smtpd
#smtp inet n - - - 1 postscreen
#smtpd pass - - - - - smtpd

#dnsblog unix - - - - 0 dnsblog
#tlsproxy unix - - - - 0 tlsproxy
#submission inet n - - - - smtpd
# -o syslog_name=postfix/submission
# -o smtpd_tls_security_level=encrypt
# -o smtpd_sasl_auth_enable=yes
# -o smtpd_reject_unlisted_recipient=no
# -o smtpd_client_restrictions=$mua_client_restrictions
# -o smtpd_helo_restrictions=$mua_helo_restrictions
# -o smtpd_sender_restrictions=$mua_sender_restrictions

# -o smtpd_recipient_restrictions=
# -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
# -o milter_macro_daemon_name=ORIGINATING
#smtps inet n - - - - smtpd
# -o syslog_name=postfix/smtps
# -o smtpd_tls_wrappermode=yes
# -o smtpd_sasl_auth_enable=yes
# -o smtpd_reject_unlisted_recipient=no
# -o smtpd_client_restrictions=$mua_client_restrictions
# -o smtpd_helo_restrictions=$mua_helo_restrictions

# -o smtpd_sender_restrictions=$mua_sender_restrictions
# -o smtpd_recipient_restrictions=
# -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
# -o milter_macro_daemon_name=ORIGINATING
#628 inet n - - - - qmqpd
pickup unix n - - 60 1 pickup
cleanup unix n - - - 0 cleanup
qmgr unix n - n 300 1 qmgr
#qmgr unix n - n 300 1 oqmgr
tlsmgr unix - - - 1000? 1 tlsmgr

rewrite unix - - - - - trivial-rewrite
bounce unix - - - - 0 bounce
defer unix - - - - 0 bounce
trace unix - - - - 0 bounce
verify unix - - - - 1 verify
flush unix n - - 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - - - - smtp
relay unix - - - - - smtp

# -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq unix n - - - - showq
error unix - - - - - error
retry unix - - - - - error
discard unix - - - - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - - - - lmtp
anvil unix - - - - 1 anvil
scache unix - - - - 1 scache

#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent. See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#

# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#

# Specify in cyrus.conf:
# lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
# mailbox_transport = lmtp:inet:localhost
# virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)

# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus unix - n n - - pipe
# user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
# Old example of delivery via Cyrus.
#
#old-cyrus unix - n n - - pipe
# flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}

#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp unix - n n - - pipe
flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#

ifmail unix - n n - - pipe
flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe
flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n - 2 pipe
flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman unix - n n - - pipe
flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
${nexthop} ${user}



netstat -tulpn:



    Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 2050/stunnel4
tcp 0 0 0.0.0.0:21976 0.0.0.0:* LISTEN 877/sshd
tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 907/named
tcp 0 0 127.0.0.1:51101 0.0.0.0:* LISTEN 2310/irssi
tcp 0 0 127.0.0.1:51102 0.0.0.0:* LISTEN 2292/rtorrent

tcp 0 0 0.0.0.0:51103 0.0.0.0:* LISTEN 2292/rtorrent
tcp 0 0 0.0.0.0:993 0.0.0.0:* LISTEN 879/dovecot
tcp 0 0 0.0.0.0:51106 0.0.0.0:* LISTEN 2324/python
tcp 0 0 0.0.0.0:51107 0.0.0.0:* LISTEN 2317/python
tcp 0 0 0.0.0.0:995 0.0.0.0:* LISTEN 879/dovecot
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 963/mysqld
tcp 0 0 0.0.0.0:1194 0.0.0.0:* LISTEN 1027/openvpn
tcp 0 0 127.0.0.1:587 0.0.0.0:* LISTEN 11162/master
tcp 0 0 0.0.0.0:110 0.0.0.0:* LISTEN 879/dovecot
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN 879/dovecot

tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 2224/perl
tcp 0 0 0.0.0.0:4433 0.0.0.0:* LISTEN 2317/python
tcp 0 0 0.0.0.0:21201 0.0.0.0:* LISTEN 656/vsftpd


iptables -L:



Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:smtp ctstate NEW,ESTABLISHED

ACCEPT tcp -- anywhere anywhere tcp spt:smtp
ACCEPT tcp -- anywhere anywhere tcp spt:submission

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp spt:smtp ctstate ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:submission

ACCEPT tcp -- anywhere anywhere tcp spt:submission


Can anybody help me please?
Thanks.


Answer



Your Postfix installation is undoubtedly configured to send and receive e-mail for local users only. In order to receive messages from the Internet, Postfix must accept connections on port 25/tcp (SMTP): that is the port other mail servers, Gmail included, use for server-to-server delivery. Ports 465/tcp (SMTP over SSL) and 587/tcp (SUBMISSION) are used by end users to submit messages to their own server, not by other mail servers. See here for an overview of the difference between these ports.



I guess executing dpkg-reconfigure --priority=low postfix and supplying proper answers to the wizard will allow Postfix to receive messages from the Internet. Or else:





  1. Set inet_interfaces = all in /etc/postfix/main.cf.



    inet_interfaces = all

  2. In /etc/postfix/master.cf, comment the 587 service and uncomment smtp, smtpd, submission and smtps services:



    # 587      inet  n       -       -       -       -       smtpd
    smtp inet n - - - 1 postscreen
    smtpd pass - - - - - smtpd

    submission inet n - - - - smtpd
    -o syslog_name=postfix/submission
    -o smtpd_tls_security_level=encrypt
    -o smtpd_sasl_auth_enable=yes
    -o smtpd_reject_unlisted_recipient=no
    # -o smtpd_client_restrictions=$mua_client_restrictions
    # -o smtpd_helo_restrictions=$mua_helo_restrictions
    # -o smtpd_sender_restrictions=$mua_sender_restrictions
    # -o smtpd_recipient_restrictions=
    -o smtpd_relay_restrictions=permit_sasl_authenticated,reject

    # -o milter_macro_daemon_name=ORIGINATING
    smtps inet n - - - - smtpd
    -o syslog_name=postfix/smtps
    -o smtpd_tls_wrappermode=yes
    -o smtpd_sasl_auth_enable=yes
    -o smtpd_reject_unlisted_recipient=no
    # -o smtpd_client_restrictions=$mua_client_restrictions
    # -o smtpd_helo_restrictions=$mua_helo_restrictions
    # -o smtpd_sender_restrictions=$mua_sender_restrictions
    # -o smtpd_recipient_restrictions=

    # -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
    # -o milter_macro_daemon_name=ORIGINATING



Use an external diagnostic tool to check whether your mail server is publicly accessible on ports 25/tcp, 465/tcp and 587/tcp. I suggest http://mxtoolbox.com/diagnostic.aspx and http://dns.kify.com/ .
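

A quick manual check is also possible from any outside host:

telnet mail.mydomain.tld 25    # a "220 mydomain.tld ESMTP ..." banner means port 25 is reachable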


Sunday, December 15, 2019

nat - Internet access via OpenVPN



Note: This is a repost from the OpenVPN forums



I have just set up OpenVPN on my Linode VPS, and I have successfully connected my Android phone to it. Now I want to use the "route all traffic" option on the client. I'm not sure how to set up the routes on the server side, though, so I would greatly appreciate any help. I'm taking a class this summer at my local community college, and they seem to think that an open WLAN with web authentication is secure enough.




Here are my interface configurations:




eth0      Link encap:Ethernet  HWaddr f2:3c:91:93:a8:c2
          inet addr:173.255.235.246  Bcast:173.255.235.255  Mask:255.255.255.0
          inet6 addr: 2600:3c03::f03c:91ff:fe93:a8c2/64 Scope:Global
          inet6 addr: fe80::f03c:91ff:fe93:a8c2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:126144742 errors:0 dropped:0 overruns:0 frame:0
          TX packets:315279 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2301671639 (2.3 GB)  TX bytes:136422020 (136.4 MB)
          Interrupt:44

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3971 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3971 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:753104 (753.1 KB)  TX bytes:753104 (753.1 KB)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


Answer



Bear in mind that although OpenVPN will provide you a secure tunnel, it won't stop access to your Android device across the LAN, so you might want to have a look at that too.




What is your network layout on the VPN server side of things? That will help get the ball rolling.
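

For once the layout is confirmed: with redirect-gateway pushed to clients, the usual server-side pieces on a single-NIC VPS like this are IP forwarding and NAT for the tunnel subnet (a sketch assuming the 10.8.0.0/24 network visible on tun0 above):

# enable forwarding and masquerade VPN clients out of eth0
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE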


Apache load balancer logging question



I am using Apache as a load balancer and would like to log which backend server the load balancer forwards each request to. For example, if I had three webservers, called:




  • webserver1 - 192.168.0.1


  • webserver2 - 192.168.0.2

  • webserver3 - 192.168.0.3



I would like the log to show me which server the request was forwarded to (denoted by bold):




10.1.0.1 192.168.0.1 - - [20/Jul/2010:10:52:01 -0600] "GET /js/shared/kobj-static.js HTTP/1.1" 302 236 "http://www.google.com/search?q=baked+bbq+rib+recipes&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6 (.NET CLR 3.5.30729) infoCard/AzigoLite/0.0.12"





Any help would be appreciated.


Answer



You can use a custom log format to do that. One way is to add the relevant environment variable to the log.
mod_proxy_balancer (which I suppose you are using) exports the BALANCER_WORKER_NAME variable, which is the name of the worker used for the request. You can use the %{BALANCER_WORKER_NAME}e directive in your custom log format string to get that logged. This is the default Debian 'combined' log format with the directive added:



LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{BALANCER_WORKER_NAME}e\"" combined

Friday, December 13, 2019

Why no first user setup as sudo on Linode (Ubuntu 10.10)

The standard setup for Ubuntu is to create two users: root and a first user. The first user always gets full sudo access, and root login is disabled (for security).



Linode doesn't do this, it just creates root, with ssh login enabled.




Why is this so? Is there some limit on the number of accounts Linode nodes can have? Is there some other reason?



My instinct is to create a user, give it sudo, and disable ssh root login. This keeps dev and prod machines as alike as possible.
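

(In commands, that would look roughly like the following on a 10.10-era image; the user name is illustrative:)

adduser deploy                       # create the first user
adduser deploy admin                 # 'admin' is the sudo-granting group on Ubuntu 10.10
# then set "PermitRootLogin no" in /etc/ssh/sshd_config and run: service ssh restart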

Thursday, December 12, 2019

cron - crontab to run bash script (ssh command in it) not working



CentOS 5.4



(in my script file - script.sh)




#!/bin/bash
ssh 192.168.0.1 'iptables -L' > /tmp;


(in /etc/crontab)



30 21 30 9 * root /bin/bash /script.sh


If I run the script in a terminal, things work just fine. But when crontab runs it, the tmp file is generated, but there's nothing in it (0 KB). I already run ssh-agent so ssh won't prompt for a password. What could be the problem? Thanks.



Answer



I suggest always explicitly setting all needed variables at the beginning of your scripts.



PATH=/bin:/usr/bin:/sbin
MYVAR=whatever


That said, I would





  1. create a private/public keypair

  2. set an empty password on the private key

  3. set permission 400 on the private key file

  4. put the public key in the authorized_keys file of the root user on 192.168.0.1
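

In commands, those four steps look roughly like this (a sketch reusing the key path from the snippet below):

ssh-keygen -t rsa -f /myprivatekey -N ''            # steps 1 and 2: keypair with an empty passphrase
chmod 400 /myprivatekey                             # step 3
ssh-copy-id -i /myprivatekey.pub root@192.168.0.1   # step 4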



Now try the connection with



#!/bin/bash
PATH=/usr/bin


ssh -i /myprivatekey -l root 192.168.0.1 '/sbin/iptables -L' > /tmp/output.$$


Edit: I guessed that the "iptables" command had to be executed by root on the remote server. If it is not, of course the "-l" parameter has to be changed accordingly.


How to detect an SSH connection?



For some reason I have the following scenario:



On boot-up I'm launching a script which waits for a given amount of time and checks whether an SSH connection was established during this time window or not. If a connection is open, the script does action A; otherwise it kills sshd and does B.




What would be the best way to detect an open connection? (The script can be written in Bash or Ruby)



thx


Answer



If you want to detect a current SSH session, use lsof -i :22 and look for it returning more than 2 lines or grep for ESTABLISHED:
[root@nemo ~]# lsof -i :22
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 3772 root 3u IPv6 9906 TCP *:ssh (LISTEN)
sshd 21376 root 3r IPv6 159072 TCP myserver:ssh->someip:27813 (ESTABLISHED)
sshd 21381 james 3u IPv6 159072 TCP myserver:ssh->someip:27813 (ESTABLISHED)
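

Wrapped into the kind of boot-script check you describe (a sketch):

#!/bin/bash
# exits 0 (action A) if at least one SSH session is established, else 1 (action B)
if lsof -i :22 | grep -q ESTABLISHED; then
    echo "SSH session open"
    exit 0
else
    echo "no SSH session"
    exit 1
fi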



To see if a session was opened at all, look for something similar to the following in /var/log/secure (on redhat/centos/fedora):
Sep 27 05:05:28 nemo sshd[21376]: Accepted password for james from some_ip port 27813 ssh2
If you allow authentication by means other than password, the log entries may be slightly different.


linux - "Virtual hosts" for SSH




We have a remote Xen server running a lot of guest machines (on Linux), with only a couple of IPs available.




Each guest machine should be directly accessible via SSH from the outside world.



Right now we assign a separate domain name to each guest machine, pointing to one of the few available IPs. We also assign a port number to that guest machine.



So, to access machine named foo, one should do as follows:




$ ssh foo.example.com -p 12345



...And to access machine named bar:




$ ssh bar.example.com -p 12346


Both foo.example.com and bar.example.com point to the same IP.



Is it possible to somehow get rid of custom ports in this configuration and configure SSH server, listening at that IP (or firewall or whatever on server side), so it would route the incoming connection to the correct guest machine, based on the domain address, so that following works as intended?





$ ssh foo.example.com hostname # prints foo
$ ssh bar.example.com hostname # prints bar


Note that I do know about .ssh/config and related client-side configuration solutions; we're using that now. This question is specifically about a zero-client-configuration solution.


Answer



                        foo
                       /
Client ----- Xen server
                       \
                        bar


It sounds like an SSH gateway is what you're looking for.



First, create two new users, foo and bar, on the Xen server:



Xen # useradd foo
Xen # useradd bar



Generate key pairs and copy the public keys to foo-server and bar-server:



Xen # su - foo
Xen $ ssh-keygen
Xen $ ssh-copy-id -i ~/.ssh/id_rsa.pub foo-user@foo-server


(Do the same for bar user)




Now, from the Xen server (the SSH gateway) you can log in to foo-server and bar-server without a password prompt.



The next step is to let the Client authenticate to the Xen server with a public key:



Client $ ssh-keygen
Client $ ssh-copy-id -i ~/.ssh/id_rsa.pub foo@Xen


The final step is to make the Xen server open a second connection to the corresponding internal server. Log in to the Xen server, switch to foo, open the ~/.ssh/authorized_keys file and change:




ssh-rsa AAAAB3N...== user@clienthost


to:



command="ssh -t -t foo-user@foo-server" ssh-rsa AAAAB3N...== user@clienthost


The sample result:




$ ssh foo@Xen
Last login: Thu Nov 10 13:02:25 2011 from Client
$ id
uid=500(foo-user) gid=500(foo-user) groups=500(foo-user) context=user_u:system_r:unconfined_t
$ exit
logout

Connection to foo-server closed.
Connection to Xen closed.






$ ssh bar@Xen
Last login: Thu Nov 10 11:28:52 2011 from Client
$ id
uid=500(bar-user) gid=500(bar-user) groups=500(bar-user) context=user_u:system_r:unconfined_t
$ exit
logout


Connection to bar-server closed.
Connection to Xen closed.

Wednesday, December 11, 2019

Are different RAID cards setups compatible?



I'm setting up a new NAS/SAN system with RAID5, and I was wondering whether to go the software or the hardware RAID route, because I had this question in mind:




If my hardware RAID card fails, will I need to replace it with exactly the same model? Is the same brand enough, or are RAID5 setups incompatible between different cards?



My guess is that they are not compatible, and I'm trying to get the least downtime possible in case a piece of my hardware fails...


Answer



Cards from different manufacturers are typically not compatible, although different cards from the same manufacturer usually are. There is no standard format for RAID metadata that is portable across RAID controllers and software RAID implementations.



If you have an Adaptec card (for example) and get a different model of Adaptec card, the new card will almost certainly mount the old array. In some cases the same also applies to SAN equipment and RAID controllers from the same manufacturer - Mylex DAC-FFX controllers and ExtremeRAID 3000 cards used to do this (in fact, an ExtremeRAID 3000 was essentially a DAC-FFX on a PCI card), and HP SmartArray 1000 & 1500s will also mount arrays transferred from HP direct-attach controllers.



This is usually a deliberate policy on the part of the manufacturer to allow substitution of current parts if an older model is no longer available in stock. It also helps with upselling existing DA customers onto entry level SAN equipment by easing the migration path - just pop the disks into the SAN and mount the volumes off the SAN.




Note, however, that OEM contracts, mergers and acquisitions mean that manufacturers may have several incompatible product lines. For example:




  • Adaptec bought Eurologic and sold Eurologic SAN equipment for a while. Eurologic SANs have Mylex RAID controllers.


  • Adaptec also purchased ICP Vortex, so some Adaptec-branded RAID controllers may not be compatible with others in this respect.


  • LSI also purchased Mylex at one point and sold ExtremeRAID controllers for a while, but has its own line of host-based RAID controllers. The final dissolution of Mylex was quite complex, with bits going to Xyratex (the biggest manufacturer of disk array hardware you've never heard of, making kit branded by people you have) as well.


  • Intel, Dell PERC and IBM ServeRAID controllers and some HP parts are often rebadged items made by a third party (ICP Vortex, Adaptec and LSI controllers often pop up under other brands). The OEM ones tend to have custom firmware but may still represent multiple, incompatible product lines. However, branded kit of this sort tends to have specific part numbers, so you can re-order compatible replacements.




Do your homework: check with the manufacturer, avoid anyone who gives you a blank stare, and make sure you know which models go with each other in this respect.




Note: Linux and Unix have good software RAID facilities, but software RAID on Windows is poo. If you're using Windows then always go for hardware RAID. eBay is your friend if you get sticker shock at the price of new kit. Make sure you get a cache battery for it - often you can get them quite cheaply off eBay; again, find the part number and hunt.
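

To make the software RAID escape hatch concrete: on Linux, an array built with mdadm keeps its metadata on the disks themselves, so it can be assembled again on any machine with a plain HBA - no matching controller required. A minimal RAID5 sketch, assuming four spare disks sdb through sde (adjust device names and filesystem to taste):


mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]   # build the RAID5 set
mkfs.ext4 /dev/md0                                                # put a filesystem on it
mdadm --detail --scan >> /etc/mdadm.conf                          # record the array config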


Friday, December 6, 2019

linux - ssh public key authentication

I have tried to configure ssh to use public key authentication.







/etc/ssh/sshd_config has these parameters:



RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

PasswordAuthentication yes



But I'm still being prompted for a password. If I use



PasswordAuthentication no


I can't log in.



Any suggestions?







Update: it works, but only for the root user.



My problem was, I had this parameter



PermitRootLogin no.



I can log in with keys as the root user if I use




PermitRootLogin yes.



On the system there is only a /root/.ssh directory, with an authorized_keys file in it.



How can I add this for other users, given that on this system there is no /home/$USER/.ssh directory or per-user authorized_keys file?



Is there a way to configure this for each user?
I will need different authorized_keys files for each user.




Is it possible to configure this for different hosts, domains or IPs, similar to httpd.conf?
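

For the per-user part, each account just needs its own ~/.ssh/authorized_keys with the right ownership and permissions; the AuthorizedKeysFile .ssh/authorized_keys setting above is already relative to each user's home directory. A sketch for a hypothetical user alice (run as root, repeat per user; alice_key.pub stands in for her public key):


mkdir -p /home/alice/.ssh
cat alice_key.pub >> /home/alice/.ssh/authorized_keys
chmod 700 /home/alice/.ssh
chmod 600 /home/alice/.ssh/authorized_keys
chown -R alice:alice /home/alice/.ssh


For per-host or per-address behaviour, OpenSSH's sshd_config supports Match blocks - not virtual hosts in the httpd.conf sense, but conditional settings - for example:


Match Address 192.168.1.0/24
    PasswordAuthentication yes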

Thursday, December 5, 2019

monitoring - calculating days until disk is full

We use graphite to track history of disk utilisation over time. Our alerting system looks at the data from graphite to alert us when the free space falls below a certain number of blocks.



I'd like to get smarter alerts - what I really care about is "how long do I have before I have to do something about the free space?", e.g. if the trend shows that in 7 days I'll run out of disk space then raise a Warning, if it's less than 2 days then raise an Error.



Graphite's standard dashboard interface can be pretty smart with derivatives and Holt-Winters confidence bands, but so far I haven't found a way to turn these into actionable metrics. I'm also fine with crunching the numbers in other ways (just extracting the raw numbers from graphite and running a script over them).



One complication is that the graph is not smooth - files get added and removed, but the general trend over time is for disk space usage to increase, so perhaps there is a need to look at local minima (if looking at the "disk free" metric) and draw a trend between the troughs.



Has anyone done this?
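

A sketch of one straightforward approach, offered here as a starting point: a least-squares fit over recent samples. It assumes you have exported the series as "epoch_seconds free_bytes" pairs (for example via Graphite's render API) and feed them to this script on stdin:


#!/bin/bash
# Fit a line to free-space samples and report the days until it hits zero.
awk '
    { n++; sx += $1; sy += $2; sxx += $1*$1; sxy += $1*$2; last = $2 }
    END {
        if (n < 2) exit                              # need at least two samples
        slope = (n*sxy - sx*sy) / (n*sxx - sx*sx)    # bytes per second (negative = filling)
        if (slope >= 0) { print "not trending towards full"; exit }
        printf "days until full: %.1f\n", -last / (slope * 86400)
    }
'


Fitting only the local minima, as suggested above, is a refinement of the same idea: filter the samples down to the troughs first, then run the same regression over them.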

Wednesday, December 4, 2019

storage - How to remember RAID levels?





How do you remember (if you really do :-)) all the different levels and what each level does? Can anyone suggest an easy way to remember them?


Answer



0 - S (stripe)



1 - M (mirror)




5 - P (parity)



10 - MS (mirror + stripe)



Smart Men Pay MicroSoft



or



Silly Men Pay MicroSoft



Tuesday, December 3, 2019

windows server 2008 - Secondary Domain Controller no longer part of the domain

[before anyone corrects me I've used the terms secondary and primary colloquially and I understand the terminology]



I have a problem with my secondary DC, but not with any other server in the domain. Everything is Windows Server 2008, virtualized using VMware. The DC appears to no longer be part of the domain. Accounts appear "locked out" on DC2 but are not locked out on DC1. Active Directory won't open on DC2 and I can't edit accounts to unlock DC2 locally.




All network pings return "General failure", including pings to 127.0.0.1 and to any other server by IP or DNS name. Pings TO DC2 fail as well. Everything looks fine in the adapter settings and it even shows "connected" to the correct domain, but it can't reach anything else. Services are fine. There are NO enabled firewalls or other issues that would explain connection problems.



I believe it may be a trust issue, but I'm not entirely sure.
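

Since even loopback pings fail, the Windows TCP/IP stack itself is suspect before any trust relationship. A common first step is resetting the stack - these are standard Windows commands, not verified against this particular VM - followed by a reboot:


netsh winsock reset
netsh int ip reset
ipconfig /flushdns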

domain name system - DNS/Web Hosting to a private network setup



I've tried to find answers to my question, but I am still confused by some of it, so I am here to seek further advice.




The setup:



We have hosting, and our domain will be something like www.example.biz.



In our infrastructure, we have the following traditional servers, which will be put behind a firewall on a private network:




  1. Web Server

  2. Database Server




The domain will be given to us, and we would like to point it at our web server to host the web pages.



This is the solution I've come up with:



Configure the domain's DNS record at the hosting provider to point to our public IP; the firewall will then port-forward the traffic to our web server, which accepts it and serves the web pages.



My question is: is my solution enough for this setup, or should I run a public authoritative DNS server myself, register it as the domain's nameserver, and still use my firewall to point traffic at my private network's web server?




I would really appreciate any advice; I am still new and I've found this site very helpful.



Thank you and Regards,
Ian


Answer



You don't need to run your own DNS server. DNS is a basic service; you can rely on your provider or a third party like CloudFlare / AWS Route 53 for that.



Other than the DNS part of your question your solution is standard and should work.



A note: firewalls don't exactly "forward" traffic; I would say they intercept or pass through traffic, but that's mostly a semantic difference. A reverse proxy server like Nginx would forward traffic.
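

To make the port-forwarding step concrete: if the firewall is a Linux box, the rules might look like the sketch below. The interface name eth0 and the internal web server address 192.168.1.10 are assumptions; substitute your own. Dedicated firewall appliances expose the same idea through their NAT / port-forwarding UI.


# Forward inbound HTTP/HTTPS arriving on the public interface to the internal web server
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j DNAT --to-destination 192.168.1.10:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:443
# Let the forwarded traffic pass the FORWARD chain
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 80  -j ACCEPT
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 443 -j ACCEPT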



Monday, December 2, 2019

cron - Python script succeeds manually but fails on crontab

So I'm currently trying to get a script working but it's behaving differently when I run it manually than when I run it from crontab. Basically, I have a reverse ssh tunnel set up from one server to another, and in order to verify that my tunnel is up I:





  • SSH from server A to server B

  • wget a test URL hosted on server A, from server B

  • if Wget succeeds, I disconnect and do nothing

  • if Wget fails, I disconnect and restart the tunnel



I know there are more elegant ways to verify ssh tunnels (like autossh and ServerAliveInterval), but for both policy and redundancy reasons, I have to do things this way. Anyway, here's the script:




from __future__ import print_function
from __future__ import absolute_import

import os, sys, subprocess, logging, pexpect

COMMAND_PROMPT = '[#$] '
TERMINAL_PROMPT = '(?i)terminal type\?'
TERMINAL_TYPE = 'vt100'
SSH_NEWKEY = '(?i)are you sure you want to continue connecting'
SERVERS = [{'address': '192.168.100.10', 'connString': 'ssh user@192.168.100.10', 'testGet': 'wget http://192.168.100.11/test.html -t 1 -T 10', 'tunnel': 'start_tunnel'}, {'address': '192.168.100.12', 'connString': 'ssh user@192.168.100.12', 'testGet': 'wget http://192.168.100.13/test.html -t 1 -T 10', 'tunnel': 'start_tunnel2'}]


def main():

    global COMMAND_PROMPT, TERMINAL_PROMPT, TERMINAL_TYPE, SSH_NEWKEY, SERVERS

    # set up logging
    log = logging.getLogger(__name__)
    log.setLevel(logging.DEBUG)
    handler = logging.FileHandler('/home/user/tunnelTest.log')
    formatter = logging.Formatter('%(asctime)s - %(module)s.%(funcName)s: %(message)s')
    handler.setFormatter(formatter)
    log.addHandler(handler)

    for x in SERVERS:

        # connect to server
        child = pexpect.spawn(x['connString'])
        i = child.expect([pexpect.TIMEOUT, SSH_NEWKEY, COMMAND_PROMPT, '(?i)password'])
        if i == 0:  # Timeout
            log.debug('ERROR! Could not log in to ' + x['address'] + ' ...')
            sys.exit(1)
        if i == 1:  # No key cached
            child.sendline('yes')
            child.expect(COMMAND_PROMPT)
            log.debug('Connected to ' + x['address'] + '...')
        if i == 2:  # Good to go
            log.debug('Connected to ' + x['address'] + '...')

        # Housecleaning
        child.sendline('cd /tmp')
        child.expect(COMMAND_PROMPT)
        child.sendline('rm -r test.html')
        child.expect(COMMAND_PROMPT)

        log.debug('Testing service using ' + x['testGet'] + ' ...')
        child.sendline(x['testGet'])
        child.expect(COMMAND_PROMPT)
        if 'saved' in child.before.lower():
            log.debug('Tunnel working, nothing to do here!')
            log.debug('Disconnecting from remote host ' + x['address'] + '...')
            child.sendline('exit')
        else:
            log.error('Tunnel down!')
            log.debug('Disconnecting from remote host ' + x['address'] + ' and restarting tunnel')
            child.sendline('exit')
            subprocess.call(['start', x['tunnel']])
            log.debug('Autossh tunnel restarted')


if __name__ == "__main__":
    main()


My crontab entry is as follows:



0,30 * * * * python /home/user/tunnelTest.py


So yeah -- this script runs fine when I run it manually (sudo python tunnelTest.py) and also runs fine from crontab unless a tunnel is down. When a tunnel is down, I get the "Tunnel down!" and "Disconnecting from remote host 192.168.100.10 and restarting tunnel" messages in my log, but the script seems to die there: the tunnel doesn't restart, and I get no more messages in my log until the start of the next scheduled run.




The start_tunnel scripts are in /etc/init, the tunnelTest.py script is in /home/user, the tunnelTest.log file is in /home/user, and I ran crontab -e as root.



Any insight into this matter would be greatly appreciated.



Thanks!
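

One likely suspect worth noting: cron runs jobs with a minimal environment (PATH is typically just /usr/bin:/bin), so subprocess.call(['start', ...]) can fail to find upstart's start binary even though it works from an interactive sudo shell. Two quick things to try, sketched below; the paths are the usual defaults, not verified for this system:


# Temporary crontab entry to capture the environment cron actually provides:
* * * * * env > /tmp/cron-env.txt 2>&1

# If PATH lacks /sbin, set it at the top of the crontab:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
0,30 * * * * python /home/user/tunnelTest.py


Alternatively, call the binary by absolute path from the script (e.g. subprocess.call(['/sbin/start', x['tunnel']])).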

Sunday, December 1, 2019

monitoring - Nagios - NagWin - Send notification with gmail

I would like to send Nagios notifications using my gmail account.



I have already set up the hosts and services I want to monitor.



What is the simplest way to accomplish this using NagWin on a Windows Server 2012 installation?




As far as I know I must change some of these configuration settings:



# 'notify-host-by-email' command definition
define command{
command_name notify-host-by-email
command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /bin/blat - -to $CONTACTEMAIL$ -f nagios@localhost -subject "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" -server ???
}

# 'notify-service-by-email' command definition

define command{
command_name notify-service-by-email
command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /bin/blat - -to $CONTACTEMAIL$ -f nagios@localhost -subject "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" -server ???
}


What should I use as the SMTP server? Is it possible to send my notifications directly to the Gmail server?
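

A hedged note on the SMTP question: Gmail only accepts mail submission over TLS (smtp.gmail.com, port 465 for SSL or 587 for STARTTLS, with authentication), and older blat builds have no TLS support, so a common workaround is to run stunnel on the Nagios host and point blat at the local tunnel endpoint. A sketch of the stunnel side (the local port 2525 is an arbitrary choice):


; stunnel.conf - relay a local cleartext port to Gmail's TLS SMTP endpoint
[gmail-smtp]
client = yes
accept = 127.0.0.1:2525
connect = smtp.gmail.com:465


With that running, -server 127.0.0.1:2525 should work in the command definitions above, together with blat's SMTP authentication switches for the Gmail account.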

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...