Monday, December 30, 2019

windows server 2016 - Cross-Realm-Trust between Active Directory and MIT Kerberos

I am currently in the process of extending my development environment, which so far has only run Linux servers, by adding machines running Windows Server 2016. Authentication is handled by MIT Kerberos. For the new Windows machines, I am planning to use Active Directory. Since I don't want to manage users in two systems, I am setting up a cross-realm trust between the Windows AD and the existing MIT Kerberos installation.



To do that, I have followed this guide: https://bluedata.zendesk.com/hc/en-us/articles/115007484067-How-To-Establish-Cross-Realm-Trust-from-MIT-KDC-to-AD.



Now, I have noticed that I can obtain a ticket from the Windows AD for an AD user on a Linux machine just fine: running kinit Administrator@AD.DOMAIN.LOCAL completes without any errors and gives me a ticket as expected.



On the other hand, I cannot log in to any of the Windows machines using an account from the MIT Kerberos setup. Trying to log in with my test account (test@DOMAIN.LOCAL from the MIT realm DOMAIN.LOCAL) throws the following error:



"The security database on the server does not have a computer account for this workstation trust relationship".




Another thing I am noticing is that when I try to verify the trust relationship using the command netdom trust DOMAIN.LOCAL /Domain:AD.DOMAIN.LOCAL /Kerberos /verbose /verify, I am getting the following error message:



"Unable to contact the domain DOMAIN.LOCAL. The command failed to complete successfully."



It seems like the Windows AD is unable to communicate with the MIT Kerberos installation, which is odd, because it apparently does work the other way around. I have already double-checked that all the DNS records (domain.local, ad.domain.local and the FQDNs for the KDCs) resolve to the correct IP addresses. While researching the problem, I stumbled across this post https://stackoverflow.com/questions/45236577/using-mit-kerberos-as-account-domain-for-windows-ad-domain, which seemed promising at first, but it couldn't help me fix my problem. Any help is greatly appreciated!
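
For reference, these are the kinds of checks that can be run from an elevated command prompt on the Windows side; kdc1.domain.local is only a placeholder for the MIT KDC's actual FQDN:

nslookup kdc1.domain.local
nslookup -type=SRV _kerberos._udp.domain.local
ksetup
ksetup /addkdc DOMAIN.LOCAL kdc1.domain.local

The first two commands confirm that the Windows box can resolve the MIT KDC, ksetup on its own lists the realm flags and KDC mappings Windows already knows about, and the last line (only needed if the mapping is missing) tells Windows which KDC serves the MIT realm.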

Sunday, December 29, 2019

Configure more than two logical drives in an HP ProLiant Gen8 server?

I have an HP ProLiant DL380 Gen8 server with 2 RAID 1 logical drives (each a mirror of two 300 GB HDDs). I have now put 4 extra physical disks into the server, but it won't let me create the logical drives for them. I want to create another new logical drive from those 4 physical disks.



When I try to add them from the RAID utility (F8 during boot) I get a message saying ORCA can't handle any more logical drives, and that I should use the Array Configuration Utility to add them. I tried using the Array Configuration Utility, but can't see how to do it. The disks are all picked up and labeled "unallocated", but I can't find any way to allocate them.
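
For what it's worth, the same thing can usually be done from the OS with HP's command-line array tool instead of the GUI. The slot number and drive IDs below are placeholders and need to match what the first command reports for your controller, and raid=5 is just an example level:

hpacucli ctrl all show config
hpacucli ctrl slot=0 create type=ld drives=2I:1:5,2I:1:6,2I:1:7,2I:1:8 raid=5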

Friday, December 27, 2019

How can I delete SPAM 'Delivery Failure' Emails at the server..?



A friend of mine is having problems with large numbers of "Your EMail was Undeliverable" type messages coming into his inbox.



The original messages contain a 'Please Send Money' scam and were not sent from my friend's account; someone is just spoofing the 'From' address.



The problem is that the SPAM-bot is sending out a massive number of emails, but many are to accounts that do not exist - hence the undeliverable errors.

He is using Outlook Web Access (provided by his ISP) and does not have the ability to mark messages as Junk or to create a Rule that will scan the body of the email (only the subject and sender). I have been able to create some basic rules to move many of the messages to the Deleted Items folder (based on a common subject), but I'm worried about being too generic in case I end up deleting genuine messages.



How can I stop these messages from filling up his Inbox/Deleted Items..?



Thanks in Advance


Answer



Just something to consider: if the ISP isn't doing proper spam filtering, you might want another provider for your email. If you sign your friend up for a GMail account, he'd still have a web interface plus POP3/IMAP access, and he can just forget about his ISP-provided mail account.



I hate it too, though. I've had a CompuServe account since 1993 and around 1999/2000 I started having some serious spam problems with that account. Fortunately, I've always had multiple mail accounts so I avoided my CompuServe account, which would just end up being flooded. I added some rules to this account to just forward emails from people on my whitelist to my new account and all other emails were just trashed. (I did check their senders and titles before trashing them, though.)




I stopped using CompuServe in 2005. I wasn't receiving any more important emails on that account and even stopped checking it for new emails. It was all spam anyway, and CompuServe didn't bother to do anything about it, so I had no use for it.



Complain to the ISP, telling them to take action against this flood of spam. They should be able to recognize it and thus block it even before it reaches your mailbox.
Or just forget that mailbox and use GMail instead, adding rules to your old mailbox to just forward any important messages to the new account.


For businesses, there are also the Google Postini services, which aren't free but might be useful for people who run their own business with their own domain name, or people who can't switch provider.

Wednesday, December 25, 2019

scripting - Changing PF rules on the fly to mitigate damage of DDoS (OpenBSD 6.4)

This is a two-part question, really. Keep in mind that I am a developer, not a system admin, but being the only employee in the company, I wear ALL the hats.



I have deployed my server with two firewalls running on CARP for load balancing/redundancy, plus about 40 computers for database and other backend application needs. As a startup, I want to save some money by mitigating the damage of a DDoS attack without paying my ISP for dedicated business internet on top of DDoS protection. I KNOW YOU CAN'T TOTALLY protect against DDoS. I just want to mitigate the damage until my app starts making money, and then I can let the ISP deal with the headaches.




In that spirit, I was wondering if anybody has ever implemented a solution where a script (maybe through cron) changes the PF rules based on current usage. For example, if there are too many half-open connections from millions of IP addresses, I would like to tell PF to go into SYN-cookies mode, and then when the attack is over (or some time has passed) to go back to normal.
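
As a point of reference, recent OpenBSD releases already have an adaptive mode built into PF for exactly this (something along the lines of set syncookies adaptive (start 25%, end 12%) in pf.conf, which switches SYN cookies on and off based on state-table pressure), so a cron-driven script would mostly be reinventing that. A rough sketch of the cron approach, assuming two prepared rulesets and a made-up threshold, could look like:

#!/bin/sh
# rough sketch - the threshold and the alternate ruleset path are assumptions
THRESHOLD=50000
STATES=$(pfctl -si | awk '/current entries/ {print $3}')

if [ "$STATES" -gt "$THRESHOLD" ]; then
        pfctl -f /etc/pf.attack.conf    # ruleset with syncookies/synproxy enabled
else
        pfctl -f /etc/pf.conf           # normal ruleset
fi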



I cannot use Cloudflare because I am running a backend for an app and 99% of the content is not static. I could use Cloudflare for the website of the app, but that's about it.



To reiterate, money IS AN ISSUE. I am currently using FIOS Business and Verizon will not provide DDoS protection on that type of line.



Last thing: has anybody experienced drastic issues after enabling SYN-COOKIES/SYN-PROXYING? Give me a real story, please.



PS I do not want to start a debate about SYN-PROXYING vs SYN-COOKIES!

Tuesday, December 24, 2019

nameserver - Domain is reporting incorrect Name Server information



I am running a VPS that is hosting five different domains. Everything has been fine until I wanted to use our inactive domain to set up Google Apps for Business. I am unable to verify the domain because the DNS on that one domain is really messed up. To me the setup looks no different from the others that are working fine. This is an unmanaged VPS, so I'm hoping that someone here may see what is wrong.



The server uses its own name servers, which are correctly set at the registrar. They are like so:



(screenshot of the name server entries at the registrar)




My first domain, plangator.com, is mostly reporting OK at intodns. Here is its zone file:



; Zone file for plangator.com
$TTL 14400
plangator.com. 86400 IN SOA ns1.lamardesigngroup.com. rlamar4088.aol.com. (
2016020105 ;Serial Number
86400 ;refresh
7200 ;retry
3600000 ;expire

86400 ;minimum
)
plangator.com. 86400 IN NS ns1.lamardesigngroup.com.
plangator.com. 86400 IN NS ns2.lamardesigngroup.com.
plangator.com. 14400 IN A 212.1.213.8
localhost 14400 IN A 127.0.0.1
plangator.com. 14400 IN MX 0 plangator.com.
mail 14400 IN CNAME plangator.com.
www 14400 IN CNAME plangator.com.
ftp 14400 IN A 212.1.213.8

cpanel 14400 IN A 212.1.213.8
webmail 14400 IN A 212.1.213.8
plangator.com. 14400 IN TXT "v=spf1 mx a ip4:212.1.213.8 include:plangator.com ~all"


One thing that I notice is that it doesn't report the correct IPs for the name servers. 212.1.213.8 is the IP of the server.




Nameserver records returned by the parent servers are:




ns1.lamardesigngroup.com. ['212.1.213.8'] [TTL=172800]



ns2.lamardesigngroup.com. ['212.1.213.8'] [TTL=172800]




My problem domain is gator.digital. Here is its zone file:



; Zone file for gator.digital
$TTL 14400
gator.digital. 86400 IN SOA ns1.lamardesigngroup.com. rlamar4088.aol.com. (

2015101316 ;Serial Number
86400 ;refresh
7200 ;retry
3600000 ;expire
86400 ;minimum
)
gator.digital. 86400 IN NS ns1.lamardesigngroup.com.
gator.digital. 86400 IN NS ns2.lamardesigngroup.com.
gator.digital. 14400 IN A 212.1.213.8
www 14400 IN CNAME gator.digital.

cpanel 14400 IN A 212.1.213.8
gator.digital. 14400 IN TXT google-site-verification=l5pn02kvh4kCGScCaA-IUIb7toL82RnLdiuXdHw0dB8
gator.digital. 3600 IN MX 1 aspmx.l.google.com.
gator.digital. 3600 IN MX 5 alt1.aspmx.l.google.com.
gator.digital. 3600 IN MX 5 alt2.aspmx.l.google.com.
gator.digital. 3600 IN MX 10 alt3.aspmx.l.google.com.
gator.digital. 3600 IN MX 10 alt4.aspmx.l.google.com.
gator.digital. 14400 IN TXT "'v=spf1 include:_spf.google.com ~all'"
localhost 14400 IN A 127.0.0.1



Here is how the name servers are seen for gator.digital.




Nameserver records returned by the parent servers are:



ns2.lamardesigngroup.com. ['198.20.251.114'] (NO GLUE) [TTL=86400]



ns1.lamardesigngroup.com. ['198.20.251.113'] (NO GLUE) [TTL=86400]





And then all of the errors:




NS records from your nameservers NS records got from your nameservers listed at the parent NS are:
Oups! I could not get any nameservers from your nameservers (the ones listed at the parent server). Please verify that they are not lame nameservers and are configured properly.



Same Glue Hmm,I do not consider this to be an error yet, since I did not detect any nameservers at your nameservers.



Glue for NS records OK. Your nameservers (the ones reported by the parent server) have no ideea who your nameservers are so this will be a pass since you already have a lot of errors!




Mismatched NS records WARNING: One or more of your nameservers did not return any of your NS records.



DNS servers responded ERROR: One or more of your nameservers did not respond:
The ones that did not respond are:
198.20.251.114 198.20.251.113



Multiple Nameservers ERROR: Looks like you have less than 2 nameservers. According to RFC2182 section 5 you must have at least 3 nameservers, and no more than 7. Having 2 nameservers is also ok by me.



Missing nameservers reported by your nameservers You should already know that your NS records at your nameservers are missing, so here it is again:




ns2.lamardesigngroup.com.
ns1.lamardesigngroup.com.




It seems that although these are both set up to use the same nameservers, the DNS check is looking in two different places.


Answer



Focusing on the actual problem with your domains:



Following the chain of delegations for lamardesigngroup.com you'll see a delegation to ns1.lamardesigngroup.com and ns2.lamardesigngroup.com with glue referring to 212.1.213.8.




lamardesigngroup.com.   172800  IN      NS      ns1.lamardesigngroup.com.
lamardesigngroup.com. 172800 IN NS ns2.lamardesigngroup.com.
ns1.lamardesigngroup.com. 172800 IN A 212.1.213.8
ns2.lamardesigngroup.com. 172800 IN A 212.1.213.8


However, the authoritative records served by 212.1.213.8 are:



lamardesigngroup.com.   86400   IN      NS      ns2.lamardesigngroup.com.

lamardesigngroup.com. 86400 IN NS ns1.lamardesigngroup.com.
ns1.lamardesigngroup.com. 14400 IN A 198.20.251.113
ns2.lamardesigngroup.com. 14400 IN A 198.20.251.114


There's clearly an inconsistency between the glue and authoritative address records for the nameserver names, leading to different addresses being used in different situations.



This in turn also affects your other domains that use ns1.lamardesigngroup.com and ns2.lamardesigngroup.com as nameservers.
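
One way to see the mismatch directly is to compare the glue handed out by the .com registry servers with what 212.1.213.8 answers authoritatively (a.gtld-servers.net is just one of the .com servers; any of them will do):

dig +norecurse @a.gtld-servers.net ns1.lamardesigngroup.com. A
dig @212.1.213.8 ns1.lamardesigngroup.com. A

The first query returns the glue address (212.1.213.8), the second the authoritative record (198.20.251.113). Making the two agree, either by updating the glue/host records at the registrar or the A records in the zone, should clear the errors for gator.digital and the other affected domains.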


Monday, December 23, 2019

cron - Failover tmpfs mirroring. Am I doing it right?



My goal is to have a certain directory available as tmpfs.
There will be some modifications to this dir during server uptime, and those modifications must be synced to a persistent non-tmpfs dir on the HDD via rsync.



After a server boot, the latest version from the persistent non-tmpfs dir must be copied to tmpfs and the rsync syncing started.



I'm afraid that rsync will erase the non-tmpfs backup if the tmpfs dir is empty.




I'm doing it this way right now:




  1. create tmpfs partition in /etc/fstab

  2. cat /etc/rc.local (pseudocode)



    delete "tmpfs rsync" cronjob from /var/spool/cron/crontabs if there is any



    cp -r /path/to/non-tmpfs-backup /path/to/tmpfs/dir




    append /var/spool/cron/crontabs with "tmpfs rsync" cronjob




What do you think?


Answer



Create some sort of seed file deep in your non-tmpfs directory and only rsync back to non-tmpfs if it exists (meaning the "boot" copy worked), so something like:



BOOT



mount /path/tmpfs

rsync -aq --delete /path/non-tmpfs/ /path/tmpfs/


CRON



if [ -f /path/tmpfs/some/deep/location/filesgood.txt ]; then
    rsync -aq --delete /path/tmpfs/ /path/non-tmpfs/
fi



It's not perfect, but if you enhance it (e.g. by looking for 5 "cookie" files in different directories during the cron run), it should be pretty safe.
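
For the cron part, this would typically just be the if/rsync block above saved as a script and run every few minutes from root's crontab; the path and interval here are arbitrary:

*/5 * * * * /usr/local/bin/tmpfs-sync.sh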


Saturday, December 21, 2019

proxy - What is the purpose of netcat's "-w timeout" option when ssh tunneling?



I am in the exact same situation as the person who posted another question: I am trying to tunnel SSH connections through a gateway server instead of having to SSH into the gateway and manually SSH again to the destination server from there. I am trying to set up the solution given in the accepted answer there, a ~/.ssh/config that includes:



host foo
User webby
ProxyCommand ssh a nc -w 3 %h %p


host a
User johndoe


However, when I try to ssh foo, my connection stays alive for 3 seconds and then dies with a Write failed: Broken pipe error. Removing the -w 3 option solves the problem. What is the purpose of that -w 3 in the original solution, and why is it causing a Broken pipe error when I use it? What is the harm in omitting it?


Answer




What is the purpose of that -w 3 in the original solution





It avoids leaving orphaned nc processes running on the remote host when the ssh session is closed improperly.




and why is it causing a Broken pipe error when I use it?




Try increasing the timeout for nc to 90 and setting ServerAliveInterval to 30 to see if your problem goes away:



host foo

User webby
ServerAliveInterval 30
ProxyCommand ssh a nc -w 90 %h %p
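
As a side note, if the OpenSSH on the gateway is reasonably recent (5.4 or later), ssh can forward the connection itself and the nc timeout question disappears entirely:

host foo
User webby
ProxyCommand ssh -W %h:%p a

host a
User johndoe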

linux - "POSSIBLE BREAK-IN ATTEMPT!" in /var/log/secure — what does this mean?




I've got a CentOS 5.x box running on a VPS platform. My VPS host misinterpreted a support inquiry I had about connectivity and effectively flushed some iptables rules. This resulted in ssh listening on the standard port and acknowledging port connectivity tests. Annoying.



The good news is that I require SSH Authorized keys. As far as I can tell, I don't think there was any successful breach. I'm still very concerned about what I'm seeing in /var/log/secure though:






Apr 10 06:39:27 echo sshd[22297]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:27 echo sshd[22298]: Received disconnect from 222.237.78.139: 11: Bye Bye
Apr 10 06:39:31 echo sshd[22324]: Invalid user edu1 from 222.237.78.139
Apr 10 06:39:31 echo sshd[22324]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!

Apr 10 13:39:31 echo sshd[22330]: input_userauth_request: invalid user edu1
Apr 10 13:39:31 echo sshd[22330]: Received disconnect from 222.237.78.139: 11: Bye Bye
Apr 10 06:39:35 echo sshd[22336]: Invalid user test1 from 222.237.78.139
Apr 10 06:39:35 echo sshd[22336]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:35 echo sshd[22338]: input_userauth_request: invalid user test1
Apr 10 13:39:35 echo sshd[22338]: Received disconnect from 222.237.78.139: 11: Bye Bye
Apr 10 06:39:39 echo sshd[22377]: Invalid user test from 222.237.78.139
Apr 10 06:39:39 echo sshd[22377]: reverse mapping checking getaddrinfo for 222-237-78-139.tongkni.co.kr failed - POSSIBLE BREAK-IN ATTEMPT!
Apr 10 13:39:39 echo sshd[22378]: input_userauth_request: invalid user test
Apr 10 13:39:39 echo sshd[22378]: Received disconnect from 222.237.78.139: 11: Bye Bye






What exactly does "POSSIBLE BREAK-IN ATTEMPT" mean? That it was successful? Or that it didn't like the IP the request was coming from?


Answer



Unfortunately this is now a very common occurrence. It is an automated attack on SSH which uses 'common' usernames to try and break into your system. The message means exactly what it says; it does not mean that you have been hacked, just that someone tried.
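
Incidentally, the "POSSIBLE BREAK-IN ATTEMPT" wording only refers to the failed reverse-DNS check on the client's IP, not to a successful login. If the noise bothers you and you don't rely on hostname-based access rules, the lookup can be switched off in /etc/ssh/sshd_config (followed by an sshd restart), which makes those particular messages go away while the invalid-user entries remain:

UseDNS no

Tools like fail2ban or denyhosts can additionally cut down on the brute-force attempts themselves.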


ubuntu - Nginx and NSD3 don't start on boot because they cannot use the assigned IP

The server is a Xen VPS running Ubuntu 12.04, and neither nginx nor NSD3 comes up after a reboot. The apparent reason is that they're not able to bind to their assigned IP addresses right after boot.



from /var/log/boot.log



* Starting configure network device                                     [ OK ]

* Stopping save kernel messages [ OK ]
* Starting MTA [ OK ]
nginx: [emerg] bind() to [2a01:1b0:removed:1c9c]:80 failed (99: Cannot assign requested address)
* Starting nsd3... [ OK ]
[...]
* Starting configure virtual network devices [ OK ]
* Stopping configure virtual network devices [ OK ]


from /var/log/nsd.log




[1351715473] nsd[956]: error: can't bind udp socket: Cannot assign requested address
[1351715473] nsd[956]: error: server initialization failed, nsd could not be started


Everything works fine after a couple of seconds, and both nginx and NSD3 can be started.



It seems to me that the problem is the wrong boot order: nginx and NSD3 are started before the network configuration has fully taken place. I worked around it by putting



# nginx and nsd boot fix

sleep 4
/etc/init.d/nsd3 start
/etc/init.d/nginx start


in /etc/rc.local but that's not a proper solution. What is the right way to handle this issue?



Here's my basic network configuration, from /etc/network/interfaces
auto eth0




iface eth0 inet static
address 89.removed.121
gateway 89.removed.1
netmask 255.255.255.0

iface eth0 inet6 static
up echo 0 > /proc/sys/net/ipv6/conf/all/autoconf
up echo 0 > /proc/sys/net/ipv6/conf/default/autoconf
netmask 64
gateway 2a01:removed:0001

address 2a01:removed:7c3b
up ip addr add 2a01:removed:62bd dev eth0 preferred_lft 0
up ip addr add 2a01:removed:ce6d dev eth0 preferred_lft 0
up ip addr add 2a01:removed:3e13 dev eth0 preferred_lft 0
up ip addr add 2a01:removed:1c9c dev eth0 preferred_lft 0

auto lo
iface lo inet loopback



Those awkward up ip addr lines are there because I wanted to add additional IPs but still use the first one for all traffic originating from the server.
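
One way to sidestep the race for nginx, at least, is to listen on the wildcard addresses instead of the specific IPv6 address, since binding to the wildcard succeeds even before the address has finished configuring. This is a workaround rather than a fix for the ordering problem, and assumes you don't need nginx bound to that single address only:

# instead of: listen [2a01:1b0:removed:1c9c]:80;
listen 80;
listen [::]:80 ipv6only=on;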

solaris - Compiling Apache mod_ssl for different target hardware (hardware capability unsupported SSE2 error)



I am building and packaging the following on one machine (the "build" machine) and attempting to install and use it on other machines ("target" machines), some of which have different processors.




  • OpenSSL 0.9.8l


  • Apache 2.2.14

  • Tomcat Connectors 1.2.28



The problem, as far as I can tell, is that the build machine has more CPU capabilities than the target machine, resulting in binaries that are not executable on the target machine. I have attempted to use configure and compiler flags to disable use of the offending instructions, without luck.



Ultimately I get this error:



$ ./apachectl start 




httpd: Syntax error on line 58 of /usr/local/apache-2.2.14/conf/httpd.conf:
Cannot load /usr/local/apache2/modules/mod_ssl.so into server: ld.so.1: httpd:
fatal: /usr/local/openssl/lib/libssl.so.0.9.8: hardware capability unsupported:
0x1000 [ SSE2 ]


Here is my complete build process. Full output from each command can be viewed here. I can't link to them each directly since I don't have enough SF rep.




The Build Machine



$ echo $PATH
/usr/bin:/usr/ccs/bin:/usr/sfw/bin:/opt/sfw/bin:/usr/sbin

$ isainfo -v
32-bit i386 applications
pause sse2 sse fxsr mmx cmov sep cx8 tsc fpu

$ uname -a

SunOS bsiausstgdb02 5.10 Generic_120012-14 i86pc i386 i86pc


The Target Machine



$ isainfo -v
32-bit i386 applications
sse fxsr mmx cmov sep cx8 tsc fpu

$ uname -a

SunOS bsiausdevweb01 5.10 Generic_120012-14 i86pc i386 i86pc


Compile OpenSSL 0.9.8l



$ CC=/usr/bin/cc
$ export CC

$ CFLAGS="-xarch=sse"
$ export CFLAGS


$ ./Configure \
solaris-x86-cc \
shared \
no-asm \
no-sse2 \
-xarch=sse \
--openssldir=/usr/local/openssl-0.9.8l



view full output:
openssl-configure.txt



$ make && make test


view full output:
openssl-make-and-test.txt



$ sudo make install



view full output:
openssl-make-install.txt



Compile Apache 2.2.14



$ CC=/usr/bin/cc
$ export CC


$ CFLAGS="-xarch=sse"
$ export CFLAGS

$ ./configure \
--prefix=/usr/local/apache-2.2.14 \
--with-mpm=prefork \
--enable-so \
--enable-unique-id=shared \
--enable-rewrite=shared \
--enable-spelling=shared \

--enable-info=shared \
--enable-headers=shared \
--enable-deflate=shared \
--enable-expires=shared \
--enable-unique-id=shared \
--enable-speling=shared \
--enable-ssl=shared \
--with-ssl=/usr/local/openssl



view full output:
apache-configure.txt



$ make


view full output:
apache-make.txt



$ sudo make install



view full output:
apache-make-install.txt



Compile Tomcat Connectors 1.2.28



$ CC=/usr/bin/cc
$ export CC


$ CFLAGS="-xarch=sse"
$ export CFLAGS

$ cd native
$ ./configure \
--with-apxs=/usr/local/apache2/bin/apxs


view full output:
tomcat-connector-configure.txt




$ make


view full output:
tomcat-connector-make.txt



$ sudo make install



view full output:
tomcat-connector-make-install.txt



Testing



At this point everything will work on the build machine. Once I package these files and install them on the target machine, I get this error when Apache is started with mod_ssl enabled.



$ ./apachectl start




httpd: Syntax error on line 58 of /usr/local/apache-2.2.14/conf/httpd.conf:
Cannot load /usr/local/apache2/modules/mod_ssl.so into server: ld.so.1: httpd:
fatal: /usr/local/openssl/lib/libssl.so.0.9.8: hardware capability unsupported:
0x1000 [ SSE2 ]

Answer



I worked around this problem by building the packages on a machine with equivalent hardware to the target machine and using the Sun Studio CC compiler instead of gcc.


Thursday, December 19, 2019

redirect - Apache returning wrong Location header

The issue happens when you:





  1. issue a request with the "Host" header including the port, e.g. "Host: www.example.com:80", which is legal as per https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.23. You can do it, for instance, with curl: curl -v -H "Host: www.example.com:80" -X GET -i http://www.example.com

  2. the server issues a redirect to https for that request, in my case using the following RewriteRule




RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]



I noticed that the "Location" header of the response also includes the port, and it's the same as that specified in the "Host" header of the request. So the server would respond with "Location: https://www.example.com:80", which is wrong.



This happens to me with "Apache/2.4.7 (Ubuntu)", but I noticed the issue with the Varnish cache server as well. Why does it behave this way? Is there a way to correct this?
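
Not an answer to why it behaves this way, but one commonly suggested workaround is to build the redirect from %{SERVER_NAME} (which never carries a port) instead of %{HTTP_HOST} (which echoes the client's Host header verbatim, port included):

RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{SERVER_NAME}%{REQUEST_URI} [R=301,L]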

routing - preferential active directory one-way trust targets



If you have two domains and forests, Domain + Forest A and Domain + Forest B, and you are making a one-way trust so that Domain + Forest B will implicitly trust A, is there a way to make sure all the trust-related traffic from the DCs in A goes through only ONE preselected DC in Domain B?




All the domains and forests are at Windows Server 2003 functional level. Upgrading B is an option.



Totally stumped. Update the root hints maybe? Having this restriction will make certain routing issues (avoiding setting up more IPSEC tunnels) MUCH easier with regard to trust traffic encryption.


Answer



You'd do this by ensuring name resolution queries by Domain A for DomainB return just the DC of interest. If you are forwarding DNS traffic to DomainB for DomainB queries, that means getting DCs of DomainB not to register certain records by using DNS mnemonics ( http://support.microsoft.com/kb/267855 ). Probably not what you want.



An alternative is to host your own version of the DNS zone(s) for Domain B on the Domain A side with just the detail required. So when _kerberos._tcp.dc._msdcs.domainb.com or _ldap._tcp.dc._msdcs.domainb.com type queries are issued, they return just the DC of interest.
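
As a rough illustration (BIND-style syntax; the names and the 10.0.0.5 address are made up, and a real zone also needs the usual SOA/NS boilerplate), the skeleton zone on the Domain A side would only carry records pointing at the one DC you want the traffic to hit:

_ldap._tcp.dc._msdcs.domainb.com.     600 IN SRV 0 100 389 dc1.domainb.com.
_kerberos._tcp.dc._msdcs.domainb.com. 600 IN SRV 0 100  88 dc1.domainb.com.
dc1.domainb.com.                      600 IN A   10.0.0.5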



I also take it you are not concerned with the single point of failure of choosing one Domain B DC.



setting IPv6 address for domain



I am quite new to the server business and I was wondering about a thing with the IPv6 addresses:



When I assign an IPv6 address for my domain as an AAAA record: Do I assign a /64 address or do I assign a complete single address out of the /64 that I got from my provider?



The thing is that I only got a /64 so I divided them somehow amongst my domains, but I get the impression that I am doing this wrong...




Thanks in advance!


Answer



You assign a full address (/128) in a quad-A record. The /64 is a range of addresses for you to allocate from.



For example:



2604:4301:a:103::/64 is my range, and I can use any address between 2604:4301:a:103:: (the :: is shorthand for all-zeros) and 2604:4301:a:103:FFFF:FFFF:FFFF:FFFF.
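
So a concrete AAAA record (the host name and the chosen address are made up) would look something like:

www.example.com. 3600 IN AAAA 2604:4301:a:103::80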


Is it possible to set up a web server on the same domain used by Active Directory?

I'm a web developer at a large-ish organization, and our web site is hosted by our IT department. Our website will not load without the "www" subdomain in front of it, and IT says that's because Active Directory must use the primary domain, so the web server must use a subdomain. They say it's not possible to fix. I'm highly skeptical of this claim because this hasn't been a problem anywhere else I've worked or heard of, but I'm not familiar enough with the technologies in question to argue the point.



So my question is, does this sound reasonable? Is it not possible to use AD on the same domain as a web server? Thanks!

Monday, December 16, 2019

windows - Can't access the internet when connected to OpenVPN server

I have recently installed OpenVPN on my Windows 2003 server.
Once someone is connected to the server, they do not have internet access.





  • My network is on 192.168.1.1

  • my server is on 192.168.1.110

  • I am using the dd-wrt firmware

  • I have enabled port 1194 for 192.168.1.110 on the router

  • Routing and Remote Access is disabled

  • I have 2 Tap-Win32 Adapter V8(s) on my windows 2003 server

  • I have tried setting this line to 192.168.1.1 and also to my ISP's DNS servers
    push "dhcp-option DNS 192.168.1.1" # Replace the Xs with the IP address of the DNS for your
    home network (usually your ISP's DNS)


  • I have created an advanced routing Gateway in dd-wrt



     Destination LAN NET: 192.168.10.0
    Subnet Mask: 255.255.255.252
    Gateway: 192.168.1.110
    Interface: Lan & WLAN



I have followed this website exactly: http://www.itsatechworld.com/2006/01/29/how-to-configure-openvpn/




EDIT: I just tried to connect through the cmd prompt and get the following subnet error - potential route subnet conflict between local LAN [192.168.1.0/255.255.255.0] and remote VPN [192.168.1.0/255.255.255.0]



My server file looks as follows:



local 192.168.1.110 # This is the IP address of the real network interface on the server connected to the router

port 1194 # This is the port OpenVPN is running on - make sure the router is port forwarding this port to the above IP

proto udp # UDP tends to perform better than TCP for VPN


mssfix 1400 # This setting fixed problems I was having with apps like Remote Desktop

push "dhcp-option DNS 192.168.1.1" # Replace the Xs with the IP address of the DNS for your home network (usually your ISP's DNS)

#push "dhcp-option DNS X.X.X.X" # A second DNS server if you have one

dev tap

#dev-node MyTAP #If you renamed your TAP interface or have more than one TAP interface then remove the # at the beginning and change "MyTAP" to its name


ca "ca.crt"

cert "server.crt"

key "server.key" # This file should be kept secret

dh "dh1024.pem"

server 192.168.10.0 255.255.255.128 # This assigns the virtual IP address and subnet to the server's OpenVPN connection. Make sure the Routing Table entry matches this.


ifconfig-pool-persist ipp.txt

push "redirect-gateway def1" # This will force the clients to use the home network's internet connection

keepalive 10 120

cipher BF-CBC # Blowfish (default) encryption

comp-lzo


max-clients 100 # Assign the maximum number of clients here

persist-key

persist-tun

status openvpn-status.log

verb 1 # This sets how detailed the log file will be. 0 causes problems and higher numbers can give you more detail for troubleshooting



My client1 file is as follows:



client

dev tap

#dev-node MyTAP #If you renamed your TAP interface or have more than one TAP interface then remove the # at the beginning and change "MyTAP" to its name


proto udp

remote my-dyna-dns.com 1194 #You will need to enter you dyndns account or static IP address here. The number following it is the port you set in the server's config

route 192.168.1.0 255.255.255.0 vpn_gateway 3 #This is the IP address scheme and subnet of the normal network your server is on. Your router would usually be 192.168.1.1

resolv-retry infinite

nobind


persist-key

persist-tun

ca "ca.crt"

cert "client1.crt" # Change the next two lines to match the files in the keys directory. This should be be different for each client.

key "client1.key" # This file should be kept secret


ns-cert-type server

cipher BF-CBC # Blowfish (default) encryption

comp-lzo

verb 1


Thanks in advance!

windows server 2008 r2 - Replace wildcard certificate on multiple sites at once (using command line) on IIS 7.5



I have 3 websites: aaa.my-domain.com, bbb.my-domain.com and ccc.my-domain.com all using a single wildcard certificate *.my-domain.com on IIS 7.5 Windows Server 2008R2 64-bit. That certificate expires in a month and I have a new wildcard certificate *.my-domain.com on my server ready.



I want all those domains to use the new wildcard certificate without noticeable downtime.




I tried the usual through the UI starting with replacing the certificate for aaa.my-domain.com:
edit site bindings window in IIS 7.5



But when I press OK, I get the following error:




--------------------------- Edit Site Binding ---------------------------



At least one other site is using the same HTTPS binding and the binding is configured with a different certificate. Are you sure that you want to reuse this HTTPS binding and reassign the other site or sites to use the new certificate?




--------------------------- Yes No ---------------------------




When I click Yes, I get the following message:




--------------------------- Edit Site Binding ---------------------------



The certificate associated with this binding is also assigned to another site's binding. Editing this binding will cause the HTTPS binding of the other site to be unusable. Do you still want to continue?




--------------------------- Yes No ---------------------------




This message tells me that https://bbb.my-domain.com and https://ccc.my-domain.com will become unusable. And I will have downtime for those at least until I'm done replacing the certificate for those 2 domains too, right?



I was thinking that there must be a smarter way of doing this, possibly through the command line, that replaces the wildcard certificate with a new one for all websites at once. I couldn't find any resources online as to how to do that. Any ideas?







Answer



The context of the answer is that IIS 7 doesn't actually care about the certificate binding. IIS 7 only ties websites to one or more sockets, each socket being a combination of IP + port. Source: IIS7 add certificate to site from command line



So what we want to do is re-bind the certificate at the OS layer. The OS layer takes control of the SSL part, so you use netsh to associate a certificate with a particular socket. This is done with netsh http add sslcert.




When we bind a (new) certificate to a socket (ip + port), all sites using that socket will use the new certificate.



The command to bind a certificate to a socket is:
netsh http add sslcert ipport=10.100.0.12:443 certhash=1234567890123456789012345678901234567890 appid={12345678-1234-1234-1234-999999999999}





This part explains how to proceed step by step. It assumes you have some websites (aaa.my-domain.com, bbb.my-domain.com) running a *.my-domain.com certificate that is about to expire, and that you have a new certificate already installed on the server but not yet applied to the websites in IIS.



First, we need to find out 2 things. The certhash of your new certificate and the appid.





  • certhash Specifies the SHA hash of the certificate. This hash is 20 bytes long and specified as a hexadecimal string.

  • appid Specifies the GUID to identify the owning application, which is IIS itself.



Find the certhash



Execute the certutil command to get all certificates on the machine:




certutil -store My



I don't need all of that information, so I filter it:



certutil -store My | findstr /R "sha1 my-domain.com ===="



Among the output you should find your new certificate ready on your server:



================ Certificate 5 ================
Subject: CN=*.my-domain.com, OU=PositiveSSL Wildcard, OU=Domain Control Validated

Cert Hash(sha1): 12 34 56 78 90 12 34 56 78 90 12 34 56 78 90 12 34 56 78 90



1234567890123456789012345678901234567890 is the certhash we were looking for. It's the Cert Hash(sha1) without the spaces.



Find the appid



Let's start off by looking at all certificate-socket bindings:



netsh http show sslcert




Or one socket in particular



netsh http show sslcert ipport=10.100.0.12:443



Output:



SSL Certificate bindings:
----------------------
IP:port : 10.100.0.12:443

Certificate Hash : 1111111111111111111111111111111111111111
Application ID : {12345678-1234-1234-1234-123456789012}
Certificate Store Name : MY
Verify Client Certificate Revocation : Enabled
Verify Revocation Using Cached Client Certificate Only : Disabled
Usage Check : Enabled
Revocation Freshness Time : 0
URL Retrieval Timeout : 0
Ctl Identifier : (null)
Ctl Store Name : (null)

DS Mapper Usage : Disabled
Negotiate Client Certificate : Disabled


{12345678-1234-1234-1234-123456789012} is the appid we were looking for. It's the Application ID of IIS itself. Here you see the socket 10.100.0.12:443 is currently still bound to the old certificate (Hash 111111111...)



bind a (new) certificate to a socket



Open a command prompt and run it as an administrator. If you don't run it as administrator, you'll get an error like: "The requested operation requires elevation (Run as administrator)."




First remove the current certificate-socket binding using this command



netsh http delete sslcert ipport=10.100.0.12:443



You should get:



SSL Certificate successfully deleted



Then use this command (found here) to add the new certificate-socket binding with the appid and the certhash (without spaces) that you found earlier:




netsh http add sslcert ipport=10.100.0.12:443 certhash=1234567890123456789012345678901234567890 appid={12345678-1234-1234-1234-123456789012}



You should get:



SSL Certificate successfully added



DONE. You just replaced the certificate of all websites that are bound to this IP + port (socket).


smtp - Postfix mail server refuses connections from outside mail servers




I have a Postfix server with SMTP listening on port 587 which cannot be reached by outside mail servers like Gmail, and hence I receive this Mail Delivery Failure when sending an email from GMail to useraccount@mydomain.tld:



The recipient server did not accept our requests to connect. Learn more at https://support.google.com/mail/answer/7720
[mail.mydomain.tld MailServerIP:(It is interesting that there is no port here!) socket error]



----- Original message -----

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=gmail.com; s=20120113;
h=mime-version:in-reply-to:references:from:date:message-id:subject:to;

bh=pEP+FUpQu4YrUIJfRtRY72qvieH+prFPrjpP+XncC+A=;
b=xWURH+CuLyCB2dCkDZTmlncHMmvAaP24KwgoqUxur1FxRye7cJ4qAHYDjEQLGoecJO
U3ka/qkBSwcDnCsrBZc+I4YL7sN6pRJvBatv/EXbYdwoczq8LoizXWuYKxprCgSiVKu5
3eFdaFN8dCBXJncp4mMMOzKwonqe1fO+zuV5fI3ef7TCgThEBiCwZrEFUlPb64MCkQzY
wKu/gwKVS5yvO2MvD3IJQJeqmaj2kegC9zIIQo5w9w/HeS4wasyVU9bIAAuCG9azdiL6
wR9CzV95xHJYWv/3YUcB0CBMuL7vrelDlVlRddhrhJRV4jkzOHOYlgvDVhd0GPj7/Mib
KqOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20130820;
h=x-gm-message-state:mime-version:in-reply-to:references:from:date

:message-id:subject:to;
bh=pEP+FUpQu4YrUIJfRtRY72qvieH+prFPrjpP+XncC+A=;
b=lSA5HbBTMeKoIOp7/ZuktmhmO67v/oN4gAlk6kJDlPj2ue9yCDx8s0IdBlF4QENiae
HQqug+EqwxQItawgwYO8ZGmQDs1nPPjxLJdymIGHCdIF4G149fk0GSkbE3+yhwvGvTXj
JPYFZpDeQvnLBy293t2lIkxk5GGvaC2w7gZvP3Pt6qZAFZvbVxGTOoKwqp+zJ7valQhr
xvmImfSJAw2fzIzTXE4Or4XXsPXpP5i1rcmRwDwGk8qQnXoCVfZLoyaQBPq2J5ChWPR0
w5nLlVSVB7IFfwmRZEfVwVxjOvHCMbXtu1Eeyl1JZ88vfD0OvbSeWn7RwBSoLWZoOiVl
EuYg==
X-Gm-Message-State: AD7BkJJ4ZaGY+7wGDmRTWxi4nvS2OwcKWPrcxB9LMV0I1cD9DTnaAiMAC+1nFhQx0/W8no4EPXCNk7rU7gk8Eg==
X-Received: by 10.28.44.9 with SMTP id s9mr11997524wms.96.1459775140100; Mon,

04 Apr 2016 06:05:40 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.28.53.66 with HTTP; Mon, 4 Apr 2016 06:05:00 -0700 (PDT)
In-Reply-To: <8b4794cab4431ff4cc910761b74fb544@mydomain.tld>
References: <8b4794cab4431ff4cc910761b74fb544@mydomain.tld>
From: Name Family
Date: Mon, 4 Apr 2016 17:35:00 +0430
Message-ID:
Subject: Re: test
To: Name

Content-Type: multipart/alternative; boundary=001a113d9e02ad7f4f052fa86217


Also, digging from an external ISP to check the DNS records gives:



dig MX mydomain.tld:



;; ANSWER SECTION:
mydomain.tld. 21599 IN MX 10 mail.mydomain.tld.



And then, dig A mail.mydomain.tld results:



;; ANSWER SECTION:
mail.mydomain.tld. 21599 IN A proper.ip.address


I have been able to send and receive email within the mail server between local accounts, and also to send to outside mail servers like GMail, but I cannot receive from outside.



My Postfix config (main.cf) is:




# See /usr/share/postfix/main.cf.dist for a commented, more complete version


# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)

biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no
# TLS parameters

smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_un$

myhostname = mydomain.tld
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = mydomain.tld, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +


inet_interfaces = loopback-only
inet_protocols = all


Master.cf content:



#
# Postfix master process configuration file. For details on the format
# of the file, see the master(5) manual page (command: "man 5 master" or
# on-line: http://www.postfix.org/master.5.html).

#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)
# ==========================================================================
587 inet n - - - - smtpd
#smtp inet n - - - 1 postscreen
#smtpd pass - - - - - smtpd

#dnsblog unix - - - - 0 dnsblog
#tlsproxy unix - - - - 0 tlsproxy
#submission inet n - - - - smtpd
# -o syslog_name=postfix/submission
# -o smtpd_tls_security_level=encrypt
# -o smtpd_sasl_auth_enable=yes
# -o smtpd_reject_unlisted_recipient=no
# -o smtpd_client_restrictions=$mua_client_restrictions
# -o smtpd_helo_restrictions=$mua_helo_restrictions
# -o smtpd_sender_restrictions=$mua_sender_restrictions

# -o smtpd_recipient_restrictions=
# -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
# -o milter_macro_daemon_name=ORIGINATING
#smtps inet n - - - - smtpd
# -o syslog_name=postfix/smtps
# -o smtpd_tls_wrappermode=yes
# -o smtpd_sasl_auth_enable=yes
# -o smtpd_reject_unlisted_recipient=no
# -o smtpd_client_restrictions=$mua_client_restrictions
# -o smtpd_helo_restrictions=$mua_helo_restrictions

# -o smtpd_sender_restrictions=$mua_sender_restrictions
# -o smtpd_recipient_restrictions=
# -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
# -o milter_macro_daemon_name=ORIGINATING
#628 inet n - - - - qmqpd
pickup unix n - - 60 1 pickup
cleanup unix n - - - 0 cleanup
qmgr unix n - n 300 1 qmgr
#qmgr unix n - n 300 1 oqmgr
tlsmgr unix - - - 1000? 1 tlsmgr

rewrite unix - - - - - trivial-rewrite
bounce unix - - - - 0 bounce
defer unix - - - - 0 bounce
trace unix - - - - 0 bounce
verify unix - - - - 1 verify
flush unix n - - 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - - - - smtp
relay unix - - - - - smtp

# -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq unix n - - - - showq
error unix - - - - - error
retry unix - - - - - error
discard unix - - - - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - - - - lmtp
anvil unix - - - - 1 anvil
scache unix - - - - 1 scache

#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent. See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#

# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#

# Specify in cyrus.conf:
# lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
# mailbox_transport = lmtp:inet:localhost
# virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)

# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus unix - n n - - pipe
# user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
# Old example of delivery via Cyrus.
#
#old-cyrus unix - n n - - pipe
# flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}

#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp unix - n n - - pipe
flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#

ifmail unix - n n - - pipe
flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe
flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n - 2 pipe
flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman unix - n n - - pipe
flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
${nexthop} ${user}



netstat -tulpn:



    Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 2050/stunnel4
tcp 0 0 0.0.0.0:21976 0.0.0.0:* LISTEN 877/sshd
tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 907/named
tcp 0 0 127.0.0.1:51101 0.0.0.0:* LISTEN 2310/irssi
tcp 0 0 127.0.0.1:51102 0.0.0.0:* LISTEN 2292/rtorrent

tcp 0 0 0.0.0.0:51103 0.0.0.0:* LISTEN 2292/rtorrent
tcp 0 0 0.0.0.0:993 0.0.0.0:* LISTEN 879/dovecot
tcp 0 0 0.0.0.0:51106 0.0.0.0:* LISTEN 2324/python
tcp 0 0 0.0.0.0:51107 0.0.0.0:* LISTEN 2317/python
tcp 0 0 0.0.0.0:995 0.0.0.0:* LISTEN 879/dovecot
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 963/mysqld
tcp 0 0 0.0.0.0:1194 0.0.0.0:* LISTEN 1027/openvpn
tcp 0 0 127.0.0.1:587 0.0.0.0:* LISTEN 11162/master
tcp 0 0 0.0.0.0:110 0.0.0.0:* LISTEN 879/dovecot
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN 879/dovecot

tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 2224/perl
tcp 0 0 0.0.0.0:4433 0.0.0.0:* LISTEN 2317/python
tcp 0 0 0.0.0.0:21201 0.0.0.0:* LISTEN 656/vsftpd


iptables -L:



Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:smtp ctstate NEW,ESTABLISHED

ACCEPT tcp -- anywhere anywhere tcp spt:smtp
ACCEPT tcp -- anywhere anywhere tcp spt:submission

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp spt:smtp ctstate ESTABLISHED
ACCEPT tcp -- anywhere anywhere tcp dpt:submission

ACCEPT tcp -- anywhere anywhere tcp spt:submission


can anybody help me please?
Thanks.


Answer



Your Postfix installation is undoubtedly configured to send and receive e-mail for local users only. In order to receive messages from the Internet, Postfix must be able to accept connections on port 25/tcp (SMTP); that is the port other mail servers such as GMail deliver to. Ports 465/tcp (SMTP over SSL) and 587/tcp (SUBMISSION) are used by end-user clients to submit messages, not for server-to-server delivery. See here for an overview of the difference between these ports.



I guess executing dpkg-reconfigure --priority=low postfix and supplying proper answers to the wizard will allow Postfix to receive messages from the Internet. Or else:





  1. Set inet_interfaces = all in /etc/postfix/main.cf.



    inet_interfaces = all

  2. In /etc/postfix/master.cf, comment the 587 service and uncomment smtp, smtpd, submission and smtps services:



    # 587      inet  n       -       -       -       -       smtpd
    smtp inet n - - - 1 postscreen
    smtpd pass - - - - - smtpd

    submission inet n - - - - smtpd
    -o syslog_name=postfix/submission
    -o smtpd_tls_security_level=encrypt
    -o smtpd_sasl_auth_enable=yes
    -o smtpd_reject_unlisted_recipient=no
    # -o smtpd_client_restrictions=$mua_client_restrictions
    # -o smtpd_helo_restrictions=$mua_helo_restrictions
    # -o smtpd_sender_restrictions=$mua_sender_restrictions
    # -o smtpd_recipient_restrictions=
    -o smtpd_relay_restrictions=permit_sasl_authenticated,reject

    # -o milter_macro_daemon_name=ORIGINATING
    smtps inet n - - - - smtpd
    -o syslog_name=postfix/smtps
    -o smtpd_tls_wrappermode=yes
    -o smtpd_sasl_auth_enable=yes
    -o smtpd_reject_unlisted_recipient=no
    # -o smtpd_client_restrictions=$mua_client_restrictions
    # -o smtpd_helo_restrictions=$mua_helo_restrictions
    # -o smtpd_sender_restrictions=$mua_sender_restrictions
    # -o smtpd_recipient_restrictions=

    # -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
    # -o milter_macro_daemon_name=ORIGINATING



Use an external diagnostic tool to check whether your mail server is publicly accessible on ports 25/tcp, 465/tcp and 587/tcp. I suggest using http://mxtoolbox.com/diagnostic.aspx and http://dns.kify.com/ .
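
Besides the web-based tools, a quick manual check from any outside machine is to speak SMTP to the server directly; once the changes are in place, a 220 banner roughly like the one below (it follows the smtpd_banner setting in your main.cf) should come back on port 25:

$ telnet mail.mydomain.tld 25
220 mydomain.tld ESMTP Postfix (Ubuntu)
QUIT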


Sunday, December 15, 2019

nat - Internet access via OpenVPN



Note: This is a repost from the OpenVPN forums



I have just set up an OpenVPN server on my Linode VPS, and I have successfully connected my Android phone to it. Now I want to use the "route all traffic" option on the client. I'm not sure how to set up the routes on the server side, though, so I would greatly appreciate any help. I'm taking a class this summer at my local Community College, and they seem to think that an open WAN with web authentication is secure enough.




Here are my interface configurations:




eth0 Link encap:Ethernet HWaddr
f2:3c:91:93:a8:c2
inet addr:173.255.235.246 Bcast:173.255.235.255
Mask:255.255.255.0
inet6 addr: 2600:3c03::f03c:91ff:fe93:a8c2/64
Scope:Global
inet6 addr: fe80::f03c:91ff:fe93:a8c2/64

Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:126144742 errors:0 dropped:0 overruns:0 frame:0
TX packets:315279 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2301671639 (2.3 GB) TX bytes:136422020 (136.4 MB)
Interrupt:44



lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:3971 errors:0 dropped:0 overruns:0 frame:0
TX packets:3971 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:753104 (753.1 KB) TX bytes:753104 (753.1 KB)



tun0 Link encap:UNSPEC HWaddr
00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00



      inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255

UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)


Answer



Bear in mind that although OpenVPN will provide you a secure tunnel, it won't stop access to your Android device across the LAN, so you might want to have a look at that too.




What is your network layout on the VPN server side of things? That will help get the ball rolling.
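
In the meantime, the usual recipe for the "route all traffic" case on a single-interface VPS like this is IP forwarding plus NAT of the tunnel subnet out of eth0. A minimal sketch (the 10.8.0.0/24 subnet matches the tun0 address above; everything else is generic) to go along with push "redirect-gateway def1" in the server config:

# enable forwarding (persist it via /etc/sysctl.conf)
sysctl -w net.ipv4.ip_forward=1

# NAT the VPN clients out through the public interface
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE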


Apache load balancer logging question



I am using Apache as a load balancer and would like to log the server to which the load balancer forwards each request. For example, if I had three webservers, called:




  • webserver1 - 192.168.0.1


  • webserver2 - 192.168.0.2

  • webserver3 - 192.168.0.3



I would like the log to show me which server the request was forwarded to (the second field in the example below):




10.1.0.1 192.168.0.1 - - [20/Jul/2010:10:52:01 -0600] "GET /js/shared/kobj-static.js HTTP/1.1" 302 236 "http://www.google.com/search?q=baked+bbq+rib+recipes&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6 (.NET CLR 3.5.30729) infoCard/AzigoLite/0.0.12"





Any help would be appreciated.


Answer



You can use the Custom log format to do that. One way I think you can do is to add the environment variable to the log.
mod_proxy_balancer (that I suppose you are using) exports BALANCER_WORKER_NAME variable that is the name of the Worker used for the request. You can use the %{BALANCER_WORKER_NAME}e directive on your Custom Log format string to get that logged. This is an example of the default debian 'combined' log format with the directive added:



LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{BALANCER_WORKER_NAME}e\"" combined

Friday, December 13, 2019

Why no first user setup as sudo on Linode (Ubuntu 10.10)

The standard setup for Ubuntu is to create two users, root and a first user. The first user always gets full sudo access and root login is disabled (for security).



Linode doesn't do this, it just creates root, with ssh login enabled.




Why is this so? Is there some limit on the number of accounts Linode nodes can have? Is there some other reason?



My instinct is to create a user, give it sudo, and disable SSH root login. This keeps dev and prod machines as alike as possible.
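
For reference, the setup described above boils down to a couple of commands on a fresh node; the username is arbitrary, and on 10.10 the sudo-enabled group is admin (newer releases use sudo):

adduser deploy
usermod -aG admin deploy
# then set "PermitRootLogin no" in /etc/ssh/sshd_config and run:
service ssh restart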

Thursday, December 12, 2019

cron - crontab to run bash script (ssh command in it) not working



CentOS 5.4



(in my script file - script.sh)




#!/bin/bash
ssh 192.168.0.1 'iptables -L' > /tmp;


(in /etc/crontab)



30 21 30 9 * root /bin/bash /script.sh


If I run the script in a terminal, things work just fine. But when crontab runs it, the tmp file is generated, but there's nothing in it (0 KB). I already run ssh-agent so ssh won't prompt for a password. What could be the problem with this? Thanks.



Answer



I suggest always explicitly setting all needed variables at the beginning of your scripts.



PATH=/bin:/usr/bin:/sbin
MYVAR=whatever


That said, I would





  1. create a private/public keypair

  2. set an empty password on the private key

  3. set permission 400 on the private key file

  4. put the public key in the authorized_keys file of the root user on 192.168.0.1



Now try the connection with



#!/bin/bash
PATH=/usr/bin


ssh -i /myprivatekey -l root 192.168.0.1 '/sbin/iptables -L' > /tmp/output.$$


Edit: I guessed that the "iptables" command had to be executed by root on the remote server. If it is not, of course the "-l" parameter has to be changed accordingly.


How to detect an SSH connection?



For some reason I have the following scenario:



On boot-up I'm launching a script which waits for a given amount of time and checks whether an SSH connection was established during this time window or not. If a connection is open, the script does action A; otherwise it kills sshd and does B.




What would be the best way to detect an open connection? (The script can be written in Bash or Ruby)



thx


Answer



If you want to detect a current SSH session, use lsof -i :22 and look for it returning more than 2 lines or grep for ESTABLISHED:
[root@nemo ~]# lsof -i :22
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 3772 root 3u IPv6 9906 TCP *:ssh (LISTEN)
sshd 21376 root 3r IPv6 159072 TCP myserver:ssh->someip:27813 (ESTABLISHED)
sshd 21381 james 3u IPv6 159072 TCP myserver:ssh->someip:27813 (ESTABLISHED)



To see if a session was opened at all, look for something similar to the following in /var/log/secure (on redhat/centos/fedora):
Sep 27 05:05:28 nemo sshd[21376]: Accepted password for james from some_ip port 27813 ssh2
If you allow authentication by means other than password, the log entries may be slightly different.
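
Putting it together, the boot-up check described in the question could be a small script along these lines; the 300-second window and the A/B actions are placeholders, and this only looks for a currently open session (the /var/log/secure approach above also catches sessions that have already closed):

#!/bin/bash
# wait the given time window, then check for an established SSH session
sleep 300

if lsof -i :22 | grep -q ESTABLISHED; then
    echo "connection found - doing action A"
else
    killall sshd
    echo "no connection - doing action B"
fi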


linux - "Virtual hosts" for SSH




We have a remote Xen server running a lot of guest machines (on Linux), with only a couple of IPs available.




Each guest machine should be directly accessible by the SSH from the outer world.



Right now we assign a separate domain name to each guest machine, pointing to one of the few available IPs. We also assign a port number to that guest machine.



So, to access machine named foo, one should do as follows:




$ ssh foo.example.com -p 12345



...And to access machine named bar:




$ ssh bar.example.com -p 12346


Both foo.example.com and bar.example.com point to the same IP.



Is it possible to somehow get rid of the custom ports in this configuration and configure the SSH server listening on that IP (or the firewall, or whatever on the server side) so that it routes the incoming connection to the correct guest machine based on the domain name, so that the following works as intended?





$ ssh foo.example.com hostname # prints foo
$ ssh bar.example.com hostname # prints bar


Note that I do know about .ssh/config and related client-side configuration solutions; we're using that now. This question is specifically about a zero-client-configuration solution.


Answer



                          foo
                         /
Client ----- Xen server
                         \
                          bar


It sounds like SSH Gateway is what you're looking for.



Firstly, create 2 new users foo, bar on the Xen server:



Xen # useradd foo
Xen # useradd bar



Generate key pairs and copy public key to the foo-server and bar-server:



Xen # su - foo
Xen $ ssh-keygen
Xen $ ssh-copy-id -i ~/.ssh/id_rsa.pub foo-user@foo-server


(Do the same for bar user)




Now, from the Xen server (SSH gateway) you can log in to the foo-server and bar-server without a password prompt.



The next step is to let the Client authenticate to the Xen server with a public key:



Client $ ssh-keygen
Client $ ssh-copy-id -i ~/.ssh/id_rsa.pub foo@Xen


The final step is to make the Xen server open a second connection to the corresponding internal server. Log in to Xen, switch to the foo user, open the ~/.ssh/authorized_keys file and change:




ssh-rsa AAAAB3N...== user@clienthost


to:



command="ssh -t -t foo-user@foo-server" ssh-rsa AAAAB3N...== user@clienthost


The sample result:




$ ssh foo-user@Xen
Last login: Thu Nov 10 13:02:25 2011 from Client
$ id
uid=500(foo-user) gid=500(foo-user) groups=500(foo-user) context=user_u:system_r:unconfined_t
$ exit
logout

Connection to foo-server closed.
Connection to Xen closed.






$ ssh bar-user@Xen
Last login: Thu Nov 10 11:28:52 2011 from Client
$ id
uid=500(bar-user) gid=500(bar-user) groups=500(bar-user) context=user_u:system_r:unconfined_t
$ exit
logout


Connection to bar-server closed.
Connection to Xen closed.

Wednesday, December 11, 2019

Are different RAID cards setups compatible?



I'm setting up a new NAS/SAN system with RAID 5, and I'm trying to decide between software and hardware RAID because I have this question in mind:




If my hardware RAID card fails, will I need to replace it with exactly the same model, is the same brand enough, or are RAID 5 setups incompatible between different cards?



My guess is that they are not compatible, and I'm trying to get the least downtime possible in case a piece of my hardware fails...


Answer



Cards from different manufacturers are typically not compatible, although different cards from the same manufacturer usually are. There is no particular standard format for RAID metadata that is compatible across RAID controllers and software RAID implementations.



If you have an Adaptec (for example) card and get a different model of Adaptec card, the new card will almost certainly mount the old array. In some cases the same also applies to SAN equipment and RAID controllers from the same manufacturer - Mylex DAC-FFX controllers and ExtremeRAID 3000 cards used to do this (in fact an ExtremeRAID 3000 was essentially a DAC-FFX on a PCI card), and HP Smartarray 1000 & 1500s will also mount arrays transferred from HP direct attach controllers.



This is usually a deliberate policy on the part of the manufacturer to allow substitution of current parts if an older model is no longer available in stock. It also helps with upselling existing DA customers onto entry level SAN equipment by easing the migration path - just pop the disks into the SAN and mount the volumes off the SAN.




Note, however, that OEM contracts, mergers and acquisitions mean that manufacturers may have several incompatible product lines. For example:




  • Adaptec bought Eurologic and sold Eurologic SAN equipment for a while. Eurologic SANs have Mylex RAID controllers.


  • Adaptec also purchased ICP Vortex, so some adaptec branded RAID controllers may not be compatible with others in this respect.


  • LSI also purchased Mylex at one point and sold ExtremeRAID controllers for a while, but have their own line of host-based RAID controllers. The final dissolution of Mylex was quite complex with bits going to Xyratex (the biggest manufacturer of disk array hardware you've never heard of making stuff branded by people you have) as well.


  • Intel, Dell PERC, IBM ServerRAID controllers and some HP parts are often rebadged items made by a third party (ICP Vortex, Adaptec and LSI controllers often pop up with other brands). The OEM ones tend to have custom firmware but may still represent multiple, incompatible product lines. However, branded kit of this sort tends to have specific part numbers so you can re-order compatible replacements.




Do your homework. Check with the manufacturer, avoid anyone who gives you a blank stare and make sure you know which models go with each other in this respect.




Note Linux and Unix have good SW RAID facilities but software RAID on Windows is poo. If you're using Windows then always go for hardware RAID. Ebay is your friend if you get sticker shock at the price of new kit. Make sure you get a cache battery for it - often you can get them quite cheaply off ebay - again, find the part number and hunt.
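For comparison, a Linux software RAID 5 array is created with mdadm; a minimal sketch, with made-up device names:

# create a 4-disk RAID 5 array from /dev/sdb..sde (example devices)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]

# watch the initial sync
cat /proc/mdstat

# persist the array definition (file location varies by distro)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf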


Friday, December 6, 2019

linux - ssh public key authentication

I have tried to configure ssh to use public key authentication







/etc/ssh/sshd_config has these parameters:



RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

PasswordAuthentication yes



But I'm still being prompted for a password. If I use



PasswordAuthentication no


I can't log in.



Any suggestion?







It works, but only for the root user.



My problem was that I had this parameter:



PermitRootLogin no.



I can log in with keys as the root user if I use




PermitRootLogin yes.



On the system there is only a /root/.ssh directory, with an authorized_keys file in it.



How can I add this for other users if there isn't a /home/$USER/.ssh directory and authorized_keys file for each user on this system?



Is there a way to configure this for each user?
I will need different authorized_keys files for each user.




Is it possible to configure this for different hosts, domains or IPs, similar to httpd.conf?
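For reference, per-user key authentication just needs the same layout repeated under each user's home directory; a minimal sketch for a hypothetical user alice (the public key file name is made up):

# create the per-user key directory and authorized_keys file
mkdir -p /home/alice/.ssh
cat alice_id_rsa.pub >> /home/alice/.ssh/authorized_keys
chmod 700 /home/alice/.ssh
chmod 600 /home/alice/.ssh/authorized_keys
chown -R alice:alice /home/alice/.ssh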

Thursday, December 5, 2019

monitoring - calculating days until disk is full

We use Graphite to track the history of disk utilisation over time. Our alerting system looks at the data from Graphite to alert us when the free space falls below a certain number of blocks.



I'd like to get smarter alerts - what I really care about is "how long do I have before I have to do something about the free space?", e.g. if the trend shows that in 7 days I'll run out of disk space then raise a Warning, if it's less than 2 days then raise an Error.



Graphite's standard dashboard interface can be pretty smart with derivatives and Holt-Winters confidence bands, but so far I haven't found a way to convert this into actionable metrics. I'm also fine with crunching the numbers in other ways (just extracting the raw numbers from Graphite and running a script to do that).



One complication is that the graph is not smooth - files get added and removed, but the general trend over time is for disk space usage to increase, so perhaps there is a need to look at local minima (if looking at the "disk free" metric) and draw a trend between the troughs.



Has anyone done this?
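As a very rough sketch of the idea (not Graphite-specific), you can sample free space twice and extrapolate linearly; the mount point, interval and thresholds below are made-up examples, and it assumes GNU df:

#!/bin/bash
MOUNT=/data          # filesystem to watch (example)
INTERVAL=3600        # seconds between the two samples

free1=$(df -k --output=avail "$MOUNT" | tail -1)
sleep "$INTERVAL"
free2=$(df -k --output=avail "$MOUNT" | tail -1)

awk -v f1="$free1" -v f2="$free2" -v dt="$INTERVAL" 'BEGIN {
    rate = (f1 - f2) / dt                 # KiB consumed per second
    if (rate <= 0) { print "usage is flat or shrinking"; exit }
    days = f2 / rate / 86400
    printf "approx. %.1f days until full\n", days
    if (days < 2)      print "ERROR"
    else if (days < 7) print "WARNING"
}'

A fancier version would fit a trend over many samples pulled from Graphite instead of just two, but the arithmetic is the same.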

Wednesday, December 4, 2019

storage - How to remember RAID levels?





How do you remember (if you really do :-)) all the different levels and what each level does? Can anyone suggest an easy way to remember?


Answer



0 - S (stripe)



1 - M (mirror)




5 - P (parity)



10 - MS (mirror + stripe)



Smart Men Pay MicroSoft



or



Silly Men Pay MicroSoft



Tuesday, December 3, 2019

windows server 2008 - Secondary Domain Controller no longer part of the domain

[before anyone corrects me I've used the terms secondary and primary colloquially and I understand the terminology]



I have a problem with my secondary DC, but not with any other server in the domain. Everything is Windows Server 2008, virtualized using VMware. The DC appears to no longer be part of the domain. Accounts appear "locked out" on DC2 but are not locked out on DC1. Active Directory won't pull up on DC2, and I can't edit accounts locally on DC2 to unlock them.




All network pings return "General Failure", including pings to 127.0.0.1 and to any other server by IP or DNS name. Pings to DC2 fail as well. Everything looks fine in the adapter settings and it even shows as connected to the correct domain, but it can't reach anything else. Services are fine. There are no enabled firewalls or other issues that would explain the connection problems.



I believe it may be a trust issue, but I'm not entirely sure.

domain name system - DNS/Web Hosting to a private network setup



I've tried to find answers to my question, but I'm still quite confused by some of it, so I'm here to seek further advice.




The setup:



We have hosting, and our domain will be something like www.example.biz.



In our infrastructure, we have the following traditional servers, which will be put behind a firewall on a private network:




  1. Web Server

  2. Database Server




Now the domain will be given to us, and we would like to point it at our web server to host the web pages.



This is the solution I've come up with:



Configure the hosting's DNS record for the domain to point to our public IP; the firewall then port-forwards that traffic to our web server, which accepts it and serves the web pages.



My question is: is my solution enough for this setup, or should I configure a public authoritative DNS server and register it as a nameserver with the domain host, still using my firewall to point traffic at my private network's web server?




I would really appreciate for any advice there is, I am still new and I've found this site very helpful.



Thank you and Regards,
Ian


Answer



You don't need to run your own DNS server. DNS is a basic service; you can rely on your provider or a third party like Cloudflare or AWS Route 53 for that.



Other than the DNS part of your question, your solution is standard and should work.



A note: firewalls don't exactly "forward" traffic; I would say they intercept or pass traffic through, but that's mostly a semantic difference. A reverse proxy server like Nginx would forward traffic.
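For illustration, the port-forwarding piece on a Linux firewall is typically just destination NAT; a minimal sketch with made-up addresses (203.0.113.10 as the public IP, 192.168.1.10 as the internal web server):

# forward incoming HTTP on the public address to the internal web server
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.1.10:80

# allow the forwarded traffic through the FORWARD chain
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT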



Monday, December 2, 2019

cron - Python script succeeds manually but fails on crontab

So I'm currently trying to get a script working but it's behaving differently when I run it manually than when I run it from crontab. Basically, I have a reverse ssh tunnel set up from one server to another, and in order to verify that my tunnel is up I:





  • SSH from server A to server B

  • Wget a test URL on server A from server B

  • if Wget succeeds, I disconnect and do nothing

  • if Wget fails, I disconnect and restart the tunnel



I know there are more elegant ways to verify ssh tunnels (like autossh and ServerKeepAlive), but for both policy and redundancy issues, I have to do things this way. Anyways, here's the script:




from __future__ import print_function
from __future__ import absolute_import

import os, sys, subprocess, logging, pexpect

COMMAND_PROMPT = '[#$] '
TERMINAL_PROMPT = '(?i)terminal type\?'
TERMINAL_TYPE = 'vt100'
SSH_NEWKEY = '(?i)are you sure you want to continue connecting'
SERVERS = [{'address': '192.168.100.10', 'connString': 'ssh user@192.168.100.10', 'testGet': 'wget http://192.168.100.11/test.html -t 1 -T 10', 'tunnel': 'start_tunnel'},
           {'address': '192.168.100.12', 'connString': 'ssh user@192.168.100.12', 'testGet': 'wget http://192.168.100.13/test.html -t 1 -T 10', 'tunnel': 'start_tunnel2'}]


def main():

    global COMMAND_PROMPT, TERMINAL_PROMPT, TERMINAL_TYPE, SSH_NEWKEY, SERVERS

    # set up logging
    log = logging.getLogger(__name__)
    log.setLevel(logging.DEBUG)
    handler = logging.FileHandler('/home/user/tunnelTest.log')
    formatter = logging.Formatter('%(asctime)s - %(module)s.%(funcName)s: %(message)s')

    handler.setFormatter(formatter)
    log.addHandler(handler)

    for x in SERVERS:

        # connect to server
        child = pexpect.spawn(x['connString'])
        i = child.expect([pexpect.TIMEOUT, SSH_NEWKEY, COMMAND_PROMPT, '(?i)password'])
        if i == 0:  # Timeout
            log.debug('ERROR! Could not log in to ' + x['address'] + ' ...')
            sys.exit(1)
        if i == 1:  # No key cached
            child.sendline('yes')
            child.expect(COMMAND_PROMPT)
            log.debug('Connected to ' + x['address'] + '...')
        if i == 2:  # Good to go
            log.debug('Connected to ' + x['address'] + '...')

        # Housecleaning
        child.sendline('cd /tmp')
        child.expect(COMMAND_PROMPT)
        child.sendline('rm -r test.html')
        child.expect(COMMAND_PROMPT)

        log.debug('Testing service using ' + x['testGet'] + ' ...')
        child.sendline(x['testGet'])
        child.expect(COMMAND_PROMPT)
        if 'saved' in child.before.lower():
            log.debug('Tunnel working, nothing to do here!')
            log.debug('Disconnecting from remote host ' + x['address'] + '...')
            child.sendline('exit')
        else:
            log.error('Tunnel down!')
            log.debug('Disconnecting from remote host ' + x['address'] + ' and restarting tunnel')
            child.sendline('exit')
            subprocess.call(['start', x['tunnel']])
            log.debug('Autossh tunnel restarted')


if __name__ == "__main__":
    main()


My crontab entry is as follows:



0,30 * * * * python /home/user/tunnelTest.py


So yeah -- this script runs fine when I do it manually (sudo python tunnelTest.py) and also runs fine on crontab unless a tunnel is down. When a tunnel is down, I get the "Tunnel down!" and "Disconnecting from remote host 192.168.100.10 and restarting tunnel" messages in my log, but the script seems to die there. The tunnel doesn't restart, and I get no messages in my log until the start of the next scheduled run.




The start_tunnel script is in /etc/init, the testTunnel.py script is in /home/user, the testTunnel.log file is in /home/user/logs, and I ran crontab -e as root.



Any insight into this matter would be greatly appreciated.



Thanks!

Sunday, December 1, 2019

monitoring - Nagios - NagWin - Send notification with gmail

I would like to send Nagios notifications using my gmail account.



I have already set up the hosts and services I want to monitor.



What is the most simple way to accomplish this using NagWin on a Windows Server 2012 installation?




As far as I know I must change some of these configuration settings:



# 'notify-host-by-email' command definition
define command{
command_name notify-host-by-email
command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /bin/blat - -to $CONTACTEMAIL$ -f nagios@localhost -subject "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" -server ???
}

# 'notify-service-by-email' command definition

define command{
command_name notify-service-by-email
command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /bin/blat - -to $CONTACTEMAIL$ -f nagios@localhost -subject "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" -server ???
}


What should I use for the SMTP server? Is it possible to send my notifications directly to the Gmail server?

Saturday, November 30, 2019

linux - Reverse Proxy multiple internal FTP Servers

I have set up a reverse proxy for HTTP using Apache mod_proxy like this:




  • Client > http://abc.domain1.com > Reverse Proxy Server > 192.168.50.1 (Internal Server)


  • Client > http://def.domain2.com/ > Reverse Proxy Server > 192.168.50.2 (another internal Server)





Now I want to achieve the same for FTP:




  • Client > ftp://abc.domain1.com/ > Reverse Proxy Server > ftp://192.168.50.1 (internal FTP Server)


  • Client > ftp://def.domain2.com/ > Reverse Proxy Server > ftp://192.168.50.2 (another internal FTP Server)




Both internal FTP Servers are running vsftpd. Please let me know the setup for Redhat/Centos.




Reason: I have only one public IP available.

Friday, November 29, 2019

debian - After resizing an encrypted LVM device, the machine takes 4 hours to boot

I have a host, under Debian Wheezy.



The virtualisation software is qemu/KVM, and uses LVM Volumes as disks for the guests.



The guests have all been installed using Debian Wheezy with full-disk encryption and LVM (/boot is outside the LUKS device; the LVM is divided into /, /home and swap).



Twice I had to resize a drive for a guest in order to grow the guest's /home volume.



What I did was (a rough command sketch follows the list):





  • Turn off the machine

  • From the host, grow the guest LVM volume

  • From a Debian CD 1, boot the guest machine with rescue/enable=true as an extra boot parameter.

  • From that live system, chroot into the guest system (passphrase needed)

  • From that chroot, cryptsetup resize

  • Still in the chroot, resize filesystem

  • update-initramfs
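In commands, those steps might look roughly like this; every volume group, LV and mapping name below is a made-up example, and the pvresize/lvextend lines spell out what "resize filesystem" involves when /home is an LV inside the LUKS container:

# on the host: grow the guest's backing LV
lvextend -L +50G /dev/vg_host/guest-disk

# in the guest's rescue chroot, after unlocking the LUKS device:
cryptsetup resize guest_crypt             # guest_crypt = the opened LUKS mapping
pvresize /dev/mapper/guest_crypt          # grow the PV inside the container
lvextend -L +50G /dev/guest_vg/home       # grow the /home LV
resize2fs /dev/guest_vg/home              # grow the ext4 filesystem
update-initramfs -u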




And then I reboot the machine (after correctly unmounting and closing the volumes and the LUKS device), and it takes a few hours before it asks me for the passphrase.



If anybody has experienced this, knows about this problem, or can spot something I'm doing wrong, please let me know!



Here is the dmesg log from last time:



[    0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 3.2.0-4-amd64 (debian-kernel@lists.debian.org) (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.63-2

[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.2.0-4-amd64 root=/dev/mapper/srvices-root ro single console=tty0 console=ttyS0,115200
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: 0000000000000000 - 000000000009dc00 (usable)
[ 0.000000] BIOS-e820: 000000000009dc00 - 00000000000a0000 (reserved)
[ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
[ 0.000000] BIOS-e820: 0000000000100000 - 00000000dfffe000 (usable)
[ 0.000000] BIOS-e820: 00000000dfffe000 - 00000000e0000000 (reserved)
[ 0.000000] BIOS-e820: 00000000feffc000 - 00000000ff000000 (reserved)
[ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
[ 0.000000] BIOS-e820: 0000000100000000 - 00000001a0000000 (usable)

[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.4 present.
[ 0.000000] DMI: Bochs Bochs, BIOS Bochs 01/01/2007
[ 0.000000] e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)
[ 0.000000] e820 remove range: 00000000000a0000 - 0000000000100000 (usable)
[ 0.000000] No AGP bridge found
[ 0.000000] last_pfn = 0x1a0000 max_arch_pfn = 0x400000000
[ 0.000000] MTRR default type: write-back
[ 0.000000] MTRR fixed ranges enabled:
[ 0.000000] 00000-9FFFF write-back

[ 0.000000] A0000-BFFFF uncachable
[ 0.000000] C0000-FFFFF write-protect
[ 0.000000] MTRR variable ranges enabled:
[ 0.000000] 0 base 00E0000000 mask FFE0000000 uncachable
[ 0.000000] 1 disabled
[ 0.000000] 2 disabled
[ 0.000000] 3 disabled
[ 0.000000] 4 disabled
[ 0.000000] 5 disabled
[ 0.000000] 6 disabled

[ 0.000000] 7 disabled
[ 0.000000] 8 disabled
[ 0.000000] 9 disabled
[ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
[ 0.000000] last_pfn = 0xdfffe max_arch_pfn = 0x400000000
[ 0.000000] found SMP MP-table at [ffff8800000fdad0] fdad0
[ 0.000000] initial memory mapped : 0 - 20000000
[ 0.000000] Base memory trampoline at [ffff880000098000] 98000 size 20480
[ 0.000000] init_memory_mapping: 0000000000000000-00000000dfffe000
[ 0.000000] 0000000000 - 00dfe00000 page 2M

[ 0.000000] 00dfe00000 - 00dfffe000 page 4k
[ 0.000000] kernel direct mapping tables up to dfffe000 @ 1fffa000-20000000
[ 0.000000] init_memory_mapping: 0000000100000000-00000001a0000000
[ 0.000000] 0100000000 - 01a0000000 page 2M
[ 0.000000] kernel direct mapping tables up to 1a0000000 @ dfffa000-dfffe000
[ 0.000000] RAMDISK: 369a4000 - 374ca000
[ 0.000000] ACPI: RSDP 00000000000fd920 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 00000000dfffe550 00038 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 00000000dfffff80 00074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 00000000dfffe590 01121 (v01 BXPC BXDSDT 00000001 INTL 20100528)

[ 0.000000] ACPI: FACS 00000000dfffff40 00040
[ 0.000000] ACPI: SSDT 00000000dffffe40 000FF (v01 BOCHS BXPCSSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: APIC 00000000dffffd50 00080 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 00000000dffffd10 00038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
[ 0.000000] ACPI: SSDT 00000000dffff6c0 00644 (v01 BXPC BXSSDTPC 00000001 INTL 20100528)
[ 0.000000] ACPI: Local APIC address 0xfee00000
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at 0000000000000000-00000001a0000000
[ 0.000000] Initmem setup node 0 0000000000000000-00000001a0000000
[ 0.000000] NODE_DATA [000000019fffb000 - 000000019fffffff]

[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
[ 0.000000] kvm-clock: cpu 0, msr 0:16a9701, boot clock
[ 0.000000] [ffffea0000000000-ffffea0005bfffff] PMD -> [ffff880199600000-ffff88019ebfffff] on node 0
[ 0.000000] Zone PFN ranges:
[ 0.000000] DMA 0x00000010 -> 0x00001000
[ 0.000000] DMA32 0x00001000 -> 0x00100000
[ 0.000000] Normal 0x00100000 -> 0x001a0000
[ 0.000000] Movable zone start PFN for each node
[ 0.000000] early_node_map[3] active PFN ranges
[ 0.000000] 0: 0x00000010 -> 0x0000009d

[ 0.000000] 0: 0x00000100 -> 0x000dfffe
[ 0.000000] 0: 0x00100000 -> 0x001a0000
[ 0.000000] On node 0 totalpages: 1572747
[ 0.000000] DMA zone: 56 pages used for memmap
[ 0.000000] DMA zone: 5 pages reserved
[ 0.000000] DMA zone: 3920 pages, LIFO batch:0
[ 0.000000] DMA32 zone: 14280 pages used for memmap
[ 0.000000] DMA32 zone: 899126 pages, LIFO batch:31
[ 0.000000] Normal zone: 8960 pages used for memmap
[ 0.000000] Normal zone: 646400 pages, LIFO batch:31

[ 0.000000] ACPI: PM-Timer IO Port: 0xb008
[ 0.000000] ACPI: Local APIC address 0xfee00000
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[ 0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)

[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] ACPI: IRQ0 used by override.
[ 0.000000] ACPI: IRQ2 used by override.
[ 0.000000] ACPI: IRQ5 used by override.
[ 0.000000] ACPI: IRQ9 used by override.
[ 0.000000] ACPI: IRQ10 used by override.
[ 0.000000] ACPI: IRQ11 used by override.
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000

[ 0.000000] SMP: Allowing 2 CPUs, 0 hotplug CPUs
[ 0.000000] nr_irqs_gsi: 40
[ 0.000000] PM: Registered nosave memory: 000000000009d000 - 000000000009e000
[ 0.000000] PM: Registered nosave memory: 000000000009e000 - 00000000000a0000
[ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[ 0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[ 0.000000] PM: Registered nosave memory: 00000000dfffe000 - 00000000e0000000
[ 0.000000] PM: Registered nosave memory: 00000000e0000000 - 00000000feffc000
[ 0.000000] PM: Registered nosave memory: 00000000feffc000 - 00000000ff000000
[ 0.000000] PM: Registered nosave memory: 00000000ff000000 - 00000000fffc0000

[ 0.000000] PM: Registered nosave memory: 00000000fffc0000 - 0000000100000000
[ 0.000000] Allocating PCI resources starting at e0000000 (gap: e0000000:1effc000)
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
[ 0.000000] PERCPU: Embedded 28 pages/cpu @ffff88019fc00000 s82944 r8192 d23552 u1048576
[ 0.000000] pcpu-alloc: s82944 r8192 d23552 u1048576 alloc=1*2097152
[ 0.000000] pcpu-alloc: [0] 0 1
[ 0.000000] kvm-clock: cpu 0, msr 1:9fc13701, primary cpu clock
[ 0.000000] KVM setup async PF for cpu 0
[ 0.000000] kvm-stealtime: cpu 0, msr 19fc0dfc0

[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1549446
[ 0.000000] Policy zone: Normal
[ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.2.0-4-amd64 root=/dev/mapper/srvices-root ro single console=tty0 console=ttyS0,115200
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] Checking aperture...
[ 0.000000] No AGP bridge found
[ 0.000000] Calgary: detecting Calgary via BIOS EBDA area
[ 0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
[ 0.000000] Memory: 6116684k/6815744k available (3432k kernel code, 524756k absent, 174304k reserved, 3307k data, 580k init)
[ 0.000000] Hierarchical RCU implementation.

[ 0.000000] RCU dyntick-idle grace-period acceleration is enabled.
[ 0.000000] NR_IRQS:33024 nr_irqs:512 16
[ 0.000000] Console: colour VGA+ 80x25
[ 0.000000] console [tty0] enabled
[ 0.000000] console [ttyS0] enabled
[ 0.000000] hpet clockevent registered
[ 0.000000] Detected 3415.532 MHz processor.
[ 0.000000] Marking TSC unstable due to TSCs unsynchronized
[ 0.008000] Calibrating delay loop (skipped) preset value.. 6831.06 BogoMIPS (lpj=13662128)
[ 0.008000] pid_max: default: 32768 minimum: 301

[ 0.008000] Security Framework initialized
[ 0.008000] AppArmor: AppArmor disabled by boot time parameter
[ 0.008000] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
[ 0.012000] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
[ 0.015249] Mount-cache hash table entries: 256
[ 0.016170] Initializing cgroup subsys cpuacct
[ 0.017500] Initializing cgroup subsys memory
[ 0.018775] Initializing cgroup subsys devices
[ 0.020011] Initializing cgroup subsys freezer
[ 0.021356] Initializing cgroup subsys net_cls

[ 0.022698] Initializing cgroup subsys blkio
[ 0.024016] Initializing cgroup subsys perf_event
[ 0.025383] mce: CPU supports 10 MCE banks
[ 0.029219] ACPI: Core revision 20110623
[ 0.033372] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.035377] CPU0: AMD QEMU Virtual CPU version 1.1.2 stepping 03
[ 0.040002] APIC calibration not consistent with PM-Timer: 116ms instead of 100ms
[ 0.040002] APIC delta adjusted to PM-Timer: 6249547 (7253497)
[ 0.040002] Performance Events: Broken PMU hardware detected, using software events only.
[ 0.040002] NMI watchdog disabled (cpu0): hardware events not enabled

[ 0.040121] Booting Node 0, Processors #1 Ok.
[ 0.041284] smpboot cpu 1: start_ip = 98000
[ 0.053431] NMI watchdog disabled (cpu1): hardware events not enabled
[ 0.053428] KVM setup async PF for cpu 1
[ 0.053428] kvm-stealtime: cpu 1, msr 19fd0dfc0
[ 0.053428] kvm-clock: cpu 1, msr 1:9fd13701, secondary cpu clock
[ 0.060005] Brought up 2 CPUs
[ 0.068011] Total of 2 processors activated (13662.12 BogoMIPS).
[ 0.069677] devtmpfs: initialized
[ 0.074798] print_constraints: dummy:

[ 0.076177] NET: Registered protocol family 16
[ 0.077716] ACPI: bus type pci registered
[ 0.079085] PCI: Using configuration type 1 for base access
[ 0.080210] mtrr: your CPUs had inconsistent variable MTRR settings
[ 0.081863] mtrr: your CPUs had inconsistent MTRRdefType settings
[ 0.084006] mtrr: probably your BIOS does not setup all CPUs.
[ 0.085652] mtrr: corrected configuration.
[ 0.088260] bio: create slab at 0
[ 0.089632] ACPI: Added _OSI(Module Device)
[ 0.092010] ACPI: Added _OSI(Processor Device)

[ 0.093576] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.096015] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.098360] ACPI: EC: Look up EC in DSDT
[ 0.100581] ACPI: Interpreter enabled
[ 0.102354] ACPI: (supports S0 S3 S4 S5)
[ 0.104014] ACPI: Using IOAPIC for interrupt routing
[ 0.113888] ACPI: No dock devices found.
[ 0.115411] HEST: Table not found.
[ 0.116017] PCI: Ignoring host bridge windows from ACPI; if necessary, use "pci=use_crs" and report a bug
[ 0.118877] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])

[ 0.120055] pci_root PNP0A03:00: host bridge window [io 0x0000-0x0cf7] (ignored)
[ 0.120057] pci_root PNP0A03:00: host bridge window [io 0x0d00-0xffff] (ignored)
[ 0.120059] pci_root PNP0A03:00: host bridge window [mem 0x000a0000-0x000bffff] (ignored)
[ 0.120061] pci_root PNP0A03:00: host bridge window [mem 0xe0000000-0xfebfffff] (ignored)
[ 0.120094] pci 0000:00:00.0: [8086:1237] type 0 class 0x000600
[ 0.120330] pci 0000:00:01.0: [8086:7000] type 0 class 0x000601
[ 0.120657] pci 0000:00:01.1: [8086:7010] type 0 class 0x000101
[ 0.122297] pci 0000:00:01.1: reg 20: [io 0xc0a0-0xc0af]
[ 0.124442] pci 0000:00:01.2: [8086:7020] type 0 class 0x000c03
[ 0.126058] pci 0000:00:01.2: reg 20: [io 0xc040-0xc05f]

[ 0.126769] pci 0000:00:01.3: [8086:7113] type 0 class 0x000680
[ 0.127047] pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
[ 0.128016] pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
[ 0.129795] pci 0000:00:02.0: [1013:00b8] type 0 class 0x000300
[ 0.132504] pci 0000:00:02.0: reg 10: [mem 0xfc000000-0xfdffffff pref]
[ 0.136052] pci 0000:00:02.0: reg 14: [mem 0xfebf0000-0xfebf0fff]
[ 0.141073] pci 0000:00:02.0: reg 30: [mem 0xfebd0000-0xfebdffff pref]
[ 0.141386] pci 0000:00:03.0: [1af4:1000] type 0 class 0x000200
[ 0.148589] pci 0000:00:03.0: reg 10: [io 0xc060-0xc07f]
[ 0.149632] pci 0000:00:03.0: reg 14: [mem 0xfebf1000-0xfebf1fff]

[ 0.156542] pci 0000:00:03.0: reg 30: [mem 0xfebe0000-0xfebeffff pref]
[ 0.156956] pci 0000:00:04.0: [1af4:1001] type 0 class 0x000100
[ 0.158142] pci 0000:00:04.0: reg 10: [io 0xc000-0xc03f]
[ 0.159248] pci 0000:00:04.0: reg 14: [mem 0xfebf2000-0xfebf2fff]
[ 0.164887] pci 0000:00:05.0: [1af4:1002] type 0 class 0x0000ff
[ 0.165458] pci 0000:00:05.0: reg 10: [io 0xc080-0xc09f]
[ 0.169256] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
[ 0.169685] pci0000:00: Unable to request _OSC control (_OSC support mask: 0x1e)
[ 0.174231] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
[ 0.176076] ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)

[ 0.180978] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
[ 0.184072] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
[ 0.185535] ACPI: PCI Interrupt Link [LNKS] (IRQs 9) *0
[ 0.187291] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[ 0.188014] vgaarb: loaded
[ 0.188596] vgaarb: bridge control possible 0000:00:02.0
[ 0.189569] PCI: Using ACPI for IRQ routing
[ 0.190306] PCI: pci_cache_line_size set to 64 bytes
[ 0.190443] reserve RAM buffer: 000000000009dc00 - 000000000009ffff
[ 0.190448] reserve RAM buffer: 00000000dfffe000 - 00000000dfffffff

[ 0.190645] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
[ 0.192034] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
[ 0.193147] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
[ 0.212036] Switching to clocksource kvm-clock
[ 0.214927] pnp: PnP ACPI init
[ 0.215592] ACPI: bus type pnp registered
[ 0.216366] pnp 00:00: [bus 00-ff]
[ 0.216369] pnp 00:00: [io 0x0cf8-0x0cff]
[ 0.216371] pnp 00:00: [io 0x0000-0x0cf7 window]
[ 0.216372] pnp 00:00: [io 0x0d00-0xffff window]

[ 0.216374] pnp 00:00: [mem 0x000a0000-0x000bffff window]
[ 0.216376] pnp 00:00: [mem 0xe0000000-0xfebfffff window]
[ 0.216426] pnp 00:00: Plug and Play ACPI device, IDs PNP0a03 (active)
[ 0.216440] pnp 00:01: [io 0x0070-0x0071]
[ 0.216472] pnp 00:01: [irq 8]
[ 0.216473] pnp 00:01: [io 0x0072-0x0077]
[ 0.216492] pnp 00:01: Plug and Play ACPI device, IDs PNP0b00 (active)
[ 0.216526] pnp 00:02: [io 0x0060]
[ 0.216529] pnp 00:02: [io 0x0064]
[ 0.216545] pnp 00:02: [irq 1]

[ 0.216564] pnp 00:02: Plug and Play ACPI device, IDs PNP0303 (active)
[ 0.216594] pnp 00:03: [irq 12]
[ 0.216613] pnp 00:03: Plug and Play ACPI device, IDs PNP0f13 (active)
[ 0.216633] pnp 00:04: [io 0x03f2-0x03f5]
[ 0.216635] pnp 00:04: [io 0x03f7]
[ 0.216650] pnp 00:04: [irq 6]
[ 0.216652] pnp 00:04: [dma 2]
[ 0.216683] pnp 00:04: Plug and Play ACPI device, IDs PNP0700 (active)
[ 0.216750] pnp 00:05: [io 0x03f8-0x03ff]
[ 0.216766] pnp 00:05: [irq 4]

[ 0.216784] pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active)
[ 0.216878] pnp 00:06: [mem 0xfed00000-0xfed003ff]
[ 0.216906] pnp 00:06: Plug and Play ACPI device, IDs PNP0103 (active)
[ 0.217005] pnp: PnP ACPI: found 7 devices
[ 0.217730] ACPI: ACPI bus type pnp unregistered
[ 0.228506] PCI: max bus depth: 0 pci_try_num: 1
[ 0.228515] pci_bus 0000:00: resource 0 [io 0x0000-0xffff]
[ 0.228517] pci_bus 0000:00: resource 1 [mem 0x00000000-0xffffffffff]
[ 0.228705] NET: Registered protocol family 2
[ 0.231459] IP route cache hash table entries: 262144 (order: 9, 2097152 bytes)

[ 0.235819] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
[ 0.250779] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[ 0.253381] TCP: Hash tables configured (established 524288 bind 65536)
[ 0.255193] TCP reno registered
[ 0.256172] UDP hash table entries: 4096 (order: 5, 131072 bytes)
[ 0.257857] UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes)
[ 0.259902] NET: Registered protocol family 1
[ 0.260861] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[ 0.262056] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[ 0.263174] pci 0000:00:01.0: Activating ISA DMA hang workarounds

[ 0.273610] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ 0.275164] pci 0000:00:02.0: Boot video device
[ 0.275194] PCI: CLS 0 bytes, default 64
[ 0.275261] Unpacking initramfs...
[ 0.477514] Freeing initrd memory: 11416k freed
[ 0.482711] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 0.484075] Placing 64MB software IO TLB between ffff8800dbffa000 - ffff8800dfffa000
[ 0.485685] software IO TLB at phys 0xdbffa000 - 0xdfffa000
[ 0.487886] audit: initializing netlink socket (disabled)
[ 0.489023] type=2000 audit(1414446944.488:1): initialized

[ 0.507853] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[ 0.512541] VFS: Disk quotas dquot_6.5.2
[ 0.513981] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.516081] msgmni has been set to 11968
[ 0.518658] alg: No test for stdrng (krng)
[ 0.522396] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
[ 0.525107] io scheduler noop registered
[ 0.526139] io scheduler deadline registered
[ 0.527797] io scheduler cfq registered (default)
[ 0.529671] pci_hotplug: PCI Hot Plug PCI Core version: 0.5

[ 0.547796] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[ 0.549695] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.551777] acpiphp: Slot [3] registered
[ 0.553206] acpiphp: Slot [4] registered
[ 0.554527] acpiphp: Slot [5] registered
[ 0.555406] acpiphp: Slot [6] registered
[ 0.556338] acpiphp: Slot [7] registered
[ 0.557273] acpiphp: Slot [8] registered
[ 0.558229] acpiphp: Slot [9] registered
[ 0.559134] acpiphp: Slot [10] registered

[ 0.560029] acpiphp: Slot [11] registered
[ 0.560925] acpiphp: Slot [12] registered
[ 0.561909] acpiphp: Slot [13] registered
[ 0.562938] acpiphp: Slot [14] registered
[ 0.564249] acpiphp: Slot [15] registered
[ 0.565569] acpiphp: Slot [16] registered
[ 0.566820] acpiphp: Slot [17] registered
[ 0.568191] acpiphp: Slot [18] registered
[ 0.569475] acpiphp: Slot [19] registered
[ 0.570820] acpiphp: Slot [20] registered

[ 0.572144] acpiphp: Slot [21] registered
[ 0.573376] acpiphp: Slot [22] registered
[ 0.574658] acpiphp: Slot [23] registered
[ 0.575989] acpiphp: Slot [24] registered
[ 0.577173] acpiphp: Slot [25] registered
[ 0.578425] acpiphp: Slot [26] registered
[ 0.579739] acpiphp: Slot [27] registered
[ 0.580980] acpiphp: Slot [28] registered
[ 0.582278] acpiphp: Slot [29] registered
[ 0.583573] acpiphp: Slot [30] registered

[ 0.584773] acpiphp: Slot [31] registered
[ 0.586223] ERST: Table is not found!
[ 0.587312] GHES: HEST is not enabled!
[ 0.588856] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[ 0.629003] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 0.658059] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 0.660565] Linux agpgart interface v0.103
[ 0.673119] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
[ 0.677148] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 0.678248] serio: i8042 AUX port at 0x60,0x64 irq 12

[ 0.679836] mousedev: PS/2 mouse device common for all mice
[ 0.681944] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[ 0.684831] rtc_cmos 00:01: RTC can wake from S4
[ 0.686732] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
[ 0.688777] rtc0: alarms up to one day, 114 bytes nvram, hpet irqs
[ 0.690651] cpuidle: using governor ladder
[ 0.691951] cpuidle: using governor menu
[ 0.693583] TCP cubic registered
[ 0.694984] NET: Registered protocol family 10
[ 0.697245] Mobile IPv6

[ 0.698222] NET: Registered protocol family 17
[ 0.709844] Registering the dns_resolver key type
[ 0.714300] PM: Hibernation image not present or could not be loaded.
[ 0.714348] registered taskstats version 1
[ 0.720212] rtc_cmos 00:01: setting system clock to 2014-10-27 21:55:43 UTC (1414446943)
[ 0.724133] Initializing network drop monitor service
[ 0.726898] Freeing unused kernel memory: 580k freed
[ 0.727988] Write protecting the kernel read-only data: 6144k
[ 0.730894] Freeing unused kernel memory: 648k freed
[ 0.734594] Freeing unused kernel memory: 688k freed

[ 0.856102] udevd[51]: starting version 175
[ 0.882389] SCSI subsystem initialized
[ 0.887544] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
[ 0.902570] usbcore: registered new interface driver usbfs
[ 0.902753] virtio-pci 0000:00:03.0: setting latency timer to 64
[ 0.903451] virtio-pci 0000:00:04.0: setting latency timer to 64
[ 0.904180] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ 0.904217] virtio-pci 0000:00:05.0: setting latency timer to 64
[ 0.908345] libata version 3.00 loaded.
[ 0.908908] ata_piix 0000:00:01.1: version 2.13

[ 0.909051] ata_piix 0000:00:01.1: setting latency timer to 64
[ 0.912461] usbcore: registered new interface driver hub
[ 0.916405] scsi0 : ata_piix
[ 0.920989] usbcore: registered new device driver usb
[ 0.923162] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 0.925064] scsi1 : ata_piix
[ 0.926215] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0a0 irq 14
[ 0.928003] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0a8 irq 15
[ 0.930726] uhci_hcd: USB Universal Host Controller Interface driver
[ 0.933881] uhci_hcd 0000:00:01.2: setting latency timer to 64

[ 0.933890] uhci_hcd 0000:00:01.2: UHCI Host Controller
[ 0.935462] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
[ 0.937756] uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c040
[ 0.939465] usb usb1: New USB device found, idVendor=1d6b, idProduct=0001
[ 0.941008] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 0.942864] usb usb1: Product: UHCI Host Controller
[ 0.944217] usb usb1: Manufacturer: Linux 3.2.0-4-amd64 uhci_hcd
[ 0.945600] usb usb1: SerialNumber: 0000:00:01.2
[ 0.947218] hub 1-0:1.0: USB hub found
[ 0.948370] hub 1-0:1.0: 2 ports detected

[ 0.958914] virtio-pci 0000:00:03.0: irq 40 for MSI/MSI-X
[ 0.958937] virtio-pci 0000:00:03.0: irq 41 for MSI/MSI-X
[ 0.958954] virtio-pci 0000:00:03.0: irq 42 for MSI/MSI-X
[ 0.971100] virtio-pci 0000:00:04.0: irq 43 for MSI/MSI-X
[ 0.971120] virtio-pci 0000:00:04.0: irq 44 for MSI/MSI-X
[ 0.971579] FDC 0 is a S82078B
[ 0.973380] vda: vda1 vda2
[ 1.101935] ata2.01: NODEV after polling detection
[ 1.102793] ata2.00: ATAPI: QEMU DVD-ROM, 1.1.2, max UDMA/100
[ 1.109793] ata2.00: configured for MWDMA2

[ 1.115456] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 1.1. PQ: 0 ANSI: 5
[ 1.148087] sr0: scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
[ 1.149401] cdrom: Uniform CD-ROM driver Revision: 3.20
[ 1.151382] sr 1:0:0:0: Attached scsi CD-ROM sr0
[ 1.158778] sr 1:0:0:0: Attached scsi generic sg0 type 5
[ 1.260227] usb 1-1: new full-speed USB device number 2 using uhci_hcd
[ 1.302176] device-mapper: uevent: version 1.0.3
[ 1.307903] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: dm-devel@redhat.com
[ 1.456752] usb 1-1: New USB device found, idVendor=0627, idProduct=0001
[ 1.456755] usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=5

[ 1.456757] usb 1-1: Product: QEMU USB Tablet
[ 1.456759] usb 1-1: Manufacturer: QEMU 1.1.2
[ 1.456760] usb 1-1: SerialNumber: 42
[ 1.485089] input: QEMU 1.1.2 QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/input/input1
[ 1.485328] generic-usb 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Pointer [QEMU 1.1.2 QEMU USB Tablet] on usb-0000:00:01.2-1/input0
[ 1.485378] usbcore: registered new interface driver usbhid
[ 1.485384] usbhid: USB HID core driver
[ 6308.594031] PM: Starting manual resume from disk
[ 6308.607186] PM: Hibernation image partition 253:2 present
[ 6308.607188] PM: Looking for hibernation image.

[ 6308.607520] PM: Image not found (code -22)
[ 6308.607522] PM: Hibernation image not present or could not be loaded.
[ 6308.684653] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)
[ 6310.590145] udevd[372]: starting version 175
[ 6310.869309] WARNING! power/level is deprecated; use power/control instead
[ 6310.917370] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
[ 6310.938347] ACPI: Power Button [PWRF]
[ 6311.083046] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
[ 6311.105034] input: PC Speaker as /devices/platform/pcspkr/input/input3
[ 6311.523064] Error: Driver 'pcspkr' is already registered, aborting...

[ 6311.779350] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
[ 6312.487542] EXT4-fs (dm-1): re-mounted. Opts: (null)
[ 6312.775121] EXT4-fs (dm-1): re-mounted. Opts: errors=remount-ro
[ 6313.573035] loop: module loaded
[ 6314.543459] Adding 8519676k swap on /dev/mapper/srvices-swap_1. Priority:-1 extents:1 across:8519676k
[ 6326.584149] eth0: no IPv6 routers present

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...