Friday, September 30, 2016

linux - How to enable gzip?

I would like to enable GZIP throughout my whole website. What would be the best way to do it? Would it be through .htaccess?
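What I was thinking of trying is a mod_deflate block in .htaccess, something like this (untested sketch; it assumes mod_deflate is loaded and AllowOverride permits it):

<IfModule mod_deflate.c>
    # compress the common text-based content types
    AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript application/javascript application/json
</IfModule>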



Any pointers on how to do this would be greatly appreciated.

centos - Too many TIME_WAIT state connections!

I've been reading about this everywhere all day, and from what I've gathered, TIME_WAIT is a relatively harmless state. It's supposed to be harmless even when there are too many of them.




But if they're jumping to the numbers I've been seeing for the past 24 hours, something is really wrong!



[root@1 ~]# netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n
1 established)
1 Foreign
12 CLOSE_WAIT
15 LISTEN
64 LAST_ACK
201 FIN_WAIT2

334 CLOSING
605 ESTABLISHED
816 SYN_RECV
981 FIN_WAIT1
26830 TIME_WAIT


That number fluctuates from 20,000 to 30,000+ (so far, the maximum I've seen it go is 32,000).
What worries me is that they're all different IP addresses from all sorts of random locations.




Now this is supposed to be (or was supposed to be) a DDoS attack. I know this for a fact, but I won't go into the boring details. It started out as a DDoS and it did impact my server's performance for a couple of minutes. After that, everything was back to normal. My server load is normal. My internet traffic is normal. No server resource is being abused. My sites load fine.



I also have IPTABLES disabled. There's an odd issue with that, too. Every time I enable the firewall/iptables, my server starts experiencing packet loss. Lots of it: about 50%-60% of packets are lost. It happens within an hour or within a few hours of enabling the firewall. As soon as I disable it, ping responses from all the locations I test from start clearing up and become stable again. Very strange.



The TIME_WAIT state connections have been fluctuating at those numbers since yesterday. For 24 hours now, I've had that, and although it hasn't impacted performance in any way, it's disturbing enough.



My current tcp_fin_timeout value is 30 seconds, lowered from the default of 60 seconds. However, that doesn't seem to help at all.
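For reference, these are the sysctls I've been poking at (experiments only, not settings I'm recommending):

sysctl net.ipv4.tcp_fin_timeout          # currently 30 here
sysctl net.ipv4.tcp_tw_reuse             # currently 0
sysctl -w net.ipv4.tcp_tw_reuse=1        # as I understand it, this only affects outgoing connections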



Any ideas, suggestions? Anything at all would be appreciated, really!

Thursday, September 29, 2016

linux - ulimit -n not changing - values in limits.conf have no effect

I am trying to raise the open file descriptor maximum for all users on an Ubuntu machine.



This question is somewhat of a follow up to this question.



open file descriptor limits.conf setting isn't read by ulimit even when pam_limits.so is required




except that I've added the required "root" entries in limits.conf.



Here are the entries



*       soft    nofile  100000
*       hard    nofile  100000
root    soft    nofile  100000
root    hard    nofile  100000



Lines related to pam_limits.so have been un-commented in all relevant files in /etc/pam.d/ and fs.file-max has been set correctly in /etc/sysctl.conf
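For reference, the pam_limits line that has to be active looks like this (on Ubuntu it lives in /etc/pam.d/common-session and common-session-noninteractive):

session    required    pam_limits.so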



However, I still see



abc@machine-2:/etc/pam.d$ ulimit -n
1024


after reboot.




What could be the problem?



My default shell is /bin/sh, and I can't use chsh to change it since my user on the machine is authenticated via a distributed authentication scheme.

yum - Upgrading PHP - Missing Dependency php = 5.1.6 is needed by package php-eaccelerator

I'm trying to upgrade PHP from 5.1.6 to 5.2.1. When invoking yum update php I get this message:





--> Finished Dependency Resolution
php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 from installed has depsolving problems
  --> Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
Error: Missing Dependency: php = 5.1.6 is needed by package php-eaccelerator-5.1.6_0.9.5.2-4.el5.rf.x86_64 (installed)
 You could try using --skip-broken to work around the problem
 You could try running: package-cleanup --problems
                        package-cleanup --dupes
                        rpm -Va --nofiles --nodigest



The program package-cleanup is found in the yum-utils package.




The message gives instructions, but I really don't know how to proceed. This is a production server and it cannot be down for more than a minute.



Thanks for any ideas.




I only needed PHP 5.2.1 for the new json_decode() function. I added this and the upgrade was no longer necessary: http://snipplr.com/view/4964/emulate-php-5-for-backwards-compatibility/

ssl certificate for www.example.com and example.com

I used the make-dummy-cert script that comes with Apache 2.2 and mod_ssl to make a self-signed certificate. I tried making it for www.example.com, example.com, or *.example.com, but none of them would work for both www.example.com and example.com. The browser would say "The certificate is only valid for example.com" (or www.example.com or *.example.com, respectively).



How do I make a self-signed cert that would work for both cases?
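For reference, a minimal sketch of generating such a certificate with a subjectAltName covering both names (file names, key size and validity period are placeholders):

# san.cnf
[req]
distinguished_name = req_dn
[req_dn]
[v3_req]
subjectAltName = DNS:example.com, DNS:www.example.com

# one-shot key + self-signed certificate using the extension section above
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout example.com.key -out example.com.crt \
    -subj "/CN=example.com" \
    -config san.cnf -extensions v3_req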

active directory - Exchange setup on existing domain

I am working on a project where a new MS Exchange 2016 server will be installed on a new machine for a company. The company already has a working AD with 700 users and has decided to buy and set up an Exchange server. Because I do not want to mess with the existing servers, I prefer to set up a new machine, join it to the AD, deploy Exchange, etc.
I have read a few tutorials on this and tested it in a virtual lab, but I would like to know whether this setup is going to harm or modify anything on the working AD. Has anyone deployed an Exchange server by joining the machine to an existing AD? Additionally, is there a better design concept than mine for adding Exchange to an existing AD (e.g. creating a new domain only for mail purposes)?



Thank you very much for your time.

Wednesday, September 28, 2016

ZFS: RAIDZ versus stripe with ditto blocks

I'm going to build a ZFS file server on FreeBSD. I learned recently that I can't expand a RAIDZ vdev once it's part of the pool. That's a problem since I'm a home user and will probably add one disk a year, tops.



But what if I set copies=3 on my entire pool and just throw individual drives into the pool as separate vdevs? I've read somewhere that the copies will be distributed across drives if possible. Is there a guarantee there? I really just want protection from bit rot and drive failure on the cheap. Speed is not an issue since it'll go over a 1Gb network and at MOST stream 720p podcasts.
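In command terms, what I have in mind is roughly this (device names made up):

zpool create tank da1 da2    # plain striped pool, no vdev-level redundancy
zfs set copies=3 tank        # keep three ditto copies of every block
zpool add tank da3           # later: grow the pool one disk at a time
# my worry: if a whole top-level vdev dies, doesn't the pool fault regardless of copies?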



Would my data be guaranteed safe from a single drive failure? Are there things I'm not considering? Any and all input is appreciated.

ubuntu - How does “sendmail” send mails to any domain name?

I noticed that one of our servers (running Ubuntu) can send mail to any domain (yahoo.com, gmail.com) using the simple "sendmail" command.



But I cannot figure out how to configure a similar setup on a new server. I cannot see any files such as "/etc/mail/sendmail.cf" on the first server.




How does this work? Does it use some other SMTP server to do the actual mail delivery? Where can I find these settings?
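For reference, this is roughly how I've been trying to work out which MTA actually provides the sendmail binary on the working server (commands assume a stock Ubuntu layout):

ls -l /usr/sbin/sendmail                      # usually a symlink managed by whichever MTA is installed
dpkg -S "$(readlink -f /usr/sbin/sendmail)"   # which package owns the real binary (postfix, exim4, ...)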

domain name system - “wildcard” DNS vs Separate Websites

We would like to offer a free trial to customers of our SaaS product. Customers will enter their desired sub-domain name while signing up and get a site like clientname.ourmainsite.com for the free trial. We don't want to manually set up a sub-domain for each customer. After some research I learned this can be achieved by setting up a "wildcard" DNS record for ourmainsite.com, i.e. adding a *.ourmainsite.com entry in the DNS of ourmainsite.com pointing to our server.
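Concretely, the record as I understand it would look something like this (BIND-style notation, placeholder IP):

*.ourmainsite.com.    3600    IN    A    203.0.113.10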




But this seems to point all sub-domains to a single site, which is the opposite of our current setup, where each client domain has a separate website set up in IIS (client1.com, client2.com, etc.). How does the option of using a single website for multiple domains compare to a separate site for each domain? What are the pros and cons of this approach, especially the cons? Which option is more secure? Which option uses fewer server resources, such as memory?



My main concern is that in the current setup we can have a separate app pool for each site. If we go with the "wildcard" DNS approach, where all domains point to a single site, what happens if one site is attacked or gets huge traffic that slows down the other sites on the server? How can this be controlled or monitored when we can no longer set a separate app pool per domain? Are there any alternatives? I have read that many SaaS companies follow the "wildcard" DNS approach, so how do they handle heavy load on, or attacks against, one specific site? Or do they use the "wildcard" DNS approach only for trial sites, to set up a virtual sub-domain like client1.ourmainsite.com, and after the trial create a separate site in IIS for the customer's own domain like client1.com?

fastcgi - php as fast_cgi shuts down without logging



I'm running PHP 5 behind an nginx proxy as FastCGI. After some usage (always while being used, never while idle), the FastCGI server just shuts down (it no longer shows up in ps -A).



In php.ini, Log_Errors is set to On and Error_Log is set to /var/log/php.log; however, if I view php.log, only the startup errors are displayed, nothing that would signify PHP shutting down.


Answer



The best way to find out what the problem is when you don't get any output is to run php under strace. Start up php and get the list of pids from ps. Then run:




# strace -f -o /tmp/php.strace.log -p pid1 -p pid2 -p pid3 ....


Once PHP dies, look in the log to see what happens.



Having said that, in your particular instance, I suspect you have your environment variables wrong. If you have a single PHP process, this would agree with my hunch. PHP has an option to terminate after a certain number of requests. This is a sensible thing to do to prevent memory leaks and other such problems. There is also an option to specify the number of processes that run at the same time. If there is only one process, it will die after a number of requests. The solution is to run more than one process. The options that I use are:




export PHP_FCGI_CHILDREN=4
export PHP_FCGI_MAX_REQUESTS=1000



If you put these lines in the script you use to start your php server, you should find your PHP website remains running. :)
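For what it's worth, a minimal sketch of such a start script (the php-cgi path and bind address are assumptions for your setup):

#!/bin/sh
# spawn a small pool of FastCGI children and recycle each one after 1000 requests
export PHP_FCGI_CHILDREN=4
export PHP_FCGI_MAX_REQUESTS=1000
exec /usr/bin/php-cgi -b 127.0.0.1:9000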


centos - Tomcat with virtual hosts - 404

I have a CentOS 5.2 server set up with Apache 2.2.3 and Tomcat 5.5.27. The server hosts multiple virtual hosts connected to multiple Tomcats. For instance we have one tomcat for development and testing and one tomcat for production. project.demo.us.com points to dev tomcat and project.us.com points to production tomcat.



Here's the virtual host's configuration:




<VirtualHost *:80>
    ServerName project.demo.us.com

    CustomLog logs/project.demo.us.com/access_log combined env=!VLOG
    ErrorLog logs/project.demo.us.com/error_log
    DocumentRoot /var/www/vhosts/project.demo.us.com

    <Directory /var/www/vhosts/project.demo.us.com>
        Allow from all
        AllowOverride All
        Options -Indexes FollowSymLinks
    </Directory>

    ##########

    ##########
    ##########

    JkMount /project/* online
</VirtualHost>



The JkMount line says that we use the online worker, and our workers.properties contains this:



worker.list=..., online, ...


worker.online.port=7703
worker.online.host=localhost
worker.online.type=ajp13
worker.online.lbfactor=1


And tomcat's conf/server.xml contains:



    
<Connector port="7703" enableLookups="false" redirectPort="8443" protocol="AJP/1.3"
           URIEncoding="UTF-8" maxThreads="80"
           minSpareThreads="10" maxSpareThreads="15"/>


I'm not sure what redirectPort is but I tried to telnet to that port and there's no one answering, so it shouldn't matter?



Tomcat's webapps directory contains project.war and the server automatically deployed it under project directory which contains index.jsp and hello.html. The latter is for static debugging purposes.



Now when I try to access http://project.demo.us.com/project/index.jsp, I get Tomcat's HTTP Status 404 - The requested resource () is not available. The same thing happens to hello.html so it's not working with static content either.




Apache's access_log contains:



88.112.152.31 - - [10/Aug/2009:12:15:14 +0300] "GET /demo/index.jsp HTTP/1.1" 404 952 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2"



I couldn't find any mention of the request in Tomcat's logs.



If I shut down this specific Tomcat, I no longer get Tomcat's 404 but Apache's 503 Service Temporarily Unavailable, so I should be configuring the correct Tomcat.



Is there something obvious that I'm missing? Is there any place where I could find out what path the Tomcat is using to look for requested files?

Tuesday, September 27, 2016

vmware esxi - ILO Update : Which version should we select?

We have an HP ProLiant DL160 Gen 8 server with VMware ESXi 6.0.0 currently installed on it.



We started to upgrade it to VMware ESXi 6.5, but the upgrade stopped, stating that we need to update the HP iLO before going forward.



Our current HP ILO is 600.9.0.2.8-1OEM.600.0.0.2159203



So here is my question: which iLO version should I select?






Thank you very much,

Ubuntu Server Hotswap RAID 1: Hardware or Software?



I'm building new servers for a project I'm working on. They will all run Ubuntu Server x64 (10.04 soon) and require a RAID 1 hotswap configuration (just two drives) to minimize downtime.



I'm not worried about Raid performance. The server hardware will have plenty of CPU power, and I'm only doing a RAID 1. My only requirements are:




  1. Everything, including the OS, must be mirrored.

  2. There must be no down-time when a drive fails. I need to be able to swap out the failed drive with another and have the RAID rebuild itself automatically (or maybe by running a simple script).




I'm wondering if the built-in Ubuntu Software RAID can handle this, particularly the hotswap part. 10.04 looks promising.



I'm considering buying the 3Ware 9650SE-2LP-SGL RAID controller, but with the number of servers we're purchasing, that would increase the total price quite a bit.



Any advice at all would be appreciated. Thank you.


Answer



I have hot-swapped drives using the software RAID built into the Linux kernel on many occasions. You may need to run a command to add the new device. I believe it is possible to make this automatic, but in the places where I use it, manually running the command to add the new drive has never been a problem.
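For reference, the manual sequence looks roughly like this (device and array names are examples, adapt them to your setup):

mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the dying member failed, if the kernel hasn't already
mdadm --manage /dev/md0 --remove /dev/sdb1   # remove it from the array
# physically swap the disk, partition it to match the survivor, then:
mdadm --manage /dev/md0 --add /dev/sdb1      # add the new member; the rebuild starts automatically
cat /proc/mdstat                             # watch the resync progress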




I am not entirely certain that the computer will survive with zero downtime. That may depend on your hard drive controller and how it responds to a drive failure.


Monday, September 26, 2016

debian - My VPS apache goes down often



I have a Debian VPS with 2 GB of RAM, which should be plenty for my workload.
Recently, however, my website has been going down often and I still can't pinpoint the exact reason.



At first I had 512MB of RAM, which was really too small for my website; as I saw in the logs, the site used at least 450MB. I upgraded to 2GB hoping that would solve everything, but it hasn't changed anything.



Then I thought it might be that my website code was running a huge process, because it actually was. So I rebuilt it as a simpler system to reduce that heavy processing. Still, the same problem persisted.



Now I'm thinking it might be related to the number of visitors. But there aren't even 30 active visitors, often fewer, and 2 GB of RAM should be enough to handle them all. Looking at the RAM usage when the site goes down, it's about 400-500MB of the 2GB, so to me that confirms it's not a RAM problem.




So I'm really confused now. What else could it be?



The Apache error logs contain only notices from my PHP files and unimportant stuff that has nothing to do with taking Apache down, but I'm sure it's only an Apache problem, because SSH connects and works perfectly while the website is down.



What are the likely causes, or anything else I should check? Could it be an Apache limit on concurrent visitors?


Answer



While I have little information about what happens in regard to TCP handshaking or other network issues, it appears (from your comment) that you have more than 10 users being processed concurrently, and the MaxClients directive in apache.conf is too low to handle your traffic. I would increase that number. Since I do not know what kind of traffic your server receives, I'd set the value to at least 50, and increase it if load testing reveals problems. You can run a load test with a free service such as Load Impact. [No affiliation]



From http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients:





The MaxClients directive sets the limit on the number of simultaneous requests that will be served. Any connection attempts over the MaxClients limit will normally be queued [emphasis mine], up to a number based on the ListenBacklog directive. Once a child process is freed at the end of a different request, the connection will then be serviced.




Your connections appear to be 'hanging' because they are being queued up for processing, although I don't doubt your server can handle a fair amount concurrently.
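For reference, a minimal sketch of the relevant prefork block (treat 50 as a starting point to tune from load testing, not a recommendation):

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           50
    MaxRequestsPerChild 500
</IfModule>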


docker - CoreOS high RAM usage for tmpfs

I'm using the latest CoreOS AMI (ami-0fc25a0b6bd986d03 details) on a small t2.nano instance.



This instance only has 500MB of memory. Unfortunately, CoreOS immediately consumes ~240MB for a tmpfs, which it then mounts at /tmp as shown below. This seems to completely eat my shared memory and I cannot launch containers. Is there any way to reduce the size of this? Or perhaps some way to mount /tmp onto the root filesystem?
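One thing I've been looking at, but have not verified, is a systemd drop-in to shrink the tmp.mount unit; a sketch (the path and size are guesses, and as I understand it a tmpfs only consumes RAM for data actually written to it, so this may not even be the real culprit):

# /etc/systemd/system/tmp.mount.d/size.conf
[Mount]
Options=mode=1777,strictatime,size=64M

followed by systemctl daemon-reload and systemctl restart tmp.mount (or a reboot).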



I'm considering abandoning CoreOS solely because I cannot get it to work with small instance sizes, which is a shame since I chose it specifically because it was supposed to be a tiny OS that gets out of the way and lets me run containers...




$ free -h
               total        used        free      shared  buff/cache   available
Mem:           479Mi       232Mi       7.0Mi       199Mi       238Mi        34Mi
Swap:             0B          0B          0B

$ df -h
Filesystem       Size  Used Avail Use% Mounted on
devtmpfs         219M     0  219M   0% /dev
tmpfs            240M     0  240M   0% /dev/shm
tmpfs            240M  488K  240M   1% /run
tmpfs            240M     0  240M   0% /sys/fs/cgroup
/dev/xvda9        14G  2.8G  9.9G  22% /
/dev/mapper/usr  985M  791M  143M  85% /usr
none             240M  200M   41M  84% /run/torcx/unpack
tmpfs            240M     0  240M   0% /media
tmpfs            240M     0  240M   0% /tmp
/dev/xvda6       108M  112K   99M   1% /usr/share/oem
/dev/xvda1       127M   53M   74M  42% /boot
tmpfs             48M     0   48M   0% /run/user/500



Edit: Perhaps relevant: RancherOS apparently requires a minimum of 1GB to launch, although their GitHub discusses values from 512MB up to 2GB. It's unclear to me why these "tiny OSes" have such relatively high RAM needs. For context, the Debian minimum is 256MB for a headless install.

Non-ECC memory with ZFS: a stupid idea?



I have a new server and am planning to upgrade the paltry 2 GB of memory to the maximum of 16 GB. (Theoretically 8 GB is the limit, but empirically 16 GB has been shown to work.) Some guides advise that ECC memory is not that important, but I'm not so sure I believe this.




I've installed FreeNAS and am planning to add ZFS volumes as soon as my new hard drives arrive. Would it be stupid to skimp and get non-ECC memory for a ZFS-based NAS? If it's necessary, then I'll bite the bullet, but if it's just paranoia, then I'll probably skip it.



Is there any reason ZFS or FreeNAS specifically would require ECC memory, or suffer especially when running on a system using non-ECC memory?


Answer



I would argue that running FreeNAS with non-ECC RAM is a stupid idea, as is running it as a virtualized guest, when the data stored on the ZFS volume is important.



Joshua Paetzel, one of the FreeNAS developers, has a good write-up on this topic: http://www.freenas.org/whats-new/2015/02/a-complete-guide-to-freenas-hardware-design-part-i-purpose-and-best-practices.html.



TL;DR





ZFS does something no other filesystem you’ll have available to you does: it checksums your data, and it checksums the metadata used by ZFS, and it checksums the checksums. If your data is corrupted in memory before it is written, ZFS will happily write (and checksum) the corrupted data. Additionally, ZFS has no pre-mount consistency checker or tool that can repair filesystem damage. [...] If a non-ECC memory module goes haywire, it can cause irreparable damage to your ZFS pool that can cause complete loss of the storage.



Sunday, September 25, 2016

domain name system - SPF Setup - Sending from VPS and Google Apps



In follow-up to my question here, how on earth do I set up an SPF record?!



I understand that I have to add a TXT record to my DNS entries but what to put in that TXT entry is what's confusing me...



I have a Windows 2008 VPS with two IPs - x.x.x.10 & x.x.x.20




I have two RDNS records for x.x.x.10 => bob.charlino.com & x.x.x.20 => simon.charlino.com



I have a web application setup on the server vallenous.com (note: different from the rDNS entries)



vallenous.com is set up to use Google Apps for email, BUT I do wish to send some emails from the web application itself through the local SMTP server (IIS6 SMTP) on my VPS.



In response to the answers to my previous question, I've set the FQDN in my SMTP virtual server to bob.charlino.com, because when you send an email through the vallenous.com web application it seems to come from x.x.x.10.



Was this the correct thing to do? I noticed when I did this google mail wasn't giving it a soft fail anymore...




Secondly, how on earth do I set up the SPF record? I've done some googling but it all just confuses me. I need to set it up for Google Apps (which is outlined here), but I also need to set it up so I can send from my VPS.



And yes, I've seen a couple of SPF generator thingies, but they aren't really clearing up any confusion... just adding to it really.


Answer



I'd suggest that you need the following in your TXT record:



"v=spf1 mx ip4:x.x.x.10 include:aspmx.googlemail.com ~all"



This was generated by the first SPF generator you linked to.




It states that your domain is using SPF v1.



The mx keyword states that any server listed in a DNS MX record as a mail server is allowed to send mail from this domain.



The ip4: bit states that the given IPv4 address is allowed to send mail from this domain.



The include: part states that any server allowed to send mail for the aspmx.googlemail.com domain is also allowed to send for your domain - this bit lets Google Apps email work. If Google adds or changes which servers they use, they will change the SPF record for their aspmx.googlemail.com domain (and hence your domain will carry on working without needing to be changed any time Google makes a change...).



The ~all part states that the previous parts should be all the allowed mail servers. Any other server sending email claiming to be from this domain is probably in error - accept the email but you might want to check it more thoroughly for spam,etc.




If you use -all instead of ~all, it states that any other server sending email claiming to be from this domain is definitely in error - don't accept the email (or accept and delete it). Google recommends you don't use this setting, as it can be a bit over-zealous and lead to mail being lost.
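Once the record is published, you can sanity-check it with something like:

dig +short TXT vallenous.com

which should echo back the v=spf1 string above.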


linux - What group should public_html belong to?



I have an Apache server running on Linux - CentOS.




In order to be able to edit my PHP files on Windows, I linked the server to my Dropbox account and created a symlink from the Dropbox folder, which is located under /root/Dropbox, to my public_html folder.
Then, when I tried to edit a file in public_html through Windows, its ownership changed to root and I got the famous 500 error. I guessed it had to do with the symlink's permissions, so I changed the symlink's ownership to my user account, but that didn't change anything.
What happened next overwhelmed me: suddenly, when I try to access any page on my site, I get:



Forbidden You don't have permission to access /My/site/name/page.php on this server.



Digging around, I found out that public_html's owner and group are root.
ps aux | grep apache showed:




root      4533  0.0  0.0  10892  1604 ?        S    Jul31   0:00     /usr/local/apache/bin/httpd -k start -DSSL
nobody 4534 0.0 0.1 10892 2956 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL
nobody 4535 0.0 0.1 10892 2952 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL
nobody 4536 0.0 0.1 10892 2956 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL
nobody 4537 0.0 0.1 10892 2956 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL
nobody 4538 0.0 0.1 10892 2956 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL
nobody 4551 0.0 0.1 10892 2208 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL
nobody 4556 0.0 0.1 10892 2200 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL
nobody 4565 0.0 0.1 10892 2200 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL
nobody 4572 0.0 0.1 10892 2200 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL



Changing the group of public_html to nobody did the trick and made the error go away. But I don't know if it should be like this; I mean, I don't know what group it had before.



So I have two questions:



1. Given the Apache user shown above, what user and group should public_html belong to?



2. If the answer to 1 is root, can you think of any reason that caused this error to suddenly happen, and what should be done in order to solve it?




It's worth mentioning that I started by posting the question here but didn't get any answer, so I'm trying here. Hope that's OK.


Answer



You could run Dropbox as a non-root user, have public_html owned by that user and the apache group, and permissioned rwxrwx--- (i.e. 770) so that both your user and Apache can read and write.
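A sketch of the commands involved (replace the path and user with your own; group nobody matches the Apache processes in your ps output):

chown -R youruser:nobody /path/to/public_html
chmod -R 770 /path/to/public_html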



Also, as a general principle of Linux/Unix administration, you should never run applications as root unless you absolutely have to.



To explain why Apache appears to use root: applications are only allowed to listen on privileged ports (those below 1024) if they are started with root privileges. As HTTP/HTTPS is served on ports 80/443 (respectively), Apache is started as root, and then forks processes under its own user (by default called 'apache' on Red Hat based distributions - of which CentOS is one - or 'www-data' on Debian-based distributions - e.g. Ubuntu). The unprivileged user can be configured in your Apache configuration, though for 95% of applications the default is fine.


Installing mod_pagespeed (Apache module) on CentOS



I have a CentOS (5.7 Final) system on which I already have Apache (2.2.3) installed.




I have installed mod_pagespeed by following the instructions on: http://code.google.com/speed/page-speed/download.html and got the following while installing:



# rpm -U mod-pagespeed-*.rpm
warning: mod-pagespeed-beta_current_x86_64.rpm: Header V4 DSA signature: NOKEY, key ID 7fac5991
[ OK ] atd: [ OK ]


It does appear to be installed properly:




# apachectl -t -D DUMP_MODULES
Loaded Modules:
...
pagespeed_module (shared)


And I've made the following changes in /etc/httpd/conf.d/pagespeed.conf



Added:




ModPagespeedEnableFilters collapse_whitespace,elide_attributes
ModPagespeedEnableFilters combine_css,rewrite_css,move_css_to_head,inline_css
ModPagespeedEnableFilters rewrite_javascript,inline_javascript
ModPagespeedEnableFilters rewrite_images,insert_img_dimensions
ModPagespeedEnableFilters extend_cache
ModPagespeedEnableFilters remove_quotes,remove_comments

ModPagespeedEnableFilters add_instrumentation



Commented out the following lines in mod_pagespeed_statistics




**# Order allow,deny**
# You may insert other "Allow from" lines to add hosts you want to
# allow to look at generated statistics. Another possibility is
# to comment out the "Order" and "Allow" options from the config
# file, to allow any client that can reach your server to examine
# statistics. This might be appropriate in an experimental setup or
# if the Apache server is protected by a reverse proxy that will

# filter URLs in some fashion.
**# Allow from localhost**
**# Allow from 127.0.0.1**
SetHandler mod_pagespeed_statistics



As a separate note, I'm trying to run the prescribed system tests as specified on google's site, but it gives the following error. I'm averse to updating wget on my server, as I'm sure there's no need for it for the actual module to function correctly.



./system_test.sh www.domain.com

You have the wrong version of wget. 1.12 is required.

Answer



I was running into an issue in my installation of mod_pagespeed on a CentOS system wherein it just refused to work after installation.



It turns out there was a permission/ownership access issue for particular folder(s).



In /var/www/, there existed:



drwxr-xr-x  4 root      root   4096 Dec  8 12:02 mod_pagespeed
drwxr-xr-x  2 root      root   4096 Dec  8 12:03 mod_pagespeedcache


I changed the permissions to:



drwxr-xr-x  4 apache    apache 4096 Dec  8 12:02 mod_pagespeed
drwxr-xr-x  4 apache    apache 4096 Dec 10 13:10 mod_pagespeedcache


The logs were showing:




...
[Sat Dec 10 13:08:43 2011] [error] [mod_pagespeed 0.10.19.4-1209 @30739] /var/www/mod_pagespeedcache/XAM3DOzfwmGm-DkPVUC7.outputlock:0: creating dir (code=13 Permission denied)
...


Worked fine after that.
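For reference, the ownership change itself amounts to roughly:

chown -R apache:apache /var/www/mod_pagespeed /var/www/mod_pagespeedcache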


domain name system - How to properly configure BIND forward zone for an internal DNS server?

I have:




  1. internal DNS server ns1.internal with IP 192.168.0.4.

  2. external DNS server with an external TLD mydns.example.com and internal IP 192.168.0.5. It's accessible both from the Internet (via a static NAT rule) and from the local network.



I'm trying to set up my external DNS server to forward the zone subzone.mydns.example.com to the internal DNS server. The internal DNS server is authoritative for this zone.




Important: I can't modify the internal DNS server configuration. I can read it, however, if that's needed to diagnose the issue.



File /etc/named.conf on the external DNS server:



options {
directory "/var/named";
version "get lost";

recursion yes;
allow-transfer {"none";};

allow-query { any; };
allow-recursion { any; };
};

logging{
channel example_log{
file "/var/log/named/named.log" versions 3 size 2m;
severity info;
print-severity yes;
print-time yes;

print-category yes;
};
category default{
example_log;
};
};

// Zones:

zone "mydns.example.com" {

type master;
file "mydns.example.com.zone";
allow-update{none;};
};

zone "subzone.mydns.example.com" {
type forward;
forwarders { 192.168.0.4; };
};



File /var/named/mydns.example.com.zone on the external DNS server:



$TTL 1
$ORIGIN mydns.example.com.
@ IN SOA mydns.example.com. root.mydns.example.com. (
2003080800 ; se = serial number
60 ; ref = refresh
60 ; ret = update retry
60 ; ex = expiry

60 ; min = minimum
)

@ IN NS mydns.example.com.


So, now I try to resolve some DNS records.
The external server zone seems to work.



workstation$ dig mydns.example.com NS +tcp +short

mydns.example.com.


But the forwarded zone does not work:



workstation$ dig subzone.mydns.example.com NS +tcp

; <<>> DiG 9.8.1-P1 <<>> subzone.mydns.example.com NS +tcp
;; global options: +cmd
;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 36887
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;subzone.mydns.example.com. IN NS

;; AUTHORITY SECTION:
mydns.example.com. 1 IN SOA mydns.example.com. root.mydns.example.com. 2003080800 60 60 60 60

;; Query time: 3 msec

;; SERVER: 91.144.182.3#53(91.144.182.3)
;; WHEN: Thu Jul 19 17:27:54 2012
;; MSG SIZE rcvd: 108


The results are identical when these commands are executed on remote Internet host and on an internal host.



If I try to resolve subzone.mydns.example.com. from external name server AND specify the internal server explicitly, I get:



mydns$ dig @192.168.0.4 subzone.mydns.example.com NS


; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> @192.168.0.4 subzone.mydns.example.com NS
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 87
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 3

;; QUESTION SECTION:
;subzone.mydns.example.com. IN NS


;; ANSWER SECTION:
subzone.mydns.example.com. 3600 IN NS ns1.internal.

;; ADDITIONAL SECTION:
ns1.internal. 3600 IN A 192.168.0.4

;; Query time: 613 msec
;; SERVER: 192.168.0.4#53(192.168.0.4)
;; WHEN: Thu Jul 19 18:20:55 2012

;; MSG SIZE rcvd: 163


What's wrong? How do I configure the forwarding DNS zone to work as I expect?
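One thing I've been wondering about, but have not tried yet, is whether the parent zone needs an explicit delegation so that the authoritative NXDOMAIN doesn't shadow the forward zone. A sketch of what that might look like (the glue name ns.subzone is made up):

; possible addition to /var/named/mydns.example.com.zone
subzone        IN NS    ns.subzone
ns.subzone     IN A     192.168.0.4

; and perhaps "forward only;" in the subzone.mydns.example.com forward zone stanza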

Saturday, September 24, 2016

Apache 2.2 / MySQL Overload of Ubuntu Server

I've run into a problem and I just can't seem to figure out how to solve it. I have a regular Ubuntu 12.04 Server with Apache 2.2 running a website. Every now and then the server overloads and becomes unresponsive; simple commands take ages to execute until the server is restarted or Apache/MySQL is restarted (and the website itself shuts down completely).



Looking in the error log I see a simple



[error] server reached MaxClients setting, consider raising the MaxClients setting



Followed by a bunch of mysqli errors about being unable to connect.



One would assume that I simply need to increase MaxClients, but I've already done this a couple of times and I worry that I will overload the server myself by setting it too high. Below is how the prefork MPM is currently configured:




StartServers 20
MinSpareServers 10
MaxSpareServers 20
MaxClients 150
MaxRequestsPerChild 90




On a normal day we have roughly 1700 Users/Visitors (During 24 hours).



Server details:




  • Memory: 3GB

  • CPU: 1 - 3300MHz


  • OS: Ubuntu 12.04

  • Apache: 2.2 with php 5.3.10 & Mysql 5.5.41



A couple of pictures I got with glances; here you can see Apache swallowing quite a lot of CPU before the processes drop down again (this is with only a couple of users on the web server, 5 to 20):



high load 1
high load 2



How do I avoid my server crashing / overloading? ( I am open to any solution, even changing to nginx or something else if that could handle the load better).




Also, I'm not concerned about RAM usage / consumption since I can add a lot more RAM, it's the CPU I'm concerned about.

linux - Creating multiple SFTP users for one account



I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files.



I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/.



Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP.




My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically-inclined and would prefer to stick with passwords.



SSH is not an issue, only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)?



My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH.



Currently our SSH configuration has this appended to it in order to jail the users in their own directories:




# all customers have group 'customer'

Match group customer
ChrootDirectory /home/%u # jail in home directories
AllowTcpForwarding no
X11Forwarding no
ForceCommand internal-sftp # force SFTP
PasswordAuthentication yes # for non-customer accounts we use keys instead


Our servers are running Ubuntu 12.04 LTS.


Answer




Our solution is to create a main user account for each customer, such as flowershop. Each customer can create an arbitrary number of side accounts with their own passwords, such as flowershop_developer, flowershop_tester, flowershop_dba, etc. This allows them to hand out accounts without sharing their main account password, which is better for a whole bunch of reasons (for example, if they need to remove their DBA's account, they can easily do that without changing their own passwords).



Each one of these accounts is in the flowershop group, with a home folder of /home/flowershop/. SSH uses this as the chroot directory (/home/%u, as shown in the configuration in the question).
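Creating one of those side accounts is an ordinary useradd; a sketch (flags assume Ubuntu's shadow-utils, with the shell set to nologin since only SFTP is allowed):

useradd -g flowershop -d /home/flowershop -M -s /usr/sbin/nologin flowershop_developer
passwd flowershop_developer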



We then use ACLs to enable every user in group flowershop to modify all files. When a new customer account is created, we set the ACLs as follows:



setfacl -Rm \
d:group:admin:rwx,d:user:www-data:r-x,d:user:$USERNAME:rwx,d:group:$USERNAME:rwx,\
group:admin:rwx, user:www-data:r-x, user:$USERNAME:rwx, group:$USERNAME:rwx \
/home/$USERNAME/



This does the following:




  • Gives group admin (for us, the hosting providers) rwx

  • Gives user www-data (Apache) r-x to the files*

  • Gives user $USERNAME rwx to the files

  • Gives group $USERNAME rwx to the files




This setup appears to be working well for us, but we are open to any suggestions for doing it better.



* we use suexec for CGI/PHP running as the customer account


spf - Sent mails go to spam on Gmail but not on Yahoo

Why is a message like the following going to Gmail's SPAM folder?
I noticed that the same message goes correctly to Yahoo!'s Inbox, but looking at the header of the received Yahoo message I see the following part: domainkeys=neutral (no sig); from=mydomain.com; dkim=permerror (no key).




The following is the received gmail message.



Delivered-To: recipient@gmail.com
Received: by 10.58.136.2 with SMTP id pw2csp417955veb;
Mon, 12 Nov 2012 08:56:07 -0800 (PST)
Received: by 10.180.8.134 with SMTP id r6mr11575409wia.19.1352739366833;
Mon, 12 Nov 2012 08:56:06 -0800 (PST)
Return-Path:
Received: from reco-server.mydomain.com ([1.2.3.4])

by mx.google.com with ESMTP id i6si8062286wix.3.2012.11.12.08.56.06;
Mon, 12 Nov 2012 08:56:06 -0800 (PST)
Received-SPF: pass (google.com: domain of www-data@mydomain.com designates 1.2.3.4 as permitted sender) client-ip=1.2.3.4;
Authentication-Results: mx.google.com; spf=pass (google.com: domain of www-data@mydomain.com designates 1.2.3.4 as permitted sender) smtp.mail=www-data@mydomain.com
Received: by reco-server.mydomain.com (Postfix, from userid 33)
id 35A9FC35AC; Mon, 12 Nov 2012 18:53:38 +0200 (EET)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=mydomain.com;
s=default.private; t=1352739218;
bh=xB6AkthEgELFO1uSJiG0uEqE+qEnyoQ/RQnK9N0kGcY=;
h=To:Subject:Date:From:Message-ID:MIME-Version:

Content-Transfer-Encoding:Content-Type;
b=HbafDiTnuJMT837tf/PWk0LZPMBStf17PJYM94StSg5odjEIPzuzf5hPxJc2DfQMV
+e9MdhgDoKJ09YJJV0nvH07Y+20XB6uPOxk/sJry3ItYCFkqzbFFnK7YkAHRwIuSiy
gueYz6tpfZekxpWTWysic465o4mRLxTG28EdnF2U=
To: recipient@gmail.com
Subject: =?UTF-8?B?zpXOu867zrfOvc65zrrOrA==?=
X-PHP-Originating-Script: 0:class.phpmailer.php
Date: Mon, 12 Nov 2012 18:53:38 +0200
From: noreply@mydomain.com
Message-ID: <8002c9e1cccf93e64ea3b98588ae7971@localhost>

X-Priority: 3
X-Mailer: PHPMailer [version 1.73]
X-Mailer: phplist v2.10.19
X-MessageID: 13
X-ListMember: recipient@gmail.com
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="UTF-8"

MESSAGE TEXT



EDIT: Results from the automated check@isnotspam.com check:



This message is an automatic response from isNOTspam's authentication verifier service.  The service allows email senders to perform a simple check of various sender authentication mechanisms.  It is provided free of charge, in the hope that it is useful to the email community.  While it is not officially supported, we  welcome any feedback you may have at .

Thank you for using isNOTspam.

The isNOTspam team


==========================================================
Summary of Results
==========================================================

SPF Check : pass
Sender-ID Check : neutral
DomainKeys Check : neutral
DKIM Check : invalid
SpamAssassin Check : ham (non-spam)

==========================================================
Details:
==========================================================

HELO hostname: [1.2.3.4]
Source IP: 1.2.3.4
mail-from: noreply@domain.com
---------------------------------------------------------
SPF check details:
----------------------------------------------------------


Result: pass
ID(s) verified: smtp.mail=noreply@domain.com DNS record(s):
domain.com. 86400 IN TXT "v=spf1 ip4:1.2.3.4 a mx -all" ""


----------------------------------------------------------
Sender-ID check details:
----------------------------------------------------------


Result: neutral
ID(s) verified: smtp.mail=noreply@domain.com DNS record(s):
domain.com. 86400 IN TXT "v=spf1 ip4:1.2.3.4 a mx -all" ""


----------------------------------------------------------
DomainKeys check details:
----------------------------------------------------------

Result: neutral (message not signed)

ID(s) verified: header.From=noreply@domain.com Selector= domain= DomainKeys DNS Record=

----------------------------------------------------------
DKIM check details:
----------------------------------------------------------

Result: invalid
ID(s) verified: header.From=noreply@domain.com Selector= domain= DomainKeys DNS Record=._domainkey.

----------------------------------------------------------

SpamAssassin check details:
----------------------------------------------------------
SpamAssassin v3.2.5 (2008-06-10)

Result: ham (non-spam) (05.1points, 10.0 required)

pts rule name description
---- ---------------------- -------------------------------



* 1.8 SUBJ_ALL_CAPS Subject is all capitals
* -0.0 SPF_PASS SPF: sender matches SPF record
* 3.2 RCVD_ILLEGAL_IP Received: contains illegal IP address
* 0.0 DKIM_SIGNED Domain Keys Identified Mail: message has a signature
* 0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS

To learn more about the terms used in the SpamAssassin report, please search
here: http://wiki.apache.org/spamassassin/

==========================================================

Explanation of the possible results (adapted from
draft-kucherawy-sender-auth-header-04.txt):
==========================================================

"pass"
the message passed the authentication test.

"fail"
the message failed the authentication test.


"softfail"
the message failed the authentication test, and the authentication
method has either an explicit or implicit policy which doesn't require
successful authentication of all messages from that domain.

"neutral"
the authentication method completed without errors, but was unable
to reach either a positive or a negative result about the message.

"temperror"

a temporary (recoverable) error occurred attempting to authenticate
the sender; either the process couldn't be completed locally, or
there was a temporary failure retrieving data required for the
authentication. A later retry may produce a more final result.

"permerror"
a permanent (unrecoverable) error occurred attempting to
authenticate the sender; either the process couldn't be completed
locally, or there was a permanent failure retrieving data required
for the authentication.



==========================================================
Original Email
==========================================================

From www-data@domain.com Mon Nov 12 12:38:18 2012
Return-path:
Envelope-to: check@isnotspam.com
Delivery-date: Mon, 12 Nov 2012 12:38:18 -0600

Received: from [1.2.3.4] (helo=reco-server.domain.com)
by s15387396.onlinehome-server.com with esmtp (Exim 4.71)
(envelope-from )
id 1TXytm-0006Ks-F4
for check@isnotspam.com; Mon, 12 Nov 2012 12:38:18 -0600
Received: by reco-server.domain.com (Postfix, from userid 33)
id 6BF85C35AD; Mon, 12 Nov 2012 20:35:48 +0200 (EET)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=domain.com;
s=default.private; t=1352745348;
bh=ct/pgfefCHs8+LIeEBcMrJ5P+x8P9h/ezEkaBkHvCN4=;

h=To:Subject:Date:From:Message-ID:MIME-Version:
Content-Transfer-Encoding:Content-Type;
b=KqwYkomSJ7DFGIYp9fwajqCAPr8bab5Blp8FlbN9MGaaNIAt4pBBlnlLOeKeqQ1Dk
B9GzgDaYmzvCeufDu6vHsDX4l2RjzvMvEOu1zYedOni71Pcm8E1R30ACmRE21GMTh1
ydht7n4dCV1ixaRch+yA+usEExUbrrMG5kvSoZyE=
To: check@isnotspam.com
Subject: TEST FOR SPAM
X-PHP-Originating-Script: 0:class.phpmailer.php
Date: Mon, 12 Nov 2012 20:35:48 +0200
From: noreply@domain.com

Message-ID: <0a2dfc42198ab976374beeb033478102@localhost>
X-Priority: 3
X-Mailer: PHPMailer [version 1.73]
X-Mailer: phplist v2.10.19
X-MessageID: 14
X-ListMember: check@isnotspam.com
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-DKIM-Status: invalid (pubkey_unavailable)


This is a test


--
powered by phpList, www.phplist.com --

linux - bash script returns "out of memory" in cron, but not in shell

I'm running a nightly bash script to sync a remote folder (source) with a local folder (target). I've tested this script, based on rsync, and it works fine in a root shell. It takes time since there are hundreds of gigs to copy, but it works.



Once I use it in crontab my server runs out of memory.



My server has 8GB of RAM and 4GB of swap, and as I said, the script never goes OOM when run manually from a shell. It's a default CentOS 5.5 installation. I could split the load and sync all the 2nd-level dirs in a find/for script, but I'd like to keep it simple and only sync the top-level directories.



I cannot run too many tests since this server is used to host websites and other services and I cannot afford to hang it just for testing purposes. Do you know of a setting that could allow cron to finish this job normally?



#!/bin/bash


BACKUP_PATH="/root/scripts/backup"

rsync -av --delete /net/hostname/source/ /export/target/ > $BACKUP_PATH/backup_results_ok 2> $BACKUP_PATH/backup_results_error


Edit: the cron configuration is the default, as is /etc/security/limits.conf, which is entirely commented out.

ubuntu - How to disable software raid (mdadm)?

I have two 500GB hard disks that were in a software RAID1 on a Gentoo system. Now I have put them in an Ubuntu Server 10.10 machine and they still want to be in a RAID. How do I disable the RAID?




sudo mdadm --detail /dev/dm-1
mdadm: /dev/dm-1 does not appear to be an md device



sudo mdadm --stop /dev/dm-1
mdadm: /dev/dm-1 does not appear to be an md device




   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       60801   488384001   83  Linux

Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       60801   488384001   83  Linux

Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       60801   488384001   83  Linux

Disk /dev/dm-1: 500.0 GB, 499999965184 bytes
255 heads, 63 sectors/track, 60788 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/dm-1p1             1       60801   488384001   83  Linux
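For what it's worth, this is the direction I understand "disabling" would take, though I have not dared run it yet since it wipes the RAID metadata on the members (device names per the fdisk output above; double-check which two disks belong to the old mirror):

# if it really is Linux md RAID:
mdadm --stop /dev/md0                  # stop the assembled array (the md device name may differ)
mdadm --zero-superblock /dev/sdc1
mdadm --zero-superblock /dev/sdd1
# if /dev/dm-1 actually comes from BIOS fakeraid metadata, dmraid handles it instead:
dmraid -r                              # list the detected raid sets
dmraid -rE                             # erase that metadata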


Friday, September 23, 2016

PC can access another LAN devices's embedded web server when wired to router/switch, but not over WiFi



For work, I have designed a hardware 'box' (actually an industrial control gizmo) which has an embedded web server for the purpose of configuration (much like a regular domestic Internet router has a web server for similar purposes). It's usually used with a direct connection to a laptop, so that a regular browser (IE8, FF, etc.) may be used to configure the box.




The problem that has recently come to light is that if the 'box' is connected via wired Ethernet to certain Netgear or Linksys WiFi switch/router units and then an attempt is made to access the box via the switch using a WiFi connection to the switch from the laptop, the laptop browser is unable to connect to the box (resulting in a typical 'server not found' error). However, if the laptop is connected via wired connection to the switch, the box can be accessed just fine. It's almost as though the problem is specific to just WiFi.



To clarify with some examples:




  • Linksys or Netgear wireless router/switch(/modem) configured with IP of 192.168.0.5.


  • My custom hardware with its embedded web server configured with static IP of 192.168.0.100 and has a wired Ethernet connection to the Linksys / Netgear unit.


  • PC obtains IP via DHCP from the Linksys/Netgear unit; let's say it's at 192.168.0.200.


  • If the PC is connected via wired connection, the browser can access the box at 192.168.0.100 just fine.


  • If PC connected to the router over WiFi, then it can't access the box. But it can happily access the router's own web server at 192.168.0.5.



  • Even when the PC's browser won't connect when WiFi is in use, I'm able to successfully ping the hardware box from the PC.


  • From what I can recall, when the connection doesn't work over WiFi, I don't see an entry for 192.168.0.100 with arp -a. When the connection is working (PC has wired connection to router), arp -a shows me an entry for 192.168.0.100.




I have been currently trying to investigate this today with a Linksys WRT54G. At first, I had the problem as described above. Later, after much messing about, it somehow resolved itself. The only procedure I recall doing immediately before it magically started working over WiFi was a series of successful ping tests from the router itself to the hardware box.



This problem has been reported with several Netgear / Linksys routers, though it will be a bit of time before I can determine the model numbers.



Any help would be much appreciated. Please let me know if there's any further information or logs or tables I could provide.


Answer




This is quite embarrassing, but it turned out that it was due to the MAC address that I had assigned to the box. Being an R&D PCB without production assigned MAC, I just put a dummy MAC address in. What I had put in had the multicast bit set (if I remember right - this is going back months now). I changed it to a more standard MAC address and all was fine.


Connecting to a new MySQL instance



I just created a new MySQL data directory using mysql_install_db:





$mysql_install_db --datadir=/home/user1/opt/mysqld/data/
Installing MySQL system tables...
091123 10:51:54 [Warning] One can only use the --user switch if running as root

OK
Filling help tables...
091123 10:51:54 [Warning] One can only use the --user switch if running as root

OK


To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h machine1 password 'new-password'
See the manual for more instructions.
You can start the MySQL daemon with:

cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at http://shop.mysql.com



I ran mysqld_safe to start the instance, but mysql will not connect. I can run mysqld with --skip-grant, but it won't let me set new privileges. How do I kickstart the permissions on a new MySQL instance?



$ps aux|grep mysql




root 2602 0.0 0.0 87076 1308 ? S Nov22 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid
mysql 2662 0.0 0.2 190676 23732 ? Sl Nov22 0:22 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock
user1 18954 0.0 0.0 84984 1224 pts/4 S+ 11:36 0:00 /bin/sh /usr/bin/mysqld_safe --defaults-file=my.cnf

user1 18980 0.0 0.2 190160 22860 pts/4 Sl+ 11:36 0:00 /usr/libexec/mysqld --defaults-file=my.cnf --basedir=/usr --datadir=/home/user1/opt/mysqld/data --pid-file=/home/user1/opt/mysqld/mysqld.pid --skip-external-locking --port=3307 --socket=/home/user1/opt/mysqld/mysql.sock
user1 20148 0.0 0.0 82236 756 pts/2 S+ 12:15 0:00 grep mysql


netstat -lnp|grep mysql




tcp 0 0 0.0.0.0:3307 0.0.0.0:* LISTEN 18980/mysqld
unix 2 [ ACC ] STREAM LISTENING 153644 18980/mysqld /home/user1/opt/mysqld/mysql.sock
unix 2 [ ACC ] STREAM LISTENING 8193 - /var/lib/mysql/mysql.sock



Edit: There are two instances of MySQL. I want the one on port 3307. mysqld_safe is being run as user1 not root with this command: mysqld_safe --defaults-file=my.cnf


Answer



Ah, it must use a socket file instead of a port.



mysql --socket=/home/user1/opt/mysqld/mysql.sock -uroot did the trick.


Thursday, September 22, 2016

dmarc - How to set Exim envelope domain to From domain



I've set up DKIM on Exim with the domain set like:




DKIM_DOMAIN = ${sender_address_domain}


However, the domain is always set to the same domain (my primary domain), which causes DMARC validation to fail, because of alignment, when sending emails for other domains (I host several websites).



From reading the documentation, I think the sender_address_domain is the envelope address and not the From field. How can I change the envelope address so that it matches the From field of a given email (I assume this will also allow SPF alignment to be correct)?



Also, for security, is it possible to have a whitelist of allowable domains, so Exim refuses to send emails that have another domain in the From field?


Answer




Add the rewrite rule:



*       "$header_from:" F


In debian this can be added by creating a file such as /etc/exim4/conf.d/rewrite/10_from_rewrite. This rule rewrites the sender field to match the From header, allowing DMARC alignment to work correctly.


HeartBleed Openssl update Redhat Enterprise server 6.3

I already updated using yum update openssl, but my server is still reported as vulnerable.
I tried grep 'libssl.*(deleted)' /proc/*/maps and got no results, as I had already restarted the server.
Yet it is still reported as vulnerable.



$ rpm -qa | grep openssl
openssl-devel-1.0.1e-30.el6_6.4.x86_64
openssl-1.0.1e-30.el6_6.4.x86_64


Did I miss something to execute?




I scanned my site with the Acunetix scanner and with the one from the Red Hat website; both say it is vulnerable.



Here are some additional details:



grep libssl.so.1.0.1e /proc/*/maps | cut -d/ -f3 | sort -u | xargs -r -- ps uf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 30932 0.0 0.3 77076 7612 ? Ss 18:50 0:00 squid -f /etc/squid/squid.conf
squid 30935 0.0 1.3 119244 26256 ? S 18:50 0:00 \_ (squid-1) -f /etc/squid/squid.conf
root 30907 0.0 0.2 81328 3852 ? Ss 18:50 0:00 /usr/libexec/postfix/master

postfix 30911 0.0 0.2 81592 3872 ? S 18:50 0:00 \_ qmgr -l -t fifo -u
postfix 31221 0.0 0.1 81408 3840 ? S 20:30 0:00 \_ pickup -l -t fifo -u
root 30775 0.0 1.0 290340 20900 ? Ss 18:50 0:00 /usr/sbin/httpd
apache 31041 0.0 1.6 520736 31100 ? Sl 19:08 0:00 \_ /usr/sbin/httpd
apache 31042 0.0 1.1 316740 22496 ? Sl 19:08 0:00 \_ /usr/sbin/httpd
apache 31043 0.0 0.9 313992 18868 ? Sl 19:08 0:00 \_ /usr/sbin/httpd
apache 31044 0.0 1.4 520416 28544 ? Sl 19:08 0:00 \_ /usr/sbin/httpd
apache 31047 0.0 1.0 314700 21104 ? Sl 19:08 0:00 \_ /usr/sbin/httpd
apache 31048 0.0 1.4 448284 27656 ? Sl 19:08 0:00 \_ /usr/sbin/httpd
apache 31051 0.0 1.2 317584 23292 ? Sl 19:08 0:00 \_ /usr/sbin/httpd

apache 31052 0.0 1.1 317472 22872 ? Sl 19:08 0:00 \_ /usr/sbin/httpd
apache 31065 0.0 1.1 316592 21676 ? Sl 19:09 0:00 \_ /usr/sbin/httpd
apache 31082 0.0 1.4 445272 28168 ? Sl 19:30 0:00 \_ /usr/sbin/httpd
apache 31085 0.0 0.8 313452 17208 ? Sl 19:30 0:00 \_ /usr/sbin/httpd
apache 31086 0.0 1.0 315984 20944 ? Sl 19:30 0:00 \_ /usr/sbin/httpd
apache 31091 0.0 1.4 447032 27504 ? Sl 19:31 0:00 \_ /usr/sbin/httpd
apache 31094 0.0 0.8 311240 16208 ? Sl 19:31 0:00 \_ /usr/sbin/httpd
apache 31095 0.0 1.1 316264 21408 ? Sl 19:31 0:00 \_ /usr/sbin/httpd
apache 31210 0.0 0.8 311228 16100 ? Sl 20:23 0:00 \_ /usr/sbin/httpd



ldd sbin/httpd
linux-vdso.so.1 => (0x00007fff50dff000)
libm.so.6 => /lib64/libm.so.6 (0x00007fd9501e5000)
libpcre.so.0 => /lib64/libpcre.so.0 (0x00007fd94ffb9000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fd94fd99000)
libaprutil-1.so.0 => /usr/lib64/libaprutil-1.so.0 (0x00007fd94fb75000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007fd94f93e000)
libexpat.so.1 => /lib64/libexpat.so.1 (0x00007fd94f715000)
libdb-4.7.so => /lib64/libdb-4.7.so (0x00007fd94f3a1000)

libapr-1.so.0 => /usr/lib64/libapr-1.so.0 (0x00007fd94f175000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fd94ef57000)
libc.so.6 => /lib64/libc.so.6 (0x00007fd94ebc3000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fd94e9bf000)
/lib64/ld-linux-x86-64.so.2 (0x0000003747a00000)
libuuid.so.1 => /lib64/libuuid.so.1 (0x00007fd94e7ba000)
libfreebl3.so => /lib64/libfreebl3.so (0x00007fd94e541000)

openssl version -a
OpenSSL 1.0.1e-fips 11 Feb 2013

built on: Thu Oct 16 11:05:49 EDT 2014
platform: linux-x86_64
options: bn(64,64) md2(int) rc4(16x,int) des(idx,cisc,16,int) idea(int) blowfish(idx)
compiler: gcc -fPIC -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -DKRB5_MIT -m64 -DL_ENDIAN -DTERMIO -Wall -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -Wa,--noexecstack -DPURIFY -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
OPENSSLDIR: "/etc/pki/tls"
engines: dynamic
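Note that Red Hat backports security fixes without changing the upstream version string, so scanners that rely on the reported OpenSSL version can produce false positives; one way to confirm whether the installed package actually contains the CVE-2014-0160 (Heartbleed) fix is to check the RPM changelog:

rpm -q --changelog openssl | grep -i CVE-2014-0160

If the CVE shows up there, the library on disk is patched; any remaining positive result then usually points at a process still running the old code or at how the scanner performs its test.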

Apache SSL VirtualHosts on a single IP using UCC/SAN certificate



I need to host several Apache virtual hosts with SSL from a single IP.




Now - I understand that because SSL wraps around the HTTP request, there's no way to know which host is being requested until a public key has been sent to the client first. This essentially breaks the possibility of SSL virtual hosts using a standard SSL certificate.



I have obtained a Unified Communications Certificate (UCC), otherwise known as a Subject Alternative Name (SAN) certificate. This allows me to serve the same certificate for multiple domains.



I would like this to be the certificate served by Apache for any SSL request - and then have Apache resolve the virtual host as usual, once the encryption has been established.



How should I configure Apache for this? I have tried to research how this can be done, but all I can find are quotes which say that it is possible, but no specifics:







wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI




While Apache can renegotiate the SSL connection later after seeing the hostname in the request (and does), that's too late to pick the right server certificate to use to match the request hostname during the initial handshake, resulting in browser warnings/errors about certificates having the wrong hostname in them.




serverfault.com/questions/48334/apache-virtual-hosts-with-ssl




Incidentally, it is possible to have multiple SSL-secured named virtual hosts on a single IP address - I do it on my website - but it produces all sorts of warnings in the Apache logs, and certificate warnings in the browser. I certainly wouldn't recommend it for a production site that needs to look clean.
-David Jul 31 at 4:58




www.digicert.com/subject-alternative-name.htm





Virtual Host Multiple SSL sites on a single IP address.
Hosting multiple SSL-enabled sites on a single server typically requires a unique IP address per site, but a certificate with Subject Alternative Names can solve this problem. Microsoft IIS 6 and Apache are both able to Virtual Host HTTPS sites using Unified Communications SSL, also known as SAN certificates.







Please help.


Answer



I tested this on my apache 2.2.14 instance and it worked fine:




Add the NameVirtualHost directive (for example in ports.conf):



NameVirtualHost *:443


define your vhosts:




<VirtualHost *:443>
    ServerName www.siteA.com
    DocumentRoot "/opt/apache22/htdocs/siteA"
    SSLEngine on   # assumed; needed to enable SSL, not shown in the original snippet
    SSLCertificateFile "/path/to/my/cert"
    SSLCertificateKeyFile "/path/to/my/key"
</VirtualHost>

<VirtualHost *:443>
    ServerName www.siteB.com
    DocumentRoot "/opt/apache22/htdocs/siteB"
    SSLEngine on   # assumed; needed to enable SSL, not shown in the original snippet
    SSLCertificateFile "/path/to/my/cert"
    SSLCertificateKeyFile "/path/to/my/key"
</VirtualHost>




I used this link as a resource.
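To verify from the client side which certificate each name actually receives, a quick check (using the hostnames from the example above) is:

openssl s_client -connect www.siteA.com:443 -servername www.siteA.com </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect www.siteB.com:443 -servername www.siteB.com </dev/null 2>/dev/null | openssl x509 -noout -subject

The -servername option sends the SNI extension, so you can confirm that the same UCC/SAN certificate is returned for both names and that its subject/SANs cover every vhost.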


http - Does Apache's keep-alive timeout reset every time a request is received?

The title of this question is pretty self explanatory, but:



Does Apache's keep-alive timeout reset (as in, start again) every time a request is received?



So for example, assume we have a 60 second keep alive timeout:



Second 0 - First request received, keep-alive starts - Timeout currently 60 seconds



Second 10 - Next request received, keep-alive resets - Timeout currently 60 seconds




OR



Second 0 - First request received, keep-alive starts - Timeout currently 60 seconds



Second 10 - Next request received, keep-alive does not reset - Timeout currently 50 seconds



Thanks.

Wednesday, September 21, 2016

virtualhost - Apache virtual host pointing to wrong document root



I am running Linux Mint and I'm trying to set up a virtual host with Apache.



I have added the following file in /etc/apache2/sites-available/ ( it's copied from the 'default' file in that directory)





<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName testsite.dev
    DocumentRoot /home/chris/Projects/web/testsite

    <Directory /home/chris/Projects/web/testsite>
        Allow from all
        AllowOverride All
        Order allow,deny
    </Directory>

    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/access.log combined

    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
</VirtualHost>




I have also added an entry in my hosts file (/etc/hosts) which would be:



127.0.0.1    testsite.dev



I've also enabled the site and reloaded the apache service with:



a2ensite testsite 
service apache2 reload


However when I browse to http://testsite.dev it's serving pages from /var/www/ instead of /home/chris/Projects/web/testsite.




What am I doing wrong?


Answer



A few things I would check:




  • Ensure that you have a "NameVirtualHost *:80" in your config. If the "*:80" part is different it may conflict with the value in "VirtualHost" (in general there are fewer issues if they are the same).

  • Ensure you don't have other "VirtualHost" entries defined somewhere that may be conflicting with this one (check everything in "sites-available" as well as any other Apache config file).

  • Check the error log to make sure nothing "bad" is happening. Enabling and checking the access log may also be useful.

  • Double-check that the files/content in the two directories are what you think they are. If you have them somehow mixed up, it could be working as expected.

  • Stop and start the Apache service. In theory reloading should work, but just in case (it wouldn't be the first time I've seen reloading fail but stopping/starting work).




If you run through all this and still can't seem to get what you want I would create a minimal set of Apache configs (move all existing configs out and create temporary ones) and start changing things a step at a time to see where things are going wrong.
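One more quick check that often settles this kind of problem: ask Apache itself how it parsed the virtual hosts. Something along these lines (the command name varies by distribution):

apache2ctl -S      # Debian/Ubuntu-style layout
# or: apachectl -S

The output lists each port, the default vhost, and every name-based vhost together with the config file and line it came from, which usually makes it obvious why a request is falling through to the default /var/www DocumentRoot.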


Tuesday, September 20, 2016

linux - Apache permission to write on user home folder



I'm performing a server backup with a PHP script and trying to save the output to my home folder.
(I have enough space available there.)



Apache is the user running the PHP script, and it doesn't have permission to write to my user's home folder, even when I set chmod 777 on ~/backup.

I tried adding a symlink to this path, but there is still a permission issue.



Is there a simple way to let Apache drop files into my home folder while still staying secure?


Answer



First of all, make sure that the apache user has suitable (r-x) access to every directory on the path from / down to ~/backup, and rwx on the ~/backup directory itself.



It is also likely that SELinux is playing its part and denying httpd access to user home directories.



Check the output of




getsebool httpd_enable_homedirs
httpd_enable_homedirs --> off


If it is off as shown above then it needs to be allowed



setsebool -P httpd_enable_homedirs on


If this doesn't solve the problem then take a look in your audit.log and other log files for relevant messages.
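A sketch of doing that, assuming auditd is running and that relabeling the backup directory is acceptable in your setup:

# show recent SELinux denials attributed to httpd
ausearch -m avc -ts recent -c httpd

# check the current SELinux label on the target directory
ls -Zd ~/backup

# one common fix: give it a type httpd is allowed to write to
chcon -R -t httpd_sys_rw_content_t ~/backup

Note that chcon changes do not survive a full filesystem relabel; semanage fcontext plus restorecon is the durable way to achieve the same thing.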



Monitoring Redis and mysql server for memory and cpu usage

I want to compare the memory and CPU usage of the redis-server and the mysql-server. I have used a profiler to get the client-side data, but I would also like to know what is going on on the servers when I execute queries.




Do you know a tool that I could use?



I am running them locally on Mac OS X 10.6.8.

firewall - Stealthed vs Closed Port



I was reading a website about the difference between stealthed and closed ports.




http://www.grc.com/faq-shieldsup.htm



A closed port will reply with a packet saying it is closed. However, a stealthed port will not respond at all.



Is it recommended to stealth all the ports you don't use? If so, how do you go about doing so?


Answer



Depends on what you're trying to do. Basically, if you don't reply with a packet saying the port is closed, you'll make life more difficult for legitimate users, but possibly also make life difficult for any attackers trying to break into the box. It won't prevent somebody scanning the box to find out what ports are open, but it might slow them down. And it might make it less likely somebody finds out your system exists in the first place.



Is it a system providing services on a well-known port to the world (such as a web server)? Then trying to "stealth" your ports won't do much good.




Is it a system doing nothing anybody needs to know about? Go for it.



You didn't say what OS, etc. you're running, so the answer to how varies. On Linux with iptables you do "-j DROP" instead of "-j REJECT", basically.
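A minimal iptables sketch of the difference (port 23 and the rules below are just placeholders for an unused port):

# 'stealthed': silently discard, the remote end just times out
iptables -A INPUT -p tcp --dport 23 -j DROP

# 'closed': actively refuse, the client is told immediately that nothing is listening
iptables -A INPUT -p tcp --dport 23 -j REJECT --reject-with tcp-reset

REJECT answers with a TCP reset (or an ICMP error for other protocols), while DROP sends nothing back at all.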


Sunday, September 18, 2016

domain name system - Are Zerigo's claims to have instant update on their DNS service legitimate or just marketing word semantics?



Zerigo states on their site




Instant updates




When you make a change to your DNS records, it takes a moment for them to replicate to all of the DNS servers. Until this happens, nobody can see the change.



Zerigo's proprietary synchronization engine ensures this happens in mere seconds. We won't name names (to protect the guilty), but many providers take hours to perform updates. In our opinion, that's crazy and it's why we designed our system for much faster updates.




Are they really stating that their service is different from how DNS operates normally, or is this just stating that when you make a change they make sure it instantly removes any old cached records from their systems but doesn't do anything about other DNS servers?


Answer



It is true. It basically means that all the updates you make sync to THEIR DNS servers (!) immediately (or close to real time). This is standard for good DNS providers - there is a push protocol (DNS NOTIFY) that a master can use to inform replicated slaves that a zone has changed. Normally DNS slaves (as most servers in a typical DNS farm are) would otherwise just wait for the zone's refresh timer before re-checking the data.




Sadly, this is marketing speak. Basically they present "normal practice" for a well-managed DNS farm as a feature. Only a badly managed farm will not configure the master and slaves to push and receive notifications when a change occurs on the master.



That said, this does NOT invalidate caches all over the internet - full stop. They also don't claim that. If you read carefully what they said, they basically claim to update their own servers immediately, while some providers take hours to update THEIR OWN (i.e. those providers') servers. Crappy providers all around, pretty much.
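For reference, the push mechanism described above is DNS NOTIFY (RFC 1996). A minimal BIND master zone sketch (the zone name and slave address are placeholders):

zone "example.com" {
    type master;
    file "/etc/bind/db.example.com";
    notify yes;                      // send NOTIFY to the zone's NS hosts on every change
    also-notify { 192.0.2.53; };     // plus any slaves not listed in the NS records
};

When the zone is reloaded, the master sends NOTIFY messages and the slaves start a zone transfer right away instead of waiting for the SOA refresh interval to expire - which is exactly the "instant update" being marketed.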


Saturday, September 17, 2016

networking - Pinging a computer on the network fails cyclically



I've the most strange behavior from one of the computers on my work network...



The network is composed of a Cisco switch and about 10 computers... (my position in the company is senior programmer, so I don't have all the information about the network).




The computer with this behavior is a Windows XP SP3 box, P4 2 GHz with 1 GB of RAM, formatted last week. I have only installed two things on it so far: the WAMP stack (WampServer) and the No-IP DUC client. I also want to install Mercurial, but I'm not going to bother until I solve this problem.



The problem is the following: I start pinging the server with ping ip -t, and my computer pings that computer with no problem, but after a while it starts giving:




Reply from 192.168.10.18: bytes=32 time=68ms TTL=128
Reply from 192.168.10.18: bytes=32 time=68ms TTL=128
Reply from 192.168.10.18: bytes=32 time=68ms TTL=128
Reply from 192.168.10.18: bytes=32 time=68ms TTL=128
Destination host unreachable.
Destination host unreachable.
Destination host unreachable.
Reply from 192.168.10.18: bytes=32 time=68ms TTL=128
Reply from 192.168.10.18: bytes=32 time=68ms TTL=128
Reply from 192.168.10.18: bytes=32 time=68ms TTL=128




It keeps on pinging the machine, then it starts failing again, and it keeps doing this all day long. If I'm using Windows RDP while pinging, the RDP session simply closes, with no warning whatsoever.




Looking at the Task Manager on that computer, what I get is a cyclic wave with peaks of 100% CPU use, which matches exactly the times when the ping fails.



We have already tried two things: changing the port where the cable was connected, and connecting the computer directly to the switch; neither produced any result.



I'm using this server as a pre-deployment server for our sites. I'm developing on my local machine using the database on the server, which means I'm constantly requesting access to the database on that computer.



But I don't think these constant requests are the cause of this behavior... I think... I'm not sure.



I've already checked the Windows event log, and nothing related to this problem showed up.




This also happens when I'm not pinging the computer. After I installed WampServer, I started using the database almost immediately, and when doing some refreshes, the database connection on the PHP page simply timed out. That's when I figured something was wrong with that computer or with the network.



But, for example, this morning I was able to work with no problem whatsoever; only in the afternoon did this strange behavior start again.



Edit: I forgot to mention something: after disabling a lot of services, the interval between ping failures increased, meaning I could ping the machine for longer and the failures became less frequent.



Thanks in advance for any insight on this problem..


Answer



On old hardware like this, I would suspect hardware trouble: try replacing the network card.


amazon web services - Should I stick with ELB? How bad is a DDoS attack of 2 million packets per second?

I have an Elastic Load Balancer (ELB) on Amazon Web Services under DDoS attack (specifically a SYN flood) that Amazon has said occasionally hit over 2.4 million packets per second. While it hasn't taken the site down, it has been marginally effective at occasionally taking out a single ELB instance (there are 6-12 instances in the load balancer group) over the last week.



My obvious thought is- how bad is that level of traffic? Should I consider deploying my own load balancer solution on EC2 if ELB can't handle this much traffic? Or is this a pretty significant attack and would you say they are doing a pretty good job of mitigation?

Friday, September 16, 2016

ubuntu - Very poor guest performance in VMware Server 2

I'm running VMware Server 2.0.2 on my dual core Athlon server with 4 GB RAM and a RAID1 with two 400 GB SATA hard disks. This server's running three VMs at a time.



The host system is a Debian 5 x64 with the latest kernel and all updates installed. Besides VMware Server it doesn't run anything else.



The VMs do use non-fixed hard disk images. I'm running two VMs with 768 MB of RAM each, the third one uses 1.5 GB of RAM, so there should be another GB of free RAM for the host system.




Two VMs have a Ubuntu 9.10 x64 installed, the other one uses Debian 5 x64.



My problem is the very poor performance. In one of the VMs I'm running Apache with mod_rails (Phusion Passenger). None of the VMs has to handle very heavy load, so after some idle time Passenger goes to sleep. Waking it up again takes up to 45 (!) seconds, during which the VM doesn't really respond anymore due to the load generated by waking up. The load meter in the VM peaks at 10.00, which, in my opinion, can't be normal. On a (non-virtualized) test system I don't see such behavior, so it has to be VMware Server, doesn't it?



Sometimes even a simple SSH connect to one of the VMs generates a very high load, up to 8.00.



Someone told me that it is possible to assign a precise amount of CPU power and other resources to the VMs, but I really don't know what to look for. Unfortunately, Google didn't tell me either.



Any help is appreciated.

domain - Using Subdomain on Same Server : Good or Bad & Why?



I'm working on a simple HTML/CSS/jQuery-based website and chose 000webhost as a free hosting provider.




It gives me sufficient space and bandwidth, and a shared Unix-based Apache server.



Now on to my question: I googled for the pros and cons of managing subdomains, especially for static data, and concluded that a subdomain is a good idea and can share the load of HTTP requests and responses.



But what about my case, where I have only one server available, which is also shared?



Is it optimal to have subdomains on the same server, or is it better to have just one domain with folder divisions?


Answer



Having multiple hostnames in URLs used to be quite important. There were mainly 2 reasons, which follow. But today I'd say it is often not that important.




The reasons are / used to be:




  • Sharding (an overloaded term): Older browsers would only open 2 connections per hostname. Thus if all HTML, CSS, JS & IMG files were retrieved from www.company.com then the browser would download at most 2 files at any time. Using multiple hostnames in internal URLs, i.e. http://www.company.com as well as shard1.company.com and shard2.company.com would speed up the downloads. This is no longer important, because all modern browsers use 8 or more parallel connections per hostname.


  • Cookieless subdomains: Assuming that www.company.com sets a number of cookies for stuff like login state and analytics, then there is a small performance benefit to serving static files from a cookieless domain, for example static-files.company.com. This still holds true today, and is still helpful -- but it is a smaller optimization. Cookies are usually quite small, and the time taken to transmit them is low, but of course it all adds up.




The classic book that popularized this is "High Performance Web Sites" by Steve Souders. Some of the specific recommendations in the book are a bit old, but it's still the best all-round introduction to frontend performance engineering there is.
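As an illustration of the cookieless-subdomain point, a minimal Apache sketch (static-files.company.com is the example hostname from above; the paths and lifetimes are placeholders, and mod_expires is assumed to be enabled):

<VirtualHost *:80>
    ServerName static-files.company.com
    DocumentRoot /var/www/static
    # nothing on this vhost sets cookies; static assets get long cache lifetimes
    ExpiresActive On
    ExpiresDefault "access plus 30 days"
</VirtualHost>

Because the browser has no cookies scoped to this hostname, every asset request is a little smaller and the responses are easier to cache.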


Thursday, September 15, 2016

sql - Why didn't my performance increase when I went from 4 disk RAID 10 to 6 disk RAID 10?

I had 4 drives in a RAID10 set and 2 spare disks. We've been monitoring the perfmon Average Disk seconds per transfer (reads and writes) counter. We bought 2 more disks, rebuilt the RAID10 set using 6 disks (2 span) and the performance stayed the same. Is there some limitation that RAID10 only improves based on 4 disk multiples (4, 8, 12, etc)?



I'm using a Dell R510. This is a SQL server. We kept the same files on the volume.







I'm using Dell Perc H700 1GB non-volatile cache.



ADs/T is around 200ms.



These are 15K sas drives 600GB



I don't know how Dell PERC controller does the spans. It just asks how many disks I want in the span (2 or 3 in this case).




The reason we added spindles was to reduce the overall time per transaction. We went from RAID1 to RAID10 (4 disks) and the performance doubled; this time we were hoping for a 33% improvement. ADs/T is recommended to be 20ms max; we are at 100ms. We realize we weren't going to get to 20ms by adding 2 disks, but since they were at 400ms before I was here, being in the 50-60ms range would have been OK for the time being.



transfers per second are around 850






I understand the logic behind adding disks not speeding up an individual transfer against a given disk. But for all intents and purposes, ADs/T (or the read or write variant) is a measure of the amount of time it takes to do something against a given disk/volume according to Windows. So if a whole transaction takes 40 milliseconds but in reality it's writing to 4 disks, it's theoretically spending 10ms per disk if done in parallel. There's always the chance it's writing as fast as it can to one disk and then going on to the next, but how can someone tell which it's doing?



So, for that matter, the time it takes with 4 disks should be proportional to the time with 6 disks. Windows should see it as faster, even though, if a disk has reached its maximum potential, each individual disk won't be faster.




SSD isn't an option for us due to the large size of our indexes, the total space we need for all of our SQL files, and the cost. SSD sizes just aren't there yet, though for smaller files it might make sense.



If adding disks won't help improve speed, how else does one explain the 50% performance increase when going from 2 to 4 disks, and then nothing when going from 4 to 6?

apache 2.2 - ec2, ping and security settings



I've got a noob question about security settings on an AWS EC2 instance. I've set up an instance with Tomcat 7 (ami-95da17fc) and I have a little issue.



If I ssh into the instance and do ping -c 2 -p 80 localhost I get 0% packet loss.
If I ping my elastic IP I get 100% packet loss, and the same thing happens with the long.winded.aws.dns.name.




If I simply try to ping the site from a terminal (not logged into the instance) I also get 100% packet loss.



My default security group has the following settings:



0 - 65535 sg-07787e6e (default)



80 (HTTP) 0.0.0.0/0



8080 (HTTP*) 0.0.0.0/0




22 (SSH) 70.126.98.72/32



I'd be most grateful if anyone can shed some light on what I'm missing.






... hm, I get a 404 with curl; sudo netstat -lp gives me:




Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 *:ssh : LISTEN 841/sshd
tcp 0 0 localhost:smtp : LISTEN 868/sendmail: accep
tcp 0 0 *:webcache : LISTEN 981/java
tcp 0 0 *:http : LISTEN 948/httpd
tcp 0 0 *:ssh : LISTEN 841/sshd
tcp 0 0 localhost:8005 : LISTEN 981/java
tcp 0 0 *:8009 : LISTEN 981/java
udp 0 0 *:bootpc : 734/dhclient
udp 0 0 domU-12-31-39-09-A6:ntp : 852/ntpd
udp 0 0 localhost:ntp : 852/ntpd
udp 0 0 *:ntp : 852/ntpd
udp 0 0 fe80::1031:39ff:fe0:ntp : 852/ntpd
udp 0 0 localhost:ntp : 852/ntpd
udp 0 0 *:ntp : 852/ntpd
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix 2 [ ACC ] STREAM LISTENING 1954 922/gam_server @/tmp/fam-root-
unix 2 [ ACC ] STREAM LISTENING 1967 927/bluepilld: host /var/bluepill/socks/hostmanager.sock




... and I've not changed anything in iptables


Answer



Ping uses the ICMP protocol - the security groups in the AWS Console default to the TCP protocol. If you wish to be able to ping your instance from 'the outside', you need to change the security group settings to permit the ICMP protocol (Echo), using, for instance, something like the following:




ec2-authorize default -P icmp -t -1:-1 -s 0.0.0.0/0


You can also use the AWS Console to accomplish this:




  • Create a 'Custom ICMP Rule' for your security group

  • Type: Echo Request and Type: Echo Reply (both are required)

  • Source: 0.0.0.0/0




Alternatively, for the same effect as the ec2-authorize command above, you can allow 'All ICMP'



See the AWS EC2 Docs for more information, and the AWS FAQ.
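If you are using the newer unified AWS CLI rather than the legacy ec2-api-tools, the equivalent should be something along these lines (group name and CIDR as in the example above):

aws ec2 authorize-security-group-ingress --group-name default --protocol icmp --port -1 --cidr 0.0.0.0/0

Here --protocol icmp with --port -1 corresponds to 'All ICMP'; narrow the CIDR if you only need ping to work from specific networks.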


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...