Tuesday, January 30, 2018

networking - What are the 'gotchas' a software developer needs to look out for if tasked with setting up a production-ready server?

As a software developer I am very used to installing my typical stack (Java, MySQL and Tomcat/Apache) on my development machine. But setting up and securing a production machine is not something I would feel confident doing. Is there an A-Z guide for dummies for setting up and securing a production server? Is the process very different depending on platform (Windows or Linux)? Are there some general rules that can be applied across different platforms (and application stacks)?

How to mask redirect subdomain and ports to another external subdomain on that same port

I have read through the forum but still cannot get a clear answer for what I need.
My purpose is to mask-redirect a subdomain to another one, passing the port through as well.
For example, I want vdi.mydomain.com:445566 to redirect to client.cloud.example.com:445566 (the port stays the same).
Basically, I want to connect an RDP session to someone else's server on a special port. I want to use my own URL instead of theirs, and I would prefer to mask the destination URL if possible. I have full control of the domain on my side. Any help? Thanks!

Sunday, January 28, 2018

capacity - Using all space on Amazon EC2 - Medium tier



I'm currently running an Amazon EC2 - Medium Tier reserved instance for hosting client websites. Recently it seems space is starting to run low on /dev/sda1, so I thought I'd better prepare...



df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 8.0G 5.6G 2.5G 70% /

none 848M 116K 848M 1% /dev
none 853M 0 853M 0% /dev/shm
none 853M 56K 853M 1% /var/run
none 853M 0 853M 0% /var/lock
/dev/sda2 335G 195M 318G 1% /mnt


I've seen this question: How to mount space in /dev/sda2 - which offers one solution, but how do you resize an active partition like sda1 so I don't have to change our current setup? Or are there better ways to utilize sda2?


Answer



DO NOT USE THE SPACE IN /mnt! This is ephemeral (instance store) storage: if the instance is stopped, terminated or fails, anything you put there WILL BE LOST. Some Linux distros mount the ephemeral storage for you as a convenience; use it only for temporary data or swap.




You have a couple options.



Increase Root Drive Size



You can't resize /dev/sda1 on the fly. You can resize it on launch, however. This will require downtime, but what you can do is




  1. Snapshot the existing instance into an AMI. This should give you ami-yyyyyy (I hope you are doing this or something like it already to make backups over time!)




    ec2-stop-instances i-xxxxxx



    ec2-create-image --name my-image-name-v1.0 --description "My lovely Web Server" i-xxxxxxx


  2. Run a new instance of that image with a larger root drive size



    ec2-run-instances -k ssh-key -z us-east-1b -t c1.medium -b "/dev/sda1=:50" ami-yyyyyy


  3. Now you are running instance i-zzzzz. Depending on what Linux you are using you may need to then resize the file system to get the additional space. On Ubuntu, on the box (substitute your actual root device name, e.g. /dev/sda1 or /dev/xvda1):



    sudo resize2fs /dev/xvda1


  4. Now swap in i-zzzzzz for i-xxxxxx in your elastic IP or ELB or DNS or however you're advertising it to the world.





Add A Second Drive



This is probably better. It is marginally more expensive, but best practice is NOT to put a lot of data on your root drive: if it fills up with logs or files you're going to crash and have a sad time of recovery.




  1. Create an EBS volume of the desired size, let's say 20 GB. This gives you a volume, vol-yyyyyy.



    ec2-create-volume -z us-east-1b -s 20



  2. Attach the volume to your instance



    ec2-attach-volume vol-yyyyyy -i i-xxxxxx -d /dev/sdf


  3. On the instance, create a filesystem on it and mount it



    sudo mkfs -t ext3 /dev/sdf



    sudo mkdir -p /web



    sudo mount /dev/sdf /web



  4. Move your web root over to there.


  5. Add the new drive permanently to /etc/fstab



    /dev/sdf /web ext3 defaults,rw 0 0


  6. Snapshot your new image to AMI as in step 1 - always a good practice.




This also has the benefit of being able to back up that EBS separately by just snapshotting the volume, and also if you need to kill one server and bring up another, you can detach the /web EBS volume from one and attach it to the other, making data migration easy.
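
For step 4 above, a minimal sketch of moving the data across (a sketch only; /var/www and the apache2 service name are assumptions, so substitute whatever actually serves your sites, and stop it first so files don't change mid-copy):

    sudo service apache2 stop                # or whatever your web server is
    sudo rsync -a /var/www/ /web/www/        # copy, preserving ownership and permissions
    sudo mv /var/www /var/www.old            # keep the original until the copy is verified
    sudo ln -s /web/www /var/www             # or point DocumentRoot at /web/www instead
    sudo service apache2 start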


ubuntu - EC2 instance ssh connection not working



I tried to connect to my EC2 Ubuntu LTS instance but failed.
The error message I got is below:
ssh: connect to host ec2-79-125-83-13.eu-west-1.compute.amazonaws.com port 22: Connection refused



I checked the instance log, the last lines like below:




The disk drive for /dev/xvdf is not ready yet or not present. 
Continue to wait, or Press S to skip mounting or M for manual recovery


What could be the problem here?



Could you help me please?


Answer



Probably an /etc/fstab issue. Ubuntu is trying to mount a partition listed in /etc/fstab but is unable to find that partition, i.e. /dev/xvdf.




In order to troubleshoot this, you need to -



1) Shut down this instance



2) Launch another instance



3) Attach the root EBS volume of this instance to the new instance from step (2)



4) Mount the EBS volume from step (3), say at /mnt/ebs




5) Check /mnt/ebs/etc/fstab for any inconsistency. If there is one, back up that file, fix it, unmount the partition, re-attach the volume to the instance from step (1), and launch it.
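
A rough sketch of that procedure using the modern AWS CLI and standard shell commands (the instance/volume IDs, device names and mount point are all placeholders, and device naming varies by virtualization type):

    aws ec2 stop-instances --instance-ids i-xxxxxx
    aws ec2 detach-volume --volume-id vol-yyyyyy
    aws ec2 attach-volume --volume-id vol-yyyyyy --instance-id i-rescue --device /dev/sdf
    # on the rescue instance:
    sudo mkdir -p /mnt/ebs
    sudo mount /dev/xvdf1 /mnt/ebs           # the attached root volume usually appears as xvdf/xvdf1
    grep xvdf /mnt/ebs/etc/fstab             # find the offending entry
    sudo cp /mnt/ebs/etc/fstab /mnt/ebs/etc/fstab.bak
    sudo vi /mnt/ebs/etc/fstab               # remove the entry, or add the nofail mount option
    sudo umount /mnt/ebs
    # then detach, re-attach to the original instance as its root device (/dev/sda1), and start it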


Friday, January 26, 2018

iis 7 - HTTP Error 500.19 - Internal Server Error in Windows 7 IIS



I have a simple ASP.NET Web Site. When I try to run it on Windows 7 IIS, the following error message is displayed in my browser:




HTTP Error 500.19 - Internal Server Error
The requested page cannot be accessed because the related configuration data for the page is invalid.




How can I solve this problem?



(screenshot of the error page omitted)


Answer



When you get this error it means that there's probably something broken in your site's web.config file. IIS7 is very helpful in these cases and will usually point you to roughly where the problem lies.



For example, I've deliberately broken a web.config on a test site by making it malformed XML:




(screenshot of the resulting detailed error page omitted)



The areas highlighted in red are providing me with a hint as to what is broken.



Without seeing the full error message from your site it would be hard to speculate what is wrong other than that something is invalid in your web.config file (or perhaps a parent web.config).



Update:



Based on your update and the new screenshot, comment out the line indicated in red in your web.config file, e.g.:







It looks like this setting has been locked somewhere on your machine, probably in the applicationHost.config file in C:\Windows\System32\inetsrv\config.



You could try unlocking this setting to allow the web.config setting to be used by running the following on an Administrator command line:



appcmd.exe unlock config /section:system.webserver/modules /commit:apphost


domain name system - Bind get zone transfer status after executing rndc reload

I have a script that executes rndc reload on secondary (slave) servers for the zones that are modified. This command returns success if the reload is queued successfully.



I wanted to know if there is a way I can get the status of the actual zone transfer without going through the logs itself. I want to be able to automatically handle the case when bind reload failed based on the error itself. Currently, I have to parse the logs to get the status of the zone transfer after executing rndc reload.



Can someone help me figure out how I can get the status of the zone transfer after executing rndc reload, in a way that is better than parsing the logs?




NOTE [to add more clarity]: I know notify can be used for master to communicate to the slave about a change. My question is about knowing if there is any way to get notified when the zone transfer initiated by the slave failed due to any reason without parsing the logs.



E.g. maybe after notifying the slave, the master server died for some reason. In this case, when the slave initiates a zone transfer, it would fail on getting the SOA record from the master. I want to get notified of these kinds of errors that can happen during zone transfer without actually parsing the logs.



Let me know if more information is needed.
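
Not a full answer, but one low-tech way to detect a failed transfer without reading the logs is to compare serials after the reload; a sketch (zone and master names are placeholders, and rndc zonestatus needs BIND 9.9 or newer):

    rndc zonestatus example.com                                         # shows the serial the slave has loaded
    dig +short example.com SOA @master.example.com | awk '{print $3}'   # serial on the master
    dig +short example.com SOA @127.0.0.1 | awk '{print $3}'            # serial on this slave

If the slave's serial is still behind the master's once the refresh/retry window has passed, the transfer failed.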

Thursday, January 25, 2018

internet - Can someone explain to me exactly why it's bad to use an NS name that is a subdomain of the domain that NS is for



I'm reading a book and it states that you should not register
nameserver names such as ns1.example.com for the domain
example.com, as you'll create a catch-22 situation when looking up ns1.example.com etc.
I know what a catch-22 is, but I just don't see how this fits in.



At which stage does this occur exactly? When the internet's central directory queries the WHOIS records
for that domain to find the nameserver IP?




excerpt from "The Definitive Guide to CentOS" (Apress)




Caution: As you can imagine, telling the world that the primary DNS
server for example.org is dns0.example.org has some issues. Mainly,
this has you running around in circles because you've created a
catch-22 situation. You cannot look up dns0.example.org because to do
so you need to ask dns0.example.org. Now, as long as you have name
servers in another domain, this will still work, but it's clearly a
bad idea. Stay away from this one, and if you have to, use a so-called
glue record that defines dns0.example.org in the parent .org zone to
solve the issue.



Answer



It's simply not true to say that this doesn't work or causes problems. For example, on my main domain:



[madhatta@www tmp]$ whois teaparty.net
[Querying whois.verisign-grs.com]
[Redirected to whois.tucows.com]

[Querying whois.tucows.com]
[whois.tucows.com]
[...]
Domain servers in listed order:
NS.TEAPARTY.NET 193.219.118.102
NS2.TEAPARTY.NET 78.46.204.154
[...]


which works just fine. The technical element that makes this work is the glue record, which you can read about above, and elsewhere.
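
You can see the glue for yourself by asking one of the parent (.net) servers directly; the additional section carries the A records that break the loop. A sketch (the output shown is abbreviated and illustrative):

    dig +norecurse @a.gtld-servers.net teaparty.net NS
    # ;; ADDITIONAL SECTION:
    # ns.teaparty.net.   172800  IN  A  193.219.118.102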



proxy - SSH forwarding by username

I want to make a pool of dev machines available to a pool of developers, where each developer should only access their own dev machine. Dev machines are on a private network, behind a NAT, whereas developers are in the DMZ.



I'd want to implement a solution in which:





  • users connect via ssh (X forwarding included) to the NAT public interface, which in turn forwards the request to an Access Control machine (basically a proxy);

  • the AC machine forwards the connection to the right dev machine, depending on a configurable policy.



Clearly, users should neither know about target machine name or IP nor guess the network structure. They should only initiate an SSH connection without any client-side special configuration (this prevents the use of ProxyCommand).



How can I implement such a solution?

Wednesday, January 24, 2018

security - Where should I setup a staging environment in Amazon VPC? And testing?

I setup the production environment for my service in an Amazon VPC in Oregon:




  • 2 availability zones

  • 1 public subnet (including bastion, nat, and ELBs) and 3 private subnets (database, web servers and configuration/supervision) in each availability zone.

  • 11 security groups



There are about 25 VMs, for now, and hopefully it will grow.




Now I'm going to setup the staging environment, but I'm not sure where to put it:




  1. should I simply put the staging instances next to the production instances? Basically, simply reuse the same Amazon region, the same availability zones, the same subnets and the same security groups? I would just need to create new ELBs pointing to staging instances, and that's it. Simple.

  2. or should I put the staging instances in their own subnets, but still in the same region/availability zones? The public subnets would have to be the same though, because you cannot have two public subnets in a single availability zone. Having separate subnets might make it easier to manage things, and I could have dedicated routing rules, to go through different nat instances and possibly a different bastion as well. More complex, but tighter security. I think I might not need to double the security groups though, because I could have a single overall network ACL forbidding traffic between production and staging subnets.

  3. or should I duplicate the whole setup in a different VPC? Since I can only have one VPC per Amazon Region, I would have to do this in a separate region.



The whole point of the staging environment is to be identical to the production environment (or as close as possible). So setting up staging environment in a different Amazon Region just feels wrong: this rules out option 3, doesn't it?




Option 1 is closest to the target of being as close to production as possible. But having staging and production environments in the same subnets feels a bit like a potential security issue, right? So I'm somewhat leaning towards option 2, but I wonder if the potential security issues are serious enough to justify having twice as many subnets to manage?



And what about the testing environment? It should resemble production as well, but it does not need to match it as closely: everything can fit on a few instances, no need for ELBs and everything. Perhaps this environment could all fit in a single dedicated subnet in the same VPC? Being in the same VPC, it would have simple access to the git repository and chef server and supervision tools and openvpn access, etc.



I'm sure many people have been through these considerations? What's your take on this?



Thanks.

Tuesday, January 23, 2018

Cisco static NAT not working on LAN side

I have a web server in my private network that has the ip address 192.168.1.134.

I need to allow users to access this web server from both the internet and the private network. The public IP address is 85.185.236.12. I set up static NAT (192.168.1.136 => 85.185.236.12) on the WAN interface. Now, when we access it from the internet everything works perfectly, but when we try to access it from the LAN we can't reach the web server. I use a Cisco 1841 router, and I think NAT is not applied when I access it from the inside. How can we access the web server from the LAN? Thanks.

Monday, January 22, 2018

logging - Log MySQL login attempts

From time to time there are failed login attempts on our MySQL production server (the MySQL dashboard alerts us). Is there a way to log every successful and failed login to the MySQL server without enabling general_log?



We think general_log is not an option because it's a production server under high load.
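
One lightweight option, assuming MySQL 5.5/5.6 (on 5.7+ the equivalent knob is log_error_verbosity, and there are also audit plugins), is to raise log_warnings so that access-denied and aborted connections are written to the error log instead of the general log:

    mysql -e "SET GLOBAL log_warnings = 2;"     # immediate, but lost on restart
    # to make it persistent, set log_warnings = 2 under [mysqld] in my.cnf
    tail -f /var/log/mysql/error.log            # path varies by distro; failed logins show as "Access denied for user ..."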

linux - Doing a long-running command on ssh



I'm sshing into a server and I'm starting a Python script that'll take approx. 24 hours to complete. What if my internet connection dies in the middle? Will that stop the command?



Is there any way to run my long-running command in a way that local disconnects won't affect it and I could continue to see its output after I log in to ssh again?


Answer



The best way is to use screen (on the server) to start a session to run the command in, then detach the screen so it keeps running while you do other things or disconnect from the server entirely. The other option is to use nohup in combination with &, so you would run nohup <command> &.
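
A minimal sketch of both approaches (the script name is a placeholder):

    # screen: start a named session, detach with Ctrl-A d, reattach after logging back in
    screen -S longjob
    python myscript.py
    screen -r longjob        # later, from a new ssh session

    # nohup: survives the ssh session ending; capture the output so you can follow it later
    nohup python myscript.py > run.log 2>&1 &
    tail -f run.log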


monitoring - What tool do you use to monitor your clients?

Well, we had "What tool do you use to monitor your servers?", and I wondered, do you (and should you) monitor your clients (desktops and laptops)? What tools are useful for this?




It seems to me that one should monitor the clients -- to gauge how well they are performing, perhaps to keep an eye on battery life and power usage, perhaps watching hard drive, network, CPU and maybe even GPU usage, and, indeed, to see if lab users avoid a certain machine or if it never shows up on the network.



Please state which platform(s) a given tool works with, and the licence or cost, if it is easily determined.

Sunday, January 21, 2018

How do I redirect subdomains to the root domain in Nginx on CentOS?



I'm using CentOS with Nginx and Puma. I would like to redirect all subdomains to my main root domain, so I was following the instructions here -- https://stackoverflow.com/questions/26801479/nginx-redirect-all-subdomains-to-main-domain . However I can't get it to work. Below is my configuration:



upstream projecta {
    server unix:///home/rails/projecta_production/shared/sockets/puma.sock;
}

server {
    listen 80;
    server_name mydomein.com;
    return 301 http://mydomein.com$request_uri;
    root /home/rails/projecta_production/public; # I assume your app is located at this location

    location / {
        proxy_pass http://projecta; # match the name of upstream directive which is defined above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        expires 1y;
        add_header Cache-Control public;

        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }
}


If I exclude the "return 301 http://mydomein.com$request_uri;" line then my site will work on the root domain, but not on any of the sub-domains (e.g. viewing a subdomain will yield the default Nginx index page). How do I redirect all sub-domains to my main domain and preserve my Rails/Puma configuration?



Answer



You are currently listening on the apex domain vhost for the redirect. What you need to do is have a separate vhost listener that redirects to the apex. This is an example of a wildcard listener redirecting to the apex domain definition:



upstream projecta {
    server unix:///home/rails/projecta_production/shared/sockets/puma.sock;
}

# Listener for all subdomains
server {
    listen 80;
    server_name *.mydomein.com;
    # If you want to redirect all requests, not just subdomains, use the config below instead.
    # server_name _;
    return 301 http://mydomein.com$request_uri;
}

# Listener for apex domain
server {
    listen 80;
    server_name mydomein.com;

    root /home/rails/projecta_production/public; # I assume your app is located at this location

    location / {
        proxy_pass http://projecta; # match the name of upstream directive which is defined above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~* ^/assets/ {
        # Per RFC2616 - 1 year maximum expiry
        expires 1y;
        add_header Cache-Control public;

        # Some browsers still send conditional-GET requests if there's a
        # Last-Modified header or an ETag header even if they haven't
        # reached the expiry date sent in the Expires header.
        add_header Last-Modified "";
        add_header ETag "";
        break;
    }
}
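
Once both server blocks are in place and nginx is reloaded, the redirect can be verified from the command line, e.g. (assuming wildcard DNS for the subdomains; foo is just an example subdomain):

    curl -sI http://foo.mydomein.com/some/page | grep -i -e '^HTTP' -e '^Location'
    # HTTP/1.1 301 Moved Permanently
    # Location: http://mydomein.com/some/page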

amazon web services - Losing database connection with high traffic on AWS - not sure where to start troubleshooting



I'm designing a highly available WordPress site on AWS using Elastic Beanstalk, and load testing it with Locust.



Everything looks alright: my EC2s are t2.mediums, auto scaled over 3-6 availability zones. Load balancer is set to "Cross-zone" load balancing (so traffic should be distributed to 3 servers in 3 different zones), I am using Aurora (db.t2.medium) with a master->read replica setup.



Things are fine when I visit the site in my browser, but as soon as I spin up Locust (with 100-500 users, 90-100 second wait times, 10 user hatch rate) my site will almost instantly lose the connection to the database and eventually throw a 50x error.



My Apache/PHP setup is pretty out of the box from Beanstalk (Amazon Linux AMI, php 5.6), specs listed below.
opcache is enabled by default, but phpfpm is not currently installed.




Here is a diagram of my setup, and then the specs:



(architecture diagram omitted)




  • EC2


    • 3 t2.Mediums


    • 2 vCPUs

    • 24 CPU credits/hour

    • 4g RAM


  • Apache 2.4

  • PHP 5.6


    • upload_max_filesize => 64M

    • post_max_size => 64M


    • max_execution_time => 120

    • memory_limit => 256M

    • Opcache

    • opcache.enable=1

    • opcache.memory_consumption=128

    • opcache.interned_strings_buffer=8

    • opcache.max_accelerated_files=4000





I am unsure whether this is a hardware configuration issue, or if I need to tweak PHP/Apache/MySQL.


Answer



OK so I think I had several issues:




  1. When I originally created the DBs, I made them t2.micros, which by default only allow 40 connections at once. I later changed the instances to t2.mediums, but max_connections seemed to stay the same. I have recreated the DBs as t2.mediums and max_connections is now 90, which I can increase if need be.


  2. I misread the Locust documentation and had set the test to hit the site every 90 ms, i.e. every 0.09 seconds, which is a lot. I increased the wait time to 3-10 seconds (actual seconds) and the servers hold up fine now.




Increasing the Locust users to 200, however, results in a 75% failure (database disconnect) rate, but I think I can either raise max_connections further or throw a CDN in front of the site (which I'll be doing anyway).
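
For reference, a quick way to see what the cluster currently allows and how close the test gets to the limit (the endpoint, user and password are placeholders; on RDS/Aurora a permanent change to max_connections is made in the DB parameter group, not with SET GLOBAL):

    mysql -h mycluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
      -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Max_used_connections';"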




@michael-sqlbot gets the prize here, he led me down the right path.


Mail not delivered from custom email server (MS Exchange)



I have my own Exchange Server 2013 (SP1). I am able to receive emails OK, but sending emails is not reliable. Emails sent to Gmail or seznam.cz (an email provider in the Czech Republic) are OK.




The problem is sending emails to outlook.com. I guess it is somehow discarded by spam filter but I don't know why. The delivery report says The message was successfully handed off to a different email system. This is as far as we can track it. but email is not delivered and nothing returns back.



This I have already done:




  • set MX record for my domain (aaa.com to mail.aaa.com)

  • set A record (mail.aaa.com to x.x.x.x)

  • set reverse IP record (x.x.x.x to mail.aaa.com)

  • set TXT record v=spf1 mx a -all


  • removed internal hostnames of outbound emails (link)

  • set FQDN for send connector (mail.aaa.com)

  • set FQDN for FrontendTransport receive connectors (mail.aaa.com)



The mail server is not listed in any blacklist in http://mxtoolbox.com/blacklists.aspx.
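
For what it's worth, those records can be sanity-checked from outside with dig (aaa.com and x.x.x.x are the placeholders used in the list above):

    dig +short MX aaa.com         # should return mail.aaa.com
    dig +short A mail.aaa.com     # should return x.x.x.x
    dig +short -x x.x.x.x         # the PTR should come back as mail.aaa.com
    dig +short TXT aaa.com        # should include "v=spf1 mx a -all"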



Is there anything else I should do? I really don't know where to start looking right now.


Answer



Adding this as an answer: if your mail server is getting a successful SMTP message out and recipients aren't receiving it, you must ask the people who own the next-hop mail server. There is literally no other way for you to find out what that mail server did with what your server gave them. You can guess all day long, and you should (in general) follow best practices and check whether you're on DNS RBLs, etc., but it's still just guessing.



Saturday, January 20, 2018

networking - Local or public NTP servers?



For a relatively large network (thousands of hosts) - what are the arguments for and against running a locally managed (pool of) NTP server(s) (perhaps periodically set via some public NTP server) and having all other hosts on the network use that (pool of) NTP server(s) versus having all hosts simply use public NTP servers directly, say via ntp.pool.org?



Aside from the pros and cons, what is typical best practice today?


Answer



The best practice is to run your own pool of NTP servers set to sync from public NTP servers. In the event that your organization was to lose internet access, you would not want your clocks to become skewed. Further, it is rude to set thousands of hosts to public servers when you could (and should) operate a mirror.
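
A minimal sketch of what that looks like in ntp.conf (the internal host names are hypothetical; two or three internal servers are usually enough):

    # /etc/ntp.conf on the internal NTP servers
    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst
    server 2.pool.ntp.org iburst

    # /etc/ntp.conf on every other host
    server ntp1.example.internal iburst
    server ntp2.example.internal iburst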



Finally, if you have a secure computing requirement, then you should operate your own independent NTP hosts. You would require special hardware for these systems to operate.




EDIT: Since there was discussion of it, here is some hardware:



Any hardware supporting PPS seems to work on a modern ntpd. This includes some GPS units, although this seems to be rare, at least as rare as serial GPS units are these days. There are hardware devices sold explicitly for this function, however, including one product called TSync-PCIe. According to the manufacturer's site:




The TSync-PCIe offers several configurations of a synchronized timecode reader/generator package offering flexibility and easy integration of precise timing into an embedded computing application. Choose from synchronization to IRIG (and other similar timecodes), GPS (internal or external receivers), or Precise Time Protocol (PTP/IEEE-1588v2).
- Site Link: http://i564f.6o.to



routing - PFSense: For specific IP address, route traffic to internal host



I have a PFSense box, with several hosts on its LAN. On the WAN side is a series of switches and routers. External hosts use a specific IP address (we'll call it 1.2.3.4) which is forwarded through several layers to the PFSense box, which then port forwards it to a host INSIDE the PFSense LAN network (let's call it 192.168.1.2).




On the distant network, everyone can use 1.2.3.4 to connect to that host and it all works fine. However, on the internal side, they have to remember to use a different address (192.168.1.2).



We want to have a way for the 1.2.3.4 address to work both outside AND inside the PFSense network. However, we do not want to change the internal 192.168.1.0/24 subnet. It's not as simple as NAT reflection, though, because the WAN address of the PFSense box is NOT 1.2.3.4 but rather an arbitrary IP assigned by the next layer. The 1.2.3.0/24 network is several routers away. I've tried multiple configurations of port forwarding, NAT, firewall rules, etc. all without success.



What I want to do is intercept all traffic bound for 1.2.3.4 at the PFSense router and have it sent to 192.168.1.2 instead. Bonus points if the traffic doesn't actually have to go through PFSense but is instead routed by the switch to avoid bandwidth bottlenecks (maybe some combination of DHCP/ARP could accomplish this).



Is this possible, and what's the best way to achieve it?






┌──────────────────────┐    ╔══════════════════╗
│ Server (192.168.1.2) │    ║     Client 2     ║
└──────────────────────┘    ║    (Internal)    ║
        │                   ║ Uses 192.168.1.2 ║
        │                   ║SHOULD use 1.2.3.4║
        │                   ╚══════════════════╝
        │                            │
        ├────────────────────────────┘
        │
┌──────────────────────┐
│    PFSense Router    │
│ LAN: 192.168.1.1/24  │
│ WAN: 192.168.2.2     │
│ Virtual: 192.168.2.4 │
└──────────────────────┘
        │
┌────────────────────┐
│    Router (NAT)    │   This router transparently converts
│ LAN: 192.168.2.1/24│◀─ incoming WAN traffic bound for 1.2.3.X
│ WAN: 1.2.3.1/24    │   to the equivalent 192.168.2.X address.
└────────────────────┘
        │
╔══════════════╗
║   Client 1   ║
║  (External)  ║
║ Uses 1.2.3.4 ║
╚══════════════╝


To clarify, for external clients, the sequence is as follows:




  1. Access 1.2.3.4


  2. NAT router converts to 192.168.2.4

  3. PFSense router receives traffic at 192.168.2.4 and maps internally to 192.168.1.2


Answer



The only way I know how to do this is with NAT. You will have to specify the ports that you will be using. I just tested this on PFsense 2.3.2.




  1. Create a NAT rule

  2. Specify destination as Single Host or Alias: 1.2.3.4

  3. Port needs to be the port that you intend to use (From and To) (tested with 80)


  4. Redirect target IP needs to be 192.168.1.2

  5. Redirect target port can't be "any", so you need to specify the port from above

  6. NAT reflection needs to be "Enable (Pure NAT)" <- IMPORTANT!



If you have an application that needs several ports, you will have to specify every port.



Hope this helps!





apache 2.2 - apache2 configuration for blog.example.com not working

I have my blog running at a server with ip 10.10.10.10(not real).



There is only one virtual host for this blog on the server.



Here is the apache configuration:



<VirtualHost *:80>
    ServerName blog.example.com
    ServerAlias blog.example.com
    ServerAdmin ab@example.com
    DocumentRoot /var/www/
    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^[^.]+.example.com$
    RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
    RewriteRule ^([^.]+).example.com/(.*) /var/www/$1
    DirectoryIndex index.php

    <Directory /var/www/>
        AllowOverride All
        Order Deny,Allow
        Allow from all
    </Directory>
</VirtualHost>




My DNS blog.example.com is pointing to this ip address 10.10.10.10 .



When I load blog.example.com the first time it loads fine with no problems. But whenever I click a link it shows a URL like http://10.10.10.10/login.php etc.



I am a newbie. Please help me set this up.
Is there any problem with the rewrite rule?

internet - Can't connect to server, others can



I've set up a Webserver(port 80) and a Minecraft server. Me and my friends play on this server regularly and haven't had any problems connecting before this.



Now, one friend complains about not being able to connect. In order to debug this I made him connect to my webserver (different computer, same IP), but he says he can't.
Pinging doesn't work either.



Other friends, however, have no difficulty connecting, so it's not a router issue (right?).

In order to be sure, we both did a system reset on both our routers/modems, still nothing.



I did some research on why this could be, and some say that being on the same subnet results in this kind of behavior. How can I test this?



To sum up:
At first, everybody can connect.
Then out of nowhere, A can't ping B, B can't ping A. But C,D, etc. can ping A and B just fine.



What can I do to fix this?




Edit: We are connected over the internet.



Edit 2: Traceroute
$ traceroute 80.56.241.61
traceroute to 80.56.241.61 (80.56.241.61), 64 hops max, 52 byte packets
1 10.15.141.1 (10.15.141.1) 10.534 ms 9.016 ms 8.072 ms
2 p3129.net.upc.nl (212.142.3.129) 8.804 ms 9.535 ms 9.301 ms
3 nl-ams05a-rd2-ae-30-2368.aorta.net (84.116.244.73) 11.016 ms 11.831 ms 27.244 ms
4 84.116.244.66 (84.116.244.66) 12.686 ms 13.742 ms 11.674 ms
5 84.116.135.30 (84.116.135.30) 11.933 ms 12.267 ms 12.614 ms
6 p60030.net.upc.nl (212.142.60.30) 13.194 ms 12.476 ms 10.680 ms
7 * * *
8 * * *
9 * * *



Edit 3: Called UPC
Fed up, I called UPC again. They told me everything was fine and that it's most likely my server's or my router's problem. Thing is, I know for sure it isn't, since I reset them both. He did admit, however, that it could be a subnet problem, but they can't do much about that other than wait until I get a new IP.



Anybody got ANY ideas that don't just mean: wait?



Edit 4: UPC caved, they're investigating it now.


Answer




If your friends are connecting over the public internet, then try looking at the output of tracert (Windows) or traceroute (Unix) to see where the connection is being blocked. If your IP address is 1.2.3.4, then:



tracert 1.2.3.4


or



traceroute 1.2.3.4



If you are all connected to the same LAN then check that the IP addresses are all in the same block and that the netmask for your friend's computer is the same as everyone else's.


How to disable all bounce back email in exim 4.69

I have set up an email server to send out solicited newsletters.



There should be no "regular" users of this server, so it is not desirable to send bounce notifications back to the recipient. Especially so since I am tracking bounces myself by parsing the log files periodically.



What I want is to unconditionally prevent exim from ever sending a bounce notification email back to a sender.



How can I do this?



Thank you!




(I accidentally posted this to superuser before posting it here, disregard that if you come across)






What I want is an email server that will accept all incoming emails, deliver them accordingly (that is, remotely or locally) and not send a bounce notification to the sender upon bounce.



I log bounces myself, in a database. The only function bounce messages have in my setting is to waste resources and bandwidth.



I need to send emails fast; using exiwhat during a run, I see a significant number of deliveries to bounce@host.com. I could potentially increase my email throughput by 10-20% if all bounce emails were eliminated.

Apache/2.2.20 (Ubuntu 11.10) gzip compression won't work on php pages, content is chunked

I'm running into a problem with a new production server to which I'm transferring projects. The HTML output of the PHP applications isn't compressed by the Apache mod_deflate module. Other resources, such as stylesheet and JavaScript files, and even HTML pages served with the same Content-Type (text/html) as the PHP output, are compressed!



The projects use the following rules (from HTML5 boilerplate) in the .htaccess:





# Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/



SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding



# HTML, TXT, CSS, JavaScript, JSON, XML, HTC:

FilterDeclare COMPRESS

FilterProvider COMPRESS DEFLATE resp=Content-Type $text/html
FilterProvider COMPRESS DEFLATE resp=Content-Type $text/css
FilterProvider COMPRESS DEFLATE resp=Content-Type $text/plain
FilterProvider COMPRESS DEFLATE resp=Content-Type $text/xml
FilterProvider COMPRESS DEFLATE resp=Content-Type $text/x-component
FilterProvider COMPRESS DEFLATE resp=Content-Type $application/javascript
FilterProvider COMPRESS DEFLATE resp=Content-Type $application/json
FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xml
FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xhtml+xml
FilterProvider COMPRESS DEFLATE resp=Content-Type $application/rss+xml

FilterProvider COMPRESS DEFLATE resp=Content-Type $application/atom+xml
FilterProvider COMPRESS DEFLATE resp=Content-Type $application/vnd.ms-fontobject
FilterProvider COMPRESS DEFLATE resp=Content-Type $image/svg+xml
FilterProvider COMPRESS DEFLATE resp=Content-Type $image/x-icon
FilterProvider COMPRESS DEFLATE resp=Content-Type $application/x-font-ttf
FilterProvider COMPRESS DEFLATE resp=Content-Type $font/opentype
FilterChain COMPRESS
FilterProtocol COMPRESS DEFLATE change=yes;byteranges=no





We have a testing machine that runs the same Apache, OS and PHP version. On that machine the compression works just fine on the PHP output. I've checked and compared Apache and PHP config files, all the same as far as I can tell.



I've tried several manners of outputting the content of the PHP, using output buffering or just plain echoing the content. Same thing, no compression.



Example response headers of a PHP output:




HTTP/1.1 200 OK

Date: Wed, 25 Apr 2012 23:30:59 GMT
Server: Apache
Accept-Ranges: bytes
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: public
Pragma: no-cache
Vary: User-Agent
Keep-Alive: timeout=5, max=98
Connection: Keep-Alive
Transfer-Encoding: chunked

Content-Type: text/html; charset=utf-8


Example of response headers on a css file:




HTTP/1.1 200 OK
Date: Wed, 25 Apr 2012 23:30:59 GMT
Server: Apache
Last-Modified: Mon, 04 Jul 2011 19:12:36 GMT

Vary: Accept-Encoding,User-Agent
Content-Encoding: gzip
Cache-Control: public
Expires: Fri, 25 May 2012 23:30:59 GMT
Content-Length: 714
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/css; charset=utf-8
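
For what it's worth, a quick way to compare what the server negotiates for the two resource types is to request them with an explicit Accept-Encoding header (the URLs are placeholders):

    curl -sI -H 'Accept-Encoding: gzip,deflate' http://example.com/page.php | grep -iE 'content-encoding|transfer-encoding|vary'
    curl -sI -H 'Accept-Encoding: gzip,deflate' http://example.com/style.css | grep -iE 'content-encoding|transfer-encoding|vary'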



Does anyone have a clue, or has anyone experienced the same "problem"?



thanks!

Friday, January 19, 2018

linux - Gmail bouncing postfix mail

I followed this tutorial to get email working on my VPS. I've been having problems sending email to Gmail.



I can receive emails fine, but any mail that is sent to Gmail is bounced with the following message:




Our system has detected that this message
is 550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to
Gmail, 550-5.7.1 this message has been blocked. Please visit 550 5.7.1
https://support.google.com/mail/answer/188131 for more information.
qa9si13920205vdb.18 - gsmtp (in reply to end of DATA command)





I have updated my PTR record to match my IP address.



Before I was getting messages in mail.log that looked like this:



postfix/smtp[3160]: connect to gmail-smtp-in.l.google.com[2607:f8b0:400c:c06::1b]:25: Connection timed out


and after searching around I found that this was a problem with IPv6, which I've now disabled on my box. I've stopped getting these messages but I'm still receiving bounce messages.




Here is my main.cf:



# See /usr/share/postfix/main.cf.dist for a commented, more complete version


# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname


smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h


readme_directory = no

# TLS parameters
#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
#smtpd_use_tls=yes
#smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
#smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtpd_tls_cert_file=/etc/ssl/private/_com/ssl-bundle.crt

smtpd_tls_key_file=/etc/ssl/private/_com.key
smtpd_use_tls=yes
smtpd_tls_auth_only = yes
smtpd_tls_loglevel = 1

smtp_address_preference = ipv4

#Enabling SMTP for authenticated users, and handing off authentication to Dovecot
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth

smtpd_sasl_auth_enable = yes

smtpd_recipient_restrictions =
permit_sasl_authenticated,
permit_mynetworks,
reject_unauth_destination

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.


myhostname =
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
#mydestination = .com, , localhost., localhost
mydestination = localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +

inet_interfaces = all
virtual_transport = lmtp:unix:private/dovecot-lmtp

#Virtual domains, users, and aliases
virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf


Here are the logs I get when trying to send email to gmail:




Jun  5 11:27:52  postfix/master[5118]: terminating on signal 15
Jun 5 11:27:53 postfix/master[5326]: daemon started -- version 2.11.3, configuration /etc/postfix
Jun 5 11:28:09 dovecot: imap-login: Login: user=<@.com>, method=PLAIN, rip=127.0.0.1, lip=127.0.1.1, mpid=5346, TLS, session=<0w+B8MYXwQB/AAAB>
Jun 5 11:28:09 postfix/submission/smtpd[5347]: connect from localhost[127.0.0.1]
Jun 5 11:28:09 postfix/submission/smtpd[5347]: Anonymous TLS connection established from localhost[127.0.0.1]: TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
Jun 5 11:28:09 postfix/submission/smtpd[5347]: 9A97B6241F: client=localhost[127.0.0.1], sasl_method=LOGIN, sasl_username=@.com
Jun 5 11:28:09 postfix/cleanup[5352]: 9A97B6241F: message-id=.com>
Jun 5 11:28:09 postfix/qmgr[5335]: 9A97B6241F: from=<@.com>, size=1032, nrcpt=1 (queue active)
Jun 5 11:28:09 postfix/submission/smtpd[5347]: disconnect from localhost[127.0.0.1]

Jun 5 11:28:09 dovecot: imap(@.com): Disconnected: Logged out in=912 out=460
Jun 5 11:28:09 dovecot: imap-login: Login: user=<@.com>, method=PLAIN, rip=127.0.0.1, lip=127.0.1.1, mpid=5357, TLS, session=
Jun 5 11:28:09 dovecot: imap(@.com): Disconnected: Logged out in=333 out=1227
Jun 5 11:28:09 dovecot: imap-login: Login: user=<@.com>, method=PLAIN, rip=127.0.0.1, lip=127.0.1.1, mpid=5358, TLS, session=
Jun 5 11:28:09 dovecot: imap(@.com): Disconnected: Logged out in=90 out=861
Jun 5 11:28:10 postfix/smtp[5354]: 9A97B6241F: to=<@gmail.com>, relay=gmail-smtp-in.l.google.com[74.125.141.26]:25, delay=0.95, delays=0.06/0.03/0.22/0.65, dsn=5.7.1, status=bounced (host gmail-smtp-in.l.google.com[74.125.141.26] said: 550-5.7.1 [ 12] Our system has detected that this message is 550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail, 550-5.7.1 this message has been blocked. Please visit 550 5.7.1 https://support.google.com/mail/answer/188131 for more information. qa9si13920205vdb.18 - gsmtp (in reply to end of DATA command))
Jun 5 11:28:10 postfix/cleanup[5352]: 9003A62422: message-id=<20150605152810.9003A62422@>
Jun 5 11:28:10 postfix/qmgr[5335]: 9003A62422: from=<>, size=3474, nrcpt=1 (queue active)
Jun 5 11:28:10 postfix/bounce[5359]: 9A97B6241F: sender non-delivery notification: 9003A62422
Jun 5 11:28:10 postfix/qmgr[5335]: 9A97B6241F: removed

Jun 5 11:28:10 dovecot: lmtp(5362): Connect from local
Jun 5 11:28:10 dovecot: lmtp(5362, @.com): nFazJArAcVXyFAAALAfe6g: msgid=<20150605152810.9003A62422@>: saved mail to INBOX
Jun 5 11:28:10 postfix/lmtp[5361]: 9003A62422: to=<@.com>, relay=[private/dovecot-lmtp], delay=0.06, delays=0/0.02/0.01/0.03, dsn=2.0.0, status=sent (250 2.0.0 <@.com> nFazJArAcVXyFAAALAfe6g Saved)
Jun 5 11:28:10 dovecot: lmtp(5362): Disconnect from local: Successful quit
Jun 5 11:28:10 postfix/qmgr[5335]: 9003A62422: removed

Does Apache Benchmark allow intervals between each request?



I have a server which blocks burst requests (if the same IP address accesses the server more than 3 times a second, it will serve a 500 error).



So when I use Apache benchmark, I see a lot of failed requests, even when I set concurrency to 1.




Is there an option to set an interval between each request? Or is there similar benchmarking tools that offers this interval functionality?


Answer



You're really not going to accomplish any meaningful load testing with concurrency limited to 3. You probably need to remove the requests per second restriction for the IP that you're testing from.
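
ab itself has no per-request delay option, but siege does, so a throttled run could look something like this (a sketch; the URL is a placeholder):

    siege -c 1 --delay=1 -r 100 http://example.com/
    # one simulated user, a random pause of up to 1 second between requests, 100 repetitions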


yum - CentOS 6: installing .el5 rpms?




The system has CentOS 6.2 32-bit installed. I'm having some issues running 'yum update' - I get a series of errors that point to incorrect RPMs being installed (see this link: http://www.centos.org/modules/newbb/print.php?form=1&topic_id=34994&forum=56&order=ASC&start=0 )




[root@orange yum]# yum update
Finished Dependency Resolution
Error: Package: python-paramiko-1.7.6-1.el5.rf.noarch (rpmforge)
Requires: python(abi) = 2.4
Installed: python-2.6.6-29.el6.i686 (@anaconda-CentOS-201112130233.i386/6.2)
python(abi) = 2.6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest




I was following the thread and entered this cmd: rpm -qa | fgrep .el5 | less and found a number of entries.




[root@orange ~]# rpm -qa | grep .el5 | sort
fping-3.1-1.el5.rf.i386
hwloc-1.3-1.el5.rf.i386
libedit-20090923-3.0_1.el5.rf.i386
libffi-3.0.9-1.el5.rf.i386
libssh2-1.2.7-1.el5.rf.i386
nagios-nrpe-2.12-1.el5.rf.i386
nagios-plugins-1.4.15-2.el5.rf.i386
perl-Compress-Raw-Bzip2-2.037-1.el5.rf.i386
perl-Compress-Raw-Zlib-2.037-1.el5.rf.i386
perl-Crypt-DES-2.05-3.2.el5.rf.i386
perl-Data-UUID-1.203-1.el5.rf.i386
perl-Log-Message-Simple-0.06-1.el5.rf.noarch
perl-Module-Build-0.3607-1.el5.rf.noarch
perl-Module-CoreList-2.25-1.el5.rf.noarch
perl-Net-SNMP-5.2.0-1.2.el5.rf.noarch
perl-Test-Harness-3.22-1.el5.rf.noarch
perl-Test-Simple-0.98-1.el5.rf.noarch
perl-Time-Piece-1.20-1.el5.rf.i386
perl-YAML-0.72-1.el5.rf.noarch
portreserve-0.0.5-2.el5.rf.i386
powertop-1.13-1.el5.rf.i386





My repositories:




[root@orange yum]# yum repolist
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
* base: linux.mirrors.es.net
* extras: centos.mirror.facebook.net
* rpmforge: mirror.hmc.edu
* updates: mirrors.cat.pdx.edu
repo id repo name
base CentOS-6 - Base
extras CentOS-6 - Extras
rpmforge RHEL 6 - RPMforge.net - dag
updates CentOS-6 - Updates




So my questions are:
1) How did I end up with these rpms?
2) I've tried to 'yum erase ' and then reinstall - I get the .el5 file again.




What am I doing wrong?


Answer



.rf. is telling you it's from RPMforge. Not sure why they were installed, but yum figured they were compatible. Is there a problem with those packages or just with yum update? I would disable RPMforge for the update: yum update --disablerepo=rpmforge. If the package exists in both repositories you will also end up with trouble.
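
To find out which installed packages no longer match any enabled repository, and to replace one with a build from a different repo where one exists, something like this may help (package-cleanup comes from yum-utils; perl-Net-SNMP is just one of the .el5 packages from the list above):

    yum install yum-utils
    package-cleanup --orphans                           # installed packages not found in any enabled repo
    yum remove perl-Net-SNMP
    yum install --disablerepo=rpmforge perl-Net-SNMP    # only works if another enabled repo (e.g. EPEL) carries it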


apache 2.4 - Let's Encrypt SSL Certificate File Not Found Error, but still working



I'm running SSL Certificates from Let's Encrypt. I've got them installed on my Ubuntu machine running Apache. The setup works fine and I can launch the website, see the green padlock and even got an A+ on SSL Labs.



The problem is that when I do apachectl configtest the server would return a file not found error:



SSLCertificateFile: file '/etc/letsencrypt/live/www.example.com/fullchain.pem' does not exist or is empty.



But sudo service apache2 restart works just fine.



I got this question running at Let's Encrypt Community but the issue hasn't been resolved yet.



sudo cat /etc/letsencrypt/live/www.example.com/fullchain.pem works, returns valid certificate details.



sudo openssl x509 -text -noout -in /etc/letsencrypt/live/www.example.com/fullchain.pem


does not work and returns the error below:




Error opening Certificate /etc/letsencrypt/live/www.example.com/fullchain.pem
139774254929568:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/etc/letsencrypt/live/www.example.com/fullchain.pem.','r')
139774254929568:error:2007402:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
unable to load certificate


Any ideas on why I'm getting errors on apachectl configtest and openssl?



Thanks guys!



Answer



After several sleepless nights, I finally got it to work. (overkill statement) We all know it was permissions, but exactly where was something to check.



I kept on working with /etc/letsencrypt/live and the directories and files under it. I kept changing permissions from the original to 0755 and 0777. What I did not immediately see was that /etc/letsencrypt/live is a set of symlinks into /etc/letsencrypt/archive, and that directory had 0700 permissions. That's why the file couldn't be read. After changing the permissions on /etc/letsencrypt/archive to 0755, apachectl configtest responded with Syntax OK.
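
If you run into the same thing, a quick way to see which component of the path is blocking the read is namei, and the fix that worked here was opening up the two directories (a sketch; the domain is a placeholder):

    namei -l /etc/letsencrypt/live/www.example.com/fullchain.pem   # shows owner/permissions of every path component
    sudo chmod 0755 /etc/letsencrypt/archive /etc/letsencrypt/live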



Although the original issue was resolved, I will refer this back to Let's Encrypt because this was all Auto Installation of Certificates. Something like this should not happen in "auto". But my setup might have something to do with the permission issue since I installed it using a non-root user (but I did sudo).



Hope this helps someone.


Thursday, January 18, 2018

How to enable CORS in Nginx

I have tried every tutorial on the internet and on serverfault regarding this. No matter what I do, CORS is not working in nginx




Here is my example.conf




server {
root /var/www/html;
index index.php index.html index.htm index.nginx-debian.html;
server_name www.arcadesite.io arcadesite.io;

    location = /favicon.ico { log_not_found off; access_log off; }

location = /robots.txt { log_not_found off; access_log off; allow all; }
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}


location / {
try_files $uri $uri/ /index.php$is_args$args;
add_header Access-Control-Allow-Origin *;

}

location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
}

location ~ /\.ht {
deny all;
}


location ~* \.(eot|ttf|woff|woff2)$ {
add_header Access-Control-Allow-Origin *;
}


listen 443 ssl http2;
ssl_certificate /etc/letsencrypt/live/arcadesite.io/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/arcadesite.io/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;

ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;


include rocket-nginx/default.conf;



}
server {
    if ($host = www.arcadesite.io) {
        return 301 https://$host$request_uri;
    }

    if ($host = arcadesite.io) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name www.arcadesite.io arcadesite.io;
    return 404;

    include rocket-nginx/default.conf;
}
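
After reloading nginx, you can confirm whether the header is actually emitted for a given resource with curl (the origin and path here are placeholders):

    curl -sI -H 'Origin: https://example.org' https://arcadesite.io/assets/app.js | grep -i access-control

Two nginx behaviours worth keeping in mind here: an add_header inside a location block suppresses add_header directives inherited from the server block, and add_header only applies to 2xx/3xx responses unless the always flag is appended.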

Wednesday, January 17, 2018

linux - Accessing the partitions on an LVM volume



Suppose you have an LVM volume /dev/vg0/mylv. You have presented this as a virtual disk to a virtualised or emulated guest system. During installation the guest system sees it as /dev/sda and partitions it into /dev/sda{1,2,5,6} and completes the installation. Now at some point you need to access those filesystems from within the host system, without running the guest system. fdisk sees these partitions just fine:




# fdisk -l /dev/vg0/mylv
Device Boot Start End Blocks Id System
/dev/vg0/mylv1 2048 684031 340992 83 Linux
/dev/vg0/mylv2 686078 20969471 10141697 5 Extended
/dev/vg0/mylv5 686080 8290303 3802112 83 Linux
/dev/vg0/mylv6 8292352 11980799 1844224 83 Linux


However, the devices such as /dev/vg0/mylv1 do not actually exist. I guess that because they're within an LV, the OS does not recognise this type of nesting by default. Is there any way I can prod Linux so that /dev/vg0/mylv1 or equivalent appears and thus becomes mountable within the host system?




I understand that it's possible with qemu-nbd, and will use this if necessary. However, I was hoping for something more direct if possible, rather than simulating a network block device and attaching that.


Answer



I believe the tool you're looking for is kpartx.



The general procedure is:




  1. List partitions in the disk image:




    kpartx -l /dev/vg0/mylv

  2. Add the partitions to device-mapper:



    kpartx -a /dev/vg0/mylv

  3. Mount the partition you're interested in:



    mount -o ro /dev/mapper/loop0p5 /mnt
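
When you are done with the filesystems, unmount them and remove the mappings again so the LV is free for the guest (same device as above):

    umount /mnt
    kpartx -d /dev/vg0/mylv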



raid - ESXi local storage recommendation

We are going to buy an HP ProLiant DL380p Gen8 server, and we want to run VMware ESXi 5.1, but I have a problem choosing the correct storage given our limited budget. We have the choice of 4x2TB SATA, 4x300GB 10K SAS, or a NAS (with iSCSI support, such as WD or Synology), but the prices of these options are completely different, so I want to know:




  1. How much difference is there between these storage options in the VMware world? (Our main problem is budget.)

  2. Which RAID level should I use? I heard that RAID 5 has reduced performance?

  3. Because these are completely bare-metal drives, how should I partition them? (Assume I use RAID 5 with 4x2TB.) I want to install 2 Windows Server 2008 R2 VMs for SQL Server databases.

  4. Is it best practice to install the VMware hypervisor on a USB drive?



What is the best practice for choosing storage for VMware ESXi?

Tuesday, January 16, 2018

cdn - How does "Cloud Files" from Rackspace know about my CNAME record?

TLDR: when I make this DNS record:



whatever.mydomain.com.  IN CNAME biglongjibberstring.r89.cf2.rackcdn.com.



How does Rackspace know that this random host name I've chosen should go to my Cloud Files container/account as opposed to another customer?






I just tested a simple example of using Cloud Files with Rackspace. I made a Cloud Files "Container" and enabled the "Static Website" option. I uploaded an example image and an example index.html file. These files in this container are now available at this crazy-long URL, which the UI tells me I can use as the CNAME for my domain:



Cloud Files container settings



And sure enough, adding a CNAME record like:




whatever.mydomain.com. IN CNAME biglongjibberstring.r89.cf2.rackcdn.com.



Browsing to http://whatever.mydomain.com/ works (tested the home page and the image). But how could it possibly know that when it sees a request for "whatever.mydomain.com" (in the HTTP Host header) that it is meant for my particular cloud files container?



It seems to have something to do with DNS, since if I just put "blah.example.com" in my hosts file with the IP address that biglongjibberstring.r89.cf2.rackcdn.com. resolves to - that does not work (gives an error about Invalid URL - appears to be coming from Akamai). The only way I could possibly see this working is if somehow the DNS lookup for whatever.mydomain.com was somehow transmitting back to Rackspace/Akamai DNS servers this relationship between "whatever.mydomain.com" and "biglongjibberstring.r89.cf2.rackcdn.com.". But I've never seen this approach and I don't even think that DNS lookup that goes back to Rackspace/Akamai actually contains the information necessary for this (although I could be wrong about that).



Does anyone know what kind of black magic is going on here?

Monday, January 15, 2018

linux - When exim4 sends HELO/EHLO, how do I configure which host name it sends?




Mails from my system are being rejected when the receiving server does HELO checking. I believe my system is sending the wrong domain name. I'm running exim4. Googling for anything about exim4 and domains yields a nightmarish list of irrelevant results. Similarly, googling for HELO rejections yields a horde of outlook users who need to turn on SMTP authentication. I cannot for the life of me figure out this simple question: which hostname is exim sending and how do I change it?



Unfortunately, I can't watch what exim is sending over the wire, so I have no way to debug this myself. I'm hoping someone has had this problem and just knows :).


Answer



Assuming the error is complaining about the HELO/EHLO data, you want to use the helo_data option on the smtp transport. The default is $primary_hostname.



remote_smtp:
driver = smtp
helo_data = host.example.com



More information is available in the manual.
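
To see what value your installation currently uses, you can query the running configuration (on Debian the binary may be called exim4; the output shown is illustrative):

    exim -bP primary_hostname
    # primary_hostname = myhost.example.com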



If this isn't the case, you probably want to include some example rejection messages. Just the three digit code and the string after that.


linux - Symlink - Permission Denied

I'm facing an interesting problem with plenty of Permission Denied outputs when using SymLinks



Linux: Slackware 13.1



Directory with Symlink:





root@Tower:/var/lib# ls -lah
drwxr-xr-x 8 root root 0 2012-12-02 20:09 ./
drwxr-xr-x 15 root root 0 2012-12-01 21:06 ../
lrwxrwxrwx 1 ntop ntop 21 2012-12-02 20:09 ntop -> /mnt/user/media/ntop6/


Symlinked Directory:




root@Tower:/mnt/user/media# ls -lah

drwxrwx--- 1 nobody users 1.4K 2012-12-02 19:28 ./
drwxrwx--- 1 nobody users 128 2012-11-18 16:06 ../
drwxrwxrwx 1 ntop ntop 320 2012-12-02 20:22 ntop6/


What I have done:




  • I have used chown -h ntop:ntop on the ntop directory in /var/lib

  • Just to be sure, I have chmod 777 to both directories




Permission denied actions:




root@Tower:/var/lib# sudo -u ntop mkdir /var/lib/ntop/test
mkdir: cannot create directory `/var/lib/ntop/test': Permission denied


Any ideas?
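
For what it's worth, a quick way to see which component of the path refuses access to the ntop user is namei (paths from the example above; every directory on the way must grant execute to ntop, which drwxrwx--- nobody:users on /mnt/user/media would not):

    namei -l /var/lib/ntop/
    sudo -u ntop ls /mnt/user/media    # reproduces the denial directly if a missing traverse bit is the culprit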

email - FreeBSD: system is sending mail for any user to an external smtp - why?

There is a problem with one of my jails. All jails send their mail for root to root, except one jail (nginx). This jail sends mail to root@jailname.domain.tld. My Postfix is configured to accept those mails, but I can't find the reason why only this jail is sending to the relay. (Strange thing: if I send a mail from the jail nginx, the mail will be signed with DKIM. If I use ~/.forward in another jail than nginx to reach my Postfix, the mail will not be DKIM-signed.)
Using ~/.forward in the jail nginx is ignored.




The jail is sending every mail to the external relay (even for users that don't exist!)



In all jails I set sendmail_enable=NO and sendmail isn't running. Also, pkg info doesn't list any 3rd-party mailer.



Test from this Jail (Jailname: nginx):



mail -v root
Subject: test
test

.
EOT
root... Connecting to [127.0.0.1] via relay...
220 mail.domain.tld ESMTP TheTardis
>>> EHLO nginx.domain.tld
250-mail.domain.tld
250-PIPELINING
250-SIZE 52428800
250-VRFY
250-ETRN

250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-DSN
250 SMTPUTF8
>>> STARTTLS
220 2.0.0 Ready to start TLS
>>> EHLO nginx.domain.tld
250-mail.domain.tld
250-PIPELINING

250-SIZE 52428800
250-VRFY
250-ETRN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-DSN
250 SMTPUTF8
>>> MAIL From: SIZE=29
250 2.1.0 Ok
>>> RCPT To:

>>> DATA
250 2.1.5 Ok
354 End data with .
>>> .
250 2.0.0 Ok: queued as 52FAC2BDC6E8
root... Sent (Ok: queued as 52FAC2BDC6E8)
Closing connection to [127.0.0.1]
>>> QUIT
221 2.0.0 Bye



Test from any other Jail:



# mail -v root
Subject: test
test
.
EOT
root... Connecting to [127.0.0.1] via relay...
220 mysql.domain.tld ESMTP Sendmail 8.15.2/8.15.2; Tue, 7 May 2019 17:05:05 +0200 (CEST)

>>> EHLO mysql.domain.tld
250-mysql.domain.tld Hello localhost [127.0.0.1], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-8BITMIME
250-SIZE
250-DSN
250-ETRN
250-STARTTLS
250-DELIVERBY

250 HELP
>>> STARTTLS
220 2.0.0 Ready to start TLS
>>> EHLO mysql.domain.tld
250-mysql.domain.tld Hello localhost [127.0.0.1], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-8BITMIME
250-SIZE
250-DSN

250-ETRN
250-DELIVERBY
250 HELP
>>> MAIL From: SIZE=29
250 2.1.0 ... Sender ok
>>> RCPT To:
>>> DATA
250 2.1.5 ... Recipient ok
354 Enter mail, end with "." on a line by itself
>>> .

250 2.0.0 x47F55hU031641 Message accepted for delivery
root... Sent (x47F55hU031641 Message accepted for delivery)
Closing connection to [127.0.0.1]
>>> QUIT
221 2.0.0 mysql.domain.tld closing connection


So, you can see that the jail nginx relays its mail to the mail server while the other jails don't.
I don't know why.
I tried several ideas to find the cause of the relaying:




# cat ~/.forward
cat: /root/.forward: No such file or directory



# cat ~/.mailrc
cat: /root/.mailrc: No such file or directory



# mail -d root
say: user = root, homedir = /root
Sendmail arguments: "send-mail" "-i" "root"



/etc/mail/ is untouched. The timestamps point to the time the jail was built. Also there is no .mc file except the initially existing freebsd.mc and freebsd.submit.mc. Because of this there is also no special directive in /etc/mail/aliases.




The directive to relay mail to the external mail server must come from outside /etc/mail, but I have no clue where to search anymore...



Any ideas?



edit1: I did grep -rl 10.23.102.251 / (the mailserver-IP) and grep -rl mail.domain.tld / last night. Each ran for about four hours and found nothing.



edit2: sendmail is served by:



    # cat /etc/mail/mailer.conf


# $FreeBSD: releng/11.1/etc/mail/mailer.conf 93858 2002-04-05 04:25:14Z gshapiro $
#
# Execute the "real" sendmail program, named /usr/libexec/sendmail/sendmail
#
sendmail /usr/libexec/sendmail/sendmail
send-mail /usr/libexec/sendmail/sendmail
mailq /usr/libexec/sendmail/sendmail
newaliases /usr/libexec/sendmail/sendmail
hoststat /usr/libexec/sendmail/sendmail

purgestat /usr/libexec/sendmail/sendmail
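
(One more hedged suggestion for anyone debugging this: FreeBSD's base sendmail compiles its smart-host setting into the .cf files as a line beginning with DS, so grepping for that under /etc/mail would at least confirm whether a relay is defined there.)

# look for a smart-host ("DS") line in the compiled sendmail configuration
grep '^DS' /etc/mail/*.cf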

Sunday, January 14, 2018

debian - Is it possible and wise to put the grub bios partition on a software raid?

In a two-disk setup



/dev/sda
/dev/sdb


I created 3 partitions per disk:




bios_grub   1mb
linux raid 512mb md0 (/boot - ext3)
linux raid rest md1 (physical volume for lvm)


The answer at https://unix.stackexchange.com/a/218384 states that it is not possible to create a mirrored RAID for the BIOS GRUB partition:




You will need hardware RAID if you want the BIOS boot partition to be mirrored.




But https://askubuntu.com/a/57010 makes it look possible, though without explaining how it works:




You will need hardware RAID if you want the BIOS boot partition to be mirrored. You can, however, install GRUB to multiple drives; just run grub-install in the target OS on each drive that has a BIOS boot partition (give it the entire drive, like /dev/sda, as an argument; grub-install will figure out the location of the BIOS boot partition from the GPT).




So is it possible to create a software RAID for the bios_grub partition? If yes, how? Currently I use FAI setup-storage to create the partitions.
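
(For reference, the multi-drive approach from the second quoted answer would look roughly like this on Debian; a sketch only, and the device names are assumptions.)

# install GRUB onto every disk that carries a bios_grub partition, so either
# disk can boot on its own; grub-install locates the bios_grub partition
# from the GPT by itself
grub-install /dev/sda
grub-install /dev/sdb
update-grub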

untagged - Learning Curve and time needed to install/manage Centos Server





A complete newbie question, please. I have always had shared hosting. Now I am about to get a dedicated CentOS server, and I need to know whether I should get a managed one or not. Two questions:




  1. What's the learning curve (approx. days/months), assuming I am a fast learner?

  2. If I manage it myself, how much of my time (approx. per day) do I need to commit to managing the server (assuming normal circumstances)?




Thank you for any help!


Answer



If I were in your shoes, I would rent a very cheap VPS (less than $5/month) and have the provider install your intended CentOS distribution on it. Use this to determine your comfort level working with the platform before renting a dedicated server.



Also, please note that the amount of time spent managing a server will vary depending on what is running on the server.


linux - Server suddenly became too slow with timeout error

My server suddenly became slow; all the websites on it are affected at the HTTP level.
I have checked all the logs, but there is nothing that indicates why this is happening. If anyone has any suggestions, please share.



My setup :


Centos 6.8
Nginx 1.10
php-fpm 7.0
Mariadb 10.1

Memory: 32 GB with 13 GB free

One year later, still the same problem.


Note: The only way I was able to reduce the load time to 10 seconds was by moving all caches into memory on a tmpfs.
- Enabling the Nginx cache jumps the load time back to 100s, whether the cache folder is in memory or not (which I couldn't understand, as it was working fine before).
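
(For context, "moving all cache to memory tmpfs" could look like the following mount; the path and size here are assumptions, not the poster's actual setup.)

# mount a 2 GB tmpfs over the cache directory so cached files live in RAM
mount -t tmpfs -o size=2g tmpfs /var/cache/nginx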

Caching system:
Opcache
Filecache (Codeigniter)
Memcached

Redis



netstat -an | wc -l
22445

netstat -an | grep :80 | wc -l
51196

netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n
1 established)

1 FIN_WAIT1
1 Foreign
2 SYN_RECV
3 LAST_ACK
6 LISTEN
21144 ESTABLISHED
30908 TIME_WAIT


netstat -i 15




Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 218927712 0 0 0 257329662 0 0 0 BMRU
lo 65536 0 310876055 0 0 0 310876055 0 0 0 LRU
eth0 1500 0 218994531 0 0 0 257405202 0 0 0 BMRU
lo 65536 0 311006988 0 0 0 311006988 0 0 0 LRU
eth0 1500 0 219060371 0 0 0 257479672 0 0 0 BMRU
lo 65536 0 311139650 0 0 0 311139650 0 0 0 LRU
eth0 1500 0 219131683 0 0 0 257561956 0 0 0 BMRU

lo 65536 0 311270631 0 0 0 311270631 0 0 0 LRU
eth0 1500 0 219201250 0 0 0 257640969 0 0 0 BMRU
lo 65536 0 311407443 0 0 0 311407443 0 0 0 LRU
eth0 1500 0 219268062 0 0 0 257717388 0 0 0 BMRU
lo 65536 0 311534133 0 0 0 311534133 0 0 0 LRU
eth0 1500 0 219335153 0 0 0 257792649 0 0 0 BMRU
lo 65536 0 311666747 0 0 0 311666747 0 0 0 LRU


iostat 15




avg-cpu:  %user   %nice %system %iowait  %steal   %idle
10.98 0.52 3.15 0.30 0.00 85.05

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 20.79 833.21 490.35 76292548 44898504

avg-cpu: %user %nice %system %iowait %steal %idle
28.67 0.00 9.33 0.83 0.00 61.17


Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 46.67 32.00 873.60 480 13104

avg-cpu: %user %nice %system %iowait %steal %idle
27.82 0.00 10.18 0.15 0.00 61.85

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 25.47 33.60 1692.80 504 25392

avg-cpu: %user %nice %system %iowait %steal %idle

28.84 0.00 10.21 0.06 0.00 60.89

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 11.27 32.53 712.00 488 10680

avg-cpu: %user %nice %system %iowait %steal %idle
27.83 0.00 10.45 0.95 0.00 60.76

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 58.67 18.67 1861.33 280 27920



vmstat 15



procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
12 0 0 11291076 1118568 12297544 0 0 17 10 8 4 11 3 85 0 0
3 0 0 11248336 1118568 12299980 0 0 9 399 29547 52458 27 8 65 0 0
11 0 0 11224000 1118568 12303336 0 0 26 683 29830 52331 27 9 63 1 0
21 0 0 11193280 1118568 12306744 0 0 39 347 31722 67277 31 10 59 0 0
16 0 0 11158436 1118568 12310428 0 0 17 809 30677 62257 27 10 62 1 0
10 0 0 11129768 1118568 12314088 0 0 32 472 32385 61133 30 10 60 0 0
7 2 0 11120616 1118572 12317448 0 0 18 761 31739 66259 29 10 60 0 0


top -b -n 1 | head




top - 20:17:07 up 1 day,  1:12,  1 user,  load average: 3.95, 4.11, 4.29
Tasks: 641 total, 15 running, 626 sleeping, 0 stopped, 0 zombie
Cpu(s): 10.8%us, 2.9%sy, 0.5%ni, 85.3%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 32865012k total, 21206392k used, 11658620k free, 1118552k buffers
Swap: 8388604k total, 0k used, 8388604k free, 12165180k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
27709 mysql 20 0 5189m 1.0g 13m S 559.9 3.1 1800:22 mysqld
2750 nginx 10 -10 437m 42m 1488 R 19.2 0.1 46:05.13 nginx
3135 nginx 10 -10 433m 41m 1496 S 15.4 0.1 46:45.90 nginx



iotop



Total DISK READ: 64.98 K/s | Total DISK WRITE: 667.90 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
884 be/3 root 0.00 B/s 491.00 K/s 0.00 % 4.80 % [jbd2/sda5-8]
6091 be/4 nginx 64.98 K/s 64.98 K/s 0.00 % 0.16 % php-fpm: pool www
16943 be/4 mysql 0.00 B/s 3.61 K/s 0.00 % 0.00 % mysqld
27829 be/4 mysql 0.00 B/s 7.22 K/s 0.00 % 0.00 % mysqld

27862 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 0.00 % mysqld
27887 be/4 mysql 0.00 B/s 238.28 K/s 0.00 % 0.00 % mysqld
2745 be/2 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % nginx: worker process
2747 be/2 nginx 0.00 B/s 7.22 K/s 0.00 % 0.00 % nginx: worker process
2748 be/2 nginx 0.00 B/s 10.83 K/s 0.00 % 0.00 % nginx: worker process
2750 be/2 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % nginx: worker process
2751 be/2 nginx 0.00 B/s 7.22 K/s 0.00 % 0.00 % nginx: worker process
2788 be/2 nginx 0.00 B/s 14.44 K/s 0.00 % 0.00 % nginx: worker process
2872 be/2 nginx 0.00 B/s 10.83 K/s 0.00 % 0.00 % nginx: worker process
2919 be/2 nginx 0.00 B/s 7.22 K/s 0.00 % 0.00 % nginx: worker process

2920 be/2 nginx 0.00 B/s 10.83 K/s 0.00 % 0.00 % nginx: worker process
2991 be/2 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % nginx: worker process
3038 be/2 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % nginx: worker process
3121 be/2 nginx 0.00 B/s 7.22 K/s 0.00 % 0.00 % nginx: worker process
3135 be/2 nginx 0.00 B/s 14.44 K/s 0.00 % 0.00 % nginx: worker process
3218 be/2 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % nginx: worker process
3256 be/2 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % nginx: worker process
3258 be/2 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % nginx: worker process
3316 be/2 nginx 0.00 B/s 10.83 K/s 0.00 % 0.00 % nginx: worker process
3325 be/2 nginx 0.00 B/s 14.44 K/s 0.00 % 0.00 % nginx: worker process

3390 be/2 nginx 0.00 B/s 7.22 K/s 0.00 % 0.00 % nginx: worker process
28264 be/4 mysql 0.00 B/s 57.76 K/s 0.00 % 0.00 % mysqld
27959 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 0.00 % mysqld
28000 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 0.00 % mysqld
5892 be/4 nginx 0.00 B/s 25.27 K/s 0.00 % 0.00 % php-fpm: pool www
5918 be/4 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % php-fpm: pool www
5938 be/4 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % php-fpm: pool www
5958 be/4 nginx 0.00 B/s 28.88 K/s 0.00 % 0.00 % php-fpm: pool www
5981 be/4 nginx 0.00 B/s 43.32 K/s 0.00 % 0.00 % php-fpm: pool www
5990 be/4 nginx 0.00 B/s 57.76 K/s 0.00 % 0.00 % php-fpm: pool www

6037 be/4 nginx 0.00 B/s 28.88 K/s 0.00 % 0.00 % php-fpm: pool www
6062 be/4 nginx 0.00 B/s 21.66 K/s 0.00 % 0.00 % php-fpm: pool www
6114 be/4 nginx 0.00 B/s 3.61 K/s 0.00 % 0.00 % php-fpm: pool www
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/0]
4 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0]
5 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [stopper/0]
6 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/0]
7 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/1]

8 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [stopper/1]
9 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/1]
10 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/1]
11 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/2]
12 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [stopper/2]
13 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/2]
14 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/2]
15 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/3]
16 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [stopper/3]
17 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/3]

18 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/3]
19 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/4]
20 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [stopper/4]
21 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/4]
22 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/4]
23 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/5]
24 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [stopper/5]


Nginx configuration





user  nginx nginx; 
worker_processes 24;
worker_rlimit_nofile 300000;


pid /var/run/nginx.pid;

events {

use epoll;
worker_connections 20192;
multi_accept on;
}

http {
server_tokens off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
set_real_ip_from 127.0.0.1;

real_ip_header X-Forwarded-For;
sendfile on;
tcp_nopush on;
charset UTF-8;


fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;

proxy_http_version 1.1;

proxy_set_header Connection "";
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
fastcgi_send_timeout 600s;
fastcgi_read_timeout 600s;

keepalive_timeout 60;
keepalive_disable none;
max_ranges 1;

reset_timedout_connection on;
tcp_nodelay on;

open_file_cache max=10000 inactive=5m;
open_file_cache_valid 10m;
open_file_cache_min_uses 5;
open_file_cache_errors off;

client_body_buffer_size 1m;
client_header_buffer_size 1m;

client_max_body_size 1m;
large_client_header_buffers 2 1m;

gzip on;
gzip_comp_level 5;
gzip_min_length 1000;
gzip_proxied any;
gzip_vary on;
gzip_types
application/atom+xml

application/javascript
application/json
application/rss+xml
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/svg+xml

image/x-icon
text/css
text/plain
text/x-component;

#limit_req_zone $binary_remote_addr zone=gulag:1m rate=50r/m;

# Upstream to abstract backend connection(s) for PHP.
upstream php {
server unix:/tmp/php-fpm.sock;

server 127.0.0.1:9000;
}

include /etc/nginx/conf.d/*;
}







php-fpm config
http://pasted.co/d6a2fa9c



mysql Configuration




port = 3306
#bind-address =
default-storage-engine = ARIA
aria-pagecache-buffer-size= 2048M

key_buffer = 2024M
query_cache_size = 64M
query_cache_limit = 512M
max_connections = 3000
thread_cache_size = 512
query_cache_min_res_unit = 0
tmp_table_size = 32M
max_heap_table_size = 32M
table_cache=1024
concurrent_insert=2

max_allowed_packet = 64M
sort_buffer_size = 128K
read_buffer_size = 512K
read_rnd_buffer_size = 512K
net_buffer_length = 2K
thread_stack = 512K
wait_timeout = 300
table_definition_cache = 4000
thread_handling = pool-of-threads
host_cache_size = 2000

skip_name_resolve
thread-pool-max-threads= 1000






php.ini
http://pasted.co/523a81db




nginx error log
nothing besides:




[warn] 10063#10063: conflicting server name




mysql error log





also empty besides restart messages




php-fpm error log




[01-Jun-2016 21:54:00] WARNING: [pool www] child 9596, script '/var/www/index.php' (request: "POST /index.php") execution timed out (302.124548 sec), terminating 
[01-Jun-2016 21:54:00] WARNING: [pool www] child 9596 exited on signal 15 (SIGTERM) after 1203.573297 seconds from start
[01-Jun-2016 21:54:00] NOTICE: [pool www] child 10116 started
[01-Jun-2016 22:07:20] WARNING: [pool www] child 9450, script '/var/www/index.php' (request: "GET /index.php") execution timed out (331.919262 sec), terminating

[01-Jun-2016 22:07:25] WARNING: [pool www] child 9450 exited on signal 15 (SIGTERM) after 2009.277917 seconds from start
[01-Jun-2016 22:07:25] NOTICE: [pool www] child 10271 started



Top




top - 22:46:35 up  1:20,  1 user,  load average: 306.50, 283.23, 290.78
Tasks: 801 total, 89 running, 712 sleeping, 0 stopped, 0 zombie
Cpu(s): 85.3%us, 14.2%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.5%si, 0.0%st
Mem: 32865348k total, 13868888k used, 18996460k free, 28104k buffers
Swap: 8388604k total, 727164k used, 7661440k free, 394328k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7962 mysql 20 0 3082m 321m 4072 S 114.1 1.0 82:07.36 mysqld
9405 php 20 0 1387m 924m 8156 R 4.9 2.9 1:33.70 php-fpm
9505 php 20 0 714m 252m 8116 R 4.9 0.8 1:29.31 php-fpm
9448 php 20 0 475m 21m 14m S 4.6 0.1 1:34.66 php-fpm
9533 php 20 0 472m 19m 12m D 4.6 0.1 1:34.74 php-fpm
9628 php 20 0 516m 62m 14m R 4.6 0.2 1:40.33 php-fpm

9685 php 20 0 670m 210m 8156 R 4.6 0.7 1:31.48 php-fpm
9380 php 20 0 712m 253m 7896 R 4.3 0.8 1:28.64 php-fpm
9387 php 20 0 507m 48m 7564 R 4.3 0.2 1:31.58 php-fpm
9395 php 20 0 473m 20m 12m D 4.3 0.1 1:36.94 php-fpm
9435 php 20 0 1597m 1.1g 12m R 4.3 3.6 1:34.38 php-fpm
9461 php 20 0 500m 47m 13m R 4.3 0.1 1:36.33 php-fpm
9493 php 20 0 491m 32m 7560 R 4.3 0.1 1:41.10 php-fpm
9514 php 20 0 477m 25m 14m R 4.3 0.1 1:32.29 php-fpm
9554 php 20 0 513m 59m 13m R 4.3 0.2 1:33.10 php-fpm
9556 php 20 0 512m 57m 12m R 4.3 0.2 1:24.28 php-fpm

9566 php 20 0 473m 15m 7524 D 4.3 0.0 1:33.42 php-fpm
9598 php 20 0 531m 71m 7764 R 4.3 0.2 1:32.87 php-fpm
9620 php 20 0 473m 21m 14m S 4.3 0.1 1:20.62 php-fpm
9653 php 20 0 509m 54m 11m R 4.3 0.2 1:36.90 php-fpm
9677 php 20 0 513m 23m 7392 R 4.3 0.1 1:34.66 php-fpm
9689 php 20 0 714m 258m 14m R 4.3 0.8 1:30.03 php-fpm
9695 php 20 0 1431m 973m 12m R 4.3 3.0 1:31.20 php-fpm
9408 php 20 0 472m 13m 7456 S 4.1 0.0 1:35.02 php-fpm
9463 php 20 0 514m 60m 12m R 4.1 0.2 1:33.01 php-fpm
9504 php 20 0 505m 48m 21m R 4.1 0.1 1:36.00 php-fpm

9507 php 20 0 491m 38m 12m R 4.1 0.1 1:37.42 php-fpm
9516 php 20 0 493m 32m 8108 R 4.1 0.1 1:35.84 php-fpm
9522 php 20 0 627m 166m 7816 R 4.1 0.5 1:30.21 php-fpm
9524 php 20 0 936m 480m 13m R 4.1 1.5 1:33.89 php-fpm
9534 php 20 0 475m 15m 8060 D 4.1 0.0 1:38.95 php-fpm
9638 php 20 0 504m 45m 7652 S 4.1 0.1 1:35.04 php-fpm
9641 php 20 0 777m 314m 8508 R 4.1 1.0 1:37.97 php-fpm
9655 php 20 0 507m 56m 14m R 4.1 0.2 1:28.89 php-fpm
9671 php 20 0 513m 54m 7544 R 4.1 0.2 1:34.73 php-fpm
9684 php 20 0 514m 61m 14m R 4.1 0.2 1:36.00 php-fpm

9793 php 20 0 472m 19m 13m R 4.1 0.1 1:28.19 php-fpm
9806 php 20 0 515m 55m 8172 R 4.1 0.2 1:35.00 php-fpm
9818 php 20 0 771m 308m 8068 R 4.1 1.0 1:31.61 php-fpm
9825 php 20 0 473m 13m 8740 S 4.1 0.0 1:29.41 php-fpm
9850 php 20 0 475m 18m 11m S 4.1 0.1 1:35.34 php-fpm
9861 php 20 0 473m 14m 7556 S 4.1 0.0 1:32.32 php-fpm
9425 php 20 0 475m 21m 13m S 3.8 0.1 1:34.69 php-fpm
9446 php 20 0 471m 12m 7640 S 3.8 0.0 1:36.17 php-fpm
9579 php 20 0 471m 12m 7592 D 3.8 0.0 1:37.13 php-fpm
9588 php 20 0 491m 32m 7468 R 3.8 0.1 1:36.44 php-fpm



[root@davidsm01 ~]# vmstat



procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
273 0 669112 13713228 76480 541424 21 30 69 50 567 165 83 15 1 1 0




tcpdump -i eth0




22:55:11.703006 IP 141.101.70.49.36344 > expressvpn16.clients.netelligent.ca.http: Flags [.], ack 2921, win 77, length 0
22:55:11.703017 IP 141.101.70.49.36344 > expressvpn16.clients.netelligent.ca.http: Flags [.], ack 3718, win 80, length 0
22:55:11.717115 IP 108.162.214.215.23885 > expressvpn16.clients.netelligent.ca.http: Flags [.], ack 3878216429, win 29, length 0
22:55:11.717173 IP 108.162.214.215.23885 > expressvpn16.clients.netelligent.ca.http: Flags [P.], seq 0:485, ack 1, win 29, length 485
22:55:11.717192 IP expressvpn16.clients.netelligent.ca.http > 108.162.214.215.23885: Flags [.], ack 485, win 123, length 0

15493 packets captured
43347 packets received by filter
27820 packets dropped by kernel




mpstat -P ALL




Linux 2.6.32-642.1.1.el6.x86_64 (davidsm01.localdomain)         06/01/2016      _x86_64_        (12 CPU)

11:14:56 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
11:14:56 PM all 83.76 0.00 14.16 1.08 0.00 0.56 0.00 0.00 0.43
11:14:56 PM 0 84.53 0.00 13.89 1.18 0.00 0.12 0.00 0.00 0.28
11:14:56 PM 1 83.44 0.00 14.19 1.14 0.00 0.85 0.00 0.00 0.38

11:14:56 PM 2 83.38 0.00 14.23 1.16 0.00 0.85 0.00 0.00 0.38
11:14:56 PM 3 84.59 0.00 13.76 1.15 0.00 0.10 0.00 0.00 0.39
11:14:56 PM 4 84.30 0.00 14.04 1.14 0.00 0.13 0.00 0.00 0.39
11:14:56 PM 5 84.66 0.00 13.73 1.10 0.00 0.11 0.00 0.00 0.40
11:14:56 PM 6 83.24 0.00 14.39 1.16 0.00 0.84 0.00 0.00 0.37
11:14:56 PM 7 84.44 0.00 13.94 0.55 0.00 0.15 0.00 0.00 0.92
11:14:56 PM 8 83.07 0.00 14.54 1.17 0.00 0.87 0.00 0.00 0.35
11:14:56 PM 9 83.07 0.00 14.43 1.09 0.00 0.97 0.00 0.00 0.45
11:14:56 PM 10 83.11 0.00 14.50 1.05 0.00 0.84 0.00 0.00 0.50
11:14:56 PM 11 83.31 0.00 14.33 1.13 0.00 0.84 0.00 0.00 0.39




ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10




%CPU   PID USER     COMMAND
2.3 10772 php php-fpm: pool www
2.3 10490 php php-fpm: pool www
2.2 9869 php php-fpm: pool www

2.2 9868 php php-fpm: pool www
2.2 9864 php php-fpm: pool www
2.2 9854 php php-fpm: pool www
2.2 9850 php php-fpm: pool www
2.2 9827 php php-fpm: pool www
2.2 9824 php php-fpm: pool www



ss -s





 Total: 14878 (kernel 15172)
TCP: 30203 (estab 11194, closed 16290, orphaned 89, synrecv 0, timewait 16281/0), ports 4092

Transport Total IP IPv6
* 15172 - -
RAW 0 0 0
UDP 0 0 0
TCP 13913 13908 5

INET 13913 13908 5
FRAG 0 0 0



Device eth0 [] (1/5):




 Incoming:


Curr: 1.61 MBit/s
Avg: 1.78 MBit/s
Min: 886.95 kBit/s
Max: 5.11 MBit/s
Ttl: 2.06 GByte

Outgoing

Curr: 4.82 MBit/s
Avg: 5.08 MBit/s

Min: 2.10 MBit/s
Max: 14.50 MBit/s
Ttl: 5.59 GByte

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...