Thursday, August 31, 2017

apache 2.2 - mod_rewrite map doesn't work




I am trying to build a simple mod_rewrite map to have category names translated into ids like so: ../category/electronics -> category.php?cat=1



The map is placed inside the www folder, but Apache ignores it as if it didn't exist.



This is my rewrite code; what is wrong?



Edited: fixed the path to catmap.txt; now it's working correctly.




DocumentRoot "${path}/www"

....
RewriteMap cat2id txt:${path}/www/catmap.txt
RewriteEngine On
RewriteOptions Inherit
RewriteLogLevel 3
RewriteRule ^/beta/category/(.*) /beta/category.php?cat={cat2id:$1}


Answer



The RewriteRule should be:




RewriteRule ^/beta/category/(.*) /beta/category.php?cat=${cat2id:$1}
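For reference, a txt: RewriteMap file is plain text with one whitespace-separated key/value pair per line (lines starting with # are comments). A catmap.txt matching the question's example might look like this; the entries other than electronics are illustrative:

```
# catmap.txt: category name -> id
electronics 1
books       2
clothing    3
```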



I created the file /var/www/beta/category.php with the following contents:





And this is what I get:



$ curl 'http://localhost/beta/category/electronics'

Array
(
[cat] => 1
)

Running Small Business Server (Exchange) & Terminal Server on one physical server (Virtualization)

Good day experts,



I'm in the process of upgrading our office network, mainly upgrading our servers from 2003 to 2008/2010 with new hardware.




We've currently 2 servers with these specs:



Exchange Server




  • Intel Xeon 3.40 GHz

  • 2GB RAM

  • 2 x 10k 72GB drives




Terminal Server




  • Intel Pentium 4 3 GHz

  • 1 GB RAM

  • 2 x 250GB 7.2k IDE drives



I'd like to replace both of these with a single (physical) server and install virtualization software so I can run both installations on the same box.




The specs of the hardware I'm looking at are:




  • IBM Series x3550

  • Quad Core 2.50 GHz Intel Xeon E5420 VT-x 12 MB L2

  • 8 GB DDR2 FB-DIMM

  • 2 x 146 GB 15k SAS (we can easily add more storage)

  • Hardware RAID

  • 2 x Gbit NICs




The new servers will be running Small Business Server 2008 (with Exchange) and Terminal Server 2008. The Terminal Server will only run things like Microsoft Office and be used by max 3 users simultaneously.



Would this hardware be sufficient, bearing in mind that our current hardware doesn't run too badly? (We're upgrading because the current servers have hardware issues.)



Also, what virtualization software would you recommend for this scenario? Would Hyper-V be a good candidate? Or would you use something else?

Wednesday, August 30, 2017

.htaccess - How do I obscure my Wordpress install via htaccess?



(I am aware that security via obscurity is not recommended).



I am trying to hide the fact that I am using Wordpress. This post is helpful, but it only addresses the content (sort of). I am interested in having the following occur:





  1. User tries to access any url with wp* as a substring via their browser.



    Result: Redirected to 404 page.


  2. Blog user/administrator knows in order to login they should go to http://example.com/blogin/.



    Result: apache redirects them to http://example.com/wp-admin/.


  3. If a user tries to directly access wp-admin from their browser they get sent to #1.




    Result: Redirected to 404 page.




Things I've done so far




  1. I noticed that for a default install of WordPress I could access any of the wp* files in the (relative) root directory of the WP install. Specifically, wp-settings.php was problematic because it gave away information about my set-up: if a user accessed it, it would spew some PHP errors and reveal part of the directory structure. I edited my php.ini file to turn display_errors off. Now accessing http://example.com/wp-settings.php brings up a blank page.


  2. This in itself isn't ideal because it reveals that wp-settings.php exists. In fact, accessing all the different wp* files is possible (with different results). I then put the following in my htaccess file:



    RewriteEngine On
    RewriteBase /
    RewriteCond %{PATH_INFO} wp* [NC]
    RewriteRule .* - [F]


    This worked great! Anything with a wp* was routed to my custom 404 page. But now I can't access my admin page.


  3. I tried to insert this line into the above code: RewriteRule ^blogin wp-admin [NC,R,L]. It was supposed to go right after RewriteBase, but this didn't work.


  4. I tried to do a:



     

    Order Allow,Deny
    Allow from example.com
    Deny from all



    hoping that a referer from my site (via the rewrite rule) would be able to access wp-admin, but not someone from outside. This didn't work either; Apache complained that this directive can't be used from .htaccess.




I've read the Apache documentation; I understand the concepts, theoretically, but I need some practical help.




EDIT: I'm looking for a solution that uses .htaccess instead of httpd.conf since my particular setup makes using httpd.conf inconsistent.


Answer



TL;DR: It is not possible to obscure WordPress using only directives in your .htaccess file.



Now cometh a tale of woe and horror. Our friend, fbh was right about the difficulty in hiding WordPress, it be not for yellow-bellied cowards. Arr! Here be the details of this (mis)adventure. Ye be warned!



Motivation



I'm one of those guys that likes things perfect. I will waste time over-engineering something to do it the 'right way'. One of the things I didn't like about the default WordPress setup was that a user could type in http://ex.com/wp-settings.php and then all this PHP jargon would spew all over the place. I eventually was able to turn off errors via PHP, but that led to a greater desire to only have things that made sense be locatable resources on the server... and that everything else would be 404/3'ified to our custom search page. After that I got this idea that I'd like to completely hide the underlying framework (i.e. WP)... anyways... if you want to hide WP, it's possible. But it's really hard.




Steps to your doom




  1. Modify your PHP ini settings appropriately (i.e., turn display_errors off). You might think this is unnecessary, because if we're using .htaccess to reroute things, folks won't see errors since they can't reach the error-causing resources (I'm looking at you, wp-settings.php). But errors can occur in displayed pages too, so you definitely want them off. Just because WP_* directives are set doesn't necessarily mean things will work the way you think they will. I found that on my server I had to set display_errors to false FIRST, because WP_DISPLAY_ERRORS assumed that the default setting was false.



    Controlling PHP ini settings may be as simple as putting a directive in your .htaccess file or, in my case, as complicated as creating a CGI handler and then putting a php.ini file there. YMMV depending on your setup.


  2. Remove all access to files/directories with the wp- prefix. The idea is that your WP deployment is about your content, not about WP (unless it's specifically focused on WP). It doesn't make sense for people to want to see what http://ex.com/wp-cron.php has... unless they're up to no good. I accomplished this with:



    # If the resource requested is a `wp-*` file or directory, deny with a 403.
    RewriteCond %{REQUEST_FILENAME} wp-.*$ [NC]
    RewriteCond %{ENV:REDIRECT_STATUS} ^$
    RewriteCond %{REQUEST_FILENAME} -f [NC,OR]
    RewriteCond %{REQUEST_FILENAME} -d [NC]
    RewriteRule .* - [F,L]

  3. Learn how to just pass through Mordor. By removing all access to wp-* you can no longer get into the administrative part of WP. That really sucks. In addition to that downer, you've just realized that you don't know what RewriteCond %{ENV:REDIRECT_STATUS} ^$ really does. Well, what I tried to do is give myself a 'secret' backdoor to the WP admin page. I used this code:



    # If the resource requested is 'mordor' (with or without a trailing
    # slash), do a URL rewrite to `wp-login.php`.
    RewriteCond %{REQUEST_URI} mordor/?$ [NC]
    RewriteRule mordor/?$ /wp-login.php [NC,L]


    So the URL http://ex.com/mordor should bring us to the login page. The reason we had the REDIRECT line in the step above is that, since this URL gets rewritten to a wp-* URL, we don't want the first rewrite rule to catch it. Because it's being rewritten internally, REDIRECT_STATUS will be set, and the request won't be pushed to 403/4 land.


  4. Remove wp-content. WordPress.stackexchange has a great article on removing wp-content. You have to redefine some WP constants, and that pretty much works. You also have to redirect all accesses from wp-content to whatever-content. This probably won't be an issue if this is a clean deployment; if you're modifying a pre-existing deployment you'll have to do some extra work.


  5. Rewrite URLs to wp-content (optional): RewriteRule (.*)(wp-content)(.*) $1whatever-content$3 [NC,R,L]. This goes in your .htaccess file. If a user tries to access some old content via a wp-content URL, they will be redirected to the new location.


  6. Grep and replace all references to wp-content in your DB (optional). You still have wp-content in your database. If you want to be WP-free, you need to get rid of that stuff. I exported (mysqldump'ed) my database and did a search-and-replace from the wp-content string to the new string. You might say: why do I have to do this if Apache will rewrite my URLs? The problem is that the page source will contain these references, so if you're really interested in obscuring WordPress, you need to do this. Note: at this point I should've just stopped and accepted the reality that this wasn't going to work. But I wanted Mr. T to pity me.


  7. Replace all references to wp-includes and wp-admin in the source. A lot of WordPress functionality depends on these two directories: wp-includes and wp-admin. Their names are hardcoded in the source code, which means you would have to create new directories (since PHP uses the underlying OS file system, not Apache) to access these, and WP then WRITES THEM OUT into the emitted HTML. This is just way too much trouble. I quickly gave up and went to the bathroom to take a poop.
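Step 6's dump-and-replace can be sketched as below; the database name, user, and file names are placeholders, not from the original post:

```shell
# Rewrite every wp-content reference in a SQL dump (placeholders throughout).
rewrite_dump() {
    # usage: rewrite_dump <in.sql> <out.sql>
    sed 's/wp-content/whatever-content/g' "$1" > "$2"
}

# Typical round trip (supply your own credentials):
#   mysqldump -u wpuser -p wpdb > wpdb.sql
#   rewrite_dump wpdb.sql wpdb-new.sql
#   mysql -u wpuser -p wpdb < wpdb-new.sql
```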





Lesson



Sure, I could've just read http://codex.wordpress.org/Hardening_WordPress and followed those steps. But I wanted the perfect site. Now I just want all those hours back. The biggest thing that kept me from stopping was that I never read anywhere on the internet that this was a lot of work and almost impossible to do. Instead, I read of people trying to do it with no sense of whether they were successful or not. So, to my past self, to whom I will send this via Apple's Time Machine: please don't try to obscure WordPress. It's not worth it.


Tuesday, August 29, 2017

iis 7 - Using several SSL certificates on same IP with IIS 7



I've got several domains (different sites with different domains, not sub-domains) which need SSL.




I couldn't find a way to make it work so that each domain has its own SSL certificate but uses the same port and IP as the other domains.



Can this be accomplished?



If not, should I buy a different IP for each domain that needs SSL?



Thanks


Answer



You can bind only one SSL certificate per IP:port pair. If you need to run two HTTPS sites on the same IP, bind them to different ports and then refer to each site with the port in the URL (e.g. https://beta.example.com:444/). Otherwise you need another IP.




The reason is that the HTTP protocol kicks in only after the secure channel has been established, which means only one SSL certificate can be used per IP:port.



If you had only one domain and a wildcard certificate (*.domain.com), then you could try this article: http://www.sslshopper.com/article-ssl-host-headers-in-iis-7.html , but your situation is different.


linux - Recovering ZFS pool with errors on import

I have a machine that had some trouble with bad RAM. After I diagnosed it and removed the offending stick, the ZFS pool in the machine was trying to access drives using incorrect device names. I simply exported the pool and re-imported it to correct this. However, I am now getting this error.



The pool Storage no longer automatically mounts



sqeaky@sqeaky-media-server:/$ sudo zpool status
no pools available



A regular import says it's corrupt



sqeaky@sqeaky-media-server:/$ sudo zpool import
pool: Storage
id: 13247750448079582452
state: UNAVAIL
status: The pool is formatted using an older on-disk version.
action: The pool cannot be imported due to damaged devices or data.
config:


Storage                UNAVAIL  insufficient replicas
  raidz1               UNAVAIL  corrupted data
    805066522130738790 ONLINE
    sdd3               ONLINE
    sda3               ONLINE
    sdc                ONLINE


A specific import says the vdev configuration is invalid




sqeaky@sqeaky-media-server:/$ sudo zpool import Storage
cannot import 'Storage': invalid vdev configuration


Cannot offline or detach the drive because the pool cannot be started/imported



sqeaky@sqeaky-media-server:/$ sudo zpool offline Storage 805066522130738790
cannot open 'Storage': no such pool
sqeaky@sqeaky-media-server:/$ sudo zpool detach Storage 805066522130738790
cannot open 'Storage': no such pool



Cannot force the import



sqeaky@sqeaky-media-server:/$ sudo zpool import -f Storage 
cannot import 'Storage': invalid vdev configuration


I should have 4 devices in my ZFS pool:





/dev/sda3
/dev/sdd3
/dev/sdc
/dev/sdb




I have no clue what 805066522130738790 is, but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on.
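Since zdb came up: a safe, read-only starting point is dumping the labels ZFS keeps on each device. They show the pool configuration each vdev believes in, including the GUID of every member (device name as in the question):

```shell
zdb -l /dev/sdb   # prints the four ZFS labels stored on the device
```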



For reference, this was set up this way because, at the time this machine/pool was built, it needed certain Linux features and booting from ZFS wasn't yet supported on Linux. The partitions sda1 and sdd1 are in a RAID 1 for the operating system, and sdd2 and sda2 are in a RAID 1 for swap.



Any clue on how to recover this ZFS pool?




Edit - Status update
I figured out what 805066522130738790 is. It is a GUID that ZFS was failing to use to identify /dev/sdb. When I physically remove /dev/sdb, the pool mounts and comes online, but I still cannot swap out the disks. I guess I will back up the files to external media, then blow away the whole pool, because it is too corrupt to continue functioning. I should have just had good backups from the start...

Monday, August 28, 2017

Temporarily redirect *all* HTTP/HTTPS requests in IIS to a "server maintenance" page



We've got an IIS server that hosts hundreds of separate web apps, and the physical database server that hosts these apps is going to be taken offline for maintenance for a brief period (we expect it to take less than 15 minutes).



During that period, we want to redirect ALL traffic that comes in, for any website, to a "we're currently undergoing maintenance" page.




I realize I could do this by going to every web app, and setting up an IIS rewrite rule that sends the user to another page for all requests in that app. But, it would take us longer to do that than it will to do the database maintenance!



I've tried three things, none of which have worked:





I've been searching for a simple way to apply a rule to all sites, in one fell swoop--and then be able to "undo" that rule in one equally painless step. So far, none of my attempts have worked. I did try putting this rewrite rule in my global web.config at W:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\web.config:



This didn't work. We're running .NET 4.0 64-bit in IIS, but "just in case" I put the same thing in the 32-bit and 2.0 global web.config files, and still no change.






One other suggestion I've seen is the app_offline.htm "special" file, but we're back to the same issue of it taking longer to deploy this file to the app root of all our apps than it would to actually do the maintenance.





All our sites are set up in IIS with a single IP. This works for us even without SNI because all our apps share a single SSL certificate (it's a UCC). One thing that occurred to me was that perhaps I could set up a site in IIS that matched all traffic to the IP we're using and did not specify a host header value. The hope was that I could give it a higher "precedence" so that, when started, it would match all traffic to that IP before any of the other sites had a chance to match. I could set that site up to serve the same page for all requests, regardless of the request URL.



Start that site when undergoing maintenance, and stop it when finished.



But I wasn't able to get this working either, as IIS matches an HTTP request to the most specific site first. By omitting a host-header value for this "tell users we're offline" site, it only got matched when the request's host header didn't match any other site. Which puts us back at the same problem of having to manually go to each web app and perform an action to take it offline, and then put it back online when we're done with maintenance.




Is there a simple way to accomplish this? It would seem that surely we are not the first to encounter this issue.



-Josh


Answer



I would go with your third approach, the "We're offline" site in IIS. Say you named it Offline: if it has no host header specified, it will serve all requests not picked up by any of the other sites that have a matching host header. To make sure it picks up everything, you just stop all the other sites.



Assuming you have IIS Scripting installed, open an elevated PowerShell:



Import-Module WebAdministration



now you can stop all sites except the Offline one:



Get-ChildItem IIS:\Sites | Where {$_.Name -ne "Offline"} | Stop-WebSite


when the SQL-Server is back up, start them up again:



Get-ChildItem IIS:\Sites | Where {$_.Name -ne "Offline"} | Start-WebSite



If you have FTP sites as well, the commands will show an error, because you cannot pipe an FTP site to the Stop-WebSite cmdlet, but it still works for all the web sites.



If you have sites that normally do not run, you have to exclude them in the second command, like:



Where {$_.Name -ne "Offline" -and $_.Name -ne "foobar.com"}


If you don't have the PowerShell cmdlets for IIS installed, you can use appcmd.exe to do the same; I haven't used it in years though.
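For the appcmd.exe route, a hedged sketch (the site name is an example, not from the original answer):

```
%windir%\system32\inetsrv\appcmd.exe list site /state:Started
%windir%\system32\inetsrv\appcmd.exe stop site /site.name:"Default Web Site"
%windir%\system32\inetsrv\appcmd.exe start site /site.name:"Default Web Site"
```

You would still need to script the "every site except Offline" loop yourself, which is why the PowerShell pipeline above is the nicer option.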


domain name system - Any suggestions on why DNS is failing over DrayTek 2820?

In recent weeks a weird problem has started in my office. The internet seems to stop working, but it has not failed; it's just DNS problems.




Setup:



ADSL2+ AnnexM connection via a DrayTek Vigor 2820 router. Windows Server domain running Server 2008 R2. A DNS server is set up on the server, with DNS forwarders set to the values the ISP sends to the router (141.1.1.1 and 195.27.1.1 - Thus/CW/Vodafone). I've also added Google's public DNS as backup (8.8.8.8 and 8.8.4.4).



Symptoms



Most of the day the network works fine and web browsing works.



At various points of the day, DNS seems to stop working for external hosts so web browsing stops. There does not seem to be an obvious trigger, although it almost always fails about 4pm local time.




The ADSL line is still working (I run BBC radio 2 streaming over it and this does not stop), and the VPN links to the other office are also working. I can ping external IP addresses - so the problem definitely seems to be with DNS.



What I've Tried



I've tried to diagnose the cause using nslookup: it resolves only internal hosts; anything external times out. I tried setting the server to the CW and Google ones directly, but this also times out:



> server 8.8.8.8
DNS request timed out.
timeout was 2 seconds.

Default Server: [8.8.8.8]
Address: 8.8.8.8
>


The only solution appears to be to reboot the router. After this everything works again for a while.
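One extra data point worth collecting before the next reboot: check whether DNS over TCP still works while UDP lookups are timing out. A router that is exhausting its NAT/UDP session table would typically break only UDP, so a difference here points at the router. A sketch using dig (from the dnsutils/bind-utils package; the hostname is a placeholder):

```shell
dig +tcp @8.8.8.8 www.example.com   # force the query over TCP port 53
dig @8.8.8.8 www.example.com        # normal UDP query, for comparison
```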



I did suspect the problem was with the router, but we've not made any configuration changes. So, do the assembled experts think this is a router issue, or is it the ISP?

Sunday, August 27, 2017

Persistent way to allow a user to restart a service




There is a Windows service that gets reinstalled sometimes.



I need a user to be able to start/stop/restart this service. This user is not an administrator and shouldn't be.



If I use setacl.exe then it works (I can even use sc sdset), but after the service gets reinstalled setacl needs to be called again, and the process that reinstalls the service has no rights to run setacl.



Is there a way to grant a specific user the right to restart a service with a specific name, or even all services, that persists through a service reinstall?



If I'm able to give a user some general permissions to "manage services" that would also be fine, but I'm unable to pinpoint the exact rights needed for this (if I add the user to the admin group, he can start/stop services, but can -obviously- do a lot more than that).


Answer




Since you already know about SetACL, and how to use it to allow a user to control a service, you could simply use Scheduled Tasks to run SetACL regularly.



Configure the task to repeat at an interval as small as the longest acceptable time during which the user cannot control the service after a re-installation.



Edit



As you say, it is kind of hacky ;).



Another option, as Adam mentions, is to use GPOs to enforce your ACL.




For a non-standard Windows service, you will have to install and run the Group Policy Management Console on the computer where the service is installed. Then do the following:




  1. Launch GPMC.msc on the computer

  2. Edit an existing GPO, or create a new one, that applies to the computer in question

  3. Expand Policies, Windows Settings, Security Settings, System Services

  4. Open the properties of the service in question

  5. Define the startup mode and edit permissions as desired


filesystems - How many files can I have directly under a directory in ext3?



I have a root directory 'data_0'.

Under this directory are about 15,000 directories ('a', 'b', 'c', ... 'aa', 'ab' ...).
Under each of these directories there are thousands of very small files (4-10 kB), between 1,000 and 2,000 files each.



All this leads to 30 million files. I need to move these from 'data_0' to a 'data_1' folder, but without the "level 2" folders (a, b, c, etc.), so:



/data_0/a/1.txt --> /data_1/a_1.txt
/data_0/a/2.txt --> /data_1/a_2.txt
...
/data_0/ccc/989.txt --> /data_1/ccc_989.txt
...
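The move described above can be sketched as a small shell function; flatten takes the source and destination roots as arguments (for the question's layout, /data_0 and /data_1):

```shell
# Move every src/<dir>/<file> to dst/<dir>_<file>, flattening one level.
flatten() {
    src=$1; dst=$2
    mkdir -p "$dst"
    for d in "$src"/*/; do
        [ -d "$d" ] || continue
        name=$(basename "$d")
        for f in "$d"*; do
            [ -f "$f" ] && mv "$f" "$dst/${name}_$(basename "$f")"
        done
    done
}

# e.g.: flatten /data_0 /data_1
```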



How far can I go with this? Performance is unimportant here. Is there a logical limit or just a performance limit?


Answer



If strangers on the internet are to be trusted, there is no limit to the number of files an ext3 directory can contain; so says the ext3-users Red Hat mailing list. The 2.6 kernel supposedly allows for a theoretical "billions" of files in one directory. You may want to tweak dir_index a bit to make it run smoothly if you'll be doing any searching on the files. There are also some other side effects of massive numbers of files in one directory that you might want to read through in this StackOverflow thread.
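Enabling dir_index, if it isn't already on, is a one-time operation; the device name below is a placeholder, and the filesystem should be unmounted when e2fsck rebuilds the indexes:

```shell
tune2fs -O dir_index /dev/sdX1   # turn on hashed directory indexes
e2fsck -fD /dev/sdX1             # -D optimizes/rebuilds existing directory indexes
```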



So the answer is most likely: "Yes, but..."


Saturday, August 26, 2017

raid controller - Flash Backed Write Cache (FBWC) without capacitor pack



I bought an HP Smart Array P410 controller and it is installed and working fine in an HP ProLiant MicroServer with 4 drives in two RAID 1 arrays.



I didn't realise, however, that it came without any cache, so it would only work by writing directly to the disk, and the performance was horrible.




So I then bought the 512MB Flash Backed Write Cache (FBWC) memory module, as I was under the impression that with FBWC I would not need a battery. I got this idea from a forum post.



"What do you guys think of the choice between 'BBWC' (battery backed write cache) and 'FBWC' (flash backed write cache)?
The flash-based ones use non-volatile memory so need no battery."



After installing the cache module, however, the server pretty much won't boot. The P410 has a flashing amber light on it, and from the manual that doesn't sound good. I've managed to get to the on-board BIOS once, and even managed to boot to the HP Array Configuration Utility (ACU) CD once, but every other time the server continually reboots once it gets to the POST screen and reads ARRAY INITILIZING %%%.



The one time I reached the ACU, it reported a problem with the Cache Module.



To me, it seems like the cache module is faulty, however the supplier tells me




“Do you have an FBWC battery pack, p/n 587324-001, because that is required for the cache to work. If you have it, please complete an RMA form and we'll send a replacement / credit.”



Does this sound right to you? I've been ordering the parts from the US, and I don't want to spend $77 + $40 p&p on a battery and wait a week for shipping only to find the card is faulty; but I also don't want to send back a working card.


Answer



You were correct in understanding that the write cache solution would help performance. However, you just didn't order the right parts. What you ordered was a 512MB chip. The reason the flash capacitor is external to the memory module/RAID controller is to preserve the form factor of the older battery-backed module (forward compatibility). The part number should have been #534916-B21, which includes everything listed below:



If you don't have both parts pictured here, your solution won't work.



Left: RAM module. Right: External flash capacitor.




The capacitor unit is pretty key to the operation. Where did you buy this from? Nobody sells the memory module alone unless it's ordered as a spare or repair part.



In general, when ordering HP, look at the quickspecs for the product you wish to buy. In this case, the Smart Array P410 quickspecs would have given you specific part numbers (and compatibility notes) to make your solution whole.





windows server 2008 - Secondary domain controller not processing log on requests

So I have a weird bug that pauses the netlogon service periodically on my primary domain controller. When this happens, users cannot log in to the domain. I have a secondary DC offsite that is a Global Catalog and DNS server, but it is reachable only through an MPLS connection.



DC1 has all FSMO roles and is located with most client PCs in the 192.168.1.0/24 network. The offsite DC is fully reachable through its dns name and sits with a few clients in the 192.168.2.0/24 network.




Why won't DC2 take over logon responsibilities when DC1 is unavailable?

Friday, August 25, 2017

mysql - Very large number open connections from web to DB server



I run 2 servers, 1 web (nginx/php), 1 database (mysql).




Nginx has about 1500 active processes per second, and mysql status shows about 15 currently open connections on average.



Now today I started running: netstat -npt | awk '{print $5}' | grep -v "ffff\|127\.0\.0\.1" | awk -F ':' '{print $1}' | sort -n | uniq -c | sort -n



This showed that there were over 7,000 active connections from my web server to my database server's IP. This seems kind of extreme. I do not use persistent connections in PHP to connect to MySQL.



Any idea why there are so many open connections?


Answer



Though this is getting a bit stackoverflow'y, here goes:




Probably because you don't close your connections in the code. If so, I would recommend you switch to mysql_pconnect(), or just add mysql_close() at the end of every requested PHP page.



If all the connections to the MySQL server are in state TIME_WAIT, try lowering the wait_timeout variable in your mysqld configuration. Check out the MySQL documentation for more info.



UPDATE: As ChristopherEvans pointed out, you can connect directly to the MySQL socket instead of using IP endpoints, to avoid running out of unused ports on the local interface.
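To see which TCP states those 7,000 connections are in (lots of TIME_WAIT versus ESTABLISHED points at different fixes), a hedged helper; port 3306 is assumed to be the MySQL port:

```shell
# Tally connection states for peers on port 3306, reading `netstat -nt`-style
# lines ($5 = foreign address, $6 = state) from stdin.
count_states() {
    awk '$5 ~ /:3306$/ {print $6}' | sort | uniq -c | sort -rn
}

# Usage on the web server: netstat -nt | count_states
```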


raid - Adding nearline SAS disks to a controller with SAS disks



I have a Dell PowerEdge 840 server that currently has a pair of hot-swappable 146GB SAS disks attached to a SAS 5iR controller, configured as RAID 1.



I have two spare slots to add more storage, and I need to add a relatively large amount of storage as cheaply as possible. The data on the new disks will mostly be temporary file storage.



I've explored adding SATA storage to the on board SATA interface, which is limited to 1TB per SATA disk, and am now looking at alternatives. I'm wondering about adding nearline SAS drives to the existing controller to give me lots of storage at near SATA pricing.



I'm a little nervous about mixing SAS and nearline SAS disks on the same controller, even though no volume will ever span the two disk types. I'm looking for advice on whether this would be a good idea or not.




I'm considering adding either 1 or 2 disks with a capacity of 2TB each. If I add two disks, I'll likely configure them as RAID 0 as capacity is more important than fault tolerance.


Answer



SAS uses higher signalling voltages than SATA, so mixing those on the same backplane isn't the best of ideas, although you can get SATA interposer cards to allow SATA drives to be used on a SAS backplane.



Nearline SAS, however, uses a proper SAS interface, so there should be no issue with your intended setup, even if you put them all on the SAS 5iR.



I would suggest using the SAS 5iR rather than the onboard SATA for a few reasons: the 5iR will be connected to your hot-swap backplane and the onboard SATA won't. Also, from my experience, the onboard SATA on most PowerEdges is just a basic SATA controller that doesn't support RAID in any way, shape or form.


Thursday, August 24, 2017

networking - Long Gigabit Ethernet Run

I am trying to get a Gig-E network between two buildings that are approximately 260 ft apart. While some TRENDnet switches failed to connect to each other over Cat 6 at that distance, two Netgear 5-port Gig-E switches do so just fine. However, the link still fails after I put APC PNET1GB Ethernet surge protectors at each end, before the line connects to the respective switches. So I find myself wondering if I simply need a better surge protector that doesn't degrade the signal as much (if so, what kind would you recommend?), or if I should give up on copper and use fiber between the buildings.



If I opt to go the latter route, I could really use some pointers. It looks like LC connectors are the most common, but I keep running into some others as well. A media converter on each end seems like the simplest solution, but perhaps a Gig-E switch with an SFP port would make more sense? Given a very limited budget, sticking with my existing copper seems best, but if it is bound to be a headache, a 100-meter fiber cable is something I think I can swing cost-wise.

rackspace - Connection Refused When Trying To Connect To IRCd-Hybrid



I have a Rackspace server running Ubuntu Lucid Lynx, on which I have installed IRCd-Hybrid. I can connect to the IRC server using irssi installed on the same machine, but when I try to access it from my computer at home, or my friends try, we get this error:





Connection Refused




What should I do?


Answer



I assume a firewall somewhere is blocking the connection. On your home machine, try connecting directly to the IRC port on your server. If you are running Linux you can do this with telnet:



$ telnet your.example.server 6667



you should get some response from the server (you might have to hit enter first).



If that doesn't work, there's probably a firewall involved. Check the server where you are running ircd. What does the output of /sbin/iptables -nvL show for firewall rules? I bet you have a standard 'default-deny' setup, where incoming connections are dropped unless they are going to specific predefined ports.
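If iptables does show a default-deny INPUT chain, a hedged example of opening the standard IRC port (6667 assumed, as in the telnet test above):

```shell
iptables -I INPUT -p tcp --dport 6667 -j ACCEPT
# remember to persist the rule (e.g. with iptables-save), or it is lost on reboot
```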


Wednesday, August 23, 2017

SAS HBA card or integrated with MB?



I need an entry-level virtualization server for a few people, and I like the Fujitsu line of this kind, but they all seem to have built-in SATA disk interfaces. I'd like to use SAS disks, and apparently there are some SAS interface cards I could use, but I have never used such cards and wonder if they are a good idea (re: reliability and performance), or should I go with an integrated SAS controller?



Answer



Using a PCIe SAS HBA is quite a common solution and nothing to worry about. There is no difference in performance or reliability that I am aware of.


mysql - solaris ssh port forward



I have been trying to create a ssh tunnel from a Linux box to a mysql server on a Solaris box with: ssh -i -L 3333:localhost:3306 root@ command on the Linux box.



On trying to connect to the mysql server from the Linux box with command mysql -P 3333 -h 127.0.0.1 -u root -p, I am getting the following error: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0



Now running the sshd (Solaris) on debug level 3 I get the following error:




debug1: server_input_channel_open: ctype direct-tcpip rchan 3 win 2097152 max 32768
debug1: server_request_direct_tcpip: originator 127.0.0.1 port 34100, target localhost port 3306
Received request to connect to host localhost port 3306, but the request was denied.
debug1: server_input_channel_open: failure direct-tcpip


And also getting the following error: channel 3: open failed: administratively prohibited: open failed



On the Solaris machine:





  • SSH version : Sun_SSH_1.1

  • cat /etc/release : Solaris 10 11/06 s10x_u3wos_10 X86

  • uname -a : SunOS unknown 5.10 Generic_118855-33 i86pc i386 i86pc



On the Linux box:





  • SSH version : OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010


Answer



Fixed: there were two conflicting AllowTcpForwarding values in sshd_config. Somehow the first one, with value no, was taking precedence. Might be a bug in OpenSSH 4.2p1.
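For what it's worth, OpenSSH uses the first occurrence of a keyword it finds in sshd_config, so an earlier AllowTcpForwarding no line silently wins over a later yes. Keeping a single, unambiguous line avoids this (illustrative fragment):

```
# /etc/ssh/sshd_config -- keep only one occurrence of this keyword
AllowTcpForwarding yes
```

Remember to reload sshd after editing the file for the change to take effect.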


RAID SCSI Hard Drive: 73GB vs 72.8GB

I know that this may be a stupid question to some but I don't often work with SCSI drives with what seems to be non-standardized sizes compared to SATA drives. I currently have a RAID1 of two drives on an old Dell server and one of the drives has failed.




The current drives are Maxtor 73GB SCSI Ultra320 80-pin 10k (8J073J002075E). When searching for replacement drives of similar specs, it seems that 72.8GB drives are most commonly listed and 73GB drives are more rare.



Is this a case of a manufacturer rounding up the capacity, or is a real 200MB difference going to cause an issue?



(Note: I realize that the general rule of thumb is to replace with all the same specifications or higher -- this question is whether the difference between 72.8 versus 73 is just labeling versus actual technical difference in size.)

Tuesday, August 22, 2017

spam - How to send a large number of emails to a dated list of receipients

One of my clients wants to send an email to ~100,000 recipients. The client is a state organization, the emails are not promotional, the recipients are all members of the organization, and they are sure that the messages will not be considered spam by any human recipient. Additionally, this is a one-off event; they will probably need to send such an email again in a couple of years.



The problem is that the list is old and I'm estimating that around 10-20% (maybe even more) of the addresses will bounce back.
As far as I know, such a large number of bounces will raise a red flag for services such as mailchimp, Amazon SES, etc. Am I correct in this?



By using their own mail server and following the practices outlined in various questions on this site (such as Sending 10,000 emails?, How to send emails and avoid them being classified as spam?, Best Practices for preventing you from looking like a spammer), will their server be blacklisted after the first few thousand bounces?



If the high bounce rate will be a problem in both cases, is there any other way of doing this, short of manually curating the list (which would mean making ~100,000 phone calls)?

Monday, August 21, 2017

security - How To Block Some UNC Paths for Windows 7 In An AD environment

I look after a network where the servers are Server 2008 [Domain controllers] and the client stations are either Windows 7 Pro SP1 64 Bit or Windows XP Pro SP3 32 Bit.



I have configured GPOs to protect the workstations/servers and the network generally and I am happy with most of this. However when a user clicks 'Save' or 'Save As' in an application they can type a UNC path to a server or a client and see any shares that are not hidden.



\\Server1\



or:



\\Workstation1\


I would like a way of blocking this. Some of the server shares I have created [for operational reasons] are not hidden and are open for all users to modify. Even if the shares were hidden, users who knew the path to a share could still open it.



Is there a way of preventing the users from entering a named UNC path, like \\server1\, without adversely affecting the performance of the workstation or the network?

amazon ec2 - Installing SSL Certificate

I am trying to install an SSL certificate on my Apache server that's hosted on an EC2 instance from AWS. I originally intended to go with AWS Certificate Manager and put the SSL on a Load Balancer but I have no need for more than one EC2 instance.

What I have found is that you can install the SSL certificate directly on the server that's hosted at AWS. What I am confused about is how to do it. https://www.digicert.com/ssl-certificate-installation-apache.htm, among others, is a link that I have been trying to follow. I am stuck at #2 in the digicert link above because I can't find the SSL configuration file. Is it possible that I don't have that set up? If so, do I create the virtual host as specified in #4? Where would I place the virtual host block of code in my server?

Thank you for any and all help!

linux - bash + run command in tcsh from bash



when I run from bash shell the command:




bash
for i in 1 2 3 ; do echo $i ; done
1
2
3


but when I switch to tcsh and want to run:




    tcsh
bash -c for i in 1 2 3 ; do echo $i ; done
i: -c: line 1: syntax error near unexpected token `newline'
i: -c: line 1: `for'
i: Undefined variable.


Please advise: why do I get errors when I run the same for loop via bash -c, and what do I need to fix?


Answer




You'll need to quote it:



bash -c 'for i in 1 2 3 ; do echo $i ; done'


In your example, the only command bash is running is "for" on its own.
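As a quick sanity check, you can capture the quoted command's output and inspect it; the single quotes keep tcsh (or any other calling shell) from splitting the loop or expanding $i:

```shell
# Single-quote the whole command so the calling shell passes it to bash intact.
out=$(bash -c 'for i in 1 2 3 ; do echo $i ; done')
echo "$out"
```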


Nginx not making proxy_pass when getting files (other than index.html) from Apache's Document Root



I think I'm chasing my own tail here and I've decided to ask all you gurus.



I have two machines, one has a reverse proxy Nginx and the other an Apache running several virtual hosts.



Nginx correctly does the proxy_pass and I'm able to view index.html, but not any file other than that.



I attach the conf file for the nginx host (nbte.com.br, 192.168.4.30) and the Apache virtual host (SIPREPWWPRESS03.simosa.inet)




NGINX - 083-nbte.conf



server {

listen 80;
server_name nbte.com.br www.nbte.com.br;

access_log /var/log/nginx/nbte.access.log;
error_log /var/log/nginx/nbte.error.log error;


error_page 404 403 /handle404.html;
#error_page 502 503 504 /handle503.html;
error_page 500 502 503 504 /handle500.html;

location = /handle404.html {
root html/errores-prxy;
}

location = /handle503.html {
root html/errores-prxy;

}

location = /handle500.html {
root html/errores-prxy;
}

location = / {
proxy_pass http://SIPREPWWPRESS03.simosa.inet/;
}



SIPREPWWPRESS03.simosa.inet resolves to 192.168.16.79



APACHE - 021-nbte.conf





ServerName nbte.com.br
ServerAlias nbte.com.br www.nbte.com.br


DocumentRoot "/apps/htmlsites/nbte"

ErrorLog "logs/error_nbte.log"
CustomLog "logs/nbte-access.log" combined







Options +Indexes FollowSymLinks
#AllowOverride AuthConfig FileInfo
Order allow,deny
Allow from all



# ModSecurity exceptions

SecRuleRemoveById 990011

SecRuleRemoveById 960017
SecRuleRemoveById 960015
SecRuleRemoveById 970013





I've almost no experience with NGINX; I'm very new to it, especially its reverse proxy functionality. Nevertheless, I think it's an NGINX issue, since looking at the error log file I find error lines each time a static file is requested:




2015/06/25 12:00:04 [error] 5075#0: *1393 open() "/etc/nginx/html/Informacoes-Financeiras-30-junho-2014-Norte-Brasil-Transmissora-Energia.pdf" failed (2: No such file or directory), client: 192.168.14.1, server: nbte.com.br, request: "GET /Informacoes-Financeiras-30-junho-2014-Norte-Brasil-Transmissora-Energia.pdf HTTP/1.1", host: "nbte.com.br", referrer: "http://nbte.com.br/"




The file Informacoes-Financeiras-30-junho-2014-Norte-Brasil-Transmissora-Energia.pdf is located inside Apache's document root /apps/htmlsites/nbte along with index.html.



Many thanks in advance. Any help is really appreciated.


Answer




This



location = / {
proxy_pass http://SIPREPWWPRESS03.simosa.inet/;
}


Should be



location / {

proxy_pass http://SIPREPWWPRESS03.simosa.inet/;
}


As per http://wiki.nginx.org/HttpCoreModule#location




location = / {
# matches the query / only.
[ configuration A ]
}



location / {
# matches any query, since all queries begin with /, but regular
# expressions and any longer conventional blocks will be
# matched first.
[ configuration B ]
}




Example requests:



/ -> configuration A
/index.html -> configuration B



Sunday, August 20, 2017

linux - What does glibc detected …httpd: double free or corruption mean?



I have an EC2 server running that I use to process image uploads. i have a flash swf that handles uploading to the server from my local disk - while uploading about 130 images (a total of about 650MB) I got the following error in my server log file after about the 45th image.




  • glibc detected /usr/sbin/httpd: double free or corruption (!prev): 0x85a6b990 **



What does this error mean?




The server has stopped responding so I will restart it. Where should I begin to find the cause of this problem?



thanks



some info -



Apache/2.2.9 (Unix) DAV/2 PHP/5.2.6 mod_ssl/2.2.9 OpenSSL/0.9.8b configured Fedora 8


Answer



This message means that there is a bug either in httpd, in one of its loaded modules, or in its execution environment (libraries, OS, hardware).




The technical explanation of the bug is that part of the httpd process kept a pointer to a block of memory around even though the memory had already been freed for other use. In this instance, the error was caught, and did not cause any harm, because the block of memory happened not to have been reused for something else. But if you see this error, it's very likely that it arises in other cases where the block of memory is reused, and then the error is impossible to detect.



Ideally, you would find a way to reproduce this bug, and send a bug report to the Apache development team (unless you think the bug has been fixed in a subsequent version, but for a bug like this it would be hard to tell). Unfortunately, this kind of bug is hard to reproduce. You may want to search on the Apache web site if the development team has preferences regarding the report of such bugs, ask on some Apache mailing list (I don't know which one would be appropriate). Of course, if the error is in a third-party module, or in a library, you should contact its development team instead. There is no miracle method to find this out unless you can reproduce the bug.



Just to rule out a hardware problem, you might want to run a memory test.


ssl - Nginx upstream block nondefault port

experts.



I am trying to configure nginx/1.10.0 to proxy upstream to 2 HTTPS web servers on a non-standard port, with SSL termination.



Here is my current website setup in sites-available/



upstream backend {
ip_hash;
server 172.31.16.1:8444;
server 172.31.16.2:8444;

}
server {
listen 80;
listen 443 ssl;
listen 8444 ssl;
ssl on;
server_name backend_1;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
ssl_session_cache shared:SSL:5m;


#--------ssl certificates for fronend------------#
ssl_certificate /etc/nginx/ssl/nginxSvr.crt;
ssl_certificate_key /etc/nginx/ssl/nginxSvr.key;
ssl_verify_client off;

location / {
proxy_pass https://172.31.16.1;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_ssl_certificate /etc/nginx/ssl/IPSUMCUICA.crt;

proxy_ssl_certificate_key /etc/nginx/ssl/IPSUMCUICA.key;
proxy_ssl_session_reuse off;
}


It works without the upstream section. But when I change proxy_pass to proxy_pass https://backend; I get a 404 error, and https://backend:8444 in the browser.



Apparently it tried to resolve this name and failed, probably after some error, but the error log is empty in this case.
All suggestions are welcome. Thank you.

Saturday, August 19, 2017

datacenter - how many blade enclosures can you fit in a rack?

Considering the power and cooling requirements of HP's or IBM's newest blade (10U) chassis fully stacked, how many of them can you fit in a standard size rack (42 U) ?

linux - OOM killer goes insane

On our cluster we would sometimes have nodes go down when a new process would request too much memory. I was puzzled why the OOM killer does not just kill the guilty process.



The reason turned out to be that some processes get an oom_adj of -17. That makes them off-limits for the OOM killer (unkillable!).




I can clearly see that with the following script:



#!/bin/bash
for i in `grep -v 0 /proc/*/oom_adj | awk -F/ '{print $3}' | grep -v self`; do
ps -p $i | grep -v CMD
done


OK, it makes sense for sshd, udevd, and dhclient, but then I see regular user processes get -17 as well. Once such a user process causes an OOM event, it will never get killed. This causes the OOM killer to go insane: NFS rpc.statd, cron, everything that happened to be not -17 will be wiped out. As a result the node goes down.




I have Debian 6.0 (Linux 2.6.32-3-amd64).



Does anyone know what controls the -17 oom_adj assignment behaviour?



Could launching sshd and Torque mom from /etc/rc.local be causing the overprotective behaviour?

Friday, August 18, 2017

networking - IPv6 - Multiple routers and 'dealing' with NAT




Note: I am not thinking of NAT on IPv6.



I have the following network setup made up of GNU/Linux boxes:



http://portablejim.now.im/images/network_diagram.png



Some network traffic is currently being passed through the VPN tunnel to the internet. Computer A is the VPN server. There can be more than one client on the VPN.



I want to make the network IPv6-capable and am trying to understand how it would work. I currently only have a /64; however, I can get a larger pool of addresses.




What I am wondering is:




  • If I use the /64 and have A as a router, how will computers C and D know to route to the Internet out (from computer A).


  • Can I have both A and B be routers, A advertising the global address as well as a ULA, and B advertising a subnet ULA? Do I need something bigger than a /64?



Answer



You're going to need more than a single /64 to do what you want. Each subnet should have its own /64 according to RFC 4291, and I count 3 subnets right now. So get a /48 allocation and assign a /64 to each subnet. The rest is just a matter of routing between networks; for something this small, you can enter static routes on each router.
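Carving per-subnet /64s out of a /48 is straightforward arithmetic; here is a quick sketch using Python's ipaddress module (the 2001:db8:: documentation prefix stands in for your real allocation):

```python
import ipaddress

# A /48 allocation yields 65,536 possible /64 subnets.
allocation = ipaddress.ip_network("2001:db8:abcd::/48")

# Assign the first three /64s, one per subnet.
subnets = list(allocation.subnets(new_prefix=64))[:3]
for net in subnets:
    print(net)
```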


Thursday, August 17, 2017

centos - Best method to share ISCSI lun across cluster of app servers

I'm in the process of migrating our cluster to new hardware. We use bare metal servers, no virtualization.



Currently, we have around 30 "application" servers. When we make an update, we push the changes to one machine, then use lsync (a branch of rsync) to push the new files to all of the machines in the cluster.




My new idea was to use a SAN along with iSCSI to simply "share" the app across all servers from one location. Little did I know that you can't really do that out of the box: each machine slices out its own piece of the SAN, but the machines can't see each other's files.



What is the best way around this? We're running Centos 6.4 on all of the machines. I stumbled across this, but have heard mixed things about running a clustered filesystem http://ricardobaylon.wordpress.com/2013/11/11/centos-6-4-cluster-gfs-iscsi/

centos7 - Setting FQDN with external domain name

I have a Centos 7 server which has tens of domain names and IPs.



The IP addresses are pointing to my server.




Each domain name is pointing to its own IP via A record.



I want to configure my FQDN in order to install Postfix.



/etc/hostname contains myproject.localdomain



/etc/hosts contains:



127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6


Command hostname returns:



myproject.localdomain


hostname -f returns:




localhost


domainname returns (none)



Do I have to choose a real domain purchased and prepend to it the hostname like this:



ip    hostname.domainname    hostname

Wednesday, August 16, 2017

domain name system - Setup of DNS Server inside a VirtualBox



Here's my scenario: my host is a Windows XP, and I have a guest OS of Ubuntu 8.04. I'm using VirtualBox for this.



I have successfully configured my DNS server in my Ubuntu OS. And if I do the following commands, all of them will give me the corresponding IP address of my DNS server.



dig example.com
nslookup ns1.example.com
host ns1.example.com




But when I try to ping my DNS server from the host (Windows XP), it says it cannot find the DNS server. Also, if I try to ping my DNS server from another PC, it does not seem to find the DNS server as well.


Answer



Check that the virtual network adapter is configured as bridged (not NAT)


SSL certificate and Azure classic load balancer



I have 2 VMs behind a load balancer on Azure. I created a new SSL certificate via Let's Encrypt on one of the VMs using the domain that is assigned to the load balancer. When I connect to that VM directly via IP I see the certificate loaded, but I'm having trouble routing from load balancer traffic to the VM now, the website just doesn't load.



It all worked fine before I added the certificate and a rule in nginx to redirect 80 to HTTPS. I added a new rule in the load balancer to route data from 443 to the backpool 443, and a new health probe for 443, but that did not help.




I read that loading the certificate directly on the load balancer is possible using the Azure Application Gateway, but in that case I need to recreate my VMs to put them in the same virtual network, which I would like to avoid.



Is it possible to route HTTPS traffic using the classic load balancer? Note that I'm fine with setting up each VM to use the same certificate in order to get the HTTPS traffic into each VM.


Answer



Azure's Load Balancer is a Layer 4 balancer and can balance TCP and UDP traffic. Therefore, it doesn't support SSL offloading.



The Application Gateway can balance at Layer 7, so it can do SSL offloading. This means you only need to upload the certificate to the App Gateway.



If you want to stick with the LB, all your VMs will need the certificate. You should be able to balance on port 443 with no issue. You'll need a balancing rule and a health probe, and you will need to allow traffic to 443 from the Internet in your Network Security Groups.



How do you join employees in remote locations to Active Directory?

We have geographically distributed employees across India. Most of the employees join in remote locations and rarely have to travel to regional offices; e.g., employees in Goa would rarely need to travel to the regional office in Mumbai.



Now we are installing Active Directory in our office (hosted on an Azure virtual machine). What's the best way to join laptops (Windows) in remote locations to this AD? Asking users to travel to an office to get this done doesn't seem to be the right way.




-Ajay

Tuesday, August 15, 2017

Apache SSL losing session over load balancer



I have two physical Apache servers behind a load balancer. The load balancer was supposed to be set up so that a user would always be sent to the same physical server after the first request, to preserve sessions.



This worked fine for our web apps until we added SSL to the setup. Now the user can successfully login, see the home page, but clicking on any other internal links logs the user right out. I traced the issue to the fact that while initial authentication is performed by server 1, clicking on internal links leads to having the request sent to server 2. Server 2 does not share sessions with server 1, and the user is kicked out.




How can I fix it?



Do I need to share sessions between the two servers? If so, could you point me to a good guide for doing this?



Thanks.


Answer



If you want to have session stickiness in your load balancer, then you have to terminate the SSL on the load balancer. This means that you have to install the SSL certificate into load balancer.



Another solution is to configure the load balancer to use source IP stickiness for SSL (HTTPS).




A 3rd solution would be to keep the sessions in a common database (e.g. memcached, SQL database). For .NET see: http://support.microsoft.com/kb/317604 For PHP see: http://kevin.vanzonneveld.net/techblog/article/enhance_php_session_management/


networking - Cisco ASA - NATing internal -> internal IP for users on a VPN

RemoteSite   (172.16.1.*)
|
Internet --- InternetUsers
|
ASA --- LocalUsers (192.168.1.*)

|
InsideNet (10.1.1.*)
|
Router
|
DeeperNet (10.22.22.*)


I have a Cisco ASA 5510 with three interfaces, inside/outside/localusers.




On the inside there are two subnets, InsideNet and DeeperNet, connected by a simple router. The ASA's routing table has an entry for DeeperNet.



The remote sites connects via a lan-to-lan VPN on the outside interface. (This VPN includes InsideNet and DeeperNet, so a user from RemoteSite can contact servers on DeeperNet)



All Traffic to a web server on InsideNet (10.1.1.1) needs to be redirected to a web server on Deepernet (10.22.22.22)
For localusers this is easily done with a static NAT rule:




static (inside,localusers) 10.1.1.1 10.22.22.22 netmask 255.255.255.255





Any traffic from internet users comes to the public IP of the ASA, and is also easy to handled with a static NAT rule.




static (inside,outside) 203.203.203.203 10.22.22.22 netmask 255.255.255.255




Where I'm having problems is with the VPN users. I'm not sure exactly how the VPN functionality interacts with NAT, and what order NAT & VPN get applied to an ASA.



How do I configure a static NAT rule so any RemoteUsers sending data to 10.1.1.1 over the VPN have it redirected to 10.22.22.22?




Does this NAT take effect before or after VPN traffic selection? (that is, if the VPN was configured as RemoteSite<-> InsideNet only would traffic to 10.1.1.1 come through and be NATTed to the DeeperNet IP, or would the ASA look at the real IPs and decide it's not part of the VPN?)

Slow LDAP connection on Apache?

I'm trying to troubleshoot why the first call to a web service takes more than a minute while subsequent calls take less than a second (this repeats every 10+ minutes without calls).
I ran Wireshark on the server, and the difference I found was that the slow request makes calls to LDAP and the others don't. On the slow request, I can see the client/server Hello and Handshake and an LDAP call. Then the server waits (or addresses other requests) until EXACTLY 60 seconds later (see picture below), when another call to LDAP is made, immediately after which the first line of the code begins to execute.



The fact that it takes exactly 60 seconds EVERY TIME makes me think that there is a timeout involved somewhere, but I'm struggling with the configurations.




Other non-web-service requests start directly with the second LDAP call, but this REST request is being punished for some reason.



Any idea on how can I improve this? Anything will be a lot of help.
Thanks



See the execution time

Monday, August 14, 2017

email - Exchange 2010 CAS (OWA) in Exchange 2007 Environment



I want to be able to use the Exchange 2010 OWA with mailboxes on an Exchange 2007 server.




I want to maintain my mailboxes on the 2007 server and maintain my 2007 HUB/CAS role. My setup is all Windows 2008 x64 servers, 3 Exchange 2007 servers (2 mailbox servers (CCR Active/Passive clustering) and 1 HUB/CAS server).



My goal is to maintain the current environment and have all the mailboxes on 2007, maintain the 2007 HUB/CAS for mail flow and for mobile phone access (2010 is still buggy with some mobile platforms) and have a NEW Exchange 2010 CAS server that only acts as a web portal for mail (the 2010 OWA blows away the 2007 OWA as you are not required to use IE to get real features). This will run on a separate internal server and use a separate hostname from the inside and outside.



Right now I have everything running, the 2007 servers and the 2010 with ONLY the CAS role installed. My problem right now is when I access the 2010 OWA and log in, it opens up the mailbox using the 2007 version of OWA.



So with all that said, is what I am trying to do possible?



I did find It is possible to use the Exchange 2010 web client with an Exchange 2007 mail server? as a relevant post but with no real help except a link I have already read over in my few hours of research into this.




Any help is appreciated.


Answer



From the playing around I have done, I don't think you can view 2007 mailboxes using CAS 2010 unless it proxies to a CAS 2007. I have found this, which seems to say the same thing:




CAS 2010 can only open 2010 mailboxes; it cannot connect to 2003 and 2007 mailboxes. The only time CAS2010 will do anything OWA related with 2007 is if the user's mailbox lives in a non-internet facing AD site and only has CAS2007 boxes in that AD site. When this happens the CAS2010 box in the Internet facing AD site will proxy to CAS2007, which in turn opens the 2007 mailbox. In this one particular case you'll have to copy the CAS2007 binaries to the CAS2010 server for this to work.




http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/eeef7ac3-bae0-47d3-be9d-07e62d4c3b92




This also seems to confirm that it can't be done:
http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/96d63d71-c536-4dab-a401-f26c425c29c9



Sorry I couldn't provide a solution.


storage - Multiple 2 TB LUNs -- ZFS or ext4-over-LVM

I am going to deploy an Ubuntu 12.04-based archiving system. Due to limitations of the SAN Storage, it can only present me with 2TB LUNs. So, I have to merge multiple 2TB LUNs to achieve the desired space (4.5TB with expected growth of up to 1TB per year).



The SAN Storage has its own RAID protection (yes, not a replacement for backup, I know), so I don't need a software-based RAID protection.



Please suggest what I should use to 'combine' these LUNs:




  1. Combine them into a ZFS pool, or


  2. Combine them into an LVM VG



If possible, please also inform me of the drawback/benefit of both.



(All I can find on the Internet assume that the ZFS/LVM will be built on top of individual disks with RAID provided by RAIDZ/MDRAID, so really not suitable for my situation).



Note: Yeah, maybe I should use a better SAN Storage, but the 'Management' had decided to use this particular SAN Storage, and my hands are tied.

Sunday, August 13, 2017

how to discourage email spoofing

Emails from one of my production servers appear to be spoofed from another network.
A team (a group, individual, or professional company) is sending mass mail to their list of users using our email addresses, and I am receiving a lot of bounce messages.



The hosting provider's sysadmin team found that those emails did not originate from my domain/server. There are about 400-1000 emails a day bouncing back as failed deliveries.



Do I have to worry about the server getting a poor reputation because of this kind of illicit third-party activity? What is the way to discourage them? There is probably no way I can trace how many successful spoofs were sent; I am only receiving the messages that bounce back to the soft catch-all email account.

Saturday, August 12, 2017

php fpm - PHP5-FPM and 'ondemand'



I've set up a server with Nginx and PHP5-FPM, and things are running fine. However, as I add more and more sites to the server I see that the memory usage steadily increases, and I've come to the conclusion that PHP5-FPM is to "blame".



What I currently do is set up a separate PHP5-FPM pool for each site and configure that pool according to expected traffic. However, with enough sites, I will end up with a server that just sits on a rather large number of PHP5-FPM "children" which are just waiting for work.



I just found out about the ondemand PHP5-FPM mode, which allows me to configure PHP5-FPM in a way so that child processes are forked only when actually needed, and then kept alive for a given duration to process.



However, I can't really find much detail on this. What I'm most curious about is how the variables pm.max_children and pm.max_requests affect the ondemand mode (if at all). I assume that the variables pm.start_servers, pm.min_spare_servers and pm.max_spare_servers do not apply to the ondemand mode.


Answer




You're right: start_servers, min_spare_servers and max_spare_servers do not apply to the ondemand mode. The following variables are the ones that apply to ondemand mode:




  • pm.max_children

  • pm.process_idle_timeout

  • pm.max_requests



When you set pm = ondemand, FPM forks children as needed, always keeping the number of children less than or equal to pm.max_children, so this variable is an upper limit on the number of children forked at the same time.




The other two variables let you specify when a child is destroyed:




  • pm.process_idle_timeout sets how long a child waits without work before it is destroyed. It is defined in seconds.


  • pm.max_requests defines how many requests (one at a time) a child will process before it is destroyed. For example, if you set this variable to 50, a child will process 50 requests and then exit. If the FPM master process still needs another child, it will fork a new one.




In my company we use ondemand mode in FPM, and we use pm.max_requests to force recycling of FPM children and avoid high memory usage.
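Putting it together, a minimal ondemand pool definition might look like this (the values are illustrative, not recommendations):

```ini
[www]
pm = ondemand
pm.max_children = 10        ; hard cap on simultaneous children
pm.process_idle_timeout = 10s
pm.max_requests = 500       ; recycle each child after 500 requests
```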



Hope this helps,




Greetings.


Friday, August 11, 2017

linux networking - Debian 7 how are IPv6 link local addresses set?



It seems like when dhclient runs on eth0 I get an IPv4 address from the DHCP server and a Scope:Link IPv6 address attached to eth0:



inet6 addr: fe80::a00:27ff:fed0:4d41/64 Scope:Link


But I can't see from dhclient-script how that address is being added. On another interface with a static IP address, I'd like to add a link local IPv6 address, and I was wondering if there was a generic command to do that without knowing the mac.



Edit:
It looks like the kernel assigns the link-local address when you do "ip link set dev ethX up" or "ifconfig ethX up". However, in my case I had a cable plugged into the interface that was DHCPing and no cable plugged into the interface I was setting up statically. I can't verify until Monday, but I'm guessing the kernel does not assign link-local addresses to an interface if there's no link.



Answer



Link local addresses are derived from the MAC address of the device. They are auto-generated as a part of bringing the interface up. Auto-configuration includes a discovery process to ensure that the address is unique on the network.



A similar process is used to auto-configure routable addresses when a router advertisement is available. These addresses may be regenerated periodically to provide privacy.



RFC 4862 specifies the processes to be followed.
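The EUI-64 derivation that RFC 4862 link-local autoconfiguration relies on can be sketched in a few lines (illustrative only; it ignores privacy extensions and zero-compression edge cases):

```python
def mac_to_link_local(mac: str) -> str:
    """Derive the EUI-64 link-local address from a MAC address (sketch)."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                                   # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]      # insert ff:fe in the middle
    groups = ["%x" % ((eui64[i] << 8) | eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

# The MAC behind the address shown in the question:
print(mac_to_link_local("08:00:27:d0:4d:41"))           # fe80::a00:27ff:fed0:4d41
```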


Wednesday, August 9, 2017

Does Active Directory on Server 2003 R2 support IPv6 subnets in Sites and Services?



I've been experimenting with IPv6 at our organization. The domain controllers (all 2003 R2) and most of the servers (2003 R2 / 2008 / 2008 R2) have IPv6 configured. We have a subnet assigned through a tunnel provider.




Currently, the only workstation running IPv6 is mine (Windows 7). I have noticed that my workstation is picking domain controllers in other sites for things like DFS, and I finally realized that I don't have the IPv6 subnets set up in Active Directory Sites and Services (ADSS). But when I try to add an IPv6 prefix in ADSS, it tells me:



Windows cannot create the object 2001:xxxx:xxxx:xxxx::/64 because:  
The object name has bad syntax.


I believe I may be using the 2008 version of the admin tools (ADSS reports version 6.1.7601.17514) so I'm wondering if maybe my 2003 R2 Active Directory schema doesn't support configuring IPv6 subnets in ADSS. Is this true?



UPDATE




Even with 2008 R2 schema in Active Directory, I'm having the same problem. How can I get my IPv6 subnets into Sites and Services?


Answer



It turns out that Windows 2003 Domain Controllers will not accept IPv6 subnets in Sites and Services. After adding a 2008 R2 domain controller, I was able to add IPv6 subnets. But I also found out that running IPv6 on Windows 2003 does not work out very well, especially with Exchange in the mix.


Tuesday, August 8, 2017

domain name system - Route 53 - IP:PORT











Using Route 53, how would I make a record point to an IP:PORT?



I have tried using a CNAME but it will never resolve.


Answer



This is impossible. DNS can only map a name to an IP address, not to a port.


memory - ECC RAM, Background Scrubbing, and IOMMU BIOS Settings



I'm upping the RAM in one of our servers from 2GB to 4GB. Looking around in the BIOS, I see the following settings:



DRAM ECC Enable (Enabled)

MCA DRAM ECC Logging (Disabled)
ECC Chip Kill (Disable)
DRAM Scrub Redirect (Disable)
DRAM BG Scrub (Disabled)
L2 Cache BG Scrub (Disabled)
Data Cache BG Scrub (Disabled)
IOMMU Mode (Disabled)


Should these be turned on? And for the background scrubbing options, various times are in nano and microseconds; how would one go about calculating the optimal time to use?




Additionally, IOMMU has options for Best Fit and Absolute, and then allows me to set the aperture size in MB. What should this be set to? We're running VMware Server on this box, so my basic understanding is that IOMMU is helpful, but I don't know what the ideal aperture would be.


Answer



Sounds like the server you're using is AMD based; here is some information on I/O virtualization and AMD's IOMMU option that might help -> http://developer.amd.com/documentation/articles/pages/892006101.aspx -> specifically under "What's an IOMMU."



More information on Chipkill and BIOS scrub modes as they relate to ECC, with details on ECC scrubbing and the performance impact of these options -> http://episteme.arstechnica.com/eve/forums/a/tpc/f/77909774/m/346009152831


Raid 1+0 on HP ML350 G5



I'm trying to set up a ProLiant ML350 that has a Smart Array E200i RAID controller. I want to set up RAID 1 but there isn't any option for it, just 0, 1+0, and 5. I only want to use 2 drives since I just want to protect against hardware failure of 1 drive. The RAID controller allows me to select level 1+0 when I select the 2 drives. But as I understand it, RAID 1+0 is RAID 10 and needs a minimum of 4 drives. Why isn't there an option for selecting RAID 1 on this controller for mirroring? Or is it that if I select RAID 1+0 it will mirror the drive without the striping for performance? Thanks!


Answer



In HP SmartArray language, RAID 1+0 means RAID 1 when you only have two drives.



Azure Database For MySql - How to see the successful backup and restore logs?



I am using Azure Database for MySQL, vcore GpV2 and had opted for Geo-redundant backup during Database creation. According to the official documentation,




Generally, full backups occur weekly, differential backups occur twice
a day, and transaction log backups occur every five minutes.





But how shall I know whether backups are actually happening automatically? Where are the logs?



I checked the Activity Log but no backup logs appear there.


Answer



There are no backup logs. The backup of PaaS databases is handled as part of the platform, and the logs for this are not accessible to the user. The assumption is that if you use a PaaS service, you trust the provider to handle these things for you. If you don't, you should run in IaaS.


Monday, August 7, 2017

Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5



My server will be hosting numerous PHP web applications ranging from Joomla, Drupal, and some legacy (read: PHP4) and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers and issues like fluctuating loads or particularly high load expectations are not important.




Now, my question: are there any concerns I should know about when using Apache w/ MPM Worker, PHP 4/PHP 5.2/PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64MB)?



I have not tested the various applications extensively, and I have only recently learned how to install PHP and run it via FastCGI (rather than mod_php, which seemed impossible in this case, considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork).



I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern?



I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox) which has 2GB of RAM available to it.



Here is the configuration used (the actual server will likely use a configuration more tailored to suit its purposes):




Apache:



Server version: Apache/2.2.14 (Ubuntu)
Server built: Apr 13 2010 20:22:19
Server's Module Magic Number: 20051115:23
Server loaded: APR 1.3.8, APR-Util 1.3.9
Compiled using: APR 1.3.8, APR-Util 1.3.9
Architecture: 64-bit
Server MPM: Worker
threaded: yes (fixed thread count)

forked: yes (variable process count)


Worker:




StartServers 2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64

ThreadsPerChild 25
MaxClients 400
MaxRequestsPerChild 2000



PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2):



--enable-bcmath \
--enable-calendar \

--enable-exif \
--enable-ftp \
--enable-mbstring \
--enable-pcntl \
--enable-soap \
--enable-sockets \
--enable-sqlite-utf8 \
--enable-wddx \
--enable-zip \
--enable-fastcgi \

--with-zlib \
--with-gettext \


Apache php-fastcgi-setup.conf



FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2
FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13
FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9


ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/

Answer



The only question you should ask yourself is: are you really expecting enough traffic on your site to warrant such a complicated and risky setup (as opposed to the "regular" prefork + PHP as a module)?



I've been running a couple of PHP-heavy sites peaking at 10M+ hits/day without needing to switch to the threaded model. PHP per se is a mess; making it jump through hoops is asking for trouble.


Sunday, August 6, 2017

ubuntu - Kernel attempts to kill MySQL with sigkill

I'm running an Ubuntu server for MySQL.



Server info




  • Ubuntu 12.10

  • MySQL installed via apt


  • RAM: 512M

  • innodb_buffer_pool_size : 300M

  • There is no other memory intensive application running on this box.



Problem



Every morning, at approx. 6:40am something happens to cause a noticeable change in memory:



https://dl.dropbox.com/u/12520837/mem.s.png




At the same time, a systematic "kill" of running processes seems to occur, causing MySQL to restart.




Apr 10 06:43:40 mysql-01 kernel: [1866472.511966] select 1 (init), adj 0, size 41, to kill



Apr 10 06:43:40 mysql-01 kernel: [1866472.511973] select 385 (dbus-daemon), adj 0, size 44, to kill



Apr 10 06:43:40 mysql-01 kernel: [1866472.511975] select 389 (rsyslogd), adj 0, size 124, to kill




Apr 10 06:43:40 mysql-01 kernel: [1866472.511982] select 4578 (snmpd), adj 0, size 160, to kill



Apr 10 06:43:40 mysql-01 kernel: [1866472.514157] select 1 (init), adj 0, size 41, to kill



Apr 10 06:43:40 mysql-01 kernel: [1866472.514164] select 385 (dbus-daemon), adj 0, size 44, to kill



Apr 10 06:43:40 mysql-01 kernel: [1866472.514166] select 389 (rsyslogd), adj 0, size 124, to kill



Apr 10 06:43:40 mysql-01 kernel: [1866472.514171] select 4578 (snmpd), adj 0, size 160, to kill




Apr 10 06:43:44 mysql-01 /etc/mysql/debian-start[21807]: Upgrading MySQL tables if necessary.



Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21810]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored



Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21810]: Looking for 'mysql' as: /usr/bin/mysql



Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21810]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck



Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21810]: This installation of MySQL is already upgraded to 5.5.29, use --force if you still need to run mysql_upgrade




Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21821]: Checking for insecure root accounts.
Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21826]: Triggering myisam-recover for all MyISAM tables




Any help diagnosing this would be much appreciated!
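If the culprit is the kernel OOM killer (the log pattern suggests memory pressure around 6:40am), one common mitigation while hunting the triggering job is to make mysqld a less attractive victim via /proc. A sketch; it demonstrates on the current shell's PID because adjusting mysqld needs root, and the value -1000 mentioned in the comment is the conventional "never kill" setting:

```shell
# Inspect and set the OOM-killer score adjustment for a process (sketch).
# For MySQL you would use $(pidof mysqld) and, as root, a value like -1000
# to exempt it entirely; here we demonstrate on the current shell ($$).
echo 0 > "/proc/$$/oom_score_adj"   # the owner may write values >= the current one
cat "/proc/$$/oom_score_adj"        # prints the effective adjustment
```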

Saturday, August 5, 2017

linux - Messages deferred (time out while sending message body)



I have a CentOS Linux server running Postfix. It has no mailboxes but acts as a mail gateway for several domains hosted on another destination server, also CentOS Linux, running Sendmail. After checking the emails with antivirus and antispam, the Postfix server delivers them to the Sendmail server, which hosts the real mailboxes.




Many of our customers complain that they don't receive some emails, or receive them with many hours of delay, so we monitored /var/log/maillog on the Postfix server and found that those emails are not being delivered to the Sendmail server and are deferred with messages like:




Aug 23 11:48:58 srv7 postfix/smtp[618]: 980C773D64B: to=, relay=srv6.multisitesdominios.com.br[200.184.161.136], delay=2375, status=deferred (conversation with srv6.multisitesdominios.com.br[200.184.161.136] timed out while sending message body)




Most of these emails have attachments, so they are probably somewhat large. We noticed that small emails pass through and are received normally.



My questions:





  1. What is the real reason for this problem? Is it really the email size?

  2. Is there any Postfix parameter we should tune to avoid it?

  3. Would the problem be at the destination Server (the Sendmail one) and not at this "gateway" Server (the Postfix one)?

  4. What would be the definite solution?


Answer




  1. It's indirectly the size: the transfer takes too long without progress on the data, so one side hangs up.


  2. smtp_data_xfer_timeout. It defaults to 180 seconds, which should be OK in every scenario.

  3. It can be on both. But probably it is the network in between.

  4. Check why the communication stalls.
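If tuning does turn out to be necessary, a sketch of the corresponding Postfix setting on the gateway (the 600s value is purely illustrative):

```ini
# /etc/postfix/main.cf (illustrative) -- raise the data-phase timeout
# from its 180s default if large attachments genuinely need longer
smtp_data_xfer_timeout = 600s
```

Reload Postfix afterwards (postfix reload) and watch the maillog to see whether the deferrals stop; if they don't, the stall is more likely a network or receiving-side problem.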


apache 2.2 - Apache2 ssl + virtualhosts of the same domain



My webserver hosts several subdomains (vhosts) of a website, say sub1.example.com and sub2.example.com. The only difference between these vhosts is the documentroot. Everything else is shared across vhosts.



Now I would like to do the same for HTTPS, but of course SSL + virtual hosts is tricky. The good thing is that my SSL certificate is valid for my complete domain, so I don't need to specify per-vhost certificates. The only thing I want to specify per vhost is the document root.



The FAQ says:





Name-Based Virtual Hosting is a very popular method of identifying
different virtual hosts. It allows you to use the same IP address and
the same port number for many different sites. When people move on to
SSL, it seems natural to assume that the same method can be used to
have lots of different SSL virtual hosts on the same server.



It is possible, but only if using a 2.2.12 or later web server, built
with 0.9.8j or later OpenSSL. This is because it requires a feature
that only the most recent revisions of the SSL specification added,

called Server Name Indication (SNI).




I am using Ubuntu 11.10 which ships with Apache 2.2.20 and openssl 1.0.0e so I think I should be good. However, I can't get it to work. I already have default and default-ssl sites enabled. If I add a virtualhost like I would do for HTTP:




<VirtualHost *:443>
    ServerName sub1.example.com
    DocumentRoot /var/www/sub1
</VirtualHost>




And then try to restart Apache, I get:




[Thu Mar 01 23:55:15 2012] [warn] default VirtualHost overlap on
port 443, the first has precedence Action 'start' failed.



Answer



What you probably need is three things:





  1. A NameVirtualHost *:443 directive. If you want to follow the Ubuntu conventions, put this in ports.conf.

  2. Fix the host specification on the default SSL vhost. It's set to _default_:443 in the default config; it needs to match the listener specification of your other vhost and your NameVirtualHost directive.

  3. You also need to specify the SSL-related settings in your new vhost. SSLEngine On and your certificate settings are needed.



...and if that doesn't fix it, please provide your existing config and the output of apache2ctl -S.
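Putting the three points together, a configuration sketch (the certificate paths are assumptions; substitute your own):

```apache
# ports.conf (illustrative)
NameVirtualHost *:443
Listen 443

# per-subdomain SSL vhost -- only DocumentRoot differs between them
<VirtualHost *:443>
    ServerName sub1.example.com
    DocumentRoot /var/www/sub1
    SSLEngine On
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
</VirtualHost>
```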


apache 2.2 - Interpreting Netstat output

I have a LAMP stack running Fedora 12 and I run the following command on the server:
netstat -anp |grep 'tcp\|udp' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n



As far as I understand this should show me the IPs of the connected processes and how many connections per IP there are, correct?



I would need some help in interpreting the results. Out of 250 total connections, 140 don't show any IP (just a blank), 80 are from my slave DB, 20 are from IP 0.0.0.0 and only 10 are from what appears to be normal connections to the server.



My questions are thus:
1. What do the 140 blank IPs represent?

2. Why does my slave DB (MySQL) need 80 connections? Can this number be reduced? If so how?
3. What are 20 0.0.0.0 connections?



I apologise if this is a trivial question, but I am a relative newbie at this.



Adding part of the netstat -anp output (I just removed my server's IPs and had to trim because of the post length limit):




Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program nams

tcp 0 0 0.0.0.0:2234 0.0.0.0:* LISTEN 1616/sshd
tcp 0 0 my-server-ip:443 201.23.177.150:18846 SYN_RECV -
tcp 0 0 my-server-ip:443 182.156.150.187:41594 SYN_RECV -
tcp 0 0 my-server-ip:443 151.61.126.161:50591 SYN_RECV -
tcp 0 0 my-server-ip:443 119.30.38.84:51449 SYN_RECV -
tcp 0 0 my-server-ip:443 190.121.239.151:38961 SYN_RECV -
tcp 0 0 my-server-ip:443 201.23.160.144:54884 SYN_RECV -
tcp 0 0 my-server-ip:443 151.61.126.161:50592 SYN_RECV -
tcp 0 0 0.0.0.0:993 0.0.0.0:* LISTEN 1625/dovecot
tcp 0 0 0.0.0.0:995 0.0.0.0:* LISTEN 1625/dovecot

tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 2393/mysqld
tcp 0 0 0.0.0.0:110 0.0.0.0:* LISTEN 1625/dovecot
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN 1625/dovecot
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1302/rpcbind
tcp 0 0 my-server-ip:80 201.23.177.72:17927 SYN_RECV -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1436/cupsd
tcp 0 0 0.0.0.0:44183 0.0.0.0:* LISTEN 1356/rpc.statd
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 1718/master
tcp 0 0 my-server-ip:46961 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:46978 my-slave-db-ip:3306 TIME_WAIT -

tcp 0 0 my-server-ip:47147 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47264 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47250 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47243 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47273 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47267 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:46955 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 70 my-server-ip:47282 my-slave-db-ip:3306 ESTABLISHED 25697/httpd
tcp 0 0 my-server-ip:47034 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:41433 my-slave-db-ip:3306 ESTABLISHED 24983/httpd

tcp 0 0 my-server-ip:46986 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47247 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 52 my-server-ip:2234 177.42.224.14:57947 ESTABLISHED 25774/0
tcp 0 0 my-server-ip:46969 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 70 my-server-ip:41504 my-slave-db-ip:3306 ESTABLISHED 24982/httpd
tcp 0 0 my-server-ip:47271 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:46970 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 69 my-server-ip:47275 my-slave-db-ip:3306 ESTABLISHED 25836/httpd
tcp 0 0 my-server-ip:46953 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47080 my-slave-db-ip:3306 TIME_WAIT -

tcp 0 0 my-server-ip:47094 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47019 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:46956 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:46963 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 70 my-server-ip:46930 my-slave-db-ip:3306 ESTABLISHED 24976/httpd
tcp 0 0 my-server-ip:46906 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47272 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 70 my-server-ip:46888 my-slave-db-ip:3306 ESTABLISHED 25709/httpd
tcp 0 0 my-server-ip:2234 177.42.224.14:64099 ESTABLISHED 25866/sshd: root@no
tcp 0 0 my-server-ip:47112 my-slave-db-ip:3306 TIME_WAIT -

tcp 0 0 my-server-ip:47221 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 240 my-server-ip:47276 my-slave-db-ip:3306 ESTABLISHED 25374/httpd
tcp 0 0 my-server-ip:47109 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 240 my-server-ip:47278 my-slave-db-ip:3306 ESTABLISHED 25711/httpd
tcp 0 0 my-server-ip:41195 my-slave-db-ip:3306 ESTABLISHED 24966/httpd
tcp 0 0 my-server-ip:46975 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47027 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 86 my-server-ip:47281 my-slave-db-ip:3306 ESTABLISHED 25599/httpd
tcp 0 70 my-server-ip:46993 my-slave-db-ip:3306 ESTABLISHED 25841/httpd
tcp 0 0 my-server-ip:47119 my-slave-db-ip:3306 TIME_WAIT -

tcp 0 0 my-server-ip:47165 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47032 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47054 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 1 my-server-ip:47322 94.100.187.197:25 SYN_SENT 25909/smtp
tcp 0 0 my-server-ip:47185 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 70 my-server-ip:46893 my-slave-db-ip:3306 ESTABLISHED 25712/httpd
tcp 0 0 my-server-ip:41273 my-slave-db-ip:3306 ESTABLISHED 24974/httpd
tcp 0 0 my-server-ip:47076 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47214 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47241 my-slave-db-ip:3306 TIME_WAIT -

tcp 0 0 my-server-ip:47176 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47012 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:3306 my-slave-db-ip:57117 ESTABLISHED 2393/mysqld
tcp 0 0 my-server-ip:143 177.42.224.14:62179 ESTABLISHED 23981/imap-login
tcp 0 0 my-server-ip:47060 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 240 my-server-ip:47280 my-slave-db-ip:3306 ESTABLISHED 25602/httpd
tcp 0 0 my-server-ip:46967 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47003 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:46972 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:46946 my-slave-db-ip:3306 TIME_WAIT -

tcp 0 0 my-server-ip:47202 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47017 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 644 my-server-ip:46811 my-slave-db-ip:3306 ESTABLISHED 25604/httpd
tcp 0 0 my-server-ip:47030 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47253 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47044 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47274 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 my-server-ip:47096 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 240 my-server-ip:47277 my-slave-db-ip:3306 ESTABLISHED 25842/httpd
tcp 0 0 my-server-ip:47269 my-slave-db-ip:3306 TIME_WAIT -

tcp 0 240 my-server-ip:47279 my-slave-db-ip:3306 ESTABLISHED 25506/httpd
tcp 0 490 my-server-ip:46673 my-slave-db-ip:3306 ESTABLISHED 25665/httpd
tcp 0 0 my-server-ip:47045 my-slave-db-ip:3306 TIME_WAIT -
tcp 0 0 :::2234 :::* LISTEN 1616/sshd
tcp 0 0 :::443 :::* LISTEN 24956/httpd
tcp 0 0 :::111 :::* LISTEN 1302/rpcbind
tcp 0 0 :::80 :::* LISTEN 24956/httpd
tcp 0 0 ::1:631 :::* LISTEN 1436/cupsd
tcp 0 0 :::25 :::* LISTEN 1718/master
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:88.154.220.155:33982 ESTABLISHED 25842/httpd

tcp 0 47 ::ffff:my-server-ip:443 ::ffff:186.122.246.21:52748 ESTABLISHED 25837/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:83.178.136.202:55078 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55984 ESTABLISHED 25608/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:83.178.136.202:41350 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:90.210.173.3:51483 ESTABLISHED 25697/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55965 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.144.158.19:40385 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:83.178.136.202:47388 TIME_WAIT -
tcp 0 6230 ::ffff:my-server-ip:443 ::ffff:106.197.152.19:48216 FIN_WAIT1 -
tcp 1 0 ::ffff:my-server-ip:80 ::ffff:80.7.104.8:36967 CLOSE_WAIT 24974/httpd

tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55968 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:93.146.164.167:50292 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:206.53.54.187:38688 ESTABLISHED 25374/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:81.136.129.137:52966 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55966 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:220.255.1.131:8102 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:41.222.192.226:64538 ESTABLISHED 25832/httpd
tcp 1 0 ::ffff:my-server-ip:80 ::ffff:80.7.104.8:60849 CLOSE_WAIT 24966/httpd
tcp 0 513 ::ffff:my-server-ip:443 ::ffff:187.80.54.17:41707 FIN_WAIT1 -
tcp 1 0 ::ffff:my-server-ip:80 ::ffff:80.7.104.8:57877 CLOSE_WAIT 24983/httpd

tcp 0 0 ::ffff:my-server-ip:443 ::ffff:180.178.25.61:5453 ESTABLISHED 25506/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.144.158.195:4773 FIN_WAIT2 -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:79.26.90.157:24409 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:197.200.85.248:13747 ESTABLISHED 25599/httpd
tcp 0 6230 ::ffff:my-server-ip:443 ::ffff:41.53.45.194:33870 FIN_WAIT1 -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55960 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:2.158.95.179:60733 ESTABLISHED 24976/httpd
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:213.30.118.99:55288 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.144.158.195:4769 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:187.80.191.105:45382 FIN_WAIT2 -

tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55967 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:83.35.243.61:53344 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:83.178.136.202:40979 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:200.81.44.250:31556 TIME_WAIT -
tcp 1 0 ::ffff:my-server-ip:80 ::ffff:80.7.104.8:57631 CLOSE_WAIT 24982/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55980 ESTABLISHED 25605/httpd
tcp 0 11201 ::ffff:my-server-ip:443 ::ffff:93.65.181.47:53717 FIN_WAIT1 -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:188.80.5.246:53477 FIN_WAIT2 -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:41.190.8.31:12272 ESTABLISHED 25217/httpd
tcp 1 0 ::ffff:my-server-ip:80 ::ffff:80.7.104.8:39440 CLOSE_WAIT 25712/httpd

tcp 0 0 ::ffff:my-server-ip:80 ::ffff:178.96.198.45:43412 TIME_WAIT -
tcp 0 6230 ::ffff:my-server-ip:443 ::ffff:41.17.156.241:36223 FIN_WAIT1 -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55983 ESTABLISHED 25713/httpd
tcp 0 213 ::ffff:my-server-ip:443 ::ffff:187.119.160.67:59660 FIN_WAIT1 -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55964 TIME_WAIT -
tcp 1 455 ::ffff:my-server-ip:80 ::ffff:187.141.84.94:56325 CLOSING -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:197.252.77.141:5340 FIN_WAIT2 -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:201.52.83.193:50966 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:151.26.113.212:26581 FIN_WAIT2 -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:83.178.136.202:56702 TIME_WAIT -

tcp 0 6230 ::ffff:my-server-ip:443 ::ffff:95.66.57.147:60665 FIN_WAIT1 -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:46.222.107.130:55683 TIME_WAIT -
tcp 1 0 ::ffff:my-server-ip:80 ::ffff:201.23.160.151:29801 CLOSE_WAIT 25709/httpd
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:85.77.28.94:47593 ESTABLISHED 25841/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:151.60.30.6:48331 FIN_WAIT2 -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:117.229.95.166:48579 ESTABLISHED 25711/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:83.178.61.76:37227 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:83.178.136.202:43040 TIME_WAIT -
tcp 1 0 ::ffff:my-server-ip:80 ::ffff:80.7.104.8:56232 CLOSE_WAIT 25604/httpd
tcp 0 6230 ::ffff:my-server-ip:443 ::ffff:41.190.3.232:17447 FIN_WAIT1 -

tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55985 ESTABLISHED 24991/httpd
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:186.182.113.12:59188 TIME_WAIT -
tcp 1 0 ::ffff:my-server-ip:80 ::ffff:201.23.160.151:28767 CLOSE_WAIT 25665/httpd
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:178.240.115.10:53294 FIN_WAIT2 -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:85.62.234.254:45417 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:2.232.62.163:43565 FIN_WAIT2 -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:117.230.223.85:48221 ESTABLISHED 25602/httpd
tcp 0 1 ::ffff:my-server-ip:80 ::ffff:189.60.238.52:42449 FIN_WAIT1 -
tcp 0 28 ::ffff:my-server-ip:443 ::ffff:101.2.90.246:50399 CLOSING -
tcp 0 513 ::ffff:my-server-ip:443 ::ffff:49.201.248.245:60295 FIN_WAIT1 -

tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.4.135.249:65105 ESTABLISHED 25836/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:201.191.198.96:32171 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55982 ESTABLISHED 25843/httpd
tcp 0 1 ::ffff:my-server-ip:80 ::ffff:189.131.12.46:55981 FIN_WAIT1 25546/httpd
tcp 0 3533 ::ffff:my-server-ip:443 ::ffff:189.116.225.43:52473 ESTABLISHED 25727/httpd
tcp 0 0 ::ffff:my-server-ip:80 ::ffff:83.178.136.202:40922 TIME_WAIT -
tcp 0 0 ::ffff:my-server-ip:443 ::ffff:41.206.15.18:9701 TIME_WAIT -
udp 0 0 0.0.0.0:39616 0.0.0.0:* 1356/rpc.statd
udp 0 0 0.0.0.0:68 0.0.0.0:* 1384/dhclient
udp 0 0 0.0.0.0:5353 0.0.0.0:* 1336/avahi-daemon:

udp 0 0 0.0.0.0:111 0.0.0.0:* 1302/rpcbind
udp 0 0 0.0.0.0:60021 0.0.0.0:* 1336/avahi-daemon:
udp 0 0 0.0.0.0:629 0.0.0.0:* 1302/rpcbind
udp 0 0 0.0.0.0:631 0.0.0.0:* 1436/cupsd
udp 0 0 0.0.0.0:684 0.0.0.0:* 1356/rpc.statd
udp 0 0 :::111 :::* 1302/rpcbind
udp 0 0 :::629 :::* 1302/rpcbind
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node PID/Program name Path
unix 2 [ ACC ] STREAM LISTENING 10999 1436/cupsd /var/run/cups/cups.sock

unix 2 [ ACC ] STREAM LISTENING 11536 1625/dovecot /var/run/dovecot/dict-server
unix 2 [ ACC ] STREAM LISTENING 11538 1625/dovecot /var/run/dovecot/login/default
unix 2 [ ACC ] STREAM LISTENING 11543 1625/dovecot /var/run/dovecot/auth-worker.1626
unix 2 [ ACC ] STREAM LISTENING 11892 1718/master public/cleanup
unix 2 [ ACC ] STREAM LISTENING 11924 1718/master public/flush
unix 2 [ ACC ] STREAM LISTENING 11944 1718/master public/showq
unix 2 [ ACC ] STREAM LISTENING 11567 1626/dovecot-auth /var/run/dovecot/auth-master
unix 2 [ ACC ] STREAM LISTENING 10206 1316/dbus-daemon /var/run/dbus/system_bus_socket
unix 2 [ ] DGRAM 7400 1/init @/com/ubuntu/upstart
unix 2 [ ACC ] STREAM LISTENING 19051 2393/mysqld /var/lib/mysql/mysql.sock

unix 23 [ ] DGRAM 10066 1256/rsyslogd /dev/log
unix 2 [ ] DGRAM 7555 551/udevd @/org/kernel/udev/udevd
unix 2 [ ] DGRAM 11146 1469/hald @/org/freedesktop/hal/udev_event
unix 2 [ ACC ] STREAM LISTENING 11899 1718/master private/tlsmgr
unix 2 [ ACC ] STREAM LISTENING 11903 1718/master private/rewrite
unix 2 [ ACC ] STREAM LISTENING 11908 1718/master private/bounce
unix 2 [ ACC ] STREAM LISTENING 11912 1718/master private/defer
unix 2 [ ACC ] STREAM LISTENING 11916 1718/master private/trace
unix 2 [ ACC ] STREAM LISTENING 11920 1718/master private/verify
unix 2 [ ACC ] STREAM LISTENING 11928 1718/master private/proxymap

unix 2 [ ACC ] STREAM LISTENING 11932 1718/master private/proxywrite
unix 2 [ ACC ] STREAM LISTENING 11936 1718/master private/smtp
unix 2 [ ACC ] STREAM LISTENING 11940 1718/master private/relay
unix 2 [ ACC ] STREAM LISTENING 11948 1718/master private/error
unix 2 [ ACC ] STREAM LISTENING 11952 1718/master private/retry
unix 2 [ ACC ] STREAM LISTENING 11956 1718/master private/discard
unix 2 [ ACC ] STREAM LISTENING 11960 1718/master private/local
unix 2 [ ACC ] STREAM LISTENING 11964 1718/master private/virtual
unix 2 [ ACC ] STREAM LISTENING 11968 1718/master private/lmtp
unix 2 [ ACC ] STREAM LISTENING 11972 1718/master private/anvil

unix 2 [ ACC ] STREAM LISTENING 11976 1718/master private/scache
unix 2 [ ACC ] STREAM LISTENING 11082 1469/hald @/var/run/hald/dbus-zZ2k930EI0
unix 2 [ ACC ] STREAM LISTENING 11561 1626/dovecot-auth /var/spool/postfix/private/auth
unix 2 [ ACC ] STREAM LISTENING 11124 1469/hald @/var/run/hald/dbus-3PhM3NhFOT
unix 2 [ ACC ] STREAM LISTENING 10147 1302/rpcbind /var/run/rpcbind.sock
unix 2 [ ACC ] STREAM LISTENING 10285 1336/avahi-daemon: /var/run/avahi-daemon/socket
unix 2 [ ACC ] STREAM LISTENING 11594 1635/saslauthd /var/run/saslauthd/mux
unix 2 [ ACC ] STREAM LISTENING 12223 1762/abrtd /var/run/abrt/abrt.socket
unix 2 [ ACC ] STREAM LISTENING 11055 1461/acpid /var/run/acpid.socket
unix 2 [ ACC ] STREAM LISTENING 11440 1595/pcscd /var/run/pcscd.comm

unix 2 [ ] DGRAM 561886 25911/bounce
unix 3 [ ] STREAM CONNECTED 561870 1978/tlsmgr private/tlsmgr
unix 3 [ ] STREAM CONNECTED 561869 25910/smtp
unix 2 [ ] DGRAM 561861 25910/smtp
unix 3 [ ] STREAM CONNECTED 561856 1978/tlsmgr private/tlsmgr
unix 3 [ ] STREAM CONNECTED 561855 25909/smtp
unix 2 [ ] DGRAM 561847 25909/smtp
unix 3 [ ] STREAM CONNECTED 561857 25909/smtp private/smtp
unix 3 [ ] STREAM CONNECTED 561845 2111/qmgr
unix 3 [ ] STREAM CONNECTED 561844 2393/mysqld /var/lib/mysql/mysql.sock

unix 3 [ ] STREAM CONNECTED 561843 25906/proxymap
unix 3 [ ] STREAM CONNECTED 561842 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561841 25906/proxymap
unix 2 [ ] DGRAM 561829 25906/proxymap
unix 3 [ ] STREAM CONNECTED 561837 25906/proxymap private/proxymap
unix 3 [ ] STREAM CONNECTED 561828 25905/trivial-rewri
unix 2 [ ] DGRAM 561820 25905/trivial-rewri
unix 3 [ ] STREAM CONNECTED 561838 25905/trivial-rewri private/rewrite
unix 3 [ ] STREAM CONNECTED 561819 2111/qmgr
unix 3 [ ] STREAM CONNECTED 561804 2393/mysqld /var/lib/mysql/mysql.sock

unix 3 [ ] STREAM CONNECTED 561803 25697/httpd
unix 3 [ ] STREAM CONNECTED 561802 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561801 25697/httpd
unix 3 [ ] STREAM CONNECTED 561798 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561797 25599/httpd
unix 3 [ ] STREAM CONNECTED 561796 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561795 25599/httpd
unix 3 [ ] STREAM CONNECTED 561792 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561791 25602/httpd
unix 3 [ ] STREAM CONNECTED 561790 2393/mysqld /var/lib/mysql/mysql.sock

unix 3 [ ] STREAM CONNECTED 561789 25602/httpd
unix 3 [ ] STREAM CONNECTED 561787 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561786 25506/httpd
unix 3 [ ] STREAM CONNECTED 561785 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561784 25506/httpd
unix 3 [ ] STREAM CONNECTED 561782 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561781 25711/httpd
unix 3 [ ] STREAM CONNECTED 561780 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561779 25711/httpd
unix 3 [ ] STREAM CONNECTED 561777 2393/mysqld /var/lib/mysql/mysql.sock

unix 3 [ ] STREAM CONNECTED 561776 25842/httpd
unix 3 [ ] STREAM CONNECTED 561775 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561774 25842/httpd
unix 3 [ ] STREAM CONNECTED 561772 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561771 25374/httpd
unix 3 [ ] STREAM CONNECTED 561770 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561769 25374/httpd
unix 3 [ ] STREAM CONNECTED 561760 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 561759 25836/httpd
unix 3 [ ] STREAM CONNECTED 561758 2393/mysqld /var/lib/mysql/mysql.sock

unix 3 [ ] STREAM CONNECTED 561757 25836/httpd
unix 3 [ ] STREAM CONNECTED 560678 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 560677 25841/httpd
unix 3 [ ] STREAM CONNECTED 560676 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 560675 25841/httpd
unix 3 [ ] STREAM CONNECTED 560401 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 560400 24976/httpd
unix 3 [ ] STREAM CONNECTED 560399 2393/mysqld /var/lib/mysql/mysql.sock
unix 3 [ ] STREAM CONNECTED 560398 24976/httpd
unix 3 [ ] STREAM CONNECTED 560346 1626/dovecot-auth /var/run/dovecot/login/default
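One likely source of the blank entries, judging from the output above, is the IPv6-format addresses (`:::*` and `::ffff:...`): their leading colons make `cut -d: -f1` return an empty field, while `0.0.0.0` comes from listening sockets. A sketch of a pipeline that restricts itself to established connections and copes with the mapped form; the sample lines keep it self-contained, and in real use you would pipe `netstat -ant` in instead:

```shell
# Count remote peers of ESTABLISHED connections (sketch), handling the
# ::ffff: IPv6-mapped form that yields blank fields with `cut -d: -f1`.
printf '%s\n' \
  'tcp 0 0 10.0.0.1:80 5.6.7.8:1000 ESTABLISHED' \
  'tcp 0 0 ::ffff:10.0.0.1:80 ::ffff:5.6.7.8:1001 ESTABLISHED' \
  'tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN' |
awk '$6 == "ESTABLISHED" {
    sub(/^::ffff:/, "", $5)      # strip the IPv6-mapped prefix
    sub(/:[0-9]+$/, "", $5)      # drop the port, keeping the address intact
    print $5
}' | sort | uniq -c | sort -n
```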
