Thursday, November 30, 2017

centos - strange linux zip command behaviour



I run this on my CentOS box:



zip -r backup.zip /home/user/domains/example.com/public_html/


The command works as it should, but I have an annoying problem.



What I expect is when I open the zip to have one folder in it: public_html




Instead I have the full path /home/user/domains/example.com/public_html/ inside the archive. Does anyone know how to prevent this?


Answer



That looks like expected behavior, not strange behavior.



To get what you want, try this:



cd /home/user/domains/example.com
zip -r /srv/backups/example.com/backup.zip public_html
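
If you'd rather not change the shell's working directory, a subshell does the same thing in one line (a minimal sketch of the same idea):

(cd /home/user/domains/example.com && zip -r /srv/backups/example.com/backup.zip public_html)

The archive then contains public_html/ as its top-level folder, because zip stores paths relative to the directory it is invoked from.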


Wednesday, November 29, 2017

raid - vsphere 5 - what is the best way to partition local disks?

We have a Dell R815 with 6x 900 GB SAS disks and a PERC H700 controller.




We want to use it as a dedicated bare metal vsphere server (stand alone).



We have no idea of the best way to partition the beast.



Currently it's one big RAID 5 array, with 3 virtual disks:
0 = 1.8 TB
1 = 1.8 TB
2 = 500 GB



We could just run with this, install vSphere 5 on disk 2, and I guess use the other two partitions for VMs (i.e. create 2 storage pools). However, 500 GB is probably a waste for just installing vSphere. Also, we guess that to get the best performance, we would want different VMs running on different disks via different channels on the controller?




Does anyone have any suggestions for the best disk/RAID layout?

load balancing - nginx geo location module configuration using geo database?

I've set up nginx as a reverse proxy for a couple of Apache backend/upstream servers.



Using the GeoLite database from MaxMind, I'm trying to load-balance requests between the two servers depending on the client's country code.



Nginx Configuration:



geo $geo {
    default default;
    include geo.conf;
}

upstream default.backend {
    server 192.168.0.1:8080; # Server A
    server 192.168.0.2:8080; # Server B
}

upstream DE.backend {
    server 192.168.0.1:8080; # Server A
}

upstream US.backend {
    server 192.168.0.2:8080; # Server B
}

server {
    listen 80;
    server_name myserver.com;

    location / {
        proxy_pass http://$geo.backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}


So I'm trying to send any German clients to server A, US clients to server B, and any clients matching neither the German nor the US country code should be load-balanced between servers A & B.



However, since geo.conf contains country codes for many (all) other countries, those values are assigned to the variable $geo instead of the 'default' value.



With my current configuration this causes '502 Bad Gateway' errors with all requests that aren't DE or US.




Nginx error log:



2013/10/11 08:18:50 [error] 25017#0: *1 no resolver defined to resolve NL.backend, client: 85.17.131.209, server: myserver.com, request: "GET / HTTP/1.1", host: "myserver.com"


Nginx access log:



85.17.131.209 - - [11/Oct/2013:08:18:50 -0700] "GET / HTTP/1.1" 502 574 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.63 Safari/537.31" "-"



How can I configure nginx.conf to treat any country code from geo.conf that's not DE or US as default, and load-balance it accordingly to the upstream default.backend?
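
One possible approach (a sketch of my own, not from the thread, assuming a reasonably recent nginx with the map directive) is to normalize $geo before using it to pick an upstream; map's default entry catches every country code you haven't listed:

map $geo $backend_pool {
    default default.backend;
    DE      DE.backend;
    US      US.backend;
}

With proxy_pass http://$backend_pool; in the location block, NL and every other code would fall through to default.backend instead of producing an unresolvable NL.backend name (nginx resolves variable proxy_pass targets against defined upstream groups first, so no resolver is needed).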

Tuesday, November 28, 2017

high availability - Understanding the nameserver aspect of a DNS based failover system

As part of a project I'm involved in, a system is required with as close to 99.999% uptime as possible (the system involves healthcare). The solution I am investigating involves having multiple sites which in turn have their own load balancers and multiple internal servers, and their own replicated database which is synchronised with every other site. What sits in front of all of this is a DNS based failover system that redirects traffic if a site goes down (or is manually taken down for maintenance).




What I'm struggling with however is how the DNS aspect functions without preventing a single point of failure. I've seen talk of floating IPs (which present that point of failure), various managed services such as DNSMadeEasy (which don't provide the ability to fully test their failover process during their free trial, so I can't verify if it's right for the project or not) and much more, and have been playing around with simple solutions such as assigning multiple A records for a domain name (which I understand falls far short given the discrepancies between how different browsers will interact with such a setup).



For a more robust DNS based approach, do you simply stipulate a nameserver for each location on a domain, run a nameserver at each location, and update each nameserver's independent records regularly when a failure is detected at another site (using scripts run on each nameserver to check all other sites)? If so, aren't there still the same issues that are found with regularly changed A records (browsers not updating to the new records, or ignoring very low TTLs)?






I have been reading around this subject for several days now (including plenty of Q&As on here), but feel like I'm missing a fundamental piece of the puzzle.



Thanks in advance!

Tuning Apache KeepAlive Timeout for HTTPS

My website forces HTTPS everywhere and has an average first load time of 3-5 seconds. Thanks to caching, repeat load time is 0.8 seconds.



The SSL negotiation takes 150-300 ms on my server, so I want to keep each connection alive as long as possible to avoid repeating that latency.




SSLSessionCache is set to the default 300 seconds.



Apache KeepAlive Timeout was recently lowered from 5 seconds to 2 seconds.



This change has resulted in a noticeable decrease in server load average (5% average instead of 10%), but I'm wondering if it could also be causing slower first load times, given that first loads take 3-5 seconds. Does that mean the server must perform the SSL negotiation again each time a connection passes the 2-second timeout?



Is it better to have slightly higher load averages with fewer SSL negotiations (but more sleeping httpd tasks), or lower load averages with more SSL negotiations?



We definitely have plenty of CPU & memory resources to spare. So ultimately the question is: what will result in the best performance for our viewers? Raising the KeepAliveTimeout to 3-5 seconds, or keeping it at 2?
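
For reference, a minimal sketch of the directives under discussion, using the values mentioned above (the cache path and the MaxKeepAliveRequests value are assumptions, not taken from the actual config):

KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
SSLSessionCache shmcb:/var/run/ssl_scache(512000)
SSLSessionCacheTimeout 300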




Thanks!

Monday, November 27, 2017

Nginx - Does doing a nginx -s reload when upgrading nginx cause the binary to reload?



We were wondering the following:



Imagine upgrading nginx by compiling the new version and doing a make install.




The targets are all the same; essentially the old version is overwritten (we usually pull the configure string from nginx -V).



Is it sufficient to do a nginx -s reload to force the new version of nginx to start being used? Or do we have to kill the process and start it back up?



We are asking this to try to limit downtime as much as possible. I know, I know, a quick killall nginx; nginx... is only a second of downtime... but why have even that second of downtime if it can be avoided?



Thanks.


Answer



nginx -s reload is not sufficient to upgrade to a new binary. Read this entry on the wiki to see the series of signals that need to be sent to upgrade to a new binary. Alternatively, since you're already installing from source, there's a make upgrade target you can run after make install that will send the signals for you.
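
For reference, the on-the-fly upgrade boils down to a signal sequence along these lines (a sketch; the pid file path depends on your configure string, so treat it as an assumption):

kill -USR2 $(cat /usr/local/nginx/logs/nginx.pid)         # start a new master using the new binary
kill -WINCH $(cat /usr/local/nginx/logs/nginx.pid.oldbin) # gracefully stop the old workers
kill -QUIT $(cat /usr/local/nginx/logs/nginx.pid.oldbin)  # shut down the old master once the new one is confirmed good

Until the final QUIT, the old master is still around, so you can roll back by sending it HUP to respawn its workers instead.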


windows xp - Error 720 on VPN (PPTP) attempt

When I attempt to connect to a server running XP x64 (so essentially Server 2003) using a PPTP connection, it fails with client-side error



Registering your computer on the network...




Error 720: A connection to the remote computer could not be established. You might need to change the network settings for this configuration.



and server-side error



Event ID: 20050



The user WINSERV3\Andy connected to port VPN8-1 has been disconnected because no network protocols were successfully negotiated.



I have configured the router to pass both TCP packets on 1723 and GRE packets. I have used Wireshark (filtering out ARP, UDP, and all TCP ports other than 1723) to observe the packets received by the server. Wireshark does not explicitly name any protocol GRE, but it does tell me the server sent and received TCP, PPTP, PPP LCP, PPP CHAP, PPP CBCP, and PPP IPCP. The connection seems to go wrong at packet 30, where the protocol is PPP LCP, with the payload of the packet being labeled "Protocol Reject". Obviously, this is going from server to client.




This would seem to lead to the conclusion that there is something wrong with my client, which runs Windows 7 Ultimate x64. However, it is able to connect to my house's router, which runs the DD-WRT firmware and is thus a PPTP endpoint. I'm thoroughly at a loss. Please help!

Sunday, November 26, 2017

migration - Which CentOS version for migrated web server?



I recently got a side job migrating a website from shared hosting to a VPS. The site is running on Django + Apache (mod_wsgi) + MySQL. The current host is running CentOS 5.6 (32-bit); should I take advantage of the move to switch to CentOS 6? And given the choice of 32-bit or 64-bit CentOS, should I stick with 32-bit or switch to 64-bit?




(I'm more experienced with development than with sysadmin stuff, hence my question. I also know Debian/Ubuntu much better than CentOS, but I'd like to get familiar with CentOS, and this is a fairly low-complexity setup to get started with.)


Answer



There are several pros and cons:



5.x vs 6.x




  1. Does your new provider actually support CentOS 6.0 right now? For example, Rackspace Cloud only promises CentOS 6.0 support "soon"; right now you'll have to start with 5.6.


  2. Do you value more recent packages, or do you need to support legacy software, especially closed-source software built for 5.x? If you don't need to support older software, I'd say start with the newer version.



  3. Do you know there's no upgrade path from 5.x to 6.x? I.e. you'll have to do a complete re-install if you install 5.x now but need 6.x later.




32-bit vs 64-bit




  1. What does your hosting support? Some providers offer only 64-bit or only 32-bit platforms. E.g. some Amazon cloud instances are 32-bit only, and Rackspace cloud instances are 64-bit only.


  2. Generally speaking, a 64-bit system takes more RAM to do the same job as a 32-bit system. However, it can also support and efficiently manage more memory. If you're planning a 4 GB server or larger, by all means, 64-bit is the way to go. If, on the other hand, you'll have 2 GB of memory in your server, you don't really need 64-bit, and a 32-bit system will manage your existing memory with less waste.



centos5 - Compiling Gearman PHP Library for CentOS 5.8



I've been trying to get Gearman compiled on CentOS 5.8 all afternoon.




Searches have said to install the following via yum:



yum -y install --enablerepo=remi boost141-devel libgearman-devel e2fsprogs-devel e2fsprogs gcc44 gcc-c++


To get the Boost headers working correctly I did this:




cp -f /usr/lib/boost141/* /usr/lib/

cp -f /usr/lib64/boost141/* /usr/lib64/
rm -f /usr/include/boost
ln -s /usr/include/boost141/boost /usr/include/boost


With all of the dependencies installed and paths set up, I then download and compile gearmand-1.1.2 just fine.




wget -O /tmp/gearmand-1.1.2.tar.gz https://launchpad.net/gearmand/1.2/1.1.2/+download/gearmand-1.1.2.tar.gz
cd /tmp && tar zxvf gearmand-1.1.2.tar.gz

./configure && make -j8 && make install


That works correctly. So now I need to install the Gearman library for PHP. I have attempted it through PECL and by downloading the source directly; both result in the same error:




checking whether to enable gearman support... yes, shared
not found
configure: error: Please install libgearman



What I don't understand is that I installed the libgearman-devel package, which also installed the core libgearman. The installation installs libgearman-devel-0.14-3.el5.x86_64, libgearman-devel-0.14-3.el5.i386, libgearman-0.14-3.el5.x86_64, and libgearman-0.14-3.el5.i386.



Is it possible the package version is lower than what is required? I'm still poking around with this, but figured I'd throw this up to see if anyone has a solution while I continue to research a fix.



Thanks!


Answer



This should do the trick:



export GEARMAN_LIB_DIR=/usr/include/libgearman

export GEARMAN_INC_DIR=/usr/include/libgearman


That should work; if not, you'll have to make some minor edits to config.m4.
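
With those variables exported, the PECL build can simply be retried (assuming, as in the question, that the extension is being installed via PECL):

pecl install gearman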


Friday, November 24, 2017

vmware esxi - Exposing disks to virtual machines as 512e



I'm aware that VMware ESXi supports the creation of datastores on local Advanced Format 512e disks as of v.6.5. However, all the (very scarce) information I can find seems to suggest that virtual disks created on that datastore will still be exposed to the virtual machine as 512n disks.



For some workloads, performance could be seriously degraded if the guest OS believes the disk is natively 512 byte sectored, producing lots of read-modify-write operations.




Why am I not hearing anything about this? Perhaps my information is incorrect and it exposes 512e to the guest? Or is there a setting for whether the guest will see 512n/512e/4kn?


Answer



A VMDK will still be exposed as 512n in any case, since that is how VMware vSphere works. However, starting with vSphere 6.5, you can pass a 512e disk directly into a VM as an RDM. Note that your RAID controller must support 512e and/or 4K disks and either emulate 512n or pass the sector size through to the VMware hypervisor. (Please note that 4K-native disk support was added in vSphere 6.7.)


domain name system - How to format and where to put the SPF TXT record?



EDIT: I think I more or less understand the syntax now; in any case, Google gives the needed syntax at the link below.



My question is really where to put that stuff. Should I quote every field? The whole line? :)



I've set up Google Apps for my domain: I registered the domain with Google by adding the CNAME Google asked for, and I've apparently successfully set up the MX records for Google's mail servers.




So far I don't have a dedicated server: I just have a domain at a registrar.



Now I want to activate SPF and I'm confused. In the following short webpage:



http://www.google.com/support/a/bin/answer.py?answer=178723



it is written that I must add a TXT record containing:



v=spf1 include:_spf.google.com ~all 



Where should I enter this? Should this go in the zone (?) file, like I did for the CNAME and the MX records?



So far I have something like this:



@ 10800 IN A 217.42.42.42
@ 10800 IN MX 5 ASPMX3.GOOGLEMAIL.COM.
@ 10800 IN MX 5 ASPMX2.GOOGLEMAIL.COM.
@ 10800 IN MX 3 ALT2.ASPMX.L.GOOGLE.COM.
@ 10800 IN MX 3 ALT1.ASPMX.L.GOOGLE.COM.

@ 10800 IN MX 1 ASPMX.L.GOOGLE.COM.
google8a70835987f31e34 10800 IN CNAME google.com.


Does adding the SPF TXT record mean I should literally have something like that:



@ 10800 IN A 217.42.42.42
@ 10800 IN MX 5 ASPMX3.GOOGLEMAIL.COM.
@ 10800 IN MX 5 ASPMX2.GOOGLEMAIL.COM.
@ 3600 IN TXT "v=spf1 include:_spf.google.com ~all"

@ 10800 IN MX 3 ALT2.ASPMX.L.GOOGLE.COM.
@ 10800 IN MX 3 ALT1.ASPMX.L.GOOGLE.COM.
@ 10800 IN MX 1 ASPMX.L.GOOGLE.COM.
google8a70835987f31e34 10800 IN CNAME google.com.


I made that one up, and put the record right in the middle, to show how confused I am. What I'd like to know is the exact syntax and where/how I should put this TXT record.


Answer



Our SPF records look like this:




@ 1800 IN TXT "v=spf1" "a" "mx" "ip4:x.x.x.x" "ptr:example2.org.au" "mx.example.org.au" "ip4:x.x.x.x" "ip4:y.y.y.y" "a:example2.org.au" "+all"


The equivalent text is:



v=spf1 a mx ip4:x.x.x.x ptr:example2.org.au mx.example.org.au ip4:x.x.x.x ip4:y.y.y.y a:example2.org.au +all


So your guesstimate record is very close.
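
In other words, for the Google Apps setup in the question, the single record already guessed at above is the right shape: one TXT line at the zone apex, with the whole value quoted:

@ 3600 IN TXT "v=spf1 include:_spf.google.com ~all"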


What is the Best RAID configuration for Hard Drives Varying in Sizes



We will be rebuilding our server to run Windows Server 2016, with Hyper-V for 1 domain controller and 1 more VM for a NAS + Web Server.




However, we must first decide on our RAID configuration before proceeding.



We currently have 7 HDDs totalling 19 TB of storage, in the following sizes:




  • 2 x 5TB

  • 4 x 2TB

  • 1 x 1TB




We have on-site and off-site replication, so this RAID configuration's purpose will not be backup (as RAID is not a backup). What would be the most reliable and best way to install Windows Server 2016 and RAID-configure these drives of varying sizes?


Answer



Well, given what you've got there's not too much choice:



  • 2x 5 TB in RAID 1

  • 4x 2 TB in RAID 10

  • 1x 1 TB as temp


Thursday, November 23, 2017

linux - Bootable USB backup



I would like to create a copy of my running Linux installation onto a 16 GB USB pen drive. The current system is a basic ATX PC used as a headless server in a remote location. I would like to have a backup option in case of a hard drive failure. Basically, someone would shut down the computer, plug in the USB drive containing the exact copy of the system, and the system would boot and run off the USB drive for as long as needed, that is, until I can get to the location with a proper new HDD replacement.




The current HDD is 120 GB. Used space is around 5 GB. So the questions are:
1. How to create an exact (bootable) copy of that onto a smaller 16 GB USB drive?
2. How to copy everything back onto a bigger drive (i.e. 250 GB or 500 GB) when I install a new HDD? Preferably expanding the file system back to the whole disk size, because the rest of the free space is occasionally used as time-lapse photography storage.



Here are some more details about the current disk configuration:



fdisk -l



Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xac46573c

Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 14594 116707328 8e Linux LVM

Disk /dev/mapper/VolGroup-lv_root: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/VolGroup-lv_swap: 1979 MB, 1979711488 bytes
255 heads, 63 sectors/track, 240 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/VolGroup-lv_home: 63.8 GB, 63837306880 bytes
255 heads, 63 sectors/track, 7761 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000


df -BG



Filesystem           1G-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 3G 45G 5% /
tmpfs 1G 0G 1G 0% /dev/shm
/dev/sda1 1G 1G 1G 20% /boot

/dev/mapper/VolGroup-lv_home
59G 1G 55G 2% /home

Answer



Generic tool for low level work against hard drives



DISCLAIMER: Playing with low-level tools may harm your system! Don't ask me for anything about potential damage you've caused!



For this kind of job, I use a personal version of Debian Live, built with all the needed disk tools:




    gsmartcontrol
smartmontools
partclone
ntfs-3g
lvm2
mdadm


Backing-up




There are a few steps to backing your machine up:




  • Copying the partition structure. For this, you could use any of parted, sfdisk, gparted, cfdisk or others, followed by the mdadm and/or lvm2 tools.

  • Copying data. This could be done with the following command: tar -cpC /sourcePath . | tar -xpC /destPath. For backing up mounted partitions with sub-mounts active, I use the following workaround (sample backing up the root directory /):



    # Debian-live is automatically mounted to /media/DEBIAN-LIVE and /media/persistance
    mkdir /media/persistance/root/Backup
    mount --bind / /mnt
    tar -zcpC /mnt . >/media/persistance/root/Backup/root.tgz

    umount /mnt

  • Making the system bootable. This is more subtle: assuming you've booted the Debian Live disk, you have to build your target structure, chroot into it, then run grub-install:



    # mount /dev/mapper/VolGroup-lv_root /mnt
    # mount /dev/sda1 /mnt/boot
    # # /home is useless for installing grub
    # for bind in proc sys dev{,/pts};do mount --bind /$bind /mnt/$bind;done
    # chroot /mnt
    # /usr/share/mdadm/mkconf >/etc/mdadm/mdadm.conf

    # update-initramfs -u -k all
    # grub-install
    # exit
    # umount /mnt/{dev{/pts,},sys,proc,}



Then (in the hope all works fine) I reboot.



Alternative multi-os using partclone




There is an overall solution for backing up whole partitions, but as you store each partition byte-by-byte, you need a destination bigger than or the same size as your source (this could be stored on a small USB key anyway).



The basis is much the same: build your own Debian Live with all the needed tools, plus partclone.



Then, to store a whole multi-boot disk sharing WinXP and Linux on the same disk (sample):



mkdir ReleventDirectoryName
cd $_
SOURCE=sdA

dd if=/dev/$SOURCE count=1 | gzip >bblock.gz
sfdisk -d /dev/$SOURCE >sfdisk.dump
partclone.ntfs -c -s /dev/${SOURCE}1 | xz >part1-ntfs.pclone.xz
partclone.ext4 -c -s /dev/${SOURCE}2 | xz >part2-ext4.pclone.xz
partclone.ext4 -c -s /dev/${SOURCE}5 | xz >part5-ext4.pclone.xz


and so on...



To restore, you only have to reverse the process:




cd ReleventDirectoryName
DEST=sdA
zcat bblock.gz | dd of=/dev/$DEST
sfdisk /dev/$DEST < sfdisk.dump
partclone.ntfs -r -o /dev/${DEST}1 < <(xzcat part1-ntfs.pclone.xz)
partclone.ext4 -r -o /dev/${DEST}2 < <(xzcat part2-ext4.pclone.xz)
partclone.ext4 -r -o /dev/${DEST}5 < <(xzcat part5-ext4.pclone.xz)



Then... reboot!


linux - 301 Redirect issue when term in new and old URL



We are trying to create a set of 301 redirects where the exact string from the old URL is also present in the same position in the new URL. See the example below:



Old URL



http://www.domain.com/foobar



New URL



http://www.domain.com/foobar/i55


We've tried a standard 301 redirect like this:



Redirect 301 /foobar$ http://www.domain.com/foobar/i55



This doesn't work and causes a 404:



File does not exist: /home/domain/public_html/foobar


Should we be looking at rewrite rules instead, or can this be fixed by just adjusting the 301 rule?



Thanks.



Answer



You need to use RedirectMatch from mod_alias, i.e.:



RedirectMatch 301 /foobar$ http://www.domain.com/foobar/i55
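
If you do want the mod_rewrite equivalent (functionally the same here; note the leading slash is dropped when the rule lives in a .htaccess file), a sketch would be:

RewriteEngine On
RewriteRule ^/foobar$ http://www.domain.com/foobar/i55 [R=301,L]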




Wednesday, November 22, 2017

domain name system - Suitable Record Type for a web server using a non-standard port?

I need to set up some servers that must run on ports above 5000 (this is a request from the client for this job, due to internal policies, and I can't do anything about it). What is concerning me is that, among other things, I have to set up a web server on a port other than 80, but the site must still be able to receive requests like example.com/contact and example.com/register. Since it will not operate on port 80, I won't be able to use an A record alone. So... is it even possible to do this job? I could configure Apache to use 5080, or whatever they want, but then what? How could people outside the LAN access it by the address example.com/whatever?

Tuesday, November 21, 2017

performance monitoring - determining free memory from command line on Windows



I would like to monitor the free memory on four machines, each running Windows Server 2003 R2 SP2 64-bit. Each box has 31.7 GB of RAM. I would like to periodically run a command line tool so I can collect the output and later make charts with it.



I ran some tests and collected vmstat output periodically, using Cygwin. I'm seeing numbers like this:



0  0 1235228 4194303      0      0    0    0     0     0 4652 3089  1  5 94  0
0  0 1235228 4194303      0      0    0    0     0     0 4718 7591  5  4 91  0
0  0 1235228 4194303      0      0    0    0     0     0 5027 7816  5  4 92  0
0  0 1235228 4194303      0      0    0    0     0     0 4886 7099  3  3 93  0


On a completely different machine I see these numbers:



0  0  10344 4194303      0      0   32    0     1     0 3113 6492 32 10 58  0
0  0  10340 4194303      0      0    0    0     0     0 3908 6180 38 11 51  0
0  0  10340 4194303      0      0    0    0     0     0 2094 4501 23  5 72  0
0  0  10340 4194303      0      0    0    0     0     0 2435 3792 32  5 63  0



I intended to use the 4th column, "free", to determine the amount of free memory. However, this column is 4194303 on all four boxes! This doesn't seem to correlate to any number in Task Manager. I guess this column isn't useful? Is 4194303 some magic number? (It's 2^22 − 1, i.e. 4 GB − 1 KB expressed in KB, which smells like a 32-bit cap in Cygwin's vmstat rather than a real measurement.)



I was unable to find a command line tool to monitor per-process CPU usage; the Cygwin tools don't seem to do this, so I ended up writing a tool I call cpumem (Google Code). My tool also outputs memory per process. Maybe this could be useful to determine the total used memory, which I could then use to determine free memory? Here is some example output from my tool:



Name PID CPU PageFaultCount PeakWorkingSetSize WorkingSetSize QuotaPeakPagedPoolUsage QuotaPagedPoolUsage QuotaPeakNonPagedPoolUsage QuotaNonPagedPoolUsage PagefileUsage PeakPagefileUsage
Idle 0 362
System 4 2 17019 4857856 258048 0 0 80 80 45056 45056
smss 408 0 1371 1257472 684032 27704 7696 1520 1120 229376 2527232
csrss 456 0 265739 4042752 4005888 206112 205264 10608 10160 1781760 1789952

winlogon 480 0 4223496 15458304 15282176 150992 126576 130272 129120 10924032 11526144
services 528 6 8328 5210112 5144576 66024 63400 14416 12256 2703360 2928640
lsass 540 0 14312 12607488 12558336 105200 100400 30976 25344 11763712 12058624
svchost 716 0 1774 4636672 4608000 57704 53832 8512 5264 1634304 1806336
svchost 788 0 8397 5877760 5804032 68488 66992 35536 33984 2318336 2457600
svchost 852 0 7341 7426048 7352320 113496 100176 21330 16342 6791168 6991872
svchost 888 0 1321 5218304 5197824 57680 57648 10256 8544 2134016 2469888
svchost 904 0 319648 37355520 28160000 227912 206056 52384 38384 20017152 35856384
spoolsv 1036 0 82386 10686464 8921088 149696 106656 12528 10224 5988352 7823360
msdtc 1064 0 1814 7053312 6979584 63264 62656 12752 11424 2785280 2916352

svchost 1260 0 25164 5328896 3710976 77576 49096 5072 4080 1306624 3690496
svchost 1412 0 723 2912256 2908160 35200 35008 4432 3984 1187840 1191936
VMwareService 2008 0 1797 6971392 6897664 119624 118888 11120 6944 3108864 3231744
svchost 432 0 7845 7294976 7151616 98552 96304 12512 11232 3588096 24743936
dllhost 2096 0 2966 10526720 10362880 75568 74464 12128 9984 4001792 4362240
wmiprvse 2736 0 3955 12021760 9736192 73360 70112 8320 6960 4620288 7536640
logon.scr 1112 0 630 2564096 2543616 76648 76080 2960 2960 1187840 1187840
httpd 5744 0 59245 8650752 8650752 83728 66840 128494 100894 4268032 4460544
cmd 6336 0 754 2797568 2797568 33584 33000 3760 3600 2203648 2236416
cmd 6840 0 754 2797568 2797568 33584 33000 3760 3600 2203648 2236416

rotatelogs 8212 0 857 3366912 3366912 66080 51840 4464 4464 1617920 1617920
rotatelogs 3296 0 846 3321856 3321856 66080 51840 4400 4384 1581056 1581056
httpd 7536 0 42975 11481088 11460608 82280 70664 34976 30624 8179712 8241152
cmd 5948 0 753 2793472 2793472 33584 33000 3760 3600 2203648 2236416
rotatelogs 2968 0 863 3391488 3391488 66080 51840 4464 4464 1650688 1650688
cmd 2532 0 753 2793472 2793472 33584 33000 3760 3600 2203648 2236416
rotatelogs 8584 0 863 3391488 3391488 66080 51840 4464 4464 1650688 1650688
wrapper 5800 0 998 4038656 4005888 39992 39960 7846 6960 1929216 1970176
java 7708 0 1873243 161222656 134750208 125392 121616 515910 122640 130805760 157409280
csrss 1520 0 3747 5390336 4505600 108024 100128 9040 6960 2060288 2097152

winlogon 2804 0 4568 11296768 4702208 165832 163496 14944 13552 4583424 4943872
rdpclip 1736 0 1331 5201920 5185536 124832 114664 5520 5040 2105344 2183168
explorer 2072 0 83569 24793088 23375872 209560 194280 20704 18320 11202560 12865536
jusched 5716 0 1769 6991872 6991872 130320 124592 9264 8640 2850816 2850816
ApacheMonitor 3648 0 977 3858432 3825664 109320 95720 4832 4240 1638400 1679360
VMwareTray 3740 0 1394 5521408 5496832 117352 104184 6672 6080 2478080 2551808
VMwareUser 2820 0 1759 7024640 6995968 119552 107184 7472 7120 4067328 4108288
SciTE 4692 0 3158 9125888 8888320 112560 98560 5280 4880 6103040 6381568
mmc 2812 0 6443 23494656 23470080 205416 203064 14640 13520 12255232 16740352
eclipse 8956 0 205139 184983552 178651136 225112 217320 42560 40720 168902656 174735360

firefox 7672 0 19757 58945536 58462208 174896 171416 33568 27328 47968256 49152000
wrapper 3132 0 985 3973120 3944448 39712 39680 8438 6640 1900544 1941504
java 7456 13 1996365 -1154555904 -1194131456 1061632 865376 2342660 1668320 -1 -1
cmd 8700 0 613 2527232 2527232 33680 33672 3600 3040 2150400 2150400
java 7140 0 9411 26324992 23179264 54408 54408 12928 11120 581910528 586002432
java 228 0 22616 69480448 60051456 54344 54344 10464 10336 593469440 604434432
cmd 5664 0 606 2494464 2494464 38040 38040 7072 6512 2064384 2064384
cpumem 6040 13 3393 4390912 4222976 37280 37200 5360 4960 2260992 2433024
_Total 0 366
Idle 0 366

System 4 1 17019 4857856 258048 0 0 80 80 45056 45056
smss 408 0 1371 1257472 684032 27704 7696 1520 1120 229376 2527232
csrss 456 0 265739 4042752 4005888 206112 205264 10608 10160 1781760 1789952
winlogon 480 0 4223496 15458304 15282176 150992 126576 130272 129120 10924032 11526144
services 528 4 8328 5210112 5144576 66024 63400 14416 12256 2703360 2928640
lsass 540 0 14312 12607488 12558336 105200 100400 30976 25344 11763712 12058624
svchost 716 0 1774 4636672 4608000 57704 53832 8512 5264 1634304 1806336
svchost 788 0 8397 5877760 5804032 68488 66992 35536 33984 2318336 2457600
svchost 852 0 7341 7426048 7352320 113496 100176 21330 16342 6791168 6991872
svchost 888 0 1321 5218304 5197824 57680 57648 10256 8544 2134016 2469888

svchost 904 0 319648 37355520 28160000 227912 206056 52384 38384 20017152 35856384
spoolsv 1036 0 82386 10686464 8921088 149696 106656 12528 10224 5988352 7823360
msdtc 1064 0 1814 7053312 6979584 63264 62656 12752 11424 2785280 2916352
svchost 1260 0 25164 5328896 3710976 77576 49096 5072 4080 1306624 3690496
svchost 1412 0 723 2912256 2908160 35200 35008 4432 3984 1187840 1191936
VMwareService 2008 0 1797 6971392 6897664 119624 118888 11120 6944 3108864 3231744
svchost 432 0 7845 7294976 7151616 98552 96304 12512 11232 3588096 24743936
dllhost 2096 0 2966 10526720 10362880 75568 74464 12128 9984 4001792 4362240
wmiprvse 2736 0 3955 12021760 9736192 73360 70112 8320 6960 4620288 7536640
logon.scr 1112 0 630 2564096 2543616 76648 76080 2960 2960 1187840 1187840

httpd 5744 0 59245 8650752 8650752 83728 66840 128494 100894 4268032 4460544
cmd 6336 0 754 2797568 2797568 33584 33000 3760 3600 2203648 2236416
cmd 6840 0 754 2797568 2797568 33584 33000 3760 3600 2203648 2236416
rotatelogs 8212 0 857 3366912 3366912 66080 51840 4464 4464 1617920 1617920
rotatelogs 3296 0 846 3321856 3321856 66080 51840 4400 4384 1581056 1581056
httpd 7536 0 42975 11481088 11460608 82280 70664 34976 30624 8179712 8241152
cmd 5948 0 753 2793472 2793472 33584 33000 3760 3600 2203648 2236416
rotatelogs 2968 0 863 3391488 3391488 66080 51840 4464 4464 1650688 1650688
cmd 2532 0 753 2793472 2793472 33584 33000 3760 3600 2203648 2236416
rotatelogs 8584 0 863 3391488 3391488 66080 51840 4464 4464 1650688 1650688

wrapper 5800 0 998 4038656 4005888 39992 39960 7846 6960 1929216 1970176
java 7708 0 1873243 161222656 134750208 125392 121616 515910 122640 130805760 157409280
csrss 1520 0 3747 5390336 4505600 108024 100128 9040 6960 2060288 2097152
winlogon 2804 0 4568 11296768 4702208 165832 163496 14944 13552 4583424 4943872
rdpclip 1736 0 1331 5201920 5185536 124832 114664 5520 5040 2105344 2183168
explorer 2072 0 83569 24793088 23375872 209560 194280 20704 18320 11202560 12865536
jusched 5716 0 1769 6991872 6991872 130320 124592 9264 8640 2850816 2850816
ApacheMonitor 3648 0 977 3858432 3825664 109320 95720 4832 4240 1638400 1679360
VMwareTray 3740 0 1394 5521408 5496832 117352 104184 6672 6080 2478080 2551808
VMwareUser 2820 0 1759 7024640 6995968 119552 107184 7472 7120 4067328 4108288

SciTE 4692 0 3158 9125888 8888320 112560 98560 5280 4880 6103040 6381568
mmc 2812 0 6443 23494656 23470080 205416 203064 14640 13520 12255232 16740352
eclipse 8956 0 205139 184983552 178651136 225112 217320 42560 40720 168902656 174735360
firefox 7672 0 19757 58945536 58462208 174896 171416 33568 27328 47968256 49152000
wrapper 3132 0 985 3973120 3944448 39712 39680 8438 6640 1900544 1941504
java 7456 12 1996365 -1154555904 -1194131456 1061632 865376 2342660 1668320 -1 -1
cmd 8700 0 613 2527232 2527232 33680 33672 3600 3040 2150400 2150400
java 7140 0 9411 26324992 23179264 54408 54408 12928 11120 581910528 586002432
java 228 0 22616 69480448 60051456 54344 54344 10464 10336 593469440 604434432
cmd 5664 0 606 2494464 2494464 38040 38040 7072 6512 2064384 2064384

cpumem 6040 14 5567 4395008 4227072 37280 37200 5360 4960 2260992 2433024
_Total 0 369
Idle 0 365
System 4 0 17019 4857856 258048 0 0 80 80 45056 45056
smss 408 0 1371 1257472 684032 27704 7696 1520 1120 229376 2527232
csrss 456 0 265739 4042752 4005888 206112 205264 10608 10160 1781760 1789952
winlogon 480 0 4223496 15458304 15282176 150992 126576 130272 129120 10924032 11526144
services 528 1 8328 5210112 5144576 66024 63400 14416 12256 2703360 2928640
lsass 540 0 14312 12607488 12558336 105200 100400 30976 25344 11763712 12058624
svchost 716 0 1774 4636672 4608000 57704 53832 8512 5264 1634304 1806336

svchost 788 0 8397 5877760 5804032 68488 66992 35536 33984 2318336 2457600
svchost 852 0 7341 7426048 7352320 113496 100176 21330 16342 6791168 6991872
svchost 888 0 1321 5218304 5197824 57680 57648 10256 8544 2134016 2469888
svchost 904 0 319648 37355520 28160000 227912 206056 52384 38384 20017152 35856384
spoolsv 1036 0 82386 10686464 8921088 149696 106656 12528 10224 5988352 7823360
msdtc 1064 0 1814 7053312 6979584 63264 62656 12752 11424 2785280 2916352
svchost 1260 0 25164 5328896 3710976 77576 49096 5072 4080 1306624 3690496
svchost 1412 0 723 2912256 2908160 35200 35008 4432 3984 1187840 1191936
VMwareService 2008 0 1797 6971392 6897664 119624 118888 11120 6944 3108864 3231744
svchost 432 0 7845 7294976 7151616 98552 96304 12512 11232 3588096 24743936

dllhost 2096 0 2966 10526720 10362880 75568 74464 12128 9984 4001792 4362240
wmiprvse 2736 0 3955 12021760 9736192 73360 70112 8320 6960 4620288 7536640
logon.scr 1112 0 630 2564096 2543616 76648 76080 2960 2960 1187840 1187840
httpd 5744 0 59245 8650752 8650752 83728 66840 128494 100894 4268032 4460544
cmd 6336 0 754 2797568 2797568 33584 33000 3760 3600 2203648 2236416
cmd 6840 0 754 2797568 2797568 33584 33000 3760 3600 2203648 2236416
rotatelogs 8212 0 857 3366912 3366912 66080 51840 4464 4464 1617920 1617920
rotatelogs 3296 0 846 3321856 3321856 66080 51840 4400 4384 1581056 1581056
httpd 7536 0 42975 11481088 11460608 82280 70664 34976 30624 8179712 8241152
cmd 5948 0 753 2793472 2793472 33584 33000 3760 3600 2203648 2236416

rotatelogs 2968 0 863 3391488 3391488 66080 51840 4464 4464 1650688 1650688
cmd 2532 0 753 2793472 2793472 33584 33000 3760 3600 2203648 2236416
rotatelogs 8584 0 863 3391488 3391488 66080 51840 4464 4464 1650688 1650688
wrapper 5800 0 998 4038656 4005888 39992 39960 7846 6960 1929216 1970176
java 7708 0 1873243 161222656 134750208 125392 121616 515910 122640 130805760 157409280
csrss 1520 0 3747 5390336 4505600 108024 100128 9040 6960 2060288 2097152
winlogon 2804 0 4568 11296768 4702208 165832 163496 14944 13552 4583424 4943872
rdpclip 1736 0 1331 5201920 5185536 124832 114664 5520 5040 2105344 2183168
explorer 2072 0 83569 24793088 23375872 209560 194280 20704 18320 11202560 12865536
jusched 5716 0 1769 6991872 6991872 130320 124592 9264 8640 2850816 2850816

ApacheMonitor 3648 0 977 3858432 3825664 109320 95720 4832 4240 1638400 1679360
VMwareTray 3740 0 1394 5521408 5496832 117352 104184 6672 6080 2478080 2551808
VMwareUser 2820 0 1759 7024640 6995968 119552 107184 7472 7120 4067328 4108288
SciTE 4692 0 3158 9125888 8888320 112560 98560 5280 4880 6103040 6381568
mmc 2812 0 6443 23494656 23470080 205416 203064 14640 13520 12255232 16740352
eclipse 8956 0 205139 184983552 178651136 225112 217320 42560 40720 168902656 174735360
firefox 7672 0 19757 58945536 58462208 174896 171416 33568 27328 47968256 49152000
wrapper 3132 0 985 3973120 3944448 39712 39680 8438 6640 1900544 1941504
java 7456 15 1996365 -1154555904 -1194131456 1061632 866240 2342660 1670336 -1 -1
cmd 8700 0 613 2527232 2527232 33680 33672 3600 3040 2150400 2150400

java 7140 0 9411 26324992 23179264 54408 54408 12928 11120 581910528 586002432
java 228 0 22617 69480448 60055552 54344 54344 10464 10336 593469440 604434432
cmd 5664 0 606 2494464 2494464 38040 38040 7072 6512 2064384 2064384
cpumem 6040 11 7794 4616192 4448256 47488 47408 6320 5920 2334720 2506752
_Total 0 370


That is one run of the tool. The machine is a quad core, so I guess the output is quadrupled so that each core's CPU usage can be seen for each process. The memory values look the same, though. Which of these columns should I sum to determine the total used memory? Is this a reasonable way to do it?



Is there a better way to determine free or used memory from the command line on Windows?



Answer



The simplest way is to run systeminfo at a command line.



You will get a section that looks like:
Total Physical Memory: 16,383 MB
Available Physical Memory: 926 MB
Page File: Max Size: 19,868 MB
Page File: Available: 4,562 MB
Page File: In Use: 15,306 MB






That is taken directly from one of my machines. That info is near the top.
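
If you want something easier to parse for charting, two sketches using stock tools (treat the exact output format as an assumption to verify on your boxes):

systeminfo | find "Physical Memory"

wmic OS get FreePhysicalMemory /value

The wmic value is reported in kilobytes.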


linux - Grub fails to boot my barebone Arch BTRFS setup



I'm not sure what I'm doing wrong. Here's what I did:




  • booted the latest arch linux live disk in a VM (Linux KVM on Arch)

  • made a single partition

  • formatted that with btrfs -m dup

  • mounted the partition, ran pacstrap with base and base-devel


  • genfstab -U /mnt /mnt/etc/fstab

  • arch-chroot into the partition at /mnt

  • install grub through pacman, run grub-install /dev/vda and grub-mkconfig -o /etc/grub/grub.cfg

  • reboot

  • Grub throws a few error messages: "error: no such device: [some device ID].\n loading linux core repo kernel \n error no such partition \n loading initial ramdisk \n error you need to load the kernel first \n press any key to continue"



I can still boot the machine by going into the GRUB command line, doing "linux (hd0,msdos1)/boot/vmlinuz...." and the same for initrd, then running "boot", but that seems a little inconvenient.
Yes, I'm cutting some things short, like the hostname and whatnot, but it should boot as far as I know.




Does anyone know what I'm doing wrong?



Edit: I changed /etc/default/grub to not use UUIDs and ran grub-mkconfig again; here's the grub.cfg it generated: http://pastebin.ca/3746197
It still will not boot, though.


Answer



I found the problem. Not proud of my findings.



It's supposed to be grub-mkconfig -o /boot/grub/grub.cfg, not /etc/grub/grub.cfg. D'oh!



Leaving this here in case anyone else runs into it.



domain name system - A DNS record for both www and non-www websites



For each website I have, I noticed that having just this A DNS record:




*.example.com   3600    A   0   192.1.2.3


will make http://example.com unavailable and having just this A DNS record:



example.com 3600    A   0   192.1.2.3


will make http://www.example.com unavailable.




Question: is it mandatory to have two A DNS records to support www and non-www



*.example.com   3600    A   0   192.1.2.3
example.com 3600 A 0 192.1.2.3


or is there a way to define both in one A DNS record?







PS: If it's mandatory to have two records, would you use:



www.example.com 3600    A   0   192.1.2.3
example.com 3600 A 0 192.1.2.3


or would you do it this way:



www.example.com 3600    CNAME example.com

example.com 3600 A 0 192.1.2.3


?


Answer



Short answer: Yes.



Longer: I would suggest adding exactly www.example.com rather than *.example.com, unless you actually want every sub-domain (john.example.com, jane.example.com, etc.) to resolve to the same address.



Also, do not forget to configure your Apache/nginx (whichever you use) to accept connections for both domain names.
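
For nginx, that is just a matter of listing both names (a minimal sketch; the root path is an assumption):

server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com;
}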



Monday, November 20, 2017

domain name system - Can you reference a CNAME record in an MX record?




We have several domains all pointing their MX records at mail.ourdomain.com, an internal mail server.



We are looking to outsource our email to a new supplier, who would like us to use mail.newsupplier.com; their mail server.



We'd rather not change all of the domain names to point to that MX record; several aren't in our control, and it would mean attempting to get many parties to change their MX records at the same time, which seems problematic.



Simpler would be to repoint mail.ourdomain.com at the IP for the new supplier. The problem is that our supplier isn't able to guarantee that IP will be fixed.



My question is, therefore: is changing mail.ourdomain.com to CNAME to mail.newsupplier.com an acceptable solution?




(For the record, only the email is moving, so we'd want to leave www.ourdomain.com and everythingelse.ourdomain.com unchanged.)



I've found several messages warning of the dangers of CNAMEs in MX records, but I can't quite find someone talking about this particular setup, so any advice will be gratefully received.


Answer



According to RFC 1123, an MX record cannot point to a CNAME. If I were in your situation, I would set up mail.ourdomain.com as an A record pointing to the new supplier's IP address, and then quickly work on changing all MX records over to the correct data. Then address why changing MX records is so difficult in your organization.
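
As a zone-file sketch, the interim record would look something like this (the address is a placeholder from the documentation range, not your supplier's real IP):

mail.ourdomain.com. 3600 IN A 203.0.113.10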



That being said, most mail servers will still submit mail to a CNAME; however, you can't be guaranteed of it.


linux - How do I verify the speed of my NIC?



I just installed a new gigabit network interface card (NIC) in Linux. How do I tell if it is really set to gigabit speeds? I see ethtool has an option to set the speed, but I can't seem to figure out how to report its current speed.


Answer



Just use a command like ethtool eth0 to get the needed info. For example:




$ sudo ethtool eth0 | grep Speed



Speed: 1000Mb/s

Sunday, November 19, 2017

MySQL problem with many concurrent connections

Here's a six-core box with 32 GB RAM. I've installed MySQL 5.1.47 (backport). The config is nearly standard, except max_connections, which is set to 2000. On the other side there is PHP 5.3/FastCGI behind nginx, serving a very simple PHP application.
nginx can handle thousands of requests in parallel on this machine. The application accesses MySQL via mysqli. When using non-persistent connections in mysqli, there is a problem when reaching 100 concurrent connections.




[error] 14074#0: *296 FastCGI sent in stderr: "PHP Warning: mysqli::mysqli(): [2002] Resource temporarily unavailable (trying to connect via unix:///tmp/mysqld.sock) in /var/www/libs/db.php on line 7





I've no idea how to solve this. Connecting to MySQL via TCP is terribly slow. The interesting thing is that when using persistent connections (adding 'p:' to the hostname in mysqli), the first 5,000-10,000 requests fail with the same error as above, until max connections (on the webserver, set to 1500) is reached. After those first requests MySQL keeps its 1500 connections open and all is fine, so I can make my 1500 concurrent requests. Huh? Is it possible that this is a problem with PHP FastCGI?

zfs - Mitigating risk of Disk Failure + Some corruption?

So my understanding of one scenario that ZFS addresses is where a RAID 5 drive fails, and then during the rebuild some corrupt blocks of data are encountered, so that data cannot be restored. From Googling around I don't see this failure scenario demonstrated; there are either articles on a disk failure or articles on healing data corruption, but not both.



1) Is ZFS with a 3-drive raidz1 susceptible to this problem? I.e. if one drive is lost and replaced, and data corruption is encountered while reading/rebuilding, then there is no redundancy left to repair this data. My understanding is that the corrupted data will be lost, correct?

(I do understand that periodic scrubbing will minimize the risk, but let's assume some tiny amount of corruption occurred on one disk since the last scrub, and a different disk also failed, so the corruption is detected during the rebuild.)



2) Does a 4-drive raidz2 setup protect against this scenario?



3) Would a two-drive mirrored setup with copies=2 protect against this scenario? I.e. one drive fails, but the other drive contains 2 copies of all data, so if corruption is encountered during the rebuild, there is a redundant copy on that disk to restore from?
It's appealing to me because it uses half as many disks as the raidz2 setup, even though I'd need larger disks.



I am not committed to ZFS, but it is what I've read the most about off and on for a couple years now.



It would be really nice if there were something similar to par archive/Reed-Solomon that generates some amount of parity protecting against up to, say, 10% data corruption, using only an amount of space proportional to the percentage of protection you want. Then I'd just use a mirror setup, and each disk in the mirror would contain a copy of that parity, which would be relatively small compared to option #3 above. Unfortunately I don't think Reed-Solomon fits this scenario very well. I've been reading an old NASA document on implementing Reed-Solomon (the only comprehensive explanation I could find that didn't require buying a journal article), and as far as my understanding goes, the parity data would need to be completely regenerated for each incremental change to the source data; i.e. there's no easy way to make incremental changes to the Reed-Solomon parity in response to small incremental changes to the source. I'm wondering if there's something similar in concept (a proportionally small amount of parity data protecting X% corruption ANYWHERE in the source data) that someone is aware of, but I think that's probably a pipe dream.

Friday, November 17, 2017

routing - Are people really going to use public IPv6 addresses on their private networks?





I have been reading the Debian System Administrator's Handbook, and I came across this passage in the gateway section:




...Note that NAT is only relevant for IPv4 and its limited address space;
in IPv6, the wide availability of addresses greatly reduces the

usefulness of NAT by allowing all “internal” addresses to be directly
routable on the Internet (this does not imply that internal machines
are accessible, since intermediary firewalls can filter traffic).




That got me thinking... With IPv6 there is still a private range. See: RFC4193. Are companies really going to set up all their internal machines with public addresses? Is that how IPv6 is intended to work?


Answer




Is that how IPv6 is intended to work?





In short, yes. One of the primary reasons for increasing the address space so drastically with IPv6 is to get rid of band-aid technologies like NAT and make network routing simpler.



But don't confuse the concept of a public address and a publicly accessible host. There will still be "internal" servers that are not Internet accessible even though they have a public address. They'll be protected with firewalls just like they are with IPv4. But it will also be much easier to decide that today's internal-only server needs to open up a specific service to the internet tomorrow.




Are companies really going to set up all their internal machines with public addresses?




In my opinion, the smart ones will. But as you've probably noticed, it's going to take quite a while.



Thursday, November 16, 2017

linux - Out of memory at 18% usage : where the ram goes?



This is related to : Out of memory at 72% usage




It looks to be the same problem, but the question is slightly different: where does my memory go? I have 18% memory usage, and the OOM killer is killing mysqld every 10 minutes.



I was able to gather some information:



1 - Thanks to https://serverfault.com/a/619681/182343 I found that the OOM killer report shows DMA32 + DMA + Normal usage at 96% (the report: https://pastebin.com/UJUiSsSi) ... so there is a problem ...



2 - The process list from the OOM killer: https://pastebin.com/yYTD4QzW



3 - free, top, htop and other tools show me 18% RAM usage at maximum. Here is top sorted by RAM usage (https://pastebin.com/DEDV1HWb)




4 - free -m tells nothing about the RAM problem:



              total        used        free      shared  buff/cache   available
Mem:           6809         414         470         201        5924        5825


(I have added some swap, as I had no swap on this virtual machine, but nothing changed; no swap is used.)



5 (EDIT): Thanks to Daniel Gordi I cleaned up my buff/cache (free && sync && echo 3 > /proc/sys/vm/drop_caches && free) and ran the OOM killer manually with echo f > /proc/sysrq-trigger. And, WTF, the OOM killer RAM report (DMA32 + DMA + Normal) now shows my expected RAM usage: 18%! I always thought buff/cache memory counted as available whenever the OS needs it...




Why, and where, is the RAM being eaten?



(I really hope I can get some help here, as my production server has been really unstable since this problem appeared. Thanks!)


Answer



In case someone comes here for a solution, this is an update :



I rolled back all the config modifications and did a fresh reboot of the server. After 2 months the server looks good and the problem has disappeared.



Not sure what happened here ...



Wordpress and Apache crashing MySql (CLOSE_WAIT)

In the last few days the MySQL server on my VPS has been crashing, sometimes right after a restart.

I traced the problem to a WordPress website. After service mysql start, many apache2 processes overload memory and CPU (this was visible in top).






I didn't find anything useful in the Apache logs; the MySQL error log shows messages about a lack of memory:



140922 12:24:43 [Note] Plugin 'FEDERATED' is disabled.
140922 12:24:43 InnoDB: The InnoDB memory heap is disabled
140922 12:24:43 InnoDB: Mutexes and rw_locks use GCC atomic builtins
140922 12:24:43 InnoDB: Compressed tables use zlib 1.2.8

140922 12:24:43 InnoDB: Using Linux native AIO
140922 12:24:43 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
140922 12:24:43 InnoDB: Completed initialization of buffer pool
140922 12:24:43 InnoDB: Fatal error: cannot allocate memory for the buffer pool
140922 12:24:43 [ERROR] Plugin 'InnoDB' init function returned error.
140922 12:24:43 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
140922 12:24:43 [ERROR] Unknown/unsupported storage engine: InnoDB
140922 12:24:43 [ERROR] Aborting


140922 12:24:43 [Note] /usr/sbin/mysqld: Shutdown complete


First I thought maybe the problem was in the DB, and checked the databases with mysqlcheck --all-databases, but all was OK. I tried dropping and creating a new database; no luck.



Also, I tried deleting the WordPress plugins folder. WordPress is the latest version, comments are disabled, 3 users, a small number of visitors (corporate blog).



Now I don't know where to look or how to diagnose the problem; mysql lives no more than 10-20 seconds if the WordPress config is enabled in Apache.



UPDATE: I deleted the WP folder and downloaded a fresh WP without any config. If the WP site is enabled, the VPS starts overloading and Apache can't answer requests. Without WP enabled, all works smoothly. I haven't destroyed this droplet yet because I want to find the cause of the problem. Maybe this is some exploit on the VPS?




UPDATE 2: after searching the logs and netstat output, I found that the problem appears when:




  1. the WordPress site was enabled and Apache plus MySQL were running

  2. Some request from a dumb bot for rpc.php or similar put an Apache connection into CLOSE_WAIT; after several such requests Apache had many workers stuck in CLOSE_WAIT, and as a result many workers and a lack of memory (yes, I should reduce the max, but the site stopped working anyway once all workers were waiting)

  3. MySQL then fell over from lack of memory (it's not the root problem as suggested in the answer; it's a consequence of the other problems, and upgrading the VM did not solve the issue)

  4. After MySQL fell over, the overload stopped (this led me to think that perhaps the database was corrupted, but checking the DB found no errors)




What could cause Apache to enter the CLOSE_WAIT state after a dummy request for a missing file?
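
A quick way to watch this happen (a diagnostic sketch, assuming Linux netstat and the apache2 process name; run as root so -p can resolve process names):

netstat -antp | grep apache2 | awk '{print $6}' | sort | uniq -c

A steadily growing CLOSE_WAIT count there means Apache is receiving the peer's FIN but never closing its own side of the socket.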

Wednesday, November 15, 2017

mod rewrite - Redirect apache to http requests to https except for when request is a POST?



I have set up my Apache 2 configuration to redirect HTTP requests to HTTPS. This works fine; however, I want to change it so that it only does this when the request is not a POST request.



Here is my current configuration:




RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]




How can I change this configuration so that it only redirects when the request is not a POST?


Answer



Add a new RewriteCond line:



RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteCond %{REQUEST_METHOD} !^POST$
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

Tuesday, November 14, 2017

How do I get hard disks to spin down?



I am going to build a server for backups. I want to install the operating system on a USB flash disk (little space needed) and attach some hard disks for data (much more space needed). Because it is going to be a backup server, the disks will run only once a day. So, to save power, I want the disks to stop (spin down) while not in use.



Do I need to buy special disks for this, or is it enough to configure something in Ubuntu? I want to buy 2.5" disks, also because they consume less power.


Answer




You can use the Linux command hdparm -S:




man hdparm
-S : Set the standby (spindown) timeout for the drive. This value is used by
the drive to determine how long to wait (with no disk activity) before
turning off the spindle motor to save power...




Also look at this question:




What’s the effect of standby (spindown) mode on modern hard drives?




$ sudo hdparm -S 240 /dev/sda1



/dev/sda1:
setting standby to 240 (20 minutes)



Monday, November 13, 2017

lamp - Troubleshooting mysterious server freezes on Amazon EC2

I have an Amazon EC2 instance running LAMP on Ubuntu Natty/11.04. On three separate occasions within the last few months, two of which in the last two weeks, the server has just... stopped. It becomes unresponsive and stops responding to connection attempts (SSH or otherwise), but the EC2 control panel still reports it as running. Each time I had to reboot the instance through the console, with ensuing data loss.




So, now I'm trying to diagnose the issue, but I'm coming up blank and I need advice on what else to check for. Syslog contains nothing suspicious -- on each occasion, the last thing that happened was munin running its regular five-minute cronjob, although since I don't know exactly when the machine stopped working, I can't say how close the cron log is to the point of freezing. After that, it's as if the machine was simply not running until the point where it was restarted, after which point syslog contains what looks to me like normal dmesg output.



There seems to be no correlation between traffic volume and the time of these freezes. Each occasion has been far removed from peak traffic times.



What else can I look at to attempt to figure out what has been causing these issues? What might the issue be?



ADDENDUM: The server was not under heavy load on any of the occasions when it went down. CPU and memory use were both safely under limits. There was plenty of free disk space (tens of gigabytes). There is nothing strange in the Apache or MySQL logs either; they just stop at that time. This is a medium/high-CPU instance.

centos - Server computational slowdown when RAM is used extensively

I have problem with server slowdowns in very specific scenario. The facts are:





  • 1) I use computational application WRF (Weather Research and Forecast)

  • 2) I use Dual Xeon E5-2620 v3 with 128GB RAM (NUMA architecture - probably related to problem!)

  • 3) I run WRF with mpirun -n 22 wrf.exe (I have 24 logical cores available)

  • 4) I use Centos 7 with 3.10.0-514.26.2.el7.x86_64 kernel

  • 5) Everything works OK in terms of computational performance until one of these things happens:

  • 5a) the Linux file cache gets some data, or

  • 5b) I use tmpfs and fill it with some data



In the 5a or 5b scenario, my WRF run suddenly starts to slow down, sometimes becoming even ~5x slower than normal.





  • 6) RAM does not get swapped; it is not even close to happening. I have around 80% of RAM free in the worst-case scenario!

  • 7) vm.zone_reclaim_mode = 1 in /etc/sysctl.conf seems to help a bit by delaying the issue in the 5a scenario

  • 8) echo 1 > /proc/sys/vm/drop_caches resolves the problem completely in the 5a scenario and restores WRF performance to maximum speed, but only temporarily, until the file cache gets data again, so I run this command from cron (don't worry, it IS OK, I use this computer only for WRF and it does not need the file cache to work at full performance); see the cron sketch after this list

  • 9) but the above command still does nothing in the 5b scenario (when I use tmpfs for temporary files)

  • 10) performance is restored in the 5b scenario only if I manually empty tmpfs

  • 11) It is not a WRF or MPI problem

  • 12) This happens only on this one computer type, and I administer a lot of them for the same/similar purpose (WRF). Only this one has a full NUMA architecture, so I suspect that has something to do with it

  • 13) I also suspect the RHEL kernel plays a role here, but I'm not sure; I haven't tried reinstalling with a different distro yet

  • 14) numad, and numactl options when invoking mpirun (such as "numactl -l"), made no difference
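
A minimal sketch of the cron workaround from item 8, assuming it goes in root's crontab; the 10-minute interval is an assumption:

# root's crontab: drop the page cache every 10 minutes
*/10 * * * * echo 1 > /proc/sys/vm/drop_caches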



Let me know if you have any ideas for avoiding these slowdowns.



One idea came to me after following some "Related" links on this question: can Transparent Huge Pages be the source of this problem? Several articles strongly suggest that THP does not play well on NUMA systems.
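
If THP is the suspect, it can be checked and switched off at runtime without a reboot (a quick test, assuming the standard sysfs paths on this CentOS 7 kernel):

# show the current THP policy; the bracketed value is the active one
cat /sys/kernel/mm/transparent_hugepage/enabled
# disable THP and its defragmentation until the next reboot
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag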

Saturday, November 11, 2017

windows - Mapping Virtual Hosts by folder name in Apache

I am trying to add virtual hosts to my Apache, but in a specific way that I'm not sure is possible.



I currently have it like this:




<VirtualHost *:80>
    ServerName til.local
    DocumentRoot "C:/xampp/htdocs/til"
    <Directory "C:/xampp/htdocs/til">
        Options FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>




Where "til" is the name of the folder AND of the 'domain name'.



I want to set something up once so I don't need to touch it again afterwards, even when adding new folders to the htdocs folder. I am going to create a lot of different domains in a short period of time, so I don't want to go into the Apache settings every single time I add a virtual host. Is it possible to set something up that automatically makes every folder in htdocs a domain name like this? So if I add a folder abc, it uses "abc.local" and the folder "htdocs/abc"?
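
One way to get this behaviour, sketched here rather than taken from an answer in the thread, is mod_vhost_alias. It assumes the module is available and that each *.local name already resolves to the machine (the Windows hosts file does not support wildcards, so name resolution still needs per-domain entries or a local DNS server):

# httpd.conf sketch: %1 is the first dot-separated part of the Host header,
# so a request for abc.local is served from C:/xampp/htdocs/abc
LoadModule vhost_alias_module modules/mod_vhost_alias.so
UseCanonicalName Off
<VirtualHost *:80>
    ServerAlias *.local
    VirtualDocumentRoot "C:/xampp/htdocs/%1"
</VirtualHost>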

Friday, November 10, 2017

Tools for tracking disk usage

I manage a number of linux fileservers. These all run applications written from 0-10 years ago. As sometimes happens, a machine will come close to, or run out of disk space. Reasons include applications not rotating log files, a machine with 500GB of disk producing 150GB of new files every month that were not written to tape, databases gradually increasing in size, people doing silly things...generally a bit of chaos.




Anyway, when a machine unexpectedly goes from 50% to 100% full in a couple of hours, I figure out what broke (lots of "du") and delete files or contact someone. I also can look at cacti graphs to figure out what the machine's normal disk usage is (e.g. for /home).



Does anyone know of any tools that will give finer-grained information on historical usage than a cacti/RRD graph? Something like "/home/abc/xyz increased 50GB in the last day".
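
In the absence of a dedicated tool, a rough sketch of the idea: snapshot per-directory usage daily from cron and diff consecutive snapshots (paths, depth, and the dates in the diff are assumptions):

# snapshot sizes in MB, two levels deep, into a dated file
du -m --max-depth=2 /home > /var/log/du/"$(date +%F)".log
# next day, see which trees grew
diff /var/log/du/2017-11-09.log /var/log/du/2017-11-10.log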

domain name system - How wordpress.com do scalability

Out of curiosity, I wanted to know how WordPress.com scales its architecture, specifically:





  1. how they handle sub-domains.



    I believe they have millions of subdomains (correct me if I'm wrong). How do they scale their DNS to handle that?


  2. They also support custom domains.



    How do they handle the translation and also, how do they scale the requests?




Note: I chose the wordpress tag because I'm not yet allowed to create a wordpress.com tag.
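
On point 1, the usual technique (an educated guess here, not a statement about WordPress.com's actual setup) is a wildcard record, so the zone needs no per-subdomain entries at all; a zone-file sketch with a placeholder address:

; one record answers for every subdomain
*.wordpress.com.    300    IN    A    192.0.2.1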

Newly registered domain name, unable to access

I recently registered a domain name and pointed it at a new DNS service (found on HN) called EntryDNS (entrydns.net). For the sake of argument, call the domain theweb.com. I did this yesterday.



Today, having given their servers ample time to get sorted, I tried to visit theweb.com, only to find that I cannot access it. Running a dig command gives the following:



; <<>> DiG 9.7.3 <<>> theweb.com
;; global options: +cmd

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 51782
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;theweb.com. IN A

;; AUTHORITY SECTION:
org. 887 IN SOA a0.org.afilias-nst.info. noc.afilias-nst.info. 2010298277 1800 900 604800 86400


;; Query time: 28 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Fri Nov 30 12:35:14 2012
;; MSG SIZE rcvd: 96


I won't lie, I haven't a clue what that actually means. But I know for a fact I can't see my IP address in there anywhere. Here's the output after checking against EntryDNS's servers:



; <<>> DiG 9.7.3 <<>> @ns1.entrydns.net theweb.com
; (1 server found)

;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15513
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;theweb.com. IN A

;; ANSWER SECTION:

theweb.com. 3600 IN A <<< EDIT: My IP address >>>

;; Query time: 19 msec
;; SERVER: 213.229.74.106#53(213.229.74.106)
;; WHEN: Fri Nov 30 12:42:01 2012
;; MSG SIZE rcvd: 49


So this leaves me with: normal DNS servers cannot see this record, but the EntryDNS servers can. I thought the whole point of running a public DNS service like this is to let other people use the address theweb.com, rather than 192.168.0.1 (not my IP), to access my site. Why won't this work?




Edit: after running dig with +trace, I get:



; <<>> DiG 9.7.3 <<>> theweb.com +trace
;; global options: +cmd
. 37985 IN NS a.root-servers.net.
. 37985 IN NS b.root-servers.net.
. 37985 IN NS c.root-servers.net.
. 37985 IN NS d.root-servers.net.
. 37985 IN NS e.root-servers.net.
. 37985 IN NS f.root-servers.net.

. 37985 IN NS g.root-servers.net.
. 37985 IN NS h.root-servers.net.
. 37985 IN NS i.root-servers.net.
. 37985 IN NS j.root-servers.net.
. 37985 IN NS k.root-servers.net.
. 37985 IN NS l.root-servers.net.
. 37985 IN NS m.root-servers.net.
;; Received 228 bytes from 8.8.4.4#53(8.8.4.4) in 32 ms

org. 172800 IN NS d0.org.afilias-nst.org.

org. 172800 IN NS a2.org.afilias-nst.info.
org. 172800 IN NS c0.org.afilias-nst.info.
org. 172800 IN NS b2.org.afilias-nst.org.
org. 172800 IN NS a0.org.afilias-nst.info.
org. 172800 IN NS b0.org.afilias-nst.org.
;; Received 435 bytes from 192.36.148.17#53(i.root-servers.net) in 20 ms

org. 900 IN SOA a0.org.afilias-nst.info. noc.afilias-nst.info. 2010298451 1800 900 604800 86400
;; Received 96 bytes from 199.249.112.1#53(a2.org.afilias-nst.info) in 32 ms
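
The trace ends at the TLD servers with an SOA instead of a delegation, which suggests the registry holds no NS records for the domain, i.e. the nameservers were never set at the registrar. A quick way to confirm (a diagnostic sketch, reusing the TLD server from the trace above):

dig +short NS theweb.com @a2.org.afilias-nst.info
# an empty answer means the registrar-level delegation to the EntryDNS nameservers is missing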

Thursday, November 9, 2017

apache 2.2 - Using Passenger with dynamic mass virtual hosting/userdirs



I already have an apache webserver set up, and it is working for PHP.




It has no static VirtualHosts set up, and dynamically routes all requests.



A request for http://example.com/ would be served from the document root /var/www/example.com (VirtualDocumentRoot), and a request for http://example.com/~user/ would be served from the document root /home/user/public_html (mod_userdir). The latter works no matter what the domain.



I would like to be able to serve Ruby on Rails applications, from the root of a document root or from a subdirectory, using Phusion Passenger. However, it requires me to add some lines to the <VirtualHost> directive, which obviously isn't there.



I would prefer a solution that does not require root to deploy an application, but this is not critical. I also do not mind a solution that does not use Passenger, if I have the same ease of deployment.


Answer



Unfortunately, this doesn't seem possible. Passenger is completely incompatible with userdirs, and with VirtualDocumentRoot, a separate VirtualHost is required.


zfs - zpool status reports error ... what next?



On our FreeNAS server, zpool status gives me:




  pool: raid2
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scrub: none requested
config:


NAME STATE READ WRITE CKSUM
raid2 ONLINE 0 0 0
raidz1 ONLINE 0 0 0
gptid/5f3c0517-3ff2-11e2-9437-f46d049aaeca ONLINE 0 0 0
gptid/5fe33556-3ff2-11e2-9437-f46d049aaeca ONLINE 3 1.13M 0
gptid/60570005-3ff2-11e2-9437-f46d049aaeca ONLINE 0 0 0
gptid/60ebeaa5-3ff2-11e2-9437-f46d049aaeca ONLINE 0 0 0
gptid/61925b86-3ff2-11e2-9437-f46d049aaeca ONLINE 0 0 0


errors: No known data errors


What should I do? Scrub the pool?


Answer



Type zpool clear raid2 to clear the error counters, then run zpool scrub raid2 to scrub the pool.



If the errors persist following that, replace the disk.



More details about the hardware would help, so this is generic advice. My recommendations for a bunch of consumer disks connected to a PC motherboard are different from what I'd do for enterprise-level gear.



Wednesday, November 8, 2017

ssl - Unable to disable SSL3.0 in IIS



I have tried to disable SSL3.0 on my Windows web server that is running Windows Server 2008R2 with IIS 7.5. I have entered the following registry keys:



Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\NULL]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client]
"DisabledByDefault"=dword:00000001



After rebooting the server, I checked my site's SSL protocols using nmap and openssl.



NMAP Command:



nmap --script ssl-enum-ciphers mysite.com


NMAP Output:



| ssl-enum-ciphers: 

| SSLv3:
| ciphers:
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_RC4_128_MD5 - strong
| TLS_RSA_WITH_RC4_128_SHA - strong
| compressors:
| NULL
| TLSv1.0:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong

| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
| TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
| TLS_RSA_WITH_AES_128_CBC_SHA - strong
| TLS_RSA_WITH_AES_256_CBC_SHA - strong
| TLS_RSA_WITH_RC4_128_MD5 - strong
| TLS_RSA_WITH_RC4_128_SHA - strong
| compressors:
| NULL
|_ least strength: strong



OpenSSL Command:



openssl s_client -connect mysite.com:443 -ssl3


OpenSSL Output:



SSL handshake has read 2540 bytes and written 486 bytes
---

New, TLSv1/SSLv3, Cipher is RC4-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : SSLv3
Cipher : RC4-SHA
Session-ID: (omitted...)
Session-ID-ctx:

Master-Key: (omitted...)
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1415005218
Timeout : 7200 (sec)
Verify return code: 20 (unable to get local issuer certificate)



Could anyone please help me disable SSL 3.0 on the server?



Thanks in advance


Answer



I used IIS Crypto to do this, seems to work just fine. Note that a full reboot is required after disabling/enabling encryption methods.


linux - Forced per-user ssh port



I want to allow access for each user on a server through a different port. For example: user1 can only log in via ssh through port 2201, and user2 only through port 2202. I have already enabled access through ports 2201 and 2202 by editing "/etc/ssh/sshd_config" and adding two lines:




Port 2201
Port 2202




Both users can now access ssh through both ports (and 22).




  • How would I restrict them to only their own ports?



Also, the users (except root) don't have an automatically created ~/.ssh/ directory, so I made one and tried adding a config file and an authorized_keys file; these don't seem to make any difference.



The OS is Debian Squeeze. Thanks in advance.


Answer




You'll have to create a separate sshd_config for each user/port combo containing (along with the usual configuration options) the ListenAddress and AllowUsers keywords.



sshd_config_2201



ListenAddress 0.0.0.0:2201
AllowUsers user1


sshd_config_2202




ListenAddress 0.0.0.0:2202
AllowUsers user2


etc.



You'll need to run sshd once for each user with the -f switch to specify the individual configuration files.
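
A sketch of launching the extra daemons (paths are assumptions; each config may also need its own PidFile entry so the instances don't overwrite each other's pid file):

/usr/sbin/sshd -f /etc/ssh/sshd_config_2201
/usr/sbin/sshd -f /etc/ssh/sshd_config_2202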


Tuesday, November 7, 2017

ip - Do anycast addresses imply IPv6?





From what I understand, as a random user on the Internet, you cannot really know if an IPv4 address is unicast or anycast. However, if you ping that IPv4 address from two hosts physically located on two continents far apart and get a ping time < 30 ms in both cases for the same IPv4 address, you can be sure it's two different servers answering the ping for that particular IPv4 (*).



So two different servers are answering for the same IPv4 address: does this mean anycast is used for sure?



If anycast is used, does this mean there's IPv6 somewhere, or can anycast be used in a hypothetical network that is IPv4 only?


Answer




There are two different ways of doing anycast: routing-based and on a single subnet. The routing based one can be done for both IPv4 and IPv6. The single-subnet way cannot be done with IPv4.



Routing-based anycast is done by announcing the same IPv4 or IPv6 prefix into a routing protocol from multiple routers. All of them announce that they have a direct connection to those addresses. The routing protocol (BGP if used on a global scale; within one organisation it might also be e.g. OSPF) calculates the shortest path to that prefix and thereby uses the 'closest' instance. What is considered 'closest' depends on the routing protocol's algorithm and metrics.



IPv6 has a subnet based form of anycast for use within a single subnet. It works by letting multiple hosts answer to the same address for Neighbor Discovery queries (think the IPv6 equivalent of IPv4 ARP queries). The sender will use the first answer it gets, which is assumed to be the closest and/or fastest.



I hope this explains the confusion: two different techniques with the same name.


apache 2.2 - How to create a multi-domain self-signed certificate for Apache2?

I've got a little private webserver where I have several virtualhosts. I know that it's impossible to assign a certificate to each individual virtualhost, because the server finds out which virtualhost was requested only AFTER the SSL connection has been established. But is it possible to have a single SSL certificate which lists several domains? Or at least a wildcard domain, like *.example.com. If yes, what Linux commands do I have to write to make such a self-signed certificate?




Added: To clarify - I have just one IP address for all the virtual hosts.
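
For the command-line part of the question, a sketch that produces a single self-signed certificate covering several names via subjectAltName. It assumes OpenSSL 1.1.1 or newer for -addext; older releases need the SAN listed in a config file instead:

# one key and one self-signed cert valid for three names, including a wildcard
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:www.example.com,DNS:*.example.com"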

Sunday, November 5, 2017

solaris - How can I add one disk to an existing raidz zpool?



I have an OpenSolaris server with a zpool backupz comprised of four SCSI drives:



-bash-3.2# zpool status backupz
pool: backupz

state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
backupz ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c7t0d0 ONLINE 0 0 0
c7t1d0 ONLINE 0 0 0
c7t2d0 ONLINE 0 0 0

c7t3d0 ONLINE 0 0 0

errors: No known data errors


I want to add a fifth drive... but zpool add backupz raidz c7t4d0 isn't working...



-bash-3.2# zpool add backupz raidz c7t4d0
invalid vdev specification: raidz requires at least 2 devices



Can I not have a raidz config with 5 devices? Do I have to add two devices at once? Or am I doing something incorrect altogether here?


Answer



You can't expand an existing raidz vdev, you have to blow it away and create it again with the new drive(s). See the other answer for better details.



Side note: Someone actually worked out that it's technically possible to add drives to a raidz, but the functionality hasn't been implemented. The same is true of removing a disk.
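
What zpool add will accept is a whole new raidz vdev alongside the existing one (a sketch assuming two more spare disks, which the question doesn't mention having; note that existing data is not rebalanced onto the new vdev):

zpool add backupz raidz c7t4d0 c7t5d0 c7t6d0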


ping - Client can't reach my production webserver. It's their ISP's fault, but now what?



I have a customer in Michigan who can't access my production SaaS webserver that is hosted on Slicehost. All other companies across the US/Canada/Europe have no problem reaching the site. This problem occurs intermittently, and Slicehost customer service says it's a problem with the client's ISP.



I got the IP address of my client, and pinging that IP address from my PROD server fails, but pinging it from my dev box or our separate blog server (also hosted on Slicehost) works. How do I debug a problem like this? I asked the client to reach out to their local ISP and ask about this problem.




A traceroute shows that the packets are getting stopped on a Comcast Michigan node which is the client's ISP. Is there anything I can do additionally to fix this problem for my client?


Answer



There is little to nothing you can do. If the ISP has a routing issue on their network that isn't allowing the connection through, then the options are:




  1. Client can complain to the ISP to fix their network.

  2. Client can use a different connection to access the site.

  3. Move the site to another host where the client can access it (not recommended).




Option 1 is usually the best course of action. Unfortunately, the Internet isn't perfect and sometimes there are connection problems. If a traceroute is showing a break within Comcast's network, then Comcast needs to fix it.
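
When escalating to the ISP, path data from both a working and a failing vantage point strengthens the ticket; a diagnostic sketch (the IP is a placeholder for the client's address):

# run from both the PROD server and the blog server, then compare where the paths diverge
mtr --report --report-cycles 20 203.0.113.10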


Trying to install Mysql on Ubuntu Server 12.04



I am currently trying to install MySQL on my Ubuntu 12.04 server, but I'm running into problems. When I run sudo apt-get install mysql-server, it starts and asks me to confirm, but then returns "Temporary failure resolving" and "Failed to fetch" errors. I am using PuTTY to manage the server, but I can also access it physically. This is what I get when I run the command:




root@cloud:/home/tek/openstackgeek# sudo apt-get install mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient18
libnet-daemon-perl libplrpc-perl mysql-client-5.5 mysql-client-core-5.5
mysql-common mysql-server-5.5 mysql-server-core-5.5
Suggested packages:

libipc-sharedcache-perl libterm-readkey-perl tinyca mailx
The following NEW packages will be installed
libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient18
libnet-daemon-perl libplrpc-perl mysql-client-5.5 mysql-client-core-5.5
mysql-common mysql-server mysql-server-5.5 mysql-server-core-5.5
0 upgraded, 12 newly installed, 0 to remove and 0 not upgraded.
Need to get 27.2 MB of archives.
After this operation, 97.1 MB of additional disk space will be used.
Do you want to continue [Y/n]? y
Err http://no.archive.ubuntu.com/ubuntu/ precise-updates/main mysql-common all 5.5.24-0ubuntu0.12.04.1
  Temporary failure resolving ‘no.archive.ubuntu.com’
Err http://no.archive.ubuntu.com/ubuntu/ precise/main libnet-daemon-perl all 0.48-1
  Temporary failure resolving ‘no.archive.ubuntu.com’
Err http://no.archive.ubuntu.com/ubuntu/ precise/main libplrpc-perl all 0.2020-2
  Temporary failure resolving ‘no.archive.ubuntu.com’
Err http://no.archive.ubuntu.com/ubuntu/ precise/main libdbi-perl amd64 1.616-1build2
  Temporary failure resolving ‘no.archive.ubuntu.com’
Err http://no.archive.ubuntu.com/ubuntu/ precise/main libdbd-mysql-perl amd64 4.020-1build2
  Temporary failure resolving ‘no.archive.ubuntu.com’
Err http://security.ubuntu.com/ubuntu/ precise-security/main mysql-common all 5.5.24-0ubuntu0.12.04.1
  Temporary failure resolving ‘security.ubuntu.com’
Err http://security.ubuntu.com/ubuntu/ precise-security/main libmysqlclient18 amd64 5.5.24-0ubuntu0.12.04.1
  Temporary failure resolving ‘security.ubuntu.com’
Err http://no.archive.ubuntu.com/ubuntu/ precise/main libhtml-template-perl all 2.10-1
  Temporary failure resolving ‘no.archive.ubuntu.com’
Err http://security.ubuntu.com/ubuntu/ precise-security/main mysql-client-core-5.5 amd64 5.5.24-0ubuntu0.12.04.1
  Temporary failure resolving ‘security.ubuntu.com’
Err http://security.ubuntu.com/ubuntu/ precise-security/main mysql-client-5.5 amd64 5.5.24-0ubuntu0.12.04.1
  Temporary failure resolving ‘security.ubuntu.com’
Err http://security.ubuntu.com/ubuntu/ precise-security/main mysql-server-core-5.5 amd64 5.5.24-0ubuntu0.12.04.1
  Temporary failure resolving ‘security.ubuntu.com’
Err http://security.ubuntu.com/ubuntu/ precise-security/main mysql-server-5.5 amd64 5.5.24-0ubuntu0.12.04.1
  Temporary failure resolving ‘security.ubuntu.com’
Err http://security.ubuntu.com/ubuntu/ precise-security/main mysql-server all 5.5.24-0ubuntu0.12.04.1
  Temporary failure resolving ‘security.ubuntu.com’
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-5.5/mysql-common_5.5.24-0ubuntu0.12.04.1_all.deb  Temporary failure resolving ‘security.ubuntu.com’
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-5.5/libmysqlclient18_5.5.24-0ubuntu0.12.04.1_amd64.deb  Temporary failure resolving ‘security.ubuntu.com’
Failed to fetch http://no.archive.ubuntu.com/ubuntu/pool/main/libn/libnet-daemon-perl/libnet-daemon-perl_0.48-1_all.deb  Temporary failure resolving ‘no.archive.ubuntu.com’
Failed to fetch http://no.archive.ubuntu.com/ubuntu/pool/main/libp/libplrpc-perl/libplrpc-perl_0.2020-2_all.deb  Temporary failure resolving ‘no.archive.ubuntu.com’
Failed to fetch http://no.archive.ubuntu.com/ubuntu/pool/main/libd/libdbi-perl/libdbi-perl_1.616-1build2_amd64.deb  Temporary failure resolving ‘no.archive.ubuntu.com’
Failed to fetch http://no.archive.ubuntu.com/ubuntu/pool/main/libd/libdbd-mysql-perl/libdbd-mysql-perl_4.020-1build2_amd64.deb  Temporary failure resolving ‘no.archive.ubuntu.com’
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-5.5/mysql-client-core-5.5_5.5.24-0ubuntu0.12.04.1_amd64.deb  Temporary failure resolving ‘security.ubuntu.com’
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-5.5/mysql-client-5.5_5.5.24-0ubuntu0.12.04.1_amd64.deb  Temporary failure resolving ‘security.ubuntu.com’
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-5.5/mysql-server-core-5.5_5.5.24-0ubuntu0.12.04.1_amd64.deb  Temporary failure resolving ‘security.ubuntu.com’
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-5.5/mysql-server-5.5_5.5.24-0ubuntu0.12.04.1_amd64.deb  Temporary failure resolving ‘security.ubuntu.com’
Failed to fetch http://no.archive.ubuntu.com/ubuntu/pool/main/libh/libhtml-template-perl/libhtml-template-perl_2.10-1_all.deb  Temporary failure resolving ‘no.archive.ubuntu.com’
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-5.5/mysql-server_5.5.24-0ubuntu0.12.04.1_all.deb  Temporary failure resolving ‘security.ubuntu.com’
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?



I am trying to install OpenStack on the server and have reached the MySQL step.


Answer



I can resolve security.ubuntu.com so you may have limited network connectivity or your name resolution is incorrectly configured. Make sure you have valid nameservers in /etc/resolv.conf and DNS is enabled on the hosts line of /etc/nsswitch.conf. Then try pinging www.yahoo.com.
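
As a quick test, a minimal /etc/resolv.conf pointing at public resolvers (a sketch using Google's servers; on 12.04, resolv.conf may be managed by resolvconf, so a permanent fix belongs in the interfaces or dhclient configuration):

# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4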


security - MySQL through SSH Tunnel

I have a php web application (Server A) that accesses MySQL on a remote server (Server B) through an SSH tunnel. Once the tunnel is set up, I can log in and run queries on Server B from Server A exactly as you would expect. However, when the web application tries to query the server I get the error:



[PDOException] SQLSTATE[HY000] [3159] Connections using insecure transport are prohibited while --require_secure_transport=ON.


Sure enough, if I set the require_secure_transport system variable to OFF, it all works as expected, but I do not understand why the web application's connection triggers this exception while a normal connection does not.
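
A plausible explanation, offered as a guess: recent mysql command-line clients negotiate TLS by default (ssl-mode PREFERRED), while the PDO connection stays in cleartext, so only the latter trips require_secure_transport. This can be checked from inside the working CLI session (host, port, and user are placeholders):

mysql -h 127.0.0.1 -P 3306 -u appuser -p \
  -e "SHOW SESSION STATUS LIKE 'Ssl_cipher';"
# a non-empty Ssl_cipher value means the CLI connection is in fact encrypted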

ssl - Single certificate for a single domain & multiple domains - Apache



I am hosting multiple websites on a server with a single IP address. I have a single certificate for one of the domain names. Is there a way to configure Apache so that the certificate applies only to that single domain and not others on a single IP address? Right now the setup works, but that same certificate is used for all, which is obviously not what I want. I don't want to set up SSL for these other domains, rather, just disable SSL functionality for all except that one domain.



An example would be really helpful!



Thanks!


Answer




Because of the way SSL typically works, what you want is not feasible. Once you understand how HTTPS works, it's easy to see why. When a browser makes an HTTP connection, it first creates a TCP connection, then starts talking the HTTP protocol over that TCP connection. The HTTP protocol provides a way for the client to say "I'm intending to fetch pages from the server named 'www.example.com'"; this server name is what Apache uses to decide which VirtualHost to serve.



With HTTPS, what happens is, a TCP connection is made, then SSL is negotiated (the server sends the certificate to the client, the client verifies that the certificate is legitimate, they exchange session keys), then they talk HTTP over this SSL encrypted TCP connection, and only then can the client say which server name they are intending to talk to.



So here the problem is that they have already negotiated the SSL connection before the server knows which server they are trying to talk to.



Server Name Indication (SNI) extends SSL/TLS to let the client specify a server name as part of the SSL negotiation, but it is not yet supported widely enough in browsers to rely upon.


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...