Thursday, June 30, 2016

smtp - Multiple mail servers and reverse DNS



If I have three mail servers MS1, MS2 and MS3, all with different IPs but sharing the same domain name (exampledomain.com), and I use an SPF record to specify them, how would reverse DNS work on the server receiving mail from any of my mail servers, since each of them would resolve to a different IP?
Or would the receiving server have to check against the SPF records instead?


Answer



Never name your mail servers (or any other server) with the naked domain name. This will break a lot more stuff than forward confirmed reverse DNS lookups.



Each server should have its own unique name which is a subdomain of your domain, and for which the reverse DNS points back to that name.
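For example, with hypothetical names and addresses (192.0.2.x is a documentation range), the forward and reverse records pair up like this:

; forward records
mail1.exampledomain.com.   IN  A    192.0.2.10
mail2.exampledomain.com.   IN  A    192.0.2.11
mail3.exampledomain.com.   IN  A    192.0.2.12

; matching PTR records in the reverse zone
10.2.0.192.in-addr.arpa.   IN  PTR  mail1.exampledomain.com.
11.2.0.192.in-addr.arpa.   IN  PTR  mail2.exampledomain.com.
12.2.0.192.in-addr.arpa.   IN  PTR  mail3.exampledomain.com.

The SPF record for exampledomain.com then lists all three names (or their IPs), and each sending server passes a forward-confirmed reverse DNS check on its own name.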



Apache 2.0 Administration

Is there a way to control the number of threads in a prefork MPM?

Zend Framework on PHP 7

I've recently upgraded my server to use PHP 7.0. However, following this upgrade, I noticed that my web application wasn't working. I looked in my apache2 error.log file and found this error:



PHP Fatal error: Uncaught Error: Class 'Zend_Loader_Autoloader' not found



When I do 'php -v' on the command line, it shows this:




PHP 7.0.0-5+deb.sury.org~trusty+1 (cli) ( NTS )
Copyright (c) 1997-2015 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2015 Zend Technologies
with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2015, by Zend Technologies



It looks like the framework is installed, but for the cli only (not Apache).



Does anyone know how to enable it for Apache?




Thanks.

Wednesday, June 29, 2016

zfs - smartOS HPC config suggestion

I'm configuring a brand new HPC server and am interested in using SmartOS because of its virtualization control and ZFS features. Does this configuration make sense for a SmartOS HPC, or would you recommend an alternative?



System Specs:
2x 8-core xeon

384 GB RAM
30 TB HDs with 2x512GB SSDs



Uses:
- zfs for serving data to different vms, and over the network; 1 SSD for L2ARC and 1 for ZIL
- typically 1-2 ubuntu instances running R and custom C/C++ code



My biggest concerns as a newbie to SmartOS and ZFS are:



(1) will I get near-metal performance from ubuntu running on SmartOS if it is the only active vm?

(2) how do I serve data from the global zfs pool to the containers and other network devices?

tunnel - SSH to Remote host via another host



I am trying to ssh to remote Host B, but network access control dictates that I am only able to do this via Host A. How would I go about doing that?




I have tried creating a tunnel to Host A:
ssh -f -N -D 2222 user@hostA



Then, when creating new ssh connections from Local, I specify the tunnel port so those connections are tunnelled, but I can't get this working:
ssh -L 2222:hostB:22 hostA



Hosts involved:
Local
Host A (local intranet)
Host B (internet)




Flow of traffic:
Local > HostA > HostB



Any pointers would be super handy. Thanks in advance!


Answer



Your thought of using a dynamic port forward for this will never work. Think through it logically - you need to open a local port that forwards from your local machine, through hostA, to port 22 on hostB. There are a couple of ways you can achieve this. First, the inelegant, manual way:



First, set up the tunnel:




$ ssh -L2222:hostB:22 user@hostA


Then, connect to hostB:



$ ssh -p 2222 user@localhost


The preferred option is to use the ssh client's ProxyCommand directive, which can automate this for you. Add something like this to your ~/.ssh/config:




host hostB
Hostname hostB
ProxyCommand ssh user@hostA nc %h %p 2> /dev/null


After doing this, you can simply run:



$ ssh hostB



...and the ssh client will take care of everything for you.
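As an aside, on OpenSSH 7.3 and later the nc-based ProxyCommand can be replaced by the built-in ProxyJump directive (host and user names are the same placeholders as above):

host hostB
Hostname hostB
ProxyJump user@hostA

Or, as a one-off from the command line: ssh -J user@hostA user@hostB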


Tuesday, June 28, 2016

Clarification about Linux TCP window size and delays



I am seeing delays that I am not able to understand when sending data through a TCP channel. The link is a 1Gb link with an end-to-end latency of roughly 40ms. In my current setup, latency (the time for one message to go from the sender user space to the receiver user space) can reach 100ms.




The sender socket is configured with the TCP_NODELAY option. The send buffer (SO_SNDBUF) is configured to be 8MB. The receive buffer (SO_RCVBUF) is also configured to be 8MB. TCP window scaling is activated.



update-1: I am using the zeromq 3.1.1 middleware to carry data. Socket configuration, including the TCP_NODELAY flag, is performed by the middleware. Some options are accessible, like the rx and tx buffer sizes, but not TCP_NODELAY. As far as I have understood, TCP_NODELAY is activated to ensure that the data is sent as soon as possible. In the meantime, actual socket sends and the decision to send a message are performed in two separate threads. Proper batching is done if several messages are available at the time the first message in the batch is to be sent.



I ran a capture with tcpdump from which the frames below have been extracted. After the initial TCP handshake, the sender (172.17.152.124) starts sending data. The initial window size is 5840 bytes for the receiver & 5792 bytes for the sender.



My problem is that the sender sends two frames (#6 and #7) then stops, waiting for an ack to come back from the receiver. As far as I can see, the window size of the receiver is not reached and the transfer should not stop (384 bytes outstanding with an initial receive window size of 5840 bytes). I am starting to think that I have not understood correctly how TCP works. Can someone help clarify?



update-2: My data payload consists of a magic number followed by a timestamp. I have isolated the delayed packets by comparing the timestamps of the payloads with the timestamps put by tcpdump. The payload ts of frame #9 is very close to the one of frame #6 and #7 and clearly less than the timestamp of the received ack in frame #8.




update-1: The fact that frame #9 is not sent immediately can be explained by the slow-start of the TCP channel. In fact, the problem also appears once the connection is running for several minutes so the slow-start does not seem to be the general explanation.





  1. 20:53:26.017415 IP 172.17.60.9.39943 > 172.17.152.124.56001: Flags [S], seq 2473022771, win 5840, options [mss 1460,sackOK,TS val 4219180820 ecr 0,nop,wscale 8], length 0


  2. 20:53:26.017423 IP 172.17.152.124.56001 > 172.17.60.9.39943: Flags [S.], seq 2948065596, ack 2473022772, win 5792, options [mss 1460,sackOK,TS val 186598852 ecr 219180820,nop,wscale 9], length 0


  3. 20:53:26.091940 IP 172.17.60.9.39943 > 172.17.152.124.56001: Flags [.], ack 1, win 23, options [nop,nop,TS val 4219180894 ecr 186598852], length 0


  4. 20:53:26.091958 IP 172.17.60.9.39943 > 172.17.152.124.56001: Flags [P.], seq 1:15, ack 1, win 23, options [nop,nop,TS val 4219180895 ecr 186598852], length 14


  5. 20:53:26.091964 IP 172.17.152.124.56001 > 172.17.60.9.39943: Flags [.], ack 15, win 12, options [nop,nop,TS val 186598927 ecr 4219180895], length 0


  6. 20:53:26.128298 IP 172.17.152.124.56001 > 172.17.60.9.39943: Flags [P.], seq 1:257, ack 15, win 12, options [nop,nop,TS val 186598963 ecr 4219180895], length 256



  7. 20:53:26.128519 IP 172.17.152.124.56001 > 172.17.60.9.39943: Flags [P.], seq 257:385, ack 15, win 12, options [nop,nop,TS val 186598963 ecr 4219180895], length 128


  8. 20:53:26.202465 IP 172.17.60.9.39943 > 172.17.152.124.56001: Flags [.], ack 257, win 27, options [nop,nop,TS val 4219181005 ecr 186598963], length 0


  9. 20:53:26.202475 IP 172.17.152.124.56001 > 172.17.60.9.39943: Flags [.], seq 385:1833, ack 15, win 12, options [nop,nop,TS val 186599037 ecr 4219181005], length 1448


  10. 20:53:26.202480 IP 172.17.152.124.56001 > 172.17.60.9.39943: Flags [P.], seq 1833:2305, ack 15, win 12, options [nop,nop,TS val 186599037 ecr 4219181005], length 472





If this matters, both ends are Linux RHEL5 boxes with 2.6.18 kernels, and the network cards are using e1000e drivers.



update-3

Content of /etc/sysctl.conf



[jlafaye@localhost ~]$ cat /etc/sysctl.conf | grep -v "^#" | grep -v "^$" 
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536

kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
net.ipv4.tcp_rmem = 65536 4194304 16777216
net.ipv4.tcp_wmem = 65536 4194304 16777216
net.core.netdev_max_backlog = 10000

net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_mem = 262144 4194304 16777216
kernel.shmmax = 68719476736

Answer



After doing a little more digging into my traffic, I was able to see that my data was nothing but a sequence of small bursts with small idle periods between them.



With the useful tool ss, I was able to retrieve the current congestion window size of my connection (see the cwnd value in the output):





[user@localhost ~]$ /usr/sbin/ss -i -t -e | grep -A 1 56001



ESTAB 0 0 192.168.1.1:56001
192.168.2.1:45614 uid:1001 ino:6873875 sk:17cd4200ffff8804
ts sackscalable wscale:8,9 rto:277 rtt:74/1 ato:40 cwnd:36 send 5.6Mbps rcv_space:5792




I ran the tool several times and discovered that the congestion window size was regularly reset to the initial value (10 on my Linux box). The connection was constantly looping back to the slow start phase. During the slow start period, bursts with a number of messages exceeding the window size were delayed, waiting for the acks related to the first packets of the burst.



The fact that the traffic consists of a sequence of bursts likely explains the reset of the congestion window size.




By deactivating the slow start mode after idle period, I was able to get rid of the delays.




[user@host ~]$ cat /proc/sys/net/ipv4/tcp_slow_start_after_idle
0
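For reference, the setting can be changed like this (as root); the second line simply makes it persistent across reboots:

sysctl -w net.ipv4.tcp_slow_start_after_idle=0
echo "net.ipv4.tcp_slow_start_after_idle = 0" >> /etc/sysctl.conf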



domain name system - Connectivity problems on website



We've been experiencing some connectivity problems on http://www.scirra.com. Running a DNS check:



http://dnscheck.pingdom.com/?domain=www.scirra.com


We're getting some errors:





Delegation not found at parent.



No delegation could be found at the parent, making your zone
unreachable from the Internet.



Not enough nameserver information was found to test the zone
www.scirra.com, but an IP address lookup succeeded in spite of that.





It's been ticking by nicely for many months, and we've made no changes recently except replacing expiring SSL certificates on our servers.



Problems I'm experiencing are intermittent "This web page is not available" errors in Chrome, where refreshing sometimes loads the page.



Any ideas on what could be causing these issues?


Answer



There is an issue with the last hop to the server. Here are the results from a pathping to www.scirra.com.



$ pathping www.scirra.com


Tracing route to www.scirra.com [108.61.84.218]
over a maximum of 30 hops:


...



  3  vnn-rc0001-cr101-ae10-217.core.as9143.net [213.51.166.65]
  4  asd-tr0042-cr101-ae6-0.core.as9143.net [213.51.158.78]
  5  te0-6-1-4.rcr21.b031955-0.ams03.atlas.cogentco.com [149.14.34.173]
  6  be2499.ccr41.ams03.atlas.cogentco.com [130.117.1.149]
  7  be2038.rcr21.ams05.atlas.cogentco.com [154.54.36.134]
  8  tinet.ams05.atlas.cogentco.com [130.117.14.50]
  9  xe-2-0-2.nyc39.ip4.gtt.net [141.136.111.106]
 10  gtt-gw.ip4.gtt.net [173.241.131.238]
 11  ae1-50g.ar1.nyc3.us.as4436.gtt.net [69.31.95.194]
 12  as20473.ae7.ar1.nyc3.us.as4436.gtt.net [69.31.34.62]
 13  108.61.244.41
 14  vl329-c11-15-b2-1-sa.pnj1.choopa.net [108.61.65.62]
 15  *  108.61.84.218


Computing statistics for 375 seconds...
            Source to Here   This Node/Link
Hop  RTT    Lost/Sent = Pct  Lost/Sent = Pct  Address


...



  3   10ms     0/ 100 =  0%     0/ 100 =  0%  vnn-rc0001-cr101-ae10-217.core.as9143.net [213.51.166.65]
0/ 100 = 0% |

4 15ms 0/ 100 = 0% 0/ 100 = 0% asd-tr0042-cr101-ae6-0.core.as9143.net [213.51.158.78]
0/ 100 = 0% |
5 10ms 0/ 100 = 0% 0/ 100 = 0% te0-6-1-4.rcr21.b031955-0.ams03.atlas.cogentco.com [149.14.34.173]
0/ 100 = 0% |
6 11ms 0/ 100 = 0% 0/ 100 = 0% be2499.ccr41.ams03.atlas.cogentco.com [130.117.1.149]
0/ 100 = 0% |
7 9ms 0/ 100 = 0% 0/ 100 = 0% be2038.rcr21.ams05.atlas.cogentco.com [154.54.36.134]
0/ 100 = 0% |
8 9ms 0/ 100 = 0% 0/ 100 = 0% tinet.ams05.atlas.cogentco.com [130.117.14.50]
0/ 100 = 0% |

9 84ms 0/ 100 = 0% 0/ 100 = 0% xe-2-0-2.nyc39.ip4.gtt.net [141.136.111.106]
0/ 100 = 0% |
10 84ms 0/ 100 = 0% 0/ 100 = 0% gtt-gw.ip4.gtt.net [173.241.131.238]
0/ 100 = 0% |
11 86ms 0/ 100 = 0% 0/ 100 = 0% ae1-50g.ar1.nyc3.us.as4436.gtt.net [69.31.95.194]
0/ 100 = 0% |
12 89ms 0/ 100 = 0% 0/ 100 = 0% as20473.ae7.ar1.nyc3.us.as4436.gtt.net [69.31.34.62]
0/ 100 = 0% |
13 --- 100/ 100 =100% 100/ 100 =100% 108.61.244.41
0/ 100 = 0% |

14 86ms 0/ 100 = 0% 0/ 100 = 0% vl329-c11-15-b2-1-sa.pnj1.choopa.net [108.61.65.62]
54/ 100 = 54% |
15 86ms 54/ 100 = 54% 0/ 100 = 0% 108.61.84.218

Trace complete.


There is a 54% packet loss between hops 14 and 15.


apache 2.2 - Why is the response on localhost so slow?



I am working on a tiny little PHP project for a friend of mine, and I have a WAMP environment setup for local development. I remember the days when the response from my local Apache 2.2 was immediate. Alas, now that I got back from a long, long holiday, I find the responses from localhost painfully slow.



It takes around 5 seconds to get a 300B HTML page served out.



When I look at the task manager, the httpd processes (2) are using up 0% of the CPU and overall my computer is not under load (0-2% CPU usage).



Why is the latency so high? Is there any Apache setting that I could tweak to perhaps make its thread run with a higher priority or something? It seems like it's simply sleeping before it's serving out the response.



Answer



The issue was with Apache's main settings file httpd.conf.



I found this:




There are three ways to set up PHP to work with Apache 2.x on Windows. You can run PHP as a handler, as a CGI, or under FastCGI. [Source]




And so I went into Apache's settings and saw where the problem was: I had PHP set up as CGI instead of loading it as a module. This caused php-cgi.exe to start up and shut down every time I made a request, which was slowing my localhost development down.




I changed the settings to load PHP as an Apache MODULE and now it all works perfectly. :)




To load the PHP module for Apache 2.x:



1) Insert the following lines into httpd.conf:



LoadModule php5_module "c:/php/php5apache2.dll"




AddHandler application/x-httpd-php .php



(p.s. change C:/php to your path. Also, change php5apache**.dll to your existing file name)



2) To limit PHP execution to .php files only, add this to httpd.conf:




<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>




3) Set the path to php.ini in httpd.conf (if you get an error after restarting, remove this line again):



PHPIniDir "C:/php"




Thank you all for your efforts.


do_ypcall: clnt_call: RPC: Unable to receive; errno = Connection refused

I had been running a time-consuming background program via a bash script on a linux server. In the same bash script, I set the notification by "set -o notify" so I could know when the background job was done.



Probably last night the notification popped up to say the background job was done, and after that came the following error messages. I am not sure whether they occurred immediately after the background job finished or a while later, since I was not at my terminal at the time:




do_ypcall: clnt_call: RPC: Unable to receive; errno = Connection refused




do_ypcall: clnt_call: RPC: Unable to receive; errno = Connection refused



do_ypcall: clnt_call: RPC: Unable to receive; errno = Connection refused




I also redirected the stdout output of my background job to a log file which seems to say the program was not finished as expected but terminated midway.



Could you explain the meaning of the error message? If possible, what kinds of problems could I have run into? Could it be that the administrator placed some restriction on the resources that I can use on that server?



Thanks and regards!







UPDATE:



the same hard drive is mounted via NFS over several servers, including the one mentioned above. I just found a similar but different error occurring on another server:




do_ypcall: clnt_call: RPC: Timed out





Neither this error nor the previous one seems to affect the programs that are running. By the way, my program does I/O operations.

Monday, June 27, 2016

500 error with deploying rails application via apache2+passenger

I finally completed my own app, so the only work left is deploying the app.



I'm using Ubuntu 10.04 and apache2(installed by apt-get), so I'm trying to deploy through passenger.



I installed passenger gem like this:



sudo gem install passenger
rvmsudo passenger-install-apache2-module



and I configured the Apache settings as the installation message instructed.



I added the lines below in the middle of the /etc/apache2/apache2.conf file:



LoadModule passenger_module /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17/ext/apache2/mod_passenger.so
PassengerRoot /home/admin/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.17
PassengerRuby /home/admin/.rvm/wrappers/ruby-1.9.3-p194/ruby



and I appended the lines below to the /etc/apache2/sites-available/default file:




<VirtualHost *:80>
ServerName localhost
# !!! Be sure to point DocumentRoot to 'public'!
DocumentRoot /home/admin/homepage/public
<Directory /home/admin/homepage/public>
# This relaxes Apache security settings.
AllowOverride all
# MultiViews must be turned off.
Options -MultiViews
</Directory>
</VirtualHost>





But when I restart the Apache service and hit the address, a 500 error occurs.



At first it was the same 500 error but the error page was Apache's; after I reinstalled libapache2-mod-passenger, the 500 error page changed to the one from Rails.




Because of Rails' 500 error page (which is located at public/500.html), I think the Passenger module is properly connected with Apache.



What should I do to fix this problem?



Do I need to configure something inside my app before deployment?

windows server 2000 - Configure DNS Zone to Forward for any non-existent hosts



Not sure if this is possible, but in my head, it sounds reasonable to be able to do. I'm just not sure how...




We have our company domain on our internal DNS servers (company.com), but the domain is hosted externally as well. We have the zone setup on a Win2k Server, and it is AD integrated.



What I want to be able to do, is for any hosts which do NOT exist on our internal DNS (queried by internal machines with that DNS server set manually), to then look at public DNS for the domain.



So:
On our internal DNS we have the company.com zone setup.
On public DNS we have the company.com zone setup, and add an A record for host name 'www'.
External machines lookup www.company.com, and resolve as normal, using public DNS.
Internal machines lookup www.company.com, cannot find it on internal DNS, forwards resolution to public DNS and finds the record.



Is this too much to ask? Or am I just going about it the wrong way?




Thanks.


Answer



No, Windows DNS Server doesn't work that way. You have to add an A record with the IP of your website.



You can achieve something like this for subdomains of your primary, but it's an ugly trick and will not work in your situation.
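If you would rather script adding that A record than use the DNS console, the dnscmd utility (from the Windows Support Tools) can do it; the IP below is a placeholder:

dnscmd . /RecordAdd company.com www A 203.0.113.10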


ESXi boot process / state storage



I've got a standalone ESXi server and I'm having problems with it losing config on reboot. I restored the config from a previous install and it reverts to that every time it's restarted.



My current hypothesis is that although the state is correctly being backed up to /bootbank/local.tgz on the hour (it's a USB installation and if I understand autobackup.sh correctly, that's expected behaviour), the boot process is reading from /bootbank/state.tgz.




I think this because of the contents of /bootbank/boot.cfg (specifically the modules line) and because the restored config was from a disk installation, rather than USB:



~ # cat /bootbank/boot.cfg
kernel=b.z
kernelopt=
modules=k.z --- s.z --- c.z --- oem.tgz --- license.tgz --- m.z --- state.tgz
build=4.1.0-381591
updated=2
bootstate=0



Should I swap in local.tgz for state.tgz here (bearing in mind one is an archive and one is an archive of an archive and so need to be treated differently), or is this entry a result of a setting elsewhere I should be targeting instead?



Alternatively, should I just delete this entry from the modules line (to have it go to local.tgz by default because of the USB boot status)? Do I need to adjust /altbootbank/boot.cfg too? I ask these two questions because neither state file is included in the modules line in this file.



Normally, I'd just experiment, but I'm wary of tampering with the boot process in case it stops booting!



The system is a recently patched 4.1 (free version - it's not a production system) on more or less HCL hardware, using DAS for the datastore and a 2GB USB stick for the hypervisor install.



Edit




I've looked through /sbin/backup.sh (which is called from autobackup.sh) and this actually adds --- state.tgz to the modules line in boot.cfg if a) it's not a USB boot and b) it's not already there. This strongly suggests to me that (in my USB boot environment) it's there erroneously and I should just delete it... but I'd still love some confirmation of that from someone more knowledgeable.



Can anyone tell me (or even speculate on) why "embedded" / USB booting systems use local.tgz and "installed" systems use state.tgz (which, AIUI, just contains local.tgz)? Could it be something to do with multiple configs for clusters?


Answer



In the absence of suggestions either way, I bit the bullet and removed the --- state.tgz parameter from the modules line in /bootbank/boot.cfg and, judging by a couple of test restarts, config changes are persisting between boots now. I read post #44 in this thread, which suggested it was a valid thing to do. It seems local.tgz is read on boot now instead of the stale state.tgz, as I was hoping.



I still don't know what the reason for this entry appearing in the modules line was, so I'll be keeping an eye out for it returning. As an entirely new boot image is written to /altbootbank/ whenever updates are applied, I'll be checking the newly created boot.cfg to make sure it hasn't crept back in when I next patch the server.


windows - How do I grant start/stop/restart permissions on a service to an arbitrary user or group on a non-domain-member server?




We have a suite of Windows Services running on our servers which perform a bunch of automated tasks independently of one another, with the exception of one service which looks after the other services.



In the event that one of the services should fail to respond or hang, this service attempts to restart the service and, if an exception is thrown during the attempt, emails the support team instead, so that they can restart the service themselves.



Having done a little research, I've come across a few 'solutions' which range from the workaround mentioned in KB907460 to giving the account under which the service is running administrator rights.



I'm not comfortable with either of these methods - I don't understand the consequences of the first method as outlined in Microsoft's knowledge base article, but I definitely don't want to give administrator access to the account under which the service is running.



I've taken a quick look through the Local Security Policy and other than the policy which defines whether or not an account can log on as a service, I can't see anything else which looks like it refers to services.




We're running this on Server 2003 and Server 2008, so any ideas or pointers would be graciously received!






Clarification: I don't want to grant the ability to start/stop/restart ALL services to a given user or group - I want to be able to grant the permission to do so on specific services only, to a given user or group.






Further Clarification: The servers I need to grant these permissions on do not belong to a domain - they are two internet-facing servers which receive files, process them and send them on to third parties, as well as serving a couple of websites, so Active Directory Group Policy isn't possible. Sorry that I didn't make this clearer.


Answer




There doesn't appear to be a GUI-based way of doing this unless you're joined to a domain - at least not one I could find anywhere - so I did a bit more digging and I've found an answer that works for our situation.



I didn't understand what the string representation meant in the knowledge base article, but doing a bit of digging led me to discover that it's SDDL syntax. Further digging led me to this article by Alun Jones which explains how to get the security descriptor for a service and what each bit means. MS KB914392 has more details.



To append to the service's existing security descriptor, use sc sdshow "Service Name" to get the existing descriptor. If this is a plain old .NET Windows Service - as is the case with ours - the security descriptor should look something like this:



D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOC
RRC;;;IU)(A;;CCLCSWLOCRRC;;;SU)(A;;CR;;;AU)(A;;CCLCSWRPWPDTLOCRRC;;;PU)S:(AU;FA
;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD)



We needed to grant permissions RP (to start the service), WP (to stop the service), DT (to pause/continue the service) and LO (to query the service's current status). This could be done by adding our service account to the Power Users group, but I only want to grant individual access to the account under which the maintenance service runs.



Using runas to open a command prompt under the service account, I ran whoami /all which gave me the SID of the service account, and then constructed the additional SDDL below:



(A;;RPWPDTLO;;;S-x-x-xx-xxxxxxxxxx-xxxxxxxxxx-xxxxxxxxx-xxxx)


This then gets added to the D: section of the SDDL string above:




D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOC
RRC;;;IU)(A;;CCLCSWLOCRRC;;;SU)(A;;CR;;;AU)(A;;CCLCSWRPWPDTLOCRRC;;;PU)(A;;RPWP
DTLO;;;S-x-x-xx-xxxxxxxxxx-xxxxxxxxxx-xxxxxxxxx-xxxx)S:(AU;FA;CCDCLCSWRPWPDTLOC
RSDRCWDWO;;;WD)


This is then applied to the service using the sc sdset command:



sc sdset "Service Name" D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;
CCLCSWLOCRRC;;;IU)(A;;CCLCSWLOCRRC;;;SU)(A;;CR;;;AU)(A;;CCLCSWRPWPDTLOCRRC;;;PU

)(A;;RPWPDTLO;;;S-x-x-xx-xxxxxxxxxx-xxxxxxxxxx-xxxxxxxxx-xxxx)S:(AU;FA;CCDCLCSW
RPWPDTLOCRSDRCWDWO;;;WD)


If all goes according to plan, the service can then be started, stopped, paused and have its status queried by the user defined by the SID above.
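A quick way to confirm the result from the service account itself ("Service Name" remains a placeholder):

rem confirm the new ACE is present in the descriptor
sc sdshow "Service Name"

rem these should now succeed without "Access is denied"
sc stop "Service Name"
sc start "Service Name"
sc query "Service Name"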


Sunday, June 26, 2016

Mysql running out of memory

I am having a lot of trouble with mysql eating too much ram on a small machine and am looking for some help in figuring out why it is eating so much.




I have a small virtual machine with 256MB of RAM running Debian Wheezy. On this server I have apache2 and mysql installed. I don't do very much on this server, only a few lightly used websites and a mail server.



For some reason, several times a day, my mysql server crashes. When I check my syslog I find the following error:




kernel: [3323593.630722] Out of memory in UB: OOM killed process 9471
(mysqld) score 0 vm:327824kB, rss:37080kB, swap:0kB




So as far as I can tell, mysql starts to eat up too much memory and is killed by the system. I log slow queries and keep a tab on my mysql.err log, but I don't see anything of much value in those that would show me why mysql starts to eat so much memory.




My my.cnf file has these options set:



key_buffer              = 8M
max_allowed_packet = 16M
thread_stack = 128K
thread_cache_size = 8

query_cache_limit = 512K
query_cache_size = 8M



The other thing is that when I check the amount of memory being used when I start mysql and even during the day while it is running, I usually have about 128MB free. I don't see how mysql would end up eating that amount ever.



What can I do to track this problem down?

domain name system - How to change web host for my small site with minimal downtime



Company S is a small mom and pop host that can't keep our site running for more than a couple of weeks at a time. They have suggested we find a better home, so we are moving our site to CrystalTech for hosting in their shared plans. I have moved the site over and it is working fine on the IP address, but now I need to move the name servers. How do I minimize downtime?



Here is my plan, please point out any errors:




  • On Monday, ask Company S to reduce the TTL on our name to something very small, perhaps 1200

  • On Friday ask Company S to change their DNS to point our domain to the IP address at CrystalTech. Just the web and not the email.

  • At the same time change the ns records with network solutions to CrystalTech's nameservers


  • At the same time disable the database on Company S and change the template to read "sorry, site moved blah blah blah"



I hope those four steps make the transition basically buttery smooth for everybody at once and nobody sees the "sorry site moved" for more than a 20 minute window



Will this make the transition as smooth as possible? We don't have anything super time sensitive like a shopping cart, but users do log into the site and update forms dynamically, so being in two places at once isn't cool.



Can we do this ONLY for the website? The email is a Google AFYD account, so the email is working fine, and the company owner is adamant that email must experience zero downtime while the web moves over.



please visit my question about how to migrate the email as well

https://superuser.com/questions/93012/how-to-change-web-host-and-have-minimal-downtime-for-email


Answer



I wouldn't worry about the TTL stage.



Personally, I would do it like this, and as long as you do it in order - you can do it on the same day.




  1. Make sure that the new host is working (You already said it does).

  2. Set up DNS records at new host, pointing all A records / Cnames to the new host(or independent provider - I like Everydns). Also set up any additional entries such as MX records.*

  3. Take a backup from the old host and import to the new host / move the database, and change the old site to a single page that has a Meta-refresh tag to the IP address of the new site (an example tag is sketched at the end of this answer).


  4. Ask old host to change DNS to point to the new host.

  5. Ask old host to change name server to point to the new host.

  6. Wait for a few days just to make sure that all caches have expired and you can delete your old account - however, it will be pretty much inactive anyway.



* After step 2, you may want to wait 30 minutes - it is not really needed, but if the host has any funky DNS failover or load balancing, you may want to give it time to do its stuff!



As long as you pre-populate the DNS at the new host (step 2) with all the required fields (such as the MX records of Google Apps For Your Domain) before you switch nameservers, there should be zero downtime as it doesn't matter which dns server gets queried, it will get the same result from both.
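The meta-refresh page mentioned in step 3 can be as simple as this (the IP is a placeholder for the new host's address):

<meta http-equiv="refresh" content="0; url=http://203.0.113.10/">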


solaris - With a 500GB SSD and a 250GB SSD is it possible to mirror a 250GB partition on the 500GB with the 250GB SSD using ZFS?




So I have a Samsung 250GB 850 Evo SSD and a 500GB 860 EVO SSD. I'm looking at using Solaris for this server (so I'm looking at whether doing this with ZFS is possible). Is it possible to mirror the 250GB SSD with a 250GB partition on the 500GB SSD, while leaving the other half of the 500GB drive usable (it would be used rather infrequently, so I'm not too worried about a performance hit)?


Answer



First things first: this is not a good idea. You should really use same-capacity disks, if possible.



That said, what you ask is indeed possible: you need to partition both disks, each with a ~250 GB partition, and set up ZFS to use these two partitions as block devices for the mirrored vdev.



For example:




  • disk #1 will have a single, 250 GB partition;


  • disk #2 will have two 250 GB partitions;

  • a zpool is created using the first partition on each drive (ie: zpool create tank mirror /dev/sda1 /dev/sdb1);

  • the second 250 GB partition on disk #2 is available for other uses: you can create another zpool (ie: zpool create scratch /dev/sdb2), or even use it with another filesystem (ie: mkfs.xfs /dev/sdb2). But remember that this will not be mirrored in any way.
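Putting the list together, a minimal sketch using the same Linux-style device names as above (adjust the device paths for Solaris):

# mirrored pool from the two ~250 GB partitions
zpool create tank mirror /dev/sda1 /dev/sdb1

# leftover partition on the larger disk as a separate, unmirrored pool
zpool create scratch /dev/sdb2

# check the layout and health
zpool status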


update - I can't enable the Meltdown/Spectre mitigations in Windows Server 2008 R2

I have installed the patch released today as detailed here and then set the two registry keys as mentioned:



reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 0 /f


reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f


However, when I run the provided PowerShell module to check, it is informing me the mitigations are still not enabled:



PS C:\Users\Administrator> get-speculationcontrolsettings
Speculation control settings for CVE-2017-5715 [branch target injection]

Hardware support for branch target injection mitigation is present: False
Windows OS support for branch target injection mitigation is present: False

Windows OS support for branch target injection mitigation is enabled: False

Speculation control settings for CVE-2017-5754 [rogue data cache load]

Hardware requires kernel VA shadowing: True
Windows OS support for kernel VA shadow is present: False
Windows OS support for kernel VA shadow is enabled: False

Suggested actions


* Install BIOS/firmware update provided by your device OEM that enables hardware support for the branch target injection mitigation.
* Install the latest available updates for Windows with support for speculation control mitigations.
* Follow the guidance for enabling Windows support for speculation control mitigations are described in https://support.microsoft.com/help/4072698


BTIHardwarePresent : False
BTIWindowsSupportPresent : False
BTIWindowsSupportEnabled : False
BTIDisabledBySystemPolicy : False
BTIDisabledByNoHardwareSupport : False

KVAShadowRequired : True
KVAShadowWindowsSupportPresent : False
KVAShadowWindowsSupportEnabled : False
KVAShadowPcidEnabled : False


Why is this? What else do I have to do? I have rebooted the server for good measure with no improvement.



Update after answer from @Paul:




I've now installed the correct update (wally), and this is the output of the PowerShell cmdlet:



PS C:\Users\Administrator> get-speculationcontrolsettings
Speculation control settings for CVE-2017-5715 [branch target injection]

Hardware support for branch target injection mitigation is present: False
Windows OS support for branch target injection mitigation is present: True
Windows OS support for branch target injection mitigation is enabled: False
Windows OS support for branch target injection mitigation is disabled by system policy: True
Windows OS support for branch target injection mitigation is disabled by absence of hardware support: True


Speculation control settings for CVE-2017-5754 [rogue data cache load]

Hardware requires kernel VA shadowing: True
Windows OS support for kernel VA shadow is present: True
Windows OS support for kernel VA shadow is enabled: False

Suggested actions

* Install BIOS/firmware update provided by your device OEM that enables hardware support for the branch target injection mitigation.

* Follow the guidance for enabling Windows support for speculation control mitigations are described in https://support.microsoft.com/help/4072698


BTIHardwarePresent : False
BTIWindowsSupportPresent : True
BTIWindowsSupportEnabled : False
BTIDisabledBySystemPolicy : True
BTIDisabledByNoHardwareSupport : True
KVAShadowRequired : True
KVAShadowWindowsSupportPresent : True

KVAShadowWindowsSupportEnabled : False
KVAShadowPcidEnabled : False


Is this everything I can do pending a microcode update?

Saturday, June 25, 2016

How to run a Scheduled Task as NetworkService in Windows Server 2003?

How do I configure a scheduled task to run as NT AUTHORITY\NetworkService in Windows Server 2003?




Background



Even though the account is known as NetworkService, the full name is NT AUTHORITY\Network Service.



On Windows Server 2008 R2, when choosing the account to run the task as, you must specify:




  • NETWORK SERVICE (with a space)




That will then resolve to NT AUTHORITY\NetworkService (no space):



(screenshot)



Note: You cannot specify NetworkService:



(screenshot)



Nor can you specify NT AUTHORITY\NetworkService.




In summary:




  • NETWORK SERVICE valid

  • NetworkService invalid

  • NT AUTHORITY\NetworkService invalid



The same is true on Windows 7. You must specify NETWORK SERVICE if you wish for a scheduled task to run as NetworkService (aka NT AUTHORITY\NetworkService)




What about Windows Server 2003?



In Windows Server 2003 it doesn't work:



(screenshot)



I know that any password given for the Network Service (or Local Service) account is ignored, as these accounts have no password:





Note that this account does not have a password, so any password information that you provide in this call is ignored.




But I cannot specify that account:



(screenshot)




  • NETWORK SERVICE invalid

  • NetworkService invalid


  • NT AUTHORITY\NetworkService invalid



NetworkService security



The NetworkService account, like LocalService, is a limited-rights account. The only difference between them is that:




  • NetworkService presents machine credentials (e.g. VADER$) when accessing the network

  • LocalService presents anonymous credentials when accessing the network




My question applies just as well if I want to have a scheduled task run as LocalService (aka NT AUTHORITY\LocalService). I just happened to choose NetworkService when asking this question.



See the question:




How to grant network access to LocalSystem account?





How do I configure a scheduled task to run as NT AUTHORITY\Network Service in Windows Server 2003?

SSL Error - unable to read server certificate from file



I've been setting up SSL for my domain today and have struck another issue that I was hoping someone could shed some light on.



I keep receiving the following error messages:




[error] Init: Unable to read server certificate from file /etc/apache2/domain.com.ssl/domain.com.crt/domain.com.crt

[error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
[error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error


I'm running Apache 2.2.16 and Ubuntu 10.10. My .crt file has the Begin and End tags and has been copied exactly from the confirmation email I received. Very frustrating!



Cheers!



Edit >>
When trying to verify the .crt, it doesn't seem to work:





>> openssl x509 -noout -text -in domain.com.crt
unable to load certificate
16851:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:650:Expecting: TRUSTED CERTIFICATE


Also >>





>> openssl x509 -text -inform PEM -in domain.com.crt
unable to load certificate
21321:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:650:Expecting: TRUSTED CERTIFICATE



>> openssl x509 -text -inform DER -in domain.com.crt
unable to load certificate
21325:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1316:
21325:error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:tasn_dec.c:380:Type=X509



Edit>>
(Cheers for the help by the way)




>> grep '^-----' domain.com.crt
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----



I just emailed the company providing the certificate; they responded:




I have checked the CSR file that you have provided and I can assure
that this was correctly generated. The error that you are currently
encountering is caused because you are using a wrong command line for
installing the CSR. You will need to modify this domain.com.crt from
your command line with the according name of your domain.






  • currently the crt is set up to mysite.com.crt - I've used domain.com.crt as an example


Answer



Is it possible that the lines are ^M-terminated? This is a potential issue when moving files from Windows to UNIX systems. One easy way to check is to use vi in "show me the binary" mode, with vi -b /etc/apache2/domain.ssl/domain.ssl.crt/domain.com.crt.



If each line ends with a control-M, like this



-----BEGIN CERTIFICATE-----^M

MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM^M
MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg^M
THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x^M


you've got a file in Windows line-terminated format, and apache doesn't love those.



Your options include moving the file over again, taking more care; or using the dos2unix command to strip those out; you can also remove them inside vi, if you're careful.
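For example, using the path from the error message above:

dos2unix /etc/apache2/domain.com.ssl/domain.com.crt/domain.com.crt

# or, if dos2unix isn't installed:
sed -i 's/\r$//' /etc/apache2/domain.com.ssl/domain.com.crt/domain.com.crt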







Edit: thanks to @dave_thompson_085, who points out that this answer no longer applies in 2019. That is, Apache/OpenSSL are now tolerant of ^M-terminated lines, so they don't cause problems. That said, other formatting errors, several different examples of which appear in the comments, can still cause problems; check carefully for these if the certificate has been moved across systems.


Thursday, June 23, 2016

Altering names for DNS servers where Primary and Parent records may not be changed at the same time

I have a few Name Servers (BIND9) that I want to alter the Fully Qualified Name for. As a hypothetical example:




  • dns1.olddomain.com

  • dns2.olddomain.com

  • dns3.olddomain.com



Are the old Name Servers and I would like to use the following instead.





  • dns1.newdomain.com

  • dns2.newdomain.com

  • dns3.newdomain.com



Presently all of the above records point to the same DNS servers, but the IPs of the new names are different from those of the old ones (they route into the same machines).



My question here is: if I update all the zones on my servers to use the new names for the SOA and NS records, will I run into any issues if people using these servers do not update the registration records right away? Or will they have an issue if they jump the gun and update the registration prior to my change?




I have done several tests resolving records using both scenarios, and so far I don't see that there is an issue with resolution. However I am unsure if there is something I am missing here.

apache 2.2 - PHP Errors are not stored on CentOS Server



I just adjusted the php.ini on my CentOS 64 Bits VPS in /etc/php.ini to log PHP errors:




cat /etc/php.ini | grep php-errors.log
error_log = /var/log/php-errors.log


I also have log_errors = on



I created the log file in /var/log/ and it is CHMOD 644. I also turned on Error reporting E_ALL



cat /etc/php.ini | grep error_reporting
; error_reporting

error_reporting = E_ALL
; Eval the expression with current error_reporting(). Set to true if you want
; error_reporting(0) around the eval().


Then I restarted the httpd daemon. When I add a file via the WordPress uploader I see it is not uploaded because of a permission issue



“cannot-open-file.png” has failed to upload due to an error
Unable to create directory wp-content/uploads/2014/05. Is its parent directory writable by the server?



, but it is not stored as an error in php-errors.log:



pwd
/var/log

ls -l | grep php
-rw-r--r-- 1 root root 0 May 6 06:21 php-errors.log



All my other logs in /var/log/httpd are also root:root so I would assume the logging would work. And when I did adjust the file's permissions to apache:apache as suggested I still had no errors in the log file. Even adding error logging on to the .htaccess did not help.



I also checked the php.ini using phpinfo(). The only ini loaded is the one I adjusted in /etc/php.ini, and the user and group it is using is apache - User/Group apache(48)/48. What am I missing?



PS: It could be an issue with the directory for the log files, as suggested in Can't configure PHP error log. I am checking out more info on this.


Answer



Apparently I also needed display_errors = on. I thought this was only for displaying errors on screen, but it seems it was needed for error logging as well.
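For reference, the combination of php.ini settings that ended up producing log entries (paths as in the question):

log_errors = On
display_errors = On
error_log = /var/log/php-errors.log
error_reporting = E_ALL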


hardware - How to hook up a Supermicro JBOD expander to an HP DL380




I'm totally new to the subject of high-availability / distributed storage, and have to figure out a solution for a particular project.



I am looking at using Windows Server 2012 R2 Storage Spaces to create a VHDX made out of disks in a JBOD storage expander.



This JBOD rig does nothing in itself and needs to interface into an existing server. The hardware I am looking at for this is:



2U Supermicro 2.5" 24 Bay Storage Expander SAS SSD SATA SC216E16 PT-JBOD-CB2 HBA



I want to attach this to one of my Remote Desktop Services server, which are all HP Proliant DL380 G6 / G7 / G8 servers.




However, this isn't very documented on the internet. Specifically, I'm just trying to figure out:




  • Can these two servers interface?

  • What cable do I need?

  • Do I need to put some card into one of my HP servers? What card?

  • If I were to do 24 2.5" SSD disks, would I need multiple cards?



Sorry if these questions sound dumb, but I'm really feeling around in the dark on this one.



Answer



I'd really recommend period-correct HP JBOD enclosures and gear if you're doing this.



The HP D2600, D2700, D3600 and D3700 are the right tools for this.



For interfaces, you'll want an HP HBA in each server. The G6/G7 and Gen8 should use different HBAs.



The cabling will be SFF-8088 for the D2x00 and SFF-8644 for the D3x00.



There's a lot more to this... especially if you want high-availability to actually work. Can you elaborate on which SSDs you're planning to use?



active directory - My AD domain and DNS domain names are the same. Can this be resolved with SRV secords?



My company has a website with the domain name of acme.com. It also used acme.com as the AD domain name. External DNS is set up properly. Internal DNS has to resolve to the DC, for obvious reasons.



As we know, visitors to acme.com from outside our network get the website and visitors to acme.com from within our network hit the domain controller, for obvious reasons.




I am familiar with the usual answers found here and here that state it goes against best practice to use the same name for the DNS and AD domains, and to either 1) migrate my AD domain to another name or 2) use http redirection via IIS on each of the DCs in my forest.



Based on what I have read on Wikipedia and Reddit, it seems possible to solve this problem with a SRV record.



So, I created the following SRV record but it does not seem to work:



_http._tcp.acme.com. 86400 IN SRV 0 100 80 www.acme.com.



Is it even possible to "redirect" internal http requests from acme.com to www.acme.com using only a SRV record?



Answer



No. Web browsers don't use SRV records, so this won't work.



https://stackoverflow.com/questions/9063378/why-do-browsers-not-use-srv-records



And that's why it's not listed as an answer to the problem.


Wednesday, June 22, 2016

apache 2.4 - apache2 mod-php cpu 100% on process

I have a VPS Debian server with Apache 2.4.10 and mod-php. The server starts normally, but after some time I get 100% CPU on one of the www-data processes and the web server becomes unavailable.




I tried strace on that process and I got an infinite loop of these lines:



poll([{fd=93, events=POLLIN}], 1, 3000) = 1 ([{fd=93, revents=POLLHUP}])
read(93, "", 13160)


Then I tried lsof and got this:



COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF       NODE NAME

apache2 1134 www-data 93r FIFO 0,8 0t0 3176528027 pipe


What could be causing the problem? When I restart Apache, the same behaviour returns after some time.

dell - PERC H310 Raid 5 - Fault - Turn array into RAID 0

Yesterday one of the three RAID 5 disks configured on my Dell server's PERC H310 controller could no longer be found.




I cannot replace the faulty drive right now, so what I would actually like to do is turn the array into a RAID 0 for the time being.



Is it possible to do this without loss of data? If yes, how?



Thank you so much.






As said in the comments, I think what happened is that previously one of the three drives died. Then a second one went foreign due to some error (and now my data is not exposed).




The question is: What happens if I import the configuration of the foreign disk? Will it get together with the one that is ready and expose the data for me, so that I can get them back?



Thanks a lot!

Tuesday, June 21, 2016

My SSL configuration aren't working. Ubuntu apache 2.4

My SSL config isn't working now. I just moved a site from one host to another (I moved the cert files as well).



When I try to access the site I get an error (Chrome and Firefox tell me that the SSL protocol is invalid).



Solved: I had simply forgotten to enable the site :( Sorry, and thanks for the help with the SSL config.
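For reference, enabling a site and the SSL module on Ubuntu looks roughly like this (the site name is a placeholder for whatever the file in sites-available is called):

sudo a2enmod ssl
sudo a2ensite my-site-ssl
sudo service apache2 reload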



I see this in my Apache log (I think this is from when I access it using my http route):



[09/Apr/2016:16:54:05 +0000] "GET /homepage/ HTTP/1.1" 302 560 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36"



Here is my SSL virtualhost file in sites-enabled (I replaced the site name with my-site):



Edited after the responses:
  • Commented out the SSLCertificateChainFile line.




  • ErrorLog from ssl virtualhost changed to /home/my-site/logs/my-site.com-error-ssl.log


  • LogLevel setted to debug


  • CustomLog from ssl virtualhost changed to /home/my-site/logs/my-site.com.com-access-ssl.log combined





The new log files are missing; I really can't see any SSL error. I also verified that the SSL module is enabled.



Testing in a normal and in a private session in Chrome gives the same result when accessing the site.



NON-SSL




    
ServerAdmin webmaster@my-site.com
ServerName my-site.com
ServerAlias www.my-site.com
DocumentRoot /home/my-site/www/my-site.com/current/public/

Options FollowSymLinks
AllowOverride None



Options Indexes FollowSymLinks MultiViews
AllowOverride All
Require all granted

ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/

AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Require all granted


ErrorLog /home/my-site/logs/my-site.com-error.log
LogLevel warn
CustomLog /home/my-site/logs/my-site.com.com-access.log combined
Alias /doc/ "/usr/share/doc/"

Options Indexes MultiViews FollowSymLinks
AllowOverride None
Require all denied
Allow from 127.0.0.0/255.0.0.0 ::1/128





SSL







ServerAdmin webmaster@my-site.com

ServerName my-site.com
ServerAlias www.my-site.com
DocumentRoot /home/my-site/www/my-site.com/current/public



    
Allow from All
Require all granted
Options FollowSymLinks
AllowOverride All


ErrorLog /home/my-site/logs/my-site.com-error-ssl.log
LogLevel debug
CustomLog /home/my-site/logs/my-site.com.com-access-ssl.log combined
SSLEngine on
SSLCertificateFile /home/my-site/www/my-site.com/current/ssl/www.my-site.com.crt
SSLCertificateKeyFile /home/my-site/www/my-site.com/current/ssl/www.my-site.com.key
#SSLCertificateChainFile /home/my-site/www/my-site.com/current/ssl/my-site.com.crt
SSLCACertificateFile /home/my-site/www/my-site.com/current/ssl/my-site.com.crt
SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire


SSLOptions +StdEnvVars


SSLOptions +StdEnvVars

BrowserMatch "MSIE [2-6]" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
# MSIE 7 and newer should be able to use keepalive
BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

nginx reverse proxy and rewrite rule

This question is actually quite close to my requirement, where the Nginx configuration rewrites the URL to the captured $1.



Reproduced here:




location  /foo {
rewrite /foo/(.*) /$1 break;
proxy_pass http://localhost:3200;
proxy_redirect off;
proxy_set_header Host $host;
}


Whereas in my case the original URL may have any number of levels of nesting and query parameters. My requirement is to maintain those levels and prepend a level.




Examples:



Original URL: https://apis.demo.com/books/12414



Desired URL: http://localhost:3000/prepend/books/12414



Original URL: https://apis.demo.com/books/12414?find=meta



Desired URL: http://localhost:3000/prepend/books/12414?find=meta




Original URL: https://apis.demo.com/library/LIB001/books/12414



Desired URL: http://localhost:3000/prepend/library/LIB001/books/12414



Original URL: https://apis.demo.com/library/LIB001/books/12414/history



Desired URL: http://localhost:3000/prepend/library/LIB001/books/12414/history



How do we achieve this?
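One untested sketch that should fit the examples above, relying on how proxy_pass handles a URI part (the port and the /prepend path are taken from the examples):

location / {
proxy_pass http://localhost:3000/prepend/;
proxy_set_header Host $host;
}

Because the matched location prefix / is replaced by /prepend/ before the request is passed upstream, /library/LIB001/books/12414/history becomes /prepend/library/LIB001/books/12414/history, and the query string is carried over unchanged.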

security - How to detect Bios Rootkits on a server mainboard?



I recently read about a talk by Corey Kallenberg and Xeno Kovah given at the CanSecWest conference which describes how the firmware of a server mainboard can be reprogrammed to include malicious software. This has left me really worried! I'm now looking for a method to make sure that a given piece of hardware has not been tampered with in this way. How can I do this?


Answer



Well, the obvious answer would be to compare the BIOS you have with the BIOS released by the manufacturer... of course, that only works if the BIOS released by your manufacturer doesn't contain a rootkit to begin with.



Failing that, you're left with a topic you could literally write several books on... or parlay into millions of dollars worth of IT security consulting, so it's a subject that's much too broad to cover here. But it's not all that different from detecting any other rootkit - you examine logs and memory contents at a low level and look for evidence of the system doing something it shouldn't be doing. John Heasman did an interesting talk on ACPI BIOS rootkits at Blackhat Europe in 2006, which seems relevant here. (PDF)




The bottom line, though, is that this is still a technically advanced and relatively rare type of malware that's used against high value targets, which probably doesn't include you. If you actually do have reason to be worried about being targeted by this kind of attack, you need to hire some dedicated security resources and be directing your questions about BIOS malware at them. And remember, security is a type of insurance. There's no sense in buying a $10,000 wall safe to protect a stack of 1 dollar bills, just like there's no point in spending hundreds of thousands of dollars on a security team unless the data you're protecting is very valuable.



The Information Security site is probably better suited to any further queries you have on the topic, and there are a number of existing questions and answers about BIOS malware already that might be of some interest to you.


Setting up IIS reverse proxy to preserve host headers

I have an IIS server that is hosting a number of sites and APIs. These sites include Confluence and Jira instances. These products actually run their own web servers, so the Application Request Routing and URL Rewrite modules are being used to reverse proxy incoming requests for documents.example.com and jira.example.com to localhost:8080 and localhost:8090, where the Confluence and Jira instances are running.




Now I am trying to set up a reverse proxy to a small simple-storage-server (S3) API (Minio) that is hosted on localhost:9000, but the S3 protocol requires that the host header is part of its Message Authentication Codes.



However, when Application Request Routing reroutes a request following a URL Rewrite rule it also rewrites the host header to reflect the new destination header.



This can be disabled by setting preserveHostHeader in the system.webServer/proxy section, but only in ApplicationHost.config, as ARR runs at the server level, not the site level.



So now I have a conundrum:



If I set this setting, then the REST APIs that use the host header in their MAC can function, but Confluence and Jira cannot, as their supported reverse proxy configuration expects rewritten host headers.




For reference, this enables host headers to be preserved:



%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy -preserveHostHeader:true /commit:apphost

Sunday, June 19, 2016

linux - Apache - High Availability



I'm looking for a way to set up Apache for high availability. The idea is to have a cluster of 2+ Apache servers serving the same websites. I can have the IP address of each server set up with round-robin DNS so that each request is randomly sent to one of the servers in the cluster (I'm not too concerned with load balancing just yet, though that may come into play later on).




I already have it set up and working with multiple Apache VM servers (spread across multiple physical servers) serving websites, and round-robin DNS, and this works fine. The SQL database is set up using MariaDB in a high-availability cluster, the web data (HTML, JS, PHP scripts, images, other assets) are stored within LizardFS, and the sessions are stored in a shared location as well. This all works well until one of the servers in the cluster becomes inaccessible for whatever reason. Then a percentage of the requests (roughly the number of downed servers divided by the number of total servers in the cluster) are unanswered. Here are the options I've considered:



Automatic DNS Updates



Have some process that monitors the functionality of the web servers, and removes any downed servers from DNS. This has two issues:




  • First, even though we can set our TTL to some very low number (like 5
    seconds), I've heard that a handful of DNS servers will enforce a
    minimum TTL higher than ours. And, some browsers (namely Chrome)
    will cache DNS for no less than 60 seconds regardless of TTL
    settings. So even though we're good on our end, some clients may not
    be able to reach sites for some time in the event of a DNS update.


  • Second, the program that monitors the functionality of the cluster
    and updates DNS records becomes a new single point of failure. We
    may be able to get around this by having more than one monitor spread
    across multiple systems, because if they both detect a problem and
    they both make the same DNS changes, then that shouldn't cause any
    issues.
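For what it's worth, the "remove a downed server" step such a monitor would perform can be scripted as a dynamic DNS update, assuming the zone allows them; a rough sketch with placeholder server names, key file and address:

nsupdate -k /etc/monitor.key <<'EOF'
server ns1.example.com
update delete www.example.com. A 203.0.113.12
send
EOF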



uCarp/Heartbeat




Make the IP addresses that are accessed and in round-robin DNS virtual, and have them reassigned from down servers to up servers in the case that a server goes down. For instance, server1's VIP is 192.168.0.101 and server2's VIP is 192.168.0.102. If server1 goes down, then 192.168.0.101 becomes an additional IP on server2. This has two issues:




  • First, to my knowledge, uCarp/Heartbeat monitors their peers
    specifically for inaccessibility, for instance, if the peer can't be
    pinged. When that happens, it takes over the IP of the downed peer.
    This is an issue because there are more reasons a web server may not
    be able to serve requests other than just being inaccessible on the
    network. Apache may have crashed, a config error may exist, or some
    other reason. I would want the criteria to be "the server isn't
    serving pages as required" rather than "the server isn't pingable".
    I don't think I can define that in uCarp/Heartbeat.


  • Second, this doesn't work across data centers, because each set of
    servers across data centers has different blocks of IP addresses. I
    can't have a virtual IP float between data centers. The requirement
    to function across data centers (yes, my distributed file system and
    database cluster are available across data centers) isn't required,
    but it would be a nice plus.





Question



So, any thoughts on how to deal with this? Basically, the holy grail of high availability: No single points of failures (either in the server, load balancer, or the data center), and virtually no downtime in the event of a switch over.


Answer



When I want HA and load sharing, I use keepalived and configure it with two VIPs. By default, VIP1 is assigned to server1 and VIP2 is assigned to server2. When any server is down, the other server takes both VIPs.



Keepalived will take care of HA by watching the other server. If a server is not reachable or any interface is down, it changes to the FAULT state and its VIP is taken over by the other server. To monitor your service itself (rather than just the host), you can use the track_script option, as sketched below.
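A minimal sketch of such a configuration for server1, using the VIPs from the question (the interface name, router IDs, priorities and health-check command are assumptions; server2 mirrors this with the MASTER/BACKUP roles and priorities swapped):

vrrp_script chk_apache {
    script "/usr/bin/curl -sf http://localhost/ -o /dev/null"   # fails if Apache stops serving pages
    interval 2
    fall 2
    rise 2
}

vrrp_instance VIP1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150                 # higher than server2, so server1 normally owns VIP1
    virtual_ipaddress {
        192.168.0.101/24
    }
    track_script {
        chk_apache
    }
}

vrrp_instance VIP2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100                 # lower than server2, so server2 normally owns VIP2
    virtual_ipaddress {
        192.168.0.102/24
    }
    track_script {
        chk_apache
    }
}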



If you want to add another cluster in another data center, you can add two more servers and do the same configuration. Now, you can load-share traffic between data centers using DNS round-robin. No DNS update is required in this case.



Saturday, June 18, 2016

virtualization - What is the easiest Linux containers solution?



I've opted for 'virtualizing' some software using a containers solution.



However I lack experience with this, and was wondering if anyone could vouch for either V-Server or OpenVZ ?



I'm mostly concerned with ease of use during setup and maintenance, since feature-wise they seem to be on par with each other.


Answer



I have used both in production environments. While VServer uses 100% of the host operating system's resources, OpenVZ resource management is very fine-grained: memory, CPU consumption, quotas (two levels, per container and per user/group inside a container), ipfilter entries, etc. OVZ also supports soft and hard limits: you might have a memory limit (soft) of 512MB but also a hard limit of 768MB. Your container may use up to 512MB, but if more is needed, it can take up to 768MB.




If you are planning to use all of your machine, Linux-VServer is your solution, due to its simple configuration and zero resource checking: if any container gets too small for its processes, it simply scales along with all your containers.



Now, if you want full control, OpenVZ is the path to follow. But be careful: you should check whether any container is running short on resources and assign more to it. I use OpenVZ for many things, for example one NS server with 256MB of RAM and 5GB of disk space.
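For illustration, limits like the ones mentioned above might be set with vzctl along these lines (the container ID is arbitrary; privvmpages is counted in 4KB pages, so 131072:196608 corresponds to a 512MB soft / 768MB hard limit, and the G suffix for diskspace needs a reasonably recent vzctl):

vzctl set 101 --privvmpages 131072:196608 --save
vzctl set 101 --diskspace 5G:5.5G --save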



You should check both and finally choose the one that best fits your requirements.


performance - How much resources does vmstat really use?




We have a server running Tru64 Unix, which is our main production server for a single application our organisation uses. The software vendor has complete control of the hardware and software (we still administer the software, but have no root access).



However, the vendor has allowed us to run vmstat, which will produce output every 15 seconds for 10 intervals and then exit.



I was going to set up an automated process that would run vmstat and log the output. I thought this would be quite useful information, especially considering we have had performance issues lately.
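For example, a cron entry of roughly this shape would capture the permitted 10-sample runs and append them, timestamped, to a log (the schedule and log path are placeholders):

0,10,20,30,40,50 * * * * (date; vmstat 15 10) >> /var/log/vmstat.log 2>&1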



Management have told me that I cannot do this, as vmstat chews up a lot of resources and will slow the system down if it is constantly running.



Can anyone tell me if this is actually true?



Answer



You can continually run vmstat without fear of chewing up your resources.



vmstat outputs all its performance statistics in text form that is printed to standard output, nothing more. The overhead is incredibly small. As a test I ran vmstat on two different servers and in both cases it required approximately:




  • 456k to 485k usage







Additional superfluous information



On both servers I ran it at 1 second intervals for 50 intervals and it averaged




  • 485k

  • about 0.03% of the overall system CPU during that time period




I then ran it at 1 second intervals for 500 intervals and it averaged (1GB Ram - Intel(R) Xeon(TM) CPU 3.00GHz)




  • 485k

  • 0.38% of the overall system CPU during that time period



And I ran it at 1 second intervals for 500 intervals and it averaged (12GB Ram - Quad Core Intel(R) Xeon(R) CPU 5130 @ 2.00GHz)





  • 485k

  • 0.26% of the overall system CPU during that time period



Note: One server was a high performance server, the other an email server. Both functioned with barely a thought to vmstat running on the terminal. It'll take your server more resources to find out how much load vmstat creates than actually running vmstat.


Google Cloud DNS - add a CNAME to another domain?



I can't add a CNAME to another domain - why and how can I fix?



My zone name is myapp.com. To add automated email security regarding spam, SendGrid (the provider) wants me to add 3 CNAME records of the form:



 - mail.myapp.com             a123.b456.sendgrid.net
 - s1._domainkey.myapp.com    s1.domainkey.a123.b456.sendgrid.net
 - s2._domainkey.myapp.com    s2.domainkey.a123.b456.sendgrid.net



I can't do this. There is no error, the create button just does not work. I can however create CNAME records to my own domain.



Am I breaking DNS rules, or is this a Google Cloud limitation?



EDIT: Screen shot added.



Redacted parts are all myapp.com except the last two which is the SendGrid code. "Respond" is my subdomain. The existing CNAME is just a test to show I can add CNAMEs.



screenshot



Answer



Ahh, I see the problem.



MX and CNAME records of the same name cannot exist within the same zone (a CNAME cannot share its name with any other record type). You'll need to rename one of them.


Apache multiple virtual hosts with ssl certificates












I have a problem with Apache and multiple SSL certificates. If I configure it for only one domain, everything works fine, but when I add another one as a virtual host it returns an error:



VirtualHost domain1.cz:443 overlaps with VirtualHost domain2.sk:443, the first has precedence, perhaps you need a NameVirtualHost directive
[Wed Nov 07 16:14:49 2012] [warn] NameVirtualHost *:443 has no VirtualHosts


I tried many combinations of virtual host configuration methods, but the results are still very similar - the first domain is correctly secured and the second (domain2.sk) receives the certificate from the first one.



Please, can you help me with this kind of certificate configuration?




NameVirtualHost *:443

<VirtualHost domain1.cz:443>
    ServerName domain1.cz
    DocumentRoot /var/www/www.domain1.cz/htdocs/

    SSLEngine on
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM

    SSLCertificateFile /etc/apache2/ssl/domain1.cz/ssl.crt
    SSLCertificateKeyFile /etc/apache2/ssl/domain1.cz/ssl.key
    SSLCertificateChainFile /etc/apache2/ssl/sub.class1.server.ca.pem
    SSLCACertificateFile /etc/apache2/ssl/ca.pem

    SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    CustomLog /var/www/www.domain1.cz/logs/ssl-access.log \
        "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

    LogLevel warn
    ErrorLog /var/www/www.domain1.cz/logs/ssl-error.log
</VirtualHost>

<VirtualHost domain2.sk:443>
    ServerName domain2.sk
    DocumentRoot /var/www/www.domain2.sk/htdocs/

    SSLEngine on
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM

    SSLCertificateFile /etc/apache2/ssl/domain2.sk/ssl.crt
    SSLCertificateKeyFile /etc/apache2/ssl/domain2.sk/ssl.key
    SSLCertificateChainFile /etc/apache2/ssl/sub.class1.server.ca.pem
    SSLCACertificateFile /etc/apache2/ssl/ca.pem

    SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    CustomLog /var/www/www.domain2.sk/logs/ssl-access.log \
        "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

    LogLevel warn
    ErrorLog /var/www/www.domain2.sk/logs/ssl-error.log
</VirtualHost>


Answer



Each IP address/port can only serve one SSL certificate. In order to get more than one SSL certificate to work, you'll either need another IP address (recommended) or bind the second SSL certificate to another port on your IP (functional, but a pain for your site visitors because the port has to be included in the URL). Check with your host; most of them make additional IPs available affordably.
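If you did go the alternate-port route, the second vhost would look roughly like this, reusing the paths from the question (the port number is arbitrary, and visitors would then have to browse to https://domain2.sk:8443/):

Listen 8443

<VirtualHost *:8443>
    ServerName domain2.sk
    DocumentRoot /var/www/www.domain2.sk/htdocs/

    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/domain2.sk/ssl.crt
    SSLCertificateKeyFile /etc/apache2/ssl/domain2.sk/ssl.key
    SSLCertificateChainFile /etc/apache2/ssl/sub.class1.server.ca.pem
</VirtualHost>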



This thread has more info.



Edit: I can't grammar.



Friday, June 17, 2016

VMware Virtual vCenter and High Availability




To continue with this question:
Should be Vmware vCenter server high available?



According to the response there even if vCenter is down HA will continue to work.



So, if my vCenter is a VM, using the express sql edition in the same VM, and that VM is hosted in the same cluster it manages (and the cluster is setup for HA): Am I correct to assume that if the host that hosts the vCenter goes down HA will vmotion the vCenter VM to another host and it will continue to function?



BTW: my environment is small, two ESXi 5.0 hosts, with about 50 VMs, using iSCSI shared storaged for everything.


Answer




The vCenter VM won't be migrated to the remaining host, it will be restarted on the remaining host, so no vMotion will occur. It's also dependent on the VM Restart Priority. If the VM Restart Priority for the vCenter VM is disabled then it won't be restarted on the remaining host. If the VM Restart Priority is set to anything but disabled then the vCenter VM will/should be restarted on the remaining host. Note that the VM Restart Priority is dependent upon the available resources on the remaining host and the Admission Control setting (with or without DRS), so you want to set the vCenter VM restart priority to high to ensure that it is started on the remaining host.



Also note that a vMotion doesn't occur for the VMs on a failed host. VMware HA restarts the VMs on the remaining host, it does not migrate them (with vMotion) to the remaining host.



Have a read here for more information:



http://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.availability.doc_41/c_useha_works.html


domain name system - DNSMasq is slower than my ISP at returning cached DNS entries.

I have DNSMasq set up on a relatively idle Pentium D 3.4GHz Debian Linux machine. When I run dig queries locally, the second result is always 0 ms. When I run dig queries from any other machine on my network, the cached response time is a constant 35ms. This is in spite of LAN ping times of under 1ms.
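For anyone reproducing the measurement, queries along these lines show the cached response time in the "Query time" line that dig prints (the addresses and name are illustrative):

dig @192.168.1.1 www.example.com     # against the DNSMasq box; a second run should be served from cache
dig @8.8.8.8 www.example.com         # against an upstream resolver, for comparison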



Using DNS Benchmark, I ran a test that shows I can hit my ISP's DNS servers faster than my own for cached queries.




How am I accruing 35ms on cached DNS responses for remote queries but <1ms for local queries at the server's command prompt?

Thursday, June 16, 2016

domain name system - dns - BIND - how to return a different IP based on request's subnet




We have an intranet DNS server (system-config-bind on RHEL) serving office A, and a VPN connecting offices A and B. Office A has a server named "dev".



In office A, to access a server "dev" on the local network, the address is 192.168.1.13



In office B, to access a server "dev" over the VPN, the address is 192.168.2.13



My question is this - can I set the DNS server to return a different IP for "dev" based on the subnet of the incoming request?



Example:

In office A, BIND returns 192.168.1.13 as the "dev" IP, because the originating request is from the 192.168.1/24 subnet.



In office B, BIND returns 192.168.2.13 as the "dev" IP, because the originating request is from the 192.168.2/24 subnet.


Answer



You need to use views:



view "officeA" {
match-clients { 192.168.1.0/24; };

include "/etc/named.conf.zones-rfc1912";

include "/etc/named.conf.zones-common";
include "/etc/named.conf.zones-officeA";
};

view "officeB" {
match-clients { 192.168.2.0/24; };

include "/etc/named.conf.zones-rfc1912";
include "/etc/named.conf.zones-common";
include "/etc/named.conf.zones-officeB";

};
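The per-view zone include files then carry different data for the same name; a minimal sketch of the relevant records (the zone itself and the file layout are assumptions that follow the includes above):

; zone data included from named.conf.zones-officeA
dev     IN      A       192.168.1.13

; zone data included from named.conf.zones-officeB
dev     IN      A       192.168.2.13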

linux - I just did a chmod -x chmod



So I did a chmod -x chmod. How can I fix this problem? How do I give execute rights back to chmod?


Answer



In Linux:



/lib/ld-linux.so.2 /bin/chmod +x /bin/chmod
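On 64-bit systems the dynamic loader lives at a different path, so the equivalent would most likely be (the exact path varies by distribution):

/lib64/ld-linux-x86-64.so.2 /bin/chmod +x /bin/chmod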



http://www.slideshare.net/cog/chmod-x-chmod


apache 2.2 - what chmod and owner:group settings are best for a web application?



We are configuring a PHP web application on CentOS and have all our files currently in /var/www/html/project/



Apache is configured to run as apache:apache and has access to the directory above. Right now our files and directories have the following rights:




owner = root
group = apache



DIRECTORIES:
drwxr-x--- root apache



FILES:
-rw-r----- root apache



Is this a safe setup? Or is it better to use a new user e.g. "project" to be the owner of all files and directories?



Answer



It's a best practice to have the owner be whatever limited user account is used for uploading/managing the files on the server. The group is often the account that php is running under, so in this case apache would be correct. The other permissions should be set to nothing, as they are. You are close to perfect.



If you have a situation where multiple accounts may be modifying/editing the files, you can create a cron script that chowns the directory recursively every hour or so to maintain correct ownership. The same technique works to keep the permissions correct as well.
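A rough sketch of such a script, assuming a deployment user named "project" and the directory layout from the question (the paths, account names and schedule are placeholders):

#!/bin/sh
# fix-project-perms.sh - restore the ownership and permission scheme described above
chown -R project:apache /var/www/html/project
find /var/www/html/project -type d -exec chmod 750 {} \;
find /var/www/html/project -type f -exec chmod 640 {} \;

It could then be run hourly from cron with an entry like 0 * * * * /usr/local/sbin/fix-project-perms.sh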



Also, you may want to modify the umask of the limited user account that has ownership to be in line with your permission scheme.


apache 2.2 - Made a mess of privileges for /var/www

I've got an installation of Ubuntu 10.10 running in VirtualBox which I'm going to use for some local development. I've installed PHP, Apache and MySQL and want to use vsftpd to access /var/www so I can develop on my Windows installation (from which VirtualBox is running) and FTP the files over.



I was originally getting an error saying that access was denied when I was transferring files over FTP to /var/www so I figured some chmod tweaking was needed. I'm no expert so did some reading beforehand and executed the following:



sudo chmod -R 777 /var/www
sudo chown james:james /var/www


I can now FTP files over, but when loading up newly transferred files in the web browser, I receive a permission denied error. The new files don't have the 777 permission which I set - surely you don't need to use chmod every time you transfer something new over?




This is simple stuff I'm getting stuck with so I just know there's going to be permission problems with PHP and MySQL accessing things in the future so I could really use some help! If anyone would be so kind as to suggest some privileges I can use, I would be most grateful. Security isn't a concern as this is all local and I just want to get it up and running ASAP!



Probably would have been better off installing XAMPP on my Windows installation but I wanted to keep it separate and learn a thing or two along the way to getting this set up!



Here's the output from ls -l /var/www



-rw------- 1 james www-data 3458 2011-03-31 00:36 g.jpg
-rwxrwxrwx 1 james www-data 177 2011-03-27 23:16 index.html
-rwxrwxrwx 1 james www-data 21 2011-03-28 01:18 test.php



index.html and test.php were in /var/www before I executed the chmod command and g.jpg was FTP'ed over after I messed around with the privileges. I've run chown james:www-data, but that hasn't helped with the Apache access problem.

Wednesday, June 15, 2016

networking - how to stop eth0 bridged connection via eth1 host only connection in a virtual machines?

I have to simulate an unplugged network cable, for testing purposes, for all the applications we are developing in my company.
I have about 6 CentOS virtual machines on VirtualBox.
From a PHP web page, I have to choose a server, stop its network, and then start it again.
Of course I'm using SSH for the remote connections to the other servers, and if I stop eth0 (the main network) on a server, I won't be able to reach it again over SSH, so I had to find another way to do this.
I made another network connection (host-only) between the servers via VirtualBox with the help of this tutorial, then I logged into one of the servers to configure an IP for this new network with these two commands:




 sudo ifconfig eth1 192.168.1.101 up 


also



sudo ifconfig eth1 inet 192.168.1.101 broadcast 192.168.1.255 netmask 255.255.255.0 up   


but when I try to SSH to it via PHP:




exec('ssh root@192.168.1.101 2>&1; ',$output);


I get this output :



ssh: connect to host 192.168.1.101 port 22: No route to host


I don't know what I have missed.




Edit : This is what I get when I run route



$ route -n                                                                                                        
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
0.0.0.0 192.168.0.204 0.0.0.0 UG 0 0 0 eth0

storage - What is the difference between SAN, NAS and DAS?

What is the difference between SAN, NAS and DAS?

linux - RAID hard drive configurations




I'm building a linux traffic shaping routing box for a leased line connection. This box has to be reliable, so I'm planning on installing the operating system and data files on a RAID 1 configuration. My question is twofold:




  1. I was going to use linux software raid. I have heard that hardware RAID really only gives any significant benefits once you shell out for a good quality RAID card - with a battery backup at least. I'm not too concerned with speed - I won't be using the disks much at all, but I am concerned with data recovery in case of an accident. Is shelling out for a hardware raid card worth the price? If so, can anyone recommend a card to use?


  2. I've been in the situation before where hard disks bought at the same time failed at roughly the same time as well - i.e., within a week of each other. In order to avoid this, I'd like to populate my RAID array with disks from different manufacturers. As long as the disk sizes, speeds, and cache sizes are the same, can anyone see a problem with this?




Cheers,


Answer



1) Linux software RAID is very mature these days, and removing drives from one machine and placing them in another will work every time. With a hardware solution, you need to keep a spare card, because that particular chip's way of doing RAID may not be the same as another's, and you could otherwise lose your data. With modern CPUs, software RAID is safe to use and quick too - I'd trust it more than a hardware solution unless you've got the budget for a high-end RAID card. The benefit of those is that they have battery backup units which preserve data in the case of a power outage. Typically, though, you're not really going to be affected by power outages - the drives themselves tend to do caching as well, so you're going to lose some data anyway. Just do Linux software RAID. Or ZFS - it's very nice, VERY safe, and full of useful features, but a different paradigm.
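For what it's worth, creating the mirrored array with Linux software RAID is a one-liner (the device names are assumptions):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat        # watch the initial resync and check the array state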




2) That'll be fine. As long as they're within about 1% of each other, you'll just get a RAID set of the smallest drive size. I do the same - I tend to stick with the same manufacturer, but get different build sets.



Remember that RAID is not a backup, either.


networking - Multiple Websites on Multiple Servers behind One Public IP

I’m currently working on a project to bring two of our hosted servers (one email, one web) in-house to run alongside our other web server. Hosting one web server is fairly straightforward, but I need help with how I can divert the traffic to the correct server once it reaches our network.



I’ve read that a proxy server will do this. I’ll be using IIS 7.5 with Application Request Routing and URL Rewrite. I have been trying different methods but I haven’t had any success so I must be missing something.



Would I use URL Rewrite to change the external web address to an internal IP address for the correct server, and then a different port for each individual website on that server? Or is it possible to say that the URL example.com goes to this internal IP, and the URL company.com goes to that IP?
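For what it's worth, URL Rewrite can route on the Host header rather than on ports, with ARR doing the proxying; a rule of roughly this shape (the host pattern and internal IP are placeholders) would send company.com traffic to one internal server:

<rule name="Route company.com" stopProcessing="true">
    <match url="(.*)" />
    <conditions>
        <add input="{HTTP_HOST}" pattern="^(www\.)?company\.com$" />
    </conditions>
    <action type="Rewrite" url="http://192.168.1.11/{R:1}" />
</rule>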






The two external servers are running Plesk 10 and 11. They host multiple websites and the emails related to those websites. Two examples:-
example.com – info@example.com
company.com – info@company.com



We currently host one web server in-house at IP 192.168.1.5 running IIS 6. This server contains the sub-domain websites for company.com:-
one.company.com
two.company.com



Our router forwards incoming traffic on port 80 to 192.168.1.5






We will have the router forward incoming traffic on port 80 to IP.6
The server at IP.6 will be running IIS 7.5 with Application Request Routing and URL Rewrite. I want this server to direct the traffic to either one of the following servers.



A server at IP.5 to replace our current internal web server will be running IIS 8 and hosting the sub-domain websites for company.com:-
one.company.com
two.company.com



A server at IP.11 to replace our current external web server will be running Plesk 12 hosting many websites including:-
example.com
company.com



As I mentioned in the brief, we will be transferring our email server as well. This will be located at IP.10 running Plesk 12. Am I correct that I will also need the router to forward incoming traffic on ports 25 and 143 to IP.10? This server will be hosting the webmail websites:-
mail.example.com
mail.company.com

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...