Monday, February 29, 2016

windows server 2008 - SSL certificate generated with openssl doesn't have certification root

I'm trying to set up a script to generate SSL certificates for use with IIS. I'm trying to get certificates signed by a self-signed CA cert to work. I'm 99% there but something is still wrong.
This is for use with MS Exchange SSL certs. I want to have long-life self-signed certificates and a root cert which I can install on devices like smartphones, which will allow me to trust other certs I have signed with it, like SSL certs.



This is what I'm doing:




/// Create a private root cert
openssl genrsa -des3 -out work\Private-CA.key 2048

openssl req -new -x509 -days 3650
    -key work\Private-CA.key
    -out work\Public-CA.CRT

/// Create an SSL cert request
openssl genrsa -des3 -out work\Certificate-Request.key 2048

openssl req -new
    -key work\Certificate-Request.key
    -out work\SigningRequest.csr

/// Sign the request with the root cert
openssl x509 -req -days 3650 -extensions v3_req
    -in work\SigningRequest.csr
    -CA work\Public-CA.CRT
    -CAkey work\Private-CA.key
    -CAcreateserial
    -out work\SSL-Cert-signed-by-Public-CA.CRT


The first 4 commands seem to be fine. The final command is generating a certificate which has the attributes I want.
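A note from editing (an assumption on my part, not something verified against this setup): openssl x509 -req only applies -extensions when an -extfile is also supplied, so the v3_req section may be silently ignored in that final command. Passing an explicit extensions file, along these lines with a hypothetical work\v3.cnf, makes the intent unambiguous:

/// work\v3.cnf - hypothetical extensions file
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

/// Sign the request using those extensions
openssl x509 -req -days 3650
    -in work\SigningRequest.csr
    -CA work\Public-CA.CRT
    -CAkey work\Private-CA.key
    -CAcreateserial
    -extfile work\v3.cnf -extensions v3_req
    -out work\SSL-Cert-signed-by-Public-CA.CRT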



I import the Public-CA.CRT into the machine store as a trusted root certificate. I then use Exchange's Import-ExchangeCertificate cmdlet to try and import SSL-Cert-signed-by-Public-CA.CRT. This fails with a message saying that the certificate is not trusted.



It would appear it is not being signed. If I import the SSL cert into the machine's Personal store, it also indicates that it doesn't have a certification root.




Can anyone with a better knowledge of this see what I'm missing?



As an aside: Is there any way, from the command line, of asking openssl if Certificate X has been signed by Certificate Y?
This should work but doesn't:



openssl verify  -cafile Public-CA.CRT SSL-Cert-signed-by-Public-CA.CRT
usage: verify [-verbose] [-CApath path] [-CAfile file] [-purpose purpose] [-crl_check] [-engine e] cert1 cert2 ...
recognized usages:
sslclient SSL client
sslserver SSL server

nssslserver Netscape SSL server
smimesign S/MIME signing
smimeencrypt S/MIME encryption
crlsign CRL signing
any Any Purpose
ocsphelper OCSP helper


adding -purpose doesn't make matters any better.
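A small editorial observation on that last attempt (untested against this particular build, so take it as a hint): the usage message suggests the option simply wasn't recognized, and verify's CA-file switch is documented with capitals as -CAfile, so the following spelling may behave differently:

openssl verify -CAfile Public-CA.CRT SSL-Cert-signed-by-Public-CA.CRT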

nginx - what is causing wordpress file empty error on uploads?



I can't seem to figure out why I get the following when I try to upload anything in WordPress, for both media uploads and the WordPress XML import:




"Sorry, there has been an error.
File is empty. Please upload something more substantial. This error could also be >caused by uploads being disabled in your php.ini or by post_max_size being >defined as smaller than upload_max_filesize in php.ini."





There seems to be very little online about troubleshooting this for nginx/php-fpm; most of what exists is about php.ini max settings or chmod. In my setup, post_max_size is large enough, as is upload_max_filesize in /etc/php5/fpm/php.ini (as are the timeouts), and chmod/chown seems correct for using separate PHP pools. Maybe someone can make heads or tails of this?



Here's my setup:




  • Cloudflare (is off) to Floating IP

  • Digitalocean Floating IP to droplet

  • Droplet is Ubuntu 14.04 with nginx using php-fpm with pools created for each wordpress ms installation (x4 atm)


  • SSL with Let's encrypt used for each wpms installation

  • Chmod 755 for all wpms directories in site roots

  • Chmod 644 for all wpms files in each site root

  • Chmod 660 for wp-config.php

  • Chown each php5-fpm pool user on all files/directories within their own site root
    eg: chown -R example1:example1 /home/example1/*

  • Wordpress is one directory below their nginx conf roots. eg /home/example1/app/wordpress_files_here

  • php.ini has uploads enabled with directory defined (/home/tmp/)




The users are NOT in the www-data group nor the sudo group;
I read that doing so is a security risk, but even so I temporarily tried adding them to the www-data group to see if the WordPress uploading would work... it didn't.
I've also tried chown example1:www-data ownership as well; it didn't work.
I've also tried chmod 777 on the uploads folder; it didn't work.



Error logs have the following:
in wpms-error.log (this also doesn't make sense to me)



2016/05/20 01:12:00 [crit] 1584#0: *1251 open() "/home/example1/example1.com-access.log" failed (13: Permission denied) while logging request, client: [my IP address], server: example1.com, request: "POST /wp-admin/admin-ajax.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm-example1.sock", host: "example1.com", referrer: "https://example1.com/wp-admin/admin.php?import=wordpress&step=1&_wpnonce=dad4d82487"



in these sites' nginx conf files I have:
access_log /home/example1/$host-access.log;



access logs are enabled in nginx.conf (even though not recommended), but the access logs for each site are not being written to their site roots.
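For what it's worth, the [crit] open() ... Permission denied entry quoted above means the nginx worker (typically running as www-data) cannot create that access log inside /home/example1. A quick check along these lines, using the names from the examples above, would confirm it (the commands are illustrative, not from the original post):

# which user the nginx workers run as
grep -E '^\s*user' /etc/nginx/nginx.conf

# can that user enter and write to the directory the access_log points at?
ls -ld /home/example1
sudo -u www-data touch /home/example1/write-test && echo writable || echo not-writable

# a low-impact alternative is to keep per-site logs under /var/log/nginx instead:
# access_log /var/log/nginx/example1.com-access.log;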



So... after trying everything I've read online, I've yet to find out what the underlying issue even is, because file permissions alone don't seem to be it... or do they?



The following (after changing usernames to example1 etc) is ps aux | grep php results:




root       993  0.0  0.2 266688 11396 ?        Ss   May18   0:10 php-fpm: master process (/etc/php5/fpm/php-fpm.conf)
example1 1003 0.0 1.1 302568 45856 ? S May18 0:32 php-fpm: pool example1
example1 1004 0.0 1.1 304620 47808 ? S May18 0:31 php-fpm: pool example1
example2 1005 0.0 1.1 304360 47648 ? S May18 0:30 php-fpm: pool example2
example2 1007 0.0 1.1 302308 45956 ? S May18 0:30 php-fpm: pool example2
example3 1008 0.0 0.1 268640 7704 ? S May18 0:00 php-fpm: pool example3
example3 1009 0.0 0.1 268640 7744 ? S May18 0:00 php-fpm: pool example3
www-data 1010 0.0 0.1 266680 7560 ? S May18 0:00 php-fpm: pool www
www-data 1011 0.0 0.1 266680 7564 ? S May18 0:00 php-fpm: pool www
example4 1013 0.0 0.9 296016 39704 ? S May18 1:24 php-fpm: pool example4

example4 1014 0.0 1.3 310952 55024 ? S May18 1:23 php-fpm: pool example4
example5 1015 0.0 1.0 297352 40940 ? S May18 0:32 php-fpm: pool example5
example5 1016 0.0 1.1 305104 48232 ? R May18 0:32 php-fpm: pool example5
example4 1105 0.0 0.9 296016 39596 ? S May18 1:20 php-fpm: pool example4
example1 1313 0.0 0.9 296284 39884 ? S May18 0:31 php-fpm: pool example1
example2 1317 0.0 1.1 304364 47628 ? S May18 0:29 php-fpm: pool example2
example5 1332 0.0 0.9 296880 39056 ? S May18 0:29 php-fpm: pool example5
example3 3727 0.0 0.0 11744 932 pts/1 S+ 18:42 0:00 grep --color=auto php



example3 above is not a wpms site; it's just an empty root at the moment, and that user is also in the sudo group and has its own SSH login. I don't know if that's relevant.


Answer



This sounds like a permissions issue to me, likely around groups. I go into detail in this tutorial, but the gist is below.



First, the script I use to reset permissions:



chown -R tim:www-data *
find /var/www/wordpress -type d -exec chmod 755 {} \;
find /var/www/wordpress -type f -exec chmod 644 {} \;
find /var/www/wordpress/wp-content/uploads -type f -exec chmod 664 {} \;

find /var/www/wordpress/wp-content/plugins -type f -exec chmod 664 {} \;
find /var/www/wordpress/wp-content/themes -type f -exec chmod 644 {} \;
chmod 440 /var/www/wordpress/wp-config.php
chmod -R g+s /var/www/wordpress/


Here are the main points from my tutorial. The information mostly came from the wordpress.org website; I have references in my tutorial.




  • set owner to the user you created earlier, and group ownership to www-data

  • Folder default is 755, standard

  • File default is owner writable, readable by everyone

  • Uploads folder writable by user and web server, so media can be uploaded by users

  • Plugins folder writable by user and web server, so plugins can be added

  • Themes folder only modifiable by owner
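Applied to the layout described in the question (which uses /home/<user>/app rather than /var/www/wordpress, so the paths and names below are assumptions), the same scheme for one site would look roughly like this:

# assumed: pool user example1, web server group www-data, WordPress in /home/example1/app
chown -R example1:www-data /home/example1/app
find /home/example1/app -type d -exec chmod 755 {} \;
find /home/example1/app -type f -exec chmod 644 {} \;
find /home/example1/app/wp-content/uploads -type f -exec chmod 664 {} \;
find /home/example1/app/wp-content/plugins -type f -exec chmod 664 {} \;
chmod 440 /home/example1/app/wp-config.php

# also worth confirming: the upload_tmp_dir from php.ini (/home/tmp/ in the question)
# must be writable by each pool user; an unwritable temp dir is one commonly reported
# cause of WordPress's "File is empty" message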


apache 2.2 - var/www permissions and FTP

There are many threads about this topic, but many are subjective since it's such a flexible subject, so I hope I'm asking an objective question relevant to my setup, which is: is this a sensible approach to the problem of Apache and FTP permissions?



Ubuntu 12.04 LTS 64 bit LAMP server running in Amazon's EC2; Apache2, MySQL, PHP.




Apache2 runs under www-data.



Apache2 default site disabled.
Site: mysitename.com enabled.



sites-available/mysitename.com is configured, with DocumentRoot and the Directory directive pointing to /mnt/ebs1/public_html



In order to FTP files to the above public_html directory and also be able to manage them from the terminal, I created two specific user accounts and a group:




remote-system-user
remote-ftp-user
group: ftp



remote-system-user is a member of the admin group, so it is able to perform sudo operations at the terminal, and is also in the ftp group. It was created with 'adduser' and has a home directory.



remote-ftp-user is only a member of the ftp group. It was created with 'useradd' and does not have a home directory.



I then set permissions on /mnt/ebs1/public_html like this:




sudo chgrp -R ftp /var/www
sudo chmod -R g+w /var/www



I use ProFTPD as an FTP server and, using its conf file, I jail the FTP user to /mnt/ebs1/public_html



Other than perhaps choosing a better name for the ftp group, since it's a bit illogical if the remote SSH user is also a member, what are people's comments on this setup?



The objective is to not give www-data full permissions to the public_html folder, but I will need site users to be able to upload files. I intend to create a folder within public_html which is writable by www-data to solve that issue.
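A minimal sketch of that last part, reusing the names from the question (public_html under /mnt/ebs1, the ftp group, and ProFTPD's DefaultRoot directive for the jail); treat it as illustrative rather than a tested recipe:

# a single web-writable folder inside the docroot
sudo mkdir /mnt/ebs1/public_html/uploads
sudo chown www-data:ftp /mnt/ebs1/public_html/uploads
sudo chmod 775 /mnt/ebs1/public_html/uploads

# /etc/proftpd/proftpd.conf - jail members of the ftp group to the docroot
# DefaultRoot /mnt/ebs1/public_html ftp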



Thank you

networked storage for a research group, 10-100 TB




this is related to this post:



Scalable (> 24 TB) NAS for research department



but perhaps a little more general.



Background:



We're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at a rate of about 10 TB/year.




Right now, we are storing the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building, and mirror the one unit onto the other. Obviously, this is somewhat non-ideal.



Our options:



1) We can pay our university $400/ TB /year for "backed up" online storage. We trust them more than we trust us, but not a whole lot.



2) We can continue to buy small NASs and mirror them between offices. One limit, although stupid, is that we don't have an unlimited number of ethernet jacks.



3) We can try to implement our own data storage solution, which is why I'm asking you guys.




One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale. So nothing can require expert oversight.



Our data transfer rates are low - most of the data will just sit on the server waiting for someone to look at it once or twice - so we don't need a really high speed system.



Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us?



As a second question, our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So, the same question, minus the low cost: without budget constraints, can you recommend a scalable, turnkey, backed-up storage system?



Thanks


Answer




There's an excellent and extremely detailed article on building NAS "pods" by a company that developed the system for its own use, at http://www.backblaze.com/petabytes-on-a-budget-how-to-build-cheap-cloud-storage.html . They describe it as "67 TB for $7,867", which is very good going. They run JFS on top of RAID-6 volumes under Debian; they then offer that via HTTPS, but there's nothing to stop you putting (e.g.) Samba in there instead (you don't say what your current remote-file-access protocol is).



Disclaimer: I know nothing about these people except what I have read, and I haven't tried to build one of these myself. Nevertheless, unless they have been faking photos, they really do build and deploy a bunch of these things, and they haven't yet gone out of business.



Edit: it took me a little longer to find the specific supplier list (the detailed parts list is in the original link above), but it's at http://blog.backblaze.com/2009/10/07/backblaze-storage-pod-vendors-tips-and-tricks/#more-199 . I really do admire the way these guys have thrown open their detailed infrastructure for reuse; but as they say in the original posting:




Finally, we thank the thousands of engineers who slaved away for millions of hours to bring us the pod components that are either inexpensive or totally free, such as the Intel Processor, Gigabit Ethernet, ridiculously dense hard drives, Linux, Tomcat, JFS, etc. We realize we’re standing on the shoulders of giants.




I don't know about their product (I have my own tape stacker for backups) but I approve of their humility.


linux - What's the difference between keepalived and corosync, others?



I'm building a failover firewall for a server cluster and started looking at the various options. I'm more familiar with CARP on FreeBSD, but I need to use Linux for this project.



Searching Google has produced several different projects, but no clear information about the features they provide. CARP gave virtual interfaces that fail over; I am not really clear on whether that's what Corosync does, or whether that's what Pacemaker does.




On the other hand, I did manage to get keepalived working. However, I noted that Corosync provides native support for InfiniBand. This would be useful for me.



Perhaps someone could shed some light on the differences between:




  1. corosync

  2. keepalived

  3. pacemaker

  4. heartbeat




Which product would be the best fit for router failover?



EDIT: So I worked out a little more...



Pacemaker is the bigger project, which can use either Corosync or Heartbeat.
It seems that Corosync and Heartbeat basically do the same thing, so you choose one or the other.



Heartbeat seems to be an older project but is still being worked on.




Keepalived, on the other hand, is an entirely different project and implements the VRRP protocol. It has fewer features than the others. It appears to still be widely used but is missing recent documentation.



Unfortunately, for firewall/router failover there are very few examples. Has anyone found some nice howtos? I've found one written in Spanish.


Answer



Here is the general rule of thumb I have used when deciding between keepalived and heartbeat.



Heartbeat is usually used when you want a true active/standby cluster setup (where only one node is actually "up.") Think NFS. Usually w/ Heartbeat the pre and post script actions are used to start and stop services.



Keepalived is much simpler, and is usually used for hot-standby usage (i.e. To keep a service up in a redundant fashion.)




A good usage example with keepalived that I have had success with is for redundant Nginx load balancers. In that situation, if a node fails, the "floating ip" moves over to the backup node.



Keepalived is simple, but it allows you to create your own check scripts (that would trigger a failover, etc.) Some info: https://tobrunet.ch/2013/07/keepalived-check-and-notify-scripts/



Which is best for you depends on your situation: keepalived is a good fit for router failover.
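As an illustration only (interface name, addresses and the check script are assumptions, not taken from the posts above), a minimal VRRP pair for router failover with keepalived looks roughly like this:

# /etc/keepalived/keepalived.conf on the master
# (mirror on the backup with "state BACKUP" and a lower priority)
vrrp_script chk_forwarding {
    script "/usr/local/bin/check_router.sh"   # hypothetical health-check script
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.1/24        # floating gateway address the clients point at
    }
    track_script {
        chk_forwarding
    }
}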


Saturday, February 27, 2016

security - How can I implement ansible with per-host passwords, securely?

I would like to use ansible to manage a group of existing servers. I have created an ansible_hosts file, and tested successfully (with the -K option) with commands that only target a single host



ansible -i ansible_hosts host1 --sudo -K # + commands ...



My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible.



Using -K, I am only prompted for a single sudo password up-front, which then seems to be tried for all subsequent hosts without prompting:



host1 | ...
host2 | FAILED => Incorrect sudo password
host3 | FAILED => Incorrect sudo password
host4 | FAILED => Incorrect sudo password
host5 | FAILED => Incorrect sudo password



Research so far:




  • a StackOverflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo"


  • the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it’s not required." (emphasis mine)


  • this security StackExchange question which takes it as read that NOPASSWD is required


  • article "Scalable and Understandable Provisioning..." which says:




    "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password"


  • article "Basic Ansible Playbooks", which says



    "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don’t"



    My thoughts exactly, but then how to extend beyond a single server?


  • ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time."




So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing a password across hosts, or enabling root SSH login all seem like rather drastic reductions in security.
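For context, here is a hedged sketch of one approach rather than a confirmed answer: Ansible of that era reads the sudo password from the per-host variable ansible_sudo_pass, so different passwords can live in per-host variable files encrypted with ansible-vault instead of relaxing sudo. Roughly:

# host_vars/host1  (encrypt it with: ansible-vault encrypt host_vars/host1)
ansible_sudo_pass: "the-sudo-password-for-host1"

# then run without -K, supplying the vault password instead
ansible -i ansible_hosts all --sudo --ask-vault-pass -m ping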

some questions about raid card



I have some questions about raid card.



Take the LSI MegaRAID® SAS 9260-8i RAID card as an example.




There are only two connection ports on the RAID card, and they provide two cables that can connect 8 SAS/SATA drives.
The documentation says it can connect a "maximum 32 drives per controller";
I want to know how I can connect more than 8 hard disks with this card.



When they say "Eight internal SATA+SAS ports", does that mean they provide two cables which can connect 8 hard drives, but you can use some other way to connect up to the maximum of 32 drives?



The other question: I see that some host bus adapters have RAID integrated, so what is the difference between a host bus adapter and a RAID card? Are they the same?



Also, is a SAS/SATA expander's only function to connect more hard drives?




Thanks in advance.


Answer



This is easier than it seems, and you're already onto the solution.



Basically it has two channels of 6Gbps each; each connector can have a 1-into-4 cable plugged into it, going off to either a single drive or an expander, each expander capable of supporting 0-4 drives. This is how you'd get to 32 drives, through the use of expanders.



Oh, and at its simplest an HBA converts back and forth between the system and its disks, while a RAID card does the same but more - it lets you bunch up disks into virtual disks of various types of array (R0/1/3/4/5/6/10/50/60 etc.).


Friday, February 26, 2016

linux - Do you NEED BIND running next to Apache on a production env (apache is using virtualhosts)?

Good afternoon to you all. My question is perhaps a simple one. Say I have a web server running (Linux + Apache) and I have a few domains I'd like to point to this machine. All great and dandy, BUT: do I need a DNS server like BIND to be running as well, or can I just host multiple websites using just Apache and virtual hosts? Thanks, guys!
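(For reference, and only as an illustration with placeholder names: the "just Apache and virtual hosts" part looks like the block below, while DNS for each domain still has to be pointed at this machine wherever those domains' DNS is hosted - that need not be a BIND instance on this box.)

# Apache 2.2-style name-based virtual hosts
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example-one.com
    DocumentRoot /var/www/example-one
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example-two.com
    DocumentRoot /var/www/example-two
</VirtualHost>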

Thursday, February 25, 2016

cron - anacron screws my crontab?



My system is CentOS 6.3. I notice my /etc/cron.daily scripts are not executed at 4:01 AM; instead, these scripts are executed at a random time. I searched, and it seems it is 'anacron' that screws up my crontab. For example, this is part of my log file from /var/log/cron:



Oct  1 04:01:01 xfiles anacron[7350]: Anacron started on 2012-10-01
Oct 1 04:01:01 xfiles anacron[7350]: Will run job `cron.daily' in 18 min.
Oct 1 04:01:01 xfiles anacron[7350]: Jobs will be executed sequentially
Oct 1 04:01:01 xfiles run-parts(/etc/cron.hourly)[7352]: finished 0anacron

Oct 1 04:19:01 xfiles anacron[7350]: Job `cron.daily' started

Oct 2 03:01:01 xfiles anacron[8810]: Anacron started on 2012-10-02
Oct 2 03:01:01 xfiles anacron[8810]: Will run job `cron.daily' in 36 min.
Oct 2 03:01:01 xfiles anacron[8810]: Jobs will be executed sequentially
Oct 2 03:01:01 xfiles run-parts(/etc/cron.hourly)[8812]: finished 0anacron
Oct 2 03:37:01 xfiles run-parts(/etc/cron.daily)[10133]: starting 00webalizer

Oct 3 03:01:01 xfiles anacron[14989]: Will run job `cron.daily' in 30 min.
Oct 3 03:01:01 xfiles anacron[14989]: Jobs will be executed sequentially

Oct 3 03:01:01 xfiles run-parts(/etc/cron.hourly)[14991]: finished 0anacron
Oct 3 03:31:01 xfiles anacron[14989]: Job `cron.daily' started
Oct 3 03:31:01 xfiles run-parts(/etc/cron.daily)[16301]: starting 00webalizer

Oct 4 03:01:01 xfiles anacron[16357]: Will run job `cron.daily' in 12 min.
Oct 4 03:01:01 xfiles anacron[16357]: Jobs will be executed sequentially
Oct 4 03:01:01 xfiles run-parts(/etc/cron.hourly)[16359]: finished 0anacron
Oct 4 03:13:01 xfiles anacron[16357]: Job `cron.daily' started
Oct 4 03:13:01 xfiles run-parts(/etc/cron.daily)[16692]: starting 00webalizer


Oct 5 03:01:01 xfiles anacron[19413]: Will run job `cron.daily' in 29 min.
Oct 5 03:01:01 xfiles anacron[19413]: Jobs will be executed sequentially
Oct 5 03:01:01 xfiles run-parts(/etc/cron.hourly)[19415]: finished 0anacron
Oct 5 03:30:01 xfiles anacron[19413]: Job `cron.daily' started
Oct 5 03:30:01 xfiles run-parts(/etc/cron.daily)[20086]: starting 00webalizer


You can see that /etc/cron.daily just cannot start at a fixed time. Sometimes at 3:30, sometimes at 3:13, and sometimes at 3:37 or 4:19...



In previous CentOS (5.x), /etc/cron.daily started at 4:01 AM correctly, but I just cannot figure out how CentOS 6's anacron screws up the cron schedule. How can I make the system behave like CentOS 5.x and start /etc/cron.daily at a fixed time (4:01, for example)?




Thanks.



(This is a 24/7 server, so there is no shutdown issue.)


Answer



If RANDOM_DELAY is set in your /etc/anacrontab, it's the expected behaviour.



Verbatim copy from the man page anacrontab(5):





If the RANDOM_DELAY environment variable is set, then a random value
between 0 and RANDOM_DELAY minutes will be added to the start up delay
of the jobs. For example a RANDOM_DELAY set to 12 would therefore add,
randomly, between 0 and 12 minutes to the user defined delay.




This might explain your symptoms.
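For reference, the stock CentOS 6 /etc/anacrontab looks approximately like the following (values quoted from memory, so double-check your own file). Both RANDOM_DELAY and the per-job delay column push cron.daily away from a fixed start time; getting back to the CentOS 5 behaviour generally means zeroing those values or replacing the anacron hand-off (the cronie-noanacron package) so that run-parts /etc/cron.daily is scheduled directly from cron at a fixed time.

# /etc/anacrontab (CentOS 6 defaults, approximately)
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
RANDOM_DELAY=45          # adds 0-45 random minutes to every job
START_HOURS_RANGE=3-22   # jobs may only start within this window

# period  delay(min)  job-id        command
1         5           cron.daily    nice run-parts /etc/cron.daily
7         25          cron.weekly   nice run-parts /etc/cron.weekly
@monthly  45          cron.monthly  nice run-parts /etc/cron.monthly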


firewall - Using a dynamic dns securely



I'm a bit of a newcomer when it comes to networking, firewalls and port forwarding - please bear with me:



I've just set up a dynamic DNS name that points to my external IP and is handled through my Cisco router. Everything is fine so far.

When I visited the DNS name in my browser, http://exampledynamicdns.com, I got redirected to the backend GUI of my router - kind of expected, but still not cool.



So to prevent my Router settings from being available on the interwebs, I did a port forward of port 80 to a non-existent IP on my LAN.



Is this a good practice or not?


Answer



Sure. This is an acceptable workaround.



Better would be to configure the router not to expose the web interface to external IPs, to run it on a port other than 80, and to password-protect the web interface. But sometimes you have these very cheap routers that can't be configured that way. Buying a better one is recommended but not necessary.


vmware esxi - Boot from SD card on Gen8 DL360

I'm attempting to get VMWare ESXi running from an SD card on an eBay purchased DL360 Gen8. I'm guessing that the firmware is probably out of date (iLO 4 - v2.50).



The only boot option I have is "USB storage device" (which I guess the internal SD card might fall into).



The question is: could this be firmware related, and if so, where do I find the correct firmware to update via the iLO web interface? The HPE support site seems to list only firmware which is updated via a host OS.

Wednesday, February 24, 2016

linux - Resolve all external addresses to an internal address when an internet connection is missing

I am using a Raspberry Pi running Arch Linux with a WiFi router to provide an access point for locally hosted web content. I am running a DHCP server. When the Pi is plugged into an internet connection, clients connected to its WiFi network can access any webpage online as well as the locally hosted content (via 10.1.0.1).



When an internet connection is not present, I would like all DNS requests to route to 10.1.0.1. However, I only want this behavior to occur if the DNS request to the real webpage (say www.google.com) does not resolve. How can I conditionally resolve all external addresses to the internal address, only when external internet access is not possible?



This question is similar to How can i resolve all external addresses to internal address?, but I am not clear how to apply dnsmasq conditionally, or whether dnsmasq is the correct tool for my use case.
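One hedged sketch of how this could be wired up (the script, file name, and probe target are my assumptions, not an established recipe): dnsmasq's address=/#/10.1.0.1 directive answers every name with 10.1.0.1, and a small cron job can add or remove that directive depending on whether upstream resolution currently works. It assumes conf-dir=/etc/dnsmasq.d is enabled in dnsmasq.conf.

#!/bin/bash
# hypothetical check script, run from cron every minute or so
OFFLINE_CONF=/etc/dnsmasq.d/offline.conf

if host -W 2 www.google.com >/dev/null 2>&1; then
    # upstream DNS works: drop the catch-all rule if it is in place
    if [ -f "$OFFLINE_CONF" ]; then
        rm "$OFFLINE_CONF"
        systemctl restart dnsmasq
    fi
else
    # no upstream resolution: answer everything with 10.1.0.1
    if [ ! -f "$OFFLINE_CONF" ]; then
        echo "address=/#/10.1.0.1" > "$OFFLINE_CONF"
        systemctl restart dnsmasq
    fi
fi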

virtualization - Can't lock vz with proxmox

I would like to back up my virtual servers in order to move them away, but when I start the backup process I only see this every time:



Feb 09 08:40:34 INFO: Starting Backup of VM 101 (openvz)
Feb 09 08:40:34 INFO: CTID 101 exist mounted running

Feb 09 08:40:34 INFO: status = running
Feb 09 08:41:34 ERROR: Backup of VM 101 failed - can't lock VM 101


If you need any more details, I will gladly provide them!
Thanks in advance for your help!

Tuesday, February 23, 2016

domain name system - SPF Record - Sender server SPF record permerror



I cannot seem to get an SPF record working for a client of ours; Google Mail keeps failing on the lookup.



My SPF record is




v=spf1 a ip4:80.74.254.215 include:mx1.helloevery1.co.uk include:_spf.google.com include:smtproutes.com include:smtpout.com





The client's main mail servers are




smtproutes.com and smtpout.com




These are working fine, SPF passes as expected.



mx1.helloevery1.co.uk is our mail server. It is a simple ISPConfig Postfix setup. We send all mail through 1 account, let's say that is "noreply@example.com".




There is a username and password set up to send through but we change the "from" address in our application. The from address is "enquiry@clientdomain.com".



"enquiry@clientdomain.com" is not set up on mx1.helloevery1.co.uk. It is only on the client servers.



When I send through my SMTP server from the site, I am receiving the following error when I send to my email account.




Received-SPF: permerror (google.com: permanent error in processing during lookup of enquiry@clientdomain.com) client-ip=212.71.234.103;



Authentication-Results: mx.google.com;

spf=permerror (google.com: permanent error in processing during lookup of enquiry@clientdomain.com) smtp.mail=enquiry@clientdomain.com




This looks like it is trying to look up the domain on my SMTP server (where it is not configured). If I were to set up the domain on my SMTP server and create an account, then when I send through my SMTP server it would try to deliver the mail locally.



I've always assumed that SPF was just a verification tool to say which servers are allowed to send, but never really took into account the address the mail is coming from.



I'm stuck, as I can't find a resource on SPF record creation that I can relate to.


Answer



An SPF record states which mail servers are allowed to send mail for the sending domain. Basically, whatever is in the from: address.




So if you have someone sending mail as "ninja@ninja.com" and the receiving mailserver checks SPF, it looks for an SPF record on "ninja.com" to see if the sending mailserver is listed.



Does this answer your question ?
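To make that concrete for the record quoted in the question (my own hedged reading, not verified against the client's DNS): a permerror generally means the record itself could not be evaluated - for example when an include: target publishes no SPF record of its own, or when evaluation needs more than 10 DNS lookups - and the record above also ends without any all mechanism. A well-formed version would look something like the following, keeping in mind that every included domain (mx1.helloevery1.co.uk, smtproutes.com, smtpout.com) must itself publish a valid SPF record or the whole evaluation fails with exactly this permerror:

clientdomain.com.  IN  TXT  "v=spf1 a ip4:80.74.254.215 include:mx1.helloevery1.co.uk include:_spf.google.com include:smtproutes.com include:smtpout.com ~all"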


Web- and DB-Server: High clock rate and less cores vs. less clock rate and many cores



I am currently looking into setting up a new infrastructure for a hosting project of mine. Basically it will be managed hosting with a strong focus on Django-based apps. All will be Linux-based, of course, with PostgreSQL as the DB and either nginx or Apache as the web server, along with Gunicorn etc.




Now I am scouting the market for server systems to rent and it is tough to get things that fit the budget and all criteria. So I would like to ask for advice on the following.



All the good offers I could find use either a single high-end Xeon E3-12xx (quad-core @ >= 3.2 GHz) or many-core Opterons (either 6000-series, or older ones with 8 cores or more @ 2.00-2.40 GHz). From an I/O POV, both usually have HW RAID 10 with battery, enough RAM to satisfy my needs (24-32 GiB ECC RAM) and a 1 Gbit/s uplink. A single offer I found is also rather nice but is based on two E5620 Xeons, which are rather dated IMHO, and that system is offered at the same price as the others.



Now I am torn. The Xeons do outperform the Opterons hands down in every synthetic benchmark I have seen - emphasis on synthetic. But it is my strong belief that many cores offer a great benefit for a server's workload, where a lot more is done in parallel (e.g. also less expensive context switching). But with a difference of 1 GHz or more per core, I am no longer so sure, because in my case I am comparing different micro-architectures (Xeon vs. Opteron) and different generations as well.



So I'd like to ask the community: more cores at a lower clock rate, or fewer cores at a higher clock rate, for an app-centric web server that also has to handle the DB load?



The mail system is another story. Ideally I would like to have mail, DB and web on three different servers, but that's not in the budget right now. So depending on what system I get for the web server, it is possible that the mail system could end up on that system as well... which would be sub-optimal, I know. I am worried here about how much all the small writes from the mail system would affect the DB and web performance. With 32 GiB of RAM, for example, the DB will fit completely in RAM for the very near future, until the service has grown considerably (if ever).




One possible (more or less optimal) scenario: Web and DB on an 8 core Opteron 62xx @ 2 Ghz box (everything else as above) and the mail system on a smaller E3-1230 for example. But I am again very worried by the Opteron's performance. :(



Tough decision. Again, I'd appreciate any advice/help I could get.



Thanks a lot in advance...



UPDATE(11/08/11 @ 1518 GMT): Basically I am comparing Sandy Bridge E3-12x0 XEONs with Magny-Cours/Zurich/Interlagos Opterons. Unfortunately I cannot get my hands on an Ivy Bridge based dedicated server. Based on the apache benchmarks and cpubenchmark.net results, the E3-1270v1 seems like a true workhorse which will outperform even a dual E5620 and most Opterons thrown at it which is kind of unbelievable. Naturally most of those tests are still synthetic and there are other bottlenecks to consider. But at this stage I want to lay a solid foundation for the future, so I won't be CPU bound too easily.



My intuition has always been more cores and/or processors for a web/db server instead of a higher clock rate at the expense of the amount of cores/processors. So looking at a 4C setup with the E3-1270 for example, feels like the wrong thing to do.




By the way, the hosting will be a product I am offering my clients, so it is not a single product I can benchmark. Basically it will be almost always Django-based apps, mostly CMS systems with custom functionality or custom projects.



Right now I am really considering a nice E3-1270 system as the combined Web- and DB-Server and a E3-1220 as the mail system. Both with fast RAIDs and plenty of RAM naturally. I am still rather worried though that the real 4C will pose a problem in the production environment soonish. :( But if I get a Opteron 6274 based system, I will have to run the mail system on that system as well, which is not very ideal. And besides, according to cpubenchmark.net it is not too much faster... but again: synthetic benchmark. :(



Basically, what I am asking myself is: will, for example, an Opteron 6274 or 2x 6128 outperform an E3-1270 in a real-world scenario, or will the E3-1270 still win? Is it the right decision for a solid foundation?



Again, if anyone has any good suggestions and/or advice for me, it is very much appreciated because I am stuck in a feedback loop in my brain right now. :)



UPDATE(12/08/11 @ 1835 GMT) Thanks to everyone for their help. Right now I am investigating a totally different approach: Hosting my client's projects and such over at Heroku or Google AppEngine and thus avoiding most of this trouble in advance. ;) For the mail system, a E3-12x0 will totally suffice, so I would save myself all the headache with a combined web/app/db server which would not be very scalable in the end after all. I'll have to do some further investigating if this would be possible without any major limitations... but I am hopeful. :)


Answer




There is a lot more to your question than just cores and performance.




  • How many concurrent users does your server need to support?

  • Are you going with one server and no redundancy? What is the acceptable downtime for your application?

  • Have you done any benchmark of your development machine for this application to somewhat extrapolate performance?



You may be too worried if you do not expect too much traffic. If you want to put all your apps on a single server, you risk a complete outage if the hardware has issues. See if you can go for two smaller machines and distribute the load. For performance, you can dedicate cores to the PostgreSQL process using taskset so DB performance stays manageable.
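For illustration only (the core numbers and the pinned command are assumptions), dedicating cores with taskset can look like this:

# pin an already-running postgres master to cores 0-3 (children forked later inherit it)
taskset -cp 0-3 $(pgrep -o -x postgres)

# or start the app server restricted to the remaining cores
taskset -c 4-7 gunicorn myproject.wsgi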




If you manage your disks well, then you may get better performance as well. For example, set 2 disks in RAID 1 for pg_xlog.



The long answer is, benchmark your application, and consider redundancy if you cannot afford downtime. Also, compare costs with cloud solutions which will help if your application can scale.


ubuntu - Using Load Balancer in Rackspace cloud for website HA

I have tested Tomcat-specific clustering with Apache mod_jk and mod_proxy on local Ubuntu VMs for our website's high availability and load balancing. The real servers are hosted by Rackspace, a cloud server provider. I tested Tomcat clustering with 1 load balancer and 2 web servers. As a single load balancer is again a single point of failure, I'm trying to add one more LB as a slave.




In Rackspace there is a Load Balancers option, along with a guide on setting up a load balancer, which I am not sure how to follow because of some doubts, as I have never done this before.



Could anybody recommend, step by step, what I should and shouldn't do, using only the necessary resources and avoiding unnecessary costs?



The following are the things I'm not sure about; I'm requesting that someone who is already using Rackspace help me out here in setting up load balancing:




  • I want to add at least 4 machines: 2 as web servers, 1 as a load balancer server and one more as a failover load balancer server. I think I can add new machines from the Rackspace Load Balancer option?


  • I just heard that one could cut down the cost of static IPs by setting up the cluster on a LAN, assigning private IP addresses to the cluster computers, and thus cut down bandwidth costs too. Is it really possible to join cluster computers in a LAN in Rackspace?



  • As far as I know, I will point the domain name (website name) to the load balancer in the DNS where the domain is registered, and both LBs should have a static public IP assigned (I think I'm right here). As I already said, I want to add one more LB to avoid a single point of failure. Is there any option in Rackspace where I can point the website domain to both LBs, so that only one is active and, if the active one fails, it points to the other LB (similar to IP failover), so that I can make it a zero-downtime website?




If possible, I'd ask anybody to give me a step-by-step list of how to do this in Rackspace, with your own recommendations on what I should and shouldn't do.



Thank you in advance!



EDIT: 1



I heard Rackspace offers to share an IP between computers; is that so? Then I can use this option. I would specify the public/shared IP as a virtual IP on eth0:0 in the network interfaces file on both load balancers. Do you really think this works flawlessly, without any interruption, even when the public IP is specified as a virtual IP on the eth0:0 interface?
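For what that eth0:0 idea would look like on Ubuntu, purely as an illustration with a placeholder address (Rackspace's shared-IP feature still has to be arranged on their side so traffic for that IP can reach both nodes):

# /etc/network/interfaces fragment on each load balancer
auto eth0:0
iface eth0:0 inet static
    address 203.0.113.10     # the shared/public "virtual" IP
    netmask 255.255.255.0

In an active/passive pair, only the node currently holding the role should actually bring this alias up (e.g. via keepalived or heartbeat); if both machines answer the same IP at once they will conflict.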




EDIT: 2



I was thinking of a setup where all the computers (web servers & load balancers) are in a LAN (in Rackspace) on some network (192.168.x.x) using the eth0 interface. For the load balancers, on the eth0:0 or eth1 interface, a public IP is shared between the LBs, and mod_proxy & mod_jk on the LBs redirect requests to the web servers, as they would be in the same network (192.168.x.x).



After some analysis I realized that I could not directly access the web servers for testing from my place, as they don't have public IPs assigned, so I thought of adding some proxy redirection like ProxyPass /web1 http://web1-ip-here on the LBs to access the web servers via the shared public IP or the site domain name (I think this redirects, as all the servers are in the LAN). But again, to update or install packages, the web servers need an internet connection. I am wondering whether there is an option to connect the web servers to the WAN through the single public IP on the LBs, and if so, whether it works without issues. Otherwise I would have to use a public IP for each web server.

windows - OfficeScan RealTime Scan randomly stops




I'm having an issue with a couple of computers where the real-time scan for Trend Micro 8 just stops with no user intervention. It is random; they might not see it for days, then it will just turn off. In the logs I'm seeing the services being sent a stop command, sometimes from the user, other times from one of the administrator accounts. There are two services, Listener and RealTime Scan; both are sent a stop (not a crash), but the Listener is started right back up by SYSTEM after shutting down. They are not dependent on each other: RealTime Scan has no dependencies, and Listener depends on Network Connections and WMI. I'm not seeing anything particularly unusual here, either in the logs or in the running processes.



Sequence:



Stop is sent to Listener Service, 
Stop is sent to RealTime Scan Service,
Start command sent by SYSTEM to Listener Service.



This all happens within 2-3 minutes of windows starting.
All computers are running Windows XP Service Pack 3.
Both services are set to start Automatically and they log no other issues.



Event Type: Information
Event Source: Service Control Manager
Event Category: None
Event ID: 7035
Date: 10/13/2009
Time: 7:41:14 AM

User: DOMAIN\administrator (Sometimes this is the users name)
Computer: RNDCOMPNAMEHERE
Description:
The OfficeScanNT RealTime Scan service was successfully sent a stop control.


I'm at a bit of a loss as to what else to look at or where to look for further clues as to the source. Searching online nets a big fat 0 results for this particular problem.


Answer



Okay, after finally getting hold of a Trend Micro rep, it would appear this is actually an application bug and not a network or configuration issue. Our network vendor, who is supposed to be responsible for these systems, has not updated them or let us know about a service pack for Trend Micro 8. I'm pushing the issue through with my organization's VP, and hopefully this should be a solved problem once that is in place.


linux - ZFS RAID0 pool without redundancy




I created a ZFS pool on Ubuntu 14.04 without specifying RAID or redundancy options, wrote some data to it, rebooted the machine, and the pool is no longer available (UNAVAIL). I don't have the exact error to hand, but it mentioned that there was not sufficient replication available. I created two datastores in the pool, which consists of two 3 TB disks. ZFS was recommended to me for its deduplication abilities and I'm not concerned with redundancy at this point.



I actually only want RAID0 so no mirroring or redundancy in the short term. Is there a way to do this with ZFS or would I be better off with LVM?



zpool status -v:

sudo zpool status -v
pool: cryptoporticus
state: UNAVAIL
status: One or more devices could not be used because the label is missing

or invalid. There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool from
a backup source.
see: http://zfsonlinux.org/msg/ZFS-8000-5E
scan: none requested
config:

NAME STATE READ WRITE CKSUM
cryptoporticus UNAVAIL 0 0 0 insufficient replicas

sda ONLINE 0 0 0
sdc UNAVAIL 0 0 0


UPDATE



zpool export cryptoporticus, then zpool import cryptoporticus resolved this for now. Is this likely to happen again on reboot?


Answer



You are likely seeing a situation where at least one of the disks in use became unavailable. This might be intermittent and resolvable; both Linux implementations (ZFS on Linux as well as zfs-fuse) seem to exhibit occasional hiccups which are easily cured by a zpool clear or a zpool export / zpool import cycle.




As for your question, yes, ZFS is perfectly capable of creating and maintaining a pool without any redundancy just by issuing something like zpool create mypool sdb sdc sdd.
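One hedged addition of my own (not part of the original answer): on Linux it is common to create or re-import pools using the stable /dev/disk/by-id names instead of sda/sdc, because those letters can shuffle between boots and then show up exactly as a missing/invalid label. The device names below are placeholders:

# re-import the existing pool using persistent identifiers
zpool export cryptoporticus
zpool import -d /dev/disk/by-id cryptoporticus

# or, when creating a new non-redundant (striped) pool from scratch
zpool create mypool /dev/disk/by-id/ata-DISK1-SERIAL /dev/disk/by-id/ata-DISK2-SERIAL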



But personally, I would not use ZFS just for its deduplication capabilities. Due to its architecture, ZFS deduplication requires a large amount of RAM and plenty of disk I/O for write operations. You probably will find it unsuitable for pools as large as yours, as writes will get painfully slow. If you need deduplication, you might want to look at offline dedup implementations with a smaller memory and I/O footprint, like btrfs file-level batch deduplication using bedup or block-level deduplication using duperemove: https://btrfs.wiki.kernel.org/index.php/Deduplication


sql server - Can't connect to mssql 2008 via ssh



I'm using WinSSHD on my server.



I can connect locally on the server via telnet & Management Studio, specifying port 1433.
I have allowed remote connections.



I can telnet from my local computer at localhost 14333(local port) and get the correct telnet prompt up.




But when I try to connect via management studio locally to localhost,14333 I get



TITLE: Connect to Server



Cannot connect to localhost,14333.






ADDITIONAL INFORMATION:




A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - No connection could be made because the target machine actively refused it.) (Microsoft SQL Server, Error: 10061)



What a classic error message.



Is there any special reason this isn't working? Considering telnet is working via ssh, I thought I was home and dry.


Answer



Something else is at play here. I have successfully got Management Studio to work over SSH, even with WinSSHD, without issue.



Is SQL Server running as a default instance?

May we see the port forwarding settings your using in your ssh/putty client?
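For comparison (an illustration from the editor, not part of the original exchange), a working local forward for a default instance usually looks like one of these, after which Management Studio is pointed at tcp:localhost,14333 to force TCP:

# OpenSSH-style client
ssh -L 14333:localhost:1433 user@your-server

# PuTTY: Connection > SSH > Tunnels
#   Source port: 14333
#   Destination: localhost:1433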


Disaster recovery plan development best practicies or resources?




I have been tasked with leading a project to update an old and somewhat one-sided disaster recovery plan. For now we're just looking at getting the IT side of DR sorted out. The last time they did this, they set their scope by making up a single disaster (the data center flooded) and planning for it to the exclusion of all other disaster types. I would like to take a more well-rounded approach. I know this is a solved problem; other organizations have written DR plans.



Our plan is to take our IT DR plan and go forward with it and say, "Hey, this is what we want in a DR plan for IT; does it mesh with what the rest of the University is doing? Are there restored-service priorities you'd like changed?" We have a pretty good idea what the rest of the plan is, and we're expecting this to go over well.




What I am looking for is guidance on how to scope a DR plan and what questions I should be thinking about. Do you have favorite resources, books, training that are related to DR plan development?


Answer



An excellent source of information is Disaster Recovery Journal (about).



Community resources available include the current draft of their Generally Accepted Practices (GAP) document, which provides an excellent outline of the process and deliverables that constitute a solid business continuity plan and process. Also available are several white papers covering various DR/BC topics.



The process seems daunting, but if approached systematically with a good outline of where you would like to end up (like the DRJ GAP document), you can ensure that you optimize the time invested and maximize the value of the end product.



I find their quarterly publication to be interesting and informative as well (subscribe).



Monday, February 22, 2016

exchange - Can someone explain the relationship between a server's FQDN and Active Directory Domain




Sorry, I know this is a rather lazy question; my server experience is limited to OS X, and I'm hoping a Windows guy can "explain it to me like I'm five".



I'll need to help configure a bunch of iPads/iPhones to use Exchange shortly, and I'm sure some of the users will give me inaccurate authentication details. Rather than send them packing, I'd like to be able to make an educated guess at what it might be based on the info they do know, but I'm still a bit fuzzy on the following:



• do all versions of Windows Server follow the same rules for the AD Domain (eg: is it based on FQDN? NetBIOS name? totally arbitrary?)?



• is an AD Domain case-sensitive?



Edit: I'm not asking what is the difference between the two (yes, we use DNS on the Mac too). The question is rather what is the relationship between the two. Do they need to match, basically.



Answer



The DNS suffix of a domain joined computer is the name of the Active Directory domain to which the computer is joined, which is also the DNS namespace for the domain.



So, you have a computer named "computer1" in an AD domain named "mydomain.local":



The NetBIOS name for the computer is computer1



The name of the AD domain that the computer is joined to is mydomain.local



The DNS suffix for the computer is mydomain.local




The AD DNS zone for the domain is mydomain.local



The FQDN of the computer is computer1.mydomain.local.



The NetBIOS name for the domain is mydomain (although it is possible to create a NetBIOS name for the domain that doesn't match the DNS name for the domain).



EDIT



Incidentally, in Windows NT 4 it was possible for a computer to have a different DNS host name than the NetBIOS name (multiple DNS host names in fact), but I don't think that's been possible since Windows 2000, due to AD's integration with DNS.
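(As an aside from the editor rather than the original answer: on any domain-joined Windows machine these values are easy to read back, which may help with the "educated guess" part of the question.)

rem AD / DNS domain name, e.g. mydomain.local
echo %USERDNSDOMAIN%

rem NetBIOS domain name, e.g. MYDOMAIN
echo %USERDOMAIN%

rem the joined domain also appears here
systeminfo | findstr /B /C:"Domain"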



How to migrate 200+ people company from hosting to Exchange 2010 in most non distruptive way




We want to implement Exchange 2010 in our organization, which is spread over a couple of locations. Right now we're using POP3/SMTP at a hosting provider, and all people are using Outlook 2007 and 2010. Mail for our company (and many others) is a crucial form of communication with clients and internally.



Normally I would want to do it all in one day (and I still prefer that way), doing more or less this:




  • setting up Exchange to work for company.com domain

  • setup all the accounts mailboxes

  • do any other related tasks (some testing etc.)

  • switch MX records from old server to new server


  • tell users (prepare instructions) to setup Outlook 2007/2010 with one of the following:


    • by adding the new account via Control Panel / Mail / New Account in the same profile, leaving the old mailbox for a few days, and then removing the old mail server and just keeping the .PST files from it (or asking the user to move the emails from the PST to Exchange).

    • whatever option they wish :-)


  • the list is just example for the sake of information



But the thing is, our management would like to do it over a couple of days/weeks, since there will be problems with some people being offline, some people's computers not working as they should - pick any other problem here (we had the experience of a domain rename taking over 2 months before all people finally switched to the new domain).




So this gets tricky: after the MX records change there will be no new emails in the old mailboxes, and people who are offsite and have problems setting up Outlook to use Exchange will be out of commission. Also, with more than 200 people and few IT staff (most located in one location), this could cause some havoc with people who didn't read the emails informing them about the change, who didn't follow the instructions, who are offsite, etc. (we have all seen that during our domain rename, where the last guys were migrated 2 months after the rename).



So my plan would be (a bit complicated):




  • inform users 1 month/1 week/1day prior to implementation

  • make accounts/domains configured on Exchange and tell people even 1-2 weeks before change to add Exchange to their Outlooks.

  • let exchange be able to send emails as company.com for the whole time before the final switch (so the emails would keep on coming to old mailbox but would let users to send emails from both just in case they make some mistake)

  • make the final switch and tell users to only use Exchange and help them to migrate emails/remove old mailboxes.




Or




  • inform users 1 month/1 week/1day prior to implementation

  • make accounts/domains configured on Exchange and tell people even 1-2 weeks before change to add Exchange to their Outlooks.

  • let exchange be able to send emails as company.com for the whole time before the final switch (so the emails would keep on coming to old mailbox but would let users to send emails from both just in case they make some mistake)

  • go through rooms of people / project by project / etc. and switch them one by one over days/weeks, by setting up, for each account on the hosting, a redirect to a separate domain (company.com.pl) which would already be set up and working - so that all emails arriving at the hosted mailboxes of people in a room/project that has already been migrated are forwarded automatically to Exchange (via the 2nd domain).

  • until all people are done that way, the MX would stay pointing to the old server


  • after all people are migrated, switch the MX and, after a while, switch the redirection off



An additional option would be to use a POP3 connector for everyone, but this brings other problems:




  • people's emails will be vanishing from their hosting Inbox and may cause some confusion

  • there may be some race condition for Outlook's which download and delete emails after download (like mine)

  • we would need to know all people's passwords for mailboxes




What do you think? Maybe there's another, better way? Or do you have some nice ideas on how to better mitigate the problems with the migration?


Answer



I'd use a POP3 connector. Here's how I'd do it:




  1. Setup the accepted domain as an Internal relay in Exchange. This will allow Exchange to deliver messages to itself when the mailbox exists and forward them when it doesn't.


  2. Migrate users in batches. Reset their POP3 passwords to whatever suits you (because they won't need them after the migration). Create an Exchange mailbox for these users then setup the POP3 connector on the server (you could set it up in Outlook but I believe this adds unnecessary complexity). Migrate users 1 department at a time or 1 manageable batch at a time. After setting up their Exchange profile, import their PST back into Exchange. You may take as long as you want to do this as impact is minimal.


  3. When all users have been migrated, change the domain type to authoritative and move the MX records to point to your anti-spam solution (Your appliance, or better yet, a cloud solution). Wait 48 hours then remove the POP3 connectors.
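(A hedged illustration of steps 1 and 3 in the Exchange Management Shell; the domain name is a placeholder:)

# step 1: accept company.com as an internal relay during the migration
Set-AcceptedDomain -Identity "company.com" -DomainType InternalRelay

# step 3: once everyone is on Exchange, make it authoritative
Set-AcceptedDomain -Identity "company.com" -DomainType Authoritative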




windows - Conditional Forwarder or DNS Stub Zone to Non-Authoritative DNS Zone



How could I go about forwarding DNS lookups to a non-authorative zone, in a sort of 'next hop' scenario?



The setups is as follows:



One ADDS domain (contosoa.local) contains two DNS servers; these servers need to be able to look up records for another ADDS domain (contosob.local), however it is not possible for these servers to speak to each other directly. This is merely for security and not due to clashing subnets.




However, there is another domain (notcontoso.local) which can speak to both the contosoa.local domain and the contosob.local domain. The DNS servers within this domain have a stub zone which forwards all lookups for contosob.local to its DNS servers. This is all working as intended.



However, I still need contosoa.local to look up records for contosob.local. I tried to create another stub zone which pointed lookups to the stub zone in notcontoso.local, but as this is not an authoritative zone it was denied.



How can I hop DNS lookups via notcontoso.local from contosoa.local? I tried adding one of contosob.local's DNS servers to the DNS client configuration on the required hosts; however, this does not work, as Windows doesn't seem to round-robin that far down the list.


Answer



This is certainly a one off case, but after testing this in my home lab it seems that it is possible to set up conditional forwarders in this manner.



So contosoa.local has a conditional forwarder for contosob.local that goes to notcontoso.local, notcontoso.local has a conditional forwarder for contosob.local that goes to contosob.local. The DNS query for contosob.local from contosoa.local will "flow" through notcontoso.local.




I've tested this successfully and confirmed the traffic flow with Microsoft Network Monitor.



Note that when setting this up, when the wizard prompts that the DNS server is not authoritative for the zone, add it anyway. Once the "chained" conditional forwarders are set up in contosoa.local and notcontoso.local the DNS query for contosob.local from contosoa.local should flow through notcontoso.local.
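As a hedged sketch of what the chained forwarders can look like from the command line (server roles and the IP addresses are placeholders):

:: on a contosoa.local DNS server - forward contosob.local queries to notcontoso.local's DNS
dnscmd /ZoneAdd contosob.local /Forwarder 10.0.2.10

:: on a notcontoso.local DNS server - forward contosob.local queries to its real DNS servers
dnscmd /ZoneAdd contosob.local /Forwarder 10.0.3.10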


printing - Trying to prevent server printers on terminal servers

We have 4 servers running Windows 2008 R2. We have a separate print server, and users are able to connect to and use shared printers. However, the terminal servers appear to create local printers for our 5 Xerox printers, which error.



I've checked the group policy which is set to prevent printer redirection.



I've checked the users profiles and the printers aren't listed in there.



I've checked the servers connection properties which also show printer redirection as disallowed.



The printers concerned recreate themselves when deleted from the file structure and registry, using the terminal server's own IP address as the port.




What am I missing?



Thanks for any help.



Jay

Should I move servers and change email address after email spoofing?





I'm hoping the community can help me shed some light on a recent email spoof. Yesterday my client woke up to find hundreds of bounced failure notices.



The client did not personally send any of these emails. Each failure notice had a different reply-to address i.e.



xyxyxs@client-domain.co.uk
trg@client-domain.co.uk

hjd@client-domain.co.uk



The various reply-to addresses suggest that only the client's domain had been spoofed and not a specific email account (i.e. actual-email@client-domain.co.uk).



I know if your email account has been spoofed, it's game over and you need to create a new email address. However, a specific address hasn't been targeted. Am I correct in thinking that I do not need to delete and create a new email address? I also assume the domain would have been widely blacklisted? Should I move hosting companies and would this make a difference?



Either way, I'll be implementing DKIM.



Sorry for so many questions, I'm just a little lost as the spoofer didn't target a specific email address.




Thanks


Answer



If I understand correctly, a spammer sent email with a forged From: header.



Unfortunately, this is easy to do, but it has no consequence beyond annoyance. There is therefore nothing you need to do except secure your domain with SPF and DKIM.
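For illustration, SPF and DMARC are published as plain TXT records in the domain's DNS zone; the values below are only a minimal sketch (adjust the SPF mechanisms to whatever actually sends your mail, and note that DKIM additionally requires publishing the public key your mail server or provider generates):

client-domain.co.uk.         IN  TXT  "v=spf1 mx -all"
_dmarc.client-domain.co.uk.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@client-domain.co.uk"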



You mention changing hosting company. There is no need at all, nor any need to change the mail address, at least if I have understood what happened to you.


Sunday, February 21, 2016

subdomain - Getting domain without www in front to point to same place as with www in front



Consider the following screenshot from Godaddy:



(screenshot of the GoDaddy DNS management page, not reproduced here)



Why am I not allowed to make the * refer to ghs.google.com, when I can make www point to that just below? I think there is something fundamental that I have misunderstood here, but this should really not be that difficult.



What I want to achieve is that open-org.com points to the same place as www.open-org.com. Preferably they should both point to open-org.com like serverfault.com.




There are some similar questions to this question around, but their solutions do not seem to work on Godaddy.



I am using Google Sites for web hosting.


Answer



You may not CNAME the root of the domain - but you certainly may CNAME the * name, which you should be able to do in the CNAME section of that interface. Note that this will not apply to the name open-org.com though.



A CNAME of the root violates the rules of DNS, since you have (and need) SOA and NS records on the root name for your domain to function properly.



Your best option is probably to have open-org.com's A record point to a server which does an HTTP redirect to www.open-org.com; GoDaddy may provide this kind of redirection as a built-in service, but I'm not sure.
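As a rough sketch, the zone would then end up looking something like this (the IP address is a placeholder for whichever host performs the redirect):

open-org.com.      IN  A      203.0.113.10   ; host that issues an HTTP redirect to www
www.open-org.com.  IN  CNAME  ghs.google.com.
*.open-org.com.    IN  CNAME  ghs.google.com.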



Apache Virtual Host Multiple SSL Mappings Being Ignored




I have a VHOST configuration that I need a fresh set of eyes on. We have SSLEngine enabled in two virtual hosts - one on port 443 and another on port 4432. For some reason, regardless of whether the connection comes in on 443 or 4432, it automatically falls back to the first vhost defined. If I put 443 on top it uses that config for both 443 and 4432, and if I put 4432 on top it uses that config for both. Can anyone tell me why it's just grabbing the top virtual host even though each should only answer on its respective port? I know that SSL needs its own IP, but it's my understanding that a separate port should suffice too?



Listen *:443

<VirtualHost *:443>
    SSLEngine On
    SSLCertificateFile ...
    SSLCertificateKeyFile ...
    SSLCertificateChainFile ...
    ...
</VirtualHost>


Listen *:4432

<VirtualHost *:4432>
    SSLEngine On
    SSLCertificateFile a_different_file...
    SSLCertificateKeyFile a_different_file...
    SSLCertificateChainFile a_different_file...
    ...
</VirtualHost>



Answer



Why not put all the SSL on port 443 and use name-based virtual hosts for the multiple domains? You're trying to do that, but you're missing a piece. Try this:



NameVirtualHost *:443

<VirtualHost *:443>
    # insert ssl stuff1 here
    ServerAdmin email@you.com
    DocumentRoot "C:/xampp/htsecure1/"
    ServerName domain1.com
    ServerAlias www.domain1.com
</VirtualHost>

<VirtualHost *:443>
    # insert ssl stuff2 here
    ServerAdmin email@you.com
    DocumentRoot "C:/xampp/htsecure2/"
    ServerName domain2.com
    ServerAlias www.domain2.com
</VirtualHost>

<VirtualHost *:443>
    # insert ssl stuff3 here
    ServerAdmin email@you.com
    DocumentRoot "C:/xampp/htsecure3/"
    ServerName domain3.com
    ServerAlias www.domain3.com
</VirtualHost>



linux - Our Red Hat Enterprise 5 Server is swapping itself to death - need a plan for detecting the cause



At seemingly random intervals, the memory usage on our server climbs past the available RAM and the machine swaps until CPU usage is also at 100%. It then starts killing off processes when it runs out of swap, and we have to restart the server.




When this happens our website and internal systems become unresponsive. I also cannot SSH into the server at this point, so I have no way of identifying the processes which are killing it.



I don't have a huge amount of experience with server admin but I'm looking for ideas for how to detect the problem. Let me know what extra information you may need.


Answer



This could be a fork bomb (i.e. a process that keeps forking children and hence exhausts resources). It could also be a memory-leak type issue.



Identifying the offending process(es) is the key here. Try this:



When you next restart the server, leave a console open as root and use renice to set its priority to -20. Once that's done, run top (which will inherit the -20 priority) and watch to see what's causing the issue.




This command ought to do it:



sudo bash                # open a root shell
renice -n -20 -p $$      # give this shell the highest priority; children (such as top) inherit it
top


When things start looking tight, resort to killall, or kill the parent process and then any zombies.




At -20 you should be able to keep an active SSH connection and still do your work; it is the highest (most favoured) priority a user-space process can be given.



Don't forget to look in the logs (web server and otherwise in /var/log) as well since they can be quite revealing.
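If the box dies before you can get onto it, it also helps to have something recording the top memory consumers so there is evidence after the fact; a simple sketch (the log path and interval are arbitrary):

# /etc/cron.d/memlog - record the 20 largest processes by resident memory every minute
* * * * * root ps -eo pid,ppid,user,rss,vsz,comm --sort=-rss | head -20 >> /var/log/memhog.log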



If you identify the problem let us know what it is and if you require further help and assistance.



Good luck.



See the renice man page and top man page.


Saturday, February 20, 2016

domain name system - Change my MX record on my server to google MX?



I have a Windows Server 2008 machine where I host a site, and now I've decided to host the email on Google Apps. I added the MX records I got from them to my DNS settings on the server, but with no luck. I've only recently started doing server work, so here is what I did:



Server Manager / Roles / DNS Server / DNS / SERVERNAME / MYDOMAIN / Forward Lookup / New MX



Host or child domain: what goes here?
FQDN: here is my domain name, I think because I named the NS my domain?
FQDN MX: here is the Google MX record I got from them
MSP: 10



I have no idea where I'm going wrong, but I thought I'd ask whether any of you can give me some tips on what to look for, or spot any newbie mistake from this little bit of info.




I really appreciate all help I could get on this.


Answer



Leave the host or child domain blank - the record will then apply to the parent domain (what you refer to as MYDOMAIN). FQDN will autofill with MYDOMAIN - leave it as is. FQDN MX should be the hostname that Google provided you, and the priority (MSP) should be the number they specify. Create one such MX record for each entry they asked you to add.
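If you prefer the command line, the same record can be added with dnscmd on the DNS server; a sketch with placeholder values (use the exact hostname and priority Google gave you; "@" targets the zone root):

dnscmd /RecordAdd MYDOMAIN @ MX 10 aspmx.l.google.com.

Repeat for each MX entry in Google's list.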


Friday, February 19, 2016

linux - PHP upgrade fails, CentOS 6.7



System information



Operating system CentOS Linux 6.7




Kernel and CPU Linux 2.6.32-042stab108.1 on x86_64



yum repolist enabled


only ones I added manually






  • base - CentOS-6 - Base

  • epel - Extra Packages for Enterprise Linux 6 - x86_64

  • extras - CentOS-6 - Extras

  • ius - IUS Community Packages for Enterprise Linux 6 - x86_64

  • mod-pagespeed - mod-pagespeed

  • remi-safe - Safe Remi's RPM repository for Enterprise Linux 6 - x86_64

  • rhscl-php55-epel-6-x86_64 - PHP 5.5.21 - epel-6-x86_64

  • updates - CentOS-6 - Updates

  • virtualmin - RHEL/CentOS/Scientific 6 - x86_64 - Virtualmin

  • virtualmin-universal - Virtualmin Distribution Neutral Packages

  • vz-base - vz-base

  • vz-updates - vz-updates




available installed versions




  • /usr/bin/php 5.3.3

  • /usr/bin/php55 5.5.30







I tried the following (using Virtualmin):




  1. Enabled 5.5.30 for a specific directory; that didn't work out, and I got this error from Virtualmin: "This virtual server is using the mod_php execution mode for PHP, such does not allow per-directory version selection."

  2. Enabled the directory-specific version for home//domains/..com/public_html/public, but phpinfo() still reports version 5.3.3.

  3. Tried yum replace php-common --replace-with=php55-php-common; it listed too many packages under "WARNING: Unable to resolve all providers" and I didn't proceed.

  4. Tried to remove version 5.3, but other PHP code breaks, which is why I hesitate to replace that version.


  5. Browsed the web for answers, only to find that more than half of the things I read are broken: repositories are outdated or conflict with other packages when I try to yum upgrade php.



Can someone please help with this frustrating situation? I really thought that installing another version and enabling it for a directory would solve this problem.


Answer



We recommend you use SCL versions of packages, so that the PHP versions can co-exist peacefully with each other and not cause the conflicts you're running into. I've got Remi's PHP 5.6.15 packages running on our new server, under Virtualmin, and it's working fine (I did have to tweak the detection code in php-lib.pl, though that won't be needed in a few days when new Virtualmin comes out).
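As a rough sketch of what that looks like in practice (the collection/package names below follow Remi's SCL-style naming and are an assumption; check yum for the version you actually want):

yum install scl-utils                                   # provides the scl command
yum --enablerepo=remi-safe install php56 php56-php-fpm  # hypothetical collection name; installs alongside the stock 5.3.3
scl enable php56 -- php -v                              # runs the collection's PHP without touching /usr/bin/php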



Also, you should use the fcgid execution mode, and not mod_php. mod_php can only exist in one version in a single Apache instance and will never work with multiple versions. fcgid is the default execution mode in a Virtualmin system installed with install.sh, but it is configurable in System Settings:Server Templates:Template Name:Apache Website. "Default PHP execution mode" is the option you want, and FCGId is the right value for using multiple PHP versions (and for a variety of other good reasons).



The current version of Virtualmin doesn't support all of the SCL PHP packages yet, but the next version will handle arbitrary versions easily (and will likely have the ability to query the SCL command to figure out what your preferred PHP version is; I don't know how much of that has been implemented yet).




There has been quite a bit of discussion about this subject in our forums over the past few weeks, as SCL has gotten more PHP versions, and as Virtualmin support for SCL packages has been expanded.



There are some docs here (I'm not sure whether Eric has updated them yet to address the recent changes in SCL, but they will be soon if not already):



http://www.virtualmin.com/documentation/web/multiplephp


How to perform incremental / continuous backups of zfs pool?




How can zfs pools be continuously/incrementally backed up offsite?



I recognise that send/receive over SSH is one method; however, that involves having to manage snapshots manually.



There are some tools I have found; however, most are no longer supported.



The one tool which looks promising is https://github.com/jimsalterjrs/sanoid; however, I am worried that a less widely known tool may do more harm than good, in that it may corrupt or delete data.



How are continuous/incremental zfs backups performed?


Answer




ZFS is an incredible filesystem and solves many of my local and shared data storage needs.



While I do like the idea of clustered ZFS wherever possible, sometimes it's not practical, or I need some geographical separation of storage nodes.



One of the use cases I have is for high-performance replicated storage on Linux application servers. For example, I support a legacy software product that benefits from low-latency NVMe SSD drives for its data. The application has an application-level mirroring option that can replicate to a secondary server, but it's often inaccurate and only offers a 10-minute RPO.



I've solved this problem by having a secondary server (also running ZFS on similar or dissimilar hardware) that can be local, remote or both. By combining the three utilities detailed below, I've crafted a replication solution that gives me continuous replication, deep snapshot retention and flexible failover options.



zfs-auto-snapshot - https://github.com/zfsonlinux/zfs-auto-snapshot




Just a handy tool to enable periodic ZFS filesystem level snapshots. I typically run with the following schedule on production volumes:



# /etc/cron.d/zfs-auto-snapshot

PATH="/usr/bin:/bin:/usr/sbin:/sbin"

*/5 * * * * root /sbin/zfs-auto-snapshot -q -g --label=frequent --keep=24 //
00 * * * * root /sbin/zfs-auto-snapshot -q -g --label=hourly --keep=24 //
59 23 * * * root /sbin/zfs-auto-snapshot -q -g --label=daily --keep=14 //
59 23 * * 0 root /sbin/zfs-auto-snapshot -q -g --label=weekly --keep=4 //

00 00 1 * * root /sbin/zfs-auto-snapshot -q -g --label=monthly --keep=4 //


Syncoid (Sanoid) - https://github.com/jimsalterjrs/sanoid



This program can run ad-hoc snap/replication of a ZFS filesystem to a secondary target. I only use the syncoid portion of the product.


Assuming server1 and server2, simple command run from server2 to pull data from server1:



#!/bin/bash


/usr/local/bin/syncoid root@server1:vol1/data vol2/data

exit $?
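For context, syncoid is essentially automating ZFS's native incremental send/receive. A hand-rolled equivalent of a single incremental pass (dataset and snapshot names here are hypothetical) would look roughly like this:

zfs snapshot vol1/data@replica-2
zfs send -i vol1/data@replica-1 vol1/data@replica-2 | ssh server2 zfs receive -F vol2/data

Syncoid takes care of creating those snapshots and finding the most recent common one for you.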


Monit - https://mmonit.com/monit/



Monit is an extremely flexible job scheduler and execution manager. By default, it works on a 30-second interval, but I modify the config to use a 15-second base time cycle.



An example config that runs the above replication script every 15 seconds (1 cycle):




check program storagesync with path /usr/local/bin/run_storagesync.sh
every 1 cycles
if status != 0 then alert


This is simple to automate and add via configuration management. By wrapping the execution of the snapshot/replication in Monit, you get centralized status, job control and alerting (email, SNMP, custom script).







The result is that I have servers with several months of monthly snapshots and many rollback and retention points (see: https://pastebin.com/zuNzgi0G), plus a continuous rolling 15-second atomic replica:



# monit status



Program 'storagesync'
status Status ok
monitoring status Monitored
last started Wed, 05 Apr 2017 05:37:59
last exit value 0
data collected Wed, 05 Apr 2017 05:37:59

.
.
.
Program 'storagesync'
status Status ok
monitoring status Monitored
last started Wed, 05 Apr 2017 05:38:59
last exit value 0
data collected Wed, 05 Apr 2017 05:38:59


Best practice for picking convenient IPv6 addresses for a few hosts on an isolated LAN




I have in the past set up small ad hoc LANs which were totally disconnected from the internet, and when assigning addresses to the hosts I could pick whatever made communicating the addresses between humans as easy as possible (and as easy as possible to remember). Not surprisingly, one of my favourites was to give hosts numbers like 10.1.1.1, 10.1.1.2, 10.1.1.3 etc. Very easy to communicate, and very easy to keep in your head. (OK, I had almost total freedom in how to choose my addresses; I could of course not use 127.0.0.1 for any of the Ethernet interfaces, or use any network or broadcast addresses.)



While waiting for various parties (enterprises, ISPs etc.) to deploy IPv6 (and thus provide a real incentive to use IPv6 in the real world), I'm keen on trying it out on a small (minimalistic?) scale, simply by repeating the exercise of setting up an isolated LAN, but this time relying on IPv6 for communication between the hosts. I can, quite freely, pick any IPv6 addresses I like. Almost, at least: I cannot pick ::1 as the address of any LAN interface, for example, as that is reserved for the loopback interface. And given all the different ranges of IPv6 addresses that are reserved for various uses and purposes, I wonder: in this isolated-LAN context, what is the best way to pick easy-to-remember, easy-to-say IPv6 addresses? (Say it is for 3 to 32 hosts or so.)



I know this question is a bit academic and probably not something you would run into in a 'real' deployment of IPv6 (be it business or hobby usage). Still, I'm curious about the best way to "handcraft" convenient IPv6 addresses, so please don't provide answers which only give me a solution that "saves" me from having to create these IPv6 addresses manually. (Or answers that only explain why manually setting these IPv6 addresses is bad practice...)


Answer



I'm on board with Tom's solution, but with an amendment:



FC00:0001:0001::/48 would be your network segment




Hosts:



FC00:1:1::1



FC00:1:1::2



FC00:1:1::3



.

.
.



FC00:1:1:FFFF:FFFF:FFFF:FFFF:FFFF



...THAT'S A LOT OF IPs!


vps - CentOS 7 firewall-cmd not found



I have just installed CentOS 7:





[root@new ~]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)


I am trying to configure the firewall, and I'm told that in CentOS 7 iptables is no longer used, replaced by firewalld. When attempting to run a command to set a firewall rule as such:




firewall-cmd --add-port=80/tcp



I receive the following message:




[root@new ~]# firewall-cmd --add-port=80/tcp
-bash: firewall-cmd: command not found


edit: I tried the following command, too:





[root@new ~]# firewall-offline-cmd --add-port=80/tcp
-bash: firewall-offline-cmd: command not found


without any success.



I tried running the following to check that firewalld was installed:





[root@new ~]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
firewalld.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)


Following this output, I tried starting firewalld:





[root@new ~]# service firewalld start
Redirecting to /bin/systemctl start firewalld.service
Failed to issue method call: Unit firewalld.service failed to load: No such file or directory.


Any ideas what is wrong with the CentOS 7 install? This is a clean install on an OpenVZ VPS; I've yet to make any changes at all.


Answer



Two possible options:





  • Your PATH does not contain /usr/bin

  • firewall-cmd is not installed


    • yum install firewalld
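A few quick checks along those lines (standard commands):

echo $PATH                     # confirm /usr/bin is in the PATH
rpm -q firewalld               # is the package installed at all?
yum install firewalld          # install it if it is missing
systemctl enable firewalld && systemctl start firewalld

Note that some minimal OpenVZ templates simply do not ship firewalld, which would explain the missing unit.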



Thursday, February 18, 2016

filesystems - mke2fs says "Device or resource busy while setting up superblock"

I'm trying to test restoring a backed-up Linux filesystem, /apps (an ext3 filesystem):



/dev/cciss/c0d0p7     177G  3.8G  164G   3% /apps


I ran the following command to take a dump:



/sbin/dump -0uz -f /backup_labeir1/apps.dmp /apps



Then I deleted the /apps folder:



rm -rf /apps


And unmounted it:



umount -l /apps



Next I'm trying to make the file-system



mke2fs -j -b 4096 -L data /dev/cciss/c0d0p7


after which I'm planning to do the below steps:



# mkdir /apps

# mount -t ext3 /dev/cciss/c0d0p7 /apps
# cd /apps
# restore -rf /backup_labeir1/apps.dmp .
# reboot


I've 2 questions:




  1. Are my testing steps correct?


  2. When I run the below I get the error:
    [root@labeir2 backup_labeir1]# mke2fs -F -j -b 4096 -L data /dev/cciss/c0d0p7
    mke2fs 1.39 (29-May-2006)
    /dev/cciss/c0d0p7 is apparently in use by the system; mke2fs forced anyway.
    /dev/cciss/c0d0p7: Device or resource busy while setting up superblock



But the filesystem is not mounted, and lsof shows no output:



 lsof | grep /dev/cciss/c0d0p7

lsof /dev/cciss/c0d0p7


Please help me resolve this.

cisco - IOS Port Forwarding and NAT involving a VPN



We have a Cisco 1921 router running IOS 15.1 at one of our branches, which is connected via an L2L IPsec VPN to an ASA 5510 running ASA 8.2 at our headquarters.




The network looks something like this:




192.168.14.0/24 - RT - Internet - ASA - 192.168.10.0/24
|----L2L VPN----|


RT has NAT configured to let the local users there access the internet. The configuration looks like this:



crypto isakmp policy 10

encr aes
authentication pre-share
group 2
crypto isakmp key SECRETKEY address HQ_ASA_IP
!
!
crypto ipsec transform-set ESP-AES-SHA esp-aes esp-sha-hmac
!
crypto map outside_map 10 ipsec-isakmp
set peer HQ_ASA_IP

set transform-set ESP-AES-SHA
match address 120
!


interface GigabitEthernet0/0
ip address 192.168.14.252 255.255.255.0
ip nat inside
ip virtual-reassembly in
duplex auto

speed auto
no mop enabled
!

interface Dialer0
mtu 1492
ip address negotiated
ip access-group 101 in
ip nat outside
ip virtual-reassembly in

encapsulation ppp
ip tcp adjust-mss 1452
dialer pool 1
dialer-group 1
ppp authentication chap callin
ppp chap hostname SECRETUSERNAME
ppp chap password 0 SECRETPASSWORD
ppp pap sent-username SECRETUSERNAME password 0 SECRETPASSWORD
crypto map outside_map
!


ip nat inside source route-map nonat interface Dialer0 overload

route-map nonat permit 10
match ip address 110
!

access-list 110 deny ip 192.168.8.0 0.0.7.255 192.168.8.0 0.0.7.255
access-list 110 permit ip 192.168.14.0 0.0.0.255 any
access-list 120 permit ip 192.168.14.0 0.0.0.255 192.168.8.0 0.0.7.255

access-list 120 permit ip 192.168.8.0 0.0.7.255 192.168.14.0 0.0.0.255


Now we have a service on one of the hosts within the 192.168.14.0/24 network which needs to be accessible from the internet, and we have configured port forwarding using the following command:



ip nat inside source static tcp 192.168.14.7 8181 EXT_IP 31337 extendable


The forwarding works and the service can be accessed via EXT_IP:31337, but we can no longer access 192.168.14.7:8181 via the VPN from the 192.168.10.0/24 network, whereas this worked just fine before the forwarding was in place.



Any hint on what I'm missing or why this behaves in such a way would be very much appreciated.


Answer




Here's a good writeup of the problem you are facing:



https://supportforums.cisco.com/docs/DOC-5061
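In short: the static PAT entry now also rewrites the host's replies to VPN peers, so return traffic towards 192.168.10.0/24 leaves as EXT_IP and no longer matches the crypto ACL. A commonly used workaround for this class of problem is to attach a route-map to the static entry so that VPN-bound traffic is excluded from it. The following is only a rough, untested sketch; the exact syntax (in particular combining route-map with a static PAT entry) varies by IOS release:

access-list 130 deny   ip host 192.168.14.7 192.168.8.0 0.0.7.255
access-list 130 permit ip host 192.168.14.7 any
!
route-map static-nonat permit 10
 match ip address 130
!
no ip nat inside source static tcp 192.168.14.7 8181 EXT_IP 31337 extendable
ip nat inside source static tcp 192.168.14.7 8181 EXT_IP 31337 route-map static-nonat extendable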


IPtables port forward

Scenario:



Using DD-WRT on a Linksys router, I want to port forward a specific public IP address to internal IP 192.168.0.20 port 80 using iptables. I'm not sure how to do this; any help would be appreciated.
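A rough sketch of the iptables rules (the WAN interface name and addresses are assumptions: on many Linksys DD-WRT builds the WAN interface is vlan2, and 203.0.113.10 stands in for the public address being forwarded):

iptables -t nat -A PREROUTING -i vlan2 -d 203.0.113.10 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.20:80
iptables -A FORWARD -p tcp -d 192.168.0.20 --dport 80 -j ACCEPT

If the intent is instead to allow only traffic coming from one specific external IP, match it with -s <that-ip> rather than -d.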

Wednesday, February 17, 2016

linux - forward public port 81 to port 80 on local ip




I am new to Server Fault, so please inform me of any bad behaviour :)



I searched Server Fault (and Google) for an answer, but can't find the answer to my problem (I can find answers which are partially what I need, but I lack the knowledge/experience to combine them into a solution).



The problem is as follows:
- I have a public server with port 81 available on the public IP address
- I have a local server with port 80 which is not available to the public
- I want the user to connect to port 81 on the public IP address and arrive at port 80 of the local server (192.168.98.###)




I think I need to do some configuring with iptables, but that's quite foggy to me.



I tried some answers from "How can I port forward with iptables?" but I ran into all kinds of errors.



Some questions:
- Does the local server need any special configuration? For example, do I have to set its gateway to the IP address of the public server?
- /proc/sys/net/ipv4/conf/ppp0 doesn't exist; is that a problem?



There are no ports blocked by the firewall.




I have total control over the public server, which is running:



# cat /proc/version
Linux version 2.4.22-1.2115.nptl (bhcompile@daffy.perf.redhat.com) (gcc version 3.2.3 20030422 (Red Hat Linux 3.2.3-6)) #1 Wed Oct 29 15:42:51 EST 2003
# iptables --version
iptables v1.2.8


I don't know the OS of the local server, and I have no control over its configuration.




Could you please explain which iptables settings I could use, or any other configuration?


Answer



First of all, you don't need to deal with /proc/sys/net/ipv4/conf/ppp0 if you are not running a PPP/modem connection on your gateway.



The first thing you need to do is enable forwarding on your gateway, like this:



# echo '1' > /proc/sys/net/ipv4/conf/eth0/forwarding (if you are running your live IP on eth0)
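Depending on the kernel configuration, the global forwarding switch may also be needed (a standard sysctl):

# echo '1' > /proc/sys/net/ipv4/ip_forward    (or: sysctl -w net.ipv4.ip_forward=1; add net.ipv4.ip_forward = 1 to /etc/sysctl.conf to persist)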



Then simply forward your traffic like this:



# iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 81 -j DNAT --to 192.168.1.2:80
# iptables -A FORWARD -p tcp -d 192.168.1.2 --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT


You should replace 192.168.1.2 with the internal IP of your local server. Also, replace eth0 with the interface on which you have the live IP on your gateway. Note that the PREROUTING rule matches the public port (81) and the DNAT target rewrites it to the local server's port 80.



And lastly, as mentioned in the post you read earlier, you can check the routing with:




# ip route


Hope this helps. Feel free to follow up if you run into issues.



Also, please post any errors you get along the way.


Cannot acess SSL version of mysite [Apache2][SSL][Certbot]



So I just installed a Let's Encrypt SSL certificate via certbot with the command



sudo certbot --apache -d mysite.org -d mysite.org


However, after a successful installation the site simply can't be accessed. I've tried a few recommendations from the internet, like adding port 443 to ports.conf:



Listen 443


NameVirtualHost *:443
Listen 443



Adding a VirtualHost *:443 block to 000-default (even though I'm sure I'm not using that conf):




<VirtualHost *:443>
    DocumentRoot /var/www/html/mysite
    ServerName mysite.org
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/mysite.org/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mysite.org/privkey.pem
</VirtualHost>



I've also enabled mod_ssl with a2enmod ssl, disabled my firewall, and restarted Apache every time I make a change, but nothing has changed: my site still can't be accessed via SSL, and the browser simply says the site is unreachable.



This is what mysite.org.conf looks like (I commented out the HTTPS redirect):





<VirtualHost *:80>
    ServerName mysite.org
    ServerAlias www.mysite.org localhost
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/mysite

    <Directory /var/www/html/mysite>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/mysite.org-error.log
    CustomLog ${APACHE_LOG_DIR}/mysite.org-acces.log combined

    #RewriteEngine on
    #RewriteCond %{SERVER_NAME} =www.mysite.org [OR]
    #RewriteCond %{SERVER_NAME} =localhost [OR]
    #RewriteCond %{SERVER_NAME} =mysite.org
    #RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>




And this is what mysite.org-le-ssl.conf looks like:





<VirtualHost *:443>
    ServerAdmin admin@mysite.org
    ServerName mysite.org
    ServerAlias www.mysite.org

    DocumentRoot /var/www/html/mysite
    SSLCertificateFile /etc/letsencrypt/live/mysite.org/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mysite.org/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>




I really have no idea how to solve this problem; could you please help me?




Here is the result of




sudo netstat -nlp |grep :443




tcp6       0      0 :::443                  :::*                    LISTEN      16258/apache2   


The result of the wget command:



Connecting to mysite.org (mysite.org)|my.public.ip.address|:443... failed: Connection refused.



The result of the curl command (my Ubuntu somehow can't locate the curl package, so I ran it from Windows):



curl: (56) Recv failure: Connection was reset


And yes, I can access my site via HTTP and via my public IP.


Answer



Well, I somehow solved it. My router was configured to forward anyone accessing my public IP to the server's port 80 only; there was no forwarding rule for port 443. After adding that rule, I can finally access the site via HTTPS.
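For anyone else debugging the same symptom, a quick way to tell a server-side problem from a router/NAT problem is to test TLS from the server itself (standard tool; the hostname is a placeholder):

openssl s_client -connect 127.0.0.1:443 -servername mysite.org

If that handshake succeeds locally while remote connections are refused, Apache is fine and the problem is upstream (router port forwarding, NAT or an external firewall), which is exactly what happened here.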


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...