Tuesday, October 31, 2017

virtualhost - Configuring virtual host on apache



I have a development machine with a LAMP setup. Now to add virtual hosts this is what I do currently:




  1. add a virtual host file ... in the /etc/apache2/sites-available

  2. add the virtual host using sudo a2ensite


  3. sudo service apache2 restart



This works fine and I get the desired site up and running on my localhost.



Now the problem is that every time I have to make some change to the configuration file for a site, I have to sudo to edit the configuration file.



What I was wondering is if it is possible to specify some directory in my home folder where apache can look for configuration files for the sites instead of the default sites-enabled directory.


Answer



You can create a directory wherever you want and set its permissions however you like.




In the Apache conf (/etc/apache2/apache2.conf) you can include the config files in that directory:



Include /path/to/dir/*.conf



Apache will still have to be restarted whenever a configuration change occurs, even if you choose this solution.



Also note that the files in there will only be included when Apache is restarted - there is no need for a2ensite, nor can you disable the config files with a2dissite. The way to disable a config file included this way is to remove it or change its extension to something other than .conf.



Remember to take the possible risks into account. You will need to give sudo access, but you can limit it to "/etc/init.d/apache2 reload" - the reload parameter runs a configtest before restarting the httpd, and if the configtest fails, Apache will not be restarted.
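For example, a single sudoers entry scoped to that one command (a sketch - the username is a placeholder) would look like:

user1 ALL=(root) NOPASSWD: /etc/init.d/apache2 reload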




I am assuming you run Debian or Ubuntu.


ilo - What is the NAND used for in the HPE ProLiant DL360p Gen8?

I have a server with the error: 'iLO Self-Test reports a problem with: Embedded Flash/SD-CARD' and 'Embedded media manager failed initialization'.




HPE advises the following: https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c04996097



I've now reached the last step of all: replace the system board. But I'm wondering what the impact would be if I did not replace the board. I do not use the SD-CARD. I'm not sure what the NAND is actually used for and what it stores, and furthermore, what do I risk if I do not resolve this issue?



I believe it might be loss of logs from the iLO and perhaps even loss of settings from the iLO?



Edit: it is now months later, the servers are still out of warranty, and I've found even more similar issues with NAND which all fit under this HPE advisory umbrella. Using the RESTful Interface Tool avoids the annoying need for hands-on support for the AC power removal part. But sadly I still can't fix all issues. While I've seen some information in this thread, online and the like, I can't find any conclusive information from HPE on what the impact of NAND issues like this is.

Monday, October 30, 2017

security - I just got a linode VPS a week ago and I've been flagged for SSH scanning



I got a 32-bit Debian VPS from http://linode.com and I really haven't done any sort of advanced configuration to secure it (port 22 open; password authentication enabled).



It seems somehow there is SSH scanning going on from my IP, and I'm being flagged since this is against the ToS. I've been SSHing in only from my home Comcast connection, which runs Linux.



Is this a common thing when getting a new vps? Are there any standard security configuration tips? I'm quite confused as to how my machine has been accused of this ssh scanning.



Answer



Personally, it sounds like you have been compromised. I would re-install the OS and then reconfigure SSH with:




  • key-based auth only

  • use AllowUsers or AllowGroups to lock down users allowed onto the box

  • make use of iptables to lock down allowed IP addresses.
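A minimal sketch of those three items (the user name and source network are placeholders - adjust them before you lock yourself out):

# /etc/ssh/sshd_config
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers admin

# iptables: allow SSH only from a trusted range, drop the rest
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP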


email - How can I attract more Spammers hitting my Spam traps?



Currently I have a number of domains that are set up as email Spam traps, so if I get mail on those domains I can be certain that it is ~100% Spam. I'm using this information to temporarily defer message delivery from spamming IPs on my real email domains. I can also use the Spam mails to improve Bayesian filtering and to identify brand-new viruses before they hit my real inboxes.




This procedure is only effective when I get many Spams on the Spam traps. So the question is how can I generate more Email traffic on the Spam trap domains?



I'm not going to register Spam traps at dubious newsletter senders as this would increase the false negative rate. And it would also need too much manual work to register hundreds of addresses.



Trying to publish the Spam trap addresses on Websites also failed. I have millions of addresses published and they got harvested but not used for spamming. It takes weeks and months until you get a noticeable amount of Spam on these addresses.



I'm not going to publish these Spam traps on forums and guestbooks as this would mean fighting Spam by spamming the web.



What I'm now looking for are ways to "accidentally" reveal hundreds and thousands of email addresses so that Spammers pick them up and use them in their campaigns. If someone can give me advice on which other methods are good for attracting Spammers, I will appreciate it.







Answering Miles' suggestions:




  • Mark's answer only points out how to set up good sites for harvesting and what to do with the fetched Spam. But as I said, I already have these pages, and they are not harvested enough

  • Phil's experiment is too old. His approach was appropriate until 2004, and to some degree until 2006, but then Spammers changed their methods drastically.





    1. Using external services such as Craigslist or guestbooks counts as spamming in my opinion and so is not a valid option.

    2. This is poisoning of semi-legitimate newsletters and increases the false negative rate.

    3. I already have two servers that pretend to be open proxies. But as they are not real open proxies, I can see that spammers make test attempts. These test mails are never returned to them, so they see that it is only a fake open relay and avoid these servers.

    4. Twitter only gets crawled for tweets with specific keywords. Those accounts are then followed and used for Twitter spamming, but not for email spamming.



Answer



You could set up a fake company web site and "accidentally" publish a dump file called "users.sql" with names and email addresses (something like "staff.csv" might actually be more effective). Once it gets indexed by Google you'd expect some spammer to pick it up.
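As a sketch, the bait file only needs to look plausible - every address should live under a trap domain you control, e.g.:

name,department,email
Jane Doe,Sales,jane.doe@trapdomain.example
John Roe,Support,john.roe@trapdomain.example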



If you're feeling a bit bolder you could dig into the underbelly of the email marketing underground yourself and offer to sell a database dump you stole from a server you compromised... (since patched, of course). Just make sure you route through Tor or a public VPN provider when doing this!




Or do a Lulzsec-style release on pastebin. I'm not sure how you'd "promote" it so it got picked up by scripts, though; using keywords like "hacked database", "email address" etc. would probably help.


Active Directory Split-Zone vs SubDomain Domain Name




Note - I know there are a ton of questions around AD Naming. I do not believe though that this is a duplicate question. If it is please link me to a relevant one :).



We are implementing AD. Our big issue is the domain name. We already have decided against .local after reading the many articles out there and speaking with people (We are 70/30 Mac right now).



We're trying to figure out whether we should go with ourdomain.com or corp.ourdomain.com for our domain name.



We know already if we went with ourdomain.com we'd have potential issues if people didn't prepend www. to our site's URL, and we are willing to live with that.



Our concern is if there are any other consequences we don't know about. E.g. If we have an Exchange server of ours hosted in a Data center that isn't part of the LAN, would it have issues with DNS?




To give an overview of what we have -



Our site is hosted in an external datacenter; we currently use Google Apps but plan a migration to Exchange (yes, we know it's against the trend..), which may also be hosted in a datacenter or onsite.



We also make extensive use of our UTM firewalls' VPNs and are looking at a Cisco VPN or Citrix solution as we scale up.



There are also plans to institute Windows Distributed File Sharing and possibly use Centrify or ExtremeZ-IP if we find native Mac integration lacking.



We also plan to use AD as our authentication backbone using it for RADIUS and LDAP services for authentication and role management across our internal web apps and wireless.




We did read http://msmvps.com/blogs/acefekay/archive/2009/09/07/what-s-in-an-active-directory-dns-name-choosing-a-domain-name.aspx but I was hoping for some more up to date information from anyone well versed in maintaining AD especially in hybrid/distributed environments.


Answer



There is absolutely no reason to use the same AD domain DNS name as your external web-facing DNS zone. None. At all.



Microsoft recommends using a subdomain of an existing domain, so something like corp.yourdomain.com or ad.mydomain.com is fine. If you don't want your users to see that their login name is corp\user you can set the domain's NetBIOS name to MYDOMAIN during the DCPROMO process of the first DC in your domain. The end result would be that your domain's FQDN would be corp.mydomain.com but your users would see mydomain\user. This way you can have "prettier" logins, without the complete shitmess of split-horizon DNS.
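For reference, on 2012-era servers the same NetBIOS choice can be made when promoting the first DC from PowerShell, where it is an explicit parameter (a sketch with placeholder names; the DCPROMO wizard exposes the same setting on its naming page):

Install-ADDSForest -DomainName "corp.mydomain.com" -DomainNetbiosName "MYDOMAIN"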



Seriously, there's no valid reason to ever have split-horizon DNS with your AD infrastructure.


Sunday, October 29, 2017

linux - iptables: why only OUTPUT rules are needed for samba clients?



I tried the following iptables rules for a samba client and they worked. Please note that the policies for INPUT, OUTPUT and FORWARD were all set to DROP.



iptables -A OUTPUT -m state --state NEW,ESTABLISHED -p udp --dport 137 -j ACCEPT

iptables -A OUTPUT -m state --state NEW,ESTABLISHED -p udp --dport 138 -j ACCEPT
iptables -A OUTPUT -m state --state NEW,ESTABLISHED -p tcp --dport 139 -j ACCEPT
iptables -A OUTPUT -m state --state NEW,ESTABLISHED -p tcp --dport 445 -j ACCEPT


Why do we only need OUTPUT rules for samba clients? Why don't we need INPUT rules to open those ports for incoming packets?



An additional question: do the chain names carry any directional significance internally, or are they just mnemonics for easy understanding?



iptables:

-------------
# Generated by iptables-save v1.4.7 on Wed Aug 28 21:18:39 2013
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [4:284]
-A INPUT -p udp -m udp --dport 177 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT

-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -p tcp -m tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 80 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 443 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

-A INPUT -p icmp -m icmp --icmp-type 8 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type 0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -p tcp -m tcp --dport 7100 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 6000 -j ACCEPT
-A OUTPUT -p udp -m udp --sport 177 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 22 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT

-A OUTPUT -p tcp -m tcp --sport 20 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 1024:65535 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 80 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 443 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p icmp -m icmp --icmp-type 8 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p udp -m state --state NEW,ESTABLISHED -m udp --dport 137 -j ACCEPT

-A OUTPUT -p udp -m state --state NEW,ESTABLISHED -m udp --dport 138 -j ACCEPT
-A OUTPUT -p tcp -m state --state NEW,ESTABLISHED -m tcp --dport 139 -j ACCEPT
-A OUTPUT -p tcp -m state --state NEW,ESTABLISHED -m tcp --dport 445 -j ACCEPT
COMMIT
# Completed on Wed Aug 28 21:18:39 2013

Answer



The default chain names are most definitely involved in the packet flow. There are many diagrams all over the internet showing the various paths a packet might take through the chains, but in general for your scenario traffic from the machine will traverse output, and traffic to the machine will traverse input. They will traverse other chains too, but that doesn't likely matter for the scope of this question.



Also recall that iptables works on a first-dispositive-match basis: the first match which disposes of a packet (by accepting, rejecting or dropping it) also stops processing of the chain. So none of your INPUT rules after -A INPUT -j REJECT --reject-with icmp-host-prohibited have any effect.




With that said, the reason your samba connections are working is this INPUT rule: -m state --state RELATED,ESTABLISHED -j ACCEPT. When you connect to another samba host, conntrack records the connection state and this rule accepts the return traffic. I suspect you would find, if you tried to serve something from this box, that nobody could access it.
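If you did want to serve SMB from this box, a sketch of the missing INPUT rules (inserted with -I so they land above the REJECT rule) would be:

iptables -I INPUT -p udp --dport 137 -j ACCEPT
iptables -I INPUT -p udp --dport 138 -j ACCEPT
iptables -I INPUT -p tcp --dport 139 -m state --state NEW -j ACCEPT
iptables -I INPUT -p tcp --dport 445 -m state --state NEW -j ACCEPT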


spam - Email from website form submission not being delivered due to SPF fail

I need a little guidance on a spam related email rejection our website is having. It is a Wordpress site hosted with WPengine. As far as I'm aware it's using the default PHP mailer.



I've configured an SPF record for our website's domain that allows the WPengine IPs by way of include:wpengine.com. This in turn has its own IPs and includes that add Google SPF entries, SendGrid SPF entries and so on.
There are also a number of other IPs and includes in our record, but despite the fail from http://www.kitterman.com/spf/validate.html and the warning about too many lookups from MXToolbox, the SPF is valid.
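For what it's worth, the nested includes can be unrolled from the command line; RFC 7208 caps an SPF evaluation at 10 DNS lookups (which is what the MXToolbox warning refers to), and receivers that enforce the cap treat the record as a permanent error. A sketch, using the domains mentioned here:

dig +short TXT ourdomain.com.au
dig +short TXT wpengine.com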



We're receiving all form submissions from our site (I did add some Exchange message rules to make sure they are delivered into the inboxes rather than junk or clutter), and so are the other Exchange servers our various offices use on a different email domain. There is one problem office with a Kerio mail server that is rejecting form-submission emails from our website.
I know this because I've set up a "Mail User" on our Exchange server with an external forward to the Kerio mailbox the form submission is intended for.
Form-submission emails go from postmaster@ourdomain.com.au to MailUser@ourdomain.com.au (forwarded to recipient@theirdomain.com.au).
When I do a message trace on our Exchange, I see the following results:



[screenshot of the Exchange message trace results omitted]




Is this problem mine entirely? Or can I simply ask the administrator of this Kerio mail server to create some whitelist entries for our email domain, the WPengine site and the form submission email subject?

Friday, October 27, 2017

Properly configuring DNS for email sending on multi-domain hosting VPS




Background



I have a VPS, with one external IP hosting <10 domains (DOMAIN.TLD). Each domain receives and sends email. Each domain has associated DKIM / SPF / MX entries. The PTR record exists and is associated with the main domain (MAINDOMAIN.TLD) on the VPS.



Problems




  1. Mails end up in the spam folder (Yahoo) for some receivers and do not get received at all by others (Outlook). Gmail (and other mailboxes hosted at different providers like one.com) delivers to the inbox immediately.



  2. Not having a clear idea of how to configure each domain's DNS with regard to email.




What I've done so far



Initially, each domain had an A record (mail.domain.tld) and an MX record that pointed to the A record, but no PTR associated with the VPS IP (the not-yet-tested domains still have this setup). Email sending worked but I had Problem #1:



A     -> MAIL -> VPS_IP
MX -> 10 -> MAIL.DOMAIN.TLD.



After finding out about and setting up the PTR record (the lack of which is the main reason some email servers disregarded my emails, so they were never received), I considered pointing each domain's MX record to the domain the PTR resolves to (MAINDOMAIN.TLD -> VPS_IP). I tried using a CNAME to point, and then pointing the MX directly.



CNAME -> MAIL -> MAINDOMAIN.TLD.
MX -> 10 -> MAIL.DOMAIN.TLD.


and then



MX    -> 10   -> MAINDOMAIN.TLD.



In both cases, I had the same situation as described in Problem #1.



Questions




  1. What's wrong with the setup ?

  2. What's the best way to approach this - having all domains use MAINDOMAIN.TLD as their MX (via CNAME or directly?) or having each domain use its own domain as MX? (I think the 1st variant is the one to go for, because of the PTR record and the fact that I only have a single external IP address - but I'm not getting why it's not working)

  3. Are there any free and reliable (am I wanting too much?) external email providers that can handle email sending instead of me doing it myself?




Additional info that might be relevant




  • how do I know the DNS records are/were as described - using Linux CLI tools like host, dig and nslookup, plus https://mxtoolbox.com/

  • I'm using ISPConfig 3 as a hosting control panel

  • the VPS is from DigitalOcean, with DNS management done in the DigitalOcean dashboard

  • the SMTP server is postfix

  • my IP is not blacklisted - checked with https://mxtoolbox.com/SuperTool.aspx?action=blacklist and mail-blacklist-checker.online-domain-tools.com/

  • nothing relevant in /var/log/mail.log - it shows that emails are being sent, but nothing related to the problem
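For reference, the checks were along these lines (a sketch with the placeholder names used above):

dig +short MX domain.tld
dig +short A mail.domain.tld
dig +short -x VPS_IP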


Answer



So how does one become a good postmaster/hostmaster? So far, what I've read and finally applied was according to best practices - in this case I would appreciate you pointing me to the FM you are referring to.



For my questions 1) and 2) and for your suggestions :

1) I had corrected that before your answer; I was just explaining what I tried.
2) Yes, it existed and was resolvable.
3) The hostname in EHLO is resolvable and is the same as the server's hostname.
4) I tried a lot of text variants - that was not the problem.



For my 3rd question:
Free solutions would include ZohoMail and Yandex.
Paid solutions are many, but they really do not make sense from a financial perspective.



Conclusion

I had the correct config/DNS settings, but the problem is Microsoft's mail filtering and the fact that the domain had only just been created (affecting DNS propagation, plus filters that check the age of the domain).


Thursday, October 26, 2017

ESXi Server is not showing all the available disk space

I have installed ESXi server 4.1 on a host with "2398 GB" of disk space, but when I run the command df -h in the ESXi server terminal, it shows "vmfs3 180.5GB".



Could you please let me know how to resolve this issue?



Here is the solution for the above issue.




There seems to be a problem with the new version of VMware's ESXi, 4.1. Due to ESXi's automatic installation and disk partitioning, no advanced parameters can be given, e.g. to manually create vmfs3 partitions. Usually that works fine: the installation creates the system partitions (the hypervisor) and uses the rest of the disk for a local vmfs3 datastore.



The situation: I installed ESXi 4.1 on a Dell PE 2900 with 8 harddisks on a Raid-5, a total of 2.86 TB of diskspace. The integrated Raid Controller (PERC 5/i) shows the correct sum of diskspace and the installation of ESXi detects the logical harddisk correctly with a diskspace of 2.86 TB.



The problem: Once ESXi 4.1 was installed, a local VMFS datastore was created - with a size of 744 GB (on another server model it showed 877 GB) instead of using the whole diskspace. The maximum filesystem limit of VMFS3 is 2TB per LUN, and ESXi detects the local disk/partition as a LUN, so it should have created a 2TB vmfs datastore. But no.



The following instructions are advanced system commands. Do not do them on a production machine, only on a newly installed ESXi 4.1 machine which doesn't host any virtual machines yet.



First we have to find the correct disk, for which ESXi has given a special name. Use the following commands to find your local disk. Note that I have cut the full output, I only show the local disk information (the full output would also contain CD drive, attached iSCSI or SAN disks, etc.).

To run the following commands, you need to enable SSH. You can do this on the ESXi console in Troubleshooting.



This shows the disk name (naa.6001e4f01c94d50013d852397c7ef00d) and the LUN name (vmhba1:C2:T0:L0):



# esxcfg-mpath -b
naa.6001e4f01c94d50013d852397c7ef00d : Local DELL Disk (naa.6001e4f01c94d50013d852397c7ef00d)
vmhba1:C2:T0:L0 LUN:0 state:active Local HBA vmhba1 channel 2 target 0


With the following command we see even more information and what we need is also here: The device path which we will use as the local disk identifier in the next commands:




# esxcfg-scsidevs -l
naa.6001e4f01c94d50013d852397c7ef00d
Device Type: Direct-Access
Size: 3000704 MB
Display Name: Local DELL Disk (naa.6001e4f01c94d50013d852397c7ef00d)
Multipath Plugin: NMP
Console Device: /vmfs/devices/disks/naa.6001e4f01c94d50013d852397c7ef00d
Devfs Path: /vmfs/devices/disks/naa.6001e4f01c94d50013d852397c7ef00d
Vendor: DELL   Model: PERC 5/i   Revis: 1.03
SCSI Level: 5   Is Pseudo: false   Status: on
Is RDM Capable: false   Is Removable: false   Is Local: true
Other Names: vml.02000000006001e4f01c94d50013d852397c7ef00d504552432035
VAAI Status: unknown

Disk /vmfs/devices/disks/naa.6001e4f01c94d50013d852397c7ef00d: 3146.4 GB, 3146466197504 bytes
64 heads, 32 sectors/track, 3000704 cylinders, total 6145441792 sectors
Units = sectors of 1 * 512 = 512 bytes



Check your current partition table and note down the number of partition which is used for VMFS (by default it should be p3):



# fdisk -l
Device Boot Start End Blocks Id System
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp1 5 900 917504 5 Extended
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp2 901 4995 4193280 6 FAT16
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp3 4996 761728 774894592 fb VMFS
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp4 * 1 4 4080 4 FAT16 <32M
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp5 5 254 255984 6 FAT16
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp6 255 504 255984 6 FAT16

/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp7 505 614 112624 fc VMKcore
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp8 615 900 292848 6 FAT16


The next step is to delete the automatically created vmfs3 partition using the fdisk command:



fdisk -u /vmfs/devices/disks/naa.6001e4f01c94d50013d852397c7ef00d
Command (m for help): d
Partition number (1-8): 3
Command (m for help): w



Now we create a new partition and change its type to VMFS. When fdisk asks for the last sector (=size) of the new partition, we enter +2097152M (which is 2TB):



fdisk -u /vmfs/devices/disks/naa.6001e4f01c94d50013d852397c7ef00d
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
p

Selected partition 3
First sector (10229760-1850474495, default 10229760): 10229760
Last sector or +size or +sizeM or +sizeK (10229760-4294967294, default 4294967294): +2097152M

Command (m for help): t
Partition number (1-8): 3
Hex code (type L to list codes): fb
Changed system type of partition 3 to fb (VMFS)

Command (m for help): w



Now we check again the partition table to verify the changes:



# fdisk -l
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp1 5 900 917504 5 Extended
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp2 901 4995 4193280 6 FAT16
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp3 4996 2004996 2047999936+ fb VMFS
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp4 * 1 4 4080 4 FAT16 <32M
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp5 5 254 255984 6 FAT16

/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp6 255 504 255984 6 FAT16
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp7 505 614 112624 fc VMKcore
/dev/disks/naa.6001e4f01c94d50013d852397c7ef00dp8 615 900 292848 6 FAT16


Now the new partition has to be formatted as VMFS3. This can be done with the following command, where -b stands for the filesystem blocksize. Here I use 8M, which is currently the biggest blocksize and intended for big vmdk files. Note that the partition number has to be given, hence the :3 at the end:



# vmkfstools -C vmfs3 -b 8M -S datastore1 /dev/disks/naa.6001e4f01c94d50013d852397c7ef00d:3
Checking if remote hosts are using this device as a valid file system. This may take a few seconds...
Creating vmfs3 file system on "naa.6001e4f01c94d50013d852397c7ef00d:3" with blockSize 8388608 and volume label "datastore1".

Successfully created new volume: 4c45bc40-6aa5a458-e509-001e4f2a6fac


Congratulations, the new VMFS datastore with a size of 2TB has been created on your ESXi 4.1 machine.
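To double-check the result from the shell (a sketch - the volume label matches the one used above):

# vmkfstools -Ph /vmfs/volumes/datastore1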

linux - "fdisk -l" like list of partitions and their types for LVM logical volumes?

You know how "fdisk -l" lists drive partition tables and shows the partition id/types for each partition?



Is there a similar way to get the partition id for LVM logical volumes?



EDIT: I'm aware of "lvs", which is mostly what I'm looking for (it gives me the list of logical volumes, kind of like "fdisk -l"... except it would also be useful to know what the partition types of the logical volumes (which I like to think of as "virtual partitions") are. That info is what "fdisk -l" lists on the last two columns on the right. (Such as "8e" for a physical LVM partition, or "83" for Linux ext, etc.).



The tool I'm looking for may not be part of LVM; maybe just some other utility that can print partition ids/types given a partition?
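A partial workaround, not a full equivalent: LVs carry no partition id byte at all, so the closest analog is the filesystem signature on each LV, which blkid can print (a sketch with placeholder VG/LV names; running blkid with no arguments lists every block device and its detected TYPE):

blkid /dev/vg0/lv_root /dev/vg0/lv_data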

Monday, October 23, 2017

fastcgi - mod_fcgid, perl script output going to apache error_log

I'm trying to get an old Perl script running again after installing mod_fcgid. I had to install mod_fcgid for a new client, but it seems to have broken some of my other cgi scripts.



When going to the page, it's now a 500 error. I checked the error log, and the output from the script is in the error log... so the script is running, but for some reason it still delivers a 500 Internal Server Error to the browser...



The HTTP headers are the first thing the script prints... so I'm not really sure why this error is occurring.




The Error Log:




[omitted:html output]
[Wed Dec 08 08:59:18 2010] [warn] (104)Connection reset by peer: mod_fcgid: read data from fastcgi server error.
[Wed Dec 08 08:59:18 2010] [error] [client x.x.x.x] Premature end of script headers: www_protect.cgi, referer: http://www.mywebsite.net/
[Wed Dec 08 08:59:21 2010] [notice] mod_fcgid: process /www/sites/somescript.cgi(6747) exit(communication error), terminated by calling exit(), return code: 0



fcgi.conf:





AddHandler fcgid-script .fcgi .cgi
#SocketPath /var/lib/apache2/fcgid/sock
IPCConnectTimeout 45
IPCCommTimeout 20
OutputBufferSize 0
MaxRequestsPerProcess 500

IdleTimeout 3600
ProcessLifeTime 7200
MaxProcessCount 8
DefaultMaxClassProcessCount 2


# Sane place to put sockets and shared memory file
SocketPath /var/run/mod_fcgid
SharememPath /var/run/mod_fcgid/fcgid_shm
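One thing worth checking (an assumption based on the symptoms, not a confirmed diagnosis): mod_fcgid expects scripts to speak the FastCGI protocol, and the AddHandler line above routes plain .cgi scripts through it as well, which can produce exactly this "Premature end of script headers" / "read data from fastcgi server error" pattern. Keeping classic CGI scripts on mod_cgi would look like:

AddHandler cgi-script .cgi
AddHandler fcgid-script .fcgi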

Sunday, October 22, 2017

iis - Windows Server 2012 web server maxing out on application start - could antivirus be responsible?

We have a Windows Server 2012 R2 web server running IIS 8.5 hosting a number of ASP.NET applications each in their own app pool. The server was originally specced to cope with expected load, but since then the client has also insisted on installing McAfee antivirus. We've excluded the application directories from the on-demand scanning mechanism.



We're finding that when the applications start up for the first time we're seeing particularly high (too high) CPU load. The two processes that are hogging the CPU are alternately Visual C# Command Line Compiler (csc.exe) and McAfee On-Access Scanner Service (mcshield.exe).




I would expect csc.exe to be pretty high on CPU during initial compilation of a restarting ASP.NET application, but I'm concerned that McAfee is interfering and making this process take longer and hurt the CPU more. Has anyone had similar experience?



If so, are there any other specific directories that I should be excluding from the scan? Or is it more correct to be recommending against antivirus on web servers?



If not, then is there anything I can do to prevent such a load on CPU during application startup?
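For what it's worth, the usual suspects for csc.exe activity are the ASP.NET dynamic-compilation directories; a commonly cited exclusion list (a sketch - verify the paths against your framework version and McAfee's own guidance) is:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files
C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files
C:\inetpub\temp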

virtualhost - Apache 2 Multiple Named Virtual Hosts with one IP vhost



I have an instance running on AWS, with an Apache 2, two named domains and one ip.




I managed to configure apache with both domains (domain1.com and domain2.com).
The first domain's docroot points to /var/www/html/vh/domain1.com.
The second domain's docroot points to /var/www/html/vh/domain2.com.



This is ok.



The problem is, that I want to access /var/www/html using IP directly on the browser.



When I try to do that, I get the site hosted on domain1.com.




How can I do that??



Relevant lines from httpd.conf:



ServerName 9.9.9.9:80
DocumentRoot "/var/www/html"

NameVirtualHost *:80

<VirtualHost *:80>
DocumentRoot /var/www/html
</VirtualHost>

<VirtualHost *:80>
DocumentRoot /var/www/html/vh/domain1.com
ServerName domain1.com
DirectoryIndex index.php
</VirtualHost>

<VirtualHost *:80>
DocumentRoot /var/www/html/vh/domain2.com
ServerName domain2.com
DirectoryIndex index.php
</VirtualHost>



If I try to access :



http://9.9.9.9



I get the pages under /var/www/html/vh/domain1.com instead of the pages hosted at /var/www/html



What am I doing wrong?



Thx in advance!


Answer



Your NameVirtualHost and <VirtualHost> directives must match. This means that you must either change



NameVirtualHost *:80



to



NameVirtualHost 9.9.9.9:80


or else change each

<VirtualHost *:80>

to

<VirtualHost 9.9.9.9:80>
Also, I would recommend that the default virtualhost have the actual hostname of the server as ServerName, instead of the IP address. Since it will be default, it'll still be the one chosen when you connect using only the IP address.


Azure SQL to On-Premise SQL Server Data Transfer

I need to securely transfer data from a SQL Server in Microsoft Azure to a SQL Server on-premises. I have been researching different methods but I have not found the right way.



Following is the list of what I have considered.




  1. Encrypted SQL Server connection. How can I selectively enforce encryption on particular IP addresses? The source SQL Server services other clients that are not ready for enforced encryption.

  2. Azure Data Factory Integration Runtime. It looks like it is in alpha version and designed for full integration and not file sharing between independent parties.

  3. SSH Tunnel. What is the best way to set up an SSH server and client on Windows?

  4. File transfer using SFTP. This method is inefficient and does not allow for real time access.


  5. Any other method that you would suggest?

sudo - how to use xauth to run graphical application via other user on linux



My regular user account is, let's say, user1. I created a separate user2 for an X application that I would like to run while being logged into X as user1, but in a way that will prevent it from having read/write access to user1's data. I thought that I could use xauth and sudo/su from user1 to user2 to run this application. How do I do this? I'm not sure how to configure xauth.


Answer



I found something that works great for me on KDE



kdesu -u username /path/to/program
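For a desktop-agnostic variant using xauth directly, as the question asked (a sketch - user2 and the program path are placeholders):

# copy the current display's cookie into user2's .Xauthority, then run the app
xauth extract - "$DISPLAY" | sudo -u user2 xauth merge -
sudo -u user2 env DISPLAY="$DISPLAY" /path/to/program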


linux - cron job executing script not writing to file

I have a server running AIDE, and a cron job that executes a bash script and sends an email alert out. It is still a WIP, but I can't get the script to run properly. When the script is executed, my output file, defined here as /sbin/aide --check > /tmp/$AIDEOUT, is still an empty file. I even tried a simple /bin/echo "hello world" > /tmp/$AIDEOUT and it also doesn't seem to work. The /tmp/$AIDEOUT file remains empty.




However, if I run this script manually without using Cron, it runs fine.



Here is my bash script



#!/bin/bash

PATH=/sbin:/bin:/usr/sbin:/usr/bin

MYDATE=`date +%Y-%m-%d`
AIDEOUT="AIDE-${MYDATE}.txt"

MAIL_TO=
ALLMATCH='All files match AIDE database. Looks okay!'
MAIL_FROM=

/bin/touch /tmp/$AIDEOUT
/bin/chmod 755 /tmp/$AIDEOUT
#/bin/echo "Aide check `date`" > /tmp/$AIDEOUT
/sbin/aide --check > /tmp/$AIDEOUT

if ! grep -q "$ALLMATCH" /tmp/$AIDEOUT; then

/usr/bin/mailx -s "Daily AIDE report for $(hostname)-${ENVIRONMENT_NAME} ${AWS_REGION}" -r $MAILFROM $MAILTO < /tmp/$AIDEOUT
fi

#/bin/rm /tmp/$AIDEOUT

/sbin/aide --update
/usr/bin/mv /var/lib/aide/aide.db.gz /var/lib/aide/db_backup/aide.db.gz-$(date +"%m-%d-%y")
/usr/bin/mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz



my cronjob is defined in /etc/cron.d/aide
*/5 * * * * root /usr/local/etc/cron_aide2.sh



Thanks!
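Not an answer as such, but a standard first step for cron problems is to capture the script's stdout/stderr; a sketch of the amended cron.d line:

*/5 * * * * root /usr/local/etc/cron_aide2.sh >> /var/log/cron_aide2.log 2>&1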

How to deal with lots of requests for "\x80d\x01\x03\x01" in cpanel/apache?



I'm seeing a lot of these in the apache error log, from many different client IPs:



Invalid method in request \x80d\x01\x03\x01




with "lots" i mean several per second, constantly. None of these IPs are found in the regular apache logs, so only in the error log.



Is this something to worry about, and if so, how can I repair or protect against it?



I suppose I could just make fail2ban block the IPs but that seems a bit unnecessary when I don't know what's going on.



Edit: Apache is serving both regular HTTP (about 100 vhosts) and SSL HTTPS (4 vhosts).



# uname -a

Linux xxxx 2.6.18-371.3.1.el5PAE #1 SMP Thu Dec 5 13:29:20 EST 2013 i686 i686 i386 GNU/Linux

# /usr/local/cpanel/cpanel -V
11.38.2 (build 12)

# httpd -V
Server version: Apache/2.2.23 (Unix)
Server built: Jan 13 2013 07:13:59
Cpanel::Easy::Apache v3.16.6 rev9999
Server's Module Magic Number: 20051115:31

Server loaded: APR 1.4.6, APR-Util 1.4.1
Compiled using: APR 1.4.6, APR-Util 1.4.1
Architecture: 32-bit
Server MPM: Prefork
threaded: no
forked: yes (variable process count)
Server compiled with....
-D APACHE_MPM_DIR="server/mpm/prefork"
-D APR_HAS_SENDFILE
-D APR_HAS_MMAP

-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D APR_USE_SYSVSEM_SERIALIZE
-D APR_USE_PTHREAD_SERIALIZE
-D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
-D APR_HAS_OTHER_CHILD
-D AP_HAVE_RELIABLE_PIPED_LOGS
-D DYNAMIC_MODULE_LIMIT=128
-D HTTPD_ROOT="/usr/local/apache"
-D SUEXEC_BIN="/usr/local/apache/bin/suexec"
-D DEFAULT_PIDLOG="logs/httpd.pid"

-D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
-D DEFAULT_LOCKFILE="logs/accept.lock"
-D DEFAULT_ERRORLOG="logs/error_log"
-D AP_TYPES_CONFIG_FILE="conf/mime.types"
-D SERVER_CONFIG_FILE="conf/httpd.conf"

# httpd -l
Compiled in modules:
core.c
mod_authn_file.c

mod_authz_host.c
mod_authz_groupfile.c
mod_authz_user.c
mod_authz_default.c
mod_auth_basic.c
mod_include.c
mod_filter.c
mod_deflate.c
mod_log_config.c
mod_logio.c

mod_env.c
mod_expires.c
mod_headers.c
mod_unique_id.c
mod_setenvif.c
mod_version.c
mod_proxy.c
mod_proxy_connect.c
mod_proxy_ftp.c
mod_proxy_http.c

mod_proxy_scgi.c
mod_proxy_ajp.c
mod_proxy_balancer.c
mod_ssl.c
prefork.c
http_core.c
mod_mime.c
mod_status.c
mod_autoindex.c
mod_asis.c

mod_info.c
mod_suexec.c
mod_cgi.c
mod_negotiation.c
mod_dir.c
mod_actions.c
mod_userdir.c
mod_alias.c
mod_rewrite.c
mod_so.c


# httpd -M
Loaded Modules:
core_module (static)
authn_file_module (static)
authz_host_module (static)
authz_groupfile_module (static)
authz_user_module (static)
authz_default_module (static)
auth_basic_module (static)

include_module (static)
filter_module (static)
deflate_module (static)
log_config_module (static)
logio_module (static)
env_module (static)
expires_module (static)
headers_module (static)
unique_id_module (static)
setenvif_module (static)

version_module (static)
proxy_module (static)
proxy_connect_module (static)
proxy_ftp_module (static)
proxy_http_module (static)
proxy_scgi_module (static)
proxy_ajp_module (static)
proxy_balancer_module (static)
ssl_module (static)
mpm_prefork_module (static)

http_module (static)
mime_module (static)
status_module (static)
autoindex_module (static)
asis_module (static)
info_module (static)
suexec_module (static)
cgi_module (static)
negotiation_module (static)
dir_module (static)

actions_module (static)
userdir_module (static)
alias_module (static)
rewrite_module (static)
so_module (static)
auth_passthrough_module (shared)
bwlimited_module (shared)
frontpage_module (shared)
security2_module (shared)
Syntax OK


Answer



That error signifies that clients are attempting to speak SSL/TLS to a listener that is not actually running SSL.



This might be an error in configuration (missing an SSLEngine On for a virtual host that's intended to be SSL enabled, or is listening on port 443?). Or it might just be a case of some wacky user trying to access https://example.com:80.



Unfortunately, the error doesn't provide any hints on which listener got the request - the best thing to do is to go through your configuration and make sure that all the listeners that are supposed to have SSL are speaking it correctly.
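A quick way to start that audit (a sketch - the conf path matches the cPanel build shown above) is to list every Listen and SSLEngine directive side by side and confirm each port-443 listener has a matching SSLEngine on:

grep -rn -e "Listen" -e "SSLEngine" /usr/local/apache/conf/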


Saturday, October 21, 2017

Apache with mod_perl eating memory when idle

An Apache webserver running a mod_perl application is exhibiting abnormal memory usage - after the "day load" ceases, the system's memory is exhausted by the Apache processes and oom_killer is invoked. As the load returns the following morning, the memory usage normalizes - probably because Apache workers get recycled periodically if a sufficient number of hits is generated:




[graph of system memory statistics omitted]



This is the graph for apache hits per second to correlate:
[graph of apache hits per second omitted]



The remaining 2 hits per second throughout the night are induced by HAProxy checks - it runs HEAD http://mydomain.example.com/running HTTP/1.0 requests against the server every half a second with "running" being a static file (i.e. not invoking any perl code). It also seems that disabling these checks remedies the memory usage problem, but obviously cannot be a solution.



All 3 similarly configured servers (behind HAProxy) exhibit this behavior. The running OS is Ubuntu 10.10, Apache version 2.2.16. This looks like a memory leak, but I have no idea how to start debugging it - any hints?

Thursday, October 19, 2017

linux - Extremely slow disk speeds Centos 6




Setup looks as follows:




  • HP Proliant DL380 G7

  • 6 x 3TB Sata drives (surveillance level) configured with hardware RAID 1+0 with the SATA controller on board. Model is Seagate SV35

  • 192GB RAM



VMware ESXi 6.0





  • One VM guest running Centos 6.7 (Kernel 2.6.32-573)



Datastore is made up of all the remaining disk space after the ESXi-installation (little less than 8tb)




  • 1 VMDK file for the system partition at 100GB

  • 1 VMDK file for the data partition at around 7.7TB




On the guest CentOS, the system partition is LVM ext4
The data partition is a LVM with a single PV, LV and VG ext4



Now the problem I have is that data transfer speeds on the disk are extremely slow. Copying a semi-large file (10-30 GB) from one place on the LVM to another starts out with a transfer rate of around 240MB/s, which is the speed I'd expect, but after just a few seconds (30ish usually) it drops down to 1-4 MB/s, and iotop shows a process called flush-253:2 starting up, which seems to slow everything down.



I've been using rsync --progress to get a better picture of the transfer speeds in real time, but I'm seeing the same result with a cp operation.



When it finally finishes, I have tried performing the same procedure again with the same file to the same location. The second time, the transfer speed indicated by rsync holds steady at around 240MB/s throughout the whole transfer, but when rsync indicates the transfer is complete, it hangs in that state for about as long as it took to complete the first copy. I can see the flush-253:2 process working just as hard during both procedures.



Now I know the setup isn't optimal, and I would have preferred to have a separate disk for the ESXi system, but I don't feel like that should be the cause of this extreme slow transfer rates.



I've searched for information regarding the flush-process, and as far as I can tell, it basically writes data from memory on to the actual disks, but I haven't found anyone saying they've experienced this level of slow transfer rates. The system is not in production yet, and CPU is hardly even running at all, and it has around 100GB of free memory to use when the copy procedures run.



Does anyone have any idea on what to try? I've seen similar results on a different system which is basically setup the same way, except on completely different (somewhat lesser) hardware. I have also a third system running CentOS 5 and ext3 on LVM, which does not have any issues like this.




EDIT 1:
I realize now I had remembered incorrectly; the system partition is also LVM, but still a separate volume from the data partition.



[root@server /]# mount
/dev/mapper/vg1-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)

/dev/mapper/vg1-lv_home on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/vg_8tb-lv_8tb on /datavolume type ext4 (rw,nobarrier)


[root@server /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_1-lv_root
50G 9.7G 37G 21% /
tmpfs 91G 0 91G 0% /dev/shm

/dev/sda1 477M 52M 400M 12% /boot
/dev/mapper/vg_1-lv_home
45G 52M 43G 1% /home
/dev/mapper/vg_8tb-lv_8tb
7.9T 439G 7.1T 6% /datavolume


Update 1: I have tried increasing the dirty_ratio all the way up to 90, and still saw no improvements. I also tried mounting with -o nobarrier, and still got the same result.



Update 2:

Sorry to everyone who is trying to help me for the confusion; now that I've had a look myself, the hardware is actually an HP ProLiant 380 G7. I don't know if that makes any difference.



I have also had a look myself at the raid configuration, and it seems we're using a P410 raid controller, and when I'm booting into the raid management, it says



HP Smart array (I think) P410 "SOMETHING", with 0MB in parenthesis


I'm guessing that might mean we have 0MB in write cache?



I'm a little out of my depth here when it comes to hardware. Can you add a write cache module(?) to this raid controller if one doesn't already exist?

Or do you need a new controller, or a move to a SAN?
How can I tell if it has a write cache but the battery is perhaps dead?



Update 3:
Thanks to your suggestions and some further research, I'm now going to try to install the HP Smart Array driver vib file in ESXi, and hopefully get a clearer picture of what I have. I also found the option in the system BIOS to enable drive cache, so I have a last resort in case it turns out we don't have write cache on the controller.



Update 4 (solved):
Thanks to all who suggested solutions, and yes it turned out there was no cache module present on the disk controller.



To anyone having similar problems: I installed the hpssacli utility VIB for ESXi, and with the following output could confirm what had been suggested in the replies:

Cache Board Present: False




Smart Array P410i in Slot 0 (Embedded)
Bus Interface: PCI
Slot: 0
Serial Number:
Controller Status: OK
Hardware Revision: C
Firmware Version: 6.62
Rebuild Priority: Medium
Surface Scan Delay: 15 secs

Surface Scan Mode: Idle
Parallel Surface Scan Supported: No
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 0 secs
Cache Board Present: False
Drive Write Cache: Disabled
Total Cache Size: 0 MB
SATA NCQ Supported: True
Number of Ports: 2 Internal only

Driver Name: HP HPSA
Driver Version: 5.5.0
PCI Address (Domain:Bus:Device.Function): 0000:05:00.0
Host Serial Number:
Sanitize Erase Supported: False
Primary Boot Volume: logicaldrive 1
Secondary Boot Volume: None
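For reference, the listing above comes from an invocation along these lines (a sketch; the install path and exact syntax can vary by hpssacli version):

/opt/hp/hpssacli/bin/hpssacli ctrl all show detail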

Answer



It doesn't appear as though you have any write cache.




Please confirm the generation and model of your server. If you don't have a Flash-backed write cache module (FBWC) on the controller that your disks are attached to, your VMware performance will suffer.



The other issue here is LVM and some of the defaults that appeared in RHEL6 a few years ago. You'll want to try disabling write barriers. LVM can be an issue because it leads people to avoid partitioning their volumes... and that impacts the ability of tools like tuned-adm to do their job.



I asked for the output of mount. Can you please post it?



Try mounting your volumes with the nobarrier flag. Write barriers are the default for EL6 on ext4, so that's the biggest problem you're running into.
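A sketch of that, matching the /datavolume mount from the question:

mount -o remount,nobarrier /datavolume

# and persist it by carrying nobarrier in the options column of /etc/fstab:
/dev/mapper/vg_8tb-lv_8tb  /datavolume  ext4  defaults,nobarrier  0 0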


Tuesday, October 17, 2017

windows server 2008 - Win2008/IIS7/fx2.0 - 500.19 error



I installed new boxes at the beginning of the week.
1) Web Server on Win2008 x64, IIS 7 + all updates
2) DB Server on Win2008 x64, SQL 2008 Ent + all updates



I configured my websites, set up host headers and DNS entries, worked through some problems on my handlers and finally got it all running Wednesday morning. Our team has been using it since then. This morning I came in and everyone of us is getting a 500 error.








Error Summary
HTTP Error 500.19 - Internal Server Error
The requested page cannot be accessed because the related configuration data for the page is invalid.
Detailed Error Information
Module IIS Web Core
Notification Unknown
Handler Not yet determined
Error Code 0x80070005
Config Error Cannot read configuration file due to insufficient permissions
Config File \\?\C:\RivWorks\dev\web.config



Requested URL http://dev.rivworks.com:80/login.aspx



Physical Path
Logon Method Not yet determined
Logon User Not yet determined
Config Source
-1:
0:
Links and More InformationThis error occurs when there is a problem reading the configuration file for the Web server or Web application. In some cases, the event logs may contain more information about what caused this error.








I've gone through the KB articles, made sure IIS_IUSRS had read permissions, and am now stumped. What bothers me is that IIS is looking in \\?\C:\ instead of just C:\. What is happening?



TIA



NOTE:
I've gone through and reconfigured everything on my web site. The AppPool is using NetworkService. NetworkService has been granted R/W permissions on all directories from the web root on down. I've restarted my web site as well as issuing an IISRESET. I am now getting a 401.3 error when I go to the URL with no page (http://dev.rivworks.com/). If I put a page in there - including what is already listed as the default page in IIS settings (http://dev.rivworks.com/default.aspx) - it works, but CSS does not render. This is true whether I am directly on the server or on any client machine within our network. I am seriously stumped at the moment.


Answer



What changed since the beginning of the week? Does your team consist of programmers actively developing the site you are hosting?



Did you grant IUSRS read permission or was it already there? If you added it, did you restart IIS afterwards?




You could try running the Process Monitor tool, reproduce the error and look for “Access Denied” in the “Result” column. You can then configure the required permissions accordingly.



Does the application pool's user also have read access to C:\RivWorks\dev? You might want to replace the permissions on all child objects within RivWorks\dev after verifying (Folder properties, Security Tab, Advanced)
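From an elevated command line, the equivalent grant (a sketch - account and path as discussed above) would be:

icacls C:\RivWorks\dev /grant "IIS_IUSRS:(OI)(CI)R" /T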


Expand Raid 5 Array on a HP ML350 G6 with ESXi 4.0 and P410i Controller

I have to increase a 270GB RAID 5 array (3x 146GB SAS) with an additional 146GB SAS hard disk.



On the server runs VMware ESXi 4.0 with two Win 2008 virtual machines.
The server (HP ProLiant ML350 G6) has a Smart Array P410i controller with firmware 1.62.



Can I simply expand the RAID 5 array with the HP SmartStart CD and the HP Array Configuration Utility (ACU)?
Is there anything else to do after rebooting?

Monday, October 16, 2017

linux - How can I add a single SAS drive to a Perc H700 on Dell R310 server WITHOUT losing the data on it?

On a Dell R310 server with Perc H700 RAID controller.



I have 1 virtual disk configured as Raid 1 but with only 1 physical SAS disk attached and the other missing. This virtual disk has been configured with fresh CentOS 6.9 and boots normally.



I now have another physical SAS disk containing some recovered data. Can I create a second virtual disk, again containing only 1 disk, WITHOUT losing the data on it, and then simply mount it in the OS? I am afraid it will initialize and erase the disk if I do that. How can this be accomplished? We don't have other options for reading the SAS drive. I am not able to find any guides relating to this scenario.




Reading the manual, it seems I should be able to create a VD and then NOT initialize it. Would this make it accessible to the OS (e.g. would it allow Linux to create a /dev/sdX device)?

Sunday, October 15, 2017

SSL certificate paths in a virtual host



I've recently purchased a wildcard SSL certificate for my domain, generated the CSR, and everything has been sent through OK.




My question is quite straightforward, but following this - http://www.globalsign.com/support/install/install_apache.php - I can't make any sense of what to match to what.



Basically - I have 5 files:




- gs_intermediate_ca.crt
- gs_root.pem
- mydomain.com.crt
- intermediate.pem

- *.mydomain.com.key


The Values:




SSLCACertificateFile = ?
SSLCertificateChainFile = ?
SSLCertificateFile = mydomain.com.crt
SSLCertificateKeyFile = ?



I'm new to this, any help would be greatly appreciated! Thanks



Edit >>
Using the Answers below! Cheers,



I'm now receiving the following errors:





[error] Init: Unable to read server certificate from file /etc/apache2/domain.ssl/domain.ssl.crt/domain.com.crt
[error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
[error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error


My vHost now looks like so:




SSLCertificateFile /etc/apache2/domain.ssl/domain.ssl.crt/domain.com.crt
SSLCertificateKeyFile /etc/apache2/domain.ssl/domain.ssl.key/domain.com.key

SSLCertificateChainFile /etc/apache2/domain.ssl/ca.crt
SSLCACertificateFile /etc/apache2/domain.ssl/gs_intermediate_ca.crt


Any idea where these errors can be coming from - is there a check I can run on the .crt file?



Kind regards


Answer



That doc is definitely confusing. My guess:




SSLCACertificateFile = /path/to/gs_intermediate_ca.crt
SSLCertificateChainFile = /path/to/chain_file
SSLCertificateFile = /path/to/mydomain.com.crt
SSLCertificateKeyFile = /path/to/mydomain.com.wildcard.key


You should put all files outside the DocumentRoot and protect them with ownership/permissions. (I usually store certs in /etc/apache2/ssl and set ownership to root:root, permissions to 400.)



EDIT: You should download a combined chain ("bundle") file here:
http://www.globalsign.com/support/intermediate-root-install.php




Scroll to GlobalSign Root Bundle Certificates.
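As for the "is there a check I can run on the .crt file" question above: the ASN.1 "wrong tag" errors usually mean the file is not PEM at all (e.g. it is DER-encoded or has stray bytes before the BEGIN line). A sketch of inspecting and, if needed, converting it:

openssl x509 -in /etc/apache2/domain.ssl/domain.ssl.crt/domain.com.crt -noout -text
openssl x509 -inform DER -in domain.com.crt -out domain.com.pem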


Load and performance testing for webapps with JavaScript support

Years ago I used OpenSTA to perform load and performance tests for web applications. I remember that it offered great recording capabilities, which enabled us to quickly create new test scripts. Unfortunately it's a bit outdated, hence I'm a bit skeptical about whether it still works correctly with today's browsers.



Please let me know which tools you recommend. Free tools are clearly preferred ;)



Note: The "to be tested" app is served over HTTP and uses jQuery and CSS.

Saturday, October 14, 2017

apache 2.2 - Heartbleed, which specific services must be restarted?

Trying to figure out exactly what services should be restarted after patching openssl against Heartbleed. At least one post mentions restarting:




sshd, apache, nginx, postfix, dovecot, courier, pure-ftpd, bind, mysql






  • Is there a command that can be run to see what running services are
    dependent on openssl?

  • Is there a command to run against apache/nginx to see if the patch is active so the service doesn't need to be restarted?

  • Should we just schedule downtime and reboot
    every server entirely?




EDIT: This post suggests using: lsof -n | grep ssl | grep DEL to display processes still using the old version of OpenSSL marked for deletion
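Building on that, a sketch that reduces the lsof output to a unique list of process names still mapping the deleted library (and therefore needing a restart):

lsof -n | grep ssl | grep DEL | awk '{print $1}' | sort -u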

apache 2.2 - mod_wsgi, apache2, and load average

I've got a server running several cherrypy apps on apache2 under mod_wsgi. We're seeing constantly fluctuating load average on a box that is not serving many requests. As far as I can tell, the box is under no real CPU load, has plenty of memory, there is very little network traffic and no disk I/O occurring. We are running 13 mod_wsgi daemon processes with 5 threads per process serving 5 different applications. These are very lightweight backend service applications that don't do much processing at all. I've checked just about everything I can think of as a cause of the load flapping and was wondering if anyone here has had experience with a similar problem. Any comments greatly appreciated.



Here's a trace of load averages over the course of about 5 minutes on a staging box serving 10s of requests per minute:



~ $ sar -q 5
Linux 2.6.32-305-ec2 01/27/2011 _i686_ (1 CPU)

04:18:37 AM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15
04:18:42 AM 0 257 1.52 1.90 1.89

04:18:47 AM 0 257 1.40 1.87 1.88
04:18:52 AM 0 257 1.28 1.84 1.87
04:18:57 AM 0 257 1.18 1.81 1.86
04:19:02 AM 0 257 1.17 1.79 1.85
04:19:07 AM 0 257 1.15 1.78 1.85
04:19:12 AM 0 257 1.14 1.77 1.84
04:19:17 AM 0 257 1.05 1.74 1.83
04:19:22 AM 0 257 0.96 1.71 1.82
04:19:27 AM 0 257 0.89 1.68 1.81
04:19:32 AM 0 256 0.82 1.65 1.80

04:19:37 AM 0 256 0.75 1.62 1.79
04:19:42 AM 0 256 0.69 1.60 1.78
04:19:47 AM 0 256 0.95 1.64 1.79
04:19:52 AM 0 256 1.20 1.67 1.81
04:19:57 AM 0 256 1.42 1.71 1.82
04:20:02 AM 0 256 1.31 1.68 1.81
04:20:07 AM 0 256 2.00 1.82 1.85
04:20:12 AM 0 256 2.64 1.96 1.89
04:20:17 AM 0 256 3.23 2.09 1.94
04:20:22 AM 0 256 2.97 2.06 1.93

04:20:27 AM 0 256 2.74 2.02 1.92
04:20:32 AM 0 256 2.52 1.99 1.91
04:20:37 AM 0 256 2.31 1.95 1.90
04:20:42 AM 0 256 2.13 1.92 1.89
04:20:47 AM 0 256 1.96 1.89 1.88
04:20:52 AM 0 256 1.80 1.86 1.87
04:20:57 AM 0 256 1.66 1.83 1.85
04:21:02 AM 0 256 1.52 1.80 1.84
04:21:07 AM 0 256 1.40 1.77 1.83
04:21:12 AM 0 256 1.29 1.74 1.82

04:21:17 AM 0 256 1.19 1.71 1.81
04:21:22 AM 0 256 1.09 1.68 1.80
04:21:27 AM 0 256 1.00 1.65 1.79
04:21:32 AM 0 256 0.92 1.62 1.78
04:21:37 AM 0 256 0.85 1.59 1.77
04:21:42 AM 0 256 0.78 1.57 1.77
04:21:47 AM 0 256 0.72 1.54 1.76
04:21:52 AM 0 256 0.98 1.58 1.77
04:21:57 AM 0 256 1.22 1.62 1.78
04:22:02 AM 0 256 1.44 1.66 1.79

04:22:07 AM 0 256 2.13 1.80 1.83
04:22:12 AM 0 256 2.76 1.93 1.88
04:22:17 AM 0 256 3.34 2.07 1.92
04:22:22 AM 0 256 3.87 2.20 1.96
04:22:27 AM 0 256 3.56 2.16 1.95
04:22:32 AM 0 256 3.28 2.13 1.94
04:22:37 AM 0 256 3.01 2.09 1.93
04:22:42 AM 0 256 2.77 2.06 1.92
04:22:47 AM 0 256 2.55 2.02 1.91
04:22:52 AM 0 256 2.34 1.99 1.90

04:22:57 AM 0 256 2.16 1.95 1.89
04:23:02 AM 0 256 1.98 1.92 1.88
04:23:07 AM 0 256 1.82 1.89 1.87
04:23:12 AM 0 256 1.68 1.86 1.86


and a top profile:



top - 04:38:57 up  1:17,  1 user,  load average: 2.55, 3.03, 2.46
Tasks: 78 total, 1 running, 77 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1741016k total, 946844k used, 794172k free, 63712k buffers
Swap: 917496k total, 0k used, 917496k free, 646064k cached


per request, apache conf of one service (they all pretty much look like this).



Listen 12800

WSGIScriptAlias / /var/www/services/tracking/tracking.wsgi

WSGIDaemonProcess tracking user=www-data group=www-data processes=3 threads=5 maximum-requests=1000 umask=0007
WSGIProcessGroup tracking
WSGIApplicationGroup tracking
WSGIPassAuthorization On

ErrorLog /var/log/apache2/tracking.error.log
CustomLog /var/log/apache2/tracking.access.log combined
LogLevel warn




We haven't really done any specific parameter tuning for mod_wsgi beyond what you see in this conf.
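For reference (a generic diagnostic, not from the original post): Linux load average counts tasks that are runnable (R) or in uninterruptible sleep (D), so load can climb even while the CPU sits at 100% idle. A quick way to catch the contributing processes in the act:

# sample the tasks currently in R or D state, the two states
# that feed into the Linux load average
ps -eo state,pid,user,cmd | awk '$1 ~ /^[RD]/'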

subnet - Cisco, How to do a subnetting scheme using VLSM and RIP-2?



I'm studying for my CCNA exam and I have to create a VLSM scheme using RIP-2 for the following requirements:
(this is an exercise)




  1. Use the class C network 192.168.1.0 for your point-to-point connections



  2. Using the Class A network 10.0.0.0, plan for the following number of hosts in each location:
    New York: 1000
    Chicago: 500
    Los Angeles: 1000


  3. On the LAN and point-to-point connections, select subnet masks that use the smallest ranges of IP addresses possible given the above requirements.


  4. In all cases, use the lowest possible subnet numbers. Subnet zero is allowed.




My guess is the following:




New York: S0/0 192.168.1.1 /24 Fa0/0 10.1.0.1 netmask 255.255.248.0 - because we need 1000 hosts
Chicago: S0/0 192.168.1.2 /24 Fa0/0 10.2.0.1 netmask 255.255.252.0 (for 500 hosts)
Los Angeles: S0/0 192.168.2.3 /24 Fa0/0 10.3.0.1 netmask 255.255.248.0 (for 1000 hosts)



Is this a good configuration? I'm reading the CCNA book, but not everything is very clear, so I decided to do some exercises...



Thank you!


Answer



Here is my answer.
Assign the following IP addresses to the corresponding interfaces, taking the point-to-point links as /30 subnets from:



192.168.1.0 /30




Chicago
office network 10.0.8.1/23
192.168.1.5 int to LA
192.168.1.2 int to NY



NY
office network 10.0.0.1/22
192.168.1.1 int to CH
192.168.1.10 int to LA



LA
office network 10.0.4.1/22
192.168.1.6 int to CH
192.168.1.9 int to NY



Next, enable RIP-2 on each router for the 192.168.1.0 and 10.0.0.0 networks, and then disable auto-summarization on each router, as sketched below.
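A minimal IOS sketch of that step, entered on each of the three routers:

router rip
 version 2
 network 10.0.0.0
 network 192.168.1.0
 no auto-summary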


apache 2.2 - Setting up SSL for phpMyAdmin



I would like to run phpmyadmin using my SSL certificate.



I read that if I placed the following within the file: /etc/phpmyadmin/config.inc.php, it would force it to use SSL. And now it does...




$cfg['ForceSSL'] =true;


However, my issue is when I did this, now I get an error stating "cannot connect to server."



For one, a port scan shows that my port 443 is closed, yet I connect via https:// to my secure web-based email admin panel, which tells me that may not be the issue. Second, I have an SSL certificate I purchased but am not sure how to apply it: mydomain.com.crt is sitting on my desktop, so how should I be utilizing it?



I remember creating a self signed cert for my web-email access. Do I have to do this for phpmyadmin as well? At least this way, since I am the only one who will ever access the DB, it will never expire.




Also, phpMyAdmin used to come up at http://mydomain/phpmyadmin/; however, I do not have any pages on my website that require https://.


Answer



Ok I found the answer to this question.




  1. I had to turn on ssl by typing in on command line: sudo a2enmod ssl

  2. Then type in sudo a2ensite default-ssl

  3. Edit the /etc/apache2/sites-available/default-ssl file and point it at the site I want SSL to run on; in my case that was /var/www/mydomain/phpmyadmin. Then make sure the following directives are enabled in the file (using your own location for the certificate). Use only one certificate for the server to avoid conflicts (I copied the same cert I am using for my webmail to another directory and used that path/those files):




SSLEngine on
SSLOptions +StrictRequire
SSLCertificateFile /etc/ssl/myfolder/mycert.crt
SSLCertificateKeyFile /etc/ssl/myfolder/mycert.key



Then restarted apache2 with sudo /etc/init.d/apache2 restart




Then it worked. One thing to add: I am using my own self-signed certificate, so anyone who visits my site via https:// will get a message stating that the certificate is not signed and cannot be trusted. Once I decide to move to a valid certificate, I will need to figure out how to port over the 5-year SSL cert I purchased from GoDaddy (which I have already turned from a CSR into a .crt file for my new server). That is the next thing I need to do.



Additional resource: https://help.ubuntu.com/8.04/serverguide/C/httpd.html is a good page where I read up on getting this to work.


Friday, October 13, 2017

ssh publickey permission denied only from a particular host

Our lab's compute cluster has a two-interface 'gateway' machine which we use to access the cluster nodes. Call this gateway1.publicdomain.com. Normally I access this machine from my laptop, laptop.anydomain.com like this:



ssh joe@gateway1.publicdomain.com



I have set up a public key in .ssh/id_rsa.pub on laptop, and copied that to .ssh/authorized_keys on gateway1. Ordinarily this works fine.



Today I am using a public access point rather than my usual work connection. When I do



ssh joe@gateway1.publicdomain.com




I get the response:



Permission denied (publickey,gssapi-with-mic).



Apparently it won't accept my id_rsa credentials (Problem 1) and I am not prompted for a password (Problem 2) even though ordinarily when I log in from a previously unknown host I am prompted for a password.



I am still able to ssh to gateway1 from another machine (call it otherhost.otherdomain.com) without problem, either with password or (after setting up the relevant id_rsa* files) with publickey authentication. I can also log into otherhost itself using publickey credentials from laptop, so I know there's nothing fundamentally broken about laptop's ssh setup.



Finally, even when I delete my public key from .ssh/authorized_keys on gateway1, I still get the same "Permission denied" message and no password prompt.




So I guess my question is, what can cause gateway1 to reject my publickey credentials from my laptop, and prevent password login, but not from another host? I have confirmed that the id_rsa.pub on laptop and authorized_keys on gateway1 are in sync.
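One way to see what is happening in a case like this is the verbose client output, which shows each key the client offers and the authentication methods the server advertises (a generic diagnostic, not part of the original exchange):

# -vvv prints each key offered and the server's allowed auth methods
ssh -vvv joe@gateway1.publicdomain.com 2>&1 | \
  grep -E 'Offering|Authentications that can continue|denied'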



EDIT: I haven't been able to duplicate the problem since I originally posted, because it only happened when I was connected to a particular wireless access point (not belonging to me or to my lab). I still don't know how this could occur.

ubuntu - FQDN, DNS, Hosts, postfix



Let's say I bought this domain: example.com
The name servers under the DNS manager all point to example.com,
and that name server entry (example.com) in turn points to an IP.

Also, note that the hostname unique has an A record pointing to said IP.



My first question: in /etc/hosts, how should the FQDN line look?




  • xx.xx.xxx.xxx unique.example.com unique OR

  • xx.xx.xxx.xxx example.com example OR

  • xx.xx.xxx.xxx unique unique




Both have DNS records and both represent the same IP. In the first case, unique is the hostname and the FQDN is unique.example.com; in the second, example is the hostname and example.com is the FQDN.



So, given that an FQDN has to represent the server name, which one is correct?



Given all that, when I try to install Postfix and it asks for the FQDN, I just don't know what to write, as both options seem valid.



However, if I write unique.example.com then my emails become user@unique.example.com which is not what I would expect.



Context: Ubuntu 14.04 VPS with Webmin/Virtualmin. I accidentally installed the sendmail package and my whole Postfix/Virtualmin setup is no longer working! So this question comes from me trying to install Postfix back.


Answer




unique.example.com is the correct FQDN, where unique denotes the hostname and example.com is the parent domain.



example is not a hostname here, and com is a TLD (Top Level Domain), so example.com is not an FQDN for your server; as such it is not valid and is not the same as unique.example.com. Read details here: What is a fully qualified domain name (FQDN)?




A fully qualified domain name (FQDN) is the complete domain name for a
specific computer, or host, on the Internet. The FQDN consists of two
parts: the hostname and the domain name. For example, an FQDN for a
hypothetical mail server might be mymail.somecollege.edu. The hostname
is mymail, and the host is located within the domain somecollege.edu.





When you enter unique.example.com as the FQDN, it will be used to configure myhostname and mydestination in /etc/postfix/main.cf, so there is nothing to worry about.
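A sketch of how the pieces line up (the mydestination line mirrors what a stock Debian/Ubuntu Postfix install typically generates, and the myorigin suggestion is the usual way to get bare-domain sender addresses; it is not from the original answer):

# /etc/hosts
xx.xx.xxx.xxx   unique.example.com   unique

# /etc/postfix/main.cf (relevant lines)
myhostname = unique.example.com
mydestination = unique.example.com, example.com, localhost.localdomain, localhost

# to send mail as user@example.com rather than user@unique.example.com:
myorigin = example.com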


raid - Are consumer class hard disks okay for zfs?

I just recently bought a new server an HP DL380 G6. I replaced the stock smart array p410 controller with an LSI 9211-8i.



My plan is use ZFS as the underlying storage for XEN which will run on the same baremetal.



I have been told that you can use SATA disks with the Smart Array controllers, but that because consumer drives lack TLER, CCTL and ERC it's not recommended. Is this the case?



I was wondering: with the LSI controller in JBOD (RAID pass-through) mode, does the kind of disk I use really have as much of an impact as it would with the Smart Array controller?




I am aware that using a RAID system not backed by a write cache for virtualization is bad for performance. But I was considering adding an SSD for ZFS. Would that make any difference?



The reason I am so obsessed with using ZFS is dedup and compression. I don't think the Smart Array controller can do either of those.
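For what it's worth, the SSD idea maps onto ZFS's log and cache devices. A hypothetical sketch (device names invented; the pool layout is only an example):

# pool built from the pass-through disks
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# SSD partitions: ZIL/SLOG to absorb synchronous VM writes,
# and L2ARC as a read cache
zpool add tank log sdg1
zpool add tank cache sdg2

# the features the question is after
zfs set compression=on tank
zfs set dedup=on tank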

Thursday, October 12, 2017

apache 2.2 - MySQL 4.0, PHP 5.3 and Server 2008



Right off the bat I know that these two versions don't play nicely together but oddly enough XAMPP somehow allows it to happen. We have a machine running XAMPP that somehow (beyond me) allows PHP 5.3.1 to talk to a MySQL 4.0.18 database. We tried to duplicate this setup (minus XAMPP) and install the same version of Apache, MySQL and PHP on a new machine but logically we are stuck trying to get around the following error message:




Warning: mysql_connect(): Connecting to 3.22, 3.23 & 4.0 is not supported. Server is 4.0.18-max-debug in C:\Program Files\Apache Software Foundation\Apache2.2\htdocs\nktest.php on line 3

Warning: mysql_connect(): Connecting to 3.22, 3.23 & 4.0 servers is not supported in C:\Program Files\Apache Software Foundation\Apache2.2\htdocs\nktest.php on line 3

Failed: Connecting to 3.22, 3.23 & 4.0 servers is not supported





Unfortunately we are stuck with MySQL 4.0.18 because the application we run on that box requires it and ships with a pre-built version. We also need PHP 5.3 because another package depends on it, hence we are stuck. I have tried to figure out what XAMPP is doing behind the scenes to make all of this work, but I can't make sense of it.



In short, does anybody know of a way to enable connectivity to a MySQL 4.0.18 database w/ PHP 5.3.1 on a Windows Server 2008 machine? The application needs access to all standard MySQL library functions, i.e. mysql_connect() so using strictly mysqli is not an option.


Answer



We ended up getting this working by doing two things:



1) Compile PHP 5.3.1 on Windows w/ the --disable-mysqlnd flag
2) Omit mysqli from the compilation.




PHP 5.3.1 is now talking to MySQL 4.0.18, phew.


apache 2.2 - No SSL certificate error when configuring redirection in virtual host



My server was working fine with a self-signed SSL certificate until I added the following lines to redirect requests containing www to the non-www site:




<VirtualHost *:443>
    ServerName www.mydomain.com
    Redirect permanent / https://mydomain.com/
</VirtualHost>




The error I got is:




Server should be SSL-aware but has no certificate configured [Hint:
SSLCertificateFile] ((null):0)




I thought a simple redirection would not require SSL. What should be done to get this simple redirection to work?


Answer




The problem is that you cannot have multiple name-based virtual hosts on the same IP and port if you use SSL; that's a common problem with many different webservers.



The reason is in the network layers. HTTP sits on top of SSL, which means the SSL connection has to be established first, and only then is the HTTP request sent. But it is the HTTP request that decides which name-based virtual host should serve it, while SSL certificates can be specific to individual virtual hosts. So how could the SSL connection be established if the virtual host that will handle the request is not yet known at the time of the SSL handshake?



Other people have discussed this issue and suggested workarounds, such as putting the different virtual hosts on different IPs or ports; that solves the issue because the IP and port are known before the SSL connection has to be established:



NameBasedSSLVHosts



On top of this, your VirtualHost is missing the SSL-related directives such as SSLEngine on and the other SSL* directives. That is probably why you get this error: you configured a VirtualHost without SSL to listen on port 443, while another VirtualHost on port 443 has SSL enabled. For the reason described above, that can't work.
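In other words, a redirect-only vhost on port 443 still has to terminate SSL itself, so it needs its own certificate directives; roughly like this (certificate paths are assumptions):

<VirtualHost *:443>
    ServerName www.mydomain.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/mydomain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/mydomain.com.key
    Redirect permanent / https://mydomain.com/
</VirtualHost>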


Wednesday, October 11, 2017

ubuntu - who should own the web root of my server?




I read a book about security in web servers and I found this:




If your web server has the ability to write to the files in your
WordPress directories, then the automatic upgrade functionality works.
If not, WordPress prompts for your FTP credentials to update the files
for you. Both of these situations concern us. In general, your web
user should not have write permissions to your entire web root. This
is just asking for trouble, especially on a shared hosting platform;
realizing, of course, that certain directories such as the uploads
folder must be writable by the web user in order to function.




Professional Wordpress by Hal Stern



What I want to ask is: who is the web user of my server? I'm using Nginx and PHP5-FPM. The web root folder of my server is owned by raymond:raymond. Nginx is running as nginx:nginx, and PHP5-FPM's listen.owner and listen.group are both set to raymond.



The web root directory permissions are drwxr-xr-x, and my public_html is set the same way.




So how can I know if I'm in trouble with this setup?



BTW, I'm using Linode for my host! I'm not in a shared hosting environment. Thanks!


Answer



You've answered your own question.



The "web user" refers to the identity of the user running nginx - in this case, uid=nginx, gid=nginx.



As long as this user does not own or have write access to files that should not be modifiable (such as the above-mentioned WordPress files), you're fine.




EDIT: Unless PHP5-FPM is started as a separate process running as root, it cannot exercise more permissions than those granted on the web server.
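A rough sketch of that layout (paths hypothetical; substitute whichever user actually executes PHP: raymond in the asker's FPM pool, nginx or www-data in other setups):

# web root owned by your own (non-web) user, world-readable only
sudo chown -R raymond:raymond /home/raymond/public_html
sudo chmod -R 755 /home/raymond/public_html

# only the uploads directory becomes writable by the web/PHP user
sudo chown -R nginx:nginx /home/raymond/public_html/wp-content/uploads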


Multipart ranges in Nginx reverse proxy

I am trying to setup Nginx as a reverse proxy. The upstream server is serving some media files.




Because of the large number of requests for these files, and since the files are not going to change for at least a couple of weeks, I'm caching the upstream responses in Nginx and serving all subsequent requests from the cache.



proxy_cache_path /home/bandc/content levels=1:2 keys_zone=content_cache:10m max_size=10g inactive=15d use_temp_path=off;

upstream mycdn {
    server myserver.dev;
}

location / {
    proxy_cache content_cache;
    proxy_pass http://mycdn;
    proxy_cache_methods GET HEAD;
    proxy_cache_valid 200 302 7d;
    proxy_cache_valid 404 10m;
    add_header x-cache $upstream_cache_status;
    ....
}



The clients can send Range requests for large media files. The upstream server does support range requests.



The problem is that if I send a request with multiple byte ranges before any GET or single-range request has been made (i.e. before the response has been cached), Nginx delivers the whole file with 200 OK instead of the requested ranges with 206 Partial Content. But once the content has been cached, all multipart range requests work flawlessly.



I looked around a bit and found this blog post:




How Does NGINX Handle Byte Range Requests?
If the file is up‑to‑date in the cache, then NGINX honors a byte range request and serves only the specified bytes of the item to the client. If the file is not cached, or if it’s stale, NGINX downloads the entire file from the origin server. If the request is for a single byte range, NGINX sends that range to the client as soon as it is encountered in the download stream. If the request specifies multiple byte ranges within the same file, NGINX delivers the entire file to the client when the download completes.





Is there any way to ensure that, while the file is not yet cached, a multipart range request is served from the upstream alone (without caching), and then from the local cache once Nginx has cached the file via a GET or a single-byte-range request?
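For what it's worth, the NGINX documentation suggests the slice module (nginx 1.9.8+) for exactly this range-plus-cache situation: the file is cached in fixed-size segments and range responses are assembled from whichever segments exist. A sketch adapted to the config above (untested, in particular against multi-range clients):

location / {
    slice              1m;
    proxy_cache        content_cache;
    proxy_cache_key    $uri$is_args$args$slice_range;
    proxy_set_header   Range $slice_range;
    proxy_cache_valid  200 206 7d;
    proxy_pass         http://mycdn;
}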

Tuesday, October 10, 2017

amazon ec2 - Can't install PM2 services via Ansible on an Ubuntu EC2 instance

I'm using Ansible to create a pm2 service on an EC2 / Ubuntu instance. Below is the script. When I run it, PM2 is installed and the service is enabled. When I run pm2 list, I don't see the service, but I can grep for it (ps aux | grep node) and see that it's running. It also seems like a shadow copy of pm2 is running and loading the app, but I can't seem to control it.




- hosts: comm
  sudo: yes
  tasks:
    - npm: name=pm2 global=yes
    - name: configure pm2 to restart on startup
      shell: pm2 startup ubuntu >& /dev/null chdir=~/ executable=/bin/bash
      sudo: yes
      sudo_user: root
    - command: sudo env PATH=$PATH:/usr/bin pm2 startup ubuntu -u ubuntu
      sudo: yes
    - command: /usr/bin/pm2 save
    - command: /usr/bin/pm2 start /home/ubuntu/something/app.js --name something

dkim - Mails marked as spam in gmail for some accounts

I have a couple of domains running on my VPS with VirtualMin.



I have DKIM enabled, reverse DNS is setup and SPF records are added by Virtualmin.



Now I have a main account (admin) which I can easily use to send emails to gmail accounts with, they don't appear in the spam folder.




When I use an additional user from the same domain they ALWAYS go in to spam folder of Gmail.



I tried a couple of spam checkers and they all come back like this:



Main account:

Summary of Results
SPF check:          pass
DomainKeys check:   neutral
DKIM check:         pass
Sender-ID check:    pass
SpamAssassin check: ham

Another user from the same domain:

SPF check:          pass
DomainKeys check:   neutral
DKIM check:         pass
Sender-ID check:    pass
SpamAssassin check: ham


I cannot check Gmail's spam score or see why the messages are marked as spam, but are there things I can do? This happens with every domain on the Virtualmin installation.



I also found out that some businesses I wrote to didn't get my email, probably filtered, but when I wrote to them from the main mail address they received it. Exact same message, same Outlook 2013 client.
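A quick way to double-check what is actually published (the DKIM selector name below is an assumption; Virtualmin lets you choose it when enabling DKIM):

dig +short TXT example.com                    # SPF record
dig +short TXT 2017._domainkey.example.com    # DKIM public key (selector assumed)
dig +short -x xx.xx.xxx.xxx                   # PTR / reverse DNS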

apache 2.2 - Which are the required modules in httpd.conf for a Dedicated Server?



We have a dedicated server running on GoDaddy hosting Java web applications and a few WordPress blogs. The Java web applications run on Tomcat 5.5 + mod_jk + Apache 2. I'm trying to optimize the Apache server and I see there are several modules loaded. Please let me know which modules are required for running the above applications. The loaded modules list is:





LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule auth_digest_module modules/mod_auth_digest.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_alias_module modules/mod_authn_alias.so
LoadModule authn_anon_module modules/mod_authn_anon.so
LoadModule authn_dbm_module modules/mod_authn_dbm.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_owner_module modules/mod_authz_owner.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_dbm_module modules/mod_authz_dbm.so
LoadModule authz_default_module modules/mod_authz_default.so
LoadModule ldap_module modules/mod_ldap.so
LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
LoadModule include_module modules/mod_include.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule logio_module modules/mod_logio.so
LoadModule env_module modules/mod_env.so
LoadModule ext_filter_module modules/mod_ext_filter.so
LoadModule mime_magic_module modules/mod_mime_magic.so
LoadModule expires_module modules/mod_expires.so
LoadModule deflate_module modules/mod_deflate.so
LoadModule headers_module modules/mod_headers.so
LoadModule usertrack_module modules/mod_usertrack.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule mime_module modules/mod_mime.so
LoadModule dav_module modules/mod_dav.so
#LoadModule status_module modules/mod_status.so
LoadModule autoindex_module modules/mod_autoindex.so
#LoadModule info_module modules/mod_info.so
LoadModule dav_fs_module modules/mod_dav_fs.so
LoadModule vhost_alias_module modules/mod_vhost_alias.so
LoadModule negotiation_module modules/mod_negotiation.so
LoadModule dir_module modules/mod_dir.so
LoadModule actions_module modules/mod_actions.so
#LoadModule speling_module modules/mod_speling.so
#LoadModule userdir_module modules/mod_userdir.so
LoadModule alias_module modules/mod_alias.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule cache_module modules/mod_cache.so
LoadModule suexec_module modules/mod_suexec.so
LoadModule disk_cache_module modules/mod_disk_cache.so
LoadModule file_cache_module modules/mod_file_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so
#LoadModule cgi_module modules/mod_cgi.so

Answer




  • If you don't use LDAP support for anything, you can comment out every module mentioning LDAP.


  • If you don't use WebDAV, feel free to comment out mod_dav.


  • If you don't use Apache's caching abilities for storing stuff from your Tomcat, comment out all the cache modules.


  • If you don't use suexec for any CGI scripts, comment out that one.



  • If you don't use Server Side Includes (SSI), comment out mod_include


  • If you don't use basic HTTP authentication (the one doable with .htaccess throwing a popup to your browser window asking credentials), comment out all the auth modules.




Those were the easy ones. For every other module you'll need to test if they break anything. Of course, please test that hell didn't break loose after disabling the modules I suggested.
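Assuming none of the LDAP/WebDAV/caching/suexec/SSI features above are in use, the trimmed section of httpd.conf would start to look like this:

#LoadModule ldap_module modules/mod_ldap.so
#LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
#LoadModule include_module modules/mod_include.so
#LoadModule dav_module modules/mod_dav.so
#LoadModule dav_fs_module modules/mod_dav_fs.so
#LoadModule cache_module modules/mod_cache.so
#LoadModule disk_cache_module modules/mod_disk_cache.so
#LoadModule file_cache_module modules/mod_file_cache.so
#LoadModule mem_cache_module modules/mod_mem_cache.so
#LoadModule suexec_module modules/mod_suexec.so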


Monday, October 9, 2017

windows server 2003 - DHCP failing to update DNS



We have a Windows Server 2003 SP2 machine that is a domain controller, DNS server, and DHCP server. (We realize that having all three roles running on the same computer is not the optimal configuration but there are no other machines available.)



There is another DNS server on the network, but this particular server is listed as the primary for client workstations. All zones are Active Directory Integrated and are configured for secure dynamic updates.




The server is the only authorized DHCP server on the network. It is enabled for Dynamic DNS updates.



We've been experiencing some strangeness. Sometimes, client workstations will lose access to the Internet. The resolution appears to be manually changing the IP address to a different one. Moreover, bad address entries are starting to appear in DHCP. We've been manually deleting them, but they keep on appearing.



This lead us to believe the problem is caused by the DHCP server. I took a look at the audit logs for DHCP. The log showed a whole bunch of Event ID error code 31: DNS Update Failed



31,07/01/09,11:47:26,DNS Update Failed,10.0.1.107,TEST.private.local,-1,



After researching the issue, we found that if DHCP is installed on a domain controller that is also a DNS server, we should create a specific user account for dynamic DNS registration credentials. We did that but the errors are not stopping.




Any suggestions? Any help would be appreciated.


Answer



SUGGESTION: You need to make sure that reverse DNS is properly set up for the zone. I believe DNS auto-update populates both the forward (name-to-IP) and the reverse (IP-to-name) zones. If the reverse zone is not set up properly, the update can fail and give these errors. Bad reverse DNS can also trigger other strange behavior.
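If the reverse zone for the 10.0.1.x range turns out to be missing, it can be created as an AD-integrated zone from the command line on Server 2003 (zone name inferred from the 10.0.1.107 address in the log above):

dnscmd /ZoneAdd 1.0.10.in-addr.arpa /DsPrimary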



ALSO: It is not a problem to have the DC, DNS and DHCP on the same server unless the network is huge. However, you absolutely need to set up a second DC. Without a working DC, your network becomes a bunch of paperweights.


domain name system - Setting different NS records as authoritative on authoritative DNS

I have the DNS servers for a domain set to one set of authoritative DNS servers at the registrar. However, those DNS servers' zone file for the domain contains a different set of NS records. Some DNS servers merrily pass requests on to the NS servers set in the zone file; however, some others (such as Google's, Level 3's and OpenDNS' public DNS servers) aren't resolving the records properly. They return the proper NS records, but requests for A records at the sub-delegated DNS servers go unanswered. I have provided plenty of output below, but the gist of it is that requests aren't being referred to the NS records I set at QUICKROUTEDNS.COM for the domain, which point to Amazon's cloud DNS; instead the requests stop at QUICKROUTEDNS.COM. So how do I instruct DNS servers to continue their query on to Amazon, which is authoritative for the domain, without changing the DNS records at the registrar?



Here's an example:



The domain's DNS records at the registrar:



Name Server: NS1.QUICKROUTEDNS.COM
Name Server: NS2.QUICKROUTEDNS.COM
Name Server: NS3.QUICKROUTEDNS.COM



Pulling the NS records for the domain (the authoritative DNS, QUICKROUTEDNS.COM, has these servers set as the NS record):



$ host -t NS domain.com 
domain.com name server ns-1622.awsdns-10.co.uk.
domain.com name server ns-1387.awsdns-45.org.
domain.com name server ns-774.awsdns-32.net.
domain.com name server ns-48.awsdns-06.com.



An A record from the Amazon DNS servers hosting the domain:



$ host www.domain.com ns-1387.awsdns-45.org
Using domain server:
Name: ns-1387.awsdns-45.org.
Address: 205.251.197.107#53
Aliases:

www.domain.com has address 201.201.201.201



Yet, when I request it from any given nameserver:



$ host www.domain.com 8.8.8.8
Using domain server:
Name: 8.8.8.8
Address: 8.8.8.8#53
Aliases:

Host www.domain.com not found: 3(NXDOMAIN)



This is consistent amongst almost every DNS server, although there are a FEW that will report the A record as expected.



Here is a dig +trace output when trying to pull the A record:



$ dig @8.8.8.8 www.domain.com A +trace                                                                         

; <<>> DiG 9.8.3-P1 <<>> @8.8.8.8 www.domain.com A +trace
; (1 server found)

;; global options: +cmd
. 1341 IN NS m.root-servers.net.
. 1341 IN NS j.root-servers.net.
. 1341 IN NS a.root-servers.net.
. 1341 IN NS d.root-servers.net.
. 1341 IN NS f.root-servers.net.
. 1341 IN NS c.root-servers.net.
. 1341 IN NS b.root-servers.net.
. 1341 IN NS e.root-servers.net.
. 1341 IN NS i.root-servers.net.
. 1341 IN NS h.root-servers.net.
. 1341 IN NS g.root-servers.net.
. 1341 IN NS l.root-servers.net.
. 1341 IN NS k.root-servers.net.
;; Received 228 bytes from 8.8.8.8#53(8.8.8.8) in 58 ms

net. 172800 IN NS a.gtld-servers.net.
net. 172800 IN NS e.gtld-servers.net.
net. 172800 IN NS c.gtld-servers.net.
net. 172800 IN NS b.gtld-servers.net.
net. 172800 IN NS g.gtld-servers.net.
net. 172800 IN NS i.gtld-servers.net.
net. 172800 IN NS j.gtld-servers.net.
net. 172800 IN NS k.gtld-servers.net.
net. 172800 IN NS h.gtld-servers.net.
net. 172800 IN NS f.gtld-servers.net.
net. 172800 IN NS d.gtld-servers.net.
net. 172800 IN NS m.gtld-servers.net.
net. 172800 IN NS l.gtld-servers.net.
;; Received 503 bytes from 192.36.148.17#53(192.36.148.17) in 586 ms


domain.com. 172800 IN NS ns1.quickroutedns.com.
domain.com. 172800 IN NS ns2.quickroutedns.com.
domain.com. 172800 IN NS ns3.quickroutedns.com.
;; Received 153 bytes from 192.55.83.30#53(192.55.83.30) in 790 ms

domain.com. 3600 IN SOA cns1.atlantic.net. noc.atlantic.net. 2016033004 28800 7200 604800 3600
;; Received 88 bytes from 69.16.156.227#53(69.16.156.227) in 712 ms



As we can see, resolution only gets as far as the QUICKROUTEDNS.COM nameservers and never queries the Amazon nameservers. So, how do I tell DNS servers to fetch their answers from the Amazon servers and NOT stop at QuickRouteDNS.COM?

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...