Wednesday, November 30, 2016

centos - How to get both an email notification and a log file entry when a cron job completes




I am new to Linux and trying to figure things out. Can someone kindly help me combine these two approaches?



(1) Normally, cron output can be directed to a log file by editing the crontab in the below manner:



*/10 * * * * /scripts/mysc.sh >> /home/ara/Desktop/test/log.txt 2>&1 


(2) And if we need cron output to be emailed, we can use MAILTO=someemail@domain.com, such as:



MAILTO=someemail@domain.com

*/10 * * * * /scripts/mysc.sh


But how can I combine both options (1) and (2)? I have seen some web hosting providers with both options enabled simultaneously. I did my research/googling but could not get it working. I'm using CentOS 6.5 and use crontab -e to edit.


Answer



Your first example sends both stderr and stdout to the file (2>&1). The MAILTO variable set in the crontab captures any output that is not redirected, so redirecting everything to the file leaves no output for cron to email.



I'd suggest using tee to append the output to the file as well as sending it to stdout; this answer - https://serverfault.com/a/472878/102867 - is very similar to what you're trying to achieve.



Alternatively, follow the suggestion in the first answer and write a wrapper script to handle the script's output more gracefully; you can then both log the output and have it mailed.
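A minimal sketch of the tee approach (the path /tmp/cron-demo.log is illustrative): the pipe appends every line to the log while still passing it through to stdout, and under cron that stdout is exactly what MAILTO= captures and emails.

```shell
# Append output to a log file AND pass it through to stdout.
# Under cron, anything reaching stdout is emailed to the MAILTO= address.
log=/tmp/cron-demo.log
echo "script output line" | tee -a "$log"
```

In the crontab itself, the same idea would be MAILTO=someemail@domain.com on one line, followed by */10 * * * * /scripts/mysc.sh 2>&1 | tee -a /home/ara/Desktop/test/log.txt, so the output is both logged and mailed.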



Tuesday, November 29, 2016

domain name system - DNS failover across multiple datacenters?

I've got a site that is starting to get a lot of traffic, and just the other day we had a network outage at the datacenter where our load balancer (HAProxy) is hosted.
This worried me because despite all my efforts to make the system fully redundant, I still could not make our DNS redundant, which doesn't seem to have an easy solution.



The only thing I was able to find was to sign up for DNS failover from providers like DNSME, etc., but they cost too much for budding startups. Even their corporate plan only gives you 50 million queries per month, and we use that up in a week.




So my question is: is there any self-hosted DNS setup that provides failover the way DNSME does?

php - 100% SSD usage Linux

Every 20-30 seconds my disk usage goes to 100% (iostat).



iotop shows that [flush-8:0] is using 99% of the disk during these spikes. In between, disk usage is 1-10%.



iostat output:





04/22/2013 08:58:44 AM
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 1.55 1188.88 3.43 569.93 0.03 6.88 24.69 0.25 0.43 0.12 7.15

04/22/2013 08:58:46 AM
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 632.50 1.00 753.50 0.01 5.41 14.72 0.77 1.02 0.02 1.35

04/22/2013 08:58:48 AM

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 1001.00 4.50 26.50 0.04 4.01 267.74 0.08 1.63 1.15 3.55

04/22/2013 08:58:50 AM
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.00 2.00 0.00 0.03 0.00 26.00 0.00 16.75 1.50 0.30

04/22/2013 08:58:52 AM
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 2332.50 2.00 5370.00 0.03 30.04 11.46 113.70 20.79 0.15 79.30


04/22/2013 08:58:54 AM
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 508.50 3.50 2102.00 0.03 10.21 9.96 143.96 63.78 0.47 99.50

04/22/2013 08:58:56 AM
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 46.50 1.50 423.50 0.01 1.85 8.95 117.26 288.18 2.35 100.05

04/22/2013 08:58:58 AM

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 924.50 3.00 34.00 0.02 3.76 209.30 1.04 203.03 1.73 6.40

04/22/2013 08:59:00 AM
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.00 3.50 0.00 0.04 0.00 21.71 0.03 8.43 8.43 2.95

04/22/2013 08:59:02 AM
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 9.00 2662.50 9.00 1135.50 0.08 14.86 26.72 1.03 0.90 0.04 4.50



iotop:




[root@a18 ~]# iotop -o -a
unable to set locale, falling back to the default locale

Total DISK READ : 19.47 K/s | Total DISK WRITE : 2.00 M/s
Actual DISK READ: 19.47 K/s | Actual DISK WRITE: 0.00 B/s

TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
2055 be/4 root 0.00 B 1968.00 K 0.00 % 15.75 % [flush-8:0]
20991 be/4 lighttpd 7.90 M 0.00 B 0.00 % 7.05 % lighttpd -f /etc/lighttpd/lighttpd.conf
23832 be/4 root 36.00 K 714.59 M 0.00 % 6.94 % php /var/www/base/bg-worker.php


How can I figure out what is causing this?



Using SSD RAID 1. Filesystem EXT4.




I have a PHP server with heavy writes and lots of small-file deletions.



CentOS 6 64-bit.
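Since [flush-8:0] is the kernel's writeback thread, periodic bursts like this usually line up with the dirty-page thresholds and expiry timers. A small diagnostic sketch (Linux; the values printed are system-dependent):

```shell
# Kernel writeback tunables: dirty pages buffer in RAM until one of these
# thresholds or timers triggers [flush-8:0] to write them out in a burst.
cat /proc/sys/vm/dirty_background_ratio  # % of RAM before background writeback starts
cat /proc/sys/vm/dirty_ratio             # % of RAM at which writers must flush synchronously
cat /proc/sys/vm/dirty_expire_centisecs  # age (centiseconds) after which dirty data must be written
```

Lowering dirty_background_ratio (or dirty_background_bytes) tends to smooth the bursts into more frequent, smaller writebacks; whether that actually helps depends on the workload.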

Monday, November 28, 2016

apache 2.2 - "Directory index forbidden by Options directive" when deleting or renaming folders through webdav

I am trying to delete folders through WebDAV, but all I get is a 403 on the client and "Directory index forbidden by Options directive" in the Apache error log. I enabled "Options Indexes" for the folder and stopped getting the errors in either the client or the log, but I still can't rename or delete folders through WebDAV.



Any ideas why I'm unable to edit folders through webdav?



I am running WAMP, default installation with Apache 2.2.17. I can connect, create files, delete files, rename them, etc. I can create folders but not delete them or rename them, once they're created.



On the access log, whenever I try to delete, I get this: "DELETE /uploads/shahs HTTP/1.1" 301 243




On the error log, I get: Directory index forbidden by Options directive:



The Webdav client gives a 403 when trying to delete or rename folders.



Once I added "Options Indexes", I stopped getting the error message in the Apache error log and the 403 on the WebDAV client, but now deleting or renaming does nothing. No error messages; nothing happens at all.

Intranet DNS Name



We are using Windows Server 2003 in a small environment with its own domain setup. The company intranet is currently http://ServerName (the server running IIS). How do I change this so that instead of the server name in the address field, people can type in "intranet" or another name?




Thanks.


Answer



Add a CNAME record to your internal DNS with the name intranet pointing to ServerName.yourdomain.local (or whatever your DNS domain name is).
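In zone-file notation the record would look something like the sketch below ("yourdomain.local" and "ServerName" are placeholders; on Windows Server 2003 you would add the same CNAME through the DNS management console):

```
; hypothetical zone snippet: alias "intranet" to the IIS host
intranet  IN  CNAME  ServerName.yourdomain.local.
```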


Sunday, November 27, 2016

web server - How do I configure the default virtual host return a 404 header in apache?



I know that similar questions have been asked, but the available answers are not very clear, so please bear with me.



After setting up a few virtual hosts in Apache, I'd like to configure the _default_ virtual host so that it returns a 404. I.e., unless an explicitly configured domain is specified in the Host HTTP header, return 404. (Ideally something more direct than pointing it at a now-nonexistent directory.)




Any help would be greatly appreciated.


Answer



Did you try:




Redirect 404 /
ErrorDocument 404 "Page Not Found"


in the default VirtualHost?
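Fleshed out slightly, a catch-all default virtual host might look like the sketch below (Apache 2.2 style; the ServerName value is illustrative). It has to be the first name-based vhost loaded so that requests with an unmatched Host header land on it:

```apache
# Hypothetical catch-all vhost: any Host header that matches no other
# vhost is answered with a plain 404 instead of the first real site.
<VirtualHost *:80>
    ServerName default.invalid
    Redirect 404 /
    ErrorDocument 404 "Page Not Found"
</VirtualHost>
```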



Monitor DELL hardware on VMware ESXi 5.5 server



Despite researching this topic quite a bit online (to be fair I'm not a full time sysadmin) I'm unable to figure this out.




We have a bunch of VMWare ESXi 5.5 servers, some of which are integrated into vSphere, some of which are not (for cost reasons).



All of them run the standard ESXi image, with the exception of one machine which is actually running the DELL VMWare ESXi image.



What I would like to accomplish seems simple: Configure the system so that it can be queried via SNMP from a remote host, whether it's snmpwalk, Nagios, PRTG etc. I'd like to see information from temperature sensors, installed disks and their status, fan speed, PSU status etc.



I was under the impression that installing the VMWare version from DELL would automagically enable the necessary modules (OpenManage most importantly), but it seems like that is not the case.



I have found conflicting information about whether this is even possible; some documents say you cannot query a Dell VMware ESXi server via SNMP and need to use a CIM client instead. Then there are the OMSA VIBs one can install, etc.




I imagine this being a fairly common requirement, yet the docs available pull one in all different directions.



Is what I am trying to do even possible (without a complete vSphere environment)?


Answer



Yes, you can monitor the standalone ESXi Host using any SNMP monitoring software but some items may only be visible using a monitoring tool that supports the CIM protocol.



All of my ESXi Hosts are part of vCenter, but we monitor them directly (using the vmkernel Host IP address) with SolarWinds NPM. There are 5 or 6 CIM modules built into ESXi 5.5 that give you hardware health, but RAID card health is not one of them. You will need to add the Dell OMSA VIB, which adds the additional CIM agents including the one for the RAID array. Brian Atkinson's post is still the best I have found describing the process:



https://communities.vmware.com/people/vmroyale/blog/2012/07/26/how-to-use-dell-dset-with-esxi




You only need to follow the instructions for installing the OMSA ESXi VIB if you are going to use a third party monitoring tool that gives historical information and does alerting. If you wish to use the Dell OMSA Server you can install it remotely on bare bones server, remotely in a VM or locally as a VM.



You can use the OMSA server to connect to DRAC and iDRAC out-of-band (OOB/IPMI/iLO) management cards, or to the ESXi Host after you install the OMSA VIB on it. You will not see the RAID health information in the DRAC or iDRAC, though - only when connecting the OMSA Server to an ESXi Host. (I stress the "Server" keyword so there is no confusion between the OMSA Server, which acts as a client, and the OMSA VIB installed on the ESXi Host.)



Some useful resources:



Show the current CIM providers on an ESXi Host
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2053715




Show the currently installed VIBs on the ESXi Host from the Host's CLI,
esxcli software vib list



You do see some minor additional hardware health details when you connect to a vCenter server versus the ESXi Host directly but generally if you do not see the hardware health you are looking for in the Configuration/ Health Status panel then you are missing a CIM provider and you need to locate and install the VIB on the ESXi Host. When you add the Dell OMSA VIB to the ESXi Host you will see a Storage sensor added to the Health Status page which shows the RAID volumes, drives, controller and battery health for your storage controller. You may need to reset the sensors for it to show up and sometimes it takes 15 to 20 minutes the first time after the VIB install and reboot of the ESXi Host.



If you do not see a sensor on the ESXi Host's Health Status page when you connect with the vSphere Client then you are most likely not going to see it when you are remotely polling the sensors with monitoring software.



Also you should note that not all servers have the same sensors and you may not be able to get the same health status from all depending on the Server hardware, RAID card and version of the CIM available for the combination. You may also need to upgrade or change the VIBs for the RAID card in order for the health status to work. The CIM provider (which is the OMSA VIB in this case) talks to the hardware through the device VIB (the real device driver) and passes this information to the CIM Broker on the ESXi Host - also known as the Small Footprint CIM Broker Daemon (sfcbd). When you poll the ESXi Host for hardware health using robust monitoring software it will get some information using SNMP queries, some using CIM and some using the ESXi API (which are SOAP requests). The CIM client talks to the sfcbd process on the ESXi Host.



Sometimes the CIM process just stops working. When that happens, restart the sfcbd-watchdog process on the ESXi Host; this restarts the sfcbd service and CIM polling will work again. From the CLI of the Host: /etc/init.d/sfcbd-watchdog restart




I think that covers most of the items you need to get you running.


Saturday, November 26, 2016

How to accommodate a very large SQL database in Azure?

I have a database growing by more than 100 GB per week, about 5 TB yearly.




Since this is financial data, we can't purge it. If we keep this data for at least 10 years, the size will reach 50 TB.



Please suggest how we can accommodate this amount of data in Azure VMs, given the 1 TB disk limit in Azure.



Thanks, Subhendu

Friday, November 25, 2016

apache 2.2 - Apache2 - 301 Redirect when missing "/" at the end of directory in the url




I hadn't really noticed until now that Apache issues a 301 redirect when a URL like http://server/directory is requested without a trailing slash ("/").



The server responds with a 301 Moved Permanently and a Location header pointing to http://server/directory/.



See this live example:



User Request:



GET /social HTTP/1.1
( http://192.168.1.111/social )



Apache Server Response:



HTTP/1.1 301 Moved Permanently
Location: http://192.168.1.111/social/






User Request:



GET /social/ HTTP/1.1
( http://192.168.1.111/social/ )


Apache Server Response:



HTTP/1.1 200 OK






Apache access.log:



192.168.1.130 - - [05/Apr/2014:22:06:47 +0200] "GET /social HTTP/1.1" 301 558 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0"
-
192.168.1.130 - - [05/Apr/2014:22:06:47 +0200] "GET /social/ HTTP/1.1" 200 942 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0"






The /social/ directory contains an index.html file.



Apache Software: Apache/2.2.22 (Ubuntu)
Directory Options: Options Indexes FollowSymLinks MultiViews



So, my question is: Why is Apache doing this? And how can I prevent the redirect and serve index.html directly? Clients have to send two requests, which is really unnecessary. And some clients may not follow redirects and will not be able to reach the site without the trailing slash ("/").



To be clear: I don't just want to suppress the redirect; I want the server to send the response directly, without any redirect, even when /social is requested.




Is Apache designed to redirect these requests? The server could just send the data without redirecting, right? Should I use mod_rewrite to prevent this, or another configuration? Or should I just leave it as is, add a trailing slash to all HTML links, and live with the occasional redirect?



What do you guys think?


Answer



Sending the data without a redirect would break relative links. If http://server/directory contains file, then the full URL for that file would be http://server/directory/file. A relative link specified as file will point to http://server/directory/file if the base URL is http://server/directory/, but if the base URL were only http://server/directory, it would point to http://server/file instead, which is not the intended result.



Apache could have generated the directory listing in two different ways depending on the URL instead of redirecting. However, that would not work if there were an index.html file in the directory, so Apache instead uses the approach that works in both cases.



This is not new behavior; a decade ago Apache behaved the same way. Clients that cannot handle a redirect should have been fixed by now. But for any client that cannot, Apache sends along a tiny HTML body with a link that can be followed instead.


domain name system - How to find all hostnames in DNS attached to one IP?



If I have multiple hosts configured on one machine (a la apache's VirtualHosts), how can I do a lookup on the IP and find all domains configured to reach it?



For example, I have several web and email domains hooked-to my server. How can I find all domains that point to it?



Is it even possible?



I have DNS A entries for all the domains I own, plus I know some friends' domains point to my server. What I'd like to see is if folks I don't know about are pointing there, too. (Or if someone has repointed their domain elsewhere, and I can delete their 'old' website from my server.)


Answer




Not really, no. This is all about the difference between forward and reverse DNS lookups.



A forward lookup is the standard name->IP lookup. So, you would have to know all the names in advance.



What you want is to do an IP->name lookup, but somehow get all the names you've applied in your Apache config and in DNS as A records (or CNAMES or whatever).



What you will probably find is that doing a reverse lookup (e.g. dig @nameserver $ip -x) will return the hostname given to that IP by the people who own that netblock, which could be your ISP. It might have a name like 45-23-45-231.big-isp.com, which doesn't mean a whole lot to you. And crucially, there is only one reverse record, but potentially many forward ones.



I suppose it boils down to the question - how does the reverse zone know about any of the records in the forward zone? In most setups, the forward zone is made available to the customer to make changes to, but the reverse zone is maintained by the owners of the netblock. The two systems don't need to know anything about each other to function.


Thursday, November 24, 2016

domain name system - Active Directory - List ISP DNS servers as Forwarders?

Background: I have a relatively small Active Directory domain (Windows 2003 Functional level) with two domain controllers, both running DNS servers. They are the primary and secondary DNS servers for the LAN. No other local DNS. I do not have any subdomains or recursion going on.



My question: In the DNS Manager, under server Properties, Forwarders tab - should I have my ISP's DNS servers listed here (or Google's)? Or should I leave the Forwarders tab blank and rely on the root hints servers?




My Forwarders Tab



I Googled before posting. About half the advice I read said to use the ISP DNS servers as forwarders, and half said to just use the root hints. So I have no idea which is "best" for my setup (which I imagine is pretty typical for a small shop).

Domain does not respond to ping on a CentOS VPS

I have a VPS with CentOS 6.x, the DirectAdmin control panel, and CSF.
There are about 40 domains on it, but one of them (itgates.ir) has not loaded since yesterday.

When I ping this domain, there is no response. Its files are OK, and I don't know what the problem is!
All the other domains are working well...
What is the problem?

Tuesday, November 22, 2016

networking - Network adapter drops connection when windows starts




I have a bit of an issue that is just frustrating me.



When I turn my notebook on with the network cable plugged in, the link lights above the port flash happily until Windows starts up. As soon as Windows starts up, the lights simply turn off and Windows reports that the cable is unplugged.



More interestingly, if I go to device manager and disable the Local Area Connection, the lights start flashing again, but when I enable it, the lights stop.



We have tried re-installing drivers, different network cables etc. We have also confirmed that the network cables work by plugging them into different machines / devices (IP Phone).



This seems to be happening in one building of our company where I am now. When I plug my machine back into the network at my normal desk, it works fine.




The Administrators assure me that they do not do anything on the switches and the like that would stop me from connecting to the network from any available port.



Could this be a policy setting? I've tried turning off the ability for windows to power down the device when it's not in use, with no effect.



Any ideas would be welcome.



Thanks



Gineer



Answer



Two quickies,




  • What about speed? Is your card a gigabit Ethernet card? Is the switch it's connected to forcing a certain speed without duplex negotiation?

  • Have you tried running a live Linux distro? This might help rule out any hardware or location issue.


HP P410 RAID CARD Issue - unassigned drives not detected by the OS



I just bought a lot of HP DL180 G6 servers;



they use the HP P410 RAID card.




I plugged in 12 x 2TB SAS drives + 2 x 73GB SAS drives (for the OS - RAID 1).



All drives show up in the RAID array-creation page,



so I created a RAID 1 array from the 2 x 73GB drives to install the OS (CentOS 6).



The OS installed fine,



but I can't detect the rest of the unassigned drives (those not in RAID arrays).







So my question is:



is there any way to make the unassigned drives show up in the OS (without creating a RAID array for them)?



I don't want to create RAID arrays for them; I just want them to show up like on all the Dell servers I have.



If I change the RAID card mode to HBA, would that help?




Any advice?



Thanks


Answer



The Smart Array P410 (and all Smart Array controllers) are RAID devices only. There's no HBA or pass-through mode.



What are you attempting to do? Are you installing something like ZFS or Windows Storage Spaces where you want to pass full disks to the operating system to be managed?



If so, creating a bunch of RAID 0 single-disk logical drives is the wrong choice!




If you do this, you will lose hot-swap ability during a drive failure. If a disk fails, that RAID 0 logical drive fails and won't be rescanned by your operating system without a reboot or Smart Array reconfiguration. A dedicated HBA is a better choice, depending on what you're doing...



See: ZFS SAS/SATA controller recommendations



Disabling RAID feature on HP Smart Array P400



Solaris: detect hotswap SATA disk insert



MegaRAID JBOD substitute


firewall - NAT translation with Cisco ASA 5505

I am trying to set up NAT translation on an ASA 5505, but the new public IP address never actually becomes available after adding it. I'm sure I'm doing something stupid, but so far the problem has eluded me. Basically, I'm trying to map XX.XX.115.195 => 192.168.125.7. XX.XX.115.194 is the public IP of the firewall, and it is accessible, but 115.195 never seems to get picked up. I inherited the original configuration, so it is possible that one of the other rules is preventing this from happening. I've included what I believe are the relevant sections below.



Below is the specific rule I added. I've confirmed I'm able to reach the 125.7 server from inside the firewall on the usual ports and protocols, but from the outside the public 115.195 does not respond to anything.




static (outside,inside) 192.168.125.7 XX.XX.115.195 netmask 255.255.255.255




ASA Version 7.2(4)
!
interface Vlan1
nameif inside
security-level 100
ip address 192.168.125.1 255.255.255.0
!
interface Vlan2
nameif outside
security-level 0

ip address XX.XX.115.194 255.255.255.248
!
access-list outside-in extended permit tcp any host XX.XX.115.194 eq 44000
access-list outside-in extended permit tcp any host XX.XX.115.194 eq https
access-list outside-in extended permit tcp any host XX.XX.115.194 eq 4000
access-list inside_nat0_outbound extended permit ip any 192.168.125.192 255.255.255.192

nat (inside) 0 access-list inside_nat0_outbound
nat (inside) 1 0.0.0.0 0.0.0.0
static (inside,outside) tcp interface 44000 192.168.125.15 44000 netmask 255.255.255.255

static (inside,outside) tcp interface https 192.168.125.15 https netmask 255.255.255.255
static (inside,outside) tcp interface 4000 192.168.125.15 4000 netmask 255.255.255.255
static (outside,inside) 192.168.125.7 XX.XX.115.195 netmask 255.255.255.255
access-group outside-in in interface outside

apache 2.2 - rewriterules +httpd.conf +.htaccess problem (urgent-site down)



I'm migrating one of my websites from DreamHost shared hosting to DreamHost PS.
The files copied OK and DNS resolves to the new server. However, trying to fetch a page brings error 403 Access Forbidden.




If I remove the .htaccess file from the site's directory, the homepage loads OK, but naturally no other page does (because they require the rewriting currently defined in .htaccess).



I believe that httpd.conf blocks usage of .htaccess on the new server. Quote:




Options FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all




I tried removing .htaccess and implementing the rewrite rules in httpd.conf, without success.



My questions:

1. I tried AllowOverride All to make .htaccess work. It didn't change anything. What else should be checked/done to make it work?

2. I created a directory section in httpd.conf for the directory in which the httpdocs sit and put all the rewrite rules there (and restarted the server). It had no effect. Any hints on what should be done to add a directory section with rewrite rules to httpd.conf, and troubleshooting tips?

3. How can I check if mod_rewrite is working on this server? It is enabled in the httpd.conf file. The server is in FastCGI mode.





Any other tips are very appreciated.



Thanks,
Niro



How can I move the rewrite rules from .htaccess into httpd.conf (and make them work)?


Answer



Thanks for the answers. I finally discovered that the actual httpd.conf file was under another path; it was not visible due to permissions. Just in case someone needs it, it was at:




/usr/local/dh/apache2/apache2-psnum/etc/ (psnum is the server number)



I placed the rewrites there under the directory section for my site, and everything works.


Monday, November 21, 2016

windows - Members of local System account




I'm currently in the process of migrating some shares. Using my AD account, I was able to navigate to a share and its subfolders. Looking at the NTFS permissions, I don't belong to any groups that grant "Full" rights. The only two groups that have "Full" rights are the local administrators (I am not a member) and the System.



Any way to determine exactly how I am able to get "Full" rights without being an explicit member of a group listed in the ACL?


Answer



"Full Control" allows you to do anything imaginable to the folder or its files, like delete them or fiddle with the ACLs. You don't need Full Control to navigate a folder or even to write to it. "List folder / read data" allows you to look around in the folder or read the contents of a file. (Which one it is depends on whether the object in question is a folder or a file.) "Create files / write data" and "Create folders / append data" allow you to do exactly what they say, but you can have that access without having Full Control.



Note that applications running with administrative privileges can use SeBackupPrivilege and SeRestorePrivilege to read or write anywhere (respectively), no matter what ACLs say. Read more about privileges at TechNet.


Sunday, November 20, 2016

What does Zabbix message "Free disk space is less than XX% on volume Shared memory" mean?



I use Zabbix to monitor my environment. The Zabbix server warns me with the following message:



"Free disk space is less than 20% on volume Shared memory"



Can someone explain "volume Shared memory" in this context?




How should I address this issue?


Answer



/dev/shm is a temporary filesystem, usually mounted under /run/shm, used for IPC (inter-process communication), which - in my opinion - need not be monitored in your case.
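You can see the tmpfs the alert refers to with df (the mount point varies by distro, commonly /dev/shm or /run/shm):

```shell
# Show size and usage of the shared-memory tmpfs that Zabbix is alerting on.
df -hP /dev/shm
```

If you agree it isn't worth monitoring, the usual fix is to exclude that mount from the filesystem discovery rule on the Zabbix side rather than grow the tmpfs.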


Saturday, November 19, 2016

ubuntu - How to setup a mailserver on Google Cloud VM?



I want to know how to set up a mail server like Postfix on a Google Cloud VM instance.




I'm running Ubuntu 16.04 (with a LAMP stack) and can't get the mail server to send email from the website.



I have installed postfix, and opened port 25, but no luck.



Any ideas on how to proceed?



Error logs: Network is unreachable and Connection timed out


Answer



According to https://cloud.google.com/compute/docs/tutorials/sending-mail/, you cannot set up a mail server the usual way, as ports 25, 465 and 587 are blocked for outbound connections on Google Cloud. Instead, you might take a look at relaying services such as Mailgun or SendGrid, which allow sending through port 2525 or an API instead. These services might cost a little bit of money, however.
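As a sketch, relaying Postfix through such a provider mostly comes down to a couple of main.cf settings (the hostname below is a placeholder; real relays also require SASL credentials, which are omitted here):

```
# /etc/postfix/main.cf - route all outbound mail via an external relay on 2525,
# since Google Cloud blocks direct outbound 25/465/587.
relayhost = [smtp.relay-provider.example]:2525
```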


raid - SAS Expanders vs Direct Attached (SAS)?



I have a storage unit with 2 backplanes. One backplane holds 24 disks, one backplane holds 12 disks. Each backplane is independently connected to a SFF-8087 port (4 channel/12Gbit) to the raid card.



Here is where my question really comes in: can a backplane be overloaded, and how easily? All the disks in the machine are WD RE4 WD1003FBYX (Black) drives, averaging 115 MB/s writes and 125 MB/s reads.



I know things would vary based on the RAID level or filesystem on top, but it seems that a 24-disk backplane with only one SFF-8087 connector could saturate the link to the point of actually slowing things down?




Based on my math, with a RAID 0 across all 24 disks, reading a large file should in theory give 24 x 115 MB/s, which translates to 22.08 Gbit/s of total throughput.



Either I'm confused or this backplane is horribly designed -- at least for a performance-based environment.



I'm looking at switching to a model where each drive has its own channel from the backplane (and new HBAs or RAID card).



EDIT: more details



We have used pure Linux (CentOS), OpenSolaris, software RAID, hardware RAID, EXT3/4, and ZFS.




Here are some examples using bonnie++



4 Disk RAID-0, ZFS



WRITE     CPU    RE-WRITE  CPU    READ      CPU    RND-SEEKS
194MB/s 19% 92MB/s 11% 200MB/s 8% 310/sec
194MB/s 19% 93MB/s 11% 201MB/s 8% 312/sec
--------- ---- --------- ---- --------- ---- ---------
389MB/s 19% 186MB/s 11% 402MB/s 8% 311/sec



8 Disk RAID-0, ZFS



WRITE     CPU    RE-WRITE  CPU    READ      CPU    RND-SEEKS
324MB/s 32% 164MB/s 19% 346MB/s 13% 466/sec
324MB/s 32% 164MB/s 19% 348MB/s 14% 465/sec
--------- ---- --------- ---- --------- ---- ---------
648MB/s 32% 328MB/s 19% 694MB/s 13% 465/sec



12 Disk RAID-0, ZFS



WRITE     CPU    RE-WRITE  CPU    READ      CPU    RND-SEEKS
377MB/s 38% 191MB/s 22% 429MB/s 17% 537/sec
376MB/s 38% 191MB/s 22% 427MB/s 17% 546/sec
--------- ---- --------- ---- --------- ---- ---------
753MB/s 38% 382MB/s 22% 857MB/s 17% 541/sec



Now 16 Disk RAID-0, ZFS; it gets interesting:



WRITE     CPU    RE-WRITE  CPU    READ      CPU    RND-SEEKS
359MB/s 34% 186MB/s 22% 407MB/s 18% 1397/sec
358MB/s 33% 186MB/s 22% 407MB/s 18% 1340/sec
--------- ---- --------- ---- --------- ---- ---------
717MB/s 33% 373MB/s 22% 814MB/s 18% 1368/sec


20 Disk RAID-0, ZFS




WRITE     CPU    RE-WRITE  CPU    READ      CPU    RND-SEEKS
371MB/s 37% 188MB/s 22% 450MB/s 19% 775/sec
370MB/s 37% 188MB/s 22% 447MB/s 19% 797/sec
--------- ---- --------- ---- --------- ---- ---------
741MB/s 37% 376MB/s 22% 898MB/s 19% 786/sec


24 Disk RAID-0, ZFS




WRITE     CPU    RE-WRITE  CPU    READ      CPU    RND-SEEKS
347MB/s 34% 193MB/s 22% 447MB/s 19% 907/sec
347MB/s 34% 192MB/s 23% 446MB/s 19% 933/sec
--------- ---- --------- ---- --------- ---- ---------
694MB/s 34% 386MB/s 22% 894MB/s 19% 920/sec


(anyone starting to see the pattern here?) :-)



28 Disk RAID-0, ZFS




WRITE     CPU    RE-WRITE  CPU    READ      CPU    RND-SEEKS
358MB/s 35% 179MB/s 22% 417MB/s 18% 1105/sec
358MB/s 36% 179MB/s 22% 414MB/s 18% 1147/sec
--------- ---- --------- ---- --------- ---- ---------
717MB/s 35% 359MB/s 22% 832MB/s 18% 1126/sec


32 Disk RAID-0, ZFS




WRITE     CPU    RE-WRITE  CPU    READ      CPU    RND-SEEKS
354MB/s 35% 190MB/s 22% 420MB/s 18% 1519/sec
354MB/s 35% 190MB/s 22% 418MB/s 18% 1572/sec
--------- ---- --------- ---- --------- ---- ---------
708MB/s 35% 380MB/s 22% 838MB/s 18% 1545/sec


More details:



Here is the exact unit:




http://www.supermicro.com/products/chassis/4U/847/SC847E16-R1400U.cfm


Answer



Without knowing the exact hardware you're using, the maximum you can get through two SAS SFF-8087 links is 24Gbps, or 3 GBps; but many controller-expander combinations will not actually use all 4 channels in the SFF-8087 correctly, and you end up getting approximately a single link (0.75GBps).



Considering your performance numbers, I would venture a guess that the latter is the case.
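The bandwidth figures above can be sanity-checked with a little arithmetic (assuming SAS2 at 6 Gbit/s per lane and 4 lanes per SFF-8087 connector):

```shell
# SAS2: 6 Gbit/s per lane, 4 lanes per SFF-8087 connector
lanes=4
per_lane_gbps=6
link_gbps=$((lanes * per_lane_gbps))          # full connector: 24 Gbit/s
link_gBps=$((link_gbps / 8))                  # ~3 GB/s
one_lane_mBps=$((per_lane_gbps * 1000 / 8))   # a single lane: ~750 MB/s
echo "connector: ${link_gbps} Gbit/s (~${link_gBps} GB/s), single lane: ~${one_lane_mBps} MB/s"
```

The single-lane figure is roughly where the read numbers in the benchmarks plateau, which is what makes the "only one channel in use" guess plausible.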


iis 7 - IIS 7.0 install two SSL certificates with two different host headers



I have two sites:



sub1.example.org

sub2.example.org



I have two SSL certificates for each of the above domains.



When I install certificate #2 for sub2.example.org, it tells me:




At least one other site is using the same HTTPS binding and the binding is
configured with a different certificate




Is it not possible in any way to install these two certificates on one server?!


Answer



You can't assign different SSL certificates to sites that are only differentiated by host headers. You would need to have the sites on separate IP addresses.



Another option is to set up a wildcard SSL certificate, which you could then apply to all sites hosted under *.example.org on the server. There is a catch though - you still can't apply the certificate through the GUI. Instead you need to use a command line to apply the certificate.



http://www.sslshopper.com/article-ssl-host-headers-in-iis-7.html

http://blogs.iis.net/thomad/archive/2008/01/25/ssl-certificates-on-sites-with-host-headers.aspx
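For reference, the command-line approach those two articles describe uses appcmd to add an HTTPS binding that includes a host header; a sketch along these lines (the site name is taken from this question, not verified against a live server):

```shell
cd %windir%\system32\inetsrv
appcmd set site /site.name:"sub2" /+bindings.[protocol='https',bindingInformation='*:443:sub2.example.org']
```

With a wildcard certificate installed, repeating this for each site lets them all share port 443 on one IP.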


networking - What is the purpose of the "Network IP address", why can it not normally be used?








Maybe a noob question, but when referring to IP subnets, what is the purpose of a network IP?



I.e. with a network like 192.168.1.0/24, you can't normally use the .0 for a host address. Likewise, the .255 is assigned to the broadcast. This I understand, but the .0 I do not. What is the purpose of it, and why are point-to-point links with /31 mask bits able to do away with it?
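For what it's worth, the address arithmetic behind the question can be sketched as follows (a /24 loses two addresses to the network and broadcast; RFC 3021 lets a /31 point-to-point link use both of its addresses because no broadcast is needed on such a link):

```shell
prefix=24
total=$((2 ** (32 - prefix)))   # 256 addresses in a /24
usable=$((total - 2))           # minus network (.0) and broadcast (.255)
echo "/${prefix}: ${total} addresses, ${usable} usable hosts"
echo "/31: $((2 ** 1)) addresses, both usable on a point-to-point link (RFC 3021)"
```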

email - How to check if a mail server is flagged as SPAM by GMail?


Friday, November 18, 2016

Is a matching entry in /etc/hosts required for hostname?



I was installing a Tomcat webapp that refused to work until I stumbled on someone else's issue with an unrelated product. The solution was to add the machine's name to /etc/hosts, to match the name returned by hostname. Is this required for general Linux networking to function correctly?



My webapp is running in a virtual machine so that I can test it, and I don't normally bother with the /etc/hosts file on VMs. I just shook my fist and cursed Tomcat's and the webapp's behavior. I read /etc/hosts, /etc/sysconfig/network and hostname?, but that doesn't say whether it's required or not.


Answer




If DNS won’t resolve a system’s hostname to an IP, things may break, depending on how they’re configured — unless you manually add an entry to /etc/hosts. This seems to be one of those occasions. The reverse can also apply in some situations, too.



It’s generally considered good practice to add such an entry to /etc/hosts — in fact, most Unixish operating systems tend to do this for you as part of their initial configuration (or, if you’re using DHCP, when they obtain a lease).
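A typical entry of the kind described above looks like this (the hostname and address here are placeholders, not values from the question):

```shell
# /etc/hosts -- map the name returned by `hostname` to an address
127.0.0.1    localhost
192.168.1.10 myvm.example.com myvm
```

You can verify the mapping with `getent hosts "$(hostname)"`, which consults /etc/hosts before DNS under the usual nsswitch configuration.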


Wednesday, November 16, 2016

linux - disk space overhead in ext4

I'd like to know if there's some rule (or formula) I can apply to find out how much disk space will be used by the filesystem in an ext4 partition. For example, in a partition of 100 GB, how much can I actually use? Does it depend on other parameters like inode size, etc.?
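As a rough first approximation, the largest single overhead is the reserved-blocks percentage (5% by default, set aside for root and adjustable with `tune2fs -m`); inode tables and the journal add a few percent more on top, depending on the inode size and count chosen at mkfs time:

```shell
size_gb=100
reserved_pct=5                      # ext4 default reserved-blocks percentage
reserved_gb=$((size_gb * reserved_pct / 100))
echo "~${reserved_gb} GB of a ${size_gb} GB partition reserved by default"
```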

Azure Ubuntu VM: Is a connection to 168.63.129.16:80 mandatory for Basic DDOS protection?

Yesterday I noticed some suspicious activity when running netstat | grep http on my Azure Ubuntu VM:



There were over 60 lines like this:



tcp        0      0 ser:http               hosted-by.blazing:29248  SYN_RECV
tcp        0      0 ser:http               hosted-by.blazingf:59438 SYN_RECV
tcp        0      0 ser:http               8.8.8.8:7057             SYN_RECV
# [SNIP]



I am guessing this is a SYN flood attack, and given the presence of 8.8.8.8, possibly some IP spoofing? I don't have any DDOS protection from Azure, just a standard Ubuntu VM. I tried a few things:



Uncommented the line net.ipv4.tcp_syncookies=1 in /etc/sysctl.conf and ran sysctl -p, but the packets above continued.



I already have my own iptables script in place, to lock the server down a bit. Whilst checking over this script, I noticed some unrelated lines in /var/log/syslog:



INFO Exception processing GoalState-related files: [ProtocolError] 
[Wireserver Exception] [HttpError] [HTTP Failed]
GET http://168.63.129.16/machine/?comp=goalstate -- IOError
timed out -- 6 attempts made



Some investigation into this IP shows that it's part of Azure's infrastructure, so I went ahead and added this to my firewall script, to allow outgoing traffic to this IP on port 80.



Suddenly the earlier SYN traffic stopped.



UPDATE



Okay, some further investigation shows that Azure provides a basic level of DDOS protection:





Basic: Automatically enabled as part of the Azure platform. Always-on traffic monitoring, and real-time mitigation of common network-level attacks, provide the same defenses utilized by Microsoft’s online services. The entire scale of Azure’s global network can be used to distribute and mitigate attack traffic across regions. Protection is provided for IPv4 and IPv6 Azure public IP addresses.




I guess my question now, for someone in the know: Would allowing outgoing HTTP traffic to 168.63.129.16 be a critical part of this protection, and explain the behaviour I've seen?
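For anyone reproducing the workaround described above, the rule added was along these lines (a sketch; the exact chain and position depend on the existing firewall script):

```shell
# allow the Azure guest agent to reach the WireServer endpoint
iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 80 -j ACCEPT
```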

Tuesday, November 15, 2016

PHP and IIS best practices

I'm running IIS and PHP and we're running into some bottlenecks under load testing. The pages are cached, but sometimes we can get load times up to 30 seconds for a user. This seems to happen when the cache expires. We're looking into a lot of different things to fix this issue, so one of our first places to look is at IIS and PHP. Normally I run PHP under Apache and don't really have these issues. Anyone have some good tips/best practices for running PHP under IIS? We do have FastCGI turned on already.

exchange - Emails are going to spam

Hey guys, I am currently running Exchange 2010. I have implemented an SPF record, and tried to implement DKIM/DomainKeys using a sink, but it doesn't seem to work. The problem I am having is that all my emails go to spam whenever I email someone, whether it is MSN/Yahoo/Gmail. For MSN I fixed it, since I subscribed to the senders framework program.



Here are the original copies of the Gmail and Yahoo messages:



Yahoo:
From Sami Sheikh Wed Jan 27 14:15:51 2010
X-Apparently-To: sunny_3000ca@yahoo.ca via 98.136.167.166; Wed, 27 Jan 2010 06:19:52 -0800
Return-Path:
X-YahooFilteredBulk: 67.55.9.182

X-YMailISG: 58M0TdIWLDvbv_d_qz4ABPsuq0Fmn1fLYMy08ZnNKPgA1aH3sVNx_KKFsiBK8ZOTBVDwBVnpTvRNkuTZc2UDsNMbj6nV9hfE43MQz3tXRV3.rh62wcp4oqT8AuzKKU5JSxU5g2AH4NzOmT5nGNiRyNEi6xazlMZTDm0rnfWbVECGV4RHzwM1TEadla6Bq_itel6hNinq_6MnPRxu2vX_fddmlCAG1Fi6X0ivjkKPqSr..MvpO8MnlTQTZZjRSoxLZUOqg0vjTPEPary5d_xf3MaS6IsRIScPMMk-
X-Originating-IP: [67.55.9.182]
Authentication-Results: mta1066.mail.mud.yahoo.com from=; domainkeys=neutral (no sig); from=SamChrisNetwork.info; dkim=neutral (no sig)
Received: from 127.0.0.1 (EHLO sam.samchrisnetwork.info) (67.55.9.182)
by mta1066.mail.mud.yahoo.com with SMTP; Wed, 27 Jan 2010 06:19:52 -0800
Received: from Sam.SamChrisNetwork.info ([fe80::b8d3:44f5:68fe:dc55]) by
Sam.SamChrisNetwork.info ([fe80::b8d3:44f5:68fe:dc55%24]) with mapi; Wed, 27
Jan 2010 09:15:52 -0500
From: Sami Sheikh
To: "sunny_3000ca@yahoo.ca"

Subject: Test
Thread-Topic: Test
Thread-Index: AcqfWzrrj8hB3VnJTHC0K4Ev4D+qpw==
Date: Wed, 27 Jan 2010 14:15:51 +0000
Message-ID:
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Content-Type: text/plain; charset="us-ascii"

Content-ID: <660dccae-e8e8-4aa0-b13d-5c57052b5335>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Content-Length: 26



Gmail:



Delivered-To: sampimpinthug@gmail.com
Received: by 10.204.102.18 with SMTP id e18cs53728bko;
Thu, 28 Jan 2010 09:58:46 -0800 (PST)

Received: by 10.224.116.70 with SMTP id l6mr6467857qaq.157.1264701525683;
Thu, 28 Jan 2010 09:58:45 -0800 (PST)
Return-Path:
Received: from sam.samchrisnetwork.info (dsl-67-55-9-182.acanac.net [67.55.9.182])
by mx.google.com with ESMTP id 15si2150271qyk.91.2010.01.28.09.58.45;
Thu, 28 Jan 2010 09:58:45 -0800 (PST)
Received-SPF: pass (google.com: domain of SheikhS@samchrisnetwork.info designates 67.55.9.182 as permitted sender) client-ip=67.55.9.182;
Authentication-Results: mx.google.com; spf=pass (google.com: domain of SheikhS@samchrisnetwork.info designates 67.55.9.182 as permitted sender) smtp.mail=SheikhS@samchrisnetwork.info
Received: from Sam.SamChrisNetwork.info ([fe80::b8d3:44f5:68fe:dc55]) by
Sam.SamChrisNetwork.info ([fe80::b8d3:44f5:68fe:dc55%24]) with mapi; Thu, 28

Jan 2010 12:58:15 -0500
From: Sami Sheikh
To: "sampimpinthug@gmail.com"
Subject: test
Thread-Topic: test
Thread-Index: AcqgQ3ZLj8tW8+jFSA+Vgz5dd1gwMQ==
Date: Thu, 28 Jan 2010 17:58:14 +0000
Message-ID:
Accept-Language: en-US
Content-Language: en-US

X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Content-Type: multipart/alternative;
boundary="_000_D8C475B722E95D449334E73DD06751ECB0AF10SamSamChrisNetwor_"
MIME-Version: 1.0



--_000_D8C475B722E95D449334E73DD06751ECB0AF10SamSamChrisNetwor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable




test



--_000_D8C475B722E95D449334E73DD06751ECB0AF10SamSamChrisNetwor_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable



http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40">





test<=
/p>=



--_000_D8C475B722E95D449334E73DD06751ECB0AF10SamSamChrisNetwor



report from Port25:



This message is an automatic response from Port25's authentication verifier service at verifier.port25.com. The service allows email senders to perform a simple check of various sender authentication mechanisms. It is provided free of charge, in the hope that it is useful to the email community. While it is not officially supported, we welcome any feedback you may have at .




Thank you for using the verifier,



The Port25 Solutions, Inc. team



==========================================================





SPF check: pass

DomainKeys check: neutral
DKIM check: neutral
Sender-ID check: pass
SpamAssassin check: ham



==========================================================





HELO hostname: sam.samchrisnetwork.info

Source IP: 67.55.9.182
mail-from: SheikhS@SamChrisNetwork.info






SPF check details:



Result: pass
ID(s) verified: smtp.mail=SheikhS@SamChrisNetwork.info
DNS record(s):

SamChrisNetwork.info. 3600 IN TXT "v=spf1 ip4:67.55.9.182/24 mx a:sam.samchrisnetwork.info mx:mail.samchrisnetwork.info mx:sam.samchrisnetwork.info ~all"






DomainKeys check details:



Result: neutral (message not signed)
ID(s) verified: header.From=SheikhS@SamChrisNetwork.info
DNS record(s):







DKIM check details:



Result: neutral (message not signed)
ID(s) verified:



NOTE: DKIM checking has been performed based on the latest DKIM specs (RFC 4871 or draft-ietf-dkim-base-10) and verification may fail for older versions. If you are using Port25's PowerMTA, you need to use version 3.2r11 or later to get a compatible version of DKIM.







Sender-ID check details:



Result: pass
ID(s) verified: header.From=SheikhS@SamChrisNetwork.info
DNS record(s):
SamChrisNetwork.info. 3600 IN TXT "v=spf1 ip4:67.55.9.182/24 mx a:sam.samchrisnetwork.info mx:mail.samchrisnetwork.info mx:sam.samchrisnetwork.info ~all"







SpamAssassin check details:



SpamAssassin v3.2.5 (2008-06-10)



Result: ham (0.6 points, 5.0 required)



pts rule name description







-0.0 SPF_PASS SPF: sender matches SPF record
-0.7 BAYES_20 BODY: Bayesian spam probability is 5 to 20%
[score: 0.1146]
1.4 AWL AWL: From: address is in the auto white-list



==========================================================
Explanation of the possible results (adapted from






"pass"
the message passed the authentication test.



"fail"
the message failed the authentication test.



"softfail"
the message failed the authentication test, and the authentication
method has either an explicit or implicit policy which doesn't require
successful authentication of all messages from that domain.




"neutral"
the authentication method completed without errors, but was unable
to reach either a positive or a negative result about the message.



"temperror"
a temporary (recoverable) error occurred attempting to authenticate
the sender; either the process couldn't be completed locally, or
there was a temporary failure retrieving data required for the
authentication. A later retry may produce a more final result.




"permerror"
a permanent (unrecoverable) error occurred attempting to
authenticate the sender; either the process couldn't be completed
locally, or there was a permanent failure retrieving data required
for the authentication.



==========================================================






Return-Path:
Received: from sam.samchrisnetwork.info (67.55.9.182) by verifier.port25.com (PowerMTA(TM) v3.6a1) id hc0mn60hse8h for ; Wed, 27 Jan 2010 07:11:31 -0500 (envelope-from )
Authentication-Results: verifier.port25.com smtp.mail=SheikhS@SamChrisNetwork.info; mfrom=pass;
Authentication-Results: verifier.port25.com header.From=SheikhS@SamChrisNetwork.info; domainkeys=neutral (message not signed);
Authentication-Results: verifier.port25.com; dkim=neutral (message not signed);
Authentication-Results: verifier.port25.com header.From=SheikhS@SamChrisNetwork.info; pra=pass;
Received: from Sam.SamChrisNetwork.info ([fe80::b8d3:44f5:68fe:dc55]) by Sam.SamChrisNetwork.info ([fe80::b8d3:44f5:68fe:dc55%24]) with mapi; Wed, 27 Jan 2010 09:12:06 -0500
From: Sami Sheikh
To: "check-auth@verifier.port25.com"

Subject: Test
Thread-Topic: Test
Thread-Index: AcqfWrTNJAbICp6MQsiQwUi89zjagw==
Date: Wed, 27 Jan 2010 14:12:04 +0000
Message-ID: <7F8B8F33-B676-4736-8F74-AA7B40777F20@SamChrisNetwork.info>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Content-Type: text/plain; charset="us-ascii"

Content-ID:
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0



Test



Sent from my iPhone

amazon web services - Can I use AWS SNS to communicate between ECS Tasks?



I have an ECS cluster which can scale some tasks in a service based on the load. I also have a scheduled audit ECS task which runs periodically and wants to send a notification to these other tasks in the service so they can update their data set.



Can I use SNS to publish to a topic from the audit task and consume the notification in the other tasks in the cluster? I'm able to publish to SNS, but I don't see how the notification would be received in the other containers, since subscriptions can only be http/email/sms/application/sqs etc.




The tasks are implemented in golang. I wanted to avoid adding a new message bus and am hoping aws has some framework for this.



Thanks


Answer



You can communicate between ECS containers using SNS.



Based on what little you've said, you should consider Simple Queue Service (SQS). If you need to send messages to multiple destinations, it can be done with a combination of SNS and SQS.
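A minimal sketch of that SNS+SQS fan-out using the AWS CLI (the topic and queue names are made up, and the SQS access policy that allows SNS to deliver into the queue is omitted):

```shell
TOPIC_ARN=$(aws sns create-topic --name audit-events --query TopicArn --output text)
QUEUE_URL=$(aws sqs create-queue --queue-name task-updates --query QueueUrl --output text)
QUEUE_ARN=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
    --attribute-names QueueArn --query Attributes.QueueArn --output text)
# each consumer gets its own queue subscribed to the one topic
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol sqs --notification-endpoint "$QUEUE_ARN"
```

Each service task then polls its own queue (with `aws sqs receive-message`, or the Go SDK equivalent), so one published audit notification reaches every task.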


domain name system - Windows DNS servers repeatedly requesting records in zone when they get SERVFAIL response



We're seeing high levels (over 2000 requests/second) of DNS queries from our caching DNS servers to external servers. This may have been happening for a long time - this came to light recently because of performance problems with our firewall. Talking to colleagues at other institutions it's clear that we're making more queries than they are.



My initial thought was that the problem was lack of caching of SERVFAIL responses. Having done more investigation it's clear that the problem is a high level of requests for the failing record from the Windows DNS servers. It seems that in our environment a single query to one of the Windows DNS servers for a record from a zone which returns SERVFAIL results in a stream of requests for that record from all of the Windows DNS servers. The stream of requests doesn't stop until I add a fake empty zone on one of the Bind servers.



My plan tomorrow is to verify the configuration of the Windows DNS servers - they should just be forwarding to the caching Bind servers. I figure we must have something wrong there, as I can't believe that no one else has hit this if it's not a misconfiguration. I'll update this question after that (possibly closing this one and opening a new, clearer one).







Our setup is a pair of caching servers running Bind 9.3.6 which are used either directly by clients or via our Windows domain controllers. The caching servers pass queries to our main DNS servers which are running 9.8.4-P2 - these servers are authoritative for our domains and pass queries for other domains to external servers.



Behaviour we're seeing is that queries like the one below aren't being cached. I've verified this by looking at network traffic from the DNS servers using tcpdump.



 [root@dns1 named]# dig ptr 119.49.194.173.in-addr.arpa.

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-20.P1.el5_8.6 <<>> ptr 119.49.194.173.in-addr.arpa.
;; global options: printcmd
;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8680
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;119.49.194.173.in-addr.arpa. IN PTR

;; Query time: 950 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Mar 9 13:34:20 2014
;; MSG SIZE rcvd: 45



Querying google's server directly shows that we're getting a REFUSED response.



[root@dns1 named]# dig ptr 119.49.194.173.in-addr.arpa. @ns4.google.com.

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-20.P1.el5_8.6 <<>> ptr 119.49.194.173.in-addr.arpa. @ns4.google.com.
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 38825

;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;119.49.194.173.in-addr.arpa. IN PTR

;; Query time: 91 msec
;; SERVER: 216.239.38.10#53(216.239.38.10)
;; WHEN: Sun Mar 9 13:36:38 2014
;; MSG SIZE rcvd: 45



This isn't just happening with google addresses or reverse lookups but a high proportion of the queries are for those ranges (I suspect because of a Sophos reporting feature).



Should our DNS servers be caching these negative responses? I read http://tools.ietf.org/rfcmarkup?doc=2308 but didn't see anything about REFUSED. We don't specify lame-ttl in config file so I'd expect that to default to 10 minutes.



I believe this (the lack of caching) is expected behaviour. I don't understand why the other sites I've talked to aren't seeing the same thing. I've tried a test server running the latest stable version of Bind and that shows the same behaviour. I also tried Unbound and that didn't cache SERVFAIL either. There's some discussion of doing this in djbdns here, but the conclusion is that the functionality has been removed.



Are there settings in the Bind config that we could change to influence this behaviour? lame-ttl didn't help (and we were running with default anyway).



As part of investigation I've added some fake empty zones on our caching DNS servers to cover the ranges leading to most requests. That's dropped the number of requests to external servers but isn't sustainable (and feels wrong as well). In parallel with this I've asked a colleague to get logs from the Windows DNS servers so that we can identify the clients making the original requests.



Answer



Cause was obvious once I looked at the configuration of the Windows DNS servers (something got lost in the verbal report).



Each DC was configured to forward requests not only to the two caching Bind servers but also to all the other Windows DNS servers. For requests which were successful (including NXDOMAIN) that would work fine as the Bind servers would answer and we'd never fall through to the other Windows DNS. However for things that returned SERVFAIL one server would ask all the others which would in turn ask the Bind servers. I'm really surprised that this didn't cause more pain.



We'll take the extra forwarding out and I fully expect the volume of requests to drop dramatically.
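If it helps anyone cleaning up a similar setup: on Windows DNS the forwarder list can be replaced from the command line, e.g. (the server name and addresses are placeholders for a DC and the two caching Bind servers):

```shell
dnscmd DC1 /ResetForwarders 10.0.0.53 10.0.0.54
```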


Monday, November 14, 2016

ubuntu 14.04 - Ubuntu server running Apache with an SSL Cert Issue

A bit of background, as I'm new to the Ubuntu world: I've started supporting a company with some Ubuntu servers.



The old SSL certificate (not self-signed) is set to expire soon. I created the CSR and have got the new certificate. I have followed the instructions (https://www.digicert.com/ssl-certificate-installation-ubuntu-server-with-apache2.htm); according to the website I just have to add the below into the SSL VirtualHost.conf file.




<VirtualHost test.example.co.uk:443>
DocumentRoot /var/www/
SSLEngine on
SSLCertificateFile /path/to/your_domain_name.crt
SSLCertificateKeyFile /path/to/your_private.key
SSLCertificateChainFile /path/to/DigiCertCA.crt
</VirtualHost>


I have put all the files in what I believe is the correct place and have checked the server. However, when I run a test via https://globalsign.ssllabs.com/ it still shows that the old SSL cert is being used.

I have done the following so far:




  • I have checked in the /etc/apache2/sites-enabled folder and the redmine.conf file has nothing SSLCertificate related.


  • Checked in the /etc/apache2 folder for the apache2.conf file and there is nothing SSLCertificate related there either.

  • /etc/apache2/sites-available folder has 4 conf files, 3 of which have nothing SSL related, and then the default-ssl.conf file, which is where I put the above information. I restart the Apache server and it's like nothing has changed.



Any assistance would be great, or just pointing me in the correct direction.

Below are the contents of the default-ssl.conf with information taken out.






ServerAdmin webmaster@localhost

DocumentRoot /var/www/html

# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn


ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined

# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf


# SSL Engine Switch:
# Enable/Disable SSL for this virtual host.
SSLEngine on

# A self-signed (snakeoil) certificate can be created by installing
# the ssl-cert package. See
# /usr/share/doc/apache2/README.Debian.gz for more info.
# If both key and certificate are stored in the same file, only the
# SSLCertificateFile directive is needed.

SSLCertificateFile /etc/ssl/certs/domain.co.uk.crt
SSLCertificateKeyFile /etc/ssl/private/domain.co.uk.key
SSLCACertificateChainFile /etc/apache2/ssl.crt/intermediate_domain_ca.crt

# Server Certificate Chain:
# Point SSLCertificateChainFile at a file containing the
# concatenation of PEM encoded CA certificates which form the
# certificate chain for the server certificate. Alternatively
# the referenced file can be the same as SSLCertificateFile
# when the CA certificates are directly appended to the server

# certificate for convinience.
#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

# Certificate Authority (CA):
# Set the CA certificate verification path where to find CA
# certificates for client authentication or alternatively one
# huge file containing all of them (file must be PEM encoded)
# Note: Inside SSLCACertificatePath you need hash symlinks
# to point to the certificate files. Use the provided
# Makefile to update the hash symlinks after changes.

#SSLCACertificatePath /etc/ssl/certs/
#SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

# Certificate Revocation Lists (CRL):
# Set the CA revocation path where to find CA CRLs for client
# authentication or alternatively one huge file containing all
# of them (file must be PEM encoded)
# Note: Inside SSLCARevocationPath you need hash symlinks
# to point to the certificate files. Use the provided
# Makefile to update the hash symlinks after changes.

#SSLCARevocationPath /etc/apache2/ssl.crl/
#SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

# Client Authentication (Type):
# Client certificate verification type and depth. Types are
# none, optional, require and optional_no_ca. Depth is a
# number which specifies how deeply to verify the certificate
# issuer chain before deciding the certificate is not valid.
#SSLVerifyClient require
#SSLVerifyDepth 10


# SSL Engine Options:
# Set various options for the SSL engine.
# o FakeBasicAuth:
# Translate the client X.509 into a Basic Authorisation. This means that
# the standard Auth/DBMAuth methods can be used for access control. The
# user name is the `one line' version of the client's X.509 certificate.
# Note that no password is obtained from the user. Every entry in the user
# file needs this password: `xxj31ZMTZzkVA'.
# o ExportCertData:

# This exports two additional environment variables: SSL_CLIENT_CERT and
# SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
# server (always existing) and the client (only existing when client
# authentication is used). This can be used to import the certificates
# into CGI scripts.
# o StdEnvVars:
# This exports the standard SSL/TLS related `SSL_*' environment variables.
# Per default this exportation is switched off for performance reasons,
# because the extraction step is an expensive operation and is usually
# useless for serving static content. So one usually enables the

# exportation for CGI and SSI requests only.
# o OptRenegotiate:
# This enables optimized SSL connection renegotiation handling when SSL
# directives are used in per-directory context.
#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire

SSLOptions +StdEnvVars


SSLOptions +StdEnvVars



# SSL Protocol Adjustments:
# The safe and default but still SSL/TLS standard compliant shutdown
# approach is that mod_ssl sends the close notify alert but doesn't wait for
# the close notify alert from client. When you need a different shutdown
# approach you can use one of the following variables:
# o ssl-unclean-shutdown:
# This forces an unclean shutdown when the connection is closed, i.e. no
# SSL close notify alert is send or allowed to received. This violates

# the SSL/TLS standard but is needed for some brain-dead browsers. Use
# this when you receive I/O errors because of the standard approach where
# mod_ssl sends the close notify alert.
# o ssl-accurate-shutdown:
# This forces an accurate shutdown when the connection is closed, i.e. a
# SSL close notify alert is send and mod_ssl waits for the close notify
# alert of the client. This is 100% SSL/TLS standard compliant, but in
# practice often causes hanging connections with brain-dead browsers. Use
# this only for browsers where you know that their SSL implementation
# works correctly.

# Notice: Most problems of broken clients are also related to the HTTP
# keep-alive facility, so you usually additionally want to disable
# keep-alive for those clients, too. Use variable "nokeepalive" for this.
# Similarly, one has to force some clients to use HTTP/1.0 to workaround
# their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
# "force-response-1.0" for this.
BrowserMatch "MSIE [2-6]" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
# MSIE 7 and newer should be able to use keepalive

BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown






# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

iptables - port forwarding to backend server

I'm running an OpenVPN server on my VPS with a public IP. There is a backend server connected to the VPN. These are the IPs on the VPN: VPS: 10.8.0.1, backend server: 10.8.0.2.
eth0 is the public interface, tun0 is the VPN interface.



Now, I'd like to forward, for instance, port 22 on the backend server to port 2200 on the VPS. Here is what I did on the VPS (based on several tutorials and already asked questions):





  1. opened port 2200

  2. enabled IPv4 forwarding

  3. put this into /etc/ufw/before.rules (yes, I'm using ufw and it works correctly):



    *nat
    :PREROUTING ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    -A PREROUTING -i eth0 -p tcp --dport 2200 -j DNAT --to-destination 10.8.0.2:22
    -A POSTROUTING -d 10.8.0.2 -p tcp --dport 22 -j SNAT --to-source VPS-public-IP:2200
    -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

  4. Reloaded ufw or even rebooted everything...


  5. Tried other solutions, commenting some lines out (such as the first POSTROUTING rule above). Nothing -obviously- helped.





Output of nmap VPS-public-IP -p 2200 says the port is 'filtered', and when I try to ssh to port 2200 it just hangs and does nothing; I don't even get any error. That also happens when I try to ssh from the VPS to the backend server over the VPN (which normally works). What am I missing?
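One commonly suggested variant of the rules in the question (not from the post itself): SNAT the forwarded traffic to the VPS's VPN address (10.8.0.1) instead of its public IP, so the backend's replies go back through the tunnel rather than out its own default route:

```shell
# in the *nat section of /etc/ufw/before.rules
-A PREROUTING  -i eth0 -p tcp --dport 2200 -j DNAT --to-destination 10.8.0.2:22
-A POSTROUTING -d 10.8.0.2 -p tcp --dport 22 -j SNAT --to-source 10.8.0.1
```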

Sunday, November 13, 2016

Linux Routing with two NICs (LAN vs Internet) with NAT and bridging for VMs



My Setup:



There is only one physical machine in this setup, a Host System for Virtual Machines (VMs) with two network adapters.




One NIC (eth0) is connected to an internal network (LAN subnet, e.g. 10.x.x.x/24) and shall be used for internal traffic.



The other NIC (eth1) is connected to public internet (it has a public routable IP configured). This connection shall be used to port-forward public internet traffic to internal IPs of the VMs (incoming traffic) and to allow the VMs to access public internet (outgoing traffic) via NAT.



Virtual Machines use IP addresses in the LAN-Subnet (10.x.x.x/24, same as eth0)



I've got a bridge device (br0) configured for virtual network interfaces of the VMs (vnet0, vnet1, ...) and the LAN-NIC (eth0). That means:





  • br0 has an IP-Adress in the LAN subnet (10.x.x.x/24)

  • eth0 is added to the bridge

  • vnet0, vnet1, ... (used by the VMs) are dynamically added to the bridge



Problems



Communication within the LAN works fine. Also, the VM host is accessible via the public IP and has internet access.



My problem is the NAT configuration to allow the VMs to access public internet, too.




I tried to use a simple (S)NAT rule:



iptables -t nat -I POSTROUTING -s 10.x.x.x/24 ! -d 10.x.x.x/24 -j SNAT --to-source y.y.y.102


Whereas y.y.y.102 is the public routable IP of the second NIC (eth1).



I found out that I need to enable "ip_forward" and "bridge-nf-call-iptables":




echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables


Otherwise the bridged packets won't be processed by iptables.



Now the packets from the VMs seem to go through the following Chains of iptables:




  • "FORWARD" (regular) - I accept them there (-j ACCEPT, counter goes up)


  • "PREROUTING" (nat) - I accept them there (policy ACCEPT, counter goes up)

  • "POSTROUTING" (nat) - They match the SNAT rule



But not all packets seem to arrive at PRE/POSTROUTING, for a reason I couldn't figure out so far.



However, more interestingly, tcpdump -i eth0 vs. tcpdump -i eth1 shows that the packets (I tried to ping an external IP from within a VM) seem to be sent via the wrong interface, eth0 (the LAN NIC), even though the NAT rule was applied, so the source address was changed to the IP of the other NIC (eth1).



QUESTIONs:




How can I configure the system to output the NATed packets with the public IP as source address to be sent over the correct NIC (eth1)?



Do I somehow need to add eth1 to the bridge (br0)? If so, how do I assign the public IP address correctly? Usually the IP needs to be configured on the bridge device. Would I need to assign an alias address to the bridge (public IP on br0:0)?



Configuration Details



The routing configuration on the host system:



# ip r
default via y.y.y.126 dev eth1

10.x.x.0/24 dev br0 proto kernel scope link src 10.x.x.11
y.y.y.96/27 dev eth1 proto kernel scope link src y.y.y.102



  • IP: y.y.y.126 is our router for public internet.

  • IP: y.y.y.102 is the public IP of the host machine

  • IP: 10.x.x.11 is the LAN IP of the host machine

  • SUBNET: 10.x.x.0/24 is the LAN

  • SUBNET: y.y.y.96/27 is the public IP subnet




NIC configuration:



# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.x.x.11 netmask 255.255.255.0 broadcast 10.x.x.255
inet6 ####::###:####:####:#### prefixlen 64 scopeid 0x20
ether ##:##:##:##:##:## txqueuelen 0 (Ethernet)
RX packets 2139490 bytes 243693436 (232.4 MiB)

RX errors 0 dropped 0 overruns 0 frame 0
TX packets 29085 bytes 2398024 (2.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 ####::###:####:####:#### prefixlen 64 scopeid 0x20
ether ##:##:##:##:##:## txqueuelen 1000 (Ethernet)
RX packets 2521995 bytes 290600491 (277.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 383089 bytes 48876399 (46.6 MiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xdfa60000-dfa7ffff

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet y.y.y.102 netmask 255.255.255.224 broadcast y.y.y.127
inet6 ####::###:####:####:#### prefixlen 64 scopeid 0x20
ether ##:##:##:##:##:## txqueuelen 1000 (Ethernet)
RX packets 2681476 bytes 597532550 (569.8 MiB)
RX errors 0 dropped 130 overruns 0 frame 0
TX packets 187755 bytes 21894113 (20.8 MiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xdfa00000-dfa1ffff


Bridge configuration:



# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.002590eb1900 no eth0
vnet0



And iptables rules:



# iptables -vnL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
723 106K DROP udp -- * * y.y.y.0/24 0.0.0.0/0 udp spt:5404
586 40052 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
5 420 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0

0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
2 458 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4
2 458 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
1343 173K ACCEPT tcp -- * * 10.x.x.2 0.0.0.0/0 tcp spt:3389
1648 127K ACCEPT tcp -- * * 0.0.0.0/0 10.x.x.2 tcp dpt:3389
18 1040 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4

18 1040 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0

Chain OUTPUT (policy ACCEPT 525 packets, 84016 bytes)
pkts bytes target prot opt in out source destination


# iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 13 packets, 1218 bytes)
pkts bytes target prot opt in out source destination


Chain INPUT (policy ACCEPT 5 packets, 420 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 13 packets, 880 bytes)
pkts bytes target prot opt in out source destination

Chain POSTROUTING (policy ACCEPT 14 packets, 920 bytes)
pkts bytes target prot opt in out source destination
5 300 SNAT all -- * * 10.x.x.0/24 !10.x.x.0/24 to:y.y.y.102



And here a captured NATed packet (ping from VM) on LAN interface card:



# tcpdump -i eth0
12:53:55.243350 IP y.y.y.102 > y.y.y.110: ICMP echo request, id 2, seq 5, length 40


Output of "ip rule":



# ip rule

0: from all lookup local
32766: from all lookup main
32767: from all lookup default

Answer




  1. Check that your VMs have IP addresses in 10.x.x.0/24 (netmask 255.255.255.0)


  2. Set 10.x.x.11 (br0 ip address) as the default gateway of your VMs


  3. Enable ip forwarding on the physical host


  4. Enable SNAT with:




    iptables -t nat -A POSTROUTING -s 10.x.x.0/24 -o eth1 -j SNAT --to y.y.y.102
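The decisive difference from the rule in the question is the -o eth1 match. With bridge-nf-call-iptables enabled, frames that are merely bridged (not routed) also traverse nat/POSTROUTING, get rewritten by an unrestricted SNAT rule, and still leave via the bridge port eth0 — which is exactly the symptom observed. The -o eth1 match limits SNAT to traffic actually routed out the public NIC, which is why the VMs must use 10.x.x.11 as their gateway. A hedged sketch of how to verify after applying the rule (interface names taken from the question):

```
# iptables -t nat -vnL POSTROUTING      <- the SNAT packet counter should rise
# tcpdump -ni eth1 icmp                 <- pings from a VM should now leave via eth1
```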


linux - Why does my hostname appear with the address 127.0.1.1 rather than 127.0.0.1 in /etc/hosts?



This may be a bit of a noobish question, but I was taking a look at /etc/hosts on my new Xubuntu install and saw this:



127.0.0.1 localhost

127.0.1.1 myhostname


On most 'nixes I've used, the second line is omitted, and if I want to add my hostname to the hosts file, I'd just do this:



127.0.0.1 localhost myhostname


Is there a difference between these two files in any practical sense?


Answer




There isn't a great deal of difference between the two; all of 127/8 (i.e. 127.0.0.0 through 127.255.255.255) is bound to the loopback interface.



The reason why is documented in the Debian manual in Ch. 5 Network Setup - 5.1.1. The hostname resolution.



Ultimately, it is a bug workaround; the original report is 316099.
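The practical effect can be sketched without touching the real /etc/hosts (file contents taken from the question; the awk lookup is a simplified stand-in for the resolver's hosts-file lookup):

```shell
# Sketch: the Debian/Ubuntu layout, written to a temp file instead of /etc/hosts
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
127.0.0.1 localhost
127.0.1.1 myhostname
EOF

# A hosts-file lookup of the machine's own name returns 127.0.1.1, so the
# hostname gets a stable address distinct from "localhost":
awk '$2 == "myhostname" { print $1 }' "$hosts"

rm -f "$hosts"
```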


Saturday, November 12, 2016

exim - Cannot send mails to different domain with mail server in HostGator and website in Digital Ocean (From header is missing)

Recently I moved from HostGator with cPanel to a Digital Ocean droplet.



I still want to use the email service from HostGator.



So in DigitalOcean I added an A record pointing to HostGator IP, and MX record pointing to mail.mydomain.com



I can receive and send emails normally.




When I want to send an email from a Laravel application, unless it uses the same domain of my website, the mail is not sent, for example if I wanted to send to a Gmail account.



I tried with telnet, and in the inbox I saw that Gmail rejects the message because it doesn't have a "From" header.



Before the migration everything worked correctly so I don't think that I need to add that header to my Laravel email.



Is there something that I am missing in DNS configuration or cPanel?



This is the configuration I was using before the migration, and it worked fine; now it sends emails only when the recipient is on the same domain as the HostGator account, my-domain.com:




MAIL_DRIVER=smtp
MAIL_HOST=cloud232.hostgator.com (I tried with mail.my-domain.com and it works too)
MAIL_USERNAME=noreply@my-domain.com
MAIL_PASSWORD=password
MAIL_ENCRYPTION=ssl (I tried with tls and port=587 and it works too)
MAIL_FROM_NAME=name
MAIL_PORT=465



Thanks in advance.
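One detail worth checking (an assumption, since the posted .env looks otherwise complete): Laravel fills the From: header from MAIL_FROM_ADDRESS, and only MAIL_FROM_NAME is set above. Without an address, mail can leave with no usable From header, which matches Gmail's rejection. A hedged sketch of the missing line:

```
MAIL_FROM_ADDRESS=noreply@my-domain.com
```

The address should normally match the authenticated mailbox, since many SMTP servers refuse to send as anyone else.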

Set root domain record to be a CNAME




I need to create a CNAME record at the root of a domain, so that two domains point at one IP address without my having to maintain the current IP in two different places.



The DNS provider for this domain is DynDNS, but they block this operation:




CNAME cannot be created with label
that is equal to zone name





I can do this with another domain whose DNS is served by 1and1:



root@srv-ubuntu:~# dig myseconddomain.co.uk

; <<>> DiG 9.4.2-P1 <<>> myseconddomain.co.uk
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61795
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0


;; QUESTION SECTION:
;myseconddomain.co.uk. IN A

;; ANSWER SECTION:
myseconddomain.co.uk. 71605 IN CNAME myfirstdomain.co.uk.
myfirstdomain.co.uk. 59 IN A www.xxx.yyy.zzz

;; Query time: 298 msec
;; SERVER: 10.0.0.10#53(10.0.0.10)
;; WHEN: Tue Aug 18 14:17:26 2009

;; MSG SIZE rcvd: 78


Is this a breach of the RFCs or does DynDNS have a legitimate reason for blocking this action?



Followup
Thanks to the two answers already posted, I now know that 1and1 is indeed breaching the RFCs by doing this. However, it does work, and they seem to support it. For a company that hosts so many domains, it seems very odd that they get away with this on such a massive scale without objection.



More followup




The output of "dig myseconddomain.co.uk ns" as requested.



root@srv-ubuntu:~# dig myseconddomain.co.uk ns

; <<>> DiG 9.4.2-P1 <<>> myseconddomain.co.uk ns
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18085
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2


;; QUESTION SECTION:
; myseconddomain.co.uk. IN NS

;; ANSWER SECTION:
myseconddomain.co.uk. 4798 IN NS ns67.1and1.co.uk.
myseconddomain.co.uk. 4798 IN NS ns68.1and1.co.uk.

;; ADDITIONAL SECTION:
ns67.1and1.co.uk. 78798 IN A 195.20.224.201
ns68.1and1.co.uk. 86400 IN A 212.227.123.89


;; Query time: 59 msec
;; SERVER: 10.0.0.10#53(10.0.0.10)
;; WHEN: Wed Aug 19 12:54:58 2009
;; MSG SIZE rcvd: 111

Answer



Correct, it is a breach of RFC 1034, section 3.6.2, paragraph 3:





... If a CNAME RR is present at a node, no other data should be present; this ensures that the data for a canonical name and its aliases cannot be different. ...




This applies here because the root of your zone must also have SOA and NS records.


Friday, November 11, 2016

Linux automatically creating LVM partitions on RAID members?



I've had a software RAID1 array in production for over a year, with LVM partitions on top of /dev/md0. I rebooted over the weekend to apply some kernel patches, and now the array won't come up: I get the "Continue to wait; or Press S to skip mounting or M for manual recovery" prompt on boot.

I hit M, log in as root, and the RAID array is up, but none of the LVM partitions are available. It's like everything is gone. I stopped the array and brought it up on a single disk (it's RAID1) with --run. OK, the LVM stuff is there now. So I added a new disk to the degraded array, and it starts rebuilding.

I then ran fdisk on the new disk I just added, and there's a brand-new partition there of type 'Linux LVM'. I did not add that partition. What's going on? I'm not even using partitions; I'm just using the raw devices.


Answer



The only way I could get the software RAID array stable was to leave LVM off and hard-partition. I tried using both raw devices and type 0xFD partitions, and as soon as I put LVM on /dev/md0, the partition types would automatically change from 0xFD to "Linux LVM" on all the RAID members. Very, very strange. I've been using LVM over Linux software RAID for nearly a decade and have never seen this problem before. I'm buying an Areca card.
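A hedged way to see what the tools are reacting to is to list the on-disk signatures on the raw members; if an LVM2 PV header sits directly on the member disks (rather than only inside /dev/md0), partitioning tools and udev can mis-type them. Device names below are placeholders for the RAID members, and wipefs with no options only reports — it changes nothing:

```
# wipefs /dev/sda
# wipefs /dev/sdb
# blkid /dev/sda /dev/sdb /dev/md0
```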


ssl - Nginx sslv3 poodle disable



I tried to set up an SSL certificate without SSLv3 in my nginx, but SSL Labs says my server still has SSLv3 enabled. How do I disable it?



My config:



add_header Strict-Transport-Security max-age=31536000;
add_header X-Frame-Options DENY;
ssl_session_cache shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED";
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains';

Answer



Here is a good tutorial on how to configure nginx with strong SSL settings.




https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html



Your ssl_protocols configuration is correct; SSLv3 is not in the list.



ssl_protocols TLSv1 TLSv1.1 TLSv1.2;


In the post is a section for your ciphers.




ssl_ciphers 'AES256+EECDH:AES256+EDH';
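If SSL Labs still reports SSLv3 after this, one common cause (an assumption worth checking) is that the configuration was never reloaded, or that the handshake's protocol list is taken from the default server block for that IP:port, so another server {} listening on 443 may still permit SSLv3. A sketch:

```
# nginx -t && service nginx reload     <- re-test only after a successful reload

server {
    listen 443 ssl default_server;     # this server's ssl_protocols governs the handshake
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ...
}
```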

domain name system - how to find out who is managing my DNS records?



I have a following situation:
a website is registered with registrar X, hosted on server Y and about to move to server Z.




Neither server Y nor server Z manages the domain's DNS; according to them, “it is managed by the company it was purchased from and then pointed to their servers in order to publish the site”.



In fact, on both servers there is no trace of DNS management for my domain (I cannot update the A records, for example). I spoke to both providers, Y and Z, and they said they cannot manage something they don’t “see”.



I spoke to registrar X, and they deny managing the DNS for this particular domain, as “their settings only require you to point to your DNS host addresses. Any further settings such as 'A' records would have to be configured using the control panel of your DNS hosting provider.”



In fact inside the control panel of registrar X I can only update the name servers and that’s all. I cannot update the ‘A’ records.



I looked up the website's DNS on the web, and it appears to point to server Y.




Moreover the default DNS name servers for both server Y and Z are exactly the same.



Is there any way for me to find out who is managing this domain's DNS? What can I do?


Answer



There are three parts to this.



The first is your domain registrar. This organization is who you purchased the domain name from. The domain registrar is going to be the organization that specifies if you are using delegated Name Servers. The delegated Name Servers are where your zone file is going to be located.



The second is your Name Server. Whatever your domain registrar is configured to use as your Name Servers is where your zone file is. That is where you want to access to make changes to DNS records.




The third is your hosting provider. You might have a third company that hosts Internet (web, files, email) content. You can see who the hosting providers are by reviewing the A and MX records on the Name Server.



The easiest way to determine what the authoritative Name Servers for your domain are is to go to MXToolbox and lookup your domain name.



To demonstrate let's go to MXToolbox and lookup the mail exchanger (MX) record for example.org.



http://mxtoolbox.com/SuperTool.aspx?action=mx%3aexample.org&run=toolpage



It will say "No Records Exist", and near that you will see "Reported by [xyz]". Whatever name comes after "Reported by" is your Name Server. Whoever owns the Name Server is who manages your DNS. You will need to contact them if you are not able to make changes to your DNS records on your own.
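The same lookup can be done from a terminal (assuming dig and whois are installed; replace example.org with your domain):

```
# dig +short NS example.org       <- the authoritative Name Servers for the zone
# whois example.org               <- the "Name Server:" lines show what the registrar delegated
```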



linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...