Friday, November 30, 2018

windows - How to register hostname via DHCP?



I am a software developer working on the development of a network device (a SIP phone). When it boots up, I want it to register a host name on our network so that a customer can easily browse to the web interface. I have talked with the company that is developing the software for the device, and in other projects they have provided the host name in DHCP Option 12, and that somehow (magically) gets registered in DNS with the IP address returned from the DHCP request.




So I have a test build of the software modification that includes the device setting DHCP Option 12 with a host name based on the MAC address (e.g. SIP100_0026FDF00057). However, I cannot resolve that hostname from my Windows machine. The DHCP and DNS servers are on Windows Server.



Is there some special configuration on the DHCP and/or DNS to make this registration happen?


Answer



Windows systems that are members of a domain can automatically register their hostnames in the domain DNS, but only Windows systems can do this on their own.



Microsoft DHCP can be configured to register DNS names on behalf of those clients which can't do that by themselves (like Linux ones); this is what should be done if you want your device to automatically appear in your DNS. You can configure this on the DHCP server's properties.
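For example, on Windows Server 2012 or later the same behaviour can be set from PowerShell with the DhcpServer module; this is only a sketch, and the server name "dhcp01" is a placeholder:

# Register A and PTR records for every client that gets a lease, even clients
# that never request a dynamic update themselves (such as non-Windows devices).
Set-DhcpServerv4DnsSetting -ComputerName "dhcp01" `
    -DynamicUpdates Always `
    -UpdateDnsRRForOlderClients $true `
    -DeleteDnsRROnLeaseExpiry $true

You may also need to configure DNS dynamic update credentials on the DHCP server so it can own, and later clean up, the records it creates.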



Be careful, though: this means that any client that is handed a DHCP lease will get registered in DNS.




More info here.


Thursday, November 29, 2018

storage - DL380 G4 - How to properly configure raid arrays



All,



I have a DL380 G4 server with the following drives:





  • 2x 36.4GB - 15K

  • 2x 36.4GB - 10K

  • 2x 300GB - 10K



I'm trying to figure out how best to configure these drives. This is the first time I'm working with an HP server. It has an HP Smart Array 6i controller and a total of 6 drive bays. I don't need to use all of them.



The target OS most likely will be CentOS 5.5.




Any insight would be appreciated.



Thanks.



EDIT:




  • This will be a CPanel web server (with MySQL, Exim, Apache, etc.). I'm moving off a dedicated server and getting this machine colocated.

  • When I fired up the SA 6i configuration utility, it didn't give me a RAID 1 option. Just 5, 1+0, and 0. I'll see if the firmware needs to be updated tomorrow morning.



Answer



If I remember correctly, on that RAID controller the 1+0 option will let you create a two-disk mirror. I can't remember the logic behind it, but I remember it seeming dumb.



Since it's a new system there is no harm in trying.



Mezgani's recommendations are pretty much spot on too :-)


Suggest a good RAID card (one each for SATA and SAS) for Ubuntu 10.04 that will be recognized easily

I have an IBM x3550 server. The RAID controller it ships with is an LSI MegaRAID and is not recognized by the Ubuntu boot CD. I am looking for an alternative PCI Express RAID card that works with 10.04.

Tuesday, November 27, 2018

storage - Is brand-name NAS overpriced?




NetGear's ReadyNAS 2100 has 4 disk slots and costs $2000 with no disks. That seems a bit too expensive for just 4 disk slots.



Dell has good network storage solutions too. PowerVault NX3000 has 6 disk slots, so that's an improvement. However, it costs $3500; the NX3100 doubles the number of disks at double the price. Just in case I'm looking at the wrong hardware for lots of storage, the trusty PowerVault MD3000i SAN has a good 15 drives, but it starts at $7000.



While you can argue about support from Dell, Netgear or HP or any other company being serious, it's still pretty damn expensive to get those drives RAID'ed together in a box and served via iSCSI. There's a much cheaper option: build it yourself. Backblaze has built its own box, housing 45 (that's forty-five) SATA drives for a little under $8000, including the drives themselves. That's at least 10 times cheaper than current offers from Dell, Sun, HP, etc.



Why is NAS (or SAN - still storage attached to a network) so expensive? After all, its main function is to house a number of HDDs, create a RAID array and serve them over a protocol like iSCSI; nearly everything else is just colored bubbles (AKA marketing terms).


Answer



This really depends on your point of view.




If I'm an ISV who needs to launch on the tiniest possible budget but I need a crapload of storage, then yes, a brand-name box will be too expensive and the risk/reward of a home-made FreeNAS box would most likely be an acceptable solution.



However, if I'm a mega-multi-national corporation with 10,000 users, and I run a datacentre that supports a billion-dollar-a-year company, and if the datacentre going offline is going to cost on the order of $100,000 a minute, then you can bet your arse I'm going to buy a top-shelf brand-name NAS with a 2-hour no-questions-asked replacement SLA. Yes, it's going to cost me 100x more than a DIY box, but the day your entire array fails and you've got 10TB of critical storage offline, that $100,000 investment is going to pay for itself in about 2 hours flat.



For someone like Backblaze, where storage volume is king, it makes sense to roll their own - but that's their core competency: providing storage. Dell, EMC, etc. aim their products at those for whom storage is not the primary focus.



Of course, it's all totally pointless if you don't have backups, but that's another story for another day.


Monday, November 26, 2018

domain name system - Windows DNS Server: Problems after DNS server cache clearing. What does Clear-DnsServerCache do?



On Friday I changed a public DNS A-record to a new IP address on our provider's DNS service for our public web-domain. To make these changes populate faster in our intranet (for our intranet clients/users) I used the powershell command Clear-DnsServerCache on our Windows 2012R2 DNS Server machine.



My understanding of the command is that only the cache will be deleted; no records or anything else will be touched. Therefore (so I thought) the only negative implication of deleting the whole cache might be lower performance in resolving names. Hence I did not bother deleting only the cached records for the affected domain name, but deleted the whole cache. As we only have around 20 people working at this site, I considered the performance penalty of a deleted cache negligible.




Note: The machine on which the DNS server is running is also a synchronized AD domain controller. It is a Windows 2012 R2 Standard machine. This DNS server hosts AD-integrated zones. Replication is active with two other DNS/Active Directory servers located at our headquarters. On this server we also have reverse DNS lookup set up, and query forwarding is active to all replicated servers (each DNS server has the other two replicated servers set up as forwarders).



Today (after the weekend) we have massive DNS problems. The _gc, _kerberos and _ldap forward-lookup entries in the AD-integrated zones are missing from the DNS server. Hence we have problems such as clients not being able to locate the domain controllers, et cetera. My team is now discussing what the cause could be.



Can it be, that the Clear-DnsServerCache did cause this?
The Technet article at https://technet.microsoft.com/en-us/library/jj649893(v=wps.630).aspx did not help either.



Side-question: is Clear-DnsServerCache doing the same as dnscmd /clearcache and also the same as in the GUI (DNS Management Console, View -> Advanced, then right-click on Cached Lookups and then Clear Cache)?



Update 2017-02-08

Thanks to all the commenters. Based on your input I am now confident that our problems have nothing to do with Clear-DnsServerCache. Which leaves the question what caused our loss of multiple AD-relevant SRV records.
If I find out I will come back and write another update. Though I have my doubts whether we will ever find out.


Answer



Q: Side-question: is Clear-DnsServerCache doing the same as dnscmd /clearcache and also the same as in the GUI (DNS Management Console, View -> Advanced, then right-click on Cached Lookups and then Clear Cache)?



A: Yes.



That command should not have had any effect on the AD SRV records.



That being said, you can recreate the missing SRV records by using one of the following methods (command-line forms are sketched after the list):





  1. Restarting the Netlogon service on one of your DCs.


  2. Importing the SRV records from %SystemRoot%\System32\Config\Netlogon.dns on one of your DCs.


  3. Running DCDiag /Fix


  4. Running NetDiag /Fix
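For reference, a minimal sketch of the command-line forms of options 1, 3 and 4 above, run from an elevated prompt on one of the DCs (on Server 2008 and later DCDiag is built in; NetDiag comes from the 2003-era Support Tools):

REM Option 1: restart the Netlogon service so the DC re-registers its SRV records
net stop netlogon
net start netlogon

REM Option 3: have dcdiag re-register any missing DC DNS records
dcdiag /fix

REM Option 4: NetDiag can do the same on older systems
netdiag /fix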



Sunday, November 25, 2018

svn - Subversion, Bluehost, and TortoiseSVN

Setting Up TortoiseSVN (on Windows) to SSH Tunnel to a Bluehost Subversion Server



I had a lot of trouble setting this up, so I hope this can be a resource to others. Please fix up any errors you find in my instructions.



1. Request SSH Access




You'll need SSH Access, so make sure you request that through the "SSH/Shell Access" option on your CPanel.



2. Download an SSH Client



You'll also need an SSH client, so download the latest version of PuTTY. You will also need an FTP client; I recommend FileZilla.



3. Install Subversion



Use PuTTY to log into your server and install Subversion using the following instructions (you may want to go ahead and update the version numbers):

http://www.bluehostforum.com/showthread.php?12099-Setting-up-Subversion-on-Bluehost



Make sure that you correctly installed Subversion by creating a repository at /home/username/svn and importing a project into it, using this tutorial:



http://svnbook.red-bean.com/en/1.5/svn.intro.quickstart.html
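For example, a minimal sketch of that check; the paths assume Subversion was installed into ~/bin as in the Bluehost thread, and "myproject" is just a placeholder:

# create the repository and import a local project into it
~/bin/svnadmin create /home/username/svn
~/bin/svn import ./myproject file:///home/username/svn/myproject -m "Initial import"

# list the repository contents to confirm the import worked
~/bin/svn list file:///home/username/svn/myproject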



4. Create an SSH Authentication Key Pair



Bluehost won't allow us to tunnel directly over SSH (read more), so we'll need to set up some authentication keys.




You can do this via SSH, using this tutorial:
http://tortoisesvn.net/ssh_howto



Or you can simply log into your CPanel and create an SSH key via the "SSH/Shell Access" GUI. Either way, make sure you authorize the key (by manually adding it to authorized_keys as in the above tutorial, or through the CPanel GUI).



Remember to provide a passphrase for your key. Many tutorials suggest leaving it blank so that TortoiseSVN won't prompt you for a password. However, we can set Pageant up for this very same purpose without creating an unsafe SSH key.



5. Convert the Private Key



After Step 4, you should have both a private key file (such as id_dsa) and a public key file (such as id_dsa.pub). Download the private key file to your desktop.




Download PuTTYgen.



Open PuTTYgen, go to Conversions > Import Key, and find your private key file on your desktop. Enter your private key's passphrase and then click "Save private key." Save the converted PuTTY key to a place that you will remember (and won't change).



6. Create a PuTTY Session



Open PuTTY and enter the following fields:



Session > Host Name: (Your Host Name)

Session > Saved Sessions Name: "Subversion"
Connection > SSH > Auth > Private key file for authentication: (Your Converted Private Key)



Go back to the "Session" screen and click "Save" near "Saved Sessions" to save this information.



Now that the Session has been created, select "Subversion" in "Saved Sessions," click "Load", and then click "Open". You'll be asked for your username as well as your passphrase, and then you should gain access to your server.



7. Configure the PuTTY Session in Pageant



Download Pageant.




Open up Pageant, and it should appear in your taskbar. Right-click the Pageant icon and select "Add Key." Find your private key file and then enter your passphrase.



Open up PuTTY again and reconnect using your "Subversion" session. You'll be asked for your username, but you should no longer have to enter a passphrase.



8. Add the svnserve Path Command to Authorized Keys



Find authorized_keys in /home/username/.ssh/ and modify this file so that the following appears right before your key (Pageant may block your FTP client, so you may need to close it):



command="/home/username/bin/svnserve -t" (KEY NAME) (KEY)




(Source: http://www.mikespicer.net/wp/?p=41)



9. Connect TortoiseSVN



Download and install TortoiseSVN.



Open up Pageant and again add your private key.



Right-click somewhere, select "TortoiseSVN > RepoBrowser," and a dialog box will come up. Type in "svn+ssh://username@Subversion/home/bin/svn".




You should now see your repository (finally!).

Saturday, November 24, 2018

Limit on number of tables in a MySql Database




Is there a limit on the number of tables that can be created in a MySQL database?
Is this limit engine-specific?
What about MyISAM?


Answer



It is engine-specific. InnoDB, I think, supports 2 billion tables across the tablespace.
MyISAM does not place any limit on the number of tables; you will instead be limited by things like the inodes in the underlying filesystem or other operating system limits.
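As a rough sanity check on a Linux host (assuming the default data directory /var/lib/mysql), you can watch file and inode usage directly, since each MyISAM table lives on disk as .frm/.MYD/.MYI files:

# inode usage of the filesystem holding the data directory
df -i /var/lib/mysql

# number of files the databases currently occupy
find /var/lib/mysql -type f | wc -l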



Here is a reference: http://www.mysqlfaqs.net/mysql-faqs/Database-Structure/What-is-limit-on-the-maximum-number-of-tables-allowed-in-MySQL



On a practical level I'd expect that the performance of your application would collapse before you reached the limits.



ubuntu - Sharing a linux desktop server for multiple users: remote desktop or virtualization?

We are a small web software company (~10 people). At present, every dev works on his local machine (some Windows, some Ubuntu) using a local Apache. We have a Samba share for shared files and central SVN repositories.



I would like to centralize our infrastructure in the future, making everybody work on a central server. There are 2 options:




  • Virtualization: everybody gets their own virtual machine on a central, fat server.
    Pro: quick setup, isolation of the users, new boxes added fast.
    Con: as every user has their own OS, it's a little hungry for hardware. Updating software (new Eclipse versions etc.) does not affect everybody unless they start to use a new VM, which leads to fragmentation or lost working time again. Potential security issues due to missing security updates and users using the box as root.

  • Remote Desktop: everybody connects to a central ubuntu server, using a remote desktop from there. Options are a real X client, xrdp, VNC and the like.
    Pro: easy to use, central data storage, software updates effective immediately, central control easy. Does not need as much hardware. Users are not root. SVN repositories might be local, meaning speedup.
    Con: users are not isolated (potential security issues inside of the team), an apache restart etc. hits everybody.




Both solutions need a fast network and a fat server. At the moment, I would tend to use xrdp for remote desktop access. What experiences do you have? Any downsides to one approach over the other? Options I've missed? Is there anybody out there who has successfully virtualised a software dev team?

nginx - how to redirect child domains to specific localhost ports in windows server 2008 VPS?

I have a VPS which is running Windows server 2008 datacenter with one Static IP address.



I can access my web services from the Internet on specific ports, e.g. mydomain.com:3000/api and mydomain.com:4000/api. I want to change the access addresses of these web services to subdomains (child domains), like App1WS.mydomain.com/api and App2WS.mydomain.com/api.




update



I added this line to the hosts file:



127.0.0.1      mobileapp.myrealdomain.com


Then in nginx, I added these lines at the end:



server {
    listen 80;
    #root html;
    root c:/apps/web;
    server_name mobileapp.myrealdomain.com;

    location / {
        proxy_pass "http://127.0.0.1:3000";
    }
}



Then I started my loopback project which is accessible via:



C:\apps\mobileapp>node .
Web server listening at: http://localhost:3000
Browse your REST API at http://localhost:3000/explorer


Then I added an Alias (CNAME) record to DNS:



alias name: mobileapp

FQDN: mobileapp.myrealdomain.com
FQDN for target host: 127.0.0.1:3000.


From another location on the Internet, myrealdomain.com loads the homepage, but mobileapp.myrealdomain.com still shows



server DNS address could not be found.


On the VPS, mobileapp.myrealdomain.com works properly and loads the server uptime:




{"started":"2017-09-23T10:29:30.626Z","uptime":48.709}

Friday, November 23, 2018

hard drive - RAID - buy spare disks at same time as originals?



When using RAID1, is it best practice to buy spare disks at the same time as the original disks to ensure you have the identical type in case they become unavailable in future?


Answer



You won't need an identical type.



The only "hard" requirement with RAID controllers is that the replacement disk is the same bus type (do not replace SAS disks with SATA or vice-versa) and at least the size of the original disk. Software RAID solutions would not even enforce the same bus type.




It makes sense to have similar performance characteristics (rotational speed, peak transfer rate, access times) on all members of an array so you would not give away performance, but if you replace a slower disk with a faster one, this is of no concern.


Thursday, November 22, 2018

Active Directory Domain Names - Forest/Tree/Children



I've been doing some reading on suggested top-level domains for AD and whatnot. I used to set up domains as company.local and that worked just fine; however, more people want to use their external domain company.com instead of the .local suffix.



I've got a quick clarification question: how am I supposed to set up my first forest if we're going to actually use our registered domain name?




It's easy enough to set up a new forest with company.com, but wouldn't I then have to add a child domain of corp.company.com on a new DC? That essentially requires two DCs just to set up the one domain.



Or would I create the first forest as corp.company.com and be done with it? That seems to make a lot more sense.


Answer



Bingo on your last statement.



Set up your AD forest root domain as corp.company.com.




Edit: Also read this post by MDMarra: What should I name my Active Directory?


upload - Windows file uploaded to ubuntu server automatically assumes 777 permissions



Do you know how I can set permissions beforehand for a folder that I want to upload to a Linux server? I'm working on a piece of software (PHP/MySQL) on my local Windows machine; I then tar the folder and upload it to my Ubuntu web server. When I untar the contents, all the files and folders automatically have 777 permissions. Can I change that somehow?


Answer



Since Windows does not have Unix-style permissions, tar-ing up files on Windows will not result in usable Unix permissions when the archive is unpacked on a Unix system.



When creating the archive, use --mode to set a specific mode.
When unpacking, use --no-same-permissions to ignore the stored permissions and apply the current user's umask instead.
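For example (just a sketch; "myfolder" and "site.tar" are placeholders):

# when creating the archive (e.g. with GNU tar on Windows), force sane modes up front
tar --mode='u=rwX,go=rX' -cf site.tar myfolder

# or, when unpacking on the Ubuntu server, ignore the stored modes and apply your umask
tar --no-same-permissions -xf site.tar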




man tar :)


Wednesday, November 21, 2018

linux - shell /bin/false allowing SFTP access [Ubuntu 12.04]



I have a Linux installation (Ubuntu 12.04), managed not only by me. I had restricted SSH access to a user using



/usr/sbin/usermod -s /bin/false my_user



This allowed neither SFTP access nor console access.



However, today I found out that users with this shell do have SFTP access, and I'm very sure they didn't have access in the past.



Could there be a config change which is allowing this? Unfortunately, I can't contact any of the other guys to see if any accidental changes were made.


Answer



It could be that you have




Subsystem       sftp    internal-sftp


and/or



Match Group sftpusers
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no



or



Match User username
    ChrootDirectory %h
    ForceCommand internal-sftp


configured, which will allow users SFTP access even if they have a /bin/false shell.
If you didn't set this up, you could audit the logs (/var/log/audit/audit.log if auditd is running, /var/log/auth.log, etc.) to see who did it, by looking for who edited /etc/ssh/sshd_config and restarted the sshd service (everyone does use sudo, don't they?).
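A couple of quick checks, assuming a stock Ubuntu 12.04 layout:

# dump the effective sshd configuration and look for a forced internal-sftp
# (add -C user=...,host=...,addr=... to evaluate Match blocks for a specific connection)
sudo sshd -T | grep -iE 'subsystem|forcecommand'

# sudo is logged to auth.log on Ubuntu, so look for who touched sshd_config or restarted ssh
sudo grep -iE 'sudo.*(sshd_config|restart ssh|reload ssh)' /var/log/auth.log*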



Tuesday, November 20, 2018

central processing unit - Tracking down high Windows Server CPU utilization



I've got a Windows 2003 server (64-bit) running as a VM on a remote hosting facility. (I'm just leasing this one particular virtual instance, so I don't know what sort of underlying hardware it's running on, other than that it's presenting itself to the VM as having 8 CPU's available.)




The problem is that starting about 1-2 weeks ago, taskmgr.exe began showing something like a 60% total CPU load, spread out evenly across 7 of its 8 procs, but with one proc spiked at 100%. And the server is responding like you'd expect when it's that busy: it's a dog. I'd obviously like to track down what's causing this.



The problem is that the CPU %'s for each process, as shown in either taskmgr.exe or procexp.exe, don't add up to anywhere near 100%. In other words, the system idle process is somewhere around 40%, and a few other processes maybe add up another 10%, but where's the other 50% coming from? In other words, something is chewing up 50% of my CPU, and it's not listed anywhere in task manager. ("Show processes from all users" is checked.)



I've tried stopping all the services I could, but none of them had an impact on the CPU. Restarting the server doesn't make any difference: by the time I log back in, the CPU is pegged again. Procexp.exe doesn't show anything out of the ordinary.



I can think of two possible explanations: (1) There's some sort of rootkit that's made its way onto my server and is hiding itself from the process list; or (2) taskmgr.exe is suddenly (and for the first time) showing utilization on the rest of the box, not just this particular instance (though that doesn't seem right).



Any other suggestions for tracking this down?


Answer




I see two possible things you should look into.



First, whenever I hear someone talk about high CPU load, without being able to identify any offending processes, IO contention is my first guess. When there is high IO contention, processes stack up in uninterruptible sleep state, filling the OS's process scheduler with tasks that are just there waiting for data to be read from or written to disk. The individual processes would not show as having high CPU load. You'll need to look at performance statistics for the disk subsystem that's servicing this VM to see if any one or more of the many possible IO bottlenecks are being hit.
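If you want a quick way to watch for that on Windows 2003 without installing anything, typeperf can sample the relevant disk counters; the counter selection here is only illustrative:

REM sample total disk queue length and idle time every 5 seconds; Ctrl+C to stop
typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\PhysicalDisk(_Total)\% Idle Time" -si 5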



Second, you mentioned having an 8-CPU VM. Are you absolutely sure you need that many cores? You're sure? Okay, ask yourself again if you really need them. The reason is that under virtualization, multiple cores don't work the same as if you were running on bare metal. The only time your VM is going to get CPU time on the host is when there are 8 cores available. If 8 cores aren't available, you don't get CPU cycles. On an even moderately-loaded host, needless to say, it's much more difficult for the hypervisor to schedule CPU time for an 8-core VM than it is for a single-core VM. For this reason, I recommend sticking with a single core unless it's 100% absolutely necessary for the application, at which point I may allocate 2 or at the very most 4 cores, in which case I'll make darn sure that there's not much else going on on the host, so that this VM's performance doesn't suffer.



So, might you have a rootkit? Sure, possibly so, and you had better do some due diligence to determine whether or not that's the case. If not, though, you certainly have some other things to look at.


Monday, November 19, 2018

How Data Moving in Storage Area Network




I have a question about how data moves through a SAN.



For example: within an iSCSI SAN, if Server A has a SAN disk mounted and wants to copy data to Server B, which also has a SAN disk mounted, does the data transfer happen within the SAN (through its RAID/disks), or does the data move over the network from Server A to Server B?



For file-level storage, say a Windows file server: as far as I understand, if we move data from client A to client B using a file share on the file server, the data moves over the network from client A to client B. Hence, if client A's network is slower than client B's, the copy would be as slow as client A's network, correct?



Sorry for my bad English.


Answer



Yes, you are right. Data flow path:





  • hostA asks SAN for data

  • hostA sends the data (over smb?) to hostB

  • hostB tells the SAN to write the data



The SAN doesn't understand file systems or the network protocols used between hostA and hostB, so the slowest link will limit the transfer.



As a solution to this (and many other) problems, there are distributed network filesystems like Ceph.



centos - sshd is not responding to requests from tun0



I am running CentOS 6. It is connected to OpenVPN with the following routes:




Destination     Gateway       Genmask          Flags Metric Ref Use Iface
100.207.0.0     0.0.0.0       255.255.255.255  UH    0      0   0   tun0
101.19.0.0      0.0.0.0       255.255.255.255  UH    0      0   0   tun0
10.97.156.0     0.0.0.0       255.255.255.0    U     0      0   0   eth0
169.254.0.0     0.0.0.0       255.255.0.0      U     1002   0   0   eth0
0.0.0.0         10.97.156.1   0.0.0.0          UG    0      0   0   eth0





When connected, my ifconfig shows the correct address for tun0:




inet addr:101.19.23.64




After setting up the VPN, I restart sshd.
When I try to ssh in to this system from a host on the VPN, the connection attempt times out.
If I use tcpdump -i tun0 I get:





09:25:30.592685 IP 100.207.1.200.26605 > 101.19.23.64.ssh: Flags [S], seq 2108197737, win 8192, options [mss 1366,nop,nop,sackOK], length 0




However, no response ever goes back across the tunnel. The response isn't being sent across eth0 either; I captured on eth0 and didn't see a packet trying to go back to 100.207.1.200.



I know sshd is listening on all interfaces because netstat -l shows:





tcp        0      0 *:ssh                  *:*                    LISTEN




I even made sure my iptables allow incoming connections on the VPN port and the SSH port, although that shouldn't be an issue because the connections should be piggybacking on the current VPN session.



Any ideas? I'm at a loss, because as far as I can tell everything is set up properly.


Answer



The routes for tun0 are incorrect: the netmasks are 255.255.255.255, so each route matches only the single address 100.207.0.0 or 101.19.0.0 and no other IP on those VPN networks.



You probably need 255.255.255.0 or 255.255.0.0, depending on how you configured your VPN.
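For example, assuming the VPN networks are really /16s (the prefix length here is only an illustration; check the route/netmask directives in your OpenVPN configuration):

# drop the bogus /32 host routes and add network routes of the intended size
ip route del 100.207.0.0/32 dev tun0
ip route del 101.19.0.0/32 dev tun0
ip route add 100.207.0.0/16 dev tun0
ip route add 101.19.0.0/16 dev tun0

The lasting fix is to correct the pushed or static routes in the OpenVPN configuration itself, so the proper netmasks are installed at connect time.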



windows server 2003 - Multiple Sites with SSL on IIS6

This may sound a little stupid, but I'm more of a dev guy than a server admin and I am incredibly boneheaded when it comes to IIS. So please bear with me here.



I currently have a client that has a dedicated 2K3 box running IIS6. They are hosting one site on this box, with two domains resolving to that site (domain1.com, domain2.com). The site is stored in the typical C:\Inetpub\www folder. In addition, there is a section of the site that is protected by SSL. There are two SSL certs, one for each domain.



When a user goes to https: // domain1 . com, everything is fine. However, when the user goes to https: // domain2 . com, Internet Explorer kicks up a security warning. Obviously, this is not what we want. (I had to put spaces in the URL because ServerFault won't let me post more than one URL at a time since I'm new here).



Here is a bit of information as best as I can give it about the site setup in the IIS Manager.



There are two sites listed here, "domain1.com" and "Administration". When I go into the properties for domain1.com, the IP address has a value of "(All Unassigned)".




Under Advanced, "domain2.com" is listed with an IP address of "Default". Also, in "Multiple SSL identities for this Web site" there is one entry with the IP address of "Default" and the standard SSL port of 443.



I can view the certificate for domain1.com under "Directory Security > View Certificate". It appears everything is OK there.



So, to recap: I am trying to set up separate SSL certificates for separate domains that both lead to the same place. Is this possible? If anyone can explain the process to me (and dumb it down as much as possible) or at least point me in the right direction, it would be greatly appreciated.



Please let me know if this doesn't make any sense, or if I didn't provide you with enough (or the right) information.

memory - APT FATAL -> Failed to fork

I saw a lot of questions regarding this, but mine seems a little different.



Here's what I receive:



/etc/cron.weekly/apt-xapian-index:
FATAL -> Failed to fork.
run-parts: /etc/cron.weekly/apt-xapian-index exited with return code 100



and



/etc/cron.daily/apt:
FATAL -> Failed to fork.


and




/etc/cron.daily/apt:
DB Update failed, database locked


I always have at least 600 MB of free RAM.
If I try to manually run sudo /etc/cron.daily/apt, nothing happens; the shell just hangs.



What could be the problem?



EDIT: Ubuntu Server 14.04

Sunday, November 18, 2018

domain name system - Setting up a dns server on CentOs 5.8



I'm having some problems setting up my DNS server on my VPS (CentOS 5.8, 32-bit).



I have configured a dns zone with the ISPConfig 3 wizard.
My name servers are registered at my domain registrar (at Yahoo)



I configured my domain to use my name servers:




ns1.mydomain.com
ns2.mydomain.com


Still, when I go to my domain, it says page not found.






The real error is "can't find domainname.com"







named.conf (in /var/named/chroot/etc)



//
// Sample named.conf BIND DNS server 'named' configuration file
// for the Red Hat BIND distribution.
//
// See the BIND Administrator's Reference Manual (ARM) for details, in:
// file:///usr/share/doc/bind-*/arm/Bv9ARM.html

// Also see the BIND Configuration GUI : /usr/bin/system-config-bind and
// its manual.
//
options
{
    // Those options should be used carefully because they disable port
    // randomization
    // query-source port 53;
    // query-source-v6 port 53;

    // Put files that named is allowed to write in the data/ directory:
    directory "/var/named"; // the default
    dump-file "data/cache_dump.db";
    statistics-file "data/named_stats.txt";
    memstatistics-file "data/named_mem_stats.txt";
};

logging
{
    /* If you want to enable debugging, eg. using the 'rndc trace' command,
     * named will try to write the 'named.run' file in the $directory (/var/named).
     * By default, SELinux policy does not allow named to modify the /var/named directory,
     * so put the default debug log file in data/ :
     */
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};
//

// All BIND 9 zones are in a "view", which allow different zones to be served
// to different types of client addresses, and for options to be set for groups
// of zones.
//
// By default, if named.conf contains no "view" clauses, all zones are in the
// "default" view, which matches all clients.
//
// If named.conf contains any "view" clause, then all zones MUST be in a view;
// so it is recommended to start off using views to avoid having to restructure
// your configuration files in the future.

//

Answer



There is something wrong with the configuration of the DNS server software on your server. This can be seen from the following DNS diagnostic queries.



Your domain is correctly delegated (dig ns uk2be.com):



;; QUESTION SECTION:
;uk2be.com. IN NS


;; ANSWER SECTION:
uk2be.com. 172800 IN NS ns1.uk2be.com.
uk2be.com. 172800 IN NS ns2.uk2be.com.


The glue records exist (dig ns1.uk2be.com and dig ns2.uk2be.com), although they both point to a single server:



;; QUESTION SECTION:
;ns1.uk2be.com. IN A


;; ANSWER SECTION:
ns1.uk2be.com. 172726 IN A 46.37.174.74

------

;; QUESTION SECTION:
;ns2.uk2be.com. IN A

;; ANSWER SECTION:
ns2.uk2be.com. 172714 IN A 46.37.174.74



But your DNS server is not responding to any query (dig soa uk2be.com @46.37.174.74 or dig www.uk2be.com @46.37.174.74):



; <<>> DiG 9.7.3 <<>> soa uk2be.com @46.37.174.74
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 24146
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0


;; QUESTION SECTION:
;uk2be.com. IN SOA

;; Query time: 21 msec
;; SERVER: 46.37.174.74#53(46.37.174.74)
;; WHEN: Fri Aug 17 18:30:18 2012
;; MSG SIZE rcvd: 27

------


; <<>> DiG 9.7.3 <<>> www.uk2be.com @46.37.174.74
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 21070
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.uk2be.com. IN A

;; Query time: 17 msec

;; SERVER: 46.37.174.74#53(46.37.174.74)
;; WHEN: Fri Aug 17 18:30:48 2012
;; MSG SIZE rcvd: 31


Provided your server is actually using this IP address (46.37.174.74), something is misconfigured in your DNS software. Which DNS software are you using? Can you see anything wrong in the logs?
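If it is BIND (as the named.conf above suggests), a couple of quick checks from the shell may help; the zone file path below is a guess based on a default chrooted layout, so adjust it to your setup:

# syntax-check the chrooted configuration and one zone
named-checkconf -t /var/named/chroot /etc/named.conf
named-checkzone uk2be.com /var/named/chroot/var/named/pri/uk2be.com

# and look at what named itself logs
grep named /var/log/messages | tail -n 50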



For testing purposes, you might also want to try a few of the online DNS-checking tools.





Windows Server 2016 Storage Spaces - changing or adding disks

This may be a very basic question, but we are a small company with minimal IT support, so it's left to me to figure this stuff out. We have a server running Windows Server 2016, with one instance, HV1, acting as a hypervisor and two VMs running on it - the first VM, DC1, is acting as domain controller and our main file and application server, and the second VM, BU1, is running Windows Backup Server for backup to Azure.



When I added the backup server, I had to add storage for a staging area. Following some walk-throughs, I ended up with a system that seems kludgy to me, but it's working - a single physical drive (as this was just for staging for Azure backup, I saw no need to set up RAID or other redundancy) is assigned to a Storage Spaces (not Storage Spaces Direct) storage pool on HV1. HV1 then passes this as if it were a physical disk to BU1, which then has a second storage pool running on that disk. It is this second pool that's seen by the backup server as the staging area.




Probably not ideal, but it works.



Note that this is just the staging area, and it is entirely separate from the drive space used by HV1 itself (a small solid-state module), by DC1 (a RAID 1 array for the OS and a RAID 6 array for data, all running on a hardware RAID controller and passed to the DC1 VM directly), and by the BU1 OS (which is on the same RAID 1 array), as I didn't know about Storage Spaces when it was set up.



Now, the actual question: I've found that I underestimated the size of the staging area needed, and need to expand it. I can either replace the physical drive with a larger one, or add a second drive (though I'd prefer to replace, as I'd like to keep one empty bay in the chassis).



If I add a drive, it seems that it should be fairly trivial to add it to the first storage pool, the one on HV1, as that's kind of the point. But will the larger size percolate up to the second storage pool, the one running on BU1? Will it notice that the underlying, virtual, disk has gotten larger?



Is it possible to just replace the drive with a bigger one? I don't really care about the data, but I do care about not having to reconfigure the BU1 system. Can I, for example, shut down the BU1 VM, delete the first storage pool, swap the disks, create a new storage pool on HV1, give it the same name, and then restart the BU1 VM? Will it know it's changed, or will it just accept it? Is there another way to do this?




Please note that I don't do this professionally, so small words and very detailed instructions would be much appreciated :)



Although HV1 is running Server Core, I can use the server manager gui tools from DC1 to manage all 3 servers, or use PowerShell on any of them directly.



Thanks.

redirect http to https using .htaccess failing



I have a website which uses CloudFlare flexible SLL hosted on HostGator.



I want to redirect all HTTP requests to the corresponding HTTPS URL. No exceptions. I intend to put the rule at the top with the L flag, so that once it has been applied, no following rewrite rules are tested.




My current code is this:



RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L]


But that results in endless redirects to the HTTPS version. Here is FireFox Live HTTP Headers:





https://example.net/



GET / HTTP/1.1
Host: example.net
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
Accept-Language: da,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate, br
DNT: 1
Connection: keep-alive
Upgrade-Insecure-Requests: 1



HTTP/2.0 301 Moved Permanently
Date: Wed, 15 Feb 2017 15:20:35 GMT
Content-Type: text/html; charset=iso-8859-1
Set-Cookie: __cfduid=d07edac1644bccce1642d2c845767f9951487172035; expires=Thu, 15-Feb-18 15:20:35 GMT; path=/; domain=.example.net; HttpOnly
Location: https://example.net/
Server: cloudflare-nginx
cf-ray: 3319bea4dd2f3cfb-CPH
X-Firefox-Spdy: h2






http://ocsp.digicert.com/




POST / HTTP/1.1
Host: ocsp.digicert.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
Accept-Language: da,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
Content-Length: 83
Content-Type: application/ocsp-request
DNT: 1
Connection: keep-alive 0Q0O0M0K0I0 +



HTTP/1.1 200 OK
Accept-Ranges: bytes
Cache-Control: public, max-age=172800
Content-Type: application/ocsp-response
Date: Wed, 15 Feb 2017 15:20:35 GMT
Etag: "58a44f61-1d7"
Expires: Wed, 22 Feb 2017 03:20:35 GMT
Last-Modified: Wed, 15 Feb 2017 12:53:53 GMT
Server: ECS (arn/459D)
X-Cache: HIT
Content-Length: 471






https://example.net/



GET / HTTP/1.1
Host: example.net
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
Accept-Language: da,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate, br
Cookie: __cfduid=d07edac1644bccce1642d2c845767f9951487172035
DNT: 1
Connection: keep-alive
Upgrade-Insecure-Requests: 1




HTTP/2.0 301 Moved Permanently
Date: Wed, 15 Feb 2017 15:20:35 GMT
Content-Type: text/html; charset=iso-8859-1
Location: https://example.net/
Server: cloudflare-nginx
cf-ray: 3319bea7ddfb3cfb-CPH
X-Firefox-Spdy: h2






https://example.net/



GET / HTTP/1.1
Host: example.net
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
Accept-Language: da,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate, br
Cookie: __cfduid=d07edac1644bccce1642d2c845767f9951487172035
DNT: 1
Connection: keep-alive
Upgrade-Insecure-Requests: 1



HTTP/2.0 301 Moved Permanently
Date: Wed, 15 Feb 2017 15:20:36 GMT
Content-Type: text/html; charset=iso-8859-1
Location: https://example.net/
Server: cloudflare-nginx
cf-ray: 3319beaaae7e3cfb-CPH
X-Firefox-Spdy: h2





I have seen other similar questions, but most suggested solutions are a variation of what I currently use, and I have tried them (but do please feel free to recommend whatever worked for you, and I will try it).


Answer




Cloudflare Flexible SSL: secure connection between your visitor and CloudFlare, but no secure connection between CloudFlare and your web server. You don't need to have an SSL certificate on your web server, but your visitors still see the site as being HTTPS enabled. Source




Because you redirect to HTTPS on your server, rather than with a Cloudflare page rule, even HTTPS requests from the client still trigger the redirect rule, since Cloudflare always contacts your server over plain HTTP.



1. Client ---> HTTP  ---> Cloudflare CDN ---> HTTP ---> Your server
           <------------- Response: redirect to HTTPS <-------------

2. Client ---> HTTPS ---> Cloudflare CDN ---> HTTP ---> Your server
           <------------- Response: redirect to HTTPS <-------------

3. Client ---> HTTPS ---> Cloudflare CDN ---> HTTP ---> Your server
           <------------- Response: redirect to HTTPS <-------------



Cloudflare doesn't talk HTTPS to your web server, and that creates an infinite redirect loop.



To resolve that you'll need to remove the redirect from your .htaccess file and set up a Cloudflare page rule instead.


Saturday, November 17, 2018

domain name system - Reverse DNS (PTR) for Azure VM

I'm running an Azure VM (Classic) that hosts an email server. Some domains are rejecting my outgoing emails due to a missing reverse DNS/PTR record.



I tried to follow this guide: https://azure.microsoft.com/en-us/blog/announcing-reverse-dns-for-azure-cloud-services/



I have a custom domain (say mail.mydomain.com) mapped to the IP and I tried to add a reverse DNS with:





Set-AzureService –ServiceName "mycloudservice" –Description "Reverse DNS for mailserver" –ReverseDnsFqdn "mail.mydomain.com."




But I get the following error:




Set-AzureService : BadRequest: The reverse DNS FQDN telemetry.yara.com. must resolve to one of:
a) the DNS name of this Hosted Service (xxxx.cloudapp.net),
b) the DNS name of a different Hosted Service in this subscription (a4684608-54c0-4c96-b42f-daf646401c58),
c) a Reserved IP belonging to this subscription, or
d) the IP of a deployment or of a VM in this subscription.




Note that this VM also has an instance IP (long story short: we need to be able to ping the IP), and the domain is mapped to the instance IP and not to the virtual public IP (VIP). Could that be why I can't add the PTR?




Any ideas on how to add the PTR while still having the domain point to the instance ip?




  • Instance IP address
    An instance IP address is a public IP address that can be used to access virtual machines in Azure. Unlike a VIP, each virtual machine in a domain name can have its own instance IP address. Additional charges may apply when using public IP addresses.

Friday, November 16, 2018

HPE SAS Expander card with breakout cables?

I have an old HP ML350 G6 server, and I was wondering if I could buy a SAS expander card (this one: 468406-B21) and a couple of SFF-8087 breakout cables and use it for my internal SATA drives instead of connecting it to a backplane.



Or does the expander card only work with a backplane?



I just ordered an LSI SAS2008-8I card, but I'm thinking about canceling the order and buying this expander card instead. I have 6 WD Black SATA drives connected to the motherboard right now that I'm using with ZFS, but it's really slow, and I'm hoping a dedicated card will fix this.

ubuntu - Apache www permissions for php script

I have a PHP script which runs a private social network.
It's running on Ubuntu 16.04.3 with a MariaDB, Apache 2 and PHP 7 configuration.



I have created a new folder in /var/www/myscript and copied all files in this folder.



My question now is: what permissions are necessary to make the setup safe?
Is it safe to set all files to 640 and all folders to 750, with the owner and group set to www-data?



I found this tutorial: http://fideloper.com/user-group-permissions-chmod-apache
The permissions there look different; are those permissions good, too?




It would be great if someone can help me.
Thank you very much.
Best regards

Wednesday, November 14, 2018

How to redirect all HTTP traffic to HTTPS for a Django 1.4 application running on an EC2 with nginx/uWSGI behind ELB with an SSL cert

I would like to know how to put my entire Django site behind HTTPS. If anyone comes in via HTTP, I want that user to be redirected to HTTPS. Currently, Firefox is giving me the error "Firefox has detected that the server is redirecting the request for this address in a way that will never complete."



My setup is :



1. One AWS load balancer (ELB) with an SSL certificate. The ELB has two listeners:





  • load balancer port 80 (HTTP) pointing to instance port 80 (HTTP)

  • load balancer port 443 (HTTPS) pointing to instance port 80 (HTTP)



2. One EC2 instance behind the ELB running nginx/uWSGI



nginx configuration




server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    set $home /server/env.example.com;

    client_max_body_size 10m;
    keepalive_timeout 120;

    location / {
        uwsgi_pass uwsgi_main;
        include uwsgi_params;
        uwsgi_param SCRIPT_NAME "";
        uwsgi_param UWSGI_CHDIR $home/project;
        uwsgi_param UWSGI_SCRIPT wsgi;
        uwsgi_param UWSGI_PYHOME $home;
    }
}


uwsgi configuration



# file: /etc/init/uwsgi.conf

description "uWSGI starter"

start on (local-filesystems
and runlevel [2345])
stop on runlevel [016]
respawn

exec /usr/local/sbin/uwsgi \
    --uid www-data \
    --socket 127.0.0.1:5050 \
    --master \
    --logto /var/log/uwsgi_main.log \
    --logdate \
    --optimize 2 \
    --processes 8 \
    --harakiri 120 \
    --vhost \
    --no-site \
    --post-buffering 262144


3. The Django settings file has the following settings specific to SSL/HTTPS:




SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True


Any ideas how to properly set up HTTPS?



Thanks

Monday, November 12, 2018

raid - Why does linux automatically start a hardware RAID1 resync, on two new and blank disks and how do I stop it?



I've looked at similar posts about software RAID, but this is a hardware RAID question, and googling did not turn up anything so I turn to you guys.




I understand that hardware RAID has a single-point of failure, i.e. the RAID controller, but I'm willing to take this risk.



Here's a little background on the situation at hand. I have a Dell Precision T7600 at work that I'm responsible for maintaining, which just lost a hard drive; thankfully just the /home directory was on it, and it has now been recovered. Now I've been tasked with making a RAID 1 of the OS drive so that our downtime is kept to a minimum.



I plugged in two brand spanking new WD Blacks and fired up a Manjaro live USB after making the RAID 1 array using the integrated RAID controller. The distro picked up the RAID array as /dev/md126 and immediately started a resync with its partner drive. Now this would make sense if one of the drives had just failed and a rebuild were necessary, but this is not the case; to reiterate, they are both new, blank drives. cat /proc/mdstat gives me an estimate of ~200 min, which I'm not willing to spend without doing something about it.



So my question to you guys is why is this happening and how can I stop this useless time-wasteful process?



Now granted, I'm very new to RAID, so if I'm misunderstanding something feel free to correct my ignorance :).


Answer




Your drives are completely new, but what if they have some data on them? The OS does not know that until it does the resync; otherwise you might end up with an inconsistent array (in case the drives had been used and already contained some data).



When creating the array, you can bypass the initial sync, if you are sure that the drives are blank, by using the --assume-clean parameter; it is not recommended, though.
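As an illustration only: on a plain mdadm mirror the flag is passed at creation time, as below. With the integrated (Intel IMSM / fakeraid) controller the array is assembled from a container, so the exact invocation differs, and the device names here are placeholders:

# create a two-disk RAID 1 and skip the initial resync (only safe if the disks are truly blank)
mdadm --create /dev/md0 --level=1 --raid-devices=2 --assume-clean /dev/sdb /dev/sdc

# either way, you can watch (or simply wait out) a running resync with:
cat /proc/mdstat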



You can use the array while it is resyncing and resync only minimally affects performance.


Sunday, November 11, 2018

linux - Cannot allocate additional space after growing RAID array



I added three new drives to a Dell 2950 (running RHEL 5) with a PERC 6/i storage controller. The machine was previously running RAID 1 on two drives + hotswap. Rather than create an identical RAID 1 array with the new drives, I opted to gain additional storage by using OpenManage to convert the original Virtual Drive to a RAID 5 array that incorporated the new drives.



All of the above went off without a problem, but when I try to create a new partition with the additional space, fdisk informs me that there are "No free sectors available", even though it seems to recognize the additional space.



My current filesystem usage:




[root@local ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.7G 2.1G 7.1G 23% /
/dev/sda1 487M 35M 427M 8% /boot
none 4.0G 0 4.0G 0% /dev/shm
/dev/sda3 487M 11M 451M 3% /tmp
/dev/sda5 4.9G 1.2G 3.5G 25% /usr
/dev/mapper/VarGroup-var
50G 757M 47G 2% /var



fdisk output, showing additional disk space:



[root@local ~]# fdisk -l

Disk /dev/sda: 290.9 GB, 290984034304 bytes
255 heads, 63 sectors/track, 35376 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sda1 * 1 64 514048+ 83 Linux
/dev/sda2 65 1339 10241437+ 83 Linux
/dev/sda3 1340 1403 514080 83 Linux
/dev/sda4 1404 8844 59769832+ 5 Extended
/dev/sda5 1404 2040 5116671 83 Linux
/dev/sda6 2041 2301 2096451 82 Linux swap
/dev/sda7 2302 8844 52556616 8e Linux LVM


Is there any way to incorporate the additional disk space without destructive repartitioning?



Answer



PC partition formats were decided 30+ years ago and aren't particularly flexible. You can only have four primary partitions, numbered 1 to 4. If you want more than four partitions, one of them must be an extended partition (sda4 in your setup); an extended partition is a container that contains any number (well, up to 11 in Linux under most common setups) of logical partitions.



You currently have 3 primary partitions (sda1 through sda3), so you can only create new logical partitions. But the extended partition is full, so there is no room for these new logical partitions. This explains fdisk's cryptic message.



As far as I remember, fdisk can't extend an extended partition. Try parted or cfdisk instead. Extend the extended partition (sda4) to range until the end of the disk, and create a new logical partition in the space now available.
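A non-destructive first step, assuming a reasonably recent parted build, is to look at where the free space actually sits before resizing anything:

# show the partition table and free space, in sectors
parted /dev/sda unit s print free

The free space should show up after the current end of sda4; the extended partition then has to be grown over it before any new logical partition can be created inside it.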


BBWC Parts for Proliant DL380 G5 / HP Smart Array P400




I have a ProLiant DL380 G5 server with an HP Smart Array P400 controller. The disk I/O performance has been bad lately, and during my research I found out that I can improve it by enabling the write cache on the HP Smart Array P400 controller.



I also found out that in order to keep the data on the disks attached to the array safe in case of power failures or server hang-up, I will need to add a battery-backed write cache (BBWC) module.



I looked up the Smart Array spare part numbers here, but I got totally lost:




  • The memory board listings show 512 MB and 256 MB modules; is it safe to install the 512 MB one and forget the 256 MB one, given that my array currently has 256 MB of total cache?

  • I would assume that I need the battery cable and battery pack to be installed with the memory board. Correct?

  • I couldn't find anything that shows how the actual installation is done (step by step); can you provide a link that covers the job?




Thanks,


Answer



You need a 405148-B21 (the 512 MB one); yes, it's safe, and yes, you need the cable and battery.



For instructions it's page 20 of THIS.


SSL Not being served by Apache



I am running multiple virtual hosts on my Apache and I want one virtual host to serve SSL.



I have followed the instructions given to me from where I purchased my certificate.




Whenever I visit my site using https, I get an "Unable to connect error" in Firefox.



My ssl.conf which is included by httpd.conf looks like this:



NameVirtualHost xxx.xxx.xxx.xxx:443

<VirtualHost xxx.xxx.xxx.xxx:443>
    DocumentRoot "/var/www/html/path/to/dir"
    ServerName *.xxx.xxx.com
    ServerAlias *.xxx.xxx.com
    Alias /path "/var/www/html/development/path/to/somewhere/else"

    SSLEngine on
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    SSLCertificateFile /etc/ssl/crt/STAR_xxx_xxx_com.crt
    SSLCertificateKeyFile /etc/ssl/crt/private.key
    SSLCACertificateFile /etc/httpd/conf/STAR_xxx_xxx_com.ca-bundle

    <Directory "/var/www/html/path/to/dir">
        Order Deny,Allow
        Allow from all
        Options -Indexes
        AllowOverride All
    </Directory>
</VirtualHost>







What else can I do to solve this?



EDIT Some other thoughts:




  1. I have read that my Apache has to be compiled with SSL support. Is this an issue?

  2. In some configurations the Listen 443 is enclosed in tags. Is this also an issue?




When I do an lsof -i :443 I get:




COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME




httpd   8872   root    5u  IPv6 78180368       TCP *:https (LISTEN)
httpd 8874 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 8875 apache 5u IPv6 78180368 TCP *:https (LISTEN)

httpd 8876 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 8877 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 8878 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 8879 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 8880 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 8881 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 8893 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 8894 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 8895 apache 5u IPv6 78180368 TCP *:https (LISTEN)
httpd 9067 apache 5u IPv6 78180368 TCP *:https (LISTEN)



I think this is probably not what I expected, since I only want one virtual host to listen on 443. Or is this correct?


Answer



I was able to solve this problem by restarting iptables:



service iptables stop
service iptables start

Saturday, November 10, 2018

storage area network - What is the difference between SAN vs NAS







I would like to know the difference between SAN and NAS (explained in basic English before getting too technical).




I saw a few sites on this. They have topologies and comparison diagrams. I have a vague understanding, but I need to make it concrete.



So if somebody can take it from conceptual differences to the technical differences to the implementation differences, it would be great.

bind - BIND9 DNS Records not Propagating



Kind of new to managing DNS via BIND.



We have a setup with a master server and a slave server. I've updated the zone file on the primary name server for our domain but the changes aren't propagating over to the secondary server. The funny thing is that I'm making a change in the zone file for a different domain on the server and those changes ARE being propagated to the secondary server.




Is there any way I can force the change to propagate?



Also, there was a third nameserver that used to be operational but has been offline for a few months now. I removed it from the zone file for the two domains that have it listed as a name server and it still (over 24 hours later) shows up from time to time when I run a record check.



Any help on this would be greatly appreciated.



Nate


Answer





  1. Every time you make a change to a zone file you should increment the serial (most people make the serial YYYYMMDDNN, where NN is the revision that day).

  2. Slaves can get notifications upon update, but this generally has to be configured; otherwise you can usually run rndc refresh example.com on the slave and it will pull the zone (see the sketch after this list).

  3. NS records have to be updated both in the zone file and at the registrar.

  4. DNS records can be cached, sometimes for weeks, and largely depending on how you have your zone configured. If you do not want your records to be cached, modify the TTL and such.
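A minimal sketch of the commands involved (example.com is a placeholder zone name):

# on the master, after bumping the zone's serial:
rndc reload example.com        # reload just this zone
rndc notify example.com        # re-send NOTIFY messages to the slaves

# on the slave, if it still does not pick up the change:
rndc refresh example.com       # ask it to check the master's serial now
rndc retransfer example.com    # force a full zone transfer regardless of serial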


Sql server backups best practices. Too many files created?



I know this has been asked before, but I'm trying to understand this better.



I traditionally used Backup Exec to back up SQL servers, but I now have two SQL servers that are off-site, and of course I don't want to back them up across the VPN. I have created maintenance plans for each of them. First I did a full backup, then changed the plan to do differentials. I do not need to back up transaction logs for these servers because of the nature of the databases, at least not yet.



My question is this: I thought the differential backups would simply append the latest differential, but it's creating new files every night. If some of the databases don't change every day, then I don't want all of these duplicate backup files.




Do I just need to set an expiration on these differential files, and will that actually delete the expired files?



Basically I'm just trying to come up with a solution to just have one full backup, and daily backups after that.



I'm reading more and more about it, but not really finding the right answer on google, and waiting for my SQL book to come in.



Am I missing anything?



Thanks to anyone who reads or responds, I appreciate it.




SQL2005, Server 2003, 32bit/64bit


Answer



You should set up maintenance plans to do 3 things:




  • Weekly Full backup

  • Daily differential backup

  • Clean up old backups (use the Maintenance Cleanup Task item)




Expiration does not refer to how long files are kept. Also, since you aren't using transaction log backups, make sure you have the database recovery model set to Simple.


IPv6- loopback and link local addresses

Good Afternoon,



I'm a bit new to IPv6 and am curious on a couple of things. The loopback address is reserved as ::1 /128. If the mask is /128, wouldn't that indicate no available bits for hosts as all 128 are assigned to the network?



Also, I find the notation of link-local addresses a bit odd. The range indicates FE80 /10. But in practice, if you look at many assigned link-local addresses, they have other prefixes such as /12, /14, etc.




I'm sure I'm missing something simple, but can anyone help clear it up? Thank you.

kernel - Our server rooted but exploit doesn't work?

My friend's hosting server got rooted and we have traced some of the attacker's commands. We found some exploits under the /tmp/.idc directory. We've disconnected the server and are now testing some of the local kernel exploits that the attacker tried on our server.
Here is our kernel version:
2.4.21-4.ELsmp #1 SMP
We think that he got root access by the modified uselib() local root exploit but the exploit doesn't work!
loki@danaria {/tmp}# ./mail -l ./lib



[+] SLAB cleanup

child 1 VMAs 32768



The exploit hangs like this. I've waited over 5 minutes but nothing has happened. I've also tried other exploits but they didn't work. Any ideas, or experience with this exploit? We need to find the issue and patch our kernel, but we can't understand how he used this exploit to get root...
Thanks

Multiple wildcard certificates on one IIS site?




We already have an SSL certificate for *.foo.com pointing to an IIS site. Now we want *.bar.co.nz to point to the same web application and will purchase another wildcard certificate.



Is it possible to set up two wildcard SSL certificates under the one IIS site?


Answer



You would have to include both domains in a single wildcard certificate (with the use of "subject alternative name"). For more information, see RobLL's answer to this question.
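
If you do go the single-certificate route, you can check which names a certificate actually covers before binding it; a quick look with openssl (the filename is a placeholder):

openssl x509 -in combined-wildcard.pem -noout -text | grep -A1 "Subject Alternative Name"

Both *.foo.com and *.bar.co.nz would need to appear as DNS entries in that extension for one binding to serve both domains.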


Inconsistency between "du -sh" and "df -h"








I am running a server with Debian stable.



If I call:



df -h


this is the result I get:




Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/my-var 2.8G 2.3G 358M 87% /var


While if I call:



du -sh /var



This is the result I get:



832M    /var


How can this happen? Which one is correct? Thank you!
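
A common cause of a gap like this (just the usual suspects, not something confirmed from the output above) is space held by files that were deleted while a process still has them open — df still counts the blocks, but du no longer sees the filenames — or data hidden underneath the mount point. Two quick checks:

lsof +L1 /var        # open files on /var with link count 0 (deleted but still held open)

mount --bind / /mnt  # peek under the mount point for leftover data in the underlying /var
du -sh /mnt/var
umount /mnt

If a daemon shows up in the lsof output holding a huge deleted file, restarting it releases the space and df and du converge.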

Thursday, November 8, 2018

Why is it recommended to work in Linux with a non-root user?











The very first rule I've been taught about Linux is never to use root as your main user, and I was wondering why.


Answer



Because root is the superuser. As root you can do anything, so it is easy for you (or someone else) to kill your system.



A quick search could have saved this


Wednesday, November 7, 2018

chroot - df shows too much space on tmpfs

I have a server (SLES 11 running on a VMware hypervisor if that matters) with a tmpfs partition meant for mysql temporary tables, and I run mysql chrooted.



df -h gives me strange output:



root@db12.lab:~# df -h /usr/chroot/tmp/
Filesystem Size Used Avail Use% Mounted on

tmpfs 77G 66G 7.9G 90% /usr/chroot/tmp


While mount goes like this:



root@db12.lab:~# mount | grep tmpfs
tmpfs on /usr/chroot/tmp type tmpfs (rw,size=512m)


The database runs all right and I don't see any filesystem-related errors in the log. I did try stopping the daemons and unmounting/remounting the FS, but it didn't help.




I wonder what it might mean and how this kind of problem is usually solved.



It doesn't affect anything, but it's somewhat mysterious and I'd like it to go away.

Tuesday, November 6, 2018

Convert RAID-0 to RAID-1 on HP ML350G6 with P410i zero memory



I have an HP ML350 G6 with a P410i zero memory RAID controller. As far as I can understand that means I can't expand a current single drive "RAID-0" configuration to a RAID-1 using the HP Offline ACU without installing memory and BBWC. Is that correct?



What makes me wonder about this is that expanding RAID-0 to RAID-1 should be pretty similar to replacing a failed drive in an already existing RAID-1, so why can't I expand without memory and BBWC?



Is my best option otherwise to (i) use Ghost to capture the disk, create a new RAID-1 with the existing drive and a new one or (ii) buy memory+BBWC and do it online?



Thanks



Answer



Array transformations are possible, but you need to have a BBWC or FBWC unit in place to do so (offline OR online). You may as well get cache memory and a battery since write performance is very poor without them.



See the HP Smart Array Configuration Utility manual.



Also see: RAID5 on SmartArray P410i online resize


apache 2.2 - Mac OS X Mountain Lion adding www to virtual host on localhost



I'm new to configuring my Mac for localhost so this may be a dumb question. I want to set up a virtual host so that http://localhost/domain points to http://domain.dev. In my Apache configuration, I have localhost pointing to my Sites folder. That works correctly. I can browse to localhost/domain.




In my hosts file, I set up domain.dev as 127.0.0.1.



In my vhosts file, I set up the following entry:




DocumentRoot "/Users/username/Sites/domain/"
ServerName domain.dev




But when I uncomment this line in the Apache configuration:



Include /private/etc/apache2/extra/httpd-vhosts.conf


The browser adds "www" to the domain.dev, which it then can't find. What am I doing wrong?


Answer



Just use ServerAlias.





<VirtualHost *:80>
    DocumentRoot /Users/username/Sites/domain/
    ServerName domain.dev
    ServerAlias www.domain.dev
</VirtualHost>
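
After editing the vhost it helps to confirm Apache actually sees it (standard apachectl subcommands; sudo may or may not be required on your setup):

sudo apachectl configtest   # syntax check
sudo apachectl -S           # dump the parsed virtual host list
sudo apachectl restart

Both domain.dev and www.domain.dev should appear in the -S output once the include line is active.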


Monday, November 5, 2018

linux - Block specific IP in wordpress site

We are using an application load balancer in front of a WordPress server hosted on AWS EC2. We are using the WooCommerce plugin on the WordPress site and found some suspicious activity from an IP which we want to block. Is there any way to do that using any tools? I know fail2ban, however I can't use it since fail2ban does log analysis to block IPs and this IP isn't listed in any of the logs. Can anyone help with this one?

ftp - Setting umask 002 for the apache, lighttpd, ... didn't work?



I'm trying to make files created by the web server (Apache, lighttpd, ...) writable by the FTP users. I've added apache to the nobody group and vice versa. umask 002 for FTP works fine, but the web server seems to ignore my umask setting in /etc/sysconfig/:




grep umask /etc/sysconfig/httpd 
umask 002


or /etc/init.d/:



start() {
echo -n $"Starting $prog: "
umask 002

daemon $lighttpd -f $LIGHTTPD_CONF_PATH
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
return $RETVAL
}


The files are still created with 755 permissions:




-rwxr-xr-x 1 apache nobody 28243 Jul 28 09:49 ssvzone_997.js
-rwxr-xr-x 1 apache nobody 26224 Jul 28 09:49 ssvzone_998.js
-rwxr-xr-x 1 apache nobody 19686 Jul 28 09:49 ssvzone_999.js

-rwxr-xr-x 1 lighttpd nobody 23949 Jul 29 15:50 ssvzone_999_1.js
-rwxr-xr-x 1 lighttpd nobody 20668 Jul 29 15:50 ssvzone_999_2.js
-rwxr-xr-x 1 lighttpd nobody 22294 Jul 29 15:50 ssvzone_999_3.js


So, what is the root cause?




PS: I saw some similar questions but none of them helped.


Answer



I don't know why the file_put_contents() function creates files with 755 permissions, but following it with a @chmod to 0664 does the trick.


windows - How to track and log file transfers between Production and Non-Production environments?



We have a security/compliance audit that we are preparing for and since we deal with financial institutions, one of the potential flags mentioned was how we track/monitor files that are transferred between our Production and Non-Production environments.



We run a Windows shop. Our IT dept. (the Domain Admins) have access to both our PROD and Non-Prod (Corporate) domains. When builds or files need to be pushed to production, IT is required to perform any file transfers.




To satisfy this requirement we were asked to look at a number of DLP solutions which are turning out to be relatively costly.



We have also explored potentially requiring the IT team to use some sort of FTP or Managed File Transfer system in order to move files between the environments, but that just seems cumbersome.



Are there any other potential solutions we can explore here? The main requirement is that we have some sort of TRACKING or LOGGING of any files copied between the environments. Aside from doing a giant "DIFF" of the environments at the end of each day, we're not sure what we can do.


Answer



There are many ways to track file movement between systems and environments. However, this is not really a technology product situation. This is a business process and information security situation. Even if you buy a really expensive DLP, you need the policies, processes, and audits to make it meaningful. (See my closing note.)



Rather than waste a bunch of time and money researching shiny things to spend money on, I suggest you check out the Information Security site to get more guidance on this topic.




Once you have a good grasp on the goals of a good segregation of dev, test, and ops environments and "separation of duties" you will likely recognize that the technical problems are not that hard. They sure don't sound hard for what you have described.



I would recommend:




  1. Solid policies for access control, roles and responsibilities, and separation of duties between the PROD and DEV environments.

  2. Sufficient event monitoring and auditing for PROD environment at minimum. Better if it exists in both environments.

  3. Standard Operating Procedures for how the responsible parties (your IT dept.) are supposed to receive, verify, transport, and implement code or file changes in PROD.




Then you can go buy the shiny if it makes number 3 easier/faster/more cost effective.



< rant >
An initial response of "let's buy something to satisfy an audit requirement" is almost always a long term business or security FAIL. It drives the compliance part of the IT industry, but it won't help you much with actual security and will probably cost you more in the long run.


Friday, November 2, 2018

Is it possible to make an individual daily cron run with UTC time while keeping the system localtime?

I have a cron script under /etc/cron.d as so:




SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin




0 0 * * * root /usr/local/sbin/app_logrotate >> /var/log/app-newday.log




This works, but it always executes at 00:00 local time. The app in question uses UTC time (I cannot change this). In my time zone this is a few hours behind, so the date tag on the daily logfile this application creates is never a new day.
I can't change the system localtime to UTC as other applications depend on localtime.
I was wondering if it is possible to execute this cron only at 00:00 UTC while keeping my system localtime.
I have tried adding TZ=UTC to the cron script which did not work.
Does anyone know how this can be done?
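
One workaround worth trying (a sketch, not something guaranteed by every cron implementation — some honour CRON_TZ or a per-file TZ=, many do not) is to fire the job every hour and let the command itself check whether it is midnight UTC:

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# runs every hour; only proceeds when the current UTC hour is 00
0 * * * * root [ "$(date -u +\%H)" = "00" ] && /usr/local/sbin/app_logrotate >> /var/log/app-newday.log

Note the escaped \% — cron treats a bare % as a line separator, so it has to be escaped inside the crontab line.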

Nginx - Redirect a bunch of domains to a single domain, with SSL





Nothing insidious going on, but I've got a bunch of domains that we've bought for a service and I don't want to buy an SSL certificate for them all (there are about 11).



As an example, we've got:



example.com
thisisexample.com

ourexample.com
theexample.com


I have an SSL certificate for example.com and it is the main domain we're going to use. To protect our IP we've bought up a lot of similar domains and we're redirecting them all to example.com.



I've set up some redirects already and they're working fine on HTTP/80, both www and non-www.



However, accessing any of the domains on HTTPS/443 shows the privacy error.




Is there any way around this without having to buy lots of certificates? Can I not redirect https for one domain to another and allow that to terminate SSL?



EDIT:



My question relates to Nginx, not Apache2.


Answer



It turns out I can avoid buying lots of SSL certificates by using Let's Encrypt certificates for the domains I want to 301 redirect.
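
For reference, a minimal sketch of what that looks like in nginx, assuming one Let's Encrypt certificate issued with all of the redirect domains as subject alternative names (the paths and domain list are placeholders):

server {
    listen 443 ssl;
    server_name thisisexample.com www.thisisexample.com ourexample.com www.ourexample.com;
    ssl_certificate     /etc/letsencrypt/live/thisisexample.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/thisisexample.com/privkey.pem;
    return 301 https://example.com$request_uri;
}

server {
    listen 80;
    server_name thisisexample.com www.thisisexample.com ourexample.com www.ourexample.com;
    return 301 https://example.com$request_uri;
}

The browser still completes a TLS handshake with the redirecting host before it ever sees the 301, which is why each extra name needs a valid certificate even though the server block does nothing but redirect.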


Thursday, November 1, 2018

linux - System not being able to handle soft interrupts but having idle time?

I have a constant 5% or more of CPU time spent handling soft interrupts. Due to that, ksoftirqd is running almost constantly, but it is using a very small amount of CPU (less than 1%).



However, despite this heavy load there is still a fairly high percentage of idle time (30% and more; this is the idle value from top, or the idle column from mpstat).



Some background (however, I would like a conceptual answer, not one that solves the problem on my specific system): the system is used for routing (echo 1 > /proc/sys/net/ipv4/ip_forward) and NAT with iptables, and runs an additional user-space application not related to networking. Also, the load average is always above 1 (it's a single-core processor; this is the load average value from top, or the output of sar -q).



What is preventing the system from using that idle time, so that the handling of soft interrupts isn't missed?



I would expect the idle time (id in top) to be used for servicing software interrupts (si in top), rather than the processor missing tasks and being idle at the same time.

filesystems - Thoughts on Linux server file system layout



I'm wondering, and I'm sure many others are wondering too, what would be the best, or at least an optimal, file system layout for a GNU/Linux based server. I'm aware that there is no universal layout, because layouts vary based on what the end user wants to achieve, so I will narrow my question down to a very specific implementation. The server will act as a mid-size SIP telephony server. The file system layout that I came up with is the following:




The full size of the hard drive is 146 GB




  • 1 GB primary partition mounted as /boot

  • 16 GB primary partition mounted as /

  • 16 GB extended partition mounted as swap (the server has 8 GB memory and it won't get bigger soon at least)

  • 52 GB extended partition mounted as /var

  • 16 GB extended partition mounted as /var/log

  • 30 GB extended partition mounted as /usr

  • 5 GB extended partition mounted as /tmp


  • 10 GB extended partition mounted as /home



I put the swap roughly in the middle of the disk, considering that this will allow faster access, and made a big /var partition because there will be a lot of variable data like database files.
I moved /var/log and /tmp onto separate partitions to be sure that if they fill up they won't bring the entire system down, and moved /usr as well so that it can be made read-only if such a measure is ever needed. I made a small /home partition because the number of users will be low, so there is no need for a lot of storage space for the home directories.



There are many arguments for and against this layout, I suppose, and I'm curious (trying to pick the minds of those more experienced or wiser than me) what others think: is this partitioning and ordering good in terms of fast access (that is why I put the swap almost in the middle), security, and data safety? Any thoughts? Thanks!


Answer



Two things:





  1. /boot only needs to be about 256MB, 512MB if you really want to be safe. How many kernels do you really need?

  2. For the love of [insert deity here] use LVM



In general I will use:




  • First primary partition 256MB /boot (ext2)

  • Second primary partition as physical volume (PV) in LVM


  • Logical Volume /

  • Logical Volume /home

  • Logical Volume /usr (optional)

  • Logical Volume /tmp (optional, prefer hdd over ssd)

  • Logical Volume /var (optional, prefer hdd over ssd)

  • Logical Volume swap (2 * RAM && <= 4GB)



Sizes depend on usage, but leave some (most) unused space in the volume group (VG) to expand any logical volumes that fill up.
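
As a sketch of how such a layout is actually built (device names, volume group name and sizes are placeholders — adapt them to the real disk):

# turn the second primary partition into an LVM physical volume and a volume group
pvcreate /dev/sda2
vgcreate vg0 /dev/sda2

# carve out logical volumes, deliberately leaving free space in the VG
lvcreate -L 16G -n root vg0
lvcreate -L 40G -n var  vg0
lvcreate -L 10G -n home vg0
lvcreate -L 4G  -n swap vg0

mkfs.ext3 /dev/vg0/root     # repeat for the other filesystems
mkswap    /dev/vg0/swap

# later, growing a volume that filled up is just two commands
lvextend -L +10G /dev/vg0/var
resize2fs /dev/vg0/var

ext3 can be grown online with resize2fs on a 2.6 kernel; other filesystems may need to be unmounted first.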


domain name system - Bouncing email due to bad initial MX TTL?



Last Friday night we changed mail servers. We moved off Office 365 to another hosted Exchange Solution (Intermedia.net).



We use Godaddy for DNS, and after the server migration stuff was ready and clients were good to go, I edited our MX records. Godaddy is fast so within an hour or so I saw on whatsmydns.com that the new, proper MX records had been propagating nicely.



Cue Monday morning. Email is coming through but I'm starting to hear about bouncebacks. More of the same Tuesday. I'd been pulling my hair out when I eyeballed our DNS entries on Tuesday night and saw that the TTL on the new MX record was 1 week. Yikes. I changed it to half an hour. Today, one bottleneck (MessageLabs/Symantec) has updated to point to the right server, but we're still getting some unfortunately large external senders, i.e. Postini, bouncing their messages off the old server.



Is that initial 1-week TTL to blame? Would Postini respect that initial TTL despite my having shortened it yesterday? I'm going nuts because it seems pretty hopeless. I had IT folks at one of our clients contact Postini to open a ticket, since their user emails to us are bouncing, but it could take time to move on that front. My only hope is that it's Thanksgiving so work is mostly over until the weekend. Friday night will be the official 1 week from that initial bad TTL. Should I have hope that things will 'just work' come Monday? I don't know what else to check. The domain is removed from O365, and the new MX records seem nicely propagated. I'm about to jump off a pier.


Answer




Never shutdown the old server until you know that the TTL of the MX record pointing to the old server has expired. If the TTL is/was 1 week then leave the old server running for 1 week to catch any emails from clients that may have that MX record cached.



When implementing an email cutover always check the MX record during your planning phase and adjust it accordingly. Personally I don't see any good reason to set the TTL on the MX record to anything more or less than 1 hour.



The new TTL on the MX record doesn't have any bearing on the old TTL of the MX record. So if the old MX record TTL was 1 week (or whatever) then any client that has that cached is going to hold it for that period of time (for whatever remaining TTL is in their cache). The fact that you changed it has no bearing, because those clients aren't going to look it up again until it expires in their cache.
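
A quick way to see both the record and the TTL a given resolver is still holding (standard dig usage; substitute your own domain and whichever resolver you want to test):

dig example.com MX +noall +answer              # what your default resolver returns
dig example.com MX +noall +answer @8.8.8.8     # ask a specific public resolver

The second column of each answer line is the TTL in seconds; on a cached response it counts down, and once it hits zero that resolver re-queries the authoritative servers and picks up the new MX.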


linux - Chroot / sFtp / Ftps Problems

As a regular Debian admin, I am faced with a complex situation and I'd appreciate your advice...



I have a server with a chroot environment that has been working without problems up to now:
Chroot user: store
chroot path: /home/store
I have in this chroot environment all the files (lib, bin, usr, etc.) required for the chroot to work.
Inside this chroot I have several jailed FTP / FTPS (explicit) accounts that are working correctly. Each jailed FTP account is in /home/store/home/store/partners/xxxxxx (where xxxxxx is the name of the partner).
In each jailed FTP account I have two folders: data (listing possible) and photos (not listable).
photos is a separate directory mounted from fstab for each partner inside the chroot environment, allowing RETR commands over FTP for each partner.



The problem is: I have a partner that must, for security reasons, access the data over SFTP (FTP over SSH).
To solve this, I've chrooted a new user (let's say ALDA) to /home/store/home/store/partners/ALDA and installed all the files needed for the chroot to work (bin, etc, lib). The user can access his chrooted content over SFTP, but he can also see all the bin/, lib/ and etc/ folders that are needed for the chroot and that I managed to hide in FTPS.
Luckily, the write restrictions are working and the user cannot modify any files or directories, but he can list/retrieve all of that content...




First question: how can I restrict access/visibility to all of the chroot environment files other than the DATA and PHOTOS folders?
Second question: over SFTP the user can list the whole mounted photos directory and walk through its content... How can I completely block listing of this specific folder, but still allow "get" commands when the exact path is given?



Thanks for your valuable time and advice.



Tdldp
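
One approach worth looking at for the first question (a sketch, assuming a reasonably recent OpenSSH — this is not taken from the setup described above): the built-in internal-sftp subsystem runs inside sshd itself, so the chroot needs no bin/, lib/ or etc/ at all, which removes exactly the files being exposed. In /etc/ssh/sshd_config, something like:

Subsystem sftp internal-sftp

Match User ALDA
    ChrootDirectory /home/store/home/store/partners/ALDA
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

ChrootDirectory requires the chroot directory and every directory above it to be owned by root and not group- or world-writable. For the second question, execute-without-read permissions on the photos directory (chmod 711) let a client retrieve a file whose exact path it knows while denying directory listings, since internal-sftp simply enforces the filesystem permissions of the logged-in user.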

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...