Friday, August 31, 2018

networking - difference between local and inet socket?

I noticed that, while setting up OpenDKIM, the options for the SOCKET setting are:



#SOCKET="local:/var/run/opendkim/opendkim.sock" # default
#SOCKET="inet:54321" # listen on all interfaces on port 54321

#SOCKET="inet:12345@localhost" # listen on loopback on port 12345
#SOCKET="inet:12345@192.0.2.1" # listen on 192.0.2.1 on port 12345


What is the difference (if any) between the local:[...].sock socket and the inet:[...]@localhost socket? Do user permissions come into play for one or the other? Is there a security benefit from using one or the other? Are there any functional differences at all?
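For readers comparing the two, a quick way to inspect each form (a minimal sketch using the default socket path and one of the example ports above): the UNIX-domain socket is a file, so ordinary filesystem ownership and permissions govern who may connect to it, while the inet form is just a TCP listener whose reachability is a bind-address and firewall matter.

ls -l /var/run/opendkim/opendkim.sock   # who may connect is governed by these permissions
ss -ltn | grep 12345                    # the TCP listener on the loopback example port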

centos5 - Near 100% disk usage, df and du show very different results, lsof not the answer

The issue is that my CentOS 5.8 machine is telling me that I am nearly out of disk space when I am pretty confident this is not the case. I've done a fair amount of research on this issue and have been unable to find a solution.




'df -h' shows 210G used, 8.6G avail



'ncdu' shows 28.6G used (same for apparent size)



As you can see, this is nearly a 10x difference in the reported 'used' space. Knowing what is stored on this drive, I think 28.6G is closer to reality.



Looking at the output of 'lsof' there are very few lines with (deleted) at the end. Moreover, the largest size of any of these lines is 6190. Finally, I've rebooted the machine a number of times which, if I understand the other threads correctly, would resolve the issue of phantom files anyhow.



Here is a summary of the output from ncdu:





22.7GiB [##########] /opt
2.8GiB [# ] /usr
1.5GiB [ ] /var
812.4MiB [ ] /root
310.6MiB [ ] /home
194.3MiB [ ] /lib
156.4MiB [ ] /etc
36.5MiB [ ] /sbin

7.3MiB [ ] /bin
128.0KiB [ ] /tmp
20.0KiB [ ] /mnt
e 16.0KiB [ ] /lost+found
e 8.0KiB [ ] /srv
e 8.0KiB [ ] /selinux
8.0KiB [ ] /media
e 4.0KiB [ ] /backup
> 0.0 B [ ] /sys
> 0.0 B [ ] /proc

> 0.0 B [ ] /net
> 0.0 B [ ] /misc
> 0.0 B [ ] /dev
> 0.0 B [ ] /boot
0.0 B [ ] .autorelabel
0.0 B [ ] .autofsck


Output of 'df -Th':





Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
ext3 221G 210G 8.6G 97% /
/dev/sda1 ext3 99M 25M 74M 25% /boot
tmpfs tmpfs 1.7G 0 1.7G 0% /dev/shm


This post mentions that outside of phantom files there are two other possible explanations:





  1. corrupt filesystem

  2. compromised machine



I'm looking for help on how to test the validity of these explanations. Obviously, explanation #2 is particularly concerning.
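For anyone following along, a rough sketch of standard checks for each of these, plus the most common cause of a df/du gap (data hidden underneath a mount point); the /mnt bind mount is just a convenient scratch location and the commands are generic suggestions:

mount --bind / /mnt        # re-expose / without anything mounted on top of it
du -sh /mnt/*              # compare against the ncdu numbers above
umount /mnt

touch /forcefsck && reboot # have the filesystem checked on the next boot

rpm -Va | less             # verify installed binaries against the RPM database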



Thanks for your help!

Wednesday, August 29, 2018

Push DNS for only one domain with OpenVPN



I have an OpenVPN server to access an Amazon VPC. I have a BIND DNS server on that same VPN server for resolving local names (say *.local.example.com); for everything else, BIND forwards to Google's public DNS.



My problem is that I would like to avoid having my VPN/local DNS receive every DNS query, forward most of them, and cache them, since it is not a powerful server.



My question here is whether I can make the VPN users query my local bind DNS for the local queries and use their own DNS (e.g. defined in their resolv.conf before they connect to the VPN) for all others by pushing some configuration with OpenVPN.




Server : Debian 8, OpenVPN and bind9



Thanks



-- Edit --



To clarify things a bit, here is my goal, if it is possible:



A home user connects to the OpenVPN server, which is also a local DNS (for only a set of private addresses). When the home user requests google.com, his query is directed to, say, 8.8.8.8. When the request is for local.mycompany.com, the query goes to my OpenVPN server/DNS. All this without using a client-side add-on (can it be pushed with OpenVPN?).




All this is to avoid an unnecessary load of DNS queries on my small VPN server/DNS (which it would just forward to Google DNS anyway).


Answer



This is 'split horizon' DNS; if the servers behind the VPC are on a different domain, then it is 'split brain' anyway. dnsmasq is your friend:



https://www.linuxsysadmintutorials.com/configure-dnsmasq-to-query-different-nameservers-for-different-domains.html
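A minimal snippet in the spirit of that article (a sketch only: local.example.com is the zone from the question, and 10.8.0.1 is a placeholder for the BIND server's address as seen over the VPN):

server=/local.example.com/10.8.0.1
# all other queries follow the normal upstream resolvers

With dnsmasq in front, only queries for the private zone ever reach the small VPN-side DNS server.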


vmware esx - vSphere templates what to put in them?



I am new to VMware and templates. From what I understand, templates are meant to eliminate some repetitive tasks when creating virtual machines.



I am currently setting up a virtual machine on win2008 R2, and configuring it to be a base template using the following link as a guide. http://jeremywaldrop.wordpress.com/2008/10/28/how-to-build-a-windows-2008-vmware-esx-vm-template/



Ok great, I can eliminate some of the OS setup tasks when creating a virtual machine using my base template.



My question, however, is: what else are other users here putting in their templates?




Should I set up external applications in my templates, such as SQL Server 2005, Notepad++, or Wireshark? Or are templates only meant to be used for OS-type settings like in the above guide?



Thanks.


Answer



There are two main components to templates if you're running vSphere (ESX and vCenter): the template VMs themselves, and Guest Customization. If you configure guest customization (just by copying sysprep for each OS onto the vCenter server), all the Windows deployment steps (naming, network, license, time zone, owner info) are taken care of for you.



So once that's running, all you have to do with the templates is provide a base OS configuration, patched, with a temporary network name. If you have 'standard build' apps that you must layer onto your builds (e.g. monitoring agents, wallpaper), then you can apply these to the template VM before marking it as a template.



Beyond that, I don't think there's too much more to it; everything else you could add is at your discretion. I find that keeping a relatively small number of templates/guest customizations, usually one per OS, is sufficient and saves a chunk of deployment time.


Tuesday, August 28, 2018

script works manually but not in cron job

I create my system graphs in RRD using a Perl script. When I run the script manually it updates the graphs; however, cron logs show that the job has been executed, yet the graphs are not updated. Any help please?



I've set the apache2 directory permissions to root:apache with mode 770.




Distro: CentOs 5.5.



Cron Config:



0,5,10,15,20,25,30,35,40,45,50,55 * * * *       /home/user/graphs/script-rrd.pl  > /dev/null 2>&1


I've already checked /var/log/cron; it shows that the above script has been executed, but the graphs in /var/html/www/graphs.png are not updated.



Resolution of this issue:




cp -rf /opt/rrdtool-1.4.4/lib/perl/5.8.8/i386-linux-thread-multi/* /usr/lib/perl/site_perl



After copying the RRD Perl modules into /usr/lib/perl5/site_perl, the issue was resolved.



I had already set PERL5LIB. After setting PERL5LIB I was able to run the script manually, but it was still not being run by cron, which is why I had to copy the files as shown above.
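For what it's worth, the same effect can usually be had without copying files, because cron jobs do not read your shell profile: the variable can be set in the crontab itself. A sketch, reusing the source directory from the cp command above as the library path:

PERL5LIB=/opt/rrdtool-1.4.4/lib/perl/5.8.8/i386-linux-thread-multi
0,5,10,15,20,25,30,35,40,45,50,55 * * * *       /home/user/graphs/script-rrd.pl  > /dev/null 2>&1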



SELinux is configured as follows:
SELINUX=disabled
SELINUXTYPE=targeted

SETLOCALDEFS=0

Monday, August 27, 2018

domain name system - How to Detect Split Horizon DNS from a Single Device



I'm building an iOS app that uses Bonjour for device discovery on the same WiFi network. It works fine on some networks, but not on others (like Starbucks or Panera). The devices see themselves, but not each other.



I got a tip that these networks may be using Split Horizon DNS. I've confirmed that I cannot ping one device from another.



The problem is, I want to show an error message if the WiFi network won't work.



I thought maybe I wouldn't be able to ping myself on such a network, but I can.




What is the best strategy for detecting Split Horizon from a single device? In other words, I cannot ping another device at runtime since I don't know anything about other devices.


Answer



As mentioned, this is almost certainly due to wireless clients being isolated. It makes perfect sense to do this on public wifi networks, and I'd be surprised if any public networks don't do it. (With client-to-client communication enabled, someone could sit on the network trying to hack other users' devices. It's a large security risk for users, and when you're providing a hotspot for Internet access, what's the point in allowing clients to see each other?)



If you can't ping other clients at all (by IP address) then it clearly has nothing to do with DNS.



I can't see any way to detect this, and there is no really simple solution to get around it. Some apps use a central server which all clients connect to and which relays data between clients (such as IM apps), although depending on the goal of your app that may not be a viable solution.



The most obvious answer is that your app will just have to tell the user it can't find any other clients, maybe with a more information button/section that details the fact that it may not be able to discover other clients on certain networks (especially public ones).


linux - How to list Apache enabled modules?



Is there a command that list all enabled Apache modules?


Answer



To list Apache's loaded modules, use:



apachectl -M



or:



apachectl -t -D DUMP_MODULES 


or on RHEL, CentOS, Fedora:



httpd -M



For more options, see man apachectl. All of these can be found with a little Google search.


Sunday, August 26, 2018

linux - Is this the right way to add 3 additional hard disk to /etc/fstab?



I have 4 hard disks. I simply created a single partition on each of them, formatted them all with ext3, and then rewrote fstab.



All right, I'll just add a few lines to fstab:



#
# /etc/fstab
# Created by anaconda on Wed Dec 19 15:22:22 2012

#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root / ext4 usrjquota=quota.user,jqfmt=vfsv0 1 1
UUID=1450c2bf-d431-4621-9e8e-b0be57fd79b6 /boot ext4 defaults 1 2
/dev/mapper/VolGroup-lv_home /home ext4 usrjquota=quota.user,jqfmt=vfsv0 1 2
/dev/mapper/VolGroup-lv_swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0

sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/usr/tmpDSK /tmp ext3 defaults,noauto 0 0
/home2 /dev/sdb1 auto auto,defaults 0 3
/home3 /dev/sdc1 auto auto,defaults 0 4
/home4 /dev/sdd1 auto auto,defaults 0 5


Am I doing this correctly?




Any suggestion to improve?



I wonder why /dev/sda1 shows up nowhere in fstab.



So I just change this and then restart the server, right?



This is the result of fdisk -l:



root@host [/home/freemark/backup]# fdisk -l


Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009e006

Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.

/dev/sda2 64 182402 1464625152 8e Linux LVM

Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0007ad8f

Device Boot Start End Blocks Id System

/dev/sdb1 1 182401 1465136001 8e Linux LVM

Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e43c4

Device Boot Start End Blocks Id System

/dev/sdd1 1 182401 1465136001 8e Linux LVM

Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006f9d9

Device Boot Start End Blocks Id System

/dev/sdc1 1 182401 1465136001 8e Linux LVM

Disk /dev/mapper/VolGroup-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000



Disk /dev/mapper/VolGroup-lv_swap: 36.0 GB, 35953573888 bytes
255 heads, 63 sectors/track, 4371 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/VolGroup-lv_home: 1410.1 GB, 1410133393408 bytes
255 heads, 63 sectors/track, 171438 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


After I reboot, I still cannot access /home2, /home3 and /home4.



Also everything has become so slow.




At first there was a space between auto, and defaults. I removed that and rebooted again. Now things work well, but there is still no /home2.



If I type mount



I got



root@host [/etc]# mount
/dev/mapper/VolGroup-lv_root on / type ext4 (rw,usrjquota=quota.user,jqfmt=vfsv0)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/VolGroup-lv_home on /home type ext4 (rw,usrjquota=quota.user,jqfmt=vfsv0)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/usr/tmpDSK on /tmp type ext3 (rw,noexec,nosuid,loop=/dev/loop0)
/tmp on /var/tmp type none (rw,noexec,nosuid,bind)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)



Update: Looks like I made a mistake. The mount point should be in the second column. Let me get this fixed first before anyone answers.


Answer




  • You have the syntax of the fstab file reversed. The device name goes in the first field, the mount point into the second:



    /dev/sdd1 /home4 auto auto,defaults 0 5


  • /dev/sda1 doesn't appear because it is referenced via the UUID:



    UUID=1450c2bf-d431-4621-9e8e-b0be57fd79b6 /boot ext4 defaults 1 2


  • It is usually not necessary to restart a server after an edit to /etc/fstab; you can simply mount the new disks by hand, e.g. with mount /home4, if the mount points are correctly referenced in the fstab.





Also, your disks appear to be partitioned as LVM partitions, not Linux partitions. Did you intend to add these disks to your /home LVM volume? This is what I would recommend (and maybe bring RAID into the mix with so many disks), but you would have to do things quite differently (search for LVM on SF or Google); a rough sketch of the LVM route is below.
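For illustration only, the usual steps to grow /home with the new disks would look something like this (assuming the volume group is called VolGroup and the logical volume lv_home, as the /dev/mapper names in the question suggest; back up first, and note this adds no redundancy unless you layer RAID underneath):

pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
vgextend VolGroup /dev/sdb1 /dev/sdc1 /dev/sdd1
lvextend -l +100%FREE /dev/VolGroup/lv_home
resize2fs /dev/VolGroup/lv_home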


domain name system - Windows Server - DHCP/DNS Updates - purging outdated DNS records

We have Windows 2008 Servers running and providing DHCP and DNS Services to the network. So far everything is working great.




The problem is that the clients which are provided with IPs from DHCP are automatically listed in DNS with their respective hostnames. Clients are listed 2-4 times with the same name but different IPs. There is always one correct IP/hostname combination and 2-4 outdated ones.



Is there any easy, automated way to get rid of all the outdated ones? I assume there must be some way of setting an expiration time in DNS after which outdated records get purged?

firewall - (Zywall USG 300) NAT bypassed when accessing in-house-server From LAN Via domain name



My situation is like this: I host a number of websites from within our joint network solution. On the network there are basically 3 categories:




  1. the known public, registered via mac, given static dhcp lease

  2. the anonymous lan connections, given lease from specific dhcp range

  3. switches, Unix hosts, firewall




Now, consider following hosts which are of interest




  1. 111.111.111.111 (Zywall USG 300 WAN)

  2. 192.168.1.1 (ZyWall USG 300 LAN) load balances and bw monitors plus handles NAT

  3. 192.168.1.2 (Linux www) serves mydomain1.tld and mydomain2.tld

  4. 192.168.123.123 (Random LAN client) accesses mydomain1.tld from LAN

  5. 23.234.12.253 (Random External client) accesses mydomain1.tld via WAN




DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111, and the Linux www host serves the HTTP parts with VirtualHost configurations, setting up the document roots per ServerName; this is not so interesting though.



NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT) as such:




  • Type: Virtual Server

  • Interface: WAN

  • Original IP: any

  • Mapped IP: 192.168.1.2

  • Original port: 80


  • Mapped port: 80



When NAT loopback is activated it makes the device unreachable from external interfaces (I haven't tried whether it makes LAN -> WAN IP:80 work, though).



Our problem follows;



When accessing http://mydomain1.tld from outside the joint network (e.g. from 23.234.12.253), everything is fine: the Zywall receives the request on port 80 and maps it to the Linux host's httpd. However, when trying to go through the NAT from the LAN side (in-house, e.g. 192.168.123.123), the request gets filtered by the Zywall's port 80 firewall rule.



I know this only because port 443 is open for the administration interface and https://mydomain1.tld prompts for the Zywall login.




So my conclusion is that LAN clients accessing 111.111.111.111 are in fact routed to 192.168.1.1 while bypassing the NAT table.



I need to know how to set up NAT / policy routes so that LAN > WAN > LAN will function with proper network translations, instead of doing the 'quick nameserver lookup' or whatever this might be.


Answer



The solution ended up being maintenance of the internal DNS lookup table (much like an /etc/hosts file), where I put in mydomainX.tld and map it to the appropriate internal IPs; an example of that kind of mapping is sketched below. I would have liked to get around this, though, and there's a bounty out for an answer which allows LAN -> WAN IP:PORT to go through the NAT table.
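For illustration, the mapping amounts to nothing more than pointing the public names at the internal web server for LAN clients (the names and the 192.168.1.2 address are taken from the hosts listed in the question):

192.168.1.2    mydomain1.tld
192.168.1.2    mydomain2.tld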


Friday, August 24, 2018

https - Wordpress SSL config on Google cloud instance

I am trying to configure a WordPress instance hosted on Google Cloud to use HTTPS instead of HTTP.



I am trying to set it up as per these instructions:




https://jamescoote.co.uk/add-letsencrypt-ssl-certificate-to-wordpress/



to use Let's Encrypt.



I've installed the certificates as per those instructions. I also symlinked ssl.conf and ssl.load into mods-enabled.



I added the cert paths into default-ssl.conf and symlinked that into sites-enabled, but whenever I do this I can't get Apache to restart. I get this message:



Job for apache2.service failed. See 'systemctl status apache2.service' and 'journalctl -xn' for details.



but when I try those commands it doesn't give me enough information to solve the problem.



The contents of the default-ssl.conf look like this (I've changed the hostname but the rest is as is):





ServerAdmin webmaster@localhost




    DocumentRoot /var/www/html

# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined


# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf

# SSL Engine Switch:
# Enable/Disable SSL for this virtual host.

SSLEngine on

# A self-signed (snakeoil) certificate can be created by installing
# the ssl-cert package. See
# /usr/share/doc/apache2/README.Debian.gz for more info.
# If both key and certificate are stored in the same file, only the
# SSLCertificateFile directive is needed.
# SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
# SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key


SSLCertificate /etc/letsencrypt/live/hostname/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/hostname/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/hostname/fullchain.pem

# Server Certificate Chain:
# Point SSLCertificateChainFile at a file containing the
# concatenation of PEM encoded CA certificates which form the
# certificate chain for the server certificate. Alternatively
# the referenced file can be the same as SSLCertificateFile
# when the CA certificates are directly appended to the server

# certificate for convinience.
#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

# Certificate Authority (CA):
# Set the CA certificate verification path where to find CA
# certificates for client authentication or alternatively one
# huge file containing all of them (file must be PEM encoded)
# Note: Inside SSLCACertificatePath you need hash symlinks
# to point to the certificate files. Use the provided
# Makefile to update the hash symlinks after changes.

#SSLCACertificatePath /etc/ssl/certs/
#SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

# Certificate Revocation Lists (CRL):
# Set the CA revocation path where to find CA CRLs for client
# authentication or alternatively one huge file containing all
# of them (file must be PEM encoded)
# Note: Inside SSLCARevocationPath you need hash symlinks
# to point to the certificate files. Use the provided
# Makefile to update the hash symlinks after changes.

#SSLCARevocationPath /etc/apache2/ssl.crl/
#SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

# Client Authentication (Type):
# Client certificate verification type and depth. Types are
# none, optional, require and optional_no_ca. Depth is a
# number which specifies how deeply to verify the certificate
# issuer chain before deciding the certificate is not valid.
#SSLVerifyClient require
#SSLVerifyDepth 10


# SSL Engine Options:
# Set various options for the SSL engine.
# o FakeBasicAuth:
# Translate the client X.509 into a Basic Authorisation. This means that
# the standard Auth/DBMAuth methods can be used for access control. The
# user name is the `one line' version of the client's X.509 certificate.
# Note that no password is obtained from the user. Every entry in the user
# file needs this password: `xxj31ZMTZzkVA'.
# o ExportCertData:

# This exports two additional environment variables: SSL_CLIENT_CERT and
# SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
# server (always existing) and the client (only existing when client
# authentication is used). This can be used to import the certificates
# into CGI scripts.
# o StdEnvVars:
# This exports the standard SSL/TLS related `SSL_*' environment variables.
# Per default this exportation is switched off for performance reasons,
# because the extraction step is an expensive operation and is usually
# useless for serving static content. So one usually enables the

# exportation for CGI and SSI requests only.
# o OptRenegotiate:
# This enables optimized SSL connection renegotiation handling when SSL
# directives are used in per-directory context.
#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire

SSLOptions +StdEnvVars


SSLOptions +StdEnvVars



# SSL Protocol Adjustments:
# The safe and default but still SSL/TLS standard compliant shutdown
# approach is that mod_ssl sends the close notify alert but doesn't wait for
# the close notify alert from client. When you need a different shutdown
# approach you can use one of the following variables:
# o ssl-unclean-shutdown:
# This forces an unclean shutdown when the connection is closed, i.e. no
# SSL close notify alert is send or allowed to received. This violates

# the SSL/TLS standard but is needed for some brain-dead browsers. Use
# this when you receive I/O errors because of the standard approach where
# mod_ssl sends the close notify alert.
# o ssl-accurate-shutdown:
# This forces an accurate shutdown when the connection is closed, i.e. a
# SSL close notify alert is send and mod_ssl waits for the close notify
# alert of the client. This is 100% SSL/TLS standard compliant, but in
# practice often causes hanging connections with brain-dead browsers. Use
# this only for browsers where you know that their SSL implementation
# works correctly.

# Notice: Most problems of broken clients are also related to the HTTP
# keep-alive facility, so you usually additionally want to disable
# keep-alive for those clients, too. Use variable "nokeepalive" for this.
# Similarly, one has to force some clients to use HTTP/1.0 to workaround
# their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
# "force-response-1.0" for this.
BrowserMatch "MSIE [2-6]" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
# MSIE 7 and newer should be able to use keepalive

BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown






# vim: syntax=apache ts=4 sw=4 sts=4 sr noet



When I checked whether the certificates had been issued successfully, they had. I guess there's some Apache config that I haven't applied somewhere, but I can't figure out what it is; any pointers would be appreciated.




Thanks



More details as requested:



This is the contents of the error log from apache:



[Thu Feb 23 06:46:55.153392 2017] [mpm_prefork:notice] [pid 1215] AH00163: Apache/2.4.10 (Debian) configured -- resuming normal operations
[Thu Feb 23 06:46:55.153424 2017] [core:notice] [pid 1215] AH00094: Command line: '/usr/sbin/apache2'
[Thu Feb 23 06:51:40.656914 2017] [authz_core:error] [pid 12411] [client 146.148.7.38:50713] AH01630: client denied by server configuration: /var/www/html/wp-config.old, referer: hostname/wp-config.old

[Thu Feb 23 07:42:52.938926 2017] [authz_core:error] [pid 12408] [client 146.148.7.38:51000] AH01630: client denied by server configuration: /var/www/html/wp-config.old, referer: hostname/wp-config.old
[Thu Feb 23 11:09:56.509913 2017] [mpm_prefork:notice] [pid 1215] AH00169: caught SIGTERM, shutting down
[Thu Feb 23 11:13:34.728029 2017] [mpm_prefork:notice] [pid 17535] AH00163: Apache/2.4.10 (Debian) configured -- resuming normal operations
[Thu Feb 23 11:13:34.728083 2017] [core:notice] [pid 17535] AH00094: Command line: '/usr/sbin/apache2'



This doesn't strike me as being related to the issue, but maybe I should remove that file anyway (although Apache starts OK when I remove the 443 section from the virtual host file).



This is the message from systemctl:



Failed to get D-Bus connection: No such file or directory - I got that after adding the VirtualHost section for port 443 to my wordpress.conf, which is what I had originally tried when starting to do this.

Thursday, August 23, 2018

OpenSSL ChangeCipherSpec vulnerability - ubuntu solution



I checked a site with this tool and the result came back that "This server is vulnerable to the OpenSSL CCS vulnerability (CVE-2014-0224) and exploitable."




I searched around and found that, in order not to be vulnerable, the OpenSSL build must be at least as new as this:



OpenSSL 1.0.1 14 Mar 2012
built on: Mon Jun 2 19:37:18 UTC 2014


My current version is



OpenSSL 1.0.1c 10 May 2012
built on: Fri May 2 20:25:02 UTC 2014



I tried a couple of ways to upgrade my OpenSSL, like this and this, but I still get the same version. For example, when I execute sudo apt-get dist-upgrade I get this message:



Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.



The first time I ran this command, packages were installed and I rebooted my machine with sudo reboot.



Any clue how I can update my OpenSSL to avoid this vulnerability? Anything else I might be missing?


Answer



OK, as suggested in the question comments, your problem is that you are running Ubuntu 12.10, which stopped being supported earlier this year, just about a month before the OpenSSL CCS issue was published. Hence, there aren't any good OpenSSL versions for Ubuntu 12.10, and there won't be.



Getting openssl/libssl packages from a newer Ubuntu might not be trivial, given that other packages you have installed might depend on a specific openssl version. I seem to recall libssl being fairly version-critical for packages compiled against it.



While there are things you could do, such as backporting the fix yourself (non-trivial) you really need to upgrade to a supported version of Ubuntu, given all other potential security issues in other packages. Especially since you appear to be running a web server, which usually has a fairly large attack surface.




For a server you usually want to go with an LTS version of Ubuntu, especially these days, with the non-LTS versions only being supported for nine months and the LTS versions getting five years of support. The current LTS versions are Ubuntu 12.04 and Ubuntu 14.04; a sketch of the usual upgrade path is below.
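The usual in-place path looks roughly like this (a sketch only: upgrading from an EOL release such as 12.10 may first require pointing /etc/apt/sources.list at old-releases.ubuntu.com, and a full backup beforehand is strongly advised):

sudo apt-get update
sudo apt-get install update-manager-core
sudo do-release-upgrade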


Wednesday, August 22, 2018

apache 2.2 - mod_rewrite rules for non-default page cache directory for Rails application with distinction between mobile directory and www directory

My question is similar (almost the same) to another question asked here, but with a twist.




Does anybody have some working Apache mod_rewrite rules that enable
Phusion Passenger (mod_rails) to use a non-default location for the
page cache within a Rails application? I'd like the cached files to go
in /public/cache rather than the default of /public.





In my case I have 2 custom directories. "/public/www.MYSITE.com/" and "/public/m.MYSITE.com" for mobile requests.



This particular mod_rewrite code works for me ONLY for www.MYSITE.com requests:



RewriteRule ^$ /www.MYSITE.com/index.html [QSA]
RewriteRule ^([^.]+)$ /www.MYSITE.com/$1.html [QSA]


Now, is there a way to take the www.MYSITE.com or m.MYSITE.com from the incoming URL and substitute it into the directory location used to look for the cached page? Everything about the URL is the same for both mobile and www requests, except for the host prefix: "m" for mobile browsers and "www" for everything else.




Just to clarify: I have mod_rewrite conditions/rules to detect mobile browsers, which work fine; I just need to tell Apache which cache directory to look in based on the subdomain of the request.
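For readers looking for the general technique, the host is typically captured with a RewriteCond backreference; a minimal, untested sketch along the lines of the rules quoted above (the condition has to be repeated, since a RewriteCond only applies to the RewriteRule immediately following it):

RewriteCond %{HTTP_HOST} ^((?:www|m)\.MYSITE\.com)$ [NC]
RewriteRule ^$ /%1/index.html [QSA]
RewriteCond %{HTTP_HOST} ^((?:www|m)\.MYSITE\.com)$ [NC]
RewriteRule ^([^.]+)$ /%1/$1.html [QSA]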

Tuesday, August 21, 2018

vpn - My ISP has me on a static IP address, but I'm sharing my connection. What IP address should I provide for Remote login?

The company I work for wants to set me up with Remote Desktop access to one of their servers.



My ISP has assigned me a static IP address; however at home I have multiple machines sharing the Internet through an ADSL modem/wireless router.



If I provide the company with the static IP that my ISP assigned me, will I be able to log in through any one of those machines?



Or will I need some special set up on either a machine or the router, to make it all work?

On NGINX PHP fastcgi skips right over memcache.ini and mongo.ini files

I have a strange issue I can't seem to figure out; I've googled and researched a lot to no avail.



I've got php-fastcgi via spawn-fcgi running fine with nginx (CentOS 5). However, I'm trying to load the memcache and mongo PHP extensions, and for some reason the nginx/FastCGI side skips right over their ini files in /etc/php.d/. I know the memcache and mongo extensions are installed correctly because my Apache instance on the same box loads them fine. I'm thinking maybe I need to build PHP from source and specifically include them in ./configure, but I'm a bit gun-shy about doing this because I'm not sure how it will impact the installation, since I installed everything from yum.




Finally, all permissions check out, and I've even tried putting the extensions directly into /etc/php.ini without success; the issue persists.



I also wanted to add that simply restarting the nginx service to load the modules (as is required) wasn't enough.



Here is what phpinfo() on NGINX shows:



Configuration File (php.ini) Path: /etc/php.ini



Scan this dir for additional .ini files: /etc/php.d




additional .ini files parsed: /etc/php.d/dbase.ini, /etc/php.d/dom.ini, /etc/php.d/gd.ini, /etc/php.d/imap.ini, /etc/php.d/json.ini, /etc/php.d/mbstring.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/xmlreader.ini, /etc/php.d/xmlwriter.ini, /etc/php.d/xsl.ini



Here is what phpinfo() on Apache shows:



Configuration File (php.ini) Path: /etc/php.ini



Scan this dir for additional .ini files: /etc/php.d



additional .ini files parsed: /etc/php.d/dbase.ini, /etc/php.d/dom.ini, /etc/php.d/gd.ini, /etc/php.d/imap.ini, /etc/php.d/json.ini, /etc/php.d/mbstring.ini, /etc/php.d/memcache.ini, /etc/php.d/mongo.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/xmlreader.ini, /etc/php.d/xmlwriter.ini, /etc/php.d/xsl.ini




Notice that Apache loaded memcache.ini and mongo.ini and nginx didn't. What could be wrong?

Saturday, August 18, 2018

ip - If I change a router's subnet mask to one that includes the old one, will the hosts with the old configuration still work?



Currently I have a router that is addressed as 10.2.1.1 on a 10.2.1.0/24 network. All of my hosts have default gateway 10.2.1.1 with mask 255.255.255.0.




I want to know: if I change my router to 10.2.0.0/16 will the hosts that have subnet mask corresponding to /24 still work in the interim (before I reconfigure)?



EDIT: If it is not possible, what is the best way to transition from a smaller subnet to a larger subnet, assuming these are all Windows hosts behind a pfSense device?



EDIT 1: For clarification, I will keep the router address as 10.2.1.1, just make the subnet bigger (/16 instead of /24).


Answer



It would work, somewhat; it depends on how you define 'work'. If you change the netmask on your router, so that instead of having 10.2.1.1/24 it has 10.2.1.1/16, then:



A host with a 10.2.1.0/24 address could still reach any system with an address between 10.2.1.0 and 10.2.1.255, whether that system has a /24 or a /16 mask. The systems would simply use ARP resolution and connect directly to each other, since from the perspective of both systems they are each on their local network.




A host with a 10.2.1.0/24 address would be able to connect to any host outside of the 10.2.0.0/16 network. It would ARP for the gateway address and connect via your router.



The only thing they couldn't reach is hosts that are in 10.2.0.0/16 but not in the 10.2.1.0/24 range: a host with an address in 10.2.1.0/24 would try to connect via the router, but a host in 10.2.0.0/16 outside of 10.2.1.0/24 would try to connect directly (the asymmetry is illustrated below). Even this can be mitigated on some routers using something called proxy ARP. You basically have to convince the router to reply to ARP requests on behalf of a system in 10.2.1.0/24 when the request came from a system not within that subnet.
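To make the asymmetric case concrete (both addresses below are hypothetical examples):

10.2.1.50/24 -> 10.2.5.10 : destination is outside its /24, so it sends the traffic to the gateway 10.2.1.1
10.2.5.10/16 -> 10.2.1.50 : destination is inside its /16, so it ARPs for 10.2.1.50 and connects directly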



The key point here is that it will work somewhat, but you must fix the netmask on all your systems to the new subnet before you start assigning address space from the other portion of the network.


Friday, August 17, 2018

megaraid - setup raid 6 with 12 - 2 TB drives

I have two 500GB 7.2k SATA 2.5in drives, plus twelve 2TB drives, and an LSI 9260-8i SAS/SATA card. I need to have the OS on one volume and the rest for storage. I am unsure how to build the RAID 6. The controller has the LSI MegaRAID utility. Should I build it all as one array with RAID 6? Then it asks if I want to add the array hole to the span. What's the best way to handle this?

ssd - LSI 9207-8i and Samsung 850 PRO TRIM support

A Dell R610 server with LSI 9207-8i HBA card has 6 Samsung 850 PRO SSDs connected to it.




hdparm shows TRIM support:



sudo hdparm -I /dev/sdc | grep -i trim
* Data Set Management TRIM supported (limit 8 blocks)


However, executing the Samsung Magician software on Ubuntu 14.04 returns the following error:



ERROR : This feature is not supported for disks connected to LSI RAID HBA Cards.



Neither does the fstrim command help:



fstrim: /: FITRIM ioctl failed: Operation not supported


The compatibility matrix doesn't list the Samsung 850 PRO, so should I get another controller that supports this SSD in order for TRIM to work?



I do not need any hardware RAID capabilities and intend to configure these 6 drives with RAID 10 using mdadm.
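As a quick check of whether the kernel sees any discard (TRIM) capability through this HBA, the block layer's view can be read from sysfs (a diagnostic sketch; /dev/sdc is the device from the hdparm example above, and values of 0 mean no discard support is being passed through):

cat /sys/block/sdc/queue/discard_granularity
cat /sys/block/sdc/queue/discard_max_bytes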

Thursday, August 16, 2018

Install HP drive in Dell PowerEdge 2950?



First, I do agree that what I'm asking here is certainly unsupported by either HP or Dell.



I'm just exploring some options here.



The scenario: We have a server, a Dell PowerEdge 2950, running SQL Server. This poor old server has been repeatedly brought to its knees at every month-end close, simply because the drives within are too slow. Something needs to be done about the storage.




The long-term solution: We're going to virtualize the server, backed by SAN storage. We're purchasing an EMC storage array, but it will take some time (6-8 weeks) before we actually have the array, so in the meantime a 'quick-win' solution is necessary.



The situation: One of the IT Managers suggested purchasing HP PN 691856-B21 and test it running within the Dell server.



My Questions:



(1) I'm almost sure that HP's drive caddy will NOT fit into Dell's chassis, but I've yet to see any explicit statement for that. So, can someone explicitly confirm that the HP drive mentioned above cannot directly fit the PowerEdge 2950?



(2) My suggested solution would be to purchase off-the-shelf SATA SSD drives and use Dell PN CC852 (caddy) and Dell PN PN939 (interposer), but first I want to know if this is a good idea.




Do remember that this solution is just to work around the performance problem until the EMC Storage Array arrives. So I'm not keen on a too-expensive solution.


Answer



1) We have a mix of Dell and HP servers on site here, and I can confirm that an HP drive caddy will not fit a Dell chassis.



2) As an interim step, fitting a commodity SSD drive in a Dell caddy should work just fine, with the SAS->SATA adapter you specify.


domain name system - Secondary server testing environment

I want to test our secondary e-mail server, but the primary one currently works correctly. How can I make the primary server slow down a bit by using DNS records, so that the secondary server gets tested?



At one time I did this by not writing the NS server addresses correctly.



Do you know another approach ?



Thanks

Baris

Wednesday, August 15, 2018

installation - Ubuntu 12.04 - apt-get install ia32-libs unmet dependencies

Trying to install ia32-libs. I run sudo apt-get install ia32-libs. The output is as follows:




Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:

ia32-libs : Depends: lib32v4l-0 (>= 0.5.0)
E: Unable to correct problems, you have held broken packages.


I have tried sudo apt-get install -f, sudo apt-get update, and sudo apt-get upgrade. I tried doing sudo apt-get install lib32v41-0, but that doesn't work either because the package cannot be found.



sudo apt-get install lib32v41-0
Reading package lists... Done
Building dependency tree
Reading state information... Done

E: Unable to locate package lib32v41-0


Any suggestions on how to get ia32-libs installed?

django - Nginx - connect() failed (111: Connection refused) while connecting to upstream



I am running a site that uses Django, Nginx, Gunicorn, Supervisord and fail2ban (which only allows SSH, HTTP and HTTPS). The site is live and working correctly, but there are some nginx error log entries that are concerning:



connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: www.example.com, request: "GET /example/url/to/get/ HTTP/1.1", upstream: "http://[::1]:8000/example/url/to/get/", host: "www.example.com"

upstream server temporarily disabled while connecting to upstream, client: x.x.x.x, server: www.example.com, request: "GET /example/url/to/get/ HTTP/1.1", upstream: "http://[::1]:8000/example/url/to/get/", host: "www.example.com"


Here is my nginx config:




upstream app_server_wsgiapp {
server localhost:8000 fail_timeout=0;
}

server {
listen 80;
server_name www.example.com;
return 301 https://www.example.com$request_uri;
}


server {
server_name www.example.com;
listen 443 ssl;

if ($host = 'example.com') {
return 301 https://www.example.com$request_uri;
}

ssl_certificate /etc/nginx/example/example.crt;

ssl_certificate_key /etc/nginx/example/example.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;

access_log /var/log/nginx/www.example.com.access.log;
error_log /var/log/nginx/www.example.com.error.log info;

keepalive_timeout 5;

proxy_read_timeout 120s;

# nginx serve up static and media files
location /static {
autoindex on;
alias /static/path;
}


location /media {
autoindex on;
alias /media/path;
}

location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {

proxy_pass http://app_server_wsgiapp;
break;
}
}
}


I do not have any errors in the Gunicorn logs.



Like I said, the site is working correctly. But I don't want to ignore error logs which could potentially become a bigger issue later.



Answer



Could this be because your system is dual-stack, but your upstream is IPv4 only?



It looks as if localhost is resolving to [::1], which depending on your upstream might be the problem in and of itself.



Given you are communicating over loopback, I would tend to assume the Connection refused is 'real' - it is reflective of the actual issue.



You can check whether this is the problem by replacing localhost with 127.0.0.1 in your upstream config:



upstream app_server_wsgiapp {

server 127.0.0.1:8000 fail_timeout=0;
}
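One quick way to confirm which addresses Gunicorn is actually bound to (a diagnostic sketch, using standard iproute2 tooling):

ss -tlnp | grep :8000

If that shows a listener on 127.0.0.1:8000 but nothing on [::1]:8000, the dual-stack explanation above fits.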

Tuesday, August 14, 2018

cron - Bash script runs manually, but fails on crontab



I'm a newbie to shell scripting. I have written a shell script to do an incremental backup of a MySQL database. The script is executable and runs successfully when executed manually, but fails when executed through crontab.
The crontab entry is like this: */1 * * * * /home/db-backup/mysqlbackup.sh
Below is the shell script code:



 #!/bin/sh
MyUSER="root" # USERNAME
MyPASS="password" # PASSWORD
MyHOST="localhost" # Hostname
Password="" #Linux Password


MYSQL="$(which mysql)"
if [ -z "$MYSQL" ]; then
echo "Error: MYSQL not found"
exit 1
fi
MYSQLADMIN="$(which mysqladmin)"
if [ -z "$MYSQLADMIN" ]; then
echo "Error: MYSQLADMIN not found"
exit 1
fi

CHOWN="$(which chown)"
if [ -z "$CHOWN" ]; then
echo "Error: CHOWN not found"
exit 1
fi
CHMOD="$(which chmod)"
if [ -z "$CHMOD" ]; then
echo "Error: CHMOD not found"
exit 1
fi


GZIP="$(which gzip)"
if [ -z "$GZIP" ]; then
echo "Error: GZIP not found"
exit 1
fi
CP="$(which cp)"
if [ -z "$CP" ]; then
echo "Error: CP not found"
exit 1

fi
MV="$(which mv)"
if [ -z "$MV" ]; then
echo "Error: MV not found"
exit 1
fi
RM="$(which rm)"
if [ -z "$RM" ]; then
echo "Error: RM not found"
exit 1

fi
RSYNC="$(which rsync)"
if [ -z "$RSYNC" ]; then
echo "Error: RSYNC not found"
exit 1
fi

MYSQLBINLOG="$(which mysqlbinlog)"
if [ -z "$MYSQLBINLOG" ]; then
echo "Error: MYSQLBINLOG not found"

exit 1
fi
# Get the date in dd-mm-yyyy-HH:MM:SS format
NOW="$(date +"%d-%m-%Y-%T")"

DEST="/home/db-backup"
mkdir $DEST/Increment_backup.$NOW
LATEST=$DEST/Increment_backup.$NOW
$MYSQLADMIN -u$MyUSER -p$MyPASS flush-logs
newestlog=`ls -d /usr/local/mysql/data/mysql-bin.?????? | sed 's/^.*\.//' | sort -g | tail -n 1`

echo $newestlog
for file in `ls /usr/local/mysql/data/mysql-bin.??????`
do
if [ "/usr/local/mysql/data/mysql-bin.$newestlog" != "$file" ]; then
echo $file
$CP "$file" $LATEST
fi
done
for file1 in `ls $LATEST/mysql-bin.??????`
do

$MYSQLBINLOG $file1>$file1.$NOW.sql
$GZIP -9 "$file1.$NOW.sql"
$RM "$file1"
done
$RSYNC -avz $LATEST /home/rsync-back



  • First of all, when scheduled in crontab it does not show any errors. How can I find out whether the script is running or not?

  • Secondly, what is the correct way to execute the shell script from crontab?


  • Some blogs suggest changing environment variables. What would be the best solution?


Answer



Scripts being run from crontab do not always have the same environment variables you normally take for granted... Do this:



Change



 #!/bin/sh



to



 #!/bin/sh -x


With -x you get a trace of what the script executes. You may also try setting PATH manually, and even sourcing /etc/profile if all else fails:



#!/bin/sh -x
PATH=/bin:/sbin:/usr/bin:/usr/sbin
. /etc/profile    # source it, so its environment settings apply to this shell
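It also helps to capture the job's output somewhere you can read it instead of discarding it; a sketch based on the crontab line from the question (the log path is arbitrary):

*/1 * * * * /home/db-backup/mysqlbackup.sh >> /tmp/mysqlbackup.log 2>&1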


Sunday, August 12, 2018

linux - How to add disk space to /var/www for my websites?




I did $ df -h and it threw this:



Filesystem            Size  Used Avail Use% Mounted on
/dev/md1 9.7G 1.7G 7.6G 18% /
/dev/md2 683G 211M 649G 1% /home
tmpfs 4.0K 0 4.0K 0% /dev/shm


The problem is that my websites are located in /var/www, which I guess belongs to /, and they won't be able to use the large partition's disk space for things like images that need to be placed inside the webroot, e.g. /var/www/my_site/public_html/.




What can I do about it? Should I move disk space from /home to / ? How?
Or move the sites to /home ?
Any thoughts?



I'm using CentOS 5.5 and Apache 2.


Answer



The easier way is to move the data to the larger partition and symlink it back into place:



 $ mv /var/www /home/

$ ln -s /home/www /var/www

How to download security updates for Ubuntu Hardy



Which command should I use to download security updates for Ubuntu Hardy?



Ideally I would like the command to also create a log file listing the files downloaded and installed, along with any errors.



Thanks for help.


Answer




A good start:



$ apt-get update
$ apt-get upgrade


And check /var/log/dpkg.log and /var/log/apt/term.log


windows server 2012 - Active Directory with custom domain name?




I'm new to the Active Directory world (I know how to use it, not set it up :) ). I just bought a server from Leaseweb running Windows Server 2012 R2 and plan to use it for Active Directory. It has its own IP address, and everything.



My question though is: I obviously can't bind computers to its domain using a .local network, so how would I set up Active Directory using a domain I own?




Say I own example.com and I create a subdomain, ad.example.com, in the DNS settings at my domain's hosting provider. Is it possible to use that subdomain on my Windows server as the Active Directory name?



I'm looking for some guidance on how to set up Active Directory correctly using a server that isn't on my local network. Please let me know what information you need.



Thank you in advance!


Answer



Setting up an Active Directory Domain is the same, whether the computer you're doing it on is local or remote. The difference will be in how you join to and connect to the remote Active Directory domain - this is usually done via VPN, as it's a bad practice (for security reasons) to expose your domain controllers to access from the internet.



As to how, precisely, you set up an Active Directory forest... that's too broad a topic for our Q&A format. Do it (find a guide online, if needed), and then feel free to come back to search for answers or ask about any specific problems or questions you come across.


Saturday, August 11, 2018

active directory - Correct licensing if users have multiple AD accounts

When licensing user CALs for Windows Server, I have the following problem (or question):




Say you have 30 employees in your domain, 5 of whom are administrators.
Each of these administrators has two domain accounts: one for day-to-day work, with almost normal permissions, and one with domain admin permissions for logging in to servers and workstations. This helps distinguish who was responsible for a software installation, reboot, etc.



Now, do I need to buy 35 licenses for those 30 physical employees because I have 35 user accounts, or is it enough to buy one user CAL per (physical) admin?



The same question arises for a special backup user which is used by backup scripts (i.e. no physical user) to transfer files to our file pool.



I already checked Active Directory Licensing but it doesn't completely cover my question.

Nginx reverse proxy and wordpress

Everything is working well, but I'm having an issue with WordPress on the plugins (extensions) page.



The problem is that plugin icons are not displayed, and when I click on the icon of the plugin I want to install, it opens an empty window which keeps loading forever. But when I click the "Install" button, the plugin installs without any issue.




You can look at this image to see what happens:



Issue Wordpress



Here is the config:



Config



Here is the Webserver config:




Front end Nginx



server {
listen 443 ssl;

# SSL
ssl on;
ssl_certificate /etc/ssl/nginx/nginx.crt;
ssl_certificate_key /etc/ssl/nginx/nginx.key;

ssl_session_cache shared:SSL:40m;
ssl_session_timeout 10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

server_name domaine.tld;

# Proxy Pass to Varnish and Add headers to recognize SSL
location / {
proxy_pass http://127.0.0.1:80;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Port 443;
proxy_set_header X-Secure on;
}
}



Backend Nginx



server {
listen 8000;

server_name domaine.tld;
root /var/www/domaine;
index index.php;

# Custom Error Page

error_page 404 403 /page_error/404.html;
# Log
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

location / {
try_files $uri $uri/ /index.php?$args;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
}


# PHP-FPM
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param HTTPS on;

}
}


Varnish Default



DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \

-s malloc,256m"


Varnish VCL 4.0



backend default {
.host = "127.0.0.1";
.port = "8000";
.connect_timeout = 600s;
.first_byte_timeout = 600s;

.between_bytes_timeout = 600s;
.max_connections = 800;
}

# Only allow purging from specific IPs
acl purge {
"localhost";
"127.0.0.1";
}


# This function is used when a request is send by a HTTP client (Browser)
sub vcl_recv {

# Redirect to https
if ( (req.http.host ~ "^(?i)www.domaine.tld" || req.http.host ~ "^(?i)domaine.tld") && req.http.X-Forwarded-Proto !~ "(?i)https") {
return (synth(750, ""));
}

# Normalize the header, remove the port (in case you're testing this on various TCP ports)
set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");


# Allow purging from ACL
if (req.method == "PURGE") {
# If not allowed then a error 405 is returned
if (!client.ip ~ purge) {
return(synth(405, "This IP is not allowed to send PURGE requests."));
}
# If allowed, do a cache_lookup -> vlc_hit() or vlc_miss()
return (purge);
}


# Post requests will not be cached
if (req.http.Authorization || req.method == "POST") {
return (pass);
}

# Did not cache .ht* file
if ( req.url ~ ".*htaccess.*" ) {
return(pass);
}


if ( req.url ~ ".*htpasswd.*" ) {
return(pass);
}

# Don't cache phpmyadmin
if ( req.url ~ "/nothingtodo" ) {
return(pass);
}


# --- Wordpress specific configuration

# Did not cache the RSS feed
if (req.url ~ "/feed") {
return (pass);
}

# Don't cache 404 error
if (req.url ~ "^/404") {
return (pass);

}

# Blitz hack
if (req.url ~ "/mu-.*") {
return (pass);
}


# Did not cache the admin and login pages
if (req.url ~ "/wp-(login|admin)") {

return (pass);
}

# Do not cache the WooCommerce pages
### REMOVE IT IF YOU DO NOT USE WOOCOMMERCE ###
if (req.url ~ "/(cart|my-account|checkout|addons|/?add-to-cart=)") {
return (pass);
}

# First remove the Google Analytics added parameters, useless for our backend

if(req.url ~ "(\?|&)(utm_source|utm_medium|utm_campaign|gclid|cx|ie|cof|siteurl)=") {
set req.url = regsuball(req.url, "&(utm_source|utm_medium|utm_campaign|gclid|cx|ie|cof|siteurl)=([A-z0-9_\-\.%25]+)", "");
set req.url = regsuball(req.url, "\?(utm_source|utm_medium|utm_campaign|gclid|cx|ie|cof|siteurl)=([A-z0-9_\-\.%25]+)", "?");
set req.url = regsub(req.url, "\?&", "?");
set req.url = regsub(req.url, "\?$", "");
}

# Remove the "has_js" cookie
set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");


# Remove any Google Analytics based cookies
set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");

# Remove the Quant Capital cookies (added by some plugin, all __qca)
set req.http.Cookie = regsuball(req.http.Cookie, "__qc.=[^;]+(; )?", "");

# Remove the wp-settings-1 cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wp-settings-1=[^;]+(; )?", "");

# Remove the wp-settings-time-1 cookie

set req.http.Cookie = regsuball(req.http.Cookie, "wp-settings-time-1=[^;]+(; )?", "");

# Remove the wp test cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wordpress_test_cookie=[^;]+(; )?", "");

# remove cookies for comments cookie to make caching better.
set req.http.cookie = regsub(req.http.cookie, "dcd9527364a17bb2ae97db0ead3110ed=[^;]+(; )?", "");

# remove ?ver=xxxxx strings from urls so css and js files are cached.
set req.url = regsub(req.url, "\?ver=.*$", "");

# Remove "replytocom" from requests to make caching better.
set req.url = regsub(req.url, "\?replytocom=.*$", "");
# Strip hash, server doesn't need it.
set req.url = regsub(req.url, "\#.*$", "");
# Strip trailing ?
set req.url = regsub(req.url, "\?$", "");

# Are there cookies left with only spaces or that are empty?
if (req.http.cookie ~ "^ *$") {
unset req.http.cookie;

}

# Drop any cookies sent to Wordpress.
if (!(req.url ~ "wp-(login|admin)")) {
unset req.http.cookie;
}

# Cache the following files extensions
if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)") {
unset req.http.cookie;

}

# Normalize Accept-Encoding header and compression
# https://www.varnish-cache.org/docs/3.0/tutorial/vary.html
if (req.http.Accept-Encoding) {
# Do no compress compressed files...
if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";

} elsif (req.http.Accept-Encoding ~ "deflate") {
set req.http.Accept-Encoding = "deflate";
} else {
unset req.http.Accept-Encoding;
}
}

# Check the cookies for wordpress-specific items
if (req.http.Cookie ~ "wordpress_" || req.http.Cookie ~ "comment_") {
return (pass);

}
if (!req.http.cookie) {
unset req.http.cookie;
}

# --- End of Wordpress specific configuration

# No cache for big video files
if (req.url ~ "\.(avi|mp4)") {
return (pass);

}

# Did not cache HTTP authentication and HTTP Cookie
if (req.http.Authorization || req.http.Cookie) {
# Not cacheable by default
return (pass);
}

# Cache all others requests
return (hash);

}

sub vcl_pipe {
# Note that only the first request to the backend will have
# X-Forwarded-For set. If you use X-Forwarded-For and want to
# have it set for all requests, make sure to have:
# set bereq.http.connection = "close";
# here. It is not set by default as it might break some broken web
# applications, like IIS with NTLM authentication.
#set bereq.http.Connection = "Close";

return (pipe);
}

sub vcl_pass {
return (fetch);
}

sub vcl_synth {
if (resp.status == 750) {
set resp.status = 301;

set resp.http.Location = "https://www.paris-vendome.com" + req.url;
return(deliver);
}
}


# The data on which the hashing will take place
sub vcl_hash {
hash_data(req.url);
if (req.http.host) {

hash_data(req.http.host);
} else {
hash_data(server.ip);
}

# hash cookies for requests that have them
if (req.http.Cookie) {
hash_data(req.http.Cookie);
}


# If the client supports compression, keep that in a different cache
if (req.http.Accept-Encoding) {
hash_data(req.http.Accept-Encoding);
}

return (lookup);
}

# This function is used when a request is sent by our backend (Nginx server)
sub vcl_backend_response {

# Remove some headers we never want to see
unset beresp.http.Server;
unset beresp.http.X-Powered-By;

# For static content strip all backend cookies
if (bereq.url ~ "\.(css|js|png|gif|jp(e?)g|swf|ico)") {
unset beresp.http.cookie;
}

# Only allow cookies to be set if we're in admin area

if (beresp.http.Set-Cookie && bereq.url !~ "^/wp-(login|admin)") {
unset beresp.http.Set-Cookie;
}

# don't cache response to posted requests or those with basic auth
if ( bereq.method == "POST" || bereq.http.Authorization ) {
set beresp.uncacheable = true;
set beresp.ttl = 120s;
return (deliver);
}


# don't cache search results
if ( bereq.url ~ "\?s=" ){
set beresp.uncacheable = true;
set beresp.ttl = 120s;
return (deliver);
}

# only cache status ok
if ( beresp.status != 200 ) {

set beresp.uncacheable = true;
set beresp.ttl = 120s;
return (deliver);
}

# A TTL of 24h
set beresp.ttl = 24h;
# Define the default grace period to serve cached content
set beresp.grace = 30s;


return (deliver);
}

# The routine when we deliver the HTTP request to the user
# Last chance to modify headers that are sent to the client
sub vcl_deliver {
if (obj.hits > 0) {
set resp.http.X-Cache = "cached";
} else {
set resp.http.X-Cache = "uncached";

}

# Remove some headers: PHP version
unset resp.http.X-Powered-By;

# Remove some headers: Apache version & OS
unset resp.http.Server;

# Remove some headers: Varnish
unset resp.http.Via;

unset resp.http.X-Varnish;

unset resp.http.Age;
unset resp.http.Link;

return (deliver);
}

sub vcl_hit {
return (deliver);

}
sub vcl_miss {
return (fetch);
}

sub vcl_init {
return (ok);
}

sub vcl_fini {

return (ok);
}
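
As a side note (my addition, not part of the original post): since the vcl_deliver routine above sets an X-Cache header, a quick pair of curl requests against Varnish shows whether anything is being cached at all. The port below is an assumption; adjust it to wherever Varnish actually listens.

# Repeat the same request twice and watch the X-Cache header set by vcl_deliver:
# "uncached" on the first request, "cached" on the second if caching works.
curl -sI -H 'Host: www.paris-vendome.com' http://127.0.0.1:6081/ | grep -i '^x-cache'
curl -sI -H 'Host: www.paris-vendome.com' http://127.0.0.1:6081/ | grep -i '^x-cache'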


I think the issue is not related to Varnish itself but to the backend, because when I test with the following config (no Varnish in front, Nginx serving HTTPS directly), everything works without any issue:



server {
listen 80;
server_name domaine.tld;
return 301 https://www.domaine.tld$request_uri;

}


server{
listen 443;
ssl on;
ssl_certificate /etc/ssl/nginx/nginx.crt;
ssl_certificate_key /etc/ssl/nginx/nginx.key;
ssl_session_timeout 10m;


root /var/www/domaine;
index index.htm index.html index.php;

server_name domaine.tld;

server_tokens off;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

error_page 404 403 /page_error/404.html;

error_page 500 502 503 504 /page_error/50x.html;

gzip on;
etag off;


location / {
try_files $uri $uri/ =404;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;

}

location ~ \.php$ {


try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param HTTPS on;
}
}


If I have missed anything or if you would like more information, please do not hesitate to ask.
Sorry for the long post, but I'm desperate.
I hope somebody can help me.
Thanks in advance.

samba - Directories shown as files, when sharing a mounted cifs drive



I have an issue where a directory is shown as a file when accessing a samba share ( on Ubuntu 12.10 ) from a Windows machine.



The output from ls -ll in the folder on the linuxbox is as follows:



chubby@chubby:/media/blackhole/_Arkiv$ ls -ll
total 0
drwxrwxrwx 0 jv users 0 Jun 18 2012 _20
drwxrwxrwx 0 jv users 0 Apr 17 2012 _2006

drwxrwxrwx 0 jv users 0 Apr 17 2012 _2007
drwxrwxrwx 0 jv users 0 May 12 2011 _2008
drwxrwxrwx 0 jv users 0 Feb 19 09:53 _2009
drwxrwxrwx 0 jv users 0 Dec 20 2011 _2010
drwxrwxrwx 0 jv users 0 May 8 2012 _2011
drwxrwxrwx 0 jv users 0 Mar 5 11:37 _2012
drwxrwxrwx 0 jv users 0 Feb 28 10:09 _2013
drwxrwxrwx 0 jv users 0 Feb 28 11:18 _Mailarkiv
drwxrwxrwx 0 jv users 0 Jan 3 2011 _Praktikanter



The entry in /etc/fstab is:



# Mounting blackhole
//192.168.0.50/kunder/ /media/blackhole cifs uid=jv,gid=users,credentials=/home/chubby/.smbcredentials,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0


When I access the share directly from the NAS on my Windows box, there are no issues.



The version of Samba is 3.6.6, but I couldn't find anything in the changelogs that seems relevant.




I've tried mounting it in different locations with different permissions, users and groups but I have not made any progress



Due to my low reputation on serverfault ( mostly stackoverflow user ) I'm unable to post a screenshot that shows that the directories are shown as files.



If I type the full path in explorer, the directory listing works excellently, except for any subdirectories that are then shown as files.



Any attack vector for this issue would be greatly appreciated.



Please let me know if I have provided insufficient details.




Edit:
The same share, when accessed from OS X, works perfectly, listing the directories as directories.
Best Regards!


Answer



I have finally solved the problem.



I'll try to write this answer out more when I have the time.



The issue is connected to resharing a cifs filesystem, and then accessing this from a Windows7 computer.




The samba bug is here:
https://bugzilla.samba.org/show_bug.cgi?id=9346



This apparently stems from the way information is set on the inode in cifs.



See bug here:
https://bugzilla.kernel.org/show_bug.cgi?id=52791



So the way Samba determines whether an entry is a directory (for its Windows clients) is by counting the number of hard links, rather than testing for the directory attribute. Since cifs (for some obscure reason) always sets this count to zero, whereas a real directory always has at least two hard links, the directory appears as a file to Windows clients.
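
For illustration (my addition, not from the original answer), the mismatch is easy to see with stat(1): a directory on the cifs mount reports a hard-link count of 0, while a directory on a local filesystem reports at least 2. The paths below are just the ones from this question.

# %h = hard-link count, %F = file type, %n = name
stat -c '%h  %F  %n' /media/blackhole/_Arkiv/_2009   # on the cifs mount: link count 0
stat -c '%h  %F  %n' /home/chubby                    # on a local filesystem: link count >= 2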




So to "fix" this I installed my current kernel-headers and the linux source code:



sudo apt-get install linux-headers-$(uname -r) linux-source


I then went to /usr/src/linux-source-3.5.0 and extracted the archive there.



In the folder /usr/src/linux-source-3.5.0/linux-source-3.5.0/fs/cifs
I changed the following in the file inode.c (line 135):




set_nlink(inode, fattr->cf_nlink);


to:



if(fattr->cf_cifsattrs & ATTR_DIRECTORY)
set_nlink(inode, 2);
else
set_nlink(inode, fattr->cf_nlink);



I then created a makefile to ease compilation ( and avoid annoying insmod errors ):
Makefile2:



obj-m := cifs.o
KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)
default:
	$(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules



This allows us to do ( in the same folder ):
sudo make -f Makefile2



This gives us a file called cifs.ko.



So now we can stop Samba, unmount any shares we have, remove the current cifs, and install our recompiled one.



sudo service smbd stop

sudo umount /path/to/share
sudo rmmod cifs
sudo insmod cifs.ko
sudo mount -a
sudo service smbd start


For me this did the trick. Note that if you restart the box, this change will not persist;
I'll add to this post when I've figured out a good way to make it permanent.
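
In the meantime, one possible way to make the rebuilt module survive a reboot, assuming a stock Ubuntu module layout (this is my suggestion, not something from the original answer):

# Back up the stock module, drop the patched one into the module tree, refresh dependencies.
sudo cp /lib/modules/$(uname -r)/kernel/fs/cifs/cifs.ko /root/cifs.ko.orig
sudo cp cifs.ko /lib/modules/$(uname -r)/kernel/fs/cifs/cifs.ko
sudo depmod -a
# Any kernel package update will overwrite this, so the patch and rebuild have to be
# repeated after each kernel upgrade (or wrapped up in a DKMS package).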




Please throw any questions or clarifications you require my way, I'll probably learn from it :)



Also thanks to kukks in #samba on freenode, I learned a lot of stuff there, though I ended up moving in another direction.


Friday, August 10, 2018

ubuntu - keepAlive in Apache causing apache to reach its max_clients

I have Apache 2.2 running on Ubuntu 11.04 with 16 GB RAM, used for image hosting from mobile phones over GPRS. Since the connections are slow, I have enabled KeepAlive and set the timeout to 6, based on average loading time. But usually, even with 10-20 users, Apache reaches its MaxClients limit of 300 and prevents further connections. The interesting thing is that even with KeepAlive turned OFF, Apache still reaches MaxClients and refuses to accept new connections.



KeepAlive On                 (tried Off as well)
MaxKeepAliveRequests 100
KeepAliveTimeout 6           (since there are a lot of dynamic images and slow connections)
StartServers 100
MinSpareServers 100
MaxSpareServers 150
ServerLimit 300
MaxClients 300
MaxRequestsPerChild 3000



What should I do to improve performance without hitting MaxClients? Caching and the deflate module are also enabled. Is it OK to set MaxRequestsPerChild to 10 to prevent reaching MaxClients?
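
A reasonable first diagnostic step (my suggestion, not from the original post) is to enable mod_status and look at what those 300 workers are actually doing; the scoreboard shows whether they are sending replies (W), sitting in keepalive (K) or reading requests (R).

# Apache 2.2 style access control; restrict this to localhost or an admin network.
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
# Then check it with: apachectl fullstatus   (or: curl http://localhost/server-status?auto)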

mac osx - ssh hangs without password prompt -- works in root or other accounts



I had ssh key based login working fine.
Then, I changed the hostname on my computer, and the key based login stopped working.
Seemed to make sense; the keys probably relied on my old hostname.
So I deleted all of my keys and all the files in ~/.ssh/ and regenerated them (and updated the authorized_keys on the servers I connect to).




Now, any time I try to ssh, it just hangs without the password prompt, no matter where I am trying to ssh to, even servers where I don't have key-based login set up. There is nothing in .ssh/config.



Moreover, when I 'su -' to root, ssh works perfectly, no problems at all. This only happens on my user account.



Below is some debugging info from ssh




ssh -vv mylogin@myremoteserver.com
OpenSSH_5.2p1, OpenSSL 0.9.8k 25 Mar 2009
debug1: Reading configuration data /Users/myname/.ssh/config

debug1: Reading configuration data /usr/etc/ssh_config
......
debug1: Host 'myremoteserver.com' is known and matches the RSA host key.
debug1: Found key in /Users/myname/.ssh/known_hosts:1
debug2: bits set: 512/1024
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS

debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received


And then it just hangs here.....



Here is the dtruss (like strace but for OSX) output near the end where it hangs:

sudo dtruss ssh -vv mylogin@myremoteserver.com




select(0x4, 0x508200, 0x0, 0x0, 0x0) = 1 0
read(0x3, "$\222\351{L\363\261\25063sN\216\300@q7\203\276b\257\354\337\356\260!{\342\017\271=\222,\245\347t\006\225\257\333;\204\020]\242\005z#\0", 0x2000) = 48 0
write(0x2, "debug2: service_accept: ssh-userauth\r\n\0", 0x26) = 38 0
connect(0x4, 0xBFFFEEA2, 0x6A) = 0 0
write(0x4, "\0", 0x4) = 4 0
write(0x4, "\v5\004\0", 0x1) = 1 0
read(0x4, "\0", 0x4) = -1 Err#4



It seems to be trying to read something and just hangs there. If anyone has any suggestions or ideas, I would be very grateful!
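
One thing worth ruling out (my guess based on the trace, not something stated in the post): the connect() on a fresh file descriptor right after SSH2_MSG_SERVICE_ACCEPT is typically ssh contacting ssh-agent through the SSH_AUTH_SOCK socket, and a wedged agent socket would affect only the one user account, exactly as described.

# Run ssh once with the agent socket blanked out; if this no longer hangs,
# the user's ssh-agent (or the launchd socket it points at) is the culprit.
SSH_AUTH_SOCK= ssh -vv mylogin@myremoteserver.com
echo $SSH_AUTH_SOCK        # compare this value between the user account and root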


Answer



For me, upgrading to Snow Leopard solved the issue. So, I think it was related to a bug in OSX.


sql server - Querying the Active Directory domain of a Windows 2008 host in SQL




There is code in our shop that must query a SQL Server 2008 server, determine the Active Directory domain that the host belongs to, and, in SQL, create Windows login principals based on this information. Under Windows 2003 server, it was possible to query the domain's name through SQL Server like so:



DECLARE @Domain nvarchar(255) 
EXEC master.dbo.xp_regread 'HKEY_LOCAL_MACHINE', 'SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon', N'CachePrimaryDomain',@Domain OUTPUT

SELECT @Domain AS Domain


However, this no longer works in Windows 2008 ('CachePrimaryDomain' registry key doesn't exist anymore). Anyone know if there is a registry key that reliably reports the Active Directory domain a Windows 2008 server belongs to? Better yet, is there an entirely different way of handling this that makes more sense? Thanks.


Answer




First be sure the machine is on a domain and not part of a workgroup.



Then you can find the "Domain" key here:



HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters


You may need to use T-SQL string functions SUBSTRING and CHARINDEX if you are only looking for the left half of the domain before the '.'
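
Putting that together, a sketch of the lookup (assuming xp_regread is still available on the instance, as in the Windows 2003 snippet above):

DECLARE @Domain nvarchar(255)
EXEC master.dbo.xp_regread
    'HKEY_LOCAL_MACHINE',
    'SYSTEM\CurrentControlSet\Services\Tcpip\Parameters',
    N'Domain',
    @Domain OUTPUT

-- Full DNS domain plus the left-most label, per the note about SUBSTRING/CHARINDEX
SELECT @Domain AS FullDnsDomain,
       CASE WHEN CHARINDEX('.', @Domain) > 0
            THEN SUBSTRING(@Domain, 1, CHARINDEX('.', @Domain) - 1)
            ELSE @Domain
       END AS ShortDomain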



If you are looking for another way to do this without the registry, consider a SQLCLR project or potentially a PowerShell script that uses the Domain.GetComputerDomain() .NET method.



Thursday, August 9, 2018

reverse proxy - Simple Nginx proxy_pass (driving me crazy)

Nagios is served by an nginx virtual server named "nagios" with the following configuration:



    # nagios server

server {
server_name nagios;
root /usr/share/nagios/share;
listen 80;
index index.php index.html index.htm;
access_log /etc/nginx/logs/nagios.access.log;
allow 10.10.0.0/16;
allow 127.0.0.1;



location ~ \.php$ {
fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param AUTH_USER "nagios";
fastcgi_param REMOTE_USER "nagios";
fastcgi_index index.php;
include fastcgi.conf;
}

location ~ \.cgi$ {

root /usr/share/nagios/sbin;
rewrite ^/nagios/cgi-bin/(.*)\.cgi /$1.cgi break;
fastcgi_param AUTH_USER "nagios";
fastcgi_param REMOTE_USER "nagios";
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi.conf;
fastcgi_pass unix:/run/fcgiwrap.sock;
}

location /nagios {

alias /usr/share/nagios/share;
}


This works well from within the LAN. For external access, I have a single public address ("newcompany.com"), and I would like to reverse-proxy the entire Nagios site (including the CGI location) to "https://newcompany.com/nagios". I have tried all kinds of rewrites and proxy_pass variants, none of which work. Can somebody show me what the location directive "/nagios" within the secured "newcompany.com" server should look like in order to properly reverse-proxy to the nagios server? Here is the current (broken) version of the public-facing server:



server {
server_name newcompany.com antergos1;
listen 80 default_server;
root /usr;

index index.php index.html index.htm;
access_log logs/default.access.log;
error_log logs/default.error.log;


location ~ \.(php|html|html|cgi)$ {
fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param AUTH_USER $remote_user;
fastcgi_param REMOTE_USER $remote_user;

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index index.php;
include fastcgi.conf;
}

location /nagios {
index index.php index.html index.htm;
proxy_pass http://nagios/;
}
}
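
For what it is worth, a minimal sketch of the /nagios location (my suggestion, not a tested config): it assumes the public host can resolve the name "nagios" (or substitute 127.0.0.1 and keep the Host header). The ^~ modifier matters here, because the regex location ~ \.(php|html|html|cgi)$ above would otherwise grab .php and .cgi requests before they ever reach the proxy.

location ^~ /nagios/ {
    proxy_pass http://nagios/;                    # trailing slashes: /nagios/foo is sent upstream as /foo
    proxy_set_header Host nagios;                 # match the server_name of the nagios vhost
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}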

web server - deleting linux cached ram




I have a webserver that has 8GB of ram and is running a pretty intensive php site (1 site) that does file manipulation, graphing, emailing, forums, you name it. The environment is far from static which leads me to believe that very little could be gained from caching anything in ram since almost every request to the server creates new or updated pages. And a lot of caching is done client side so we have a ton of 304 requests when it comes to images, javascript, css.




Additionally I do have language files that are written to flat files on the server where cached ram definitely is good rather than reading from disk. But there are only a handful of files like this.



In about two weeks I've gone from having 98% free RAM to 4% free RAM. This has occurred during a period in which we also pushed several large svn updates onto the server.



My question is whether my server will be better tuned if I periodically clear my cache (I'm aware of Linus Torvalds' feeling about cache) using the following command:



sync; echo 3 > /proc/sys/vm/drop_caches



Or would I be better off editing the following file:



/proc/sys/vm/swappiness  


If I replace the default value of 60 with 30 I should have much less swapping going on and a lot more reuse of stale cache.



It sure feels good to see all that cache freed up using the first command but I'd be lying to you if I told you this was good for the desktop environment. But what about a web server like I've described above? Thoughts?



EDIT: I'm aware that the system will reclaim memory from the cache as it needs it, but thanks for pointing that out for clarity. Am I imagining things, or does Apache slow down when most of the server's memory is held in cache? Is that a different issue altogether?
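
As an aside (my addition, not part of the original question): on a system of this vintage, free(1) already separates page cache from genuinely used memory, which is the quickest way to sanity-check that "4% free" figure.

free -m
# The "Mem:" line counts buffers/cache as used; the "-/+ buffers/cache:" line shows
# how much memory is actually committed to applications versus immediately reclaimable.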



Answer



Clearing caches will hinder performance, not help. If the RAM was needed for something else it would be used by something else so all you are doing is reducing the cache hit/miss ratio for a while after you've performed the clear.



If the data in cache is very out of date (i.e. it is stuff cached during an unusual operation) it will be replaced with "newer" data as needed without you artificially clearing it.



The only reason for running sync; echo 3 > /proc/sys/vm/drop_caches normally is if you are going to do some I/O performance tests and want a known state to start from (running the cache drop between runs to reduce differences in the results due to the cache being primed differently on each run).



The kernel will sometimes swap a few pages even though there is plenty of RAM it could claim back from cache/buffers, and tweaking the swappiness setting can stop that if you find it to be an issue for your server. You might see a small benefit from this, but are likely to see a temporary performance drop by clearing cache+buffer artificially.
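
If swappiness does turn out to be the knob worth turning, this is standard sysctl usage (nothing specific to this server):

cat /proc/sys/vm/swappiness                                # current value, 60 by default on most distros
sudo sysctl -w vm.swappiness=30                            # change it immediately
echo 'vm.swappiness = 30' | sudo tee -a /etc/sysctl.conf   # persist it across reboots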


Wednesday, August 8, 2018

storage - ZFS memory requirements in a "big files" DAS scenario



I have some old server hardware that I want to build a FreeNAS data server with, but it only has 8 GB of memory and I can't really expand on that.




I plan on putting six 4 TB drives in there, with double parity (RAIDZ2), yielding about 14 TB of actual storage. That is almost double what the "1 GB of RAM per 1 TB of storage" rule entitles the system to.



However, the server will only be accessed by a single system, and the usage pattern will be highly sequential data streams, nothing that can really benefit from extensive caching. No multiple clients, no small random access, nothing running in jails. Just a plain "huge files" server.



Would 8 GB of RAM be able to cut it, or do I need to shell out an extra $1000 to buy a new system?


Answer



8GB of RAM is fine.



I'd urge you to consider an alternative to FreeNAS, since it's not the best or most reliable ZFS implementation. But sure, the amount of RAM you have is okay.
Be sure not to enable deduplication. Compression is fine, though.
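
Checking and setting those two properties is a one-liner each; the pool name "tank" below is a placeholder for whatever FreeNAS names your pool.

zfs get dedup,compression tank
zfs set compression=lz4 tank       # cheap and usually a net win for throughput
zfs set dedup=off tank             # off is already the default; the point is never to turn it on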



How the heck is http://to./ a valid domain name?



Apparently it's a URL shortener. It resolves just fine in Chrome and Firefox. How is this a valid top-level domain?



Update: for the people saying it's browser shenanigans, why is it that: http://com./ does not take me to: http://www.com/?



And, do browsers ever send you a response from some place other than what's actually up in the address bar? Aside from framesets and things like that, I thought browsers tried really hard to send you content only from the site in the address bar, to help guard against phishing.


Answer



Basically, someone has managed to convince the owners of the ccTLD 'to.' (Tonga?) to assign the A record to their own IP address. Quite a coup in the strange old world of URL shorteners.




Normally these top-levels would not have IP addresses assigned via a standard A record, but there is nothing to say that the same could not be done to .uk, .com, .eu, etc.



Strictly speaking there is no reason to have the '.' specified, though it should prevent your browser from trying other combinations like 'to.yourdomain.com' first, and speed up the resolution of the address. Leaving it out might also confuse browsers, since the name then contains no dot at all, but Safari at least seems to work OK with it.
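
You can check this yourself with dig (illustrative; the records may of course have changed since this answer was written):

dig +short to. A      # the A record published directly on the ccTLD
dig +short to. NS     # the ccTLD's authoritative name servers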


ubuntu - Copy lxd containers between hosts



I've installed lxd on two ubuntu hosts that can only communicate via an intermediate server (on which I don't have su privileges). I've created a container on my localhost and now wish to load the container on the remote server.



I consulted the basic.sh test script in the lxc/lxd repo to confirm that I'm using the correct approach (I discovered along the way that I was misunderstanding images vs containers).



I've created a container test on my localhost, installed all the necessary goodies within it, stopped it, published it, and executed the following commands:



lxc image export test



This gives me a tarball 42cf01c53cb9e...83e3c48.tar.gz (shortened here), as described in the documentation (I'm running lxc and lxd versions 2.0.0.beta3). Attempting to import that image on the same host via



lxc image import 42cf01c53cb9e...83e3c48.tar.gz --alias testimage


yields the error:



exit status 2 (tar: metadata.yaml: Not found in archive)



The basic.sh script leads me to believe that I was following the correct route, though (except for the tar.gz vs tar.xz discrepancy). I'm able to export standard images and obtain an .xz file (when I obtain them using lxd-images). For example,



lxd-images import ubuntu --alias ubuntu
lxc image export ubuntu


produces a meta-ubuntu...tar.xz and ubuntu...tar.xz file, which can be imported (on a different server) with




lxc image import meta...tar.xz rootfs ubuntu...tar.xz --alias imported_ubuntu


How do I copy containers between hosts?



Thanks!



Edit: I've investigated further and have published my test container, which creates an image of it. When I export that image, though, I get only the .gz file (without the metadata). If I hijack the metadata from the original image, the import no longer crashes on me, but I can't get the container started; I obviously don't know what I'm doing. Pulling the image over to a second host using lxd's remote: approach (after adding the host using the lxd config) does not result in it appearing in lxc image list.


Answer



The later release (non-beta) of lxd (v2.0) seems to have resolved my issue. The steps, which may be found in the excellent documentation here, are:





  1. Publish an image (without stopping the container) on host A;



    $ lxc publish --force container_name --alias image_name
    Container published with fingerprint: d2fd708361...a125d0d5885

  2. Export the image to a file;



    $ lxc image export image_name 

    Output is in dd2fd708361...a125d0d5885.tar.gz

  3. Copy the file to host B, and import;



    $ lxc image import dd2fd708361...a125d0d5885.tar.gz --alias image_name
    Transferring image: 100%

  4. Launch the container (from the image) on host B;



    $ lxc launch image_name container_name

    Creating container_name
    Starting container_name



In some instances the publish command may lead to a split xz tar-ball --- but both formats are supported. Simply import the meta-data and rootfs components with



    lxc image import <metadata tarball> <rootfs tarball> --alias image_name

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...