Saturday, March 31, 2018

apache 2.2 - Access Denied for PHP Files Only





  • Apache HTTP Server 2.2.21 with VirtualHosts under SuExec

  • PHP 5.3.8 via fcgid

  • Arch Linux 2011.08.19



I am getting 403 Access Denied errors from Apache any time I try to access a PHP file. HTML files and text files work fine. I've played with every conceivable permissions combination on the PHP files I can think of, from 644 to 777. Doesn't change anything.



I also played with the permissions on the FCGI wrapper and parent folder. With o+x (777, 775, 773, 771), I get this in the browser:





Access forbidden!



You don't have permission to access the requested object. It is either
read-protected or not readable by the server.




…and this in the vhost error log:




client denied by server configuration: /srv/www/hostname/fcgid-bin/php-fcgid-wrapper





With o-x (776, 774, 772, 770, or below), I get this in the browser:




Forbidden



You don't have permission to access
/fcgid-bin/php-fcgid-wrapper/index.php on this server.




Additionally, a 403 Forbidden error was encountered while trying to
use an ErrorDocument to handle the request.




…and this in the log:




(13)Permission denied: access to /fcgid-bin/php-fcgid-wrapper/index.php denied





This is really boggling my mind, seeing as my setup was working fine until this started, and I don't know what I could possibly have done to change that. /usr/bin/php-cgi and the wrapper both work fine with the exact same input files when called directly.



Here's my vhost config:




<VirtualHost *:80>
    ServerAdmin admin@hostname.com
    DocumentRoot "/srv/www/hostname/public/"
    ServerName hostname.com
    ServerAlias www.hostname.com
    SuexecUserGroup hostname hostname

    ErrorLog "/srv/www/hostname/logs/error.log"
    LogLevel debug
    CustomLog "/srv/www/hostname/logs/access.log" combined

    <Directory "/srv/www/hostname/public/">
        Order allow,deny
        Allow from all
    </Directory>

    # http://www.linode.com/forums/viewtopic.php?t=2982
    AddHandler php-fcgi .php
    Action php-fcgi /fcgid-bin/php-fcgid-wrapper
    Alias /fcgid-bin/ /srv/www/hostname/fcgid-bin/

    <Location /fcgid-bin/>
        SetHandler fcgid-script
        Options +ExecCGI
    </Location>

    RewriteEngine On
    RewriteRule ^/fcgid-bin/[^/]*$ / [PT]
</VirtualHost>










Answer




<Directory "/srv/www/hostname/public/">
    Order allow,deny
    Allow from all
</Directory>



That doesn't include /srv/www/hostname/fcgid-bin/; assuming there's no Allow applying to it elsewhere in your config, this is the problem. You'll need to Allow access to this location.
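Something along these lines should do it - a sketch in Apache 2.2 syntax (adjust the path if your layout differs):

<Directory "/srv/www/hostname/fcgid-bin/">
    Order allow,deny
    Allow from all
</Directory>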


linux - NTP client on CentOS 5 fails behind Cisco ASA firewall



I have a CentOS server on which I want to set up an NTP client to get accurate time for the server. The server is on a local subnet with NAT behind an ASA 5505 firewall, which acts as the NAT router and in turn connects directly to the internet DSL modem; there is no other router.




The problem is that the NTP client on the CentOS server never manages to synchronize with any NTP server I choose. Setting up the ASA 5505 itself as an NTP client works completely fine, however. Using the same IP addresses on the CentOS server still gives me no sync, even after waiting for hours.



ntp.conf is:




restrict 127.0.0.1
restrict -6 ::1

server 127.127.1.0 # local clock

fudge 127.127.1.0 stratum 10

driftfile /var/lib/ntp/drift

keys /etc/ntp/keys

server 89.109.251.21
server 176.9.47.150
server 63.15.238.180



Using ntpq tells me that none of these servers are being reached (while at least two of them ARE reachable at any time from the ASA, so they are okay):




peers
     remote          refid       st t  when  poll reach   delay   offset  jitter
*LOCAL(0)        .LOCL.          10 l    25    64   377    0.000    0.000   0.001
 89.109.251.21   .INIT.          16 u     -  1024     0    0.000    0.000   0.000
 odin.tuxli.ch   .INIT.          16 u     -  1024     0    0.000    0.000   0.000
 63.15.238.180   .INIT.          16 u     -  1024     0    0.000    0.000   0.000


At first the refid shows .INIT.; it takes about an hour before that changes into something else, but the "reach" counter still stays at 0.



the "as" command gives following:



ind assID  status  conf reach auth  condition  last_event cnt
  1 40263    9614   yes   yes none   sys.peer   reachable   1
  2 40264    8000   yes   yes none     reject
  3 40265    8000   yes   yes none     reject
  4 40266    8000   yes   yes none     reject


This does not change even after 24 hours; it is always "reject".



Querying with "rv" always returns "peer_unfit" and "peer_stratum", which is natural since the stratum stays at 16 all the time.




Sounds like a network problem, yet I do not find the problem.



I have no rule whatsoever in the ASA restricting or allowing the port 123 for NTP. But theoretically I should not need it - for UDP the firewall SHOULD know that the reply packet is related / established so it should let it through, or am I wrong here?



Or is the problem related to some authentication config - does the ntp keys line in the config have anything to do with it?



EDIT:
FIREWALL ASA 5505 CONFIG (shortened):






ASA Version 8.2(5)
!
names
!
interface Ethernet0/0
switchport access vlan 2
!
interface Ethernet0/1

!
interface Ethernet0/2
!
interface Ethernet0/3
!
interface Ethernet0/4
!
interface Ethernet0/5
switchport access vlan 3
!

interface Ethernet0/6
switchport access vlan 3
!
interface Ethernet0/7
switchport access vlan 3
!
interface Vlan1
nameif inside
security-level 100
ip address 10.111.11.251 255.255.255.0

!
interface Vlan2
nameif outside
security-level 0
ip address 192.168.1.2 255.255.255.252
!
interface Vlan3
no forward interface Vlan1
nameif dmz
security-level 50

ip address 192.168.240.254 255.255.255.0
!
!
ftp mode passive
clock timezone CEST 1
clock summer-time CEST recurring last Sun Mar 2:00 last Sun Oct 2:00
object-group network XenServer
network-object host 192.168.240.240
network-object host 192.168.240.241
network-object host 192.168.240.242

access-list MAILSERVER extended permit tcp any any eq www
access-list MAILSERVER extended permit tcp any any eq https
access-list MAILSERVER extended permit tcp any any eq smtp
access-list MAILSERVER extended permit tcp any any eq ftp
access-list MAILSERVER extended permit tcp any any eq ftp-data
access-list MAILSERVER extended permit icmp any any echo-reply
access-list MAILSERVER extended deny ip any any log
access-list NEPLAN extended permit tcp any host 192.168.240.231 eq 10000
access-list NEPLAN extended permit tcp any host 192.168.240.231 eq https
access-list NEPLAN extended permit tcp any host 192.168.240.253 eq 10000

access-list NEPLAN extended permit tcp any host 192.168.240.253 eq https
access-list NEPLAN extended permit tcp any object-group XenServer eq https
access-list NEPLAN extended permit tcp any object-group XenServer eq ssh
access-list NEPLAN extended permit tcp any host 192.168.240.231 eq www
access-list NEPLAN extended permit tcp any host 192.168.240.238 eq www
access-list INTERNET extended permit ip 192.168.240.0 255.255.255.128 any
access-list INTERNET extended permit ip host 192.168.240.136 any
access-list INTERNET extended permit ip host 192.168.240.230 any
access-list INTERNET extended permit ip host 192.168.240.220 any
access-list INTERNET extended permit ip host 192.168.240.221 any

access-list INTERNET extended permit ip host 192.168.240.222 any
access-list INTERNET extended permit ip host 192.168.240.210 any
access-list INTERNET extended permit ip host 192.168.240.211 any
access-list INTERNET extended permit icmp any any echo-reply
access-list INTERNET extended permit ip object-group XenServer any
access-list INTERNET extended deny ip any any log
mtu inside 1500
mtu outside 1500
mtu dmz 1500
icmp unreachable rate-limit 1 burst-size 1

arp timeout 14400
global (outside) 91 interface
global (dmz) 92 interface
nat (inside) 92 10.111.11.0 255.255.255.0
nat (dmz) 91 192.168.240.0 255.255.255.0
static (dmz,outside) tcp interface https 192.168.240.136 https netmask 255.255.255.255
static (dmz,outside) tcp interface smtp 192.168.240.136 smtp netmask 255.255.255.255
static (dmz,outside) tcp interface ftp 192.168.240.136 ftp netmask 255.255.255.255
static (dmz,outside) tcp interface ftp-data 192.168.240.136 ftp-data netmask 255.255.255.255
static (dmz,outside) tcp interface www 192.168.240.136 www netmask 255.255.255.255

access-group NEPLAN in interface inside
access-group MAILSERVER in interface outside
access-group INTERNET in interface dmz
route outside 0.0.0.0 0.0.0.0 192.168.1.1 1

ntp server 89.109.251.21
ntp server 176.9.47.150
ntp server 63.15.238.180

webvpn


!
class-map inspection_default
match default-inspection-traffic
!
!
policy-map type inspect dns preset_dns_map
parameters
message-length maximum client auto
message-length maximum 512

policy-map global_policy
class inspection_default
inspect dns preset_dns_map
inspect ftp
inspect h323 h225
inspect h323 ras
inspect ip-options
inspect netbios
inspect rsh
inspect rtsp

inspect skinny
inspect esmtp
inspect sqlnet
inspect sunrpc
inspect tftp
inspect sip
inspect xdmcp
!
service-policy global_policy global
prompt hostname context

no call-home reporting anonymous
call-home
profile CiscoTAC-1
no active
destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService
destination address email callhome@cisco.com
destination transport-method http
subscribe-to-alert-group diagnostic
subscribe-to-alert-group environment
subscribe-to-alert-group inventory periodic monthly

subscribe-to-alert-group configuration periodic monthly
subscribe-to-alert-group telemetry periodic daily
Cryptochecksum:590d5cd7306d6a21eb875098d3b33661
: end
NEP-ASA-SL20-1#



The servers that have problems with NTP are 192.168.240.240 and 192.168.240.241 (network object group XenServer - this is a XenServer DomU; I already tried with another standalone server and got the same problem, so it doesn't seem related to Xen).


Answer




The solution for this is to add a static NAT entry for UDP packets with destination port 123 (plus open that inbound port specifically):




static (dmz,outside) udp interface ntp 192.168.240.240 ntp netmask 255.255.255.255


Yes, I know, this SHOULD not be necessary. Opening the inbound port 123 alone does not fix it - it really does require the static NAT entry.
This also shows that the UDP packets sent by my CentOS server have both destination and source port set to 123 for NTP.
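For reference, the "open that inbound port" part is just an ACL entry on the outside interface in the same style as the existing MAILSERVER list - a sketch, not the exact line from this config:

access-list MAILSERVER extended permit udp any any eq ntp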



Can anyone shed light on why the firewall refuses to classify this traffic as related traffic? Is this because the source port is a "privileged" port, i.e. < 1024?

I cannot find any documentation or reference anywhere on this.


Thursday, March 29, 2018

linux - Setting IPv4 as preferred protocol over IPv6



I'm using both IPv6 and IPv4 in a LAN network containing Slackware 13.0 boxes. How can I set IPv4 as preferred protocol on the workstations in this network? I want to use IPv6 either explicitly or when there are only AAAA records available. For example, if I try to open http://ipv6.org/ from Firefox, I will always connect via IPv6. The situation is the same with other applications. I tried creating /etc/gai.conf and adding the following to it:



precedence ::ffff:0:0/96  100



This should control the behavior of getaddrinfo(3) at least in Debian, but it didn't help on Slackware.



Any ideas will be appreciated. Thanks in advance!


Answer



According to the man page, inserting a precedence value in gai.conf disables all the other default rules. Try setting all the rules as listed in RFC 3484 (section 10.3):



Prefix          Precedence  Label
::1/128               50      0
::/0                  40      1
2002::/16             30      2
::/96                 20      3
::ffff:0:0/96        100      4
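In gai.conf syntax, that comes down to listing every precedence rule explicitly (labels keep their defaults) - roughly:

precedence ::1/128        50
precedence ::/0           40
precedence 2002::/16      30
precedence ::/96          20
precedence ::ffff:0:0/96 100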

Can a drive connected to a P410 SmartArray in a single drive "RAID 0" be read on a normal HBA?



I've got an older machine (HP DL180 G6) using an HP SmartArray controller (model P410) with 12 drives connected to it. I was not all that interested in the controller's functions, as I wanted to set up a ZFS array, but I found out too late that the controller had no passthrough mode.



As a workaround, I created 12 logical "RAID 0" volumes - one for each drive. This setup has worked well for about 3 years now.




The controller has started to show signs of failure, so I want to take this opportunity to move to a plain old SATA HBA now that the funds are available.



After swapping out the controller for the HBA, will I need to take other steps to make my drives readable, or will it "just work"? (In other words: did the SmartArray do anything to the on-disk data structures that would render the data unreadable to anything else?)


Answer



For a DL180 G6, you have a couple of options:




  • Continue to use your multiple RAID 0 arrays - The problem with this is that a drive failure is essentially a Logical Drive failure, and would probably require a reboot to recognize a replacement disk.


  • Upgrade to a Smart Array P420 or H220 or H240. The P420 can be placed in "HBA mode". The H220 and H240 are HBAs (LSI chipsets). This will give you the raw disk access you're asking for.


  • Screw it and just make a hardware RAID array of the level you desire (RAID 1+0), create a small logical drive for your OS (sda) and another large logical drive that can be consumed by your zpool. This gives you ZFS volume management and flexibility, but hardware RAID, easy drive replacement, monitoring and a flash/battery-backed write cache.





People on the internet will say "no, don't do this... ZFS wants raw disks", but in reality, this maximizes your disk space because you don't need to allocate OS disks. HP hardware RAID is very resilient. Write cache is nice to have. ZFS is really best suited for the flexibility and performance enhancements of lz4 compression and ARC/L2ARC. If you're not in a position to have proper ZIL SLOG devices and a really well architected setup, the ZFS purist raw disk thing isn't as crucial.
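If you go the hardware RAID route, handing the large logical drive to ZFS is straightforward - a sketch, assuming the big logical drive shows up as /dev/sdb:

# single-device pool on the big hardware RAID logical drive
zpool create tank /dev/sdb

# lz4 compression, one of the main ZFS wins mentioned above
zfs set compression=lz4 tank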


Wednesday, March 28, 2018

linux - Troubles installing KDE with pkgsrc

I'm in school right now and I have taken two classes, Networking and Unix Development, that focus on C programming in Unix. Specifically, we have been using NetBSD for the machines we develop on (rather, our programs must work on NetBSD). Our school network has been really finicky as of late and I haven't been able to SSH in. I thought this would be the perfect time to create a NetBSD box of my own, because 1) my programs must compile and run on NetBSD and 2) I really don't know how to manipulate/operate a Unix environment (although I understand the internal workings).



With that being said, I set out on getting NetBSD working today since it's my off day. I have learned a ton about operating NetBSD/Unix (I guess I never really knew much), but I am stuck trying to install KDE right now. I would like to say that my Google searches were successful/resourceful, but I am afraid they weren't. I don't know if what I was searching for was too vague or not the right thing, but here I am looking for help.



I am using pkgsrc to install the binary of KDE 3.5.10. When I use pkg_add kde-3.5.10, it starts doing whatever it is supposed to do (I don't know the optional command args to make pkg_add report on what it's doing). It seems to be working for ~5 minutes, but then it fails and gives the following errors:




  • pkg_add: Read error for lib/liblcms.so.1.0.18: Premature end of gzip compressed data: Input/output error


  • original MD5 checksum failed, not deleting: /usr/pkg/lib/liblcms.so.1.0.18

  • pkg_add: Couldn't remove /usr/pkg/lib/pkgconfig/lcms.pc

  • ...

  • pkg_add: Can't install dependency lcms>=1.12nb2

  • ...

  • pkg_add: 1 package addition failed



I really have no idea what those errors mean. Any error shown as "..." is the same error as above but with a different path/dependency (let me know if you want to see them all).




The steps I took to the point to where I could actually try and install KDE were:




  • Install NetBSD 5.0.1

  • Use dhcpcd with one of my network cards

  • Setting the appropriate environment variables and getting pkgsrc via CVS

  • Setting the appropriate environment variable for the location of binary files

  • Executing pkg_add
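For the record, the pkg_add step is driven by the PKG_PATH environment variable, and -v makes it report what it's doing - a sketch, assuming a NetBSD 5.0.1/i386 binary package mirror (adjust arch/version to match your install):

# point pkg_add at a binary package repository (hypothetical mirror path)
export PKG_PATH="ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/5.0.1/All"
# -v = verbose progress; a "Premature end of gzip compressed data" error
# usually means a truncated download, so re-running retries the fetch
pkg_add -v kde-3.5.10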




I'm sorry if this is a trivial error and something that I should be able to figure out on my own, but today was the first day I attempted to install Unix/Linux ever. All the programming assignments I had done up to this point just required me to SSH into a server, use an editor (Emacs) to write my code, and compile it with a Makefile. Any help, tips, pointers would be GREATLY appreciated. :D



Thanks again for your help.



On a side note I didn't know if I ought to post this on ServerFault or SuperUser. If these kinds of questions are more geared towards SuperUser, please let me know and I will post future questions there.

linux - What is the different usages for sites-available vs the conf.d directory for nginx

I have some experience using Linux but none using nginx. I have been tasked with researching load-balancing options for an application server.



I have used apt-get to install nginx and all seems fine.



I have a couple of questions.



What is the difference between the sites-available folder and the conf.d folder. Both of those folders were INCLUDED in the default configuration setup for nginx. Tutorials use both. What are they for and what is the best practice?



What is the sites-enabled folder used for? How do I use it?
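For context: on Debian/Ubuntu-packaged nginx, sites-enabled conventionally contains symlinks into sites-available, so enabling a site looks roughly like this:

# publish a site by symlinking its config, then test and reload
ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
nginx -t && service nginx reload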




The default configuration references a www-data user. Do I have to create that user? How do I give that user optimal permissions for running nginx?

ubuntu - Notification for Disk-Space Shortage Issue in Server




I have an Ubuntu server and am facing a frequent space issue: logs are eating up a lot of disk space. I want a check applied so that whenever there is less than 5 GB of free disk space, I get an e-mail notification and can delete the logs. How can I configure this? Do I need any other application?


Answer



On my Ubuntu server, I have the following script in /etc/cron.daily that alerts me by email whenever /dev/sdc (my /srv partition) has less than 200MB of free space.



#!/bin/sh
# Alert when $PARTITION has less than $ALERT $UNIT of free space.
ALERT=200
UNIT=M
PARTITION=/dev/sdc

df -B$UNIT | grep "^$PARTITION" |
while read partition size used free perc mnt ; do
    # strip the unit suffix so we can compare numerically
    free_space=$(echo $free | tr -d $UNIT)
    if [ $free_space -le $ALERT ]; then
        echo "Partition $partition ($mnt) running out of space ($free) on $(hostname) as on $(date)" |
            mail -s "Alert: $mnt almost out of disk space on $(hostname) - $free" root
    fi
done


It was initially taken and adapted from this blog post on nixCraft. Save this into a file in /etc/cron.daily as root, modify the first 3 lines to suit your server and needs, and make the file executable. If you want it executed more often, put it in /etc/cron.hourly instead, or create a regular cron job.
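Such a cron job could look like this - a sketch, assuming the script was saved as /usr/local/sbin/diskalert (a hypothetical path):

# /etc/cron.d/diskalert: run the check every 15 minutes
*/15 * * * * root /usr/local/sbin/diskalert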




Note that you will need something providing the mail command (on Ubuntu, for example, the mailutils or bsd-mailx packages) plus a working MTA to actually deliver the message.


Tuesday, March 27, 2018

python - Apache stops serving request

I am running Django with mod_wsgi and everything works fine most of the time, but at times I observe that Apache suddenly stops serving requests. The monitoring service on the server says httpd is still running, but requests take too long and fail with "premature end of script headers".



I am running this setup on RHEL with python 2.6




wsgi directives



WSGISocketPrefix /var/run/wsgi
WSGIScriptAlias / /srv/bin/bootstrap.wsgi
WSGIDaemonProcess bstapp user=django-user group=django-user
WSGIProcessGroup bstapp
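Not part of the original config, but for reference: WSGIDaemonProcess accepts tuning options that often matter for this kind of stall - a sketch with illustrative values:

WSGIDaemonProcess bstapp user=django-user group=django-user \
    processes=4 threads=15 maximum-requests=1000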

Monday, March 26, 2018

apache 2.2 - Tomcat cookies not working via my ProxyPass VirtualHost



I'm having some issues with getting cookies to work when using a ProxyPass to redirect traffic on port 80 to a web-application hosted via Tomcat.



My motivation for enabling cookies is to get rid of the "jsessionid=" parameter that is appended to the URLs.



I've enabled cookies in context.xml in META-INF/ for my web application.
When I access the web application via http://url:8080/webapp it works as expected: the jsessionid parameter is not visible in the URL; instead it's stored in a cookie.

When accessing my website via an Apache2 virtual host, the cookies don't seem to work, because now "jsessionid" is being appended to the URLs. How can I solve this issue?




Here's my VHost configuration:





<VirtualHost *:80>
    ServerName somedomain.no
    ServerAlias www.somedomain.no

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    ProxyPreserveHost Off
    ProxyPass / http://localhost:8080/webapp/
    ProxyPassReverse / http://localhost:8080/webapp/

    ErrorLog /var/log/apache2/somedomain.no.error.log
    CustomLog /var/log/apache2/somedomain.no.access.log combined
</VirtualHost>




EDIT: The cookie is actually being set when I visit from http://somedomain.no, but the cookie has its Path set to "/webapp".


Answer



I figured it out. As noted in the edit above, the cookie was being set with its Path restricted to /webapp, while the proxied site is served from /, so the browser never sent the cookie back.



Add this to the VHost configuration:




ProxyPassReverseCookiePath /webapp /


ubuntu - Zone transfer between PowerDNS and Bind9



I have a problem when trying to transfer a full zone from a PowerDNS server to a Bind9 one. The weird part is that there are several zones on the PowerDNS server, which serves as a hidden master (with a MySQL backend), but only one zone is failing to be transferred to the Bind9 server.



The two servers are running Ubuntu 16.04 LTS, with:





  • Bind9 version = 9.10.3.dfsg.P4-8ubuntu1

  • PowerDNS version = 4.0.0~alpha2-3build1



The Bind9 slave zone is configured like this:



zone "example.net" {
type slave;
file "/var/lib/bind/slaves/db.example.net";
masters {

10.0.0.1;
};
};


And the DNS zone from PowerDNS is:



% sudo pdnsutil show-zone example.net
This is a Master zone
Last SOA serial number we notified: 2016050801 == 2016050801 (serial in the database)

Zone is not actively secured
Metadata items: None
No keys for zone 'example.net.'.

% sudo pdnsutil list-zone example.net
example.net. 10800 IN MX 10 mx1.example.org.
example.net. 10800 IN MX 50 mx2.example.org.
example.net. 10800 IN NS ns1.example.org.
example.net. 10800 IN NS ns2.example.org.
example.net. 86400 IN SOA ns1.example.org. hostmaster.example.org. 2016050801 28800 7200 604800 86400

...


Note the difference between .net and .org in this output.
And here is the PowerDNS output in the log while trying to provide the zone to Bind.



May  9 00:44:14 hdns01 pdns[40494]: AXFR of domain 'example.net.' initiated by 10.0.0.2
May 9 00:44:14 hdns01 pdns[40494]: AXFR of domain 'example.net.' allowed: client IP 10.0.0.2 is in allow-axfr-ips
May 9 00:44:14 hdns01 pdns[40494]: AXFR of domain 'example.net.' failed: not authoritative



And the corresponding logs given by Bind.



May  9 00:44:14 rdns01 named[32973]: zone example.net/IN: refresh: unexpected rcode (REFUSED) from master 10.0.0.1#53 (source 0.0.0.0#0)
May 9 00:44:14 rdns01 named[32973]: zone example.net/IN: Transfer started.
May 9 00:44:14 rdns01 named[32973]: transfer of 'example.net/IN' from 10.0.0.1#53: connected using 10.0.0.2#55376
May 9 00:44:14 rdns01 named[32973]: transfer of 'example.net/IN' from 10.0.0.1#53: failed while receiving responses: NOTAUTH
May 9 00:44:14 rdns01 named[32973]: transfer of 'example.net/IN' from 10.0.0.1#53: Transfer status: NOTAUTH
May 9 00:44:14 rdns01 named[32973]: transfer of 'example.net/IN' from 10.0.0.1#53: Transfer completed: 0 messages, 0 records, 0 bytes, 0.004 secs (0 bytes/sec)



So Bind9 is saying that the server is not authoritative. That's weird. So let's use dig to make things a little bit clearer.



% dig @10.0.0.1 example.net. SOA          

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @10.0.0.1 example.net. SOA
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47002

;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1680
;; QUESTION SECTION:
;example.net. IN SOA

;; ANSWER SECTION:
example.net. 86400 IN SOA ns1.example.org. hostmaster.example.org. 2016050801 28800 7200 604800 86400


;; Query time: 2 msec
;; SERVER: 10.0.0.1#53(10.0.0.1)
;; WHEN: Mon May 09 00:53:51 CEST 2016
;; MSG SIZE rcvd: 104


Seems pretty authoritative to me. So after that I tried to do an AXFR with dig. And surprise, it works...



% dig -t axfr example.net @10.0.0.1


; <<>> DiG 9.10.3-P4-Ubuntu <<>> -t axfr example.net @10.0.0.1
;; global options: +cmd
example.net. 86400 IN SOA ns1.example.org. hostmaster.example.org. 2016050801 28800 7200 604800 86400
...
;; Query time: 73 msec
;; SERVER: 10.0.0.1#53(10.0.0.1)
;; WHEN: Mon May 09 00:56:42 CEST 2016
;; XFR size: 58 records (messages 3, bytes 1952)



I don't know where to look anymore.



Thanks for your help.



UPDATE:



Logs from a packet capture:



1   0.000000    10.0.0.2    10.0.0.1    DNS 82  Standard query 0xe0dd SOA example.net OPT
2   0.002902    10.0.0.1    10.0.0.2    DNS 82  Standard query response 0xe0dd Refused SOA example.net OPT
6   0.004506    10.0.0.2    10.0.0.1    DNS 97  Standard query 0x205c AXFR example.net
8   0.006432    10.0.0.1    10.0.0.2    DNS 97  Standard query response 0x205c Not authoritative AXFR example.net


PowerDNS logs from a successful manual AXFR:



May  9 08:19:51 hdns01 pdns[40494]: AXFR of domain 'example.net.' initiated by 10.0.0.2
May 9 08:19:51 hdns01 pdns[40494]: AXFR of domain 'example.net.' allowed: client IP 10.0.0.2 is in allow-axfr-ips
May 9 08:19:52 hdns01 pdns[40494]: AXFR of domain 'example.net.' to 10.0.0.2 finished



PowerDNS config file:



#################################
# allow-axfr-ips Allow zonetransfers only to these subnets
#
allow-axfr-ips=127.0.0.0/8,::1,10.0.0.2

#################################

# also-notify When notifying a domain, also notify these nameservers
#
also-notify=10.20.1.78,10.0.0.2

#################################
# daemon Operate as a daemon
#
daemon=yes

#################################

# include-dir Include *.conf files from this directory
#
# include-dir=
include-dir=/etc/powerdns/pdns.d

#################################
# launch Which backends to launch and order to query them in
#
# launch=
launch=


#################################
# master Act as a master
#
master=yes

#################################
# setgid If set, change group id to this gid for more security
#
setgid=pdns


#################################
# setuid If set, change user id to this uid for more security
#
setuid=pdns


And the MySQL backend config part inside the /etc/powerdns/pdns.d/ directory.



# MySQL Configuration

#
# Launch gmysql backend
launch+=gmysql

# gmysql parameters
gmysql-host=127.0.0.1
gmysql-port=
gmysql-dbname=pdns
gmysql-user=MYUSER
gmysql-password=MYPASSWORD

gmysql-dnssec=yes
# gmysql-socket=

Answer



At my request the poster came into our #powerdns IRC channel, where we quickly figured out that there was actually a typo between the domain names on master and slave - hidden by the obfuscation that was done to ask the question here.
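For anyone hitting something similar: a quick way to spot such a name mismatch is to list the exact zone names on both ends and compare them - a sketch:

# on the PowerDNS master
sudo pdnsutil list-all-zones

# on the Bind9 slave
grep -h 'zone "' /etc/bind/named.conf*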


Sunday, March 25, 2018

linux - EXT4 "No space left on device (28)" incorrect



I have been through the other questions/answers regarding inode usage, mounting issues and others but none of those questions seem to apply...




df -h




/dev/sdd1 931G 100G 785G 12% /media/teradisk




df -ih





/dev/sdd1 59M 12M 47M 21% /media/teradisk




Basically, I have an EXT4-formatted drive 1 TB in size, and am writing around 12 million (12,201,106) files into one directory. I can't find any documentation on a files-per-directory limit for EXT4, but the filesystem reports no space left.



Oddly, I can still create new files on the drive and in the target folder, but when doing a large cp/rsync, the calls to mkstemp and rename report no space left on device.




rsync: mkstemp "/media/teradisk/files/f.xml.No79k5" failed: No space left on device (28)




rsync: rename "/media/teradisk/files/f.xml.No79k5" -> "files/f.xml": No space left on device (28)




I know storing this many files in one directory isn't advised for a ton of reasons, but if I can help it I don't wish to split them up.



Inode and space usage for tmpfs, the device and everything else looks fine. Any ideas of the cause?


Answer



The XFS filesystem would be a more supportable (long-term) solution for what you're trying to do now. Large file-count directories are not a problem for XFS. Of course, fixing this at the application level would also be helpful...
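If you do migrate, creating the new filesystem is the easy part - a sketch, assuming the same device and mount point (this destroys the existing data, so copy it off first):

# wipe and reformat as XFS, then remount
mkfs.xfs -f /dev/sdd1
mount /dev/sdd1 /media/teradisk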


Saturday, March 24, 2018

windows - Elevated command prompt without prompting




I have the following situation on a set of servers (Windows Server 2008 R2):




  • Domain User A (local administrator) launches cmd.exe on server 1. This is launched as an elevated command prompt (seen in the title Administrator:*).

  • Domain User B (local administrator) launches cmd.exe on server 1. This is launched as an elevated command prompt.

  • Domain User A (local administrator) launches cmd.exe on server 2. This is NOT launched as an elevated command prompt.

  • Domain User B (local administrator) launches cmd.exe on server 2. This is NOT launched as an elevated command prompt.


    • Builtin administrator launches cmd.exe on server 2. This is launched as an elevated command prompt.





All cmd.exe launches are without prompting.
For all command prompt shortcuts the advanced setting for "Run as Administrator" is switched off.



There seems to be a different setting that causes the same effect as the "Run as Administrator" checkbox. I cannot find the setting however (neither in the system nor online). The machines are part of the same domain (and domain policy). It seems to be a machine setting, since for the normal users the behavior is equal. Only the builtin\administrator works differently.



What is the setting?




ps. The setting "Run as administrator" works fine to mimic the behavior, but I would like to understand the situation as I have it.



Things checked (based on comments):




  • What I also see is that if you launch the run dialog it already displays the message "This task will be created with administrative privileges"

  • I have also checked this AppCompatFlags setting, but it has not been set: https://superuser.com/a/697002/1030237


Answer



While there might be multiple possible causes for command windows being administrative by default, if Explorer is running elevated that is a sure sign that User Account Control has been disabled.




On Windows 7 or Server 2008 R2 this may be because the UAC slider has been set to "Never Notify".



On Windows 10 or Server 2016 the only way to disable UAC that I am aware of is by setting the local security policy option "UAC: Run all administrators in Admin Approval Mode" to Disabled.
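One quick way to check is to query the EnableLUA value, which backs the "Admin Approval Mode" policy (0 means UAC is disabled):

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA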


Linux howto/tutorial/help sites?




I've found HowtoForge very helpful. Are there any other similar sites that don't have tutorials too old to work nowadays?


Answer



I find that the Ubuntu community is very active, and hence their howtos are generally quite recent.



http://ubuntuforums.org would be the direct route



but I find that just adding the word "Ubuntu" to my Google search will often put those results at the top and find exactly what I'm looking for without having to re-search their site.


linux - Swap with a huge amount of ram available

I have an old, legacy server with an odd problem with swap.




  • Linux version: Red Hat Enterprise Linux Server release 5.6 (Tikanga)

  • Kernel version: 2.6.18-238.el5

  • Server is virtual.

  • Server has 2 virtual socket.



I know the swap partition is too small and I'm going to add a swap file, but a few hours after reboot the situation is this:




free -m
             total       used       free     shared    buffers     cached
Mem:         15922      15806        116          0        313      13345
-/+ buffers/cache:       2147      13775
Swap:         2047       2042          4


Oracle Database is installed, but almost unused. I'd like to understand why the memory distribution goes this way. I mean, 13345 MB is cached, which effectively means free. Why fill swap?




A previous sysadmin configured swappiness to: 3.

Huge pages are not configured.



I saw some similar posts, but with no solution I could follow. An answer here: linux redhat 5.4 - swap while memory is still available talks about NUMA, so I dug a bit (I'm a DBA, not a sysadmin, so sorry if I miss something).



grep NUMA=y /boot/config-`uname -r`
CONFIG_NUMA=y
CONFIG_K8_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_ACPI_NUMA=y


dmesg | grep -i numa
NUMA: Using 63 for the hash shift.
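Two more NUMA-related things worth checking here (a sketch; numactl may need to be installed):

# if this prints 1, the kernel prefers reclaiming/swapping local memory
# over allocating from a remote NUMA node
cat /proc/sys/vm/zone_reclaim_mode

# per-node memory sizes and free memory
numactl --hardware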


So, the question is: how can I understand why this machine is swapping?



Update
With vmstat 2:




procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b    swpd    free    buff     cache    si   so    bi    bo    in    cs us sy id wa st
 4  0  2090852  122224  324056  13679328  320    0   498  1898  1088  3555 32 10 56  2  0
 1  0  2090724  139740  324068  13680984   64    0    76   932  1028  3534  7  2 90  2  0
 0  0  2090724  132416  324068  13681436    0    0    16   240  1016  3401  3  1 96  1  0
 4  0  2090660  116916  324084  13683404    0    0    72  1396  1070  3617 11  9 80  1  0
 0  0  2090420  126544  324084  13687008  128    0   188  1872  1068  3436 35  8 56  2  0


Update 3




ipcs -ma

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x61a4d538 5210113 oracle 660 4096 0
0xba8cafdc 5242883 oracle 660 4096 0
0x16621634 5308420 oracle 660 4096 0
0xc15f3dac 5373957 oracle 660 4096 0


------ Semaphore Arrays --------
key semid owner perms nsems
0x24690d60 98304 oracle 660 125
0x24690d61 131073 oracle 660 125
0x24690d62 163842 oracle 660 125
0x24690d63 196611 oracle 660 125
0x24690d64 229380 oracle 660 125
0x24690d65 262149 oracle 660 125
0x24690d66 294918 oracle 660 125
0x24690d67 327687 oracle 660 125

0x24690d68 360456 oracle 660 125
0x6285541c 491529 oracle 660 125
0x6285541d 524298 oracle 660 125
0x6285541e 557067 oracle 660 125
0x6285541f 589836 oracle 660 125
0x62855420 622605 oracle 660 125
0x62855421 655374 oracle 660 125
0x62855422 688143 oracle 660 125
0x62855423 720912 oracle 660 125
0x62855424 753681 oracle 660 125

0xaee7ccbc 884754 oracle 660 125
0xaee7ccbd 917523 oracle 660 125
0xaee7ccbe 950292 oracle 660 125
0xaee7ccbf 983061 oracle 660 125
0xaee7ccc0 1015830 oracle 660 125
0xaee7ccc1 1048599 oracle 660 125
0xaee7ccc2 1081368 oracle 660 125
0xaee7ccc3 1114137 oracle 660 125
0xaee7ccc4 1146906 oracle 660 125
0xfb4a455c 1277979 oracle 660 125

0xfb4a455d 1310748 oracle 660 125
0xfb4a455e 1343517 oracle 660 125
0xfb4a455f 1376286 oracle 660 125
0xfb4a4560 1409055 oracle 660 125
0xfb4a4561 1441824 oracle 660 125
0xfb4a4562 1474593 oracle 660 125
0xfb4a4563 1507362 oracle 660 125
0xfb4a4564 1540131 oracle 660 125

------ Message Queues --------

key msqid owner perms used-bytes messages

Friday, March 23, 2018

linux - How do you make it obvious you are on a production system?



A few of us at my company have root access on production servers. We are looking for a good way to make it exceedingly clear when we have ssh'd in.




A few ideas we have had are:




  • Bright red prompt

  • Answer a riddle before getting a shell

  • Type a random word before getting a shell



What are some techniques you guys use to differentiate production systems?



Answer



The red prompt is a good idea, which I also use.
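For example, a red root prompt can be set in root's .bashrc - a sketch using ANSI escapes:

# bold red prompt for root; \[ \] keep readline's line-length math correct
PS1='\[\e[1;31m\][\u@\h \W]\$\[\e[0m\] '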



Another trick is to put a large ASCII-art warning in the /etc/motd file.
Having something like this greet you when you log in should get your attention:




[large figlet-style ASCII banner reading "THIS IS A PRODUCTION MACHINE"]



You could generate such a warning on this website or you could use the figlet
command.






Like Nicholas Smith suggested in the comments, you could spice things up with some dragons or other animals using the cowsay command.






Instead of using the /etc/motd file, you could also call cowsay or figlet in the .profile file.
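A minimal sketch of that approach (assuming figlet is installed):

# at the end of root's .profile on the production box
figlet "PROD: $(hostname)"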



linux - DocumentRoot takes one argument, Root directory of the document tree error

OK, so I have Xubuntu Linux on my PC, and the site is not reachable via the external IP from either the internal or the external network - not even on the PC itself. It just says 404 Not Found. What can I do?



this is the 000-default.conf, I also tried /var/www/html:




<VirtualHost *:80>
    Servername
    ServerAdmin webmaster@localhost
    DocumentRoot "/var/www" # try quoting
    DirectoryIndex index.html # just in case
    ErrorLog /var/log/apache2/error.log # fully specified
    CustomLog /var/log/apache2/acces.log # fully specified
    <Directory "/var/www"> # quoted
        AllowOverride All
        Require all granted # required in Apache 2.4
    </Directory>
</VirtualHost>






this is the error.log:



    [Mon Apr 09 17:00:10.218418 2018] [mpm_event:notice] [pid 19936:tid 139990600419200] AH00489: Apache/2.4.18 (Ubuntu) configured -- resuming normal operations
[Mon Apr 09 17:00:10.218550 2018] [core:notice] [pid 19936:tid 139990600419200] AH00094: Command line: '/usr/sbin/apache2'
[Mon Apr 09 17:08:21.130698 2018] [mpm_event:notice] [pid 19936:tid 139990600419200] AH00491: caught SIGTERM, shutting down
[Mon Apr 09 17:08:22.235195 2018] [mpm_event:notice] [pid 20519:tid 139868456769408] AH00489: Apache/2.4.18 (Ubuntu) configured -- resuming normal operations
[Mon Apr 09 17:08:22.235323 2018] [core:notice] [pid 20519:tid 139868456769408] AH00094: Command line: '/usr/sbin/apache2'

[Mon Apr 09 17:10:02.485055 2018] [mpm_event:notice] [pid 20519:tid 139868456769408] AH00491: caught SIGTERM, shutting down
[Mon Apr 09 17:12:24.738049 2018] [mpm_event:notice] [pid 1729:tid 140330444584832] AH00489: Apache/2.4.18 (Ubuntu) configured -- resuming normal operations
[Mon Apr 09 17:12:24.794357 2018] [core:notice] [pid 1729:tid 140330444584832] AH00094: Command line: '/usr/sbin/apache2'
[Mon Apr 09 17:25:02.357488 2018] [mpm_event:notice] [pid 1729:tid 140330444584832] AH00491: caught SIGTERM, shutting down
[Mon Apr 09 17:25:03.449348 2018] [mpm_event:notice] [pid 3730:tid 140047470462848] AH00489: Apache/2.4.18 (Ubuntu) configured -- resuming normal operations
[Mon Apr 09 17:25:03.449459 2018] [core:notice] [pid 3730:tid 140047470462848] AH00094: Command line: '/usr/sbin/apache2'
[Mon Apr 09 17:50:02.495314 2018] [mpm_event:notice] [pid 3730:tid 140047470462848] AH00493: SIGUSR1 received. Doing graceful restart
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
[Mon Apr 09 17:50:02.500105 2018] [mpm_event:notice] [pid 3730:tid 140047470462848] AH00489: Apache/2.4.18 (Ubuntu) configured -- resuming normal operations
[Mon Apr 09 17:50:02.500112 2018] [core:notice] [pid 3730:tid 140047470462848] AH00094: Command line: '/usr/sbin/apache2'

[Mon Apr 09 18:03:51.249126 2018] [mpm_event:notice] [pid 3730:tid 140047470462848] AH00491: caught SIGTERM, shutting down
[Mon Apr 09 18:03:52.335467 2018] [mpm_event:notice] [pid 4815:tid 140214087751552] AH00489: Apache/2.4.18 (Ubuntu) configured -- resuming normal operations
[Mon Apr 09 18:03:52.335607 2018] [core:notice] [pid 4815:tid 140214087751552] AH00094: Command line: '/usr/sbin/apache2'
[Mon Apr 09 18:03:59.131805 2018] [mpm_event:notice] [pid 4815:tid 140214087751552] AH00491: caught SIGTERM, shutting down
[Mon Apr 09 18:04:00.216384 2018] [mpm_event:notice] [pid 4918:tid 140288963467136] AH00489: Apache/2.4.18 (Ubuntu) configured -- resuming normal operations
[Mon Apr 09 18:04:00.216562 2018] [core:notice] [pid 4918:tid 140288963467136] AH00094: Command line: '/usr/sbin/apache2'
[Mon Apr 09 18:23:59.494573 2018] [mpm_event:notice] [pid 4918:tid 140288963467136] AH00491: caught SIGTERM, shutting down
[Mon Apr 09 18:24:00.582049 2018] [mpm_event:notice] [pid 5302:tid 140040924424064] AH00489: Apache/2.4.18 (Ubuntu) configured -- resuming normal operations
[Mon Apr 09 18:24:00.582221 2018] [core:notice] [pid 5302:tid 140040924424064] AH00094: Command line: '/usr/sbin/apache2'
[Mon Apr 09 18:39:00.113290 2018] [mpm_event:notice] [pid 5302:tid 140040924424064] AH00491: caught SIGTERM, shutting down



With systemctl status apache2 I get:



        ● apache2.service - LSB: Apache2 web server
Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: active (running) since ma 2018-04-09 19:30:35 EEST; 6min ago
Docs: man:systemd-sysv-generator(8)

Process: 1509 ExecStart=/etc/init.d/apache2 start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/apache2.service
├─1672 /usr/sbin/apache2 -k start
├─1675 /usr/sbin/apache2 -k start
└─1676 /usr/sbin/apache2 -k start

huhti 09 19:30:33 ossasecurity-desktop systemd[1]: Starting LSB: Apache2 web server...
huhti 09 19:30:33 ossasecurity-desktop apache2[1509]: * Starting Apache httpd web server apache2
huhti 09 19:30:34 ossasecurity-desktop apache2[1509]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
huhti 09 19:30:35 ossasecurity-desktop apache2[1509]: *

huhti 09 19:30:35 ossasecurity-desktop systemd[1]: Started LSB: Apache2 web server.


What other info do you need? I have to mention that there is an index.html in the /var/www/ directory!
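As a side note, Apache's own config check pinpoints the line behind errors like "DocumentRoot takes one argument" (worth knowing: Apache config has no end-of-line comments, so trailing text like "# try quoting" after a directive's argument is read as extra arguments):

apache2ctl configtest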

.htaccess - rewrite rule does not rewrite url as expected

I have a problem with a CMS website that normally generates readable URLs. Sometimes navigation links are shown as www.domain.com/22, which results in an error, instead of www.domain.com/contact. I have not found a solution for this yet, but the page works if the URL is www.domain.com/index.php?id=22.



Therefore, I'm trying to rewrite www.domain.com/22 to www.domain.com/index.php?id=22 and I have used this rewrite rule:



RewriteRule ^([1-9][0-9]*)$ index.php?id=$1 [NC]



I tested it using http://htaccess.madewithlove.be and here it shows the correct result, but on the website no rewrite is happening.



Begin: Rewrite stuff





RewriteEngine On




RewriteRule ^(typo3|t3lib|tslib|fileadmin|typo3conf|typo3temp|uploads|showpic.php|favicon.ico)/ - [L]



RewriteRule ^typo3$ typo3/index_re.php [L]



RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l



RewriteRule .* index.php [L]




Options +FollowSymLinks



RewriteCond %{HTTP_HOST} ^domain.dk$ [OR]
RewriteCond %{HTTP_HOST} ^www.domain-alias.com$
RewriteRule (.*) http://www.domain.com/$1 [R=301,L]



RewriteRule ^([1-9][0-9]*)$ index.php?id=$1 [NC]



End: Rewrite stuff
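One possibility worth checking (an observation, not tested against this site): the earlier catch-all RewriteRule .* index.php [L] stops processing before the numeric rule is ever reached, so the id rule may need to come first - roughly:

RewriteRule ^([1-9][0-9]*)$ index.php?id=$1 [L,NC]

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule .* index.php [L]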

Thursday, March 22, 2018

windows - Mapped Network Shares Not Showing Up After Successful Batch Execution

Good evening,



I'm running into a strange issue on a Windows 7 machine. I'm working on deploying OpenVPN to our mobile workforce and they've requested the ability to have their home drive and another share automatically mapped when they log into the VPN.



So far, I'm using the following lines in a batch file:




net use O: \\172.23.6.127\shares /persistent:no
net use U: \\172.23.6.127\%USERNAME% /persistent:no


The command prompt opens as expected and the script executes successfully; however, I cannot see the network drives listed. If I immediately attempt to run the script again manually, I get an error 53 telling me that the name is already in use.



As the image below shows, if I disconnect the VPN tunnel, then reconnect without running the script automatically (i.e., by removing the UP script from the config folder), and then run the batch file manually, the shares are mapped and they show up.



Image showing successful manual share mapping




Any assistance would be really appreciated, thanks.

windows - roaming profile vs folder redirection




I can't seem to find a consensus on what the differences are between the two. Roaming Profiles, Folder redirection or... both is one example. The top answer doesn't answer the question as to what data isn't shared if not using roaming profiles.




  • What is the difference between roaming profile and folder redirection?


  • What data "roams" with roaming profiles that doesn't roam with folder redirection?


  • Why is it a bad idea to redirect AppData? What are the consequences of not redirecting this folder should a user log onto the domain with a different machine?




Thanks for any insight.


Answer





What is the difference between roaming profile and folder redirection?




At the most basic level, a Windows user profile is the entirety of the directories and files within the directories that contain user-specific data (a very basic way to look at it is the profile is anything and everything contained within the c:\users\username directory) as well as the various registry entries that contain user specific settings within the HKCU registry hive.



A pure roaming profiles implementation will COPY the data for the entire user profile from a fileshare to a system at user logon, and copy it back to the fileshare at logoff. In cases where a user who has roaming profiles enabled logs in to multiple systems and makes conflicting changes to the same file in their profile, the last logoff/write wins. As users start saving things to their My Documents folder, saving pictures off their camera, and uploading their iTunes libraries (these things never happen in an enterprise environment, right? :), the size of the user profile data being copied back and forth can cause increasingly long delays at both user login and user logoff.




What data "roams" with roaming profiles that doesn't roam with folder redirection?





Folder redirection provides a mechanism to point specific folders (My Docs/AppData/Pictures/etc.) within the user profile at a fileshare. If a user logs in to multiple systems and has folder redirection applied on all of them, his My Documents on every system points back to the same fileshare location regardless of which machine he logs into. Note that badly written applications that hard-code a path (as opposed to reading the registry or querying Windows for the proper location) may NOT work correctly with folder redirection.



Data that "roams" with roaming profiles would include such things like Outlook profile Settings, Desktop wallpaper settings, screen saver settings, explorer view settings, installed/default printers, etc..). Folder redirection would not account for these things as it does not account for any data contained in folders that cannot be redirected (appdata\local, etc), or account for any settings contained in the HKCU registry hive.




Why is it a bad idea to redirect AppData? What are the consequences of not redirecting this folder should a user log onto the domain with a different machine?





First, a note, that only the Appdata\Roaming folder is redirected. The Appdata\Local and Appdata\LocalLow folders are not redirected.



Redirecting the AppData folder is a mixed bag, and the user experience depends largely on the applications being used. In a redirected folder solution, all the I/O to the Appdata\Roaming folder can cause performance issues (impacting file servers, network, and the system being used), as it must read/write that data over the network to the fileshare. In addition, if an application is used on multiple systems and requires a lock on the same file, folder redirection may not work, as there is only a single copy on the file server that can be accessed and locked. All that being said, start with application profiling; unless there are serious indications of possible performance issues, I usually recommend starting with redirecting AppData and watching for problems. There are some tools (Citrix Profile Manager and other profile management tools) that provide methods to be more granular about which folders are copied vs redirected within AppData.


Wednesday, March 21, 2018

nat - How to access server using public ip when in the network itself?

I've asked this question and even searched around but didn't get a useful answer for me.



Basically what I'm doing is: I have a webserver on internal IP 192.168.0.100, port 80. So if I'm in the network, it is accessible if I type 192.168.0.100/myportal/login.php.

OK, no problem so far. Now, I would like internal network users to access it via our public static IP, i.e. 219.92.xx.xxx/myportal/login.php.

If I'm outside of this network, no problem, I can access it. But how do I make it so that I can use the public IP while I'm in the internal network?

Right now it's not practical, because I have to use two different addresses depending on my network situation.

Why do I want this? Simple: because I want to buy a domain name and use it with the public IP where I'm hosting my own webserver. Since I can't access the public IP from inside, I won't be able to use the domain that is later assigned to that IP.

For example, I won't be able to access it via www.vportal.com/myportal/login.php if I'm inside the network. So to conclude, it's not practical, am I right? I would need to use the internal IP when I'm inside, and can only use the domain when I'm outside.

Now, I'm certain there is a way around this, but I really hope someone can give me some idea or solution, because I am NOT a network person (though I do know all the basics).

FYI, my setup is a simple modem-and-router setup; one server is using a wired connection. My router is a D-Link DIR-615. What can I do with what I have now? Is it possible?

I've read about NAT loopback, but I know it's not possible for my situation. I really hope somebody can help and explain it to me in layman's terms. I really want to learn this.

Thanks.

logging - What is the correct method to log the original domain in the apache log files after a redirect 301?




Thanks for reading my question.



I'm using Debian 5 + Apache2.
In apache2.conf I have multiple VirtualHost sections: a main domain www.mymain.com in one virtual host section, and multiple other domains in another. The latter has a redirect command as follows:
redirect 301 / http://www.mymain.com



All redirects work fine.
I would like to log the original domain used, but have tried every format string at http://httpd.apache.org/docs/2.3/mod/mod_log_config.html#formats to no avail.



I have used all the standard logs and my own custom (all in) log for both VirtualHost sections and the required data doesn't appear anywhere in any of the log files.



i.e. the user clicks a link to www.myotherdomain.com

and they are successfully 301-redirected by Apache to www.mymain.com.

I'd like to see

blah blah www.myotherdomain.com blah blah

in the log.



any input appreciated.



TIPS for folks with similar issue.



Most browsers cache 301 redirects. A brute-force solution is to test using a portable install of Firefox (for example): do tests, then delete and reinstall the application.
Or just keep a clone of the original install around anytime you want to do a test.




beware using *.domain.tld in serveralias commands in apache2.conf



UseCanonicalName Off applies even if all your servernames are virtual



remember that a 301 redirect generates 2 entirely separate log entries because it's 2 entirely separate events ie



browser requests www.myolddomain.tld -> server A returns 301 "command"
and
browser requests www.newdomain.tld (as listed in 301 cache) -> server B returns result



in theory server B may have no knowledge that a 301 occurred at all



Of course if server A and B are the same it would be very useful to be able to add the info to server B's logs.


Answer



It sounds like you want to use a CustomLog format (for the virtual host that sends the redirect) that includes %V, and to make sure that you've set UseCanonicalName Off.
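A sketch of such a format (the nickname is arbitrary):

# %V logs the hostname the client asked for when UseCanonicalName is Off
LogFormat "%V %h %l %u %t \"%r\" %>s %b" vhost_common
CustomLog /var/log/apache2/access.log vhost_common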



Log format documentation is here.


linux - Configuring ENUM on BIND 9.8.1

In Bind 9.8.1 on Ubuntu Server 12.04, I configured ENUM (Electronic Number Mapping).



1) Create named.conf.enum file in /etc/bind named.conf.enum content:




zone "adras.af" { type master; file "/etc/bind/db.adras.af"; };





2) Include the named.conf.enum in named.conf file




include "/etc/bind/named.conf.enum";




3) Make the db.adras.af file:





$TTL 86400
e164.arpa.  IN  SOA  servera.adras.af root.adrasnew.af. (
                     2004011522 ; Serial
                     21600      ; Refresh
                     3600       ; Retry
                     604800     ; Expire
                     3600 )     ; Minimum TTL

adras.af.           IN  NS  servera.adras.af.
;
servera.adras.af.   IN  A   192.168.1.2

0.9.8.7.6.5.4.3.2.1.e164.arpa.   IN NAPTR 10 100 "u" "E2U+sip"  "!^.*$!sip:info@adrasnew.af!" .
0.9.8.7.6.5.4.3.2.1.e164.arpa.      NAPTR 10 101 "u" "E2U+h323" "!^.*$!h323:info@adrasnew.af!" .
0.9.8.7.6.5.4.3.2.1.e164.arpa.      NAPTR 10 102 "u" "E2U+msg"  "!^.*$!mailto:info@adrasnew.af!" .
8.1.2.7.5.9.3.3.1.6.1.e164.arpa.    NAPTR 100 10 "U" "SIP+E2U"  "!^.*$!sip:16133957218@adrasnew.af!" .




I configured ENUM in Bind 9 as above, and Bind 9 restarted successfully. When we test our configuration:




dig @0.9.8.7.6.5.4.3.2.1.e164.arpa -t NAPTR



The server displays this message:



dig: couldn't get address for '@0.9.8.7.6.5.4.3.2.1.e164.arpa':not found
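Worth noting (an observation on the dig syntax, not a full answer): the @ argument names the server to query, not the record to look up, which is exactly what the error says. The intended query was presumably something like the following, using the server's address from the zone file:

dig @192.168.1.2 0.9.8.7.6.5.4.3.2.1.e164.arpa -t NAPTR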

lsi - RAID 6 array rebuild with hotspare




I have an 11-disk (D0 - D10) RAID6 array with a hotspare (D11) - a 12 disk array total.



Disk D3 failed earlier today and the hotspare is now being rebuilt (presumably turning itself into a data copy of D3). I want to replace the dead D3 with a working disk. The controller is running LSI MegaRAID Storage Manager (v13.04.03.01).



1) Should I wait for the rebuild to complete before replacing D3? I don't want to confuse things, and it appears to be rebuilding fine.



2) Right now, D11 is still designated as the 'hot spare' when I check the physical drive's info page on the MegaRAID GUI. When I replace D3, should it become the new hotspare and D11 become part of the main array (this would seem most sensible) OR, once D3 is replaced will the [rebuilt] info from D11 be moved BACK onto D3 and then D11 be wiped and retain its role as the hot spare?



I realize question 2 is probably entirely dependent on how the manager decided to do things, but I just don't know if this will be automatic or if I need to actively tell it what to do. As far as I can see there's no option to designate a disk as the hot spare.




Any thoughts or advice would be greatly appreciated.


Answer



I strongly suggest first letting the controller rebuild the array, then removing the broken disk. After that, when you replace D3 with a working disk, the array should reconstruct D3, restoring D11 to its original hotspare role.



Anyway, please consult your controller manual to be sure what to expect.
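If you want to watch the rebuild from the CLI rather than the GUI, MegaCli can report progress - a sketch; the enclosure:slot IDs ([252:3] here) are placeholders for your own:

# show rebuild progress for the physical drive in enclosure 252, slot 3
MegaCli64 -PDRbld -ShowProg -PhysDrv [252:3] -aALL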


Monday, March 19, 2018

Active Directory Corrupted In Windows Small Business Server 2011 - Server No Longer Domain Controller

I have a rather bad problem with my Windows SBS 2011. First of all, I'll give the background to what caused the problem. I was setting up a new small business server network. I had my job about finished. The server was working great, all the workstations had joined the domain, and I had all my applications and data moved to the server. I thought I was done. But then it happened. I tried adding one more computer to the domain, and to my dismay the computer name was set to the same name as the server.




Apparently when a computer joins a domain with the same name as another machine that is already on the domain, it overrides the first one. For normal workstations, this is not a big deal, you just delete the computer from AD and rejoin the original computer to the domain. However, for a server that is the domain controller it is a whole different story. Since the server got overridden in AD, it is no longer the domain controller. The DNS service is not working and all kinds of other services are failing also.



So the question is, what are my options? I am embarrassed to admit it, but since this is a new server, one thing I did not have set up yet was backup. So I have no backups to work from. I am worried that things are broken enough that I might need to do a reinstall. However, I already have several days' worth of configuration in this server, so I would obviously prefer a fix that saves me from reinstalling. All the server components are there and installed correctly, but they are misconfigured (I think it is basically just Active Directory). So I have the feeling that if I did the right thing I could solve the issue without a reinstall. Is there any way to rerun the component that performs the initial configuration, to "convert" the base Windows Server 2008 R2 install into an SBS? In other words, in the Program Files folder there is an application called SBSsetup.exe; is there any way to rerun this and have it reconfigure AD, etc. to work with SBS?



Any insight will be greatly appreciated. Thanks.

mod rewrite - Custom Apache ErrorDocument with proxy balancer & RewriteEngine



I'm running into various problems trying to add custom ErrorDocuments to my server.



I'm using proxy balancer to share the load between two instances of Zope and some simple rewrite rules to map my domain to the local zope instances. I'm pretty sure Zope isn't the problem, but have mentioned it to explain what the balancer redirects to.



I've tried a number of suggestions, but the 'closest' I can get is included below and results in the error:





"Firefox has detected that the server
is redirecting the request for this
address in a way that will never
complete."




Other variations result in:





"The server is temporarily unable to
service your request due to
maintenance downtime or capacity
problems. Please try again later.



Additionally, a 503 Service
Temporarily Unavailable error was
encountered while trying to use an
ErrorDocument to handle the request."





If I include a simple

ErrorDocument 503 "Hello"

it renders fine.



What am I doing wrong? I'm worried that it may be something to do with the balancer/rewrite getting 'in the way' of the custom errors, or that my DocumentRoot is incorrectly set.




The rest of this configuration runs fine without the custom errors.




<VirtualHost XXX.XXX.XXX.XXX:80>
    ServerAdmin webmaster@localhost
    ServerName sub.domain.com

    <Proxy balancer://domain_dev>
        BalancerMember http://XXX.XXX.XXX.XXX:81
        BalancerMember http://XXX.XXX.XXX.XXX:82
    </Proxy>

    RewriteEngine On
    RewriteRule ^(.*)$ balancer://domain_dev$1 [P,L]

    <Proxy *>
        Order allow,deny
        Allow from all
    </Proxy>
</VirtualHost>

Listen 81
Listen 82

<VirtualHost XXX.XXX.XXX.XXX:81>
    CustomLog /var/log/apache2/domain-dev-1.log combined
    ErrorLog /var/log/apache2/domain-dev-error-1.log

    ErrorDocument 503 http://sub.domain.com/custom-errors/customerror.html
    Alias /customerrors /var/www/custom-errors/

    RewriteEngine On
    RewriteRule ^(.*)$ http://localhost:6080/++skin++SandboxSkin/site/++vh++http:sub.domain.com:80/++$1 [P,L]
    RewriteLog /var/log/apache2/domain-dev-rewrite-1.log
    RewriteLogLevel 0

    <Proxy *>
        Order allow,deny
        Allow from all
    </Proxy>
</VirtualHost>

<VirtualHost XXX.XXX.XXX.XXX:82>
    CustomLog /var/log/apache2/domain-dev-2.log combined
    ErrorLog /var/log/apache2/domain-dev-error-2.log

    RewriteEngine On
    RewriteRule ^(.*)$ http://localhost:6081/++skin++SandboxSkin/site/++vh++http:sub.domain.com:80/++$1 [P,L]
    RewriteLog /var/log/apache2/domain-dev-rewrite-2.log
    RewriteLogLevel 0

    <Proxy *>
        Order allow,deny
        Allow from all
    </Proxy>
</VirtualHost>


Answer



Setting LogLevel to debug, I found:



[Mon Sep 14 19:26:06 2009] [debug] proxy_util.c(2015): proxy: connected /++skin++SandboxSkin/site/++vh++http:sub.domain.com:80/++/custom-errors/customerror.html to localhost:6080


Which confirmed that the proxy was trying to serve the error from the offline server's location.




Adding:



RewriteCond %{REQUEST_URI} !^/custom-errors/


stopped the proxy from rewriting (and thus proxying) any request under the /custom-errors/ path.



After that, the following simplified ErrorDocument rule worked fine:



DocumentRoot "/var/www"

ErrorDocument 503 "/custom-errors/customerror.html"

redhat - User account automatically filling up with dead.letter file

I have one user account, on a server with about 400 accounts, that is filling up automatically: the dead.letter file in the user's home directory grows until the account is full (about 10 - 40 MB per day). The user is using Microsoft Outlook to send and receive mail.



What could be causing this, and how can I keep it from happening?



Right now I have an emergency cron job to delete the file, but I would like a "real" solution.
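For the record, the stopgap is roughly the following cron entry, and a commonly suggested workaround is to point the file at /dev/null; both only hide the symptom (USER_NAME is a placeholder):

# Emergency cron job: truncate the file hourly
0 * * * * cat /dev/null > /home/USER_NAME/dead.letter

# Workaround: discard failed-send copies entirely
ln -sf /dev/null /home/USER_NAME/dead.letter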



Edit: The server version is Red Hat Enterprise Linux ES release 4 (Nahant Update 4)




Edit 2: It seems to be mainly spam; I see various mailer headers (from PHP to Outlook Express), and a frequently appearing header is USER_NAME@vsap.no.loop



Update: I have asked the hosting provider of this dedicated server to look into the problem as well, as their control panel could also be a cause of the problem.

Saturday, March 17, 2018

hp proliant - Why does my supposedly hardware-based RAID appear as a "fake raid"




I have a low-end server for a SOHO set-up: a Gen8 HP Microserver. It has a built-in Dynamic Smart Array B120i (RAID) Controller. When booting the server up before any OS was installed I was able to open the HP Smart Array configuration utility and create a logical drive spanning my 4 physical disks with RAID 1+0.



After some messing around I was able to install CentOS 7 and had a look at the disks with lsblk:



NAME            MAJ:MIN RM   SIZE RO TYPE
sda               8:0    0 698.7G  0 disk
├─sda1            8:1    0 698.7G  0 part
└─ddf1_Storage  253:2    0 698.5G  0 dmraid
sdd               8:48   0 698.7G  0 disk
├─sdd1            8:49   0 698.7G  0 part
└─ddf1_Storage  253:2    0 698.5G  0 dmraid
sde               8:64   0 698.7G  0 disk
├─sde1            8:65   0 698.7G  0 part
└─ddf1_Storage  253:2    0 698.5G  0 dmraid
sdf               8:80   0 698.7G  0 disk
├─sdf1            8:81   0 698.7G  0 part
└─ddf1_Storage  253:2    0 698.5G  0 dmraid



So this looks like software-based RAID, aka "fake RAID", rather than the one disk I had expected to see.



Can someone explain what, if anything, the built-in RAID controller is actually doing for me?


Answer



This is a Dynamic Smart Array controller. It's not a fully-featured HP RAID controller, but it's better than a pure "fakeraid" solution, provided you're using a compatible OS: the RAID logic lives in the "hpvsa" driver under Linux rather than in dedicated hardware.




The drivers are kernel specific ... The B120i is a chipset AHCI SATA controller that requires a kernel specific proprietary driver for software derived RAID functionality. Many refer to these chipset SATA controllers as 'fake raid' ... the HPE recommendation is to use the 'mdadm' software RAID feature included with the OS.

-- HPE Support Forum




To install a supported OS onto this RAID controller, you have to slipstream a driver into the installation process. Here's the current driver as of this writing.



The main thing is that there's an upgrade path to a proper HP Smart Array controller, and the on-disk format allows that migration.







However, the output you're showing indicates that you haven't created a real logical drive.



Here's output from lsblk on a Dynamic Smart Array. /dev/sda is the block device that is represented by the logical drive.



[root@Tudor_Ranch ~]# lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  40G  0 disk
├─sda1   8:1    0 500M  0 part /boot
├─sda2   8:2    0  10G  0 part /
├─sda3   8:3    0  10G  0 part /usr
├─sda4   8:4    0   1K  0 part
├─sda5   8:5    0   6G  0 part /var
├─sda6   8:6    0   4G  0 part [SWAP]
└─sda7   8:7    0   1G  0 part /tmp


Similarly, this is evident in the hpssacli command output:




=> ctrl all show config

Smart Array B320i RAID in Slot 0 (Embedded)

   Internal Drive Cage at Port 1I, Box 1, OK
   Internal Drive Cage at Port 2I, Box 0, OK

   array A (Solid State SATA, Unused Space: 176704 MB)

      logicaldrive 1 (40.0 GB, RAID 1, OK)
      logicaldrive 2 (60.0 GB, RAID 1, OK)

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, Solid State SATA, 200 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, Solid State SATA, 200 GB, OK)


I think you just installed your OS onto the Linux software RAID layer instead: the ddf1_Storage entries in your lsblk output are dmraid ("fake raid") devices, not a B120i logical drive.
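A quick way to confirm this, assuming the hpssacli and dmraid tools are available (a sketch, not an HP-documented procedure):

# A working B120i array shows up here as a "logicaldrive" entry
hpssacli ctrl all show config

# dmraid lists raw disks claimed through on-disk DDF metadata ("fake raid" sets)
dmraid -r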



Also see:




HP DL380e Linux not seeing drive array for installation


Friday, March 16, 2018

iis 7 - Force users to access SSL site using specific host header



So I am running IIS7 with one SSL site on it. I have a few different domains and subdomains that all point to my external IP. Over HTTP they are all directed to their respective sites using host headers, but whenever someone uses HTTPS on any of the domains, they all end up at my SSL site.



I only want people who type in https://sub.domain.com (for example) to end up at my secure site and for anything else to just not go there, it can throw an error or direct to the http version, it doesn't matter.



Is there a way of getting IIS7 to check the host header and throw an error if it doesn't match my specific subdomain?



Thanks,




Michael


Answer



You should use URL Rewrite for that; see "Canonicalization and Host Names" in:
http://blogs.msdn.com/b/carlosag/archive/2008/09/02/iis7urlrewriteseo.aspx
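A minimal sketch of such a rule for the root web.config, assuming sub.domain.com stands in for your real host name (this aborts any HTTPS request whose Host header doesn't match; adjust the pattern, or swap AbortRequest for a Redirect action, as preferred):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="EnforceHttpsHostHeader" stopProcessing="true">
        <match url=".*" />
        <conditions>
          <!-- Only HTTPS requests whose Host header is not the expected name -->
          <add input="{HTTPS}" pattern="on" />
          <add input="{HTTP_HOST}" pattern="^sub\.domain\.com$" negate="true" />
        </conditions>
        <action type="AbortRequest" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>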


domain name system - When does a Windows client stop using a secondary DNS server and revert back to primary



I am trying to get a solid understanding of exactly how a Windows client works with DNS. For example, lets say that I configure a network adapter with a primary and a secondary DNS server.



How long does it take to fail over and start using the secondary DNS server if the primary DNS server fails?



What is required for it to start using the primary DNS server once the primary DNS server comes back online? Will this eventually happen automatically?


Answer



If a query to your primary DNS server results in something analogous to host-not-reachable then the client resolver will automatically try the same query against the next DNS server, and so on until it either successfully contacts a DNS server or runs out of servers to try. So essentially it takes as long to fail over to the secondary server as it does to time out a connection to the first.




I believe the Windows resolver will then continue to use whichever server answered for a period of 15 minutes (or until the TCP/IP stack is reset, e.g. by a reboot), and then will start over again at the top of the list.
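If you need to influence that window, there is a DNS Client registry value that is said to control it; a sketch (verify the value name against the KB for your Windows version before relying on it):

:: ServerPriorityTimeLimit (REG_DWORD, seconds) governs how long the
:: re-sorted server order is kept before the list is retried from the top
reg query "HKLM\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters" /v ServerPriorityTimeLimit

:: Flushing the cache and restarting the DNS Client service also resets the ordering
ipconfig /flushdns
net stop dnscache && net start dnscache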



Note that this failover only happens when a server is not reachable, not when the queried record is not resolvable. If the primary server can be reached but responds with a no-such-host answer then the failover does not occur.



Here's a KB article that mentions the 15-minute behavior for XP.


Thursday, March 15, 2018

domain name system - Email sent from server with rDNS & SPF being blocked by Hotmail



I have been unable to send email to users on Hotmail or other Microsoft email servers for some time. It's been a major headache trying to find out why and how to fix the issue.



The emails being blocked are sent from my domain, canadaka.net. I use Google Apps to host the regular email service for my @canadaka.net addresses. I can send email from my desktop or Gmail to a Hotmail address without any problem, but any email sent from my server on behalf of canadaka.net is blocked, not even arriving in the junk folder.



The IP that the emails are being sent from is the same IP that my site is hosted on: 66.199.162.177

This IP has been mine only since August 2010; I had a different IP for the previous 3-4 years.



This IP is not on any credible spam lists
http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177



The one list my IP is on, spamcannibal.org, seems to be out of my control: it says "no reverse DNS, MX host should have rDNS - RFC1912 2.1". But since I use Google for my email hosting, I don't have control over setting up rDNS for all the MX records.



I do have reverse DNS set up for my IP, though; it resolves to "mail.canadaka.net".



I have signed up for SNDS and was approved. My IP shows "All of the specified IPs have normal status."




Sender Score: 100
https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14



My McAfee threat level seems fine.



I have an SPF record set up as a TXT record. I am currently using xname.org for my DNS; they don't have a dedicated field for SPF, but their FAQ says to add the SPF data as a TXT entry:
v=spf1 a include:_spf.google.com ~all



Some "SPF checking" tools ive used detect that my domain has a valid SPF, but others don't. Like Microsoft's SPF wizard, i think this is because its specifically looking for an SPF record and not in the TXT. "No SPF Record Found. A and MX Records Available".




From my home I can run "nslookup -type=TXT canadaka.net" and it returns:




Server:   google-public-dns-a.google.com
Address:  8.8.8.8

Non-authoritative answer:
canadaka.net    text = "v=spf1 a include:_spf.google.com ~all"




One strange thing I found is that I'm unable to ping hotmail.com or msn.com, or do a "telnet mail.hotmail.com 25". I am able to ping gmail.com and many other domains I tried. I tried changing my DNS servers to Google's Public DNS and did an ipconfig /flushdns, but that had no effect. I am, however, able to connect with telnet to mx1.hotmail.com.



This is what the email headers look like when I send to a Google email server and I receive the email with no troubles. You can see that SPF is passing.




Delivered-To: XXXX@dirtbiker.ca
Received: by 10.146.168.12 with SMTP id q12cs91243yae;
        Sun, 27 Feb 2011 18:01:49 -0800 (PST)
Received: by 10.43.48.7 with SMTP id uu7mr4292541icb.68.1298858509242;
        Sun, 27 Feb 2011 18:01:49 -0800 (PST)
Return-Path:
Received: from canadaka.net ([66.199.162.177])
        by mx.google.com with ESMTP id uh9si8493137icb.127.2011.02.27.18.01.45;
        Sun, 27 Feb 2011 18:01:48 -0800 (PST)
Received-SPF: pass (google.com: domain of postmaster@canadaka.net designates 66.199.162.177 as permitted sender) client-ip=66.199.162.177;
Authentication-Results: mx.google.com; spf=pass (google.com: domain of postmaster@canadaka.net designates 66.199.162.177 as permitted sender) smtp.mail=postmaster@canadaka.net
Message-Id: <4d6b020c.c92c2b0a.4603.6378SMTPIN_ADDED@mx.google.com>
Received: from coruscant ([127.0.0.1]:12907) by canadaka.net
        with [XMail 1.27 ESMTP Server] id for from ;
        Sun, 27 Feb 2011 18:01:29 -0800
Date: Sun, 27 Feb 2011 18:01:29 -0800
Subject: Test
To: XXXX@dirtbiker.ca
From: XXXX
Reply-To: XXXX@canadaka.net
X-Mailer: PHP/5.2.13




I can send to gmail and other email services fine. I don't know what i'm doing wrong!



UPDATE 1




I have been removed from Hotmail's IP block and am now able to send emails to Hotmail, but they are all going directly to the JUNK folder.



UPDATE 2



I used telnet to send a test message to port25.com's verifier; it seems my SPF is not being detected:

Result: neutral (SPF-Result: None)
canadaka.net. SPF (no records)
canadaka.net. TXT (no records)




I did have a nameserver as my 4th option that doesn't have the TXT records, since it doesn't support them. So I removed it from the list and instead added wtfdns.com, which does support TXT, as my 4th and 5th nameservers.



Now that the new nameservers have taken effect, I pass the SPF check along with Microsoft's Sender ID wizard. It seems some lookups were using the 4th nameserver and skipping the first 3?
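A quick way to catch that kind of inconsistency is to query every authoritative nameserver directly instead of relying on whichever one your resolver picks; a small sketch using dig:

# Each listed nameserver should return the same TXT record
for ns in $(dig +short NS canadaka.net); do
    echo "== $ns"
    dig +short TXT canadaka.net @"$ns"
done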



MAIL SENT THROUGH GOOGLE SMTP:




canadaka.net. SPF (no records)
canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all"
canadaka.net. 86400 IN A 66.199.162.177
_spf.google.com. SPF (no records)
_spf.google.com. 300 IN TXT "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"





MAIL SENT FROM TELNET ON SERVER




canadaka.net. SPF (no records)
canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all"
canadaka.net. 86400 IN A 66.199.162.177



Answer



My issue was fixed by contacting Microsoft: they had to manually remove a block on my IP. Once the block was removed, emails from my server were reaching Hotmail, but going directly to junk mail.




I have created a separate question to try and resolve the junk mail problem:
Hotmail marking messages as junk


Wednesday, March 14, 2018

Is it possible to redirect a URL from HTTP to HTTPS on the same port for IIS?



I have a website on a custom port number on the server.
Currently it serves users over HTTP.



I was wondering if it is possible to redirect from HTTP to HTTPS while still reusing the same port number in IIS.
E.g. http://www.example.com:8000 becomes https://www.example.com:8000




Some of the information I have seen says that I need to use a second binding,
e.g. bind port 80 for HTTP and 443 for HTTPS, and then redirect port 80.


Answer



I don't know what version of IIS you are using, but if it is IIS 7/7.5 then IIS URL Rewrite will do just fine.



Here is a rule to copy into your root web.config



URL REWRITE



http://www.iis.net/download/urlrewrite
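(The rule itself was lost from this page's formatting. The standard URL Rewrite HTTP-to-HTTPS rule looks like the sketch below; as a caveat, a single IIS binding speaks either HTTP or HTTPS, so the two protocols still need their own bindings even when both use non-standard ports:)

<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect to HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" />
        </conditions>
        <!-- {HTTP_HOST} keeps the host (and any port) the client used -->
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>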




postfix - Mail Server on CentOS - Can the domain name and hostname be the same?



Don't know how silly this question sounds, but can I make the mail server hostname the same as the domain name?




Example:



Typical Mailserver setup:



hostname: mail.example.com
domain name: example.com


My Mailserver Requirement




hostname: example.com
domain name: example.com


I am busy setting up Postfix on my CentOS server and am editing the /etc/postfix/main.cf file.



Thanks


Answer



Yes, you can, but with such a setup you would effectively have $mydomain = $myhostname = $myorigin. In that case you should be more careful with delivery-related options such as mydestination.
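A minimal sketch of the relevant main.cf lines for that layout, using the example.com placeholder from the question:

# /etc/postfix/main.cf
myhostname = example.com
mydomain = example.com
myorigin = $mydomain
# With hostname == domain, keep this list free of duplicates
mydestination = $myhostname, localhost.$mydomain, localhost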


Tuesday, March 13, 2018

linux - Repairing CentOS files permission

I've screwed up my CentOS 6 server. I ran chmod on a few symlinks and changed permissions on important files, such as those in /bin, and now every command, even clear, says "Permission denied". The server is now unable to boot.



How do I restore permissions?
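One commonly suggested recovery, sketched here on the assumption that you can boot the CentOS install media in rescue mode and chroot into the installed system: have RPM reapply the owners and permissions recorded in its database (files not owned by any package are not covered and must be fixed by hand):

# From the rescue shell, with the root filesystem mounted at /mnt/sysimage
chroot /mnt/sysimage

# Restore recorded owners/groups, then recorded permissions, for every package
rpm --setugids -a
rpm --setperms -a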

.htaccess - http:// to https:// Redirection with page reload in Apache 2 (HD Web Hosting)

I'm trying to check whether a certain page is being accessed over a secure connection and, if not, redirect the traffic to the proper https:// address.



I have tried doing this in several ways in the .htaccess file.



I was able to rewrite http://foosite.com/contact.shtml and http://www.foosite.com/contact.shtml addresses as https://www.foosite.com/contact.shtml with:




RewriteEngine on
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_URI} (contact.*)
RewriteRule (.*) https://www.foosite.com%{REQUEST_URI}


...but this only rewrites the URL; it does not reload the page, so there is no encryption and there is a warning/caution sign next to the HTTPS in the address bar (as there should be).



I need to reload that page so that the encryption is enforced.




Based on what I am looking at I was thinking something like:



RedirectCond %{HTTPS} off
RedirectCond %{REQUEST_URI} (contact.*)
Redirect 301 https://www.foosite.com/contact.shtml


...but this is based purely on conjecture after looking at some posts here and imagining what might work, and conjecture is not a good thing to count on. I don't even know if there is a RedirectCond directive.




So, as I am not familiar with .htaccess at all and am just looking to secure a single form, what would work to redirect a page to its HTTPS address when it isn't loaded securely?
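For what it's worth, a minimal sketch of the usual approach: the R flag is what turns the rewrite into an external redirect, so the browser actually re-requests the page over HTTPS instead of Apache just rewriting it internally:

RewriteEngine on
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_URI} ^/contact
# R=301 sends a permanent external redirect; L stops further rule processing
RewriteRule (.*) https://www.foosite.com%{REQUEST_URI} [R=301,L]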

Monday, March 12, 2018

Why can't I access a webserver through a load balancer on my local network?

When I try to use curl (or wget, lynx, etc) to connect from a server on our local network to our website, which is on a local server behind a CoyotePoint load balancer, curl fails. Ping does not have this problem.



When I curl directly to any of the servers behind that load balancer (from and to the same local network), I also have no problem. It doesn't matter whether the local server I'm curling from is behind the load balancer or not.




Does anyone have any idea why I can't access my webserver through the load balancer on my local network?



Edit: additional information:



The error message from curl:



*   Trying [ip address]... connected
* Connected to [web address] ([ip address]) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.9 libssh2/1.2.4
> Host: [web address]
> Accept: */*
>
* Closing connection #0
* Failure when receiving data from the peer
curl: (56) Failure when receiving data from the peer


The IP address is the correct external address, not the internal network IP.




I am attempting to curl using the web address, not an IP address. That web address resolves to the correct IP address to connect to the site (externally) through our load balancer.



As I understand our networking (I'm obviously no expert at this) all of our servers and our load balancer are all on the same network.
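One diagnostic worth trying (not from the original post): request the site from one backend directly while presenting the public Host header, which separates vhost problems from balancer or hairpin-NAT problems. INTERNAL_SERVER_IP is a placeholder:

# Talk to a backend directly, but ask for the public site by name
curl -v -H "Host: www.example.com" http://INTERNAL_SERVER_IP/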

linux - Where do these mysterious DNS lookups come from and why are they slow?

I have recently obtained a new dedicated server which I'm now setting up. It's running 64-bit Debian 6.0. I have cloned a fairly large git repository (177 MB including working files) onto this server. Switching to a different branch is very, very slow: on my laptop it takes 1-2 seconds, while on this server it can take half a minute. After some investigation it turned out to be some kind of DNS timeout. Here's an exhibit from strace -s 128 git checkout release:



stat("/etc/resolv.conf", {st_mode=S_IFREG|0644, st_size=132, ...}) = 0
socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 5
connect(5, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("213.133.99.99")}, 16) = 0
poll([{fd=5, events=POLLOUT}], 1, 0) = 1 ([{fd=5, revents=POLLOUT}])

sendto(5, "\235\333\1\0\0\1\0\0\0\0\0\0\35Debian-60-squeeze-64-minimal\n\17happyponies\3com\0\0\1\0\1", 67, MSG_NOSIGNAL, NULL, 0) = 67
poll([{fd=5, events=POLLIN}], 1, 5000) = 0 (Timeout)


This snippet repeats several times per 'git checkout' call.



My server's hostname was originally Debian-60-squeeze-64-minimal. I had changed it to shell.happyponies.com by running hostname shell.happyponies.com, editing /etc/hostname and rebooting the server.



I don't understand the DNS protocol, but it looks like Git is trying to look up the IP for Debian-60-squeeze-64-minimal as well as happyponies.com. Why does Debian-60-squeeze-64-minimal come back even though I've already changed the hostname? Why does Git perform DNS lookups at all? Why are these lookups so slow? I've already verified that all DNS servers in /etc/resolv.conf are up and responding, yet Git's own lookups time out.




Changing the host name back to Debian-60-squeeze-64-minimal seems to fix the slowness.



Basically I just want to fix whatever DNS issues my server has, because I'm sure they will cause more problems than just slowing down git checkout. But I'm not sure what the problem exactly is or what these symptoms mean.
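A plausible fix, sketched on the assumption that the new hostname simply isn't resolvable locally (the address below is a placeholder): map it in /etc/hosts so the libc resolver never has to go out to DNS for the machine's own name:

# /etc/hosts
127.0.0.1      localhost
203.0.113.10   shell.happyponies.com shell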

installation - HP Smart Array B110i SATA RAID Controller drivers crash HP DL320 G6



I'm trying to install Windows Server 2012 R2 on an HP DL320 G6 with a Smart Array B110i SATA RAID controller. During the install it asks me to load the driver for the RAID controller; I load the drivers from cp022401 (I also tried cp020545) and the machine promptly crashes with the HP BIOS frowny face.



That particular server was running Hyper-V 2012 with no problem, so I know that the hardware is fine, I'm just replacing the old hard drives with newer/bigger ones.



Do you have any idea how to install the B110i drivers successfully on Win2012R2?



Here's the driver that makes the server crash

Here's the message I get when the machine crashes


Answer



Ok, finally figured out what was wrong! Thanks to Chopper3 for pointing me in the right direction.



The problem was that the firmware was too old for the Windows Server 2012 R2 B110i driver.



I had to upgrade the firmware using the Smart Update Firmware DVD (ProLiant Support Pack v10.10) first; then I was able to run the SPP to get the latest versions. After all that, the B110i driver worked fine.


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...