Monday, October 31, 2016

networking - IPv6 and IPv4 mapped address

I have a box with one private IPv4 address (192.168.0.X) and some IPv6 addresses (let's call one of them Y::Z).
I have an application listening on 192.168.0.X on port 1234 and an application that wants to connect to that service but use Y::Z as its source address.



So I thought about using the ::ffff:0:0/96 prefix, but telnetting to ::ffff:192.168.0.X (using source address Y::Z) gives me a "network unreachable" error.
I've tried adding routing rules, but nothing seems to work.



How can I allow




 telnet -b Y::Z ::ffff:192.168.0.X 1234


to work?



Thanks.



edit:
OS: Debian Squeeze (in an OpenVZ container, kernel 2.6.32).




I also forgot to mention that



 telnet -6 ::ffff:192.168.0.X 1234


works without any error.

hardware raid - Install of H700 controller in dell R610



We have just bought a refurbished Dell R610: 2 x 6-core X5650, 96 GB RAM, SAS 6iR RAID controller, and 2 x Fujitsu MBC2073RC 2.5-inch 15K 73 GB SAS hard drives.



We are going to also be installing additional drives:





  • 2 X Intel 730 2.5-Inch 480 GB SSD's SSDSC2BP480G4R5

  • 2 X Samsung Momentus SpinPoint ST2000LM003 2TB 2.5"



I also separately bought a refurbished H700 RAID controller to use instead of the SAS 6iR card.



I now have all the parts in front of me (I hope), but I can't seem to find decent instructions or videos on how to install this card in the R610. Does anyone have links or can describe what to do? E.g. should I:




  • Remove the existing SAS 6ir and install the H700 where the 6iR was? (I am guessing yes...)


  • etc



E.g. the Dell R610 owner's manual talks generally about installing an expansion card on page 89. I also looked in the Dell PowerEdge RAID Controller Cards H700 and H800 user's guide, but that talks a lot about configuring parameters for performance, configuring RAID, etc., and doesn't really cover the physical installation.



So how do I install this H700 card in our Dell R610? Hints, pointers, links?



[Edit] Photos attached: H700 card, H700 label, H700 plus cables, H700 overview, cable ends, R610 overview, SAS 6iR card.


Answer



There are three types of H700 cards.




  1. Dell PERC H700 Integrated Card


    • This looks almost exactly like the adapter card; the key difference is the PCIe backing plate seen in the second image below.





[image: PERC H700 Integrated card]




  2. Dell PERC H700 Adapter Card



[image: PERC H700 Adapter card]





  3. Dell PERC H700 Modular Card



I couldn't find a good pic for one of these. They only existed for a short time until Dell went to another version and called it the H710, which only lasted a short while until the H800 modular card.



The integrated card installs into the dedicated internal storage slot of the server. See the motherboard map that came with your server to identify the exact location; I put a photo of the motherboard view at the bottom of the post.
The adapter goes into a PCIe slot - self-explanatory.
The modular card installs only into blade servers; if you have that one, bad luck.






[image: R610 motherboard map showing the internal storage slot]



Once you've completed the physical installation, update your firmware to the most recent version. I believe it's A06, but you will want to validate. Here's a link to A06.



http://www.dell.com/support/home/us/en/19/Drivers/DriversDetails?driverId=3HD0T


Sunday, October 30, 2016

linux - CentOS 7 disk space

I have a CentOS 7 server where df -h reports about 90 GB of used disk space, but if I run du -sh /* the sum of all directories does not add up to 90 GB; it comes to around 60 GB.



What could be causing this difference in occupied disk space values?
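A common cause of a df/du gap like this is files that have been deleted but are still held open by a running process; their space is only released when the process closes them or is restarted. A quick check, as a sketch (nothing here is specific to this server):

lsof +L1                        # open files whose on-disk link count is 0, i.e. deleted but still open
du -sh --one-file-system /      # re-run du restricted to the root filesystem, ignoring other mounts

If lsof lists large deleted files (rotated log files are a frequent culprit), restarting the service that holds them usually reclaims the space.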

Export Bugzilla and reimport/move to new server?



I've created a new VM to host our outdated Bugzilla server and now I need to move the database over to the new VM.



Our old server is running Bugzilla 2.16.3, which I plan on using on the VM as well. This is just to get it off the old hardware. I don't want to mess with VM Converter for something like this.




So my question is:




  1. How can I export/reimport the Bugzilla database and settings?

  2. Anything else I need to set up or understand before I do this?



I'm not a Linux expert either, so forgive me. I'll be using Fedora 9 to host Bugzilla with.


Answer




Things you must take care of:




  • Web server setup. Apache + mod_perl most likely.

  • Exporting SQL dump and importing it to a new server.



export for mysql:



mysqldump --force --opt --user=$USER --password=$PASSWORD --databases $db > bugzilla.sql




import for mysql:



mysql -u $USER -p < bugzilla.sql




  • Install missing PERL CPAN modules



Run ./checksetup.pl in the Bugzilla directory to get a report on what's missing. It'll also show you the commands needed for installation. If installing via Perl directly fails (which it often does), try using your local distro package manager to install the missing modules (yum on Fedora, I take it?).
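As a sketch of the package-manager route on Fedora, yum can install Perl modules through their "perl(Module::Name)" provides; the module names below are only examples, and checksetup.pl prints the real list for your version:

yum install "perl(DBD::mysql)" "perl(Template)" "perl(CGI)" "perl(DateTime)"

Re-run ./checksetup.pl after each round of installs until it reports nothing missing.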





  • Point your browser to the new installation and debug further issues.


linux - Apache Virtual Hosts Not Working

EDIT: I was able to get this to work. There was a VirtualHost entry in httpd.conf that was affecting the virtual hosts in my vhosts.conf file.



I am trying to set up a CentOS server and configure it with two virtual hosts. This server is going to replace a Solaris server with the same settings. On the Solaris (current) server the virtual hosts work, but on the new server the first one is served regardless of which hostname the request was sent to (tested by modifying the hosts file).




SSL virtual host works.



I have tried to add "NameVirtualHost *:80", but get




[Thu Jun 30 14:43:38 2011] [warn] default VirtualHost overlap on port 80, the first has precedence
[Thu Jun 30 14:43:38 2011] [warn] NameVirtualHost *:80 has no VirtualHosts





Does anyone have any ideas?



EDIT: I forgot to post my configurations.




NameVirtualHost *:80
...

<VirtualHost *:80>
    DocumentRoot "/var/www/html/domain1"
    ServerName domain1
    ServerAlias www.domain1

    <Directory "/var/www/html/domain1">
        AllowOverride All
        Options None
        Order allow,deny
        Allow from all
    </Directory>

    DirectoryIndex index.html index.php
</VirtualHost>

<VirtualHost *:80>
    DocumentRoot "/var/www/html/domain2"
    ServerName domain2
    ServerAlias www.domain2

    <Directory "/var/www/html/domain2">
        AllowOverride All
        Options None
        Order allow,deny
        Allow from all
    </Directory>

    DirectoryIndex index.html index.php
</VirtualHost>


Thursday, October 27, 2016

centos - Finding Source of 100% CPU Usage

I recently had a crash on a Dell PowerEdge 2850 that I traced back to a bad RAID memory card. I replaced the card, reseated the battery, and got the server to boot again.



After booting up I noticed that one of the CPUs always goes to 100%. It is usually CPU 1 (2nd CPU) but out of about 10 boots it was CPU 3 (4th CPU) once.



The process that is causing the high load is events/1 (or events/3, the one time it happened on core 3). I've looked through dmesg and didn't find anything abnormal. Does anyone have any suggestions as to how I might find what is actually causing the CPU usage?
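One way to see what a busy events/N kernel thread is actually doing is to sample it; a sketch (PID 20 is taken from the top output further down, and perf comes from the perf package on CentOS 6):

cat /proc/20/stack        # dump the worker's current kernel stack; repeat a few times
perf top -C 1             # sample only CPU 1, where events/1 is pinned, and show the hot kernel functions

If the same driver function keeps showing up, that points at the subsystem generating the work.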



I also noticed that when I plug in a monitor at boot, the CentOS loading screen's progress bar gets to around halfway and then the screen blacks out (no login screen is shown). Otherwise everything starts up and runs normally.



Server info:




CentOS release 6.9 (Final)


CPU Info:



processor   : 1
vendor_id : GenuineIntel
cpu family : 15
model : 4

model name : Intel(R) Xeon(TM) CPU 3.00GHz
stepping : 3
microcode : 5
cpu MHz : 3000.000
cache size : 2048 KB
physical id : 3
siblings : 2
core id : 0
cpu cores : 1
apicid : 6

initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc pebs bts pni dtes64 monitor ds_cpl cid cx16 xtpr
bogomips : 5985.27
clflush size : 64
cache_alignment : 128
address sizes : 36 bits physical, 48 bits virtual

power management:


Please add a comment if you want to see any specific config files or outputs.



UPDATE 1:



cat /proc/interrupts



            CPU0       CPU1       CPU2       CPU3       

0: 133 0 0 1 IO-APIC-edge timer
1: 0 0 0 2 IO-APIC-edge i8042
4: 0 0 0 2 IO-APIC-edge
8: 0 0 0 1 IO-APIC-edge rtc0
9: 0 0 0 0 IO-APIC-fasteoi acpi
12: 0 0 0 4 IO-APIC-edge i8042
14: 0 0 0 147 IO-APIC-edge ata_piix
15: 0 0 0 0 IO-APIC-edge ata_piix
16: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb2
18: 0 0 0 301 IO-APIC-fasteoi uhci_hcd:usb4, radeon

19: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb3
23: 0 0 0 49 IO-APIC-fasteoi ehci_hcd:usb1
46: 0 0 3804 4767 IO-APIC-fasteoi megaraid
64: 0 288 0 104 IO-APIC-fasteoi eth0
NMI: 0 1 0 0 Non-maskable interrupts
LOC: 24325 76909 25269 31039 Local timer interrupts
SPU: 0 0 0 0 Spurious interrupts
PMI: 0 1 0 0 Performance monitoring interrupts
IWI: 0 0 0 0 IRQ work interrupts
RES: 2295 703 1357 886 Rescheduling interrupts

CAL: 3986 421 156 175 Function call interrupts
TLB: 526 95 803 3519 TLB shootdowns
TRM: 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 Machine check exceptions
MCP: 1 1 1 1 Machine check polls
ERR: 0
MIS: 0



sar



Linux 2.6.32-696.16.1.el6.x86_64 (HOSTNAME)     12/30/2017  _x86_64_    (4 CPU)

09:57:37 AM LINUX RESTART

10:00:01 AM CPU %user %nice %system %iowait %steal %idle
10:10:01 AM all 0.10 0.07 21.09 1.49 0.00 77.25
10:20:01 AM all 0.15 0.00 21.00 0.00 0.00 78.85
10:30:01 AM all 0.11 0.00 20.92 0.00 0.00 78.97

10:40:01 AM all 0.09 0.00 20.81 0.01 0.00 79.09
Average: all 0.11 0.02 20.96 0.37 0.00 78.54

12:35:32 PM LINUX RESTART


top



Tasks: 164 total,   2 running, 162 sleeping,   0 stopped,   0 zombie
Cpu(s): 0.2%us, 20.8%sy, 0.0%ni, 78.9%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st

Mem: 8058904k total, 453272k used, 7605632k free, 22240k buffers
Swap: 8191996k total, 0k used, 8191996k free, 174064k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20 root 20 0 0 0 0 R 99.9 0.0 5:50.67 events/1


UPDATE 2:



Once I regained physical access to the box I completely swapped out the PERC controller with one from a parts server. I reseated the memory card and the battery. Since the RAID config did not match due to the new hardware, I restored it from disk. After booting up I got the same 100% CPU usage.




I reset the BIOS/CMOS by pulling the CMOS battery and holding the power button down 10 seconds. Rebooted and set up RAID to read from hard drive again. CPU still at 100%.



I ran yum update and rebooted. Still 100%. Below is top showing individual CPUs.



top



top - 11:59:19 up 21 min,  1 user,  load average: 1.00, 0.97, 0.72
Tasks: 164 total, 2 running, 162 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st

Cpu1 : 0.3%us, 0.3%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.0%us,100.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8058904k total, 456996k used, 7601908k free, 22480k buffers
Swap: 8191996k total, 0k used, 8191996k free, 173792k cached


sar



Linux 2.6.32-696.16.1.el6.x86_64 (HOSTNAME)     01/04/2018  _x86_64_    (4 CPU)


10:40:45 AM LINUX RESTART

10:50:01 AM CPU %user %nice %system %iowait %steal %idle
11:00:01 AM all 0.08 0.00 20.86 0.00 0.00 79.06
11:40:01 AM all 0.00 0.00 0.00 0.00 0.00 0.00
11:50:01 AM all 0.08 0.00 20.87 0.02 0.00 79.03
12:00:01 PM all 0.08 0.00 20.89 0.00 0.00 79.02
Average: all 0.00 0.00 20.83 0.00 0.00 79.78



cat /proc/interrupts



            CPU0       CPU1       CPU2       CPU3       
0: 133 0 0 6 IO-APIC-edge timer
1: 0 0 0 2 IO-APIC-edge i8042
4: 0 0 0 2 IO-APIC-edge
8: 0 0 0 1 IO-APIC-edge rtc0
9: 0 0 0 0 IO-APIC-fasteoi acpi
12: 0 0 0 4 IO-APIC-edge i8042

14: 0 0 0 147 IO-APIC-edge ata_piix
15: 0 0 0 0 IO-APIC-edge ata_piix
16: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb2
18: 0 0 302 302 IO-APIC-fasteoi uhci_hcd:usb4, radeon
19: 0 0 0 0 IO-APIC-fasteoi uhci_hcd:usb3
23: 0 0 0 53 IO-APIC-fasteoi ehci_hcd:usb1
46: 0 0 4074 4912 IO-APIC-fasteoi megaraid
64: 0 4917 0 108 IO-APIC-fasteoi eth0
NMI: 0 0 0 28 Non-maskable interrupts
LOC: 197497 401002 148354 1361329 Local timer interrupts

SPU: 0 0 0 0 Spurious interrupts
PMI: 0 0 0 28 Performance monitoring interrupts
IWI: 0 0 0 0 IRQ work interrupts
RES: 5891 1183 2828 8249 Rescheduling interrupts
CAL: 3641 1441 156 184 Function call interrupts
TLB: 837 3324 833 202 TLB shootdowns
TRM: 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 Machine check exceptions
MCP: 6 6 6 6 Machine check polls

ERR: 0
MIS: 0


UPDATE 3:



I added the noapic and nolapic arguments to the kernel command line in GRUB. Here are the results from top and cat /proc/interrupts:



top




top - 14:55:01 up 5 min,  1 user,  load average: 1.76, 1.27, 0.58
Tasks: 111 total, 2 running, 109 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.4%us, 99.6%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8059152k total, 442016k used, 7617136k free, 22252k buffers
Swap: 8191996k total, 0k used, 8191996k free, 173556k cached


cat /proc/interrupts



          CPU0       

0: 447518 XT-PIC-XT-PIC timer
1: 2 XT-PIC-XT-PIC i8042
2: 0 XT-PIC-XT-PIC cascade
3: 1 XT-PIC-XT-PIC
4: 4 XT-PIC-XT-PIC
5: 50 XT-PIC-XT-PIC ehci_hcd:usb1
7: 8825 XT-PIC-XT-PIC uhci_hcd:usb4, radeon, megaraid
8: 1 XT-PIC-XT-PIC rtc0
9: 0 XT-PIC-XT-PIC acpi
10: 0 XT-PIC-XT-PIC uhci_hcd:usb3

11: 1586 XT-PIC-XT-PIC uhci_hcd:usb2, eth0
12: 4 XT-PIC-XT-PIC i8042
14: 148 XT-PIC-XT-PIC ata_piix
15: 0 XT-PIC-XT-PIC ata_piix
NMI: 0 Non-maskable interrupts
LOC: 0 Local timer interrupts
SPU: 0 Spurious interrupts
PMI: 0 Performance monitoring interrupts
IWI: 0 IRQ work interrupts
RES: 0 Rescheduling interrupts

CAL: 0 Function call interrupts
TLB: 0 TLB shootdowns
TRM: 0 Thermal event interrupts
THR: 0 Threshold APIC interrupts
MCE: 0 Machine check exceptions
MCP: 2 Machine check polls
ERR: 0
MIS: 0



I also tried booting another, much older kernel version (from CentOS 6.7), which yielded the same result as before: 100% CPU usage on a random core.



UPDATE 4:



I got distracted by another project and left the server on for a few hours. I checked top before shutting it down and noticed that the CPU usage had dropped back down to normal (less than 1% per core). I restarted to see if the problem would re-emerge and it did not. I want to know what caused this and am willing to continue trying different things to figure it out if anyone has any suggestions. The only thing I noticed out of the ordinary was a message in /var/spool/mail/root:



Invalid system activity file: /var/log/sa//sa04


This was generated before I checked top.




UPDATE 5:



I found the source of the problem! When I took a break to work on my other project I unplugged the monitor and took it with me. When I checked back in (via SSH) the CPU usage was normal. When I thought back to what may have changed the only thing I could think of was the monitor. To test the theory I rebooted with monitor plugged in. Voila! 100% CPU usage. I unplugged the monitor and CPU usage instantly dropped.



So now I am left wondering what is causing the CPU usage when a monitor is plugged in?



UPDATE 6:



lspci




00:00.0 Host bridge: Intel Corporation E7520 Memory Controller Hub (rev 09)
00:02.0 PCI bridge: Intel Corporation E7525/E7520/E7320 PCI Express Port A (rev 09)
00:04.0 PCI bridge: Intel Corporation E7525/E7520 PCI Express Port B (rev 09)
00:05.0 PCI bridge: Intel Corporation E7520 PCI Express Port B1 (rev 09)
00:06.0 PCI bridge: Intel Corporation E7520 PCI Express Port C (rev 09)
00:1d.0 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1 (rev 02)
00:1d.1 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2 (rev 02)
00:1d.2 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3 (rev 02)
00:1d.7 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller (rev 02)

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c2)
00:1f.0 ISA bridge: Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge (rev 02)
00:1f.1 IDE interface: Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller (rev 02)
01:00.0 PCI bridge: Intel Corporation 80332 [Dobson] I/O processor (A-Segment Bridge) (rev 06)
01:00.2 PCI bridge: Intel Corporation 80332 [Dobson] I/O processor (B-Segment Bridge) (rev 06)
02:0e.0 RAID bus controller: Dell PowerEdge Expandable RAID controller 4 (rev 06)
05:00.0 PCI bridge: Intel Corporation 6700PXH PCI Express-to-PCI Bridge A (rev 09)
05:00.2 PCI bridge: Intel Corporation 6700PXH PCI Express-to-PCI Bridge B (rev 09)
06:07.0 Ethernet controller: Intel Corporation 82541GI Gigabit Ethernet Controller (rev 05)
07:08.0 Ethernet controller: Intel Corporation 82541GI Gigabit Ethernet Controller (rev 05)

09:0d.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV100 [Radeon 7000 / Radeon VE]


UPDATE 7:



Adding noacpi and nomodeset to the boot options made the CPU usage problem disappear. CentOS also booted to a login screen instead of blacking out the monitor mid loading screen. What does this indicate?

ubuntu - Setting up a Linux client for OpenLDAP over SSL

I'm trying to set up SSL with a server running OpenLDAP (and using OpenSSL, not GnuTLS).



The server seems to be working fine: I can authenticate using ldap:// and can also use ldaps:// from Apache Directory Studio. I can use LDAPS from the client as well, as long as I have this setting in /etc/ldap.conf:




tls_checkpeer no


As soon as I try to use tls_checkpeer yes the SSL connection is refused.



I have the following settings on the server:



olcTLSCACertificateFile  /etc/ssl/certs/cacert.pem
olcTLSCertificateFile /etc/ssl/private/newcert.pem
olcTLSCertificateKeyFile /etc/ssl/private/newreq.pem



The client has these related entries:



# ssl on
uri ldaps://192.168.1.15
tls_checkpeer no
# tls_cacertdir /etc/ssl/certs
# tls_cacertfile /etc/ssl/certs/cacert.pem



The file /etc/ssl/certs/cacert.pem is accessible to users for reading. With the above configuration, it works. If I uncomment one of the two commented tls_* configuration entries and change to tls_checkpeer yes it fails.



I've tried using both cacert.pem and newcert.pem for the certificate (tls_cacertfile) and it didn't work. The cacert.pem has a -----BEGIN CERTIFICATE----- section, as does newcert.pem.



However, the cacert.pem has this under X509v3 extensions:



X509v3 Basic Constraints: 
CA:TRUE



...and the newcert.pem file has this in the same section:



X509v3 Basic Constraints: 
CA:FALSE
Netscape Comment:
OpenSSL Generated Certificate


Other certificates in /etc/ssl/certs have nothing in them except the block marked by BEGIN CERTIFICATE.




Using this command:



openssl s_client -connect 192.168.6.144:636 -showcerts


I can see the contents of cacert.pem and newcert.pem being used for the session.



I've not made changes to /etc/ldap/ldap.conf on either the client or the server.




Errors from the client include:



Feb  8 14:32:24 foo nscd: nss_ldap: could not connect to any LDAP server as cn=admin,dc=example,dc=com - Can't contact LDAP server
Feb 8 14:32:24 foo nscd: nss_ldap: failed to bind to LDAP server ldaps://bar: Can't contact LDAP server
Feb 8 14:32:24 foo nscd: nss_ldap: could not search LDAP server - Server is unavailable


There's no special log entries on the server. The client is Ubuntu Lucid Lynx 10.04, as is the server. All are using nscd.



Attempting to replicate the problem on a Red Hat Enterprise Linux 5.7 system fails in the opposite direction: something that should probably fail (using tls_checkpeer yes with an empty tls_cacertdir directory) does not. I need SSL to work on both systems; we have a mix of both Ubuntu and RHEL.




I restarted nscd after each configuration change.



These are my actual questions:




  • How do I get the tls_checkpeer option working? (main question)

  • Does ssl on actually do anything on the client?




Thanks.

performance - SSD drives and RAID configurations vs LVM




Background:



I'm familiar with the basic RAID levels, and am curious to know if using SSD devices in a RAID0 or RAID5 would be a better deployment than adding them to a large LVM volume.



Specifically, I'm concerned about heat, noise, and power consumption in a small server room, and am planning to move from hard disks to SSDs. The servers in question have 4-6 SATA-II channels, so this is just about how to get the highest performance out of the drives after the switch; I'm not worried about adding new controllers or anything else drastic beyond replacing the drives.



RAID0



With RAID0, I realize I have no recoverability from a drive loss - but in a dominantly read environment, I believe the SSDs will not likely ever come close to hitting their estimated 1000000-hours MTBF, and certainly won't hit the write-cycle issues that plagued flash memory for a long time (but now seem to effectively be a thing of the past).




RAID5



With RAID5 I'd be "losing" one of the drives for parity, but in the event any one of them dies, I can recover by just replacing that unit.



LVM



With LVM, I'm effectively creating a software JBOD - simple, but if a drive dies, whatever is on it is gone like in RAID0.







Question:



What does the SF community suggest as the best approach for this scenario?


Answer



First of all, LVM configuration and RAID settings should be two independent decisions. Use RAID to set up redundancy and tweak performance, use LVM to build the volumes you need from the logical disks that RAID controller provides.



RAID0 should not appear in your vocabulary. It is only acceptable as a way to build fast storage for data that nobody cares about if it blows up. The need for it is largely alleviated by the speed of SSDs (an enterprise-class SSD can do 10+ times more IOPS than the fastest SAS hard disk, so there's no longer a need to spread the load over multiple spindles), and, should you ever need it, you can achieve the same result with LVM striping, where you have much more flexibility.



RAID1 or RAID10 doesn't make much sense with SSDs either; again, because they are much faster than regular disks, you don't need to waste 50% of your space in exchange for performance.




RAID5, therefore, is the most appropriate solution. You lose a bit of space (1/6th or 1/4th), but gain redundancy and peace of mind.



As for LVM, it's up to you to decide how to use the space you get after creating your RAID groups. You should use LVM as a rule, even in its simplest configuration of mapping one PV to one VG to one LV, just in case you need to make changes in the future. Besides, fdisk is so 20th century! In your specific case, since it'll most likely be a single RAID group spanning all the disks in the server, you won't be joining multiple PVs in a VG, so striping or concatenating don't figure in your setup; but in the future, if you move to larger external arrays (and I have the feeling that eventually you will), you'll have those capabilities at your disposal, with minimal changes to your existing configuration.
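As a sketch of that simplest layout (the device name /dev/sdb stands in for whatever logical disk your RAID controller presents):

pvcreate /dev/sdb                          # mark the RAID virtual disk as an LVM physical volume
vgcreate vg_data /dev/sdb                  # one volume group on top of it
lvcreate -n lv_data -l 100%FREE vg_data    # one logical volume using all the space
mkfs.ext4 /dev/vg_data/lv_data

Later you can vgextend the volume group with another PV and lvextend the logical volume without disturbing the existing layout.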


Wednesday, October 26, 2016

Nginx reverse proxy server not serving cached homepage correctly



I've been configuring a new server for a web development client recently, and run into an interesting problem. The site is fairly high-traffic, but the CMS is fairly heavy and inefficient at generating pages in realtime. This slows page loads down fairly significantly when a page needs to be generated directly from the CMS. The CMS is unfortunately tied to Apache for a number of reasons, so serving pages from it through nginx isn't a viable option. Additionally, the Apache .htaccess file for the CMS is necessarily somewhat complex and involved, so requests served from Apache are relatively slow.




Due to these factors, I'm configuring nginx as a reverse proxy server, and have written a plugin for the CMS that will render pages to static HTML, to be stored in a specific cache directory wherever it's reasonable. My intent is to configure nginx to serve requests for a cached file directly, thus avoiding apache (and more importantly, avoiding the CMS) entirely.



So far this process has gone smoothly, and I have nginx successfully serving the cached files generated by the CMS (at a nice, responsive 25ms, compared to the CMS's 500+ ms), and passing through to the CMS if a cache file does not exist, all with one exception: the homepage. For some reason, the homepage location blocks don't seem to be activating at all.



Here's the site config (anonymized):



# reverse proxy config for example.com, serves all non-dynamic files
server {
listen 80;

server_name example.com www.example.com;
root /var/www/example.com/;

add_header "X-Index-0" "block-0";

location ^~ ^(index)?\.?(htm|html|php|asp|aspx)?$ {
try_files /var/www/example.com/cache/index.html @apache;
add_header "X-Index-1" "block-1";
}


location / {
try_files /var/www/example.com/cache/index.html @apache;
add_header "X-Index-2" "block-2";
}

location ~ ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|js|ttf|woff|svg|otf)$ {
etag on;
expires 30d;
}


location ~ ^(.+\.html)$ {
root /var/www/example.com/static-cache;
add_header "X-Cache-Hit" $uri;
try_files $1 @apache;
expires 7d;
}

location @apache {
add_header 'X-Block-Apache' 'block';
proxy_set_header 'Host' $host;

proxy_set_header 'X-Forwarded-From' $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8080;
}
}


The problem is, every request for the root of the site (example.com) gets routed to the @apache block, despite the presence of the /var/www/example.com/cache/index.html (verified by opening it in a text editor with the same user nginx runs as). I added the add_header lines to each location block in an attempt to understand the location blocks that are being activated and in what order. Unfortunately, the only header I'm getting in response to the request is 'X-Block-Apache' one, indicating to me that neither of the location blocks targeted at the root of the site (and any requests for index files) are activating.



This impression is reinforced by the fact that going to http://example.com/index.html gives me an 'X-Cache-Hit' header, and is very obviously served from nginx (something like 15x the speed compared to the page being served from the CMS). However, shouldn't the first location block trigger in this case? What can I do to get the index file serving appropriately from nginx if visiting example.com does not seem to trigger the "location /" block?




Thanks for any assistance!


Answer



Your try_files is trying to load the file you specified, /var/www/example.com/var/www/example.com/cache/index.html, which doesn't exist.



Remember that you need to specify the path relative to the document root.



try_files /cache/index.html @apache;

Fantec SRC-2080x7 connect backplane from SAS to Sata




I've read this topic: How exactly does a SAS SFF-8087 breakout cable work? + RAID/connection questions



That basically explains that you can go from SATA to SAS, but not vice versa.



However, on the store page of the Fantec SRC-2080x7 chassis I've seen a few reviews where people seem to use a breakout cable to connect the SATA ports from the motherboard to the mini-SAS port (SFF-8087) on the backplane (where the SATA HDDs are connected).



Is there an exception to this backplane regarding this cable?



The SFF-8087 breakout cable doesn't seem to be working for me, which would be consistent with the topic I linked above, though I'd be surprised if the people in the reviews hadn't tested it before posting their reviews.




Note: The store page is in german and I've been translating everything to english by using Google Translate



This is the page of the chassis by the manufacturer, but also in german (even the english language at the top-right doesn't help).



EDIT: My backplane model is DH-6GMSAS-03A


Answer



SFF-8087 connectors are frequently used for SATA multi-port connections as well - on backplanes or sometimes even on RAID controllers. In reverse, you can use a SFF-8087-to-4xSATA fanout cable to connect standard on-board SATA ports to a (passive) SAS cage or the Fantec case you've linked to, where you plug the SATA drives (obviously, SAS drives would fit but won't work).



That said, I've seen some really low-quality fanout cables that were hard to correctly plug into the SFF-8087 receptacle - and in one case extremely hard to remove again. Make sure everything is plugged correctly, the 8087 is latched, and the drives are powered before or together with system power.



sFTP access issues on Ubuntu



I've set up SFTP access on an Ubuntu 9.10 (Karmic) server, but I'm having what I think are permission issues.



The SFTP account I've created logs in automatically to:





/srv/www/domain.com/




However, I'm only actually able to upload to:




/srv/www/domain.com/public_html





This is not workable, as I need to create directories etc. parallel to public_html. I appreciate this is probably something I've done wrong, as it's patched together from a few help files.



I followed these instructions to create a group for SFTP access, then created a user and modified their home directory using:




usermod -d /srv/www/domain.com newuser




Let me know if there's any other information you need to troubleshoot this.




OUTPUT OF COMMANDS



> ls -al /srv/www/domain.com/ | grep public_html
drwxr-xr-x 2 newuser newuser 4096 2010-08-24 12:38 public_html

> ls -al /srv/www/ | grep domain.com
drwxr-xr-x 5 root root 4096 2010-08-24 12:21 domain.com

> groups newuser

newuser : newuser filetransfer

> ls -ld /srv/www/domain.com/
drwxr-xr-x 5 root root 4096 2010-08-24 12:21 /srv/www/domain.com/

Answer



You are using the ChrootDirectory directive of OpenSSH.



This will only work if the home directory of the respective user is owned by root:root and is not group- or world-writable (i.e. has a permission mask like 0755, not 0775 or 0770). Otherwise sshd will issue a warning in your auth.log (or wherever the syslog AUTH facility is sent).
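As a sketch of what that usually looks like with internal-sftp (the group name and paths are taken from the question; adjust to your layout):

# /etc/ssh/sshd_config
Subsystem sftp internal-sftp

Match Group filetransfer
    ChrootDirectory /srv/www/domain.com
    ForceCommand internal-sftp

# ownership the chroot requires, plus a writable subtree for the user
chown root:root /srv/www/domain.com
chmod 755 /srv/www/domain.com
chown newuser:newuser /srv/www/domain.com/public_html

The chroot root itself must stay owned by root and not writable by the user, which is exactly why only public_html (owned by the user) was writable in the setup above.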


Tuesday, October 25, 2016

security - Site hacked - any ideas of what to do, where to look?

A site I host was recently hacked. The index page had the following code added to the bottom (just above the closing body tag):



   


Followed by...




Lots and lots of tags going to
spammy sites...





Our server has suphp installed, so I don't think it could've happened from another account. This account does have Wordpress installed, so that may be the problem.



Any tips on where to go from here?



Thanks!

domain name system - Office 365 MX record for subdomain




I have a domain registered with an external registrar (marcaria.com). It is configured to be handled by Office 365 (the DNS name servers are set to ns1.bdm.microsoftonline.com and ns2.bdm.microsoftonline.com). I have a contracted newsletter agency that wants to use a subdomain to send out newsletters.



They asked me to create a subdomain and delegate the NS record to them. Turns out this can't be done in Office 365 - I couldn't create subdomains with NS records. (Did I miss something?)



Since NS failed, they gave me a list of A, TXT and MX records to create. I could create the A and TXT records, but I don't see MX as an option.



In Admin/Domains/mydomain.com, in the DNS Settings, I have a New custom record button, but that only allows the creation of TXT, A, CNAME and AAAA records. No MX. There is a section that says Exchange Online records, and one of my created TXT records was automatically put there, but I can't add records to that section. (Also, now I can't delete that TXT record which was automatically moved here, which is a worry. There is an edit button next to it, but no delete. Can change it, can't remove it. Sweet.)



So, any ideas? Is this even possible in Office 365 (delegating a subdomain, like newsletter.mydomain.com)?




Thank you!


Answer




Since NS failed, they gave me a list of A, TXT and MX records to create. I could create the A and TXT records, but I don't see MX as an option.




If you select Office 365 as your DNS provider you CANNOT host email or IM with another hosting company.
https://support.office.com/en-us/article/Can-I-add-custom-subdomains-or-multiple-domains-to-Office-365-5481401f-7771-490e-b728-b3a81305a32e





They asked me to create a subdomain and delegate the NS record to them. Turns out this can't be done in Office 365 - I couldn't create subdomains with NS records. (Did I miss something?)




If you want to add a subdomain of your domain you MUST change your DNS management to a DNS provider other than Office 365.
https://support.office.com/en-us/article/Can-I-add-custom-subdomains-or-multiple-domains-to-Office-365-5481401f-7771-490e-b728-b3a81305a32e


http headers - Disabling 206 partial content responses on nginx




I have an HTML5 web app that uses a video tag. Depending on the user actions, different parts of the video will be played in response. This video does not exceed 5MB.



I need this video to be downloaded entirely on the client; otherwise the user will have to wait for buffering if the part to be played is near the end of the video. Indeed, browsers' behavior is to ask whether range requests are supported and to get an HTTP 206 partial content response from my nginx server.



I found a way to do what I want by using XHR2 to download the entire video as a Blob. However, I was wondering whether it would be possible, for browsers which do not support XHR2, to make nginx refuse range requests and send a classic HTTP 200 response so that the browser fetches the entire video.



Is that possible? Thank you very much for your help!


Answer



Set max_ranges to 0. This requires nginx 1.1.2 or higher.




Example:



location ~ \.mp4$ {
max_ranges 0;
}
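To verify the change, request a byte range and check that nginx now answers with a plain 200 and no Content-Range header (the URL is a placeholder):

curl -s -o /dev/null -D - -H "Range: bytes=0-99" http://example.com/video.mp4

With max_ranges 0 in place the status line should read HTTP/1.1 200 OK and the full body is sent, so browsers without XHR2 end up downloading the whole file.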

database - Unexplained CPU and Disk activity spikes in SQL Server 2005



Performance Monitor of Database Server



Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10 KB in size, though roughly 1 in 100 rows will be > 1 MB but < 4 MB. So out of the 1.6 million rows, about 16,000 of them will be these large rows. The reason they are this big is that we're storing zip files as binary blobs in the database, but I'm digressing.



We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot.



I've run the SQL Server profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload).




I'm not overly worried as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike?



Update:



After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph (checkpoints/sec and percent of log used). As described on MSDN, checkpoints occur when the transaction log becomes 70% full and the simple recovery model is in use.



This has been enlightening and I've definitely learned something!


Answer




Checkpointing, i.e. writing out changed database pages. Under many circumstances this does not happen continuously, only periodically.



https://stackoverflow.com/questions/865659/sql-server-checkpoints
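For reference, a quick way to confirm the recovery model and watch log usage from T-SQL (a generic sketch, nothing specific to this database):

SELECT name, recovery_model_desc FROM sys.databases;   -- confirm which databases use SIMPLE recovery
DBCC SQLPERF(LOGSPACE);                                 -- log size and percent used per database

Under the simple recovery model the "Log Space Used (%)" figure drops back down shortly after each automatic checkpoint, which lines up with the once-a-minute spikes seen above.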


Monday, October 24, 2016

Remove Linux file permissions, in windows



I've set up a file share that I want my users to be able to read, write to, and delete from.
The problem is that I'm able to list content and delete it, but not read or write.
Yes, this goes for several users.



The permissions look like this:





NTFS



No inheritance from parent folder.
Owner: Administrators
Full control: Authenticated users, Administrators



Share



Full control: Authenticated users, Administrators





I bet I missed something trivial. Could someone point me into the right direction?



Update: Now I only lack read/execute permissions, after adding SYSTEM, the Users group, and CREATOR OWNER to the NTFS permission settings (gave them full control).



It struck me that perhaps the problem is caused by some Linux permissions? I copied all the files from a Linux SMB share, so I guess the files may still carry Linux permissions; could that be it?



If so, how do I remove those?


Answer



This turned out to be an issue with the encryption Windows applies. Removing the encryption solved the whole thing :/
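Assuming the encryption in question was NTFS/EFS, a sketch of removing it in bulk from the share (the path is an example):

cipher /d /s:D:\Share

cipher /d decrypts the files, and /s: applies the operation to the given directory and everything beneath it.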


Why does my debian server freeze?

I installed Debian 9 ("Stretch") in a virtual machine hosted on ESXi 6.5
The OS is up-to-date and nothing else has been installed but VMware tools.



Sometimes when I execute a command, the server will freeze and nothing can be done besides resetting the VM (the SSH server becomes unresponsive, all terminals are frozen, and it doesn't show a kernel panic or anything else).



I can reproduce the problem very easily: I just have to execute wget a couple of times and the OS will hang.



At first, I thought it could be a RAM problem. I used memtest86+ on the host and no problem was found. I also tried the debian package "memtester" which runs very well in the VM and doesn't make the OS freeze whatsoever.



/var/log/messages shows nothing special, but there's one line I don't understand:




Jul  3 13:05:57 myhost kernel: [   58.966715] TCP: ens192: Driver has suspect GRO implementation, TCP performance may be compromised.


What could be the problem and how can I debug the whole thing?



Config: 1 CPU / 4 cores - 32GB Ram - 64GB HDD
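Given that GRO warning, one cheap thing to try is disabling GRO on the interface and seeing whether the freezes stop; a sketch (ens192 is the interface name from the log, and the setting only lasts until reboot):

ethtool -k ens192 | grep generic-receive-offload   # show whether GRO is currently enabled
ethtool -K ens192 gro off                          # disable GRO on this NIC

If the hangs disappear with GRO off, the problem is more likely in the vmxnet3 driver / VMware Tools combination than in the OS itself.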

Sunday, October 23, 2016

domain name system - Trying to grasp the logic behind a complete DNS resolution and respective sequence of actions



Say we have a domain "example.org" and it has an authoritative name server with name "ns1.example.org" with a glued IP as delegated by domain registrars (delegative?) name servers.





  • Someone types example.org into their browser:



    The request is passed on to the ISP's DNS server. When the ISP's caching
    name server(s)/local cache does not find a match for the domain
    "example.org" and its respective record (a copy of the SOA record? or the
    web server IP? the authoritative name server IP? both?), does the ISP's
    DNS then attempt to resolve the authoritative name server IP(s) for the
    "example.org" domain by querying the WHOIS database with the domain name
    "example.org"? Or does it pass the request to the root ".org" server,
    which then queries the WHOIS database for the authoritative name server
    IP(s) using the domain name "example.org" to find a matching glue record?

    The WHOIS database is part of the Internet central directory; I take it
    the main root servers are what is referred to as the "Internet central
    directory"?

    The root .org servers will contain the glue records for the "example"
    domain, and the request for "example.org" will finally be forwarded to
    the authoritative name server, where the A/AAAA records will map the
    domain name "example.org" to an address for a resource such as a
    web server, etc.




Also, is there such a thing as a delegated NS record that is not glued to an IP address for the authoritative DNS server which hosts and publishes its zone file? I read in a book about a circular dependency/catch-22 problem when the NS name is a sub-domain of the domain being resolved, and straight away thought: aren't all delegated NS names tied to an IP address, so why would that occur?


Answer



Glue records are in-zone A records for the NS records of the zone.
Hence, they are only required when the NS records lie in-zone.
If the NS record points to an out-of-zone hostname, no glue is permitted, since the NS record points to a hostname not under the purview of that zone.




Always start with the fundamental fact that a zone is an area of administrative responsibility - all records in a zone fall under that zone's responsibility.



That said, your web request example goes as follows:




  • the browser asks the local DNS resolver for the A record for example.org

  • the local resolver checks if it already knows it

  • if not, it forwards the query to its configured nameserver.

  • that nameserver will check if it has the record, and if it allows recursive queries, will retrieve the records if it doesn't have them, starting at the global root.




WHOIS is not a part of DNS; no whois queries are ever done for name resolution.
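You can watch that whole chain yourself with dig: +trace starts at the root servers and follows each referral down to the authoritative servers (using example.org as the example):

dig +trace example.org A

The output shows the root servers referring to the .org servers, the .org servers referring to example.org's nameservers (with glue where the nameservers are in-zone), and finally the A record from the authoritative server; no WHOIS lookup appears anywhere.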


Saturday, October 22, 2016

Apache destination virtual host certificate when using mod_rewrite or ProxyPass

I am trying to use mod_rewrite or ProxyPass to redirect (PT) the client's request from virtual host A on port 443 to a different virtual host B on port 4434, also with SSL, like this:



SSLProxyEngine on
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
ProxyPassMatch ^/vd https://localhost:4434



This is the way I am trying to use mod_rewrite:



RewriteEngine on
RewriteRule (.*) https://%{HTTP_HOST}:4434%{REQUEST_URI} [PT]


The problem is that my client validates the server certificate, and in the response the client gets the certificate of virtual host A on port 443 instead of virtual host B on port 4434, so the SSL handshake fails.



Is there any way to work around this problem ?

Thanks

attach / detach mssql 2008 sql server manager



An external consultant wrote a guide on how to copy a database. Step two was to detach the database using SQL Server Manager. After the detach, the database was not visible in SQL Server Manager...



Not much to do but write an email to the service provider asking to have the database attached again. The service provider's answer: "Not possible to attach again since the SQL Server security has been violated."



Rolling back to last backup is not the option I want to use.



Can anyone give feedback on whether it seems logical and reasonable to assume that a database detached from SQL Server 2008 through SQL Server Manager cannot be reattached? It was done by right-clicking the database and choosing Detach.




-- update --



Based on the comments below I update the question with the server setup.



There are two dedicated servers:



srv1: Web server with remote desktop and an Sql Server Manager



srv2: Sql server that can be accessed through the Sql Server Manager on the web server




-- update2 --



After a restart of the server the DBA could suddenly attach the database again, and I guess that after the restart it was a simple task. So all of your answers were right! It seems I can only mark one as the correct answer, so I marked the first answer correct, but all are correct answers.



Thanks a lot. Without posting this thread we might have had to suffer while watching our database being restored from a backup :-) Thanks a lot.



BR. Anders


Answer



If you know the location of the MDF and LDF files, and if you have either the sysadmin or dbcreator role, then you can just attach the database yourself using sp_attach_db. If you don't have these things and your service provider refuses to take this action, then I would be looking for a new service provider.
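A sketch of the attach, with a placeholder database name and file paths:

EXEC sp_attach_db @dbname = N'MyDatabase',
    @filename1 = N'D:\Data\MyDatabase.mdf',
    @filename2 = N'D:\Data\MyDatabase_log.ldf';

On SQL Server 2008 the non-deprecated equivalent is CREATE DATABASE MyDatabase ON (FILENAME = N'D:\Data\MyDatabase.mdf'), (FILENAME = N'D:\Data\MyDatabase_log.ldf') FOR ATTACH;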


pfSense DMZ VMware and Ubuntu 16.04.1 LTS

In a VMware environment I am having connectivity issues (no ping) between the gateway (pfSense DMZ) and Ubuntu server 16.04.1 LTS.




pfSense is working fine from the LAN subnet (192.168.1.0/24) but not from the DMZ subnet (10.10.10.0/24).



I think I have configured the firewall side of things correctly (pfSense), but I'm new to VMware, so I think I might be missing something within the VMware environment and/or the Ubuntu server.



The Ubuntu server has the IP address 10.10.10.6, and the pfSense WEBDMZ gateway has the IP address 10.10.10.3.



Looking at the topology, when I connect a computer to vSwitch2 (LAN) on vmnic5, I can get to the internet with no problem. But on vSwitch3 (WEBDMZ) I am unable to ping in either direction, from 10.10.10.6 to the gateway 10.10.10.3 and vice versa.



Has anyone come across the same issue before?




Topology:






Firewall DMZ config and ubuntu interface:




mdadm - JBOD Failed to assemble after middle device Failed

I've got a problem with my JBOD. After my Synology DS died because of a failure of the boot HDD, I wanted to recover the data on my JBOD (3 x 3 TB). I started a Debian Live system to mount and save the data from my JBOD; that worked well.



I ordered an 8 TB drive from Amazon to save the data, but as I started the rsync job the middle device (sdb) got I/O errors...

My fault was to think a reboot would help because of an unhandled kernel error...
Yeah, dumb me, the middle disk died.



My problem now:
I've got the first and the last devices working, but mdadm says:



root@debian:~# mdadm --assemble --force /dev/md3 /dev/sd[bc]3



mdadm: /dev/md3 assembled from 2 drives - not enough to start the array.







These are the drives:



root@debian:/# mdadm --examine /dev/sd[abc]3
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : e8937ad2:c0080cf8:6e96733a:2a3b4ee8
           Name : LG-NAS:3
  Creation Time : Sat Feb 25 20:08:20 2017
     Raid Level : linear
   Raid Devices : 3

 Avail Dev Size : 5850889088 (2789.92 GiB 2995.66 GB)
  Used Dev Size : 0
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : baa0a4e4:9bc55ee7:6e6d27ea:fe158da8

    Update Time : Thu Mar 9 15:20:28 2017
       Checksum : 84051d55 - correct
         Events : 1
       Rounding : 64K

    Device Role : Active device 0
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : e8937ad2:c0080cf8:6e96733a:2a3b4ee8
           Name : LG-NAS:3
  Creation Time : Sat Feb 25 20:08:20 2017
     Raid Level : linear
   Raid Devices : 3

 Avail Dev Size : 5850889088 (2789.92 GiB 2995.66 GB)
  Used Dev Size : 0
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=32 sectors
          State : clean
    Device UUID : 0b4313db:8989392c:870a02d2:910a8eb5

    Update Time : Thu Mar 9 15:20:28 2017
       Checksum : b42b1540 - correct
         Events : 1
       Rounding : 64K

    Device Role : Active device 2
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)



root@debian:/# fdisk -l /dev/sd[abc]3

Disk /dev/sdb3: 2.7 TiB, 2995656278016 bytes, 5850891168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdc3: 2.7 TiB, 2995656278016 bytes, 5850891168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes



root@debian:/# mdadm --examine --scan

ARRAY /dev/md/3 metadata=1.2 UUID=e8937ad2:c0080cf8:6e96733a:2a3b4ee8 name=LG-NAS:3



My last idea now is to recreate the JBOD using one of the following commands:



mdadm --create --verbose /dev/md3 --name=LG-NAS:3 --metadata=1.2 --level=linear --raid-devices=3 /dev/sdb3 missing /dev/sdc3



Or



mdadm --create --verbose /dev/md3 --name=LG-NAS:3 --metadata=1.2 --level=linear --raid-devices=2 /dev/sdb3 /dev/sdc3




Any suggestions on what to do next?

Friday, October 21, 2016

Email bounce notification for an email account that doesn’t exist in exim mail server

I need help with an Exim mail server setting.



I want to set up the Exim mail server to trigger an email bounce notification for an email account that doesn't exist. For example, abc@abc.com should bounce immediately.



When I try the same from Outlook using my personal email account, I get an immediate bounce notification.



Is there a setting for this in the Exim configuration?
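In a stock Exim setup this is normally handled by recipient verification in the RCPT ACL, so unknown local addresses are rejected during the SMTP conversation and the sending server generates the bounce. A sketch of the relevant stanza, assuming the usual acl_smtp_rcpt layout:

deny    message = Unknown user
        domains = +local_domains
        !verify = recipient

If that verification (deny ... !verify = recipient, or the equivalent require verify = recipient) is missing or commented out, Exim accepts the message and can only fail it later at delivery time.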

Scheduled Task to show console window when logged on but still run when not logged on

Is it possible (and if so, how) to set up a task (console application) in Server 2008 so that it'll run both when a user is logged in and when no user is logged in, AND - if the user is logged in (either local or via RDP) - have the console appear on the screen while the program is running?



I.e. the program should run under the defined user context, and it writes status messages to stdout, which go to a standard console window. This console window is either shown (if the defined user is currently logged in locally or via RDP) or not shown (but the application still runs).



I have access to the source of the console application, so if it needs some additional code (like specifically opening up a new console window or what have you), then that's not a problem.




At the moment, I can set up the task as "Run only when user is logged on" which will run the application when the user is logged on (local or RDP) and I can then see the status messages, or I can set it up as "Run whether user is logged or not" and no status output is visible - not even if the user is logged on.

Thursday, October 20, 2016

backup - How to choose a storage company?

For an ad agency, I need to find a good storage company.



There are some things to take into consideration:




  • Support for different OS (Linux, Mac OS X, Windows Xp/Vista) (if it matters)


  • Internal/External systems (through internet or with dedicated servers)

  • Redundancy (save on more than one disk and backups)

  • Quick transfer

  • Automation



Files to be backed up will mainly be PSD files, AI files, and documents.



What do I need to know to choose a good provider?




Any advice (and any providers you know of to compare)? We are in France.



Thanks.



EDIT :



Capacity is about ±2.5 TB.



Budget is unknown, and open.

storage - RAID consideration for 24 Disk Array



I have here a 24-disk array with 3.7 TB disks in it. Performance-wise, what would be a good configuration when using RAID 6: RAID 6 over all 24 disks, or should I use two 12-disk RAID 6 groups and then a RAID 0 on top?




I'm not so interested in a conversation about the RAID level itself (e.g. 5 or 6 or 10) but more about the arrangement of the disks: whether it would be better to use multiple smaller RAID groups or one big RAID group, for example. What's the best practice here?



Best.


Answer



Some of this hinges on the hardware involved. I prefer RAID 1+0 for simplicity and rebuild times. It's tough to give a generic answer without more details, though...



Things to consider:





  • The disks installed in the system: SAS? SATA? Nearline SAS? This impacts the failure rate and failure mode, as well as array rebuild times.


  • The anticipated use for storage: Your performance requirements may drive the design. Random I/O? Sequential? Read-biased? Write-biased?


  • Interconnects: How will the storage array be connected to the server? SAS? Will you be using a single connection to an HBA? Two? Multipath? 3Gbps? 6Gbps? There will be a ceiling in storage throughput because of SAS oversubscription. So this factors into the design because of that performance cap.


  • Controller: I always come from an HP SmartArray perspective, but I suppose the rest of the world uses LSI and PERC controllers. This may be a moot discussion, as LSI controllers can't have more than 16 disks in a single-level virtual drive; e.g. you wouldn't be able to create a 24-disk RAID6 volume. You can do this with HP controllers, though.


  • Resiliency: Do you plan to have online spares? When you consider a nested RAID level like 60, that becomes important.




So, assuming a controller capable of both, your options are really 4 x 6-disk RAID6+0, 3 x 8-disk RAID6+0, 2 x 12-disk RAID6+0, and a 24-disk RAID6.




Determine the space needs, as they vary. Then evaluate the sequential performance capabilities of each. I'd suggest 3 x 8-disk as a reasonable choice if you go nested and aren't interested in RAID 1+0.


sudo like in Ubuntu (for Debian and other Linuxes)



I personally like the default sudo behavior of Ubuntu:

  • Root login impossible
  • The "admin" group is granted "ALL=(ALL) ALL"
  • Users in the "admin" group are asked for their user password (not a root password) when using sudo.



[I like it, because this way, there's no root password to be shared among several people. There may be good reasons for other opinions, too - but that shouldn't be the topic of this question.]




Now I'm trying to re-create this behavior in Debian Etch. It basically works, but there's one important difference: Debian doesn't ask for a password. It should ask for the user's password.



I edited the sudoers file to be exactly the same as in Ubuntu, and I added a user to the newly created "admin" group. What else do I have to do to get the Ubuntu behavior in Debian (and other Linuxes)?
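For reference, this is roughly all it should take on Debian (a sketch; "chris" stands in for the actual user name):

# create the group and put the user in it
addgroup admin
adduser chris admin

# in /etc/sudoers, edited with visudo
%admin  ALL=(ALL) ALL

With that line in place, sudo asks for the invoking user's own password by default; no extra Defaults option is needed for that.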



Thanks
Chris


Answer



The problem solved itself by waiting 15 minutes...
It works now; sudo simply kept the cached credentials alive for 15 minutes, which is normal, but I didn't know that it even keeps them after a logout/login. I didn't expect this at all.




Everything's working fine now, thanks for the answers! (Can/should I somehow close this question?)


windows server 2003 - Cannot access public facing website

Windows Server 2003, IIS 6




We have a public facing website deployed and until this morning it was running fine.
Now when we try and access the website from outside the network the browser is returning a 'website not found' message. We are not receiving an IIS error message, it's just stating that the website cannot be found.



Accessing the URL from the internal network works fine, as does pinging the URL and the IP address internally, so it doesn't appear to be an IIS issue. tracert just hits internal servers as expected, so this may be a DNS issue?



Any ideas where to start looking to solve this problem would be greatly appreciated.



Thanks,
Ciaran.




Edit:
We also have another public facing website (with a different IP address) on that same server and that site is accessible externally, so the problem might be related to the IP address of our site?

Wednesday, October 19, 2016

RAID setup for maximizing data retention and read speed

My goals are simple: maximize data retention safety, and maximize read speeds. My first instinct is to do a three-drive software RAID 1. I have only used fakeraid RAID 1 in the past and it was terrible (it would actually have led to data loss if it weren't for backups).



Would you say software raid 1 or a cheap actual hardware raid card? OS will be linux.



Could I start with a two drive raid 1 and add a third drive on the fly?



Can I hot swap?



Can I pull one of the drives and throw it into a new machine and be able to read all the data? I do not want a situation where a RAID card fails and I have to try to find the same chipset in order to read my data (which I am assuming can happen).




Please clarify any points on which it sounds like I have no idea what I am talking about, as I am admittedly inexperienced here. (My hardest lesson was fakeraid lol)



Thanks!



Edit:
OS will be Windows 7 for one machine, Linux for another. Is three disk hardware RAID 1 possible?

Tuesday, October 18, 2016

domain name system - Windows DNS Server 2008 R2 fallaciously returns SERVFAIL

I have a Windows 2008 R2 domain controller which is also a DNS server. When resolving certain TLDs, it returns a SERVFAIL:




$ dig bogus.              

; <<>> DiG 9.8.1 <<>> bogus.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31919
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:

;bogus. IN A


I get the same result for a real TLD like com. when querying the DC as shown above. Compare to a BIND server that is working as expected:



$ dig bogus. @128.59.59.70

; <<>> DiG 9.8.1 <<>> bogus. @128.59.59.70
;; global options: +cmd
;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 30141
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;bogus. IN A

;; AUTHORITY SECTION:
. 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012012501 1800 900 604800 86400

;; Query time: 18 msec

;; SERVER: 128.59.59.70#53(128.59.59.70)
;; WHEN: Wed Jan 25 14:09:14 2012
;; MSG SIZE rcvd: 98


Similarly, when I query my Windows DNS server with dig . any, I get a SERVFAIL but the BIND servers return the root zone as expected.



This sounds similar to the issue described in http://support.microsoft.com/kb/968372, except I am using two forwarders (128.59.59.70 from above as well as 128.59.62.10) and falling back to root hints, so the preconditions to expose that issue are not the same. Nevertheless, I also applied the MaxCacheTTL registry fix as described and restarted DNS, and then the whole server, but the problem persists. The problem occurs on all domain controllers in this domain and has been occurring for about half a year, even though the servers receive automatic Windows updates.
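
A couple of quick checks that might help separate a forwarder problem from a problem in the DNS server itself (a sketch; it assumes the dnscmd utility is present on the DC, and 128.59.59.70 is the forwarder quoted above):

 REM Ask the forwarder directly from the DC - if this answers, the forwarder is fine
 nslookup com. 128.59.59.70

 REM Flush the Windows DNS server's cache, then query the DC itself again
 dnscmd /clearcache
 nslookup com. 127.0.0.1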



EDIT




Here is a debug log. The client is 160.39.114.110, which is my workstation.



1/25/2012 2:16:01 PM 0E08 PACKET  000000001EA6BFD0 UDP Rcv 160.39.114.110  2e94   Q [0001   D   NOERROR] A      (5)bogus(0)
UDP question info at 000000001EA6BFD0
Socket = 508
Remote addr 160.39.114.110, port 49710
Time Query=1077016, Queued=0, Expire=0
Buf length = 0x0fa0 (4000)
Msg length = 0x0017 (23)

Message:
XID 0x2e94
Flags 0x0100
QR 0 (QUESTION)
OPCODE 0 (QUERY)
AA 0
TC 0
RD 1
RA 0
Z 0

CD 0
AD 0
RCODE 0 (NOERROR)
QCOUNT 1
ACOUNT 0
NSCOUNT 0
ARCOUNT 0
QUESTION SECTION:
Offset = 0x000c, RR count = 0
Name "(5)bogus(0)"

QTYPE A (1)
QCLASS 1
ANSWER SECTION:
empty
AUTHORITY SECTION:
empty
ADDITIONAL SECTION:
empty

1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Snd 160.39.114.110 2e94 R Q [8281 DR SERVFAIL] A (5)bogus(0)

UDP response info at 000000001EA6BFD0
Socket = 508
Remote addr 160.39.114.110, port 49710
Time Query=1077016, Queued=0, Expire=0
Buf length = 0x0fa0 (4000)
Msg length = 0x0017 (23)
Message:
XID 0x2e94
Flags 0x8182
QR 1 (RESPONSE)

OPCODE 0 (QUERY)
AA 0
TC 0
RD 1
RA 1
Z 0
CD 0
AD 0
RCODE 2 (SERVFAIL)
QCOUNT 1

ACOUNT 0
NSCOUNT 0
ARCOUNT 0
QUESTION SECTION:
Offset = 0x000c, RR count = 0
Name "(5)bogus(0)"
QTYPE A (1)
QCLASS 1
ANSWER SECTION:
empty

AUTHORITY SECTION:
empty
ADDITIONAL SECTION:
empty


Every option in the debug log box was checked except "filter by IP". By contrast, when I query, say, accounts.google.com, I can see the DNS server go out to its forwarder (128.59.59.70, for example). In this case, I didn't see any packets going out from my DNS server even though bogus. was not in the cache (the debug log was already running and this is the first time I queried this server for bogus. or any TLD). It just returned SERVFAIL without consulting any other DNS server, as in the Microsoft KB article linked above.

Sunday, October 16, 2016

Spam prevention tips for Postfix

Without using SpamAssassin or similar, what are your best tips for preventing spam?



Please try and provide config examples :D
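
As an illustration of the kind of config this usually involves (a sketch only; the restriction list and the Spamhaus RBL are common choices, not a recommendation for any specific setup), the relevant main.cf settings can be applied with postconf:

 # Require clients to send a HELO/EHLO before mail is accepted
 postconf -e "smtpd_helo_required = yes"

 # Typical recipient restrictions: relay control, sender domain sanity checks, one DNSBL
 postconf -e "smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_rbl_client zen.spamhaus.org"

 # Reload Postfix to pick up the changes
 postfix reload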

iis 7.5 - IIS 7.5 Web Server Farm with separate domain accounts

We have 4 servers running Server 2008 R2 x64 with IIS 7.5, and they're linked together as a Web Server Farm. Content and websites are being distributed to each server correctly, so when it comes to replication nothing further is required at this point.



My problem is that previously we had one domain account to access a centralized folder (which will now sit in the wwwroot folder so content can be copied across the other servers as well). I want to create a separate account for each server rather than a generic one, so that if one account fails it affects only that server rather than all of them.



Where can I specify, in the applicationHost.config file, that each domain account should access this folder only on a specific server? I don't want to break the farm since it's working properly, and therefore I don't want to experiment.
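
For what it's worth, when a virtual directory's content is accessed with a specific account, that account ends up in the userName/password attributes of the virtualDirectory element in applicationHost.config, and it can be set per server with appcmd (a sketch only; the site name and account are placeholders, and I'd test it on one node first):

 REM Run on each server with that server's own domain account
 %windir%\system32\inetsrv\appcmd.exe set vdir "Default Web Site/" -userName:DOMAIN\websvc01 -password:********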




Any help will be appreciated.



Thanks, Chris

ip - Remote desktop into multiple different workstations sharing a single DSL line



I have been asked to help set up a remote desktop solution for a small business that has about 10 workstations. All 10 workstations share a single Verizon DSL line with (presumably) a low-end Westell DSL modem. They do not currently have a static IP.




Each person would like the ability to remote desktop into their office machine from home, though in all likelihood no more than 1-2 people will be doing this at any given time.



What are the basic obstacles I will have to deal with? I presume not having a static IP is one problem that needs to be solved, but even with a single static IP, how will a remote connection find its way to the proper machine? Is there some routing software that can be employed here, or another method?



Links or suggestions much appreciated. I assume this problem has likely been faced by many folks before...


Answer



Use an external dynamic DNS service to associate your IP with a DNS name.



Then, you can port-forward different external ports to the same RDP port on different internal systems. This is not routing, this is just one of the many ways you can use the features of NAT.
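
For example (the names, addresses, and ports below are made up), the router's forwarding table might look like this, and each user then connects with the matching port:

 # Illustrative port-forward mapping on the DSL router/modem
 #   external 3390 -> 192.168.1.10:3389   (workstation 1)
 #   external 3391 -> 192.168.1.11:3389   (workstation 2)

 # From home, connect to the dynamic DNS name plus the assigned port
 mstsc /v:office.example.dyndns.org:3390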




The desktops, of course, will need to be on, and have static internal IP addresses, or else the port-forwarding won't work.


Saturday, October 15, 2016

apache 2.4 - Cannot get web root to be /var/www/html, despite setting it in apache2.conf and 000-default.conf

I'm new to Linux and trying to set up a basic web server. I'm currently a bit confused, as the document root when you visit the server in a browser appears to be /var/www/.



In both apache2.conf and 000-default.conf the DocumentRoot is set to /var/www/html, and I have restarted the apache2 service numerous times with no luck. I'm unsure as to what could be causing this - I have installed mod_security, but I don't think that should have any effect.



For reference, the current apache2.conf and 000-default.conf are below (I know some values are insanely high; I'll sort that out once I can get everything running).



Server IP: http://167.114.71.100/




As expected from apache2.conf, this gives a 403 Forbidden. http://167.114.71.100/html does work, however. Any ideas how I can make http://167.114.71.100/ serve /var/www/html as the document root?
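
A couple of things worth checking (a sketch; paths assume the standard Debian layout): which virtual host Apache actually serves by default, and whether anything else, for example the phpmyadmin include, sets a conflicting DocumentRoot or Alias.

 # Show the virtual hosts Apache has loaded and which one is the default
 apache2ctl -S

 # Look for any other DocumentRoot or Alias that could override the expected root
 grep -R "DocumentRoot\|Alias" /etc/apache2/ /etc/phpmyadmin/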



Thanks!



apache2.conf:



# This is the main Apache server configuration file.  It contains the
# configuration directives that give the server its instructions.
# See for detailed information about

# the directives and /usr/share/doc/apache2/README.Debian about Debian specific
# hints.
#
#
# Summary of how the Apache 2 configuration works in Debian:
# The Apache 2 web server configuration in Debian is quite different to
# upstream's suggested way to configure the web server. This is because Debian's
# default Apache2 installation attempts to make adding and removing modules,
# virtual hosts, and extra configuration directives as flexible as possible, in
# order to make automating the changes and administering the server as easy as

# possible.

# It is split into several files forming the configuration hierarchy outlined
# below, all located in the /etc/apache2/ directory:
#
# /etc/apache2/
# |-- apache2.conf
# | `-- ports.conf
# |-- mods-enabled
# | |-- *.load

# | `-- *.conf
# |-- conf-enabled
# | `-- *.conf
# `-- sites-enabled
# `-- *.conf
#
#
# * apache2.conf is the main configuration file (this file). It puts the pieces
# together by including all remaining configuration files when starting up the
# web server.

#
# * ports.conf is always included from the main configuration file. It is
# supposed to determine listening ports for incoming connections which can be
# customized anytime.
#
# * Configuration files in the mods-enabled/, conf-enabled/ and sites-enabled/
# directories contain particular configuration snippets which manage modules,
# global configuration fragments, or virtual host configurations,
# respectively.
#

# They are activated by symlinking available configuration files from their
# respective *-available/ counterparts. These should be managed by using our
# helpers a2enmod/a2dismod, a2ensite/a2dissite and a2enconf/a2disconf. See
# their respective man pages for detailed information.
#
# * The binary is called apache2. Due to the use of environment variables, in
# the default configuration, apache2 needs to be started/stopped with
# /etc/init.d/apache2 or apache2ctl. Calling /usr/bin/apache2 directly will not
# work with the default configuration.


Include /etc/phpmyadmin/apache.conf

# Global configuration
#

#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# NOTE! If you intend to place this on an NFS (or otherwise network)

# mounted filesystem then please read the Mutex documentation (available
# at );
# you will save yourself a lot of trouble.
#
# Do NOT add a slash at the end of the directory path.
#
#ServerRoot "/etc/apache2"

#
# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.

#
Mutex file:${APACHE_LOCK_DIR} default

#
# PidFile: The file in which the server should record its process
# identification number when it starts.
# This needs to be set in /etc/apache2/envvars
#
PidFile ${APACHE_PID_FILE}


#
# Timeout: The number of seconds before receives and sends time out.
#
Timeout 300

#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive On


#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100

#
# KeepAliveTimeout: Number of seconds to wait for the next request from the

# same client on the same connection.
#
KeepAliveTimeout 5


# These need to be set in /etc/apache2/envvars
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}

#

# HostnameLookups: Log the names of clients or just their IP addresses
# e.g., www.apache.org (on) or 204.62.129.132 (off).
# The default is off because it'd be overall better for the net if people
# had to knowingly turn this feature on, since enabling it means that
# each client request will result in AT LEAST one lookup request to the
# nameserver.
#
HostnameLookups Off

# ErrorLog: The location of the error log file.

# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog ${APACHE_LOG_DIR}/error.log

#
# LogLevel: Control the severity of messages logged to the error_log.
# Available values: trace8, ..., trace1, debug, info, notice, warn,

# error, crit, alert, emerg.
# It is also possible to configure the log level for particular modules, e.g.
# "LogLevel info ssl:warn"
#
LogLevel warn

# Include module configuration:
IncludeOptional mods-enabled/*.load
IncludeOptional mods-enabled/*.conf


# Include list of ports to listen on
Include ports.conf


# Sets the default security model of the Apache2 HTTPD server. It does
# not allow access to the root filesystem outside of /usr/share and /var/www.
# The former is used by web applications packaged in Debian,
# the latter may be used for local directories served by the web server. If
# your system is serving content from a sub-directory in /srv you must allow
# access here, or in any related virtual host.


<Directory />
    Options FollowSymLinks
    AllowOverride None
    Require all denied
</Directory>

<Directory /usr/share>
    AllowOverride None
    Require all granted
</Directory>

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>

#<Directory /srv/>
#   Options Indexes FollowSymLinks
#   AllowOverride None
#   Require all granted
#</Directory>





# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the AllowOverride
# directive.
#

AccessFileName .htaccess

#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#

<FilesMatch "^\.ht">
    Require all denied
</FilesMatch>




#
# The following directives define some format nicknames for use with
# a CustomLog directive.
#
# These deviate from the Common Log Format definitions in that they use %O
# (the actual bytes sent including headers) instead of %b (the size of the
# requested file), because the latter makes it impossible to detect partial
# requests.
#

# Note that the use of %{X-Forwarded-For}i instead of %h is not recommended.
# Use mod_remoteip instead.
#
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

# Include of directories ignores editors' and dpkg's backup files,

# see README.Debian for details.

# Include generic snippets of statements
IncludeOptional conf-enabled/*.conf

# Include the virtual host configurations:
IncludeOptional sites-enabled/*.conf


000-default.conf:





<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com


ServerAdmin webmaster@localhost
DocumentRoot /var/www/html

# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn


ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined

# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
</VirtualHost>


mysql INNODB inserts very slow



The database's schema is as follows.



 CREATE  TABLE  `items` (  
`id` mediumint( 8 ) unsigned NOT NULL AUTO_INCREMENT ,
`name` varchar( 45 ) NOT NULL ,
`main_type` tinyint( 4 ) NOT NULL ,

`rarity` tinyint( 4 ) NOT NULL ,
`stack_size` smallint( 6 ) NOT NULL ,
`sub_type` tinyint( 4 ) NOT NULL ,
`cost` mediumint( 8 ) unsigned NOT NULL ,
`ilvl` smallint( 6 ) unsigned NOT NULL DEFAULT '0',
`flavor_text` varchar( 250 ) NOT NULL ,
`rlvl` tinyint( 3 ) unsigned NOT NULL ,
`final` tinyint( 4 ) NOT NULL DEFAULT '0',
PRIMARY KEY ( `id` ) ) ENGINE = InnoDB DEFAULT CHARSET = ascii;



Now, doing an insert on this table takes 0.22 seconds. I don't know why it's taking so long to do a single-row insert. Reads are really fast, something like 0.005 seconds; using the example configuration from the MySQL dev site's InnoDB documentation, it averages ~0.002 to ~0.005 seconds. Why it takes more than 100x longer to do a single insert makes no sense to me. My computer is as follows: OS: Debian Sid x86-64, MySQL 5.1, RAM: 4GB DDR2, CPU: 2.0GHz dual core, HDD: 7200RPM, 32MB cache, 640GB.



Why an INSERT INTO items ...; takes almost 100x as long as a SELECT * FROM items; will never make any sense to me. It's still a small table at only 70 rows, and inserts took that long even when it had 0 rows.



Edit: Also, this table has quite a few other tables linked to it via the id. Several of them reference it with on update = cascade; on delete = cascade;. I believe that is the biggest issue here. If it is, I can probably go in and change it, and do individual deletes from the various little things when they are removed. The insert speed seems to be ~0.2 seconds whether I'm doing the insert on just items or also on another table that has a foreign key link to the main one.


Answer



Well, my first guess is that your InnoDB setup is probably broken. You can check whether there are any:




  • triggers that would do a slow operation on insert

  • processes going on that would lock the table

  • foreign keys/constraints pointing to this table



The best way to completely audit a database for anything that would cause such trouble is to read the schema dump produced by the mysqldump command.
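
A minimal sketch of that audit (the database name mydb is a placeholder): dump only the schema for review, and ask information_schema which tables have foreign keys pointing at items.

 # Dump the schema only (no data), including triggers, for inspection
 mysqldump --no-data --triggers mydb > mydb-schema.sql

 # List foreign keys that reference the items table
 mysql -e "SELECT table_name, constraint_name FROM information_schema.key_column_usage WHERE referenced_table_name = 'items' AND table_schema = 'mydb';"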


Anomaly while creating 128 subnets for a class C IP address



I am creating 128 subnets for an organisation which has a class C IP address, so I thought I would borrow 7 bits from the last octet of the IP address. That leaves me with a single "0" bit as the host part for each subnet, but then the number of valid hosts practically comes to zero.



If I choose a subnet mask of 255.255.255.254 for a class C address, then the total number of subnets I will have is 2^7 = 128, with 2 addresses per subnet and 2 - 2 = 0 valid hosts per subnet.




So my question is: what should we do if we want to have 128 subnets in our organisation? If I use the above method, I will have no valid IPs in my subnets.


Answer



When using 255.255.255.254 (31-bits) netmasks your "valid hosts per subnet" math sort of goes out the window here, because 31-bit masks are treated specially.



They're mostly used for point-to-point links, where there's no need for either a network address or a broadcast address, because each IP knows exactly where it's going to be sending all its traffic.



It's actually even got its own RFC (3021).


security - Apache 2.2 isn't obeying file level cgi permissions



I have two servers. Server 1 runs Apache 2.2 and mod_perl 2.0.4. Server 2 runs Apache 2.0 and mod_perl 1.99. They have nearly identical conf files. The perl section of the vhost looks like this:




<Location /perl>
    SetHandler perl-script
    PerlResponseHandler ModPerl::Registry
    Options +ExecCGI
</Location>




If I put a cgi script in the designated perl directory of Server 2 chmodded to 644, I can't access the file through the web browser. I get Forbidden as the error. That's the behavior I'd expect. I have to chmod it to 755 first.




However, if I put the same script in the directory for cgi scripts on Server 1 chmodded to 644, the server just executes it. It doesn't seem to care what the file's permissions are, only what the directory's are set to.



All files are owned by root (user and group), and Apache runs under a separate user. The directory is chmodded 755 and also belongs to root.



My question is, is there a way to make the behavior identical and is this a potential security risk on Server 1? Or is there a generally better way I should be doing this?


Answer



mod_perl isn't CGI, so neither +ExecCGI nor executable permissions actually matter for it. The reason you see different behaviour is that in version 1.999_02 the mod_perl developers changed their mind about the executable bit:



ModPerl::Registry no longer checks for -x bit (we don't executed
scripts anyway), and thus works on acl-based filesystems. Also

replaced the -r check with a proper error handling when the file is
read in. [Damon Buckwalter ]


From http://perl.apache.org/dist/mod_perl-2.0-current/Changes


ssl - nginx proxypath https redirect fails without trailing slash



I'm trying to setup Nginx to forward requests to several backend services using proxy_pass.



The links on the pages that lack trailing slashes do have https:// in front, but they get redirected to an http request with a trailing slash, which ends in connection refused. I only want these services to be available through https.



So if a link is to https://example.com/internal/errorlogs



then when loaded in a browser, https://example.com/internal/errorlogs gives Error Code 10061: Connection refused (it redirects to http://example.com/internal/errorlogs/).




If I manually append the trailing slash, https://example.com/internal/errorlogs/ loads.



I've tried appending trailing slashes in various combinations to the proxy_pass path and the location in proxy.conf, to no effect, and have also added server_name_in_redirect off;.



This happens with more than one app under nginx, and the same setup works under an Apache reverse proxy.



Config files;



proxy.conf




location /internal {
proxy_pass http://localhost:8081/internal;
include proxy.inc;
}
.... more entries ....


sites-enabled/main




server {
listen 443;

server_name example.com;
server_name_in_redirect off;

include proxy.conf;

ssl on;
}



proxy.inc



proxy_connect_timeout   59s;
proxy_send_timeout 600;
proxy_read_timeout 600;
proxy_buffer_size 64k;
proxy_buffers 16 32k;
proxy_pass_header Set-Cookie;

proxy_redirect off;
proxy_hide_header Vary;

proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

proxy_set_header Accept-Encoding '';
proxy_ignore_headers Cache-Control Expires;
proxy_set_header Referer $http_referer;
proxy_set_header Host $host;

proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-Proto https;


curl output




-$ curl -I -k https://example.com/internal/errorlogs/
HTTP/1.1 200 OK
Server: nginx/1.0.5
Date: Thu, 24 Nov 2011 23:32:07 GMT
Content-Type: text/html;charset=utf-8
Connection: keep-alive
Content-Length: 14327

-$ curl -I -k https://example.com/internal/errorlogs

HTTP/1.1 301 Moved Permanently
Server: nginx/1.0.5
Date: Thu, 24 Nov 2011 23:32:11 GMT
Content-Type: text/html;charset=utf-8
Connection: keep-alive
Content-Length: 127
Location: http://example.com/internal/errorlogs/

Answer



I saw you added the server_name_in_redirect directive, but you need the proxy_redirect directive in the location section.




http://wiki.nginx.org/HttpProxyModule#proxy_redirect



You would add something like this:



proxy_redirect http://example.com/ /;
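
Putting that together with the snippets above, the location block in proxy.conf would end up looking something like this (a sketch assembled from the question's own config):

 location /internal {
     proxy_pass http://localhost:8081/internal;
     proxy_redirect http://example.com/ /;
     include proxy.inc;
 }

With that in place, a Location: http://example.com/... header coming back from the backend is rewritten to a relative path, so the browser stays on https.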

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...