Wednesday, February 28, 2018

linux - How to prioritize SSH to work in the event of high CPU load

I'm encountering a weird problem where a Fedora Linux VPS server reports 100% CPU, and effectively becomes unusable, but I don't know why because the high load prevents me from SSHing into it to see what's wrong.




How do I prioritize or configure SSH so that I'm still able to connect even if some process is consuming all of the available CPU?
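A minimal sketch of one approach, assuming a systemd-based Fedora and that the contention really is CPU (not memory pressure or disk I/O): give sshd a higher scheduling priority via a unit override.

# as root:
systemctl edit sshd.service
#   [Service]
#   Nice=-10
#   CPUWeight=1000    # needs cgroup v2 / a reasonably recent systemd
systemctl restart sshd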

Tuesday, February 27, 2018

apache 2.2 - System Requirements of a write-heavy application serving hundreds of requests per second

NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers.




I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged in at the same time, each making at least 1 request every 10 seconds or so, for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (an average of 16 concurrent requests? But it could be higher given the short timeframe in which users will make these requests; crosses fingers to avoid 100 concurrent requests).



I expect two types of transactions, a local (not referring to a local network) and a foreign transaction. Local transactions basically download user data for their locality and cache it for 1 - 2 weeks. Attendance requests will probably be two numeric strings only: userid and eventid. Foreign transactions are for the attendance of those who do not belong to the current locality. These will pass the following data instead: (numeric) locality_id, (string) full_name. Both requests are done in Ajax so no HTML data is included, only JSON. Both types of requests expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there's only a few bytes of difference anyway in the sizes of these transactions.



As of this moment the userid may only reach 6 digits and eventids are 4- to 5-digit integers too. I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to grow by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week).



For me, this sounds big. But is it really that big? Or can this be handled by a single server (not sure about the server specs yet since I'll probably avail of a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave setups. But I wonder if they're really necessary. The users table will add around 500 to 1k rows a week.



If this can't be handled by a single server, then if I am to choose a MySQL replication topology, what would be the best setup for this case? Sorry if I sound vague or the question is too broad; I just don't know what to ask or what you need to know at this point.

Saturday, February 24, 2018

domain name system - How to set up mx records



So I bought a new domain name from GoDaddy and pointed the domain to my VPS IP Address which works fine.



[Please note, the VPS is not hosted with GoDaddy, just the domain name.]



However, I noticed that all incoming mail from external servers (Gmail, Yahoo Mail, other domains, etc.) keeps bouncing back, even though outgoing emails (from my domain to other mail servers) work fine.




After googling the issue, it seems that I need to make changes to my domain's MX records in order to be able to send/receive mail.



In GoDaddy, the mx records are as follows:



10  @   mailstore1.secureserver.net 1 Hour  
0 @ smtp.secureserver.net 1 Hour


What I have tried so far:





  1. So as instructed by online tutorials and forums, I created an A Record mail.shillong.work and pointed it to my VPS IP Address.


  2. After that, I added this line to the list of MX Records:



    1 @ mail.shillong.work 1 hour




So now it looks like this:



10  @   mailstore1.secureserver.net 1 Hour  

0 @ smtp.secureserver.net 1 Hour
1 @ mail.shillong.work 1 Hour


However, I still can't send anything to any email hosted in my server.



What am I doing wrong?


Answer



The problem seems to be that you have other servers than your mail server listed in your MX record, one of which has a lower priority. MX records work on a lowest-priority-first basis, which means the internet is first of all trying to send mail for your domain to smtp.secureserver.net., which doesn't seem to know anything about your domain. At this point, delivery fails fatally, and there's an end of it; your server, being listed at second priority, will never get tried.




The only mail servers you should list in your MX records are those that either (a) are prepared to accept email for your domain, for final delivery, or (b) are prepared to accept email for your domain with a view to delivering it on to a final delivery server, and are specifically configured to do so.



If you overhaul the MX record for shillong.work to something like



shillong.work.          3600    IN      MX      10 mail.shillong.work.


and remove all other MX records, then wait an hour (for the 3600s TTL to expire), the internet should start delivering your inbound mail to your server.
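Once the zone change is live, the result can be double-checked with dig (domain taken from the question):

dig +short MX shillong.work
# expected output: 10 mail.shillong.work.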


virtual machines - ZFS: very big files + compression + snapshots



I back up several virtual disks (total = around 4 TB), with several weeks of retention time.



I use 4 x 4 TB disks in the computer dedicated to primary backup. The filesystem is ZFS RAIDZ2, so 8 TB usable.
A secondary backup of 4 x 2 TB disks (4 TB usable) is in a separate building, storing last Sunday's backup.




I manage the retention by doing snapshots: after each backup a snapshot is created on the primary backup filesystem, and snapshots older than 90 days are deleted. The amount of modified data is less than 4 TB over 90 days, so everything is okay (in fact I keep the last 30 days + 9 previous weeks + 10 previous months, but this is not the point).



On the secondary backup I have only one backup. I plan to implement retention too.
I first thought of upgrading to 4 x 4 TB disks (because of lack of space, I can't upgrade to 6 x 2 TB) and doing snapshots as on the primary backup.



Instead of upgrading hardware, what if I use ZFS compression + snapshots on the secondary backup?
Compression will leave, say, 600 GB free. Then snapshots will give a retention of several days.



The saved virtual disks are updated with rsync, so only small parts are modified. So I think only small parts are "transmitted" to snapshots. But I don't find any source confirming this will work as I think.



Question: using ZFS on Linux with compression, will very big files with scattered modifications be snapshotted efficiently?



Answer



You should be using ZFS compression (with compression=lz4) by default these days. There's no good reason not to use it, except if you know that your data is not compressible.



Snapshots on compressed ZFS filesystems are still efficient and work with replication and/or rsync.
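A minimal sketch of turning it on (the pool/dataset name is hypothetical); note that compression only applies to blocks written after the property is set:

zfs set compression=lz4 backup/vdisks
zfs get compressratio backup/vdisks   # shows how much space is actually being saved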


Friday, February 23, 2018

apache 2.2 - PHP fopen fails - does not have permission to open file in write mode

I have an Apache 2.2.17 server running on Fedora 13. I want to be able to create a file in a directory.




I cannot do that. Whenever I try to open a file with PHP for writing, fopen(..., 'w'), it tells me that I don't have permission to do that.



So I checked the httpd.conf file in /etc/httpd/conf/. It says user apache, group apache. So I changed ownership (chown -R apache:apache .*) of my whole /www directory to apache:apache. I also ran chmod -R 777 *



Apart from knowing how terribly dangerous this is, it actually still gives me the same error, even though I even allow public write!

ZFS snapshots and atomic updates

So I'm tinkering around with ZFS on Linux and zrep. I've got 2 VMs on my laptop and I'm running zrep to synchronize the contents of one filesystem to another.



One unexpected situation is this: If I'm on the "slave" -- the box that's receiving the data -- and I'm continuously reading the contents of a file (such as with sum), if the file is rapidly changing on the master I will get an Input/output error on the slave as the snapshot is getting applied. This does not happen if I'm continuously reading a file that isn't changing in the snapshot.



To be clear -- the "sum" program or any other standard userland program that is reading the changing file on the target filesystem will periodically get an Input/output error and crash.



The ZFS replication itself works correctly -- zrep is just a nice tool for managing the replication process.



I'm a little confused by this behavior -- will reads of files that get updated when a ZFS snapshot is applied cause read errors, or is this a bug in ZFS on Linux?

Thursday, February 22, 2018

MySQL server stops randomly. Is it possible that system kills it during high loads or low available memory?











I have an Ubuntu webserver (Apache + MySQL + PHP) on a very small machine on Amazon Web Services (EC2 micro instance). The website runs fine, very fast, so our light traffic doesn't seem to slow the server at all.



Anyway, MySQL randomly goes down quite often (at least once a week) and I can't figure out why. Apache, meanwhile, keeps running fine. I have to log in via SSH and restart MySQL, and then everything runs fine again:



$ sudo service mysql status
mysql stop/waiting
$ sudo service mysql start
mysql start/running, process 25384



I've installed Cacti for performance monitoring, and I can see that every time MySQL goes down, I have a single high peak in load average (up to 10, when it is normally lower than 1). This is strange because it doesn't seem to occur during cronjobs or so.



I also tried to inspect the MySQL logs: the slow query log (which is enabled, I'm sure), /var/log/mysql.log and /var/log/mysql.err are all empty. I thought that maybe the system automatically shut it down because of low available memory; is that possible?



Now I'm trying to set up a bigger EC2 instance, but I just found something that looks critical (but that I can't understand) in /var/log/syslog. I pasted the relevant part here (MySQL went down at 11:47).


Answer



Yeah, it seems that your box ran out of free RAM, and the kernel killed MySQL to protect system stability. Try an instance with more RAM!
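If you want to confirm this from the logs rather than guess, the OOM killer leaves traces; a quick, hedged check (log path assumes a standard Ubuntu install):

dmesg | grep -i 'killed process'
grep -i 'out of memory' /var/log/syslog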


Tuesday, February 20, 2018

Apache load balancer limits with Tomcat over AJP



I have Apache acting as a load balancer in front of 3 Tomcat servers. Occasionally, Apache returns 503 responses, which I would like to eliminate completely. None of the 4 servers is under significant load in terms of CPU, memory, or disk, so I am a little unsure what is reaching its limits or why. 503s are returned when all workers are in error state - whatever that means. Here are the details:



Apache config:




StartServers 30

MinSpareServers 30
MaxSpareServers 60
MaxClients 200
MaxRequestsPerChild 1000


...


AddDefaultCharset Off

Order deny,allow
Allow from all


# Tomcat HA cluster
<Proxy balancer://mycluster>
    BalancerMember ajp://10.176.201.9:8009 keepalive=On retry=1 timeout=1 ping=1
    BalancerMember ajp://10.176.201.10:8009 keepalive=On retry=1 timeout=1 ping=1
    BalancerMember ajp://10.176.219.168:8009 keepalive=On retry=1 timeout=1 ping=1
</Proxy>



# Passes thru track. or api.
ProxyPreserveHost On
ProxyStatus On

# Original tracker
ProxyPass /m balancer://mycluster/m
ProxyPassReverse /m balancer://mycluster/m



Tomcat config:











<Connector ... connectionTimeout="20000"
           redirectPort="8443" />

<Host ... unpackWARs="true" autoDeploy="true"
      xmlValidation="false" xmlNamespaceAware="false">






Apache error log:




[Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.10:8009 (10.176.201.10) failed
[Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.10)
[Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.10

[Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.201.9:8009 (10.176.201.9) failed
[Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.201.9)
[Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.201.9
[Mon Mar 22 18:39:47 2010] [error] (70007)The timeout specified has expired: proxy: AJP: attempt to connect to 10.176.219.168:8009 (10.176.219.168) failed
[Mon Mar 22 18:39:47 2010] [error] ap_proxy_connect_backend disabling worker for (10.176.219.168)
[Mon Mar 22 18:39:47 2010] [error] proxy: AJP: failed to make connection to backend: 10.176.219.168
[Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
[Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
[Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
[Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state

[Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state
[Mon Mar 22 18:39:47 2010] [error] proxy: BALANCER: (balancer://mycluster). All workers are in error state


Load balancer top info:




top - 23:44:11 up 210 days, 4:32, 1 user, load average: 0.10, 0.11, 0.09
Tasks: 135 total, 2 running, 133 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.2%id, 0.1%wa, 0.0%hi, 0.1%si, 0.3%st

Mem: 524508k total, 517132k used, 7376k free, 9124k buffers
Swap: 1048568k total, 352k used, 1048216k free, 334720k cached


Tomcat top info:




top - 23:47:12 up 210 days, 3:07, 1 user, load average: 0.02, 0.04, 0.00
Tasks: 63 total, 1 running, 62 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 99.8%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st

Mem: 2097372k total, 2080888k used, 16484k free, 21464k buffers
Swap: 4194296k total, 380k used, 4193916k free, 1520912k cached


Catalina.out does not have any error messages in it.



According to Apache's server status, it seems to be maxing out at 143 requests/sec. I believe the servers can handle substantially more load than they are, so any hints about low default limits or other reasons why this setup would be maxing out would be greatly appreciated.


Answer



The solution to this problem is pretty simple:




Add to the ProxyPass / BalancerMember configuration:



BalancerMember ajp://10.176.201.9:8009 keepalive=On ttl=60



Add to Tomcat's server.xml:



<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" connectionTimeout="60000" />



After these changes everything should work fine :-)


KVM Host: Hardware RAID with cache + BBU vs. Linux Software RAID with LVM writeback cache and UPS




I want to set up a RAID 5 with 3x1TB drives on a Linux KVM-Host. The RAID will be used as LVM thin storage for VM disks.



There has already been much discussion on hardware raid vs software [1]. According to this discussion one should not use software raid but hardware raid with cache and BBU for VM disk storage because of the better write performance.



What I would like to know is if the following setup would be comparable to a hardware raid with cache and BBU (e.g. HP P410 512 MB + BBU) in terms of read/write performance and data safety:




  • Linux Software RAID / mdadm RAID 5

  • LVM writeback cache on a 512 MB ram disk


  • Host backed by UPS to prevent data loss like the BBU on hw raid



[1] Software vs hardware RAID performance and cache usage


Answer



None of the above! You really need to look at ZFS on Linux.



http://zfsonlinux.org



https://www.reddit.com/r/zfs/comments/514k2r/kvm_zfs_best_practices/




Perfect discussion, tons of links.
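To make this concrete, a minimal sketch of what that could look like for the 3 x 1 TB drives in the question (device names, pool name and volume size are assumptions, not a definitive layout):

zpool create -o ashift=12 vmpool raidz1 /dev/sda /dev/sdb /dev/sdc
zfs set compression=lz4 vmpool
zfs create -s -V 100G vmpool/vm1-disk0   # sparse zvol handed to KVM as a virtual disk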


Monday, February 19, 2018

linux - permissions on upload folder not working



I have a php script which uploads images to a folder.



I have these permissions on the upload folder:



  drwxrwxr--  4 user user   4096 2010-06-02 16:20 temp_images


Shouldn't these permissions be enough for files to be uploaded to the folder?




But this doesn't work.



It only works when I set the permissions to 777.



"user" is added to the www-data group, still no luck.



Any ideas why?


Answer



Your folder is owned by the user and group "user". If Apache is running as a different account, perhaps www-data, then Apache will not be able to write there. Adding the "user" account to the www-data group would mean that "user" is permitted to write in folders that the www-data group owns and that are set group-writable. If you want Apache to write to a folder that the group "user" owns, the Apache service account must be a member of the "user" group.
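Assuming Apache really does run as www-data (check with ps), a minimal sketch of that group-based fix; group membership only takes effect after the service is restarted:

usermod -a -G user www-data      # add the Apache account to the folder's group
service apache2 restart          # or: service httpd restart, depending on the distro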



Software RAID 10 on Linux



For a long time, I've been thinking about switching to RAID 10 on a few servers. Now that Ubuntu 10.04 LTS is live, it's time for an upgrade. The servers I'm using are HP Proliant ML115 (very good value). It has four internal 3.5" slots. I'm currently using one drive for the system and a RAID5 array (software) for the remaining three disks.



The problem is that this creates a single point of failure on the boot drive. Hence I'd like to switch to a RAID10 array, as it would give me both better I/O performance and more reliability. The problem is only that good controller cards that support RAID10 (such as 3ware) cost almost as much as the server itself. Moreover, software RAID10 does not seem to work very well with GRUB.



What is your advice? Should I just keep running RAID5? Has anyone been able to successfully install software RAID10 without boot issues?



Answer



I would be inclined to go for RAID10 in this instance, unless you needed the extra space offered by the single+RAID5 arrangement. You get the same guaranteed redundancy (any one drive can fail and the array will survive) and slightly better redundancy in worse cases (RAID10 can survive 4 of the 6 "two drives failed at once" scenarios), and don't have the write penalty often experienced with RAID5.



You are likely to have trouble booting off RAID10, either implemented as a traditional nested array (two RAID1s in a RAID0) or using Linux's recent all-in-one RAID10 driver, as both LILO and GRUB expect to have all the information needed to boot on one drive, which it may not be with RAID0 or 10 (or software RAID5 for that matter; it works with hardware RAID because the boot loader only sees one drive and the controller deals with where the data is actually spread amongst the drives).



There is an easy way around this though: just have a small partition (128MB should be more than enough - you only need room for a few kernel images and associated initrd files) at the beginning of each of the drives and set these up as a RAID1 array which is mounted as /boot. You just need to make sure that the boot loader is correctly installed on each drive, and all will work fine (once the kernel and initrd are loaded, they will cope with finding the main array and dealing with it properly).
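A minimal sketch of that layout with mdadm (partition names are assumptions; adjust to your disks):

mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[abcd]1   # small partitions -> RAID1 for /boot
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2   # rest of each disk -> RAID10
# install the boot loader on every member so any surviving disk can boot:
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do grub-install "$d"; done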



The software RAID10 driver has a number of options for tweaking block layout that can bring further performance benefits depending on your I/O load pattern (see here for some simple benchmarks), though I'm not aware of any distributions that support this form of RAID 10 at install time yet (only the more traditional nested arrangement). If you want to try the RAID10 driver, and your distro doesn't support it at install time, you could install the entire base system into a RAID1 array as described for /boot above and build the RAID10 array with the rest of the disk space once booted into that.


Is there any lower bound on DHCP lease time?

Currently I am facing an issue changing the value of the DHCP lease time option on the server and configuring the client with the same value. I have installed the DHCP server package and have put the following entry in the server's /etc/dhcp/dhcpd.conf file for the default lease time.



default-lease-time 60;



However, when I start the DHCP client, it still gets 300 seconds as its lease time. I have tried deleting the /var/lib/dhcp/dhclient.leases file on the client side and restarting the DHCP server, but it did not help. It always gets 300 seconds as its lease time.




What do you think may be the possible cause of this? Do you think there is any lower bound on the DHCP lease time option value?
N.B. I am aware that setting a DHCP lease as low as 60 seconds does not make much sense, since a client then has to refresh its lease at most every 60 seconds and this increases network traffic. But I was experimenting with different configuration parameters and would appreciate it if someone could tell me whether it is possible to set a DHCP lease time as low as 60 seconds. If not, why not?
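One possible explanation, offered as an assumption rather than a diagnosis: with ISC dhcpd, default-lease-time only applies when the client does not ask for a specific lease length, while max-lease-time caps what a requesting client can get. A minimal /etc/dhcp/dhcpd.conf sketch setting both (subnet values are made up):

default-lease-time 60;
max-lease-time 60;
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
}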

Sunday, February 18, 2018

linux - TCP listen on any IPv6 in a block on Debian

I have a /64 block of IPv6 addresses, and I'd like to be able to start a TCP server listening on any one of them. Currently I can bind to any static IP address, but not any others. If I try to bind to an address not statically routed (by the way, I'm not sure if I'm using the right terms), I get an error message, "bind: cannot assign requested address".



Here's from ifconfig:



eth0      Link encap:Ethernet  HWaddr 56:00:00:60:af:c6
inet addr:104.238.191.172 Bcast:104.238.191.255 Mask:255.255.254.0
inet6 addr: fe80::5400:ff:fe60:afc6/64 Scope:Link
inet6 addr: 2001:19f0:6801:187:5400:ff:fe60:afc6/64 Scope:Global

inet6 addr: 2001:19f0:6801:187:ea1e:eb99:13ae:d49a/128 Scope:Global
UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
RX packets:1526389 errors:0 dropped:0 overruns:0 frame:0
TX packets:1622562 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:280302410 (267.3 MiB) TX bytes:266740313 (254.3 MiB)


If I try to bind to e.g. 2001:19f0:6801:187:a:b:c:d, it fails with "bind: cannot assign requested address". But if I do ip -6 addr add dev eth0 2001:19f0:6801:187:a:b:c:d, then I'm able to start the server listening on that IP address.




How can I configure Linux so that I can listen on any 2001:19f0:6801:187::/64 address? That is, I want to bind to some specific IP address without having to ip addr add it first.



Or should I just have my server ip addr add an address before binding, then maybe ip addr del it when I'm done?



Addendum: In case my problem isn't clear, I've already gotten far enough that the whole prefix is getting routed to my server so that it can, for example, respond to pings for any address with that prefix. If I start a TCP listening on "[::]:80", it will respond to requests to any IP address. What I want is to be able to listen on a specific IP address so that only requests addressed to that IP address will hit the server. Other ServerFault questions linked to are asking how to get a server to respond to requests to any IP address. I've already gotten that far. I want to be able to bind to any arbitrary address, but only one specific one. The problem is that Linux won't even let me start the server on a specific address if that particular address isn't statically assigned to an interface, and I'd like to work around that.



To be even more concrete, I can presently run a TCP echo server on my VPS on all interfaces:



ncat -l 2000 --keep-open --exec "/bin/cat"



Then from my laptop, I can connect to it using any random IPv6 address:



telnet 2001:19f0:6801:187:: 2000
telnet 2001:19f0:6801:187:abc:def:: 2000
telnet 2001:19f0:6801:187:abc:def:123:0 2000


This all works. If I start the server on a statically assigned address, it also works:




ncat -l 2001:19f0:6801:187:5400:ff:fe60:afc6 2000 --keep-open --exec "/bin/cat"


Now I can only connect to that particular IP address. So far so good.



Now I'd like to be able to start a server on some random address:



ncat -l 2001:19f0:6801:187:abc:123:: 2000 --keep-open --exec "/bin/cat"



But I get an error: "Ncat: bind to 2001:19f0:6801:187:abc:123:::2000: Cannot assign requested address. QUITTING." But if I ip -6 addr add dev eth0 2001:19f0:6801:187:abc:123::/64, then the previous ncat command works, and the server starts and only responds to connections to 2001:19f0:6801:187:abc:123::.



So can I configure Linux to let me start a server on any arbitrary address in my block without first adding it as a static address?



(Actually, my question is very similar to https://stackoverflow.com/questions/40198619/can-docker-automatically-add-ip-addresses-to-the-host-upon-running-container, although there they're talking about IPv4. There the answer is to statically add all addresses.)

DDoS Attacks & Convictions




I could probably make a better title; edit it if you find a better way of phrasing my problem.

Basically what's happened is that a gameserver host thinks I keep attacking their dedicated server with a DDoS attack; but I do not.



I have a theory that someone is faking their IP so it matches up with mine, and is launching attacks with it. I am worried that this is the case, and am having a hard time convincing the owner of the gameservers that it's not me attacking his servers.



How plausible is this theory?



I also have a connection with only 64 kbps upload; this is nowhere near enough to bring the dedicated server's network down.
I would not do such a thing, but if I were to launch a full-scale DDoS attack from my network, what effect would it have (if any) on the target dedicated server?



Edit




The server in question is not mine, but I know its sysadmin and can tell you the specs: 16 cores (dual CPU) Intel Xeon, 32GB RAM, 8TB HDD space. The sysadmin claims the attack crashed some of the gameservers running on the server.



This question has nothing to do with my other question, which is about testing my software's handling of a DDoS.
http://i.stack.imgur.com/2uUol.jpg


Answer



That theory is plausible. For some types of DDoS attack (such as SYN floods) it is normal for all the source IPs to be spoofed and for there to be hundreds of thousands or millions of them. Yours could have been included by accident.



Two other plausible theories:





  1. There was a DDoS against your server that was not using spoofed IPs and an infected machine on your network was part of the botnet delivering this DDoS.

  2. Your hosting provider did a simple count of connections to your server and saw your IP address at the top of the list. They concluded that the IPs with the highest number of connections were causing the DDoS. This is probably an erroneous conclusion.



64 kbps of upload would probably have little effect on a server, but this is dependent on many factors, including what type of DoS attack it is and the specs of your server, the applications running on it and its internet connection. It is certainly possible to DoS very powerful servers with a dial-up connection if it's the right type of DoS (Slowloris and the old Ping of Death spring to mind).



Ask your hosting provider for the evidence of the DDoS and how they collected the evidence.







Based on my reading of the related thread in that gaming forum, someone is seeing a lot of UDP traffic from your IP address. UDP is easily spoofable (no response is required) so that's not reliable evidence that it was in fact you.



But it's also clear that you are not a professional sysadmin acting in a professional capacity. As such this question is off-topic.


Saturday, February 17, 2018

iis 8.5 - Slow IIS performance after upgrade from 7.0 to 8.5

A website was running on an IIS 7.0 / Windows 2008 server. A new server was set up, running IIS 8.5 / Windows 2012, with more powerful hardware (4 CPU cores). However, performance on the new server is dramatically worse.
The application is classic ASP. What I noticed is that VBScript-intensive code totally blocks other requests! For example, I have a page that loops over ~100,000 records; it takes about 20 seconds to do that. During that period other requests, even for static resources, are left waiting.
Unfortunately, the server is managed by someone else and I have no access to logs or performance monitors.
What could be the source of the problem?

postfix - Effective configuration of dkimproxy in multiple-domain scenario




I have a postfix/dkimproxy setup that doesn't work the way I like.



I have exampledomain.org with SPF allowing mail only from server.exampledomain.org (rDNS mapped correctly) which is also aliased by smtp.exampledomain.org.



Currently, web applications running on the server use Postfix's builtin sendmail command when sending outbound emails. These emails come from wwwrun@server.exampledomain.org and they are properly DKIM-signed. That is correct!



When a user with @exampledomain.org (me!!) sends mail from Outlook it connects to smtp.exampledomain.org and authenticates after STARTTLS command. Unfortunately, emails are not DKIM signed. Logs show that the email is automatically relayed and doesn't go through dkimproxy. dkimproxy is configured as follows



# specify what address/port DKIMproxy should listen on
listen 127.0.0.1:10027


# specify what address/port DKIMproxy forwards mail to
relay 127.0.0.1:10028

# specify what domains DKIMproxy can sign for (comma-separated, no spaces)
domain server.exampledomain.org,exampledomain.org

# specify what signatures to add
signature dkim(c=simple)
signature domainkeys(c=nofws)


# specify location of the private key
keyfile /etc/ssl/private/dkim_server/dkim_server.key


# specify the selector (i.e. the name of the key record put in DNS)
selector server


DNS TXT records are already set.




Postfix is configured with a large master.cf file that I won't paste in its entirety. The relevant lines are



#
# modify the default submission service to specify a content filter
# and restrict it to local clients and SASL authenticated clients only
#
submission inet n - n - - smtpd
-o smtpd_etrn_restrictions=reject
-o smtpd_sasl_auth_enable=yes

-o content_filter=dksign:[127.0.0.1]:10027
-o receive_override_options=no_address_mappings
-o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject

#
# specify the location of the DomainKeys signing filter
#
dksign unix - - n - 10 smtp
-o smtp_send_xforward_command=yes
-o smtp_discard_ehlo_keywords=8bitmime,starttls


#
# service for accepting messages FROM the DomainKeys signing filter
#
127.0.0.1:10028 inet n - n - 10 smtpd
-o content_filter=
-o receive_override_options=no_unknown_recipient_checks,no_header_body_checks
-o smtpd_helo_restrictions=
-o smtpd_client_restrictions=
-o smtpd_sender_restrictions=

-o smtpd_recipient_restrictions=permit_mynetworks,reject
-o mynetworks=127.0.0.0/8
-o smtpd_authorized_xforward_hosts=127.0.0.0/8


The question is



Why doesn't mail coming from external clients get processed by dkimproxy?


Answer



You need to make sure that Outlook is connecting to the submission port (port 587), instead of port 25. This is because the Postfix configuration works by signing mail received on port 587 (i.e. from your clients sending outgoing mail), but not mail received on port 25 (because this is mail being delivered to your server by other MTAs). This is implemented by the content_filter line in master.cf, which you'll note is present in the submission inet definition, but not the smtp inet definition.
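A quick, hedged way to check that the submission service is reachable and offers STARTTLS/AUTH before changing the Outlook account settings (hostname taken from the question):

openssl s_client -starttls smtp -connect smtp.exampledomain.org:587
# then point Outlook's outgoing server at port 587 with STARTTLS instead of port 25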



ssd - Hard drive write operation expected rates

It is known that SSDs are suboptimal for "high-write" environments. What common business and/or personal computing use cases are "high-write" situations, which are "low-write" situations, and what are the relative frequencies of these situations (e.g. "of all the hard drives in all the world, half of all bits are written to X times in Y days, but a quarter of all bits are.... etc etc).



I realize that this information may vary wildly depending on all manner of things. If your data is limited to a particular use case that is fine, please share.



The more concrete and thorough the numbers the better. Citations, please, if applicable.

Friday, February 16, 2018

linux - Is it possible to get write/read permissions back after removing *all* read/write/execute permissions with chmod?

I accidentally removed all the permissions from a file. Now I don't have permission to chmod it. Is there any way of chmod(ing) the file back?




Thank you
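A minimal sketch, assuming you still own the file (or can become root): chmod checks file ownership, not the current permission bits, so the owner can always restore them. The filename is hypothetical:

chmod u+rw somefile        # as the file's owner
sudo chmod 644 somefile    # or as root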

networking - Can't access a URL with custom port from inside the LAN, but can from outside

I need to access a page on my dev server via the internet. Since my ISP gives me a dynamic IP, I set up this POC scenario:





  • I'm using NO-IP to translate the URL to the actual IP.

  • I set up a SERVA64 portable server, with a plain HTML sample page, on
    port 8055.

  • I set up a FileZilla Server on port 21.

  • I turned my firewall off, to minimize access problems.

  • I configured my router (D-Link DI-524) to forward port 8055 to
    my dev server.




Since I can't upload images yet, you can see this question in my Stack Overflow question here: https://stackoverflow.com/questions/9915133/strange-portforward-behavior



All set, I tried to access the test page using the URL: when I used my 3G modem (red path), I could reach the page, but when I used my LAN (blue path), I could only reach the page using the internal IP/name.
The interesting thing: when I access the FileZilla service, I can connect both ways!



Added: I ran SmartSniff to capture the UDP/TCP traffic, and in both requests the behavior is exactly the same: there is a UDP call to resolve the DNS (Google, 8.8.8.8), and a TCP call to the public IP of my server. The call made to the FileZilla server works OK; the call made to the Serva64 web server can't reach the sample page.

Thursday, February 15, 2018

domain name system - How could dynamic DNS work if DNS updates take hours to propagate?




Simple Failover markets itself as:




continuously monitors your servers to find out which are up and which
are down, and then it dynamically updates your DNS records accordingly
so that your domain name always points to a functional server.




From what I know, updating DNS records can take hours to days to propagate. As such, even if they dynamically update my server's DNS records, my users would still have to wait a few hours before they would see any change right?




If so, how could "Simple Failover" work?


Answer



DNS record lifetimes are based on the TTL (Time To Live) of the record itself. If the TTL is 1 hour then theoretically that's the maximum amount of time a DNS resolver will cache that information before it performs a new lookup for the record. Typically this would only affect DNS resolvers that already have the information in their resolver cache. Any resolver that doesn't have the information in its cache will perform a lookup and get the updated/new information immediately; since the information is not in its cache, there's no waiting for the TTL to expire.



Others are bound to warn you that some DNS servers don't honor TTLs, and that certainly is a possibility. I prefer to work from the assumption that all DNS servers will honor the TTL and I'll deal with any edge cases that come up. If you start worrying about what some DNS servers may or may not do then you'll get all wrapped up in trying to troubleshoot DNS problems that aren't actually your problem. If someone else's DNS server doesn't honor my TTL then that's their problem, not mine.



As an aside: DNS is a pull technology, not a push technology. DNS records don't get propagated, as is commonly stated (or misstated). The only name servers that hold a copy of your DNS zones (and the records in those zones) are your name servers. When you make a change to your DNS, that change does not get pushed anywhere. Other DNS servers and/or resolvers may have one or more of your DNS records cached, but when the TTL expires they'll pull the updated/new information the next time they perform a lookup of that particular DNS record.
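If you want to watch this in practice, the remaining TTL a resolver is holding for a record can be seen with dig (the name here is hypothetical):

dig example.com A +noall +answer           # the second column is the TTL counting down
dig @8.8.8.8 example.com A +noall +answer  # compare against a specific public resolver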


apache 2.2 - Can't make virtual host working




I have to create a virtual host on a server which previously hosted a single website (domain name). Now I'm trying to add a second domain to this server (using the same nameserver). What I've done so far:



Initially there was no virtual host so I've made one for the second domain:



NameVirtualHost *:80

<VirtualHost *:80>
    DocumentRoot /var/www/bla
    ServerName www.blabla.com
    ServerAlias blabla.com

    Order deny,allow
    Allow from all
    AllowOverride All
</VirtualHost>




Because nothing happened, I changed the DocumentRoot of the Apache server to /var/www (initially it was the document root of the first website, /var/www/html) and created a virtual host for the first domain too:



   

<VirtualHost *:80>
    DocumentRoot /var/www/html
    ServerName www.first.com
    ServerAlias first.com

    Order deny,allow
    Allow from all
    AllowOverride All
</VirtualHost>





In this case, first.com is working OK, but blabla.com is not.
When I ping blabla.com I get an "unknown host" response. What am I doing wrong? Do I have to modify something in the DNS settings too? Thank you.


Answer



Yes, if ping isn't resolving the name, then you'll need to configure DNS for that domain (or a hosts file locally, if you're just trying to test) to point to your Apache server.
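For a quick test before (or instead of) touching DNS, you can point the client machine at the server with a hosts entry; the IP below is a placeholder for your Apache server's address:

203.0.113.10   blabla.com www.blabla.com   # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)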


Tuesday, February 13, 2018

linux - openvpn multiple instances route issue?

I am trying to connect from the same PC to an OpenVPN server running two instances. I have a server with multiple IPs running two OpenVPN server instances, and I am trying to connect to both instances from one PC at the same time.



I can connect to them separately; however, when I try to connect to both at the same time, the first instance connects fine, but on the second instance I get this error:




Thu Dec 22 05:27:04 2011 /usr/sbin/ip link set dev tun0 up mtu 1500
Thu Dec 22 05:27:04 2011 /usr/sbin/ip addr add dev tun0 local 10.10.0.5 peer 10.10.0.6

Thu Dec 22 05:27:04 2011 /usr/sbin/ip route add 184.75.xxx.xxx/32 via 10.0.0.1
RTNETLINK answers: File exists
Thu Dec 22 05:27:04 2011 ERROR: Linux route add command failed: external program exited with error status: 2
Thu Dec 22 05:27:04 2011 /usr/sbin/ip route add 0.0.0.0/1 via 10.10.0.6
RTNETLINK answers: File exists
Thu Dec 22 05:27:04 2011 ERROR: Linux route add command failed: external program exited with error status: 2
Thu Dec 22 05:27:04 2011 /usr/sbin/ip route add 128.0.0.0/1 via 10.10.0.6
RTNETLINK answers: File exists
Thu Dec 22 05:27:04 2011 ERROR: Linux route add command failed: external program exited with error status: 2



Server A config




port 1190
proto udp
dev tun1
ca /etc/openvpn/ca.crt
cert /etc/openvpn/serverA.crt
key /etc/openvpn/serverA.key

dh /etc/openvpn/dh1024.pem

server 10.3.0.0 255.255.255.0
ifconfig-pool-persist 10.3.0.0-ipp.txt

--mode server
--tls-server
client-config-dir /etc/openvpn/ccd.d
route 10.3.0.0 255.255.255.252


push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"
keepalive 10 120
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status 10.3.0.0-openvpn-status.log

verb 3


Server B config




port 1191
proto udp
dev tun0
ca /etc/openvpn/ca.crt

cert /etc/openvpn/serverB.crt
key /etc/openvpn/serverB.key
dh /etc/openvpn/dh1024.pem
server 10.10.0.0 255.255.255.0
ifconfig-pool-persist 10.10.0.0-ipp.txt
--mode server
--tls-server
client-config-dir /etc/openvpn/ccd.d
route 10.10.0.0 255.255.255.252


push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"
keepalive 10 120
comp-lzo
user nobody
group nobody
persist-key
persist-tun
status 10.10.0.0-openvpn-status.log

verb 3


Client A config




client
dev tun1
proto udp
remote 184.75.xxx.xxx 1190

resolv-retry infinite
nobind
persist-key
persist-tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/client1.crt
key /etc/openvpn/client1.key
ns-cert-type server
comp-lzo
verb 3

--script-security 2


Client B config




client
dev tun0
proto udp
remote 184.75.xxx.xxx 1191

resolv-retry infinite
nobind
persist-key
persist-tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/client2.crt
key /etc/openvpn/client2.key
ns-cert-type server
comp-lzo
verb 3

--script-security 2


Any help would be much appreciated.

Monday, February 12, 2018

Is there a way to forward a port based on subdomain?











Basically I want to have something like this:



name1.mydomain.com:1234 -> my.internal.ip.address:10001
name2.mydomain.com:1234 -> my.internal.ip.address:10002
name3.mydomain.com:1234 -> my.internal.ip.address:10003
name4.mydomain.com:1234 -> my.internal.ip.address:10004
name5.mydomain.com:1234 -> another.internal.ip.address:10001
name6.mydomain.com:1234 -> another.internal.ip.address:10002



Can be at the router level, internal dns server level or even some other machine on the local network running some app that just passes traffic on to the proper machine on the proper port.



More clarification: it is not HTTP traffic, but our own custom protocol (our own client/server application using Remoting in .NET)



Answer



OK, let's clear up some confusion here...



First up, there's no explicit requirement in your question that all of those names resolve to the same IP address -- so, you can assign a block of addresses to your router device, have the DNS records setup to provide a one-to-one mapping of name to IP address, and then use DNAT (Destination Network Address Translation) to forward the traffic on to internal devices.



I will continue on the assumption that you don't have the ability to throw a pile of IP addresses at the problem.



In general, for an arbitrary protocol running inside of TCP or UDP (because other protocols that run on top of IP don't necessarily have any concept of ports), you cannot do what you want to do, because there is no guarantee that there is any information inside the traffic "stream" to allow such routing to take place. Certain protocols, in an attempt to get around this very problem, do embed name information in their protocol (such as HTTP, with the Host header), and for those protocols there are typically proxies that will receive a request, determine the name that was presented, and then route the request to an appropriate location. Some of those proxies have been mentioned in other answers, and if those do not suffice you will no doubt receive appropriate answers if you tell us what layer 7 protocol you are attempting to proxy.
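To make the HTTP case concrete, a minimal sketch of name-based routing with nginx (the hostnames follow the question's examples; the backend addresses are assumptions); this only works because HTTP carries the name in the Host header:

server {
    listen 1234;
    server_name name1.mydomain.com;
    location / { proxy_pass http://10.0.0.11:10001; }   # "my.internal.ip.address"
}
server {
    listen 1234;
    server_name name2.mydomain.com;
    location / { proxy_pass http://10.0.0.11:10002; }
}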



However, the vast majority of protocols do no name-based identification of their intended destination, and for those you have no option but to use IP addresses to control the flow of traffic to different internal endpoints.




EDIT: If you're defining your own protocol, it should be possible to embed the name of the host you're connecting to inside it somewhere, and then you'll just have to write your own proxy (possibly as a plugin to some existing piece of software) to take those requests, map them to the correct backend, and pass them through.


nameserver - Is it required to register a name server with GLUE records?

I have a few domains and want to resolve their DNS records with my own name server.
Let's say I have a DNS server with 2 fixed IP addresses and a domain name mydnsservers.net.



I'd like to have 2 nameserver subdomains for my other domains.





  • ns1.mydnsservers.net > 81.250.18.12

  • ns2.mydnsservers.net > 81.250.18.13





Can I just use a third party DNS (e.g. AWS Route 53) for mydnsservers.net and setup two A-records like this?




ns1. A 81.250.18.12
ns2. A 81.250.18.13




Or is it mandatory to use my own DNS server for mydnsservers.net and configure GLUE records at the TLD registry?




I know that the first option works in some cases, but my new registry gives an error when trying to use ns1.mydnsservers.net for one of the domains because it's not registered as a nameserver (it doesn't have glue records).



Any help would be much appreciated!
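A quick, hedged way to see what the .net registry currently publishes for the nameserver host (names taken from the question; any of the gTLD servers for .net will do):

dig ns1.mydnsservers.net A @a.gtld-servers.net +norecurse
dig mydnsservers.net NS  @a.gtld-servers.net +norecurse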

Apache with prefork model using 1000 MB memory per process

I am currently working on a site which uses Apache running on the prefork memory model. The following is the configuration from httpd.conf




 
StartServers 30
MinSpareServers 15
MaxSpareServers 30
MaxClients 96
ServerLimit 512
MaxRequestsPerChild 0




The following is a sample line from top



PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           
29261 apache 15 0 1003m 231m 53m S 16.3 2.9 1:47.68 httpd


The following are the loaded apache modules



core prefork http_core mod_so mod_auth_basic mod_authn_file 

mod_authz_host mod_authz_user mod_include mod_log_config
mod_logio mod_env mod_ext_filter mod_mime_magic mod_expires
mod_deflate mod_headers mod_usertrack mod_setenvif mod_mime
mod_status mod_autoindex mod_info mod_vhost_alias mod_negotiation
mod_dir mod_actions mod_alias mod_rewrite mod_cgi mod_version
mod_realip2 mod_php5 mod_ssl


I am not sure if all of these modules are used.




The following are the php extensions loaded



date, libxml, openssl, pcre, zlib, bz2, calendar, ctype, 
curl, hash, filter, ftp, gettext, gmp, session, iconv,
posix, Reflection, standard, shmop, SimpleXML, SPL, sockets,
exif, sysvmsg, sysvsem, sysvshm, tokenizer, wddx, xml,
apache2handler, memcache, uploadprogress, dbase, dom,
eAccelerator, gd, json, mbstring, mcrypt, memcached, mongo,
mysql, mysqli, newrelic, PDO, pdo_mysql, pdo_sqlite, xmlreader,
xmlwriter, xsl, zip



Why would Apache be using so much memory per process? Is it because of these modules? If so, are there memory hogs in there which I can start looking at to see if they are being used? Or could it be because of the PHP extensions? Any memory hogs in there?



The php memory limit is set to 256MB.



Eaccelerator is configured with 512MB memory.



The server is not able to handle even slightly above-average loads, as swap usage starts as soon as traffic increases, making the system unresponsive. The server has a total of 8GB of RAM and it is a dedicated quad-core server.
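As a rough, hedged check: the 1003m in the top output is VIRT, which includes shared libraries and eAccelerator's shared memory mapped into every child; the resident size (RES/RSS) is usually what matters when sizing MaxClients. Something like this shows it per worker and in total:

ps -ylC httpd --sort=rss
ps -ylC httpd --sort=rss | awk '{sum += $8} END {print "total RSS (MB):", sum/1024}'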




Thanks in advance for any help in solving this problem.

Sunday, February 11, 2018

hyper v - Where should SCCM be installed in a small system?



Server 2012 R2 host running all VM servers to support a small IDE



After collating guidance regarding SCEP I've decided to go ahead and install SCCM even though I only initially need SCEP. (I figure I'll start a slow learning process to get up to speed on SCCM.)



But in the meantime, I need to install it somewhere. I figured wiser heads might advise what to do and what to avoid.



While the following post addressed some other issues, the discussion seemed pretty absolute about not having the host be a DC. So I'm guessing that means it is not advisable to put SCCM on the host either.




Should I still have a physical DC, even post-Server 2012?



So if SCCM is going to be on one of my VMs, is it OK to have it on the VM DC? Or is there some overriding reason that it should be on its own VM? Are there startup timing issues, or the like? I have a small system, just the one host, and I don't want to use up licenses too quickly.



Thanks.


Answer



OK. It sounds like you are working with a pretty small environment. You might want to reconsider whether or not SCCM is an appropriately sized toolset. Take a look at my answer to Is SCCM overkill for medium-sized organizations? and give it some thought. You might be happier with Windows Intune or a smaller, less complex, less featureful endpoint management system.




I'm guessing that means it is not advisable to put SCCM on the host either.





Correctomundo! See the below reasoning which I pulled directly from the Windows Server 2012 Hyper-V Best Practices which I recommend you review along with Aidan Finn's Recommended Practices For Hyper-V.






Do not install any other Roles on a host besides the Hyper-V role and
the Remote Desktop Services roles (if VDI will be used on the host).




When the Hyper-V role is installed, the host OS becomes the "Parent Partition" (a quasi-virtual machine), and the Hypervisor
partition is placed between the parent partition and the hardware. As
a result, it is not recommended to install additional (non-Hyper-V
and/or VDI related) roles





You want your Hyper-V Host to be as clean and as simply configured as possible. It is highly recommended to not install other applications or roles onto your Hyper-V host, especially one as complex as ConfigMgr.







is it OK to have [SCCM] on the VM DC?




Nope! SCCM is a complex and somewhat fidgety application. In order to install it you will need a whole bunch of prerequisites, not limited to IIS, Reporting Services, MS SQL, and WSUS. For such a small site you would be co-mingling these services and site roles on a single server, your domain controller, which also happens to run a complex and somewhat fidgety application. I highly recommend you do not do this.



Take a look at can domain controllers also serve other functions?. It used to be fairly common to deploy a single physical server that had ADDS, DNS, DHCP, File and Print Roles all co-mingled. However, with the prevalence and low cost of virtualization in the Microsoft ecosystem it is becoming more common to deploy your domain controllers in single-purpose virtual machines to avoid problems and isolate them if they occur.



As an aside, note I said "domain controllers". You will want at least two Domain Controllers, one of which is a physical standalone machine if you plan on clustering your Hyper-V hosts. You should always have two domain controllers (see: Risks of having only one domain controller?). Furthermore you should pay particular attention to the caveats of running virtualized domain controllers, especially things like cloning and time synchronization.





I don't want to use up licenses too quickly




Yep. I understand that, but please consider some of the technical limitations and dangers you might find yourself facing down the road. A datacenter license of Windows Server looks mighty affordable if SCCM has exploded your site's only domain controller.


Websocket on dreamhost shared hosting



I would like to build a small webapp with some i/o communication with the server via ajax.



Since I'm on a dreamhost shared server I already know that I cannot use node.js to build my app. But now, reading around the internet, I'm starting to suspect that I can also forget about websockets!!!



Am I right??




Thanks


Answer



According to this post you have to upgrade your DH server from shared to VPS or Dedicated to use WebSocket.


Yahoo marked my mail as spam and says domainkey fails

Yahoo is marking our mail as spam. We are using the PHP Zend Framework to send the mail.
The mail header says that the DomainKeys check failed.



Authentication-Results: mta160.mail.in.yahoo.com from=mydomain.com; domainkeys=fail (bad sig);
from=mydomain.com; dkim=pass (ok)



We configured our SMTP server (the same server used to send mail from the Zend Framework) in Outlook and sent the mail to Yahoo. This time Yahoo says domainkeys passes.




Authentication-Results: mta185.mail.in.yahoo.com from=speedgreet.com; domainkeys=pass (ok);
from=speedgreet.com; dkim=pass (ok)



The DomainKeys signature is added to the mail header on our server, which is used by both the Outlook client and the PHP client. Yahoo recognizes the mail sent from Outlook but does not recognize the mail sent from the PHP client.
As far as I know, signing the email is done on the server side with the help of the domain key. PHP and Outlook use the same server to sign the mail, so why is Yahoo handling them differently? What am I missing here? Any ideas? Can anyone help me?

apache 2.2 - Active Directory problems while trying to perfom compare operation

I have CentOS 5.5 with Apache 2.2 and SVN installed. I also have Windows 2003 R2 with Active Directory.
I'm trying to authorize users via AD so that each user has access to a repo if he is a member of the corresponding group in AD.
Here is my Apache config:




LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule authz_svn_module modules/mod_authz_svn.so
LDAPVerifyServerCert off
ServerName svn.mydomain.com
DocumentRoot /var/www/svn.mydomain.com/htdocs

RewriteEngine On
<Location />
AuthType basic
AuthBasicProvider ldap
AuthzLDAPAuthoritative on
AuthLDAPURL ldaps://comp1.mydomain.com:636/DC=mydomain,DC=com?sAMAccountName?sub?(objectClass=*)
AuthLDAPBindDN binduser@mydomain.com
AuthLDAPBindPassword binduserpassword
</Location>
<Location /repos/test>

DAV svn
SVNPath /var/svn/repos/test
AuthName "SVN repository for test"
Require ldap-group CN=test,CN=ProjectGroups,DC=mydomain,DC=com
</Location>


When I'm using "Require valid-user" everything goes fine, and "Require ldap-user" also works.
But as soon as I use "Require ldap-group", authorization fails.
There are no errors in the Apache logs, but Active Directory shows the following error:





Event Type: Information
Event Source: NTDS LDAP
Event Category: LDAP Interface
Event ID: 1138
Date: 10/9/2010
Time: 1:28:52 PM
User: MYDOMAIN\binduser
Computer: COMP1
Description:
Internal event: Function ldap_compare entered.


Event Type: Error
Event Source: NTDS General
Event Category: Internal Processing
Event ID: 1481
Date: 10/9/2010
Time: 1:28:52 PM
User: MYDOMAIN\binduser
Computer: COMP1
Description:

Internal error: The operation on the object failed.

Additional Data
Error value:
2 0000208D: NameErr: DSID-031001CD, problem 2001
(NO_OBJECT), data 0, best match of:
'DC=mydomain,DC=com'


I'm confused by this problem. What am I doing wrong?

Saturday, February 10, 2018

amazon ec2 - Subdomain using AWS Route 53, load balancer, EC2, Apache

I've tried to follow a few different tutorials on this, but can't quite get it to work.




I have all of my DNS for my domain (example.com) in Route 53. Works fine.



My top level domain (A record) points to my load balancer (AWS) as an alias. This points to an EC2 server and works fine.



I want to add a subdomain (client.example.com), but I'm not quite sure where to add this and what type of records I need. I want this to point to a directory on the same EC2 server as my top level domain (which would have the path example.com/client). I don't want it to redirect, just serve the files from this directory.



Not sure if I need to create a new hosted zone or not, what to put in the zone, what to point it at, and if I need to modify anything on my server (like a RewriteRule).



Any direction would be appreciated.
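A minimal sketch of one way to do it, assuming the subdomain should resolve to the same load balancer and be served by the same Apache instance (the DocumentRoot path is an assumption):

# 1) In the existing example.com hosted zone in Route 53 (no new zone needed), add:
#    client.example.com   A (Alias)  ->  the same ELB that example.com points at
# 2) On the EC2 server, add a name-based virtual host for the directory:
<VirtualHost *:80>
    ServerName   client.example.com
    DocumentRoot /var/www/example.com/client
</VirtualHost>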

filesystems - linux file system allowing check while in use?



Is there a file system for Linux where it's possible (and safe) to perform a file system check while it's mounted read-write, or which does not need to be checked periodically?



E.g. a file system whose check first 'grabs' the entire file system and then releases those parts which have been checked already.




(I'm looking for file systems with capabilities like ext2 or better, i.e. something I could use as a replacement for a root or /home file system on a PC)


Answer




Is there a file system for linux where it's possible (and safe) to perform a file system check while it's mounted read write




Don't run fsck on a mounted file system. fsck on a mounted filesystem can cause data corruption.




E.g. a file system whose check first 'grabs' the entire file system and then releases those parts which have been checked already.





btrfs:




  • Online filesystem check

  • Very fast offline filesystem check

  • Checksums on data and metadata (multiple algorithms available)


amazon ec2 - SSL termination point for AWS EC2? ELB or NginX or..?



I have a number of standalone Java web applications that currently run on different ports and URLs. I would like to expose all these apps behind a single port (443) and map the different public URLs to the individual internal URL/port. I am thinking clients hit Nginx as a reverse proxy.



I also need these apps to be accessible only via SSL and plan on everything in an AWS VPC with the SSL terminating at the AWS ELB before hitting the reverse proxy.



This seems like a pretty standard stack. Is there any reason not to do this? Any reason I should terminate the SSL at the reverse proxy (Nginx or other) instead of the AWS ELB?



thanks



Answer



In some setups there are security aspects to consider when deciding where to terminate SSL:




  • Which node do you want to trust with your certificate?

  • How much communication will happen behind the SSL termination point and thus remain unprotected?



You also have to consider technical aspects about what is possible, and what is not:





  • A load balancer that does not terminate SSL cannot insert X-Forwarded-For headers. Thus the backend will not know the client IP address unless you use DSR based load balancing.

  • A frontend that does not terminate SSL cannot dispatch to different backends depending on domain name unless the client supports SNI.
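For the setup described in the question (SSL terminated at the ELB, Nginx behind it speaking plain HTTP), a minimal Nginx sketch; the port, VPC range and backends are assumptions, and the real_ip directives require the ngx_http_realip_module:

server {
    listen 8080;                        # the ELB forwards its decrypted traffic here
    set_real_ip_from 10.0.0.0/16;       # the VPC range the ELB connects from
    real_ip_header  X-Forwarded-For;    # recover the original client address
    location /app1/ { proxy_pass http://127.0.0.1:8081/; }
    location /app2/ { proxy_pass http://127.0.0.1:8082/; }
}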


OpenVPN connects but gets invalid IP on Tap Device

I have a WinXP SP2 box trying to connect to an OpenVPN server and getting the following errors:





Tue May 14 11:29:52 2013 Notified TAP-Windows driver to set a DHCP IP/netmask of
192.168.5.6/255.255.248.0 on interface {48B4760C-5A76-4F9E-9140-FB73DF819E2A} [DHCP-serv: 192.168.0.0, lease-time: 31536000]
Tue May 14 11:29:52 2013 Successful ARP Flush on interface [2] {48B4760C-5A76-4F9E-9140-FB73DF819E2A}
Tue May 14 11:29:57 2013 TEST ROUTES: 0/0 succeeded len=1 ret=0 a=0 u/d=down
Tue May 14 11:29:57 2013 Route: Waiting for TUN/TAP interface to come up...
Tue May 14 11:30:00 2013 TEST ROUTES: 0/0 succeeded len=1 ret=0 a=0 u/d=down
Tue May 14 11:30:00 2013 Route: Waiting for TUN/TAP interface to come up...
Tue May 14 11:30:01 2013 TEST ROUTES: 0/0 succeeded len=1 ret=0 a=0 u/d=down



Firewall and antivirus are already turned off.



Here's my client config:




client
float
dev tap
proto udp

remote xxxx 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca endian-001.pem
auth-user-pass
comp-lzo
verb 3

hp - Non-ECC memory on Proliant DL385 g5



Is it possible to use non-ECC memory in the HP ProLiant DL385 G5? The server is a test system, so error checking is less important.



Answer



The processors in the DL385 G5 do not require ECC RAM, but they do require registered RAM (sometimes called "buffered" RAM). Finding non-ECC registered RAM is going to be difficult and expensive.



While it might technically work, it's going to cost much more than just using ECC Registered RAM.


Friday, February 9, 2018

monitoring - Cacti - central server to check status

Hey, is there a way to set up one server for Cacti and have all the other servers send their data to it? I would have thought that the poller sends all the info to the central server, including the RRD files etc. Is there a way to do this?



Is there maybe a simple, similar tool to do this if Cacti does not have that capability? I would like to check CPU load, disk space, RAM usage, process count, and logged-in users... nice to have would be ping and disk I/O.



I would love a simple setup and a client I can run on all my Debian machines...

Linux missing disk space



I have a KVM VPS with strange disk usage:



# df -h
Filesystem Size Used Avail Use% Mounted on

/dev/sdb 493G 1.2G 466G 1% /
tmpfs 4.0G 0 4.0G 0% /dev/shm
/dev/sda1 96M 41M 51M 45% /boot


# du -sh /
du: cannot access `/proc/1633/task/1633/fd/4': No such file or directory
du: cannot access `/proc/1633/task/1633/fdinfo/4': No such file or directory
du: cannot access `/proc/1633/fd/4': No such file or directory
du: cannot access `/proc/1633/fdinfo/4': No such file or directory
1021M /



How can this be? Where did the ~20G of free space go?


Answer



I guess you are talking about the discrepancy between 493 GB total space, 1.2 GB used, and 466 GB available. This is most likely the usual 5% of disk space that is reserved for root and not generally available; 5% of 493 GB is roughly 25 GB, which accounts for the gap.



To check for this, please add the output of at least the Reserved block count from



tune2fs -l /dev/sdb 



to your question (provided you use ext3 or ext4 as the filesystem).
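
For reference, a quick way to inspect (and, if you want, shrink) the root reserve on an ext3/ext4 filesystem is sketched below; the device comes from the question and the 1% figure is only an example:

# Show total and reserved block counts
tune2fs -l /dev/sdb | grep -i 'block count'

# Optionally lower the root reserve from the default 5% to 1%
tune2fs -m 1 /dev/sdb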


Thursday, February 8, 2018

domain name system - Query between two DNS servers



I’ve been struggling with understanding DNS with BIND9.



I’ve used two machines, A and B, connected by a LAN and not for global use.



A is running a name server managing the “example” domain, and a web server named “www”.



B is running a name server managing the “sub.example” domain (a subdomain delegated from A), and a web server named “www” too.




The configuration files are below.



"named.conf" for "example" at machine A.



options {
directory "C:\dns\etc";
recursion yes;
version "XXX DNS Server 1.0X";

};

logging {
channel my_file {
file "c:\dns\etc\named.run" versions 5 size 1m;
severity debug 0;
print-category yes;
print-severity yes;
print-time yes;
};
category default {my_file;};
category queries {my_file;};

category lame-servers {my_file;};
category config {my_file;};
};


zone "." {
type hint;
file "named.root";
};


zone "localhost" {
type master;
file "localhost/fwd";
allow-update { none; };

};
zone "0.0.127.in-addr.arpa" {
type master;
file "localhost/rev";
allow-update { none; };

};
zone "example" {
type master;
file "example/fwd";
allow-update { none; };

};
zone "72.11.16.172.in-addr.arpa" {
type master;
file "example/rev";

allow-update { none; };
};


"zone file" for "example" at machine A.



$TTL 1H
@ 1H IN SOA example. postmaster.example. (
200508291 ; Serial
15M ; Refresh

5M ; Retry
1D ; Expire
15M) ; TTL

IN NS ns.example.
IN A 172.16.11.72
ns IN A 172.16.11.72
www IN A 172.16.11.72

sub IN NS ns.sub.example.

ns.sub.example. IN A 172.16.10.37


"named.conf" for "example" at machine B.



options {
directory "C:\dns\etc";
recursion yes;
version "unknown";
allow-transfer {172.16.11.72; };


};
logging {
channel my_file {
file "c:\dns\etc\named.run" versions 5 size 1m;
severity debug 0;
print-category yes;
print-severity yes;
print-time yes;
};

category default {my_file;};
category queries {my_file;};
category lame-servers {my_file;};
category config {my_file;};
};



zone "." {
type hint;

file "named.root";
};

zone "localhost" {
type master;
file "localhost/fwd";
allow-update { none; };

};
zone "0.0.127.in-addr.arpa" {

type master;
file "localhost/rev";
allow-update { none; };
};
zone "sub.example" {
type master;
file "example/fwd";
allow-update { none; };
};
zone "37.10.16.172.in-addr.arpa" {

type master;
file "example/rev";
allow-update { none; };
};
zone "example" {
type forward;
forward only;
forwarders {
172.16.11.72;
};

};


"zone file" for "sub.example" at machine B.



$TTL 1H
@ 1H IN SOA sub.example. postmaster.sub.example. (
200508291 ; Serial
15M ; Refresh
5M ; Retry

1D ; Expire
15M) ; TTL

IN NS ns.sub.example.
IN A 172.16.10.37
ns IN A 172.16.10.37
www IN A 172.16.10.37


Now I have four servers on two machines, as shown below.




"ns.example"  and "www.example"   in machine A.

"ns.sub.example" and "www.sub.example" in machine B.


I can resolve “www.example” from A and “www.sub.example” from B.



But I can’t resolve “www.sub.example” from A and “www.example” from B.




The messages the dig command shows, and those written to BIND's log, are at the bottom.



Both A and B respond with "SERVFAIL" or "connection timed out; no servers could be reached", but there are no error messages in BIND's log.



Actually, they are Windows 2008 servers, and I've changed the Windows firewall filters to accept UDP port 53 from each other.



Strangely, there are no DROP messages, or even ALLOW messages, in the firewall logs on both A and B.



I mean, if I dig "www.example" from A I can see an ALLOW message, but if I dig "www.sub.example" I see neither an ALLOW nor a DROP message.




I think I first have to determine whether this problem is caused by BIND or by the Windows firewall.



What do I have to do first?



For example, I guess the DNS query never reaches the other machine's name server; that would explain why dig shows "no servers could be reached".



How can I check whether the DNS query is being sent, and find out why if it is not?



dig at machine A.




C:\dns\bin>dig www.sub.example

; <<>> DiG 9.9.2-P1 <<>> www.sub.example
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 1777
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:
;www.sub.example. IN A

;; Query time: 0 msec
;; SERVER: 172.16.11.72#53(172.16.11.72)
;; WHEN: Wed Mar 13 08:42:04 2013
;; MSG SIZE rcvd: 44

C:\dns\bin>dig @172.16.10.37 www.sub.example. a


; <<>> DiG 9.9.2-P1 <<>> @172.16.10.37 www.sub.example. a
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached


dig at machine B.



C:\dns\bin>dig www.example


; <<>> DiG 9.9.2-P1 <<>> www.example
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 39790
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:

;www.example. IN A

;; Query time: 4015 msec
;; SERVER: 172.16.10.37#53(172.16.10.37)
;; WHEN: Wed Mar 13 09:40:31 2013
;; MSG SIZE rcvd: 40

C:\dns\bin>dig @172.16.11.72 www.example. a

; <<>> DiG 9.9.2-P1 <<>> @172.16.11.72 www.example

; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached


BIND's log at A.



13-Mar-2013 14:43:22.624 general: info: managed-keys-zone: loaded serial 0
13-Mar-2013 14:43:22.624 general: info: zone 72.11.16.172.in-addr.arpa/IN: loaded serial 200508291
13-Mar-2013 14:43:22.624 general: info: zone 0.0.127.in-addr.arpa/IN: loaded serial 200508291

13-Mar-2013 14:43:22.624 general: info: zone example/IN: loaded serial 200508291
13-Mar-2013 14:43:22.624 general: info: zone localhost/IN: loaded serial 200508291
13-Mar-2013 14:43:22.624 general: notice: all zones loaded
13-Mar-2013 14:43:22.624 general: notice: running
13-Mar-2013 14:43:22.624 notify: info: zone example/IN: sending notifies (serial 200508291)
13-Mar-2013 14:43:22.624 notify: info: zone 72.11.16.172.in-addr.arpa/IN: sending notifies (serial 200508291)
13-Mar-2013 14:44:34.515 queries: info: client 172.16.11.72#58221 (www.sub.example): query: www.sub.example IN A +E (172.16.11.72)
13-Mar-2013 14:44:39.515 queries: info: client 172.16.11.72#58221 (www.sub.example): query: www.sub.example IN A +E (172.16.11.72)
13-Mar-2013 14:44:44.515 queries: info: client 172.16.11.72#58221 (www.sub.example): query: www.sub.example IN A +E (172.16.11.72)



BIND's log at B



13-Mar-2013 14:38:27.281 general: info: managed-keys-zone: loaded serial 0
13-Mar-2013 14:38:27.281 general: info: zone 0.0.127.in-addr.arpa/IN: loaded serial 200508291
13-Mar-2013 14:38:27.281 general: info: zone 37.10.16.172.in-addr.arpa/IN: loaded serial 200508291
13-Mar-2013 14:38:27.281 general: info: zone sub.example/IN: loaded serial 200508291
13-Mar-2013 14:38:27.281 general: info: zone localhost/IN: loaded serial 200508291
13-Mar-2013 14:38:27.296 general: notice: all zones loaded
13-Mar-2013 14:38:27.296 general: notice: running

13-Mar-2013 14:38:27.296 notify: info: zone sub.example/IN: sending notifies (serial 200508291)
13-Mar-2013 14:38:27.296 notify: info: zone 37.10.16.172.in-addr.arpa/IN: sending notifies (serial 200508291)
13-Mar-2013 14:46:08.984 queries: info: client 172.16.10.37#58326 (www.sub.example): query: www.sub.example IN A +E (172.16.10.37)
13-Mar-2013 14:46:11.250 queries: info: client 172.16.10.37#58330 (www.example): query: www.example IN A +E (172.16.10.37)
13-Mar-2013 14:46:17.250 queries: info: client 172.16.10.37#58330 (www.example): query: www.example IN A +E (172.16.10.37)

Answer



There are a couple of things I can think to suggest.



The first is that the return code you are getting in the failed digs is SERVFAIL. There can be several reasons for that, but one that you want to rule out first is that something is preventing name queries between machines. I recognize that you say you have turned off the Windows firewall rule for port 53 UDP, but I would suggest you demonstrate (to yourself, at least) that Machine B can do a "dig @machine-a www.example. a" and get the answer you expect. Then check that Machine A can query the server at Machine B.




Apart from that, it would be very helpful to see your named.conf and any messages named is logging. Showing us named.conf would allow us to check that your zones are specified correctly (most of the RRs in your zone files are relative to the zone origin so it's important to see how the zone is loaded so we can see what the origin really is..) and will help us figure out if either of the machines, or both, are supposed to be performing recursion.



Please provide more information; it will make it much easier to determine what is going on.



EDIT:



The dig output you have provided certainly makes it sound like the servers are not receiving requests from other machines:



; <<>> DiG 9.9.2-P1 <<>> @172.16.10.37 www.sub.example. a

; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached


and



; <<>> DiG 9.9.2-P1 <<>> @172.16.11.72 www.example
; (1 server found)
;; global options: +cmd

;; connection timed out; no servers could be reached


that would seem significant, wouldn't you agree?
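
If the Windows firewall does turn out to be blocking the queries, a rule along these lines on each server would explicitly allow inbound DNS (a sketch using standard netsh syntax; the rule names are arbitrary):

netsh advfirewall firewall add rule name="BIND DNS UDP in" dir=in action=allow protocol=UDP localport=53
netsh advfirewall firewall add rule name="BIND DNS TCP in" dir=in action=allow protocol=TCP localport=53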


linux - Cannot install grub to RAID1 (md0)

I have a RAID1 array on my Ubuntu 12.04 LTS system, and my /dev/sda HDD was replaced several days ago. I used these commands to replace it:



# go to superuser
sudo bash
# see RAID state
mdadm -Q -D /dev/md0

# State should be "clean, degraded"
# remove broken disk from RAID
mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1
# see partitions
fdisk -l
# shutdown computer
shutdown now
# physically replace old disk by new
# start system again

# see partitions
fdisk -l
# copy partitions from sdb to sda
sfdisk -d /dev/sdb | sfdisk /dev/sda
# recreate id for sda
sfdisk --change-id /dev/sda 1 fd
# add sda1 to RAID
mdadm /dev/md0 --add /dev/sda1
# see RAID state
mdadm -Q -D /dev/md0

# State should be "clean, degraded, recovering"
# to see status you can use
cat /proc/mdstat


This is my mdadm output after the sync:



/dev/md0:
Version : 0.90
Creation Time : Wed Feb 17 16:18:25 2010

Raid Level : raid1
Array Size : 470455360 (448.66 GiB 481.75 GB)
Used Dev Size : 470455360 (448.66 GiB 481.75 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Thu Nov 1 15:19:31 2012
State : clean

Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11
Events : 0.11049560

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1

1 8 17 1 active sync /dev/sdb1


After the rebuild completed, "fdisk -l" says that I do not have a valid partition table on /dev/md0.
This is my fdisk -l output:



Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00057d19

Device Boot Start End Blocks Id System
/dev/sda1 * 63 940910984 470455461 fd Linux raid autodetect
/dev/sda2 940910985 976768064 17928540 5 Extended
/dev/sda5 940911048 976768064 17928508+ 82 Linux swap / Solaris

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors

Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000667ca

Device Boot Start End Blocks Id System
/dev/sdb1 * 63 940910984 470455461 fd Linux raid autodetect
/dev/sdb2 940910985 976768064 17928540 5 Extended
/dev/sdb5 940911048 976768064 17928508+ 82 Linux swap / Solaris


Disk /dev/md0: 481.7 GB, 481746288640 bytes
2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table



This is my grub install output:



root@answe:~# grub-install /dev/sda
/usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition labels or both partition label and filesystem. This is not supported yet..
/usr/sbin/grub-setup: error: embedding is not possible, but this is required for cross-disk install.
root@answe:~# grub-install /dev/sdb
Installation finished. No error reported.


Some version information:




grub-install (GRUB) 1.99-21ubuntu3.4
3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux


So



1) "update-grub" find only /sda and /sdb Linux, not /md0



2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0"




I cannot boot my system except from /dev/sdb1 (and by hand, not automatically), and only in DEGRADED mode...



Can anybody resolve this issue? It is giving me a big headache.



UPDATE: after wiping the new disk with zeroes and copying the partitions with sfdisk, grub-install says:



root@answe:~# grub-install /dev/sda
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.

error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
Installation finished. No error reported.


Now update-grub generates the same errors:



root@answe:~# update-grub
error: found two disks with the index 2 for RAID md0.

error: found two disks with the index 2 for RAID md0.
Generating grub.cfg ...
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
Found linux image: /boot/vmlinuz-3.2.0-32-generic
Found initrd image: /boot/initrd.img-3.2.0-32-generic

...
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
Found memtest86+ image: /boot/memtest86+.bin
No volume groups found
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
Found Ubuntu 12.04.1 LTS (12.04) on /dev/sda1
Found Ubuntu 12.04.1 LTS (12.04) on /dev/sdb1
error: found two disks with the index 2 for RAID md0.

error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
error: found two disks with the index 2 for RAID md0.
done

Tuesday, February 6, 2018

virtual machines - Allocated too much space from VMware to a Windows MBR disk, how to remove it

I'm a junior sysadmin, and today I allocated too much space to a disk on a Windows Server 2008 VM through vSphere.



In my Server 2008 VM, I saw two disks of 2 TB with the same drive letter (spanned layout).
I needed space, so I went into vSphere and raised the capacity of one of the two drives, without noticing that in Windows they were MBR.



Now all the disks that are part of the spanned layout are "Healthy (at risk)", I assume because I went over the 2 TB MBR limit on one of them.
It seems I can't easily remove the space I allocated in vSphere, and I can't create another disk and add space to it to grow my D: drive.
I'm running out of space very soon and I'm racing against the clock. This is the main file server; does anyone have an idea what I could do?



I thought about just creating another disk that is not part of that spanned layout; however, I don't have enough free space to transfer everything, and I assume a lot of things here have hard-coded paths to specific files, so moving them elsewhere would break everything.




The storage is on a Nimble storage array.

reverse proxy - Forward ssh connections to docker container by hostname

I have gotten into a very specific situation and although there are other ways to do this, I've kinda gotten obsessed with this and would like to find out a way to do it exactly like this:




Background



Say I have a server running several services tucked away in isolated docker containers. Since most of those services are http, I'm using an nginx proxy to expose specific subdomains to each service. For example, a node server is running on a docker container with its port 80 bound to 127.0.0.1:8000 on the host. I'll create a vhost in nginx that proxies all requests to myapp.mydomain.com to http://127.0.0.1:8000. That way, the docker container cannot be accessed from outside, except through myapp.mydomain.com.
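
For reference, the kind of nginx vhost described here would look roughly like this (a sketch; the names and ports mirror the example in the question):

server {
    listen 80;
    server_name myapp.mydomain.com;

    location / {
        # The container's port 80 is published only on the host's loopback interface
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}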



Now I want to start a gogs docker container in such a way that gogs.mydomain.com points to the gogs container. So I start this gogs container with port 8000 bound to 127.0.0.1:8001 on the host. And an nginx site proxying requests to gogs.mydomain.com to http://127.0.0.1:8001 and it works well...



However, gogs being a git container, I would also like to access the repos through something like git@gogs.mydomain.com:org/repo, but that doesn't work with the current setup. One way to make that work would be to bind port 22 of the container to 0.0.0.0:8022 on the host, and then the git SSH URL could be something like git@gogs.mydomain.com:8022/repo.



(That doesn't seem to work; when I push to an origin with uri like that, git demands the password for user git on gogs.mydomain.com - instead of gogs.mydomain.com:8022 - but that's probably something I'm doing wrong and out of scope for this question, however, I would appreciate any diagnosis for that too)
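
As an aside on that parenthetical: the scp-style git@host:org/repo syntax has no field for a port, so with SSH listening on a non-standard port the ssh:// URL form is usually required, e.g. (a sketch using the names from above):

git remote set-url origin ssh://git@gogs.mydomain.com:8022/org/repo.git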




Problem



My main concern is that I want SSH port 22 to be proxied just like I am proxying HTTP ports using nginx; i.e. any SSH connections to gogs.mydomain.com get passed on to the container's port 22. Now I can't bind the container's SSH port to the host's SSH port because there is already an sshd running on the host. Also, that would mean that any connection to *.mydomain.com gets passed to the container's sshd.






I want any ssh connections to:





  • mydomain.com, host.mydomain.com, or the domain's IP address to be accepted and forwarded to the sshd on the host

  • gogs.mydomain.com or git.mydomain.com to be accepted and passed on to the sshd on the gogs container

  • *.mydomain.com (where * is anything other than the possibilities above) to be rejected



If it were http, I could easily make that work through nginx. Is there a way to do that for ssh?






(Also would like to go out on a limb and ask: is there a way to accomplish that with any tcp service in general?)




Any insights into the way I'm trying to do it here, are also welcome. I don't mind being told when what I'm trying to do is utterly stupid.






What I've already got in my mind:



Maybe I could share the sshd socket on host with the container as a ro volume? That would mean the sshd inside the container could pick up all connections to *.mydomain.com. Could there be a way to make the sshd inside the container reject all connections other than gogs.mydomain.com or git.mydomain.com? However, the sshd on the host will pick up all the connections to *.mydomain.com anyway including gogs.mydomain.com; so there would be a conflict. I dunno, I haven't actually tried it. Should I try it?

linux - iptables with non_transparent squid proxy

iptables has been working well with Squid in transparent mode, but with Squid's authentication method it has some problems.

From the documentation, I found that because Squid in authentication mode is the destination for the packets, I cannot set iptables rules based on the actual destination/source.



Is this right? Is there any solution?



If that isn't right, can you give me a Squid config file with AD-integrated SSO, or any other authentication method?



Excuse me for my bad English. Thank you.

Monday, February 5, 2018

openvpn - Tunnel only one program (UDP & TCP) through another server

I have a Windows machine at home and a server with Debian installed.
I want to tunnel the UDP traffic from one (and only this) program on my Windows machine through my server.



For TCP traffic this was easy, using PuTTY as a SOCKS5 proxy and then connecting via SSH to my server - but this does not seem to work for UDP.

Then I set up Dante as a SOCKS5 proxy, but it seems to create a new instance/thread per connection, which leads to huge RAM usage on my server, so this was not an option either.



Most people recommend OpenVPN, so my question is: can I use OpenVPN to tunnel just this one program through my server? Is there maybe a way to create a local SOCKS5 proxy on my Windows machine, set it as the proxy in my program, and have only this proxy use OpenVPN?



Thank you for your ideas

Sunday, February 4, 2018

Combining Apache ServerAlias and 301 Redirects



I've referenced this question but I'm not sure if I'm clear on how this works given this situation:




I have a client with two brands that are currently online like this:



Brand A (site 1)    Brand B (site 2)
--- ---
site 1 pages site 2 pages


Now, they've had a new site built that combines content from both sites and unifies their two brands into one:




Brand Unified (site 3)
---
site 3 pages


Clearly there are 301 redirects I want to put into place for each original site after I update the A records for each domain. However, the new site is a custom WordPress theme and WP only allows for one domain per installation. This leaves me with a situation where I can assign the original TLD from, say, Brand A as the site URL in WordPress, and then have the Brand B domain redirect to A.



I thought I would use a server block like this in the Apache site conf:





<VirtualHost *:80>
    # site1.com now points directly here
    ServerName site1.com
    # site2.com also points directly here but should redirect to site1.com
    ServerAlias site2.com
    ServerAdmin admin@brandAsite.com
    DocumentRoot /var/www/newsite
</VirtualHost>


So, assuming there is nothing wrong with this configuration, what happens if I try to implement 301 redirects for site1 and site2 links? Specifically, since site2.com is now listed as an alias of site1.com, can I still effectively have 301 redirects in the .htaccess file in the /var/www/newsite/ directory, or does ServerAlias interfere with that? I'm wondering if it gets aliased back to site1.com and therefore the 301 rules wouldn't trigger.


Answer



If you are able to configure the Apache conf, why complicate the situation and put redirects (or anything) in .htaccess files? You seem to assign site2.com to /var/www/newsite only for the purpose of using .htaccess and nothing else. I would keep it simple, with everything in one place:




NameVirtualHost *:80

<VirtualHost *:80>
    ServerName   site1.com
    DocumentRoot /var/www/newsite
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.com

    # Specifically mapped redirects
    Redirect permanent /foo/ http://site1.com/new-foo/

    # Root redirect (if no other case above matches)
    Redirect permanent / http://site1.com/
</VirtualHost>
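
If you go this route, you can sanity-check the redirects from the command line once the vhosts are loaded (hostnames as in the example above):

# Should return "301 Moved Permanently" with "Location: http://site1.com/new-foo/"
curl -sI http://site2.com/foo/

# Should return "Location: http://site1.com/"
curl -sI http://site2.com/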


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...