Tuesday, January 31, 2017

linux - crontab schedules and cron jobs



I've put two files in the /etc/cron.d/ directory:



The first makes a new post every day at 12:00 AM:



0 0 * * * php /var/www/site1/helper post:make



The second updates the latest post every 10 minutes:



10 * * * * php /var/www/site1/helper post:update


Do I have to do something else for these jobs to run on schedule (e.g. every 10 minutes), or do I have to run crontab job1 and crontab job2?



EDIT: I also installed cronie.


Answer



Putting files in cron.d is enough. However, your last entry should be:




*/10 * * * * php /var/www/site1/helper post:update


Otherwise it runs once an hour, at the 10th minute.
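Note that on many distributions (including systems using cronie), files in /etc/cron.d use the system crontab format, which takes an extra user field between the schedule and the command; a sketch of such an entry (root is just an example user):

# minute hour day-of-month month day-of-week user command
*/10 * * * * root php /var/www/site1/helper post:update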


rhel5 - CommunicationException when shutting down JBoss 4.2.2




I have deployed an application using JBoss 4.2.2 on a 64-bit RHEL5 server. Since there are other JBoss servers, I had to change some port configurations so that there would be no conflicts when starting the server. So right now I'm using ports-01 from the sample-bindings.xml file that came in the docs/examples/binding-manager/samples directory. In addition, below is a list of all the files I've edited to reflect the new ports:




  • JBOSS_HOME/servers/default/deploy/jboss-web.deployer/server.xml:


    • Changed Connector port - 8080 to 8180

    • Changed AJP 1.3 Connector port - 8009 to 8109



  • JBOSS_HOME/server/default/deploy/jbossws.beans/META-INF/jboss-beans.xml


    • Changed 8080 to 8180


  • JBOSS_HOME/server/default/conf/jboss-service.xml:


    • Changed 8083 to 8183

    • Changed 1099 to 1299


    • Changed 1098 to 1298

    • Changed 4444 to 4644

    • Changed 4445 to 4645

    • Changed 4446 to 4646

    • Changed 4447 to 4647


  • JBOSS_HOME/server/default/conf/jboss-minimal.xml:


    • Changed 1099 to 1299


    • Changed 1098 to 1298




When I start the server (binding to localhost) everything is fine and I'm able to access the application. But when I try to shut down the server I get the following error:




Exception in thread "main" javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost [Root exception is javax.naming.CommunicationException
: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]]]
at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1562)

at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:634)
at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:627)
at javax.naming.InitialContext.lookup(InitialContext.java:392)
at org.jboss.Shutdown.main(Shutdown.java:214)
Caused by: javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]]
at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:274)
at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1533)
... 4 more
Caused by: javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]
at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:248)

... 5 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:525)
at java.net.Socket.connect(Socket.java:475)
at java.net.Socket.<init>(Socket.java:372)
at java.net.Socket.<init>(Socket.java:273)
at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:84)
at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:77)
at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:244)
... 5 more




Is there any other file that I need to change the 1099 to 1299, or am I missing some other step?


Answer



shutdown.sh launches a separate Java program that sends a JMX request to the JBoss server process. It doesn't read any configuration files to discover the new port; it just assumes the defaults (how could it? you're not passing it your configuration directory).




So to connect to your jboss server running on the non-default port, you need to run it like so:



shutdown.sh --server=YOURHOST:1299


Also, if you're actually using sample-bindings.xml, i.e., if you uncommented the jboss.system:service=ServiceBindingManager mbean in jboss-service.xml & configured ServerName & StoreURL appropriately, then you shouldn't need to make any other configuration changes for the new ports. That's the point of the binding manager: to centralize all of that work.
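For reference, the relevant mbean block in conf/jboss-service.xml looks roughly like this in JBoss 4.x (the binding-set name and path here mirror the ports-01 setup described above; treat the exact attributes as an approximation):

<mbean code="org.jboss.services.binding.ServiceBindingManager"
       name="jboss.system:service=ServiceBindingManager">
    <attribute name="ServerName">ports-01</attribute>
    <attribute name="StoreURL">${jboss.home.url}/docs/examples/binding-manager/sample-bindings.xml</attribute>
    <attribute name="StoreFactoryClassName">org.jboss.services.binding.XMLServicesStoreFactory</attribute>
</mbean>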


ssh authentication nfs

I would like to ssh from machine "ub0" to another machine "ub1" without using passwords.
I set up an NFS-shared home directory on "ub0", but I am still asked for a password.



Here is my scenario:




  • machines ub0 and ub1 have the same user "mpiu", with the same password, user ID, and group ID

  • the 2 servers are sharing a folder that is the HOME directory for "mpiu"

  • I did a chmod 700 on the .ssh


  • I created a key using ssh-keygen -t dsa

  • I did "cat id_dsa.pub >> authorized_keys". On this last file I tried also chmod 600 and chmod 640

  • of course I can guarantee that on machine ub1 the user "shared_user" can see the same folder that was mounted with no problem.



Below the content of my .ssh folder




authorized_keys
id_dsa

id_dsa.pub
known_hosts


After all of this, running any command such as "ssh ub1 hostname" still prompts me for my password.
Do you know what I can try?



I also uncommented this line in the ssh_config file on both machines:





IdentityFile ~/.ssh/id_dsa


I also tried




ssh -i $HOME/.ssh/id_dsa mpiu@ub1



Below the ssh -vv




Code:
OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007
OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to ub1 [192.168.2.9] port 22.

debug1: Connection established.
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /mirror/mpiu/.ssh/id_dsa type 2
debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
debug1: Remote protocol version 2.0, remote software version lshd-2.0.4 lsh - a GNU ssh
debug1: no match: lshd-2.0.4 lsh - a GNU ssh
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-3ubuntu1

debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib

debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,spki-sign-rsa
debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour
debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour
debug2: kex_parse_kexinit: hmac-sha1,hmac-md5

debug2: kex_parse_kexinit: hmac-sha1,hmac-md5
debug2: kex_parse_kexinit: none,zlib
debug2: kex_parse_kexinit: none,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-md5
debug1: kex: server->client 3des-cbc hmac-md5 none
debug2: mac_setup: found hmac-md5

debug1: kex: client->server 3des-cbc hmac-md5 none
debug2: dh_gen_key: priv key bits set: 183/384
debug2: bits set: 1028/2048
debug1: sending SSH2_MSG_KEXDH_INIT
debug1: expecting SSH2_MSG_KEXDH_REPLY
debug1: Host 'ub1' is known and matches the RSA host key.
debug1: Found key in /mirror/mpiu/.ssh/known_hosts:1
debug2: bits set: 1039/2048
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys

debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /mirror/mpiu/.ssh/id_dsa (0xb874b098)
debug1: Authentications that can continue: password,publickey

debug1: Next authentication method: publickey
debug1: Offering public key: /mirror/mpiu/.ssh/id_dsa
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: password,publickey
debug2: we did not send a packet, disable method
debug1: Next authentication method: password
mpiu@ub1's password:


It hangs here!

Monday, January 30, 2017

Apache server status requests/sec



I've been wondering for a while what requests/sec actually means. Is it Apache's processing speed cap, or the actual rate at which the current traffic is being processed?



See the following image:



apache status



So as per the above, having presumably a speed of 8.94 requests/sec and 21 requests being processed, does it mean that it takes 21/8.94 = 2.34 seconds per request? That can't be right though, since my site loads in less than half a second...




Am I interpreting this correctly? Thanks.


Answer



No, this is completely wrong.



The numbers mean that an average of 8.94 requests per second has come into the server since it was last started, so peaks don't show up there. At this very moment the server is handling 21 requests, nearly three times the average value, so I would guess you are at a peak time.
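(For reference: with ExtendedStatus on, mod_status derives that figure as total accesses divided by server uptime, e.g. roughly 965,000 total requests over 108,000 seconds of uptime ≈ 8.94 req/sec; those two numbers are purely illustrative, not read from the screenshot.)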


Sunday, January 29, 2017

What user do scripts in the cron folders run as? (i.e. cron.daily, cron.hourly, etc)




If I put a script in /etc/cron.daily on CentOS what user will it run as? Do they all run as root or as the owner?


Answer



They all run as root. If you need otherwise, use su in the script or add a crontab entry to the user's crontab (man crontab) or the system-wide crontab (whose location I couldn't tell you on CentOS).
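If a daily script does need to do part of its work as a different user, a minimal sketch of the su approach mentioned above (user name and command are placeholders):

#!/bin/sh
# /etc/cron.daily scripts run as root; drop privileges for one command
su -s /bin/sh someuser -c '/usr/local/bin/nightly-task.sh'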


linux - How to delete millions of files without disturbing the server




I'd like to delete an nginx cache directory, which I quickly purged by:



mv cache cache.bak
mkdir cache
service nginx restart


Now I have a cache.bak folder which has 2 million files. I'd like to delete it, without disturbing the server.



A simple rm -rf cache.bak trashes the server: even the simplest HTTP response takes 16 seconds while rm is running, so I cannot do that.




I tried ionice -c3 rm -rf cache.bak, but it didn't help. The server has an HDD, not an SSD; on an SSD this probably wouldn't be a problem.



I believe the best solution would be some kind of throttling, like how nginx's built in cache manager does.



How would you solve this? Is there any tool which can do exactly this?



ext4 on Ubuntu 16.04


Answer



I got many useful answers and comments here, which I'd like to summarize, along with my own solution.





  1. Yes, the best way to prevent such a thing from happening is to keep the cache dir on a separate filesystem. Nuking / quick-formatting a filesystem takes a few seconds (maybe minutes) at most, regardless of how many files / dirs were present on it.


  2. The ionice / nice solutions didn't do anything, because the deleting process actually caused almost no I/O. What caused the I/O was, I believe, kernel / filesystem level queues / buffers filling up when files were deleted too quickly by the delete process.


  3. The way I solved it is similar to Tero Kilkanen's solution, but it didn't require a shell script. I used rsync's built-in --bwlimit switch to limit the speed of deletion.




Full command was:



mkdir empty_dir

rsync -v -a --delete --bwlimit=1 empty_dir/ cache.bak/


Now, bwlimit specifies bandwidth in kilobytes, which in this case applied to the filename or path of the files. By setting it to 1 KBps, it was deleting around 100,000 files per hour, or 27 files per second. Files had relative paths like cache.bak/e/c1/db98339573acc5c76bdac4a601f9ec1e, which is 47 characters long, so that would give 1000/47 ~= 21 files per second, so kind of similar to my guess of 100,000 files per hour.



Now why --bwlimit=1? I tried various values:




  • 10000, 1000, 100 -> system slowing down like before

  • 10 -> system working quite well for a while, but produces partial slowdowns once a minute or so. HTTP response times still < 1 sec.


  • 1 -> no system slowdown at all. I'm not in a hurry and 2 million files can be deleted in < 1 day this way, so I chose it.



I like the simplicity of rsync's built-in method, but this solution depends on the relative paths' length. Not a big problem, as most people would find the right value via trial and error.


Saturday, January 28, 2017

linux - IPTables allow then block with active connection



I have a backup server, and I was wondering: if I set a cron job to allow a connection from a server in iptables, then once it connects with rsync, can I use iptables to shut off the port again to prevent further connections?



The idea is to prevent the backups from getting wiped if the main server were compromised (yes, it is secured, but I don't take chances).



EDIT: After trying a few things, and because of how this works, I decided the best approach is to set up a second server which will just pull from the first server.


Answer



Assuming it connects over ssh rather than rsyncd, you could handle this with a rule such as this




iptables -A INPUT -s -p tcp --dport ssh -m connlimit --connlimit-saddr --connlimit-upto 1 -j ACCEPT



Providing there are no other rules to allow it and the policy for INPUT is REJECT or DROP, this will work.



If you also want to restrict this to a specific time, additionally use -m time --timestart 01:00:00 --timestop 01:02:00 - which would provide a two-minute window every day starting at 1 AM.
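Putting both pieces together, the complete rule might look roughly like this (192.0.2.10 stands in for the backup client's address, and the window is the same illustrative two minutes):

# accept at most one concurrent ssh connection from the backup client, only between 01:00 and 01:02
iptables -A INPUT -s 192.0.2.10 -p tcp --dport ssh \
    -m connlimit --connlimit-upto 1 \
    -m time --timestart 01:00:00 --timestop 01:02:00 \
    -j ACCEPT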


Directing requests on Apache, and transforming how the request URI is seen

How can I configure Apache to direct requests for a particular URL on the server to a particular directory, while at the same time transforming how that URL is seen by the script that processes it?



Say I have a php script in the following directory:




/somedir/foo/script.php



I would like all incoming HTTP requests to http://server/foo/* to be processed by /somedir/foo/script.php. However, I would also like the script to know what the remainder of the URI is in the REQUEST_URI variable.
(The * part of the URL is opaque information that is only meaningful to the script, and could be anything)



For example:



http://example.com/foo/



will be handled by /somedir/foo/script.php, and the script will see the REQUEST_URI as simply "/" and



http://example.com/foo/the/quick/brown/fox.html


will also be handled by /somedir/foo/script.php, while REQUEST_URI will be seen as "/the/quick/brown/fox.html"



How do I configure Apache to behave this way?



(Note that this is strictly an Apache question; I do not want to alter the script in any way.)

linux - BIND9 recursion on slave servers for a delegated zone - not working




I've been reviewing BIND/DNS documentation and I've been unable to find a clear answer. tl;dr - querying a secondary nameserver for a delegated zone A record does not work with recursion enabled. And, by definition, it doesn't work with recursion disabled either, since all that is defined in the zone from our point of view is the NS and glue record.



Software stack: bind-9.3.6-4 on CentOS 5.4 x86 for the secondary nameserver;
bind-9.2.4-30 on Centos 4.7 x86 for the primary nameserver.



I will use master and primary, and slave and secondary, as synonyms, respectively.



Our setup is as follows ( names/IPs changed to protect the innocent ):



ns.pr.example.com == primary nameserver, 10.10.0.1, 192.168.0.1



ns1.pr.example.com == secondary nameserver, 10.11.0.1, 192.168.0.2



ns2.pr.example.com == secondary nameserver, 10.11.0.2, 192.168.0.3




delegated.pr.example.com == delegated sub-zone



nsdelegated.pr.example.com == authoritative NS for the delegated.pr.example.com sub-domain, 10.11.0.5; NOT under our control!



You'll notice that ns1 and ns2 can talk to ns.pr.example.com over a shared network - 192.168.0.0/24. However, ns.pr.example.com cannot talk to the nsdelegated.pr.example.com host, which only has a 10.11.0.0/24 address.




The 192.168 network is a stand-in for our public-IP space; but the 10.10
and 10.11 networks are private, closed networks used for cluster
computing. Connecting ns.pr.example.com to the 10.11 network, either
directly or through a static route, is out of the question.



On the primary nameserver, ns.pr.example.com, the following definition is added to the zone file, along with an updated serial:



/etc/named.conf:





zone "pr.example.com" {
type master;
file [db.filename];
};



db.filename:





delegated.pr.example.com. IN NS nsdelegated.pr.example.com.
nsdelegated.pr.example.com. IN A 10.11.0.5 ; glue record



This is replicated to the slave servers, ns1 and ns2. The record can be
seen, both in the flat files, and confirmed with dig:



slave example





dig -t ns +short @ns1 delegated.pr.example.com
nsdelegated.pr.example.com IN A 10.11.0.5



master example




dig -t ns +short @ns delegated.pr.example.com
nsdelegated.pr.example.com IN A 10.11.0.5



The nsdelegated server itself is responsive:




dig -t a +short @nsdelegated.pr.example.com randomhost.delegated.pr.example.com
10.11.0.222



But a lookup on the secondary nameserver with the recursion-desired bit set (the default) fails.




dig +recurse +short -t a @ns1 randomhost.delegated.pr.example.com
[no output]



It also fails on the primary server, ns, but that would be expected since there is no way for ns.pr.example.com to contact 10.11.0.5 and answer the request. Non-recursive queries also fail, since the relevant information must be fetched from the nsdelegated.pr.example.com server.



My question is: why are the recursive queries to the secondary nameservers failing? They have the correct delegation information, an NS record and a glue record, and they are able to contact the delegated nameserver.



My hunch is that, as a secondary nameserver, it may somehow be 'passing on' the recursive query to the primary nameserver, where it then fails. But I can't find any documentation to this effect, and it doesn't make intuitive sense.



Any ideas, or debugging suggestions? I turned on maximal logging for
named, as well as query logging, but I couldn't get good information.
There wasn't an obvious "show me the lookups you do on behalf of
clients" log.



Thanks.


Answer



Of course you need to specify the delegated zone in named.conf; otherwise BIND will treat it as just another dotted record it should hold itself, since it is authoritative for the pr.example.com zone.




What you want is something like this. In the master named.conf you specify a new zone (and accordingly a new zone in the slaves):



zone "delegated.pr.example.com." { type master; file [db.filename]; };


and the zone file should be:



delegated.pr.example.com. NS nsdelegated.pr.example.com. 
nsdelegated.pr.example.com. IN A 10.11.0.5 ; glue record



Now the main DNS server and its slaves know about the new zone and things should work.
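After reloading, a quick sanity check from one of the slaves might look like this (hypothetical record name, same hosts as above):

rndc reload
dig -t ns +norecurse @ns1 delegated.pr.example.com
dig -t a @ns1 randomhost.delegated.pr.example.com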



==== EDIT



My mistake: the SOA record is not in pr.example.com, it is in the zone definition of delegated.pr.example.com. Fixed that.


Friday, January 27, 2017

windows server 2012 - Why can I not see a Computers GPO in my GPMC? How can I force a GP update without a Computers GPO?

This is not a duplicate. This question is much more detailed than "What is group policy". Mods need to be a little less aggressive around here.



I'd like to push out a Windows Firewall: Allow remote administration exception properties group policy to all computers connected to my DC. The instructions from MS are here, the relevant line is "Right-click the selected OU, and click Group Policy Update…". If I try to update the Users GPO it says there are no computer objects. If I try to add a "Computers" object, it says it already exists. The only thing in the tree that I can update is the Domain Controllers object; that doesn't seem right.



It's very confusing. If I can't add it, how come I can't see it? I don't have any filters turned on.



The whole point of this is to be able to force a GP update on all computers in the domain. The instructions that I linked imply that I can do that via the GPMC but I don't see any way to accomplish that.



GPMC tree

domain name system - DNS for private network - should router be the DNS server?



I want to set up BIND for a private subdomain on a private network, like in the question here: How to configure bind for a private subdomain?



My question is this - should my (Linux) router act as the DNS server for this? Or should I have a separate machine on the network acting as the DNS server? Does it not matter, as long as all the machines on the network are configured to resolve against the internal DNS server?


Answer



It doesn't matter where you run it as long as it is reachable from the internal machines.




DNS is a very lightweight service, which can easily coexist with many others on a machine.



However, make sure it keeps working. When DNS fails, dozens of things will stop working and you'll be wondering what the heck is going on before you figure out DNS is down.
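Whichever machine ends up running BIND, the clients only need to be pointed at it; e.g. on each internal Linux box (192.168.1.1 being a placeholder for wherever the DNS server lives):

# /etc/resolv.conf
nameserver 192.168.1.1
search internal.example.com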


Thursday, January 26, 2017

linux - memory tuning with rails/unicorn running on ubuntu





I am running unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7.



It is an 8 core ec2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely.



My question concerns memory usage, and what concerns I should have with what I am seeing. (if any)



Here is the scenario:



Under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3-server cluster loses about 100MB / hour. This is a linear slope for about 6 hours; then it appears to level out, but still seems to lose about 10MB/hour.




If I drop my page caches using the linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the unicorns, and the memory loss pattern begins again over the hours.



Before:




total used free shared buffers cached
Mem: 7130244 5005376 2124868 0 113628 422856
-/+ buffers/cache: 4468892 2661352
Swap: 33554428 0 33554428



After:




total used free shared buffers cached
Mem: 7130244 4467144 2663100 0 228 11172
-/+ buffers/cache: 4455744 2674500
Swap: 33554428 0 33554428



My Ruby code does use memoizations and I'm assuming Ruby/Rails/Unicorn is keeping its own caches... what I'm wondering is should I be worried about this behaviour?



FWIW, my Unicorn config:




worker_processes 15

listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
listen 8080, :tcp_nopush => true
timeout 180


pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"

GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true

before_fork do |server, worker|
STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
print_gemfile_location

defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!

defined?(Resque) and Resque.redis.client.disconnect

old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
if File.exists?(old_pid) && server.pid != old_pid
begin
Process.kill("QUIT", File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
# already killed
end
end


File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}

end

after_fork do |server, worker|
defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
defined?(Resque) and Resque.redis.client.connect
end



Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, will it empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour?



tia


Answer



This is the line that matters (specifically the last column):



-/+ buffers/cache: 4468892 2661352



You'll note that this number doesn't really change when you drop your caches.




The OS will free those buffers when the running applications demand more memory. For what you're doing, it's not productive to try to be very fiddly with how the OS handles its memory, particularly given that you appear to have plenty.
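If you want to keep an eye on what the unicorn processes themselves are using, rather than the OS page cache, something along these lines is usually more telling than free (a rough sketch, not specific to this setup):

# resident set size (RSS, in KB) of the unicorn master and workers
ps ax -o pid,rss,args | grep '[u]nicorn'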


Wednesday, January 25, 2017

Amount of allowed subdomains in domain and email



How many subdomains can you have on your domain? And with that, I mean levels of subdomains.



For example, you have the domain example.com, I know you can have test.example.com, but how many levels can you have? Like test2.test.example.com, test3.test2.test.example.com etc.



Second, I was wondering 2 things about email addresses. First of all, how common is the usage of subdomains in an email address (IF it's even possible)? I've never seen something like test@test.example.com to be honest so I'm not exactly sure. And if it's possible, how many levels (just like above) can you have?



And last, can the questions above depend on the domain registrar or the mail server etc.? And what could it depend on?


Answer




There are no direct limits on how many levels, i.e. dots, you can have in a hostname. However, an RFC 1034 compliant hostname can only be 255 bytes long, leaving 253 bytes for a fully qualified domain name (FQDN) in DNS. Some systems and TLS/SSL limit the FQDN to 64 bytes, and an FQDN in an email address should not exceed(*) 245 or 221 bytes depending on the maximum user name length (8 or 32).



As the TLD usually takes at least 2 characters plus a dot, and every part of the hostname must be at least one character long, the space left for additional dots, i.e. the theoretical maximum number of levels, would be:




  • (253-3)/2 = 125 levels after TLD for theoretically longest (not so useful) hostname

  • (221-3)/2 = 109 levels after TLD, if you wish to use it for email

  • (63-3)/2 = 30 levels after TLD, if you wish to use SSL/TLS.




And yes, user@subdomainof.subdomain.example.com is in a valid email address format.






(*) The special limitation for email address length is a result of RFC 2821 4.5.3.1 and 4.1.2:



4.5.3.1 Size limits and minimums

path
The maximum total length of a reverse-path or forward-path is 256 characters (including the punctuation and element separators).


4.1.2 Command Argument Syntax

Path = "<" [ A-d-l ":" ] Mailbox ">"


As the forward-path must include the angle brackets, only 254 characters are left for the email address. Then, the username@ part of 8(+1) or 32(+1) must be excluded to get the maximum FQDN length.


domain name system - Multiple sites on server but only one is down. Possible DNS issue?

I have several sites on a VPS server and one of the sites randomly went down earlier (500 error). The site is hosted on a subdomain of another website that is hosted elsewhere. So, the site that is hosted on my server is blog.example.com while example.com is hosted on another server.



I think the issue is DNS related and possibly NS record related. Prior to the site going down, I had the subdomain website NS records set as my server NS records e.g. ns1.myserver.com and ns2.myserver.com. I thought I might need to change the NS records to the server that is hosting the main website example.com. I changed the NS records to ns1.example.com and ns2.example.com but the problem is still not fixed. Could there be a different problem or another solution?

Tuesday, January 24, 2017

Webserver and FreeNAS storage server

I have a web server running currently which is low on free storage space and many more files are incoming.
I want to store files from that web server onto another server (NAS, FreeNAS server).
An example: someone wants to download a PDF file from the website (on the web server), but the file will be downloaded from the FreeNAS server.
Is this possible? Can I use FreeNAS for this?

Monday, January 23, 2017

active directory - Applying Group Policy to the Security Groups

I'm using VBScript in my logon script to map network drives.
I know that a group policy should be applied to individual user accounts and computer accounts by linking GPOs to Active Directory containers (OUs).
The thing that I do not know is how to apply a group policy to an OU that has nothing but groups in it.

email - What happened to all the spam?

Don't get me wrong, I'm mostly glad that this happened. However, I want to make sure that the reasons for it happening are sound - rather than there being a problem with our methods. I'd like to illustrate what's going on here with a graph:



http://lightspeed.ca/personalpage/ernied/spam_junk-year.png



The bright green line here shows the rate at which our server has rejected messages from IP addresses listed in realtime blacklists over the course of the last 12 months. Last May, we were rejecting an average of about 175 messages every 5 minutes, or 35 per minute, using this filter alone. It's pretty clear that since October, it's tapered off to a fraction of that - we're now averaging about 8 rejected messages per minute on this filter:



http://lightspeed.ca/personalpage/ernied/spam_junk-week.png




Since we see no corresponding rise in the number of messages being trapped by Spamassassin (the teal line largely drowned out at the bottom of the graph) or any other filters, I can come to one of two conclusions based on these statistics:



1) All of our filters have become ineffective.



or



2) Spammers aren't spamming as much as they used to.



Historically speaking, I find 1 to be much more likely than 2. However, from experience and customer complaints (rather, a lack thereof), 1 isn't true because we're not seeing much spam in our inboxes anymore. So what the heck is going on here? I can't fathom that spam has somehow become unprofitable. Have they moved on to softer targets? I'm seeing little to no spam on Facebook or Twitter or any HTTP forums. Have there been massive arrests, removing spammers from the wild, and discouraging new criminals from entering the ring?




Whatever the reason, it sounds to me like a hard-fought victory for someone out there. But I still want to make sure that it's either time to break out the champagne or start sharpening our swords.

ftp - Writing permission with VSFTPD and Centos 6.2



I have a server with centos 6.2 with httpd and vsftpd.



I have a few web sites in /var/www and I want to add an FTP user for each site.



My user1 home directory is /home/user1, and it can read/write its folder over FTP (it's the user I use for ssh and almost everything).




I made user2, whose home is /var/www/site2 and whose shell is set to /bin/nologin (because I want it to be just an FTP user).



I can log in to FTP as user2 and download files, but I can't upload files or mkdir...



The permission are :



for /var/www :



drwxrwxr-x. 13 root root 4096 Aug 21 14:08 .




for /var/www/site2 :



drwxrwxrwx. 2 user2 user2 4096 Aug 21 14:35 site2



(the 777 was just for testing...)



My vsftpd.conf is :



 anonymous_enable=NO 

local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=NO
log_ftp_protocol=YES
chroot_local_user=YES
listen=YES

pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES
banner_file=/etc/vsftpd/banner


My iptables is currently stopped for testing, so the problem is not my firewall either...



SELinux is enabled :




SELinux status:                 enabled
SELinuxfs mount: /selinux
Current mode: enforcing
Mode from config file: enforcing
Policy version: 24
Policy from config file: targeted


When I disable it, it works! :)
How can I enable it and keep my vsftpd working?




Thanks in advance for your help


Answer



What are the rights on /var/www/site2?



User2 will need write access to this directory at the file system level. For instance /var/www/site2 needs to be something like:



ls /var/www

drwxr-xr-x user2 www-data site2/



Also check SELinux: either disable it, or keep it enabled and set the boolean that allows vsftpd full access:



 setsebool -P allow_ftpd_full_access 1
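If uploads still fail with SELinux enforcing after setting that boolean, checking the audit log for AVC denials from vsftpd is a reasonable next step, e.g.:

grep vsftpd /var/log/audit/audit.log | grep denied
# or, with the audit tools installed:
ausearch -m avc -c vsftpd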

virtualhost - Apache fails after configuration of 2 virtual hosts




Apache does not restart after having changed the configuration.



Error:
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80



File: /etc/apache2/sites-enabled/000-default.conf:



Listen 80

<VirtualHost *:80>
    DocumentRoot /var/www/html
    ServerName sladie.myserver.com

    # Other directives here
</VirtualHost>

<VirtualHost *:80>
    DocumentRoot /var/www/html/mydomain.com
    ServerName www.mydomain.com

    # Other directives here
</VirtualHost>



Got this configuration from the Apache Docs.



Any idea why this does not work?


Answer



If you are using Debian, you don't need to specify Listen 80 in your virtualhost, because it is already declared in




/etc/apache2/ports.conf:9:Listen 80
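In other words, just delete the Listen 80 line from 000-default.conf and keep the two VirtualHost blocks; Apache will then bind to port 80 only once, via ports.conf.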

Sunday, January 22, 2017

apache 2.4 - Virtual Host preventing server-status directive

In my Ubuntu 16.04 VM I've set up Apache 2.4 successfully and have several vhosts set up. I want to see server-status but my first vhost keeps preventing that.
I've read and re-read the Apache 2.4 docs on this. I've put the following in my /etc/apache2/apache2.conf, then in /etc/apache2/mods-enabled/status.conf and finally in /etc/apache2/sites-enabled/0firstvhost.conf:




<Location "/server-status">
    SetHandler server-status
    Require ip 10.211.55.0/24
</Location>



In reading many posts on this subject in the Apache docs and ServerFault, I've tried many variations that are applicable to Apache 2.4




I can verify mod_status is running by seeing it when running



sudo apachectl -M | grep status


Of course, I've checked the apachectl configtest each time and restarted the apache2 service to see if I can browse to 10.211.55.3/server-status but the Drupal PHP app keeps interfering. There is no .htaccess at the root of this vhost.



I have placed this directive both inside and outside the <VirtualHost> block.




I check in a browser at the IP address of the VM, and also within the VM run:



curl localhost/server-status
curl 10.211.55.3/server-status


The Drupal app gets served first. What should I try next? thx, sam

automation - Automated monitoring of a remote system that sends email alerts

I need to monitor a remote system where the only access I have is that I can subscribe to email alerts of completed/failed jobs. I would like a system that can monitor these emails and provide an SMS or other alert when:




  • An email indicates failure.

  • A process that was expected to complete by a
    given time has not.


  • A process that was expected to complete N minutes
    after completion of another process
    has not completed.



Are there any existing tools that allow this? I'd consider any option - SaaS, open-source, COTS, as long as I don't have to write it myself!



Cheers,



Blake

ubuntu - Is a reboot required to refresh permissions after adding a user to a new group?

On Ubuntu Server, I've noticed more than once now that after adding a user to a group, that user doesn't have the group's permissions until I reboot the system. For example:



User 'hudson' needs permission to read the file /etc/shadow (root:shadow).
So I add hudson to the shadow group; hudson still cannot read it. So I run 'sudo shutdown -h -r now', and when the system comes up again user hudson can read it.



Is a reboot required or is there a better way to get permissions applied after adding the user to the group?

Saturday, January 21, 2017

linux - ZFS Send and ZFS receive dataset without -RI incremental replication sync

Is there a way I can send ONLY the latest snapshots to the backup zfs system even though it has previous snapshots? When I try I keep getting error:



"cannot receive new filesystem stream: destination has snapshots (eg. mirrorpool/ETC/Stuff) must destroy them to overwrite it"




And I was using zfs send receive with the -F already.



Basically the receiving system has not received a bunch of snapshots since I found it had run out of space. So I deleted a bunch of VERY old snapshots on the receiving zfs file system and left the more recent ones, but the zfs system that does the zfs send has a lot of even more recent snapshots that don't exist on the zfs receiver (backup server). But I do NOT want to replicate ALL the missing snapshots back to the snapshot they have in common. I would like to simply send the most recent snapshot couple snapshots to the zfs receiver.



Currently the zfs receiver has the first couple snapshots ever created and then the rest were deleted and only ones left were the latest it had from around sometime in October 2018. So I would like to avoid sending Every Daily snapshot since October 2018 from the Zfs Sender system to the zfs receiver and just send only the last couple snapshots.



Or is there some sort of just "rsync" type of zfs send | zfs receive where I can just keep the two datasets in-sync without sending over any snapshots?

Friday, January 20, 2017

How can I configure Postfix to ignore relayhost when forwarding mail?



How can I configure Postfix to ignore relayhost when forwarding mail?




I currently use the relayhost to send all outgoing email via an external SMTP service :



# /etc/postfix/main.cf
relayhost = [smtp.mandrillapp.com]


A couple of my domains are configured to send via an alternative SMTP service :



# /etc/postfix/relayhost_maps

@domain1 [email-smtp.us-east-1.amazonaws.com]
@domain2 [email-smtp.us-east-1.amazonaws.com]


However, a few of my customers have their incoming email forwarded to other accounts. I don't want to be sending forwarded email via my external SMTP service, instead I want it relayed directly by localhost.



For example, my machine accepts email for 'user@domain3.com', which the client has configured to be forwarded to 'other@hotmail.com'. What I'm looking for is a way to have any emails forwarded to 'other@hotmail.com' relayed directly by my server - and not relayed by my external SMTP service.



I think transport maps are close to what I need, and I have found lots of information on how to route to an external SMTP with transport mapping, but I can't figure how to relay from localhost only when forwarding mail.




I thought I had a solution here:



How can I configure Postfix to ignore relayhost for some domains?



... but when I tried it, I sent an email to myself and received it 2,500 times in some sort of loop between my machine and my SMTP gateway, and had to quickly stop Postfix!



So, basically I want to relay forwarded messages from localhost, and non-forwarded messages via various SMTP services.


Answer



Have a




transport_maps=hash:/etc/postfix/transport


line in main.cf, then add to /etc/postfix/transport :



other@hotmail.com smtp:


run postmap /etc/postfix/transport and reload postfix if you changed main.cf .
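For completeness, the commands mentioned above, plus a quick check that the lookup resolves the way you expect (assuming the hash map path used here):

postmap /etc/postfix/transport
postfix reload
postmap -q other@hotmail.com hash:/etc/postfix/transport   # should print: smtp: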


Thursday, January 19, 2017

ESXi from SD card to hard disk on raid system?



My current ESXi 4.0.0 system runs from an SD card and has some VMs set up on two 1 TB drives running in a RAID level 1 configuration on a Dell PowerEdge R710. I've been asked to make the system use a hard disk in place of the SD card.



I've tried to copy the entire SD card to a third hard drive (trying both dd from an Ubuntu live disc and Clonezilla, which gave the same results). With the hard disk configured in the RAID controller as RAID level 0, ESXi loads and starts to boot up. About 40% of the way in, the error message "Failed to find boot partition" is displayed.




Taking another route, a fresh install from a free VMware trial disc doesn't seem to have that error but I do have to reconfigure everything. Not only do I get the notification that I'm on a trial license, but the existing VMs aren't visible when I connect via the client. Is there some way to import them that isn't readily apparent to a novice user?



How do I run the system from a hard disk rather than the SD card, keeping my existing VMs as unchanged as possible?


Answer



You won't be able to move from running off an SD card to the disk arrangement you've described without some disruption to the virtual machines presently on disk. May I ask why you're no longer interested in using the SD card for ESXi? Many people seem to be moving in the direction of using SD cards/USB keys instead of dedicated disks for VMware.



The fresh installation approach should allow you to import the existing datastore containing your VMs. Going to: Configuration -> Storage -> Add Storage within the vSphere client should allow you to mount the old VMFS volume from the RAID 1 pair.







Following that, you will need to browse the datastore and reimport your virtual machines by adding them into your VM inventory.


Typical port forwarding with nftables example

I want to connect to a VM hosted by the server 1.2.3.4 using ssh.
The IP of the VM is 10.10.10.100.



"nft list ruleset" prints:





table inet filter {
chain input {
type filter hook input priority 0; policy drop;
iif "lo" accept comment "Accept any localhost traffic"
ct state invalid drop comment "Drop invalid connections"
ct state established,related accept comment "Accept traffic originated from us"
ip6 nexthdr ipv6-icmp icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, mld-listener-query, mld-listener-report, mld-listener-done, nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, ind-neighbor-solicit, ind-neighbor-advert, mld2-listener-report } accept comment "Accept ICMPv6"
ip protocol icmp icmp type { destination-unreachable, router-advertisement, router-solicitation, time-exceeded, parameter-problem } accept comment "Accept ICMP"
ip protocol igmp accept comment "Accept IGMP"

tcp dport ssh accept comment "Accept SSH on port 22"
tcp dport { http, https, 8008, http-alt } accept comment "Accept HTTP (ports 80, 443, 8008, 8080)"
}
chain forward {
type filter hook forward priority 0; policy drop;
}
chain output {
type filter hook output priority 0; policy accept;
}
}

table ip nat {
chain input {
type nat hook input priority 0; policy accept;
counter packets 3 bytes 180
}
chain prerouting {
type nat hook prerouting priority -101; policy accept;
counter packets 12 bytes 2122
dnat to tcp dport map { 10100 : 10.10.10.100 }:tcp dport map { 10100 : ssh }
}

chain postrouting {
type nat hook postrouting priority 0; policy accept;
snat to ip saddr map { 1.2.3.4 : 10.10.10.100 }
}
}


"nmap -p10100 1.2.3.4" says: 10100/tcp filtered itap-ddtp



"ssh 1.2.3.4" works.




On Server "ssh 10.10.10.100" works



"sysctl net.ipv4.ip_forward" prints "net.ipv4.ip_forward = 1"

Wednesday, January 18, 2017

permissions - Linux user group configuration for Git bare repository




I'm using a Ubuntu box to host my bare Git repositories for developers to work off.



At the moment I'm creating a user account for each developer on the box because it doubles as a filestore and local testing server.



When somebody pushes to the bare repository, other developers are unable to work with the files that change in the objects folder as a result. The new files are created as the user of the developer who pushed.



I have placed all the developers into a dev group but the umask doesn't allow the group to edit.



I've never had to set up a Git repository so haven't had experience in working with the permissions. I do want each developer to have their own user account on the test server, and I would prefer them to do actions on the server using that account. I don't mind giving them sudo rights.




Is setting the umask for each developer the way forward?


Answer



While "How do I share a Git repository with multiple users on a machine?" does address your issue (and involves setting umask for the users), I prefer adding to my git installation an authorization layer like gitolite (see its documentation).




  • No sudo rights to give to anyone.

  • All git repo operations are done by one 'git' user.

  • You can set precisely the umask for newly created (and gitolite-managed) Git repos: "Setting umask in Git / Gitolite"
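For that last point, a sketch of what the setting looks like in gitolite v3's ~/.gitolite.rc (the exact value depends on how much group access you want to grant):

# ~/.gitolite.rc
UMASK => 0027,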



sql server 2005 - DPM 2007 clashing with existing SQL backup job



I've recently installed a DPM2007 server on Server 2003 and have set up a protection group against a server 2003 server running SQL 2005 SP3.




The SQL server in question has a full backup (as a sql agent job) once a day and transaction log backups hourly. These are zipped up and FTP'd to a server offsite by a scheduled task.



Since adding the DPM job I'm receiving many error messages:




DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs.




My Google-fu suggests that I need to change the full backup my SQL Agent job is running to a copy-only backup. But I think this means that I can't use that backup together with the transaction logs to restore the database if the building (including the DPM server) burns down.



I'm sure I'm missing something obvious and thought I'd see what the hivemind suggests.




It is an option to set up a co-located DPM server elsewhere and have DPM stream the backup, but that's obviously more expensive than the current setup.
Many thanks in advance


Answer



Your Google-fu is correct. When your full backup runs from the SQL Agent job, the RESTORE chain for the DPM backup sequence is broken, and DPM no longer has context on the previous log files.



Running the SQL Agent backup as copy-only will work, as it does not break the RESTORE chain. Taking a full backup with this option does not make the new backup a new base - it does not clear any of the differential bitmaps and doesn't interfere with the DPM backups.
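If you go that route, the backup statement in the SQL Agent job would look roughly like this (database name and path are placeholders):

BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_copyonly.bak'
WITH COPY_ONLY, INIT;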



It's difficult to maintain two backup chains simultaneously, as each one will interfere with the other.


database administration - Things every SQL Server DBA should know

What things should every SQL Server database administrator know?



Books, blogs, tools, you name it.

Tuesday, January 17, 2017

web server - Port forwarding

I run a school network, and it has one ILS (library management system) server and about 10 computer lab computers. The lab computers all run XP Pro, and connect through a series of hubs -> a home-style router (DHCP server, DNS server, PPPoE client) (yes, it's a rather small school) -> a modem -> the phone line. The lab computers need to get online, and the ILS server has an OPAC (online public access catalog), which I need to be able to access remotely. It is accessed from a computer on the local network by simply typing the server's hostname or IP into the address bar of a browser, so I think it's safe to assume that it runs on port 80, the default port for all web traffic. I also need remote access to samba shares on the server, and remote ssh access via PuTTY. The way I plan to implement this is by forwarding ports 80, 22, and whatever port it is that samba runs on (need to look that up later). My question is two-part, and assumes that the external (global) IP is static:





  • Will it work?

  • Is it safe? By this I mean: will hijacking port 80 sabotage web access for the other computers? To give an example, let's say lab computer A requests / from http://google.com. Google receives this request, and sends back an HTML document on port 80. Instead of going to lab computer A, it goes to the server, as that's where port 80 was forwarded to. This is obviously a problem, as lab computer A didn't receive Google's home page so that he could search for stuff.

performance - How to ask antivirus software to work slower and hence use less disk access?

It is our policy for our end users' computers (usually laptops) to have high-power CPUs, GPUs, RAM (no less than 16GB) and HDD space (1TB), but we save money by choosing a lower rotation speed of HDD. We use very high rotation speeds for our servers instead. Usually this works quite well, but antivirus software raises problems. I can observe in Task Manager that if total (from all processes) disk access is more than 4-5 MB/s, then Task Manager indicates 100% disk usage and the other applications slow down visibly. Usually the antivirus software, especially the scanner process, consumes the largest part of the disk access. Of course, I can assign a lower priority to the antivirus software, but this affects CPU use (which is not the problem). But is it possible to slow down the disk access of the antivirus scanner process? It is OK that each downloaded file and each accessed web page is scanned in real time, but I don't see the necessity of high disk access and excessive resource consumption for the long-running background disk scanner processes. We use many of our computers for programming, which is why each of them can contain around 5.000.000 files or more (no more than 15.000.000 files). So the scanner tries to process all those files quickly and working becomes impossible.



And regarding the option to do scans during idle/maintenance time: well, it is important to stress that we mainly use laptops for the end users; many of them take their computers home and have flexible schedules. So there is no time window that can be planned specifically for maintenance activities, and this is not an option. I wonder why antivirus companies are not thinking in terms of customer satisfaction?

Monday, January 16, 2017

domain name system - DNS and PTR for SMTP: shared IPs and subdomains

This question is similar to others about PTR and DNS for SMTP, but one specific aspect was unanswered: what if one machine does SMTP and HTTP on the same IP address. For example:




SMTP at mail.example.com, also HELO. (1.2.3.4)
HTTP at www.example.com (1.2.3.4)
general access like ssh at example.com (1.2.3.4)



What are the requirements for the PTR record on the address 1.2.3.4 to be accepted by spam filters? The 'main' hostname for 1.2.3.4 is example.com, but if reverse DNS lookups require an exact match, I have to set it to mail.example.com. That's stupid. I mean, reverse lookups of 66.102.13.106 don't result in mail.google.com.



Or, is it enough if a reverse lookup finds example.com and mail.example.com as MX record on it? In other words, should I set the PTR to example.com?



One could argue that I should make SMTP access and the HELO example.com, but that causes inflexibility, because then I can never move SMTP to another machine by simply changing the A record.




Edit: it seems unclear what I mean, so let me clarify:



The server in question hosts DNS, SMTP, WWW and a lot more. It does all of its own DNS. Example.com points to that machine, say 1.2.3.4. Because mail is not its main thing, I don't want 1.2.3.4 to reverse resolve to mail.example.com



The server runs postfix and its HELO is mail.example.com, which also points to 1.2.3.4. For the PTR to match, 1.2.3.4 should reverse resolve to mail.example.com, but as I said, I want it to resolve to example.com, because mail is not the server's main task.



Does that mean I have to change the mailname to example.com, and having it at mail.example.com will cause some spam filters to reject it, even though mail is an mx record of example.com?

centos - Source and destination servers showing different number of established TCP connections

I have a JBoss app server and a Postgres database server on different machines. I'm troubleshooting TCP connections between them (because the app keeps running out of database connections).



I'm seeing this and it makes no sense:





  • When I do a netstat on the database server, I see lots of established TCP connections from my app server.

  • When I do a netstat on the app server, I see almost no established TCP connections to the database server.



The machines are VMware virtual machines running CentOS, managed by a cloud provider (not AWS). There's no firewall between the machines (as per Too many established connections left open), which does seem like similar behaviour.



I don't know what else could cause this asymmetry?

windows 7 - Is Remote Desktop to Workstations Secure?




I have users that want to use Remote Desktop for remote access to their workstations. I have a RADIUS-connected VPN server that I use; however, I remember to connect and disconnect rather than send web traffic over the VPN.



I doubt they will do this, because the previous IT consultant left RDP open for them and didn't even suggest changing passwords such as 1234, password and {insert child/pet name}. Now they have to use the password policy that R2 ships with, so I know we are more secure in that regard.



So the most important issue is: how dangerous is leaving Windows 7 and XP Remote Desktop open to the internet?


Answer



If you have passwords set to be of a decent length and complexity, RDP is encrypted, so it for the most part is secure. I personally wouldn't do it, preferring to use something like a Cisco VPN client on workstations then VPN to the workstation rather than leaving it open to the webbertubes. RDP can be susceptible to MITM attacks and you'll probably get bots and scans that will probe them.



I'd also set your policy to lock out accounts after 3 incorrect password attempts, to prevent/minimize brute-force attacks.
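Those lockout settings live under Account Policies > Account Lockout Policy in Group Policy (or the local security policy); a quick way to see the values currently in effect on a machine is:

net accounts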




Summary: it's probably secure enough to do this, but it's bad practice and should be avoided.



EDIT: there are worms that attack RDP, so you'll want to be mindful of this in enforcing your policies, e.g. Morto.


linux - Configuring SSL hosts to be able to access HTTP / HTTPS on multiple domains?

I'm having trouble configuring multiple SSL hosts on my Apache server (CentOS). Originally I thought the problem was only having one IP, so once this was discovered I asked our server provider to add another IP, which they did.



However, I'm still having problems. We want to be able to have http & https access for both of our domains, domain1.com & domain2.com as well as having various subdomains.



I have the certificates, keys, intermediate certs on the machine (for both domains) and these appear to be fine.



The situation is that all the HTTP sites are working correctly, and the first SSL domain is working, but when I try to visit the second domain over HTTPS I get a security error (wrong certificate, as it is showing domain 1's cert!).




Also, the pages being served for domain2 are not the correct ones (i.e. not what the DocumentRoot says!). It appears as though it is defaulting to the first SSL config for all domains/IPs.



Config Files:



This is an excerpt from httpd.conf:

NameVirtualHost **.**.**.27:80

<VirtualHost **.**.**.27:80>
    DocumentRoot /var/www/html/ADDIR
    ServerName domain1.com
    ErrorDocument 404 /var/www/html/404.html
</VirtualHost>

# There are other virtualhosts for other ServerNames & DocumentRoots too but they're otherwise identical to above.

NameVirtualHost **.**.**.41:80

<VirtualHost **.**.**.41:80>
    DocumentRoot /var/www/html/SOC
    ServerName domain2.com
</VirtualHost>
This is an excerpt from ssl.conf



DocumentRoot "/var/www/html/ADDIR/"
ServerName domain1.com:443
ErrorLog logs/ssl_error_log
TransferLog logs/ssl_access_log
LogLevel warn
SSLEngine on
SSLProtocol all -SSLv2
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW

#certificates

SSLCertificateFile /ssl/server.crt
SSLCertificateKeyFile /ssl/server.key
SSLCACertificateFile /ssl/intermediate.crt


SSLOptions +StdEnvVars


SSLOptions +StdEnvVars



SetEnvIf User-Agent ".*MSIE.*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0

CustomLog logs/ssl_request_log \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"




SSLEngine On
SSLCertificateFile /ssl/SD/server.crt
SSLCertificateKeyFile /ssl/SD/server.key
SSLCACertificateFile /ssl/SD/intermediate.crt

ServerAdmin info@mydomain.com
ServerName domain2.com
DocumentRoot /var/www/html/SOC/
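
For reference, a minimal sketch of how the two HTTPS sites would normally be kept apart - assuming the two masked addresses from httpd.conf (**.**.**.27 and **.**.**.41) are the two IPs and SNI is not being relied on - is to give each site its own <VirtualHost ip:443> container in ssl.conf with its own certificate set:

# Hypothetical sketch, not the actual ssl.conf: one IP-based SSL vhost per domain
<VirtualHost **.**.**.27:443>
    ServerName domain1.com
    DocumentRoot /var/www/html/ADDIR
    SSLEngine on
    SSLCertificateFile /ssl/server.crt
    SSLCertificateKeyFile /ssl/server.key
    SSLCACertificateFile /ssl/intermediate.crt
</VirtualHost>

<VirtualHost **.**.**.41:443>
    ServerName domain2.com
    DocumentRoot /var/www/html/SOC
    SSLEngine on
    SSLCertificateFile /ssl/SD/server.crt
    SSLCertificateKeyFile /ssl/SD/server.key
    SSLCACertificateFile /ssl/SD/intermediate.crt
</VirtualHost>

If both SSL vhosts instead bind to the same address (or to _default_:443), Apache serves the first certificate it finds for every HTTPS request, which matches the symptoms described above.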

router - Cisco 1841 - Configuration Issue with NAT

We recently dug out an old Cisco 1841 to solve a need and have been in the process of trying to configure it appropriately. Admittedly, this is my first escapade into the land of Cisco Routing. I've been trying to piece together the correct NAT rules, but something just isn't right.



To give you the lay of the land, we have the outside Internet connection going into fa0/1, and a Cisco firewall hanging off fa0/0. Now, before I get too far, I know for a fact that the Cisco firewall is configured appropriately. The original router that was in place before we swapped in the 1841 worked just fine. For those wondering, we were using an Edgemark router through a PBX provider that we no longer want to use. To fill the need for a router, we replaced the Edgemark router with this Cisco router.



Internet -> Cisco 1841 FA0/1 -> Cisco 1841 FA0/0 -> Cisco ASA 5520 Firewall -> Core Internal Switch



interface FastEthernet0/0
description $ETH-LAN$
ip address 67.xxx.xxx.177 255.255.255.240

ip nat inside
ip virtual-reassembly
no ip route-cache
duplex auto
speed auto
!
interface FastEthernet0/1
description $ETH-WAN$
ip address 65.yyy.yyy.150 255.255.255.252
no ip proxy-arp

ip nat outside
ip virtual-reassembly
no ip route-cache
speed 10
full-duplex
!
ip classless
ip route 0.0.0.0 0.0.0.0 65.yyy.yyy.149
!
no ip http server

no ip http secure-server
ip nat pool Net67 67.xxx.xxx.176 67.xxx.xxx.191 netmask 255.255.255.240
ip nat pool ovrld 67.xxx.xxx.178 67.xxx.xxx.178 prefix-length 24
ip nat inside source list 101 pool ovrld overload
ip nat outside source list 101 pool Net67 add-route
!
access-list 101 permit ip 67.xxx.xxx.176 0.0.0.15 any


Now, the NAT rules I have here are ones I pieced together from sites such as Server Fault, the Cisco Community, and other sources. I think something is wrong, though.




Here are the issues:




  • Devices on the inside can't see the internet.


    • Though the router CAN ping 8.8.8.8 from itself.


  • Traffic on the outside going to the inside public IP's can't get through.




Any help would be appreciated.



Thanks!



EDIT: A previous config that I also tried, which also did not work, was this:



interface FastEthernet0/0
description $ETH-LAN$

ip address 67.xxx.xxx.177 255.255.255.240
ip nat inside
ip virtual-reassembly
duplex auto
speed auto
!
interface FastEthernet0/1
description $ETH-WAN$
ip address 65.yyy.yyy.150 255.255.255.252
no ip proxy-arp

ip nat outside
ip virtual-reassembly
no ip route-cache
speed 10
full-duplex
!
ip route 0.0.0.0 0.0.0.0 65.yyy.yyy.149
!
no ip http server
no ip http secure-server

ip nat inside source route-map nonat interface FastEthernet0/1 overload
!
access-list 101 permit ip 67.xxx.xxx.176 255.255.255.240 any
route-map nonat permit 10
match ip address 101
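
For what it's worth, if the provider simply routes the whole 67.xxx.xxx.176/28 block to the WAN address 65.yyy.yyy.150 (an assumption - nothing above confirms how the block is delivered), then the 1841 does not need to translate anything at all: it can just route between the two interfaces and leave any NAT to the ASA behind it. A minimal routed sketch along those lines would be:

! Hypothetical no-NAT configuration - assumes the ISP routes 67.xxx.xxx.176/28 at 65.yyy.yyy.150
interface FastEthernet0/0
 ip address 67.xxx.xxx.177 255.255.255.240
 no ip nat inside
!
interface FastEthernet0/1
 ip address 65.yyy.yyy.150 255.255.255.252
 no ip nat outside
!
! remove the translation rules and pools entirely
no ip nat inside source list 101 pool ovrld overload
no ip nat outside source list 101 pool Net67 add-route
no ip nat pool ovrld
no ip nat pool Net67
!
ip route 0.0.0.0 0.0.0.0 65.yyy.yyy.149

With that in place, outbound traffic from the /28 is plainly routed out fa0/1, and inbound traffic to the public IPs behind the ASA is routed in via fa0/0.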

Saturday, January 14, 2017

linux - Outgoing brute force attacks from my server

One of the servers I look after appears to be participating in brute force attacks against Wordpress installations.



I've been on the receiving end of this many times, so I'm very familiar with the steps that can be taken to prevent it. What I'm struggling with, however, is detecting outgoing attacks. The server is a typical Apache server with a number of vhosts on it - this is where the complication comes in, of course - if there were just one on there, it wouldn't be as difficult!



I'm currently using tcpflow to log traffic going from any port on this server to port 80 on any other machine using this command:



tcpflow -i eth0 dst port 80 and src host  and port not 22


I've found this preferable to tcpdump, whose output can be somewhat brain-melting to look through after a while :) tcpflow puts each request into a separate file.




Here is some output from a file which I believe to be suspicious activity:



POST /wp-login.php HTTP/1.1
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Host: somedomain.com
Accept: */*
Cookie: wordpress_test_cookie=WP+Cookie+check
Content-Length: 97
Content-Type: application/x-www-form-urlencoded


log=jacklyn&pwd=london&wp-submit=Log+In&redirect_to=http://somedomain.com/wp-admin/tes1a0&testcookie=1


Please note, I've obfuscated the "Host:" above, I believe that's the host being attacked (is this correct?).



So my question, really, is: how do I go about detecting the vhost that is generating this malicious traffic? If I can do that, I can let my client know, and he can take steps to investigate the site and make the necessary changes to stop it.



Any solutions very gratefully received :)
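
One rough way to narrow this down - assuming mod_status is enabled with ExtendedStatus On and /server-status is reachable from localhost, which may not be the case here - is to snapshot outbound port-80 connections together with whatever Apache is busy serving at that moment, then correlate the two by timestamp:

#!/bin/bash
# Hypothetical correlation loop: whenever an outbound HTTP connection exists,
# log it next to Apache's current scoreboard (the VHost and Request columns).
while true; do
    out=$(netstat -ntp 2>/dev/null | awk '$6 == "ESTABLISHED" && $5 ~ /:80$/')
    if [ -n "$out" ]; then
        {
            date
            echo "$out"
            curl -s http://localhost/server-status
            echo '----'
        } >> /root/outbound-http.log
    fi
    sleep 2
done

With mod_php every vhost shares the same httpd processes, so the PID from netstat alone won't identify the site; the server-status snapshot (or moving the sites to per-user PHP handlers such as suPHP or per-site PHP-FPM pools) is what ties the outbound connection back to a particular vhost.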

linux - RAID setup for high speed reading




We are looking to build a realtime playback machine using Linux and a RAID (5 or 10) setup. The current setup looks like this:




  • 12GB memory

  • 5 x 7200rpm drive (software raid)

  • centOS 6 (Kernel Linux 2.6.32-71.29.1.el6.x86_64)

  • NVidia Quadro 5000 (driver 280.13)

  • Intel(R) Xeon(R) CPU X5650 @ 2.67GHz




I ran Bonnie++ and iozone to benchmark different RAID setups (5 and 10), different filesystem types (ext4 and xfs), and different stripe sizes. Unfortunately, it seems I can't get the speed I want out of it (always <200MB/s).



The other test I did was directly in the playback software (RV - http://www.tweaksoftware.com/products/rv), but I could not get it to play faster than 20 frames per second (I'm looking for 24 fps) with more than 3 sequences.



These playback details are a bit beside the point; I just want to know what would be the best setup to get something like ~700MB/s read performance. Is that possible?



I've been reading quite a bit, and it seems a hardware controller could be better. I also guess 7200rpm is not enough - would 10k or 15k rpm be better? What about SSDs?




I have another constraint with this project: this machine will store all the sequences for all the projects, so density matters (I bet it will cost far more to get the same amount of storage with SSDs than with standard 10k rpm drives).



Any suggestions or tips will be appreciated to get the best read speed/storage amount.



Thanks!



Edit: I just stumbled upon this: http://www.fusionio.com/products/iodrive/. Does anyone have experience with this card?


Answer



If you need to handle video streams then you've got to get something much better than what you spec'd. Even if you get enough SATA drives to reach the desired bandwidth of 700MB/s (which should be easily doable on today's consumer-class hardware), you may still have severe latency problems.




What good is your storage solution if it can crank out even 1GB/s but each IO takes 500ms or so to complete? You're dealing with video, so you want something that delivers enough IO that your maximum latency budget of 40ms per frame (25fps) is respected.
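
As a rough way to check both numbers at once, a sequential-read run with fio reports sustained throughput alongside completion latencies that you can hold against that ~40ms-per-frame budget. A sketch (the path, file size and queue depth are placeholders to adapt):

# Hypothetical benchmark run against a scratch file on the array under test
fio --name=playback-sim --filename=/mnt/raid/testfile --size=20G \
    --rw=read --bs=4M --direct=1 --ioengine=libaio --iodepth=4 \
    --runtime=60 --time_based --group_reporting

If the reported bandwidth stays around 700MB/s but the worst-case latencies blow well past the frame budget, playback will still stutter no matter how good the averages look.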



You might also want to have a look at specialized file systems for video-streaming applications. Hitachi Data Systems (HDS) sells one, for example, and XFS has a real-time extension that was developed for media applications.


Friday, January 13, 2017

vmware esxi RAM configurations




We want to increase total RAM size in 3 of our vmware esxi nodes.



My question is: as long as the total amount of RAM on each node is the same, does it matter what size of individual module I use in the 3 nodes? (The type and frequency of the RAM will be the same.)



Something like this:



24 x 16G = 384G in node1
12 x 32G = 384G in node2
12 x 32G = 384G in node3



Is this doable, or must everything be identical? Is there anything else we need to be concerned about?


Answer



This does not matter.



Is there a reason you think the composition of the total RAM in a node would make a difference?


exim - does email pipe to program cause problems with unicode characters?

I'm piping incoming mail into a PHP script, which immediately stores the RAW email in a MySQL db. It works very well, except that ~0.7% of emails arrive with a truncated message body.



I found someone whose emails were failing and had them send an email TO my Gmail account AND to the server. Gmail had no problem; I saw the whole message. But my server cropped the raw message like so:



Delivered-To: asdasd@gmail.com
Received: by 10.152.1.193 with SMTP id 1csp3490lao;

Mon, 20 Oct 2014 05:33:31 -0700 (PDT)
Return-Path:
Received: from vps123.blahblah.com (vps123.blahblah.com. [74.124.111.111])
by mx.google.com with ESMTPS id fb7si7786786pab.30.2014.10.20.05.33.30
for
(version=TLSv1 cipher=RC4-SHA bits=128/128);
Mon, 20 Oct 2014 05:33:30 -0700 (PDT)
Message-ID: <14FBD481E1074C79AF3D@acerDator>
From: =?utf-8?Q?sende=C3=A4r?=
To: "test"

References:
Subject: Message body will contain only Det h
Date: Mon, 20 Oct 2014 14:33:24 +0200
MIME-Version: 1.0
Content-Type: multipart/alternative;
boundary="----=_NextPart_000_0018_01CFEC72.CE424470"
X-Priority: 3
X-MSMail-Priority: Normal
Importance: Normal
X-Mailer: Microsoft Windows Live Mail 14.0.8117.416

X-MimeOLE: Produced By Microsoft MimeOLE V14.0.8117.416
X-Source:
X-Source-Args:
X-Source-Dir:

Det här är ett flerdelat meddelande i MIME-format.

------=_NextPart_000_0018_01CFEC72.CE424470
Content-Type: text/plain;
charset="utf-8"

Content-Transfer-Encoding: quoted-printable

This email will not be received correctly. EXIM may not handle =
some poorly formed emails. For example ...

Det h=E4r =E4r ett flerdelat meddelande i MIME-format.

... is directly above this quoted-printable wrapper, thanks to the =
Swedish email client Microsoft Windows Live (circa 2009), adding UTF-8 =
chars where there should only be ascii. At least, that's what I think =

the problem is.

------=_NextPart_000_0018_01CFEC72.CE424470--


My server crops the message immediately before the first foreign character. The stored raw data contains the headers, a blank line, "Det h", and nothing else.



When I pipe the above email into the PHP script in the shell (/blah/email_in.php < bademail.txt), it stores the message perfectly. So I don't think my script is at fault; it stores the raw STDIN correctly.



I used cPanel to "Set Default Address" to "Pipe to a program". I don't know whether or not this setting bypasses EXIM entirely, but I read somewhere that EXIM handles the pipe transport, so my first guess is that EXIM is mangling a poorly formatted message, and choking the stream at the first unicode character ä.




To confirm this, I need a way to pipe an email INTO Exim - basically tricking Exim into thinking it just received an email when it actually just received a text file. I've found several tutorials on how to telnet to port 25, etc., but nothing that would preserve the headers and multipart boundaries, nor anything that made sense to a unix n00b like me who relies on cPanel.



Am I correct about EXIM being the likely culprit?
Can anyone suggest a way to test this, or an alternative approach?



My server runs EXIM + Dovecot on CentOS 6.5.



p.s. My only other thought is to let the server store mail normally and, if these messages are magically stored correctly there, use IMAP to retrieve/delete the messages rather than going directly into the pipe... adding the IMAP middleman seems less efficient, though this approach is probably more robust.
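
One low-tech way to test the pipe path is to re-inject the saved raw message through Exim's sendmail interface, so it takes the same delivery route (router, then pipe transport) as a normal incoming mail. A sketch - the recipient address is a placeholder for an address whose default is set to "Pipe to a program":

# -oi stops a line containing only "." from terminating the message early
/usr/sbin/sendmail -oi pipetest@yourdomain.example < bademail.txt

If the message stored via this path is also truncated at "Det h", the pipe transport (or something between Exim and the script) is the culprit; if it arrives intact, the damage is happening earlier, at SMTP receive time.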

Thursday, January 12, 2017

logging - Get notification from supervisord when a job exits



Is there any way supervisord can automatically restart a failed/exited/terminated job and send me a notification email with a dump of the last x lines of log file?


Answer



There is a plugin called superlance.



You install it with pip install superlance or download it at: http://pypi.python.org/pypi/superlance



The next thing you do is you go into your supervisord.conf and add the following lines:




[eventlistener:crashmail]
command=/usr/local/bin/crashmail -a -m email1@example.com
events=PROCESS_STATE


This should be followed by a "supervisorctl update". When a process "exits" you will now get a notification sent to email1@example.com.



If you only want to listen to selected apps, you can exchange the -a for -p program1, or group1:program2 if it is in a group. One example would be:



[eventlistener:crashmail]

command=/usr/local/bin/crashmail -p program1 -p group1:program2 -m email1@example.com
events=PROCESS_STATE


Regarding the automatic restart:
you should make sure that autorestart is set to true (it is set to unexpected by default). This way the program will be restarted up to 3 times. If it still exits after that, supervisord gives up, but you can change this with startretries.



Example program:



[program:cat]

command=/bin/cat
autorestart=true
startretries=10

security - Optimized LAMP/LEMP stack scripts



In the "let's try not to reinvent the wheel" perspective, I've been looking for a packaged LAMP (or LEMP) stack for some time now, not only the basic Mysql, Apache , PHP etc... but ideally stuff like APC, Postfix... basically something that would implement recognized practices & standard for security, general performance. A standard default installation that would work out of the box with all the bells & whistles that one would need to get started.




It's usually fairly easy to find the basic configurations for Apache, MySQL, PHP, etc., but surprisingly difficult to find anything that goes a step further.



The Mercury Project seems to have been absorbed by the Pantheon Project and no longer appears to be supported; judging by the comments on the group's page, the install script seems out of date. There's also the BOA project, which sounds excellent but goes way beyond what I'm looking for.



Linode.com has a few StackScripts, but their LAMP stack doesn't implement a mailing solution (I'm only looking for basic notifications from the server here).



And there's of course WHM/CPanel, but I've never been a fan and I'm not looking for a control panel.



Have I missed something?




Drupal optimization is a plus but not a deal breaker.


Answer



There are installer scripts for web applications and supporting services, but most of them, to my knowledge, are focused on the web-hosting world - for example Softaculous and Fantastico, to name just two.



There are also pre-baked virtual appliances made by places like JumpBox, BitNami, CloudZoom, and Turnkey Linux. Those can be variously deployed to cloud providers and be up and running in mere minutes.



Perhaps you could start with some of those projects and move forward, developing something more to your own tastes.



...but, wait...




If after reading all of the above you're left thinking "But wait, that's not exactly what I want", that's because what you want doesn't exactly exist yet. It appears that you want something more specific than a generic install script (Fantastico, etc.) but not quite as heavy as a drop-in virtual appliance.



I'm sure that something closer to what you want exists. For myself, there was a time when I was working on Wordpress installations a lot and had a fancy idea to create a spectacular installation script that went an extra mile or five to lock down permissions, edit directory structures and generally clean up after the installation to make things smarter, tidier and much more secure for the Linux OS, the MySQL database server, Apache and any caching / proxies involved.



I'm sure I'm not the only one that had an idea like that, so there was likely someone who had a custom Wordpress install script that I could have used or at least learned from and mutated to my own desires. I could have turned it into quite the github project, I think.



What I'm saying is that you'll really need to get down to the grass roots level of some kind of LAMP community that focuses on the needs of those who rapidly deploy multiple servers in the use-cases that you focus on. More than likely you'll be laying down a lot of your own pipe. You'll probably want to get some core group of contributors to help you. Make it a full on FOSS project.



Then you'll be known as That Amazing FOSS Guy and you'll never lack for roses at your feet! Or, something like that...


NGINX as reverse proxy in front of Apache does not work with SSL turned on



I have read every post, tutorial and forum comment out there, and for the life of me I still cannot get the Nginx proxy to work properly once SSL is turned on within the Nginx server block.



Apache is fully set up with virtual hosts for both regular and SSL access.
Apache is listening on port 8081; ports.conf is as follows:




NameVirtualHost *:8081
Listen 8081

<IfModule mod_ssl.c>
    Listen 443
</IfModule>

<IfModule mod_gnutls.c>
    Listen 443
</IfModule>




My SSL apache vhost is as follows:



With nginx turned on and the SSL settings commented out, as in the following configuration, everything works fine: I am able to access both the SSL and non-SSL versions of the site correctly.



    server {
listen 80;
# listen 443 ssl;

server_name foobar.net;

# ssl on;
# ssl_certificate /etc/letsencrypt/live/foobar.net/fullchain.pem;
# ssl_certificate_key /etc/letsencrypt/live/foobar.net/privkey.pem;

location / {
proxy_pass http://104.236.224.53:8081;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}


When I modify the above file and turn on SSL by un-commenting the options in the server block, there appears to be a conflict between Nginx and Apache on port 443.



The updated and un-commented server block looks like this:




    server {
listen 80;
listen 443 ssl;
server_name foobar.net;

ssl on;
ssl_certificate /etc/letsencrypt/live/foobar.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/foobar.net/privkey.pem;

location / {

proxy_pass http://104.236.224.53:8081;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}


Trying to start nginx returns the following error:




 nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2017-02-20 18:35:20 EST; 16s ago
Process: 14505 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS
Process: 14475 ExecReload=/usr/sbin/nginx -g daemon on; master_process on; -s reload (code=exited, status=0/SUCCESS)
Process: 14671 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Process: 14652 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 14328 (code=exited, status=0/SUCCESS)


Feb 20 18:35:18 foo.foobar.net nginx[14671]: nginx: [emerg] listen() to 0.0.0.0:443, backlog 511 failed (98: Address already in use)
Feb 20 18:35:18 foo.foobar.net nginx[14671]: nginx: [emerg] listen() to 0.0.0.0:443, backlog 511 failed (98: Address already in use)
Feb 20 18:35:19 foo.foobar.net nginx[14671]: nginx: [emerg] listen() to 0.0.0.0:443, backlog 511 failed (98: Address already in use)
Feb 20 18:35:19 foo.foobar.net nginx[14671]: nginx: [emerg] listen() to 0.0.0.0:443, backlog 511 failed (98: Address already in use)
Feb 20 18:35:20 foo.foobar.net nginx[14671]: nginx: [emerg] listen() to 0.0.0.0:443, backlog 511 failed (98: Address already in use)
Feb 20 18:35:20 foo.foobar.net nginx[14671]: nginx: [emerg] still could not bind()
Feb 20 18:35:20 foo.foobar.net systemd[1]: nginx.service: Control process exited, code=exited status=1
Feb 20 18:35:20 foo.foobar.net systemd[1]: Failed to start A high performance web server and a reverse proxy server.
Feb 20 18:35:20 foo.foobar.net systemd[1]: nginx.service: Unit entered failed state.
Feb 20 18:35:20 foo.foobar.net systemd[1]: nginx.service: Failed with result 'exit-code'.



What am I missing in my implementation to get SSL handled properly by Nginx and passed on to Apache?






Edit 1:
To address @Tim's good point, I'll clarify my main intent in having Nginx handle all requests.





  • My original intention was to install Discourse, which itself runs in a Docker container, on the same machine where Apache was already in use as my main server.

  • Because Discourse needs access to port 80 to run properly, it is recommended to set up nginx in front as a reverse proxy to handle all incoming requests and pass them on accordingly.

  • I want to use Apache on the back end to handle all dynamic content and let nginx handle the static bits. It is my understanding that, to do so, a virtual host needs to be set up on Apache for each type of request, both HTTP and HTTPS. Maybe I'm wrong on this point?



I followed the configuration suggested by DigitalOcean (fast-forward to their optional step 9).



Logically, at this point, just as I have the Apache HTTP host listening on port 8081 for requests passed on from nginx, I assumed I could do the same for HTTPS and have the Apache HTTPS host also listen on port 8081, gracefully passing the headers over for Apache to handle the rest. This implementation did not work fully, as I was plagued with error 400: the plain HTTP request was sent to the HTTPS port.



I took it a step further and assumed that, since both the Apache HTTP and HTTPS hosts were listening on port 8081 on the back end, if I assigned the Apache HTTP host to port 8081 and the HTTPS host to port 1443, everything would work seamlessly. Again it does not work fully: when I try to access my WordPress blog via HTTPS with this implementation I get this error:




Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.


At this point I'm fresh out of ideas, even though it seems many people have gotten the DigitalOcean-suggested implementation to work properly. :-/


Answer



Looks like I finally got this implementation working thanks to all the feedback I received. Thanks @AlexeyTen and @Tim.





  • First I disabled the HTTPS vhost on Apache for the domain foobar.net: sudo a2dissite foobar.net.conf


  • I edited the Apache ports.conf file to listen only on port 8081 and removed the Listen 443 entries:





NameVirtualHost *:8081

Listen 8081





  • Finally, I edited the nginx server block to listen on port 443 and made sure to comment out ssl on (leaving it in did not work):




server {
    listen 80;
    listen 443 ssl;
    server_name foobar.net;

    # ssl on;
    ssl_certificate /etc/letsencrypt/live/foobar.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/foobar.net/privkey.pem;

    location / {
        proxy_pass http://104.236.224.53:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}



This implementation appears to work fine and hands PHP processing over to Apache seamlessly.
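
To double-check the handoff, something along these lines (the hostname and backend address are taken from the config above) shows which server answers on each leg:

# HTTPS should terminate at nginx on 443...
curl -kI https://foobar.net/
# ...while Apache answers plain HTTP directly on 8081
curl -I http://104.236.224.53:8081/ -H 'Host: foobar.net'

The Server response header on the first request should typically come from nginx and on the second from Apache, confirming that TLS is terminated at the proxy and only plain HTTP reaches the back end.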


Wednesday, January 11, 2017

mod rewrite - Looking for equivalent of ProxyPassReverseMatch in Apache to fix missing trailing forward slash issue



I have two web servers, www.example.com and www.userdir.com. I'm trying to make www.example.com the front-end proxy server for requests in the format http://www.example.com/~username, such as



http://www.example.com/~john/



so that it sends an internal request of



http://www.userdir.com/~john/


to www.userdir.com. I can achieve this in Apache with




ProxyPass /~john http://www.userdir.com/~john
ProxyPassReverse /~john http://www.userdir.com/~john



The ProxyPassReverse is necessary: without it, a request like http://www.example.com/~john without the trailing forward slash will be redirected to http://www.userdir.com/~john/, and I want my users to stay in the example.com space.



Now, my problem is that I have a lot of users and I cannot list all those user names in httpd.conf. So, I use



ProxyPassMatch ^(/~.*)$ http://www.userdir.com$1


but there is no such thing as ProxyPassReverseMatch in Apache. Without it, whenever the trailing forward slash is missing from the URL, the user is redirected to www.userdir.com, and that's not what I want.



I also tried the following to add the trailing forward slash





RewriteCond %{REQUEST_URI} ^/~[^./]*$
RewriteRule ^/(.*)$ http://www.userdir.com/$1/ [P]


but then it renders the page with broken images and CSS because they are linked to http://www.example.com/images/image.gif when they should be http://www.example.com/~john/images/image.gif.



I have been googling for a long time and still can't figure out a good solution for this. I would really appreciate it if anyone could shed some light on this issue. Thank you!


Answer




You can just ignore the username and anything that follows when fixing up the redirect:




ProxyPassReverse /~ http://www.userdir.com/~




Since this is just a prefix substitution.
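
Putting the two directives together, the proxy section of the front-end host ends up as short as this (a sketch; ProxyRequests Off is the usual reverse-proxy safety setting and is not from the question itself):

ProxyRequests Off
ProxyPassMatch ^(/~.*)$ http://www.userdir.com$1
ProxyPassReverse /~ http://www.userdir.com/~

ProxyPassMatch forwards any /~username/... request to www.userdir.com, and the prefix-only ProxyPassReverse rewrites the Location header of the trailing-slash redirect back into the example.com namespace.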


mac osx - Using both expanders in HP D2700

I am considering purchasing an HP D2700 to populate with SSDs (Samsung 840 Pros) for realtime playback of high-resolution images. The D2700 has two I/O modules (which I assume are the actual SAS expanders). However, since the enclosure was designed as a SAS enclosure, the "B" module routes to the second port of the SAS drives.



My question is: can the enclosure be rewired so that the "A" expander goes to drives 1-12 and the "B" expander goes to drives 13-25? I don't need the SAS redundancy since I'll be using SATA SSDs. And as-is, from what I can tell, I'll be limited to a single SAS cable's worth of bandwidth (4 x 6Gb/s), which is insufficient for my needs (I need double that).




Is this possible? Are there SFF-8087-type cables running from the expanders to the backplane, or do the expanders/I/O modules plug straight into the backplane?

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...