Monday, July 31, 2017

How accurate is apache benchmark?



Alright, so I'm in development right now and I'd like to understand exactly how good the benchmarks are. I've just been using ApacheBench (ab). Do the results include the time the server spends sending the files?




Also, is "requests per second" literally how many users can visit the page within one second? If it's at 30 requests per second, can literally 30 people be refreshing pages every second and the server will be fine?



It seems like a lot to me. I know a lot of people get way better stats out of their servers, but I haven't done much optimization yet.



Also, will increasing your RAM increase your rps linearly? I have 512 MB, so if I upgrade to 1 GB, would that mean I'd get about 60 rps?



How does concurrency affect your rps?


Answer




I've just been using ApacheBench (ab). Do the results include the time the server spends sending the files?





ab? Yes, I think so
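For reference, a typical invocation looks like this (URL and counts are hypothetical); the "Requests per second" figure it reports covers the complete request, including receiving the response body:

ab -n 1000 -c 10 http://localhost/index.php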




Also, is "requests per second" literally how many users can visit the page within one second? If it's at 30 requests per second, can literally 30 people be refreshing pages every second and the server will be fine?




Yes, if they perform exactly the same operations your benchmark does. Which is rarely the case.





It seems like a lot to me.




Yeah, most people would think that 30 requests per second is a very low number, but most sites would get by with that.




Also, will increasing your RAM increase your rps linearly? I have 512 MB, so if I upgrade to 1 GB, would that mean I'd get about 60 rps?





Rarely.




How does concurrency affect your rps?




Well, it goes both ways. You might have concurrency issues, typically locks. Write operations typically lock out other writers (and sometimes writers block readers, or even readers block other readers). Where there is locking, concurrent users can slow each other down.



On the other hand, you can have scenarios where one user is performing I/O while another is doing CPU work; these can be parallelized, and you end up using your resources more efficiently.




Most of the time, concurrency hits you, though.
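One way to see which effect wins on your own setup is to run the same benchmark at several concurrency levels and watch the throughput (URL hypothetical):

for c in 1 5 10 25; do
  echo "concurrency $c:"
  ab -q -n 500 -c $c http://localhost/ | grep 'Requests per second'
done

If rps stops climbing (or drops) as -c grows, you are hitting one of the contention points above.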


memory - Why drop caches in Linux?

In our servers we have a habit of dropping caches at midnight.



sync; echo 3 > /proc/sys/vm/drop_caches


When I run the code it seems to free up lots of RAM, but do I really need to do that? Isn't free RAM a waste?

Sunday, July 30, 2017

linux - Permissions/Ownership issue on local LAMP install











I have a local dev environment set up with a LAMP stack, it's used for WordPress development.



Now, whenever I want to edit a file in /var/www/mysite I need to type sudo before I can edit anything. This is obviously unnecessary, and I was wondering what I need to set up to fix this issue.


Answer



You can add yourself to the www-data group.



Then you would have to make all files writable for the group www-data: sudo chmod -R g+w *. But if WordPress creates new files, those will have the wrong permissions again. To avoid that you have to set the umask; check this link for further information:
https://wordpress.stackexchange.com/questions/2200/cant-install-new-plugins-because-of-the-error-could-not-create-directory
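A minimal sketch of that approach, assuming the site lives in /var/www/mysite and your login is youruser:

sudo usermod -aG www-data youruser                       # join the group (log out and back in to apply)
sudo chgrp -R www-data /var/www/mysite                   # group-own the files
sudo chmod -R g+w /var/www/mysite                        # let the group write
sudo find /var/www/mysite -type d -exec chmod g+s {} \;  # new files inherit the group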




Or, since you are on your local system and don't have to fear the evil internet, you could run Apache under your own user.



Most often the settings live in httpd.conf, where you will find two options:



User www-data
Group www-data


Hope I could help


Saturday, July 29, 2017

nginx - Wordpress overloads LEMP

My current configuration:



GCE f1-micro (1 vCPU, 0.6GB) Haswell,

CentOS 7.2, NGINX 1.10.2, PHP 7.0.12




  • Static pages serve without issue.

  • phpinfo() page serves without issue.

  • WordPress setup page overloads the CPU, forcing me to reset the server.




[error] 29111#0: *43 FastCGI sent in stderr: "PHP message: PHP Warning: Unknown: failed to open stream: Permission denied in Unknown on line 0




Unable to open primary script: /var/www/mysite.com/public/index.php (Permission denied)" while reading response header from upstream, client: XX.XXX.XXX.XXX, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm/php-fpm.sock:", host: "XXX.XXX.XXX.XXX"




NGINX *.conf file location directives



location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {

try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_index index.php;
include /etc/nginx/fastcgi.conf;
}
location ~ ^/(status|ping)$ {
access_log off;
include /etc/nginx/fastcgi.conf;
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;

}


NGINX



user = rocky



PHP-FPM



user = rocky




group = rocky



listen.owner = rocky



listen.group = rocky



listen.mode = 0660



Public permissions




/var/



drwxr-xr-x. root root unconfined_u:object_r:httpd_sys_content_t:s0 www


/var/www/



drwxr-xr-x. root root unconfined_u:object_r:httpd_sys_content_t:s0 mydomain



/var/www/mydomain/



drwxr-xr-x. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 public


/var/www/mydomain/public



-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 index.html
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 index.php

-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 info.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 license.txt
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 readme.html
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-activate.php
drwxr-xr-x. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-admin
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-blog-header.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-comments-post.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-config-sample.php
drwxr-xr-x. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-content
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-cron.php

drwxr-xr-x. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-includes
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-links-opml.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-load.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-login.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-mail.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-settings.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-signup.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 wp-trackback.php
-rw-r--r--. rocky rocky unconfined_u:object_r:httpd_sys_rw_content_t:s0 xmlrpc.php



Audit Log




type=SYSCALL msg=audit(1480104445.879:461): arch=c000003e syscall=9 success=no exit=-13 a0=0 a1=10000 a2=7 a3=22 items=0 ppid=1270 pid=1275 auid=4294967295 uid=1000 gid=1001 euid=1000 suid=1000 fsuid=1000 egid=1001 sgid=1001 fsgid=1001 tty=(none) ses=4294967295 comm="php-fpm" exe="/usr/sbin/php-fpm" subj=system_u:system_r:httpd_t:s0 key=(null)



type=AVC msg=audit(1480104445.879:461): avc: denied { execmem } for pid=1275 comm="php-fpm" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:httpd_t:s0 tclass=process


hardware - Blade servers in small office server room (90db)?




We have a small server room with 15 rack servers at the moment, and we are planning to add a 7U enclosure with 10 blades.



One can hear the servers from the outside now (the doors are not too thick) but it is not bad; actually it is OK. But I have read that noise levels from blades are up to 90 dB, and this is, I guess, a few times more than what we have now. As we cannot afford to move to a bigger place at the moment and do not want to move the servers to a data centre, I wonder if we can sort it out with a decent amount of noise-cancelling foam and a DIY afternoon.



Has anyone tried this before? Is 90 dB manageable anywhere outside a data centre at all? We do not want to buy the hardware just to learn that we have to move out.


Answer



I'm very happy with the XRackPro2 noise reduction rack enclosures. They make my server setups in office environments much more acceptable. The largest solution they have is a 25U rack, but it's an option for your sound situation.


tcp - FreeBSD slow transfers - RFC 1323 scaling issue?



I think I may be having an issue with window scaling (RFC 1323) and am hoping that someone can enlighten me on what's going on.




  • Server: FreeBSD 9, apache22, serving a static 100MB zip file. 192.168.18.30

  • Client: Mac OS X 10.6, Firefox 192.168.17.47


  • Network: Only a switch between them - the subnet is 192.168.16/22 (In this test, I also have dummynet filtering simulating an 80ms ping time on all IP traffic. I've seen nearly identical traces with a "real" setup, with real internet traffic/latency also)



Questions:




  • Does this look normal?

  • Is packet #2 specifying a window size of 65535 and a scale of 512?

  • Is packet #5 then shrinking the window size so it can use the 512 scale and still keep the overall calculated window size near 64K?

  • Why is the window scale so high?




Here are the first 6 packets from wireshark. For packets 5 and 6 I've included the details showing the window size and scaling factor being used for the data transfer.



No. Time Source Destination Protocol Length Info

108 6.699922 192.168.17.47 192.168.18.30 TCP 78 49190 > http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=8 TSval=945617489 TSecr=0 SACK_PERM=1

115 6.781971 192.168.18.30 192.168.17.47 TCP 74 http > 49190 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=2617517338 TSecr=945617489


116 6.782218 192.168.17.47 192.168.18.30 TCP 66 49190 > http [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSval=945617490 TSecr=2617517338

117 6.782220 192.168.17.47 192.168.18.30 HTTP 490 GET /utils/speedtest/large.file.zip HTTP/1.1

118 6.867070 192.168.18.30 192.168.17.47 TCP 375 [TCP segment of a reassembled PDU]


Details:



Transmission Control Protocol, Src Port: http (80), Dst Port: 49190 (49190), Seq: 1, Ack: 425, Len: 309

Source port: http (80)
Destination port: 49190 (49190)
[Stream index: 4]
Sequence number: 1 (relative sequence number)
[Next sequence number: 310 (relative sequence number)]
Acknowledgement number: 425 (relative ack number)
Header length: 32 bytes
Flags: 0x018 (PSH, ACK)
Window size value: 130
[Calculated window size: 66560]

[Window size scaling factor: 512]
Checksum: 0xd182 [validation disabled]
Options: (12 bytes)
No-Operation (NOP)
No-Operation (NOP)
Timestamps: TSval 2617517423, TSecr 945617490
[SEQ/ACK analysis]
TCP segment data (309 bytes)
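For reference, the bracketed fields are simply the advertised value times the negotiated scale: 130 × 512 = 66,560 bytes. That is also the answer to why packet #5 "shrinks" the raw window value: with a scale factor as coarse as 512, only a small multiplier keeps the effective window near 64K.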

Answer




I got word from the sysadmin team that this issue was caused by the VMware net driver not respecting / playing nicely with the sysctl tunables. The same setup on physical hardware gets throughput at a reasonable percentage of the pipe, rather than the 1/10th or less we saw with VMware.


Friday, July 28, 2017

Linux cron spamming me then stops about php/suhosin

My server emails me when any message goes to root. cron sends me messages. Today I got over 300 emails from my server, all of which are:




PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/suhosin.so' - /usr/lib/php5/20090626+lfs/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0




I have no idea why. I went to debug it; however, it stopped 5 hours ago, so there's nothing I can look at except maybe logs. Why might this have happened? The disk isn't full and I have enough RAM available.

Thursday, July 27, 2017

linux - How to automatically reconcile /etc/group and /etc/gshadow




Running grpck to check the groups, I see these errors:



 'twinky' is a member of the 'foo' group in /etc/group but not in /etc/gshadow
'dipsy' is a member of the 'foo' group in /etc/group but not in /etc/gshadow
'laalaa' is a member of the 'foo' group in /etc/group but not in /etc/gshadow
'po' is a member of the 'foo' group in /etc/group but not in /etc/gshadow
'noonoo' is a member of the 'foo' group in /etc/group but not in /etc/gshadow
'dipsy' is a member of the 'foo' group in /etc/group but not in /etc/gshadow
...



...and on for quite a few. I'm not sure how this happened, and I'd like to get it cleaned up. I know I could manually edit /etc/gshadow, but I'd rather let the OS do it, to prevent typos and manual labor.



Is there anything that can automatically reconcile a group into gshadow? Maybe something like (making this up):



# grpfix foo


I've tried man on various group-related commands and googled around, but so far I haven't been able to find the answer.



Answer



man had the answer I missed before:



grpconv


http://linux.die.net/man/8/grpconv:




The grpconv command creates gshadow from group and an optionally existing gshadow.
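In practice that means something like this, run as root (the backup line is just caution on my part, not required by grpconv):

cp -a /etc/gshadow /etc/gshadow.bak   # keep a fallback copy
grpconv                               # rebuild /etc/gshadow from /etc/group
grpck                                 # should now come back clean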




safety - Networking - shielded cat 5 cable and mains power

Unsure if this is the correct forum. Anyhow, we have had a network installed, using shielded cat 5 cable.



This cable is in the same trunking as the mains power cable.




The electrician says this is fine. The network man says it's not.



Who is correct?



Graham

Is there a better way to redirect my client's domains?



I serve multiple websites for my clients on one server. During development I simply make a subdomain on my own domain. Example client.mydomain.example.
Often my customers already have a domain and email with another provider.

I do not want to host email for my clients.



I have tried three different ways of setting up DNS:




  1. Make an HTTP redirect from clientdomain.example to www.clientdomain.example and then a CNAME record for www.clientdomain.example to point to the temporary subdomain on my server, client.mydomain.example.

  2. Have an A record from (*.)clientdomain.tld to my server IP, but leaving MX and such to point to their current email-host.

  3. Set up my own nameservers and use those for my client's domain. Then setup the same way as 2.




As far as I can see there are pros and cons with all three:




  1. Pros: Convenient. I can change the IP address on my server, move to another serverpark, set up failsafe, load balancing and so on.
    Cons: I force my clients to use www, and if they already have a site on non-www they might suffer SEO-wise(?). Also the extra CNAME record is bad for page speed.

  2. Pros: No SEO or page-speed issues. Easy setup.
    Cons: If I need to change IP, I need to make DNS changes for ALL my client sites.

  3. Pros: No SEO or page-speed issues. If I need to change servers, I can do this for all my client sites at once, since the DNS settings are conveniently set up on my own name servers.
    Cons: I need to run my own name servers. I also have to set up MX records and possibly other DNS records for my customers.



My preferred way right now is 1, since I think the pros outweigh the cons for most of my cases.




Is there any other way to redirect from a domain to a server without specifying the IP address?



Clarification: Solution 1 works for me, but it is slower because of the two steps before coming to the final A record. Ideally I would want to point both non-www and www to my server domain, but as far as I know, this is not possible with a CNAME record, right?


Answer



Please note that a CNAME is not an HTTP redirect at all.




  • The only redirect here is from example.com to www.example.com, and it can be made permanent. The extra CNAME is at the DNS level and will be cached, so it doesn't really affect site performance or speed at all.


  • Your web server needs to be aware of the client domain regardless of who's hosting the DNS.





That's why I'd prefer your case #1, with a better understanding:




  1. Guide your client to add www.example.com. CNAME client.example.net. so you are able to change the IP address on your own, if necessary.

  2. Advise how to make a redirect from non-www to www on their current web server.

  3. Bind the actual domain name to your client.example.net in your webserver's configuration.
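In zone-file terms, step 1 is a single record in the client's zone, paired with one in yours (IP purely illustrative):

www.example.com.    IN CNAME  client.example.net.   ; client's zone, at their current DNS host
client.example.net. IN A      192.0.2.10            ; your zone; change this IP whenever you move

The MX and other records stay untouched at the client's current provider.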




With case #3 you won't just end up technically hosting DNS servers; you will also be responsible for making updates whenever a client needs a new third-party record.


Wednesday, July 26, 2017

Hardware RAID Controller Support for SSD TRIM



Do any hardware RAID controllers available today support TRIM?




If not, do any manufacturers have target dates for supporting TRIM?



Should I even care about TRIM for SSDs installed in performance-sensitive workstations?



Before you suggest it, yes software RAID would sidestep the issue, but my requirements do not allow software RAID.



edit: The answer appears to be "no RAID controllers support TRIM" at the current date.



update: Intel 7 series motherboards do support RAID 0 TRIM as of August 2012. Probably even more vendors support this now in 2015.


Answer




I don't know of any RAID controller that supports TRIM commands.



As your Wikipedia link explains, the TRIM command provides a way for the file system to tell an SSD when a block of data is no longer needed. For example, after a file is deleted.



Life gets more complicated if you have a RAID layer between the file system and the SSDs. First you need to update the RAID software (or firmware) to accept TRIM commands from the file system. Then the RAID layer has to figure out what to do with them. For RAID 1 (mirroring) it would be pretty straightforward: RAID would just pass the TRIM commands to the underlying SSDs.



For parity-based RAID, however, there's not much you could easily do with TRIM commands. Even when the file system is done using a block, you can't TRIM it, as RAID needs the contents of the block for parity calculations. RAID could subtract the block from the corresponding parity block and then TRIM it, but you've now added three extra I/O operations for an unknown gain from issuing the TRIM command. I can't see how this would be worth it.



All in all, the SSD TRIM command is still quite new. Many SSDs don't support it, and I'm not even sure how many file systems have support for it. So it is likely to be a while before RAID systems start supporting it.
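If you want to check what your own stack supports, something like this works on Linux (device and mount point are assumptions):

sudo hdparm -I /dev/sda | grep -i trim   # does the SSD advertise TRIM?
sudo fstrim -v /mnt/ssd                  # does the mounted filesystem support discard?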


Tuesday, July 25, 2017

mysql - Percona 5.7 restarting and drop my connection



I have been fighting this bug for a long time now. When I run scripts against MySQL (Percona) it disconnects me, and when I try to import a MySQL dump from a live server it disconnects me there too.



So what I'm fighting with is why I get connection refused every time I try to run my scripts or import something with my GUI client.



Has anyone gotten this message before, and does somebody know what's happening here? It's a local database on our development server, so it's not so critical right now.





06:23:02 UTC - mysqld got signal 6 ; This could be because you hit a
bug. It is also possible that this binary or one of the libraries it
was linked against is corrupt, improperly built, or misconfigured.
This error can also be caused by malfunctioning hardware. Attempting
to collect some information that could help diagnose the problem. As
this is a crash and something is definitely wrong, the information
collection process might fail. Please help us make Percona Server
better by reporting any bugs at http://bugs.percona.com/




key_buffer_size=8388608 read_buffer_size=131072
max_used_connections=19 max_threads=152 thread_count=16
connection_count=16 It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads =
68309 K bytes of memory Hope that's ok; if not, decrease some
variables in the equation.



Thread pointer: 0x7f8c80000ae0 Attempting backtrace. You can use the
following information to find out where mysqld died. If you see no

messages after this, something went terribly wrong... stack_bottom =
7f8cd0cd0e80 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0xe8197c]
/usr/sbin/mysqld(handle_fatal_signal+0x479)[0x797f89]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf8d0)[0x7f8d06efc8d0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37)[0x7f8d04e82067]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f8d04e83448]
/usr/sbin/mysqld[0x76c931]
/usr/sbin/mysqld(_ZN2ib5fatalD1Ev+0x15d)[0x10cd45d]
/usr/sbin/mysqld(_Z20buf_page_io_completeP10buf_page_tb+0x9c0)[0x110eff0]

/usr/sbin/mysqld[0x113c424]
/usr/sbin/mysqld(_Z13buf_read_pageRK9page_id_tRK11page_size_tP5trx_t+0x38)[0x113cc68]
/usr/sbin/mysqld(_Z16buf_page_get_genRK9page_id_tRK11page_size_tmP11buf_block_tmPKcmP5mtr_tb+0x4a6)[0x110c096]
/usr/sbin/mysqld(_Z27btr_cur_search_to_nth_levelP12dict_index_tmPK8dtuple_t15page_cur_mode_tmP9btr_cur_tmPKcmP5mtr_t+0x5db)[0x10ecdcb]
/usr/sbin/mysqld[0x103f8a8]
/usr/sbin/mysqld(_Z15row_search_mvccPh15page_cur_mode_tP14row_prebuilt_tmm+0x111b)[0x104266b]
/usr/sbin/mysqld(_ZN11ha_innobase13general_fetchEPhjj+0x1bb)[0xf317cb]
/usr/sbin/mysqld(_ZN7handler18ha_index_next_sameEPhPKhj+0x141)[0x7fa561]
/usr/sbin/mysqld[0xc2b8fa]
/usr/sbin/mysqld(_Z10sub_selectP4JOINP7QEP_TABb+0x147)[0xc326d7]

/usr/sbin/mysqld(_ZN4JOIN4execEv+0x3b8)[0xc2b198]
/usr/sbin/mysqld(_Z12handle_queryP3THDP3LEXP12Query_resultyy+0x238)[0xc9d7a8]
/usr/sbin/mysqld[0x75f30d]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THDb+0x342c)[0xc5db8c]
/usr/sbin/mysqld(_Z11mysql_parseP3THDP12Parser_state+0x625)[0xc60d25]
/usr/sbin/mysqld(_Z16dispatch_commandP3THDPK8COM_DATA19enum_server_command+0x877)[0xc61617]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x1b7)[0xc62c27]
/usr/sbin/mysqld(handle_connection+0x2a0)[0xd257e0]
/usr/sbin/mysqld(pfs_spawn_thread+0x1b4)[0xe9e804]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x80a4)[0x7f8d06ef50a4]

/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f8d04f3562d]




The script error from PHP is below; it's kind of random which error appears.



PDOException: SQLSTATE[HY000] [2006] MySQL server has gone away

PDOException: SQLSTATE[HY000] [2002] Connection refused

Answer




There might be several reasons for this error. Often this is caused by a corrupt database. Check if this is the case via:



mysqlcheck --check --all-databases


The PHP exceptions you mentioned aren't important; they are caused by the database crashing, and are an effect rather than a cause.


redhat - ext4: Online resize not detected




On a RedHat 6 server, we ran into an issue with online resizing of an ext4 filesystem.



With only /dev/sda we had 13GB available in the volume group, but needed 20GB more on one logical volume which was 36GB. Added /dev/sdb to the volume group, and the file system was extended (lvextend) and resized (resize2fs) to 56GB.
No error messages during the resize, and the OS reported the new size.



The logical volume in question hosts an installation of IBM HTTP Server (apache 2.2), config and log files for some 8 different web servers.



This morning the file system usage grew beyond 36GB.
What happened first was that the webservers stopped logging (discovered afterwards), while the web servers kept on running without issues.
2.5 hours later, in relation to log rotation and some other writes to the file system, things started to freeze up.

Meaning: the webservers stopped taking traffic, although the processes stayed up; trying to "tail" a log file would hang and could not be interrupted.
The load of the server went from 0.10 to 4000 (yes...) - mostly related to iowait (it would seem).



The solution was to shut down the webserver (kill -9 was the only way) and reboot the server. We unmounted the filesystem, ran an fsck (no errors), and started things up again.
No issues since.



We can time the error exactly: logging stopped at the moment the disk (LV) usage grew above its previous size of 36GB.



Services on other file systems seemed to work fine - among others, the operating system.




In /var/log/messages we saw i.e.:



kernel: INFO: task httpd: blocked for more than 120 seconds.
kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kernel: httpd D 0000000000000001 0 6889 6865 0x00000080
kernel: ffff88023aa99c88 0000000000000086 0000000000000000 0000000000006102
kernel: ffff88010aebaa80 ffff880105dd0ae0 000000003aa99c08 ffff880105dd0ae0
kernel: ffff880105dd1098 ffff88023aa99fd8 000000000000fb88 ffff880105dd1098
kernel: Call Trace:
kernel: [] __mutex_lock_slowpath+0x13e/0x180

kernel: [] mutex_lock+0x2b/0x50
kernel: [] generic_file_aio_write+0x71/0x100
kernel: [] ext4_file_write+0x61/0x1e0 [ext4]
kernel: [] do_sync_write+0xfa/0x140
kernel: [] ? autoremove_wake_function+0x0/0x40
kernel: [] ? security_file_permission+0x16/0x20
kernel: [] vfs_write+0xb8/0x1a0
kernel: [] sys_write+0x51/0x90
kernel: [] ? __audit_syscall_exit+0x265/0x290
kernel: [] system_call_fastpath+0x16/0x1b



Versions:



Kernel: 2.6.32-358.2.1.el6.x86_64
lvm2-2.02.98-9.el6.x86_64
e2fsprogs-1.41.12-14.el6.x86_64


No issues were found with the underlying hardware.



Answer



The answer is:
The filesystem was created with mke2fs



The default behaviour is then to create an ext2 filesystem.
However, it was mounted as an ext4 filesystem - without any error messages - and later perceived as an ext4 filesystem.



So no wonder the online resize was not detected, and no wonder the extended portion was only recognized after an unmount/mount or reboot.



It took some time to discover, since there was a long time between the creation and the resizing. It was finally discovered by running blkid, which said "ext2"; tune2fs -l also said "not clean".
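For anyone chasing the same symptom, the mismatch shows up like this (device path is an assumption):

blkid /dev/vg00/lv_www                        # printed TYPE="ext2" rather than "ext4"
tune2fs -l /dev/vg00/lv_www | grep -i state   # showed "not clean"

The fix going forward is to create the filesystem explicitly, e.g. mkfs.ext4 /dev/vg00/lv_www (or mke2fs -t ext4).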



domain name system - bind can I use hostnames in also-notify instead of IP?

I'm using my VPS provider's DNS servers as a slave, and to send them notifies, I use the also-notify function. They're making changes to the IP scheme of their DNS, which means I need to update my also-notify, which is fine, but I'm looking to future-proof.



Today I have also-notify { 1.2.3.4; 4.5.6.7; };



Could I do this instead? also-notify { ns1.my-vps; ns2.my-vps; };

Monday, July 24, 2017

security - IIS6, NETWORK SERVICE and IIS_WPG



Is it correct that IIS6, by default, runs under the NetworkService account (NETWORK SERVICE)? And that NetworkService is a member of the IIS_WPG group?



Reference: Microsoft TechNet page on IIS6 and security identities and groups.


Answer



Um, yes, that is correct. You even reference the appropriate docs.


apache 2.2 - How to Check if a SSL Certificate is successfully renewed



I have two web servers, one for an intranet and the other for a website. The system specs of the two are almost the same, as follows:



CentOS 6.5
LAMP (Apache/2.2.15) + WordPress, which were installed with the yum command




I am trying to renew their wildcard SSL certificate with a new one, which I recently got from GoDaddy. The zip file sent from GoDaddy includes the following:



c************.crt
gd_bundle-g2-g1.crt
gd_intermediate.crt



The two servers share the same private key (test.key), which I am going to use for the new certificate too. So the two steps below are all I did on both servers.



(Step 1)
Copy the three files above to the /etc/pki/tls/certs directory and edit /etc/httpd/conf/httpd.conf so the keys "SSLCertificateFile" and "SSLCertificateChainFile" point to the respective new files. The file looks like this after editing:






<VirtualHost *:443>
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/c************.crt
SSLCertificateKeyFile /etc/pki/tls/private/test.key
SSLCertificateChainFile /etc/pki/tls/certs/gd_bundle-g2-g1.crt
AllowOverride All
DocumentRoot /var/www/html
ServerName *****.*********.com
</VirtualHost>




(Step 2)
Restart the server



After the steps, I accessed both servers with Google Chrome and checked to see if the expiration date had changed. The expiration date on the intranet changed as I had expected.



(before)
Valid from 6/17/2014 to 6/17/2015
(after)
Valid from 5/18/2014 to 6/17/2016



But the date on the website is still the same. Is there any other way to check if the certificate was successfully renewed? Or is there anything wrong with the steps I followed? I did not get any errors when I went through the steps, and I am thinking that there might be some more steps I need to do to get a wildcard certificate to work.


Answer




1) Remember: Apache uses either httpd.conf or ssl.conf depending on how it was configured. Since ssl.conf is preferred, make sure the "failing" server is NOT using ssl.conf instead.



2) Have you tried copying the httpd.conf file from the working server to the "failing" server? If everything else is the same, that should make SSL work; if it doesn't, everything is NOT the same on the two servers - double-check.
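Independent of any browser cache, you can also ask the server directly which certificate it is actually serving (hostname is a placeholder):

echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -dates -subject

If notAfter is still the old date on the website box, Apache is reading a certificate from somewhere other than the file you edited.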


Saturday, July 22, 2017

amazon web services - How to subscribe a specific instance within an elastic beanstalk application to an SNS topic?



Ok, so I have an elastic beanstalk application with a scalable web tier, served behind an ELB. Long story short, I need to be able to subscribe a specific instance within my web tier to an SNS topic. Is it safe for me to use a standard method to get the instance ip address (as detailed in python here How can I get the IP address of eth0 in Python?) and then simply subscribe to an SNS topic using that ip as an http subscriber?




Why? Good question...



My datamodel is made up of lots of objects many of which can have an attached set of users which may want to be able to observe those objects. This web tier in my application is responsible for handling the socket interface (using socket.io) for client applications.



When a user is created in the system, so too is an SNS topic for the user, allowing notifications to be pushed to that user when an object it is interested in changes. The way I am planning to set this up, a client application will connect to EB via socket.io at which point the server instance it connected to will subscribe to that user's SNS topic. Then when an interesting object changes, notifications will be posted to the associated user's topics, thus notifying the server instance that the client application has an open connection to, which can then send a message down the socket.



I believe it is important that the specific instance is subscribed, rather than the web tier's external CNAME or IP, as the client application is connected to a specific instance and so only that instance can send messages over its socket. Subscribing the load balancer would be no good, as the notification might be delivered to an instance that the user is not connected to.



I believe the question at the top is all I need, but I'm open to creative solutions if my reasoning seems flawed.


Answer




This architecture is not well suited for your use-case.




  1. Creating an SNS topic "per user" won't scale,

  2. Having your backend EC2 instances subscribed requires those backend EC2 instances to have public endpoints,

  3. As EC2 instances get created and terminated, stale subscriptions will stick around, and have to be managed/deleted,

  4. Also, locking a user to a specific EC2 instance won't work well if that EC2 instance were to be terminated (due to scaling down or other failure).



Instead, try:





  1. Allow your client connections to connect to the load balancer, which then allows connections to jump EC2 instances as needed.

  2. Use another mechanism, such as Redis pub/sub, to manage getting messages to the EC2 instances, and thus to the clients.
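As a sketch of point 2, the pub/sub mechanics with Redis are minimal (channel name and payload hypothetical):

redis-cli SUBSCRIBE user:1234                                         # run by the instance holding the user's socket
redis-cli PUBLISH user:1234 '{"object":"doc-42","event":"updated"}'   # run by whatever detects the change

Each instance subscribes only to the channels of the users currently connected to it, and unsubscribes when the socket closes.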


centos - How to set permissions for a CIFS mount with autofs?

I've set up a CIFS mount on my CentOS 6.4 server with autofs:



File /etc/auto.mnt:



Photos -fstype=cifs,perm,rw,uid=505,forceuid,gid=505,forcegid,file_mode=0770,dir_mode=0770,credentials=/root/credentials.txt ://adsrv01/Photos



What an ls command shows:



[root@websrv01 mnt]# ls -l
total 4
drwxr-xr-x 1 root root 4096 Apr 26 12:01 Photos


What I expect from the ls command:




[root@websrv01 mnt]# ls -l
total 4
drwxrwx--- 1 photos photos 4096 Apr 26 12:01 Photos


Do you see anything wrong? How can I set the owner and mode right?



Edit: I forgot to say that chown and chmod commands are denied for the root user on the /mnt/Photos directory. I can't get it right, and I also tried using fstab.



This is what happens with fstab :




mkdir /mnt/Photos
chmod 770 /mnt/Photos
chown photos:photos /mnt/Photos
mount /mnt/Photos


The permissions are automatically changed and set to 755 when the directory is mounted. I can't set the mode back to 770: permission denied.

Friday, July 21, 2017

debian - DHCP-Server - tell clients to renew lease

I have just set up a new DHCP server (dhcpd - package is dhcp3-server) on a Debian 6 box.
The new server is up and running, and I have successfully connected a client.



Formerly a router acted as the DHCP server.



My question: is there any way to send a broadcast to the network prompting all current lease holders - which still hold a lease from the router - to get a new lease from the new server?



Regards

linux - iptables block port range with single port exception

I've two rules. The first blocks all ports in a range:




-A INPUT -m state --state NEW -m tcp -p tcp --match multiport --dports 200:65535 -j DROP





and the second opens one port in this range:




-A INPUT -i eth0 -p tcp --dport 5901 -m state --state NEW,ESTABLISHED -j ACCEPT




but it doesn't work. Anyone know why?

Wednesday, July 19, 2017

Issues with VSFTPD / FTP on Linux Ubuntu server - Steps for Troubleshooting?

I am dealing with an issue I am unclear on how to resolve and have been pulling my hair out for some time. I have been trying to configure an FTP user using the following (we use this same documentation on all servers)



Install FTP Server




  • apt-get install vsftpd

  • set local_enable and write_enable to YES, and the anonymous user to NO, in /etc/vsftpd.conf

  • service vsftpd restart - to allow changes to take place




Add WordPress User for FTP access in WP Admin



Create a fake shell for the user: add "/usr/sbin/nologin" to the bottom of the /etc/shells file



Add a FTP user account




  • useradd username -d /var/www/ -s /usr/sbin/nologin

  • passwd username




add these lines to the bottom of /etc/vsftpd.conf
- userlist_file=/etc/vsftpd.userlist
- userlist_enable=YES
- userlist_deny=NO



Add username to the list at top of /etc/vsftpd.userlist




  • restart vsftpd "service vsftpd restart"

  • make sure the firewall is open for ftp: "ufw allow ftp"


  • modify the /var/www directory ownership for username: "chown -R username /var/www"



I have also gone through everything listed in this post and had no luck. I am getting connection refused.



Sorry for the poor text formatting above. I think you get the idea. This is something we do over and over and for some reason it is not cooperating here.



Setup is Ubuntu 12.04LTS and VSFTPD v2.3.5
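For "connection refused" specifically, the usual first checks would be that the daemon is really up and listening and that the firewall rule exists (commands as they'd look on Ubuntu 12.04):

service vsftpd status
netstat -tlnp | grep ':21 '     # anything listening on the FTP port?
ufw status | grep 21            # is the allow rule actually present?
tail -n 50 /var/log/syslog      # vsftpd logs startup failures here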

linux - Server crashing: Too many connections?

I have a server with approximately 500 active connections at a time (it's for a very busy website). Unfortunately, Apache keeps crashing the entire server every hour or so. The server has 8 GB of RAM and a quad-core Xeon CPU, so as far as I am concerned this should be sufficient to handle that number of connections. I suspect that my Apache configuration could use some optimization. Here is the current config:



StartServers          2
MinSpareThreads 25
MaxSpareThreads 75
ThreadLimit 64
ThreadsPerChild 25
MaxClients 400
MaxRequestsPerChild 20000



Any advice (not only related to Apache) is MUCH appreciated!

networking - Network monitoring












What software would you recommend to monitor a network? We have a main server which acts as a DNS server among other services. And we would like to monitor network activity: what protocols are being used, bandwidth, etc.



Kind of a Big Brother thing, to know when a user tries to login to his personal mail account on GMail, Hotmail, etc. or uses an external IM account, things which are not permitted under the company's rules. And if possible, block these access (or being able to know about it, in order to take the correspondent disciplinary actions).



I've read that Nagios is a monitoring service; is this the solution we are looking for? What other open source alternatives are there?


Answer



Nagios is a solid open source solution. The plugin architecture means that Nagios basically provides a framework for monitoring to occur, and then you can plug in the exact monitors you need. Since Nagios is fairly popular, there are a ton of monitoring modules already in existence.



Nagios is mostly a real-time monitor, not a reporter. I know there are ways to pass the real-time Nagios data up to a reporting app like Cacti or Munin that produces some lovely graphs.



Best way to redirect a bad-rated domain to a new one?

I recently bought a domain that has an Alexa rank of ~50000. However, the domain has a poor history, as the previous owner didn't respond to abuse emails or DMCA notices. Long story short, the domain is blacklisted in many places including Google search. The site however is perfectly legal, and I intend to start fresh on a new domain name.




I was curious whether a 301 redirect to new domain will also impact its rating being associated with the old one? Or a 302 (temp) redirect is a better choice? Or maybe even a 302 redirect to a third intermediate domain followed by a 301 redirect to my final domain?



EDIT: I don't care about reputation of the old domain (the one I bought). I just want to be sure that it won't affect my new (brand new) domain's reputation in any way.

Tuesday, July 18, 2017

amazon ec2 - AWS ELB Latency issue

I have two c3.2xlarge EC2 machines with an Ubuntu environment, both in the us-west-2a AZ. Both contain the same code with a MySQL database from AWS RDS (db.r3.2xlarge). Both instances are added to an ELB. Both have one cron job scheduled to run twice a day.



The ELB has been configured to raise an alarm once latency crosses the 5.0 threshold. The CPU utilization of both instances averages 30-50%. At peak hours it hits 100% for a minute or two and then returns to normal. But the ELB constantly raises the alarm three times a day. At this time, both instances have



CPU     - ~50%
Memory - total - 14979
used - ~6000

free - ~9000
RDS CPU - ~30%
Connections - 200 to 300 /5,000


According to https://aws.amazon.com/premiumsupport/knowledge-center/elb-latency-troubleshooting/ I could find nothing wrong with the instances. But latency still hits a peak and both instances fail to respond.



Until now, I have just been removing one of the instances from the load balancer, restarting Apache, putting it back, and then doing the same for the other instance. This does the job perfectly all right, and the instances and ELB work fine for the next 6-10 hours. But this is not acceptable, since two or three times every day someone has to take care of the server and restart things.



I need to know, if there is anything wrong or any steps to be taken to resolve this problem.




Latency



Memory



Apache server-status contains too many entries like this (~200 of 250 processes):



7-0 23176   1/2373/5118 C   30.95   3986    0   0.0 7.01    15.78   127.0.0.1   ip-xxx-xxx-xxx-xxx.us-west-2.comp   OPTIONS * HTTP/1.0

Monday, July 17, 2017

permissions - How do I make a directory writable for the webserver?



Just got SQLite up and running on a new Linode - but I couldn't make it work until I read some info that says the server must have write permissions for both the directory and the file -- okay, fine.



So I made the directory 0777 permissions -- which is probably bad. How do I go about doing this properly?


Answer



Basically, give ownership of that folder to the user the webserver is running as (usually "nobody"). chown is the command to do that. Then only the first trio of permissions matters, so 700 permissions would allow the web user to access and write to the directory, and no one else to even read it.



--Christopher Karel
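A hedged sketch of those steps (the path and the www-data user are assumptions; check what your distro actually runs the webserver as):

ps aux | egrep 'apache|httpd' | head       # find the webserver's user
sudo chown -R www-data /var/www/mysite/db  # hand the SQLite directory to that user
sudo chmod 700 /var/www/mysite/db          # owner-only: webserver reads/writes, nobody else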



hardware - Storage setup for large files

I need to store over 200TB of data (all types, the biggest being video files) and be able to access it over a local network. The files will be accessed for editing or searches. I don't need versioning, but a setup that would keep me safe from hard drive failures would be nice.




Right now the content is on different hard drives, some external, some regular. I don't exclude the possibility of buying new/extra drives if necessary. If they are ever exposed to the web, it won't be to the public, just a couple of people.



I have no idea what to buy to make this happen. I see some NAS solutions over the internet like this http://www.bestbuy.com/site/a/2266043.p?id=1218317764591&skuId=2266043 but the storage is not enough, plus it doesn't seem to be scalable.



What do you recommend?



Thanks

Sunday, July 16, 2017

active directory - Error promoting secondary Windows server 2016 DNS with a Primary Windows Server 2003

I have an OLD Intel server (socket 478, DDR) running Windows Server 2003 EE SP2 as the primary DC and DNS. In order to migrate to Server 2016 and discard this old hardware and server version I did the following:

  1. Installed a new Windows Server 2016 to create a secondary DC.

  2. Added it to the domain with no issues.

  3. The old Server 2003 is already operating at the highest possible functional level: Windows Server 2003.

  4. Added the Active Directory Domain Services role on the new Server 2016.

  5. When trying to promote the new Server 2016 to a Domain Controller I get this error message:




"Verification of replica failed. The forest functional level is Windows 2000. To install a Windows Server 2016 Domain or Domain Controller, the forest functional level must be Windows Server 2003 or higher."



When running the adprep32 /forestprep I get this message:



"Adprep was unable to check the forest update status.
[Status/Consequence]
Adprep queries the directory to see if the forest has already been prepared. If the information is unavailable or unknown, Adprep proceeds without attempting this operation.
[User Action]
Restart Adprep and check the ADPrep.log file. Verify in the log file that this forest has already been successfully prepared.
Adprep encountered an LDAP error.

Error code: 0x20. Server extended error code: 0x208d, Server error message: 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of:
'CN=Servers,CN=Site-PHAV,CN=Sites,CN=Configuration,DC=phav,DC=cubacatering,DC=avianet,DC=cu'.
"



The user I logged in with is part of the Domain Admins, Enterprise Admins and Schema Admins groups.



Please HELP!!

windows server 2008 - Win2008: SC SDSET - how to grant a specific local user rights to stop and start a specific local service?

Where is a useful reference for the sdset command?



I can read and read, and I have yet to find a straightforward list of steps to say:



Service: App
User: Joe



Grant Joe start/stop/restart to App




(Why can't it be that easy? )



Note: Getting sdset wrong can cause a service to disappear from Service Manager, and only be visible to root/system (invisible to administrators!). Getting this right is important.
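With that warning in mind, a hedged sketch of the usual procedure (the SID is a placeholder for Joe's real SID, and the trailing ACEs must be whatever your own sdshow printed, not these):

sc sdshow App                 (save this output - it is your only undo)
sc sdset App "D:(A;;RPWPDTLORC;;;S-1-5-21-XXXXXXXX-XXXXXXXX-XXXXXXXX-1001)(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)"

In SDDL, RP grants start, WP stop, DT pause/continue, and LO/RC query. Joe's SID can be found with wmic useraccount where name='Joe' get sid. Test on a throwaway service first.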

Saturday, July 15, 2017

ssh - Intermittent ssh_exchange_identification: Connection closed by remote host



We encountered an issue where a series of git requests over ssh would sometimes fail with
ssh_exchange_identification: Connection closed by remote host




There are many examples on SE/SF of structural problems (tcp-wrappers, permissions on key files).



Our problem was: What is a likely cause of intermittent connection failures with this message?


Answer



Our issue appeared to have been caused by a moderately high number of incoming requests.



As soon as the number of concurrent unauthenticated connections goes over sshd's MaxStartups parameter,
sshd starts rejecting new connections.




The solution lies in modifying MaxStartups in sshd_config.
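The relevant sshd_config line looks like this (the default is commonly 10:30:100 - start randomly dropping at 10 unauthenticated connections, drop everything at 100; the numbers below are only an example):

MaxStartups 50:30:200

Validate with sshd -t and reload sshd afterwards.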


Friday, July 14, 2017

Hidden nginx rewrite rule (I think)



About a week ago, I was playing with nginx rewrite stuff to rewrite /admin to https.




I now want to undo this, but I cannot, for the life of me, remember where I put that rewrite rule.



I've reloaded, restarted, stopped and started nginx. I've rebooted the server. I've restored nginx.conf to the default version.



I have no idea where I put that rule. It's either there, or nginx is just confused, because when I go to [domain]/admin, it redirects to https://[domain]/admin



I might end up purging nginx from the system and installing from scratch.



Is there anywhere else that a rewrite might be put?

Any suggestions?



Thanks.


Answer



Perhaps you could provide your actual configuration file? You want to look for the include directive as that's the only way any directive can be "hidden".



Of course, it is far more likely that your browser is caching and you've actually already removed the rewrite. Test the URL with curl and see if a Location header is present; if not, then it's browser caching.
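Two quick checks along those lines (domain is a placeholder):

grep -rn 'rewrite\|return\|admin' /etc/nginx/   # also catches rules pulled in via include
curl -sI http://example.com/admin               # a Location: header means the server really redirects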


linux - Where are the logs for ufw located on Ubuntu Server?



I have an Ubuntu server where I am blocking some IPs with ufw. I enabled logging, but I don't know where to find the logs. Where might the logs be or why might ufw not be logging?


Answer




Perform sudo ufw status verbose to see if you're even logging in the first place. If you're not, enable logging with sudo ufw logging on. If it is logging, check /var/log/ for files starting with ufw. For example, sudo ls /var/log/ufw*



If you are logging but there are no /var/log/ufw* files, check to see if rsyslog is running: sudo service rsyslog status. If rsyslog is running, ufw logging is on, and there are still no log files, search through the common log files for any mention of UFW. For example: grep -i ufw /var/log/syslog and grep -i ufw /var/log/messages as well as grep -i ufw /var/log/kern.log.



If you find a ton of ufw messages in the syslog, messages, and kern.log files, then rsyslog might need to be told to log all UFW messages to a separate file. Add the following two lines to the top of /etc/rsyslog.d/50-default.conf:



:msg, contains, "UFW" -/var/log/ufw.log
& ~



And you should then have a ufw.log file that contains all ufw messages!



NOTE:



Check the 50-default.conf file for pre-existing configurations.



Make sure to back up the file before saving edits!


Thursday, July 13, 2017

centos6 - VPS CentOS 6.3 x64 recovery mode changes are not saved after reboot



I am running CentOS 6.3 x64 on a VPS. The server was having issues logging in to SSH using root credentials; it showed a message similar to "No Shell Exists, Access Denied". Root login via console is also not working; however, a normal wheel user can log in.



So I had to boot the virtual machine into recovery mode for troubleshooting; however, when I make any change and reboot the server, the changes are not preserved.




I have already tried mounting the file system as read-write using the following commands:



mount -a -o rw
mount -o remount, rw /


but this doesn't seem to work.



I am trying to add a new user, and after mounting the filesystem as read-write, I run the following commands:




adduser username
passwd username
visudo


The changes are shown until I restart the system into normal mode. Can anyone guide me on how I can add a new root user from CentOS recovery mode, or how to retain the changes made in recovery mode?



The VPS is hosted by FDC Servers using OnApp (http://onapp.com); I believe they have their own recovery console, as I am unable to find its name.



Answer



It sounds like you are booting into Rescue mode. According to https://www.centos.org/docs/5/html/Installation_Guide-en-US/s1-rescuemode-boot.html, the default root partition is a temporary root partition. The documentation says to run chroot /mnt/sysimage to fix this issue.
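So the sequence in the rescue shell would be roughly (username is a placeholder):

chroot /mnt/sysimage     # switch into the real root filesystem
adduser username
passwd username
exit                     # leave the chroot, then reboot normally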


Tuesday, July 11, 2017

godaddy - .it domain, Dreamhost Hosting

I registered a .it domain name from GoDaddy, but I would like to host it with DreamHost.



I'm having problems. I can't change the nameservers to ns1.dreamhost.com (ns2, etc.); I get this error message:




Your changes could not be submitted. 1
change could not be submitted.



The nameservers entered encountered
errors.





Other people have had this problem:
1
2



I could not find a clear, well explained solution.



User davethewave apparently solved the problem, but I don't understand how:





RE: Has anyone successfully hosted a .it site here? hi i've two domains
.it here: http://www.davethewave.it
and http://www.hctrieste.it note that
the italian NIC does not allow to have
authoritative dns out of italy. so
you have to modify dns manually, by
copying ip values from DH web panel.
bye DaVe





So, perhaps I won't be able to use DreamHost DNS, and that's less than ideal, but I need this domain to work somehow.
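If davethewave's workaround applies, it would mean keeping nameservers the Italian NIC accepts and entering the records by hand, copying the values from the DreamHost panel - something like (IP and target purely illustrative):

@    IN A      203.0.113.10                ; web server IP from the DreamHost panel
www  IN CNAME  example.dreamhosters.com.   ; or a second A record with the same IP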



When someone types in my .it URL, I want them to see my DreamHost-hosted site. How can I make this happen?

freebsd - Is it possible to allow port forwarding only for specific public IP addresses

I have a FreeBSD router hosting a public IP address, and I am using ipnat.rules to configure port forwarding from the public network into my private network. Now I am wondering whether I can restrict which public IP addresses are allowed to pass through my port forwarding. What I want is that only specific public source addresses can reach specific ports inside my network. Here is how my ipnat.rules file looks now:



rdr fxp0 217.199.XXX.XXX/32 port 7900 -> 192.168.1.12 port 80 tcp

Where did my memory go? linux box



I just got a Linux box and installed Apache and Mono, and I'm about to install MySQL.




I checked the memory with free -mt and got this:



             total       used       free     shared    buffers     cached
Mem:           492        470         22          0         31        343


This means I have 492 MB in total and I am using 470!?! How can I be using 470? I should only be running apache2. How do I figure out where my RAM is going?


Answer



The -/+ buffers/cache row displays the actual memory available, in the free column. Linux uses otherwise-unused memory for caching disk I/O.
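Two commands make this concrete:

free -m                      # the "-/+ buffers/cache" row shows what is really used/free
ps aux --sort=-rss | head    # the biggest resident-memory processes, largest first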



Permissions for Windows Server 2008 R2 NFS Share Files

I have configured and am using an NFS share on a Windows 2008 server. I am copying files from a Unix server using anonymous access. What I cannot figure out is how to get the file permissions working on the Windows side. I cannot rename or copy the files without editing the permissions for each file individually. I have set the permissions on the containing folder, but new files copied into the folder do not inherit the folder's permissions.




How do I get the permissions to be the same for all new files that are added to that folder?

Monday, July 10, 2017

mod rewrite - Apache mod_proxy vs mod_rewrite



What is the difference between using mod_proxy and mod_rewrite?



I have a requirement to send certain URL patterns through to Tomcat, which runs on the same host but on port 8080. I know this is something for mod_proxy, but I'm wondering why I can't just use mod_rewrite, or what the difference is?



Probably it has to do with reverse proxying, and also where in the pipeline it gets handled?



Thanks.



Answer



mod_rewrite using the [P] flag puts the request through mod_proxy. The advantage of using mod_rewrite is that you get more control over mapping requests, just as rewrite lets you rewrite URLs. But the proxying is exactly the same. The disadvantage is that mod_rewrite syntax is more complex. So my recommendation is to use mod_proxy-style configuration directives unless you need something more complicated. Others will probably recommend mod_rewrite-style, because then you only have to learn one style.
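Side by side, the two styles for the Tomcat case look like this (the /app/ prefix is hypothetical; with the [P] flag you generally still want ProxyPassReverse to fix redirects):

ProxyPass        /app/ http://localhost:8080/app/
ProxyPassReverse /app/ http://localhost:8080/app/

RewriteEngine On
RewriteRule ^/app/(.*)$ http://localhost:8080/app/$1 [P]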


pci dss - IPv6: Should I have private addresses?



Right now, we have a rack of servers. Every server right now has at least 2 IP addresses, one for the public interface, another for the private. The servers that have SSL websites on them have more IP addresses. We also have virtual servers, that are configured similarly.



Private Network



The private range is currently just used for backups and monitoring. It's a gigabit port, and the interface usage does not usually get very high. There are other technologies we're considering that would use this port:





  • iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network),

  • VPN to get access to the private range (something I'd rather avoid)

  • dedicated database servers

  • LDAP

  • centralized configuration (like puppet)

  • centralized logging




We don't have any private addresses in our DNS records (only public addresses). For our servers to utilize the correct IP address for the right interface (and not hard-code the IP address) probably requires setting up a private DNS server (so now we add two different DNS entries to two different systems).



Public Network



Our public range has a variety of services including web, email, and ftp. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, ssh, etc.) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well.



The public network currently runs at a dedicated 20Mbps link. There are a couple of legacy servers with fast-ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports. The more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now).



IPv6




I want to get an IPv6 prefix from our ISP, so at least every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least).



We have two IP networks right now. Adding the public IPv6 address would make it three.



Just use IPv6?



I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, utilize the newly free interfaces to create a trunk.



It has the advantage of headroom if either the public or private traffic needs to exceed 1 Gbps. The traffic for each interface is already analyzed on a regular basis to predict future bandwidth use. In the rare instances where bandwidth unexpectedly peaks: utilize QoS to ensure traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now).




It also has the advantage of not needing to make an entry for every private address. We may have private DNS (or just LDAP), but it'll be much more limited in scope, with fewer entries to duplicate.



Summary



I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems to be the best solution to me.



Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one (more trunked if needed):




  • Are there any technical disadvantages (limitations, buffers, scalability)?


  • Are there any other security considerations (asides from firewalls mentioned above) to consider?

  • Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet?

  • Is there typical software for setting up a Linux network that doesn't have IPv6 support yet? (logging, ldap, puppet)

  • Some other thing I didn't consider?


Answer



Alright, let's reply in parts.



1) Private addresses




IPv6 has different "scopes", so you can have a local scope and a global scope. IPv6 is smart enough to know which is which and to regulate traffic accordingly, so you can have a local non-routable network on IPv6 without any problem at all; actually, that is how it comes by default.



2) Dump ipv4 and run only ipv6



All IPv6 implementations so far are dual-stack, so you can comfortably run both, and I would definitely recommend running both; there's no harm in doing that, and IPv4 is not going away for a long time. Although IPv6 is very cool, completely dropping IPv4 is not something I would do.



3) Short questions



a) No technical disadvantages; on the contrary! Lots of cool stuff: automatic address assignment (SLAAC), anycast, native IPsec. It's quite cool.
b) Firewalls should be fine, but there are some specific rules to pay attention to, like allowing link-local scope traffic, allowing multicast on IPv6, and disabling processing of RH0 (Type 0 routing header) packets. Also keep in mind that ICMPv6 is a completely new protocol and IPv6 depends on it far more than IPv4 depends on ICMP, so filtering it out is not a good idea (see the ip6tables sketch below).

c) As far as I know, most Linux services support IPv6 without any problem. Dual stack FTW!
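As a minimal ip6tables sketch of those points (the interface name and surrounding policy are assumptions; fold these into your own rule set):

# never filter ICMPv6 wholesale; neighbor discovery depends on it
ip6tables -A INPUT -p icmpv6 -j ACCEPT
# allow link-local and multicast traffic on the LAN interface
ip6tables -A INPUT -i eth0 -s fe80::/10 -j ACCEPT
ip6tables -A INPUT -i eth0 -d ff00::/8 -j ACCEPT
# drop packets carrying a Type 0 routing header (RH0)
ip6tables -A INPUT -m rt --rt-type 0 -j DROP
ip6tables -A FORWARD -m rt --rt-type 0 -j DROP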



Also it's not bad to get yourself familiar with all the ipv6 new specs, have a look at http://en.wikipedia.org/wiki/IPv6 for starters


windows server 2012 r2 - Computer not joining domain

For a few days now (without any changes to the AD) it has been impossible to join a computer to my domain. When I try, after entering the domain name I'm prompted for an account that can join the domain; I complete it, and nothing...



I've tried to wait a very long time and it never does anything.



It's a computer that was in the domain before, but I'm rejoining it because of an error on user logon (something like "the trust relationship between this workstation and the domain failed").




I've checked my AD and everything seems to be OK. It runs on Windows Server 2012 R2.



Any ideas?

dns zone - Why do website host handles dns to resolve services(mail.domain.com ftp etc) instead of the domain registry?

I bought a new hosting service and needed to change the name servers to point to the new host for my website. When I did, all the other subdomains, especially the ftp and mail subdomains and the MX record, broke.



I was told I have to set up the DNS records again in my host's cPanel to fix this. So I wonder: why can't it be set up at the domain registrar? Why not point the website to the host that has all my files, and leave the rest as it is?



My suspicion is (correct me if I'm wrong) that in order to find domain.com's IP, the resolver goes through the "." root, then the "com" top level, and finally my domain. Once it finds where the zone is sitting, it resolves the rest of the names, like @, the mail subdomain, ftp, etc. But since the new host doesn't have any record of where to find them, it just stops. The thing is, why can't it just go back to the domain registrar to find the other info?



Please do let me know if I'm understanding this all wrong.




PS: The current situation is that I have a website in a cloud server on Rackspace; our website points to that server, and mail, ftp, etc. point to another server. So if this works, why do I have to change the DNS records at my host too? Really confused.

Sunday, July 9, 2017

domain name system - Using an outside DNS server to resolve my DNS server



Using the DNS server at Zoneedit.com, I have created 2 sub domains:




Ns1.example.com and




Ns2.example.com




Both Ns1 and Ns2 resolve to my server, which also hosts a DNS server.



If I were to create new zones on my DNS server, could I point my registrar to Ns1 and Ns2.example.com and expect the domains I am the authoritative host for to resolve?



Essentially, Dotster would have to look up Ns1/2.example.com first, and then be forwarded to my DNS server, where the entries would be resolved.



Is this possible?



Answer



If I understood correctly, you want to delegate DNS from your registrar to your own DNS server. So yes, it is possible; you just have to tell your registrar to point to it. You will need to add NS records plus A records pointing at your servers' names and IP addresses.



And so you'll be the authoritative DNS server for your zone.



It should look like this:



Registrar:



example.com. IN NS ns1.example.com.

example.com. IN NS ns2.example.com.



ns1.example.com. IN A 0.0.0.0 ;; glue records: the NS addresses are needed because they are
ns2.example.com. IN A 0.0.0.0 ;; the only way for everybody to know where you are



You:



example.com. IN SOA ns1.example.com. hostmaster.example.com. ( ; as Start Of Authority, you are the one
        1          ; serial (example values follow)
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; minimum TTL




example.com. IN NS ns1.example.com.
example.com. IN NS ns2.example.com.



a.random.host.example.com. IN A 0.0.0.0 ;; IP of a random host
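Once the registrar has published the NS and glue records, you can check the delegation from the outside (substitute your real domain):

dig NS example.com @8.8.8.8 +short     # who do public resolvers think is authoritative?
dig +trace example.com                 # follow the delegation from the root down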


Suggestions for a NAS for VMware storage





I have been looking at Openfiler, and it appears to be a great open-source solution, though I haven't seen very much documentation on its limitations. We are by no means a Fortune 500 company (yet :) so our current budget is rather minimal, but nonetheless I would like to hear your opinions!



Our storage server consists of 6TB (12 x 500GB), 2 x AMD 2.4GHz CPUs, and 8GB RAM, and its purpose will be to serve as our VMware storage. The VMs will consist of web servers, QB servers, and possibly small-scale mail run off our blade environment.



Just wanted to hear your thoughts, since I don't have any experience other than with Dell's SAN management software.


Answer



FreeBSD 8.2, running ZFS. ZFS includes the following out of the box:




  • Supports NFS & iSCSI out of the box.


  • ZFS includes Snapshots, data checksums, multiple copies, filesystem compression

  • RAID-Z - Similar to RAID-5, but without the RAID-5 write hole. All disk writes are atomic copy-on-write transactions, so the on-disk state is never inconsistent (No need to FSCK after a power outage!).

  • Double-parity RAID-Z2 (e.g. RAID-6, but without the write hole)

  • (soon) data deduplication

  • There is no need for an expensive RAID controller, so you can drop that layer of complexity.
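As a rough sketch, the twelve 500GB drives from the question could be pooled and exported like this (device names are assumptions):

# two 6-disk raidz2 vdevs out of the twelve 500GB drives
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11
# a compressed dataset for the VMware datastore, shared over NFS
zfs create tank/vmware
zfs set compression=on tank/vmware
zfs set sharenfs=on tank/vmware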



Read more about the benefits of ZFS in this short summary at http://hub.opensolaris.org/bin/view/Community+Group+zfs/whatis .



FreeBSD is a very solid operating system, and ZFS is surprisingly easy to learn and use.




This solution is free; there's no cost. There are also a couple of packaged products built on the same stack, such as FreeNAS.






Saturday, July 8, 2017

domain name system - Can a CNAME record not include www?




My registrar tells me that I can't have a CNAME record that doesn't start with www. Is that true?



So I am using Amazon EC2 with load balancing. The load balancer has a convoluted DNS name, and the documentation specifically tells you to use a CNAME to send requests to that DNS name, not an A record.





myloadbalancer-1234567890.us-west-1.elb.amazonaws.com (A Record)



Note: Because the set of IP addresses associated with a LoadBalancer can change over time, you should never create an "A" record with any specific IP address. If you want to use a friendly DNS name for your load balancer instead of the name generated by the Elastic Load Balancing service, you should create a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53 to create a hosted zone. For more information, see Using Domain Names With Elastic Load Balancing.




I purchased a domain. The registrar doesn't allow you to add records yourself, only by emailing them a request. So they set up a CNAME record:



mydomain.org  SOA 111  whatever

www.mydomain.org CNAME 3596 myloadbalancer-1234567890.us-west-1.elb.amazonaws.com


When I asked them why it doesn't work without www, the registrar answered that there is a technical prohibition against making a CNAME record without www. Is that true, or are they incompetent/lying and should I switch registrars?


Answer



A CNAME (alias) record points to another DNS name, and you can create as many as you like for subdomains; www is just the most common one, and every provider supports it. What you can't do is put a CNAME at the zone apex (the bare mydomain.org), because the apex must carry SOA and NS records and a CNAME cannot coexist with other record types. So your registrar is right about that limitation, even if "must start with www" is a clumsy way to phrase it. You can also use Route 53 to create A or CNAME records; its alias records let even the root domain point at an ELB.



Cheers!!



Tested Solution :




Update - To solve your problem, you can use Route 53 to create an alias A record for the ELB at the root domain ('example.com') and a CNAME for www.example.com.
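A sketch of that alias record via the AWS CLI (the hosted-zone IDs below are placeholders; the ELB's own zone ID comes from the ELB console/API):

aws route53 change-resource-record-sets --hosted-zone-id Z_MY_ZONE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z_ELB_ZONE",
        "DNSName": "myloadbalancer-1234567890.us-west-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'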


email - How to block IP addresses from port 25

PROBLEM: Users are getting 15-20 spam emails per hour, even with SpamAssassin set to its most aggressive settings.



SOLUTION: Spam-filtering services are available from companies like McAfee (Intel). These services work by changing the domain's MX record to point to the McAfee servers; McAfee filters the email and delivers it to our HostGator private server on port 25.




NEW PROBLEM: Spammers are ignoring our MX records and delivering email directly to port 25 of our domain host (e.g. yourdomain.com) … so SpamAssassin is useless and the outside spam service is bypassed entirely. If we can't fix this we will be forced to move all the domains on our private server to a GoDaddy Exchange server (Exchange implements the solution proposed below).



PLATFORM: I'm using a dedicated server that I lease through HostGator. The server is running CentOS with a WHM / cPanel setup. I'm hoping to find some sort of script / plugin that will allow me to block all IP addresses (except ones that I choose to allow) from port 25 on SOME domains but not all domains (since some users aren't using McAfee as a 3rd party solution).



PROPOSED SOLUTION: McAfee recommends that participating domains (not all domains will use an external spam service) deny SMTP access to all mail servers (clients can still access via SMTP AUTH), EXCEPT for an ALLOW block containing the IP addresses of authorized McAfee servers. This is evidently the solution Exchange uses. (See the iptables sketch below.)
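For what it's worth, the deny-all-except pattern itself is simple with iptables; the catch is that it applies per server IP address, not per domain (203.0.113.0/24 below is a hypothetical stand-in for McAfee's published ranges):

# allow the filtering service's relays to deliver on port 25
iptables -A INPUT -p tcp --dport 25 -s 203.0.113.0/24 -j ACCEPT
# keep the submission port open so clients can still use SMTP AUTH
iptables -A INPUT -p tcp --dport 587 -j ACCEPT
# drop everyone else trying to deliver directly on port 25
iptables -A INPUT -p tcp --dport 25 -j DROP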



QUESTION: Is there a way to do this? HostGator has been ZERO help to me whatsoever. They just keep telling me to use SpamAssassin, which I don't want to use.



I guess I'm just perplexed by this. I can't be the ONLY person experiencing this issue, yet no matter how much I Google it there doesn't seem to be any clear-cut answer. Spammers are bypassing my MX records (which are pointed at McAfee's spam solution) and avoiding the spam filters altogether, hence blowing up my inbox with all this spam. As I understand it, Exchange servers work by denying ALL IPs on port 25 except for the IPs of the third-party spam solution. Now I know I don't have an Exchange server, but isn't there an easy way to do this on my server?

networking - Can I use routes to influence which interface address multicast listeners use?

I have a server with multiple NICs on it. Each NIC is plugged into a different, isolated network that is serving multicast traffic. I have a program that listens to the multicast traffic on each of these networks. Right now I have to specify in my program which interface to use as part of the multicast join. This is not a big deal, but is slightly inconvenient.



Is it possible to use routes to influence this process? Suppose I have two multicast groups as follows:



A. 224.1.2.32  39312 eth1
B. 224.1.11.19 59328 eth2



Can I add two routes to the routing table such that, when I join the multicast groups from my code, the kernel knows to send group A's join out eth1 and group B's join out eth2? I've been unable to get it to behave the way I want. Adding various routes seems not to affect this process (see the sketch below), and the only way I've found to influence which interface is chosen is to specify it in code as part of the multicast_request data structure.
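For concreteness, these are the kinds of host routes being described (group addresses taken from the question); per the question, they did not actually influence which interface the joins use:

ip route add 224.1.2.32/32 dev eth1
ip route add 224.1.11.19/32 dev eth2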

Thursday, July 6, 2017

raid - RAID1 write penalty using mdadm - what's the cause?

When doing some performance tests writing to an SSD-based 3-way RAID 1 mirror managed by mdadm, it appears we suffer a significant write penalty: about 2.2x slower than the same test on a single drive. We're reading from and writing to the same underlying physical drives in this test because that simulates the real-world case we're interested in.



Is this slowdown due to SATA III speed limitations or something else? I'm surprised RAID 1 would have a write penalty, because I'd think it could write to all three drives simultaneously at the same speed it could write to one of them.



All three drives present:
dd if=/dev/md3 of=test.file bs=1048576 count=37193
...207.748 s, 188 MB/s


Just two drives present (i.e. normal two-drive RAID 1)
dd if=/dev/zero of=test.file bs=1048576 count=37193
...119.016 s, 328 MB/s

Just one drive present (no redundancy)
dd if=/dev/zero of=test.file bs=1048576 count=37193
...93.794 s, 416 MB/s

linux - P2V Wouldn't Boot, Rebuilt initrd, Need to Clean Up




We have a CentOS 5.4 server (kernel build 2.6.18-164.el5xen).



We went to P2V this server so we can have redundancy; the physical box only has one PSU.



The P2V only completed 99% of the way. We have a VMware ticket open, but they marked it as low priority.



I was able to boot into a rescue disc of Red Hat 5.4 and rebuild the initrd with the help of this blog post.



Now the only issue is that the original server had a modified initrd, which was also from a different OS build and was made by an outside provider. We do not have a document outlining the modifications.




My question is: is it at all possible to copy the initrd off the physical server, replace it on the virtual one, and somehow have the virtual machine boot?



Thanks for any input.



Edit:
I copied the initrd image from the physical machine and it recreated the original issue. Here is a screen capture of the error: http://i.imgur.com/MqC73.jpg



Edit2:





echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure VolGroup00
resume /dev/VolGroup00/LogVol01
echo Creating root device.
mkrootdev -t ext3 -o defaults,ro /dev/VolGroup00/LogVol00
echo Mounting root filesystem.
mount /sysroot



Answer




I have always had much better success booting the physical system with a "Parted Magic" USB or CD, imaging the system with Clonezilla (from inside PMagic), then restoring it in the virtual machine with the same Parted Magic disc.



If you are migrating a Windows machine, "mergeide" might also be quite helpful for you.



More on mergeide: http://www.biermann.org/philipp/STOP_0x0000007B/


Wednesday, July 5, 2017

Domain controller not appearing in DNS on any server but itself?




I have a site with a DC I want to decommission, and a relatively new one (promoted within the last two weeks). I'm having trouble demoting the old server, and in the process of trying to figure out why, I'm running repadmin /replsum on a few of my DCs.



On most of them, I'm getting the error:




Experienced the following operational errors trying to retrieve replication information:



58 - c7908eb4-5ef4-46a7-b445-642b33ece726._msdcs.domain.com





I've looked around in DNS on the rest of my DCs looking for this listing, and I finally found it on the new DC mentioned above, where it refers to itself! So every other DC appears not to know about this DC, at least from a DNS perspective, yet running repadmin /replsum on the new DC does not return any errors.



Why would this be happening, and what's the best way to correct it?


Answer



Eventually they all just figured it out, I guess. I took a look a few days later and the relevant DNS entries existed and I'm no longer getting any errors running repadmin.


Tuesday, July 4, 2017

permissions - Shared group directory with individual user files

I have a mounted NFS partition in which a specific group, say nfsgroup, has rwx on the directory (call it nfsdir). If my user brian is a member of nfsgroup and creates a file in nfsdir, and I then chgrp it to, say, the brian group (my own group), other users in nfsgroup can still delete my file. They get the rm: remove write-protected regular empty file ‘test.txt’? prompt, but a user who is in nfsgroup and not in the brian group can still delete the file, since deletion is governed by write permission on the directory rather than on the file.




Is there a way to allow all users in nfsgroup to create files in nfsdir, but also let members of nfsgroup protect their individual files from modification or deletion by other group members?
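For reference, the standard mechanism for the deletion half of this is the sticky bit, the same trick /tmp uses (a sketch, assuming nfsdir is group-owned by nfsgroup):

# sticky bit: group members can create files in nfsdir, but only a
# file's owner (or the directory owner/root) may delete or rename it
chmod 1770 nfsdir
ls -ld nfsdir    # shows drwxrwx--T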

How to create non-clustered indexes on a SQL Server 2008 database? Preferably without code?

I would appreciate help on how to create non-clustered indexes on a SQL Server 2008 database without using code--or rather, 'statically', once and for all, prior to running any SQL queries (that probably does not make sense, but my point being I don't want to run the SQL command to create indexes every time I run the SQL queries that are part of my business application).



That is, ideally there's a tool within the Microsoft SQL Server tooling built into Visual Studio 2010 Professional (NOTE: I DO NOT HAVE THE ENTERPRISE OR ULTIMATE EDITIONS--THIS MAKES A BIG DIFFERENCE AS TO WHAT I CAN DO WITH THE BUILT-IN SQL MANAGER IN THE PROFESSIONAL VERSION) to do this--since I don't have any other tool (I just looked, and found that my Microsoft SQL Server 2008 install does not have what I need--at least on my system; it is apparently a crippled freeware version). So perhaps a simple SQL command to index the table below is warranted.



I have read the references below, but I cannot figure out how to do this.



Here is my table:



Table CUSTOMER

Columns:

CustomerID = GUID - this is the unique primary key

CustomerDecimal1 = decimal - not unique, but 99% of the time distinct from the other decimal values. I wish to index this field (1 of 2)

CustomerDecimal2 = decimal - not unique, but 99% of the time distinct from the other decimal values. I wish to index this field (2 of 2)


CustomerTextComments = vChar(50)


The decimal fields are frequently used in WHERE clauses, so they are ideal candidates for a non-clustered index (and perhaps for the filtered indexes newly introduced in SQL Server 2008; see the first reference below).



Further, about my platform: I already have a table with existing data in it, but only a few records, mostly blank. I am working from the Server Explorer inside Visual Studio 2010, which has a lot of functionality, including the ability to generate SQL queries. Ideally I'd like to write any indexing method in LINQ-to-Entities (only because I don't really know SQL that well), but if somebody can give me a complete listing of how to index the CustomerDecimal1 and CustomerDecimal2 fields in this table (see the sketch below), I would be grateful.
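A minimal T-SQL sketch, run once against the database (index names are arbitrary; the dbo schema is assumed):

-- one single-column non-clustered index per decimal field
CREATE NONCLUSTERED INDEX IX_CUSTOMER_Decimal1
    ON dbo.CUSTOMER (CustomerDecimal1);
CREATE NONCLUSTERED INDEX IX_CUSTOMER_Decimal2
    ON dbo.CUSTOMER (CustomerDecimal2);

-- or, if both columns always appear together in the WHERE clause,
-- a single composite index serves that query better
CREATE NONCLUSTERED INDEX IX_CUSTOMER_Decimal1_Decimal2
    ON dbo.CUSTOMER (CustomerDecimal1, CustomerDecimal2);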



References:



http://blog.sqlauthority.com/2008/09/01/sql-server-2008-introduction-to-filtered-index-improve-performance-with-filtered-index/ (SQL Server 2008 new 'filtered' index property for WHERE clause searches)




http://en.wikipedia.org/wiki/Index_%28database%29#Non-clustered



-----Updated



@mrdenny -- I thank you for your time and I see you have a stellar reputation, but I cannot believe what you are saying--yes I am stubborn, call it denial! :-) I will leave this thread open a bit longer in the hope that somebody else sees it. Also, as I do not run SQL natively, only LINQ-to-Entities inside Entity Framework (EF 4.0), I would not even know where to put the code you helpfully provided ("T-SQL to create a non-clustered index on the two decimal columns"). I am using both decimal columns at all times in my WHERE search, so your first SQL command is appropriate for me.



Can anybody translate Mr. Denny's first SQL code into LINQ-to-Entities? Failing that, I will throw up my hands and say I don't believe it (it goes against what I've read about an index being a kind of balanced tree, which I assumed would be built automatically by the system). Alternatively, I've read between the lines that indexing will save you at most about 20% in performance--good, but nothing to really get worked up over. Yes, it's sour grapes!

Monday, July 3, 2017

tomcat6 - SSL configuration , Tomcat with Apache and mod_jk




I am looking to configure SSL with Tomcat 6 and the Apache web server, using the Tomcat connector mod_jk. I am pretty new to this, so please bear with me.



I have an SSL certificate purchased and configured in Tomcat using a keystore file. It works perfectly if I access Tomcat directly via https. Now I need Apache in front of Tomcat. My question is: do I need to provide the certificate in both Tomcat and Apache, or just Tomcat? Isn't Apache supposed to just pass the request on to Tomcat (using JkExtractSSL) and let it handle the SSL authentication (verification of the certificate)?



If certificate paths need to be configured in both Apache and Tomcat, then I have cert.p7b and certreq.csr files, which are surely not Apache-compatible; can you please tell me how I can do that?



I have the following configuration so far:



httpd.conf:




LoadModule ssl_module modules/mod_ssl.so
LoadModule jk_module modules/mod_jk.so

JkWorkersFile /usr/local/apache2/conf/workers.properties
JkShmFile logs/mod_jk.shm
JkLogFile logs/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkMount /mywebapp/* worker1
JkExtractSSL On

JkHTTPSIndicator HTTPS
JkSESSIONIndicator SSL_SESSION_ID
JkCIPHERIndicator SSL_CIPHER
JkCERTSIndicator SSL_CLIENT_CERT

DocumentRoot "/var/lib/tomcat6/webapps/mywebapp"
Alias /mywebapp "/var/lib/tomcat6/webapps/mywebapp"

<Directory "/var/lib/tomcat6/webapps/mywebapp">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

<Directory "/var/lib/tomcat6/webapps/mywebapp/WEB-INF">
    AllowOverride None
    Deny from all
</Directory>

Include conf/extra/httpd-ssl.conf


httpd-ssl.conf:



    


<VirtualHost _default_:443>

    DocumentRoot "/var/lib/tomcat6/webapps/mywebapp"

    SSLEngine on
    SSLCipherSuite ALL:!ADH:!EXP56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
    SSLOptions +StdEnvVars +ExportCertData

    Alias /mywebapp "/var/lib/tomcat6/webapps/mywebapp"

    <Directory "/var/lib/tomcat6/webapps/mywebapp">
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    JkMount /mywebapp/* worker1

    <Directory "/var/lib/tomcat6/webapps/mywebapp/WEB-INF">
        AllowOverride None
        Deny from all
    </Directory>

</VirtualHost>





Important to mention here: there is no SSLCertificateFile or SSLCertificateKeyFile configured in httpd-ssl.conf, as I am not sure whether it is needed in both Tomcat and the Apache web server. I already have it configured in Tomcat using the keystore file.


Answer



SSL is used to encrypt communications between a client and your web service. If you are putting Apache in front of Tomcat, then you need to configure Apache with the SSL certificate...and you don't need it at all for Tomcat, because Apache is handling all of the client communication. (A sketch of the relevant directives follows.)
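Concretely, the SSL vhost ends up carrying directives like these (the paths are placeholders for wherever you put the converted files):

SSLEngine on
SSLCertificateFile      /usr/local/apache2/conf/ssl/mysite.crt
SSLCertificateKeyFile   /usr/local/apache2/conf/ssl/mysite.key
SSLCertificateChainFile /usr/local/apache2/conf/ssl/intermediate.crt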





If certificate paths need to be configured in both Apache and Tomcat, then I have cert.p7b and certreq.csr files, which are surely not Apache-compatible; can you please tell me how I can do that?




The .csr file is your certificate request and is not important.



This question has links that will help you convert your .p7b file into a PEM-encoded certificate for use with Apache.
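For instance, a typical conversion looks like this (assuming the .p7b is DER-encoded; drop -inform der if it is base64/PEM):

openssl pkcs7 -inform der -print_certs -in cert.p7b -out cert.pem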



You can also export the PEM-encoded certificate from your keystore using the -exportcert command:

keytool -exportcert -alias <alias> | openssl x509 -inform der



The JkExtractSSL directive tells Apache to pass some SSL-related information to Tomcat. According to this document, that includes the following environment variables:




  • SSL_CIPHER

  • SSL_CIPHER_USEKEYSIZE

  • SSL_SESSION_ID

  • SSL_CLIENT_CERT_CHAIN_n



Nginx and different versions of PHP FPM + PHP



Thanks to Mark and his previous answer, I now have a better understanding of what I wish to achieve, so I am posting a (hopefully) clearer and slightly different variation of my previous question, as that thread has reached saturation.



I am trying to have multiple WordPress sites run on an nginx server, where each site requires a different version of PHP. I wish to achieve this by using multiple versions of PHP-FPM each running a different version of PHP, separate to nginx.



I then want to use .conf files to control which PHP-FPM server each site uses, allowing that site to run on the desired PHP version. (As per the comments section)




Currently my server block for testsite1 looks like this, running the default PHP version.



server {
listen 80;
listen [::]:80;

root /usr/share/nginx/html/testsite1;
index index.php index.html index.htm;


server_name local.testsite1.com;

location / {
try_files $uri $uri/ /index.php?q=$uri&$args;
}

error_page 404 /404.html;

error_page 500 502 503 504 /50x.html;
location = /50x.html {

root /usr/share/nginx/html;
}

location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
}

}


This is located in /var/nginx/sites-available/testsite1 and is symlinked to /var/nginx/sites-enabled/testsite1. Each server block is located in a separate file within sites-available:



testsite1
testsite2
testsite3



I have compiled a different version of PHP (5.3.3), but I am unsure how to set up multiple PHP-FPM servers and how to make each 'point' to a different version of PHP. I also need guidance on how to set up multiple .conf files to define which PHP-FPM server each WordPress site will use.



(essentially, I need hand holding throughout the entire process...)


Answer



In my experience, a simple server structure like the following is enough for your case.



Assumption




You have about two hours to set them up.





Server Environment Assumption




1 x Nginx server (Front-end, to process static files)



2 x PHP-FPM server (Back-end, to process PHP script)



1 x database server (either on a separate server or on the Nginx server is okay)




Nginx server can be accessed by Public Network and Private Network



PHP-FPM servers and DB server can only be accessed by Nginx server (Private Network)




Public network: can be accessed by anyone who has internet access.

Private network: can only be seen within a specific network group (the Class A, B, and C private ranges, e.g. 10.x.x.x, 172.16.x.x-172.31.x.x, or 192.168.x.x).




If you are using a VPS on Linode or DigitalOcean, both provide private-network IPs; you can follow their instructions to set it up.



If not, you can set up your own VPN (Virtual Private Network) or use your router to build one; it's easy, and you can Google everything you need.



If you are using Ubuntu, setting up a VPN via configuration takes less than 5 minutes.



Before the next step, I suggest you set up firewall rules to prevent potential attacks (using iptables or its wrapper, FirewallD).



Although we run PHP-FPM on boxes dedicated and separate from Nginx, the PHP files themselves are never transferred over the TCP port or Unix socket; FastCGI only passes each script's path.




Thus, you have to manage the web root folder so that every server sees the same files.



Options to manage website folder





  1. Uploading your websites to Nginx server AND PHP-FPM servers with SAME folder PATH


  2. Write a script to synchronize files to all of your servers


  3. Using GIT to all your servers.


  4. Creating a NFS (Network File System) on Nginx or another dedicated server






If you are using a *nix system, I suggest the fourth option because:




First, you manage all your website files on one server



Second, very easy to maintain




Third, backups take minutes (but that should be another question)



※ NFS server acts as a pure storage server for your websites




Some people may worry that using NFS causes network latency; however, when there are multiple websites to manage, NFS is an efficient and simple approach.



You can Google "NFS on Linux" to finish installing and configuring this step; it costs a newcomer about an hour.



However, be aware that NFS server should be in a Private Network as well.




When NFS, PHP-FPM, and Nginx are all in the same private network, the network latency should be low.



Then let's config the nginx.conf



Assumption




Your Nginx public IP is 202.123.abc.abc, listen 80(or 443 if SSL enabled)




Your PHP-FPM 5.5 is on 192.168.5.5, listen 9008



Your PHP-FPM 5.6 is on 192.168.5.6, listen 9008



(additional example) Your HHVM 3.4 is on 192.168.5.7 , listen 9008



And you consider PHP 5.5 is your most used PHP version




    server {


listen 80 default_server;
server_name frontend.yourhost.ltd;
#root PATH is where you mount your NFS
root /home/www;
index index.php;

location ~ \.php$ {
try_files $uri $uri/ =404;
fastcgi_pass 192.168.5.5:9008;

fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PHP_VALUE open_basedir=$document_root:/tmp/:/proc/;
include fastcgi_params;
fastcgi_buffer_size 512k;
fastcgi_buffers 256 4k;
fastcgi_busy_buffers_size 512k;
fastcgi_temp_file_write_size 512k;
fastcgi_intercept_errors on;


}

}
#Here to set up your vhosts
include vhosts/*.conf;



The two lines above (the comment and the include) should be put before the last } of nginx.conf, i.e. inside the http block.





Then, create a folder called vhosts inside the folder containing nginx.conf



Assumption




You have another one application is using PHP 5.6



You have another application is using HHVM





For PHP 5.6 (vhosts/app1.conf)



       server {
server_name app1.yourhost.ltd;
listen 202.123.abc.abc:80;
index index.php;
#root PATH is where you mount your NFS
root /home/app1;
#Include your rewrite rules here if needed

include rewrite/app1.conf;

location ~ \.php($|/){
try_files $uri $uri/ =404;

fastcgi_pass 192.168.5.6:9008;
fastcgi_index index.php;
include fastcgi_params;
set $path_info "";
set $real_script_name $fastcgi_script_name;

if ($fastcgi_script_name ~ "^(.+?\.php)(/.+)$") {
set $real_script_name $1;
set $path_info $2;
}
fastcgi_param SCRIPT_FILENAME $document_root$real_script_name;
fastcgi_param SCRIPT_NAME $real_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param PHP_VALUE open_basedir=$document_root:/tmp/:/proc/;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {

expires 30d;
}

location ~ .*\.(js|css)?$ {
expires 12h;
}

access_log /var/wwwlog/app1/access.log access;
error_log /var/wwwlog/app1/error.log error;
}



For HHVM (vhosts/app2.conf)



       server {
server_name app2.yourhost.ltd;
listen 202.123.abc.abc:80;
index index.php;
#root PATH is where you mount your NFS
root /home/app2;

#Include your rewrite rules here if needed
include rewrite/app2.conf;

location ~ \.hh($|/){
try_files $uri $uri/ =404;

fastcgi_pass 192.168.5.7:9008;
fastcgi_index index.hh;
include fastcgi_params;
set $path_info "";

set $real_script_name $fastcgi_script_name;
if ($fastcgi_script_name ~ "^(.+?\.hh)(/.+)$") {
set $real_script_name $1;
set $path_info $2;
}
fastcgi_param SCRIPT_FILENAME $document_root$real_script_name;
fastcgi_param SCRIPT_NAME $real_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param PHP_VALUE open_basedir=$document_root:/tmp/:/proc/;
}

location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
expires 30d;
}

location ~ .*\.(js|css)?$ {
expires 12h;
}

access_log /var/wwwlog/app2/access.log access;
error_log /var/wwwlog/app2/error.log error;

}


In this way, you can even add different SSL certificates for your vhosts.



Restart your server, and enjoy!



EDITED



To install the different versions of PHP-FPM, you can compile them yourself or use an existing stack.




Recommend https://github.com/centos-bz/EZHTTP/archive/master.zip
to save your time



Usage (assuming your machine has wget and unzip installed):




$ wget --no-check-certificate https://github.com/centos-bz/EZHTTP/archive/master.zip?time=$(date +%s) -O server.zip




$ unzip server.zip



$ cd EZHTTP-master



$ chmod +x start.sh



$ ./start.sh



Choose 1 in the first screen




Choose 1 in the second screen (LNMP)



Choose 1 (do_not_install) when asked which version of nginx you want to install



Choose 1 (do_not_install) when asked which version of mysql you want to install



Choose the version you want when asked which version of PHP to install



Leave all settings at their defaults (this makes PHP-FPM easier to manage in the future)




Choose whatever extra extensions you want; you can also skip this, since all common extensions will be installed later.



Let this shell start compiling!
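Once a back-end is installed, its PHP-FPM pool just needs to listen on the private IP so the Nginx box can reach it. A minimal pool sketch for the 192.168.5.5 box (the file path, user, and the Nginx box's private IP are assumptions that depend on your install):

; in php-fpm.conf (or an included pool file)
[www]
; listen on the private address instead of 127.0.0.1
listen = 192.168.5.5:9008
; only accept FastCGI connections from the Nginx front-end
listen.allowed_clients = 192.168.5.1
user = www
group = www
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 8

Repeat per back-end (192.168.5.6 for PHP 5.6, and the equivalent listen setting on the HHVM box).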



linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...