Saturday, February 28, 2015

worksheet function - Date formula in excel regarding calendar


I need to fill in a Day and Date column in an Excel table to create a one month calendar:


Select a year:  2016
Select a month: September
-------------------------
| Day | Date |
|------------------------
|Thursday | 01.09.2016 |
|Friday | 02.09.2016 |
| etc. | etc. |
-------------------------

(Note the date format is dd.mm.yyyy.)


The days and dates must be calculated automatically after choosing year and month.


Answer



You can achieve it with a few simple formulas:



  • Date for first day of the month:
    =DATE(B1,B2,1)

  • Rest of the dates:
    =IFERROR(IF(MONTH(B5)=MONTH(B5+1),B5+1,""),"")

  • Day names:
    =IFERROR(CHOOSE(WEEKDAY(B5,2),"Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"),"")


Fill the formulas down for 31 rows; only the dates that fall in the chosen month will be displayed, and the cells below them will stay empty.
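
For reference, this is the layout the formulas seem to assume (my reading of the cell references: the year in B1, the month as a number in B2, and the table starting in row 5; if the month is chosen by name it has to be turned into its number first):

A5: =IFERROR(CHOOSE(WEEKDAY(B5,2),"Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"),"")
B5: =DATE(B1,B2,1)    (give column B the custom number format dd.mm.yyyy from the question)
B6: =IFERROR(IF(MONTH(B5)=MONTH(B5+1),B5+1,""),"")    (fill A5 and B6 down to row 35)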




ssl - Nginx https vhost catching all requests

I am trying to set up a GitLab instance alongside an ownCloud instance on the same server. Both work fine over http, and both work fine over https if only one host is enabled.




The weird thing is that the owncloud host catches all requests to the server, even though the site config says it should only catch the ones for the appropriate domain, and it thus prevents the gitlab vhost from answering.



Owncloud conf:



upstream php-handler {
# server 127.0.0.1:9000;
server unix:/var/run/php5-fpm.sock;
}

server {

listen 80;
server_name cloud.example.com;
return 301 https://$server_name$request_uri; # enforce https
}

server {
listen 443 ssl;
server_name cloud.example.com;

ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;

ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;

# Don't show version
server_tokens off;

# Have separate logs for this vhost
access_log /var/log/nginx/owncloud_access.log;
error_log /var/log/nginx/owncloud_error.log;

# Path to the root of your installation

root /usr/share/nginx/owncloud;

client_max_body_size 10G; # set max upload size
fastcgi_buffers 64 4K;

rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;

index index.php;

error_page 403 /core/templates/403.php;
error_page 404 /core/templates/404.php;

location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}

location ~ ^/(?:\.|data|config|db_structure\.xml|README) {

deny all;
}

location / {
# The following 2 rules are only needed with webfinger
rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;


rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;

try_files $uri $uri/ index.php;
}

location ~ \.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param HTTPS on;
fastcgi_connect_timeout 120;
fastcgi_pass php-handler;
}

# Optional: set long EXPIRES header on static assets
location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
expires 30d;
# Optional: Don't log access to assets

access_log off;
}

}


Shouldn't this only catch requests to cloud.example.com?



GitLab config:




upstream gitlab {
server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

## This is a normal HTTP host which redirects all traffic to the HTTPS host.
server {
listen *:80 default_server;
server_name git.example.com; ## Replace this with something like gitlab.example.com
server_tokens off; ## Don't show the nginx version number, a security best practice
root /nowhere; ## root doesn't have to be a valid path since we are redirecting

rewrite ^ https://$server_name$request_uri permanent;
}

server {
listen 443 ssl;
server_name git.example.com; ## Replace this with something like gitlab.example.com
server_tokens off;
root /home/git/gitlab/public;

## Increase this if you want to upload large attachments

## Or if you want to accept large git objects over http
client_max_body_size 512M;

## Strong SSL Security
## https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
ssl on;
ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;

ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4';


ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_session_cache builtin:1000 shared:SSL:10m;

ssl_prefer_server_ciphers on;

add_header Strict-Transport-Security max-age=63072000;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;


## Individual nginx logs for this GitLab vhost
access_log /var/log/nginx/gitlab_access.log;
error_log /var/log/nginx/gitlab_error.log;

location / {
## Serve static files from defined root folder.
## @gitlab is a named location for the upstream fallback, see below.
try_files $uri $uri/index.html $uri.html @gitlab;
}


## If a file, which is not found in the root folder is requested,
## then the proxy pass the request to the upsteam (gitlab unicorn).
location @gitlab {

## If you use https make sure you disable gzip compression
## to be safe against BREACH attack.
gzip off;

## https://github.com/gitlabhq/gitlabhq/issues/694
## Some requests take more than 30 seconds.

proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;

proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Frame-Options SAMEORIGIN;


proxy_pass http://gitlab;
}

## Enable gzip compression as per rails guide:
## http://guides.rubyonrails.org/asset_pipeline.html#gzip-compression
## WARNING: If you are using relative urls do remove the block below
## See config/application.rb under "Relative url support" for the list of
## other files that need to be changed for relative url support
location ~ ^/(assets)/ {

root /home/git/gitlab/public;
gzip_static on; # to serve pre-gzipped version
expires max;
add_header Cache-Control public;
}

error_page 502 /502.html;
}



AMEND:
For HTTP, everything works as intended, with multiple vhosts. The problems start with SSL. And yes, nginx has SNI enabled (nginx -V says so).
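
To narrow it down, it may help to check from the shell which 443 server block nginx actually treats as the default and which vhost answers each name. This is only a diagnostic sketch; the sites-enabled path is a Debian-style assumption:

# confirm the binary was built with SNI (as already checked in the question)
nginx -V 2>&1 | grep -i sni

# list every listen/server_name pair nginx loads; the first 443 block (or the one
# marked default_server) answers any request whose SNI name matches no other block
grep -RnE 'listen|server_name' /etc/nginx/sites-enabled/

# see which vhost actually answers each name locally (-k because of the snakeoil certs)
curl -k -s -o /dev/null -D - --resolve git.example.com:443:127.0.0.1 https://git.example.com/
curl -k -s -o /dev/null -D - --resolve cloud.example.com:443:127.0.0.1 https://cloud.example.com/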



Thanks for any help, I know there's a guru out there who knows the answer. :)

windows 7 - Improving laptop performance, including 32- vs. 64-bit

I have a slightly older laptop (Dell Inspiron 1720) into which I am about to install an SSD. I'm wondering now what other options I have to improve the performance of this computer (since I can't buy a new one for quite some time yet).



  • I have already maxed out the RAM at 4GB (DDR2) and upgraded the
    discrete graphics to the highest processor available for my model.


  • I tried a permanent ReadyBoost USB drive, but saw no performance
    changes (plus several sites say that it's no benefit if you
    have more than 1GB of RAM).


  • I usually keep my windows installs pretty light and don't install a
    lot of programs, so that's already been accounted for.



Anything else that could help?


Finally, I'm currently running Windows 7 64-bit (to take full advantage of the 4GB of RAM), but I'm wondering if the older hardware actually takes a performance hit running 64-bit. I don't have any programs that require 64-bit, and I have 32-bit available. Should I reinstall with 32-bit?

How can I stop, once and for all, Windows 10 from waking up on its own?



Duplicate: Conclusively stop wake timers from waking Windows 10 desktop -- there are conclusive answers there. I searched before asking this, didn't find that question.


I've gone through about 5 posts about this issue. I'll go and fix it and then about a week later it starts up again. I'm sitting 10 feet away from the computer and it just started up again.


It's getting old. I need a list of things to check -- things that might be causing this behavior and how to stop them. I suspect that it is Windows Update helpfully resetting its ability to wake up and check for updates.


PS C:\WINDOWS\system32> powercfg -waketimers
Timer set by [SERVICE] \Device\HarddiskVolume1\Windows\System32\svchost.exe (SystemEventsBroker) expires at 6:46:52 PM on 1/10/2016.
Reason: Windows will execute 'NT TASK\Microsoft\Windows\Media Center\mcupdate_scheduled' scheduled task that requested waking the computer.

Answer



It sounds like there is a task on the PC that is telling it to wake up once a week to do what it has to do. Check the Task Scheduler on the PC for any tasks that tell it to wake up.


There is also the good possibility that it is Windows Update waking your PC up. Although it is not recommended, you can set it to "Never check for updates" and do your own manual updates once a week or so.


Another possibility is a Wake-on-LAN event telling your PC to wake up; disable it in your BIOS if this is the case. While you are there, make sure there isn't a schedule in your BIOS telling your PC to wake up.


That is all I can think of that could cause it to wake up every week or so...
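
A minimal checklist in command form, to be run from an elevated prompt (the device name in the last command is just a placeholder to replace with one from the list):

:: timers currently armed (the same command used in the question)
powercfg /waketimers

:: devices that are allowed to wake the machine
powercfg /devicequery wake_armed

:: stop a specific device from waking the PC
powercfg /devicedisablewake "Device name"

For the mcupdate_scheduled task shown in the question's output, the corresponding switch is the "Wake the computer to run this task" checkbox on that task's Conditions tab in Task Scheduler.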


Microsoft Store and other apps such as Calc and Photos won't launch after Windows' update


After an update, Microsoft Store and several other apps such as Photos and Calculator stopped working. Clicking on them would open a window frame for a split second, which would then immediately disappear without any error message.


I also noticed that in the pictures' properties, at the "Open with" line, Photos had been replaced by "TWINUI".


After a quick googling it appeared the problem was related to inconsistencies in the app packages; however, none of the guides I followed worked for me.


Another symptom was that the apps' names weren't displayed in Windows' program list; instead they were shown as follows:


[screenshot of the program list]


Here are the different suggestions I found on the Internet, none of which helped in my situation:



  • sfc /scannow

  • dism /online /cleanup-image /restorehealth

  • In PowerShell: Get-AppXPackage -AllUsers | Where-Object {$_.InstallLocation -like "*SystemApps*"} | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"}


  • Downloading the migration tool from Microsoft's website and updating over the current installation


  • Running Windows' problem diagnosis tools


Answer



Those symptoms were caused by the fact that the registered packages had a higher version than the packages actually available in "C:\Program Files\WindowsApps". (Such a bug in 2018, no comment ...)


To fix it, I had to manually uninstall the packages in PowerShell, then install the versions actually available. You can follow this simple procedure if you are in the same situation:



  1. Access WindowsApps: follow this guide to take ownership of "C:\Program Files\WindowsApps";


Note: I will take the Calculator as an example; you have to repeat the following procedure for every broken app. There might be an automated way to do it with a PowerShell script, but I don't know about it (a rough sketch of one attempt is below).
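
For what it's worth, a rough PowerShell sketch of steps 2 to 5 for a single app (the Calculator here) could look like the following. This is my own untested automation, not part of the original procedure; the wildcard name and the "sort the folder names to find the newest version" shortcut are assumptions.

# steps 2-5 for one app; assumes you already took ownership of WindowsApps (step 1)
$name = "*WindowsCalculator*"

# step 2: the package Windows believes is registered
$registered = Get-AppxPackage -Name $name

# step 3: the newest matching folder actually present on disk
$folder = Get-ChildItem "C:\Program Files\WindowsApps" -Directory |
    Where-Object { $_.Name -like $name } |
    Sort-Object Name -Descending |
    Select-Object -First 1

# step 4: unregister the broken (phantom) version
Remove-AppxPackage -Package $registered.PackageFullName

# step 5: register the version that is really there
Add-AppxPackage -DisableDevelopmentMode -Register (Join-Path $folder.FullName "AppxManifest.xml")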



  2. Find out the registered version of your broken app:

    • Open the console in admin mode and type "powershell";

    • Type Get-AppXPackage -Name "*calc*" (replace calc with whatever is relevant for you; the * is a wildcard that matches anything);

    • In the results displayed, find the PackageFullName line and copy/paste this name into Notepad so you don't lose it. If you don't find any folder with the same name in the WindowsApps folder, you have identified at least part of your problem! In my case:



[screenshot of the Get-AppXPackage output]



  3. Find out the latest available package: go into your WindowsApps folder and find the folder with the latest version of Calculator (the one with "x64" in its name); in my case:


[screenshot of the WindowsApps folder]



  4. Unregister the broken version: back in PowerShell, enter the command:


(Obviously replace the package name depending on your situation)


Remove-AppxPackage -Package "Microsoft.WindowsCalculator_10.1712.3351.0_x64__8wekyb3d8bbwe"


  5. Register the available package:


(The folder you found at step 3)


Add-AppxPackage -DisableDevelopmentMode -Register "C:\Program Files\WindowsApps\Microsoft.WindowsCalculator_10.1706.2406.0_x64__8wekyb3d8bbwe\AppxManifest.xml"


  6. Update the app: simply launch Microsoft Store, click the "..." in the top right corner, then "Download and Update". Then click "Get update", and the Store will update your apps to their latest versions. Note that if the Store itself is broken, you can fix it the same way I showed you with Calculator.


And if you didn't get any error message at this point, your problem should be fixed!


windows 7 - How to install Intel wireless drivers without the extra bloatware?

How to install just the 7260 WiFi drivers on Windows 7?


I tried unchecking the "Intel ProSet Wireless bloatware" option at the setup stage and checking only the WiFi link driver, but it still gets installed.
If I uninstall it later from the Add/Remove Programs list, it uninstalls the driver too. The computer is a Dell Latitude E5540 notebook.


This is the web page: Intel 8260 7265 3165 7260 WiFi Driver


Is there any way to install only the driver without the extra 469 MB of Intel Proset Wireless bloatware?

Curly braces in Autohotkey "Send" command conflicting with hotkey braces


It's pretty hard to explain without showing the code first, so here goes:


This is the code:


#l::
{
SoundGet, mutestate, , MUTE
if mutestate = Off
Send {Volume_Mute}
Sleep 200
DllCall("LockWorkStation")
Sleep 200
SendMessage,0x112,0xF170,2,,Program Manager
Return
}

And this is the log output:


002: {
003: SoundGet,mutestate,,MUTE
004: if mutestate = Off
005: Sleep,200 (0.20)
006: DllCall("LockWorkStation")
007: Sleep,200 (0.20)
008: SendMessage,0x112,0xF170,2,,Program Manager
009: Return (16.63)

Now to the actual "problem".


There is one part of the actual code that doesn't show up in the log (but still executes), which is the Send {Volume_Mute}. I've tested that it still runs by setting volume to maximum, then triggering the hotkey. It locks the computer, then mutes it, which is exactly what it's supposed to do.


I'm just wondering why it doesn't show up in the log at all. My only guess would be that the curly braces are probably causing the "problem".


#l::
{ << This brace
SoundGet, mutestate, , MUTE
if mutestate = Off
Send {Volume_Mute} << The 2 braces here
Sleep 200
DllCall("LockWorkStation")
Sleep 200
SendMessage,0x112,0xF170,2,,Program Manager
Return
} << And this brace

I'm not really sure if this is what's causing the problem, but I'd really like to know what exactly is the cause.


Answer



After Windows XP, SoundGet is not the best way to get the mute state. I recommend checking out the Vista Audio Library which I believe is currently the best method.


Simply save the file to your script's directory and include it by using #Include like this:


#Include VA.ahk

And here is the equivalent of your first 3 lines of code:


if ! VA_GetMasterMute()
    VA_SetMasterMute(true)
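
Putting it together, the whole hotkey might look like this (a sketch only; it assumes VA.ahk sits next to the script and otherwise keeps the original logic unchanged):

#Include VA.ahk

#l::
    if (!VA_GetMasterMute())                         ; mute only if sound is currently on
        VA_SetMasterMute(true)
    Sleep 200
    DllCall("LockWorkStation")                       ; lock the workstation
    Sleep 200
    SendMessage,0x112,0xF170,2,,Program Manager      ; turn the display off, as in the original
    Return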

centos - Dovecot – Can send, not receive mail (visible in mail queue) Where is email?




Mail queue administration says:




Received from myemail@adres.com H=mail-wm0-f44.google.com [74.125.82.44] P=esmtps X=TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128 CV=no S=3663 DKIM=gmail.com id=CALSm3d9r2Qd9JcO-AFg+inZmKheuq0w9PzErUv98tjh0bg5KmQ@mail.gmail.com T="Re: test mail "

myemail@adres.com R=virtual_user T=dovecot_lmtp_udp defer (-1): Failed to connect to socket /var/run/dovecot/lmtp for dovecot_lmtp_udp transport: No such file or directory


/var/log/exim/mainlog





1fT4jr-0005si-Ro for myemail@adres.com

cwd=/var/spool/exim 3 args: /usr/sbin/exim -Mc 1fT4jr-0005si-Ro

1fT4jr-0005si-Ro == myemail@adres.com R=virtual_user T=dovecot_lmtp_udp defer (-1): Failed to connect to socket /var/run/dovecot/lmtp for dovecot_lmtp_udp transport: No such file or directory


/etc/dovecot/dovecot.conf






## Dovecot 2.0 configuration file

#IPv4
#listen = *

#IPv4 and IPv6:
listen = *, ::


protocols = imap pop3 lmtp

auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@&
auth_verbose = yes
disable_plaintext_auth = no
login_greeting = Dovecot DA ready.
mail_access_groups = mail
default_login_user = dovecot
mail_location = maildir:~/Maildir


default_process_limit=512
default_client_limit=2048

passdb {
driver = shadow
}
passdb {
args = username_format=%n /etc/virtual/%d/passwd
driver = passwd-file

}
protocols = imap pop3
service auth {
user = root
}
service imap-login {
process_min_avail = 16
user = dovecot
}
service pop3-login {

process_min_avail = 16
user = dovecot
}

ssl_cert = /etc/exim.cert
ssl_protocols = !SSLv2 !SSLv3
ssl_cipher_list = ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP
ssl_key = /etc/exim.key
userdb {
driver = passwd

}
userdb {
args = username_format=%n /etc/virtual/%d/passwd
driver = passwd-file
}
verbose_proctitle = yes
protocol pop3 {
pop3_uidl_format = %08Xu%08Xv
pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%m, size=%s, bytes=%i/%o
}


mail_max_userip_connections = 15
remote 127.0.0.1 {
mail_max_userip_connections = 40
}

# LMTP socket for local delivery from exim
service lmtp {
executable = lmtp -L
process_min_avail = 16

unix_listener lmtp-client {
user = mail
group = mail
mode = 0660
}
}

protocol lmtp {
log_path = /var/log/dovecot-lmtp-errors.log
info_log_path = /var/log/dovecot-lmtp.log

postmaster_address = postmaster@HOSTNAME
}


I have recently updated CentOS, DirectAdmin & the database; before that I never had any problems.


Answer



The problem was dovecot



I have found the solution that worked for me.





cd /usr/local/directadmin/custombuild
./build update
./build exim
./build exim_conf
mv /etc/dovecot /etc/dovecot~moved
./build dovecot
./build dovecot_conf
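
For reference, the defer message in the question shows exim looking for /var/run/dovecot/lmtp, while the pasted dovecot.conf only defines a listener named lmtp-client. An alternative that was not tested here (my assumption, not the fix that was actually applied) would be to give the socket the name exim expects:

service lmtp {
  # with the default base_dir this creates /var/run/dovecot/lmtp
  unix_listener lmtp {
    user = mail
    group = mail
    mode = 0660
  }
}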

windows 7 - Fresh SSD whilst keeping HDD for storage


I am planning on updating from an HDD only system to a SSD (OS + Applications) + HDD (Larger storage of files and some Applications).


I currently have an HDD with Windows 7 on it, but am going to install a fresh installation of Windows 10 on the SSD.


Is it possible to utilise the SSD for the OS whilst still accessing the files on the HDD (even though it still contains Windows 7 OS and file structure therein)? (In other words can I turn an HDD into a storage drive, removing windows 7 without formatting the drive and then recopying the files I would like to keep)?


Thanks in advance


Answer



Yes, it is possible to use the SSD for the OS whilst still accessing the files on the HDD. One thing you can do after setting up the system like this is to access the HDD and manually delete Windows, Program Files, Documents and Settings and all the other junk locations Windows creates; then you're left with your actual useful data. Make sure to move the documents and desktop items first if you want to do such a clean-up.


linux - Force media to be mounted to /dev/sr1 instead of /dev/sr0

I would like to be able to have the first device that I mount to a linux host be /dev/sr1 instead of /dev/sr0.


I am trying to script installing IBM's SVC. Their installer is hardcoded to boot from the CD and search /dev/sr1 for files. Another limitation is that their RDCLI, which allows virtual media mounting, can only mount a single ISO.


So when I mount the ISO .\rdmount.exe -s IMM IP address -d path/to/svc-install.iso -l Username -p Password


I successfully mount the ISO to sr0 and can’t run the installer.


If rdmount allowed me to mount multiple ISO’s I could just mount the install.iso twice.


I haven't been able to find a method to change sr0 to sr1 once it is mounted, or to have it mount directly to sr1. Symlinks and udev haven't helped because those only help once the OS is loaded.


If I make a folder /dev/sr0 and mount something to it, then try to run the rdmount command, it appears not to mount.



newinstall:/dev/disk/by-id # ls
scsi-3600605b0045637c019eaca5719c9d3a9
scsi-3600605b0045637c019eaca5719c9d3a9-part1
scsi-3600605b0045637c019eaca5719c9d3a9-part10
scsi-3600605b0045637c019eaca5719c9d3a9-part11
scsi-3600605b0045637c019eaca5719c9d3a9-part12
scsi-3600605b0045637c019eaca5719c9d3a9-part13
scsi-3600605b0045637c019eaca5719c9d3a9-part2
scsi-3600605b0045637c019eaca5719c9d3a9-part4
scsi-3600605b0045637c019eaca5719c9d3a9-part5
scsi-3600605b0045637c019eaca5719c9d3a9-part6
scsi-3600605b0045637c019eaca5719c9d3a9-part7
scsi-3600605b0045637c019eaca5719c9d3a9-part8
scsi-3600605b0045637c019eaca5719c9d3a9-part9



When I unmount this ISO and run the rdmount command it looks to have mounted but sr0 remains a directory unless I delete it first.



newinstall:/dev/disk/by-id # ls -ltar | grep sr
lrwxrwxrwx 1 root root 9 Aug 28 18:05 usb-IBM_IBM_Composite_Device-0_20070221-15 -> ../../sr0
newinstall:/dev/disk/by-id # file /dev/sr0
/dev/sr0: directory
newinstall:/dev/disk/by-id # ls -ltar /dev/sr0
total 0
drw-rw---- 2 root disk 40 Aug 28 17:58 .
drwxrwxrwt 13 root root 4100 Aug 28 18:05 ..



Does anyone have an explanation for this last mount pointing to an empty folder? Are the symlinks created even if the mount catastrophically fails?

Friday, February 27, 2015

linux - swap partition vs file for performance?

What is better for performance? A partition closer to the inside of the disk will have slower access times, and we must wait for the drive to switch between the OS and swap partitions.




On the other hand, a swap partition bypasses all of the filesystem allowing writes to the disk directly, which can be faster than a file.



What is the performance trade off?



How much of a difference does having a fixed-size swap file make?



Is it the case that switching over to the swap partition takes longer, but performance is then better than it would have been with a swap file?

windows 7 - How to mount a folder as a virtual CD or DVD drive?

I want to mount a local folder as a virtual CD/DVD drive to allow me to run a program without having the CD mounted physically.




I know I could burn it to an ISO file and mount that as a virtual CD/DVD drive using Daemon Tools or similar programs, however I would prefer if I could mount it directly from a folder.





I have looked at this question, however there was no useful answers as the asker wanted to boot from the folder, which is not possible:

How to mount a folder as a virtual CD/DVD drive?

encoding - Filename becomes gibberish


Using JDownloader to download some files makes the filenames look like ".æ­·Ã¥²Ã¨¬Ã¥Ã¯¼Ã§Ã§ Ã¦¸¯Ã¤¸Ã©" on my file system. The original filenames are in Chinese. Is this an encoding issue (the original encoding is not UTF-8)? If that's the case, can the names be recovered? I am guessing I need to find an encoding converter and convert them to UTF-8.


Answer



You're looking for convmv; type man convmv for more information.



converts filenames from one encoding to another



CPU usage pegged at 100% on windows vista



According to Task Manager the CPU usage is 100%, but when I click on the "Processes" tab and sort by CPU in descending order, I see that the process using the most CPU is taskmgr.exe at 02%. All the others are at 00%. So what's eating up all my CPU cycles?


The CPU is an Intel Core 2 Solo CPU U3500 @ 1.40GHz. So it's not a spectacular CPU, but still... the behavior I'm seeing doesn't make any sense.


Also, Aero is disabled.


Answer



Sorry, I cannot comment yet. Windows Update is known to sometimes cause 100% CPU usage. Click "show all processes" in Task Manager. If your notebook feels slow, see if svchost is at 100%, which would indicate Windows Update is at fault. I think running it for a day or two fixed Windows Update last time. If any other process is hung, try terminating it. If the print spooler is at 100%, restart the computer. If a virus is consuming all the power, it may not show in Task Manager.


memory - Windows system shows wrong amount of RAM?


I just got an ASUS Eee PC 1005HA and immediately upgraded it to 2 GB of RAM. However, when I open up the Windows System Dialog, it says there is only .99 GB of RAM. CPU-Z sees the whole 2 GB though. What's going on here?


The BIOS also reports the correct amount of memory in the system.


Answer



Go into the BIOS and save the changes - only then will Windows report the correct amount of RAM.




P.S. this is one of the most frequently asked questions over at eeeuser.com :)


Individual user sessions in vmware esxi



I have a server with VMware ESXi 5.5 installed. Basically, I've noticed that when I use the vSphere client logged in as root and another person uses the vSphere client logged in as root on the same ESXi host, that person can see what I'm doing. Is there a way to set up private sessions? I'd like to be able to work on my VMs, and have another person work on his VMs as well, without us looking at each other's VM setup. Any advice is highly appreciated. Thank you.


Answer




if the user wants to look into the vm I am using, I can see on top of the screen that there are two active users on the vm. I was hoping there would be some way to have individual private console sessions for each user logging into esx via vsphere client





There's an advanced option that might help you but it's per VM: Prevent Users from Spying on Remote Console Sessions



I don't know any other way.



Btw: You're talking about two persons logged in as root. In that case you don't have two active users, you have one user (root) with two sessions. However, with RemoteDisplay.maxConnections=1 you can limit the number of console sessions to one; another root session can't open one because then there would be two console sessions.
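
For reference, that option is set per VM, either through the vSphere client's advanced configuration parameters or directly in the VM's .vmx file:

RemoteDisplay.maxConnections = "1"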


ruby on rails - Configure Apache + Passenger to serve static files from different directory




I'm trying to setup Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public and give precedence to these static files over anything in the Rails app.



The Rails app is in /home/user/apps/testapp and the static files in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder.



Here is the config I'm using:





ServerName domain.com


DocumentRoot /home/user/apps/testapp/public

RewriteEngine On

RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]




This serves the Rails application fine but gives 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request.



Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirecto to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist)



How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?


Answer



Problem was a schoolboy mod_rewrite error: in order for the RewriteRule to be run, both of the RewriteCond statements above need to be satisfied, which of course they never will be. This was my fault for copying this from a negative condition test and not realising they would have to be separated. Although it's annoying that I need two RewriteRule statements, this works perfectly:



  RewriteCond /home/user/public_html%{REQUEST_URI} -f
RewriteRule ^ /home/user/public_html%{REQUEST_URI} [L]

RewriteCond /home/user/public_html%{REQUEST_URI} -d
RewriteRule ^ /home/user/public_html%{REQUEST_URI} [L]
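
(If the duplication bothers you, the two blocks can probably be collapsed with an [OR] flag on the first condition; a sketch I have not tested against this setup:)

RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
RewriteCond /home/user/public_html%{REQUEST_URI} -d
RewriteRule ^ /home/user/public_html%{REQUEST_URI} [L]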

domain name system - DNS failover in a two datacenter scenario

I'm trying to implement a low-cost solution for website high availability. I'm looking for the downsides of the following scenario:




I have two servers with the same configuration, content, mysql replication (dual-master).
They are in different datacenters - let's call them serverA and serverB. Users use serverA - serverB is more like a backup.
Now, I want to use DNS failover, to switch users from serverA to serverB when serverA goes down.



My idea is that I setup DNS servers (bind/powerdns) on serverA and serverB - let's call them ns1.website.com and ns2.website.com (assuming I own website.com). Then I configure my domain to use them as its nameservers. Both DNS servers will return serverA IP as my website's IP. If serverA goes down I can (either manually or automatically from serverB) change configuration of serverB's DNS, to return IP of serverB as website's IP.
Of course the TTL will be low, as it's supposed to be in DNS failovers.



I know that it may take some time to switch to serverB (DNS ttl, time to detect serverA failure, serverB DNS reconfiguration etc), and that some small part of users won't use serverB anyway. And I'm OK with that. But what are other downsides of such an approach?




An alternative scenario is that ns1.website.com will return serverA's IP as the website's IP, and ns2.website.com will return serverB's IP. But AFAIK clients don't always use the primary nameserver and sometimes use the secondary one. So some small part of users would use serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell what percentage of clients would (statistically) use serverB instead of serverA?
This one also has the downside that when serverA comes back up, it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could have failed in the meantime, etc.). So I'm adding it only as a theoretical alternative.



I was thinking about using some professional DNS failover companies but they charge for the number of DNS requests and the fees are very high (why?)

php fpm - Nginx php-fpm pool being blocked and stop responding



I'm having some issues with requests for pages not getting a response after requests take a long time to process.



I have nginx setup to use php-fpm. I have two pools setup in PHP-FPM. One pool for general web page requests, one pool to serve up image and other large files.



From my php-fpm config file:




[www]
listen = var/run/php54/php-fpm-www.sock
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 4
pm.max_spare_servers = 20
pm.max_requests = 200



[www-images]

listen = var/run/php54/php-fpm-images.sock

pm = dynamic
pm.max_children = 5
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 2

pm.max_requests = 40


Nginx is configured to use these two separate pools, with requests for images stored in Amazon S3 going through the 'www-images' pool to be resized to the requested size. From my nginx config file:



location ~* ^/proxy  {
try_files $uri @404;
fastcgi_pass unix:/opt/local/var/run/php54/php-fpm-images.sock;
include /opt/local/etc/nginx/fastcgi.conf;
}


location / {
try_files $uri /routing.php?$args;
fastcgi_pass unix:/opt/local/var/run/php54/php-fpm-www.sock;
include /opt/local/etc/nginx/fastcgi.conf;
}


Because I'm testing on a terrible internet connection these requests are timing out in PHP, which is expected.





2013/01/20 15:47:34 [error] 77#0: *531 upstream timed out (60:
Operation timed out) while reading response header from upstream,
client: 127.0.0.1, server: example.com, request: "GET
/proxy/hugeimage.png HTTP/1.1", upstream:
"fastcgi://unix:/opt/local/var/run/php54/php-fpm-images.sock:", host:
"example.com", referrer: "http://example.com/pictures"




What's not expected and I'd like to resolve is that any requests that should be going to the 'www' pool are timing out with nginx not getting a response from PHP-FPM.





2013/01/20 15:50:06 [error] 77#0: *532 upstream timed out (60:
Operation timed out) while reading response header from upstream,
client: 127.0.0.1, server: example.com, request: "GET /pictures
HTTP/1.1", upstream:
"fastcgi://unix:/opt/local/var/run/php54/php-fpm-www.sock:", host:
"example.com"





After a couple of minutes requests to the 'www' pool start working again, without any action on my part.



I thought that using separate pools should mean that even if one pool has issues with requests taking a long time, the other pool should remain unaffected.



So my question is; how do I isolate the two pools, so that one pool being overwhelmed by requests that are timing out, doesn't affect the other pool.



To clarify, it is by design that I want to limit the number of requests that can be made at once through the 'www-images' pool. Although in practice this limit will hardly ever be reached (due to caching of the files downloaded from S3 to the server), if there is an unusual situation where that pool reaches its limit, I want the 'www' pool to continue functioning, as that is where the site's functionality actually sits.


Answer



I found two things:





  1. Add session_write_close(); to any long-running PHP scripts: session data is locked to prevent concurrent writes, so only one script may operate on a session at any time.

  2. For any images that may be slow to load, make sure to load them from a different domain than the one you serve web pages and Ajax calls from, as web browsers will queue requests to the same domain when there are more than a small number of requests active.



Although these are two separate things, they had the same effect of making requests to the 'www' pool be blocked by requests to the 'www-images' pool.
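
A minimal sketch of point 1 for a script like the image proxy (the session key is hypothetical):

<?php
session_start();

// read whatever the script needs from the session first (hypothetical key)
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

// release the session lock before the slow part (fetching the image from S3 and
// resizing it), so other requests from the same browser are no longer blocked
session_write_close();

// ...long-running work continues here without holding the session lock...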


apache 2.2 - mod_rewrite rewrite rule required to strip section of route and add to end of query string

I'm using an API that puts a unique id in the route and then I need that unique ID moved into my internal routing (so php can work with it).



The url is:





http://top.level.domain/folder/[UNIQUEID]/index.php




The file on the server to host is actually located at:




http://top.level.domain/folder/index.php




So I need to turn the original URL into:





http://top.level.domain/folder/index.php/[UNIQUEID]




Any ideas of a rewrite rule that would do what I need?



(Duplicate question closed on stackoverflow.)
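
For reference, a minimal mod_rewrite sketch of the mapping described above (untested, written for a vhost/server context where the leading slash is present):

RewriteEngine On
# /folder/[UNIQUEID]/index.php  ->  /folder/index.php/[UNIQUEID]
RewriteRule ^/folder/([^/]+)/index\.php$ /folder/index.php/$1 [L]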

apache 2.2 - High server load at some times, without explanation!

I don't know what is going on. My dedicated server runs CentOS 5.6 x86_64. It has been running pretty well for over a year. I have also never had any disk failure (or at least I have never known about one). The disks are in RAID, so it's possible the data center could have replaced a disk without my knowing it.



The fact is that some days ago, strange things started to happen. The server load gets high, even when there are just a few requests/sec, and some httpd processes eat 100% CPU.



Other times, "top" doesn't show processes causing high server load, but the "Service Status" page on whm, shows high load.



Another thing that happens is that the server sometimes is very slow to access via WHM or SSH, but I can still access the websites hosted on it and they load pretty fast, as if everything were normal, even with a high load.




Right now, the server load is about 40.



One strange thing that I noticed is the "Blocks Written/Sec":



Blocks Read/sec =  1607.11
Blocks Written/Sec = 11836.01


I think the Blocks Written/Sec figure is higher than normal. The server hosts a popular photo-effects website, so it gets a lot of traffic, but I still think it's strange...




Apache is optimized, with the same configuration and the same visitors as before the problems started.



What can be causing this?
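
Two generic commands that might help tie the writes to a process or device (they come from the iotop and sysstat packages and are not specific to this setup):

# accumulated I/O per process, showing only processes that actually do I/O
iotop -oPa

# per-device throughput and utilisation every 5 seconds
iostat -dxk 5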

How to install Windows 8 from USB flash drive?



I downloaded Windows 8 as an ISO image, and I'd like to install it on a computer that doesn't have an optical drive.




Is it possible to install Windows 8 from a USB drive? How would I do this?


Answer



The Windows 8 installer is very similar internally to the Windows 7 installer. All of the other methods mentioned in the previous post on How to Install Windows 7 from a USB Drive should work perfectly with the Windows 8 installer.



I would recommend using the Windows 7 USB Download Tool, as it's a very easy process — you just select the ISO, choose the flash drive, and it automatically formats it, copies everything over, and configures the flash drive to be bootable. I used it to successfully copy the Windows 8 installer onto a flash drive, and the installation went flawlessly on multiple machines.



(Note that when using the Windows 7 USB Download Tool, your flash drive will be automatically reformatted, so make sure to back up any files you have saved on your flash drive before starting this process.)


vmware vsphere - After ESXi update - Assignment Host to VM is lost



I have a serious VMWare problem, situation as follows:




  • one standalone host with internal HDDs. One pair of HDDs in Raid 1 are used for ESXi. 4 HDDs in Raid 10 are used as datastore for VMs.

  • VirtualCenter is also a VM and on the internal storage.

  • a NFS share is also used, some machines are on the share




I updated the vSphere Infrastructure from version 4.1 to version 5. vCenter update and client updates went fine.
Today I updated the host and thats how the trouble began. I saw all partitions in the installation process and installed ESXi 5 on the partition where ESXi 4 was installed. After that I have a fresh installed ESXi 5 without configuration. If I connect to it with the vSphere client I can't see the VMs who were under this hosts management. I can see the internal datastore and I can browse it but i can't manage or start the machines, same goes for the NFS share (inventory under "Virtual Machines" is empty).



How can I bring back the "host to VM" assignment so that the machines on the internal storage are managed by this host? The vCenter is also a virtual machine, and I am unable to start it because it too is on the internal HDDs. I couldn't use the Update Manager because we only have this standalone host, so I did an "interactive update" booting from the ESXi 5 CD. I thought there would be some kind of migration and the host would know that it has to manage the VMs on the internal storage, but it seems that did not happen...
I could kick myself in the butt for doing this; any help appreciated.


Answer



In the datastore browser, find each VM's .vmx file, right click, and there's a "Register" option to put the VM back in the inventory.




For future reference, yes, there is an upgrade option from the same installer CD that'll preserve all settings - it should prompt you to do so during the process.
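
If there are many VMs, the same re-registration can be scripted from the ESXi shell with vim-cmd (a sketch; the datastore path is a placeholder):

# register a VM by pointing at its .vmx file
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx

# then list the registered VMs to confirm
vim-cmd vmsvc/getallvms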


MySQL keeps crashing OS server.. Please help adjust my.ini!




I have MySQL 5.0 installed on a Windows 2008 machine (3GB RAM). My server crashes on a regular basis (almost once a day) always with this only error:



Changed limits: max_open_files: 2048  max_connections: 800  table_cache: 619


I did not use the heavy InnoDB .ini file, although I am now rethinking whether I should have. I am worried that big configuration changes will make my current sites stop working. What should I do?



Here is my current ini settings:



default-character-set=latin1

default-storage-engine=INNODB
max_connections=800
query_cache_size=84M
table_cache=1520
tmp_table_size=30M
thread_cache_size=38
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=30M
key_buffer_size=129M
read_buffer_size=64K

read_rnd_buffer_size=256K
sort_buffer_size=256K
innodb_additional_mem_pool_size=6M
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=3M
innodb_buffer_pool_size=250M
innodb_log_file_size=50M
innodb_thread_concurrency=10



Here is some extra information from phpMyAdmin:




Server: MYSERVER (localhost via TCP/IP)
Server version: 5.0.90-community-nt
Protocol version: 10
MySQL charset: UTF-8 Unicode (utf8)
Microsoft-IIS/7.0
MySQL client version: 5.0.90
PHP extension: mysqli




From my research, it seems to me that this error is saying that the OS's hard-coded limits keep getting hit and that I should use the heavy InnoDB .ini file. However, I do not know what the implications will be for my sites using MySQL. Below is the heavy InnoDB configuration I am thinking of replacing it with; can anyone tell me what this will mean for my sites with existing databases? They are all InnoDB, and even all their tables are InnoDB. Am I on the right track?




[client]
port = 3306
socket = /tmp/mysql.sock

[mysqld]
port = 3306
socket = /tmp/mysql.sock

back_log = 50
max_connections = 100
max_connect_errors = 10
table_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 64M
sort_buffer_size = 8M
join_buffer_size = 8M
thread_cache_size = 8
thread_concurrency = 8
query_cache_size = 64M
query_cache_limit = 2M
ft_min_word_len = 4
default_table_type = MYISAM
thread_stack = 192K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 64M
log-bin=mysql-bin
log_slow_queries
long_query_time = 2
log_long_format
server-id = 1
key_buffer_size = 32M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_max_extra_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
skip-federated
skip-bdb
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 2G
innodb_data_file_path = ibdata1:10M:autoextend
innodb_file_io_threads = 4
innodb_thread_concurrency = 16
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 8M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120

[mysqldump]
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[isamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M

[myisamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M

[mysqlhotcopy]
interactive-timeout

[mysqld_safe]
open-files-limit = 8192




Answer



I don't think MySQL should ever kill your operating system, even if it's misbehaving. What you're describing is not normal for a healthy server. In the worst case, the MySQL instance should die, not the whole server.



You should investigate possible hardware problems, such as insufficient cooling or bad RAM chips, and rule those out first.



If you agree that this might indeed be a hardware problem, here is what you could do:




  • improve cooling. Maybe open the server case and leave it running this way to prove the theory.


  • burn a memcheck live CD and do a quick RAM check. This requires a reboot, but I reckon your server is giving you daily opportunities, right? ;-)



Good luck!
- Yves


Thursday, February 26, 2015

windows 10 - Removed a virus, W10 Cloud Protection & Automatic Sample submission disabled by group policy


The cause


So, I did a stupid and executed an infected exe. Immediately my PC started acting up, all sorts of applications were installing, ads were popping up, you name it. I quickly started a Windows Defender scan but 10 seconds later a notification popped up that Windows Defender was disabled by group policy.


The clean up


I managed to download and run Malwarebytes which as far as I know cleaned up most of it. I had to change the HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows Defender key in the registry to enable Windows Defender again. And after a little bit of cleaning up I think my PC is clean again.


My question


However, my Settings > Updates & security > Windows Defender menu still says "some settings are managed by your organization".


I can turn on or off Windows Defender, but the two options below it "Cloud Protection" and "Automatic Sample submission" are greyed out. Any idea on how to get rid of that and make sure nothing else was changed?


I've tried looking around in gpedit.msc as some posts suggested but could not find anything regarding those two settings.


[screenshot: my Settings screen]


Answer



User @DanielB posted about O&O ShutUp 10 which is an application that stops communication to Microsoft by changing your Windows settings.


After installing and opening it I got a nice overview of my options. After fiddling around with it I found the Undo all changes (factory reset) button which fixed my problem!


microsoft excel - Copy worksheets from all open workbooks & paste into new master workbook - revised

I have the following code courtesy of get-digital-help.com.


Code executes fine except for 2 things:



  1. the personal.xlsb file is copied into the new master workbook along with all the other open workbooks.


    How can the code prevent personal.xlsb from being copied?


  2. error code "Run time error 9: Subscript is out of range" generated at this line located just before "end macro"/ "end sub":



WBN.Sheets(Array("Sheet1", "Sheet2", "Sheet3")).Delete


What is causing this error and how to fix it?


'Name macro
'https://www.get-digital-help.com/copy-each-sheet-in-active-workbook-to-new-workbooks/#master
Sub CopySheetsToMasterWorkbook()
'this version includes option to name copied worksheets
'Dimension variables and declare data types
Dim WBN As Workbook, WB As Workbook
Dim SHT As Worksheet
'Create a new workbook and save an object reference to variable WBN
Set WBN = Workbooks.Add
'Iterate through all open workbooks
For Each WB In Application.Workbooks
'Check if workbook name of object variable WB is not equal to name of object variable WBN
If WB.Name <> WBN.Name Then
'Go through all worksheets in object WB
For Each SHT In WB.Worksheets
'Copy worksheet to workbook WBN and place after the last worksheet
SHT.Copy After:=WBN.Sheets(WBN.Worksheets.Count)
'Adds option to name each WrkSht added to MasterWB
WBN.Sheets(WBN.Worksheets.Count).Name = Left(WB.Name, 30 - Len(SHT.Name)) & "-" & SHT.Name
'Continue with next worksheet
Next SHT
'End of If statement
End If
'Continue with next workbook
Next WB
'Disable Alerts
Application.DisplayAlerts = False
'Delete sheet1, sheet2 and sheet3 in the new workbook WBN
WBN.Sheets(Array("Sheet1", "Sheet2", "Sheet3")).Delete
'Enable Alerts
WBN.Application.DisplayAlerts = True
'End macro
End Sub
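
On the second point: Sheets(Array("Sheet1", "Sheet2", "Sheet3")).Delete throws "Subscript out of range" whenever one of those sheet names does not exist in WBN, for example when Excel is set to create new workbooks with fewer than three sheets, or when the default sheets carry localized names. A sketch of a safer clean-up (my suggestion, not from the original article), which also shows where PERSONAL.XLSB could be skipped:

Dim InitialCount As Long, i As Long

'remember how many blank sheets the new workbook started with
Set WBN = Workbooks.Add
InitialCount = WBN.Worksheets.Count

'...copy the worksheets exactly as in the macro above; to skip the personal macro
'workbook, extend the If test to:
'If WB.Name <> WBN.Name And UCase(WB.Name) <> "PERSONAL.XLSB" Then

'then delete only the blank sheets the workbook was created with
Application.DisplayAlerts = False
For i = InitialCount To 1 Step -1
    WBN.Worksheets(i).Delete
Next i
Application.DisplayAlerts = True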

Windows 8 Pro non-upgrade version won't let me activate


I went to Best Buy to buy Windows 8 and asked for the non-upgrade (clean installation) version of it. Got the software from a salesperson who said it can be for non-upgrades (and clearly has no "Upgrade" label on the box) and after installing, lo and behold Windows is complaining that my key is no good because I didn't have a previous installation of Windows ("this specific product key can only be used for upgrading, not clean installations").


Of course, I have already referred to this but like I said the item I purchased is supposed to be for clean installs! How can I get this resolved?


UPDATE: Microsoft did help me get this fixed, but I have discovered (after some new information came to light) that this was a MICROSOFT problem, not a Best Buy one! Background: I didn't mention that I actually bought 2 identical copies of the exact same box and product number, both Windows 8 Pro and neither labeled as an "Upgrade" version. Well, I thought this was Best Buy's fault because it seemed like they must have handed us 2 upgrade versions. The kicker: the other copy was for my mom, and that copy activated without a hitch. This tells me that Microsoft really screwed up here in including an upgrade-only product key in one of their full-installation boxes!! Consumer beware!


Answer



You seem fairly positive that this is not an upgrade version. So, the best thing to do at this point is call Microsoft's activation team and talk to them at (888) 571-2048. They will be able to sort out your situation.


How do I repair a Windows 7 installation damaged by Windows 8 sleep mode

I'm experimenting with a Windows 8 installation which is on a separate SSD. My actual Windows 7 installation I'm working with is on my old HDD.


While Windows 8 was in sleep mode I swapped the hard disks and put in the Windows 7 HDD (I thought the computer was off). When I started the computer, Windows 8 started back up to the login screen – then it was stuck and some seconds later the computer rebooted.


Now the Windows 7 Installation is damaged. When I boot, after the Windows 7 startup logo appears, a bluescreen shows up for few seconds stating:


STOP: c000021a {Fatal System Error}
The verification of KnownDLL failed. System process terminated unexpectedly with a status of 0xc000012f (0x00f0bb90 0x00000000).
The system has been shut down.

and then the computer reboots. The same happens in safe mode. 'Windows startup repair' cannot repair the issue.


Any idea what could have happened exactly and/or how to repair this Windows 7 Installation?

Using Windows XP Professional SP3, how do I reformat an external hard drive as FAT32?




I'll soon be getting a new laptop with Windows Vista or Windows 7 and turning my current laptop into a Linux box. Since FAT32 works out of the box on both Windows and Linux machines and I've had some problems with Linux and working with NTFS drives in the past (although things might have gotten better - I'll be playing around with it, for sure), I want to back up my important files to my external hard drive, but I want it in FAT32 format for the time being. The only option there is when I go to format the drive is NTFS, though.



How can I force Windows XP to format my external hard drive as a FAT32 drive? Or has Linux support for reading and writing NTFS gotten good enough where it doesn't matter anymore?


Answer



It's a limitation of Windows XP. It can read FAT32 drives larger than 32GB, but it cannot format over 32GB. Either boot up with a DOS or Windows 98 boot disk and format it there (with large hard drive support) or download and use fat32format.
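
Usage is a single command from an administrator command prompt (assuming the external drive shows up as X:):

fat32format X: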


SSH user minimum permissions

I am working on an application which sshes into servers and gathers information about the server such as disk and memory usage. Another task it needs to do is get file size information of certain files which may be anywhere on the server.



Because of the nature of this application, I would want to restrict the ssh user on the server to only be able to read files in /proc/* and get file sizes of certain files. I cannot give an example because the files may change on a server by server basis.



Is there any way that an SSH account could be locked down so that it can only read /proc/* and run du on files that could be anywhere?

hard drive - Demystifying SATA hotplug

I have a BIOS that has an option to enable hot-plug on individual ports. I have a sliding enclosure for HDDs and SSDs (nothing more than a pass-through to power and a SATA port) that allows me to cut power to the drive before physically moving it.



I would love the convenience of inserting and removing HDD/SSD there without shutting down the computer every time.



But while researching SATA hot-swapping, outside of expensive enterprise solutions there is zero reliable information. I even tried looking at patents. I can't find a single reliable source that tells me how reliable or unreliable hot-swapping is in the consumer world.



So, I do have support in my BIOS, motherboard and enclosure. I've never seen the drives themselves mention hot-plugging in their specs, not even the enterprise ones. How much risk of data loss will I be facing for this convenience?



Then, hardware aside, there is the software issue. Do I need support in the OS? And is there any ATA command that must be issued to unplug the drive, or does it park its heads automatically on power-down? There is a slightly informed discussion on the software side here.







Edit:
I found some more info regarding hot-plugging, from Western Digital: it says every drive that supports SATA, by definition of the standard, already supports hot-plugging.




SATA-compliant devices thus need no further modification to be hot-pluggable and provide the necessary building blocks for a robust hot-plug solution, which typically includes:

Device detection even with power-downed receptacles (typical of server applications)

Pre-charging resistors to passively limit inrush current during drive insertion

Hot-plug controllers to actively limit inrush current during drive insertion




source: http://wdc.custhelp.com/app/answers/detail/a_id/941/~/hot-swap-or-hot-plug-wd-sata-drives







But the above raises another doubt. It says:




In order to take advantage of hot-plug capabilities for your Serial
ATA hard drive, you must use the Serial ATA power connection, not the
Legacy (Molex) power connection. The Legacy (Molex) power connection
does not support hot-plugging.





Some of my drives are connected with Molex-to-SATA power adapters, just because I'm out of SATA power ports on my PSU. From what I could trace, some of the Molex and SATA power comes from the same 12V rail, and the SATA power plug does not seem to contain any logic; it is just dumb plastic. Does that mean I'm safe, and the doc refers to drives that support both SATA and Molex power connectors?

Expanding Wired Network with Wireless Router

I want to expand my home network with the ASUS RT-N66U. I want several workstations in one part of the house to be wired, but the other workstations in another part of the house to be wireless, while at the same time, being able to access all the machines (e.g. printer) in the network via LAN.


Obviously, connecting the LAN side of the main switch to the WAN socket of the wireless router doesn't do anything because this means that I won't be able to access anything that isn't connected to the wireless router.


I've followed the instructions here, but after doing so, I get an error message saying that the router isn't connected to WAN whenever I try to access something on the internet.


To sum up everything, here is my setup:
ASUS RT-N66U connected to a switch's LAN port via the ASUS router's WAN slot. The switch is connected to the modem


And here is my problem: Although I can use the internet, I am having trouble connecting to a printer that is wired up to the switch. How can I make it so that I can connect to the internet and access all other machines in my home network.

software rec - Maintaining a repository of binary files

I have a bunch of files in proprietary format (.pdf, .doc, .wmv, etc) that I want to mirror on a server I have, for archival purposes as well as to be able to pull down the "asset repository" to another computer (from the server).


Basically I want GIT but for binary files. It would be nice if a revision history could be maintained for the Word documents (every "push" to the server overwrites the copy on the server but secretly the old copy would be saved somewhere).


The simplest thing is to use FTP, but it seems like an annoying way to manage, to have to manually rename the documents etc.

graphics card - Will image quality be better if I convert VGA to DVI-D




The current situation is this:




  • CPU: only VGA port

  • 2 monitors: VGA and DVI-D (dual link)

  • Available cables: DVI-D (single link), VGA

  • On board graphics



I'm thinking about buying a VGA-to-DVI adapter (male to female, respectively) and a DVI splitter. I can then hook the monitors' cables to the splitter. Will it make the image quality significantly better? If not, then I may use a VGA cable and only need to buy a VGA splitter.



Answer



A VGA signal is an analog signal and, in comparison to DVI, is often perceived as being of poorer quality. Converting that signal to something else won't fix it. You could try this approach if you have excessively long cables, as a DVI cable usually loses less quality per unit of length.



If you do have the option try to switch to DVI-D which should improve the quality. Actually as your setup already includes a display which is using DVI you should be able to notice the difference.



Maybe describe the issue you're experiencing that makes you consider this option, so you can get some more helpful information.


windows 7 - Can you run a .iso from a flash drive or external hard drive?






I'm currently in the process of building a computer (parts in the mail from Newegg, should be fun) and since I need an OS, I figured I'd take advantage of my student status to download a copy of Windows 7 from the MSDN Academic Alliance.


Anyway, that download left me with a .iso file. One problem -- the .iso is too large to burn to a CD (it's 3 gigs or so, I believe). So my question becomes: what's the best way to install the OS? Do I need to procure a DVD burner / blank DVD? Is it possible to mount the .iso on a virtual drive on my external hard drive (via Daemon Tools or the like) and then install from that?


What's my best option?


Edit: .iso is not 15 gigs, the "Install space required" is 15 gigs. >.< .iso itself is just 3...


Answer



Follow the steps from the following site to copy the ISO to a flash drive and install it from there:


Windows 7 USB/DVD Download Tool


(I'm not sure, but if you purchased an Upgrade version of Windows 7, you might not be able to reboot via USB or DVD drive and install it; you may be required to mount the ISO and install it from your current O/S.)


If you're not trying to boot up via a flash drive and install from there, it's much easier just to mount the ISO and install it from the mount while using your current O/S.
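If the tool isn't available, the stick can also be prepared by hand. This is only a sketch: it assumes the flash drive shows up as disk 1 in diskpart and gets the letter E:, and that the ISO is mounted or extracted at D: (all three are assumptions, so check them on your machine before running anything).

diskpart
list disk
select disk 1
clean
create partition primary
active
format fs=ntfs quick
assign
exit
xcopy D:\*.* E:\ /s /e
bootsect /nt60 E:

diskpart wipes whatever disk is selected, so double-check the disk number from "list disk" before running "clean"; bootsect.exe can be found in the boot folder of the ISO.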


usb boot - USB thumb drive with CentOS 7 not booting


I am trying to create a custom bootable CentOS 7 ISO that boots from USB.



  1. I have downloaded the CentOS minimal DVD

  2. I have used dd to put the ISO onto the thumbdrive:


    dd if=CentOS-7-x86_64-Minimal-1511.iso of=/dev/sdb bs=4MB

  3. I have tried multiple options to get it to boot, like changing the boot order and disabling/enabling UEFI in the BIOS, but it just does not boot from the thumb drive. The ISO works fine if I burn it to DVD.



What can I do to get it to boot?


Answer



When generating a custom CentOS ISO, isohybrid is required to get it to boot from a USB stick.


In my case I was using isohybrid, but the variable I was using as the ISO path was wrong. As a result, isohybrid was failing.
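For reference, the step looks roughly like this (a sketch; custom.iso and /dev/sdb are placeholders, and isohybrid comes from the syslinux / syslinux-utils package):

isohybrid custom.iso        # writes an MBR into the image so BIOS can boot it from a USB stick
dd if=custom.iso of=/dev/sdb bs=4M
sync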


Wednesday, February 25, 2015

windows - mklink to network share or UNC path or Mapped Drive?

Are there performance, permission, or other considerations to think about when using an mklink path to a network share versus just a straight UNC path (or a mapped drive, for that matter)?



For example, can these three ways of accessing a network resource be considered functionally equivalent and roughly interchangeable?




mklink /d c:\shares\warehouse \\server1\warehouse
xcopy /s c:\shares\warehouse d:\temp\warehouse_copy


.



xcopy /s \\server1\warehouse d:\temp\warehouse_copy


.




net use X: \\server1\warehouse
xcopy /s X:\ d:\temp\warehouse_copy


Server is Windows 2003, clients are Win7 Pro. Network is mostly gigabit, though there are a few 100mbit laggards here and there. I used a cmd shell in the example because it's easiest to explain; in practice the resource would be accessed by a variety of other methods as well (Windows Explorer, Office "open" dialogs, system backup services, etc.)

windows 8 - How to keep a hard drive from going to sleep?


I have two hard disk drives in my Windows 8 desktop. The issue I am having is that the secondary hard drive goes to sleep frequently (I assume due to inactivity, since I am usually only using the primary drive). Then, when I need to access it, I hear it spin back up as my entire computer grinds to a halt for a couple of seconds.


Is there any way to prevent an internal hard drive from sleeping? I looked in the BIOS and didn't see anything, and there was no Power Management tab in Device Manager like there is for USB drives.


This behavior has occurred with other versions of Windows, so it is not specific to Windows 8. I am starting to wonder if it is a hardware feature of the drive. Haven't tried it under Linux or some other OS.


Answer



Go to Control Panel > Power Options > Change plan settings > Change advanced power settings; then, where it says "Turn off hard disk after", instead of selecting a number of minutes, set it to "Never".
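The same setting can also be changed from an elevated command prompt. A sketch, where 0 means "never" and the -ac/-dc variants cover plugged-in versus on-battery:

powercfg /change disk-timeout-ac 0
powercfg /change disk-timeout-dc 0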


firewall-cmd on OpenVZ CentOS 7

So I've been trying to set up a web server on my VPS with CentOS 7. To do this I've used this tutorial. Installation of MySQL/MariaDB and PHP worked successfully. However, I can't access my server, because I haven't allowed external access yet.



To do this I am forced to use these three commands (according to the tutorial):





firewall-cmd --permanent --zone=public --add-service=http



firewall-cmd --permanent --zone=public --add-service=https



firewall-cmd --reload




The firewall-cmd command wasn't found because, according to this thread, OpenVZ installs a stripped-down version of CentOS 7, so I used the commands from there.




However, the following error message popped up when using systemctl start firewalld:




Job for firewalld.service failed. See 'systemctl status
firewalld.service' and 'journalctl -xn' for details.




systemctl status firewalld.service -l shows this info:





firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
   Active: failed (Result: exit-code) since Mon 2016-07-18 04:31:46 EDT; 6min ago
  Process: 12522 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=1/FAILURE)
 Main PID: 12522 (code=exited, status=1/FAILURE)

Jul 18 04:31:46 Christof2 systemd[1]: firewalld.service: main process exited, code=exited, status=1/FAILURE
Jul 18 04:31:46 Christof2 systemd[1]: Failed to start firewalld - dynamic firewall daemon.
Jul 18 04:31:46 Christof2 systemd[1]: Unit firewalld.service entered failed state.




FYI: I did everything from a fresh installation of CentOS 7. If it helps, I can simply reinstall CentOS and do one step differently.
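For reference, if firewalld simply will not run inside the container, the plain iptables equivalent of the three firewall-cmd rules above is roughly the following (a sketch; persisting the rules with "service iptables save" needs the iptables-services package):

iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 443 -j ACCEPT
service iptables save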

Dell Inspiron 15R N5110 keyboard not working

Keys on my Dell Inspiron 15R laptop are not working. I replaced the hard drive of my laptop. While reinstalling the keyboard, the clip that holds the cable broke, and I also connected the cable in the wrong position at first. After that I reconnected it, and somehow the cable clip is fixed. Now some keys work sometimes and sometimes they don't. What can I do now?

email - Is it OK to have multiple TXT records for a single domain containing different SPF entries?



A remote recipient domain is rejecting mail on the grounds of SPF and I think it's because the sender has SPF configured incorrectly.



When I run dig, I see:



[fooadm@box ~]# dig @8.8.8.8 -t TXT foosender.com

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-20.P1.el5_8.6 <<>> @8.8.8.8 -t TXT foosender.com

; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30608
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;foosender.com. IN TXT

;; ANSWER SECTION:

foosender.com. 14039 IN TXT "v=spf1 include:spf.foo1.com -all"
foosender.com. 14039 IN TXT "v=spf1 include:_spf.bob.foo2.com -all"

;; Query time: 26 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Jan 7 09:45:38 2014
;; MSG SIZE rcvd: 146


Is this a valid setup? It seems strange to me that there are two separate records (each with hard fails). Shouldn't everything be in a single record?




I would expect the proper TXT record to be:



v=spf1 include:spf.foo1.com include:_spf.bob.foo2.com -all


Answer



No, it is not a valid setup; you are right. See RFC 4408, section 4.5.





  1. Records that do not begin with a version section of exactly "v=spf1" are discarded. Note that the version section is terminated either by an SP character or the end of the record. A record with a version section of "v=spf10" does not match and must be discarded.

  2. If any records of type SPF are in the set, then all records of type TXT are discarded.

     After the above steps, there should be exactly one record remaining and evaluation can proceed. If there are two or more records remaining, then check_host() exits immediately with the result of "PermError".

     If no matching records are returned, an SPF client MUST assume that the domain makes no SPF declarations. SPF processing MUST stop and return "None".
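Once the two records have been merged into the single record suggested in the question, the published result can be double-checked with the same public resolver used earlier (a sketch):

dig @8.8.8.8 +short TXT foosender.com

The output should contain exactly one string starting with "v=spf1".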




backup - How do I save a web page in Firefox - the saved version shows "view previous comments" again

I am trying to save a page with Firefox. It's a post on Facebook, loaded on its own page. I clicked "view previous comments" so all comments are displayed, and then I save the page.


I then loaded the saved file. "View previous comments" is back, and if I click on it it wants to load the source off of the internet. I tried loading the page again, displaying all the comments again, switching Firefox to offline mode, and saving the page. I got three consecutive "source could not be read" errors. It saved part of it anyway. I loaded the page, and again, "view previous comments" is back. I tried "view page source" from the completely loaded page, and then saved the file from that window, and I get exactly the same results.


Firefox 20.0.1; Windows XP SP3.

What is my nginx + php server bottleneck?

I am running some siege tests on my nginx server. The bottleneck doesn't seem to be cpu or memory so what is it?



I try to do this on my macbook:



sudo siege -t 10s -c 500 server_ip/test.php



The response time goes to 10 seconds, I get errors, and siege aborts before completing.



But if I run the above on my server



siege -t 10s -c 500 localhost/test.php


I get:




Transactions:               6555 hits
Availability: 95.14 %
Elapsed time: 9.51 secs
Data transferred: 117.30 MB
Response time: 0.18 secs
Transaction rate: 689.27 trans/sec
Throughput: 12.33 MB/sec
Concurrency: 127.11
Successful transactions: 6555
Failed transactions: 335

Longest transaction: 1.31
Shortest transaction: 0.00


I also noticed that, at lower concurrency levels, I get a vastly better transaction rate on localhost than externally.



But when the above is running on localhost, CPU usage is low and memory usage is low in htop. So I'm confused about how I can boost performance, because I can't see a bottleneck.



ulimit returns 50000 because I've increased it. There are 4 nginx worker processes, which is 2 times my CPU core count. Here are my other settings:




worker_rlimit_nofile 40000;

events {
worker_connections 20000;
# multi_accept on;
}

tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;

types_hash_max_size 2048;


The test.php is just a phpinfo() call, nothing else. No database connections.



The machine is an AWS m3.large: 2 CPU cores and about 7 GB of RAM, I believe.



Here is the contents of my server block:



    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/sitename;
    index index.php index.html index.htm;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        try_files $uri $uri.html $uri/ @extensionless-php;
    }

    location @extensionless-php {
        rewrite ^(.*)$ $1.php last;
    }

    error_page 404 /404.html;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }


My php-fpm settings:



 pm = dynamic

; The number of child processes to be created when pm is set to 'static' and the

; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 40


; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 30

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 20


; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 35

; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.process_idle_timeout = 10s;


; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
;pm.max_requests = 500
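One way to see where requests pile up, since CPU and memory look idle, is the php-fpm status page. This is only a sketch and is not part of the config above; the pool file path and the /fpm-status location name are assumptions:

; in the pool config (e.g. /etc/php5/fpm/pool.d/www.conf)
pm.status_path = /fpm-status

# in the nginx server block
location = /fpm-status {
    access_log off;
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}

# then, while siege is running:
curl -s http://localhost/fpm-status

If "listen queue" climbs while "active processes" sits at pm.max_children, the bottleneck is the PHP-FPM pool rather than nginx or the hardware.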

memory - Windows 10, 'System' process taking massive amounts of RAM


Since I upgraded to Windows 10, my system has been consuming RAM excessively.


[screenshot]


I've been reading a bit and determined it's likely a driver leaking memory. So I got myself the Windows Driver Kit and tracked memory usage with poolmon:


[screenshot]


However, I don't really know how to proceed from here. Is the item tagged "smNp" the culprit in this issue? How do I go from there to actually identifying the driver?


I tried some stuff like "C:\Windows\System32\drivers>findstr /s smnp ." but it returned no results. I also took a look at the pooltag.txt file and this is the description I found for it:


[screenshot]


So yeah, any help would be appreciated.
Thanks in advance.


Answer



Going into services.msc (via Win+R) and disabling Superfetch completely solves this. I am not sure if Superfetch is just broken as of now or if it's "by design".


In addition, apparently getting rid of the paging file will have the same effect, but the above solution is a safer bet.
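The same can be done from an elevated command prompt; a sketch, relying on the fact that the Superfetch service's internal name on Windows 10 is SysMain:

sc stop SysMain
sc config SysMain start= disabled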


Why can't curl retrieve the SSH host key (key: )




I've been using curl (by means of git-ftp) for a while, and passing only username and an sftp URL.



Authentication would always work implicitly through publickey.



Suddenly curl will not connect through SSH anymore – apparently because it does not get a host key and therefore rejects the connection:



Trying {IP}...
* Connected to host.example.com ({IP}) port 22 (#0)
* SSH MD5 fingerprint: {Fingerprint}
* SSH host check: 2, key:

* Closing connection 0



Why can't curl get the key?



Connections with ssh -v work and do give me 2 host keys, also curl --insecure will work.


Answer



libssh2 does not support some newer key types, such as ecdsa-sha2-nistp256 and ssh-ed25519.



So if you already have one of these keys in your .ssh/known_hosts, libssh2 will fail. But you can add another key that libssh2 supports, like RSA:




To fix it, retrieve the RSA public key from the remote host and add it to your known_hosts file:



ssh-keyscan hostname.example.com >> ~/.ssh/known_hosts



The exact format and file location might vary by system.
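If only the RSA key should be added (so that unsupported key types are not appended alongside it), the scan can be restricted by type; a sketch with the same placeholder hostname:

ssh-keyscan -t rsa hostname.example.com >> ~/.ssh/known_hosts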


domain name system - Registering a co.za using private nameservers



I'm having some difficulty registering a co.za domain using my own name servers. I'm new to this, so please excuse any newbie mistakes and questions.



I'm using BIND 9.7.1-P2 and have followed all the tutorials I can find. But when I try register the co.za domain I get the following:







Provided Nameserver information
Primary Server : ns1.maximadns.co.za @ 41.185.17.58
Secondary 1 : ns2.maximadns.co.za @ 41.185.17.59



Domain "maximadns.co.za", SOA Ref (), Orig ""
Pre-existing Nameservers for "maximadns.co.za":-



Syntax/Cross-Checking provided info for Nameserver at 6a: ns1.maximadns.co.za @ 41.185.17.58
IPv4: 41.185.17.58 ==> [WARN: No PTR records!]

FQDN: ns1.maximadns.co.za ==> [WARN: No A records!]



Syntax/Cross-Checking provided info for Nameserver at 6e: ns2.maximadns.co.za @ 41.185.17.59
IPv4: 41.185.17.59 ==> [WARN: No PTR records!]
FQDN: ns2.maximadns.co.za ==> [WARN: No A records!]
!
! The message "No PTR records?" indicates that the reverse domain
| information has not been configured correctly.
!
!

! The message "No A records?" means that name of the Nameserver specified can not be resolved.
! This can be ignored if the specified Nameserver is a child of the
| domain application.
!



Adding application
Checking quoted Nameservers....



NS1-1 FQDN: ns1.maximadns.co.za.
NS1-1 IPV4: 41.185.17.58

NS1-1 ORIGIN: ns1.maximadns.co.za.
NS1-1 E-MAIL: hostmaster@maximasoftware.co.za.
NS1-1 SER-NO: 2010081601
NS1-1 NS RECORD1: ns1.maximadns.co.za.
NS1-1 NS RECORD2: ns2.maximadns.co.za.



NS2-1 FQDN: ns2.maximadns.co.za.
NS2-1 IPV4: 41.185.17.59
NS2-1 ORIGIN: ns1.maximadns.co.za.
NS2-1 E-MAIL: hostmaster@maximasoftware.co.za.

NS2-1 SER-NO: 2010081601
NS2-1 NS RECORD1: ns1.maximadns.co.za.
NS2-1 NS RECORD2: ns2.maximadns.co.za.



ERROR: No valid nameservers found - rejecting request.






I did provide IPv4 glue records when specifying the nameservers for the domain registration. From what I understand that error means that there are no A or PTR records for the domain found on the specified servers. But what confuses me is when I use Dig to check if my name servers are working, I seem to get the correct response (well according to the tutorials I've read).




When I do a 'dig @41.185.17.58 maximadns.co.za' I get the following response:






; <<>> DiG 9.3.2 <<>> @41.185.17.58 maximadns.co.za
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1364
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2




;; QUESTION SECTION:
;maximadns.co.za. IN A



;; ANSWER SECTION:
maximadns.co.za. 21600 IN A 41.185.17.62



;; AUTHORITY SECTION:
maximadns.co.za. 21600 IN NS ns1.maximadns.co.za.
maximadns.co.za. 21600 IN NS ns2.maximadns.co.za.




;; ADDITIONAL SECTION:
ns1.maximadns.co.za. 21600 IN A 41.185.17.58
ns2.maximadns.co.za. 21600 IN A 41.185.17.59



;; Query time: 53 msec
;; SERVER: 41.185.17.58#53(41.185.17.58)
;; WHEN: Wed Aug 18 10:08:23 2010
;; MSG SIZE rcvd: 117







And when I do a 'dig @41.185.17.58 -x 41.185.17.58' I get the following response:






; <<>> DiG 9.3.2 <<>> @41.185.17.58 -x 41.185.17.58
; (1 server found)
;; global options: printcmd
;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1660
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2



;; QUESTION SECTION:
;58.17.185.41.in-addr.arpa. IN PTR



;; ANSWER SECTION:
58.17.185.41.in-addr.arpa. 21600 IN PTR ns1.maximadns.co.za.



;; AUTHORITY SECTION:

17.185.41.in-addr.arpa. 21600 IN NS ns1.maximadns.co.za.
17.185.41.in-addr.arpa. 21600 IN NS ns2.maximadns.co.za.



;; ADDITIONAL SECTION:
ns1.maximadns.co.za. 21600 IN A 41.185.17.58
ns2.maximadns.co.za. 21600 IN A 41.185.17.59



;; Query time: 41 msec
;; SERVER: 41.185.17.58#53(41.185.17.58)
;; WHEN: Wed Aug 18 10:09:54 2010

;; MSG SIZE rcvd: 140






Just for the record, I'm not doing these dig queries on the server itself; I'm doing them from my personal PC, which is not on the same LAN as the server, so they are being performed over the Internet. I'm aware that I'm querying my own server directly in these dig queries, but unless I misunderstand, when I specify glue addresses while registering the domain, the registry will explicitly use those IP addresses as the name servers.



This is the point I'm stuck at: when I try to register the domain it says my name servers aren't valid, but when I test my name servers they appear to be working. Either I'm testing incorrectly and/or I have misunderstood some or all of the concepts of DNS.



Any help/advice/pointers you can afford to offer would be greatly appreciated.




Thanks in advance.


Answer



The issue is with your PTR records: they do not exist for 41.185.17.58 and 41.185.17.59.



host 41.185.17.58
Host 58.17.185.41.in-addr.arpa. not found: 3(NXDOMAIN)


From what I can see, that block belongs to Web Africa; you need to get them to delegate your part of the reverse zone to you, or to create the PTR records for you.




17.185.41.in-addr.arpa. 522 IN  SOA smtp1.wadns.net. noc.webafrica.co.za. 2008120678 14400 600 86400 14400
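The key difference from the dig tests in the question is that those queried the name server directly, whereas the registry resolves through the public DNS tree. A quick sanity check against a public resolver shows what the registry sees (a sketch):

dig @8.8.8.8 -x 41.185.17.58 +short

It comes back empty (NXDOMAIN), which matches the registry's "No PTR records" warning.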

windows 10 - How to reinstall Cortana?

With reference to:


Can I completely disable Cortana on Windows 10?


I removed the Cortana packages from my running Windows 10 using the method posted by magicandre1981 and using win6x_registry_tweak.


My question is how can I re-install Cortana (or for that matter any other package removed similarly)?


I have opened up the install.wim image (converted from a Win 10 install.esd file) and found Cortana in the system apps folder, but don't know how / what to use to re-install it.


Any help please?

ubuntu - Is multiple php5-fpm processes normal




I'm running several wordpress sites in a LEMP (Ubuntu Linux, Nginx, MySQL, PHP) stack. Looking at the running processes I can see there are two php5-fpm processes.



Is this normal, or have I done something wrong? I'm more used to a LAMP stack and think I usually only had one PHP process running.


Answer



Yes, that's entirely normal. Each PHP-FPM process can only handle a single request at a time, so PHP-FPM fires up multiple (known as a pool) to handle more than one concurrent request.



A LAMP stack wouldn't have any PHP processes, as PHP executes within Apache using mod_php.
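The size of that pool is controlled in the pool configuration; a sketch of the relevant directives, with values similar to the Ubuntu defaults (the file is typically /etc/php5/fpm/pool.d/www.conf):

pm = dynamic
pm.max_children = 5        ; hard cap on simultaneous PHP-FPM workers
pm.start_servers = 2       ; workers launched when the service starts
pm.min_spare_servers = 1   ; spawn more when idle workers drop below this
pm.max_spare_servers = 3   ; kill idle workers above this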


ubuntu - HP Proliant ML110 G6 as first file server (RAID)?

Bit of a newbie here, and this will be the first piece of server hardware I have purchased.
We are a small design studio and I am about to buy our first server.
I have successfully run Ubuntu on an old PC, which has acted as our file server until now.



I am keenly looking at the HP Proliant ML110 G6 which seems to me to fit the bill.



Any reason why this wouldn't be a good choice for an Ubuntu file server?



I am hoping to fill the 4 drive bays with 1 TB SATA hard drives in some kind of RAID configuration, I think RAID 10 or RAID 1. My goals are data mirroring, survivability, minimum disruption to operation, and fixability if something goes wrong. Any suggestions?

Will the embedded HP Smart Array B110i SATA controller (RAID 0/1/10) work for me, or will I need a separate RAID card?



I know this is all a bit vague, but I do hope that someone will be able to give me a few words of advice, so as to give me the confidence to make the purchase.



Thanks,



G

virtualization - Consumer (or prosumer) SSD's vs. fast HDD in a server environment



What are the pro's and con's of consumer SSDs vs. fast 10-15k spinning drives in a server environment? We cannot use enterprise SSDs in our case as they are prohibitively expensive. Here's some notes about our particular use case:





  • Hypervisor with 5-10 VM's max. No individual VM will be crazy i/o intensive.

  • Internal RAID 10, no SAN/NAS...



I know that enterprise SSDs:




  1. are rated for longer lifespans


  2. and perform more consistently over long periods



than consumer SSDs... but does that mean consumer SSDs are completely unsuitable for a server environment, or will they still perform better than fast spinning drives?



Since we're protected via RAID/backup, I'm more concerned about performance over lifespan (as long as lifespan isn't expected to be crazy low).


Answer



Note: This answer is specific to the server components described in the OP's comment.





  • Compatibility is going to dictate everything here.

  • Dell PERC array controllers are LSI devices. So anything that works on an LSI controller should be okay.

  • Your ability to monitor the health of your RAID array is paramount. Since this is Dell, ensure you have the appropriate agents, alarms and monitoring in place to report on errors from your PERC controller.

  • Don't use RAID5. We don't do that anymore in the sysadmin world.

  • Keep a cold spare handy.

  • You don't necessarily have to go to a consumer disk. There are enterprise SSD drives available at all price points. I urge people to buy SAS SSDs instead of SATA wherever possible.

  • In addition, you can probably find better pricing on the officially supported equipment as well (nobody pays retail).

  • Don't listen to voodoo about rotating SSD drives out to try to outsmart the RAID controller or its wear-leveling algorithms. The use case you've described won't have a significant impact on the life of the disks.




Also see: Are SSD drives as reliable as mechanical drives (2013)?


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...