Sunday, January 12, 2020

freebsd - Trying to "zfs attach" a new disk, how to get correct specification for the disk I'm adding?

I'm migrating data from my old server to ZFS on FreeBSD 10.x (I'm actually on FreeNAS 9.10.2-U1, but I'm doing this in the console, so it's pure FreeBSD). My problem is that zpool attach needs the new_device argument in the correct format, or slice/partition information, which I don't know how to provide.



Because of costs, I'm migrating the data in two stages: copying the data from my old mirror to a new ZFS pool (without redundancy), then breaking the mirrors on the old server, moving the mirror drives over, and resilvering on the new server, so that at every stage there are two copies of the data. SMART stats are all good, and all disks are "enterprise" type. Although not ideal, so far it's gone well. I've copied over the data and connected the disks from the old server to the new server, where I'm now stuck on getting the correct arguments for zpool attach.




Current storage is as follows:



camcontrol devlist identifies the disk devices and model numbers, giving:



ada0 = 6TB disk
ada1 = 4TB disk
ada2 = 6TB disk
ada3 = BOOT MIRROR
ada4 = BOOT MIRROR
ada5 = 4TB disk
ada6 = 6TB disk


glabel status identifies the gptids for the 5 disks already in use:



gptid/c610a927-01da-11e7-b762-000743144400    ada0p2 - 6TB
gptid/c68f80ae-01da-11e7-b762-000743144400    ada2p2 - 6TB
gptid/3b2b904b-02b3-11e7-b762-000743144400    ada3p1 - BOOT MIRROR
gptid/fb71e387-016b-11e7-9ddd-000743144400    ada4p1 - BOOT MIRROR
gptid/c566154f-01da-11e7-b762-000743144400    ada5p2 - 4TB



zpool status identifies the 3 disks in the data pool so far, by gptid:



gptid/c610a927-01da-11e7-b762-000743144400 (from above this is ada0p2, 6TB)
gptid/c68f80ae-01da-11e7-b762-000743144400 (from above this is ada2p2, 6TB)
gptid/c566154f-01da-11e7-b762-000743144400 (from above this is ada5p2, 4TB)


So the new disks to attach are:




ada1 (4TB) - attach to gptid/c566154f-01da-11e7-b762-000743144400 (ada5p2)
ada6 (6TB) - attach to gptid/c610a927-01da-11e7-b762-000743144400 (ada0p2)

disk arriving shortly (6TB): attach on arrival to gptid/c68f80ae-01da-11e7-b762-000743144400 (ada2p2)


Problem:



What I'm stuck on is the actual command to use for attach. zpool attach gives an error whatever I try:




zpool attach ada0p2 ada6
missing specification

zpool attach gptid/c610a927-01da-11e7-b762-000743144400 ada6
missing specification


I'm guessing it's objecting to "ada6" and that I should be providing some other identifier, or a slice/partition ID instead. But I don't have those; ZFS creates them itself when it attaches the disk.




What is the correct command to use here, or what am I missing?
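For what it's worth, `zpool attach` expects the pool name as its first argument, then the existing device, then the new device. A sketch of what the sequence might look like, assuming (hypothetically) the pool is named "tank" and the new disk should mirror the existing FreeNAS-style layout of a swap partition followed by a ZFS partition; substitute your actual pool name from `zpool status`:

```sh
# Hypothetical sketch: syntax is
#   zpool attach <pool> <existing_device> <new_device>

# Replicate the existing disks' GPT layout onto the new disk
# (partition sizes here are assumptions; check an existing disk
# with `gpart show ada0` first):
gpart create -s gpt ada6
gpart add -t freebsd-swap -s 2g ada6
gpart add -t freebsd-zfs ada6

# Attach the new ZFS partition (ada6p2) to mirror the existing
# 6TB vdev member, using the assumed pool name "tank":
zpool attach tank gptid/c610a927-01da-11e7-b762-000743144400 ada6p2
```

This is only a sketch of the command form; the "missing specification" error in the attempts above is consistent with the pool name being absent, so the first argument was parsed as the pool.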

Saturday, January 11, 2020

postgresql - nginx / node.js / postgres, scalability problems?

I have an app running with:





  • one instance of nginx as the frontend (serving static files)

  • a cluster of Node.js application processes for the backend (using the cluster and expressjs modules)

  • one instance of Postgres as the DB



Is this architecture sufficient if the application needs to scale (this is only for HTTP / REST requests) to:




  • 500 requests per second (each request only fetches data from the DB; the data may be several KB, with no heavy computation needed after the fetch)



  • 20,000 users connected at the same time




Where could the bottlenecks be?
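A rough way to frame the first number is Little's Law: the number of requests in flight equals arrival rate times average latency. A minimal sketch, assuming a hypothetical 20 ms average response time per DB-backed request:

```shell
# Little's Law: in-flight requests = arrival rate x average latency.
# The 20 ms latency is an assumption for illustration only.
rate=500          # requests per second (from the question)
latency_ms=20     # assumed average service time per request
in_flight=$(( rate * latency_ms / 1000 ))
echo "$in_flight concurrent requests in flight"   # prints: 10 concurrent requests in flight
```

Under that assumption, 500 req/s keeps only about 10 requests in flight, so the 20,000 simultaneously connected users (file descriptors, keep-alive sockets, Postgres connection pool size) are more likely to be the pressure point than raw request throughput.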

Friday, January 10, 2020

vmware esxi - Storage vMotion fails with error 0xbad0060 (Necessary module isn't loaded)

We ran into the following problem: we added a new LUN to our small SAN when we upgraded from ESX 4.1 to ESXi 5.0. We wanted to move a number of VMs from one LUN to the other using Storage vMotion. One of the reasons for that was to make sure the VMs are safe when we upgrade from VMFS 3 to VMFS 5.




Unfortunately, we ran into the following error when trying to perform a Storage vMotion:




A general system error occurred: Failed to initialize migration at source.
Error 0xbad0060. Necessary module isn't loaded.




The same error occurs when trying a host vMotion.



Any idea what could cause this?
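The error text ("Necessary module isn't loaded") suggests a vmkernel module failed to load on the source host. One way to check, assuming shell access on the ESXi 5.0 host (module names can vary by build, so treat "migrate" as a guess to grep for):

```sh
# List loaded vmkernel modules and look for the migration module:
esxcli system module list | grep -i migrate

# If the module is missing, restarting the management agents on the
# host is a common first troubleshooting step:
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
```

This is a diagnostic sketch, not a confirmed fix for error 0xbad0060.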

Thursday, January 9, 2020

scalability - How many databases can SQL server express handle



I'm running a SQL Server 2005 Express instance currently hosting ~50 databases. The databases serve clients' CMS/eCommerce websites. The connections are to a single instance; no user-attached instances are being used. Median DB size is 5 MB, the largest 20 MB. The websites are mostly low-traffic, CPU utilization is < 10%, and the SQL Server process uses at most 350 MB RAM.
For now I'm well within the SQL Server Express limits of 1 CPU / 1 GB RAM. In the upcoming expansion the number of databases may double. If I assume linear growth in requirements, the 1 GB limit still won't be reached. But I'm concerned the number (> 100) of databases may become an issue. I'm not sure this usage scenario is what Microsoft had in mind for Express.
Is there any information, or preferably real-world experience, regarding SQL Server Express's ability to handle lots of small databases? Can I expect it to run 150 databases, or should I start working on migrating to other database servers/file-based databases?


Answer




According to the SQL Server 2005 Express edition overview:




there are no limits to the number of databases that can be attached to the server.




So the limit is how much of the server's performance you can utilise. Bear in mind that, as the Express edition will only use one CPU core, on a quad-core processor it cannot use more than 25% of the total CPU.




If you later on find that you need to utilise more of the server's performance, you can quite easily upgrade to a different version of SQL Server.
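To keep an eye on growth against those limits, a count of user databases can be pulled from the catalog views. A sketch, assuming the default instance name SQLEXPRESS and Windows authentication (adjust `-S` and auth flags for your setup):

```sh
# Count user databases; database_id 1-4 are the system databases
# (master, tempdb, model, msdb), so exclude them:
sqlcmd -S .\SQLEXPRESS -E -Q "SELECT COUNT(*) FROM sys.databases WHERE database_id > 4"
```

`sys.databases` is available from SQL Server 2005 onward, so this works on the Express edition in question.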


ssl - After setting up Elastic Load balancer my https doesn't work anymore. Nginx error



I have a regular instance that works fine when not behind a load balancer.
I set up an ELB with 80 forwarding to 80, 443 forwarding to 443, and sticky sessions.
Afterward, I receive this error when going to any HTTPS page:




The plain HTTP request was sent to HTTPS port


I handle forcing HTTPS on certain pages in my nginx configuration.
What do I need to do to get this working? I'm putting a barebones version of my nginx config below.



http {
    include mime.types;
    default_type application/octet-stream;

    # Directories
    client_body_temp_path tmp/client_body/ 2 2;
    fastcgi_temp_path tmp/fastcgi/;
    proxy_temp_path tmp/proxy/;
    uwsgi_temp_path tmp/uwsgi/;

    server {
        listen 443;
        ssl on;

        ssl_certificate ssl.crt;
        ssl_certificate_key ssl.key;

        server_name www.shirtsby.me;
        if ($host ~* ^www\.(.*)) {
            set $host_without_www $1;
            rewrite ^/(.*) $scheme://$host_without_www/$1 permanent;
        }

        location ~ ^/(images|img|thumbs|js|css)/ {
            root /app/public;
        }

        if ($uri ~ ^/(images|img|thumbs|js|css)/) {
            set $ssltoggle 1;
        }
        if ($uri ~ "/nonsecure") {
            set $ssltoggle 1;
        }
        if ($ssltoggle != 1) {
            rewrite ^(.*)$ http://$server_name$1 permanent;
        }

        location / {
            uwsgi_pass unix:/site/sock/uwsgi.sock;
            include uwsgi_params;
        }
    }

    server {
        listen 80;
        server_name www.shirtsby.me;
        if ($host ~* ^www\.(.*)) {
            set $host_without_www $1;
            rewrite ^/(.*) $scheme://$host_without_www/$1 permanent;
        }

        if ($uri ~ "/secure") {
            set $ssltoggle 1;
        }
        if ($ssltoggle = 1) {
            rewrite ^(.*)$ https://$server_name$1 permanent;
        }

        location ~ ^/(images|img|thumbs|js|css)/ {
            root /app/public;
        }

        location / {
            uwsgi_pass unix:/home/ubuntu/site/sock/uwsgi.sock;
            include uwsgi_params;
        }
    }
}

Answer



I ended up getting the answer from vandemar in the nginx IRC channel.
It seems pretty simple, but I struggled to figure it out. The ELB was handling the SSL; I had already given it all the certificate information. The problem was trying to handle it again on the individual instances in the configuration file.
The solution is simply to eliminate all the SSL directives from the config file.
Removing these three lines fixed everything:




ssl on;
ssl_certificate ssl.crt;
ssl_certificate_key ssl.key;
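If the instance still needs to distinguish HTTP from HTTPS clients after SSL termination moves to the ELB (for the force-HTTPS rewrites, for example), the conventional approach is to read the X-Forwarded-Proto header that ELB sets on forwarded requests. A sketch, assuming the ELB's HTTPS listener forwards to the instance over plain HTTP on port 80:

```
# The ELB terminates SSL and forwards plain HTTP, setting
# X-Forwarded-Proto to "http" or "https" to record how the
# client originally connected.
server {
    listen 80;
    server_name www.shirtsby.me;

    # Redirect clients that hit the ELB over plain HTTP:
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    # ...rest of the server block unchanged...
}
```

This keeps all certificate handling on the ELB while preserving per-page HTTPS enforcement on the backend.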

centos - What does cron mail status 0x0047#012 Mean



I noticed in the mail log last night that I'm getting a new message:



MAIL (mailed XXX bytes of output but got status 0x0047#012)


The cron job did run successfully, though (it's a script that transmits to a third-party API, and they confirmed that they received the data), but I'm unable to see the status of the transmission on our end.



I'm thinking it might be related to the amount of available disk space, but I have no way to be sure.




Here is the output of df -h:



Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      9.8G  9.7G     0 100% /
devtmpfs        1.9G   64K  1.9G   1% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/xvdb1       48G  6.7G   39G  15% /var/www



For reference, we're using CentOS 6.6 on AWS.
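Given that the root volume shows 100% used in the df output above, one way to see what is consuming it is to summarize top-level directory sizes on that filesystem only. A sketch (paths and depth are illustrative):

```shell
# Summarize top-level directory sizes on the root filesystem.
# -x stays on one filesystem, so /var/www (its own volume) is skipped;
# sort -h orders human-readable sizes, largest last.
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail
```

On CentOS 6 both `du --max-depth` and `sort -h` are available in the stock coreutils.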



I tried looking online for the meaning of this message but was unable to find it. If anyone could shed some light on it, that would be great. Thanks.



EDIT:



The answer marked as a dupe did not help me, as it's not related to my question and the user asking that question got a different error response.


Answer



So I got hold of our system admin (we contract out to him; I'm just a dev at my company), and he said this was an issue with updating our AWS server. Basically, we log to our /var/httpd folder since we have plenty of space there, but the update caused our pointers to go away. Here are his notes to help anyone in the future.




These notes relate to the issue in general, and the apache logs:



After a round of server updates last week, the Apache logs were writing to the wrong location. This has been fixed, and the logs are now properly writing to the /var/httpd volume again. We keep the logs on the /var/httpd volume to keep them from suffocating the root volume. The root volume is 10GB and the /var/httpd volume is 50GB.



These notes are specific to the cron issue:



It was probably the root-volume space issue. Mail servers write to queues, which then send. If the volume was full, the mail couldn't be written to the queue.



I would still be interested in finding a list of the status codes that cron uses, as that was my original question, and I can't seem to find this info. If I do find it, I will update this answer.


Monday, January 6, 2020

django - Amazon EC2 Ami recommendations for free tier?



Amazon Web Services recently introduced a free tier, where you basically get free resources to try out AWS and run tiny sites and projects. It's free as long as you remain below certain limits on bandwidth, disk storage, etc.



Since going over the limits can quickly become quite expensive (for a hobbyist), I would like some recommendations or suggestions about which AMIs I can run on the free tier for the purpose of trying out Ruby on Rails and/or Django.


Answer



Use the Amazon Linux AMI. It's the only AMI that's officially supported and maintained by Amazon. It's optimized for EC2, ships with the ec2-api-tools, boots from EBS, and uses a package repository hosted on EC2. It also includes great features like CloudInit.




There's more info in the Amazon Linux AMI User Guide.


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...