Tuesday, July 30, 2019

windows server 2012 r2 - Storage Spaces two-way mirror with odd number of disks

I am currently using Storage Spaces on Windows Server 2012 R2. Two disks (2×3TB) are installed, with one virtual volume set to two-way mirroring. Since this is a very small server, it only has one SATA port left. To increase the storage capacity, I am thinking about adding one more drive, but this time with a capacity of 6TB. According to Mixing disks of different sizes in a Storage Spaces pool, one might think that a possible way would be to 1. remove one of the 3TB drives, 2. add the new 6TB drive, 3. rebuild the storage space, 4. format the old 3TB drive, and then 5. add it again to the pool. Would I then be able to access 6TB of capacity and still have two-way mirroring resiliency? I can't find any documents that cover this scenario.

Saturday, July 27, 2019

forwarding proxmox vnc websocket with nginx

I installed nginx in order to be a lazy person and just go to proxmox.domain.com instead of proxmox.domain.com:8006, but now I can't access the VNC client when connected to the first address, although I can when using the IP and port. A friend of mine pointed out that I have to forward web sockets, so I hit the keyboard, googled it, and found this. I tried everything in there, and it isn't working. I have restarted nginx, and it said that the config file was valid.



location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass https://localhost:8006;
}

location /websockify {
    proxy_http_version 1.1;
    proxy_pass http://127.0.0.1:6080;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # VNC connection timeout
    proxy_read_timeout 3600s;

    # disable cache
    proxy_buffering off;
}

location /vncws/ {
    proxy_pass http://127.0.0.1:6080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}


This is the block of config in my /etc/nginx/sites-enabled/proxmox. What am I doing wrong?
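
For what it's worth, Proxmox normally serves its noVNC websocket on port 8006 itself (under /api2/json/.../vncwebsocket) rather than on a separate websockify port such as 6080, so one hedged sketch would be to put the upgrade headers on the main location block instead (assuming the backend really is on localhost:8006):

location / {
    proxy_pass https://localhost:8006;
    proxy_http_version 1.1;
    # pass websocket upgrade requests straight through to Proxmox
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    # keep long-lived VNC sessions from timing out
    proxy_read_timeout 3600s;
    proxy_buffering off;
}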

Friday, July 26, 2019

postfix - Can't receive emails to my mail server



I created a mail server, following this tutorial:
https://www.linode.com/docs/email/postfix/email-with-postfix-dovecot-and-mysql ,



I configured the email account in my email client, with both SMTP and IMAP, and both connections are working. I can send email from my email address (example@domain.com) to another (example@gmail.com) and I can read it as well. But when I send to an email address on my own mail server (example@domain.com), it sends, but I do not see it in the inbox of example@domain.com...



I created an MX record: example.com MX 10 example.com
and created an SPF record too: v=spf1 a mx ~all



The mail server is on Ubuntu 14.04.1 Desktop.




The results of dovecot -n



# 2.2.9: /etc/dovecot/dovecot.conf
# OS: Linux 3.13.0-44-generic x86_64 Ubuntu 14.04.1 LTS ext4
auth_mechanisms = plain login
mail_location = maildir:/var/mail/vhosts/%d/%n
mail_privileged_group = mail
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
protocols = imap pop3 lmtp
service auth-worker {
  user = vmail
}
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0666
    user = postfix
  }
  unix_listener auth-userdb {
    mode = 0600
    user = vmail
  }
  user = dovecot
}
service imap-login {
  inet_listener imap {
    port = 0
  }
}
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0600
    user = postfix
  }
}
service pop3-login {
  inet_listener pop3 {
    port = 0
  }
}
ssl = required
ssl_cert = <...
ssl_key = <...
userdb {
  args = uid=vmail gid=vmail home=/var/mail/vhosts/%d/%n
  driver = static
}


Results of netstat -panet:



tcp    0    0 0.0.0.0:995          0.0.0.0:*             LISTEN         0  11309  986/dovecot
tcp    0    0 0.0.0.0:7300         0.0.0.0:*             LISTEN      1001  13485  1371/perl
tcp    0    0 192.168.1.104:27754  0.0.0.0:*             LISTEN      1001  13484  1371/perl
tcp    0    0 127.0.0.1:3306       0.0.0.0:*             LISTEN       117  11024  1134/mysqld
tcp    0    0 0.0.0.0:587          0.0.0.0:*             LISTEN         0  11611  1232/master
tcp    0    0 0.0.0.0:465          0.0.0.0:*             LISTEN         0  11617  1232/master
tcp    0    0 127.0.1.1:53         0.0.0.0:*             LISTEN         0  12148  1486/dnsmasq
tcp    0    0 0.0.0.0:21           0.0.0.0:*             LISTEN         0   9546  993/vsftpd
tcp    0    0 0.0.0.0:22           0.0.0.0:*             LISTEN         0   9555  980/sshd
tcp    0    0 127.0.0.1:631        0.0.0.0:*             LISTEN         0   8840  510/cupsd
tcp    0    0 0.0.0.0:25           0.0.0.0:*             LISTEN         0  11602  1232/master
tcp    0    0 127.0.0.1:6010       0.0.0.0:*             LISTEN      1000  21171  3082/1
tcp    0    0 0.0.0.0:993          0.0.0.0:*             LISTEN         0  10582  986/dovecot
tcp    0    0 192.168.1.104:993    85.242.102.178:50794  ESTABLISHED  119  24686  3168/imap-login
tcp    0    0 192.168.1.104:7300   78.111.205.56:45853   ESTABLISHED 1001  21084  1371/perl
tcp    0    0 192.168.1.104:993    85.242.102.178:55965  ESTABLISHED  119  21384  3176/imap-login
tcp    0  264 192.168.1.104:22     192.168.1.90:57457    ESTABLISHED    0  21118  3046/sshd: diogosar
tcp    0    0 192.168.1.104:993    85.242.102.178:57359  ESTABLISHED  119  21396  3178/imap-login
tcp6   0    0 :::995               :::*                  LISTEN         0  11310  986/dovecot
tcp6   0    0 :::587               :::*                  LISTEN         0  11612  1232/master
tcp6   0    0 :::80                :::*                  LISTEN         0  10956  1340/apache2
tcp6   0    0 :::465               :::*                  LISTEN         0  11618  1232/master
tcp6   0    0 :::22                :::*                  LISTEN         0   9557  980/sshd
tcp6   0    0 :::25                :::*                  LISTEN         0  11603  1232/master
tcp6   0    0 ::1:6010             :::*                  LISTEN      1000  21170  3082/1
tcp6   0    0 :::993               :::*                  LISTEN         0  10583  986/dovecot


My question is: why do I see no mail in the inbox if I send emails to it? I tried with POP3 too...



Update:



File excerpt: /var/log/mail.log:




Jan 29 15:07:43 Ubuntu postfix/error[3196]: AA1E22C3528: to=, relay=none, delay=59795, delays=59795/0.06/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with Ubuntu[private/dovecot-lmtp] while receiving the initial server greeting)
Jan 29 15:11:03 Ubuntu postfix/scache[3193]: statistics: start interval Jan 29 15:07:43
Jan 29 15:11:03 Ubuntu postfix/scache[3193]: statistics: domain lookup hits=0 miss=2 success=0%
Jan 29 15:12:43 Ubuntu postfix/qmgr[3157]: 225BE2C3568: from=, size=519, nrcpt=1 (queue active)
Jan 29 15:12:43 Ubuntu dovecot: lmtp(3217): Fatal: Error reading configuration: Invalid settings: postmaster_address setting not given
Jan 29 15:12:43 Ubuntu postfix/lmtp[3216]: 225BE2C3568: to=, relay=Ubuntu[private/dovecot-lmtp], delay=58634, delays=58634/0.03/0.01/0, dsn=4.4.2, status=deferred (lost connection with Ubuntu[private/dovecot-lmtp] while receiving the initial server greeting)
Jan 29 15:17:43 Ubuntu postfix/qmgr[3157]: 083672C3576: from=, size=697, nrcpt=1 (queue active)
Jan 29 15:17:43 Ubuntu dovecot: lmtp(3225): Fatal: Error reading configuration: Invalid settings: postmaster_address setting not given
Jan 29 15:17:43 Ubuntu postfix/lmtp[3224]: 083672C3576: to=, relay=Ubuntu[private/dovecot-lmtp], delay=57731, delays=57731/0.03/0.01/0, dsn=4.4.2, status=deferred (lost connection with Ubuntu[private/dovecot-lmtp] while receiving the initial server greeting)
Jan 29 15:20:52 Ubuntu dovecot: imap-login: Login: user=, method=PLAIN, rip=85.242.102.178, lip=192.168.1.104, mpid=3232, TLS, session=

Jan 29 15:22:43 Ubuntu postfix/qmgr[3157]: 369702C3665: from=<>, size=2930, nrcpt=1 (queue active)
Jan 29 15:22:43 Ubuntu postfix/qmgr[3157]: F0A022C3538: from=, size=1260, nrcpt=1 (queue active)
Jan 29 15:22:43 Ubuntu postfix/qmgr[3157]: 0A11C2C3581: from=, size=525, nrcpt=3 (queue active)
Jan 29 15:22:43 Ubuntu dovecot: lmtp(3654): Fatal: Error reading configuration: Invalid settings: postmaster_address setting not given
Jan 29 15:22:43 Ubuntu dovecot: lmtp(3657): Fatal: Error reading configuration: Invalid settings: postmaster_address setting not given
Jan 29 15:22:43 Ubuntu postfix/lmtp[3652]: 369702C3665: to=, relay=Ubuntu[private/dovecot-lmtp], delay=30046, delays=30046/0.03/0.01/0, dsn=4.4.2, status=deferred (lost connection with Ubuntu[private/dovecot-lmtp] while receiving the initial server greeting)
Jan 29 15:22:43 Ubuntu postfix/lmtp[3653]: F0A022C3538: to=, relay=Ubuntu[private/dovecot-lmtp], delay=21441, delays=21441/0.02/0.01/0, dsn=4.4.2, status=deferred (lost connection with Ubuntu[private/dovecot-lmtp] while receiving the initial server greeting)
Jan 29 15:22:43 Ubuntu dovecot: lmtp(3660): Fatal: Error reading configuration: Invalid settings: postmaster_address setting not given
Jan 29 15:22:43 Ubuntu postfix/lmtp[3652]: 0A11C2C3581: to=, relay=Ubuntu[private/dovecot-lmtp], delay=30047, delays=30047/0.03/0.02/0, dsn=4.4.2, status=deferred (lost connection with Ubuntu[private/dovecot-lmtp] while receiving the initial server greeting)
Jan 29 15:26:03 Ubuntu postfix/scache[3659]: statistics: start interval Jan 29 15:22:43

Jan 29 15:26:03 Ubuntu postfix/scache[3659]: statistics: domain lookup hits=0 miss=1 success=0%
Jan 29 15:29:44 Ubuntu postfix/smtpd[3765]: connect from nk11p03mm-asmtp001.mac.com[17.158.232.236]
Jan 29 15:29:45 Ubuntu postfix/smtpd[3765]: BE8582C34DE: client=nk11p03mm-asmtp001.mac.com[17.158.232.236]
Jan 29 15:29:45 Ubuntu postfix/cleanup[3776]: BE8582C34DE: message-id=
Jan 29 15:29:45 Ubuntu postfix/qmgr[3157]: BE8582C34DE: from=, size=1267, nrcpt=1 (queue active)
Jan 29 15:29:45 Ubuntu dovecot: lmtp(3780): Fatal: Error reading configuration: Invalid settings: postmaster_address setting not given
Jan 29 15:29:45 Ubuntu postfix/lmtp[3779]: BE8582C34DE: to=, relay=Ubuntu[private/dovecot-lmtp], delay=0.23, delays=0.21/0.01/0.01/0, dsn=4.4.2, status=deferred (lost connection with Ubuntu[private/dovecot-lmtp] while receiving the initial server greeting)
Jan 29 15:29:46 Ubuntu postfix/smtpd[3765]: disconnect from nk11p03mm-asmtp001.mac.com[17.158.232.236]



Results of telnet cr7akg.com smtp:



Trying 85.242.102.178...
Connected to cr7akg.com.
Escape character is '^]'.
220 Ubuntu ESMTP Postfix (Ubuntu)


Results of telnet 192.168.1.104 smtp:




Trying 192.168.1.104...
Connected to ubuntu.lan.
Escape character is '^]'.
220 Ubuntu ESMTP Postfix (Ubuntu)

Answer



Found this solution:



Add this in /etc/dovecot/conf.d/20-lmtp.conf:




protocol lmtp {
  postmaster_address = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf # required
  mail_plugins = quota
}


Previously, this section was:



protocol lmtp {
  # Space separated list of plugins to load (default is global mail_plugins).
  #mail_plugins = $mail_plugins
}
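
For reference, postmaster_address is normally just a plain email address (Dovecot uses it in LMTP rejection/bounce notices), so a more conventional version of that stanza would look something like this (example.com is a placeholder):

protocol lmtp {
  # address shown as the sender of LMTP rejection notices
  postmaster_address = postmaster@example.com
  mail_plugins = quota
}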

unix - CRON job erratic behavior with Bourne shell script



I have the following script (logscript), which runs normally when I type the script name at the prompt:



#!/bin/sh
dvar=`date +"%m\/%d\/%y"`
filedate=`date +%b%d%Y`
echo DSS1 > serverlog_${filedate}.txt
grep "^$dvar" oasErrLog >> serverlog_${filedate}.txt
echo CMX1 >> serverlog_${filedate}.txt
ssh GVECMX1 grep "^$dvar" /home/gve/log/oasErrLog >> serverlog_${filedate}.txt
echo CMX2 >> serverlog_${filedate}.txt
ssh GVECMX2 grep "^$dvar" /home/gve/log/oasErrLog >> serverlog_${filedate}.txt
echo XIS1 >> serverlog_${filedate}.txt
ssh GVEXIS1 grep "^$dvar" /home/gve/log/oasErrLog >> serverlog_${filedate}.txt
echo XIS2 >> serverlog_${filedate}.txt
ssh GVEXIS2 grep "^$dvar" /home/gve/log/oasErrLog >> serverlog_${filedate}.txt
scp serverlog_${filedate}.txt "GVEXOSR2:C:/Documents\ and\ Settings/gve/Desktop/logs"
#rm serverlog_${filedate}.txt



Sample normal output is:



DSS1
01/11/10 03:00:08.139 XIS run_backupserver - Startup of XISBACKUP backupserver complete.
CMX1
01/11/10 04:30:05.710 OMNICOMM 30274 - boatimesync: error 3 from boaRx rtu 35 pretry 1 {protocols/boa/boa_command.c:601}
01/11/10 04:30:12.117 OMNICOMM 30274 - CRC (0FFC) does not match calculated CRC (03B0) for remote JRS. headerLength=5 dataLength=0 crcByteOffset=2 functionCode=1 {protocols/boa/boa_io.c:1177}
CMX2
XIS1

XIS2
01/11/10 03:00:10.563 XIS run_backupserver - Startup of XISBACKUP backupserver complete.


BUT when I set up a cron job, it runs and scps the file, but the contents are wrong, AND the file isn't on the server (when the rm line is commented out, as shown above). Here is the output I'm getting, but NOTE: the output changes; it varies in what it outputs:



DSS1
CMX1
01/11/10 001/11/10 04:30:05.710 OMNICOMM 30274 - boatimesync: error 3 from boaRx rtu 35 pretry 1 {protocols/boa/boa_command.c:601}
01/11/10 04:30:12.117 OMNICOMM 30274 - CRC (0FFC) does not match calculated CRC (03B0) for remote JRS. headerLength=5 dataLength=0 crcByteOffset=2 functionCode=1 {protocols/boa/boa_io.c:1177}

CMX2
CMX2
CMX2
CMX2
XIS1
XIS1
XIS1
XIS1
XIS2
01/1101/11/10 001/11/10 03:00:10.563 XIS run_backupserver - Startup of XISBACKUP backupserver complete.



Any ideas on why the CRON job is not running this exactly as the system runs it when the command is typed in manually?



EDIT: I have modified the script to loop, using absolute paths throughout, and modified the cron file with SHELL, PATH and HOME variables, but the output is still erratic. Here's the script now:



#!/bin/sh

### internal variable definitions
dvar=`date +"%m\/%d\/%y"`
filedate=`date +%b%d%Y`

# add the prefix of new hosts into the string below
# which will be expanded later into GVE(whatever) while looping
HOSTLIST="DSS1 CMX1 CMX2 XIS1 XIS2"

# main loop
for SUFFIX in $HOSTLIST
do
    echo $SUFFIX >> /home/gve/log/serverlog_${filedate}.txt
    ssh GVE$SUFFIX grep "^$dvar" /home/gve/log/oasErrLog \
        >> /home/gve/log/serverlog_${filedate}.txt
    echo "\n" >> /home/gve/log/serverlog_${filedate}.txt
done

# transfer and delete file
scp /home/gve/log/serverlog_${filedate}.txt \
    "GVEXOSR2:C:/Documents\ and\ Settings/gve/Desktop/logs"
#rm serverlog_${filedate}.txt



and here is the output:



DSS1
01/1201/12/10 03:00:08.323 XIS run_backupserver - Startup of XISBACKUP backupserver complete.
1
01/12/101/12/10 00:00:37.003 agc - dbioLower: DNP prev cmd may still in prog, name NPC_GT3_GOV raiseTimeout 1250 lowerTimeout 2500 curtime(1263286837:3) cmd_time(1263286834:807)
01/12/10 02:14:57.545 OMNICOMM 1562 - CRC (F110) does not match calculated CRC (1110) for remote ARS. headerLength=5 dataLength=10 crcByteOffset=7 functionCode=2 {protocols/boa/boa_io.c:1177}



CMX2


XIS1


XIS1


XIS2

01/12/101/12/10 03:00:10.408 XIS run_backupserver - Startup of XISBACKUP backupserver complete.


Notice the messed up dates on some lines, the '1' instead of 'CMX1' and the duped 'XIS1'.



FINAL EDIT:



It looks like cron had somehow spawned multiple processes that were tripping over each other. After killing all the applicable processes, it straightened out. Cron has a history (if you do some web searching on it) of buggy multiple-process spawns, so beware.


Answer



The first rule of cron is to explicitly set up the things you expect. One, the directory you are in (explicitly 'cd' to it). Two, the path you expect (in the crontab, PATH=...). And three, where the mail goes (if you want to change it).




For example:



SHELL=/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin
HOME=/var/log


I would then also have each script either set up additional paths if needed, and always




cd $HOME


or to another explicit path.
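
Putting those three rules together, a crontab might look like the sketch below; the schedule and paths are made-up placeholders, not taken from the original system:

SHELL=/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin
HOME=/home/gve
MAILTO=gve

# m h dom mon dow  command
# cd somewhere explicit first so relative paths behave predictably
0 6 * * * cd /home/gve/log && /home/gve/bin/logscript >> /home/gve/log/logscript.out 2>&1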


bind - Apache2, Bind9, virtualhosts, subdomains and domain email

I'm running an Ubuntu VPS which hosts at least 2 different sites/domains via Apache2's virtualhosts, and they've been working fine for months. Now I'm wondering about setting up some subdomains, and utilising email for the current domains I host.



I've read through posts and articles about setting up BIND9, and even articles about how email actually works and its journey from MTA to MTA (there's more to it than just clicking 'Send'), but my understanding is still a bit flaky here.




One of my questions is: linearly speaking, if I set up a sub-domain/sub-zone with my BIND9 (assuming that it's currently installed but not running; no bind9 or named processes are shown), will the BIND9 DNS server sit between my VPS service provider's nameservers and my Apache2 HTTP server?



example:
[ns1.serviceprovider.com] > [BIND9 DNS server] > [Apache2] > [sub.mydomain.com]



And not only do I have to create records for the new sub-domain/sub-zone, but also the necessary records for all of my current domains too?



My other question is: I know email is working on my server, as I can check and send via mail, but email both to and from outside the server doesn't work (I assume it's only local mail for now). I did a dig mydomain.com MX and it shows there is an MX record pointing to my server's IP, so why do I not receive an email I sent to postmaster@mydomain.com?




The user 'postmaster' doesn't exist, but the aliases file shows postmaster: root.



I appreciate any help you guys can give, and I know these questions seem a little noob, but we all started out as noobs at some point ;)

sharing an ext3/ext4 partition on external drive



Is there a way of sharing an ext3/ext4-formatted partition on an external USB drive between different users (uids) on different Linux machines, without creating a group for this purpose, setting the group ownership of the partition to this group, and adding each respective user to the group on every machine?



This would mean that I need to have root privileges on every machine... which I may not have in some cases.



I'm using the partition to store the code I'm developing on Linux and I would like the option to be safe... if possible.




I could use a vfat partition but then I have no control of the rw rights + I cannot develop directly in the dir: I would always have to tar.gz the directory, extract, work, tar.gz, copy to the external drive... and so on.



Thanks!


Answer



The general answer is "no". The uid and gid on the filesystem will be whatever was set at the moment of the write, and if they don't match on a different machine, then privileges won't match either.



If you do not want to make a small revolution with uids/gids on several machines, you could try using ACLs to set the desired permissions for all desired users on all machines. I suspect this will use numerical uids internally, so it could happen that giving access to your account foo on machine A (uid a) will also give access to your files to some random user bar on machine B who happens to have uid a. It also seems like it's more hassle than it's worth.
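
As a rough sketch of what the ACL approach would look like (the uid and paths are placeholders, and this assumes the filesystem is mounted with ACL support and that you can write ACLs on at least one machine):

# grant uid 1001 read/write/traverse on the existing tree
setfacl -R -m u:1001:rwX /mnt/usbdrive/code

# make newly created files and directories inherit the same access
setfacl -R -d -m u:1001:rwX /mnt/usbdrive/code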



I think a saner approach would be to use tar to migrate your development tree.




I have also had a half-baked idea of carrying around a Subversion repository (with files writable only to root and appropriate access configuration files), and relying on svn server being present on all machines which you are going to use, but I do not think it's excessively sane.


linux - Site hacked, looking for security advice

Last weekend my company's site was hacked.



They were nice enough to do it on a Friday evening, so we only noticed the attack on Monday morning. The funny thing is that we switched from Windows to Linux recently because it was supposed to be more stable and secure. Go figure. And yes, we got blacklisted on Firefox and Chrome.



Since I am not a Linux expert, I am looking for advice on how to avoid problems like this in the future. What steps do you take to protect your systems? It seems we had weak passwords, but shouldn't Linux block the account after a few failed logins? They tried more than 20 combinations...



In addition to that, I am looking for a tool (or service) similar to Pingdom but applied to security. If my site is ever hacked, alert me. Is there such a thing? A hacking monitor? :)



Another thing: how do you notify your clients about such issues? Do you just ignore it and hope no one noticed? Send an email explaining what happened?




*posting as anonymous to avoid more bad exposure to my company, which is bad already...


Answer



As far as a service similar to Pingdom but applied to security goes, I suggest Sucuri's free network integrity monitor.



What does it do? It monitors your web site (and domains) in real time and alerts you if it is ever defaced, blacklisted, hacked, etc. Link: http://sucuri.net



As the name implies, it monitors the integrity of your 'internet' presence.




*disclaimer: I developed it.


Thursday, July 25, 2019

exchange - Best techniques to make sure that emails are not marked as spam



We have a service that requires a registration and the registration process sends some confirmation emails.



These emails say in a three-line message that you have to click some link with a generated token to complete the registration. We send these emails in several languages, always HTML-formatted.



These emails are always delivered instantly to Gmail, Yahoo and Hotmail accounts. It took quite some tweaking though to make our local Exchange server pass them through; the "intelligent" spam filter was always rating these emails as around 75% suspicious. Another mail server located on the same shared hosting sends emails that always get through. At the same time, other confirmation emails sent to our local Exchange server, usually get delivered to our mailboxes. The problem seems to be specific to our confirmation emails only.



The question is: how do we make sure that these confirmation emails are not trashed by the spam filters? Is there a safe technique, e.g. using some specific sender ID like 'noreply', or including some specific text in the message (maybe more text, or something else) to let the spam filter classify the confirmation message as legitimate?



Answer



I don't know Exchange, but I am quite sure that you can configure it somewhere to trust every mail coming from your own servers.



On a more general note some hints to avoid being classified as spam:




  • Don't send spam (send only requested mails, make it really easy to unsubscribe for newsletters etc., don't give anyone a reason to tag your mails as spam)

  • Try to personalize even the registration confirmation: "Dear John Doe" instead of "Dear Customer".

  • Don't be an open relay

  • Make sure your DNS entries (forward and reverse) are in order


  • Configure DKIM and/or SPF records (see the sketch below)

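To make the last point concrete, SPF and DKIM both end up as TXT records in your DNS zone. A minimal sketch, where example.com, the selector, and the key are all placeholders:

example.com.                     IN TXT "v=spf1 a mx ~all"
selector._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<your base64 public key>"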

Wednesday, July 24, 2019

svn - How do i know which program-user access a certain folder in linux debian?




How do I know which program/user accesses a certain folder in Linux Debian? Basically, I screwed up my SVN repo because I could not commit to it due to permissions. I got a



db/txn-current-lock': Permission denied error.



So I tried to fix it and screwed up again: because of an emergency in the programming team, I gave 777 permissions to every folder and changed the user:group ownership to root:root on everything. Now I want to fix the setup properly. I want to create a group such that only people in that group are able to use the SVN repo, but I don't know which user (program) accesses it.



Is it www-data? Or is it _svn:*:73:73:SVN Server:/var/empty:/usr/bin/false (this is from /etc/passwd)? How do I check? Thank you!


Answer



How do you serve the repo?




Is it through apache mod_svn? Then the directory/files have to be owned by the user under which apache runs (www-data usually).



Through SSH? Then you have to create a group that will own the files (but there were some problems with this setup, don't remember what).



Through svnserve? Then it must be owned by user under which svnserve runs.


debian squeeze - ext4 production ready on kernel 2.6.32 (debian6)?

I have to maintain a production system on Debian 6 (kernel version 2.6.32). I'd like to use ext4 on an SSD disk (and generally hope to improve performance by upgrading from ext3). I can't find any reference about the state of ext4 on that kernel/distribution.
Thanks in advance.

pxe boot - How to configure PXE/TFTP in dhcpd.conf on VMware Fusion 7 Pro with static reservations



I'm a senior DevOps engineer with a deployment system that ships a couple thousand Linux machines a week. I'm using VMware Fusion 7 Pro to do this, and I'm having a little trouble with PXE booting.



I ran into this little issue, so I thought I'd do a quick writeup to maybe save someone 15 minutes. If someone knows a better way, I'd love to hear it.



I'm playing with PXE booting some containers. I want VMware Fusion to handle DHCP, and I want the VMs to be NATed with my Mac for outside access, thus I need to use vmnet8 by default. I also have several virtual machines that should have static reservations to mimic the production network this system will reside on. So, I edit /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf



I need to add two simple lines to the subnet declaration for tftp:




next-server 192.168.87.20;
filename "pxelinux.0";


The problem is the subnet declaration (line 26 of the stock file) is in the forbidden zone:



###### VMNET DHCP Configuration. Start of "DO NOT MODIFY SECTION" #####



If you change stuff in the forbidden zone then restart vmware fusion, or restart its networking, it rudely replaces the whole dhcpd.conf file with an auto-generated one. It does rotate the existing one to a backup, but only one rotation (3 would be less rude, so would a prompt and a log entry).


Answer



This is how to add the PXE boot configuration to the relevant VMware Fusion dhcpd.conf file.



The "proper" solution is to simply redeclare the whole subnet block below the "DO NOT MODIFY" section. So, to add the next-server and filename options for PXE booting, I had to add the following:



####### VMNET DHCP Configuration. End of "DO NOT MODIFY SECTION" #######
subnet 192.168.87.0 netmask 255.255.255.0 {
    range 192.168.87.128 192.168.87.254;
    option broadcast-address 192.168.87.255;
    option domain-name-servers 192.168.87.2;
    option domain-name localdomain;
    default-lease-time 1800;   # default is 30 minutes
    max-lease-time 7200;       # default is 2 hours
    option netbios-name-servers 192.168.87.2;
    option routers 192.168.87.2;
    next-server 192.168.87.20;
    filename "pxelinux.0";
}
#
host cmhpxe {
    hardware ethernet 00:0C:29:DF:06:7F;
    fixed-address 192.168.87.20;
}
#
#


The fun way to reconfigure and restart DHCP without restarting fusion:



alias vm_restartdhcpd='sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --configure ;
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop ;
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start'

Tuesday, July 23, 2019

vpn - IPv6 over IPv4 in OpenVPN on a server that has multiple non-consecutive IPv6 addresses

I have a server that has 1 IPv4 public address and 10 public IPv6 addresses (each one /128, not in the same subnet like a /64 or /48) - I know it sounds strange but I have no control over this, it is how they were assigned by the provider. This is acting as an OpenVPN server, currently running with tun on IPv4 by doing NAT with the single public IPv4. So far so good.



I am trying now to also assign a public IPv6 address to clients. The problem is I am not sure how to do it, because all the documentation I found requires me to have a whole subnet of /64 or at least /112.




Is there a way that would allow me, out of the 10 IPv6 addresses I have (each a /128), to keep one for the server, one for the tun device, and eight for clients?
Does ifconfig-ipv6-pool take multiple /128 addresses, or can it be used more than once in server.conf? What route should it have so that clients' IPv6 traffic is entirely redirected via the VPN?

Monday, July 22, 2019

failover - Using Azure Traffic Manager for an immediate increase in capacity

I have a REST web-service on Azure which has very high but variable load, it's all set-up to auto scale using Paraleap so that it can handle the peak periods but keep costs down when things are quieter.



I have never been able to figure out a way, using any metrics, to predict when a server is going to start maxing out before it actually maxes out! So the solution I have at the minute is a separate programme that constantly checks whether the server is up. If the server starts returning errors, the programme tells it to return a simple error message to a certain percentage of users (returning a simple error takes up fewer of the server's resources, which allows the majority of users to still have a service), and then tells Paraleap to increase the number of instances. Increasing instances normally takes 10-15 minutes, so during this period things aren't great and some users get errors, but ultimately the new instances kick in and normal service is resumed.




I hoped Azure Traffic Manager would be my solution, my hope was that I could use failover mode, and when a failure was detected on my main web-service, I could divert x% of requests to a backup, which would return the main-service to a working state .. at the same time I would independently tell the main web-service to scale, and when it finished, the traffic manager would divert everything back to the main web-service. In other words, I'd get an instant increase in capacity which would fill the gap whilst I boot up new instances.



Unfortunately, I can't seem to find a way to do this! It looks like Traffic Manager, on detecting a failure, diverts 100% of traffic to the backup. So I'd need to more than double my server capacity just for these moments, i.e. have X instances for the main web-service and X+1 waiting in the backup; a failure with the main would divert 100% of requests to the backup, which would have more capacity; then I would launch more instances for the main, and eventually Traffic Manager would send all requests back there, at which point I'd need to add more instances to the backup and have it sit waiting again. This would be massive overkill and would cost me a fortune!



Does anyone have any suggestions on how I can manage this better?



Thanks!

Thursday, July 18, 2019

raid - HP Array P410, HDD changed, but still predict to fail soon

On an HP ProLiant DL G6, one disk in a RAID 1 on the P410 array controller failed. It was an HP EG0300FBDBR. I replaced it with a compatible HP model, an HP EG0300FAWHV (both 300 GB, 10K).



But the HP Array Configuration Utility still shows me that the new HP EG0300FAWHV (300 GB 2-Port SAS Drive at Port 1I : Box 1 : Bay 0) is predicted to fail soon.



The new disk in the server is blinking green, which means something like "The drive is rebuilding, erasing, or it is part of an array that is undergoing capacity expansion or stripe migration."



But two days have gone by and the status hasn't changed.




In the RIS Event Log, the last entries are:



Event 123  2016-02-08 11:56:54  Hot Plug Physical Drive Change: Removed.
           Physical drive number: 0x09. Configured drive flag: 1. Spare drive flag: 0.
           Big drive: 0x00000009. Enclosure Box: 00. Bay: 00
Event 124  2016-02-08 12:16:57  Hot Plug Physical Drive Change: Inserted.
           Physical drive number: 0x09. Configured drive flag: 1. Spare drive flag: 0.
           Big drive: 0x00000009. Enclosure Box: 00. Bay: 00
Event 125  2016-02-08 12:16:57  Logical Drive Status: State change, logical drive 0x0000.
           Previous logical drive state (0x03): Logical drive is degraded.
           New logical drive state (0x04): Logical drive is ready for recovery operation.
           Spare status (0x00): No spare configured
Event 126  2016-02-08 12:16:57  Logical Drive Status: State change, logical drive 0x0000.
           Previous logical drive state (0x04): Logical drive is ready for recovery operation.
           New logical drive state (0x05): Logical drive is currently recovering.
           Spare status (0x00): No spare configured
Event 127  2016-02-08 12:51:51  Logical Drive Status: State change, logical drive 0x0000.
           Previous logical drive state (0x05): Logical drive is currently recovering.
           New logical drive state (0x00): Logical drive OK.
           Spare status (0x00): No spare configured
Event 128  2016-02-09 03:23:22  Logical Drive Surface Analysis: Surface Analysis pass information.
           Block count: 00000000. Drive No: 00. Starting Address: 00000848:00000000.




I attached ADU report and screen.



Does this mean that the new disk is still recovering? But why is its status "predicted to fail", and why is the recovery taking so long? Why didn't the HP utility mark it as rebuilding?




Report ADU: https://www.dropbox.com/s/70ucdsiafzdwvfr/ADUReport.zip?dl=0




perl - SpamAssassin: check mailbox for spam

How can I have SpamAssassin check a mail folder for spam and move the spam into a different folder?



I've been getting a lot of spam recently and started implementing Bayesian classifiers. I moved a bunch of spam into a separate folder and trained the filter with 'sa-learn'.



Now I want SpamAssassin to use what it just learned to check all email in a folder for spam, and move any message it considers spam into the spam folder (e.g. if the spam level is higher than 5.0 in my configuration).




EDIT: I am storing my emails with the maildir format on the server.
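
Something along these lines might be a starting point: pipe each message through SpamAssassin and move it when the exit code says spam. This is only a sketch; the Maildir paths are assumptions, and it ignores new/ vs cur/ and concurrent delivery:

#!/bin/sh
MAILDIR=$HOME/Maildir
SPAMDIR=$MAILDIR/.Spam

for msg in "$MAILDIR"/cur/*; do
    # spamassassin -e exits non-zero when the message scores as spam
    if ! spamassassin -e < "$msg" > /dev/null 2>&1; then
        mv "$msg" "$SPAMDIR"/cur/
    fi
done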



Any idea?



Thanks in Advance

storage - JBOD: any system that can do RAID-0 can do JBOD?

Can any storage controller that can do RAID-0 also do JBOD?



I am looking for a storage solution using ZFS. I currently have a Dell PERC 5/i and 6/i, and I'm not sure about their capabilities for JBOD.



Does ZFS really only need JBOD for RAIDZ, RAIDZ2, etc.?
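
For context, ZFS wants to see the raw disks so it can manage redundancy itself; a RAIDZ2 pool is built directly from devices, along these lines (the pool name and device names are placeholders):

# six whole disks handed straight to ZFS, no hardware RAID in between
zpool create tank raidz2 sdb sdc sdd sde sdf sdg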

Tuesday, July 16, 2019

ubuntu - Why is my mdadm raid-1 recovery so slow?



I'm running Ubuntu 10.04 on this system. My RAID-1 rebuild started out fast but quickly became ridiculously slow (at this rate, the rebuild will take 150 days!):



dimmer@paimon:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc1[2] sdb1[1]
      1953513408 blocks [2/1] [_U]
      [====>................]  recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec

unused devices: <none>


Even though I have set the kernel variables to reasonably quick values:




dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min
1000000

dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max
100000000


I am using 2 2.0TB Western Digital Hard Disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned.



dimmer@paimon:/sys$ sudo parted /dev/sdb
GNU Parted 2.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2000GB  2000GB  ext4

(parted) unit s
(parted) p

Number  Start  End          Size         File system  Name  Flags
 1      2048s  3907028991s  3907026944s  ext4

(parted) q
dimmer@paimon:/sys$ sudo parted /dev/sdc
GNU Parted 2.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA WDC WD20EARS-00J (scsi)
Disk /dev/sdc: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2000GB  2000GB  ext4


I am beginning to think that I have a hardware problem, otherwise I can't imagine why the mdadm restore should be so slow.



I have done a benchmark on /dev/sdc using Ubuntu's Disk Utility GUI app, and the results looked normal, so I know that sdc is capable of writing faster than this. I also had the same problem on a similar WD drive that I RMA'd because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet.



Any ideas? Thanks.




As requested, here is the output of top sorted by CPU usage (notice there is ~0 CPU usage). iowait is also zero, which seems strange:



top - 11:35:13 up 2 days,  9:40,  3 users,  load average: 2.87, 2.58, 2.30
Tasks: 142 total,   1 running, 141 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.2%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   3096304k total,  1482164k used,  1614140k free,   617672k buffers
Swap:  1526132k total,        0k used,  1526132k free,   535416k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   45 root      20   0     0    0    0 S    0  0.0  2:17.02  scsi_eh_0
    1 root      20   0  2808 1752 1204 S    0  0.1  0:00.46  init
    2 root      20   0     0    0    0 S    0  0.0  0:00.00  kthreadd
    3 root      RT   0     0    0    0 S    0  0.0  0:00.02  migration/0
    4 root      20   0     0    0    0 S    0  0.0  0:00.17  ksoftirqd/0
    5 root      RT   0     0    0    0 S    0  0.0  0:00.00  watchdog/0
    6 root      RT   0     0    0    0 S    0  0.0  0:00.02  migration/1
...



dmesg errors, definitely looking like hardware:



[202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[202884.007015] ata5.00: failed command: FLUSH CACHE EXT
[202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[202884.013730] res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout)
[202884.033667] ata5.00: status: { DRDY }
[202884.040329] ata5: hard resetting link
[202889.400050] ata5: link is slow to respond, please be patient (ready=0)
[202894.048087] ata5: COMRESET failed (errno=-16)

[202894.054663] ata5: hard resetting link
[202899.412049] ata5: link is slow to respond, please be patient (ready=0)
[202904.060107] ata5: COMRESET failed (errno=-16)
[202904.066646] ata5: hard resetting link
[202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[202905.849178] ata5.00: configured for UDMA/133
[202905.849188] ata5: EH complete
[203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[203899.007096] ata5.00: failed command: IDENTIFY DEVICE
[203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in

[203899.013843] res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout)
[203899.041232] ata5.00: status: { DRDY }
[203899.048133] ata5: hard resetting link
[203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[203899.826062] ata5.00: configured for UDMA/133
[203899.826079] ata5: EH complete
[204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[204375.007421] ata5.00: failed command: IDENTIFY DEVICE
[204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
[204375.014800] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)

[204375.044374] ata5.00: status: { DRDY }
[204375.051842] ata5: hard resetting link
[204380.408049] ata5: link is slow to respond, please be patient (ready=0)
[204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[204384.449938] ata5.00: configured for UDMA/133
[204384.449955] ata5: EH complete
[204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[204395.988140] ata5.00: failed command: IDENTIFY DEVICE
[204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
[204395.988149] res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)

[204395.988151] ata5.00: status: { DRDY }
[204395.988156] ata5: hard resetting link
[204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[204399.330487] ata5.00: configured for UDMA/133
[204399.330503] ata5: EH complete

Answer



The problem ended up being caused by hardware. Eventually I unscrewed all the HDDs from their chassis and moved them to new locations in the tower, and I unplugged peripherals that I knew I would not be needing. This fixed the problem. The RAID array quickly rebuilt, and it has been stable for months.



Unfortunately I do not know what specifically caused the problem. I am guessing that something put noise into a data link, but I really have no clue. My motherboard is an Asus P5WDG2 WS Pro, in case anyone else has a similar problem with this motherboard.




Thanks to everyone who tried to help - this ended up being an edge case.


Monday, July 15, 2019

MySQL database server on a VPS: can it handle the spike in traffic I'm anticipating?

I have VPS service with Hostican.com. 2.7GHz CPU, 2GB RAM (level 5, if you look at the site). I have a smartphone app that makes requests to PHP forms, which query the MySQL DB. I'm anticipating a huge surge of activity in a single day, due to an upcoming advertising campaign. Success of the campaign is pretty critical; I can't afford for the server to go down when the surge of activity hits.



Information:





  • I'm anticipating about 20,000 new accounts in a single day. To create an account takes 2 SELECTs, 2 INSERTs, and then 11 GRANT statements in a START/COMMIT block. The SELECTs and INSERTs are simple, with no joins.

  • Once a user has created a new account, they can perform a variety of functions, all of which access a PHP form and query the database. Most of these forms do 1 or 2 simple queries (i.e. a SELECT and an INSERT) and return a result. Some may do as many as 4, but none of the queries are complex. Most are single table, with a few two-table JOINs.

  • Data fields are all relatively small; no query is returning more than a few rows of text.

  • All tables are InnoDB. I've indexed fields wherever I thought it would be helpful.

  • All requests to the server are HTTPS.



The bulk of the traffic will probably be from account creation.



Can I rely on this VPS setup to handle that sort of traffic?




Also, assuming this single day is a success, I may be fielding this many account requests (and an appropriately increasing number of requests to the other PHP forms) every day thereafter; potentially 20K new accounts each day. I don't imagine the VPS I have can handle that for very long, but I'm not sure roughly at what point it wouldn't be able to handle it, and what I should shoot for in terms of an upgrade.



Update



I'm looking into MySQL and PHP optimizations, and my options for upgrading to a dedicated server. I'd certainly be willing to spend the money on a dedicated server if that's what it would take (and it sounds like that's going to be the case, sooner or later).



Two points I forgot about and wanted to add:





  • Does the fact that all HTTP requests to the server use SSL add much processing overhead? From what I've read, it doesn't sound like I should worry about it too much...

  • An important point that I forgot to mention: the application that the server is talking to is a messaging application, where incoming messages are stored in the DB until users retrieve them (hence the write-heavy SQL statements). Outgoing messages go through email, so I also have a mail server running (exim, I believe, though I can change it) sending emails very frequently. Once the campaign is in full swing, emails will probably be going out at 1 per second or more. Generally speaking, is this going to add a lot of processing/memory overhead?



So in all, the server uses Apache, MySQL, PHP, SSL, and exim (or another email server) to serve this smartphone application. Are the SSL and email going to be much of a concern, compared to PHP and MySQL?

Sunday, July 14, 2019

dovecot - Postfix Connection Timed out

I used the tutorial below to set up Postfix and Dovecot on a DigitalOcean server, but I'm getting "Connection timed out" when postfix/smtp tries to deliver an email to a remote host like Gmail or Yahoo.




https://www.digitalocean.com/community/articles/how-to-set-up-a-postfix-e-mail-server-with-dovecot






RECEIVING A NEW MESSAGE /var/log/mail.log

Feb 2 22:33:38 localhost dovecot: auth-worker: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
Feb 2 22:33:38 localhost dovecot: auth-worker: Debug: pam(app,189.63.49.XXX): lookup service=dovecot
Feb 2 22:33:38 localhost dovecot: auth-worker: Debug: pam(app,189.63.49.XXX): #1/1 style=1 msg=Password:
Feb 2 22:33:38 localhost dovecot: auth: Debug: client out: OK#0111#011user=app
Feb 2 22:33:40 localhost postfix/submission/smtpd[1045]: E9AA724264: client=unknown[189.63.49.XXX], sasl_method=PLAIN, sasl_username=app
Feb 2 22:33:43 localhost postfix/cleanup[1052]: E9AA724264: message-id=
Feb 2 22:33:43 localhost postfix/qmgr[32661]: E9AA724264: from=, size=485, nrcpt=1 (queue active)
Feb 2 22:33:45 localhost postfix/submission/smtpd[1045]: disconnect from unknown[189.63.49.XXX]
Feb 2 22:34:14 localhost postfix/smtp[1053]: connect to mta5.am0.yahoodns.net[98.136.216.25]:25: Connection timed out
Feb 2 22:24:21 localhost postfix/smtp[1013]: connect to mta6.am0.yahoodns.net[98.136.217.202]:25: Connection timed out
Feb 2 22:24:21 localhost postfix/smtp[1014]: connect to gmail-smtp-in.l.google.com[173.194.76.27]:25: Connection timed out
Feb 2 22:24:21 localhost postfix/smtp[1015]: connect to gmail-smtp-in.l.google.com[173.194.76.27]:25: Connection timed out
Feb 2 22:24:21 localhost postfix/smtp[1012]: connect to aspmx.l.google.com[173.194.68.27]:25: Connection timed out






Here is /etc/postfix/main.cf

http://pastebin.com/nHQhh8Bp



/etc/postfix/master.cf
http://pastebin.com/nnJBP9mh



Result from #> netstat -nlptu
http://pastebin.com/PKJX6xC1
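
Since every remote MTA on port 25 is timing out, one quick check is whether outbound port 25 is reachable from the droplet at all, since many providers filter it. If this hangs instead of printing a 220 banner, the problem is upstream of Postfix:

telnet gmail-smtp-in.l.google.com 25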

hp proliant - HP Server Gen8 Mixed RAID and/or Pass Through



I'm considering several HP Gen8 Server models (e.g. DL*e series) for use with WS12R2E. For us, cost savings using a mix of SSD/HDD will likely better serve our needs than purchasing SAS drives.



I would like to use one (or RAID1 two smaller) SSD drive/s for the system volume, and pass through remaining xTB capacity HDD drives for storage configured with WS12 storage pools.




  1. Do HP RAID controllers offer pass through? Only certain controllers?


  2. If not, can I configure individual xTB HDD drives as RAID0? Any HP-specific precautions to note using this approach?


Answer



Nope. HP controllers don't offer a mixed mode like what you're asking for.



Which specific server model (and storage controller) are you planning to use?



If you buy a DL3x0p series Gen8 server with an onboard P-series RAID controller, there is a secret "HBA" mode that will disable all RAID features.



If you buy a DL3x0e series, there are some messy storage issues associated with the product that will make you regret the purchase.
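
On question 2, individual drives set up as single-drive RAID 0 logical drives are commonly created from the CLI; a sketch with hpssacli (or the older hpacucli), where the slot and drive IDs are placeholders:

# one RAID 0 logical drive per physical data disk
hpssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
hpssacli ctrl slot=0 create type=ld drives=1I:1:3 raid=0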



Friday, July 12, 2019

Can a RAID array be moved from a HP P410i to a Dell Perc S100?



I have a HP Proliant ML350 G6 server with a HP P410i controller that failed miserably after a power spike. Can I move the RAID10 array to a Dell PowerEdge T310 with a Perc S100 controller without losing data?


Answer



No, you can't move an HP Smart Array RAID group to a Dell Perc controller without reformatting. The array metadata is stored on the disks, so you'd need a system with an HP Smart Array controller to transfer the array set to.




The P410i controller is an embedded controller, so your ML350 G6 would need a new system board to repair. You could also substitute a Smart Array P410 PCIe controller to use if your PCIe slots are still healthy. They're cheap and abundant.


ssh - how to tunnel SOCKS proxy?

I have a distant server, that is not accessible from the outside world. I would like to turn it into a proxy nonetheless. I was thinking I could use server (that I fully own) to achieve my goal.



Apologies if the following is not clear enough, for I have very little knowledge about networking.



What I can do:





  • I know I can open a reverse SSH connection from distant to server, and use it to access distant from the local computer. For instance, from local I can ssh server, then, when logged in, use ssh localhost -p 12345 to finally access distant via SSH.

  • I also know about ssh -D 1080 server, which will allow me to use server as a SOCKS proxy from my computer.



network figure



What I would like to do



I would like to open a SOCKS proxy from local that uses the IP of distant. I have no idea whether this is possible, but in naive words, I'd say I'm trying to open a SOCKS connection to the server while telling the server to use its own localhost:12345 to access the internet.




In other words, I'm trying to tunnel a SOCKS proxy from local to distant, thanks to server, which is accessible via a public IP address.



Any idea/direction on how to do this, provided it is even possible?
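
One possible sketch, assuming the reverse tunnel from distant is already listening on server's localhost:12345 as described above (user names are placeholders): use server as a jump host into that tunnel, and open the SOCKS proxy on the SSH connection to distant itself:

# from local: reach distant's sshd through server's localhost:12345,
# then open a local SOCKS proxy (port 1080) whose traffic exits from distant
ssh -o ProxyCommand='ssh -W localhost:12345 user@server' -D 1080 user@distant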

Thursday, July 11, 2019

freebsd - How can I setup PCI passthrough for my HighPoint 1144A USB 3.0 card under ESXi 5.1?

I have an Apple Mac Pro (MacPro5,1) running VMware ESXi 5.1.0 (799733) with a HighPoint RocketU 1144A USB 3.0 PCI-express card installed. Given that VMware ESXi 5.1 doesn't support USB 3.0 for host-attached devices, I am trying to attach the USB 3.0 PCIe card to a virtual machine using DirectPath I/O PCI passthrough.



It took me a number of different configurations, but I finally managed to configure the HighPoint RocketU 1144A USB 3.0 PCI Express card for DirectPath passthrough:




DirectPath Configuration Screenshot



DirectPath Status Screenshot



The first few times I tried, after rebooting I still received the "The host needs to be rebooted to apply configuration changes" warning, even after rebooting the host. Finally I found that by moving the USB 3.0 card to the 3rd PCIe slot, I was able to get the passthrough configuration to stick.



I also note that the device shows up as a "ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller" when it actually should have 4xASM1042 chips on it. I also had to pass-through the PLX Technology, Inc PEX 8609 8-lane, 8-Port PCI Express Gen 2 Switch to make the configuration changes stick.



Now, however, I am stuck trying to attach the ASMedia Technology ASM1042 device to a FreeBSD (FreeNAS) virtual machine. When I add the device as a PCI passthrough device to the virtual machine, I am unable to power on the virtual machine:




VM Power On Error Screencapture




VMware ESX unrecoverable error: (vcpu-0) PCIPassthruChangeIntrSettings: 09:00.1 failed to register interrupt (error code 195887105)




I tried passing through only the ASMedia ASM1042, only the PLX PEX 8609, and both the two devices. None of these three configurations worked.



Googling for that error code led me to this forum post in German, from which, using Google Translate, I was able to extract the following information:





You have to disable MSI and force the card to fall back to INTx interrupts; then it works.
You can read about it on page 5 of http://www.vmware.com/pdf/vsp_4_vmdirectpath_host.pdf. You
have to manually add the following parameter to your VM config:
pciPassthru0.msiEnabled = "FALSE". I also ran the whole thing virtualized on ESXi,
but the closed nature of ESXi got on my nerves over time because you can't really
dig into problems. I have since migrated to KVM (Proxmox VE), where the Cine S2 V5.5
also works successfully in the VM (with MSI). The whole thing runs super stable.





Following those instructions I added



pciPassthru0.msiEnabled = "FALSE"
pciPassthru1.msiEnabled = "FALSE"



to my .vmx file, and this did enable me to boot the FreeBSD 8.3 (FreeNAS 8.3.1) VM... however the virtual machine hung at boot here:



FreeBSD Hang Screencapture




I expect that I need to make some other change. The VMware PDF document they linked to describes "linked devices", and I expect there's some sort of linking going on whereby the 4×ASM1042 chips (because the 4 ports on the USB 3.0 card each have their own channel) need to be linked through the switch... or something... but I am unsure of what is technically going on or how to proceed.



How can I resolve these problems and get passthrough working for the HighPoint 1144A USB 3.0 card under ESXi 5.1?

spam filter - Exchange 2013 - Remove warning text from outgoing email body

I created a transport rule on our Exchange Server 2013 that adds warning text at the top of the email body of all external incoming emails. This is to alert employees about the potential risks in external emails when they contain website links and attachments which may be harmful. The text is as follows:




CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe.


Now, when a user replies to the email, I want the warning removed when Exchange processes the outgoing message. How can I remove the warning text from outgoing emails in Exchange? I was looking for something in the rules, but I couldn't find anything.



Any help will be appreciated. Thanks.

Wednesday, July 10, 2019

domain name system - DNS hosting design - should secondary server not be spread over countries, networks etc?

My DNS registrar and DNS provider recently had a long outage, rendering all my domains unusable (email, own+client websites etc).



They have 3 DNS servers, which are all in the same co-location facility!



I know just enough about networking to make my spidey-sense supertingle, but not enough to condemn this. Is that not an atrocious design?




Should they not have been spread across lines, networks - even continents?






(Source: https://help.hover.com/hc/en-us/community/posts/115007805527-After-recent-outage-what-are-you-going-to-do-to-fix-your-network-design-problems-)
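
For what it's worth, you can sanity-check any provider's nameserver spread yourself by listing the NS records and resolving each one to see whether the addresses land in different networks (the domain is a placeholder):

# list the delegated nameservers, then resolve each one
dig +short NS example.com
dig +short ns1.example-dns.com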

ubuntu - How to repair the fstab file with current configuration




I'm using Ubuntu 10.04, and I accidentally removed all the entries from the fstab file while doing a backup (yeah, I know ;)).



I would like to know if there is a way to rebuild it with the current mount options, since I have not restarted the server since the deletion. If there is no such program, could anybody help me rebuild it?



Using this, I have found the command to show the current setup, but I don't know what to do with it.




$ sudo blkid
/dev/sda1: UUID="3fc55e0f-a9b3-4229-9e76-ca95b4825a40" TYPE="ext4"

/dev/sda5: UUID="718e611d-b8a3-4f02-a0cc-b3025d8db54d" TYPE="swap"
/dev/sdb1: LABEL="Files_Server_Int" UUID="02fc2eda-d9fb-47fb-9e60-5fe3073e5b55" TYPE="ext4"
/dev/sdc1: UUID="41e60bc2-2c9c-4104-9649-6b513919df4a" TYPE="ext4"
/dev/sdd1: LABEL="Expansion Drive" UUID="782042B920427E5E" TYPE="ntfs"


$ cat /etc/mtab
/dev/sda1 / ext4 rw,errors=remount-ro 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
none /sys sysfs rw,noexec,nosuid,nodev 0 0

none /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
none /dev devtmpfs rw,mode=0755 0 0
none /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
none /dev/shm tmpfs rw,nosuid,nodev 0 0
none /var/run tmpfs rw,nosuid,mode=0755 0 0
none /var/lock tmpfs rw,noexec,nosuid,nodev 0 0
none /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
none /var/lib/ureadahead/debugfs debugfs rw,relatime 0 0

/dev/sdc1 /home ext4 rw 0 0
/dev/sdb1 /media/Files_Server ext4 rw 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
/dev/sdd1 /media/Expansion\040Drive fuseblk rw,nosuid,nodev,allow_other,blksize=4096,default_permissions 0 0
gvfs-fuse-daemon /home/yvoyer/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=yvoyer 0 0
/dev/sdd1 /media/Backup500 fuseblk rw,nosuid,nodev,sync,allow_other,blksize=4096,default_permissions 0 0
/dev/sr0 /media/DIR-615 iso9660 ro,nosuid,nodev,uhelper=udisks,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500 0 0
gvfs-fuse-daemon /home/cdrapeau/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=cdrapeau 0 0



Answer



You can copy the lines starting with /dev/sd** from mtab, paste them into a new text file, and replace /dev/sd** with UUID or LABEL. For example, from your config:



use



UUID="3fc55e0f-a9b3-4229-9e76-ca95b4825a40" / ext4 rw,errors=remount-ro 0 0


instead




/dev/sda1 / ext4 rw,errors=remount-ro 0 0


The /dev/sda1 form also works, but UUID is the new standard, and if your GRUB is configured with UUIDs, it might not be able to tell which partition is which.



If a partition has a label, you can use LABEL instead of UUID, for example:



LABEL="Files_Server_Int" /media/Files_Server ext4 rw 0 0



In short: copy your mtab to a new file, remove the lines starting with "none", and replace the /dev/sd** part with the blkid output equivalents. If a UUID exists, use the UUID instead of /dev/sd**. If a LABEL exists, use the LABEL instead of the UUID.



Do not remove anything else except the "none" lines. Save the file, rename it to fstab, and copy it into /etc.



Finally, add lines for swap, if any (if you forget this, your system will boot but you will have zero swap space). E.g. in your case, note that blkid prints this line:



/dev/sda5: UUID="718e611d-b8a3-4f02-a0cc-b3025d8db54d" TYPE="swap" 


so you need this line in fstab:




UUID=718e611d-b8a3-4f02-a0cc-b3025d8db54d none  swap sw   0   0


This might help to restore your fstab.
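
Putting the answer together with the blkid and mtab output above, a possible reconstruction would be the following (double-check before rebooting; the external NTFS drive is left out since its mtab entries look like dynamic desktop mounts):

UUID=3fc55e0f-a9b3-4229-9e76-ca95b4825a40  /                    ext4  rw,errors=remount-ro  0  0
UUID=41e60bc2-2c9c-4104-9649-6b513919df4a  /home                ext4  rw                    0  0
UUID=02fc2eda-d9fb-47fb-9e60-5fe3073e5b55  /media/Files_Server  ext4  rw                    0  0
UUID=718e611d-b8a3-4f02-a0cc-b3025d8db54d  none                 swap  sw                    0  0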


Tuesday, July 9, 2019

linux - Howto check disk I/O utilisation per process



I'm having a problem with a stalling Linux system, and I have found sysstat/sar to report huge peaks in disk I/O utilization, average service time, and average wait time at the time of the system stall.



How could I go about determining which process is causing these peaks the next time it happens?
Is it possible to do with sar (i.e., can I find this info from the already-recorded sar files)?



Output for "sar -d", system stall happened around 12.58-13.01pm.




12:40:01     DEV     tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz   await   svctm  %util
12:40:01  dev8-0   11.57      0.11    710.08     61.36      0.01    0.97    0.37   0.43
12:45:01  dev8-0   13.36      0.00    972.93     72.82      0.01    1.00    0.32   0.43
12:50:01  dev8-0   13.55      0.03    616.56     45.49      0.01    0.70    0.35   0.47
12:55:01  dev8-0   13.99      0.08    917.00     65.55      0.01    0.86    0.37   0.52
13:01:02  dev8-0    6.28      0.00    400.53     63.81      0.89  141.87  141.12  88.59
13:05:01  dev8-0   22.75      0.03    932.13     40.97      0.01    0.65    0.27   0.62
13:10:01  dev8-0   13.11      0.00    634.55     48.42      0.01    0.71    0.38   0.50



This is a follow-up question to a thread I started yesterday: Sudden peaks in load and disk block wait. I hope it's OK that I created a new topic/question on the matter, since I have not been able to resolve the problem yet.


Answer



If you are lucky enough to catch the next peak utilization period, you can study per-process I/O stats interactively, using iotop.
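
As a sketch of that, with standard iotop and sysstat flags (run as root):

sudo iotop -o -b -n 12 -d 5    # -o: only processes actually doing I/O; batch mode,
                               # 12 samples at 5-second intervals
pidstat -d 5                   # per-process read/write rates, from the same sysstat
                               # package that provides sar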


Sunday, July 7, 2019

security - Unified Communications SSL Certificate (SAN)




Are Unified Communications SSL certificates (aka Subject Alternative Name / SAN) suitable for securing a CMS website in IIS, which serves lots of different secured websites for various clients each on their own domain name? The CMS will run on a single IP address and a single IIS site, but the content served will be different based on the host header / domain.



I understand that this is technically possible, but I'm interested to know whether anyone considers sharing one SSL certificate across many clients bad practice or a potential security issue.



Thanks,
Tim


Answer



By itself, using a UC certificate with multiple domains names presents no security problems. Just keep in mind that all of the domain names will be listed in the certificate, so you can't put any private sites in there.


linux - Very high RAM buffers usage following instance resize/reboot

Yesterday afternoon, we resized one of our Linode instances (CentOS 5.7, 64-bit) from a 4GB instance to 12GB. Immediately following that reboot, I noticed that memory usage for buffers was insanely high - higher than I've ever seen on any machine I've touched. Even on my most heavily-used servers, I rarely see buffer usage exceed ~200MB. On this server, the current buffer usage is two orders of magnitude higher than before we resized and rebooted.




Here is a munin memory graph with data pre- and post-migration:



[Munin memory graph]



Data that munin is displaying is corroborated by the output of "free":



[erik@host ~]$ free -m
             total       used       free     shared    buffers     cached
Mem:         11967      10146       1820          0       7374       1132
-/+ buffers/cache:       1639      10327
Swap:          255          0        255


Now, I'm well aware of the kernel's use of unused memory for cache, but my understanding is that buffers are different: they're used to temporarily store writes until they've been committed to disk. Is that a correct understanding? This server has very little disk I/O (it's an Apache/PHP webserver, the DB is elsewhere, so the only I/O of substance is access logs), and as such, I'd expect buffer usage to be quite low.



Here is a network traffic graph for the same time period:



[Network traffic graph]



As you can see, there is no substantive change in traffic before and after the resize.




During the reboot, three things changed that I know of:




  1. We picked up 4 additional cores that Linode gave out earlier this week, bringing the total to 8.

  2. We're on the "Latest 64 bit" kernel, which is now 3.7.10-x86_64-linode30. Previously we were on 3.0.18 I believe.

  3. We went from 4GB RAM to 12GB.



Of these changes, my hunch is that the new kernel is causing the increased buffer usage. Unfortunately, at the moment, we can't take another downtime hit to downgrade to an earlier kernel, though that may end up being necessary if I can't get this buffer usage sorted out.




With that, I have a couple questions:




  1. Are any of you running the 3.7.10 kernel and if so, have you seen a similar change?

  2. What tools are available to inspect the kernel buffers and their sizes?

  3. I assume that, like cache, the kernel will release this memory when other applications need it. Is this correct?
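
On question 2, a few standard commands for inspecting what the kernel is holding (generic tools, not from the original post):

grep -E '^(Buffers|Cached|Dirty|Writeback):' /proc/meminfo   # the raw counters behind free(1)
sudo slabtop -o                                              # one-shot dump of kernel slab caches
vmstat 5                                                     # watch the buff/cache columns over time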

domain name system - Setting up new host provider for my website




My website is hosted by the same company from which I bought DNS. This company also takes care of my emails.



Now, however, I want to host my website on a different service, while the other services (email, ftp, smtp...) would remain on the old host. The instructions at my new host are to add my domain.sk and www.domain.sk as CNAME records pointing to their server.



However, I'm worried that this will make my emails not work.



Current setup:



domain.sk.              IN NS      ns.exohosting.sk.
domain.sk.              IN NS      ns1.exohosting.eu.
domain.sk.              IN NS      ns3.exohosting.cz.
domain.sk.              IN A       92.240.253.107
domain.sk.              IN MX 10   relay.exohosting.sk.
domain.sk.              IN MX 10   relay1.exohosting.sk.
domain.sk.              IN MX 20   relay2.dnsserver.eu.
domain.sk.              IN MX 15   relay3.dnsserver.eu.
smtp.domain.sk.         IN A       92.240.253.56
mail.domain.sk.         IN CNAME   pop3-imap.dnsserver.eu.
*.domain.sk.            IN A       92.240.253.107
ftp.domain.sk.          IN CNAME   ftpcluster.dnsserver.eu.
setup.domain.sk.        IN CNAME   setup.dnsserver.eu.
webmail.domain.sk.      IN CNAME   webmail.dnsserver.eu.
wapmail.domain.sk.      IN CNAME   wapmail.dnsserver.eu.
webftp.domain.sk.       IN CNAME   webftp3.dnsserver.eu.
stats.domain.sk.        IN CNAME   stats1.dnsserver.eu.
autoconfig.domain.sk.   IN CNAME   autoconfigcl.dnsserver.eu.
autodiscover.domain.sk. IN CNAME   autoconfigcl.dnsserver.eu.



To be added according to new host:



www.domain.sk.    IN CNAME   webapp-XXXX.pythonanywhere.com.    
domain.sk. IN CNAME webapp-XXXX.pythonanywhere.com.

Answer



You can’t create a CNAME record with the same name as any other resource record so, unfortunately, you won’t be able to replace the domain.sk A record with a CNAME record.



BIND and other name servers will prevent you from creating a CNAME record that masks other resource records. I presume the web interfaces of DNS hosts will also throw an error.




You can add the www subdomain as a CNAME record since this doesn’t conflict with any other resource records.



www.domain.sk.    IN CNAME   webapp-XXXX.pythonanywhere.com.


For the bare domain, I’d agree with Andreas Rogge’s suggestion of configuring a web server for domain.sk with your current host so that HTTP requests for domain.sk are redirected to www.domain.sk. I’d also suggest using a 301 Moved Permanently redirect so that search engine page rank is preserved.
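
As a sketch of that redirect, assuming nginx on the current host (an Apache RewriteRule would do the same job):

server {
    listen 80;
    server_name domain.sk;
    # 301 so search engines carry the page rank over to the www host
    return 301 http://www.domain.sk$request_uri;
}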






If you’re changing web hosts and you want to have other separate websites available by subdomains such as myapp.domain.sk but hosted on pythonanywhere.com, you could replace the wildcard A record with a wildcard CNAME record:




*.domain.sk.    IN CNAME   webapp-XXXX.pythonanywhere.com.


Presumably the hosting nameserver supports wildcard CNAME records. See Is a wildcard CNAME DNS record valid?


centos - How to configure iptables for a dial-up VPN with OpenVPN and two interfaces?

I have an AWS EC2 instance, running Amazon Linux, that has two Elastic Network Interfaces (ENIs) attached: eth0 and eth1. I am connecting to the public IP on eth0. Everything works great, except I would like to route unencrypted traffic out of eth1; i.e., the client connects to eth0 to set up an encrypted VPN tunnel, then their unencrypted internet traffic is routed in/out of eth1 and back across the tunnel on eth0.



I don't know enough about iptables to get this config working, despite trying for several hours. I'm hoping this is a simple one?




I've installed the latest version of OpenVPN from source and done the following:




  1. Disabled source/dest check on the interfaces

  2. Added the following to "rc.local": echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

  3. Added the following iptables commands:

    iptables -A INPUT -i eth0 -m state --state NEW -p tcp --dport 443 -j ACCEPT
    iptables -A INPUT -i tun+ -j ACCEPT

    iptables -A FORWARD -i tun+ -j ACCEPT
    iptables -A FORWARD -i tun+ -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A FORWARD -i eth0 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -t nat -A POSTROUTING -s 10.18.14.0/24 -o eth0 -j MASQUERADE
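
For reference, routing the clients out of eth1 instead would mean pointing the NAT and forward rules at eth1; an untested sketch:

iptables -t nat -A POSTROUTING -s 10.18.14.0/24 -o eth1 -j MASQUERADE
iptables -A FORWARD -i tun+ -o eth1 -j ACCEPT
iptables -A FORWARD -i eth1 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT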



My server config file looks like this:




port 443

proto tcp-server
dev tun
tls-server
server 10.18.14.0 255.255.255.0

ca /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/vpnserver.crt
key /etc/openvpn/pki/vpnserver.key
dh /etc/openvpn/pki/dh.pem


ifconfig-pool-persist ipp2.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

keepalive 5 15
comp-lzo
max-clients 5
persist-key
persist-tun

status openvpn-status.log
log-append /var/log/openvpn_road.log
verb 6
mute 20

tun-mtu 1500
auth SHA1
keysize 128
cipher BF-CBC

vmware esxi - HP Proliant DL180 G6 - Smart Array P410, bay 11 error



I've been searching the internet for a solution to fix this. I'm the new IT person for my organization, and our previous IT staff did not keep records on certain things. I understand that's bad practice, so I'm now documenting all of this for future reference.




Having said that: recently I came across an issue on our server. We're using an HP ProLiant DL180 Gen6 server with ESXi 5.0. The issue is that I'm unable to power up certain VMs, as it gives me an I/O error. The error was:




Reason: 0 (Input/output error).
Cannot open the disk '/vmfs/volumes/4e7a4edb-08851e40-0c1e-1cc1de700f23/EON-GATEWAY
(192.168.0.1 )/EON-GATEWAY ( 192.168.0.1 )-000001.vmdk' or one of the
snapshot disks it depends on.




So I powered down all the VMs and restarted the host to jump into the BIOS and look at the RAID. I do not know what type of RAID the server is using, as it shows something like:





Error on SLOT1 : bay 11 -- (as I remember)




Is there a way for me to check what exactly the issue is? I can see that the affected hard disk still flashes a green LED. Out of 12 bays, bay 1 shows an orange LED and bay 4 shows nothing at all.



I'm pretty confused about how to get this sorted. Can anyone guide me on what exactly I need to do, or give me a hint on how to check the RAID/array info?






The images below are from the Smart Array controller...



[Smart Array controller screenshots]



Here's a video link to the server HDDs. I'm still curious, as bay 1 now flashes blue and amber while the other bays are blue (on the Smart Array screen, as seen above).



Answer



This could be a VMware issue or a locking problem on the virtual disk. Can you capture the full error message? Do other virtual machines power on without problems?



Despite that, it appears you have a physical storage issue, too.



Here's what the HP Smart Array P410 configuration output on a DL180 G6 looks like:



physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 2 TB, OK)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 2 TB, OK)
physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 2 TB, OK)
physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 2 TB, OK)
physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 2 TB, OK)
physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 2 TB, OK)
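
A listing like this comes from HP's array CLI; if the HP tools are installed on the host (an assumption), something like the following prints it:

hpacucli ctrl all show config              # logical/physical drive layout and status
hpacucli ctrl slot=0 pd all show detail    # per-drive details (the slot number may differ)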


Are you sure that you're not mistaking the drive designation 1I:1:1, which means (port 1I:box 1:bay 1), for "SLOT1 : Bay 11"? That would explain the amber/orange light in the first drive bay.



Given that this server was not documented well, there's a high probability that it was also configured with RAID 5.





  • Does the server boot?

  • What error messages do you see at POST?

  • Do you have to press any keys on the keyboard to allow the system to boot? (e.g. F1)

  • What capacity and type of disks are installed in the server?



If the server is on, you can view the RAID configuration from within ESXi. Do this by navigating to: Hardware Status > Sensors > Storage.



If your ESXi was installed using an HP-specific VMware image, you will see the RAID configuration there.




[ESXi hardware status view showing the RAID configuration]



If you don't see anything inside of VMware, you will need to reboot and view the RAID configuration at the BIOS level.



When the system is powered on, you want to hit the F8 key when prompted to enter the Smart Array P410 configuration utility.



Once inside, select "View Logical Drives".



[Smart Array configuration utility: View Logical Drives menu]




This will show you the RAID health status and you can hit Enter for details. This will tell you conclusively which disks are good/bad/missing in the array.


Saturday, July 6, 2019

nginx reverse proxy to non-standard ssl port

I am having a terrible time getting this to work; here's my config file. When I navigate to 'sub.domain.com' I'm redirected to an https version of the URL, so I know nginx is receiving the request, but the reverse proxy isn't loading 10.3.2.200:8443. When I load "https://sub.domain.com", Chrome tells me "ERR_CONNECTION_CLOSED". I'm using snippets from other answers on StackExchange plus other online tutorials, but with no success.



As a peculiarity, if I run a test and change 443 to 20205 in the second server block, then the reverse proxy works with 'https://sub.domain.com:20205' and successfully forwards to 10.3.2.200:8443.



## running on 10.3.2.205
upstream destsrv {
    server 10.3.2.200:8443;
}

server {
    listen 80 http2;
    listen [::]:80 http2;
    server_name sub.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name sub.domain.com;

    ssl_certificate     /etc/letsencrypt/certs/star_domain_com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/certs/star_domain_com/privkey.pem;

    location / {
        proxy_pass https://destsrv;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
    }
}



This is a reverse proxy on a home server; I have ports 80 and 443 forwarded to my nginx server.



What's going on here?
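
One genuine error in the config above, whatever the root cause of the 443 behaviour turns out to be: http2 on the cleartext listen 80 makes nginx expect HTTP/2 prior knowledge on port 80, so ordinary browser requests fail. The redirect block is conventionally written without it:

server {
    listen 80;
    listen [::]:80;
    server_name sub.domain.com;
    return 301 https://$server_name$request_uri;
}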

Friday, July 5, 2019

version control - Git, Init remote repository

We are changing our VCS from Subversion to Git. The server is installed and running so far; now, to get things done right: if I create a local repo on my machine and want to push/clone it to a remote repository (a central server, in this case), I have to do a git --bare init project.git on the remote server, is that right? Is there a way to create the remote repo without logging into the central machine?
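
For reference, the usual pattern still runs one command over SSH, just without an interactive login session (the host and paths below are placeholders):

ssh git@central.example.com "git init --bare /srv/git/project.git"
git remote add origin git@central.example.com:/srv/git/project.git
git push -u origin master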

security - Got Hacked. Want to understand how



Someone has, for the second time, appended a chunk of javascript to a site I help run. This javascript hijacks Google adsense, inserting their own account number, and sticking ads all over.



The code is always appended, always in one specific directory (one used by a third party ad program), affects a number of files in a number of directories inside this one ad dir (20 or so) and is inserted at roughly the same overnight time. The adsense account belongs to a Chinese website (located in a town not an hour from where I will be in China next month. Maybe I should go bust heads... kidding, sort of), btw... here is the info on the site: http://serversiders.com/fhr.com.cn



So, how could they append text to these files? Is it related to the permissions set on the files (ranging from 755 to 644)? To the webserver user (it's on MediaTemple, so it should be secure, yes)? I mean, even if a file has its permissions set to 777, you still can't just add code to it at will over the web... how might they be doing this?



Here is a sample of the actual code for your viewing pleasure (and as you can see... not much to it. The real trick is how they got it in there):

[the injected JavaScript sample did not survive archiving]

Since a number of folks have mentioned it, here is what I have checked (and by "checked" I mean I looked around the time the files were modified for any weirdness, and I grepped the files for POST statements and directory traversals):




  • access_log (nothing around the time except normal (i.e. excessive) msn bot traffic)

  • error_log (nothing but the usual file does not exist errors for innocuous looking files)

  • ssl_log (nothing but the usual)

  • messages_log (no FTP access in here except for me)




UPDATE: OK, solved it. Hackers from China had physically placed a file on our site that allowed them to do all manner of administrative things (database access, deleting and creating files and dirs, you name it, they had access). We were lucky they didn't do something more destructive. There was nothing in the normal Apache log files, but I found a different set of log files in a web server log analyzer, and the evidence was in there. They were accessing this file with their own admin username and password and then editing whatever they needed right there on the server. Their file has "apache" set as the user, while all other files on our site have a different user name. Now I need to figure out how they physically got this file onto our system. I suspect blame for this will eventually rest with our web host (Media Temple), unless they actually had our FTP login... not sure how I will determine that, however, as this file has probably been there for a while.


Answer



First of all, chmod 744 is NOT what you want. The point of chmod is to revoke access from other accounts on the system, and chmod 700 is far more secure than chmod 744. However, Apache only needs the execute bit to run your PHP application.



chmod 500 -R /your/webroot/



chown www-data:www-data -R /your/webroot/




www-data is the account commonly used by Apache to execute PHP. You can also run this command to see which user account is in use:



`print system("whoami");
?>`


FTP is horribly insecure, and it's very likely that you were hacked through it. Over FTP you can make files writable and then infect them again. Make sure you run an anti-virus scan on all machines with FTP access: there are viruses that sniff local traffic for FTP usernames and passwords and then log in and infect the files. If you care about security, use SFTP, which encrypts everything. Sending source code and passwords over the wire in clear text is total madness.



Another possibility is that you are using an old library or application. Visit the software vendor's site and make sure you are running the latest version.



Thursday, July 4, 2019

memory usage - Linux: drop caches do not free buffers/cache - why?

Since I've found discussion of buffers/cache on this site rather than on Unix & Linux SE, I'm posting my question here. I've read In Linux, what is the difference between "buffers" and "cache" reported by the free command? and Meaning of the buffers/cache line in the output of free, where it is written:




caches will be freed automatically if memory gets scarce, so they do
not really matter.




Currently, free reports 8 GB of buffers/cache; however, when approaching zero free memory, the system becomes unresponsive for a long time, and sync; echo 3 > /proc/sys/vm/drop_caches does not change much. Why? I post the output of free -m and also the more detailed output of cat /proc/meminfo:




              total        used        free      shared  buff/cache   available
Mem:          15740        4508        2366        8453        8865        2474

MemTotal: 16118172 kB
MemFree: 528472 kB
MemAvailable: 475820 kB
Buffers: 1588 kB
Cached: 8939100 kB
SwapCached: 0 kB
Active: 6711540 kB

Inactive: 8440460 kB
Active(anon): 6621624 kB
Inactive(anon): 8402256 kB
Active(file): 89916 kB
Inactive(file): 38204 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB

Writeback: 0 kB
AnonPages: 6211412 kB
Mapped: 1534592 kB
Shmem: 8812568 kB
Slab: 203244 kB
SReclaimable: 106932 kB
SUnreclaim: 96312 kB
KernelStack: 18736 kB
PageTables: 93880 kB
NFS_Unstable: 0 kB

Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 8059084 kB
Committed_AS: 23933660 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB

ShmemPmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 1761344 kB
DirectMap2M: 14704640 kB

DirectMap1G: 1048576 kB
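
Worth noting in the numbers above: Shmem is roughly 8.4 GB, and tmpfs/shared-memory pages are counted under buff/cache but cannot be reclaimed by drop_caches, which would explain cache that appears stuck. A quick way to see what is holding those pages:

grep -E '^(Shmem|Cached|Buffers):' /proc/meminfo
df -h -t tmpfs        # tmpfs mounts whose contents live in the page cache
ipcs -m               # SysV shared-memory segments, also counted as Shmem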

Wednesday, July 3, 2019

Allowing SSH on a server with an active OpenVPN client




I have a VPS running CentOS 7 that I connect to with SSH. I would like to run an OpenVPN client on the VPS so that internet traffic is routed through the VPN, but still allow me to connect to the server via SSH. When I start up OpenVPN, my SSH session gets disconnected and I can no longer connect to my VPS. How can I configure the VPS to allow incoming SSH (port 22) connections to be open on the VPS's actual IP (104.167.102.77), but still route outgoing traffic (like from a web browser on the VPS) through the VPN?



The OpenVPN service I use is PrivateInternetAccess, and an example config.ovpn file is:



client
dev tun
proto udp
remote nl.privateinternetaccess.com 1194
resolv-retry infinite
nobind

persist-key
persist-tun
ca ca.crt
tls-client
remote-cert-tls server
auth-user-pass
comp-lzo
verb 1
reneg-sec 0
crl-verify crl.pem



VPS's ip addr:



1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:be:16:f7 brd ff:ff:ff:ff:ff:ff
    inet 104.167.102.77/24 brd 104.167.102.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:febe:16f7/64 scope link
       valid_lft forever preferred_lft forever
4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
    link/none
    inet 10.172.1.6 peer 10.172.1.5/32 scope global tun0
       valid_lft forever preferred_lft forever



VPS's ip route:



0.0.0.0/1 via 10.172.1.5 dev tun0
default via 104.167.102.1 dev ens33 proto static metric 1024
10.172.1.1 via 10.172.1.5 dev tun0
10.172.1.5 dev tun0 proto kernel scope link src 10.172.1.6
104.167.102.0/24 dev ens33 proto kernel scope link src 104.167.102.77
109.201.154.177 via 104.167.102.1 dev ens33
128.0.0.0/1 via 10.172.1.5 dev tun0

Answer



I'm having a similar issue to this and have been attempting the fix described in this forum post.



The idea is that currently when you connect to your public IP address, the return packets are being routed over the VPN. You need to force these packets to be routed over your public interface.



These route commands will hopefully do the trick:




ip rule add from x.x.x.x table 128




ip route add table 128 to y.y.y.y/y dev ethX



ip route add table 128 default via z.z.z.z




Where x.x.x.x is your public IP, y.y.y.y/y should be the subnet of your public IP address, ethX should be your public Ethernet interface, and z.z.z.z should be the default gateway.
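
Substituting the values from the question (public IP 104.167.102.77 on ens33, gateway 104.167.102.1), the three commands would read:

ip rule add from 104.167.102.77 table 128
ip route add table 128 to 104.167.102.0/24 dev ens33
ip route add table 128 default via 104.167.102.1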



Note that this hasn't worked for me (using Debian and PrivateInternetAccess) but may help you out.


domain name system - Why do I need to set a hostname?



I know there's quite a few questions about host names. But even after reading them, I didn't really understand the concept of host names entirely. So here's my question:



I've been following this guide in setting up a VPS with Linode.




The first step is to set a host name. From what I understand, a host name is just an arbitrary name you can set to identify your machine on a network. Also, the FQDN is the host name plus the domain name (which may or may not be related to web domains hosted on the server). Please correct me if I'm wrong.



Then it instructs me to modify /etc/hosts and add in something like:



12.34.56.78    plato.example.com    plato


So my question is, what exactly does this line accomplish? I've done it before but never really understood what it did. Also, if the host name and the domain name used in the FQDN are just arbitrary, where can they be used? Actual use cases would be very helpful, and a detailed explanation would be great. Thanks!


Answer




Certain applications will use the hostname for certain parameters unless explicitly set. Postfix, for example, will identify itself using the hostname of the machine unless you specify otherwise in the config file.



The hosts file is used for name resolution. When resolving domain names, your server will check its hosts file before making a DNS request.



That line you posted will essentially make your server resolve "plato.example.com" and "plato" to that IP address. That's why you'll generally see the first line with 127.0.0.1 localhost localhost.localdomain ..., so the server will always resolve its own hostname to itself.
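
A quick way to watch this in action (standard glibc tools; plato is the example name from the question):

getent hosts plato.example.com    # resolves via the nsswitch order, so /etc/hosts wins
hostname --fqdn                   # derives the FQDN from the hostname and hosts entry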


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...