Tuesday, April 30, 2019

domain name system - dcdiag DNS test fails, but DNS seems to be working properly

Active Directory setup:



Single forest, 3 domains, with 1 domain controller each. All running server 2008 R2, with the same domain/forest functional level.



DNS clients are configured as follows:



DC1 -> DC2 (prim), DC1 (sec)



DC2 -> DC1 (prim), DC2 (sec)




DC3 -> DC1 (prim), DC3 (sec)



All zones are replicated throughout the entire forest, and each DNS server is set up with 8.8.8.8/8.8.4.4 as forwarders.



Problem:



Everything appears to be working as it should. AD is replicating properly, DNS is responsive and not causing any issues, BUT when I run dcdiag /test:dns, the enterprise DNS test fails on DC2 and DC3 with the following error:



TEST: Forwarders/Root hints (Forw)
Error: All forwarders in the forwarder list are invalid.




Error: Both root hints and forwarders are not configured or broken. Please make sure at least one of them works.



Symptoms:



Event Viewer is constantly showing these 2 event IDs for the DNS Client:



ID 1017 - The DNS server's response to a query for name INTERNAL RECORD indicates that no records of the type queried are available, but could indicate that other records for the same name are present.




ID 1019 - There are currently no IPv6 DNS servers configured for any interface on this host. Please configure DNS server settings, or renew your dynamic IP settings. (strange, as IPv6 is disabled on the network card)



nslookup is working as expected and resolves every record that appears in the ID 1017 events, no matter which DNS server I point it at.



While running dcdiag, the following events appear:



Event ID 10009: DCOM was unable to communicate with the computer 8.8.4.4 using any of the configured protocols.



DCOM was unable to communicate with the computer 8.8.8.8 using any of the configured protocols.




Event ID 1014: Name resolution for the name 1.0.0.127.in-addr.arpa timed out after none of the configured DNS servers responded.



I've run Wireshark while dcdiag runs its tests; the internal DNS servers do resolve everything thrown at them, but the server then goes on to query Google DNS and the root hints as well.
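To isolate just the forwarder check, the DNS sub-tests can be run individually and the forwarders queried directly from the affected DC; this is only a sketch of the diagnostics (the record and the internal DC address on the last line are placeholders):

dcdiag /test:dns /dnsforwarders /v
nslookup www.google.com 8.8.8.8
nslookup -type=SRV _ldap._tcp.domain1.local 10.x.x.x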



What the hell is going on? What am I missing here?



Edit: The actual enterprise DNS test error messages are:



         Summary of test results for DNS servers used by the above domain controllers:



DNS server: 128.63.2.53 (h.root-servers.net.)

1 test failure on this DNS server

Name resolution is not functional. _ldap._tcp.domain1.local. failed on the DNS server 128.63.2.53


DNS server: 128.8.10.90 (d.root-servers.net.)

1 test failure on this DNS server

PTR record query for the 1.0.0.127.in-addr.arpa. failed on the DNS server 128.8.10.90
Name resolution is not functional. _ldap._tcp.domain1.local. failed on the DNS server 128.8.10.90

DNS server: 192.112.36.4 (g.root-servers.net.)

1 test failure on this DNS server


Name resolution is not functional. _ldap._tcp.domain1.local. failed on the DNS server 192.112.36.4


etc., etc.

SQL Server "Long running transaction" performance counter: why no workee?

Please explain to me the following observation:



I have the following piece of T-SQL code that I run from SSMS:



BEGIN TRAN
SELECT COUNT(*)
FROM m
WHERE m.[x] = 123456
   OR m.[y] IN (SELECT f.x FROM f)

SELECT COUNT(*)
FROM m
WHERE m.[x] = 123456
   OR m.[y] IN (SELECT f.x FROM f)
COMMIT TRAN



The query takes about twenty seconds to run. I have no other user queries running on the server.



Under these circumstances, I would expect the performance counter "MSSQL$SQLInstanceName:Transactions\Longest Transaction Running Time" to rise constantly up to a value of 20 and then drop rapidly. Instead, it rises to around 12 within two seconds and then oscillates between 12 and 14 for the duration of the query after which it drops again.



According to the MS docs, the counter measures "The length of time (in seconds) since the start of the transaction that has been active longer than any other current transaction." But apparently, it doesn't. What gives?
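For what it's worth, the same counter can be sampled from inside SQL Server while the test transaction is running, which makes it easier to see how often the value is actually refreshed. A minimal sketch using the standard performance-counter DMV (run it in a second SSMS window once a second or so):

-- sample the counter while the twenty-second transaction runs
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Longest Transaction Running Time'
  AND object_name LIKE '%:Transactions%';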

Monday, April 29, 2019

domain name system - How much does the geographical location of DNS servers matter?

We have started to run our own DNS servers located in Asia since that's where our main audience is. However, it seems that some users in the US are having difficulties accessing our website sometimes. I've noticed myself that DNS lookups of our domain from the US are relatively slow (500 msec+). Maybe the problems some users are having are due to other DNS configuration errors, but in general, how much of an issue is the geographical location of DNS servers? Should we have an additional server in the US?

Sunday, April 28, 2019

IIS Web Farm site stuck on starting on other servers - 0x800710D8



We have a Windows 2012/IIS 8.5 web farm up and running using a shared config. All was working great on the servers: we would create a site on one server and it would go across them all. We then ran into an issue with the servers and had to change a number of them to local configurations before reverting them back to the shared config.



The problem we had was that we were unable to start sites, so it was a fairly major issue. At the time we suspected it was related to the way we were using DFS to share the configuration across servers, and that IIS was possibly accessing the config files whilst they were being touched by DFS. We tried a couple of things and ended up reverting the servers back to a previous IIS config (due to corruption issues and not being able to start sites) and had to set up a new DFS share.




Now, when we create a new site on the farm, the site is started on web01 but stuck on "starting" on the remaining servers. When we try to click Start on one of these sites we get the error:



there was an error performing this operation. Details: the object identifier does not represent a valid object. (exception from HRESULT: 0x800710D8)



When I edit the binding of any site on the server (the one with the sites stuck on starting) and apply the changes, I am then able to start all the problematic sites.



Does anyone have any ideas as to what the cause could be and how to resolve it?



Thanks



Answer



Disable shared config on all servers - this will result in them being temporarily separated and each storing their own config - this is OK.



Export the iisConfigurationKey and iisWasKey from web01 and import on all other servers - these encryption keys need to be synchronized across the farm for shared config to work. If you built the other servers by cloning web01, then no need to do this.






Export:



C:\windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -px "iisConfigurationKey" C:\iisConfKey.xml -pri


C:\windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -px "iisWasKey" C:\iisWasKey.xml -pri


Import:



C:\windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -pi "iisConfigurationKey" C:\iisconfkey.xml -exp

C:\windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -pi "iisWasKey" C:\iisWasKey.xml -exp






Set up a network share that all servers can use to access the shared config, as detailed here.



On the first server (web01), export the configuration and place it in this network location (make note of the encryption key you use when you export).



On the first server (web01), switch to shared configuration mode again - defining the same network location - input the encryption key if prompted.



Now do the same on all other web servers, switch to shared configuration mode again - defining the same network location - input the encryption key if prompted.




Reboot all of them.



Now manage config via web01 and it should appear correctly on the other servers.


ssd - Trim support in Hardware RAID (Perc H700)



We are about to order a Dell R610 with an H700. We were thinking of using 2 x Samsung 850 Pro 512 GB drives in RAID 1.



The question is does the Perc H700 support TRIM in hardware RAID 1? The answers I have found from googling around seem to indicate no, but this also seems to be a fast moving field, and there have been firmware updates since then.




Oh, and it seems non-Dell-certified drives will no longer be blocked.



Or, if there is no hardware-supported TRIM for RAID 1 on the H700 with the latest firmware, would it be better to get SSDs which have inbuilt TRIM based on SandForce controllers? E.g. the Kingston HyperX, which seems to have done reasonably well in the endurance tests.



The Samsung 850s have better specs than the HyperX drives, but I am led to believe they would have degraded performance without TRIM commands being issued to them...



So: does the PERC H700 support TRIM? If not, is there another hardware RAID controller that does? And if not, is the best bet to use SandForce-controlled SSDs?



Thanks,
Jas



Answer



Well, doesn't the SSD choice depend a bit on your anticipated workload?



Really: Are SSD drives as reliable as mechanical drives (2013)?



But generically, you can attach just about anything to an LSI (Perc) controller and make it work. Should you? I mean, these are still consumer disks...



There's no TRIM support on the hardware RAID controller (it's not common). It's also not that important. You can just under-provision the drives: create a virtual disk smaller than the capacity of the SSDs, i.e. don't allocate all of the space on the disks.






amazon ec2 - Approximately, how many writes can a EC2 large dedicated Mongo server handle?



Let's assume that it was only writes.
Each "document" inserted is less than 140 characters.




How many writes can this database handle?


Answer



EC2 is notorious for inconsistent throughput. There is no way to answer this question reliably, and even testing this in "production" is going to be problematic because of the varied nature of your platform.



If you want to load-test your application, you need a different platform, and really should be using a hosted (or better, leased) server environment.



With that said, to maximize throughput, use SSD drives, ensure that at least your indexes can remain in memory and that they're useful indexes (though keeping your indexes + db in memory is even more ideal), and shard. (Keep in mind that sharding increases complexity, especially on the backup/recovery front.)


Web Server Hardware - What Do I Need?




I need a web server for static web content, a corporate blog and the company e-commerce system. I have some ideas, but thought of seeking additional feedback from the world's best server pros!



NOTE 1 - the company has around 300 customers, and revenues around $1 million. Let's see: several hundred users a day, downloading and otherwise viewing our site. I'm hoping the new server will help us boost traffic, so I want to give myself something to grow into. So far, I'm looking at something like:




8-core Opteron
16-32 GB RAM
4 x 1 TB drives (some kind of RAID)
Gigabit LAN



Am I on the right track?



NOTE 2 - This is what I went with:




  • Rackform nServ A161

  • Opteron 6128 2.0GHz, 8-Core

  • 16GB (4 x 4GB) operating at 1333MHz Max (DDR3-1333)

  • 2 x Intel 82574L Gigabit Ethernet Controllers

  • Integrated IPMI 2.0 with Dedicated LAN

  • LSI 9260-4i 6Gb/s SAS/SATA RAID

  • 4 x 1TB Seagate Constellation ES

  • Optical Drive: Low-Profile DVD+/-RW Drive

  • Power: 350W Power Supply


Answer




Yes, you are on the right track. Most web servers need CPUs, RAM, storage, and network connections.



You need to put more thought into your requirements. Once you have those you can design an architecture and find software that meets those requirements (iterate as needed). The software should have parameterized hardware requirements.



Sizing a server is not an exact science so you should design it with the ability to scale and you should implement monitoring so you know when and where to scale.



Random considerations:




  • Usually having 1 server is not a good idea because there is no redundancy. More generally, what are your availability reqs? Do you need a load balancer?


  • If you are going to run an ecommerce site you usually have a database and it is on a separate system with a firewall between it and the web server.

  • You need to consider security. Do you want to run a WordPress blog on the same server as your ecommerce site?


Saturday, April 27, 2019

hosting - How to safely send newsletters on VPS (SMTP) w/ non-hosted domain as "From" email?

Greetings,




I'm trying to understand the safest way to use SMTP. I'm considering purchasing a second virtual server mainly for email sending, on which I will set up PHPlist (a free open-source mailing program), so we have the freedom to send unlimited newsletters (...well, 10,000 per day at least, which requires a VPS rather than shared hosting).



Here's my current setup with a paid mass-mailing software: I have a website - let's call it MyHostedDomain.org. I send newsletters with the From / Reply-To address as alias@SomeoneElsesDomain.org, which isn't hosted by me, but I have access to the email account.



Can I more or less safely set this up with an SMTP server on a VPS? i.e. send messages using alias@SomeoneElsesDomain.org as the visible address, but having it all go through my VPS SMTP? I cannot authenticate it, right? Is this too risky a practice? Is my only hope to use an address with a domain on the VPS, i.e. alias@MyHostedDomain.org?



I already have a Reverse DNS record for the domain hosted on my current VPS. I also see other suggestions, like SenderID and DKIM. But with all these things combined, will this still work? I don't want to get blacklisted, but the good thing is this is a somewhat private list, and users opt-in to subscribe. So it's a self-made audience.
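For reference, the SPF record that matters here would have to be published in SomeoneElsesDomain.org's DNS zone (by whoever controls that domain), authorizing your VPS to send on its behalf. A minimal sketch, with 203.0.113.10 standing in for the VPS's IP address:

SomeoneElsesDomain.org.  IN  TXT  "v=spf1 ip4:203.0.113.10 ~all"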



(If it makes you feel better, this is related to a non-profit activity, not some marketing scam...it's for a good cause, I assure you!)

Friday, April 26, 2019

kvm virtualization - KVM + NFS poor disk performance



Situation: We have an Ubuntu server that hosts three VMs using KVM. All guests as well as the host need to access the same files in a certain subfolder of /var . Thus, the subfolder is exported via NFS. Our problem is that the guest can read from/write to the directory with only half the speed of the host. The export table looks like this



alice@host:~$ cat /etc/exports

/home/videos 192.168.10.0/24(rw,sync,no_root_squash)


where the host has IP 192.168.10.2 and the VMs 192.168.10.1{1..3}. /home/videos is a symlink to that certain subfolder in /var. In particular, it's /var/videos/genvids.



This is the relevant line from the VM's fstab:



192.168.10.2:/home/videos /mnt/nfs nfs auto,noatime,rsize=4096,wsize=4096  0 0



The hard disk has a sustained data rate of ~155 MB/s which is verified by outputs of hdparm -tT as well as dd:



alice@host:~$ dd if=/home/videos/4987_1359358478.mp4 of=/dev/null bs=1024k count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 2.04579 s, 154 MB/s


From within a VM things look differently:




bob@guest:~$ dd if=/mnt/nfs/4959_3184629068.mp4 of=/dev/null bs=1024k count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 4.60858 s, 68.3 MB/s


Fitting the blocksize to the file system's page size had no satisfying effect:



bob@guest:~$ dd if=/mnt/nfs/4925_1385624470.mp4 of=/dev/null bs=4096 count=100000
100000+0 records in

100000+0 records out
409600000 bytes (410 MB) copied, 5.77247 s, 71.0 MB/s


I consulted various pages on NFS performance, most relevant the NFS FAQs Part B, and the respective Performance Tuning Howto. Most of the hints do not apply. The others did not improve the results. There are threads here that deal with disk performance and KVM. However they do not cover the NFS aspect. This thread does, but network speed seems not the limiting factor in our case.



To give a complete picture this is the content of the exports etab with symlinks resolved and all active export options shown:



alice@host:~$ cat /var/lib/nfs/etab
/var/videos/genvids 192.168.10.0/24(rw,sync,wdelay,hide,nocrossmnt,secure,

no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,
anonuid=65534,anongid=65534)


What also bothers me in this context - and what I do not understand - is the nfsd's procfile output:



alice@host:~$ cat /proc/net/rpc/nfsd
...
th 8 0 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
...



For the third column and beyond I would have expected values other than zeros after reading from the disk within the VMs. However, nfsstat tells me that there were indeed read operations:



alice@host:~$ nfsstat
...
Server nfs v3:
null getattr ...
9 0% 15106 3% ...
read write ...

411971 95% 118 0% ...
...


So, the topic is quite complex and I'd like to know where else to look or whether there is an easy solution for this.


Answer



As it turns out the problem was easier to resolve than expected. Tuning the rsize and wsize option in the VM's fstab did the trick. The respective line is now



192.168.10.2:/home/videos /mnt/nfs nfs auto,noatime,rsize=32768,wsize=32768  0 0



For me this was not obvious since I had expected best performance if values for rsize and wsize meet the disk's block size (4096) and are not greater than the NIC's MTU (9000). Apparently, this assumption was wrong.
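As a side note, the rsize/wsize values actually negotiated (the server may clamp what the client asks for) can be checked from inside a guest, for example with:

bob@guest:~$ nfsstat -m          # shows each NFS mount with its effective options
bob@guest:~$ grep nfs /proc/mounts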



It is notable that the exact sustained disk data rate depends on the very file: For two similar files of size 9 GB I observed rates between 155 MB/s (file 1) and 140 MB/s (file 2). So a reduced data rate with one file may still result in full data rate with another file.


raid - SSA configured RAID1 array not visible in CentOS7 installation

I have a ProLiant ML30 Gen9 server with a hardware RAID adapter that I can configure using HP's "Smart Storage Administrator" (SSA) tool. I have defined my two 1TB drives to be in a RAID 1 setup as a logical drive and enabled primary boot on them.



However, if I start the CentOS7 installation, I can still select /dev/sda and /dev/sdb as installation medium, where I would expect only one (logical, mirrored, raid) drive.



Update: It is a Smart Array B140i. I have tried to load drivers, but they either lock up the install, cannot be loaded, or do not show the array in the installer. I spent an hour with HP support (who could only tell me to reset to defaults and point me to links I already found).



What am I doing wrong?

Wednesday, April 24, 2019

email - Change smtp name

My question is probably very easy to answer, but I have been struggling with this the whole day. I would like to change the smtp.mail name and account that is shown as sending the emails in the header. I changed the "From" address, but that only shows in the "visible" part of the email, and there are different values in the header.



In this example :



Received-SPF: pass (google.com: domain of bounce@taggedmail.com designates 67.221.174.127 as permitted sender) client-ip=67.221.174.127;
Authentication-Results: mx.google.com; spf=pass (google.com: domain of bounce@taggedmail.com designates 67.221.174.127 as permitted sender) smtp.mail=bounce@taggedmail.com; dkim=pass (test mode) header.i=tagged@taggedmail.com



smtp.mail=bounce@taggedmail.com
account=bounce
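The smtp.mail identity in that header appears to correspond to the envelope sender (MAIL FROM / Return-Path), which is set when the message is submitted rather than by the From: header. As a hedged illustration, if mail is handed to the local sendmail binary (as PHP's mail() does), the envelope sender can be overridden with -f; the addresses and path below are placeholders:

/usr/sbin/sendmail -f newsender@yourdomain.example recipient@example.com < message.txt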




Thank you in advance!

sftp - Correctly setting user permissions via SSH

I'm getting stuck with user permissions on a LAMP stack (using Digital Ocean if it matters). Here's my setup.




User dev has the following groups:
dev www-data



The /var/www folder has been set so that the owner is www-data:www-data, it looks like this:



 drwxrwxr-x  3 www-data www-data 4096 Mar 30 17:41 www


If I use the dev user to sftp in, everything looks good, but if I then upload a file, the new file has the ownership of dev:dev.




This becomes a problem when I have a new user called dev2 that is also working in the same directory as they can't delete or overwrite the files that belong to dev.



My experience with users is unfortunately limited to using cPanel, where I can create multiple FTP users that don't have this access/overlap issue. How can I do this via terminal?
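One common approach (a sketch only, assuming both users are meant to share files through the www-data group) is to make the tree group-writable and setgid so new uploads inherit the www-data group:

chgrp -R www-data /var/www
chmod -R g+w /var/www
find /var/www -type d -exec chmod g+s {} \;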

Tuesday, April 23, 2019

postfix - Interpreting a DMARC report that seems to have conflicting data



I recently implemented DMARC in monitoring mode, in order to begin preparing all the domains I manage. Here is the aggregate report for yesterday. I don't understand why DKIM would evaluate to false under policy_evaluated when DKIM is marked pass under auth_results. This domain (mydomain.io) sent one message yesterday (my own server is the SMTP server) to another domain I manage (myotherdomain.net) whose MX is Google Apps.







<feedback>
  <report_metadata>
    <org_name>google.com</org_name>
    <email>noreply-dmarc-support@google.com</email>
    <extra_contact_info>https://support.google.com/a/answer/2466580</extra_contact_info>
    <report_id>xxx711</report_id>
    <date_range>
      <begin>1469923200</begin>
      <end>1470009599</end>
    </date_range>
  </report_metadata>
  <policy_published>
    <domain>my.domain.io</domain>
    <adkim>r</adkim>
    <aspf>r</aspf>
    <p>none</p>
    <sp>none</sp>
    <pct>100</pct>
  </policy_published>
  <record>
    <row>
      <source_ip>23.92.28.xx</source_ip>
      <count>1</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>fail</dkim>
        <spf>pass</spf>
      </policy_evaluated>
    </row>
    <identifiers>
      <header_from>mydomain.io</header_from>
    </identifiers>
    <auth_results>
      <dkim>
        <domain>myotherdomain.net</domain>
        <result>pass</result>
      </dkim>
      <spf>
        <domain>mydomain.io</domain>
        <result>pass</result>
      </spf>
    </auth_results>
  </record>
</feedback>






Answer



It is failing because the domain isn't aligned for DKIM



The calculation of the result in "Policy Evaluated" can be made as follows:





  1. Is the result in "Auth results" Pass?

  2. Is the domain in "Auth results" aligned? That is, is the domain in "Auth results" the same domain on "Policy Published"?



If 1 and 2 are both Yes, then the result is Pass; otherwise it is Fail.



In your case, for DKIM #1 is Yes, but #2 is No, because the domain in "Policy Published" is "mydomain.io" while the domain reported in the "Auth results" for DKIM is "myotherdomain.net".
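In other words, for DKIM to be aligned, the d= tag of the DKIM-Signature your server adds must be the From: domain. A signature along the lines of the sketch below (the selector name is a placeholder, and the bh=/b= values are truncated) would have produced a DKIM pass under policy_evaluated as well:

DKIM-Signature: v=1; a=rsa-sha256; d=mydomain.io; s=selector1;
        h=from:to:subject:date; bh=...; b=...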


linux - Apache as a Reverse Proxy Exposes internal server strings

I'm using Apache mod_proxy as a reverse proxy to internal servers, with mod_security to rename the Apache server string to something different; however, the Server string of the internal servers gets forwarded instead. Obviously I want my reverse proxy to hide the Server string of the internal servers (I can't change them directly) and instead show the custom one.




How to do that?

How can I run Bugzilla on Windows with Strawberry Perl?

I've been at it for 3 days now, and usually run into a Storage.pm problem with "Binary image v18.86 being greater than 2.7".



I've tried different Bugzilla's: 3.0.8, 3.2.4, 3.4rc1.



Next I'll be trying different Perls (using 5.10.0.4 portable right now).



I don't want to go to an older version of MySQL (5.1.36-community), so next I'll try PostgreSQL 8.4.




I'll update as I go. I wanted to ask here, since these are some common platforms, and perhaps someone has it working.



P.S.: Windows XP, Abyss Web Server X1 (though I can't even run perl check-setup.pl yet)



UPDATE: A chronicle of my (so far) fruitless journey.

Monday, April 22, 2019

windows server 2003 - DHCP failing to update DNS, no Active Directory



I have a DHCP and DNS server, running Windows 2003 SP1. I configured everything according to this Microsoft TechNet article, "Using DNS servers with DHCP", but it does not work. Note that the client is a Linux client, but that should not matter; it did not work when it sent option 81 or when it only sent a hostname.




Also note the following documents/tutorials that we read:





In the logs I get the following messages:



30,07/10/09,16:31:04,DNS Update Request,151.28.30.10,hostname.testdomain.local,,MACHINE-317A15D\Administrator
31,07/10/09,16:31:51,DNS Update Failed,10.30.28.151,hostname.testdomain.local,2,
30,07/10/09,16:31:51,DNS Update Request,151.28.30.10,hostname.testdomain.local,,

10,07/10/09,16:31:51,Assign,10.30.28.151,hostname.testdomain.local,001D09117758,
31,07/10/09,16:46:08,DNS Update Failed,10.30.28.151,hostname.testdomain.local,-1,


One other clue I have is an event log entry with the following text:





The DNS server machine currently has
no DNS domain name. Its DNS name is a
single label hostname with no domain
(example: "host" rather than
"host.microsoft.com").



You might have forgotten to configure a primary DNS domain for the
server computer. For more information,
see either "DNS server log reference"
or "To configure the primary DNS

suffix for a client computer" in the
online Help.



While the DNS server has only a single label name, all zones created
will have default records (SOA and NS)
created using only this single label
name for the server's hostname. This
can lead to incorrect and failed
referrals when clients and other DNS
servers use these records to locate

this server by name.



To correct this problem:
1) open Control Panel
2) open System applet
3) select Computer Name tab
4) click the "Change" button and join the computer to a domain or workgroup; this name will be used as your DNS domain name
5) reboot to initialize with new domain name



After reboot, the DNS server will attempt to fix up default
records, substituting new DNS name of
this server, for old single label
name. However, you should review to
make sure zone's SOA and NS records
now properly use correct domain name

of this server



For more information, see Help and Support Center at
http://go.microsoft.com/fwlink/events.asp.




This question is almost identical to this earlier ServerFault question, except that in this case the DHCP/DNS server is not joined to an Active Directory domain. Also it is different from this other ServerFault question, because in my case the logs do indicate a failure, not success.


Answer



This is not possible. As the Event log states, you must join a domain for this to work. The Active Directory domain name is used as the DNS domain name for the system.




As soon as we recreated the configuration with the DHCP/DNS server joined to Active Directory, it all worked.


linux - Execute a command as root

I'm trying to create a Ruby script that is executed with root permissions when run by an unprivileged user. Basically I'm writing a wrapper script that, when run, adds the user to a group, runs a command, then removes the user from the group. This is all under CentOS and not using sudo.




I've played around with having the script owned by root and then chmod +s, which as I understand it should run the script with root permissions. However, when I run the Ruby command system "gpasswd -a #{user} #{group}" in my script, I get a permission denied message.



I'm a bit stuck now on how to get this working.



Thanks

raid - How do I recover from a faulted zpool where one device is OK, but was temporarily offline?



I have a zpool with 4 2TB USB disks in a raidz config:




[root@chef /mnt/Chef]# zpool status farcryz1
pool: farcryz1
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
farcryz1 ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da1 ONLINE 0 0 0

da2 ONLINE 0 0 0
da3 ONLINE 0 0 0
da4 ONLINE 0 0 0


In order to test the pool, I simulated a drive failure by pulling the USB cable from one of the drives without taking it offline:



[root@chef /mnt/Chef]# zpool status farcryz1
pool: farcryz1
state: ONLINE

status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
farcryz1 ONLINE 0 0 0

raidz1 ONLINE 0 0 0
da4 ONLINE 22 4 0
da3 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0

errors: No known data errors


Data's still there, pool still online. Great! Now let's try to restore the pool. I plugged the drive back in, and issued the zpool replace command as I was instructed to above:




[root@chef /mnt/Chef]# zpool replace farcryz1 da4
invalid vdev specification
use '-f' to override the following errors:
/dev/da4 is part of active pool 'farcryz1'


Um... that's not helpful. So I tried a zpool clear farcryz1, but that didn't help at all; I still couldn't replace da4. So I tried a combination of onlining, offlining, clearing, replacing, and scrubbing. Now I am stuck here:



[root@chef /mnt/Chef]# zpool status -v farcryz1

pool: farcryz1
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-4J
scrub: scrub completed after 0h2m with 0 errors on Fri Sep 9 13:43:34 2011
config:


NAME STATE READ WRITE CKSUM
farcryz1 DEGRADED 0 0 0
raidz1 DEGRADED 0 0 0
da4 UNAVAIL 9 0 0 experienced I/O failures
da3 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0

errors: No known data errors
[root@chef /mnt/Chef]# zpool replace farcryz1 da4

cannot replace da4 with da4: da4 is busy


How can I recover from this situation, where one device in my zpool was unexpectedly disconnected (but is not a failed device) and is now back again, ready to be resilvered?






EDIT: As requested, a tail of dmesg:



(ses3:umass-sim4:4:0:1): removing device entry

(da4:umass-sim4:4:0:0): removing device entry
ugen3.2: at usbus3
umass4: on usbus3
da4 at umass-sim4 bus 4 scbus6 target 0 lun 0
da4: Fixed Direct Access SCSI-6 device
da4: 400.000MB/s transfers
da4: 1907697MB (3906963456 512 byte sectors: 255H 63S/T 243197C)
ses3 at umass-sim4 bus 4 scbus6 target 0 lun 1
ses3: Fixed Enclosure Services SCSI-6 device
ses3: 400.000MB/s transfers

ses3: SCSI-3 SES Device
GEOM: da4: partition 1 does not start on a track boundary.
GEOM: da4: partition 1 does not end on a track boundary.
GEOM: da4: partition 1 does not start on a track boundary.
GEOM: da4: partition 1 does not end on a track boundary.
ugen3.2: at usbus3 (disconnected)
umass4: at uhub3, port 1, addr 1 (disconnected)
(da4:umass-sim4:4:0:0): lost device
(da4:umass-sim4:4:0:0): removing device entry
(ses3:umass-sim4:4:0:1): lost device

(ses3:umass-sim4:4:0:1): removing device entry
ugen3.2: at usbus3
umass4: on usbus3
da4 at umass-sim4 bus 4 scbus6 target 0 lun 0
da4: Fixed Direct Access SCSI-6 device
da4: 400.000MB/s transfers
da4: 1907697MB (3906963456 512 byte sectors: 255H 63S/T 243197C)
ses3 at umass-sim4 bus 4 scbus6 target 0 lun 1
ses3: Fixed Enclosure Services SCSI-6 device
ses3: 400.000MB/s transfers

ses3: SCSI-3 SES Device

Answer




Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.




Looks like after the initial temporary failure, you may only have needed to do a zpool clear to clear the errors.




If you want to pretend that it's a drive replacement, you probably need to clear the data off the drive first before you try re-adding it to the pool.
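A sketch of what that could look like, assuming da4 is definitely the re-attached disk and your zpool version supports labelclear (otherwise zeroing the start and end of the disk achieves the same thing); double-check the device name before running anything destructive:

[root@chef /mnt/Chef]# zpool labelclear -f da4
[root@chef /mnt/Chef]# zpool replace farcryz1 da4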


Sunday, April 21, 2019

Good iptables starting rules for a webserver?



I am installing a new CentOS 5.4 server and I would like to have a clean set of rules for my iptables to start with.



What would be the good rules to start with?




Is this a good starting point :



# Allow outgoing traffic and disallow any passthroughs

iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -P FORWARD DROP

# Allow traffic already established to continue


iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow ssh, ftp and web services

iptables -A INPUT -p tcp --dport ssh -i eth0 -j ACCEPT
iptables -A INPUT -p tcp --dport ftp -i eth0 -j ACCEPT
iptables -A INPUT -p udp --dport ftp -i eth0 -j ACCEPT
iptables -A INPUT -p tcp --dport ftp-data -i eth0 -j ACCEPT
iptables -A INPUT -p udp --dport ftp-data -i eth0 -j ACCEPT

iptables -A INPUT -p tcp --dport 80 -i eth0 -j ACCEPT

# Allow local loopback services

iptables -A INPUT -i lo -j ACCEPT

# Allow pings

iptables -I INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
iptables -I INPUT -p icmp --icmp-type source-quench -j ACCEPT

iptables -I INPUT -p icmp --icmp-type time-exceeded -j ACCEPT


What is this rule for:



iptables -A INPUT -p tcp --dport domain -i eth0 -j ACCEPT


UPDATE :




It will be a web server with FTP (required), apache, SSH, mysql.


Answer



Your IPTables rules seem to be mostly appropriate for your server. But I would suggest a couple of possible changes:




  • Unless you need to allow SSH, MySQL, and FTP access from the entire Internet, it would be much more secure to use the '--source' option to restrict access on those ports from certain approved IP addresses, only. For instance, to only allow SSH access from the IP address 71.82.93.101, you'd change the 5th rule to 'iptables -A INPUT -p tcp --dport ssh --source 71.82.93.101 -i eth0 -j ACCEPT'. You'll probably need to add a separate rule for each individual IP address that you want to allow, see this question for more info on that: iptables multiple source IPs.


  • Unless this machine is running a DNS server, you'll probably want to block access to the 'domain' (53) port. To do this, just remove the line 'iptables -A INPUT -p tcp --dport domain -i eth0 -j ACCEPT'. (This should also answer your final question, BTW.) If you are actually running a DNS server, though, leave this rule in place.


  • If you need to allow remote MySQL client access over the network, you'll need to add the line 'iptables -A INPUT -p tcp --dport 3306 -i eth0 -j ACCEPT' to open up external access to the standard MySQL port. But DON'T do this unless it's really necessary--if you only need local MySQL access (for a PHP app running under Apache, say), you don't need to provide remote MySQL access. And unless you want to risk getting hacked, if you do open port 3306 to the network, make sure that you require strong passwords for all of the MySQL user accounts, and that your MySQL server packages are up-to-date.


  • One of your comments ('Allow ssh, dns, ldap, ftp and web services') mentions LDAP services, but there is no such rule in your configuration. This happens to me a lot when I copy an example configuration and modify it. It won't affect the function, but I would fix the comment, since misleading comments can cause problems indirectly by confusing you or another admin in the future.





In my experience, it's hard to come up with a perfect set of IPTables rules, but I think you're definitely on the right track. Also, good luck with learning more about IPTables--these rules can seem complex at first, but it's a very helpful skill for any Linux sysadmin to have.
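Pulling the suggestions above together, the extra or modified rules might look like this (using the same example IP address as above, which you would replace with your own admin addresses):

iptables -A INPUT -p tcp --dport ssh  --source 71.82.93.101 -i eth0 -j ACCEPT
iptables -A INPUT -p tcp --dport ftp  --source 71.82.93.101 -i eth0 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 --source 71.82.93.101 -i eth0 -j ACCEPT   # only if remote MySQL access is really needed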


Saturday, April 20, 2019

"Granted for all" as default in Apache 2.4 vhost configuration?



I don't use the sites-enabled/sites-available possibility, but include a list of vhost entries from one configuration file - so I was wondering if the following is possible?



An older installation I have on Apache 2.2 allows me to set one default vhost entry as the first entry - configure the dir settings - and have all the others follow suit - or so it appeared. I didn't have to set the directory settings for each vhost separately. (I'll add additional info if this is not clear)



However, since 2.4 it seems that I have to set the directory setting for each vhost entry? If I don't I get a 403 forbidden message right off the bat. Once I add the directory entry (granted for all) - all is fine.



Is there a possibility to set the directory settings (granted for all) as a default setting?


Answer




There are multiple contexts available for most Apache configuration directives.



The two you are concerned about are Server Config (aka Global) and Virtual Host. Most directives that can be applied to a specific virtual host can also be applied globally, outside of a <VirtualHost> container.



If a directive is used in the Server Config context, then it will apply to all virtual hosts unless a conflicting directive is specifically applied.
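For instance, a directory block placed in the global (server config) context, like the sketch below with a placeholder path, acts as the default for every vhost that doesn't override it:

# in httpd.conf / apache2.conf, outside any <VirtualHost> block
<Directory "/var/www">
    Require all granted
</Directory>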



In the official documentation for the directives of Apache and its modules, you will see the following keywords to describe which context they can be used in:



Snippet from Apache:




server config - This means that the directive may be used in the server configuration files (e.g., httpd.conf), but not within any <VirtualHost> or <Directory> containers. It is not allowed in .htaccess files at all.



virtual host - This context means that the directive may appear inside <VirtualHost> containers in the server configuration files.



directory - A directive marked as being valid in this context may be used inside <Directory>, <Location>, <Files>, <If>, and <Proxy> containers in the server configuration files, subject to the restrictions outlined in Configuration Sections.



.htaccess - If a directive is valid in this context, it means that it can appear inside per-directory .htaccess files. It may not be processed, though, depending upon the overrides currently active.



Source: Apache 2.4 Documentation


Thursday, April 18, 2019

ZFS delete snapshots with interdependencies and clones



Below is my list of ZFS volumes and snapshots, as well as the origin and clone for each.




I want to delete all the snapshots, but keep all the filesystems. How can I do this?



I have tried zfs promote followed by attempting to delete each filesystem for many different combinations of the filesystems. This shifts around where the snapshots "live"; for instance, zfs promote tank/containers/six moves snapshot F from tank/containers/three@F to tank/containers/six@F. The live data in the filesystem isn't modified (which is what I want!), but I still can't delete the snapshot (which is not what I want).



A typical zfs destroy attempt tells me it has dependent clones, some of which (the snapshots) I do want to destroy, but others of which (the filesystems) I do not want to destroy.



For example.



# zfs destroy tank/containers/six@A
cannot destroy 'tank/containers/six@A': snapshot has dependent clones

use '-R' to destroy the following datasets:
tank/containers/five
tank/containers/two@B
tank/containers/two


In the above example, I don't want to destroy tank/containers/five or tank/containers/two, but if I zfs promote five and two, I still can't destroy any snapshots. Is there a solution?



# zfs list -t all -o name,origin,clones
NAME ORIGIN CLONES

tank - -
tank/containers - -
tank/containers/five tank/containers/two@B -
tank/containers/four tank/containers/six@C -
tank/containers/one - -
tank/containers/one@E - tank/containers/three
tank/containers/two tank/containers/six@A -
tank/containers/two@B - tank/containers/five
tank/containers/six tank/containers/three@F -
tank/containers/six@A - tank/containers/two

tank/containers/six@C - tank/containers/four
tank/containers/three tank/containers/one@E -
tank/containers/three@F - tank/containers/six

Answer



AFAIK you're going to have to copy those datasets out to new, independent datasets. Promotion just switches around which dataset is "parent" vs "child", it doesn't actually break any dependencies if you want to keep both.



Eg:



root@box~# zfs snapshot tank/containers/six@1 

root@box~# zfs send tank/containers/six@1 | pv | zfs receive tank/containers/newsix
root@box~# zfs destroy -R tank/containers/six
root@box~# zfs destroy tank/containers/three@F
root@box~# zfs rename tank/containers/newsix tank/containers/six


Take your time and be sure of what you're doing. Especially with the actual deletions.



This replication is block-for-block, so if there's any significant data in there it WILL take a while. The pv part is strictly optional, but will give you a progress bar to look at while you wait.




Also maybe consider syncoid to automate the replication tasks, now and in the future. (Obligatory: I am the original author of this tool, which is GPLv3 licensed and free to use.)


linux - How do I prevent accidental rm -rf /*?




I just ran rm -rf /* accidentally, but I meant rm -rf ./* (notice the star after the slash).



alias rm='rm -i' and --preserve-root by default didn't save me, so are there any automatic safeguards for this?






I wasn't root and cancelled the command immediately, but there were some relaxed permissions somewhere or something because I noticed that my Bash prompt broke already. I don't want to rely on permissions and not being root (I could make the same mistake with sudo), and I don't want to hunt for mysterious bugs because of one missing file somewhere in the system, so, backups and sudo are good, but I would like something better for this specific case.







About thinking twice and using the brain. I am using it actually! But I'm using it to solve some complex programming task involving 10 different things. I'm immersed in this task deeply enough, there isn't any brain power left for checking flags and paths, I don't even think in terms of commands and arguments, I think in terms of actions like 'empty current dir', different part of my brain translates them to commands and sometimes it makes mistakes. I want the computer to correct them, at least the dangerous ones.


Answer



One of the tricks I follow is to put # in the beginning while using the rm command.



root@localhost:~# #rm -rf /


This prevents accidental execution of rm on the wrong file/directory. Once verified, remove # from the beginning. This trick works, because in Bash a word beginning with # causes that word and all remaining characters on that line to be ignored. So the command is simply ignored.



OR




If you want to prevent any important directory, there is one more trick.



Create a file named -i in that directory. How can such an odd file be created? Using touch -- -i or touch ./-i



Now try rm -rf *:



sachin@sachin-ThinkPad-T420:~$ touch {1..4}
sachin@sachin-ThinkPad-T420:~$ touch -- -i
sachin@sachin-ThinkPad-T420:~$ ls

1 2 3 4 -i
sachin@sachin-ThinkPad-T420:~$ rm -rf *
rm: remove regular empty file `1'? n
rm: remove regular empty file `2'?


Here the * will expand -i to the command line, so your command ultimately becomes rm -rf -i. Thus command will prompt before removal. You can put this file in your /, /home/, /etc/, etc.



OR




Use --preserve-root as an option to rm. In the rm included in newer coreutils packages, this option is the default.



--preserve-root
do not remove `/' (default)


OR



Use safe-rm




Excerpt from the web site:




Safe-rm is a safety tool intended to prevent the accidental deletion
of important files by replacing /bin/rm with a wrapper, which checks
the given arguments against a configurable blacklist of files and
directories that should never be removed.



Users who attempt to delete one of these protected files or
directories will not be able to do so and will be shown a warning

message instead:



$ rm -rf /usr
Skipping /usr


Wednesday, April 17, 2019

domain name system - Failover for server with dual WAN

I have one mail server, one SonicWall firewall, two Internet providers, and an internal DNS server. I have WAN failover set up on the SonicWall so in the event that the primary connection is down users will get out on the secondary connection. The mail server is accessible from outside via either ISP.



mail.mydomain.com uses the primary Internet connection, and is the primary MX record.



mail2.mydomain.com uses the secondary Internet connection, and is the secondary MX record.




Webmail, mail clients and smartphones can use either address to connect when outside the LAN (only mail.mydomain.com works internally because of the internal DNS server), but when the primary ISP is down users need to know to use mail2.mydomain.com and smartphones don't connect as they are configured for mail.mydomain.com



I'd like to automatically detect when the primary Internet connection is down so mail.mydomain.com connects over either WAN connection.



I think BGP and DNS failover are my options, and I'm wondering if a load balancer is a possible solution and how that would fit into the setup. BGP is not an option with the internet providers I have. DNS failover with dyn.com or dnsmadeeasy.com is an option, but I'm concerned that the user's ISPs won't respect the short TTL and this won't be effective for short outages.

domain name system - Microsoft DNS 'Virtual' subdomain?

I've been studying DNS, and would like to know if/how this is possible in MS DNS -



Say you have an AD domain - domain.com - in a main office. The subnets here might be 10.0.0.0/24 - 10.0.10.0/24, but they all pull dhcp from the DC and become hostx.domain.com.



10.0.11.0/24 is a branch office (mpls/vpn) which isn't a part of the AD domain, and hosts there are configured with static IP's, and thus have to be referred to via IP address for administrative purposes.



Now let's say I'd like there to be a branch.domain.com subdomain. Would it be possible to configure these devices to pull DHCP from the central server and receive FQDNs like hostx.branch.domain.com?




Or, even if I were to leave their static configurations intact, and just wanted to use DNS as a more convenient way to access remote devices - is it possible just to create a record that will point hostx.branch.domain.com to that device?



(The reason I'd rather not create a new DNS host 'branch' is that in reality there are >50 branches in our network, and the only devices on those networks are printers, switches, etc., so that would be pretty inefficient. My first thoughts would be either to create aliases for the main DNS server and have it refer to itself for these lookups, or maybe to add just one more DNS server, with an alias for each branch pointing to it, and use this secondary server to hand out DHCP to branch devices. Edit: Or would it be as simple as adding a forward lookup zone for each branch?)
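For what it's worth, once a zone (or a branch.domain.com subdomain within the existing zone) exists on the Microsoft DNS server, a static record for a branch device can be added from the command line roughly like this (server, zone, host and IP are placeholders):

dnscmd DC1 /RecordAdd domain.com hostx.branch A 10.0.11.5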

Tuesday, April 16, 2019

Preserving file rights when copying a folder on Windows Server

We have a shared network drive running Windows Server at work.




One of the folders contains sensitive information that should only be visible to a small group of people.



The problem is that if one of those people copies and pastes a folder that has read permissions for everyone into the sensitive folder, anyone will be able to access that folder if they go directly to the full path.



Is there any way to set up the file server to make 100% sure that all files and folders created or copied anywhere in the tree under x:\sensitive will have the same restricted rights as x:\sensitive?
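As a practical mitigation (a sketch, not a complete policy answer), the explicit ACLs on anything placed under the sensitive tree can be stripped so that only the inherited, restricted permissions remain, for example with icacls:

icacls X:\sensitive /reset /T /C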

Monday, April 15, 2019

ntp - all my ntpd servers marked falsetick, why?

I have a set of four ntpd servers that sync time from the same stratum 1 server.

But on some clients they are all marked as falsetickers. Why?



     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
x10.201.24.36 209.51.161.238 2 u 12 64 377 0.177 464.794 1.136
x10.201.13.99 209.51.161.238 2 u 37 64 377 0.148 463.427 0.541
x10.201.24.37 209.51.161.238 2 u 817 1024 377 0.174 462.235 0.143
x10.201.12.198 209.51.161.238 2 u 853 1024 377 0.158 462.151 302.364
*127.127.1.0 .LOCL. 10 l 48 64 377 0.000 0.000 0.004



They recover from time to time, but why does it happen at all?
Also another question is why do I have such a big offset?
I tried to leave only 1 ntp server, but offset doesn't go down anyway.



     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*10.201.12.198 209.51.161.238 2 u 64 64 7 0.108 470.963 0.200
127.127.1.0 .LOCL. 10 l 136 64 14 0.000 0.000 0.001



I am running CentOS 7 and all servers are in the same network.



ntp.conf:



driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod limited nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict lga-ntp03.pulse.prod

server lga-ntp03.pulse.prod iburst burst
restrict lga-ntp06.pulse.prod
server lga-ntp06.pulse.prod iburst burst
restrict lga-ntp05.pulse.prod
server lga-ntp05.pulse.prod iburst burst
restrict lga-ntp01.pulse.prod
server lga-ntp01.pulse.prod iburst burst
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys



Update #1
After removing LOCL (Thanks to @John Mahowald), I am still getting my servers marked as falsetick:



     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
x10.201.24.36 209.51.161.238 2 u 886 1024 7 0.177 -330.92 0.172
*10.201.13.99 209.51.161.238 2 u 773 1024 17 0.203 -152.68 0.090
x10.201.24.37 209.51.161.238 2 u 750 1024 17 0.167 94.101 0.468
x10.201.12.198 209.51.161.238 2 u 409 1024 17 0.129 51.831 0.176

virtualization - Hypervisor load on local disks

What is the I/O load on the local disk system for the host OS in XenServer? I can't find this info anywhere. As we have a SAN for the VMs themselves, can we get away with cheap controller / SATA disk in RAID-1 for the hypervisor? We won't boot from the SAN as it seems prone to problems.

.htaccess - One wildcard SSL certificate to work on my subdomain sites using wildcard subdomains, and on my single subdomain site not using the wildcard subdomain

I have a WordPress multisite on the domain mysite.com, which allows you to create multiple subdomain sites (test1.example.com, test2.example.com, etc.).
It uses wildcard subdomains, where you add a * in cPanel > Subdomains. Here is the .htaccess code:




RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]

RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule ^(wp-(content|admin|includes).*) $1 [L]
RewriteRule ^(.*\.php)$ $1 [L]
RewriteRule . index.php [L]



I can configure a wildcard SSL certificate to work for this, all subdomains on this WordPress network will have https.



I also have a non-WordPress site running on the same domain (myothersite.mysite.com). This site doesn't use wildcard subdomains; I have to set up the subdomain name in cPanel > Subdomains by adding the subdomain 'myothersite', as it won't work with the * (wildcard subdomain).



My question is, is there a configuration that will allow me to use the same wildcard SSL certificate on my non WordPress site?



Because at the moment, I need 2 SSL certificates. I need a wildcard SSL certificate which handles all the subdomains on my WordPress multisite, and a single SSL certificate for my non WordPress subdomain site.
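For what it's worth, a certificate for *.mysite.com is not tied to how a subdomain was created in cPanel; a sketch of an Apache vhost for the non-WordPress subdomain reusing the same wildcard certificate files (paths are placeholders) would be along these lines:

<VirtualHost *:443>
    ServerName myothersite.mysite.com
    DocumentRoot /home/user/public_html/myothersite
    SSLEngine on
    SSLCertificateFile      /path/to/wildcard.mysite.com.crt
    SSLCertificateKeyFile   /path/to/wildcard.mysite.com.key
    SSLCertificateChainFile /path/to/ca-bundle.crt
</VirtualHost>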




Any help appreciated.

Sunday, April 14, 2019

spam - legit emails in junkbox

Hey, this is actually a reverse question.
My personal email (firstname.lastname@gmail.com) is winding up in many people's junk boxes and I have no idea why.



What may the cause be? Is it because it has the word Entrepreneur (and programmer) in my sig? Is it because my first name is unusual (European-like)?



It's driving me crazy. I send out dozens of business emails a month to people I've just met, so it's actually hurting me much more than others :(



-edit-




I also want to mention this is not spam. Typically I email people I meet to say hi or to follow up. I was asked by someone to send him an email so I could test something; I did, and he replied 10 days later telling me he had found it in his junk folder, as many others have told me.



-edit-



bortzmeyer suggested emailing check-auth@verifier.port25.com I did and here are the results



SPF check:          pass
DomainKeys check: pass
DKIM check: pass
Sender-ID check: pass

SpamAssassin check: ham

----------------------------------------------------------
SpamAssassin check details:
----------------------------------------------------------
SpamAssassin v3.2.5 (2008-06-10)

Result: ham (-2.6 points, 5.0 required)

pts rule name description

---- ---------------------- --------------------------------------------------
-0.0 SPF_PASS SPF: sender matches SPF record
-2.6 BAYES_00 BODY: Bayesian spam probability is 0 to 1%
[score: 0.0000]
0.0 HTML_MESSAGE BODY: HTML included in message

Saturday, April 13, 2019

windows server 2012 - How do i use storage spaces?



I am planning on building a new Windows 2012 server for a client and I have no experience in doing so. I have built many Linux servers for them, and setting up software RAID during the install is a trivial matter. I have been unable to confirm that the Windows 2012 install process has an analogous process for setting up Storage Spaces during the install.




  1. Can Storage Spaces be used as an installation target (configured during setup?)

  2. Is it capable of mirror+stripe (RAID10)?




We have not ordered the hardware yet, so I'm looking for clarification.


Answer



Storage spaces is not supported on boot, system, or CSV volumes. It's all done post-installation.



You can span (sucky), mirror (smart) or parity (I think it's equivalent to R5 and R6). There is no striping.



I suggest, at the moment, avoid if you have a decent RAID controller that can do online expansion. I see that you're ordering new hardware - spend the $500 and order it with a good RAID controller - you'll save yourself a lot of hassle in the long term.




RAID in Windows has always been a bit of an unloved child. I use it for mirroring in el cheapo servers, but that's it. That said, you can add a mirror to your boot drive after installing the operating system should you so choose.
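If you do end up using Storage Spaces after the OS is installed, a two-way mirror for the data disks can be created from PowerShell along these lines (the friendly names are placeholders):

# pool all disks that are eligible, then carve out a mirrored virtual disk
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataMirror" -ResiliencySettingName Mirror -UseMaximumSize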


Unable to authenticate MediaWiki (on LAMP/Ubuntu) users using AD (on Windows Server 2012 R2) via LDAP



I am working in an AD domain with a single DC running Windows Server 2012 R2. On the domain's network (though not formally domain-joined) is a LAMP web server running Ubuntu Server 14.04.3 LTS. All machines are able to reach one another by both IP address and DNS record, and the LAMP stack is (as far as I can tell) appropriately configured; HTTP requests are served as expected.




The aim is to set up an instance of MediaWiki on the LAMP server. Moreover, MediaWiki should - using Ryan Lane's excellent extension LdapAuthentication - contact the AD DC to authenticate user logins.



I have tried to follow setup instructions as closely to the book as possible. LAMP installation is mostly taken care of by the Ubuntu Server installer, and I additionally install via apt-get the packages php5-intl php5-gd texlive php5-xcache imagemagick mediawiki mediawiki-math and their dependencies.



I next uncomment the #Alias... line in /etc/mediawiki/apache.conf, run the commands a2enconf mediawiki and php5enmod mcrypt, and lastly install the LdapAuthentication MediaWiki extension according to tutorials at the author's website.



Appended to my /etc/mediawiki/LocalSettings.php are:



ldap_set_option(NULL, LDAP_OPT_DEBUG_LEVEL, 7);


require_once("$IP/extensions/LdapAuthentication/LdapAuthentication.php");

$wgLDAPDebug = 3;
$wgDebugLogGroups["ldap"] = "/tmp/debug.log";

$wgAuth = new LdapAuthenticationPlugin();

$wgLDAPDomainNames = array("MYDOMAIN");
$wgLDAPServerNames = array("MYDOMAIN" => "addc.local.domain.com");
$wgLDAPSearchStrings = array("MYDOMAIN" => "USER-NAME@local.domain.com");


$wgLDAPEncryptionType = array("MYDOMAIN" => "tls");


I next add the AD DC's self-signed CA certificate to /etc/ssl/certs on the LAMP server, run c_rehash, and restart everything.



At this point I am able to get into MediaWiki and navigate to the login form no problem. The login form shows MYDOMAIN, and PHP reports no errors - the LdapAuthentication plugin looks good to go.



When I try to login using an AD credential set, however, MediaWiki reports a wrong password. A PHP error on the web page reports that PHP was unable to start TLS (Warning: ldap_start_tls(): Unable to start TLS: Connect error in...), and this same message is reconfirmed by the LdapAuthentication plugin's debug log which I set earlier to /tmp/debug.log.




Looking now at the AD DC, I note the following event in the system log:



Error from Schannel, Event ID 36874
An TLS 1.2 connection request was received from a remote client application, but none of the cipher suites supported by the client application are supported by the server. The SSL connection request has failed.


This error coincides with repeated attempts to authenticate user logins on MediaWiki with AD via LDAP.



I don't know enough about managing cipher suites to approach resolving this issue. Moreover, days upon days of Google searching hasn't yielded me any productive results. Could someone point me in the right direction?


Answer




After hacking at the LdapAuthentication (version 2.0d) MediaWiki extension and introducing my own break points, I was able to track down the problem. It turns out that binding to my AD server over SSL only works following a call to PHP's ldap_connect() that looks like,



ldap_connect("myserver.com", 636);


rather than,



ldap_connect("ldaps://myserver.com:636");



upon which LdapAuthenticate insists.



I note also that LdapAuthenticate wraps PHP's ldap_connect() in the following way,



public static function ldap_connect( $hostname=null, $port=389 ) {
wfSuppressWarnings();
$ret = ldap_connect( $hostname, $port );
wfRestoreWarnings();
return $ret;
}



but then only ever passes in $hostname, as a URI formatted as one of,



ldapi://myserver.com:port
ldap://myserver.com:port
ldaps://myserver.com:port


leaving $port at its default value of 389. This means that if, say, you're attempting to bind over SSL, the actual call to PHP's ldap_connect() looks like this,




ldap_connect("ldaps://myserver.com:636", 389);


which must be causing some problems!



I think this might be a bug in LdapAuthentication, but then again it might just be me totally misinterpreting the PHP (it's been a while). In any case, I managed to get the authentication working by hacking LdapAuthentication to force a call to,



ldap_connect("myserver.com", 636);



This does, however, leave one unanswered question. Why does my AD server happily accept binds to myserver.com, but not to ldaps://myserver.com? (I have confirmed that this is the case using ldp.exe on Windows.) I suspect that this is a DNS issue, since myserver.com is actually handled by a record on my local DNS server (where I may well have missed a trick!). I will ask another question elsewhere!
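
For anyone who wants to reproduce the two behaviours outside MediaWiki, here is a minimal standalone PHP sketch of the kind of test I used (hostname and credentials are placeholders, and this is not the extension's own code):

<?php
// Standalone LDAP bind test; adjust host and credentials to your environment.
$host = "addc.local.domain.com";
$user = "USER-NAME@local.domain.com";
$pass = "secret";

// Variant 1: plain hostname plus an explicit port, the form that worked for me.
$conn = ldap_connect($host, 636);
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($conn, LDAP_OPT_REFERRALS, 0);
var_dump(@ldap_bind($conn, $user, $pass));

// Variant 2: the ldaps:// URI form that LdapAuthentication builds internally.
$conn = ldap_connect("ldaps://$host:636");
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($conn, LDAP_OPT_REFERRALS, 0);
var_dump(@ldap_bind($conn, $user, $pass));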


php - Website has links to Malware, Caused by virus



The computer that I do most of my web development work on caught a virus. A website that I am currently working on was compromised (I think via phpDesigner's stored FTP password).



I currently get :









right at the end of every file whose name starts with index on that domain.



Currently I am combing through every file on the server named index (and others at random) for this change and removing it, but this is a lengthy process and I am not sure whether it is the right or complete fix.
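
Roughly, what I mean by combing through is something like the following (the path and the search string are placeholders, since I haven't reproduced the injected snippet above):

find /path/to/docroot -name 'index*' -mtime -30 -exec ls -l {} \;     # list index files changed in the last 30 days
grep -rl 'distinctive-string-from-the-injection' /path/to/docroot     # list every file containing the injected code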



What is the best way to deal with this type of scenario?
(The virus on the PC has been cleaned.)



Answer



This is related to this question:



https://stackoverflow.com/questions/3393888/how-can-i-remove-script-virus-from-my-script/


kvm virtualization - KVM and virtual to physical CPU mapping



I'm a relative latecomer to the virtualisation party, so you'll have to forgive me if this seems like an obvious question.



If I have a server with 12 cores available, does each KVM guest have access to all 12 cores? I understand KVM makes use of the Linux scheduler, but that's where my understanding of "what happens next" ends.



My reason for asking is, the 10 or so distinct tasks we are intending to run in KVM guests (for purposes of isolation to facilitate upgrades) won't utilise a single core 100% of the time, so on that basis it seems wasteful to have to allocate 1 virtual CPU to each guest - we'll be out of cores from the get-go with a "full", idle server to show for it.




Put another way, assuming my description above, does 1 virtual CPU actually equate to 12 physical cores in terms of processing power? Or is that not how it works?



Many thanks



Steve


Answer



A virtual CPU equates to 1 physical core, but when your VM attempts to process something, it can potentially run on any of the cores that happen to be available at that moment. The scheduler handles this, and the VM is not aware of it. You can assign multiple vCPUs to a VM which allows it to run concurrently across several cores.



Cores are shared between all VMs as needed, so you could have a 4-core system, and 10 VMs running on it with 2 vCPUs assigned to each. VMs share all the cores in your system quite efficiently as determined by the scheduler. This is one of the main benefits of virtualization - making the most use of under-subscribed resources to power multiple OS instances.




If your VMs are so busy that they have to contend for CPU, the outcome is simply that some VMs wait their turn for CPU time. Again, this is transparent to the VM and handled by the scheduler.



I'm not familiar with KVM but all of the above is generic behavior for most virtualization systems.
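
(Purely as an illustrative aside that goes beyond the generic answer above, and an assumption on my part: on KVM managed through libvirt, the per-guest vCPU count is just a setting that can be inspected and changed with virsh. The guest name below is hypothetical.)

virsh vcpuinfo web01              # show the guest's current vCPU allocation
virsh setvcpus web01 2 --config   # persistently allocate 2 vCPUs (takes effect at next boot)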


untagged - How to make up for System Administration "experience"




Background



I have been a part-time Junior SysAdmin at a college for 5 years now. I am now looking for a full-time position as a Linux SysAdmin. I believe I am very capable and have made it to several second-round, even third-round, interviews. However, I keep getting rejected on the grounds that I "lack experience".



How is one supposed to gain or make up for this "experience"?



I know there are similar questions on SF, but they do not address my issue.



Previous questions




Gaining SysAdmin skills



I've taken the technical tests that hiring managers have administered to me and have done fairly well. In fact, last week, the person administering the exam said I was correct on a couple of questions that neither the previous candidates nor most of the already-employed team answered correctly. However, today I got a call from the manager saying they went with someone with more experience. So it is not a question of skills.



Gaining SysAdmin experience



I run a small network at home that includes everything from a custom iptables firewall to Samba shares. Despite being only a part-time Junior SysAdmin in the past, I've played crucial roles in countless projects, right alongside Senior SysAdmins. I could confidently say I've held my own.



So my questions...





  • How do I go about gaining this "experience"? Perhaps receive certifications?


  • Maybe Junior SysAdmin wasn't the proper entry-level job?


  • Should I be looking for something else?


  • Are these just lame excuses not to hire me and maybe I'm putting too much value on it?




Any hiring managers that want to chime in: PLEASE do.




Come on my SF people. Cheer me up here by giving me hope. I've heard the "lack of experience" reason 3 times already and it's admittedly eating at my confidence.


Answer



I am in a position that hires people.



Did you speak to the people who interviewed you and they were the ones that told you you "lacked experience"? A part-time admin for 5 years translates into roughly 2 years of full-time experience. That isn't a LOT of experience and you may never get a "real" reason since it seems that too many people are afraid of getting sued, but I digress.



Do you have any letters of reference? We have no idea what your resume looks like. Are you dressing appropriately for interviews? Are you attentive during interviews and asking questions? There are so many variables that could be coming into play here.



Just keep plugging away at your skill set. When I hire people, several things are extremely important to me. Ambition, drive, motivation, problem solving abilities, people skills, a sense of humor, and attitude. VERY seldom is number of years of experience an important factor to me.




I cannot emphasize this enough, either: dress like you want the job.


Thursday, April 11, 2019

windows - netbios domain rename



We have Windows Server 2008 R2 domain controllers. The forest and domain functional level is Windows Server 2003 (it's possible to raise the levels). We have an Exchange 2010 server.



I would like to rename the NetBIOS domain name (the pre-Windows 2000 domain name). I don't want to touch the DNS domain name.



So if I just rename the NetBIOS domain name, is there any impact on the Exchange organisation?




A best practice for renaming the NetBIOS domain name would be nice.


Answer



I have seen this request numerous times in numerous forums. Rather than reinvent the wheel, I want to point you to a blog article written by Ace Fekay that does an excellent job explaining how to perform this task.



http://msmvps.com/blogs/acefekay/archive/2009/08/19/domain-rename-with-or-without-exchange.aspx


windows - Is it possible to have the realname from an email propagated through to outlook



We are using JIRA to manage our projects. For various reasons, it was necessary to move the email account from Unix to an Exchange 2007 server.




We have added a mailbox for the JIRA account, since we also want to receive mail back.



In the Postfix era, Outlook did not know the JIRA account and thus displayed the full name from the email header, with the beautiful effect that the name of the person posting the comment was set as the sender's name in Outlook (like: Thomas [jira]). Now, since Outlook knows the sender, the email shows up as coming from "JIRA".



We miss this piece of comfort a lot, since it made filtering by name very easy.



Is there any way to configure Exchange to pass through the full name for that account only? Or alternatively, how can we configure Exchange to accept mail from the JIRA server without requiring authentication (anonymous sending)?



Thanks in advance!




Thomas


Answer



The short answer is that you can't. If Exchange has an SMTP address on a mailbox or a recipient, the display of that address is going to get changed to the display name of the mailbox or recipient. There is no way around it. And switching to anonymous authentication will not solve this problem, because the address is matched not on the authenticating user but on the address in the header.


raid - Why would a RAID5 rebuild fail?



I have an IBM System x3650 server with a ServeRAID controller and two RAID5 arrays, each consisting of 3 disks.



Yesterday, one disk failed (it was in the RAID array that holds the data; the system is located on the other, still-sound array). I naively trusted the RAID controller to rebuild the array. I shut down the server and replaced the failed disk with a new, similar one. I booted into the controller BIOS, where I could see that it recognized the new disk and was ready to rebuild (I had nothing to do; everything was automatic). I started the server and it rebuilt the array.




This morning everything seemed OK. The rebuild was finished and the array seemed sound. Only a few hours later, the MySQL service crashed with a corrupted database. I managed to dump the data partially and restored the rest from backup. I thought I was OK.



But then I found that some active log files were corrupt: they included blocks from different random files. If I understand the situation correctly, only files modified since the rebuild started are corrupted, but I'm not yet 100% sure of this. Somehow, the rebuild must have corrupted the data.



I ask this question to learn from the error. I hope there will never be a next time...



What can be the reason that the rebuild failed? What can I do better next time?
Is it compulsory to take the server off the network during a rebuild? I thought the controller should be able to handle the rebuild concurrently with ordinary reads and writes.
Or should this never happen at all, and maybe the controller is faulty?


Answer



From your description, it seems that the rebuild did not fail, in the sense that the array was up and running. However, it seems that the rebuild process caused some blocks to be wrongly placed/remapped, which is an extraordinarily rare but dangerous thing.




I suggest you take the time to examine the situation. Did you read/follow the RAID card manual? Are you 100% sure that you did the right things? If the reply to both questions is "yes", you should immediately open a support case with your server vendor/consultant.


domain name system - Does CNAME only work for subdomains?

Does CNAME only work for subdomains? I wonder because I want to redirect from the domain name I registered, say, www.mycoolname.com, to my page at one of the famous social network web sites, say, www.fakebooked.net/coolnameofmine. Does anybody know if that is possible to accomplish by changing some DNS settings?
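
To make the question concrete: as far as I understand it, a CNAME only maps one hostname to another hostname and can never carry a URL path, so in zone-file terms the closest I could get would be something like:

www.mycoolname.com.    IN    CNAME    www.fakebooked.net.

which would point at the site but could not include the /coolnameofmine part.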

Wednesday, April 10, 2019

hard drive - SSD or HDD for server

Issue




I have read many discussions about storage and whether SSDs or classic HDDs are better. I am quite confused. HDDs still seem to be preferred, but why?



Which is better for active storage? For example for databases, where the disk is active all the time?



About SSD.



Pros.





  • They are quiet.

  • Not mechanical.

  • Faster.



Cons.




  • More expensive.




Question.




  • When one cell of an SSD reaches the end of its write life cycle, what happens then? Is the disk reduced by only that cell and otherwise works normally?

  • What is the best filesystem to use on it? Is ext4 good because it writes to cells consecutively?



About HDD.




Pros.





Cons.




  • In case of mechanical fault, I believe there is usually no way to repair it. (Please confirm.)

  • Slower, although I think HDD speed is usually sufficient for servers.




Is it just about price? Why are HDDs preferred? And are SSDs really useful for servers?

php - Get nginx location wildcard inside location block?

I am trying to set up an API where I can access API "foobar" through the URL http://my-apis.com/foobar/route. This is what I have so far:




location ~ ^/foobar(/.*)$ {
root /var/www/mysite/foobar/public;

... more fastcgi stuff ...

fastcgi_param SCRIPT_FILENAME $document_root/index.php$1;
fastcgi_param PATH_INFO $fastcgi_path_info;
}



The API is routing to a Slim framework application, and currently it successfully routes to the correct index.php, showing an nginx 404/403 whenever the URL does not start with /foobar. However, the route passed to Slim (which looks like it's represented by $1 on line 6) is still the full /foobar/route. This means I have to prefix all my Slim routes with /foobar, which, although I can use a Slim group, is still a pain. I would like to be able to pass just the /route bit to Slim.



Is there a way I can extract just the wildcard-matched bit of the location directive, since $1 gives the full route? Alternatively, I might be able to do this with some kind of rewrite, but I don't know enough about Slim.
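
One direction I've been sketching (untested, and I'm not sure whether Slim would actually pick up PATH_INFO rather than REQUEST_URI) is to use a named capture so that only the sub-path is handed to PHP:

location ~ ^/foobar(?<apipath>/.*)$ {
root /var/www/mysite/foobar/public;

... more fastcgi stuff ...

fastcgi_param SCRIPT_FILENAME $document_root/index.php;
fastcgi_param PATH_INFO       $apipath;
}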



Any help would be much appreciated!



Thanks!

Tuesday, April 9, 2019

website - Get javascript load calling stack

Is there some tool that will give me a load stack of the JavaScript files our web site loads? Our page is slow because some of the JavaScript files load slowly, but they're not files we're loading directly. So one of the files we're loading is loading them.



I need to find the call stack that leads to loading the slow files (one is on Google Drive, not the fastest response ever).

windows - How to send ctrl+alt+del using Remote Desktop?



How can I send ctrl+alt+del to a remote computer over Remote Desktop?



For example, if I wanted to change the local admin password on a remote PC using a Remote Desktop connection, it would be helpful to be able to send the ctrl+alt+del key sequence to the remote computer.



I would normally do this by pressing ctrl+alt+del and selecting the change password option. But I can't send ctrl+alt+del using Remote Desktop since this "special" key series is always handled by the local client.


Answer



ctrl+alt+end is the prescribed way to do this.




Coding Horror has some other shortcuts.


active directory - Redoing AD: How to completely remove old domain from the network without re-installing Windows?



We currently have two DCs for our one domain, but as of now nobody is actually authenticating to them, so I'd like to take the chance to install it correctly. The domain was set up before I was hired, was done sloppily, and is also not using the correct naming structure as per this MdMarra post. http://www.mdmarra.com/2013/04/best-practices-for-configuring-new.html



I've decommissioned DCs in the past, seized/transferred roles, etc., but have never tried to completely remove a domain from the network. Will the "/forceremoval" switch plus removing metadata be enough?



I'd really like to avoid re-installing Windows.




Other Info: Both on Server 2008 R2. Both have DNS installed. DC1 resides in 192.168.1.x/24 AND 192.168.2.x/24 and runs DHCP for both subnets. DC2 is on 192.168.2.x/24.


Answer



AD DS is a server role that can be removed just like any other server role. Run DCPROMO on both DC's to demote them. When you demote the last DC make sure to select the option that it is the last DC in the domain. This will revert both DC's to standalone servers.



You're probably going to need to revisit and likely reconfigure DHCP and DNS in order to continue to serve your network clients.



EDIT:



Here's my opinion on some of the issues you related in your comment:




rDNS zone missing: an rDNS zone isn't a requirement for AD. It's a preference. There isn't any function of AD that needs or requires an rDNS zone. I personally prefer to create an rDNS zone.



AD Recycle Bin not enabled: Again, this is a preference and not a requirement. I prefer to enable it.



IPv6 enabled: This is debatable. I'm not convinced that it should be disabled. I know that there's a lot of information on the internet for and against but I've never had an issue leaving it enabled and I haven't seen any technical information from MS that recommends disabling it.



No Replication: If the DCs aren't replicating, then that's definitely a problem that would need to be resolved if you were leaving the domain intact.


Monday, April 8, 2019

Use or don't use virtualization for Linux Webserver?



I maintain the servers for a big web project (Java + Postgres + some tools around it), which is currently hosted on three machines:





  1. Machine: Mailserver (postfix), Ad-Server (lighttpd + php + openx)

  2. Machine: Tomcat + Servlet

  3. Machine: PostgreSQL-Server, static content (via lighttpd)



All machines run Debian Stable and are connected via a VPN (OpenVPN). As the hardware is very old (AMD Athlon 3000+ and 2 GB RAM on each), it's time for a change.



These servers should now be replaced by one big machine (16 GB RAM, big Intel CPUs supporting VT, 5 IPs).




The question now is: should I still separate the different tasks using virtual machines, or should I simply put everything on the machine as before? What are the pros and cons?



I thought of the following:



Pro Virtualization:




  • Security: As the VMs are separated, a compromise of one (hopefully) can't take over the whole machine




Con Virtualization:




  • Performance: There is a performance loss

  • Work: Every maintenance task has to be done several times, once for every VM

  • Communication: Communication between the different VMs (servlet to database) gets more complicated.

  • Hard Memory Limits: I have to assign static resources (like memory) to each machine. This can be a con if, say, my DB server needs more RAM for 30 seconds than it was assigned while spare RAM sits idle on the other VMs. With no virtualization this wouldn't be a problem.



Thanks for any hints.



Answer



Performance loss - Yes, technically there is one. Is it something you or your users will notice? Unless it's some crazy high-end workload, or you horribly over-provision VMs* (or are trying to squeeze 5 VMs with a "normal" RAM allocation onto an old, existing server), I seriously doubt it. Remember to actually check your RAM usage - if you're splitting everything up, you don't need, say, 512 MB for an NTP server that defaults to runlevel 3. (Splitting off JUST an NTP server is excessive; it was merely an example.)



Work - This is true. If it's only going from, say, one to three servers, it's probably not that big a deal - do your change, copy/paste your commands from one terminal session to the other. Past that, though, you want some kind of management tool; I'm currently looking at Puppet.



*Memory Limits - Depends on the virtualization solution you use. Some environments, like ESX/vSphere, allow you to allocate more RAM to VMs than is physically available. If you pay for the feature, ESX lets you set up resource pools and will automatically adjust resources as needed, with the ability to set priorities. Like everything, you have to know how it works and the tradeoffs in a particular environment.


postfix-policyd-spf-python - spoof protection - spf checks FAIL but no action taken - why?

I've installed postfix-policyd-spf-python and configured the postfix integration according to the docs.



This is my policyd-spf.conf config file:



debugLevel = 1 
TestOnly = 0


HELO_reject = SPF_Not_Pass
Mail_From_reject = Fail

PermError_reject = False
TempError_Defer = False

skip_addresses = 127.0.0.0/8,::ffff:127.0.0.0/104,::1


Incoming emails from foreign mail servers get checked and flagged correctly. But when I check for spoof protection, somehow the emails go through:




$ telnet mail.example.com 25

Connected to mail.example.com.
Escape character is '^]'.
220 mail.example.com ESMTP Postfix
helo asd.somedomain.com
250 mail.example.com
mail from: me@example.com
250 2.1.0 Ok

rcpt to: test@example.com
250 2.1.5 Ok
data
354 End data with <CR><LF>.<CR><LF>
from: "ME"
to: "test"
subject: test

asdasd klajsdlaksjd


thanks!
.

250 2.0.0 Ok: queued as 8C9EC1260E1


In my view, this email should NOT be delivered.



Here's the debugging output from postfix-policyd-spf-python:




policyd-spf[34414]: Found the end of entry
policyd-spf[34414]: Config: {'debugLevel': 5, 'HELO_reject': 'SPF_Not_Pass', 'Mail_From_reject': 'Fail', 'PermError_reject': 'False', 'TempError_Defer': 'False', 'skip_addresses': '127.0.0.0/8,::ffff:127.0.0.0/104,::1', 'TestOnly': 0, 'SPF_Enhanced_Status_Codes': 'Yes', 'Header_Type': 'SPF', 'Hide_Receiver': 'Yes', 'Authserv_Id': 'mail.example.com', 'Lookup_Time': 20, 'Whitelist_Lookup_Time': 10, 'Void_Limit': 2, 'Reason_Message': 'Message {rejectdefer} due to: {spf}. Please see {url}', 'No_Mail': False, 'Mock': False}
policyd-spf[34414]: Cached data for this instance: []

policyd-spf[34414]: skip_addresses enabled.

policyd-spf[34414]: _get_resultcodes: scope: helo, Reject_Not_Pass_Domains: None, helo_policy: SPF_Not_Pass, mfrom_policy: Fail
policyd-spf[34414]: Scope helo unused results: ['Pass', 'None', 'Temperror', 'Permerror']
policyd-spf[34414]: helo policy true results: actions: {'defer': [], 'reject': ['Fail', 'Softfail', 'Neutral'], 'prepend': ['Pass', 'None', 'Temperror', 'Permerror']} local {'local_helo': False, 'local_mfrom': False}
policyd-spf[34414]: spfcheck: pyspf result: "['None', '', 'helo']"

policyd-spf[34414]: None; identity=no SPF record; client-ip=xx.xx.xx.xx; helo=asd.somedomain.com; envelope-from=me@example.com; receiver=


policyd-spf[34414]: _get_resultcodes: scope: mfrom, Reject_Not_Pass_Domains: None, helo_policy: SPF_Not_Pass, mfrom_policy: Fail
policyd-spf[34414]: Scope mfrom unused results: ['Pass', 'None', 'Neutral', 'Softfail', 'Temperror', 'Permerror']
policyd-spf[34414]: mfrom policy true results: actions: {'defer': [], 'reject': ['Fail'], 'prepend': ['Pass', 'None', 'Neutral', 'Softfail', 'Temperror', 'Permerror']} local {'local_helo': False, 'local_mfrom': False}
policyd-spf[34414]: spfcheck: pyspf result: "['Fail', 'SPF fail - not authorized', 'mailfrom']"

policyd-spf[34414]: Fail; identity=mailfrom; client-ip=xx.xx.xx.xx; helo=asd.somedomain.com; envelope-from=me@example.com; receiver=



policyd-spf[34414]: Action: None: Text: None Reject action: 550 5.7.23


As we can see from the log files, the SPF check does return:



spfcheck: pyspf result: "['Fail', 'SPF fail - not authorized', 'mailfrom']"



however, the last line reads:




Action: None: Text: None Reject action: 550 5.7.23



Why is that? Why is the Action: None? In my view, the email should be rejected and not accepted by the server. What am I doing wrong?
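
As a debugging aside (assuming the pyspf module that policyd-spf relies on is importable as spf), the raw SPF verdict for the same parameters can also be reproduced directly, which may help rule the library in or out:

python -c "import spf; print(spf.check2(i='xx.xx.xx.xx', s='me@example.com', h='asd.somedomain.com'))"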

ubuntu change default files and folders permission

I have Ubuntu 10.04 on my server. When I create a file by FTP, PHP and so on, the file permissions are 600 and the folder permissions are 700. How can I change the default file permissions to 644 and folders to 755?
I upload files with CuteFTP.
My username is not root.
suPHP is installed.
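
For context, and only as an illustration of where such defaults come from (the exact knob depends on the FTP daemon and on suPHP's own umask setting): newly created files and directories get their mode from the creating process's umask, and 022 is the value that yields 644/755:

umask 022                       # 666 - 022 = 644 for new files, 777 - 022 = 755 for new directories
touch example.txt
mkdir exampledir
ls -ld example.txt exampledir   # should show -rw-r--r-- and drwxr-xr-x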

Postfix/Dovecot permissions on new files in a mailbox




According to this article:




When creating new files inside a mailbox, Dovecot copies the
read/write permissions from the mailbox's directory.




I'm not seeing this. Here is what I'm seeing:




andrewsav@hroon-precis:~$ dovecot --version
2.0.19
andrewsav@hroon-precis:~$ sudo ls -al /var/mail/vhosts/myhost.com/andrews
total 76
d-wxrws--- 6 vmail vmail 4096 May 15 19:53 .
drwxrwsr-x 4 vmail vmail 4096 Mar 8 07:27 ..
drwxrws--- 2 vmail vmail 4096 May 15 19:53 cur
-rw-rwS--- 1 vmail vmail 288 May 12 20:49 dovecot.index
-rw-rwS--- 1 vmail vmail 31316 May 15 19:53 dovecot.index.log
-rw-rwS--- 1 vmail vmail 24 Dec 13 14:27 dovecot.mailbox.log

-rw-rw---- 1 vmail vmail 54 May 15 19:53 dovecot-uidlist
-rw-rwS--- 1 vmail vmail 8 Dec 13 14:32 dovecot-uidvalidity
-r--rwSr-- 1 vmail vmail 0 Dec 12 22:34 dovecot-uidvalidity.50c84fbc
drwxrws--- 2 vmail vmail 4096 May 15 21:15 new
-rw-rwS--- 1 vmail vmail 6 Dec 13 14:27 subscriptions
drwxrws--- 2 vmail vmail 4096 May 15 21:15 tmp
drwxrws--- 5 vmail vmail 4096 Dec 13 14:32 .Trash
andrewsav@hroon-precis:~$ sudo ls -al /var/mail/vhosts/myhost.com/andrews/new
total 24
drwxrws--- 2 vmail vmail 4096 May 15 21:15 .

d-wxrws--- 6 vmail vmail 4096 May 15 19:53 ..
-rw------- 1 vmail vmail 3435 May 15 19:54 1368604473.Vca02I500e0M443155.hroon-precis
-rw------- 1 vmail vmail 4028 May 15 20:42 1368607343.Vca02I500e1M96785.hroon-precis
-rw------- 1 vmail vmail 4623 May 15 21:15 1368609338.Vca02I500fcM737208.hroon-precis
andrewsav@hroon-precis:~$


The mail directory has rw for the group, but the individual files in the new directory for some reason do NOT have rw. Because of this they can't be accessed by the people/processes that are supposed to access them. What am I missing?



I'm running Ubuntu 12.04 LTS.




Update 1



To give a bit of background: I've been running Postfix + Dovecot for quite some time now. It was installed, with small deviations, according to this document. Normally the mailboxes are not accessed locally; they are accessed via POP/IMAP by remote clients.



However I find it useful to run mutt occasionally on the server. I can do it alright if I run it as



sudo mutt -f /var/mail/vhosts/myhost.com/andrews



However, I wanted to be able to run it without sudo, and that's where the trouble started. I added myself to the vmail group and added the following lines to .muttrc:



set spoolfile = '/var/mail/vhosts/myhost.com/andrews/'
alternates myhost.com
set reverse_name = yes
set from = 'andrews@myhost.com'


But this does not work unless I explicitly do chmod g+rw on new and cur. And it only works until new mail arrives, because the new mail does not have that rw.




Is there any way I can solve this?



Update 2



After discussing this issue with NickW in chat, we came to the conclusion that it's actually Postfix that is writing these files, and not Dovecot. The LDA is most likely Postfix's virtual. Here is the Postfix configuration.



main.cf:



# See /usr/share/postfix/main.cf.dist for a commented, more complete version



# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.

append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# TLS parameters
smtpd_tls_cert_file=/etc/apache2/ssl/my.crt
smtpd_tls_key_file=/etc/apache2/ssl/my.key

smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.

myhostname = myhost.myhost.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases

myorigin = /etc/mailname
#mydestination = myhost.com, hroon-precis, localhost.localdomain, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all

smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth

smtpd_sasl_auth_enable = yes
#smtpd_tls_wrappermode=yes
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
smtpd_tls_auth_only = no
#smtpd_sasl_security_options = noanonymous, noplaintext
smtpd_tls_security_level=may

virtual_mailbox_domains = myhost.com
virtual_mailbox_base = /var/mail/vhosts
virtual_mailbox_maps = hash:/etc/postfix/vmailbox

virtual_minimum_uid = 100
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000
virtual_alias_maps = hash:/etc/postfix/virtual
mydomain = myhost.com

transport_maps = hash:/etc/postfix/transport


master.cf




#
# Postfix master process configuration file. For details on the format
# of the file, see the master(5) manual page (command: "man 5 master").
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)

# ==========================================================================
smtp inet n - - - - smtpd
#smtp inet n - - - 1 postscreen
#smtpd pass - - - - - smtpd
#dnsblog unix - - - - 0 dnsblog
#tlsproxy unix - - - - 0 tlsproxy
#submission inet n - - - - smtpd
# -o syslog_name=postfix/submission
# -o smtpd_tls_security_level=encrypt
# -o smtpd_sasl_auth_enable=yes

# -o smtpd_client_restrictions=permit_sasl_authenticated,reject
# -o milter_macro_daemon_name=ORIGINATING
#smtps inet n - - - - smtpd
# -o syslog_name=postfix/smtps
# -o smtpd_tls_wrappermode=yes
# -o smtpd_sasl_auth_enable=yes
# -o smtpd_client_restrictions=permit_sasl_authenticated,reject
# -o milter_macro_daemon_name=ORIGINATING
#628 inet n - - - - qmqpd
pickup fifo n - - 60 1 pickup

cleanup unix n - - - 0 cleanup
qmgr fifo n - n 300 1 qmgr
#qmgr fifo n - n 300 1 oqmgr
tlsmgr unix - - - 1000? 1 tlsmgr
rewrite unix - - - - - trivial-rewrite
bounce unix - - - - 0 bounce
defer unix - - - - 0 bounce
trace unix - - - - 0 bounce
verify unix - - - - 1 verify
flush unix n - - 1000? 0 flush

proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - - - - smtp
relay unix - - - - - smtp
# -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq unix n - - - - showq
error unix - - - - - error
retry unix - - - - - error
discard unix - - - - - discard
local unix - n n - - local

virtual unix - n n - - virtual
lmtp unix - - - - - lmtp
anvil unix - - - - 1 anvil
scache unix - - - - 1 scache
#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery

# agent. See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#

# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#
# Specify in cyrus.conf:
# lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
# mailbox_transport = lmtp:inet:localhost
# virtual_transport = lmtp:inet:localhost

#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)
# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus unix - n n - - pipe
# user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================

# Old example of delivery via Cyrus.
#
#old-cyrus unix - n n - - pipe
# flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp unix - n n - - pipe

flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail unix - n n - - pipe
flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe
flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n - 2 pipe
flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}

mailman unix - n n - - pipe
flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
${nexthop} ${user}


transport:



info@myhost.com discard:
sales@myhost.com discard:
webmaster@myhost.com discard:



vmailbox:



user1@myhost.com myhost.com/user1/
user2@myhost.com myhost.com/user2/
... etc
andrews@myhost.com myhost.com/andrews/
@myhost.com myhost.com/andrews/



I searched the Postfix documentation and was not able to find a way to tell Postfix what permissions to use for newly created mail message files inside a mailbox.



My thinking is that it may be impossible, and in that case there must be another way of setting up mutt so that it can access the maildirs without needing sudo/root.



Any hints are appreciated.


Answer



I'm answering here instead of commenting, so I can format properly.
Since you have Dovecot, you should already have the LDA installed (it's in dovecot-core).
Add this to /etc/postfix/master.cf:




dovecot   unix  -       n       n       -       -       pipe
flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}


Add this to /etc/postfix/main.cf:



virtual_transport               = dovecot
dovecot_destination_recipient_limit = 1



Change /etc/dovecot/conf.d/15-lda.conf:



protocol lda {
postmaster_address = postmaster@example.com
log_path = /var/log/dovecot-deliver
info_log_path = /var/log/dovecot-deliver
}


(though the three lines between the braces are pretty much optional)
postmaster_address is the From address for the bounced mail




Change /etc/dovecot/conf.d/10-master.conf:



service auth {
...
unix_listener auth-userdb {
mode = 0666
user = vmail
group = vmail
}

...
}


Add all users from /etc/postfix/vmailbox to /etc/postfix/virtual like this:



user1@myhost.com user1@myhost.com 
user2@myhost.com user2@myhost.com
... etc



Move the catch-all to /etc/postfix/virtual:



@myhost.com andrews@myhost.com


Change /etc/dovecot/conf.d/15-lda.conf:



lda_mailbox_autocreate = yes



This will auto-create mailboxes that are absent.



To keep the discard rules, add to main.cf:



mydestination=localhost.localdomain


Add to /etc/postfix/virtual:




info@myhost.com devnull@localhost.localdomain
sales@myhost.com devnull@localhost.localdomain
webmaster@myhost.com devnull@localhost.localdomain


Then add to /etc/aliases:



devnull: /dev/null



These lines from /etc/postfix/main.cf are no longer needed and can be removed:



#virtual_mailbox_base = /var/mail/vhosts
#virtual_minimum_uid = 100
#virtual_uid_maps = static:5000
#virtual_gid_maps = static:5000
#transport_maps = hash:/etc/postfix/transport



Run




  • newaliases

  • postmap /etc/postfix/virtual

  • service postfix restart

  • service dovecot restart



and let's hope it works.
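
If it does, a quick way to confirm is to send yourself a test message and check that a fresh delivery in the maildir is now group-accessible (path taken from the question above):

ls -al /var/mail/vhosts/myhost.com/andrews/new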



linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...