Thursday, January 31, 2019

Apache restart on Ubuntu - error “could not bind to address 0.0.0.0:80”

I'm a n00b - trying to get apache2 set up on Ubuntu 9.10 (Karmic Koala) on Rackspace Cloud. I have set up/configured OpenSSL and installed Apache, but Apache won't start. I assume it's a misconfiguration in my /etc/apache2/sites-available/ssl or /etc/apache2/sites-available/default files.



When I try to restart apache using the command:



sudo /etc/init.d/apache2 restart
I get the following error message:



[error] (EAI 2)Name or service not known: Could not resolve host name *.80 -- ignoring! 
[error] (EAI 2)Name or service not known: Could not resolve host name *.80 -- ignoring!
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down

Unable to open logs ...fail!


For my /etc/apache2/sites-available/ssl I have used a virtual host of *:443.



For my /etc/apache2/sites-available/default I have used a virtual host of *:80.
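For reference, the "*.80" in the error log suggests a directive where a dot slipped in instead of a colon (e.g. NameVirtualHost *.80); the Apache 2.2 port syntax is colon-separated. A sketch of the usual ports.conf form, not the poster's actual files:

# /etc/apache2/ports.conf - ports use a colon, not a dot
NameVirtualHost *:80
Listen 80
Listen 443

The separate "(98)Address already in use" error means another process is already bound to port 80; sudo netstat -tlnp | grep :80 will show which one.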

opensolaris - ZFS rpool full, can't do anything

I have a huge problem: my rpool is full, so when I boot I get tons of "no space on device" errors in my shell. There is no way to log in, and the SSH server is down.



So I decided to boot with the OpenSolaris live CD and mount rpool, following these topics:




OpenIndiana topic



and this one:
Oracle blog



But I cannot mount rpool/ROOT/solaris because I cannot run this command:



 zfs set mountpoint=/a rpool/ROOT/solaris



It fails with a zfs "cannot set property: out of space" error... I am stuck...



Another strange thing is that zpool import -f -R /a rpool succeeds, and when I launch df -g I can see the mount; it tells me that 48G are used, 100% of the capacity. But when I ls -al /a there are only empty etc and export directories. No files, nothing I can delete to make space.
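For what it's worth, a few live-CD commands commonly tried in this situation (the snapshot name is a placeholder, and this is a sketch rather than a guaranteed fix):

# import under an alternate root without touching the mountpoint property
zpool import -f -R /a rpool

# find where the space went; snapshots often hold it
zfs list -t all -o name,used,refer -r rpool

# destroying a snapshot frees space even on a full pool
zfs destroy rpool/ROOT/solaris@old-snapshot

# legacy-style mount that avoids 'zfs set mountpoint' entirely (Solaris syntax)
mount -F zfs rpool/ROOT/solaris /a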



I really don't know what to do; any help would be great.



Best regards,

Wednesday, January 30, 2019

running nginx as a reverse proxy with apache




Q1) Do I need mod_deflate running on apache? does it help in performance in anyway?



Q2) Do I need mod_cache running on apache if nginx is serving a static caching proxy?




CacheEnable disk http://website.com/
CacheIgnoreNoLastMod On
CacheMaxExpire 86400
CacheLastModifiedFactor 0.1

CacheStoreNoStore Off
CacheStorePrivate On

CacheDefaultExpire 3600
CacheDirLength 3
CacheDirLevels 2
CacheMaxFileSize 640000
CacheMinFileSize 1
CacheRoot /opt/apicache




Answer



You don't need to run mod_deflate on Apache; use nginx's compression instead. Likewise, you can use nginx's caching instead of mod_cache on Apache.
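A minimal sketch of the nginx side (paths, zone sizes, and the backend address are assumptions):

# compression, replacing mod_deflate
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml;

# caching, replacing mod_cache (proxy_cache_path goes in the http block)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=apicache:10m max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache apicache;
        proxy_cache_valid 200 302 10m;
        proxy_pass http://127.0.0.1:8080;   # Apache backend (assumed address)
    }
}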



You can read up using the link below to get a better idea of nginx caching.



How to set up Nginx as a caching reverse proxy?


firewall - Need IPTABLES rules explanation about OpenVPN set up



I've got Ubuntu 16.04 with OpenVPN installed, and it seems to be working fine. But when I check the firewall rules using "sudo ufw status", I see this:




Status: active


To Action From
-- ------ ----
80 ALLOW Anywhere
443 ALLOW Anywhere
53 ALLOW Anywhere
465 ALLOW Anywhere
25 ALLOW Anywhere
110 ALLOW Anywhere
995 ALLOW Anywhere
143 ALLOW Anywhere
993 ALLOW Anywhere
10025 ALLOW Anywhere
10024 ALLOW Anywhere
80 (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
53 (v6) ALLOW Anywhere (v6)
465 (v6) ALLOW Anywhere (v6)
25 (v6) ALLOW Anywhere (v6)
110 (v6) ALLOW Anywhere (v6)
995 (v6) ALLOW Anywhere (v6)
143 (v6) ALLOW Anywhere (v6)
993 (v6) ALLOW Anywhere (v6)
10025 (v6) ALLOW Anywhere (v6)
10024 (v6) ALLOW Anywhere (v6)



Port 1194 isn't mentioned at all! But when I use the netstat command "root@mail:~# netstat -anlp | grep 1194" I get this:



udp        0      0 0.0.0.0:1194            0.0.0.0:*                           1142/openvpn    



Also I have this file created by the OpenVPN script, /etc/systemd/system/openvpn-iptables.service, and I see this in it:




[Unit]
Before=network.target
[Service]
Type=oneshot
ExecStart=/sbin/iptables -t nat -A POSTROUTING -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to xx.249.16.253

ExecStart=/sbin/iptables -I INPUT -p udp --dport 1194 -j ACCEPT
ExecStart=/sbin/iptables -I FORWARD -s 10.8.0.0/24 -j ACCEPT
ExecStart=/sbin/iptables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
ExecStop=/sbin/iptables -t nat -D POSTROUTING -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to xx.249.16.253
ExecStop=/sbin/iptables -D INPUT -p udp --dport 1194 -j ACCEPT
ExecStop=/sbin/iptables -D FORWARD -s 10.8.0.0/24 -j ACCEPT
ExecStop=/sbin/iptables -D FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target




So my question is... if port 1194 is open (is it?) with these iptables rules, then why don't I see it in the ufw status output?


Answer



I expect the confusion comes from using both UFW and iptables. UFW is a front-end for iptables, but if you add rules outside of it, UFW won't recognise those rules.



Thus you are not seeing the iptables rules injected to handle your OpenVPN connection.



I expect if you list the iptables rules you will see them. Try




  /sbin/iptables -vnL


to show both the iptables and UFW rules (in iptables form).
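For example, to check just the INPUT chain for the OpenVPN rule:

sudo /sbin/iptables -vnL INPUT --line-numbers | grep 1194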


hardware - HP ProLiant ML350 G6 won't power on



I pulled a couple of ML350s out of storage for a training lab, and to my chagrin only one boots. When I plug the good ML350 in, the NIC light flickers green briefly and it powers up normally. When I plug the bad ML350 in, the NIC light does not flicker at all. Instead both the rear and front UID lights turn solid blue. Pressing the UID and/or power button does not change this. The fans do not spin up and there is no video.



My initial thought was that the PSU was bad, so I swapped PSUs (bad to good and good to bad) and outlets on my power strip (just in case I had a bad outlet). This did not change things either. The issue did not follow the PSU, so I can rule out a bad PSU. I then pulled the motherboard tray out of the bad server, reseated all my cables, and looked for any glaring issues on the power distribution board under the mobo tray (e.g., bulging capacitors or cracked components). I saw nothing out of the ordinary.



My final step was to remove the ram, raid array, and dvd drive. No change.




In the searching I've done on SF I see mentions of some ProLiant models having a system health LED on the motherboard. I do not see any lights on the motherboard at all.



I am not able to connect to iLO. I've never done it before so it is possible I'm doing it wrong, but I know the DNS name and followed these steps.



Anyone see anything I'm missing? Does anyone know a way I can narrow this down further to determine if it's the motherboard, PD board, or something else that has gone bad and needs to be replaced? The machine is out of warranty, and I'm not sure my boss will want to throw a bunch of money at it.


Answer



So I finally found some documentation on HP's website that helped me narrow the issue down to the power distribution board.



Apparently there are 12 LEDs on the motherboard; none of them light up: http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c01726637#N102BA




And this page gives the minimum hardware configuration: http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c03212774



With that information I can now confidently say that it needs a new PD board.


Tuesday, January 29, 2019

usb flash drive - Moving ESXi installation to USB key



What's the best way to move a VMware ESXi 4.1 installation from a hard drive to a USB key?



I'm planning to install a fresh VMware ESXi on the USB key, boot it up, and then somehow transfer the configuration from the hard drive. Is that possible? Do I have to reconfigure all my guests? I'm sure there's a neat trick here!



I'm not very experienced with VMware, and this is a lab environment, so I'm here to learn.


Answer



To my understanding, the method is pretty straight-forward.




No vSphere server method (i.e. stand-alone):




  1. Manually grab the config you need (IP address, any hardware-config details like VLANs, NTP settings if any) via the vSphere client (see the sketch after this list for a CLI alternative).

  2. Configure BIOS to boot from USB before HD.

  3. Boot to USB.

  4. Connect to the ESXi instance via vSphere client

  5. Add in the config grabbed in step 1 as appropriate

  6. Perform a Rescan Devices from Storage Adapters to locate any VMFS volumes.


  7. Browse the newly discovered Data Stores.

  8. When you find .vmx files, right click and select "Import into inventory"
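As an aside on step 1: if the vSphere CLI (vCLI) is available, the host configuration can also be captured and restored with vicfg-cfgbackup rather than noted by hand. A sketch with placeholder host and file names (note that a restore generally expects the same ESXi build):

vicfg-cfgbackup --server esxi-host.example.com -s /tmp/esxi-config.tgz   # save the config
vicfg-cfgbackup --server esxi-host.example.com -l /tmp/esxi-config.tgz   # load it onto the new install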



vSphere method with vMotion:




  1. Migrate all VMs off of the ESXi server.

  2. Manually grab any special config you need (VLAN config being most important, NTP)

  3. In vSphere, Disconnect the host, then delete it.


  4. Configure BIOS to boot from USB before HD.

  5. Boot to USB.

  6. In vSphere, Add Host, and pick the hostname and login info for the USB-ESXi you created.

  7. Manually enter the config grabbed in step 2 as appropriate.

  8. Rescan your storage adapters for VMFS volumes.

  9. Migrate a VM to this server to make sure it works.



vSphere method without vMotion (uncommon, but could happen):





  1. Manually grab any special config you need (VLAN config being most important, NTP)

  2. Turn off all VMs on the machine.

  3. Follow steps 3-8 of the "with vMotion" checklist.

  4. Follow steps 7-8 of the "no vSphere" method.



Note: When you disconnect the host, the associated VMs will go into an 'Orphaned' state. It is possible that when you do the Rescan Devices they'll all come back, and there will be no need to do step 4.


Monday, January 28, 2019

domain name system - DNS loadbalancing options



I have assembled a high-availability system, as the following illustration suggests:



DNS RR -> Balancer1
                   \
                    \
                     HAproxy1 ---> Backend Servers
                     HAproxy2 ---> Backend Servers
                     HAproxy3 ---> Backend Servers
                    /
                   /
DNS RR -> Balancer2


In a few words: two load balancers with a VIP receive the requests from clients and then distribute them among three HAProxy servers that act as SSL offload and back-end balancers.



My problem now is the DNS RR. It has its perks, but I'm looking for a better solution to distribute the clients between Balancer1 and Balancer2. Any suggestions?



PS: GeoDNS is not an option.


Answer



You could put a CDN in front as the user-facing layer, then use the CDN's functions to load balance across your Balancer hosts. That may include DNS RR; however, the CDN's configuration is known and managed, so you can be confident that the CDN will respond properly to backend changes.



As an example, you could use Akamai CDN to route user requests. You could then use Akamai Global Traffic Manager (GTM) to control which origins are used by Akamai. They have a 'failover' and 'round robin' function you could use, and Akamai's healthcheckers will manage which origins are available. They can also retry requests if they experience an error talking to your origin.




Amazon CloudFront + Route53 'weighted' records + Route53 health checking accomplishes it similarly.
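For the Route53 side, a sketch of what one weighted record might look like via the AWS CLI (the zone ID, record name, IP, and health check ID are placeholders); you'd create one such record per balancer, each with its own SetIdentifier and Weight:

aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com.",
      "Type": "A",
      "SetIdentifier": "balancer1",
      "Weight": 50,
      "HealthCheckId": "11111111-2222-3333-4444-555555555555",
      "TTL": 60,
      "ResourceRecords": [{ "Value": "203.0.113.10" }]
    }
  }]
}'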



This works even if your content is not cacheable, as a CDN does not have to be used exclusively for cacheable content. It has the benefit of bringing the user into an ecosystem you control near the 'edge', and troubleshooting CDN->origin connections is much easier than unknown-user->origin.



This route also gives you a measure of DoS protection as you can apply filters at the edge.


Sunday, January 27, 2019

Storage solution with a Windows Hyper-V cluster



I'm a bit of a newbie when it comes to storage, so I'd be grateful for any pointers!



I'm trying to plan out a small Windows Server 2012 R2 Hyper-V cluster for an SMB, consisting of 2 servers. Since I wanted to include a failover option should one of the hosts die, I wanted to include some sort of shared storage to enable the VMs to fail over to the second host if needed.



After doing some research, it would seem a direct-attached storage box might be the best solution, or at least the best compromise between cost and performance.



If at all possible, I'd like to have some sort of redundancy for the storage and of course RAID comes to mind. Unfortunately according to this, it would seem RAID is not supported:




"The clustered storage pool MUST be comprised of Serial Attached SCSI (SAS) connected physical disks. Layering any form of storage subsystem, whether an internal RAID card or an external RAID box, regardless of being directly connected or connected via a storage fabric, is not supported."



...and this is where I get confused. My understanding is that whatever the DAS storage solution does internally (e.g. set up a RAID volume on a few disks and give the hosts access to said volume) should be completely transparent to the hosts themselves. Said hosts should be then able to use this volume to create a Windows Failover Cluster and a Hyper-V cluster after that.



So, on to my questions:




  1. Does the article only apply to WFC storage configured using Storage Spaces? I.e., did I completely misunderstand it?

  2. Will I be able to use a SAS DAS box with a RAID volume (with SAS HBA cards for the hosts) to configure my cluster?




Thanks in advance!


Answer



No, hosts shouldn't care what's below the LUN, but there are a few things you should consider:




  1. Organizing your data.




Consider a physical server for which you would organize the disks and files as follows: System files, including a page file, on one physical disk; Data files on another physical disk.



For an equivalent clustered VM, you should organize the volumes and files in a similar way: system files, including a page file, in a VHD file on one CSV; data files in a VHD file on another CSV.



Try to keep the same rules when/if you add new VM hosts.




  2. Adding any disks to Available Storage




In Failover Cluster Manager, in the console tree, expand the name of the cluster, and then expand Storage. Right-click Disks, and then select Add Disk. A list appears showing the disks that can be added for use in a failover cluster.
Select the LUN disk or disks you want to add, and then select OK.
The disks are now assigned to the Available Storage group.
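The same step can also be scripted with the Failover Clustering PowerShell module (a sketch; these cmdlets ship with the feature on 2012 R2):

Import-Module FailoverClusters

# list LUNs the cluster can see but has not yet added
Get-ClusterAvailableDisk

# add them all to Available Storage
Get-ClusterAvailableDisk | Add-ClusterDisk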



The disks can be LUNs; they do not have to be physical disks.



You do not even have to use pools (depending on how you planned things to be done).



Practically, as long as you manage to create the LUNs, the type of storage configuration behind them is irrelevant. In my case, I use a Dell storage array with SAS SSDs for the high-speed requirements and another Dell array with HDDs for secondary and backup storage.


HP ProLiant DL360 G6 won't detect SSD

I have an HP ProLiant DL360 G6.



It has 4 hard drive slots, and I have 2 Intel 80GB SSDs and 1 WD 160GB regular hard drive. I have tried the WD in all 4 slots, and each time it is detected and boots right up into VMware, so the slots aren't defective. But when I insert the 2 Intel SSDs, the server doesn't see them on POST.

linux - Apache uses 100% CPU. Can "ps" command tell me what it is doing?



I have a SLES 10 Linux server, and sometimes it is maxed out by Apache at 100% CPU.




With ps ax I can see that Apache has spawned ~50 processes.



Can, e.g., the ps command tell me what each of these Apache processes is doing?



Or is there perhaps some other method, so I can see which web pages trigger the problem?


Answer



My /etc/httpd/conf/httpd.conf file has this section:



# Allow server status reports generated by mod_status,
# with the URL of http://servername/server-status
# Change the ".example.com" to match your domain to enable.
#
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from .example.com
    Allow from 127. 192.168.1.
</Location>
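Note that the per-request columns on /server-status (CPU, client, request line) require extended status; if your grid shows only scoreboard states, a line like this at global scope should enable it (a sketch; on Apache 2.2 it is off by default):

ExtendedStatus On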




Thus if I go to http://192.168.1.1/server-status, I get a page that tells me:




  1. server version

  2. httpd uptime

  3. current CPU usage


  4. a grid of what each process is doing


  5. recent requests



    Apache Server Status for 192.168.3.1



    Server Version: Apache/2.2.3 (Red Hat)
    Server Built: Jul 14 2009 06:04:04



    Current Time: Saturday, 17-Jul-2010 10:20:31 CDT
    Restart Time: Saturday, 17-Jul-2010 10:13:12 CDT
    Parent Server Generation: 0
    Server uptime: 7 minutes 19 seconds
    Total accesses: 51 - Total Traffic: 156 kB
    CPU Usage: u0 s0 cu0 cs0
    .116 requests/sec - 363 B/second - 3132 B/request
    1 requests currently being processed, 7 idle workers



    __W_____........................................................
    ................................................................
    ................................................................
    ................................................................




    Scoreboard Key:
    "_" Waiting for Connection, "S" Starting up, "R" Reading Request,
    "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
    "C" Closing connection, "L" Logging, "G" Gracefully finishing,
    "I" Idle cleanup of worker, "." Open slot with no current process



    Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request
    0-0 20715 0/2/2 _ 0.00 418 0 0.0 0.01 0.01 192.168.3.97 dit GET /server-status HTTP/1.1
    1-0 20716 0/49/49 _ 0.00 128 0 0.0 0.15 0.15 192.168.3.97 dit GET /server-status HTTP/1.1
    2-0 20717 0/0/0 W 0.00 0 520222374 0.0 0.00 0.00 192.168.3.97 dit GET /server-status HTTP/1.1



Saturday, January 26, 2019

Huge load on Centos, many apache processes



I'm experiencing a huge load on my server at the moment and I can't figure out why. When I use the 'top' command, there are hundreds of Apache processes with the command "aux", but I can't find anything online that tells me what that means. The load is flapping between 50 and 150, which is a good 50-150 more than it usually is.



Netstat returns hundreds and hundreds of rows like this:




tcp  0  0 xxx.xxx.xxx.xxx:45216  61.155.202.205:80  CLOSE_WAIT  28863/aux


Almost all are from 61.155.xxx.xxx (not sure if this is relevant information, but I'm trying to give as much as possible).



The OS is CentOS release 5.7 (Final).
We just run a LAMP stack on it with about 30 websites that don't get much load (or so I thought). I've checked the logs for all of the vhosts, but none seem to be getting many/any requests (not nearly enough to cause this trouble). I'm not sure if there are other logs I should be checking?



It started a couple of days ago; no changes made on the server as far as I'm aware.




Does anyone have any ideas for how I can track down what's causing the huge spike in load? Are there other commands/logs that I've missed that might be able to help me track down what the problem is?


Answer



That's not a connection from 61.155.xxx.xxx. That's a connection to a webserver on 61.155.202.205.



It looks very much like your webserver is making HTTP requests to other webservers on ADSL connections in China. Try a tcpdump -n -A -s0 host 61.155.202.205 to see what kind of data you are collecting. I suspect it's malicious.
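To identify what that "aux" process actually is, you can also inspect it through /proc using the PID from the netstat output (28863 here):

ls -l /proc/28863/exe    # path to the running binary (may show "(deleted)")
ls -l /proc/28863/cwd    # directory it was started from
sudo lsof -p 28863       # its open files and sockets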



If it is malicious, refer to My server's been hacked! EMERGENCY.







The "many Apache processes" is most likely caused by the high load rather than causing the high load. Even at a load average of 50 I would expect to start seeing HTTP requests taking multiple seconds. At 150 it would be worse.


Friday, January 25, 2019

linux - How can I block a specific type of DDoS attack?

My site is being attacked, and the attack is using up all the RAM. I looked at the Apache logs, and every malicious hit seems to simply be a POST request to /, which is never issued by a normal user.



So I wondered whether there's any sort of solution or utility that will monitor my Apache logs and block every IP that performs a POST request on the site root. I'm not familiar with DDoS protection, and searching didn't seem to give me an answer, so I came here.
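For reference, fail2ban is one utility that does exactly this kind of log-watch-and-ban. A minimal sketch of a custom filter and jail for this pattern (file names, log path, and thresholds are assumptions, not a tested configuration):

# /etc/fail2ban/filter.d/apache-postroot.conf
[Definition]
failregex = ^<HOST> .* "POST / HTTP

# /etc/fail2ban/jail.local
[apache-postroot]
enabled  = true
filter   = apache-postroot
action   = iptables-allports[name=postroot]
logpath  = /var/log/httpd/access_log
maxretry = 3
findtime = 60
bantime  = 3600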



Thanks.



Example logs:



103.3.221.202 - - [30/Sep/2012:16:02:03 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
122.72.80.100 - - [30/Sep/2012:16:02:03 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"

122.72.28.15 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)"
210.75.120.5 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0"
122.96.59.103 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Linux; U; Android 2.2; fr-fr; Desire_A8181 Build/FRF91) App3leWebKit/53.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1"
122.96.59.103 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Linux; U; Android 2.2; fr-fr; Desire_A8181 Build/FRF91) App3leWebKit/53.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1"
122.72.124.3 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:13.0) Gecko/20100101 Firefox/13.0.1"
122.72.112.148 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.1" 302 485 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:13.0) Gecko/20100101 Firefox/13.0.1"
190.39.210.26 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.0" 302 485 "-" "Mozilla/5.0 (Windows NT 6.0; rv:13.0) Gecko/20100101 Firefox/13.0.1"
210.213.245.230 - - [30/Sep/2012:16:02:04 +0000] "POST / HTTP/1.0" 302 485 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)"
101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
101.44.1.28 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"

101.44.1.28 - - [30/Sep/2012:16:02:14 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
103.3.221.202 - - [30/Sep/2012:16:02:13 +0000] "POST / HTTP/1.1" 302 466 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
211.161.152.104 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"
101.44.1.25 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"
211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6"
211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; MRA 5.8 (build 4157); .NET CLR 2.0.50727; AskTbPTV/5.11.3.15590)"
211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; MRA 5.8 (build 4157); .NET CLR 2.0.50727; AskTbPTV/5.11.3.15590)"
101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"
101.44.1.25 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"

211.161.152.108 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
101.44.1.28 - - [30/Sep/2012:16:02:13 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
211.161.152.106 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:5.0.1) Gecko/20100101 Firefox/5.0.1"
103.3.221.202 - - [30/Sep/2012:16:02:13 +0000] "POST / HTTP/1.1" 302 466 "-" "Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
101.44.1.28 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; MRA 5.8 (build 4157); .NET CLR 2.0.50727; AskTbPTV/5.11.3.15590)"
211.161.152.104 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
211.161.152.104 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
211.161.152.105 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6"
101.44.1.25 - - [30/Sep/2012:16:02:10 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11"

122.72.124.2 - - [30/Sep/2012:16:02:17 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
122.72.124.2 - - [30/Sep/2012:16:02:11 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
122.72.124.2 - - [30/Sep/2012:16:02:17 +0000] "POST / HTTP/1.1" 302 522 "-" "Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1"
210.213.245.230 - - [30/Sep/2012:16:02:12 +0000] "POST / HTTP/1.0" 302 522 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)"


iptables -L:



Chain INPUT (policy ACCEPT)
target prot opt source destination


Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination






bui@debian:~$ sudo iptables -I INPUT 1 -m string --algo bm --string 'Keep-Alive: 300' -j DROP
iptables: No chain/target/match by that name.
bui@debian:~$ sudo iptables -A INPUT -m string --algo bm --string 'Keep-Alive: 300' -j DROP
iptables: No chain/target/match by that name.
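For what it's worth, the "No chain/target/match by that name" error from -m string usually means the kernel's string-match module isn't loaded; loading it first may let the rule stick (a sketch):

sudo modprobe xt_string
sudo iptables -I INPUT -m string --algo bm --string 'Keep-Alive: 300' -j DROP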

Thursday, January 24, 2019

hp proliant - 3rd party SSD on DL360e Gen8 V2

I want to order an HPE DL360e Gen8 V2 server without any drives. I have read some topics regarding "unsupported" SSD drives.




We want to use 1TB SSD drives.



Has anyone tested and successfully installed 3rd-party SSD drives in this server?



Thank you in advance.

Wednesday, January 23, 2019

domain name system - My ISP set up a PTR record for my mail server, but some places aren't seeing it




I have a VPS running Windows Server 2008 with Plesk 9, which I am using for email. I asked my ISP to add a PTR record, which they did, but my mail is still dropping into recipients' spam boxes.



I have checked almost every DNS tool I can think of, and sometimes it shows the PTR record and sometimes it does not. I'm not sure where the problem is.



This morning, and all day, intodns was showing that I have a reverse PTR, but right now it is not. Could anyone point me in the right direction to find out where the problem is? Thanks a lot.



http://www.intodns.com/wcrop.com




81.222.137.195.in-addr.arpa -> no reverse (PTR) detected

Answer



Something is funky with your ISP's DNS.



There are two authoritative nameservers for 222.137.195.in-addr.arpa.:



222.137.195.in-addr.arpa. 172800 IN NS  ns2.lermi.net.
222.137.195.in-addr.arpa. 172800 IN NS ns1.lermi.net.



These two servers are out of sync: NS1 has zone serial 1334308835 and is returning your PTR record. NS2 has zone serial 1330809486 (older), and is not returning your PTR record.
Anyone whose query is (randomly) sent to NS2 will not get your PTR record.



Instruct your ISP to fix this issue and all should be right in your DNS universe.
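You can compare the two zone serials yourself with dig (the serial is the third field of the SOA record; when the two differ, one server is stale):

dig +short SOA 222.137.195.in-addr.arpa @ns1.lermi.net
dig +short SOA 222.137.195.in-addr.arpa @ns2.lermi.net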






NS1




; <<>> DiG 9.7.3-P3 <<>> @ns1.lermi.net -x 195.137.222.81
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48514
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;81.222.137.195.in-addr.arpa. IN PTR


;; ANSWER SECTION:
81.222.137.195.in-addr.arpa. 172800 IN PTR wcrop.com.

;; AUTHORITY SECTION:
222.137.195.in-addr.arpa. 172800 IN SOA tr1.turkcealan.com. log\@ramtek.net.tr. 1334308835 10800 3600 604800 3600

;; Query time: 149 msec
;; SERVER: 195.149.85.195#53(195.149.85.195)
;; WHEN: Thu Apr 19 18:26:23 2012
;; MSG SIZE rcvd: 68



NS2



; <<>> DiG 9.7.3-P3 <<>> @ns2.lermi.net -x 195.137.222.81
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 50379
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;81.222.137.195.in-addr.arpa. IN PTR

;; AUTHORITY SECTION:
222.137.195.in-addr.arpa. 172800 IN SOA tr1.turkcealan.com. log\@ramtek.net.tr. 1330809486 10800 3600 604800 3600

;; Query time: 145 msec
;; SERVER: 195.137.223.65#53(195.137.223.65)

;; WHEN: Thu Apr 19 18:26:51 2012
;; MSG SIZE rcvd: 140

Redundant Paths between DELL R720 and MD3420 in ESXi 5.5



We had some trouble configuring redundant paths between a Dell R720 and an MD3420 storage array connected via two LSI 12Gb SAS HBAs using ESXi 5.5. The datastore was accessible and worked fine, except for the redundancy (the datastore was only accessible over one of the two paths).



We found a workaround together with Dell support, which involved removing VMware's default driver and installing a newer version of the driver:





This solution has worked fine since then; no problems have occurred.




My question is: does anybody know if this procedure will still be required with the new R730 we are deploying this week?



We want to customize the installation as little as possible, for the obvious reason of troubleshooting the system in the future, and we don't want to provoke any compatibility issues between two servers connecting to the datastore using different drivers.


Answer



Worked like a charm using the fully patched ESXi 5.5. It doesn't seem like the workaround is still necessary.



Appreciate the thoughts on this topic, though. Thanks.



cisco asa - NAT Rule changes from ASA software 8.0 to 8.4



I have an Access Rule and a NAT rule that work fine on Security Appliance Software Version 8.0.



The rule is as follows:
[screenshot]



[screenshot]




However, I am having trouble making the same rule work on an ASA running on the Security Appliance Software Version 8.4.



I know that the configuration has changed; I think I am just supposed to create a network object for ath-security and define my access and NAT rules at the same time, but I haven't configured anything on an ASA in years and I'm a little over my head.



I have it setup as follows:



[screenshot]
[screenshot]



What am I doing wrong here?




The CORP-OUTSIDE and NM-OUTSIDE are supposed to be different; these are two different ASAs. The XXXX-OUTSIDE is a network object for the outside IP address of each device. CORP-OUTSIDE is on the ASA with the 8.0 software; NM-OUTSIDE is on the ASA with the 8.4 software.






show running-config returns the following on the ASA with the 8.0 software:




static (inside,outside) tcp interface www LVMSecurity www netmask 255.255.255.255





show running-config returns the following on the ASA with the 8.3 software:




object network AthertonSecurity-2.123
 nat (inside,outside) static interface service tcp www www








Using the ASDM Packet Tracer tool, I get the following error on the 8.3 ASA:




Info: (sp-security-failed) Slowpath security checks failed



Answer



Figured this out, posting what I believe is the answer:



The problem was with the ACL within the Access Rules settings.

It seems that in the 8.3 software, the Destination criteria should no longer be the outside interface but the network object destination itself.



It seems Cisco switched the configuration from what felt backwards to the more intuitive way: from 8.3 onward, access rules reference the real (untranslated) address instead of the mapped one.
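For reference, a sketch of how the pieces fit together in the 8.3+ syntax (the host address and ACL name are assumptions based on the object name shown above):

object network AthertonSecurity-2.123
 host 192.168.2.123
 nat (inside,outside) static interface service tcp www www
!
access-list outside_access_in extended permit tcp any object AthertonSecurity-2.123 eq www
access-group outside_access_in in interface outside

The key change is that the access-list permits traffic to the network object (the real address), not to the mapped/interface address as in 8.0.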


Tuesday, January 22, 2019

How to make a Linux VM work as a router

I have access to an OpenStack account where I can create Ubuntu 14.04 VMs. I have created two networks:





  1. "public-net" which is connected to the internet through a router


  2. "private-net" which is not exposed the internet




Now, I have created one VM, named "GATEWAY", which is connected to both networks and has two IP addresses: eth0 (10.70.0.6) and eth1 (10.90.0.1). eth0 is exposed to the internet and eth1 is for the private network. The GATEWAY VM has a public IP address on eth0.



Now I have created one more VM, named "AGENT", on the private-net network. Its IP address is 10.90.0.7 and its default gateway is 10.90.0.1 (the GATEWAY VM).



As the private VM is not exposed to any router, it cannot reach the internet. To enable internet access I have added a NAT rule on the GATEWAY VM as below:




sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE


This rewrites the source address of all internet packets leaving the GATEWAY host to the GATEWAY machine's address. I have also set IPv4 packet forwarding to 1 on the GATEWAY machine.
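For completeness, the companion settings a Linux router usually needs alongside the MASQUERADE rule (a sketch; harmless if the FORWARD policy is already ACCEPT):

# enable forwarding (persist it via /etc/sysctl.conf)
sudo sysctl -w net.ipv4.ip_forward=1

# allow forwarded traffic between the two interfaces
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT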



I can ping any external address from the GATEWAY machine, but not from the internal AGENT machine. Needless to say, the private AGENT machine does not have internet access either.



Can anyone please help me set up the GATEWAY VM in such a way that I can use it as a router and bring internet access to the private machines?



This is what my routing table looks like on the AGENT machine:




Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.90.0.1       0.0.0.0         UG    0      0        0 eth0
10.90.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.169.254 10.90.0.2       255.255.255.255 UGH   0      0        0 eth0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0


Here I am adding my tcpdump output for the ICMP ping on both interfaces.




eth1: the interface connecting to the private network.



18:43:39.309771 IP host-10-90-0-7.openstacklocal > 172.217.3.14: ICMP echo request, id 2395, seq 1, length 64
18:43:39.355430 IP 172.217.3.14 > host-10-90-0-7.openstacklocal: ICMP echo reply, id 2395, seq 1, length 64
18:43:40.318637 IP host-10-90-0-7.openstacklocal > 172.217.3.14: ICMP echo request, id 2395, seq 2, length 64
18:43:40.364178 IP 172.217.3.14 > host-10-90-0-7.openstacklocal: ICMP echo reply, id 2395, seq 2, length 64


eth0: the interface connecting to the internet.




18:43:39.309796 IP host-10-70-0-6.openstacklocal > 172.217.3.14: ICMP echo request, id 2395, seq 1, length 64
18:43:39.355396 IP 172.217.3.14 > host-10-70-0-6.openstacklocal: ICMP echo reply, id 2395, seq 1, length 64
18:43:40.318679 IP host-10-70-0-6.openstacklocal > 172.217.3.14: ICMP echo request, id 2395, seq 2, length 64
18:43:40.364154 IP 172.217.3.14 > host-10-70-0-6.openstacklocal: ICMP echo reply, id 2395, seq 2, length 64
18:43:41.326618 IP host-10-70-0-6.openstacklocal > 172.217.3.14: ICMP echo request, id 2395, seq 3, length 64


Here I can see the ping response coming back from the external address and traversing both interfaces. But even though it's being forwarded out eth1 toward the private VM, ping still reports 100% packet loss.



-----------    --------------------                                        --------------                                        ------------
| INTERNET |---| openstack-router |-- 10.70.0.1 --- 10.70.0.6 (NIC eth0) --| GATEWAY-VM |-- 10.90.0.1 (NIC eth1) --- 10.90.0.7 (NIC eth0) --| AGENT-VM |
-----------    --------------------                                        --------------                                        ------------
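Given that the tcpdump shows the replies reaching eth1 but the AGENT still reports 100% loss, one frequent culprit on OpenStack is Neutron's port security (anti-spoofing), which silently drops packets whose addresses don't match the port. Allowing the GATEWAY's ports to carry other addresses is a common workaround; a sketch with placeholder port IDs:

# find the Neutron ports attached to the GATEWAY VM
neutron port-list | grep -E '10.70.0.6|10.90.0.1'

# permit the private subnet's traffic through the gateway's ports
neutron port-update <gateway-eth1-port-id> --allowed-address-pairs type=dict list=true ip_address=10.90.0.0/24
neutron port-update <gateway-eth0-port-id> --allowed-address-pairs type=dict list=true ip_address=10.90.0.0/24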

I want four mirrored drives - should I combine a RAID controller and Windows soft RAID?



I know it seems silly, but I really want my four 1TB drives to be mirrored (I'm running my own Subversion server for development). Is there anything wrong with building two RAID 1 arrays on my 'SYBA SY-PCI40010 PCI SATA II (3.0Gb/s) 4 Port RAID 0/1/5/10 JBOD Card' RAID controller, then mirroring those two logical drives in Windows 7?


Answer



Do you really want the data on one drive mirrored across three other drives?



Would RAID 1+0 work for you instead? That would tolerate the failure of two drives, provided they weren't from the same mirror set. If your controller supports RAID 6, that will tolerate the failure of any two drives.



In answer to your question: your suggested setup sounds like it might be difficult to troubleshoot in the event of a drive failure. Personally, I would not prefer it to two RAID 1s or a single RAID 10 set.


Monday, January 21, 2019

windows - What does it mean to grant/set permissions for NETWORK SERVICE on a network share?



I'm confused about how the NETWORK SERVICE account (group?) works on network shares:



On one hand, NETWORK SERVICE is generally described as an account that's local to a given machine. (See, e.g., here on serverfault or in Microsoft's Access Control in IIS 6.0 document.) So it's not a domain-wide account. And, for instance, if a process running under NETWORK SERVICE on SERVERA tries to request a resource on SERVERB, the authentication won't be under some hypothetical MYDOMAIN\NETWORK SERVICE, but rather under MYDOMAIN\SERVERA$. (The latter is known as SERVERA's "computer account".)



On the other hand, I've noticed I can go to a remote file share where I have admin rights, and set permissions on a particular directory for NETWORK SERVICE. (e.g. I can go to \\MYSHARE in Windows Explorer, right-click one of the directories, go to Security > Edit > Add, type "NETWORK SERVICE" in the "Enter the object names to select" box and click OK. Now I have a new NETWORK SERVICE entry in the list of "Group or user names", and I can change the permissions for it, just like I might change permissions for the "Users" group.)




If NETWORK SERVICE is strictly a machine-by-machine account, I don't understand what is supposed to happen when I create a set of permissions for NETWORK SERVICE on a remote share. Does that entry refer to NETWORK SERVICE on one particular (unspecified) machine? To judge by the icon, the permissions are technically for a NETWORK SERVICE group, rather than than a NETWORK SERVICE user. But I can't seem to find any documentation for a NETWORK SERVICE group or how it might work compared to a regular domain group.



My only guess so far is that, if you grant access to the NETWORK SERVICE group (assuming there is such a thing), this amounts to granting access to all the "computer accounts" on the whole domain. (That is, giving permissions to NETWORK SERVICE on a central file server would be the same as giving permissions to MYDOMAIN\SERVERA$, MYDOMAIN\SERVERB$, MYDOMAIN\SERVERC$, ..., MYDOMAIN\MYLASTSERVER$.)


Answer



NETWORK SERVICE is a well-known account. It has the same SID (S-1-5-20) on every machine. You are correct that NETWORK SERVICE on MachineA will not authenticate as NETWORK SERVICE on MachineB. It's not a group; it is an account.



It's very rare that you would be setting NETWORK SERVICE permission (share or NTFS) on a share. This would only be necessary if a service on the local machine, running under the credentials of NETWORK SERVICE, was trying to connect to that share on the local host.



When a service logging on as NETWORK SERVICE tries to connect to a remote machine, the credentials of the local machine account will be used. So if a service is running on MachineA in the domain example.com, then that service would connect to MachineB as MachineA@example.com (or example\MachineA if you like NetBIOS-style names).



Sunday, January 20, 2019

smtp - Can I have an MX record for a 3rd level domain?



I have mail working for continuumconcepts.com right now. My MX records point to my assigned Google Postini servers. Works great.



I have another server out in the wild that is completely disconnected from this network, and I have named it sfr.continuumconcepts.com. I'd like to get mail working on it as well, for test purposes (I'll be using it for other domains later).



I've added sfr.continuumconcepts.com as an mx record, but nslookup -type=mx sfr.continuumconcepts.com doesn't seem to show it yet. I don't know if I need to wait longer or if I set it up incorrectly.




Here's a screenshot of my DNS manager at godaddy. As far as I know, this is public information so hopefully I am not embarrassing myself by revealing too much. :)



In the MX record setup, this is the blurb godaddy gives: "MX records are for routing email that is addressed to a particular domain name. Like a CNAME record, an MX record points one domain name or subdomain to another domain name or subdomain for which an A record exists.
Entering "@" for the host name is the same as entering your domain name, minus the "www." Entering "www" for the host name is the same as entering your domain name, including the "www"."



Does this look like it's set up properly? Thanks in advance.


Answer



Yes, you can have an MX record for a 3rd-level domain. You can have an MX record for anything (not that it makes sense in all cases).



Looks like it was a propagation delay because I see it:




 $ host -t mx sfr.continuumconcepts.com
sfr.continuumconcepts.com mail is handled by 10 sfr.continuumconcepts.com.


+1 for actually showing your domain and making troubleshooting so much easier.
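While waiting on propagation, you can also query an authoritative nameserver for the zone directly, bypassing caches (first look up the NS records, then ask one of those servers; the second hostname below is a placeholder):

dig +short NS continuumconcepts.com
dig +short MX sfr.continuumconcepts.com @<one-of-the-ns-hosts>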


bind - How to setup Cloudflare when I don't have authority over the parent domain?

I'm new to cloudflare and have a handful of web servers I'd like to run through it:
(I'd ask this on cloudflare's forum but I corrected a typo in my email address for the free cloudflare service and they keep sending the forum confirmation link to the typo address)




  www.my.domain.net
hosta.my.domain.net
hostb.my.domain.net


I've tried working through the Cloudflare setup page, and when it did the domain search for the hosts I entered, it instructed me to change the nameservers for domain.net to "bill.ns.cloudflare.com" and "tony.ns.cloudflare.com". The thing is, I don't have authority over the parent domain, domain.net.



What's the best way to handle this? My (BIND) zone file looks (similar) to this:




  IN NS ns1.my.domain.net.
IN NS ns2.my.domain.net.
IN A 1xx.9x.4x.1
IN TXT "v=spf1 mx ptr ip4:1xx.x2.x3.x3 ip4:1xx.x2.x3.x4 include:other.domain.net include:spf.protection.outlook.com ?all"


I'm not sure changing the NS lines to "bill" and "tony" on Cloudflare would be the right thing to do for my.domain.net, since Cloudflare seems to want me to change the DNS for the entire parent domain.




Is there a way to send only www, hosta, and hostb through Cloudflare and leave the rest of my.domain.net "un-proxied"? Ideally, since I have no Cloudflare-fu, I would set up a test host and experiment first before moving everything over.

Saturday, January 19, 2019

windows - UAC - When set to "Never notify" do I still have a dual token?



UAC can be set to never notify, but that's not the same as not having UAC at all.



What I mean is, does the OS still create a dual token for admin users but just auto-elevate everything?




The difference is important since various file-system ops will still behave differently from, say, Windows NT 4.0.



For example, when Explorer sees a folder with only Administrators:Full-Control, it often prompts that you don't have access, elevates, and then auto-adds your user to the ACL.



That's what I seem to observe, and I really don't like it. By setting UAC to not prompt, I assume this elevate-and-modify-ACL will just happen silently, but it's still screwing with my ACLs.



In general, since UAC, I seem to spend so much time not having rights to things and messing around with ACLs, whereas in the NT 4.0 days life was simple: the ACL was the truth.



I "get" UAC for my mother-in-law, but on a server, where experts roam?!



Answer



This isn't a healthy attitude to have, in my opinion. Even experts make mistakes. Also, there are thousands of server admins in the world who I wouldn't exactly call "experts." You don't hear many *nix admins saying things like, "man, what BS, I'm an expert, I shouldn't have to sudo!"



But anyway, on to your question.



First of all, you ask, (paraphrased) "if I disable UAC, will I still have a restricted token?"



Well, that depends. Who are you? Not everyone on the system will have a restricted token. Only users who log on to the system who are members of privileged groups such as Administrators, Domain Administrators, etc., or who hold sensitive privileges such as SeTcbPrivilege, will be given restricted tokens in addition to their full token during logon.



Please reference Windows Internals, 6th Ed. Part I Chapter 6 for a full list of exactly which groups and what privileges are checked before a restricted access token is generated.




A quote from the aforementioned book:




If one or more of these groups or privileges are present, LSASS creates a restricted token for the user (also called a filtered admin token), and it creates a logon session for both. The standard user token is attached to the initial process or processes that Winlogon starts (by default, Userinit.exe).



Note If UAC has been disabled, administrators run with a token that includes their administrator group memberships and privileges.




And also, from Chapter 2 (emphasis is mine):





Upon a successful authentication, LSASS calls a function in the security reference monitor (for example, NtCreateToken) to generate an access token object that contains the user's security profile. If User Account Control (UAC) is used and the user logging on is a member of the administrators group or has administrator privileges, LSASS will create a second, restricted version of the token. This access token is then used by Winlogon to create the initial process(es) in the user's session.




You can test this for yourself using whoami /priv. With UAC on, log on as a user who is a member of the Administrators group. You will see that the list of privileges is much shorter in a non-elevated command prompt than in an elevated one, implying the existence of two separate tokens for the same user:



[screenshot: UAC ON]



Now turn UAC off (or set it to "Never Notify"), reboot the machine, and attempt the same test. You will notice now that there is no difference between a standard and an elevated process. No more restricted access token.



Friday, January 18, 2019

What is wrong with my nginx reverse proxy configuration, with single server (and more later)



I'm trying to get an nginx reverse proxy setup working. I have two web servers set up, one with nginx and one with apache2. I'm currently not able to get it working with just the nginx server, so that's all I'm trying for right now, but I've noted that I'm eventually trying with two, in case that affects the setup.



I have four machines in this setup.







1. Client machine: 192.168.0.5, Ubuntu 14.04 desktop

2. Reverse proxy server: 192.168.0.10, nginx 1.4.6, Ubuntu 14.04 server

3. Server 1: 192.168.0.15, server1.mydomain.com, nginx 1.4.6, Ubuntu 14.04 server

4. Server 2: 192.168.0.20, server2.mydomain.com, apache2, Ubuntu 14.04 server







On my client machine, I have set my hosts file to point to the reverse proxy server for each of the web servers, like below



/etc/hosts on the client (192.168.0.5):
127.0.0.1 localhost
192.168.0.10 server1.mydomain.com
192.168.0.10 server2.mydomain.com



I have SSL certs for server1 and server2 that I have put on the reverse proxy server (192.168.0.10). We'll call them server1.crt, server1.key, and server2.crt, server2.key.




I believe that I have to set this up with the certs working like this:



client(192.168.0.5) ---https---> reverseProxy(192.168.0.10 holds ssl certs) ---http---> server1 or server2 


I have both servers working now, with http, and I just need to fix the nginx reverse proxy settings on 192.168.0.10.



Here's something I've tried, but it isn't correctly redirecting. Once again, I'd like an https connection to the reverse proxy server, and then an http connection between the reverse proxy and the servers.




/etc/nginx/nginx.conf



user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
worker_connections 768;
# multi_accept on;
}


http {

##
# Basic Settings
##

sendfile on;
tcp_nopush on;
tcp_nodelay on;

keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;

# server_names_hash_bucket_size 64;
# server_name_in_redirect off;

include /etc/nginx/mime.types;
default_type application/octet-stream;


##
# Logging Settings
##

access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

##
# Gzip Settings
##


gzip on;
gzip_disable "msie6";

# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;


##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##

#include /etc/nginx/naxsi_core.rules;

##

# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##

#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;

##
# Virtual Host Configs

##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}


/etc/nginx/sites-available/default



server {

listen 80 default_server;
listen [::]:80 default_server ipv6only=on;

root /usr/share/nginx/html;
index index.html index.htm;

# Make site accessible from http://localhost/
server_name localhost;

location / {

# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
}

server {
listen 443;
server_name server1.mydomain.com;


ssl on;
ssl_certificate /usr/local/nginx/conf/server1.crt;
ssl_certificate_key /usr/local/nginx/conf/server1.key;
ssl_session_cache shared:SSL:10m;

ssl_session_timeout 5m;

ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";

ssl_prefer_server_ciphers on;

location / {
proxy_pass http://192.168.0.15:80;
proxy_set_header Host server1;

proxy_redirect http:// https://;
}
}



I'm assuming that something is incorrect in my /etc/nginx/sites-available/default file, but I've been reading through several tutorials, and this seems pretty close. This setup obviously only tries with server1 and ignores server2, but I assumed that if I could get one working, I could add as many others as I wanted. I have found similar questions, such as this one, but I still haven't been able to get this configuration working with a single server.



What's currently happening



Currently, when I go to server1.mydomain.com, I get the standard "Welcome to nginx!" page from the reverse proxy server (192.168.0.10). There's no forwarding going on.



Am I getting close? Thanks in advance







EDIT1



After trying the solution posted by Capile, I ran into another problem (which may have been expected by someone with more web knowledge than myself).



When I changed my /etc/nginx/sites-available/default file to this:



/etc/nginx/sites-available/default (On reverse proxy)



server {


listen 80 default_server;
listen 443 ssl default_server;
server_name server1.mydomain.com;

ssl on;
ssl_certificate /usr/local/nginx/conf/server1.com.crt;
ssl_certificate_key /usr/local/nginx/conf/server1.com.key;
ssl_session_cache shared:SSL:10m;


ssl_session_timeout 5m;

ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
ssl_prefer_server_ciphers on;

location / {
proxy_pass http://192.168.0.15:80;
proxy_set_header Host server1;


proxy_redirect http:// https://;
}
}


With this config I get a



400 Bad Request



The plain HTTP request was sent to HTTPS port




I thought maybe I should try https://server1.mydomain.com, but that just spins.



Also, I don't mind using the same ssl cert for both servers. I don't think that will be an issue.






EDIT2



First of all, thank you for all of the help.




I removed the ssl on; line as recommended by Capile, and changed the proxy_redirect http:// https://; line to proxy_redirect http:// $scheme://; as recommended by Richard Smith.



This fixed the bad request for the http traffic. So now, if I go to http://server1.mydomain.com I am successfully redirected to the site (yaay!)



If I try to go to https://server1.mydomain.com I am also redirected, which is great, but I'm getting an Unable to connect error. This makes sense if the reverse proxy is forwarding http traffic to http and https traffic to https, since there is no https configuration on the backend server.



My goal is that if I go to http://server1.mydomain.com, it connects to the reverse proxy using https and is then forwarded on to the backend server using http. That doesn't appear to be happening... it looks like it's just forwarding without ever using https.



On the other hand, if I go to https://server1.mydomain.com, it should connect to the reverse proxy using https and then forward on to the backend server using http.




So I never want an http connection from the client to the reverse proxy server.



The curls are acting as expected, though. When I curl the http or https site, I get this:



curl -i https://server1.mydomain.com or curl -i http://server1.mydomain.com



HTTP/1.1 302 Found
Server: nginx/1.4.6 (Ubuntu)
Date: Thu, 21 Jan 2016 16:34:29 GMT

Content-Type: text/html; charset=utf-8
Content-Length: 92
Connection: keep-alive
Cache-Control: no-cache
Location: http://test1/users/sign_in
Set-Cookie: session=336e109ad711; path=/; expires=Thu, 28 Jan 2016 16:34:36 -0000; HttpOnly
Status: 302 Found
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: ad65f-f6ds204-9fs8d-bdsdfa43-df5583266sdf87

X-Runtime: 0.005358
X-Xss-Protection: 1; mode=block

You are being redirected.


So, the redirect is definitely happening for either type of connection, but I don't THINK that the SSL certs for the https connection between the client and reverse proxy server are ever being used.


Answer



You are close, but there's no reverse proxy for http configured, only for https (in the above setup), so it shows the default content from the document root. The first server block also lacks its ending }.




You may configure both http and https in the same block; just use the ssl keyword in the listen directive on the SSL port (then there's no need to set the ssl on directive):



server {
listen 80 default_server;
listen 443 ssl default_server;
...
ssl_certificate /usr/local/nginx/conf/server1.crt;
ssl_certificate_key /usr/local/nginx/conf/server1.key;
ssl_session_cache shared:SSL:10m;
...



I must point out that using two different SSL certificates on the same IP is somewhat limited and complicated, because the certificate has to be chosen before the proper Host is known (clients without SNI support cannot handle this), so you'll probably need to separate the certificates by server block. For this, use a more specific, by-IP listen:



server {
listen 192.168.0.10:443 ssl; # for server1.mydomain.com
...

server {
listen 192.168.0.11:443 ssl; # for server2.mydomain.com

...


I also recommend you use the Mozilla SSL Config Generator for best practices on SSL security.





To avoid the 400 error, just remove the directive ssl on; and reload. It was causing the http traffic to require SSL; you already indicated the use of SSL on that port with the line listen 443 ssl default_server; (this means: turn ssl on for this port).



The rest of the configuration is fine, but depending on your server response, you may need to adjust the proxy settings.




First, check whether the spinning is client-side or server-side: access it with cURL, or turn on the web developer tools in your browser and check the network tab. If your browser is being redirected, you probably need to adjust proxy_redirect or even make some string replacements on your server response.



Check it this way with cURL:



curl -i http://server1.mydomain.com


If you see a Location: header in your response, you'll need to fine-tune your proxy settings (that will depend on the response). For example, you may be accessing it as https, then your proxy forwards as http, and your backend server application redirects to https (but when the response goes through the proxy, it goes back to http to the browser). There are several ways of fixing it, by using proxy_set_header or even adjusting your backend server.




For example you could use:



proxy_set_header X-Forwarded-Proto $scheme;


But either your backend http server or your application would need to properly understand this.
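

A sketch of how this can fit together inside the https server block (the test1 upstream name is taken from the Location header shown earlier; adjust it to whatever your proxy_pass points at):

location / {
    proxy_pass http://test1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    # rewrite backend redirects to http://test1/... back to the public https host
    proxy_redirect http://test1/ https://$host/;
}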






Alternatively, if there's no redirect but also no response (or a gateway error after the request timeout, around 30s), check whether your backend server is properly responding to the proxy server.






Please also note that you don't need an http server on your backend; you can use, for example, a FastCGI server directly, which sometimes has fewer drawbacks.








Based on the cURL response, I see that the configuration is working as expected: the backend server's response is asking you to sign in. If the SSL certs weren't working, you wouldn't be able to curl -i https://server1.mydomain.com at all.




The traffic between the frontend (proxy) and the backend is carried over http only (see the proxy_pass directive), and that's usually expected; encrypting that leg again would add unnecessary overhead.



Now, if you wish to use https only, you have two options: either configure that in your backend (so that it forwards you to https://test1/users/sign_in), or use a different nginx setup where you strip the http:80 server and make it redirect everything to https:443. Something like this (be sure to remove the listen 80 from the next server block):



server {
listen 80 default;
server_name _;
return 301 https://$host$request_uri;
}


server {
listen 443 ssl;
...

Exchange server intermittently not receiving or delivering emails to a few addresses?



This is a strange problem. We are using an Exchange 2007 server to handle the emails to and from the company. There are two main problems which are probably related.




  1. None of our mails sent to one single customer are ever received.



When we send any type of mail to one particular customer, they never get it. We have confirmed the address and tried to send more to other mail addresses on the same domain and they still don't receive it. No error (email or otherwise) is ever issued. (Domain related? Blacklisted?)





  2. Sometimes (intermittently) a mail sent to our company (can be any address on our domain) is never received.



I tried this the other day from home and sent a mail to my work address. It was never received. But then a day later I sent another and it was received fine (so the mail address is fine). No error (email or otherwise) is ever issued.



Any ideas where to start looking for causes?


Answer



If you're going to track this problem down, you have to start by looking at the message tracking logs on your Exchange server to determine exactly what YOU did with the message. If you're not familiar with message tracking in Exchange 2007, take a look at this series of articles that explain how to do it. Send a test message and then track it in your system; you'll be able to determine exactly what your server did with the message. If you see that your server handed the message off to the recipient server, you can relax and be totally confident that your system is working correctly and that the recipient server is the one having problems (which you obviously can't fix). Without tracking the message, you'll have no idea where the actual breakdown is.
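

For example, a quick sketch in the Exchange Management Shell (the addresses and time window here are placeholders):

# Run on the Hub Transport server. An EventId of SEND means the message was
# handed off to the next hop over SMTP; FAIL means delivery failed locally.
Get-MessageTrackingLog -Sender "me@ourcompany.com" -Recipients "contact@customer.com" `
    -Start (Get-Date).AddDays(-1) -ResultSize Unlimited |
    Format-Table Timestamp, EventId, Source, MessageSubject -AutoSize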


apache 2.2 - LAMP Server HTTP-> HTTPS redirect in .htaccess




I have 2 LAMP servers with the same version of Apache, both running Wordpress.
SSL is working on both servers (although the test server uses a cert for a different domain).



On both I have the following .htaccess file (/var/www/html/.htaccess):



# BEGIN WordPress
RewriteEngine On

RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]

RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress


The test server redirects to HTTPS correctly, but the production server does not attempt to redirect at all. I can manually browse the production site via HTTPS.



The .htaccess file has 755 permissions and is owned by apache:apache on both servers.



In order to make sure that the redirect on the test site was because of the .htaccess file, I changed it by removing the following:





RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]



The redirect on the test site stopped until I added it back.



Is there something that may have been done on the test site that needs to be done on production?


Answer




I found the issue so I guess I jumped the gun on posting this question, but I'll leave it here - maybe it will save somebody some time.



In the /etc/httpd/conf/httpd.conf file, AllowOverride was still set to None for the document root directory on the production server.
Here is what it looks like now (with the redirect working):





<Directory "/var/www/html">
    #
    # Possible values for the Options directive are "None", "All",
    # or any combination of:
    #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
    #
    # Note that "MultiViews" must be named *explicitly* --- "Options All"
    # doesn't give it to you.
    #
    # The Options directive is both complicated and important. Please see
    # http://httpd.apache.org/docs/2.2/mod/core.html#options
    # for more information.
    #
    Options Indexes FollowSymLinks

    #
    # AllowOverride controls what directives may be placed in .htaccess files.
    # It can be "All", "None", or any combination of the keywords:
    #   Options FileInfo AuthConfig Limit
    #
    AllowOverride All

    #
    # Controls who can get stuff from this server.
    #
    Order allow,deny
    Allow from all
</Directory>
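

After a change like this, a quick syntax check and reload applies it (a sketch; the RHEL-style service name is assumed to match the /etc/httpd path above):

apachectl configtest
service httpd reload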



Thursday, January 17, 2019

linux - Possible to authenticate Samba via Kerberos but without domain-join?

With a Kerberos config file...





[realms]
DOMAIN.COM = {
kdc = dc1.domain.com
admin_server = dc1.domain.com
}



...it is possible for Linux to talk to Active Directory for password validation without necessarily being an AD domain member:





$ kinit jdoe
Password for jdoe@DOMAIN.COM:
$ klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: jdoe@DOMAIN.COM

Valid starting Expires Service principal
01/12/15 15:36:16 01/13/15 01:36:25 krbtgt/DOMAIN.COM@DOMAIN.COM

renew until 01/19/15 15:36:16



At this point, you can use PAM to define local Linux users in /etc/passwd, yet have their TTY sessions authenticated via Active Directory. Authn via krb5 is done as a per-login context:




auth        sufficient    pam_krb5.so use_first_pass




But if krb5 is already implemented as part of the PAM global defaults, why isn't Samba also picking it up? I see /etc/pam.d/samba does an include of the Kerberized password-auth file, but no joy when accessing an SMB volume. (Debug logs indicate a failed-to-get-SID error, which is very "you are not part of the domain".)



My underlying question is: can a similar krb5 authn centralization be done under Samba as it was for Shell, without all that extra overhead/complexity of domain membership? I need to have Samba services implemented on a group of NIS-clustered systems, but don't want to have different TDBSAM back-ends on each system leading to SMB password confusion. Using Kerberos as my authenticator would be great. However, I still want to define authorization/access via local Linux account and not open up Samba access to all domain users as would be the case with domain-join, winbind DC emulation, or full-fledged AD server.



Alternatively: is there a better centralized back-end authn option for Samba in a Linux cluster? I looked at CTDB, but it seemed to be geared towards mediating shared-storage rather than central authn with disparate volumes...

Wednesday, January 16, 2019

linux - SMART warns me but I don't trust it



I've got a server with four Samsung hard drives. All drives are the same model and have been bought together. The drives are SAMSUNG HE753LJ with firmware 1AA01113.



I'm getting SMART errors, but I have the feeling that smartctl does not understand the value it gets from the hard drive.



Here's the result of a SMART test:





asgard:~# smartctl -H /dev/sdb
smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
Failed Attributes:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
3 Spin_Up_Time 0x0007 001 001 011 Pre-fail Always FAILING_NOW 60340




I don't trust SMART because:




  • For over a year now, SMART has claimed that all the disks are about to fail within less than 24 hours. Nothing has blown up yet.

  • Wikipedia says that
    "Spin-Up Time is the average time of spindle spin up (from zero RPM to fully operational [millisecs])." That would mean that the drives need about one minute to wake up?!




I would like to follow smartctl's advice and change these disks but I just don't trust the results I read.



What do you think about this?
What would you do?



Thanks for your help.


Answer




All drives are the same model and have been bought together.





This is a ticking bomb.



Based on both the message from SMART and the quote above, you should change disks right away.



Since the drives have been bought together and are the same model, they will probably have the same weaknesses, and probably all fail simultaneously under the same condition...



The main concept of RAID is that disks fail at different times, giving you the opportunity to swap one disk at a time, and avoid data loss.




Others have reported simultaneous failure of an entire array of identical disks in a RAID configuration, coming from the same production batch, and thus being subject to the same weakness.



I can't stress this enough: You need to start swapping your drives!
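

If you want more evidence before (or while) swapping, a quick sketch (the device name matches the question; repeat for each drive):

# Full attribute dump: raw values and thresholds, not just the PASS/FAIL verdict
smartctl -a /dev/sdb

# Schedule an extended offline self-test, then read the result once it finishes
smartctl -t long /dev/sdb
smartctl -l selftest /dev/sdb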


linux - Deciphering kpartx output




I hope I've posted this in the proper place, if I haven't then please advise me on where to move the post.



I've tried deciphering the kpartx output myself but now I'm sort of stuck and in need of guidance. I lack knowledge in many areas and I'm trying to improve, hence the deciphering. I'll post my problem and my findings so far and I'm hoping that someone out here could spare some of their time in guiding me in my troubleshooting/deciphering.



The problem



[root@hostname ~]# kpartx -l /dev/mapper/mpathcg 
mpathcg1 : 0 673171632 /dev/mapper/mpathcg 63



This number right here is my issue: 673171632. As far as I know, and also according to this answer https://serverfault.com/a/469734, this number should represent the number of blocks of this particular device.



[root@hostname ~]# fdisk -l /dev/mapper/mpathcg

Disk /dev/mapper/mpathcg: 344.7 GB, 344671125504 bytes
255 heads, 63 sectors/track, 41903 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes

Disk identifier: 0xa5c9e23d

Device Boot Start End Blocks Id System
/dev/mapper/mpathcgp1 1 41903 336585816 8e Linux LVM
Partition 1 does not start on physical sector boundary.


But looking at the fdisk output, which I trust from experience, the number of blocks for this device is 336585816. To me, we have an inconsistency here. Since I trust fdisk, I was curious about how kpartx finds its number of blocks, so that I could then look at fdisk and see how the two differ. So here is where the "deciphering" began.



The actual question




I'm actually here for guidance, but in an attempt to follow this forum's guidelines and to help anyone else wondering the same thing:




How does kpartx determine its output, in particular the number of blocks?




My findings so far




My number one finding: I'm terrible at reading C code...



kpartx/kpartx.c:



            printf("%s%s%d : 0 %" PRIu64 " %s %" PRIu64"\n",
mapname, delim, j+1,
slices[j].size, device,
slices[j].start);
}



To me it seems that this struct called slice has a member (or whatever the term is) named size, which is the size of a partition in blocks; that is what gets output to stdout. However, I don't understand how it gets populated with actual numbers.



kpartx/kpartx.h



struct slice {
    uint64_t start;
    uint64_t size;
    int container;
    int major;
    int minor;
};


This is what the struct looks like, and it seems to correspond to what kpartx outputs.



kpartx/kpartx.c:



typedef int (ptreader)(int fd, struct slice all, struct slice *sp, int ns);
...
extern ptreader read_dos_pt;


These also seem interesting. I'm basing this on the name read_dos_pt, since the partition in question is a dos partition, and on the fact that ptreader seems to use the slice struct. Maybe to populate it?



kpartx/dos.c:



read_dos_pt(int fd, struct slice all, struct slice *sp, int ns) {
    struct partition p;
    unsigned long offset = all.start;
    int i, n = 4;
    unsigned char *bp;
    uint64_t sector_size_mul = get_sector_size(fd)/512;

    bp = (unsigned char *)getblock(fd, offset);


Here I notice the getblock function, which seems like the obvious place to find what I'm looking for. But looking at the getblock function in kpartx/kpartx.c, I get lost and confused.




Any help I can get will be appreciated. Thank you for your time.


Answer



Not sure how relevant this is for serverfault, but I'll take it apart anyway.



Skip past getblock in read_dos_pt; the interesting part is on line 97: sp[i].size = sector_size_mul * le32_to_cpu(p.nr_sects);
sector_size_mul is the number of 512-byte sectors in one native sector for this disk (e.g., 4K disks would have a sector_size_mul of 8). Most likely this will be 1, especially if it's a file you're probing.



p.nr_sects is being populated directly from the on-disk dos partition table using memcpy. The osdev wiki has a nice tabular dos partition format description, so you can see the nr_sects structure field is a uint32_t starting at byte 12 of the partition entry (cf. dos.h offset of partition.nr_sects).




Thus what kpartx is putting in that field is "the number of 512 byte sectors in the partition, regardless of native sector size."



Going back to your fdisk output, it's pretty clearly in 1k blocks.



Divide your byte size by 1024 and you're going to get the 336585816 number you're seeing in fdisk, but divide by 512 and you'll get what kpartx shows you.
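

To sanity-check the arithmetic with the numbers above (a quick shell sketch):

echo $(( 673171632 * 512 ))   # 344663875584 bytes in the partition
echo $(( 673171632 / 2 ))     # 336585816 -> fdisk's 1K block count for the partition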


Monday, January 14, 2019

apache 2.2 - httpd server is not starting

I am implementing a failover mechanism in JBoss from here




I added following line in httpd.conf file



LoadModule jk_module modules/mod_jk.so


Now I am getting following error while starting Apache




D:\Installation\apache-2\bin>httpd -k start
httpd: Syntax error on line 495 of D:/Installation/apache-2/conf/httpd.conf: Can

not load D:/Installation/apache-2/modules/mod_jk.so into server: The specified
procedure could not be found.


I am sure that the file mod_jk.so exists in the modules directory. What could be the reason for this, and how can I resolve the problem?
I am using Windows 7.
Thanks in advance

What does "every two minutes" mean in cron?

I've got two scripts in cron set to run every two minutes: */2 -- the thing is, they're out of step. One runs at minutes 1,3,5,7,9, etc. and the other at 0,2,4,6,8. This is not a mission-critical problem, but it means I've got two status reports, one a bit stale compared to the other.
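

For what it's worth, in standard (Vixie) cron */2 is shorthand for 0-59/2, i.e. the even minutes, so two */2 entries should start in the same minute; an odd-minute cadence corresponds to 1-59/2. A sketch with hypothetical script paths:

# minutes 0,2,4,...,58 -- these two entries start in the same minute
*/2 * * * *    /usr/local/bin/status_a.sh
*/2 * * * *    /usr/local/bin/status_b.sh

# minutes 1,3,5,...,59 -- offset by one minute from the pair above
1-59/2 * * * * /usr/local/bin/status_c.sh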



What does cron do exactly? Run the first one in crontab document order, waiting till it's finished to run the second one?



Is there any way I can make the run at the same time, or as close as possible?

Sunday, January 13, 2019

active directory - Windows 7 cannot join samba domain




I have a 3.5.6 samba server with a LDAP backend (both on Debian 6.0). I've been successfully adding Windows XP machines to the domain for years. I now try to add Windows 7. I have made the recommended registry changes, but I don't have any success so far. Here is what happens:



1. I go to computer name, select "Domain" instead of "Workgroup", type in the domain name, click OK. It asks me for the username and password of an account that can add computers to the domain; I enter them. After about 40 seconds, I get the following message:




The following error occurred attempting to join the domain "ITIA":



The specified computer account could not be found. Contact an administrator to verify the account is in the domain. If the account has been deleted unjoin, reboot, and rejoin the domain.





Despite this, the samba server successfully creates the computer account.



2. Therefore, if I try again a second time, without deleting the already created computer account, I get a different error:




The following error occurred attempting to join the domain "ITIA":



The specified account already exists.





(Note that until a while ago samba wasn't configured to automatically create computer accounts. What I did whenever I wanted an XP to join was to manually create it. When I first attempted to solve the Windows 7 join problem, I setup samba to do this automatically, as this is what most people do, as I understand, and I thought that it might be related. I haven't attempted to add an XP since I made this change, so I don't know if it works, but whether it works or not, the problem remains.)



Update 1: Here are the relevant parts of smb.conf:



[global]

panic action = /usr/share/samba/panic-action %d

workgroup = ITIA
server string = Itia file server

announce as = NT
interfaces = 147.102.160.1
volume = %h

passdb backend = ldapsam:ldap://ldap.itia.ntua.gr:389
ldap admin dn = uid=samba,ou=daemons,dc=itia,dc=ntua,dc=gr
ldap ssl = off
ldap suffix = dc=itia,dc=ntua,dc=gr
ldap user suffix = ou=people
ldap group suffix = ou=groups

ldap machine suffix = ou=computers
unix password sync = no
add machine script = smbldap-useradd -w -i %u

log file = /var/log/samba/samba-log.all
log level = 3
max log size = 5000
syslog = 2

socket options = SO_KEEPALIVE TCP_NODELAY


encrypt passwords = true
password level = 1
security = user

domain master = yes
local master = no
wins support = yes

domain logons = yes

idmap gid = 1000-2000


Update 2: The server has a single network interface eth1 (also an unused eth0 that shows up only in the kernel boot messages) and two ip addresses; the main, 147.102.160.1, and an additional one, 147.102.160.37, that comes up with "ip addr add 147.102.160.37/32 dev eth1" (used only for a web site that has a different certificate than other web sites served from the same machine). One of the problems I recently faced was that samba was using the latter IP address. I fixed that by adding the "interfaces = 147.102.160.1" statement in smb.conf.



Now:



acheloos:/etc/apache2# tcpdump host 147.102.160.40 and not port 5900
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes

13:13:56.549048 IP lithaios.itia.civil.ntua.gr.netbios-dgm > 147.102.160.255.netbios-dgm: NBT UDP PACKET(138)
13:13:56.549056 ARP, Request who-has acheloos2.itia.civil.ntua.gr tell lithaios.itia.civil.ntua.gr, length 46
13:13:56.549091 ARP, Reply acheloos2.itia.civil.ntua.gr is-at 00:10:4b:b4:9e:59 (oui Unknown), length 28
13:13:56.549324 IP acheloos.itia.civil.ntua.gr.netbios-dgm > lithaios.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
13:13:56.549608 IP lithaios.itia.civil.ntua.gr.netbios-dgm > acheloos2.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
13:13:56.549741 IP acheloos.itia.civil.ntua.gr.netbios-dgm > lithaios.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
13:13:56.550364 IP lithaios.itia.civil.ntua.gr.netbios-dgm > acheloos.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)
13:13:56.550468 IP acheloos.itia.civil.ntua.gr.netbios-dgm > lithaios.itia.civil.ntua.gr.netbios-dgm: NBT UDP PACKET(138)



(acheloos2 is the second IP address, 147.102.160.37). The above dump occurs when I click "OK" (to join the domain), until it asks me for the username and password of a user that can join the domain. I don't know why the client is contacting the second IP address. I tried temporarily deactivating it, but I still had some related ARP traffic (though I think not IP traffic).


Answer



Try changing the script from smbldap-useradd -w -i %u
to smbldap-useradd -W %u. This should resolve your issue.
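

In smb.conf terms, mirroring the add machine script line from the question, that would be:

add machine script = smbldap-useradd -W %u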


linux - Alignment of ext3 partition on LVM RAID volume group

I'm trying to add a partition on an LVM logical volume that resides on a RAID6 volume group, and fdisk is complaining about the partition not residing on a physical sector boundary.



My question is: how do you calculate the correct starting sector for a partition on an LVM volume? This partition will be formatted ext3. Would it be better to just format the logical volume directly instead of creating a new partition?





Disk /dev/dedvol/backup: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 1048576 bytes / 8388608 bytes
Disk identifier: 0x4e428f49

Device Boot Start End Blocks Id System
/dev/dedvol/backup1 63 267349 2146982827+ 83 Linux
Partition 1 does not start on physical sector boundary.




lvdisplay /dev/dedvol/backup
--- Logical volume ---
LV Name /dev/dedvol/backup
VG Name dedvol
LV UUID OV2n5j-7LHb-exJL-t8dI-dU8A-2vxf-uIicCt
LV Write Access read/write
LV Status available

# open 0
LV Size 2.00 TiB
Current LE 524288
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 32768
Block device 253:1




vgdisplay dedvol
--- Volume group ---
VG Name dedvol
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable

MAX LV 0
Cur LV 2
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 14.55 TiB
PE Size 4.00 MiB
Total PE 3815448
Alloc PE / Size 3670016 / 14.00 TiB

Free PE / Size 145432 / 568.09 GiB
VG UUID 8fBcOk-aXGx-P3Qy-VVpJ-0zK1-fQgy-Cb691J
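

For what it's worth (this is a sketch, not from the original post), the two approaches raised in the question would look roughly like this on an empty volume:

# Option 1: skip the partition table entirely and format the LV directly
mkfs.ext3 /dev/dedvol/backup

# Option 2: keep a partition table and let parted choose an aligned start
parted -s -a optimal /dev/dedvol/backup mklabel msdos
parted -s -a optimal /dev/dedvol/backup mkpart primary ext3 1MiB 100%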

Saturday, January 12, 2019

filesystems - Where do ext3 inode / meta-data reside?



Just a quick question about ext3:



Are inodes stored in the same area as file data, or are there separate regions on the disk reserved exclusively for meta-data and others reserved exclusively for file/directory-content?



Reason I'm asking: If fsck is clearing/deleting/rewriting something that it thinks is an "inode", could it actually be messing with file content, or would the worst-case effect be that a certain file disappears from the directory tree and gets added to a lost-and-found location, like in FAT file systems?



Answer



Within a block group, the data blocks come right after the inode table.
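

If you want to see that layout for yourself, dumpe2fs prints it per block group (a sketch; the device name is assumed):

# For each group: superblock/descriptor copies, block and inode bitmaps,
# the inode table, and then the range of data blocks that follows them
dumpe2fs /dev/sda1 | less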


Cable management strategy for a small chassis switch



I'm working with a small site buildout that contains a single HP ProCurve 4200vl chassis switch. The data cabling contractor did not provision for any cable management. I'm looking at the rack pictured below and would like some suggestions on how to cleanly patch down to the switch below without vertical managers. I wanted to avoid full vertical management since this setup will only utilize 40-50 cables. Is there a clean way to do this? Would something like the NeatPatch be the best approach? If the answer is velcro, I guess that works, too...



[photo of the rack omitted]


Answer




I think that a combination of these and velcro may be your best bet.


Thursday, January 10, 2019

active directory - How to change the Domain "Short Name" in Windows 2003




I administer a Windows SBS 2003 server for a company called XYZ Associates. The following lists the AD Domain information for this server. Please note that I inherited this server and did not set it up.




Domain short name:         XYZASSOCIATES  
Domain DNS name: XYZ.local
Forest DNS name: XYZ.local
Site name: Default-First-Site-Name
PDC role owner: CN=NTDS Settings,CN=XYMAIN,CN=Servers,

CN=Default-First-Site-Name,

CN=Sites,CN=Configuration,DC=XYZ,DC=local
Schema role owner: CN=NTDS Settings,CN=XYMAIN,CN=Servers,

CN=Default-First-Site-Name,CN=Sites,
CN=Configuration,DC=XYZ,DC=local
Domain is in native mode: True
Forest-wide Schema
Master FSMO: CN=XYMAIN
Forest-wide Domain
Naming Master FSMO: CN=XYMAIN

Domain's PDC Emulator FSMO:CN=XYMAIN
Domain's RID Master FSMO: CN=XYMAIN
Current Userid: CN=Scott McKinney,
OU=SBSUsers,OU=Users,OU=MyBusiness,DC=XYZ,DC=local
Current domain controller: XYMAIN.XYZ.local



How can I change the Domain short name XYZASSOCIATES to just XYZ? It is a source of constant typos for me when doing any domain-related work, and the extra 10 characters don't add anything unique or necessary to the domain name.




I looked at all of the items and properties in Active Directory Domains & Trusts, Active Directory Sites & Services, Active Directory Users & Computers and can't find any reference to XYZASSOCIATES.



Are there any ramifications to existing users and joined computers if the short name is changed, or would they seamlessly update their credentials to use the new short name?



Thanks in advance for your help and insight.


Answer



Here's the Technet article on renaming a Windows domain.
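

For reference only (the original answer just points at the article), the rename is driven by Microsoft's rendom utility, roughly in this sequence; note that Microsoft does not support domain rename on SBS, so verify that caveat for your environment before attempting anything:

REM 1. Dump the current forest description to Domainlist.xml
rendom /list
REM 2. Edit Domainlist.xml, changing the NetBIOS (flat) name to XYZ
REM 3. Upload the edited instructions to the domain naming master
rendom /upload
REM 4. Verify every DC is ready, then perform the rename (DCs will reboot)
rendom /prepare
rendom /execute
REM 5. Unfreeze the forest configuration when finished
rendom /end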


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...