Thursday, May 31, 2018

http status code 403 - 403 forbidden on Apache after trying to install nginx reverse proxy

My setup:



Digital Ocean droplet running Debian 8



2 websites, each with its own domain, running on Apache2



I tried installing nginx and configuring it as a reverse proxy, following these instructions:
https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-web-server-and-reverse-proxy-for-apache-on-one-ubuntu-14-04-droplet




It instantly broke my sites, giving Forbidden 403 error when trying to access them.



I spent so many hours trying to make it work and have now decided to leave it and just use Apache2 like I did before.



But now the sites are still showing Forbidden 403 even after nginx is stopped.
I briefly installed lighttpd + php5-cgi and could then access the sites; however, both domains showed the same single site.



I have run chown -R www-data:www-data /var/www



Also did a chmod -R 755 /var/www




Please, if anyone could provide some input, I would be so happy. I am going crazy trying to fix this mess. :(



Apache ports.conf:



Listen 80


Listen 443







Sample from Apache error log:



[Thu Mar 03 13:56:36.965194 2016] [authz_core:error] [pid 31517] [client 185.106.92.253:55470] AH01630: client denied by server configuration: /var/www/html/xmlrpc.php
[Thu Mar 03 13:56:43.316074 2016] [authz_core:error] [pid 31518] [client 185.106.92.253:52484] AH01630: client denied by server configuration: /var/www/html/xmlrpc.php

[Thu Mar 03 13:56:47.635774 2016] [authz_core:error] [pid 31496] [client 185.106.92.253:53967] AH01630: client denied by server configuration: /var/www/html/xmlrpc.php
[Thu Mar 03 13:57:00.853631 2016] [authz_core:error] [pid 31670] [client 185.106.92.253:50494] AH01630: client denied by server configuration: /var/www/html/xmlrpc.php
[Thu Mar 03 13:57:08.455024 2016] [authz_core:error] [pid 31668] [client 185.106.92.253:45464] AH01630: client denied by server configuration: /var/www/html/xmlrpc.php
[Thu Mar 03 13:57:21.641599 2016] [authz_core:error] [pid 31517] [client 185.106.92.253:38106] AH01630: client denied by server configuration: /var/www/html/xmlrpc.php
[Thu Mar 03 13:57:28.132631 2016] [authz_core:error] [pid 31518] [client 185.106.92.253:48468] AH01630: client denied by server configuration: /var/www/html/xmlrpc.php
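For context, AH01630 is emitted by authz_core on Apache 2.4 (which Debian 8 ships) whenever a <Directory> or <Location> block that applies to the requested path evaluates to "Require all denied". A quick way to hunt for the offending directive; the config path is the Debian default and an assumption here:

```shell
# List every "Require all denied" in the active Apache config tree.
conf_dir="${APACHE_CONF_DIR:-/etc/apache2}"
grep -rn "Require all denied" "$conf_dir" 2>/dev/null || true
# apache2ctl -S   # also useful: shows which vhost/DocumentRoot serves each domain
```

Any hit that covers /var/www/html (or whatever DocumentRoot the vhost uses) is a candidate for the 403.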


apache2.conf:



Mutex file:${APACHE_LOCK_DIR} default


PidFile ${APACHE_PID_FILE}

Timeout 300

KeepAlive On

MaxKeepAliveRequests 100

KeepAliveTimeout 100


User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}

HostnameLookups Off

ErrorLog ${APACHE_LOG_DIR}/error.log

LogLevel warn


IncludeOptional mods-enabled/*.load
IncludeOptional mods-enabled/*.conf

Include ports.conf


Options FollowSymLinks
AllowOverride None
Require all denied




AllowOverride None
Require all granted



Options Indexes FollowSymLinks
AllowOverride All
Require all granted




Options Indexes FollowSymLinks
AllowOverride All
Require all granted





Options Indexes FollowSymLinks
AllowOverride None
Require all granted


AccessFileName .htaccess


Require all denied



LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent


site1.conf:





ServerName www.site1.com
ServerAlias site1.com

ServerAdmin webmaster@localhost
DocumentRoot /var/www/site1

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined



Options FollowSymlinks
AllowOverride none
Require all granted



AddHandler php5-fcgi .php
Action php5-fcgi /php5-fcgi
Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi

FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization




SetOutputFilter DEFLATE
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|ico|png)$ \ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ \no-gzip dont-vary
SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary


BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html


site2.conf:




ServerName www.site2.com
ServerAlias site2.com


ServerAdmin webmaster@localhost
DocumentRoot /var/www/site2

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined


Options FollowSymlinks
AllowOverride none

Require all granted



AddHandler php5-fcgi .php
Action php5-fcgi /php5-fcgi
Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization





SetOutputFilter DEFLATE
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|ico|png)$ \ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ \no-gzip dont-vary
SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary

BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

linux - UBUNTU: why my crontab isn't running the code?



I've manually edited /etc/crontab to add a job that runs on day 9 of every month...



Cron is running, but it hasn't run my command yet...
My entry is the last line:




17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
23 9 9 * * root wget "http://www.mysite.com/url.api?dowhat=mensalcheck" --no-cache --read-timeout=1600 -O "/var/log/mysite.com/$(date "+url-api.%d-%m-%y.log")"

Answer



To make sure your cron job is registered, use "crontab -e" or "sudo crontab -e" to edit your cron jobs. When you finish editing and save the file, crontab installs the new job properly so it is picked up for the next run.



So, use sudo crontab -e and make sure you make at least one modification to the file.




Also, cron usually has a minimal PATH variable, meaning it doesn't know where to find wget, so it's best to use the full path /usr/bin/wget.
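A sketch of a corrected last line under that advice; beyond the bare wget, note that % is special in crontab entries (cron treats it as a newline) and must be escaped as \% inside the date format:

```crontab
23 9 9 * * root /usr/bin/wget "http://www.mysite.com/url.api?dowhat=mensalcheck" --no-cache --read-timeout=1600 -O "/var/log/mysite.com/$(date "+url-api.\%d-\%m-\%y.log")"
```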


best practices - Actions to take during/after a power outage



We're in the process of replacing the shelving in our server room, and I found a piece of paper that had been covered over which lists various actions to take during/after a power outage:




  1. ADSL modem power off and on

  2. Start all servers

  3. Start production order polling process

  4. Check: E-mail services started

  5. Check: SQL Agent service started


  6. Check: Internet connection restored

  7. Photocopiers: power off and on



Some of these steps make sense, but I'm not sure about powering things off and on. All our modems, routers, switches, and servers are on battery backup. PowerChute Business Edition is installed on the servers and they are configured to shut down automatically at the last possible minute (because we get a lot of short outages). I know from past outages that auto shutdown is working, and the servers are auto powered on again when the power is restored. The photocopiers are not on battery backup, and considering everybody wants them to die I'm not really interested in protecting them.



Checking that things are up and running again makes sense, and I've configured quite a few automatic e-mails to handle this (using a third-party monitoring service).



So what do I really need to be doing during the power outage itself? I figure that making a note of the time and calling the power company should be sufficient. I've read elsewhere on this site of people recommending to shut everything down manually, but we don't stay on site until the power comes back up so we prefer that things come back up on their own.




To give some context on the environment, our e-mail server is located in-house, as well as a web server which runs our on-line ordering system and zip locator service. These need to be up, though orders are not really lost when we're down; we're a manufacturing company with a distribution network, so we don't sell directly to the end consumer. Our distributors will enter their orders when the system comes back up.


Answer



You don't specify what kind of environment you work in and what the systems are used for, but I'll make some general statements. Given that you use an ADSL connection, I'm going to assume you're not hosting any world-facing applications (websites, email, etc.)



During the power outage, there's not a whole lot you can do. If you're running off a generator (doesn't sound like it) or a shared UPS, shut down development systems as early as possible -- it'll leave more power for your production systems.



The key is in the planning -- when you say "last possible minute," what if your systems take slightly longer to shut down than usual? Are you risking data by an improper shutdown? I would leave more padding.



Where I work, none of our systems shut down during an outage -- we have a room-wide UPS that's backed by a natural gas generator, so we hope everything stays up for an outage of almost any length.


intel - All paths down on vSphere esxi 5.1, LSI 9211-8i

Works 100%
ESXi 5.1




HBA (LSI 1064e) > 1x SAS2 100GB SSD




Doesn't work
ESXi 5.1





HBA (LSI 9211-8i IT /IBM M1015) > 1x SAS2 100GB SSD




After a few hours of heavy HD load I get:
- Device or filesystem with identifier ['all drives'] has entered the All Paths Down state.
- ESXi retries but loses connectivity to the SAS drive, marks it as "dead or error"
- reboot fixes the issue




How can I fix this issue?

domain name system - Adding provided secondary DNS server to bind



I am using a secondary DNS server that my hosting provider has given me for my domain; its hostname is:



sdns1.ovh.ca


I am using Webmin to install the DNS server on my Ubuntu Server. Since a CNAME for sdns1.ovh.ca would not be allowed, how do I add this to my name server so that ns1.example.com is the main DNS server and ns2.example.com is the name server my provider has given me?




Zone file:



domain.me.  IN  SOA ns1.domain.me. xxxxx.gmail.com. (
1360915275
10800
3600
604800
38400 )
domain.me. IN NS ns1.domain.me.

domain.me. IN A 192.95.29.122
www.domain.me. IN A 192.95.29.122
ns1.domain.me. IN A 192.95.29.122
domain.me. IN NS ns2.domain.me.
ns2.domain.me. IN CNAME sdns1.ovh.ca.


Godaddy's Host Summary:



ns1.domain.me

Host Ip: 192.95.29.122

ns2.domain.me
Host Ip: 192.95.29.122

Answer



I'm having a little difficulty decoding your question, but assuming what you're saying is that your DNS registrar has given you a second DNS server for your domain, and you want to know how to edit the zone file on your DNS primary to make it consistent with this new information, the answer is to add a record in the zonefile that says



IN                NS      sdns1.ovh.ca.



Note that terminal dot - it's important. This will not magically add the new DNS server to the list of name servers returned by the TLD servers (those authoritative for the domain of which yours is a child) when your domain is queried; that has to be done separately. Nor does it magically set this new DNS server up as a slave, pointing to your primary server as the master (one would hope that the provider has done this, since it was they who told you the new server's details). But once those other things have been done, the above will make it all self-consistent.



It would have been easier to answer this question if you'd provided your domain. Obviously, some won't want to, and others are forbidden to, but DNS is a fairly public system; no confidentiality or security is lost by telling us that it exists. So if you need to ask further questions about this, I urge you to provide that information.



Edit: yes, this goes into the zonefile for the domain, on the DNS master. If manually, it goes in the zonefile for the domain as detailed in named.conf; you'll need to know where your own named.conf lives, as it varies by OS, platform, and implementation.



Edit: from memory, I'm fairly sure that an NS record must not be a CNAME (later edit: this is indeed so, see RFC2181 s10.3). Remove the lines



craftblock.me.      IN  NS  ns2.craftblock.me.

ns2.craftblock.me. IN CNAME sdns1.ovh.ca.


and replace them with



craftblock.me.      IN  NS  sdns1.ovh.ca.


and thank you for telling us the domain name.




Edit: in the light of what you've told us about godaddy's information, your NS records should probably read:



craftblock.me.      IN  NS  ns1.craftblock.me.
craftblock.me. IN NS ns2.craftblock.me.
ns1.craftblock.me. IN A 192.95.29.122
ns2.craftblock.me. IN A 192.95.29.122


I note they're doing that awful old trick of having two NS records (which is required) which are in fact the same IP address (which is lame), but that's not your fault. Once this is up and running on your new registrar, you might want to arrange 2ary DNS hosting with someone else, maybe a friend or colleague, to restore the nameserver redundancy the DNS is supposed to give you.




Edit: we're going round in circles. As I said, advertising the right nameservers in your zonefile will do nothing for the whois information (the list of nameservers returned by your TLD's server) or the setup of your 2ary.



Plus your currently advertised DNS servers in the whois are NS1.NFOSERVERS.COM and NS2.NFOSERVERS.COM, so nothing we're discussing here will make any real difference. I'm no longer sure what you want, nor indeed that you're sure what you want.



Could you maybe consider overhauling this question in its entirety, or perhaps deleting it and opening a new question where you say clearly and simply what you want to achieve? May I add that in my opinion, messing around with the DNS is not for people who don't know what they're doing; it's quite easy to make your domain entirely non-functional. I think you should seriously consider whether you should be doing this at all with a professional domain, given that you don't appear to understand the underlying concepts.


Wednesday, May 30, 2018

storage - How to reliably map vSphere disks to Linux devices

After a virtual disk has been added to a Linux VM on vSphere 5, we need to identify the disks in order to automate LVM storage provisioning.



The virtual disks may reside on different datastores (e.g. sas or flash) and although they may be of the same size, their speed may vary. So I need a method to map the vSphere disks to Linux devices.






Through the vSphere API, I am able to get the device info:



Data Object Type: VirtualDiskFlatVer2BackingInfo
Parent Managed Object ID: vm-230
Property Path: config.hardware.device[2000].backing

Properties


Name Type Value
ChangeId string Unset
contentId string "d58ec8c12486ea55c6f6d913642e1801"
datastore ManagedObjectReference:Datastore datastore-216 (W5-CFAS012-Hybrid-CL20-004)
deltaDiskFormat string "redoLogFormat"
deltaGrainSize int Unset
digestEnabled boolean false
diskMode string "persistent"
dynamicProperty DynamicProperty[] Unset

dynamicType string Unset
eagerlyScrub boolean Unset
fileName string "[W5-CFAS012-Hybrid-CL20-004] l****9-000001.vmdk"
parent VirtualDiskFlatVer2BackingInfo parent
split boolean false
thinProvisioned boolean false
uuid string "6000C295-ab45-704e-9497-b25d2ba8dc00"
writeThrough boolean false



And on Linux I may read the uuid strings:



[root@lx***** ~]# lsscsi -t
[1:0:0:0] cd/dvd ata: /dev/sr0
[2:0:0:0] disk sas:0x5000c295ab45704e /dev/sda
[3:0:0:0] disk sas:0x5000c2932dfa693f /dev/sdb
[3:0:1:0] disk sas:0x5000c29dcd64314a /dev/sdc


As you can see, the uuid string of disk /dev/sda looks similar to the string visible in the VMware API. Only the first hex digit differs (5 vs. 6), and the match only extends to the third hyphen. So this looks promising...
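If the pattern holds, the mapping can be derived with plain string manipulation. This is a sketch resting on the (unverified) assumption that Linux's SAS address is simply the VMware uuid lower-cased, with hyphens removed, truncated to 16 hex digits, and its leading digit replaced by 5:

```shell
vmware_uuid="6000C295-ab45-704e-9497-b25d2ba8dc00"   # from the vSphere API
# Drop hyphens, lower-case, keep hex digits 2-16, and prefix "5" (plus "0x"):
sas_addr="0x5$(echo "$vmware_uuid" | tr -d '-' | tr 'A-Z' 'a-z' | cut -c2-16)"
echo "$sas_addr"   # 0x5000c295ab45704e, matching lsscsi's sas: address above
```

Whether that first-digit substitution is a stable rule or a coincidence of this NAA/WWN encoding is exactly question 2 below.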




Alternative idea



Select disks by controller. But is it reliable that the ascending SCSI ID also matches the next vSphere virtual disk? What happens if I add another DVD-ROM drive or USB thumb drive? That would probably introduce new SCSI devices in between, which is why I think I will discard this idea.






  1. Does someone know an easier method to map vSphere disks and Linux devices?

  2. Can someone explain the differences in the uuid strings? (I think this has something to do with SAS addressing of initiator and target... WWN-like...)

  3. May I reliably map devices by using those uuid strings?

  4. How about SCSI virtual disks? There is no uuid visible then...

  5. This task seems so obvious. Why doesn't VMware think about this and simply add a way to query the disk mapping via VMware Tools?

ESXi 4.1: how to make use of an iSCSI LUN bigger than 2 TB?



I have an ESXi server (4.1 U1) connected to an iSCSI NAS, where 4 LUNs have been defined and presented to this host; all the LUNs are 3 TB in size. ESXi correctly sees the LUNs and acknowledges their 3 TB size.



I know the maximum size for a VMFS datastore is 2 TB, so creating a single datastore for each LUN would be a waste of space; I also know that creating multiple datastores on a single LUN is not a best practice, and it doesn't seem to work either: if there already is a datastore on a LUN, the vSphere Client just doesn't let me create anything else there, it doesn't even list it in the list of available devices for creating datastores.



I know I can use extents to create a datastore bigger than 2 TB, but this only seems to work across multiple LUNs: when I try to increase the size of a datastore using the same LUN where it resides, ESXi tries to actually increase it, so it won't go above 2 TB in size.




My question is: is there any way to combine two extents in the same LUN, so effectively creating a 3 TB datastore made up by 2 1.5 TB extents?



If this is not possible, is it possible to create two datastores in the same iSCSI LUN? I know it's not best practice, but it should at least be possible... but it looks like it isn't.



If even this is not possible, then... how to make use of these 3 TB LUNs?


Answer



Sorry, you can't do this with v4; it's a limitation of SCSI-2/VMFS3 (version 5, however, changes things a lot ;) ). Just go back and re-present your LUNs as 2 TB ones and extent them if required. You're still limited to 2 TB VMDKs anyway, so extent-based datastores are a little pointless really. If it's any consolation, I limit my datastores to 500 GB to ensure the ops people don't oversubscribe any given datastore with VMs.


windows 7 - does using the SET command add to your path?



If I want to add Java's \bin directory to my PATH environment variable, can I do this from the command prompt using the SET command, or is that just temporary?



Answer



That's just temporary for the current process' environment. There's setx if you'd like to make a more permanent change.
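A sketch of the setx approach, with a hypothetical JDK path; note that setx writes the expanded result into the user environment and is documented to truncate values longer than 1024 characters, so editing PATH via the System Properties dialog is often safer for long paths:

```batch
setx PATH "%PATH%;C:\Program Files\Java\jdk1.8.0\bin"
rem Open a NEW command prompt afterwards; setx does not affect the current one.
```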


apache userdir jail permissions

On Debian 8 I'm running apache2 in a jailed environment using jailkit and the userdir mod. In the current jail setup, users can navigate into another user's directory, i.e.



/home/jail/home/anotheruser


and view files in it and navigate into folders



I tried jailing users to their home folder using




chmod 0700 /home/jail/home/*


Now when I try to navigate into another user's directory I get



 Permission denied


But now when I browse to the user's website I get




You don't have permission to access / on this server.


It worked before I did the chmod. So how do I jail users to their home directories but still allow their websites to be viewed?



I tried adding www-data to a user's group



groups test
test : test


usermod -a -G www-data test
groups test
test : test www-data


But still get permission denied.
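A sketch of one likely fix, assuming the user is "test" with home /home/jail/home/test: the usermod above adds user test to group www-data, but it is Apache's user (www-data) that needs group access to the directory, the other way around. Demonstrated here on a scratch directory:

```shell
d=$(mktemp -d)            # stand-in for /home/jail/home/test
chmod 750 "$d"            # owner rwx, group r-x, others: nothing
stat -c '%a' "$d"         # prints 750
# On the real system, as root:
#   chgrp www-data /home/jail/home/test   # give Apache's group read access
#   chmod 750 /home/jail/home/test        # keep other users locked out
```

With mode 750 and group www-data, other jailed users (the "others" class) still get Permission denied, while Apache can traverse into public_html.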

Monday, May 28, 2018

sql server - Windows LocalSystem vs. System



https://stackoverflow.com/questions/510170/the-difference-between-the-local-system-account-and-the-network-service-accou tells:




Local System: Completely trusted account, more so than the administrator account. There is nothing on a single box that this account cannot do, and it has the right to access the network as the machine (this requires Active Directory and granting the machine account permissions to something)."




http://msdn.microsoft.com/en-us/library/aa274606(SQL.80).aspx (Preparing to install SQL Server 2000(64 bit) - Creating Windows Service Accounts) tells:




"The local system account does not
require a password, does not have

network access rights, and restricts
your SQL Server installation from
interacting with other servers.
"




http://msdn.microsoft.com/en-us/library/ms684190(v=VS.85).aspx (LocalSystem Account, Build date: 8/5/2010) tells:




"The LocalSystem account is a
predefined local account used by the

service control manager. This account
is not recognized by the security
subsystem
, so you cannot specify its
name in a call to the
LookupAccountName function. It has
extensive privileges on the local
computer, and acts as the computer on
the network. Its token includes the NT
AUTHORITY\SYSTEM and
BUILTIN\Administrators SIDs
; these

accounts have access to most system
objects. The name of the account in
all locales is .\LocalSystem
. The
name, LocalSystem or
ComputerName\LocalSystem
can also be
used. This account does not have a
password. If you specify the
LocalSystem account in a call to the
CreateService function, any password
information you provide is ignored"





http://technet.microsoft.com/en-us/library/ms143504.aspx
(Setting Up Windows Service Accounts) tells:




Local System is a very high-privileged built-in account. It has extensive privileges on the local system and acts as the computer on the network. The actual name of the account is "NT AUTHORITY\SYSTEM".




Well-known security identifiers in Windows operating systems
( http://support.microsoft.com/kb/243330 )
does not have any SYSTEM at all (but only "LOCAL SYSTEM")







My Windows XP Pro SP3 (with MS SQL Server set up, a development machine in a workgroup) does have SYSTEM but not LocalSystem or "Local System".



QUESTIONS:



Can somebody clear out this mess?



It is possible to burn hours after hours, day after day reading MS docs just to collect more and more contradictions and misunderstandings...



1)
Does LocalSystem have rights to access the network or not?

What is the mechanism?



2)
Are SYSTEM and LocalSystem (and "Local System") synonyms?



Why have they been introduced?



What are the differences between SYSTEM and Local System?



----------




Update1:



Hi, sysadmin1138!



Your answers add even more confusion when compared to observed reality, for example the fact that a freshly installed workgroup Windows XP Pro SP3 has only SYSTEM (but not LocalSystem).



Sysadmin1138 wrote:





  • "Different security principles for similar problems, which allow a bit of granularity in your security design. One is local only, the other has domain visibility."



Does this phrase mean that LocalSystem is added upon joining the computer to a domain?



Should it be understood that SYSTEM is for "local"/internal and workgroup access (computer identification) and LocalSystem is for identification of the computer in a domain?



----------




Update2: same workgroup Windows XP Pro SP3 if not specified otherwise



Hi, Sysadmin1138,
In your Edit




"It's just that in that case SYSTEM
and NT Authority/SYSTEM are equivalent
in ability",





how are they (NT Authority/SYSTEM and SYSTEM) related to LocalSystem? Didn't you confuse one of them with LocalSystem?



Greg Askew,




"Note that if you configure a service
to logon as .\LocalSystem, it will
still appear as logged on as NT
AUTHORITY\SYSTEM in Process Explorer

or System in Task Manager"




This is a little bit closer. I cannot choose LocalSystem in either NTFS/share permissions or the RunAs list.
But in services.msc the service "SQL Server (MS SQL SERVER)" --> double-click or rc --> Properties --> tab "Log on as:" has the radio button "Local System account". This service then appears in Windows Task Manager as SYSTEM.



Greg Askew and sysadmin1138,



"NT AUTHORITY" or any "xxx\"
does not appear anywhere. All account names are single-labeled. Note it is Windows XP workgroup computer. Though I run an instance of ADAM (Active Directory Application Mode).




I guess "NT AUTHORITY" is from that famous "security subsystem" which is absent in workgroup(?) Would "NT Authority" appear if I join computer to a domain?



NTFS/share permission list has 2 columns:




  • "Name(RDN)" colum having single-label account names

  • "In Folder" column having either MyCompName (eg, for Administrator, Administrators, ASPNET, SQLServerReportServerUser$MyCompName$MSRS10_50.MSSQLSERVER, etc.) or blank (e.g., for ANONYMOUS LOGON, Authenticated Users, CREaTOR GROUP, CREAtOR OWNER, NETWORKING SERVICES,SYSTEM, etc.).




The former also have synonyms for coding as "MyCompName\xxxx" or ".\xxx" (i.e.




  • SQLServerReportServerUser$MyCompName$MSRS10_50.MSSQLSERVER =

  • = MyCompName\SQLServerReportServerUser$MyCompName$MSRS10_50.MSSQLSERVER

  • = .\SQLServerReportServerUser$MyCompName$MSRS10_50.MSSQLSERVER)



Can you synchronize your answers in context of http://blogs.msdn.com/aaron_margosis/archive/2009/11/05/machine-sids-and-domain-sids.aspx (Machine SIDs and Domain SIDs)?




----------



Update3: same workgroup Windows XP Pro SP3 if not specified otherwise



Hi, Sysadmin1138,



And how to see edit-history? and dereference SID?



Breakthrough! cacls shows "NT Authority\SYSTEM"...




Though for services it is all vice versa: all services show under "Log On" tab




  • the radiobutton "Local System account" which results in SYSTEM in WIndowsTaskManager and

  • the "This account" radiobutton --> btn "Browse..." that doesn't show the SYSTEM account in the list



Sorry for your time, but I couldn't find any LocalSystem in Windows XP yet! LocalSystem does not show up anywhere in XP! But the problem is that all MS docs dwell only on LocalSystem...



BTW, http://support.microsoft.com/kb/120929 ("How the System account is used in Windows") says that SYSTEM is used for service logons internal to the computer, and, surprise-surprise, "APPLIES TO" all Windows from NT Workstation 3.1 to Windows Server 2003 except Windows XP(?!).




Is Windows XP some anomaly in Windows line?



----------



Update4: same workgroup Windows XP Pro SP3 if not specified otherwise



I couldn't detect any LocalSystem in Windows XP (only "local system" mentioned in the text next to the radio button on a service's Log On tab), though all MS docs usually dwell on LocalSystem rather than SYSTEM. I marked this question as answered, having understood that Windows XP is an anomaly/exception among Windows OSes, with a GUI usability quirk, and that I should infer how a scenario would appear in other Windows versions (with the help of the answer(s) here).



If it is not correct, please be free to prove/share another point of view







Update5: same workgroup Windows XP Pro SP3 if not specified otherwise



Venceremos!



I found "Local System" in Windows XP! It is shown in "Log On As" column in services.msc!


Answer



[wiped large answer, summarizing for clarity. See edit-history for sordid tale.]




There is a single well-known SID for the local system. It is S-1-5-18, as you found from that KB article. This SID returns multiple names when asked to be dereferenced. The 'cacls' command-line command (XP) shows this as "NT Authority\SYSTEM". The 'icacls' command-line command (Vista/Win7) also shows this as "NT Authority\SYSTEM". The GUI tools in Windows Explorer show this as "SYSTEM". When you're configuring a Service to run, this is shown as "Local System".



Three names, one SID.



In Workgroups, the SID only has meaning on the local workstation. When accessing another workstation, the SID is not transferred, just the name. The 'Local System' cannot access any other systems.



In Domains, the Relative ID is what allows the Machine Account access to resources not local to that one machine. This is the ID stored in Active Directory, and is used as a security principal by all domain-connected machines. This ID is not S-1-5-18. It is of the form S-1-5-21-[domainSID]-[random].



Configuring a service as "Local Service" tells the service to log on locally to the workstation as S-1-5-18. It will not have any Domain credentials of any kind.




Configuring a service as "Network Service" or "NT Authority\NetworkService" tells the service to log on to the domain as that machine's domain account, and will have access to Domain resources. The Windows XP Service Configurator does not have the ability to select "Network Service" as a login type. The SQL Setup program might.



"Network Service" can do everything "Local System" can, as well as access Domain resources.



"Network Service" has no meaning in a Workgroup context.



In short:



NT Authority\System = Local System = SYSTEM = S-1-5-18




If you need your service to access resources not located on that machine, you need to either:




  • Configure it as a Service using a dedicated login user

  • Configure it as a Service using "Network Service" and belong to a domain


Saturday, May 26, 2018

rewrite - nginx point subdomain to subfolder

I want http://blog.domain.com/ to point to http://www.domain.com/blog



Just a note: I don't want to redirect to that location, just point to it.
Also, /blog is not a folder; it could be blog.php, for example.



So when I navigate to http://blog.domain.com, the website displays content from http://www.domain.com/blog



What I tried so far:



server{

listen 80;
server_name blog.domain.com;

rewrite ^/blog(.*) http://blog.domain.com/$1 permanent;
}


The result is nginx returning a 404 Not Found error.
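The rewrite above points back at blog.domain.com itself, so nothing ever maps to /blog on the main site. One common approach (a sketch, untested against this exact setup) is to proxy the subdomain to the main host, so the browser URL stays on blog.domain.com:

```nginx
server {
    listen 80;
    server_name blog.domain.com;

    location / {
        # Serve blog.domain.com/<path> from www.domain.com/blog/<path>,
        # without issuing a redirect to the client.
        proxy_pass http://www.domain.com/blog/;
        proxy_set_header Host www.domain.com;
    }
}
```

Because the URI part of proxy_pass replaces the matched location prefix, a request for / is fetched upstream as /blog/; adjust the trailing slashes if the backend expects /blog exactly.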

sql - Azure blobs vs Heroku type storage




I am making an app that will receive images and then needs to store them in a SQL DB using Node.js. I need to store a lot of images and query them a lot as well (I need to query the DB to see if a given image is in there).



From what I have read, it's better to save the files in the file system and put the path in the DB (cheaper).



I learned to develop using Heroku and mLab, saving URLs to my DB; this will be my first time using the file system.



I read about Azure storage options and a rep told me that I could use their blob service and it would be a good fit. But it seems like there is a lot of learning involved to use their services. I tried Azure and AWS before for another project and everything seems more complicated with them compared to just using git to deploy to Heroku: just picking which services to use and how to use them.



But it seems like the pricing is A LOT cheaper using Azure blobs than using e.g. Heroku, although even the pricing is difficult for me to understand with all these services (Heroku dynos...).




The thing is, as far as I can tell, with Heroku I can just make my app, upload the files to the server using the fs module, and then save that path to my DB. Whereas to use Azure blobs I also have to learn how to use Azure's API and store the image in the blob.
I have read their documentation:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-nodejs-v10
and can't find exactly how I would know the location of the file on the blob, so that I could save it to my DB and later retrieve the file.



So my questions are as follows:




  1. Is my best option (cheapest, most efficient) to use Azure and Azure blobs, or are there better options for what I need?

  2. Is it as I see it, that there is another layer of complexity in using Azure storage and blobs, and is this the norm when using Azure & AWS? (Are these cloud services whereas the others aren't? Heroku also mentions cloud on their web page, but these seem different.)

  3. Is it worth the hassle to learn about blobs, or is it better to just use something simpler to start off?

  4. Is saving the images to Cloudinary and saving the URLs to my SQL DB viable, or is that too expensive or inefficient?




Thanks for the help in advance


Answer



I have not tried Heroku, but I can comment on Azure Blob Storage.



The best practice I see is:




Store images in Blob Storage with anonymous read-only access on the
container, and save the URLs to the database.





As you store only URLs, your app's clients will download images directly from Azure, saving traffic on your server. Generating random string names for each image (instead of 1,2,3..., a,b,c..., or john,doe,jane...) keeps users from guessing the URLs of images they are not authorized to see.
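For example, an unguessable name can be generated before each upload. A minimal sketch (assuming `openssl` is available; the `.jpg` extension is arbitrary):

```shell
# 16 random bytes -> 32 hex characters; practically impossible to enumerate
BLOB_NAME="$(openssl rand -hex 16).jpg"
echo "$BLOB_NAME"
```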






Creating blob storage on Azure is very simple.




  1. Create a resource (+)

  2. Select Storage Account


  3. Create



You will get Storage Account Name and Key from Access Keys tab.






In Storage Account you'll see: Blobs, Files, Tables, Queues.



Go to Blobs:





  1. Create a new container

  2. Be careful about the Public Access Level for the container (I would recommend "Container")

  3. Click on container properties (More options icon ...) and get Container URL.






As mentioned in the docs you read, you will need to define





  • Storage Account Name

  • Key

  • Container URL



in your code and use them.
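Before wiring these three values into Node code, the flow can be sanity-checked from the Azure CLI. This is just a sketch: the account, container and file names below are placeholders, and `$STORAGE_KEY` is assumed to hold the key from the Access Keys tab. The point to notice is that the blob's final URL, which is what you save to your DB, is simply the container URL plus the blob name:

```shell
# Placeholder values -- substitute your own
ACCOUNT=myaccount
CONTAINER=images
BLOB=photo1.jpg

# Upload (skipped here if the Azure CLI is not installed)
command -v az >/dev/null && az storage blob upload \
  --account-name "$ACCOUNT" --account-key "$STORAGE_KEY" \
  --container-name "$CONTAINER" --name "$BLOB" --file ./photo1.jpg

# The URL to store in your DB is deterministic:
BLOB_URL="https://$ACCOUNT.blob.core.windows.net/$CONTAINER/$BLOB"
echo "$BLOB_URL"
```

Retrieving the file later is then just an HTTP GET on that URL (given the anonymous read access discussed above).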






Friday, May 25, 2018

permissions - Best way to recover mysql from a "chmod -R 777 /" with databases intact



Question:
What is the best way to recover mysql (or worst case: migrate away) from a "chmod -R 777 /" with databases intact?



System:
Ubuntu 12.04 LTS
MySQL 5.5.24
64 bit Amazon EC2 cloud server.



Background:

Attempting to recover (or at least recover data from) a system which had this done to it:



    chmod -R 777 /


No point in worrying about the why. It was a manager with too much access and too little experience who likes to swim in deep waters. It was a pure accident on his part; he didn't actually mean to hit Enter when he did.



I have recovered much of the system, but am really hung up on getting MySQL working again. I have already worked through pages such as:






Have already done this:



    sudo chmod 644 my.cnf
    sudo chown mysql:mysql my.cnf


At which point attempting to start mysql:



    sudo service mysql start



Produces this output in syslog:



    Apr 12 20:51:42 ip-10-10-25-143 kernel: [18632541.774742] type=1400 audit(1365799902.306:41): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18210 comm="apparmor_parser"
Apr 12 20:51:42 ip-10-10-25-143 kernel: [18632541.964496] init: mysql main process (18214) terminated with status 1
Apr 12 20:51:42 ip-10-10-25-143 kernel: [18632541.964542] init: mysql main process ended, respawning
Apr 12 20:51:43 ip-10-10-25-143 kernel: [18632542.959796] init: mysql post-start process (18215) terminated with status 1
Apr 12 20:51:43 ip-10-10-25-143 kernel: [18632543.002041] type=1400 audit(1365799903.534:42): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18238 comm="apparmor_parser"
Apr 12 20:51:43 ip-10-10-25-143 kernel: [18632543.098490] init: mysql main process (18242) terminated with status 1

Apr 12 20:51:43 ip-10-10-25-143 kernel: [18632543.098536] init: mysql main process ended, respawning
Apr 12 20:51:43 ip-10-10-25-143 kernel: [18632543.140706] init: mysql post-start process (18244) terminated with status 1
Apr 12 20:51:43 ip-10-10-25-143 kernel: [18632543.158681] type=1400 audit(1365799903.690:43): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18258 comm="apparmor_parser"
Apr 12 20:51:43 ip-10-10-25-143 kernel: [18632543.285087] init: mysql main process (18262) terminated with status 1
Apr 12 20:51:43 ip-10-10-25-143 kernel: [18632543.285133] init: mysql respawning too fast, stopped


What I read from that is that mysql terminates with status 1, loops a few times trying to start up, and is stopped after too many respawn attempts. I've looked into status 1 but haven't found solutions that seem applicable.


Answer





  1. Create new VM instance with the same OS version and MySQL version.
  2. Start MySQL on new VM - it should create MySQL data directory with no databases but with correct permissions - in /var/lib/mysql.
  3. Stop MySQL on new VM.
  4. Copy /var/lib/mysql on new VM to /var/lib/mysql.empty.
  5. Copy /var/lib/mysql from old VM to /var/lib/mysql on new VM.
  6. Manually set permission on all files and directories in /var/lib/mysql based on permissions on /var/lib/mysql.empty.
  7. A short prayer to a deity of choice wouldn't hurt now.
  8. Start MySQL on new VM.
  9. I'd recommend dumping all data with mysqldump -A, recreating a new, empty MySQL data directory and importing the data back, just in case.
  10. When you are confident that it works and all your data is intact, shut down the old VM and archive its disk image somewhere. It is much less work, and much safer, to set up a new server than to try to recover from this.
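Step 6 can be scripted rather than done by hand. A minimal sketch, assuming a stock Debian/Ubuntu-style layout where the `mysql` user owns the data directory, directories are mode 700 and data files 660 -- verify these modes against your `/var/lib/mysql.empty` copy before trusting them:

```shell
# Reset permissions on a MySQL data directory (modes are assumptions --
# compare with the freshly created /var/lib/mysql.empty first)
fix_mysql_perms() {
  dir="$1"
  find "$dir" -type d -exec chmod 700 {} +   # directories: owner only
  find "$dir" -type f -exec chmod 660 {} +   # data files: owner/group rw
  # chown needs root and an existing mysql user, so guard it
  id mysql >/dev/null 2>&1 && chown -R mysql:mysql "$dir" 2>/dev/null
  return 0
}

# Usage: fix_mysql_perms /var/lib/mysql
```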


website - SEO Consultant Wants System Credentials



An SEO consultant has asked (demanded) credentials to the web environment so he can do ... Whatever it is that they do.



I'm new to the company but an experienced Systems Engineer. I've just now been brought into this situation, but my reaction to giving him credentials is a pretty solid "No" unless he can provide a compelling reason, which has yet to be forthcoming. Before I was brought in, he had been provided an archive of the relevant files, but he said that this was insufficient.



The (admittedly) little that I know about SEO tells me that everything he wants should be obtainable from "view source" or a copy of the files, and that we could implement his changes in a production deploy after review.


Answer



Short answer: What Chris S said: See "Our security auditor is an idiot, how do I give him the information he wants?".







Long answer:



Some of what a "SEO Guy" needs to do might require server access -- for example, installing optimized mod_rewrite rules, adding custom 404 pages, creating friendly redirects (and/or optimizing existing 3xx redirects), etc.
None of this is something that you can't do for him, and none of it is black magic trade secrets (he's going to make these changes on your server, you could diff the config file later and see exactly what was done).



Because of that, I personally don't see any need to give them access to make changes on the server (a read-only account, sure, if you want, but no ability to effect changes without going through your company's approval process).
My advice:




  1. Say No.
    Be proud of your No, for you are on the side of good, and righteousness, and stability of your environment.


  2. Explain WHY you are saying no to your manager/supervisor/whoever is in charge.
    Pretty straightforward: "It's a giant security risk, he can just as easily give us his changes to push live so we can audit them first, yadda yadda yadda.".
    If you present solutions that still let the SEO guy get his job done while protecting your environment, and your higher-ups aren't insane, they will probably back you on this.

  3. Explain WHY you are saying no to the consultant and give him the alternate solutions.
    If it's a deal breaker for them let 'em walk. There are tons of SEO consultants out there...



If Management tells you to give him access anyway get that in writing. Issue a memo outlining the risks, and get someone above you to sign off on those risks (this is all about protecting you in the event this guy blows up your server).
You should also insist that the consultant sign something stating that they will be liable for any damages if they disrupt the stability of your environment (which is all about protecting the company).


Thursday, May 24, 2018

apache 2.2 - Can undefined variables and notices cause a cpu spike?

We're running CentOS 6.7, PHP 5, MySQL 5.5 and Apache 2.2.15. Sometimes we see high CPU usage, mainly caused by MySQL, so we collect all the logs we can get. We're addressing MySQL optimization separately; this question is mainly about PHP and Apache.



In the Apache error log, we keep seeing undefined variables, notices, and warnings. Every few thousand requests or so, Apache seems to restart.



Two questions:




  • Do Apache errors cause apache to restart at some point?

  • Do they cause high CPU usage, at some point?




The error log can be found here, will expire in a week.



I found a strange entry



[screenshot: a log entry referencing Microsoft IIS]



Not sure what Microsoft's IIS has to do with the log file.

Deleting Old (Non-Boot) Windows Vista Directory From Command-Line




Does anyone have a good script that will delete an old, non-booting, inactive Windows directory from the command-line?


Answer



Umm...



rd /s /q <directory>


From a CMD command prompt.



How's that work for you?




Edit:



Okay-- how about:



takeown /F <directory> /R /D Y
cacls <directory> /T /G Everyone:F
rd /s /q <directory>



Should be run from an elevated command-prompt.


amazon ec2 - Idle AWS EC2 but high memory usage



I'm using Amazon EC2 instance C4.large, total 3.75G memory, running Amazon-Linux-2015-09-HVM



The memory usage increases day by day, as if there were a memory leak. I killed all my programs and all the memory-hungry processes such as Nginx/PHP-FPM/Redis/MySQL/sendmail. Strangely, the memory is not released and usage stays very high.
The -/+ buffers/cache line (3070 used, 696 free) shows usage with buffers/cache counted as free:



$ free -m
total used free shared buffers cached
Mem: 3767 3412 354 4 138 203

-/+ buffers/cache: 3070 696
Swap: 0 0 0


As you can see, after the kills there are only a few user processes running; the highest uses only 0.1% memory:



$ ps aux --sort=-resident|head -30
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 32397 0.0 0.1 114232 6672 ? Ss 08:04 0:00 sshd: ec2-user [priv]
ec2-user 32399 0.0 0.1 114232 4032 ? S 08:04 0:00 sshd: ec2-user@pts/0

ntp 2329 0.0 0.1 23788 4020 ? Ss Dec06 0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 32400 0.0 0.0 113572 3368 pts/0 Ss 08:04 0:00 -bash
rpcuser 2137 0.0 0.0 39828 3148 ? Ss Dec06 0:00 rpc.statd
root 2303 0.0 0.0 76324 2944 ? Ss Dec06 0:00 /usr/sbin/sshd
root 2089 0.0 0.0 247360 2676 ? Sl Dec06 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root 1545 0.0 0.0 11364 2556 ? Ss Dec06 0:00 /sbin/udevd -d
root 1 0.0 0.0 19620 2540 ? Ss Dec06 0:00 /sbin/init
ec2-user 1228 0.0 0.0 117152 2480 pts/0 R+ 10:32 0:00 ps aux --sort=-resident
root 2030 0.0 0.0 9336 2264 ? Ss Dec06 0:00 /sbin/dhclient -q -lf /var/lib/dhclient/dhclient-eth0.leases -pf /var/run/dhclient-eth0.pid eth0
rpc 2120 0.0 0.0 35260 2264 ? Ss Dec06 0:00 rpcbind

root 2071 0.0 0.0 112040 2116 ? S
root 1667 0.0 0.0 11308 2064 ? S Dec06 0:00 /sbin/udevd -d
root 1668 0.0 0.0 11308 2040 ? S Dec06 0:00 /sbin/udevd -d
root 2373 0.0 0.0 117608 2000 ? Ss Dec06 0:00 crond
ec2-user 1229 0.0 0.0 107912 1784 pts/0 S+ 10:32 0:00 head -30
root 2100 0.0 0.0 13716 1624 ? Ss Dec06 0:09 irqbalance --pid=/var/run/irqbalance.pid
root 2432 0.0 0.0 4552 1580 ttyS0 Ss+ Dec06 0:00 /sbin/agetty ttyS0 9600 vt100-nav
root 2446 0.0 0.0 4316 1484 tty6 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty6
root 2439 0.0 0.0 4316 1464 tty3 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty3
root 2437 0.0 0.0 4316 1424 tty2 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty2

root 2444 0.0 0.0 4316 1416 tty5 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty5
root 2434 0.0 0.0 4316 1388 tty1 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty1
root 2441 0.0 0.0 4316 1388 tty4 Ss+ Dec06 0:00 /sbin/mingetty /dev/tty4
dbus 2160 0.0 0.0 21768 232 ? Ss Dec06 0:00 dbus-daemon --system
root 2383 0.0 0.0 15372 144 ? Ss Dec06 0:00 /usr/sbin/atd
root 2106 0.0 0.0 4384 88 ? Ss Dec06 0:16 rngd --no-tpm=1 --quiet
root 2 0.0 0.0 0 0 ? S Dec06 0:00 [kthreadd]


No process is using much memory, yet the system reports only 696M free out of 3.75G. Is this a bug in EC2 or Amazon Linux? I have another T2.micro instance where, after killing Nginx/MySQL/PHP-FPM, the memory is released and the free number goes up.

Any help would be appreciated.


Answer



I don't have a C4.large instance handy to check my theory, so I may be shooting in the dark, but have you checked the stats for the Xen balloon driver?



Here's a dramatic explanation of the possible mechanism: http://lowendbox.com/blog/how-to-tell-your-xen-vps-is-overselling-memory/



And here's documentation of the various sysfs paths that will give you more information: https://www.kernel.org/doc/Documentation/ABI/stable/sysfs-devices-system-xen_memory
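To check whether the balloon driver is holding memory, the sysfs files from that second link can be read directly. A sketch (these paths exist only on Xen guests, such as EC2 instances of that generation; elsewhere the check falls through):

```shell
# Report the Xen balloon driver's view of guest memory, if present
xen_balloon_info() {
  xen=/sys/devices/system/xen_memory/xen_memory0
  if [ -d "$xen" ]; then
    echo "current: $(cat "$xen/info/current_kb") kB"
    echo "target:  $(cat "$xen/target_kb") kB"
  else
    echo "no Xen balloon driver on this machine"
  fi
}

xen_balloon_info
```

If "current" is well below the instance's nominal RAM, the hypervisor has ballooned memory away from the guest, which would explain the missing free memory.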


Wednesday, May 23, 2018

linux - RHEL server Yum dependencies not working



I have a Red Hat server that isn't resolving dependencies correctly.



I want to install httpd via yum ("yum install httpd") and it installs correctly, but when I go to start httpd I get the following error:





Stopping httpd:                                            [FAILED]

Starting httpd: /usr/sbin/httpd: error while loading shared libraries: libaprutil-1.so.0: cannot open shared object file: No such file or directory
[FAILED]


It is missing the dependency for apr-util package.



Weirdly, the i386 package is installed but not the x86_64 package. Can anyone shed any light on why the dependencies might not be resolving correctly?



ldd /usr/sbin/httpd
libm.so.6 => /lib64/libm.so.6 (0x00002b02370db000)

libpcre.so.0 => /lib64/libpcre.so.0 (0x00002b023735e000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00002b023757a000)
libaprutil-1.so.0 => not found
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00002b0237793000)
libldap-2.3.so.0 => /usr/lib64/libldap-2.3.so.0 (0x00002b02379cb000)
liblber-2.3.so.0 => /usr/lib64/liblber-2.3.so.0 (0x00002b0237c06000)
libdb-4.3.so => /lib64/libdb-4.3.so (0x00002b0237e14000)
libexpat.so.0 => /lib64/libexpat.so.0 (0x00002b0238109000)
libapr-1.so.0 => not found
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b023832c000)

libdl.so.2 => /lib64/libdl.so.2 (0x00002b0238547000)
libc.so.6 => /lib64/libc.so.6 (0x00002b023874c000)
libsepol.so.1 => /lib64/libsepol.so.1 (0x00002b0238aa3000)
/lib64/ld-linux-x86-64.so.2 (0x00002b0236ebe000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00002b0238ce9000)
libsasl2.so.2 => /usr/lib64/libsasl2.so.2 (0x00002b0238eff000)
libssl.so.6 => /lib64/libssl.so.6 (0x00002b0239118000)
libcrypto.so.6 => /lib64/libcrypto.so.6 (0x00002b0239364000)
libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00002b02396b6000)
libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00002b02398e4000)

libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00002b0239b79000)
libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00002b0239d7c000)
libz.so.1 => /usr/lib64/libz.so.1 (0x00002b0239fa1000)
libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00002b023a1b5000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00002b023a3be000)


Yet this is the i386 package:



apr-util-1.2.7-11.el5.i386 : Apache Portable Runtime Utility library

Repo : installed
Matched from:
Filename : /usr/lib/libaprutil-1.so.0


UPDATE:
Just to add: I am also hosting my own repo on a Cobbler server. It was created correctly, but I'm not sure whether this could cause any problems with dependency solving.



UPDATE2:
I have changed the debug level to 10 to see what yum reports; here is the output.



I'm pretty sure there should be an entry other than "None", but I'm not sure what it should be...



Resolving Dependencies
Running "preresolve" handler for "security" plugin
--> Running transaction check
---> Package httpd.x86_64 0:2.2.3-31.el5 set to be updated
Checking deps for httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('initscripts', 'GE', ('0', '8.36', None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libc.so.6(GLIBC_2.2.5)(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u

looking for ('libpthread.so.0(GLIBC_2.2.5)(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('rtld(GNU_HASH)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('/etc/mime.types', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('/bin/bash', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('/bin/sh', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('textutils', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libm.so.6(GLIBC_2.2.5)(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('/sbin/chkconfig', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('/bin/rm', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('/bin/sh', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u

looking for ('/bin/mv', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('/usr/share/magic.mime', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('/usr/sbin/useradd', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('/usr/bin/find', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libcrypt.so.1(GLIBC_2.2.5)(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('sh-utils', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libc.so.6(GLIBC_2.3.4)(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libc.so.6(GLIBC_2.4)(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('gawk', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libc.so.6(GLIBC_2.3)(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u

looking for ('/bin/mktemp', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libc.so.6()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libpcre.so.0()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libdb-4.3.so()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libcrypto.so.6()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libexpat.so.0()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libselinux.so.1()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libm.so.6()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libssl.so.6()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('liblber-2.3.so.0()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u

looking for ('libdl.so.2()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libaprutil-1.so.0()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libz.so.1()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libcrypt.so.1()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libapr-1.so.0()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libpthread.so.0()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
looking for ('libldap-2.3.so.0()(64bit)', None, (None, None, None)) as a requirement of httpd.x86_64 0-2.2.3-31.el5 - u
--> Finished Dependency Resolution
Dependency Process ending
Depsolve time: 0.811


Answer



I tracked this issue down to another package advertising that it could resolve the dependency for libapr. So when httpd said it needed libapr, this badly written package claimed it could fulfil the library requirement and got installed instead of the proper Red Hat libapr package. I have organised for the developer to be beaten.
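To find such an impostor yourself, you can ask the RPM database which installed package claims to provide the 64-bit soname. A sketch (guarded so it degrades gracefully on non-RPM systems; `<bad-package>` below is a placeholder for whatever name the query turns up):

```shell
# Which installed package claims to provide the 64-bit apr-util library?
if command -v rpm >/dev/null 2>&1; then
  PROVIDER=$(rpm -q --whatprovides 'libaprutil-1.so.0()(64bit)')
else
  PROVIDER="rpm not available on this machine"
fi
echo "$PROVIDER"

# Then, on the affected box:
#   yum remove <bad-package>
#   yum install apr-util.x86_64 apr.x86_64
```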


Tuesday, May 22, 2018

How to limit PHP-FPM memory usage?




I'm running an Ubuntu 10.04 nginx webserver with PHP-FPM. It has 512MB of total memory (256MB swap). After starting the PHP-FPM process (/etc/init.d/php5-fpm start), it uses an acceptable ~100MB for about 5 children. But then the processes suddenly balloon to using 400MB.



Here's a graph of my server's memory usage with PHP-FPM.



Here's my PHP process memory usage (ps aux | grep php)



I have set my PHP-FPM config conservatively: pm = static and pm.max_children = 5.
I'm only running a few Wordpress blogs, and I don't get that many visitors.



How can I control the memory usage of PHP-FPM's processes so it doesn't eat up my server?



Answer




  • Disable any PHP extensions that you don't need.

  • Set a low max requests per child so each process is restarted more often.

  • Reduce the number of processes. You don't need many for a small blog. 2 should be fine.
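Put together, a conservative pool configuration for a 512MB box might look like this (the values are illustrative assumptions, not tuned figures; adjust for your workload):

```ini
; /etc/php5/fpm/pool.d/www.conf (excerpt)
pm = static
pm.max_children = 2
; recycle each worker after N requests to cap slow leaks
pm.max_requests = 200
```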


email - DNS: Google Apps Mail MX record issue caused by CNAME from EC2



I want to host my website on EC2 and my mail on Google Apps. This would seem simple, but I can't get receiving of mail to work due to a DNS issue. I have changed the MX records as required, but they aren't being picked up, because the CNAME that EC2 requires makes DNS resolution look for the MX at Amazon, which is not what I want.



http://www.dnsstuff.com/tools/legacy/?formaction=DNSLOOKUP&ToolFormName=customlookup&name=kodental.co.uk&detail=0&type=MX



There are a couple of folks having similar mail issues which they solve by using A records not CNAMEs e.g. https://stackoverflow.com/questions/6493076/setting-up-cname-at-directnic-com-caused-gmail-in-google-apps-for-businesses-to-s "For compatibility reasons, you can't put a CNAME in the root domain; doing so will break email. Use an A record instead. "



But you can't use an A record with EC2 - you have to use a CNAME as the IP changes.




Are these services just incompatible and I have to move the sites web hosting to somewhere I can add an A record to an IP?



This is a bit of a pain so I thought I'd ask here if anyone has an alternative before I wade in.



Thanks


Answer



You cannot use a CNAME on the bare domain name (what you are calling the "root domain"). This is a known limitation of ELB (elastic load balancer) on EC2.



The solution, released recently by Amazon, is to use Amazon Route53 to host your DNS. This integrates with ELB to handle bare domains without CNAME. Your MX records can still be pointed at Google with Route53.



Monday, May 21, 2018

router - Are network and broadcast IPs supposed to respond?



I have a /24 network that is subnetted into a bunch of small chunks.
I have recently gone into each router on the network (mostly Cisco) in order to document how this network had been divided. Now looking at a ping sweep output from:



nmap -sP 192.168.1.*



I see that some but not all reserved "network" and "broadcast" IPs respond to pings. For example, the network 192.168.1.80/29 has the network of 192.168.1.80 and a broadcast of 192.168.1.87. On this particular subnet, both of these IPs give me a ping response from the external interface of the router (192.168.5.20).



Many of the other subnets behave in a similar manner; however, others do not. Looking at the router configs, nothing really jumps out at me that would cause this behavior.



Does anyone know the reason for this behavior? Do I want those addresses to respond or not? Slightly unrelated: should I have reverse DNS entries for the network and broadcast IPs?


Answer



You do not want anything to respond to a ping of the network or broadcast addresses over the Internet. If that was allowed to happen your network could be used as part of a smurf attack.




Most host-based firewall software these days blocks responses to ICMP for the network/broadcast addresses, since there is very little value in having ICMP replies to broadcasts enabled.



The Linux kernel by default ignores these types of pings but that can be configured by changing the value of /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts.



As for your question about DNS: I don't know that there is much advantage either way. It wouldn't hurt to add entries, but there isn't a strong reason for them. A reverse lookup might help someone outside your network who wanted to find out who owns those addresses but didn't know how to do a proper lookup.
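On a Linux host you can check (and enforce) the broadcast-ping behavior with the corresponding sysctl. A sketch; the write requires root, so it is left commented:

```shell
# 1 = ignore broadcast/multicast echo requests (the kernel default)
VAL=$(cat /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts 2>/dev/null || echo 1)
echo "icmp_echo_ignore_broadcasts=$VAL"

# To enforce it (as root):
#   sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=1
```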


Sunday, May 20, 2018

domain name system - How to get subdomain in Route 53 to resolve to Internet-facing Elastic Load Balancer?



I own a domain, call it doggos.lol that uses Route 53 for DNS. I want to create a subdomain elb.doggos.lol that resolves to the public DNS of an ELB. I created a CNAME to route elb.doggos.lol to an Alias target (the ELB public DNS).



I saved the record but the route is not working. If I execute an HTTP request against the public DNS of the ELB, I get the correct REST response from the server it sends to. However, if I go to the subdomain in the CNAME record, I get DNS_PROBE_FINISHED_NXDOMAIN. Testing the CNAME record on Route 53 returns a REFUSED DNS response code.



Am I missing something?


Answer



Turns out for Alias targets, you must use an A record (or AAAA for IPv6). I switched the record from CNAME to A and this resolved the problem.




https://aws.amazon.com/premiumsupport/knowledge-center/route-53-create-alias-records/


Saturday, May 19, 2018

mysql - Looking for server monitoring app... nothing fancy.. for Windows







We are looking for a tool to keep an eye on our web servers (HTTP, file exists, connects) and our SMTP and POP servers. We'd also like to run simple check queries against our databases (MySQL, Microsoft). Anything else is not as important. It should be really easy to use, and work on Windows XP as well as Windows Server 2008.
Thanks!

Thursday, May 17, 2018

How to enable JMX on elastic beanstalk running amazon linux tomcat8



I have to enable the following configuration for Tomcat running in an Elastic Beanstalk environment:



-Dcom.sun.management.jmxremote.port=9000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false


I have absolutely no idea where I have to make these changes.



Elastic Beanstalk Configuration


64bit Amazon Linux 2016.09 v2.3.1 running Tomcat 8 Java 8


I believe Elastic Beanstalk creates a different folder structure for Tomcat 8:



# whereis tomcat8
tomcat8: /usr/sbin/tomcat8 /etc/tomcat8 /usr/libexec/tomcat8 /usr/share/tomcat8



My issue was resolved by following this link:
https://bobmarksblog.wordpress.com/2016/08/08/monitoring-elasticbeanstalk-tomcat-instances-using-visualvm-via-ssh/


Answer



The solution is a lot simpler than I thought.



To enable JMX in AWS Elastic Beanstalk you must add the flags to the JVM command line configuration:



Select your Elastic Beanstalk environment ->
Configuration -> Software Configuration ->
add the following to the "JVM command line options:" text box:



 -Dcom.sun.management.jmxremote.port=9000 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false


This will automatically trigger the change and update the servers. You must then allow port 9000 in a security group assigned to your instances.
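The same setting can also be kept in version control via an `.ebextensions` file instead of the console. A sketch, assuming the Tomcat platform's `jvmoptions` option namespace (the filename is arbitrary):

```yaml
# .ebextensions/jmx.config
option_settings:
  aws:elasticbeanstalk:container:tomcat:jvmoptions:
    "JVM Options": >-
      -Dcom.sun.management.jmxremote.port=9000
      -Dcom.sun.management.jmxremote.ssl=false
      -Dcom.sun.management.jmxremote.authenticate=false
```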



You can then connect to one instance at a time, using its public IP address from the AWS console (web).




Please have a look at this link for more details.
https://bobmarksblog.wordpress.com/2016/08/08/monitoring-elasticbeanstalk-tomcat-instances-using-visualvm-via-ssh/


smtp - How can I find out which script/program/user invokes exim (and is sending spam)?

The problem



A client of mine asked me to take a look at his shared-hosting webserver for the following problem, but I'm stuck finding out what's wrong. His server is being blacklisted by a lot of major blocklists such as CBL, Spamhaus and Outlook.com's blocklist.



What I've tried already



I started by looking at the users in his DirectAdmin environment, but I didn't find any user who sends more than a couple of e-mails per day. I downloaded his exim log and took a look at the mail queue, but couldn't find anything out of the ordinary. Next I ran findbot.pl from CBL, but it came up with only false positives.




Another thing I tried was changing the sendmail_path in php.ini to log every e-mail being sent out via sendmail. However, every time I changed the sendmail_path, all PHP processes started to hang. I tried different approaches (MailCatcher, my own scripts), but every change made the processes hang. Really strange; after a few tries, I moved on to the next step.



Next step: installing lsof and creating a bash script that prints the output of lsof -i | grep smtp to a log file every second, while printing the output of ps auxw to another log file every second. This gave me some valuable information, but I can't track down the issue yet.
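The polling loop just described can be sketched as a small function (the log paths are arbitrary; run it in the background and kill it when done):

```shell
# Poll SMTP sockets and exim processes once per second into two log files
watch_smtp() {
  while sleep 1; do
    lsof -i 2>/dev/null | grep -i smtp >> /tmp/smtp-watch.log
    ps auxw | grep '[e]xim'           >> /tmp/ps-watch.log
  done
}

# Usage: watch_smtp &    # ...later: kill %1
```

The bracketed `[e]xim` pattern keeps the grep process itself out of the ps output.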



Where I'm stuck



So after letting it run for a couple of hours, I opened both log files and saw a bulk of lines like these:



lsof - logfile




COMMAND     PID    USER   FD   TYPE           DEVICE  SIZE/OFF    NODE NAME
exim 10921 mail 9u IPv4 2260427 0t0 TCP hostname-from-server.com:smtp->208.93.4.208:49711 (ESTABLISHED)
exim 10921 mail 10u IPv4 2260427 0t0 TCP hostname-from-server.com:smtp->208.93.4.208:49711 (ESTABLISHED)


When I look at the logfile and search for the PID that is mentioned in the lsof logfile, I see the following lines:



ps auxw - logfile




USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
mail 1750 0.0 0.0 59032 1320 ? Ss Nov28 0:01 /usr/sbin/exim -bd -q15m -oP /var/run/exim.pid
root 10909 0.0 0.0 103388 896 pts/2 S+ 17:44 0:00 grep mail

mail 1750 0.0 0.0 59032 1320 ? Ss Nov28 0:01 /usr/sbin/exim -bd -q15m -oP /var/run/exim.pid
root 10917 0.0 0.0 103388 896 pts/2 S+ 17:44 0:00 grep mail

mail 1750 0.0 0.0 59032 1320 ? Ss Nov28 0:01 /usr/sbin/exim -bd -q15m -oP /var/run/exim.pid
mail 10921 0.0 0.0 61112 1792 ? S 17:44 0:00 /usr/sbin/exim -bd -q15m -oP /var/run/exim.pid
root 10923 0.0 0.0 103388 896 pts/2 S+ 17:44 0:00 grep mail


mail 1750 0.0 0.0 59032 1320 ? Ss Nov28 0:01 /usr/sbin/exim -bd -q15m -oP /var/run/exim.pid
root 10931 0.0 0.0 103388 896 pts/2 S+ 17:44 0:00 grep mail

mail 1750 0.0 0.0 59032 1320 ? Ss Nov28 0:01 /usr/sbin/exim -bd -q15m -oP /var/run/exim.pid
root 10939 0.0 0.0 103388 896 pts/2 S+ 17:44 0:00 grep mail


The problem: there is nothing out of the ordinary in these lines, and I can't see which script, program or user invoked exim. When I look at the exim mainlog and rejectlog, I can't find the IP 208.93.4.208, nor can I find any line at all around 17:44 (the time according to the ps auxw log).




When I trace e-mails that I send myself through the logfiles, I can find them in exim's mainlog at exactly the time mentioned in the ps auxw log. It appears that, somehow, the spam mails either aren't logged by exim or are removed immediately after sending.



My questions




  • I think I could solve this if I knew which script, program or user owned the PID that invoked exim/mail. Does anyone have an idea?

  • Is it possible that some other server, not ours, is sending the spam and is, for example, spoofing our IP address? Maybe a very dumb question, but I'm curious, since it is so easy to spoof headers.



Additional information




Via the provider-portal of Outlook.com, we managed to get one of the e-mail headers:



X-HmXmrOriginalRecipient: someone-who-received-our-spam@hotmail.com
X-Reporter-IP: [IP-from-some-who-flagged-as-spam]
X-Message-Guid: a2236172-9474-11e5-9c3a-00215ad6eec8
x-store-info:4r51+eLowCe79NzwdU2kR3P+ctWZsO+J
Authentication-Results: hotmail.com; spf=none (sender IP is [OUR-IP-ADDRESS]) smtp.mailfrom=minvituccia@blackberrysa.com; dkim=none header.d=blackberrysa.com; x-hmca=none header.id=minvituccia@blackberrysa.com
X-SID-PRA: minvituccia@blackberrysa.com
X-AUTH-Result: NONE

X-SID-Result: NONE
X-Message-Status: n:n
X-Message-Delivery: Vj0xLjE7dXM9MDtsPTA7YT0wO0Q9MjtHRD0yO1NDTD02
X-Message-Info: 11chDOWqoTmjqhOzvWWho/vK8oL2x1FIoEm0Tn+r3D4Vy8IHo2wUnqS07yp2Fxclyw07ONZgeH1xFUrogbJOZz8Pfl5FrUXTGgolDal8+UhiPOrwCAKsLtRr0R42oH/Du2inmiSwuWc/pY9oiWRqLA5If7jw818pUulf3QP7m+wKn2HEVHAg2VBr+OqDk1w/hWWO68tIy1BSoE8QFSPMNXh31MYdKh4mif3jAqDU+0qWqWSAxPdE/A==
Received: from [our-hostname] ([our-ip-address]) by COL004-MC2F4.hotmail.com with Microsoft SMTPSVC(7.5.7601.23143);
Thu, 26 Nov 2015 11:34:05 -0800
Return-path:
Received: (qmail 18660 invoked by uid 61081); 26 Nov 2015 20:52:03 -0000
Date: 26 Nov 2015 20:52:03 -0000
Message-ID: <20151126205203.18660.qmail@our-hostname.com>

From: "Meghann Gasparo"
To: "someone-who-received-spam-from-our-server"
Subject: You could strike all your limpid seed right into my love tunnel text me 1.970.572.00.14
Mime-Version: 1.0
Content-Type: text/html
Content-Transfer-Encoding: 8bit
Mime-Version: 1.0
Content-Type: text/html
Content-Transfer-Encoding: 8bit
X-OriginalArrivalTime: 26 Nov 2015 19:34:06.0061 (UTC) FILETIME=[69C119D0:01D12881]


Throw some of your hot cum on my face, deep into my door
or run my humps rubbed once again.
Watch my profile to receive much more spicy fun or just sms right now 1-970-572-00-73

--70969AA2-2F73-4465-8DF3-26DC57EA3967--


We don't use qmail as our MTA. Needless to say, the domain blackberrysa.com is not one of ours.
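Two quick checks that header suggests (note the "qmail" Received line and the uid can both be forged by a spam script, so treat the results with suspicion): look for an actual qmail install, and map uid 61081 from the "invoked by uid" line to a local account. The paths below are the usual qmail locations, not confirmed ones:

```shell
#!/bin/sh
# Check for a real qmail install (none expected, per the question).
for p in /var/qmail /usr/sbin/qmail-send /usr/bin/qmail-inject; do
    if [ -e "$p" ]; then echo "found: $p"; fi
done

# Map the uid from the (possibly forged) Received header to a local account.
uid=61081
if ! getent passwd "$uid"; then echo "uid $uid: no local account"; fi
```

If the uid does resolve to a web or virtual-hosting user, that account's document root is the first place to look for a PHP mailer script.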

Wednesday, May 16, 2018

windows 7 - .\postgres loses "Log on as a service" after reboot; PostgreSQL service does not start



I've installed PostgreSQL 9.1 x64 on a Windows 7 Enterprise x64 system using the usual install method. The computer has a Novell Client for Windows, and a ZENworks Adaptive Agent, which I suppose externally manages some of the users/policies for the system. I've installed postgres on several Windows computers, so I'm a bit surprised that this system is behaving differently.



When the computer reboots, the PostgreSQL Service does not startup. The full message from attempting to start the service is:





Windows could not start the postgresql-x64-9.1 - PostgreSQL Server 9.1 service on Local Computer.
Error 1069: The service did not start due to a logon failure.
Services1




I can then go to the properties for that service and, in the "Log On" tab, retype the password that was originally used with the installer.



postgres service



When I click OK, a dialog appears:





The account .\postgres has been granted the Log On As A Service right.
Services2




which sounds great. I can then correctly start the PostgreSQL Service and continue on. The problem is when I reboot, I need to go to manage the service, retype the password and manually start the service again.



Viewing the "User Rights Assignment" in "Local Security Policy", I see that the "Log on as a service" is wiped after each reboot, leaving only the default "NT SERVICE\ALL SERVICES". This is what I see on a fresh reboot:




Log on as a service dialog



I can then manually add the COMPNAME\postgres user to this dialog to start the service, but it disappears on the next reboot.



Is the problem that the "Log On As A Service" privilege is wiped by the Local Security Policy, or is there something up with the Novell Client/ZENworks Adaptive Agent? Are there any other strategies to make the "Log on as a service" privilege stick for the .\postgres user?


Answer



The fix was simple. Go to the "Log On" tab for the postgres service and change the selection from "This account" to "Local System account" (second figure in my question). Works perfectly now.


Tuesday, May 15, 2018

ubuntu - How do I register Linux server with Windows DNS server



I have several Ubuntu machines (mostly 8.04) that I would like to register their hostnames (or desired hostnames) with my main DNS server running on Windows 2000 so that I can access these machines from any other machine using that DNS server by hostname. Windows clients can do this automatically with the MS client or manually with ipconfig /registerdns. How do I do the equivalent in Linux? I don't necessarily want to register them with the domain using Likewise Open, unless that is the only way to send DNS entries to the Windows server.




These are static IP's. I realize I could add the DNS entries on the Windows side manually as well, but I'm not actually in charge of that Windows DNS server.


Answer




Sorry, I forgot to put in the question
that these are static IP's. I realize
I could add the DNS entries on the
Windows side manually as well, but I'm
not actually in charge of that Windows
DNS server.





If you don’t have control of the DNS server, and if the DNS isn’t set up to allow non-secure updates, and it isn't set up to update based on DHCP assignments, and you have a static address, then you are probably out of luck.



Since this system has a static address, is there some reason you can’t just contact the person who runs the DNS server and ask them to add a record for your system?
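For completeness: if the zone ever did allow non-secure dynamic updates, the Linux counterpart of ipconfig /registerdns is nsupdate from the BIND utilities. A sketch with placeholder server, zone, and addresses:

```shell
#!/bin/sh
# Build an nsupdate script; 192.0.2.x and example.local are placeholders
# for the real DNS server, zone, and host address.
cat > /tmp/ddns.txt <<'EOF'
server 192.0.2.10
zone example.local
update delete myhost.example.local. A
update add myhost.example.local. 3600 A 192.0.2.50
send
EOF
# Then run it (only works if the zone accepts non-secure updates):
#   nsupdate /tmp/ddns.txt
```

Against an AD-integrated zone that requires secure updates, plain nsupdate will be refused; that case needs GSS-TSIG (`nsupdate -g`) with valid Kerberos credentials, which brings you back to joining the domain.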


remote - How to debug problems over IM and the phone?





If, like me, you get friends, family, and coworkers with computer problems (whether it's server, desktop, or laptop related), who ask you to help over the phone or IM, how do you debug problems?



I'm quite good at debugging hardware and software problems...when I'm at the computer. If I'm at the machine in question, I can hammer through the dialogs I need, mess with the BIOS, listen to sounds, etc, and it's pretty easy. When it's remote, it's a different story.



What effective techniques can one use to debug problems remotely, and what can you do to get better at debugging remotely?


Answer




I think one of the first important steps is to correctly assess their readiness level so you can respond appropriately. I have a cheat-sheet for myself based on a model from Management of Organizational Behavior. Responding appropriately helps you avoid a lot of the frustration and unpleasantness that can happen. On the phone this is particularly important, since it is far more difficult to assess the situation and respond appropriately than it is when you are working with someone face-to-face.



Encourage them to talk about the problem as much as possible, and ask lots of questions. You can't see what they are seeing, you can't hear what they are hearing, and you don't know exactly what they did to get to the point where they decided to contact you. You need to encourage them to tell you what they are seeing, hearing, and doing by asking lots of appropriate questions. You need them to be your eyes, ears, and hands.



If you are able, use tools to remotely view the computer. The suggestions others have made are good: VNC is free, but the other options are good too. If you are doing this professionally, you definitely want to work out a good system for remotely supporting the systems you are responsible for.



If you are able, have a computer or VM where you are that is similar to what they have. If you can't use a remote access tool, being able to follow along with what they are doing on a different system is helpful. Even if you do have remote access, having a VM you can test something on is useful; that way you don't have to break the real system to try something you are not sure about.



Work on getting as in-depth an understanding as you can of the computer systems you may support. Some things cannot be remotely viewed or replicated on another system; sometimes you are just going to have to visualize what they are seeing and doing in your head. The better you understand the systems you support, the easier this is.




As l0c0b0x mentioned, see the "Your troubleshooting rules?" question for advice on doing the actual troubleshooting.


php fpm - How to configure php-fpm for php 5.6 and apache 2.2



I installed php 5.6, apache 2.2 and php-fpm on my centos 6.6 by using repo from



https://webtatic.com/packages/php56/




I followed these instructions to try to make php-fpm work:
http://www.garron.me/en/linux/apache-mpm-worker-php-fpm-mysql-centos.html



but a few things differ from the tutorial:




  • the fcgi module conf file location is in /etc/httpd/conf.d/fcgid.conf

  • there is no /var/lib/cgi-bin/php5-fcgi at that location

  • there is no /var/run/php5-fpm.sock at that location
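Those two missing paths are expected: before copying socket paths from an Ubuntu tutorial, it may help to check where this php-fpm package actually listens; the CentOS/webtatic build typically defaults to TCP 127.0.0.1:9000 rather than a /var/run socket. A quick check, with paths assumed from the CentOS packaging:

```shell
#!/bin/sh
# Where is php-fpm configured to listen? (CentOS/webtatic layout assumed.)
listen=$(grep -h '^listen' /etc/php-fpm.d/*.conf 2>/dev/null || true)
echo "configured: ${listen:-nothing found}"

# And is anything actually bound on the default FPM port right now?
ports=$(netstat -ltn 2>/dev/null | grep ':9000' || true)
echo "port 9000:  ${ports:-nothing listening}"
```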




fcgid.conf



# This is the Apache server configuration file for providing FastCGI support
# through mod_fcgid
#
# Documentation is available at
# http://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html

LoadModule fcgid_module modules/mod_fcgid.so


# Use FastCGI to process .fcg .fcgi & .fpl scripts
AddHandler fcgid-script fcg fcgi fpl

# Sane place to put sockets and shared memory file
FcgidIPCDir /var/run/mod_fcgid
FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm






ScriptAlias /fcgi-bin/ /usr/bin/
AddType application/x-httpd-php .php
AddHandler php-fastcgi .php
Action php-fastcgi /fcgi-bin/php-cgi
FastCgiExternalServer /usr/bin/php-cgi -host 127.0.0.1:9000




Does anyone know how to configure this? No php-fpm info shows up in phpinfo(), and when I use echo php_sapi_name();, it returns 'cgi-fcgi'.



And when I tried removing that block, there is an error:
Invalid command 'FastCgiExternalServer', perhaps misspelled or defined by a module not included in the server configuration


Answer



You are using the fcgid module, not mod_fastcgi, so there is no FastCgiExternalServer directive. However, I don't know how to make fcgid and php-fpm work together.
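One way out, if staying on mod_fcgid (Apache 2.2 has no mod_proxy_fcgi), is to drop php-fpm and let mod_fcgid spawn php-cgi itself through a wrapper script. A sketch, with a temporary directory standing in for a real cgi-bin path:

```shell
#!/bin/sh
# Create the wrapper mod_fcgid will spawn. The directory is a placeholder;
# in practice, use a cgi-bin directory your vhost can reference.
dir=$(mktemp -d)
cat > "$dir/php.fcgi" <<'EOF'
#!/bin/sh
# Recycle each php-cgi worker after 1000 requests to limit memory creep.
PHP_FCGI_MAX_REQUESTS=1000
export PHP_FCGI_MAX_REQUESTS
exec /usr/bin/php-cgi
EOF
chmod +x "$dir/php.fcgi"
echo "wrapper written to $dir/php.fcgi"

# Matching vhost directives (mod_fcgid has no FastCgiExternalServer):
#   AddHandler fcgid-script .php
#   FcgidWrapper /path/to/php.fcgi .php
```

With this setup php-fpm isn't used at all; mod_fcgid manages its own pool of php-cgi processes, so `php_sapi_name()` will still report 'cgi-fcgi'.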


apache 2.2 - Redirect problem using .htaccess mod_rewrite only for root



I'm trying to permanently redirect all requests to my root directory to another site. I don't want anything other than the root requests to be redirected.



Requests to "http://www.example.com" should to to "http://www.example2.com/blah"




I can get this working with the following:



RewriteEngine On
RewriteRule ^/$ http://www.example2.com/blah [L,R=301]



Everything seems to work fine (all requests other than root remain not redirected). Except that one particular type of request doesn't work.



I have a PHP script that runs at "http://www.example.com/phpscript"




Requests to that script have an extra component in the url like "http://www.example.com/phpscript/blah"



I strip out the /blah part within the PHP script and return a GIF image based on the request. This may be the source of my problem: requests to this URL don't work with my rewrite rule above in place.



Any ideas? Thanks in advance.


Answer



You possibly want



RewriteRule ^$ http://www.example2.com/blah [L,R=301]



(No slash)



If this doesn't work, you should try adding this to your http config (Somewhere in the config, it doesn't work at htaccess or virtualhost level)



RewriteLog /tmp/rewrite.log
RewriteLogLevel 9


This will give you a line-by-line explanation of what it's trying to match against which regex, and what the final decision is.




Remove these lines afterwards, otherwise one day you'll discover you don't have nearly as much disk space as you thought you should...


windows server 2008 - Is it possible to enter System Administration field without experience



Hi guys,




I currently work as a web developer (3 years) and want to move into the system administration field.
I'm working full-time, so I can't attend any formal training.



My company has one computer, called "server", with a 2 TB HD, 8 GB RAM, and a Xeon CPU, which is only used to store files, nothing else.



I just keep experimenting with things on that server. Initially it had only Windows Server 2003 installed, but as I was trying to learn more about Windows, I installed Windows Server 2008, then VMware Workstation, and then two VMs: 1) another Windows Server 2008 and 2) Ubuntu.



Just for experimenting.




My main question: by experimenting this way and doing some certifications like MCITP and VCP, can I enter the system administrator field? I think I can finish some certifications within a few months. Can I try all the practical stuff on that server?



If I write in my resume that I did all that stuff while working for that web design company, will that be considered experience or not?



thanks


Answer



Of course you can get into system administration with no experience. Everyone has no experience at some point :)



If you really want to get into it, there's a couple of things you should do:





  • Experiment & learn on your own time as well (at home, etc.)

  • Find yourself a job as a junior sysadmin at a smallish company where a more senior sysadmin can mentor you.


Monday, May 14, 2018

php fpm - PHP is stopping a web-based script after 60 seconds



I'm working in a Bitnami installation of Apache2 and PHP (5.6) and we have trouble with a script that's taking longer than 60 seconds to complete.



This script in question is failing after 60 seconds, sending a 504 error.



I already checked all other possibilities, but it keeps going back to the execution time.



if (round($mem_usage/1048576, 2) > 36)
{
    echo "exceeded 36mb, aborting\n";
    echo "element: ".$fila." of ".mysql_num_rows($result)."\n";
    echo "memory usage: ".round($mem_usage/1048576, 2)."M\n";
    echo "memory limit: ".ini_get('memory_limit')."\n";
    echo "max_execution_time: ".ini_get('max_execution_time')."\n";
    $time_end = microtime(true);
    $time = $time_end - $time_start;
    echo "time elapsed: ".ceil($time)." seconds";

    exit();
}


Memory values are always within the normal range (we increased the memory limit just in case), but the script never prints this message when it dies at the 60-second mark, so we ruled out memory usage.



I've been searching a lot through StackExchange questions, and I compiled a list of common answers to this problem:




  • max_execution_time




We tried increasing it to 300; however, this had no effect. PHP seems to ignore this value. We already checked phpinfo() just in case it wasn't being set, but it is.




  • Memory Limit



See above; RAM usage stays within the normal range. Not the cause of the crashes.





  • Doing a try{}catch(){}



The script halts and doesn't execute the code inside the catch block.




  • Checking error logs




PHP is NOT writing anything to the error logs; it just terminates the script abruptly and without any output. We already checked the log options in php.ini.




  • set_time_limit()



This method returns FALSE; according to the documentation, that means it is failing to set a new time limit. Safe mode is a common cause of that, which leads us to the next item.






  • safe_mode

This is PHP 5.6, and according to the documentation:



This feature has been DEPRECATED as of PHP 5.3.0 and REMOVED as of PHP 5.4.0.



  • max_input_time



We set it to 300, no effect.





  • default_socket_timeout



Ditto




  • Checked .htaccess for related configurations




Done, and we didn't find anything.




  • Setting timeout in Apache configuration



We didn't have a Timeout option in our Apache configuration, and when added, it had no effect.





  • I'm getting more suggestions in answers, so I'm adding them here.




This isn't possible for some PHP scripts, but have you tried running the script from the command line rather than via HTTP request? This would rule out an Apache or PHP-FPM config screwing things up.




This script can't be run from the CLI, but a dummy long-running script executes indefinitely there, so the CLI doesn't halt the execution.





Do you have anything set for max_execution_time or request_terminate_timeout in your PHP-FPM config?




request_terminate_timeout is set to 300, no effect.
php_admin_value[max_execution_time] is set to 300, no effect. (this one has to be set this way, according to documentation)




Do you have anything set for LimitRequestBody (or any other limits for that matter) in your Apache config?





Not set anywhere, and no other related limits in Apache configuration.




Are you performing a file upload when this happens? If so, have you checked upload_max_filesize and post_max_size in your PHP config?




No file upload is taking place in this request, so these limits are irrelevant in this specific situation.




It doesn't make sense that you would be getting different responses based on the browser, unless you are doing some kind of browser detection in your PHP and having your application behave differently based on this. If you aren't doing this, and you can consistently confirm this correlation, one thing you could do is see if the two different browsers are for some reason sending different requests.





Apparently it's just how Firefox displays the error; using the Network tab of the Developer Console shows a 504, the same as in Chrome. (My mistake when I asked the question: I put 503 by accident.)




have you tried ignore_user_abort(true)




Just enabled ignore_user_abort (via PHP-FPM settings). And it had no effect.





and/or attempting to capture some diagnostics via register_shutdown_function()?




Attempting to register the shutdown function has no effect, since the code block is never executed.






As a final note, this server runs PHP-FPM, and that's where we are setting the PHP configuration; however, nothing we do has any effect, and the script in question still terminates after 60 seconds.




Any help with this issue will be appreciated.


Answer




Are you hitting the apache server directly or through a load balancer or proxy?




This comment made me realize we were using an AWS load balancer. Upon checking the documentation, we saw it has a default idle timeout of 60 seconds.



Increasing this limit allowed the script to finish gracefully.
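For reference, the same attribute can be raised from the command line. This assumes a Classic ELB with the awscli installed; the load balancer name and timeout value are placeholders:

```shell
#!/bin/sh
# Classic ELB idle timeout defaults to 60 seconds; raise it past the
# script's worst-case runtime.
attrs='{"ConnectionSettings":{"IdleTimeout":300}}'
echo "$attrs"
# aws elb modify-load-balancer-attributes \
#     --load-balancer-name my-load-balancer \
#     --load-balancer-attributes "$attrs"
```

Note that the ELB cuts the connection silently at the idle timeout, which matches the symptom here: the 504 comes from the balancer, not from Apache or PHP, which is why no PHP timeout setting had any effect.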

