Sunday, June 30, 2019

linux - Finding current dynamic IP of a remote computer



I need to access a remote computer using VNC.
The computer has a static IP on its local network but connects to the Internet through a VPN connection.
How can I find the current IP of this computer from my own system (remotely)?
Can something like dynamic DNS help (e.g. ddclient && dnsomatic.com)? If yes, does a free service exist?



Both systems run Linux



Thanks



Answer



Yes, some sort of dynamic DNS setup would work best. There are plenty of free dynamic DNS services, or you can run your own with a bit of BIND and nsclient.
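
If you go the hosted route mentioned in the question (ddclient plus dnsomatic.com), the client side is only a few lines of configuration. A rough sketch, with placeholder credentials and hostname; check the provider's documentation for the exact protocol/server values:

# /etc/ddclient.conf
protocol=dyndns2
use=web, web=checkip.dyndns.com
server=updates.dnsomatic.com
login=your-dnsomatic-username
password=your-dnsomatic-password
remote-host.example.com

Run ddclient as a daemon on the remote machine and it will keep that hostname pointed at whatever public IP the connection currently has; you then VNC to the hostname instead of an address.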


Saturday, June 29, 2019

SAS Raid Controller



I am working with a sysadmin on a Dell PowerEdge 710. Before I was brought in, the sysadmin purchased the server with no cache on the RAID controller; we are running SQL Server and needed cache on the RAID controller. The server only has 6 drive slots, and originally all 6 were on the cacheless RAID controller. We reluctantly moved two of them to the new cached controller. The server works fine, but there is an error on start-up because the original card is looking for the two drives.



E1A15 SAS cable B not found (the original controller is throwing this error)



How do I reset this error?



Answer



Why don't you ask Dell?
Or was the server bought without hardware support?



I would go to the original RAID controller, remove the RAID configuration there, then go into the BIOS and disable the old RAID controller.


Thursday, June 27, 2019

memory - Long page allocation stalls on Linux – why does this happen?

I have a problem (which I can reliably reproduce) on a bunch of Linux hosts, where the system becomes completely unresponsive after a process aggressively consumes memory. I see things like this in the kernel log:



2017-09-14 19:53:51.252365 kernel: hyperkube: page allocation stalls for 62933ms, order:0, mode:0x14201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), nodemask=(null)
2017-09-14 19:53:51.252402 kernel: hyperkube cpuset=kube-proxy mems_allowed=0
2017-09-14 19:53:51.252440 kernel: CPU: 1 PID: 1438 Comm: hyperkube Not tainted 4.11.9-coreos #1
2017-09-14 19:53:51.252478 kernel: Hardware name: Xen HVM domU, BIOS 4.2.amazon 11/11/2016
2017-09-14 19:53:51.252512 kernel: Call Trace:
2017-09-14 19:53:51.252591 kernel: dump_stack+0x63/0x90

2017-09-14 19:53:51.252628 kernel: warn_alloc+0x11c/0x1b0
2017-09-14 19:53:51.252682 kernel: __alloc_pages_slowpath+0x811/0xe50
2017-09-14 19:53:51.252720 kernel: ? alloc_pages_current+0x8c/0x110
2017-09-14 19:53:51.258910 kernel: __alloc_pages_nodemask+0x21b/0x230
2017-09-14 19:53:51.258951 kernel: alloc_pages_current+0x8c/0x110
2017-09-14 19:53:51.259009 kernel: __page_cache_alloc+0xae/0xc0
2017-09-14 19:53:51.259041 kernel: filemap_fault+0x338/0x630
2017-09-14 19:53:51.268298 kernel: ? filemap_map_pages+0x19d/0x390
2017-09-14 19:53:51.268360 kernel: ext4_filemap_fault+0x31/0x50 [ext4]
2017-09-14 19:53:51.268397 kernel: __do_fault+0x1e/0xc0

2017-09-14 19:53:51.268436 kernel: __handle_mm_fault+0xb06/0x1090
2017-09-14 19:53:51.268471 kernel: handle_mm_fault+0xd1/0x240
2017-09-14 19:53:51.268504 kernel: __do_page_fault+0x222/0x4b0
2017-09-14 19:53:51.268539 kernel: do_page_fault+0x22/0x30
2017-09-14 19:53:51.268572 kernel: page_fault+0x28/0x30
2017-09-14 19:53:51.268605 kernel: RIP: 0033:0x45d561
2017-09-14 19:53:51.268666 kernel: RSP: 002b:00007f64d3ef2de8 EFLAGS: 00010246
2017-09-14 19:53:51.268717 kernel: RAX: 0000000000000000 RBX: 000000000000007c RCX: 000000000045d561
2017-09-14 19:53:51.268757 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
2017-09-14 19:53:51.277186 kernel: RBP: 00007f64d3ef2df8 R08: 00007f64d3ef2de8 R09: 0000000000000000

2017-09-14 19:53:51.277239 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
2017-09-14 19:53:51.277283 kernel: R13: 0000000000000034 R14: 0000000000000000 R15: 00000000000000f3
2017-09-14 19:53:51.277322 kernel: Mem-Info:
2017-09-14 19:53:51.277355 kernel: active_anon:903273 inactive_anon:164 isolated_anon:0
active_file:166 inactive_file:754 isolated_file:0
unevictable:0 dirty:0 writeback:0 unstable:0
slab_reclaimable:8251 slab_unreclaimable:17340
mapped:591 shmem:2354 pagetables:4389 bounce:0
free:14896 free_pcp:73 free_cma:0
2017-09-14 19:53:51.277393 kernel: Node 0 active_anon:3613092kB inactive_anon:656kB active_file:864kB inactive_file:2744kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:2364kB dirty:0kB writeback:0kB shmem:9416kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 606208kB writeback_tmp:0kB unstable:0kB pages_scanned:246 all_unreclaimable? no

2017-09-14 19:53:51.288390 kernel: Node 0 DMA free:15052kB min:184kB low:228kB high:272kB active_anon:764kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15988kB managed:15900kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:84kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
2017-09-14 19:53:51.288448 kernel: lowmem_reserve[]: 0 3717 3717 3717
2017-09-14 19:53:51.288483 kernel: Node 0 DMA32 free:44532kB min:44868kB low:56084kB high:67300kB active_anon:3612328kB inactive_anon:656kB active_file:912kB inactive_file:2516kB unevictable:0kB writepending:0kB present:3915776kB managed:3841148kB mlocked:0kB slab_reclaimable:33004kB slab_unreclaimable:69276kB kernel_stack:10096kB pagetables:17556kB bounce:0kB free_pcp:412kB local_pcp:156kB free_cma:0kB
2017-09-14 19:53:51.288520 kernel: lowmem_reserve[]: 0 0 0 0
2017-09-14 19:53:51.288553 kernel: Node 0 DMA: 5*4kB (UM) 1*8kB (M) 3*16kB (UM) 2*32kB (UM) 1*64kB (M) 2*128kB (UM) 1*256kB (U) 0*512kB 2*1024kB (UM) 0*2048kB 3*4096kB (ME) = 15052kB
2017-09-14 19:53:51.288609 kernel: Node 0 DMA32: 537*4kB (UMEH) 360*8kB (UMEH) 397*16kB (UMEH) 238*32kB (UMEH) 141*64kB (UMEH) 61*128kB (E) 22*256kB (E) 4*512kB (ME) 1*1024kB (M) 0*2048kB 0*4096kB = 44532kB
2017-09-14 19:53:51.288735 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
2017-09-14 19:53:51.288784 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
2017-09-14 19:53:51.294569 kernel: 3234 total pagecache pages
2017-09-14 19:53:51.294619 kernel: 0 pages in swap cache

2017-09-14 19:53:51.294670 kernel: Swap cache stats: add 0, delete 0, find 0/0
2017-09-14 19:53:51.294747 kernel: Free swap = 0kB
2017-09-14 19:53:51.294781 kernel: Total swap = 0kB
2017-09-14 19:53:51.294825 kernel: 982941 pages RAM
2017-09-14 19:53:51.300569 kernel: 0 pages HighMem/MovableOnly
2017-09-14 19:53:51.300616 kernel: 18679 pages reserved
2017-09-14 19:53:51.300673 kernel: 0 pages hwpoisoned


As you can see here, the system was seemingly stalled for >60 seconds trying to allocate memory. After around 10 minutes of the system being completely unusable, the OOM killer steps in and kills the greedy process.




I'd really love if someone could help me understand:




  • Why does it take the OOM killer so long to act?

  • Why do these allocations take so long? If there is no memory available, why does this not just fail?
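
For anyone reproducing this, one quick way to watch the direct-reclaim pressure building up while the greedy process runs (a sketch; the exact /proc/vmstat counter names vary between kernel versions):

watch -n1 'grep -E "allocstall|pgscan_direct" /proc/vmstat; free -m'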

Wednesday, June 26, 2019

wsus - Windows Updates not working via SCCM 2012 R2




We upgraded from SCCM 2012 SP1 on Server 2008 R2 to SCCM 2012 R2 CU2 on Server 2012 R2. Very simple site hierarchy - one site server provides MP, DP, SUP, etc. roles, and has WSUS installed (with all configuration performed by SCCM).



I am trying to deploy Windows updates and SCEP updates; my SCEP definition updates work perfectly, but Windows 7 updates, such as security and critical, do not jive so well. Between the Windows and the SCEP updates, the respective software update groups, ADRs, deployments, etc. are all identical to the extent that is relevant. There are no errors in UpdatesDeployment.log, UpdatesHandler.log, UpdatesStore.log, WUAHandler.log, or WindowsUpdate.log. The only thing that particularly stands out to me is that when I run a Software Update Scan Cycle (an SCCM client action) from a client, WindowsUpdate.log offers this information:



Agent   ** START **  Agent: Finding updates [CallerId = CcmExec]
Agent * Include potentially superseded updates
Agent * Online = Yes; Ignore download priority = Yes
Agent * Criteria = "(DeploymentAction=* AND Type='Software') OR (DeploymentAction=* AND Type='Driver')"
Agent * ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed
Agent * Search Scope = {Machine}

PT +++++++++++ PT: Synchronizing server updates +++++++++++
PT + ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}, Server URL = http://[REDACTED]:8530/ClientWebService/client.asmx
PT +++++++++++ PT: Synchronizing extended update info +++++++++++
PT + ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}, Server URL = http://[REDACTED]:8530/ClientWebService/client.asm
PT + ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}, Server URL = http://[REDACTED]:8530/ClientWebService/client.asmx
Agent * Added update {0BCA6C00-4FD3-4280-96BE-B89988FA1702}.101 to search result
~[Omitting 425 more lines identical except for the particular update GUID.]
Agent * Found 426 updates and 75 categories in search; evaluated appl. rules of 2398 out of 3466 deployed entities
Agent ** END ** Agent: Finding updates [CallerId = CcmExec]
~[Omitting a lot of identical lines that describe WUA's (successful) reporting.]

COMAPI >>-- RESUMED -- COMAPI: Search [ClientId = CcmExec]
COMAPI - Updates found = 426
COMAPI -- END -- COMAPI: Search [ClientId = CcmExec]


So it sure as heck seems like it's found some updates, but it never installs anything, nor are the updates shown in Software Center, even though the deployment is configured to do so. However, if I use Windows Update to check for updates on the client, I get this result in WindowsUpdate.log:



Agent   ** START **  Agent: Finding updates [CallerId = AutomaticUpdates]
Agent * Online = Yes; Ignore download priority = No
Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1"

Agent * ServiceID = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7} Managed
Agent * Search Scope = {Machine}
Setup Checking for agent SelfUpdate
Setup Client version: Core: 7.6.7600.320 Aux: 7.6.7600.320
~[Omitting lines about signature validation and SelfUpdate check (spoiler alert: "SelfUpdate is NOT required").]
PT +++++++++++ PT: Synchronizing server updates +++++++++++
PT + ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}, Server URL = http://[REDACTED]:8530/ClientWebService/client.asmx
PT +++++++++++ PT: Synchronizing extended update info +++++++++++
PT + ServiceId = {3DA21691-E39D-4DA6-8A4B-B43877BCB1B7}, Server URL = http://[REDACTED]:8530/ClientWebService/client.asmx
Agent * Found 0 updates and 75 categories in search; evaluated appl. rules of 2398 out of 3466 deployed entities

Agent ** END ** Agent: Finding updates [CallerId = AutomaticUpdates]
AU >>## RESUMED ## AU: Search for updates [CallId = {87B4DC09-5A34-4351-975C-EE9BB69D9346}]
AU # 0 updates detected
AU ## END ## AU: Search for updates [CallId = {87B4DC09-5A34-4351-975C-EE9BB69D9346}]


I have no idea if the results from Windows Automatic Updates are relevant to a WSUS/SCCM issue, so forgive me if my second chunk of logs is useless.



I have attempted the solutions offered in this question, with no change in results. Can anyone offer any other suggestions?




Additional details:




  • Synchronization between WSUS and SCCM is happy (confirmed successful by wsyncmgr.log).

  • Content is distributed in SCCM (confirmed successful by distmgr.log).

  • No errors in server-side logs: PatchDownloader.log, SUPSetup.log, WCM.log, WSUSCtrl.log, or wsyncmgr.log.


Answer



This was answered in a TechNet thread.




SCCM 2012's splendid version control increments its SUP catalog version every time it downloads new updates, as seen in the Catalog Version column under Monitoring -> Software Update Point Synchronization Status. Every update that the SUP adds is entered as a row in the CI_ConfigurationItems table in the SCCM database. One column in this table, SDMPackageDigest, contains XML metadata, including a MinCatalogVersion element whose value is a decimal integer recording which catalog version the update was added at. When upgrading from 2012 SP1 to 2012 R2, we imported our entire database to the new server, which meant that all updates had an entry for MinCatalogVersion, reaching at least as high as 2200. However, SCCM stores the catalog version in registry keys, which were not imported, which meant that on the new server the version number restarted at 1. Thus, the SUP would not install updates that had a higher MinCatalogVersion than the catalog version, which was... essentially all of them.



The fix for this is to change three registry values on the SCCM server, all located in the key HKLM\SOFTWARE\Microsoft\SMS\Components\SMS_WSUS_SYNC_MANAGER.




  • ContentVersion

  • LastAttemptVersion

  • SyncToVersion




After restarting the SMS_Executive service, updates promptly became available to all workstations they were deployed to.



I acknowledge that the proper thing to do would have been to use XQuery to search the XML data in the SQL table for the highest value for MinCatalogVersion; however, I was on a very tight deadline to fix the issue and did not have time to try to figure out an appropriate query. Thus, I just set all three of the registry values to 10,000 (decimal) and hoped for the best.


Prevent using wildcard in Apache server alias setting




This is a modified VirtualHost setting from my server.





<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /mnt/example/public

    <Directory /mnt/example/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
</VirtualHost>





I found that I can access example.com and also any *.example.com subdomain, e.g. www.example.com, abcde.example.com, etc.



I know there is a ServerAlias setting which would let me activate wildcard subdomain support, but I can't see any ServerAlias setting in my VirtualHost or Apache config file.




I want only www.example.com and example.com to be able to access my site; any other subdomain should get a 404 error.



How can I set this up?



Thanks all. :)


Answer



I think what you're missing before all your vhost containers in the conf file is:



NameVirtualHost *:80



Once you've done that, depending on what you want to do with all those wildcard hosts you deem "invalid", you can create the required containers and then follow up with a final one containing a wildcard that acts as a catch-all for the rest.



See http://httpd.apache.org/docs/2.2/vhosts/name-based.html



For instance, what I've done is have a vhost container for myhost.mydomain.tld, and then whatever other domains, and finally, I have a container for *.mydomain.tld that basically points to a static page notifying people to mind their own business.
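
Applied to the question's domain, the layout might look roughly like this (a sketch; the catch-all DocumentRoot is a placeholder):

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /mnt/example/public
</VirtualHost>

# final catch-all: any other Host header (abcde.example.com, ...) lands here
<VirtualHost *:80>
    ServerName catchall.example.com
    ServerAlias *.example.com
    DocumentRoot /var/www/catchall
</VirtualHost>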


Utilizing SSL on Multi-domain, Autoscaling Elastic Beanstalk Setup

We are creating a Content Management System for our company. It is important that this CMS support dynamic domain names on a dynamic number of servers. After many hours of research we felt that Amazon's Elastic Beanstalk was the way to go. One thing that we also require is the ability to dynamically enable SSL for the domains associated with our system.




So in our system, we can create a "site" which will be associated with a domain. When the site is created, we should also have the ability to choose if the domain will be hosted over SSL/TLS. We plan on white-labeling the system and anticipate a large number of domains being associated with it.



I have been exploring the different possibilities for being able to get SSL set up on the servers (or the load balancer) and be able to change what domains are secured on the fly. Here is where I am at:




  • Using Amazon's Certificate Manager: this would be the most desirable way to go about it. It is integrated with AWS and very easy to use. However, it has several debilitating limitations: 1. You have to verify every domain by email every time you request a new certificate. No big deal, except that 2. It cannot apply certs to EC2 instances, only load balancers, and load balancers can only be assigned one cert. This means you have to re-verify every domain whenever you want another domain to be secured. No good.

  • Using Let's Encrypt on the Load Balancer: this would be the next best way (that I can see) to secure our sites. Whenever a new site needs to be secured we will request a new certificate for all the domains that need SSL. Once the cert is created, we push it to IAM and tell EBS to associate the Load Balancer with the new Cert. The only problem I see with this is that LetsEncrypt limits their certs to 100 domains, as does the non-free but relatively inexpensive SSL provider, SSLMate. Might work for now, but it doesn't scale. Is there an automated SSL provider that has no limit on the number of domains on the cert?

  • Using Passthrough SSL: Amazon's Elastic Beanstalk allows you to set it up in such a way so that the load balancer will pass the encrypted traffic straight to the EC2 instances. Then you can allow the EC2 instances to handle the certificates. I can then utilize LetsEncrypt and assign an individual cert for each domain. I run into an issue when considering autoscaling: we will need to duplicate the certs across instances. My solution would be to store the certs in a secured S3 bucket, then have a cron running on all the EC2 instances to pull the new/updated certs over.
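
For the last option, the cert-sync job on each instance could be as simple as a cron'd S3 sync; a minimal sketch with a hypothetical bucket name, assuming the AWS CLI and nginx are already installed on the instances:

#!/bin/bash
# pull new/updated certs issued elsewhere, then reload the web server
aws s3 sync s3://my-cert-bucket/live/ /etc/letsencrypt/live/ --delete
nginx -t && nginx -s reload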




Are there any concerns with the last idea? Is there a better solution to what I am trying to do? Am I missing something? A concern? Or maybe a super simple solution to my problem?



Note that I am using docker, so I can set anything up on the server that I need to.

Tuesday, June 25, 2019

bash - Shell script Process PID logging and maintenance using exec

I am trying to launch a Java process as a server and will periodically need to restart or kill it. Since I use a shell script to launch the Java JVM (to build the classpath), I thought of logging the shell script's process ID using $$ and then running Java as
"exec java", so that I can use the logged process ID to kill the process and launch a new JVM.
Is that the best way to do it? Any feedback?
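
For concreteness, the pattern described above might look roughly like this (a sketch; the class name, jar directory and pidfile path are placeholders):

#!/bin/bash
PIDFILE=/var/run/myserver.pid
echo $$ > "$PIDFILE"                           # record the wrapper's PID
CLASSPATH=$(printf '%s:' /opt/myapp/lib/*.jar)
# exec replaces this shell with the JVM, so the recorded PID now belongs to java
exec java -cp "$CLASSPATH" com.example.MyServer

A restart then becomes kill "$(cat /var/run/myserver.pid)" followed by re-running the script.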

domain name system - Why is geo-redundant DNS necessary for small sites?


This is a Canonical Question about DNS geo-redundancy.





It's extremely common knowledge that geo-redundant DNS servers located at separate physical locations are highly desirable when providing resilient web services. This is covered in-depth by document BCP 16, but some of the most frequently mentioned reasons include:




  • Protection against datacenter disasters. Earthquakes happen. Fires happen in racks and take out nearby servers and network equipment. Multiple DNS servers won't do you much good if physical problems at the datacenter knock out both DNS servers at once, even if they're not in the same row.


  • Protection against upstream peer problems. Multiple DNS servers won't prevent problems if a shared upstream network peer takes a dirt nap. Whether the upstream problem completely takes you offline, or simply isolates all of your DNS servers from a fraction of your userbase, the end result is that people can't access your domain even if the services themselves are located in a completely different datacenter.




That's all well and good, but are redundant DNS servers really necessary if I'm running all of my services off of the same IP address? I can't see how having a second DNS server would provide me any benefit if no one can get to anything provided by my domain anyway.




I understand that this is considered a best practice, but this really seems pointless!

Saturday, June 22, 2019

performance - Hardware setup advice for SQL Server 2005



I'm currently having trouble with a database server for a DB with lots of small writes and a few comparatively big reads. Read performance is more important because people are involved; writes are performed by automated clients. This database is currently 30GB and will grow to a hundred GB or so.



The current awful and ugly setup, which underperforms, is:




  • DELL Poweredge R300

  • Quad Core Xeon X3353 @2.66GHz


  • 14 GB RAM

  • SAS 15k 146GB (two partitions, one OS, another Logs)

  • SAS 15k 146GB (one partition, data and tempdb)

  • No RAID (ugh)

  • SATA 7k 1TB [via USB 2.0, external power supply, where daily backups are stored]



So, due to its many ugly points (underperforming, no RAID, external USB drive for backups) I'm planning to set up a new database server.



I've thought about:





  • 4 SAS 15k 73GB RAID10 for logs

  • 2 SAS 15k 73GB RAID1 for OS

  • 4 SAS 15k 146GB RAID10 for data + tempdb

  • 32GB RAM



So, three questions:





  • Is this the best price/performance combination? Is this overkill? Is there a better combination? Is more information needed?


  • Do you know of a reasonably priced single machine which can contain the 10 disks, or must I go for external locally attached storage right now?


  • Any recommendations for getting something like this at a reasonable price/quality?



Answer



32GB of RAM is a nice typical round number for most server CPUs, but if you are looking at Xeon 5500 (Nehalem) CPUs then remember that they are configured optimally in banks of 3 DIMMs per CPU (multiples of 6 DIMMs for dual-socket servers), so you will see better performance with 24GB or 48GB of RAM rather than 32GB.



There are plenty of servers that will take 10 disks, but whether you would consider them reasonably priced or not I can't say. I doubt that you will find any 1U server like the R300 that will take 10 disks - HP's DL360 G6 can cram 8 2.5" SFF drives into 1U for a basic list price of just over $2K. Dell's R610 is roughly equivalent (it's also a 1U dual-socket Xeon 5500 system) but only gets 6 drives internally. Even if you bump up to the DL380/R710 2U class servers...




Rackable towers like the HP ML370 G6 run for around $2.5k basic but can take up to 24 drives. Dell's T710 runs to around 16.



The prices above are the minimum list prices for the chassis with 1 CPU, no disks and virtually no RAM. A configured HP ML370 with those drives, 24GB of RAM and dual Xeon E5540 CPUs will have a list price of around $10K. The bulk of that comes from your drive choices - you could probably save about $1500-$2K by opting for a single-socket variant, but the marginal cost of the RAM might increase, as the 4GB DDR3 DIMMs are about 50% more expensive per GB than 2GB modules.



Edited to add some idea of performance comparisons.



A Xeon E5540 2.53GHz CPU will be about 50% better clock-for-clock than your existing CPU. A dual-socket setup with SQL 2005 should scale out reasonably efficiently, so in pure CPU terms a dual-socket system like those listed above will have about 3x the CPU power of your R300.



Set up with balanced DDR3 RAM @ 1066MHz you should see about 3x the memory bandwidth, possibly more, since I suspect that your current R300 setup isn't optimal.




In terms of disk performance you're increasing your effective random IO capacity by at least a factor of 5 (obviously enough), but by adding in a decent advanced RAID controller and properly segregating the functionality you will probably see substantially better performance improvements over and above that.



The Tylersburg chipset used in (almost) all of the current Xeon 5500-based systems also separates memory IO from the rest, which is harder to quantify but will invariably be of some benefit.



These systems are many times faster than your existing setup, but whether that actually translates into improved performance for your users depends on what your current bottlenecks are. If it's a poorly designed app or network congestion, beefing up the server alone isn't going to result in such clear gains.


Friday, June 21, 2019

linux - Best practice for scaling a single application source to multiple nodes

I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one) but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's already handled with replication in MongoDB.



Also, the configuration of the servers is the same (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it.



My two thoughts so far have been NFS and a SAN - the SAN being prohibitively expensive and NFS seeing some performance issues on the second node with regard to glob()ing in PHP.




Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or know of any potential gotchas in NFS that may cause slow disk seek times?



To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds. The secondary is taking ~2.2 seconds. They're VMs inside a local virtual network in ESXi and the ping time between them is ~0.3ms.

php fpm - NGINX accessing the files in the wrong path

I have a codeigniter application that has the following directory structure




|_
| |_
| |_index.php
|_



The developers have an index.html in each of the directories shown above. The CodeIgniter files and the services rolled out by the developers live under the web/API directory. The developers access the services by appending the service name to the domain, as follows: http://mydomain/web/API/index.php/v1/GetRoles. The host name is configured as a BASE PATH variable in a JavaScript file.



This app is served by NGINX running fast-cgi.



Though the HTML pages get loaded, no data is being fetched. So we tried accessing the functions by typing the URL into the browser (http://mydomain/web/API/index.php/v1/GetRoles) and we get a 404 Not Found error. The error logs contain the following entries:



2016/05/20 16:14:57 [error] 2127#0: *232 FastCGI sent in stderr: "Unable to open primary script: /home/myname/public_html/domainname/public/projectname/index.php (No such file or directory)" while reading response header from upstream, client: 14.141.163.46, server: mydomain.com, request: "GET /web/API/v1/get/Common/GetActivities HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "mydomain.com"


The server should be looking for index.php at /home/myname/public_html/domainname/public/projectname/web/API/index.php; instead it's looking for it in the web root.




My NGINX configuration is as follows



server {
listen mydomain.com;
server_name mydomain.com;
root /home/myname/public_html/domainname/public/projectname;

keepalive_timeout 70;
access_log /home/myname/public_html/domainname/log/access.log;

error_log /home/myname/public_html/domainname/log/error.log;

gzip_static on;

index index.html index.php;

location / {
try_files $uri $uri/ /index.php?$query_string;
}


error_page 404 /404.html;

error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/www;
}

# pass the PHP scripts to FastCGI server listening on the php-fpm socket
location ~ \.php$ {
#try_files $uri =404;

#fastcgi_pass unix:/tmp/php5-fpm.sock;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_intercept_errors on;
}
}



I am unable to figure out why NGINX is looking for index.php in the web root instead of loading whatever we type in the URL. I look forward to hearing from anyone who has faced a similar problem.
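
One observation (not from the original post): a URL like /web/API/index.php/v1/GetRoles does not end in .php, so it never matches the location ~ \.php$ block; try_files then rewrites it to /index.php at the site root, which is exactly the path in the error log. A common way to handle this kind of PATH_INFO URL is a location that splits the script name from the trailing path, roughly like this sketch:

location ~ ^(.+\.php)(/.*)?$ {
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    include fastcgi_params;
}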

linux - SSH traffic redirect for LXC containers




I use LXC containers for ssh hosting and I would like to redirect SSH/SFTP traffic (using port 22) to the container's private IP address but on a user/IP basis. That is - one source port, many destinations.




  1. ssh ahes@server.com

  2. we have user 'ahes', private IP for this user container is 10.10.66.66

  3. redirect traffic to 10.10.66.66:22



It is not possible for me to assign a public IP address to each container.




Possible solutions I figured out:




  1. Easy one - forget about the global port 22 and use a port matching each particular user. For example, ahes would have port 6666. Then redirect traffic with a simple iptables rule: server.com:6666 => 10.10.66.66:22. The disadvantage is that in some places ports other than 22/80/443 are blocked.


  2. use the ForceCommand directive in sshd on the parent server:





Match Group users
ForceCommand /usr/local/bin/ssh.sh



ssh.sh script:




#!/bin/bash
# ...some logic here to find user IP address
# run ssh
exec ssh $USER@$IP $SSH_ORIGINAL_COMMAND



This solution is almost good, but I didn't find a way to make sftp work with this configuration.



The other consideration is that I cannot dig into the protocol, because encryption is negotiated before any data identifying the user is sent. Furthermore, I don't really have the skills to hack the sshd source code, and keeping the parent server on stock packages is very desirable for security reasons.



I also found the libpam-nufw package, used for authentication at the connection level (iptables), but I think it is meant for other purposes.



I would appreciate any clues. Thank you.


Answer



Set up an HTTP proxy listening on port 443 and allow it to forward connections to port 22 on the internal LXC IPs. Then, when using ssh/sftp clients, use the ProxyCommand option combined with netcat/socat/proxytunnel/whatever.




Another common solution is to set up an SSH gateway (for instance, a dedicated LXC on the same box). Users connect there first and then to their LXC instance.
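
On the client side, the first approach might look roughly like this (a sketch using the question's example user and container IP; it assumes proxytunnel is installed and the proxy on the host allows CONNECT to port 22):

ssh -o ProxyCommand='proxytunnel -p server.com:443 -d %h:%p' ahes@10.10.66.66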


Thursday, June 20, 2019

IIS 7.5 Different ssl certificate with different domain name on same ip

I have an IIS 7.5 (Windows 2008 R2) server and I want to bind the same web site to 2 different domains and 2 different certificates.



I can't use a wildcard certificate since the domains are different FQDNs.




If I add 2 bindings for HTTPS on port 443, I can't select 2 different certificates (when I change one binding it changes the other).



Is there a way to solve this without using a different port or splitting it into 2 different websites?

Wednesday, June 19, 2019

Nginx not redirecting non-www to www




I need my Nginx setup to redirect users to www.example.com if they type example.com in the browser. The reason is that our SEO consultant said there should be only one preferred domain, otherwise Google sees it as content duplication. Anyway . . .



So the point is, I also have SSL from Letsencrypt set up on the server, but I'm not able to achieve the redirect from example.com to www.example.com (the server accepts both versions). Here's the configuration I'm using:



server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://www.example.com$request_uri;

}

server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
include snippets/ssl-example.com.conf;
include snippets/ssl-params.conf;

server_name example.com www.example.com;
root /home/my_site;

index index.php index.html index.htm;

# for letsencrypt
location ~ /.well-known {
allow all;
}

location / {
try_files $uri $uri/ /index.php?q=$uri&$args;
}


error_page 404 /404.html;

error_page 500 502 503 504 /50x.html;

location = /50x.html {
root /usr/share/nginx/html;
}

location ~ \.php$ {

try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
}


==== Update ====




I have now changed my configuration, as advised by Tim in one of the answers (and I always run nginx -t and restart), to the following:



server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://www.example.com$request_uri;
}


server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com;
include snippets/ssl-example.com.conf;
include snippets/ssl-params.conf;
return 301 https://www.example.com$request_uri;
}

server {

listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name www.example.com;
include snippets/ssl-example.com.conf;
include snippets/ssl-params.conf;

root /home/ankush/wp_ankushthakur;
index index.php index.html index.htm;

# for letsencrypt

location ~ /.well-known {
allow all;
}

location / {
try_files $uri $uri/ /index.php?q=$uri&$args;
}

error_page 404 /404.html;


error_page 500 502 503 504 /50x.html;

location = /50x.html {
root /usr/share/nginx/html;
}

location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;

fastcgi_index index.php;
include fastcgi_params;
}
}


Here's the output of curl -k and access logs for all of the variations (I didn't build Nginx from source because I'm hoping for a simpler solution and don't want to mess up the server):



curl -k http://example.com
Curl output: 301 - Moved permanently

Access logs: "GET / HTTP/1.1" 301 194 "-" "curl/7.47.0"

curl -k http://www.example.com
Curl output: 301 - Moved permanently
Access logs: "GET / HTTP/1.1" 301 194 "-" "curl/7.47.0"

curl -k https://example.com
Curl output: 301 - Moved permanently
Access logs: "GET / HTTP/1.1" 301 194 "-" "curl/7.47.0"


curl -k https://www.example.com
Curl output:
Access logs: "GET / HTTP/1.1" 301 5 "-" "curl/7.47.0"


Notice the last section, where the CURL output is blank and the access logs still give a permanent redirect.



Funnily enough, if I comment out the second server block and then restart Nginx, I end up with the opposite effect of what I wanted: www redirects to non-www! I'm surprised that's happening, because the HTTPS version of www.example.com isn't mentioned anywhere in this (third) version of the config.


Answer



I was finally able to convince our SEO person to consider the non-www domain as primary. The configuration that worked to redirect www to non-www is below. Although my attempt at achieving the reverse had a similar configuration, I'm not sure what was preventing it from working.




server {
listen 80;
listen [::]:80;

server_name example.com www.example.com;
return 301 https://example.com$request_uri;
}

server {

listen 443 ssl http2;
listen [::]:443 ssl http2;

include snippets/ssl-example.com.conf;
include snippets/ssl-params.conf;

server_name www.example.com;
return 301 https://example.com$request_uri;
}


server {
listen 443 ssl http2;
listen [::]:443 ssl http2;

include snippets/ssl-example.com.conf;
include snippets/ssl-params.conf;

server_name example.com;

root /home/mysite;

index index.php;

location ~ /.well-known {
allow all;
}

location / {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
try_files $uri $uri/ /index.php?$query_string;
set $path_info $fastcgi_path_info;

fastcgi_param PATH_INFO $path_info;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
}
}

active directory - Windows: Can domain controllers also serve other functions?



This question was a discussion about whether Active Directory is necessary to run Terminal Services. But a chain of answers and comments (mostly by me) brought up a related question around Domain Controllers.



It is clearly poor practice to have only one Domain Controller in an AD environment. It is also clearly best practice to have each domain controller on a separate (physical or virtual) single function server. However, not everyone can follow best practices all of the time.




Is it OK to use servers filling other roles as domain controllers?



What things should be considered in determining whether to "dual-purpose" a server?



Does the domain controller role change how Windows operates the file system or on the hardware?



Are there differences between versions of Windows Server?


Answer



You can and it works. I have about 40 branch offices and - for political reasons - a management decision was made to give each a full server infrastructure. For financial reasons it was a single-server environment in each, so it's all DC/File/Exchange (this was in the Windows 2000 days).




However, management of it is a nightmare, and my preferred rule is "a DC is a DC and nothing else goes on it". These are your most important servers, and if your AD goes funny you will have a horrible time getting it back right. If you can, give yourself the best chance of avoiding this by having dedicated DC roles. If you can't, beg, scream, whimper, bribe, threaten, prophesy, or whatever it takes to put yourself in a position where you can.


Monday, June 17, 2019

Apache multiple SSL sites, multiple IPs, without DNS modification



My goal is to have multiple SSL sites on multiple IP addresses, but I'm struggling with the Apache setup:




// I want this:
http + https example.com
http + https example.net

// On these IPs:
http example.com 1.1.1.1:80
http example.net 1.1.1.1:80
https example.com 2.2.2.2:443
https example.net 3.3.3.3:443



Note that the DocumentRoot is different for all 4 sites.



In my current Apache setup, when a client visits https://example.com, Apache serves up 1.1.1.1 (connection refused, assume :443) instead of 2.2.2.2:443. The same is true with https://example.net (instead of 3.3.3.3:443). I assume this is because of my DNS A records for @ and www pointing to 1.1.1.1. The non-SSL 1.1.1.1 name-based vhosts work fine.



I'm not sure if this is intended Apache behavior or not. So the core of my question is, "is this intended Apache behavior? If so, could someone give me an example of how the IPs should look in this situation? Should BOTH http and https example.com be on ONE IP instead of me splitting them up like this?"



My httpd.conf is like this right now:



# http example.com and http example.net:

Listen 1.1.1.1:80
# https example.com:
Listen 2.2.2.2:443
# https example.net:
Listen 3.3.3.3:443

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName example.net
    DocumentRoot /var/www/example.net
</VirtualHost>

<VirtualHost 2.2.2.2:443>
    SSLEngine on
    ServerName example.com
    DocumentRoot /var/www/example.com-ssl
</VirtualHost>

<VirtualHost 3.3.3.3:443>
    SSLEngine on
    ServerName example.net
    DocumentRoot /var/www/example.net-ssl
</VirtualHost>




Edit: Every Google search I do returns tons of SNI guides (multiple SSL vhosts on one IP), which is not what I'm looking for.


Answer



You seem to have misunderstood how DNS works.



DNS in this case resolves names such as example.com to IP addresses such as 203.0.113.1. You can't have a different IP address for a different port or service.



Thus, you need to use the same IP address for HTTP, HTTPS and every other service that might be served with that domain name.


nginx - SSL Personal Certificate link to Intermediate Certificate is broken



Thank You for reading.



I have a test server built where I am trying to implement encrypted communication using SSL/TLS. The communication is between IIS (the web server where the ASP.NET application is published) and NGINX on the remote server.



I am having a problem establishing communication, as IIS sends an empty certificate to NGINX when NGINX sends a certificate request to IIS. The intermediate certificate on the Windows server is what NGINX is expecting.



I have found that there is a broken link between the SSL certificate of the ASP.NET application and the intermediate certificate.




This is an in-house dev environment, so the SSL/TLS communication should be established using self-signed certificates only.



Personal Certificate Snapshot



Now, when I check the SSL setup using an online checker, I get the following snapshot.



SSL online checker



I believe that the broken link here may be the reason for the lack of encrypted communication, but I am not sure.




Thank You for reading my post.


Answer



Your chain has no intermediates, so they can't be sent.



Over TLS the End-Entity certificate (either client or server auth, depending on who is sending it) is transmitted, along with any intermediates, but NOT the self-issued root certificate.



Your system will need to have already had the root certificate to determine trust, and your system will need to have already had a way of building chains, so the TLS implementors decided that sending the root certificate is a waste of bytes on the wire.



* Root

|
-- * Intermediate 1
|
-- * Intermediate 2
|
-- * Intermediate 3
|
...
|
-- * End-Entity / Leaf



Most modern infrastructure is Root -> One Intermediate -> End-Entity.
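
A quick way to see exactly which certificates a server presents during the handshake (and therefore whether any intermediates are being sent) is openssl's built-in client; the host name below is a placeholder:

openssl s_client -connect myserver.example.com:443 -showcerts </dev/null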


Sunday, June 16, 2019

networking - Network file system with failover

Basically I'm looking for "multipath NFS". I want a classic network filesystem, but with multiple servers mounted at the clients under a single mount point, and it should handle a server failure with transparent failover among the servers without any delay. Load balancing and performance are not an issue. Sync among the servers can be done outside of this solution; it could even be read-only for the normal clients through this interface.



I prefer to avoid GFS, Lustre, AFS, IP round robin and "complicated" things like those.



Do you know a simple solution for this problem?

Friday, June 14, 2019

Apache, MySQL investigation needed!

I have a problem with my server: everything works fine, but then suddenly the server load increases to 12-25. When I restart Apache and MySQL the load drops, and after another 1-2 days the load increases again :(



Can you advise me on methods which I can use to investigate and fix this annoying problem?

Thursday, June 13, 2019

Exchange e-mails delayed/not delivered to *Internal* Distribution List



I'm having an issue right now with a specific distribution list on our Exchange 2007 server. Sending e-mails to it causes Exchange to bounce back after several hours with "Delivery is delayed to these recipients or distribution lists: ". No recipients receive the message, and it's an internal distribution list. Does anybody know why this could be happening?



It's the second time it's happened; the first time was a couple of months ago. That time we opened a support case with Microsoft, as we were so confused by it, but we never really figured out the exact root cause or why it started working again.



One weird thing I noticed is that despite the fact that Exchange will notify the sender after four hours that the message is delayed, the message does not appear in the Queue Viewer.


Answer



I didn't realize there was another Hub Transport server involved in this organization. After looking at the queue on that Hub Transport server, I found the delayed messages, which it was trying to send to the main server. In the queue viewer it showed the error it was receiving: "451 4.4.0 Primary target IP address responded with: "451 5.7.3 Cannot achieve Exchange Server authentication." Attempted failover to alternate host, but that did not succeed. Either there are no alternate hosts, or delivery failed to all alternate hosts." A bit of googling led me to this solution, which I've pasted below.




Change the Default receive connector properties to the following



Authentication Tab:
TLS(Transport Layer Security): Select
Basic Authentication: Select
Exchange Server Authentication: Select
Windows Integrated Authentication: Select
External Security (IPSec): not selected



Permission Groups:
Anonymous Users: Uncheck
Exchange Users: Select
Exchange Servers: Select
Legacy Exchange Server: Select
Partners: Uncheck


Tuesday, June 11, 2019

security - Backup to track fraud on Dedicated Server running Apache and MySQL

Background:



I am running a dedicated server with WHM/cPanel and I would like to know what to back up. My old VPS was hacked into using a security vulnerability in TimThumb, and I was unable to track back who had done it: the logs were being deleted once in a while, and by the time I got around to analyzing them, they were gone.



On my new (and hopefully secure :)) server I would like to regularly back up the logs and everything else I would need to track down someone who executed malicious commands and web requests on my server.




Question:




  • What do I need to back up to track things like HTTP events, SSH connections, etc.?

  • Where exactly are those files located?

  • Is there an automated way of copying the files or doing this backup?
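
For example, for the last point, a cron'd rsync along these lines would do the copying to a separate log host (host and paths are placeholders; it assumes key-based SSH to the log host, and that on a cPanel box the Apache logs live under /usr/local/apache/logs - adjust to your layout):

# /etc/cron.d/offsite-logs - hourly one-way copy to a remote log host
0 * * * * root rsync -azR /var/log/ /usr/local/apache/logs/ backup@loghost.example.com:/srv/logs/$(hostname)/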



Please advise me on this task.




Thank you

lsi megaraid migrate raid 6 to raid 5 without data loss?




I'm using an LSI MegaRAID SAS 9260-8i controller, firmware version 12.15.0-0189, with 8 physical drives. I currently have the main array running on RAID 6 with 5 disks and a second array on RAID 1 with 2 disks. The last disk slot is a hot spare for the RAID 6.
Now I need to run a third array, RAID 1, so I'm looking for two vacant slots...



Is there a way to migrate my array from RAID 6 to RAID 5, removing one disk, without recreating it and restoring the data from backup? After that I would remove the hot spare and have freed up the second slot.



Regards.


Answer



In general, there's no RAID controller I'm aware of that can migrate volumes to a configuration with a lower drive count.



MegaRAID is no exception to this: you can either keep the drive count or increase it (i.e. when increasing the redundancy level).




MegaRAID supports the following RLM paths with the above in mind:




  • RAID 0 to RAID 1

  • RAID 0 to RAID 5

  • RAID 0 to RAID 6

  • RAID 1 to RAID 0

  • RAID 1 to RAID 5

  • RAID 1 to RAID 6


  • RAID 5 to RAID 0

  • RAID 5 to RAID 6

  • RAID 6 to RAID 0

  • RAID 6 to RAID 5



On a side note, if you are about to RLM a sizable amount of data (a few TB), a complete array reconstruction may be the better path performance-wise.



You still need to do a full backup beforehand regardless of the way you choose, but RLM itself would take ages with a huge performance impact and unpredictable outcome, especially if your drives are old and patrol read is not being run on schedule.


Monday, June 10, 2019

apache 2.2 - 127.0.0.1 is working but localhost is not working on mac XAMPP



I installed XAMPP on my Mac months ago and it was working great.



Now i get "Test Page For Apache Installation" when i try to browse /localhost



and /localhost/xampp is not found.




But when I browse /127.0.0.1 it just works, as localhost used to.



I double-checked my /etc/hosts file: I have 127.0.0.1 localhost there and it is not commented out.



Also, when I browse localhost/~username/test.php, I get the contents of test.php:







but if I browse 127.0.0.1/~username/test.php, I get:



ganim


What could have changed the handling of localhost, and how can I get localhost working again?


Answer



Maybe the OS X built-in web server is active and managed to bind to localhost, while XAMPP managed to bind to 127.0.0.1? Try turning off Web Sharing in System Preferences and restart XAMPP.
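
A quick way to check which httpd is actually answering on port 80 (a suggestion beyond the original answer; requires sudo):

sudo lsof -nP -iTCP:80 -sTCP:LISTEN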


Sunday, June 9, 2019

networking - Twisted pair cable twists and unwanted signals issue



I am confused about one point. I have read the following paragraph in a networking book:
“the twists in the twisted pair cable are used to avoid the unwanted signals. For example one twist, one wire is closer to the noise source and the other is farther; in the next twist the reverse is true. Twisting makes its probable that both wires are equally affected by the unwanted signal. This means that the receiver which calculate the difference between the two receives no unwanted signal.”



Now, OK, I understood the purpose of the twists, but I am confused about how the receiver will calculate the difference when it receives the signal. How will the unwanted signal be eliminated?
Another thing that I want to make clear is that I am a beginner; please provide an answer that can be understood.



Answer



A 'voltage' as such, is very difficult to measure. In fact, it's hard to even define it. What's always used is a 'voltage difference'. A typical 'AA' battery uses chemical energy to keep a voltage difference of 1.5V between its contact points. A light bulb will light up when a voltage difference forces electric charges to flow through its filament.



Think of a waterfall: the energy of the fall depends only on the difference between the altitude at the top and the bottom of the fall. It doesn't matter whether it occurs on top of a mountain or at sea level, as long as the fall itself is the same length.



In old 'single-ended' signals (like RS-232, a parallel port, or old IDE), bits are represented by the voltage of individual wires... relative to a 'reference point' (or ground connection). It's always a voltage difference, but the reference is constant, so it's not always mentioned.



In 'differential signals' (Ethernet, 'Ultra SCSI', any modern serial link (USB, SATA, SAS, FireWire, even PCI-e!)), each signal is carried by two wires, usually twisted together (or very close traces on a printed board), and the receiver doesn't use a common reference point to measure the voltage difference; it uses the difference between the two signal wires. This way, it doesn't matter if wire A is 22V and wire B is 25V, or A is -10V and B is -7V; it only matters that B is 3V higher than A.


openssl - Heartbleed flaw fix on Debian Wheezy




Is there any 100% working method to update OpenSSL to the non-vulnerable version on Debian Wheezy?
I do not want to upgrade the whole OS, nor would I like to install a non-official package.




Is there any solution right now ?



Thanks


Answer



from DSA-2896-1 openssl -- security update :




For the stable distribution (wheezy), this problem has been fixed in version 1.0.1e-2+deb7u5.





so,
assuming you have:



deb http://security.debian.org/ wheezy/updates main contrib non-free


in your /etc/apt/sources.list file



apt-get update

apt-cache policy openssl
apt-get install openssl


apt-cache policy openssl will show you candidate updates



apt-get install openssl will upgrade to last openssl version



Run apt-cache policy openssl again and check that the version on the Installed: line is equal to or higher than 1.0.1e-2+deb7u5.




Upgrading the openssl package should also upgrade libssl1.0.0, as it's a dependency.
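
One extra step worth mentioning (not part of the original advisory text): services that loaded the old library stay vulnerable until they are restarted. A rough way to spot them:

# list processes still mapping the old, now-deleted libssl (they need a restart)
sudo grep -l 'libssl.*(deleted)' /proc/*/maps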


partition - System with RAID 5 array, plus additional drive for OS

I'm in the process of building a NAS system.
It has 8 1TB drives, which I am currently building into a RAID 5 array using a 3ware hardware RAID controller.



I've also installed a single 250GB drive in the top of the unit (there is an additional drive bay there).
My intention is to use this drive as the OS / boot drive.



How do I go about setting this up?

By that I mean, do I have to set some jumper on the drive or a setting in the BIOS to boot from my single drive rather than my RAID array?
I've plugged the SATA cable from the drive into the mainboard - SATA 0 (I think, at least).



Also, how should I format my RAID array once it has finished initializing?
Should I be using GPT? I will install Windows Server 2008 x64 Standard as the OS on the above-mentioned 250GB drive.

Saturday, June 8, 2019

debian - 20 Watt difference between C-states enabled in Bios and same states in Intel_idle?



I am trying to reduce the idle power usage of a dual-Haswell-EP server made by Intel. If I enable the C-states in the UEFI-Bios, the minimum power consumption is about 80W. However, if I disable C-states in the Bios and boot the system, the minimum power usage never drops below 100W. (everything else same, same microcode, same frequencies, same Bios version)



This is surprising, because in both cases, after booting Debian, the intel_idle driver takes over control, as reported by /sys/devices/system/cpu/cpuidle/current_driver. I do not see any reason why the power consumption should be different: powertop and turbostat report the same 99.9% C6 state for all cores.




The power draw is measured via sensors or the inbuilt BMC console.



For reasons irrelevant to this question, I would like to boot with the C-states disabled and then let the intel_idle driver take over. Is there any other implied difference from disabling the C-states in the BIOS, and a way I could achieve the same minimal power consumption?


Answer



When changing C-states via BIOS, some other parameters can be automatically set to lower performance/power. For example:




  • CPU performance bias can be changed

  • PCI-E Active State Power Management (ASPM) can be disabled;


  • other integrated components can be set to increased efficiency / lower performance.



I have often seen a BIOS configuring overly aggressive power management settings; I generally enable all C-states in the BIOS and set it to the "Balanced/Optimized" or "Performance" profile, while leaving any C/P state/transition choices to the Linux kernel.
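
To check whether the BIOS toggle also changed one of the settings above, something along these lines can help (a sketch of the usual suspects, not an exhaustive list; the MSR read needs the msr module and msr-tools):

# PCIe ASPM: per-device link power management and the kernel's policy
sudo lspci -vv | grep -i aspm
cat /sys/module/pcie_aspm/parameters/policy
# energy/performance bias (0 = max performance ... 15 = max powersave)
sudo modprobe msr && sudo rdmsr -a 0x1b0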


Friday, June 7, 2019

linux - Re-assembling the RAID-5 array reboots my CentOS-5 machine




I have 3 HDDs, each divided into 3 partitions.



I had created a RAID-1 for boot partition




  • md0 created from sda0, sdb0



and had also created two RAID-5 arrays:





  • md1 created from sda1, sdb1, sdc1

  • md2 created from sda2, sdb2, sdc2



It used to work fine but one day I had to power off the machine (cold reboot) to get any response from the machine. After that, when the system started booting, it tried for a while to reconstruct the RAID arrays but after a few minutes it crashed silently.



I booted the system in linux rescue mode from the DVD and tried to re-assemble the RAID devices manually. I was able to re-assemble md0 and md1 using:





mdadm --assemble --scan /dev/md0



mdadm --assemble --scan /dev/md1




But when I try to re-assemble md2 using:




mdadm --assemble --scan /dev/md2





the system reboots silently again.



How can I fix this problem?


Answer



I suspected that it was a hardware issue, but the problem was the old kernel used in CentOS 5. I used an Ubuntu 10.10 32-bit live CD and was able to reconstruct the array with it.



After reconstructing the array, I rebooted the server with the original CentOS on it and then upgraded all packages and the kernel. It's been working just fine ever since.
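
After a reassembly like this, the array state can be sanity-checked with the standard mdadm tools (not specific to this post):

cat /proc/mdstat                # overall status of all md arrays
mdadm --detail /dev/md2         # per-array state, sync progress, member disks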


mysql - ubuntu, folder perms, drwxrwx---, php user in group, can't create file



i'm stuck & need help understanding file create permission for members of group.







In PHP, I want to fopen / create a file in a folder that is owned by mysql:mysql
(for importing data into MySQL).



ls -ld on the folder:



drwxrwx--- 2 mysql mysql 4096 Dec 14 14:33 /var/lib/mysql-files



PHP runs as user www-data.



I added the 'www-data' user to the 'mysql' group:



sudo usermod -a -G groupName userName 


verified



sudo groups www-data

www-data : www-data mysql


It appears my PHP user account 'www-data' has write permission on the folder through group membership, but I get error 13, 'permission denied'.






While typing this question, I found a similar question (https://serverfault.com/a/534000/65092)



whose answer was that the parent folders (/var and /var/lib) need to have 'x' permissions for the user or group. I understand that to mean:




the PHP user 'www-data' needs to be able to look inside /var, and then inside /var/lib, in order to reach /var/lib/mysql-files.



/var = drwxr-xr-x 16 root root 
/var/lib = drwxr-xr-x 62 root root


and it appears this is already enabled.







Any suggestions or comments?
Thanks.


Answer



Solution:



I applied the solution from Mike's suggested link:



sudo chmod g+s /var/lib/mysql-files



(which is supposed to set the group ID of any newly created files to the group of the parent folder), but I was still unable to create a file in that folder using PHP.



Upon further reading, I learned that group membership changes are applied at login (duh, of course), but since user www-data does not have a password and cannot log in, I needed to reboot the server to see if the new group membership would take effect.



Next I tested the file creation through PHP from a terminal, but it still did not work. I soon realized the PHP CLI was launched from my own user account in the terminal, so it was not running as user www-data, hence permission denied. I launched PHP from the terminal under user www-data with sudo -u www-data php -a and voila! The file was created.



Checking the file permissions after creation, the owner was www-data and the group was properly set to the group of its parent folder; however, the group write permission was not set. Further reading about umask led me to use sudo setfacl -d -m group:mysql:rwx /var/lib/mysql-files
(to set up a default access control list that enables group write permission on new files created in the folder; the write permission on the folder itself was already enabled for the group, so I'm not sure why this extra step was needed).




A further test from the command line running php as user 'www-data' passed.



A test using the full php script passed.
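
Putting it together, the whole sequence looks roughly like this (a consolidated sketch of the steps above; the PHP-FPM service name is hypothetical, and restarting it is an alternative to the reboot, since group membership is only picked up when the process starts):

sudo usermod -a -G mysql www-data
sudo chmod g+s /var/lib/mysql-files
sudo setfacl -d -m group:mysql:rwx /var/lib/mysql-files
sudo service php5-fpm restart       # hypothetical service name; a reboot also works
sudo -u www-data php -a             # quick interactive test as the web user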



Thanks!


Thursday, June 6, 2019

user permissions - Why doesn't MS Windows SystemLocalService have the "Log on as a service" right?

My developer is telling me that in order to run a service with the reduced privileges of the built-in account SYSTEM\LocalService, he needs to grant it the "log on as a service" right.



How can this possibly be so? Half of the services in my Windows 2012 R2 machine are running as LocalService (the other half, inexplicably LocalSystem).




My developer points me to this page, and indeed the right is not listed there: https://msdn.microsoft.com/en-us/library/windows/desktop/ms684188%28v=vs.85%29.aspx



Can someone explain this paradox to me?

smtp - iptables port forwarding to server with different port


Wednesday, June 5, 2019

Connect via Webdav from Windows 7?

I'm trying to connect to an Alfresco server, via WebDAV, from a Windows 7 client. I can create a web folder connection with the wizard, but three or more folder links are created and none of them work; double-clicking on them simply does nothing. One of the folders has the name that I specify in the wizard, the others are simply named after the server address.



While surfing the net I've seen that others have experienced the same issue, but so far I haven't seen any solution or any explanation.



Edit: I might add that the client is running Windows 7 RC, build 7100.

Tuesday, June 4, 2019

networking - What is the network address (x.x.x.0) used for?




It appears to be common practice not to use the first address in a subnet, that is, the IP 192.168.0.0 in 192.168.0.0/24; a more exotic example would be 172.20.20.64 in 172.20.20.64/29.



The ipcalc tool I frequently use follows the same practice:




$ ipcalc -n -b 172.20.20.64/29
Address: 172.20.20.64
Netmask: 255.255.255.248 = 29
Wildcard: 0.0.0.7
=>
Network: 172.20.20.64/29
HostMin: 172.20.20.65
HostMax: 172.20.20.70
Broadcast: 172.20.20.71
Hosts/Net: 6 Class B, Private Internet



But why is it that HostMin is not simply .64 in this case? The .64 address is a valid address, right? And whatever the answer, does the same apply to IPv6?



Perhaps slightly related: it also appears possible to use TCP port 0 and UDP port 0. Are these valid or used anywhere?


Answer



As Wesley, Chopper3, and Willy pointed out modern convention uses the first address (all zeroes host number) for the subnet and the last address (all ones host number) as the broadcast address.



For historical reasons many OSes treat the first address as a broadcast. For example, pinging x.x.x.0 from OS X, Linux, and Solaris on my local (/24) network gets responses. Windows doesn't let you ping the first address by default but you might be able to enable it using the SetIPUseZeroBroadcast WMI method. I wonder if you could get away with using .0 as a host address on an all-Windows network.
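
If you want to see how your own stack behaves, a quick test looks like this (a minimal sketch; 192.168.1.0/24 is just a placeholder subnet, and results vary by OS and kernel version):

ping -b -c 2 192.168.1.255    # conventional all-ones broadcast; -b is required for broadcast pings on Linux
ping -c 2 192.168.1.0         # legacy all-zeros "broadcast"; some stacks answer as broadcast, others treat it as plain unicast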


Monday, June 3, 2019

proxy pass domain FROM default apache port 80 TO nginx on another port

I'm still learning server administration, so I hope the title is descriptive enough.



Basically I have sub.domain.com that I want to run on nginx on port 8090.




I want to leave Apache alone and have it catch all default traffic on port 80.



So I am trying a name-based virtual host that proxies to sub.domain.com:8090; nothing is working yet and I have no idea what the right syntax should be.



Any ideas? Most of what I found was about passing TO Apache FROM nginx, but I want to do the opposite.




LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

ProxyPreserveHost On
ProxyRequests Off

ServerName sub.domain.com
DocumentRoot /home/app/public
ServerAlias sub.domain.com

ProxyPass / http://appname:8090/          (also tried localhost and sub.domain.com)
ProxyPassReverse / http://appname:8090/







When I do this I get:




[warn] module proxy_module is already loaded, skipping

[warn] module proxy_http_module is already loaded, skipping

[error] (EAI 2)Name or service not known: Could not resolve host name sub.domain.com -- ignoring!





And yes, the app is working (I have it running on port 80 with another subdomain) and it works at sub.domain.com:8090.
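
For reference, the usual shape of such a vhost looks like the sketch below (a sketch only, assuming the nginx app really listens on 127.0.0.1:8090 and a CentOS-style /etc/httpd layout; adjust paths for your distribution). The "(EAI 2) Name or service not known" error typically means Apache cannot resolve the hostname used in a VirtualHost or NameVirtualHost line, which is why the sketch binds to *:80 and relies on ServerName for matching:

sudo tee /etc/httpd/conf.d/sub.domain.com.conf > /dev/null <<'EOF'
<VirtualHost *:80>
    ServerName sub.domain.com
    ProxyPreserveHost On
    ProxyRequests Off
    ProxyPass / http://127.0.0.1:8090/
    ProxyPassReverse / http://127.0.0.1:8090/
</VirtualHost>
EOF

sudo apachectl configtest && sudo apachectl graceful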

iis 7 - What are good load testing tools for IIS 7 web applications?





The title says it all. I'm looking for a good set of tools that I can use to load test a web application on IIS 7 before deployment.


Answer



There are a couple of good tools available:



Not free, but excellent if you are doing this professionally, is Visual Studio Team System Test Load Agent. MSDN covers how to set it up and run it under "Controllers, Agents, and Rigs", and a trial download is available from Microsoft.





As far as free tools...




Web Capacity Analysis Tool (WCAT):



Overview: Web Capacity Analysis Tool (WCAT) is a lightweight HTTP load generation tool primarily designed to measure the performance of a web server within a controlled environment. WCAT can simulate thousands of concurrent users making requests to a single web site or multiple web sites. The WCAT engine uses a simple script to define the set of HTTP requests to be played back to the web server. Extensibility is provided through plug-in DLLs and a standard, simple API.



Features:




  • HTTP 1.0 and HTTP 1.1 capable

  • Supports IPv6

  • Multithreaded

  • Supports generating stress from multiple machines

  • Extensible through C plug-in DLLs

  • Supports Performance Counter integration

  • Measures throughput and response time

  • Supports SSL requests

  • NTLM Authentication request support

  • Easily supports testing thousands of concurrent users



Download the x86 version here, and the x64 version here.


linux - High memory consumption on my VPS








I have a 12 GB RAM VPS where I host a couple of static websites, 2 small Magento-powered stores and a couple of WordPress installations; overall nothing exciting and generally low traffic.



I've noticed, though, that my memory consumption is quite high; kindly have a look at the output of free below:



             total       used       free     shared    buffers     cached
Mem:      12306384   12137728     168656          0     753360    8629744
-/+ buffers/cache:    2754624    9551760
Swap:      1048564        104    1048460



Also see a screenshot from Munin: http://s13.postimage.org/q2xewgnef/Screenshot_4.jpg



Now I reckon that 9.5 GB of that is buffer/cache related; however, I find this quite high. Is this something I have to worry about, or will it eventually be freed? (I've read something about this somewhere, but I am certainly not an expert.)
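
One way to confirm that this is reclaimable page cache rather than memory held by applications is the following (a sketch; dropping caches is safe but temporarily slows disk access while the cache refills):

free -m                                     # the figure that matters is the "-/+ buffers/cache" used column
sync                                        # flush dirty pages first
echo 3 | sudo tee /proc/sys/vm/drop_caches  # ask the kernel to drop page cache, dentries and inodes
free -m                                     # "cached" should shrink and "free" grow accordingly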



Another note on the side: Memcached was installed at one point and applied to one of my Magento installations; could it be related?



Some expert advice would be truly appreciated.

ssl - Route 53 Naked/Root Domain Alias Record



Route 53 supports Alias records which use Amazon S3 static websites to dynamically resolve naked domains to their www counterparts using a 301 redirect. I am wondering whether the Alias record will support SSL:



http://example.com  -> http://www.example.com  (this will work)
https://example.com -> https://www.example.com (will this work?)



I realize that SSL doesn't have anything to do with DNS, but Route 53's implementation of the Alias record (using an S3 static website) concerns me.



It seems like dnsimple's ALIAS record does support SSL: http://support.dnsimple.com/articles/domain-apex-heroku/




If indeed Route 53 does not support SSL and dnsimple does, how does dnsimple's implementation of the ALIAS record differ?


Answer



Because you will configure the S3 bucket to send a 301 redirect to www.example.com if you follow Amazon's directions, you will wind up with SSL certificate warnings if someone uses the non-www form. As far as I can tell, Amazon provides no way for you to provide your SSL certificate in this circumstance.



DNSimple has a different implementation which, instead of sending a 301 redirect, sends visitors directly to the IP address of the Heroku app (which, presumably, they look up dynamically). This works as long as Heroku is expecting it.
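
The difference is easy to see from the command line (a sketch; example.com stands in for the apex domain in question):

curl -sI http://example.com/ | head -n 3     # the S3 website endpoint answers with a 301 and Location: http://www.example.com/
curl -vI https://example.com/                # this fails (connection error or certificate warning): the S3 website endpoint cannot serve your certificate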


Sunday, June 2, 2019

apache 2.2 - mod_deflate - Optimal configuration for most browsers

I was wondering if someone here could help me determine the optimal standard configuration for using mod_deflate with Apache. Basically, mod_deflate recommends using the following configuration for getting started right away:




Compress only a few types




AddOutputFilterByType DEFLATE text/html text/plain text/xml
http://httpd.apache.org/docs/2.0/mod/mod_deflate.html




However, just from reading the doc, you can customize this for all the browsers out there. In addition, you can customize mod_deflate for all different kinds of mime types. I was wondering if anybody out there has experimented with these settings and found a setting which is optimal for all browsers.



Another example Apache provides, but mention not to use if you don't understand all the config options:




# Insert filter

SetOutputFilter DEFLATE

# Netscape 4.x has some problems...
BrowserMatch ^Mozilla/4 gzip-only-text/html

# Netscape 4.06-4.08 have some more problems
BrowserMatch ^Mozilla/4\.0[678] no-gzip

# MSIE masquerades as Netscape, but it is fine
# BrowserMatch \bMSIE !no-gzip !gzip-only-text/html


# NOTE: Due to a bug in mod_setenvif up to Apache 2.0.48
# the above regex won't work. You can use the following
# workaround to get the desired effect:
BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html

# Don't compress images
SetEnvIfNoCase Request_URI \
\.(?:gif|jpe?g|png)$ no-gzip dont-vary


# Make sure proxies don't deliver the wrong content
Header append Vary User-Agent env=!dont-vary



http://httpd.apache.org/docs/2.0/mod/mod_deflate.html



I understand most of the configuration settings, and I would like to set up something similar. I wouldn't mind avoiding compression of images and other media which is already compressed. The detail I have problems with is understanding how this interacts with all the different browsers out there: Chrome, Firefox, IE, Opera, etc. Obviously I'm not concerned with Netscape 4.x. I'm hoping somebody has tested all this already and might be able to recommend a good setting that meets these criteria.



I mean if it's just a matter of using the recommended setting in the doc, I'm fine with that, but I wanted to check just to be sure.




Just to provide a few additional details: we use Apache as a front end to all of our web services, for example Confluence, git, gitweb, etc.



Tomcat and other services are proxied via Apache, so we have configurations for virtual hosts, mod_proxy with AJP, and mod_ssl.



My company doesn't have a dedicated IT team so I have to set much of this up in my spare time. I would appreciate any input you can provide.



So, just to state clearly what I'm asking: what is the optimal configuration for compressing the basic content types when serving requests from Apache to mainstream browsers? (See the sketch after the two lists below.)



My list of basic content types so far:





  • text/html

  • text/plain

  • text/xml

  • text/x-js

  • text/javascript

  • text/css

  • application/xml

  • application/xhtml+xml


  • application/x-javascript

  • application/javascript

  • application/json



Types which obviously don't need to be compressed:




  • images - gif, jpg, png

  • archives - exe, gz, zip, sit, rar
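
Pulling the two lists above together, a conservative starting configuration could look like the sketch below (an untested example rather than a browser-proven answer; it assumes a conf.d-style layout and that mod_deflate, mod_setenvif and mod_headers are loaded):

sudo tee /etc/httpd/conf.d/deflate.conf > /dev/null <<'EOF'
<IfModule mod_deflate.c>
    # Compress the text-based types listed above
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
    AddOutputFilterByType DEFLATE text/javascript text/x-js
    AddOutputFilterByType DEFLATE application/xml application/xhtml+xml
    AddOutputFilterByType DEFLATE application/javascript application/x-javascript application/json

    # Skip content that is already compressed
    SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png|ico|exe|gz|zip|sit|rar)$ no-gzip dont-vary

    # Keep proxies from caching a compressed response for clients that cannot handle it
    Header append Vary Accept-Encoding env=!dont-vary
</IfModule>
EOF

sudo apachectl configtest && sudo apachectl graceful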


Saturday, June 1, 2019

apache 2.2 - How to redirect http to https for IP and domain on Apache2

I have an Apache2 web server running on Ubuntu 12.04. The domain of the website that I am hosting works fine, and if you go to the http version instead of the https version, it automatically redirects to the https version of the domain.



However, if I go to the http version of the ip address of the domain, it does not automatically redirect to the https version of the ip address.




There is nothing in the .htaccess or httpd.conf file.



I have tried adding a redirect for the IP specifically in the .htaccess file, but it does not work properly, and when I visit the domain it says there are too many redirects.



I have a default-ssl file and a website file in the sites-enabled folder. The default-ssl file has the virtual host config for port 443 and has the ServerName set.



The website file has the virtual host config for port 80, but no ServerName.



Both files have the same DocumentRoot




How do I enable the HTTPS redirect for the IP address as well, not only the domain?
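
One common approach is to make the port-80 vhost a pure catch-all redirect, so whatever Host the client used (the domain or the bare IP) gets bounced to HTTPS. Below is a sketch for Ubuntu/Apache 2.2, assuming mod_rewrite is available; the file name redirect-http is hypothetical, and note that redirecting to https on the bare IP will still produce a certificate-name warning, so redirect to the domain instead if that matters:

sudo tee /etc/apache2/sites-available/redirect-http > /dev/null <<'EOF'
<VirtualHost *:80>
    # No ServerName on purpose: as the only (or first) port-80 vhost this also
    # catches requests addressed to the bare IP
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>
EOF

sudo a2enmod rewrite
sudo a2ensite redirect-http                 # and disable or empty out the old port-80 vhost
sudo apache2ctl configtest && sudo service apache2 reload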

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...