Sunday, January 31, 2016

Administrator User on Windows 2008 R2 server gets "access denied" messages for starting/stopping services



I logged into a Windows 2008 R2 server as a domain user that is part of the Administrators group on the target computer. Executing the following command, I get "access is denied" errors:




$> sc stop ServiceName
[SC] OpenService FAILED 5:
Access is denied.


What is strange is that, as the very same user, I can open up the Services GUI (Administrative Tools > Services) and start/stop the very same service no problem. This appears to happen for all services I try to start/stop, and it happens as any "Administrative" user on this computer (with the exception of the local admin user, whose creds I don't have to test with). The command line fails, but the GUI works.



I also know the spelling of the service name is correct, because if I alter it to something else, I get a different error ("The specified service does not exist..."). I also notice that changing the casing of the service name (ServiceName vs SERVICENAME) gives access denied errors either way.




I get similar access denied messages when using "net start ServiceName" instead of the sc command.



Any idea what is going on here? I need this to work for scripting purposes. The same scripts work fine on a Windows 2003 server.


Answer



Looks like you have discovered why lots of people hate User Account Control.



You should right-click on the Command Prompt icon and select "Run As Administrator"; that will allow you to actually make use of your admin rights.
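If you need this from a script rather than interactively, one option (a sketch; it still triggers a UAC prompt) is to launch the elevated prompt from PowerShell:

powershell -Command "Start-Process cmd.exe -Verb RunAs"

From the elevated prompt, sc stop ServiceName works as expected. For fully unattended scripts, running them from a Task Scheduler task with "Run with highest privileges" avoids the prompt entirely.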


linux - MySQL server sporadically runs rampage

I'm faced with a baffling problem where mysqld sporadically gets into an awkward state of very high CPU usage (several hundred threads, all fighting for CPU time) and rapid memory accumulation (not to say leaking). Once this starts happening, the server gets very slow at fulfilling queries, often slow enough to trigger timeouts in the applications waiting on it.



Eventually it accumulates enough memory to start triggering the Linux kernel OOM killer, by which point the entire system is already so slow from constant swapping that a hard reset is needed.
Setting tighter ulimit values for the mysql user's memory allocation has fixed that specific symptom, but the underlying problem remains.



Once the MySQL server is in this state, it takes several minutes after the shutdown command to actually stop, which makes restarting it nerve-racking too; I still prefer to shut it down cleanly to avoid breaking replication.



The slow query log has not proved helpful, because once the server is in this state every query gets logged as slow, making it difficult to pick out any actual offenders.



Upgrading the mysql server to version 5.6.43-1debian8 has reduced the problem from occurring every few days to every couple of weeks.




I have not yet been able to capture SHOW FULL PROCESSLIST; output during an ongoing incident.
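A small watchdog could capture that automatically next time; a minimal sketch (threshold, interval, and paths are arbitrary, and it assumes client credentials in ~/.my.cnf):

#!/bin/bash
# Snapshot the MySQL process list whenever the 1-minute load average
# exceeds a threshold; all values here are illustrative.
THRESHOLD=20
while sleep 30; do
    load=$(cut -d' ' -f1 /proc/loadavg)
    if [ "${load%.*}" -ge "$THRESHOLD" ]; then
        mysql -e 'SHOW FULL PROCESSLIST\G' > "/var/log/mysql/processlist-$(date +%s).log"
    fi
done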



The machine this is running on is a 12-core / 24-thread box with 64 GiB of memory (most of it available to MySQL) and 32 GiB of HDD swap.



Here's my entire my.cnf after applying suggestions from here (it's not yet certain whether these changes have fixed the problem; only time will tell).



#
# The MySQL database server configuration file.
#

# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html


# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
# default-character-set=latin1


# Here are entries for some specific programs
# The following values assume you have at least 32M ram

# This was formerly known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
log_error = /var/log/mysql/mysql_error.log
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]

#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
#basedir = /usr/local
#ledir = /usr/local/bin
#datadir = /var/lib/mysql

#datadir = /ssd/mysql
datadir = /mnt/vg0_lv0/mysql/
tmpdir = /tmp
#language = /usr/local/share/mysql/english
#open-files-limit = 32000
sql_mode = NO_ENGINE_SUBSTITUTION

character-set-server=latin1
collation-server=latin1_swedish_ci


net_read_timeout = 120
skip-external-locking
skip-name-resolve
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = ::
#
# * Fine Tuning
#

# mysqld might use { key_buffer + (read_buffer_size + read_rnd_buffer_size + sort_buffer_size + join_buffer_size) * max_threads } bytes of memory
#max_threads = 2048 # probably bound to max_connections
key_buffer = 1536M # global limit
key_buffer_size = 1536M # canonical name; key_buffer above is a deprecated alias, setting both is harmless

# as said above, these limits are per connection and NOT global
read_buffer_size = 16M
read_rnd_buffer_size = 16M
sort_buffer_size = 16M
myisam_sort_buffer_size = 16M

join_buffer_size = 20M

max_allowed_packet = 128M
preload_buffer_size = 384K
tmp_table_size = 192M
max_heap_table_size = 192M
thread_stack = 256K
thread_cache_size = 24

# This replaces the startup script and checks MyISAM tables if needed

# the first time they are touched
myisam-recover = BACKUP
max_connections = 500
table_open_cache = 3200
table_open_cache_instances = 2 # approx. half of cores used routinely
#table_definition_cache # automatically set to min(400+(table_open_cache/2),2000)
#thread_concurrency = 48
#
# * Query Cache Configuration
#

# Defaults: 1M, 16M
#query_cache_limit = 12M
#query_cache_size = 2048M
#query_cache_type = 1
#query_cache_min_res_unit = 32K
# Query Cache disabled due to lock contention
query_cache_size = 0
query_cache_type = 0

# query_prealloc_size = 64M


wait_timeout = 3600
interactive_timeout = 28800

#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!

#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
log_error = /var/log/mysql/mysql_error.log
#
# Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
#
# Here you can see queries with especially long duration
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 10

#log-queries-not-using-indexes = 1 # logs all queries that filter without an index, not just bad JOINs (spammy)
#log_slow_queries = /var/log/mysql/mysql-slow.log # old, invalid
#long_query_time = 2
#log-queries-not-using-indexes = 1
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
server-id = 3


log_bin = /mnt/vg0_lv0/mysql_binlog/mysql-bin.log
expire_logs_days = 31
max_binlog_size = 1073741824

#log_bin = /var/log/mysql/mysql-bin.log
#expire_logs_days = 31
#max_binlog_size = 8192M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
#slave-net-timeout = 30

#master-retry-count = 99999
#slave-skip-errors=1062,1053
#slave-skip-errors=1062,1053,1050,1146,1051,1396,1060,1054

#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
innodb_buffer_pool_size = 6144M

innodb_buffer_pool_instances = 6 # only >1 if each instance gets at least 1GiB, i_buffer_pool_size will be divided by this
innodb_log_file_size = 768M
innodb_file_per_table = on
innodb_ft_cache_size = 32M
innodb_ft_total_cache_size = 640M
innodb_additional_mem_pool_size = 32M
innodb_thread_concurrency = 0 # 0=autodetect
innodb_read_io_threads = 64 # 64=max
innodb_write_io_threads = 64
innodb_log_buffer_size = 8M


# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem

# ssl-key=/etc/mysql/server-key.pem



[mysqldump]
quick
quote-names
max_allowed_packet = 32M

[mysql]

#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer = 16M

#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/



A mysqltuner report after suggestions from here (comments added where snipped):



 >>  MySQLTuner 1.7.13 - Major Hayden 
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering

[--] Skipped version check for MySQLTuner script
[OK] Logged in using credentials passed on the command line

[OK] Currently running supported MySQL version 5.6.44-log
[OK] Operating on 64-bit architecture

-------- Log file Recommendations ------------------------------------------------------------------
[--] Log file: /var/log/mysql/mysql_error.log(2M)
[OK] Log file /var/log/mysql/mysql_error.log exists
[OK] Log file /var/log/mysql/mysql_error.log is readable.
[OK] Log file /var/log/mysql/mysql_error.log is not empty
[OK] Log file /var/log/mysql/mysql_error.log is smaller than 32 Mb
[!!] /var/log/mysql/mysql_error.log contains 2285 warning(s).

- Almost all of these are replication warnings about RAND()
[!!] /var/log/mysql/mysql_error.log contains 27 error(s).
- Almost all of these are "Logging to '/var/log/mysql/mysql_error.log'." and similar.
[--] 18 start(s) detected in /var/log/mysql/mysql_error.log
[--] 1) 2019-05-09 20:27:44 34546 [Note] /usr/sbin/mysqld: ready for connections.
[--] 2) 2019-05-09 20:23:44 8281 [Note] /usr/sbin/mysqld: ready for connections.
[--] 3) 2019-05-09 20:22:38 39057 [Note] /usr/sbin/mysqld: ready for connections.
[--] 4) 2019-05-09 20:01:56 43432 [Note] /usr/sbin/mysqld: ready for connections.
[--] 5) 2019-05-09 19:55:54 22899 [Note] /usr/sbin/mysqld: ready for connections.
[--] 6) 2019-05-09 19:48:51 33825 [Note] /usr/sbin/mysqld: ready for connections.

[--] 7) 2019-05-09 19:26:38 20046 [Note] /usr/sbin/mysqld: ready for connections.
[--] 8) 2019-05-01 14:28:48 1915 [Note] /usr/sbin/mysqld: ready for connections.
[--] 9) 2019-04-15 19:50:40 9309 [Note] /usr/sbin/mysqld: ready for connections.
[--] 10) 2019-04-15 19:40:28 43854 [Note] /usr/sbin/mysqld: ready for connections.
[--] 16 shutdown(s) detected in /var/log/mysql/mysql_error.log
[--] 1) 2019-05-09 20:27:20 8281 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 2) 2019-05-09 20:23:20 39057 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 3) 2019-05-09 20:22:12 43432 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 4) 2019-05-09 20:01:32 22899 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 5) 2019-05-09 19:55:29 33825 [Note] /usr/sbin/mysqld: Shutdown complete

[--] 6) 2019-05-09 19:48:21 20046 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 7) 2019-05-09 19:26:06 1915 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 8) 2019-05-01 14:28:01 9309 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 9) 2019-04-15 19:50:16 43854 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 10) 2019-04-15 19:40:02 7435 [Note] /usr/sbin/mysqld: Shutdown complete

-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MEMORY +MRG_MYISAM +MyISAM +PERFORMANCE_SCHEMA
[--] Data in InnoDB tables: 25.4G (Tables: 9193)
[--] Data in MyISAM tables: 6.1G (Tables: 4007)

[OK] Total fragmented tables: 0

-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.

-------- Security Recommendations ------------------------------------------------------------------
[OK] There are no anonymous accounts for any database users
[OK] All database users have passwords assigned
[!!] User '*****@%' has user name as password.

- A few more like these, removed for privacy.
[!!] User '*****@%' does not specify hostname restrictions.
- Several dozen more like these, removed for privacy. Restrictions are implemented via iptables.
[--] There are 618 basic passwords in the list.
[!!] User '*****@localhost' is using weak password: 123456 in a lower, upper or capitalize derivative version.
[!!] User '*****@%' is using weak password: test in a lower, upper or capitalize derivative version.

-------- CVE Security Recommendations --------------------------------------------------------------
[OK] NO SECURITY CVE FOUND FOR YOUR VERSION


-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 2d 21h 36m 2s (112M q [450.481 qps], 1M conn, TX: 225G, RX: 21G)
[--] Reads / Writes: 95% / 5%
[--] Binary logging is enabled (GTID MODE: OFF)
[--] Physical Memory : 63.0G
[--] Max MySQL memory : 41.5G
[--] Other process memory: 4.9G
[--] Total buffers: 7.7G global + 68.2M per thread (500 max threads)
[--] P_S Max memory usage: 487M
[--] Galera GCache Max memory usage: 0B

[OK] Maximum reached memory usage: 11.6G (18.42% of installed RAM)
[OK] Maximum possible memory usage: 41.5G (65.92% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (64K/112M)
[OK] Highest usage of available connections: 10% (51/500)
[OK] Aborted connections: 0.16% (2506/1587046)
[OK] Query cache is disabled by default due to mutex contention on multiprocessor machines.
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 4M sorts)
[!!] Joins performed without indexes: 1703245
[OK] Temporary tables created on disk: 6% (998K on disk / 15M total)

[OK] Thread cache hit rate: 99% (64 created / 1M connections)
[!!] Table cache hit rate: 0% (3K open / 405K opened)
[OK] Open file limit used: 4% (1K/24K)
[OK] Table locks acquired immediately: 99% (142M immediate / 142M locks)
[OK] Binlog cache memory access: 99.99% (4902582 Memory / 4902885 Total)

-------- Performance schema ------------------------------------------------------------------------
[--] Memory used by P_S: 487.9M
[--] Sys schema isn't installed.


-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is disabled.

-------- MyISAM Metrics ----------------------------------------------------------------------------
[!!] Key buffer used: 22.7% (365M used / 1B cache)
[OK] Key buffer size / total MyISAM indexes: 1.5G/1.4G
[OK] Read Key buffer hit rate: 97.7% (1B cached / 37M reads)
[!!] Write Key buffer hit rate: 15.9% (454K cached / 72K writes)

-------- InnoDB Metrics ----------------------------------------------------------------------------

[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[!!] InnoDB buffer pool / data size: 6.0G/25.4G
[OK] Ratio InnoDB log file size / InnoDB Buffer pool size: 768.0M * 2/6.0G should be equal 25%
[OK] InnoDB buffer pool instances: 6
[--] InnoDB Buffer Pool Chunk Size not used or defined in your version
[OK] InnoDB Read buffer efficiency: 100.00% (180155088749 hits/ 180158103035 total)
[!!] InnoDB Write Log efficiency: 37.14% (922821 hits/ 2484433 total)
[OK] InnoDB log waits: 0.00% (0 waits / 1561612 writes)


-------- AriaDB Metrics ----------------------------------------------------------------------------
[--] AriaDB is disabled.

-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.

-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.


-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.

-------- Replication Metrics -----------------------------------------------------------------------
[--] Galera Synchronous replication: NO
[--] This server is acting as master for 1 server(s).
[--] Binlog format: STATEMENT
[--] XA support enabled: ON
[--] Semi synchronous replication Master: Not Activated
[--] Semi synchronous replication Slave: Not Activated

[--] No replication setup for this server or replication not started.

-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
Control warning line(s) into /var/log/mysql/mysql_error.log file
Control error line(s) into /var/log/mysql/mysql_error.log file
Set up a Secure Password for user@host ( SET PASSWORD FOR 'user'@'SpecificDNSorIp' = PASSWORD('secure_password'); )
Restrict Host for user@% to user@SpecificDNSorIp
4 user(s) used basic or weak password.
Adjust your join queries to always utilize indexes

Increase table_open_cache gradually to avoid file descriptor limits
Read this before increasing table_open_cache over 64: [snip - bit.ly not allowed]
Read this before increasing for MariaDB https://mariadb.com/kb/en/library/optimizing-table_open_cache/
This is MyISAM only table_cache scalability problem, InnoDB not affected.
See more details here: https://bugs.mysql.com/bug.php?id=49177
This bug already fixed in MySQL 5.7.9 and newer MySQL versions.
Beware that open_files_limit (24000) variable
should be greater than table_open_cache (3200)
Consider installing Sys schema from https://github.com/mysql/mysql-sys
Variables to adjust:

join_buffer_size (> 20.0M, or always use indexes with JOINs)
table_open_cache (> 3200)
innodb_buffer_pool_size (>= 25.4G) if possible.


What steps can I take to try and debug this problem while it isn't currently happening?
What steps can I take to try and fix the problem?

cloud - Digital Ocean Wordpress Server Inaccessible After Massive CPU spike

I have set up a DigitalOcean server to host a WordPress website. I set it up as the basic $10 server as I don't expect much traffic.



After setting it up I followed the DigitalOcean security tips: I added a user for myself, made it a sudoer, and disabled SSH access as root.



I had what seems to be a very common issue with WordPress on DigitalOcean: MySQL was giving an out-of-memory exception. I therefore created a 4GB swap file, as this seems to be the standard remedy, and I've not seen that error since.



A couple of days ago I did some work on the site and got it ready to release. I wrote 30 small blog posts and added a plugin called Yoast for SEO. I left the site overnight and came to it the next day on my lunch break, only to find the site was down.




After I reboot the server, the site lasts for 10 minutes or so and then crashes again. MySQL seems to be hogging a lot of RAM, but I'm not getting the database error I saw last time.



I have even upped the Server to the $20 version with double the RAM, but it makes no difference.



I have also noticed a ridiculous spike in CPU usage the night after I finished installing Yoast and writing my blogs: the site has been unstable ever since.



[graph: CPU usage spike]



Whilst the website is inaccessible I can still access the server via the web console on the DigitalOcean site.




The website's not even up long enough to get a backup of the content I've set up on WordPress. Any ideas how I can sort this out?
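For what it's worth, a backup doesn't require the site to be up; from the web console something like this (paths and database name are assumptions, adjust to your install) would grab everything:

mysqldump -u root -p wordpress > /root/wordpress-db.sql
tar czf /root/wordpress-files.tar.gz /var/www/html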

sql server - What would cause a query being ran from SSMS on local box to run slower then from remote box




When I run a simple query such as "Select Column1, Column2 from Table A" from within SSMS running on my production SQL Server, the results take extremely long to return (>45 min). If I run the same query from my dev system's SSMS connecting to the production SQL Server, the results return within a few seconds (<60 sec).



One thing I have noticed is that if the system was just rebooted, performance is good for a bit. It is hard to pin down a time frame: I have had it start running slow very quickly after a reboot, but at most it performed well for 20 minutes before acting up again. Also, just restarting the SQL service does not resolve the issue or provide a temporary performance boost.



Specs for Server are:
Windows Server 2003, Enterprise Edition, SP2
4 X Intel Xeon 3.6GHz - 6GB System Memory
Active/Active Cluster
SQL Server 2005 SP2 (9.0.3239)



Answer



Have you compared the execution plans from both servers? Have you tried querying your production server locally, when the results slow down? Have you checked to see if you have any blocking, or resource waits on your production server?


Saturday, January 30, 2016

php - Apache's "OPTIONS * HTTP/1.0" running 100% CPU - Runaway httpd process

I have a recurring issue where an httpd process will randomly start running at 100% CPU. Often other httpd processes join in, and this continues until I restart Apache. Oddly, the thing it's running at 100% CPU on is "OPTIONS * HTTP/1.0". Here's the output from one of these:



9-0     38787   1/9/391 C   103.14  1323    7   0.0 0.08    4.11    ::1 www.mysite.com  OPTIONS * HTTP/1.0


CPU is at 103.14% and it's been 1323 seconds since its last request. It's also stuck in the 'C' (closing connection) state.



Here's another case where other processes join in running 100% CPU:




0-0     12792   0/33/64 W   95.73   1097    0   0.0 0.10    0.39    66.68.237.216   www.mysite.com      POST /page_a.php HTTP/1.1
9-0 12795 1/6/15 C 94.42 1174 0 0.0 0.03 0.07 ::1 www.myserver.com OPTIONS * HTTP/1.0
19-0 12986 0/4/41 W 95.67 1011 0 0.0 0.03 0.24 81.237.216.111 www.mysite.com POST /page_b.php HTTP/1.1
20-0 12720 0/10/10 W 94.32 1220 0 0.0 0.03 0.03 187.184.103.218 www.mysite.com POST /page_a.php HTTP/1.1


My setup: this is on OS X Lion (10.7.4). I'm running Apache 2 with PHP 5.3. Server version: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8r DAV/2 PHP/5.3.10 with Suhosin-Patch



Some potentially relevant httpd.conf settings:




MaxRequestsPerChild 100000
Timeout 300
KeepAlive On
KeepAliveTimeout 8
MaxKeepAliveRequests 100


When I sample the runaway processes, I see each of them show _spin_lock and kevent like so:




   Sort by top of stack, same collapsed (when >= 5):
_spin_lock$VARIANT$mp (in libsystem_c.dylib) 4358
kevent (in libsystem_kernel.dylib) 2179


Here is some of the non-repeating Apple code I see:



Call graph:
2413 Thread_1080146 DispatchQueue_1: com.apple.main-thread (serial)
+ 2413 start (in httpd) + 52 [0x10f364794]

+ 2413 main (in httpd) + 4045 [0x10f37048d]
+ 2413 ap_mpm_run (in httpd) + 1740 [0x10f3aaabc]
+ 2413 perform_idle_server_maintenance (in httpd) + 703 [0x10f3aa38f]
+ 2413 make_child (in httpd) + 435 [0x10f3aa003]
+ 2413 child_main (in httpd) + 1831 [0x10f3a9e47]
+ 2413 clean_child_exit (in httpd) + 49 [0x10f3a8d31]
+ 2413 apr_pool_destroy (in libapr-1.0.dylib) + 52 [0x10f45761b]
+ 2413 ??? (in libapr-1.0.dylib) load address 0x10f44b000 + 0xb7b4 [0x10f4567b4]
+ 2413 php_apache_child_shutdown (in libphp5.so) + 17 [0x10fa3842f]
+ 2413 php_module_shutdown_wrapper (in libphp5.so) + 9 [0x10f97021d]

+ 2413 php_module_shutdown (in libphp5.so) + 35 [0x10f970167]
+ 2413 zend_shutdown (in libphp5.so) + 57 [0x10f9c0692]
+ 2413 zend_hash_destroy (in libphp5.so) + 53 [0x10f9cb19a]
+ 2413 destroy_op_array (in libphp5.so) + 271 [0x10f9b91c0]
+ 2413 _efree (in libphp5.so) + 52 [0x10f9a6312]
+ 2413 _zend_mm_free_canary_int (in libphp5.so) + 473 [0x10f9db899]
+ 2413 free (in libsystem_c.dylib) + 71 [0x7fff8f86170e]
+ 2413 szone_size_try_large (in libsystem_c.dylib) + 37 [0x7fff8f8240f9]
+ 2413 _spin_lock$VARIANT$mp (in libsystem_c.dylib) + 30,25,... [0x7fff8f86336e,0x7fff8f863369,...]
2413 Thread_1080153 DispatchQueue_2: com.apple.libdispatch-manager (serial)

2413 _dispatch_mgr_thread (in libdispatch.dylib) + 54 [0x7fff908fc31a]
2413 _dispatch_mgr_invoke (in libdispatch.dylib) + 923 [0x7fff908fd78a]
2413 kevent (in libsystem_kernel.dylib) + 10 [0x7fff8ed047e6]


And then stuff like this:



       0x10f4ce000 -        0x10f4d0ff7  mod_reqtimeout.so (??? - ???) <035F872B-8196-3CCE-A4D0-AA8D5C1550EC> /usr/libexec/apache2/mod_reqtimeout.so
0x10f4d4000 - 0x10f4d8ff7 mod_ext_filter.so (??? - ???) /usr/libexec/apache2/mod_ext_filter.so
0x10f4dd000 - 0x10f4eaff7 mod_include.so (??? - ???) <70E541B9-A864-3FE1-AB85-EBF632FFD376> /usr/libexec/apache2/mod_include.so

0x10f4ef000 - 0x10f4f2ff7 mod_filter.so (??? - ???) <2093EE45-E335-3B36-A6BA-6EA4EB7E483C> /usr/libexec/apache2/mod_filter.so
0x10f4f6000 - 0x10f4f8ff7 mod_substitute.so (??? - ???) <9ED1AB37-EE13-39DC-AB97-98A2B39555B0> /usr/libexec/apache2/mod_substitute.so
0x10f4fc000 - 0x10f501ff7 mod_deflate.so (??? - ???) /usr/libexec/apache2/mod_deflate.so
0x10f506000 - 0x10f50bfff mod_log_config.so (??? - ???) <61EA3051-8D4A-3A00-B7BC-C68E18CA9479> /usr/libexec/apache2/mod_log_config.so
0x10f511000 - 0x10f512fef mod_log_forensic.so (??? - ???) <06654BB4-CA2A-3D70-B759-12191119E5C7> /usr/libexec/apache2/mod_log_forensic.so
0x10f516000 - 0x10f516ff7 mod_logio.so (??? - ???) /usr/libexec/apache2/mod_logio.so
0x10f51a000 - 0x10f51aff7 mod_env.so (??? - ???) /usr/libexec/apache2/mod_env.so
0x10f51e000 - 0x10f524fff mod_mime_magic.so (??? - ???) <1737F398-6315-31B4-B8B4-57F590F07268> /usr/libexec/apache2/mod_mime_magic.so
0x10f529000 - 0x10f52aff7 mod_cern_meta.so (??? - ???) /usr/libexec/apache2/mod_cern_meta.so
0x10f52e000 - 0x10f52ffff mod_expires.so (??? - ???) /usr/libexec/apache2/mod_expires.so

0x10f533000 - 0x10f536ff7 mod_headers.so (??? - ???) <5701D330-D777-3AAF-AEEF-F02D067F851E> /usr/libexec/apache2/mod_headers.so
0x10f53b000 - 0x10f53cfff mod_ident.so (??? - ???) <5FDFBB79-3A0C-3439-BF71-74E6A3A4B7AC> /usr/libexec/apache2/mod_ident.so
0x10f540000 - 0x10f542ff7 mod_usertrack.so (??? - ???) <09F36BB5-4F8D-339B-AA52-5FA23A946837> /usr/libexec/apache2/mod_usertrack.so
0x10f546000 - 0x10f547fff mod_setenvif.so (??? - ???) /usr/libexec/apache2/mod_setenvif.so
0x10f54b000 - 0x10f54cff7 mod_version.so (??? - ???) <4BF2E21C-E452-340E-A6B4-196DBD731CA8> /usr/libexec/apache2/mod_version.so
0x10f550000 - 0x10f566fff mod_proxy.so (??? - ???) <0169F3B2-A81A-3E23-92FE-8E9B92C38795> /usr/libexec/apache2/mod_proxy.so
0x10f56e000 - 0x10f576ff7 mod_proxy_http.so (??? - ???) /usr/libexec/apache2/mod_proxy_http.so
0x10f57c000 - 0x10f57efff mod_proxy_scgi.so (??? - ???) /usr/libexec/apache2/mod_proxy_scgi.so
0x10f583000 - 0x10f589ff7 mod_proxy_balancer.so (??? - ???) /usr/libexec/apache2/mod_proxy_balancer.so



I can post more details as requested.

linux - mysql - max_connections and max_user_connections



I have a database that two users connect to; both are applications. One of the applications keeps logging "Too many connections.." from MySQL, so I increased max_connections to a higher value. After one day, connections used hit the limit + 1, so I increased it a second time. After one day, connections used were again at limit + 1. So I tried setting max_connections to 220 and forcing both applications to split the connections by setting max_user_connections to 100. But again: used connections is 221 now.
The processlist shows just the two users making connections.



How can I setup mysql to reserve connections for one user?



I cannot use the 'per hour' limit parameters; it is important that both systems can connect to the database at all times.
I am using SUSE 11 with MySQL 5.5.




Regards, Wyphorn


Answer



As per comments by Xaqron and Lou, the application is misbehaving. If you can't change that, adjust the MySQL wait_timeout parameter (in /etc/my.cnf or wherever your MySQL configuration file is stored) to a value (in seconds) that allows the application to operate whilst killing idle connections after an appropriate period. You'll need to determine what that value might be, but if you can experiment a little, you might want to start at 60 seconds and monitor the application behaviour. (You'll need to restart MySQL after making the change there.)



You can make the change interactively via a MySQL administrative connection by running set global wait_timeout=60 (or whatever value you want), which takes effect immediately for new sessions; existing sessions keep the wait_timeout value that was in place when they started.
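A minimal sketch of both approaches (the account name and limits are placeholders):

# /etc/my.cnf
[mysqld]
wait_timeout = 60

-- or interactively, affecting new sessions only:
SET GLOBAL wait_timeout = 60;

-- MySQL can also cap connections per account, which is one way to
-- keep a single application from exhausting the pool:
GRANT USAGE ON *.* TO 'app1'@'%' WITH MAX_USER_CONNECTIONS 100;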


Friday, January 29, 2016

security - Apache: disable directory listings



I'm using Apache 2.2.



In the /var/www directory I've created a .htaccess file that contains this:



Options -Indexes


When I visit my site and try to list the directories and files like this:




www.myDomainName.com/static



I get:



Forbidden

You don't have permission to access /static/ on this server



GREAT!



But when I use the IP address of my site like this:



www.ipOfMyDomainName.com/static



I get:



Index of /static/



and I can see the whole directory structure and all the files.



How can I solve this, so that nobody can see my files and directories?






UPDATE: So, I'm using a virtual host, and I had to delete "Indexes" from the file named "default" in the sites-available directory



now it contains this:





<Directory /var/www/>
        Options FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
</Directory>


Answer



Try removing "Indexes" in this line in your httpd.conf. If that doesn't work, try removing "All" too.




Options All Indexes FollowSymLinks MultiViews
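For example, with both "Indexes" and "All" removed (note that "All" itself implies Indexes), the line would become:

Options FollowSymLinks MultiViews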

Bash Script Won't Run in Cron



Here's what I have in my crontab:



*  *  *  *  * /bin/bash /home/user_name/script.sh



Here's what's in the file:



#!/bin/bash

cd /var/www/sites/site1
sudo svn update *

cd /var/www/sites/site2
sudo svn update *



The script is set to +x.



Any ideas on why it won't run in cron? It runs fine when I run it manually.


Answer



Any reason you have /bin/bash in the cron invocation? The #!/bin/bash in the script itself should do the same thing. Also make sure that the script is executable (chmod +x / chmod 755). Verify that you want to run the program under your account; otherwise specify the user with sudo -u USERNAME. Also check that your account (or the account you want it to run under) has the NOPASSWD option set in /etc/sudoers (more info here: http://www.gratisoft.us/sudo/sudoers.man.html).
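For example (username and paths are placeholders), either move the job into root's crontab so sudo isn't needed:

# sudo crontab -e
* * * * * /home/user_name/script.sh

or grant passwordless sudo for just the svn binary via visudo:

user_name ALL=(ALL) NOPASSWD: /usr/bin/svn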


packages - Can't install PHP Intl on CentOS 6.5

I'm trying to install this package on my server running PHP 5.4, with no success, even though it only took a few seconds on my PC running openSUSE 13.2. There the package was named "php5-intl-5.6.1-18.1.x86_64", but running "yum search" on the server doesn't return anything containing both "php" and "intl".



I installed and enabled the "remi" and "remiphp55" repositories, but that didn't help either. I still can't find the package.




Is there any way I can install the package?

Thursday, January 28, 2016

ssh - Allow password access for all users except root?



I want to leave the root user enabled on my servers for convenience, and the only reason people are against the idea (that I know of) is brute-force SSH attacks.



So, is there a way in SSH to enable password access for all users except root, but allow ssh-key access for root?



OS: Ubuntu Server Edition 10.04 x86



SSH Version: OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009


Answer




From the sshd_config(5) man page:




PermitRootLogin
...

If this option is set to “without-password”, password authentication is disabled for root.
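A minimal sshd_config sketch of that setup (reload sshd afterwards):

PermitRootLogin without-password
PasswordAuthentication yes
PubkeyAuthentication yes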



ubuntu - My linux server was hacked. How do I find out how and when it was done?



I have a home server running a desktop ubuntu distribution. I found this in my crontab



* * * * * /home/username/ /.access.log/y2kupdate >/dev/null 2>&1



and when looking in that directory (the space after username/ is part of the directory name) I found a lot of scripts which are obviously doing something they shouldn't.



Before I wipe that computer and reinstall stuff, I would like to find out what caused the security breach and when it was done. So I don't open the same hole again.



What log files should I look in? The only servers that I am aware of that are running on the computer is sshd and lighttpd.



What should I do to detect if things like this happens again?


Answer



First, make sure the computer is disconnected from any networks.
Second, make sure you get any important data off the drives before booting the hacked OS again.




Start by checking the time stamps on the files in question; often they are accurate.
Cross-reference those with the httpd log and the auth log, if they weren't wiped. If one or the other was wiped, you can bet that was the means of entry. If they're still intact, you might be able to glean more information from the logs about how the attacker got in.
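A few commands that help with this (log locations assume a stock Debian/Ubuntu layout):

ls -la --time=ctime '/home/username/ /.access.log/'   # change times of the dropped files
last -f /var/log/wtmp                                 # recent logins
grep 'Accepted' /var/log/auth.log*                    # successful SSH authentications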



If they're all wiped, you're pretty screwed. It would likely take more time to figure out what happened than it's worth.



You mentioned those two services were running, was there a good firewall in place to prevent everything else from being accessed? Did you allow SSH on port 22; is your login reasonably easy to guess; did you allow password logins; did you have any sort of real rate limiting for password logins? Did you have any additional software installed with lighttpd; perl; php; cgi; a CMS or similar? Were you running updated version of all the software; do you subscribe to security notifications for all the software you run and carefully evaluate all notifications to see if they apply to software you run/expose to the public?


domain name system - Dynamic DNS updates for Linux and Mac OS X machines with a Windows DNS server

My network has a Windows machine running Server 2008 R2 which provides DHCP and DNS. I'm not particularly familiar with Windows domains, but the domain is set to home.local and that is the DNS domain name provided with DHCP leases.



Everything works fine for Windows machines, they get the lease and update the server with their hostname and the server creates a DNS records for windowshostname.home.local.



I am having problems obtaining the same functionality on Linux (Debian) and Mac OS X (Mountain Lion) machines. They receive DHCP just fine, but DNS entries are not being created on the server for them.



On the Mac OS X machine, hostname outputs machostname.local, and on the Linux machine hostname --fqdn also outputs linuxhostname.local. I'm assuming that the server is not creating DNS entries because these domains do not match the server's (home.local).



I don't want to statically configure these machines to be part of the home.local domain; I just want them to pick it up from DHCP and have entries created in the DNS server. How should I go about doing this?
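For what it's worth, the standard DHCP-side mechanism on the Debian box is to have dhclient send an FQDN and ask the server to register it; a sketch (the hostname is a placeholder, and the Mac equivalent differs):

# /etc/dhcp/dhclient.conf
send fqdn.fqdn "linuxhostname.home.local.";
send fqdn.encoded on;
send fqdn.server-update on;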

centos - Unable to use builtin CA bundle to verify GoDaddy SHA2 SSL certificate



I ran into an interesting problem. We have a PHP script that contacts an LTL shipper (https://facts.dohrn.com/). That script has been failing because it can't validate the SSL certificate. I went to the site and found they were using a GoDaddy SHA-2 certificate (which uses the GoDaddy G2 certificate bundle, the one used for SHA-2).



I have the latest version of ca-certificates installed, and it includes "Go Daddy Root Certificate Authority - G2", but that's not the same thing, and validation fails in all forms. I was finally able to get it to work by copying the bundle and using it directly in a cURL request, but that is simply a workaround. Is there something else I'm missing that could make this work without installing the CA directly?




# openssl s_client -connect facts.dohrn.com:443
CONNECTED(00000003)
depth=0 OU = Domain Control Validated, CN = facts.dohrn.com
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 OU = Domain Control Validated, CN = facts.dohrn.com
verify error:num=27:certificate not trusted
verify return:1
depth=0 OU = Domain Control Validated, CN = facts.dohrn.com
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
 0 s:/OU=Domain Control Validated/CN=facts.dohrn.com
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
---
Server certificate
[certificate removed]
-----END CERTIFICATE-----
subject=/OU=Domain Control Validated/CN=facts.dohrn.com
issuer=/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
---
No client certificate CA names sent
---
SSL handshake has read 1470 bytes and written 563 bytes
---
New, TLSv1/SSLv3, Cipher is RC4-SHA
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : RC4-SHA
    Session-ID: 1A23000017A7003411F3833970B7FA23C6D782E663CE0C8B17DE4D5A15DEE1A5
    Session-ID-ctx:
    Master-Key: F6C9C6345A09B7965AF762DE4BEFE8BDD249136BF30D9364598D78CF123F17230B0C25DD552F103BEF9A893F75EAD2B0
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    Start Time: 1432044402
    Timeout   : 300 (sec)
    Verify return code: 21 (unable to verify the first certificate)




Answer



It appears that the web server at https://facts.dohrn.com/ does not include the intermediate certificate.



This would appear to be a configuration mistake on their part. It is definitely something that can be expected to cause compatibility issues as you are really only supposed to rely on clients having the root certificates in place beforehand.



See the certificate chain, e.g. from the SSLLabs result (you'll also note that there are many other issues with their SSL setup):



1  Sent by server   facts.dohrn.com
   Fingerprint: 823e3a70f194c646498b2591069b3727ad0014d9
   RSA 2048 bits (e 65537) / SHA256withRSA

2  Extra download   Go Daddy Secure Certificate Authority - G2
   Fingerprint: 27ac9369faf25207bb2627cefaccbe4ef9c319b8
   RSA 2048 bits (e 65537) / SHA256withRSA

3  In trust store   Go Daddy Root Certificate Authority - G2 (self-signed)
   Fingerprint: 47beabc922eae80e78783462a79f45c254fde68b
   RSA 2048 bits (e 65537) / SHA256withRSA




I would say that your main options are to either try to convince the service provider to fix their service or work around the problem on your end by providing the client with the certificates that their server was expected to provide.
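As a workaround on your end, pointing the client at GoDaddy's published bundle is probably the cleanest option; a sketch (the bundle URL is GoDaddy's documented repository path, verify it before relying on it):

curl -o gd_bundle-g2.crt https://certs.godaddy.com/repository/gd_bundle-g2.crt
curl --cacert gd_bundle-g2.crt https://facts.dohrn.com/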


iis 7.5 - URL Rewrite in IIS 7.5 not working



I am trying to redirect all http traffic to https as the site requires SSL.



For example, if someone navigates to http://site.com or http://www.site.com I want to redirect the user to https://www.site.com



Right now the user gets a 403.4 Forbidden error - The page you are trying to access is secured with Secure Sockets Layer (SSL).



I've tried a number of different URL rewrite rules, but none of them seem to work. In fact nothing seems to happen differently at all, almost as if the module isn't working properly.




First, is my rule correct? And if so, what else could be preventing this from working properly?



    












Answer
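The rule markup from the original answer was lost in formatting; a typical HTTP-to-HTTPS redirect rule for the URL Rewrite module looks like this (a sketch, not necessarily the exact rule given here):

<rewrite>
  <rules>
    <rule name="HTTP to HTTPS" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTPS}" pattern="^OFF$" />
      </conditions>
      <action type="Redirect" url="https://www.site.com/{R:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>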













And the site's "Require SSL" setting needs to be off, apparently.



Not sure if the and tags are appropriate.


SVN from linux to windows

We're running a local development machine (Windows) with SVN.
Our production servers (Linux) are in another location, so we cannot connect from the production servers to the local development server to do an svn checkout.



I know it's possible.
Does anybody know how to connect from the Linux (external production) servers to the Windows (local development) server to do some svn tasks?



Many thanks in advance.

Wednesday, January 27, 2016

windows server 2003 - Left over domain controllers



In our Windows 2003 network we have two active domain controllers. I say "active" because the Active Directory Sites and Services console (Sites, Default-First-Site-Name, Servers) lists 4 servers. One of these, let's call it Server-X, has no objects associated with it and has long been powered down; two are legit domain controllers; and the final one, let's call it Server-Y, appears as a legit DC, but I am having trouble removing it.




So, Server-X must go. I was already under the impression that it had been removed... so would it be safe to delete it from the AD Sites and Services Servers list?



Server-Y must also go but I'm having trouble removing it using the dcpromo wizard. This server is actually causing issues because workstations within the domain, every now and again, try to authenticate against it and get rejected. Should I just use dcpromo /forceremoval?



Thank you


Answer



I would use /forceremoval only as a last resort; dcpromo is probably stopping you for what it considers to be a very good reason. First question: are all your Operations Masters accounted for? Active Directory won't like removing a DC that is acting in one of those single-master (FSMO) roles.
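One quick way to check (run from any DC):

netdom query fsmo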


Tuesday, January 26, 2016

amazon web services - How to specify needed VPC and subnet into AWS CloudFormation template

I am new to Amazon services and particularly to CloudFormation.



So I began with "Getting Started with CloudFormation" on the AWS site, http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/GettingStarted.html, and now I am trying to launch the example CloudFormation template https://s3.amazonaws.com/cloudformation-templates-us-east-1/WordPress_Single_Instance_With_RDS.template as the introduction describes.



But before that I had removed the default VPC and added a new one (10.0.0.0/16) with a new subnet in it, 10.0.0.0/24. According to the AWS docs I can't set my own VPC as the default, so the CloudFormation template described above can't be launched, and I see this error:




[screenshot of the CloudFormation error]



According to the AWS page https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-cloudformer-default-vpc/ I can fix this issue by describing my new VPC, but I don't know how to do this correctly.



Maybe you can help me?
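For reference, the usual fix is to parameterize the VPC and subnet so you can pass your own IDs in when launching the stack; a minimal sketch (resource names and the AMI ID are placeholders):

{
  "Parameters": {
    "VpcId":    { "Type": "AWS::EC2::VPC::Id" },
    "SubnetId": { "Type": "AWS::EC2::Subnet::Id" }
  },
  "Resources": {
    "WebSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Allow HTTP",
        "VpcId": { "Ref": "VpcId" },
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": "80", "ToPort": "80", "CidrIp": "0.0.0.0/0" }
        ]
      }
    },
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "t2.micro",
        "SubnetId": { "Ref": "SubnetId" },
        "SecurityGroupIds": [ { "Ref": "WebSecurityGroup" } ]
      }
    }
  }
}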

apt-get upgrade will run on command line but not from cron bash script



Running the following commands on the command line works fine



$sudo apt-get update 

$sudo apt-get upgrade

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
linux-libc-dev
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0B/864kB of archives.
After this operation, 0B of additional disk space will be used.
Do you want to continue [Y/n]?



But running this script using cron...



#!/bin/bash  
source /home/adm/.profile
apt-get update >> /home/adm/update_detailed.log
apt-get --yes upgrade >> /home/adm/update_detailed.log
echo "Update_successful $(date)" >> /home/adm/update.log



produces the following output:



Reading package lists...  
Building dependency tree...
Reading state information...
The following packages will be upgraded:
linux-libc-dev
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0B/864kB of archives.
After this operation, 0B of additional disk space will be used.



Why is the linux-libc-dev package not installed by the bash script when it can be installed from the command line? Note that the script is set to run as the superuser.



The script was verified against [1] and online sources.



Questions I have read on serverfault have mentioned unattended upgrades but I want to understand this issue not use an alternative.



In [1] I read that this sort of issue might be caused by environment variables, which is why I added the source /home/adm/.profile line to the script. It has not made a difference.




[1] Unix and Linux system administration handbook, 4ed, 0-13-148005-7


Answer



You aren't the first one to run into this problem and find that it's pretty much unworkable. That's why Debian has had a package for this purpose for many years now. It's named cron-apt.



Install this package, and then configure it by editing the configuration file in the /etc/cron-apt directory. The default configuration file is very well documented and all the options should be fairly well explained.



Note well, that while you can configure it to automatically update the system, you probably should not, as it will eventually break something important. Better to just set it up to email you when updates are available.
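A minimal sketch of /etc/cron-apt/config along those lines (option names as documented in the shipped default file; the address is a placeholder):

MAILTO="admin@example.com"
MAILON="upgrade"    # only send mail when there is something to upgrade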


iis - Permissions issue with virtual directory to UNC path



I have a virtual directory in my site (test environment). It is a UNC share which is also used as a public FTP.



It is configured to connect as a domain admin account and "Test settings" says everything appears to be working. However when I try to connect to it I get:





500 - "Failed to start monitoring
changes on
\INTRANET\FTP\test\web.config because
access was denied"




This is an ASP.NET YSOD. I am not sure why ASP.NET is getting involved at all as it's a static .jpg file I'm requesting.



I tried turning on failed request tracing and this is the specific error:





  • ModuleName WindowsAuthentication

  • Notification 2

  • HttpStatus 500

  • HttpReason Internal Server Error

  • HttpSubStatus 0

  • ErrorCode 0

  • ConfigExceptionInfo

  • Notification AUTHENTICATE_REQUEST


  • ErrorCode The operation completed successfully. (0x0)



If I change the "Physical Path Logon Type" from ClearText to Network. I get the following IIS error:




HTTP Error 500.19 - Internal Server Error

The requested page cannot be accessed because the related configuration data for the page is invalid.



Detailed Error Information




  • Module IIS Web Core

  • Notification BeginRequest

  • Handler Not yet determined

  • Error Code 0x80070005


  • Config Error Cannot read configuration
    file due to insufficient permissions

  • Config File \\?\UNC\INTRANET\FTP\test\web.config

  • Requested URL
    http://test.mydowmain.com:80/uploads/images/ca49acf6-6174-412e-8abd-59fab983e931.jpg


  • Physical Path
    \\INTRANET\FTP\test\images\ca49acf6-6174-412e-8abd-59fab983e931.jpg


  • Logon Method Not yet determined


  • Logon User Not yet determined

  • Failed Request Tracing Log Directory C:\inetpub\logs\FailedReqLogFiles





Strangely enough, this does not generate a failed request log, even though I have set failed request tracing to trace errors with status codes 400-999.



Also worth noting is that if I open the Configuration feature from within IIS, I see an access denied error.



I have exactly the same setup on my local dev machine, pointing at the same UNC path with the same user, and there it works. Only on the test server does it fail.



What am I doing wrong?



Answer



The fact that it's an ASP.net app is probably exactly what the issue is here. Your application pool identity has to have rights (not necessarily the IIS identity; by default, the app pool identity is the local Network Service account.) You also probably need to run caspol.exe on your IIS machine.



http://msdn.microsoft.com/en-us/library/cb6t8dtz%28v=vs.80%29.aspx



http://learn.iis.net/page.aspx/50/aspnet-20-35-shared-hosting-configuration/



%windir%\Microsoft.NET\Framework\v2.0.50727\caspol -m -ag 1.  -url "file://\\remotefileserver\content$\*" FullTrust

linux - CentOS CPU load and CPU Freq

I have a dual-Xeon (X5650 @ 2.67GHz) server with 72GB of RAM and HT disabled, but I have a problem.



I host srcds servers (game servers) and they are really CPU intensive. CPU usage is usually over 50%, but the load average is 0.05~0.30 (even if I run 10 servers, each using one core at 100%, it stays at 0.05~0.30).



The problem is that the CPU does not ramp up; it just stays at 1.5GHz forever, as no load is registered by the system even though there actually is. As the game servers' load increases, they start lagging and dropping frames because of the low CPU frequency.




I ran some benchmarks on the server, and the CPU load and frequency did ramp up to ~3GHz as they should, so I don't think it's a hardware problem.



I used to run Ubuntu, and the CPU load was fine there, but I don't want to reformat the server and set everything up again.



Is there anything I can do to make CentOS register the right load and ramp up the CPU frequency as it should?
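For reference, the governor in use can be inspected and changed through sysfs (a diagnostic sketch, run as root; on CentOS the cpuspeed service normally manages this):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done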

Nginx+PHP scalability on Windows



I'm trying to understand Nginx+PHP scalability on Windows with, let's say, 100 requests performing medium-length operations.



Analyzing source code I saw:





  1. Nginx starts and spawns itself several times depending on
    configuration and/or the server's processor count.


  2. Although it uses I/O completion ports, each worker only creates
    one thread to handle requests.


  3. When a PHP request comes in, Nginx communicates with PHP using FastCGI.




At this point, although Nginx can continue scaling, I cannot see anything in the standard PHP FastCGI SAPI code that scales using multiple threads/processes and completion ports.



For Unix/Linux, PHP-FPM solves the problem because it forks child processes to complete tasks, although I'm not sure about the performance.




But back on Windows, even with PHP-FPM, I don't see specific code to keep the whole webserver's performance high.



Is there an alternative for Windows? Is something wrong in my research?


Answer



The scalability of nginx on Windows is limited, so it's not a recommended platform for a production web site.



From the nginx web site:





Version of nginx for Windows uses the native Win32 API (not the Cygwin emulation layer). Only the select() connection processing method is currently used, so high performance and scalability should not be expected.



Although several workers can be started, only one of them actually does any work.



A worker can handle no more than 1024 simultaneous connections.



nameserver - CNAME domain to another domain, but keep different SPF records for the two?




SCENARIO:




  • mydomain.com is the main website; we send/receive mail using
    address@mydomain.com. mydomain.com's DNS has an SPF record "v=spf1 a mx ~all".


  • mydomain.net is just an alias for mydomain.com, but we do NOT send mail
    using address@mydomain.net. Therefore mydomain.net's DNS has an SPF record
    "v=spf1 -all" to tell everyone it does not send mail.





Since mydomain.net is an alias for mydomain.com, I wanted to use CNAMEs in DNS, thus:



mydomain.net -> CNAME -> mydomain.com
www.mydomain.net -> CNAME -> mydomain.com


But by doing this I noticed that when testing SPF for mydomain.net with a DNS tool like this one, the SPF returned is the one from mydomain.com ("v=spf1 a mx ~all") and NOT, as I would expect, "v=spf1 -all".



Is there a way to use different SPF records for the two domains while still using a CNAME?



Answer



A CNAME means that the hostname is exactly the same as the target hostname with respect to all record types. If this is not what you want then you can't use a CNAME.



You also shouldn't CNAME the root of a domain (i.e. mydomain.net), because this means that the SOA for mydomain.net is actually that of mydomain.com.
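A sketch of what that means in practice for the mydomain.net zone (the A record value is an example address, and it must be kept in sync with mydomain.com manually):

mydomain.net.      IN A     203.0.113.10
mydomain.net.      IN TXT   "v=spf1 -all"
www.mydomain.net.  IN CNAME mydomain.com.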


storage area network - LUN and VMFS 5 sizing assistance



I am working on setting up our vSphere 5 environment. With vSphere 5 you can go beyond the 2TB VMFS datastore size that you were capped at in 4.x. What size datastores are people using, and what's a good way to determine the correct size?




My environment:



Hosts: I will be using 6 hosts (2 CPUs per host with 6 cores each) = 72 cores. 192GB RAM per host = 1152GB RAM.



SAN:



A VNX5500 with 35TB of storage. This is tiered, so it has a mix of SSD, SAS, and NL-SAS drives.



I saw someone use a formula somewhere that looked like this:




(disk pool capacity – 10% free space) / total processors = datastore size



Does that look right? I may set up different tiers of pools on the VNX, maybe gold/silver/bronze (basically aimed at SLAs). Using this formula I would have a gold pool of, let's say, 10TB.



So that's (10TB - 10%) = 9TB (9000) / 72 = 125, so is that 1.25TB per datastore? And I would end up with ~7 datastores on 10TB of space? Since VMware is aiming at easier management through fewer objects, and you can now go over 2TB per VMFS 5 datastore, this doesn't look right to me.



Any help at all sizing my datastores would be much appreciated.


Answer



I am going to use the formula I posted above to size my datastores. Given the speed of the VNX5500's drives, the IO problems I had in the past shouldn't recur. I will post the final sizes across 30TB of space when we carve it up later this week.



What are useful Command-line Commands on Mac OS X?

Per the Windows and Linux threads, what commands do you find most useful in Mac OS X Server (or Client)?

Permanently mount network share without the need for log on? (Windows)

On a Windows 2008 R2 Server (Standard) I need to have a network drive mounted without a specific user having to log on to the machine first, sort of like an NFS mount via fstab on Unix machines. The network drive is a share on a BlackArmor (Seagate) appliance (which I presume runs Samba). The appliance can be made a member of the domain if needed.



So far I have tried using Edit Group Policy -> Computer Configuration -> Windows Settings -> Scripts -> Startup, where I had it execute



net use x: \\server\share /user:username password


With no success: upon login, the network drive appeared in Windows Explorer as a disconnected network drive.
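For reference, one workaround sketch is a boot-time scheduled task running as SYSTEM (note that drive letters are per-session, so services are usually better off using the UNC path directly):

schtasks /Create /TN MountShare /SC ONSTART /RU SYSTEM /TR "net use x: \\server\share /user:username password /persistent:yes"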

Improving email deliverability: Implementing DKIM and DMARC

I have a messaging system in my app where users can send messages directly to other users straight from my domain (not going through Mailchimp's Mandrill templates or Google Apps). I also have cron jobs that send users' statistics to about 5,300 users every week. Again, the script sends messages straight from my domain.



Most emails are going to users' spam folders, which I need to fix as soon as possible. I recently found an app that tests email deliverability and gives scores based on how well configured your email server is (among other things): https://www.mail-tester.com. I was able to fix several things, and my score went up from -0.2/10 to 7.7/10. However, although the tester says my email is "good stuff", I know hundreds of emails are either not being delivered (returned because the sender is not trusted) or going straight into the spam folder.



The last thing I need to fix to have an almost perfect score is to add a DKIM signature to the emails. Hopefully that will increase deliverability rates. This is the message the email tester gives me about DKIM: "Your message is not signed with DKIM. DomainKeys Identified Mail (DKIM) is a method for associating a domain name to an email message, thereby allowing a person, role, or organization to claim some responsibility for the message."




I did try to work this issue out with my host (BlueHost) but they were not able to help me (they did help me with other issues, though).



Additionally, I used MxToolbox (http://mxtoolbox.com/) to test my email, and the results say a DMARC record is missing or invalid.



Does anybody know how to add a DKIM signature and a DMARC record for emails that come from the domain itself? Are there command lines I can use to do that?



Thank you!



P.S. The app is written in PHP.
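For reference, the usual building blocks look like this (a sketch; the selector and addresses are placeholders, and on shared hosting like BlueHost you may not be able to install the signing milter yourself):

# generate a DKIM key pair with OpenDKIM's tooling
opendkim-genkey -d mydomain.com -s mail
# mail.private is the signing key; mail.txt contains the DNS TXT record to publish

; a minimal DMARC record; p=none only requests reports, a safe starting point
_dmarc.mydomain.com. IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@mydomain.com"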

Monday, January 25, 2016

2008 R2 Software Raid 5, writing during resynching

I'm building a software RAID 5 under Windows Server 2008 R2 with 4 x 2TB SATA2 drives (i3-540, 4GB RAM).
I launched the process 24h ago, but the resync progress is only 27%;
- Is there a problem, or is this a normal duration? Would a 2003 R2 RAID 5 be faster?
(I've seen that it could take 4 days, but some people told me it shouldn't take more than 1 day)
- Can I write to the RAID drive during the resync process?
Thank you.




Edits :



Actually, I estimate the resync time at 84h based on elapsed time and % completed.
I don't know if the calculation below is correct, but it seems to match that estimate.



Values are taken from a test of the WD Caviar Green 2TB: http://www.ginjfo.com/dossiers/tests-materiel/composants/disques-durs/caviar-green-2-to-une-capacite-ecolo-hors-norme-20090309?page=3
(write access 3ms, read access 7ms)





  • Read and write at max Sata II :
    4 * 1.820.000 MB at 375 MB/s (Sata II) = 5.4 H

  • Access to each cluster of 3 HDD :
    1 * 1.820.000 MB = 28.437.500 64KB clusters at 7ms = 55.3 H (Read simultaneous)

  • Write of each parity cluster :
    1 * 1.820.000 MB = 28.437.500 64KB clusters at 3ms = 23.7 H (Write parity)



=> 84.4 H (3.5 days)




If the calculation is correct, it could help some people to estimate the resynch time.

How to kill a process in Linux if kill -9 has no effect



I imagine that kill -9 still just sends a signal to a process. If the process is not behaving well, I would imagine that kill -9 would have no effect. Indeed, today for the first time I saw kill -9 have no effect when sent to a stuck ruby process. So here is my question: Is there some harsher way to kill a process?



Answer



You have to reboot the machine. If kill -9 doesn't kill a process, that means it is stuck in the kernel. Unless you can figure out what it's stuck on and unstick it, there's nothing you can do. The reason it's stuck is that it is waiting for something and the logic necessary to cleanly stop waiting simply doesn't exist.



(The above assumes the process has already been reparented to init. Otherwise, its parent may be keeping it around. You may have to kill its parent.)
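A quick way to confirm this diagnosis is to check the process state: a STAT of "D" means uninterruptible sleep inside the kernel, where signals (including SIGKILL) are not delivered. A sketch, with a hypothetical PID:

# STAT "D" = uninterruptible sleep; WCHAN hints at the kernel function it is blocked in
ps -o pid,stat,wchan:20,cmd -p 12345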


Sunday, January 24, 2016

Using X25-E SSD on Dell R710 w/PERC H700

I noticed that StackExchange/ServerFault is using a Dell R710 with PERC H700 controller and Intel X25-E SSDs for a database server. We're trying to do the same thing!



We're currently running the latest PERC firmware, version 12.10.1 (which unfortunately is several releases behind the equivalent LSI firmware). The Intel X25-E SSDs run beautifully in all respects--configuration, performance, etc.



What we're seeing is two symptoms: about two minutes after a reboot, the disk status lights permanently change from green to blinking amber, and at the same time the hardware management log records a non-critical error, "A non-Dell supplied disk drive has been detected". These seem to be more of a nuisance than anything else.



At this point I'm about ready to get some tape and cover the lights. So my question to the StackExchange/ServerFault folks (or anybody else) is whether they ran into anything similar.

high availability - Get IP of node running a specific resource when demoting master nodes to slaves

I am setting up an HA cluster for a web application with 2 nodes (2 physical servers):




  • node1 (current master node)

  • node2 (current slave node)




Using Corosync & Pacemaker, I was able to create the cluster and some resource agents, including an IP failover and a webserver (Apache).



Resources




  • Failover resource exists on only one node at a time





Uses a python script to make API calls to my hosting provider in order to update the IP failover destination





  • WebServer resource exists (as a clone) on every available node




Standard OCF resource using Apache's server-status handler





Constraints




  • There is a constraint that says Failover and WebServer must be running at the same time on a server in order for it to be considered available.






Now I would like to create a custom resource agent (like I did for the IP failover) that will:




  • Promote the MySQL instance on the current slave node to master

  • Demote the MySQL instance on the current master node to a slave of the new master

  • Do basically the same for the Redis instance



Ideally, the resource would be started on only one node (master), and stopped on all other nodes (slaves). Therefore, starting the resource would put the current node in master mode, and stopping it would put it in slave mode.




I made a script that can easily achieve all of these operations. Here's how it works.



Turn the local node into master mode:



# /usr/local/bin/db_failover_switch.sh master


Turn the local node into slave mode:



# /usr/local/bin/db_failover_switch.sh slave 123.45.67.89



The synopsis is pretty straightforward to understand.
The problem I am facing is that I obviously need to know the master's IP in order for the slave to configure the MySQL and Redis services accordingly.





In case of failover, I want:





  • Resource starts on node2 which becomes master node

  • Resource stops on node1 which becomes slave node



In order to stop the resource (i.e. set it into slave mode), I need to know the IP address (hostname will do) of the node which has the resource running.



Is there a way I can have a dynamic parameter that Pacemaker will pass to my resource agent? Or can I retrieve the cluster's information directly from my resource agent script, to find out which node is running a specific resource?
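On the second point: Pacemaker ships a CLI that can report where a resource is currently active, so a resource agent can shell out to it to discover the master's host. A sketch, assuming the IP failover resource is named Failover:

# ask the cluster which node currently runs the resource
crm_resource --resource Failover --locate
# prints e.g.: resource Failover is running on: node1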

proxy - How can I remove port number on ssl domains



I serve two SSL domains on different ports using Apache 2.2.3. The server runs CentOS 5.11, and I can't update it.



For example:





  • https://one.domain.com/

  • https://two.domain.com:444/



These work properly.



I want to remove the port number from the second domain's address, so it becomes https://two.domain.com/.




How can I do this with the current environment?


Answer



You've two options:




  1. Use a separate IP address for the second server config (see the sketch after this list). Note this could still be hosted on the same server if you can configure two IP addresses on your network card or add another network card.


  2. The easier option, and to save an IP address, is to use a cert which works for both domains. More details here: Disabling SNI for specific virtualhost on Apache.
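For option 1, a minimal sketch of the second site bound to its own address so both can answer on the standard port (IP and paths hypothetical):

Listen 192.0.2.2:443

<VirtualHost 192.0.2.2:443>
    ServerName two.domain.com
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/two.domain.com.crt
    SSLCertificateKeyFile /etc/pki/tls/private/two.domain.com.key
</VirtualHost>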




But you really should think about upgrading. Those are very old versions that you're on.



Saturday, January 23, 2016

How to have ssh tunnel prompt for username & password




We have a Windows desktop application that connects to a 3rd-party server with a socket connection. The 3rd-party server requires that we connect from a fixed public IP address. We need to connect from various IP addresses, so I set up a (Linux) server to tunnel the connections so that it looks to the 3rd-party server like all connections are coming from the same IP address all the time:



ssh -N -L port:127.0.0.1:port account@ip -p port2



The tunnel appears to be working fine; as a test, I can telnet to it from the account with which it was created.



To allow the Windows machines (that run the application) to tunnel through, I added -g to the ssh command line. Now other machines can telnet through the tunnel as well; everything works so far. However, I want to be able to restrict who can use the tunnel. When I telnet to the Linux server, I expected to be prompted for the username/password of the account that created the tunnel; instead, the connection is just created with no restriction. I don't want to use IP address filtering, since allowing any IP address is the reason I set up the Linux server in the first place. How can I get the Linux server to prompt for a username/password when connecting to the tunnel from another machine? Would this be done with some additional or different command line options for ssh, or do I need to use something else?



I was expecting to run something like Bitvise Tunnelier on the Windows desktop machines. Thus, I would tell the Windows desktop application to connect to a local port on the Windows machine on which it runs. This local port would be tunneled to the Linux server by Tunnelier. The Linux server would in turn tunnel to the 3rd-party server.


Answer




If you want to require authentication, then you should probably drop the -g option that makes the tunnel available to the whole network. Then require everyone who needs access to the remote system to establish their own connection to your SSH server with a tunnel.
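In other words, instead of one shared -g tunnel, each authorized user opens their own authenticated forward from their Windows machine (port numbers hypothetical), and the application then connects to the local end:

ssh -N -L 5000:127.0.0.1:5000 user@linux-server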




How can I get the Linux server to prompt for username/password when connecting to the tunnel from another machine?




There is nothing built into SSH that is going to automatically add an authentication step on top of a telnet session. The simple solution, as I suggested above, is to only permit access to the tunnel if the user can authenticate to the SSH server. You could get a similar result with a VPN between the clients and the SSH server.



I am not sure about the exact nature of whatever this tunnel is providing, but you could set up an account on the SSH server, then configure this account to automatically run telnet to connect to this tunneled connection.




So you might do something like create an account tunnelaccess, then add something like this to your sshd_config, so that whenever a user logs in as tunnelaccess, the command to connect to the remote tunnel runs immediately.



Match User tunnelaccess
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand connect tunnel # replace with the tool that connects to the tunnel

Friday, January 22, 2016

configuration - Getting error "Invalid command 'echo'..." when restarting apache when trying to use SSLPassPhraseDialog



Following a solution from another answer, I added the following to my Apache config:




SSLPassPhraseDialog exec:/path/to/passphrase-script


And in that script, I placed this:



#!/bin/sh
echo "put the passphrase here"



Now, when I restart Apache, I get the following error:



Invalid command 'echo', perhaps misspelled or defined by a module not included in the server configuration


Should I be using some other command in the shell script? Or do I need to configure apache differently so the echo command works?


Answer



Your shell (/bin/sh) does not appear to support echo as a built-in command, and your script is probably being called from an environment that doesn't have a valid PATH environment variable set.



Use the full path to the echo command (usually /bin/echo, sometimes /usr/bin/echo) instead and things should work.
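So the adjusted script would look something like the following (assuming echo lives in /bin; the script must also be executable by the user Apache runs as):

#!/bin/sh
/bin/echo "put the passphrase here"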



Thursday, January 21, 2016

Domain name to web hosting (DNS set up)



Now this is really basic. I've read about how DNS works. There are also some questions on ServerFault that talk about complex DNS configurations, but my problem is that I don't get the basics of it. My question is pretty simple, even embarrassing, but there's something huge I'm missing and it's driving me crazy.



When you need to link a domain name with a web hosting server, you have the domain name and the DNS of the hosting server, and you point the domain name to the DNS server (either by hostname or IP).



But how do you tell it which website it should display on shared hosting? There is more than one website with the same IP.



Thank you!



Answer



On shared hosting, your provider will have a control panel to set that up for you. If you're setting up Apache yourself, this is stored in the VirtualHost sections of the config file.



Essentially, the webserver (usually Apache) figures out which website to serve to the end user, since every modern browser sends the Host header indicating which hostname it intends to access. If you browse to the IP without telling the server which website you want to visit, most webservers display a generic error unless configured otherwise.
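As an illustration, a minimal sketch of two name-based sites sharing one IP in Apache (names and paths hypothetical; NameVirtualHost is only needed on Apache 2.2 and older):

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.siteA.com
    DocumentRoot /var/www/siteA
</VirtualHost>

<VirtualHost *:80>
    ServerName www.siteB.com
    DocumentRoot /var/www/siteB
</VirtualHost>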


3rd party SSD drive in HP Proliant server only shows 3G transfer speed




After receiving some excellent advice in a previous question, we've bought a OWC Mercury Extreme Pro drive for use in our HP Proliant DL360 G7 server. Both the drive, and the P410i array controller, will apparently support a 6Gb/s connection.



However, when I view the drive in the HP Array Configuration Utility, the transfer speed is listed as only 3Gb/s:



(screenshot: HP Array Configuration Utility listing the drive's transfer speed as 3Gb/s)



Is there anything I need to do to kick this drive into 6Gb/s mode, or is the HP ACU just confused by the non-HP drive? The P410i array controller firmware is v6.00-2, which I believe is the most recent.


Answer



The P410 / P410i seems to be limited to 3Gb/s for SATA disks (6Gb/s SAS and 3Gb/s SATA support):




http://h18004.www1.hp.com/products/servers/proliantstorage/arraycontrollers/smartarrayp410/index.html


remote desktop - Manage redirected print queues from RD users on Server 2008 as Administrator



How can I view all the print queues on a 2008 server, even redirected ones created by Remote Desktop users?



When I log into a Server 2008 server as Administrator and view the Printers folder (or use the Print Management tool), I only see my own printers and printers created via a 3rd-party tool, ScrewDrivers. I don't see the redirected print queues created for Remote Desktop users. Is there any way to see or manage these print queues as an administrator?




I can shadow the users and see their print queues, but that's the only way right now.


Answer



Sure, this is included in the Print Management mmc.



If you don't already have the Print Server role, you'll need to add it first.



Then add yourself to the Print Operators group. Log out of the administrative RDP session once and back in...then go into Print Management and "all printers".



(Oh and basically it is what joeqwerty linked to.)




Note: in case you didn't know already, "(redirected 4)" corresponds to the user with session ID 4 in Task Manager, since that mapping isn't easily visible within Print Management.


Root locked out of SSH



I have tried to set up SFTP on a Debian machine, and was following instructions (and here) to prevent SFTP users from using remote login too.



But in doing so, it seems I have locked root out of remote login too. Of course, without remote login, I can't fix this. It doesn't look like telnet is enabled. My other, FTP-only, users are chrooted.




Is there any way to fix this? I guess if I were able to restore my sshd_config file (backed up as /etc/ssh/sshd_config.bak), I may be able to log in again, but how do I gain access?



As you can tell from the nature of the question, I am a bit of a newbie at all this....


Answer



Log in with your normal user account (the one for yourself, that you didn't put in a chroot) and su to root, then you can fix the problem.
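Concretely, a sketch of the recovery (the backup path is from the question; the service name may differ by distro, and it's wise to test from a second session before logging out):

su -
cp /etc/ssh/sshd_config.bak /etc/ssh/sshd_config
/etc/init.d/ssh restart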



If you somehow managed to chroot your own user account, or never created one in the first place (don't ever repeat this mistake), then you will have to get on the console, reboot to single-user mode, and recover it from there.


windows - Azure-to-Azure Disaster Recovery orchestration



I have been looking into this lately, only to find no native Azure-to-Azure disaster recovery solution. Do you think it is completely fair to trust the Azure cloud to take care of the risks that a disaster brings? Does it even make sense to set up replication inside the Azure environment for Azure VMs? If so, what are the available options? Any direction is appreciated!



Coming from a fully managed solution, I think Azure is not giving us many options here for an IaaS environment setup.



My Research so far on this




Azure Site Recovery:



Only the following scenarios are supported. Not Azure-to-Azure.




  • Azure-Hyper-V site

  • Azure-VMM server

  • Azure-Physical Windows server

  • Azure-VMware virtual machine

  • Secondary datacenter-VMM server


  • Secondary datacenter-VMM server with SAN

  • Secondary datacenter-Single VMM server



High Availability:




  • Available only for HA Application server VMs.

  • The only promise is "Data is durable". The end user is responsible for reconnection.




Azure Backup:




  • Encrypted backups.

  • Not possible to failover by setting up a redundant infrastructure.

  • Not an ideal solution for an IaaS failover.


Answer




Assuming I'm reading you right, there are things like Azure Site Recovery, HA, Azure Backup, zone and geo redundant storage, etc.



Recommended reading:



Azure Business Continuity Technical Guidance



Disaster Recovery and High Availability for Azure Applications



Azure Site Recovery




If you would like a possible workaround for now you can follow this blog post: https://bnehyperv.wordpress.com/2015/07/27/site-recovery-protection-between-on-premises-vmware-virtual-machines-or-physical-servers-and-microsoft-azure/


Wednesday, January 20, 2016

Active Directory migration from Windows 2003 to Windows 2016



I've inherited an old Windows 2003 based Active Directory installation and I'm tasked to upgrade it to modern standards. I've done various (successful) tests in my lab using the plan below, but I really want a reality check / best-practice suggestions from other experts in the field.




Current status: a single-label, Windows 2000 mixed-mode Active Directory domain running on a Windows 2003 installation. The DNS component is running with nonsecure dynamic updates.



Target status: migrate to a Windows 2012R2-level domain on a Windows 2016 installation (note: the target level of Windows 2012R2, rather than 2016, is due to my customer having other Windows 2012R2 servers). The migration should be done in the least disruptive manner; anyway, as I am going to work on it during a weekend, short service disruptions are acceptable.



Caveats: while single-label domains are deprecated, I really need to keep this one running as-is. I evaluated both a domain rename and a domain migration to a new name, but they simply seem too much to ask of my customer.



My plan:




  • install a new Windows 2016 server and add it, as a simple member, to the current domain


  • raise the current forest/domain functional level to Windows 2003

  • promote the new Windows 2016 server to the Domain Controller (with Global Catalog) role

  • demote the old server (via dcpromo)

  • on the new Windows 2016 server, use "Active Directory Sites and Services" to remove any possible leftovers from the demote operation

  • on the new Windows 2016 server, use "DNS Manager" to change the DNS dynamic update type to "Secure only"

  • raise the forest/domain functional level to Windows 2012R2 (see the sketch after this list)

  • change the old server's original IP address (eg: from 192.168.1.1 to 192.168.1.2)

  • change the new server's IP address to match the old domain controller (eg: from 192.168.1.10 to 192.168.1.1). Note: I'm planning to do that due to current DHCP settings and gateway firewall/VPN rules

  • migrate from FSR to DFSR (see here and here)

  • install another Windows 2016 server on a branch office, adding it as a new Domain Controller (with Global Catalog).
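For the functional-level steps, a hedged sketch using the Active Directory PowerShell module on the new DC (the domain name is hypothetical); the same cmdlets with different -DomainMode/-ForestMode values cover the earlier raise to Windows 2003:

# raise the domain, then the forest, functional level to Windows Server 2012 R2
Set-ADDomainMode -Identity example.local -DomainMode Windows2012R2Domain
Set-ADForestMode -Identity example.local -ForestMode Windows2012R2Forest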




Questions:




  • Am I missing something important?

  • Is my idea of swapping the IP address of the old/new server to minimize firewall/VPN/DHCP changes a good one, or should I avoid that?

  • Anything else I should be aware of?




UPDATE: after much discussion and testing, I convinced my customer to go for a domain rename. I did it via the rendom utility, as per Microsoft recommendations, and all went smoothly (fortunately, there was no on-premise Exchange server).


Answer




while single-label domain are deprecated, I really need to keep it
running as-is. I evaluated both a domain rename and/or a domain
migration to a new name, but they simply seem too much to ask for my
customer.




The right thing is sometimes the hardest. IMO, you're doing your customer a disservice by continuing to use and support the SLD. Do the "right" thing and perform a domain rename or migrate to a new domain.



domain name system - MX record pointing to A record with two different IP's

So my problem is simple; I just need a solid answer so that I don't break the email service.



I have two servers, one for mail and the other for the web service. The web server is responsible for the SSL certificate renewals (I'm using the Let's Encrypt Certificate Authority).



My DNS A record is mail.example.com and points to the mail server IP. The MX record points to that A record.



The SSL certificate validation is done via DNS, so I added another A record with the same hostname (mail.example.com) but pointing to the web server IP.



I tried this for a little while and it worked out (the validation succeeded and the mail service worked normally), but I'm not 100% sure about it, and it led me to the following thoughts:




1 - The A record for the web server was added later, so in the DNS query the mail server IP comes first, and because of this everything works fine.



2 - I read somewhere that in the browser, DNS query results are used in a random order. If the first IP can't serve HTTP requests, the second will be used. I'm not sure about this, but could the same happen for the mail service? If the first IP resolved does not accept mail, will it try the second one?



I would like to be clear on this because I want to be 100% sure of what is happening and why, to prevent any problems in the future.
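To see exactly what resolvers hand back, you can query the record directly; with two A records under one name, both addresses are returned and the order typically rotates per query (round-robin). A sketch with hypothetical addresses:

dig +short mail.example.com A
# typical output -- both A records, order varies per query:
# 198.51.100.10
# 203.0.113.20

For SMTP specifically, RFC 5321 expects a sender to try each address of the MX host in turn, so having the web server's IP in that set can at least add delivery delays whenever it is tried first.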

Subdomain Proxy/Rewrite with Nginx Proxy

I want to take m.example.com and direct those requests to example.com/mobile/ with nginx. (The same nginx server will be serving example.com.)



Can someone help me with the config I would need? I can post my nginx.conf if needed, but I assume it's something like:



location / {
    rewrite ^/(.*)$ /mobile/$1 break;
    proxy_pass http://127.0.0.1;
}



Am I in the right ballpark?
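That is the right ballpark; a sketch of the enclosing server block for the mobile host, assuming the main site listens locally on 127.0.0.1:

server {
    listen 80;
    server_name m.example.com;

    location / {
        # prefix every request with /mobile/ before handing it to the main site
        rewrite ^/(.*)$ /mobile/$1 break;
        proxy_pass http://127.0.0.1;
    }
}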

networking - Firewall connected to a switch using 2 ports (LAN & DMZ), but switch management talks on DMZ port

Someone let me know if I'm off track here.



I'm setting up a firewall with 3 ports configured (WAN, LAN, and DMZ). The LAN and DMZ ports both connect to the same switch, on which I will configure a VLAN to segregate LAN and DMZ traffic.



I've got a bit of an issue in that the switch insists on having its web-management interface talk to the firewall over the port designated as the DMZ (for the moment I've reconfigured that port as LAN so I can get on the switch to configure it).



If I've done everything correctly to this point, can someone point me in the right direction on forcing the switch to send its management data over another physical port?



The firewall packet captures clearly showed the management traffic addressed to the X0 port but being received on the X2 port by default.

UCC SSL certificate across multiple machines and IPs




I have some doubts concerning the UCC certificates.



From what I understand, they can be used to offer SSL for several domains using only one certificate file.



However, on all the sites I've searched, UCC SSL options always refer to "Microsoft® Exchange Server" or to being "ideal" for Microsoft technologies. Also, they are not clear about multiple IPs... and, obviously, I don't have a great deal of know-how on these subjects...



So, I have the following questions, assuming that I buy a UCC certificate for 3 domains:




  1. can I use a single certificate to "secure" www.domainA.com, domainB.pt and domainC.co.uk?


  2. can I use that same certificate on webservers, located on different machines, with completely different IP addresses?

  3. can I use that same certificate with Apache2?



More specifically, I'm currently looking at GoDaddy options:





Thanks!


Answer





  1. Yes.

  2. Yes.

  3. Yes.



That said, some UCC providers would consider these sorts of uses violations of the terms of service for the UCC.
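For question 3, using the cert in Apache is just pointing each vhost at the same files; a minimal sketch (paths hypothetical):

<VirtualHost *:443>
    ServerName www.domainA.com
    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/ucc.crt
    SSLCertificateKeyFile   /etc/ssl/private/ucc.key
    SSLCertificateChainFile /etc/ssl/certs/ucc-chain.crt
</VirtualHost>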


CNAME to another domain fails on some office networks, why?

Our domain "aspenfasteners.com" is hosted by Volusion. We have CNAME records "find" and "search" which point to site indexing accounts on www.picosearch.com.



These addresses fail on SOME private office networks which have their own DNS. We suspect the problem comes from Volusion's own name servers, ns2.volusion.com and ns3.volusion.com.



Volusion support on problems this technical is non-existent.



We tried an NSLOOKUP on find.aspenfasteners.com with level 2 debugging, and got the results below. Is it possible that the local DNS is recursing to Volusion's name servers, and that while Volusion DOES return the canonical name, they do NOT resolve the address?
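One way to test that theory is to query Volusion's nameserver directly, without recursion, and compare what it returns for the name (it should hand back only the CNAME and leave resolving www.picosearch.com to that domain's own authoritative servers):

# ask Volusion's server directly, with recursion disabled
dig @ns3.volusion.com find.aspenfasteners.com A +norecurse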




Can anybody with expertise in this sort of thing PLEASE look at the NSLOOKUP below and tell me if we are right? Volusion is giving me absolutely NO support on this topic, and I need proof of where the problem lies.



Thanks VERY much!



Carlo




find.aspenfasteners.com
Server: mtl-srm-dbsv-01.fastenerwholesale.com

Address: 192.168.0.44







SendRequest(), len 61
HEADER:
opcode = QUERY, id = 8, rcode = NOERROR
header flags: query, want recursion
questions = 1, answers = 0, authority records = 0, additional = 0




QUESTIONS:
find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN


------------



Got answer (138 bytes):
HEADER:
opcode = QUERY, id = 8, rcode = NXDOMAIN

header flags: response, auth. answer, want recursion, recursion avail.
questions = 1, answers = 0, authority records = 1, additional = 0



QUESTIONS:
find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN
AUTHORITY RECORDS:
-> fastenerwholesale.com
type = SOA, class = IN, dlen = 46
ttl = 3600 (1 hour)
primary name server = mtl-srm-dbsv-01.fastenerwholesale.com

responsible mail addr = admin.fastenerwholesale.com
serial = 10219
refresh = 900 (15 mins)
retry = 600 (10 mins)
expire = 86400 (1 day)
default TTL = 3600 (1 hour)


------------




SendRequest(), len 41
HEADER:
opcode = QUERY, id = 9, rcode = NOERROR
header flags: query, want recursion
questions = 1, answers = 0, authority records = 0, additional = 0



QUESTIONS:
find.aspenfasteners.com, type = A, class = IN



------------



Got answer (141 bytes):
HEADER:
opcode = QUERY, id = 9, rcode = NXDOMAIN
header flags: response, auth. answer
questions = 1, answers = 1, authority records = 1, additional = 1



QUESTIONS:
find.aspenfasteners.com, type = A, class = IN

ANSWERS:
-> find.aspenfasteners.com
type = CNAME, class = IN, dlen = 17
canonical name = www.picosearch.com
ttl = 3600 (1 hour)
AUTHORITY RECORDS:
-> com
type = SOA, class = IN, dlen = 43
ttl = 900 (15 mins)
primary name server = ns3.volusion.com

responsible mail addr = admin.volusion.com
serial = 1
refresh = 900 (15 mins)
retry = 600 (10 mins)
expire = 86400 (1 day)
default TTL = 3600 (1 hour)
ADDITIONAL RECORDS:
-> ns3.volusion.com
type = A, class = IN, dlen = 4
internet address = 65.61.137.154

ttl = 900 (15 mins)





*** mtl-srm-dbsv-01.fastenerwholesale.com can't find find.aspenfasteners.com: Non-existent domain

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...