Friday, October 31, 2014

zsh - Why is my $PATH different in the executed script?


echo $PATH inside gnome terminal:



/home/pc/less.js/bin:/home/pc/local/bin:/home/pc/local/bin:/home/pc/.rvm/gems/ruby-1.9.2-head/bin:/home/pc/.rvm/gems/ruby-1.9.2-head@global/bin:/home/pc/.rvm/rubies/ruby-1.9.2-head/bin:/home/pc/.rvm/bin:/usr/local/bin:/home/pc/local/bin:/usr/lib64/mpi/gcc/openmpi/bin:/home/pc/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/usr/lib64/jvm/jre/bin:/home/pc/Programming/Software/tup:/home/pc/Programming/Libraries/depottools:/home/pc/Programming/Libraries/apache-maven-3.0.4/bin



From inside this script:


#!/bin/zsh
echo $PATH
while inotifywait -e modify /home/pc/vbox-shared/less; do
lessc custom.less > /home/pc/vbox-shared/less/custom.css
done


/usr/lib64/mpi/gcc/openmpi/bin:/home/pc/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/usr/lib64/jvm/jre/bin



As you can see, I modified my .zshrc file with this:



export PATH=/home/pc/less.js/bin:$PATH



Why does it not work in the script when executed as a file? The problem is that the lessc command is not being found.


Answer



The script is run using /bin/zsh, which is not an interactive or login shell and doesn't load this file. From man zsh, emphasis mine:



Commands are first read from /etc/zshenv; this cannot be overridden. Subsequent behaviour is modified by the RCS and GLOBAL_RCS options; the former affects all startup files, while the second only affects global startup files (those shown here with a path starting with a /). If one of the options is unset at any point, any subsequent startup file(s) of the corresponding type will not be read. It is also possible for a file in $ZDOTDIR to re-enable GLOBAL_RCS. Both RCS and GLOBAL_RCS are set by default.


Commands are then read from $ZDOTDIR/.zshenv. If the shell is a login shell, commands are read from /etc/zprofile and then $ZDOTDIR/.zprofile. Then, if the shell is interactive, commands are read from /etc/zshrc and then $ZDOTDIR/.zshrc. Finally, if the shell is a login shell, /etc/zlogin and $ZDOTDIR/.zlogin are read.



The script inherits the environment from where it's called, and if this isn't another (interactive) shell, it won't contain the preferences you set in .zshrc.


You can set the PATH where it applies globally (e.g. in /etc/zshenv), set it explicitly in the script itself, or change the shebang line to run /bin/zsh -i instead, making it load .zshrc (quoting man zsh: "Force shell to be interactive. It is still possible to specify a script to execute.").


Alternatively, just specify the full path to the program that isn't on the default PATH, e.g. /home/pc/less.js/bin/lessc.
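For instance, a minimal sketch of setting the PATH inside the script itself, repeating the line from .zshrc so the script no longer depends on the caller's environment:

#!/bin/zsh
# Make lessc visible regardless of how the script is invoked
export PATH=/home/pc/less.js/bin:$PATH
while inotifywait -e modify /home/pc/vbox-shared/less; do
    lessc custom.less > /home/pc/vbox-shared/less/custom.css
done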


web server - Iptables output and forward rules for webserver

For a CentOS web server that is not behind a firewall, I set up some Input chain iptables rules to open only port 80 from internet, allow SSH only from my IPs, and so on. On this server there is only Apache serving HTTP requests (port 80).




But what about Output chain? Is it a good practice to allow only the same ports that are allowed by Input chain?
Since Forward is not used, can I set the default policy to DROP?

vmware esxi - Initiator disconnected from target during login equallogic san



I have a Dell EqualLogic SAN and it is being accessed by many VMware hosts (ESXi v5.1). This morning I found an error like:



iSCSI login to target " " failed for the following reason:
Initiator disconnected from target during login




But just after 10 seconds this error disappeared, and now login works fine.



I have checked this particular datastore through vCenter Server, but I was unable to find any errors logged there. Can anyone please tell me the basic reason for this error and how it can be solved permanently?


Answer



If this is something that seems to be recurring, you should ensure that your VMware hosts are configured according to Dell's best practices recommendations.



The most likely cause would be the iSCSI login timeout value (defaults to 5 seconds, recommended 60 seconds).



See kb.vmware.com/kb/2007829 for more info.
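If it does recur, the timeout can be raised from the ESXi shell; a sketch (the adapter name vmhba37 is a placeholder for your software iSCSI adapter, and the LoginTimeout parameter is only adjustable on ESXi 5.1 and later):

# Raise the iSCSI login timeout from the 5-second default to the recommended 60 seconds
esxcli iscsi adapter param set --adapter=vmhba37 --key=LoginTimeout --value=60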



memory - How much RAM can a 32bit OS support?






I see a lot of people claim that a 32-bit OS can only support up to 3GB of RAM, while others claim 3.25, others 3.5, and others even claim 4GB (which makes the most sense to me: 2^32 bytes = 4GB).


Can anyone provide a definitive answer, with some logic to back up their statement? How much RAM can a 32-bit OS support?


Answer



As a matter of theory, 2^32 bytes is the max. However, each OS reserves different parts of the address space for various things (kernel space, drivers, memory structures, etc.), so the usable user space, and sometimes the reported RAM, is less than the theoretical max.
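For reference, the arithmetic behind the theoretical limit: a 32-bit address can take one of 2^32 possible values, so

2^32 bytes = 4,294,967,296 bytes = 4 GiB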


windows 8.1 - Kernel Data Inpage Error

I started getting this "BSOD" error a few weeks ago, and I have no idea how to find out why. It happens at random times and during random events, so I have no idea what's causing it. A Google search a few days ago told me that you can look into the *.dmp file that is created during the error, but I failed to open it with the program from NirSoft, and I also have no idea what I am supposed to look for. Can someone give me a hand? It's driving me insane...


I have an ASUS X55A laptop with 4GB RAM and an Intel Celeron B830 1.80GHz x64.

Apache Virtual host (SSL) Doc Root issue

I am having issues with the SSL document root of my vhosts configuration. HTTP seems to work fine: it navigates to the root directory and serves the page correctly -




DocumentRoot /var/www/html/websites/ssl.domain.co.uk/ (as specified in my vhost config)



However, HTTPS seems to be looking for files in the main Apache document root found further up the httpd.conf file, and is not being overridden by the vhost config. (I assume that the vhost config does override the default document root?)



DocumentRoot: The directory out of which you will serve your
documents. By default, all requests are taken from this directory, but
symbolic links and aliases may be used to point to other locations.



DocumentRoot "/var/www/html/websites/"




Here is my config. I am quite new to Linux, so any advice on why this is happening is appreciated!



NameVirtualHost *:80
NameVirtualHost *:443


<VirtualHost *:443>
ServerAdmin root@localhost
DocumentRoot /var/www/html/websites/https_domain.co.uk/
ServerName ssl.domain.co.uk

ErrorLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.co.uk-error_log
CustomLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.co.uk-access_log common

SSLEngine on
SSLOptions +StrictRequire
SSLCertificateFile /var/www/ssl/ssl_domain_co_uk.crt
SSLCertificateKeyFile /var/www/ssl/domain.co.uk.key
SSLCACertificateFile /var/www/ssl/ssl_domain_co_uk.ca-bundle
</VirtualHost>




<VirtualHost *:80>
ServerAdmin root@localhost
DocumentRoot /var/www/html/websites/ssl.domain.co.uk/
ServerName ssl.domain.co.uk
ErrorLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.co.uk-error_log
CustomLog /etc/httpd/logs/ssl.domain.co.uk/ssl.domain.co.uk-access_log common
</VirtualHost>

Disable Automatic Restarts in Windows 10 Home Anniversary Update



I have a Windows 10 Home PC that is often unattended but doing important work. The work follows no particular schedule, and may take place at any time of day or night.


As things stand, Windows 10 (anniversary update) is configured to automatically restart and install updates during inactive times. The user can configure the inactive times, but the OS forces the user to have no more than 12 active hours a day. This means that the machine may well choose to restart at a time that is highly disruptive to our work and when there is no user around to prevent the restart.


For this reason, I would like to ensure that Windows never automatically restarts. How can I achieve this?


Answer



Here are the instructions to disable auto-restarts for the Windows 10 Pro and Home editions. If you have a different edition (Education, Enterprise), the process is different; update your question to that effect and I'll add that info.


There are two methods provided. The first is Pro-only: Windows 10 Home doesn't have the Group Policy Editor, so there it has to be configured via the registry. The registry method will work for both Pro and Home.


I confirmed that this works on the Anniversary Update version (Win 10 Pro).
There is one caveat: a user must be logged in for this approach to work.


Win 10 Pro:



  1. Press win+R then type gpedit.msc and press enter

  2. This will open the group policy editor. Browse through the 'tree' to the following entry:
    Computer Configuration > Administrative Templates > Windows Components > Windows Update.

  3. Look on the right panel and search for the option named No auto-restart with logged on users for scheduled automatic updates installations.

  4. Double-click on it, then change the radio button in the popup window that will appear from not configured to enabled and click OK.

  5. To make the system immediately apply the changes you just made, press WIN + R again and issue the gpupdate /force command


Win 10 Pro (alternative method) and Home:



  1. Press win+R; type regedit and press enter.

  2. Browse to the following registry entry:
    HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU

  3. If you do not have a WindowsUpdate and/or AU key, you need to create them. Follow the 'source' link below for additional info on how to do this.

  4. Inside the AU key, create a new 32-bit DWORD called NoAutoRebootWithLoggedOnUsers, then double-click on it and set its hex value to 1.

  5. You'll have to reboot for the change to be applied.
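Steps 2-4 can also be done in one line from an elevated command prompt with the built-in reg tool (a sketch; reg add creates the missing WindowsUpdate and AU keys automatically):

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f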


Another alternative - home or pro


If for whatever reason the approach above doesn't work, you can get around automatic reboots by changing your Windows Update settings so that updates are only downloaded automatically and approval is requested before they are installed. Once you approve installation you are at the mercy of when Windows reboots, but until then you have the ability to delay it indefinitely.


To change this setting:



  1. Press win+R; type regedit and press enter.

  2. Browse to the following registry entry:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update

  3. Set the AUOptions value under that key to 3 (which configures Windows Update to download updates automatically but require user confirmation before installing them).
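Equivalently, from an elevated command prompt (a sketch, assuming the standard value name AUOptions for this setting):

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update" /v AUOptions /t REG_DWORD /d 3 /f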


source


virtualbox - Kali Linux error while setting up VM


I'm currently setting up my lab to learn some basics of IT forensics and pentesting, and during the setup of the VM (using Oracle's VirtualBox) an error occurred:


(screenshot of the VirtualBox error)


So I created a syslog file via the menu, which gave this output:


(screenshot of the syslog output)


Unfortunately my experience with Linux is limited, but I'm wondering about the last line, "No space left", as there's plenty of space.


I hope you can give me some useful advice here (but not "stop using Kali").


Thank you in advance.


Answer



Setting the type of OS on the virtual machine to Debian x64 (the closest to Kali Linux x64) should work for you. If this is not an option, you'll need to go into the BIOS virtualization settings and set virtualization to enabled.


windows 7 - Netbook not working properly

I have a Dell Inspiron Mini netbook with these specifications:




  • Operating System: Windows 7 Starter 32-bit (6.1, Build 7601) Service Pack 1
    System Model: Inspiron 1018


  • Processor: Intel(R) Atom(TM) CPU N455 @ 1.66GHz (2 CPUs), ~1.7GHz
    Memory: 2048MB RAM
    Available OS Memory: 2038MB RAM
    Page File: 1921MB used, 2153MB available
    Windows Dir: C:\Windows
    DirectX Version: DirectX 11
    DX Setup Parameters: Not found



Now the problem is that my netbook was working fine until I opened it after 3-4 days. My PC became very slow; if I wanted to open a folder or a file, it didn't open, and if I restarted or tried to shut down the PC, it gave no response.
I tried:





  1. Running a full scan and a smart scan in Avast and a Bitdefender quick scan; the result was nothing.
  2. Tried to boost my PC with Clean Master and CCleaner.
  3. Ran a Malwarebytes Anti-Malware scan; the result was nothing.

windows 7 - Computer freezes on cold boot


I have a problem which started about 3 weeks ago. Each time I boot my computer for the first time in the day, it freezes around 1 hour after the boot. It happens independently of the OS in use (Linux/Windows). After a hard reboot, the system is stable for as long as I don't shut it down. I've tested so far:



  1. checkdisk on the only hard drive of the computer, everything was fine


  2. Testing individual memory sticks one at a time, still freezing


  3. Unplugging every hardware piece of the computer and cleaning/reseating them, still freezing


  4. Updating and/or reinstalling drivers for a lot of stuff, still freezing


  5. Stress testing the CPU and the GPU, system stable after 1 hour of stress test



I can also point out that it is not an overheating problem, nor an overclocking one. I also don't have a spare PSU or motherboard to test my system with. I'm looking for more tests or ideas so I can finally troubleshoot this problem.


The computer specs are


Dell XPS Studio 9100, 525W Dell PSU, Dell Motherboard, i7 930 @ 2.8 GHz, HD 7850 2GB OC edition, 6x2GB DDR3 RAM, Hitachi 1TB hard drive


Answer



Finally, after three months of system locks, I've found the issue. I have no magic answer, but resetting the CMOS did the trick. It rolled back the BIOS to the 2010 version and no more problems! It also triggered the first-boot setup, which may have fixed a few things.


Thanks again to all contributors.


Amazon ec2 Public DNS not working



With reference to this question:
How do I access my public DNS on Amazon's EC2




If I configure my security groups according to the Windows web platform firewall rules, then is there an issue? Because when I did that, I couldn't access the public DNS in a web browser.




  1. My security group is default and the inbound rules are HTTP, RDP, SMTPS, ICMP.


  2. My instance type is t1.micro webmatrix hosting server with default security group.


  3. My Windows firewall is active for the domain, public, and private profiles.


  4. I am not sure about this point. It's HTTP port 80 as shown in my security group.





I am new to Amazon EC2 and this is really urgent.


Answer



If you are using the instance as a public DNS server then you will need to have UDP port 53 open in the instance firewall (if it has one) and in the Security Group that the instance is in.



Go to your AWS Management Console and select EC2. Then, under Navigation, click Network & Security -> Security Groups.



Security Groups



In the Security Groups Pane select the group your instance is in (most likely default)




Select Group



Then, in the lower pane, click Create new Rule and select DNS.



Select DNS



Then click Add Rule followed by Apply Rule Changes.
The EC2 security group will now allow DNS queries to your instance.


domain name system - CNAME not resolving



I've got a domain (example.com) registered with godaddy and pointed to nameservers hosted by linode. I've got a multisite WordPress install on linode (blogs.com) and I want to point the domain to a subdomain of the wordpress install (example.com -> example.blogs.com).



The subdomain of the WordPress install works fine: DNS can find it and I can browse to it. In Linode's DNS Manager I've set up a CNAME to make the pointer I referenced above.



Whois shows that the linode nameservers are set for the domain, but DNS can't find any nameserver for example.com.




Am I missing a step, or do I have something misconfigured?



EDIT 1



The answer section of the dig request using one of linode's nameservers is



;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 44359
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0



The answer section from the dig using my host's nameserver is



;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 16379
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0


Same response in the status field if I specify CNAME or just accept the standard A query.



I do not have an A record for example.com on the Linode nameservers; do I need to set that up?




This is a fairly recent change - a few hours ago, so maybe I'm just being impatient? The nameserver changes made at the same time came through pretty quickly. I figured that the CNAME entry would be simultaneous with that; am I wrong in that expectation?


Answer



Technically, what you're asking for is invalid. CNAME conflicts with all other records (with a special exception for DNSSEC records); thus having CNAME xxxxx conflicts with the SOA, MX, NS, etc. records for the domain. My guess is that the reason the domain resolves when you use an A record there and fails when you use a CNAME is that the DNS server enforces those restrictions and is unable to process your zone file.



Furthermore, based on your response to @xwincftwx's question, it's not clear that getting the CNAME to work would do what you want in the first place. A CNAME pointing to an A record is exactly the same as the A record itself. The CNAME is handled entirely internally by the DNS system; the web browser only sees the IP address. In your test with an A record (let's say 1.2.3.4), the browser connected to 1.2.3.4 and asked it for the website example.com. If that server isn't configured to serve a website for example.com, it typically serves a default site (in this case blogs.com).



If you got your domain to work as a CNAME, the browser would ask for the IP address of example.com. DNS would see that it is a CNAME, look up example.blogs.com and return 1.2.3.4. The browser would connect to 1.2.3.4 and ask it for example.com just as it did when it was an A record.



If you want people going to example.com to be redirected to example.blogs.com, then you'll need to set up a basic web server that receives connections for example.com and sends the browser a 301 permanent redirect telling it to go to example.blogs.com.
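For example, a minimal Apache sketch of such a redirect (assuming Apache is the web server handling example.com):

<VirtualHost *:80>
    ServerName example.com
    # Send a 301 for every path on example.com to the same path on example.blogs.com
    Redirect permanent / http://example.blogs.com/
</VirtualHost>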


Excel 2010 conditional formatting – color cell if blank and other cells are not blank



I'm very new to Excel, so any help on the below would be much appreciated. I've done endless Google searching to try to find the answer to my question, but always get stuck with the random symbols in formulas.



I am using Excel 2010 and trying to perform conditional formatting.



I have a spreadsheet with columns A to G as a register of client queries I've received and when they have been responded to. As I receive new inquiries, I log them in this register to keep track of whom I've responded to and what I said. Column G is the date of my response to the client – this is entered after I've responded.




I am trying to format the spreadsheet so that, if I enter a new client inquiry in a new row, it automatically highlights the cell in column G until a date is entered. So basically it will act as a highlighter for all unanswered inquiries.



I understand that I need to click "Conditional Formatting" → "New Rule" → "Use a formula to determine which cells to format", but I am unsure what I need to put into the formula box.


Answer



To highlight the cells in column G that are empty, you can select column G and click 'Conditional Formatting' (assuming G1 is the active cell while the whole column G is selected) and use the formula =ISBLANK(G1) in the formula box you have mentioned.



Edit:



To highlight the cells in column G that are empty, and only if all of the cells A to F in the same row are not empty, amend the formula to =AND(ISBLANK(G1), NOT(ISBLANK($A1)), NOT(ISBLANK($B1)), NOT(ISBLANK($C1)), NOT(ISBLANK($D1)), NOT(ISBLANK($E1)), NOT(ISBLANK($F1))).
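A more compact equivalent, using the standard COUNTBLANK function in place of the six NOT(ISBLANK(...)) terms, would be =AND(ISBLANK(G1), COUNTBLANK($A1:$F1)=0).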




Scott's answer explains well how to highlight the cells if G is empty, and any of cells A to F in the same row is not empty.


Windows 10 not booting from SSD cloned from HDD

I have installed an SSD in my laptop as second drive as below:



  1. Removed the optical drive from the laptop

  2. Removed the primary 1TB HDD from its bay/slot

  3. Inserted the 500GB SSD in the bay/slot

  4. Inserted the removed HDD in a caddy

  5. Inserted the caddy with the HDD into the slot from which the optical drive was removed.


Both drives were detected by the Windows 10 installed on the laptop. Windows booted from the HDD, perhaps since the C drive on it remained as C despite being in the new slot.


I created two partitions on the SSD - one of 500MB and another of 119.51GB (exactly as on the HDD: the smaller one as the System Reserved partition and the other for the OS); I left the remaining space unallocated.


Then, using EaseUS Todo Backup, I cloned the System Reserved and OS partitions on the HDD to the respective partitions on the SSD. The clone operation was successful.


Then I removed the caddy with the HDD and tried booting the laptop. But it failed to boot: "A required device isn't connected or can't be accessed". Using diskpart from within a Windows 10 bootable USB drive, I noticed that the cloned partition on the SSD was not the C drive. This might explain the problem. I assigned the letter C to the partition with diskpart. No use. And the System Reserved partition had a drive letter (whereas it should not have had one). I tried Startup Repair from the bootable USB, but it was not able to detect the OS.


I inserted the caddy with the HDD back in the ODD slot. The laptop booted, as it was now able to find the C drive.


How do I make the laptop boot from cloned SSD?

Lenovo laptop: Is it ok to leave the battery charging while it is being used?

Is it OK to leave the battery charging while the laptop is in use? Or is it OK to keep the power plugged in with a fully charged battery while the laptop is in use? Can this shorten the battery's lifespan?

apache 2.2 - Multiple levels of .htaccess files with RewriteRule, only deepest level processed



I have .htaccess files at multiple levels of a directory hierarchy, each with RewriteRules in them. However, when a request is made for a file in a subdirectory, only the rules in the most deeply nested .htaccess file (up to the level of the requested file) are ever processed. Even having only a single line with "RewriteEngine On" in a subdirectory is enough to "disable" all rewrites defined in higher directories. This happens both with Apache and LiteSpeed httpd.



I had expected (and can't find any information otherwise) that all the RewriteRules would be combined into a single ruleset (presumably with deeper levels being processed last). However this doesn't seem to be happening.



Quite confused :) What am I not understanding?




Thanks,
Mike.


Answer



Have you set RewriteOptions Inherit in each .htaccess?




In per-directory context this means that conditions and rules of the parent directory's .htaccess configuration are inherited. Rules inherited from the parent scope are applied after rules specified in the child scope.




http://httpd.apache.org/docs/2.2/mod/mod_rewrite.html#rewriteoptions
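As a sketch, the subdirectory's .htaccess would then start like this:

RewriteEngine On
RewriteOptions Inherit
# Subdirectory rules go here; rules inherited from the parent
# .htaccess are applied after these.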


ssh - VPS - Lost password and can't use blank pass via remote desktop connection

I am currently trying to login to my dedicated server, and have forgotten the password.


I've tried many different things, and currently have SSH access. I boot the dedi into rescue mode (for the SSH) and then back into hard-drive boot so I can use Remote Desktop Connection (RDC). However, my most recent attempts were to use the following:


At /mnt/Windows/System32/config I ran chntpw -u Administrator SAM and tried to use option 2, which is "edit/set new user password", and it failed. Secondly, I tried to blank out the password completely (option 1); this appeared to have worked, but unfortunately I get an error when trying to connect via RDC saying the password is invalid. After much research I realized that remote connections are often unable to use blank passwords.


So, I thought I'd go back onto SSH and edit the password to become un-blank, and see if that worked again. But I get this error: "Sorry, unable to edit since password seems blank already (thus no space for it)" and it asks me to login with no password.


I am looking to either A) Set a new password, or B) Just get the damn admin account accessible again via RDC.
Thanks for the help in advance.

linux - Windows USB Tool and Unetbootin never detects external hard drive

I like to try out different Linux distros, and I don't like partitioning my main drive. I have plenty of actual hard drives that I can use for that (I like one OS per drive). Anyway, I'm trying to burn an ISO to my external hard drive (a Seagate), but I can't, because the Windows 7 USB Tool and UNetbootin never detect external hard drives or external hard drive enclosures.


Why doesn't it detect them? It only shows my main hard drive and my USB stick. And is there a better alternative that will detect my external hard drive and recognize it as something that it can burn ISO files to?

pdf - How to save an HTML document in one file on Windows?

I wrote a product manual using HTML that includes images, and I wanted to save it in one file that I could send to customers for viewing. Someone suggested converting it to a PDF, but the problem is that although HTML links/bookmarks work in PDFs, they are not easy to follow (Adobe Reader doesn't have Back and Forward buttons for links) and it's not easy to read. Then someone suggested opening my HTML in IE and saving it as an .mht file. I did so, and the page looks great, except for one "small" fact: none of the internal links/bookmarks work. For some stupid reason they all have absolute URLs that obviously don't work on another computer.


So any suggestions how to save HTML for viewing in one file (offline)?

Windows 7 Home Premium: install IIS?


Can you install IIS on Windows 7 Home Premium?


Answer



Yes, from TechNet:


To Install IIS 7.5 on Windows 7


You can perform this procedure using the user interface (UI) or a script.
Using the UI



  1. Click Start and then click Control Panel.

  2. In Control Panel, click Programs and then click Turn Windows features on or off.

  3. In the Windows Features dialog box, click Internet Information Services and then click OK.


If you use Control Panel to install IIS, you get the default installation, which has a minimum set of features. If you need additional IIS features, such as Application Development Features or Web Management Tools, make sure to select the check boxes associated with those features in the Windows Features dialog box.


email - Synchronizing e-mail using one IMAP and one POP client



OK I have an odd situation:



I need to synchronize e-mail from a single e-mail account on a single server to two different e-mail clients.

I need e-mail to be removed from the e-mail server.



I've tried setting up these accounts so that one uses POP to remove and sync e-mails and the other uses IMAP to sync e-mails, but this simply creates a race condition between the clients and results in only some e-mails being delivered to the IMAP client.



Does anyone know of a better way to accomplish my goals?


Answer



This is going to get messy I think.



You will have to have both clients connect using IMAP, as nothing else will work. Then you will need to find a way to get both clients to indicate they have finished replicating, and then you will need a server process that recognises that signal and deletes the emails. Not good. What happens as more email comes in while the clients are replicating? I think that you would have to stop the email server first - see what I mean about messy?




So, let's wind back slightly. Why do you want to do that? If you can explain that, I'd be prepared to bet that we can come up with a better approach.


Thursday, October 30, 2014

laptop - Certain keys on my keyboard stopped working


I've never spilt anything on my keyboard, and the keys do work some of the time. All the keys that aren't working are on one side of the keyboard, but it's not the entire side...


123456789
qwertyuio[]\
asdfghjkl
zxcvbnm,.

Backspace works, and so does the right hand shift and enter.


This keyboard is integrated into my laptop (the keys just started working again). The laptop is two years old, and other questions did suggest that keyboards die like this. However, as you can see, they do work every once in a while. Since they're working, here are the ones that stop functioning every once in a while:


0-=
p;'
/

Anyone have any suggestions on how to figure out why they would start and stop working like that? It's always all of these ones I listed all at once.


Any help is greatly appreciated.


Answer



My guess is that you have a finicky connection in your keyboard somewhere. Laptop keyboards typically are flexible and very thin. If there is a broken wire in the module, there is plenty of opportunity for the keyboard to flex and break/restore the connection.


The way most keyboards are built is not to have an individual wire path for each key, but to use a grid of wires and determine which key is pressed by associating each key with what amounts to an (x,y) coordinate. Since each row and each column share a wire, a break somewhere in it can affect multiple keys at once. Depending on how it is built, it could potentially be a problem localized to one area of the keyboard (as you seem to be experiencing).


To confirm that it is a hardware issue, I would plug in an external keyboard and verify that all the keys work on that even if the laptop keys are not. If that is the case, you may need to replace the keyboard. If you can't send it to the manufacturer to be fixed, you should be able to find a replacement keyboard part online. Replacing a laptop keyboard usually isn't a very difficult task, since they typically will just pop out and pop back in, with few screws to remove, if any.


performance - How can I identify the culprit of my slow Windows shutdown?


My computer is taking a very long time to shutdown.


How can I identify the culprit? I don't want to wait minutes for my computer to shutdown...


Is there a program I can use to track how long it takes to shutdown?


Answer



Windows provides Performance Counters as well as Event Tracing, which allow applications to do performance analysis so that one can pinpoint the cause of performance problems. Among the tools that build on these, there is one outstanding toolkit: the Windows Performance Toolkit, available in the Windows SDK.


In this toolkit you will find xbootmgr.exe, meant for Windows On/Off Transition Performance Analysis.


Although the above linked document goes into all the details for every on/off transition, here is the general idea about tracing and analyzing the shutdown transition using xbootmgr and the xperf GUI:



  1. Download the Windows SDK, then install the Windows Performance Toolkit using it.


  2. Open up a command prompt as an administrator, then run:



    cd %ProgramFiles%\Microsoft Windows Performance Toolkit



  3. If you want help in the future, you can type xbootmgr -help as well as xperf /?.


  4. Do a shutdown trace like this:



    xbootmgr -trace shutdown -traceFlags BASE+DIAG+LATENCY -noPrepReboot



  5. After the boot, it will generate a trace within two minutes.


  6. The trace has been saved in %ProgramFiles%\Microsoft Windows Performance Toolkit, you can drag it onto xperf.exe and it will be opened in a GUI.


  7. You will see a GUI with different graphs, the arrow at the left side allows you to add/remove graphs.


  8. Look at the graphs and see if you can identify anything out of the ordinary; you can select an interval and zoom in on it if you want to. Right-click and unzoom when you want to see the whole picture.


  9. For each graph, you can right click to get summary tables for the currently selected interval.


  10. In these tables, sort by weight or by time to figure out where it is spending the most. Please note that you can drag columns around, so for example the I/O table allows you to check out the highest-using process as well as the highest-using path.


    The divider (a yellow header column) makes it so that the columns right of it show the total for the columns left of it. So, if you have Path first and then Process, then you can open the tree for a file to see what processes have accessed it and then you get the totals for that process/file combination.


  11. You can find more information on how the graphs and tables function here.


  12. If you somehow need to go down to look into the stack traces; do another boot trace and append the -stackWalk profile parameter, set the _NT_SYMBOL_PATH and right click on any graph and enable "Load Symbols". This will allow you to check what functions it's actually calling, in general you won't need this for a shutdown though; but it can allow for things like discovering that your firewall is interfering with your debugger as a programmer. Pretty nifty...



Good luck, I hope you can find the culprit. If not then drop the trace and we'll take a look for you...


Please note that DPCs are Deferred Procedure Calls and interrupts are handled by Interrupt Service Routines (ISRs); both are related to drivers/hardware.


routing - RFC 1918 address on open internet?



In trying to diagnose a failover problem with my Cisco ASA 5520 firewalls, I ran a traceroute to www.btfl.com and, much to my surprise, some of the hops came back as RFC 1918 addresses.



Just to be clear, this host is not behind my firewall and there is no VPN involved. I have to connect across the open internet to get there.




How/why is this possible?



asa# traceroute www.btfl.com

Tracing the route to 157.56.176.94

1
2
3

4
5 nap-edge-04.inet.qwest.net (67.14.29.170) 0 msec 10 msec 10 msec
6 65.122.166.30 0 msec 0 msec 10 msec
7 207.46.34.23 10 msec 0 msec 10 msec
8 * * *
9 207.46.37.235 30 msec 30 msec 50 msec
10 10.22.112.221 30 msec
10.22.112.219 30 msec
10.22.112.223 30 msec
11 10.175.9.193 30 msec 30 msec

10.175.9.67 30 msec
12 100.94.68.79 40 msec
100.94.70.79 30 msec
100.94.71.73 30 msec
13 100.94.80.39 30 msec
100.94.80.205 40 msec
100.94.80.137 40 msec
14 10.215.80.2 30 msec
10.215.68.16 30 msec
10.175.244.2 30 msec

15 * * *
16 * * *
17 * * *


and it does the same thing from my FiOS connection at home:



C:\>tracert www.btfl.com

Tracing route to www.btfl.com [157.56.176.94]

over a maximum of 30 hops:

1 1 ms <1 ms <1 ms myrouter.home [192.168.1.1]
2 8 ms 7 ms 8 ms
3 10 ms 13 ms 11 ms
4 12 ms 10 ms 10 ms ae2-0.TPA01-BB-RTR2.verizon-gni.net [130.81.199.82]
5 16 ms 16 ms 15 ms 0.ae4.XL2.MIA19.ALTER.NET [152.63.8.117]
6 14 ms 16 ms 16 ms 0.xe-11-0-0.GW1.MIA19.ALTER.NET [152.63.85.94]
7 19 ms 16 ms 16 ms microsoft-gw.customer.alter.net [63.65.188.170]
8 27 ms 33 ms * ge-5-3-0-0.ash-64cb-1a.ntwk.msn.net [207.46.46.177]

9 * * * Request timed out.
10 44 ms 43 ms 43 ms 207.46.37.235
11 42 ms 41 ms 40 ms 10.22.112.225
12 42 ms 43 ms 43 ms 10.175.9.1
13 42 ms 41 ms 42 ms 100.94.68.79
14 40 ms 40 ms 41 ms 100.94.80.193
15 * * * Request timed out.

Answer



It is permissible for routers to connect to each other using RFC1918 or other private addresses, and in fact this is very common for things like point-to-point links, and any routing that takes place inside an AS.




Only the border gateways on a network actually need publicly routeable IP addresses for routing to work. If a router's interface doesn't connect to any other ASes (or any other service providers, more simply), there is no need to advertise the route on the internet, and only equipment belonging to the same entity will need to connect directly to the interface.



That the packets return to you this way in traceroute is a slight violation of RFC1918, but it isn't actually necessary to use NAT for these devices as they don't connect to arbitrary things on the internet themselves; they just pass along traffic.



That the traffic takes the (possibly circuitous) route through several organizations that it does is merely a consequence of the operation of exterior gateway routing protocols. It seems perfectly reasonable that Microsoft has some backbone and some people have peered with it; you don't have to be a wholesale ISP to route traffic.



That the traffic has gone through multiple series of routers with private IPs, transiting through ones with public IPs in between, is not especially strange - it simply indicates (in this case) two different networks along the path have routed the traffic through their own routers which they have chosen to number in this way.


apache 2.2 - Subdomains work fine, but redirect to main when using CNAME



I have a server with multiple websites in subfolders that I want to give their own domains. I've got two subdomains set up using VirtualHost like so:




DocumentRoot "/var/www/ex1"
ServerName ex1.domain.com




DocumentRoot "/var/www/ex2"
ServerName ex2.domain.com



They are set up as A records in my DNS, and they work fine when accessing ex1.domain.com and ex2.domain.com; the main domain www.domain.com works as well.



However, when I set up their main domains, with www.example1.com as a CNAME record pointing to ex1.domain.com, visiting www.example1.com shows me www.domain.com and not ex1.domain.com as it should.



What am I doing wrong?



Answer



The ServerAlias directive should do the trick:




DocumentRoot "/var/www/ex1"
ServerName ex1.domain.com
ServerAlias www.example1.com




DocumentRoot "/var/www/ex2"
ServerName ex2.domain.com
ServerAlias www.example2.com


linux - Will a SSD cache increase Native ZFS performance for me?



I'm using Ubuntu 11.10 Desktop x64 with native ZFS, using a mirrored pool of two 2 TB 6.0 Gbps hard drives. My issue is that I'm only getting about 30 Mb/s read/write at any time; I would think my system could perform faster.



There are some limitations though:




  • I'm using an Asus E350M1-I Deluxe Fusion, which has a 1.6 GHz processor
    and a maximum of 8 GB RAM, which I got. I didn't know about ZFS when
    I bought the system, and these days I would've selected a system
    capable of more RAM.


  • My pool has about 15% free space, but performance wasn't that much better when I had more than 50% free space.


  • When the processor is very busy the read/write performance seems to decrease, so it may very well be the processor that is the bottle neck.




I've read the other posts on this site about using an SSD as a log/cache device, which is what I'm thinking of doing, considering I don't have that much RAM.



My questions:





  1. Do you think adding an SSD as a log cache device will improve
    performance?


  2. Should I instead get another 2 TB hard drive and make a RAID-Z pool?
    (I'm going to need the space eventually; however, prices are still
    high on mechanical drives.) Would this increase performance?


  3. Sell my system and go for an Intel i3 instead?





Thanks for your time!


Answer



Note that, due to licensing concerns, ZFS is not a native filesystem within the Linux kernel but a FUSE implementation in userspace. As such, it has significant operational overhead, which is also clearly visible in benchmarks. I believe this to be the main problem here: a high amount of overhead in conjunction with the rather low processing performance of your system.



In general, adding an SSD in whatever capacity will only be of any help if I/O is actually a bottleneck. Use iostat to verify this.
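For example, a typical invocation (this assumes the sysstat package is installed) that reports extended per-device statistics every 5 seconds:

iostat -x 5

If the disks show consistently high utilization while throughput stays low, I/O is likely the bottleneck; if the CPU is pegged instead, an SSD won't help.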



Adding an SSD as a separate log device will only help if your main problem is the synchronous write load. It will not do anything to reads or asynchronous writes (which are cached and lazy-written). As a simple yet quite effective test, you should temporarily disable the intent log - if your overall performance increases significantly, you would benefit from an SSD log device.
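A sketch of that test, assuming your ZFS build supports the sync property and the pool is named tank (testing only, as this disables synchronous write guarantees; revert afterwards):

zfs set sync=disabled tank
# ... run the workload and compare throughput ...
zfs set sync=standard tank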



Adding an SSD as a L2ARC will help your reads if you have a rather compact "hot" area on your filesystem which is read in a random fashion frequently. L2ARC does not cache sequential transfers, so it would be rather ineffectual for streaming loads.


should I remove an old laptop battery to save energy?

Does the old battery of an old laptop use a lot of energy when the laptop is connected to AC? Does the power supply waste electricity trying to continually charge an old battery? If I remove the battery, how many Watts of electricity would I save? Would the laptop stay cooler?


The power supply is very hot in both cases. The laptop is a Toshiba Satellite from 2004/2005.


This is a slightly different question from this one: Should I remove my laptop battery?, so don't repeat the same answers. I've read about the UPS/power-buffer effect, about the life of the battery, etc. I don't care much about the battery, and I boot the laptop from DVD. There is no hard disk. I care about keeping the laptop cooler and spending less electricity.

laptop - Battery plugged in, not charging


(screenshot: Plugged In, Not Charging)


This is a new battery, and it has shown the problem of "Plugged In, Not Charging" for the past two days. I have no clue as to why this is happening. Sometimes it does, sometimes it does not. Battery wear is 8%.
Sometimes its status is "Plugged In, Charging":
(screenshot: Plugged In, Charging)


If someone could please help me diagnose this.


Answer



This is not a defect or a problem. Your battery firmware or software driver is probably configured to forego charging when the battery is very near to 100%, to save on charge cycles. Rechargeable batteries have a limited lifespan, and repeated charging can shorten it. Charging the battery also heats it up, which can shorten its lifespan. The firmware is just trying to protect you from these and increase the lifespan of the battery.


If this situation is unfamiliar to you, then it is likely that you have used devices in the past which report false charge levels to the user. For example, some devices will say that they are "100%" charged, even though the battery's theoretical maximum capacity has not been completely filled up. This user interface trick is sometimes used to prevent consumers from being concerned by exactly the symptoms of a non-problem which you are seeing.


MSDN Windows 7 Ultimate Product Key -> Anytime Upgrade Key? How?




How do I use a MSDN product key to perform a Windows Anytime Upgrade to Win7 Ultimate?



Alternatively, do I need to download the Windows 7 Ultimate ISO and perform an upgrade that way? I do NOT want to have to do a risky install.



Thanks.


Answer



You can't use an MSDN key for an anytime upgrade. You have to find regular installation media. If you have MSDN access, you should be able to download a legit ISO from Microsoft directly. You can still perform an upgrade installation, it will just be running from the DVD you burn. What "riskiness" are you referring to? If you use genuine media with a genuine key from MSDN, and do an upgrade install, there shouldn't be any significant risk.



With some MSDN keys (it's unclear why this occurs with some and not others), you may be informed that an upgrade installation can only be done with Windows Anytime Upgrade, not from the DVD. However, as mentioned, you cannot do an anytime upgrade with an MSDN key. The only solution I've found is to use a regular upgrade or retail key (if you have any from other installations) to initiate the upgrade process, but choose not to activate automatically. Once Windows is done upgrading, you can go to the System control panel and change the activation key to your MSDN one, and activate with that. This will work (I've done it personally), but you need to have some other license key for it to work.
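The key swap at the end can also be done from an elevated command prompt with the built-in slmgr.vbs script (/ipk installs the key, /ato activates online; the X's are a placeholder for your MSDN key):

slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr.vbs /ato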



Windows Update fails with code 80244019


I just found that Windows Update is failing and hasn't installed any updates for a little more than a month.


It says:


Windows could not search for new updates
There was a problem searching for updates
Errors found: Code 80244019

This article at Microsoft suggests it might be a virus: Windows Update error 80070422, 80244019, or 8DDD0018


The operating system is Windows 8.1 Pro with Media Center.


Answer



I've found an online discussion that suggests that this has been happening because of the increased load of Windows 10 distribution.


What worked for me was disabling "Give me updates for other Microsoft products when I update Windows".


Uneven Cassandra load

Should a three node Cassandra cluster with a replication factor of 3 have the same load value for all three nodes?



We are using the RandomPartitioner and NetworkTopologyStrategy. nodetool ring shows equal values for "Owns" but unequal values for "Load".




Load      Owns     Token
                   113427455640312821154458202477256070484
16.53 GB  33.33%   0
14.8 GB   33.33%   56713727820156410577229101238628035242
15.65 GB  33.33%   113427455640312821154458202477256070484


Running nodetool repair and cleanup on each node brought the load a little closer but it still seems quite unbalanced.




Is this considered normal?

cmd.exe - WMIC Output Result Without Property Name


I'm entering this line:


wmic /OUTPUT:D:\DriverVersion.txt path win32_VideoController get driverversion

The txt file has two lines in it:


VariableValue
XX.XXX.XXX.XX.X

But I don't want VariableValue in the output. I simply want the value (XX.XXX.XXX.XX.X).


How can I do this?


Answer



You could try the following:


(@for /F "delims=" %I in ('wmic path Win32_VideoController get DriverVersion /VALUE') do @for /F "tokens=1* delims==" %J in ("%I") do @echo/%K) > "D:\DriverVersion.txt"

But this converts the Unicode output data to ASCII/ANSI.


If you want to use this code within a batch file, make sure to double all %-signs.
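For example, the same line inside a batch file would read:

(@for /F "delims=" %%I in ('wmic path Win32_VideoController get DriverVersion /VALUE') do @for /F "tokens=1* delims==" %%J in ("%%I") do @echo/%%K) > "D:\DriverVersion.txt"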


domain name system - Nameserver working for all TLDs except .org




I've recently set up a private name server (ns1.mediamechanic.net / ns2.mediamechanic.net), and it appears to be working for everything except our .org domains (see obapps.org).



As far as I can tell, the failure is happening before the request ever makes its way to our server, so presumably something is wrong on the side of our registrar (eNom).



When I do a trace on a working TLD (.com, .net) I get this:



===================================================
Sending request to "e,gtld-servers.net" (192.12.94.30)
===================================================

Received referral response - DNS servers for "iclaimpreview.com":
-> ns1.mediamechanic.net (216.114.240.114)
-> ns2.mediamechanic.net (208.115.254.250)
===================================================
Sending request to "ns1.mediamechanic.net" (216.144.240.114)
===================================================
Received authoritative (AA) response:
-> Answer: A-record for iclaimpreview.com = 216.114.240.114
-> Authority: NS-record for iclaimpreview.com = ns2.dallas-idc.com
-> Authority: NS-record for iclaimpreview.com = ns2.dallas-idc.com

===================================================


A .org yields this:



===================================================
Attempting to resolve DNS server name "ns1.mediamechanic.net" (details not logged)
===================================================
Failed to resolve DNS server name - error: No such host is known
===================================================

Attempting to resolve DNS server name "ns2.mediamechanic.net" (details not logged)
===================================================
Failed to resolve DNS server name - error: No such host is known
===================================================
Failed to resolve - no more DNS servers left to try
===================================================


It seems that for the .org, it’s unable even to find the name server, which doesn’t make a ton of sense, so I’m at a loss.


Answer




The error messages in the question actually say it all, the problem is simply that the names ns{1,2}.mediamechanic.net referenced in your NS records do not resolve at this point in time.



For .com/.net it "kind of works" despite there being an obvious problem because many resolvers just use the received glue without looking up the authoritative records. Both these TLDs are on the same set of nameservers so glue is provided in both these cases.



In the case of .org that TLD is on an entirely different set of nameservers so there is no glue there. The resolver will then try to look these names up and that is currently impossible.
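One way to verify this from any machine with dig available is to check whether the nameserver names resolve at all, and then to walk the delegation for the .org domain:

dig +short ns1.mediamechanic.net A
dig +trace obapps.org NS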


Wednesday, October 29, 2014

ubuntu - Creating a Windows XP Professional PE USB ISO on Linux

I am attempting to create a Windows XP PE ISO on Ubuntu 16.04. So far the best way I have found is the apparently popular Gandalf PE version, which I downloaded and burned to USB.



I am using UNetbootin on Ubuntu to burn the ISO file. The ISO I selected was the latest available: http://windowsmatters.com/2016/11/08/gandalfs-win10pe-x86-redstone-build-14393-version-11-07-2016/



But when I insert the USB and boot from it on the computer that contains Windows XP Professional, it says that there is no boot file on the USB and starts booting normally.



I am not sure what I am doing wrong, but does anyone have experience creating a bootable USB ISO with Gandalf on Ubuntu, to boot on a different computer?



What I need is command prompt access, to reset the Windows password.

file recovery - Get my overwritten Excel back

I have an old Excel file on a USB drive. Unfortunately, I accidentally copied it into a folder that contained a newer version with the same name and overwrote the newer version. I want to get the overwritten Excel file back; can anybody help me?

64 bit - can't install tortoiseSVN 64 bit version on Windows 7


I have installed both the TortoiseSVN 32-bit and 64-bit versions on a 64-bit Windows 7 machine. I uninstalled all the previous versions of TortoiseSVN (I can't remember which versions were previously installed) and tried to install the TortoiseSVN 1.8.3.24901 64-bit version, but I got this message and the installer exited:


Please uninstall all 32-bit versions of TortoiseSVN before installing TortoiseSVN 1.8.3.24901(64 bit)

I rebooted the machine and cleaned the registry, but I still get the same message. How can I prevent this?


Answer



Sometimes some garbage stays behind, in unknown folders. I had a little nightmare when I installed a 64-bit version on Vista 64, then a 32-bit one, and then tried to return to 64-bit.


I would suggest the approach I used: install a 32-bit version again, uninstall it, and then try installing the 64-bit version you wanted. This way, whatever dependencies TortoiseSVN has will be mapped again by the new 32-bit installation and will likely be removed when you uninstall it.


EDIT: according to lakshman's observation, it is not recommended to restart Windows after uninstalling the 32-bit version.


windows - Wait for a process to complete in CMD


I want to write a batch file that executes another batch file, waits for it to complete (i.e. waits till the CMD window closes), and then starts another application (.exe). How can I do that? I've tried this, but it runs both processes simultaneously:


start "" "C:\Program Files\batch1.bat" /w
start "" "C:\Program Files\process1.exe"

P.S: I'm not sure if it matters, but the batch1.bat file that I mentioned executes a group of programs which takes a few seconds to complete.


Answer



Your basic error is the positioning of /w in the start command: in your command it is a parameter to batch1, not to start. You should use:


start /w "" "C:\Program Files\batch1.bat"

However, it is more efficient not to start a new cmd process and instead use:


call "C:\Program Files\batch1.bat"

Running something on Windows startup with elevated privilege


To launch a .exe on Windows startup with administrator privileges, I know that:



  • the HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run method is not good, because it will always display a prompt on each startup "Are you sure to ...",


  • the TaskScheduler method works,



but:



  • what about adding a shortcut to the .exe in Startup folder of the Start menu?


I've tried it, but it silently fails - the .exe doesn't start.


I also tried to edit the properties of the shortcut: Compatibility tab > Run as admin, and also Settings for all users button, then Run as admin.


Again, it silently fails to start.


Question 1: How to make a .exe with elevated privilege start on Windows startup with a shortcut in Startup folder?


Question 2: Will this show a UAC prompt on each startup?


Answer



Simple answer, you cannot.


This is a security violation. If this were possible, malware could easily install itself on a target system. To prevent applications from silently running with administrative privileges, the methods you described that work are the only ways available. They require administrative privileges to set up, which is what prevents malicious programs from taking over without the user's consent.


If you somehow force a program to run with administrative privileges at startup through the Startup folder (this requires scripts and whatnot), it will trigger a UAC prompt.


Only the Task Scheduler method can be used to do it without a UAC prompt. Of course, setting up the task requires UAC in the first place.
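For completeness, a sketch of creating such a task with the built-in schtasks tool (the task name and path are placeholders; run this from an elevated prompt):

schtasks /create /tn "MyStartupApp" /tr "C:\path\to\app.exe" /sc onlogon /rl highest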


windows 10 - Google Chrome Hijacks my Microsoft Keyboard Media Keys

I have the Microsoft Natural Ergonomic Keyboard 4000 which includes media keys. Whenever Google Chrome (Version 38.0.2125.104 m (64-bit) on Windows 8.1 Update 1) has focus, the media keys (pause/play) do not work.


I followed the instructions here which include:



  • Open the Chrome app menu

  • Select Tools > Extensions

  • Click the ‘Keyboard Shortcuts’ link at the bottom of the page

  • Find the Google Play Music section

  • Change any specified media key options from ‘Global’ to ‘In Chrome’


Unfortunately, there is no 'Google Play Music' section or any other section that shows that there are any settings for the media keys.


How do I keep Chrome from blocking the media keys on my keyboard?


NOTE: Volume keys work; just the play/pause key is affected, and this behavior occurs only when Chrome is in focus. With any other program in focus, the play/pause key works fine.

apache 2.2 - WAMP different sites on different ports accessible on LAN




I have a small windows server set up on a LAN, with static IP address 192.168.1.100.

I have a few other client machines, say 192.168.1.101 - 104.



Requirements:




  • Host an apache server (wampserver) on the main server, accessible only on the LAN.

  • Set up the default wampserver tools (such as phpmyadmin) on port 8080, accessible only from the server machine

  • Use port 8081 for a special internal site, accessible by all machines on the LAN




My current setup as follows:



httpd.conf:



ServerRoot "c:/wamp/bin/apache/apache2.2.22"

Listen 8080
Listen 8081

ServerAdmin admin@localhost

ServerName localhost:8080
DocumentRoot "c:/wamp/www/"


Options FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all




Options Indexes FollowSymLinks
AllowOverride all
Order deny,allow
Deny from all
Allow from 192.168.1



Options Indexes FollowSymLinks

AllowOverride None
Order deny,allow
Deny from all
Allow from 192.168.1



AllowOverride None
Options None
Order deny,allow

Deny from all
Allow from 192.168.1



httpd-vhosts.conf:



Listen 8080
Listen 8081


NameVirtualHost *:8080
NameVirtualHost *:8081


ServerName localhost
DocumentRoot c:/wamp/www



ServerName site1

DocumentRoot c:/site1




  • I have opened up port 8081 on the windows server

  • I have added "site1" to point to 192.168.1.100 on the hosts files of the client machines

  • I have added an alias on the server



    Alias /site1/ "c:/site1/"





    Options Indexes FollowSymLinks MultiViews
    AllowOverride all
    Order allow,deny
    Allow from all




The problem now is that the behaviour is not quite what I need.




Current behaviour on the server:




  • 192.168.1.100:8080 serves me c:/wamp/www as expected

  • 192.168.1.100:8081 also serves me c:/wamp/www instead of c:/site1 that I expect

  • instead, 192.168.1.100:8081/site1 serves me c:/site1



Current behaviour on client machines:





  • site1:8081 (or 192.168.1.100:8081) serves me the c:/wamp/www on the server, instead of c:/site1 that I expect. I don't want c:/wamp/www accessible from clients.

  • instead, site1:8081/site1 (or 192.168.1.100:8081/site1) serves me the c:/site1 on the server.



What am I doing wrong?


Answer



Maybe an explanation of how name-based virtual hosts work is helpful here.




When a browser sends a request for 192.168.1.100:8081, what it does is connect to 192.168.1.100, port 8081, and subsequently send an HTTP request. This looks (simplified) a bit like this:

GET / HTTP/1.1
Host: 192.168.1.100


Apache now needs to find out from which virtual host it will service the response. It does this by looking at the IP:port pair, and if a NameVirtualHost statement exists for that IP:port pair, it also looks at the Host: header. The important thing to be aware of here is that if you call up a site by IP, the Host: header will contain the IP address, not the name of the host. You need to use names (and they need to resolve to the correct IP).



If Apache can't find a virtual host that matches the IP:port:host combination, it defaults to the first VirtualHost section. And this is what is happening here. Just swap your two sections around and see what happens...




What you need to stop doing here is confusing Apache by mixing name-based virtual hosts and port-based virtual hosts. In other words, you need to remove the NameVirtualHost directives. You don't need them.
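For illustration, a pared-down httpd-vhosts.conf along those lines (a sketch based on the config above, not a verified final file) could look like:

Listen 8080
Listen 8081

<VirtualHost *:8080>
    ServerName localhost
    DocumentRoot "c:/wamp/www"
</VirtualHost>

<VirtualHost *:8081>
    ServerName site1
    DocumentRoot "c:/site1"
</VirtualHost>

With exactly one VirtualHost per port and no NameVirtualHost lines, the Host: header no longer matters; the port alone selects the site.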



One last remark: if the aim is to block everyone but the server itself from the wamp directory, you need to change something else in your config too:




<Directory "c:/wamp/www/">
    Options Indexes FollowSymLinks
    AllowOverride all
    Order deny,allow
    Deny from all
    Allow from 192.168.1.100
</Directory>



This way only the server gets to see this dir...


nginx - 502 Bad Gateway/ failed (111: Connection refused) while connecting to upstream

I have the following docker-compose file



web:
  build: nginx/.
  container_name: nginx
  ports:
    - "80:80"
  links:
    - "openchain"
  restart: always
wallet:
  build: wallet/.
  container_name: wallet
  ports:
    - "81:81"
  restart: always
  read_only: false
  volumes:
    - ./www:/usr/share/nginx/html:rw
  working_dir: /user/share/nginx/html
openchain:
  build: openchain/.
  ports:
    - "8080"
  volumes:
    - ./data:/openchain/data
  restart: always



And the following conf for web and wallet respectively



worker_processes 4;

events { worker_connections 1024; }

http {
    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}



and



worker_processes 4;

events { worker_connections 1024; }

http {
    server {
        listen 81;

        location / {
            root /usr/share/nginx/html;
        }
    }
}




when I run docker ps I get



05e351c8f8db   openchain_web         "nginx -g 'daemon off"   5 seconds ago   Up 5 seconds   0.0.0.0:80->80/tcp, 443/tcp           nginx
e7401ea7c5bc   openchain_wallet      "nginx -g 'daemon off"   5 seconds ago   Up 5 seconds   80/tcp, 443/tcp, 0.0.0.0:81->81/tcp   wallet
40439fdb1c69   openchain_openchain   "dotnet openchain.dll"   5 seconds ago   Up 5 seconds   0.0.0.0:32774->8080/tcp               openchain_openchain_1


But when I try to reach the openchain container through the openchain_web proxy, I get the error from the title.




I'm new to Docker, so I'm not sure I'm proxying correctly with nginx.



Can you tell me what I did wrong?



P.S. I can access wallet just fine
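For what it's worth: inside the web container, 127.0.0.1 is the nginx container itself, not the host and not the openchain container. With the links: entry above, the linked service should be reachable by its name, so a plausible fix (a sketch, not tested against this exact setup) is:

location / {
    # "openchain" resolves to the linked container; 8080 is its internal port
    proxy_pass http://openchain:8080/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}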

iptables - Routing and OpenVPN not running on the default gateway

I'm having a difficult time setting up the correct iptables rules to route OpenVPN traffic to my internal OpenVPN server.



My network is similar to this




                      +-------------------------+
           (public IP)|                         |
{INTERNET}============{ eth1       Router       |
                      |                         |
                      |          eth2           |
                      +------------+------------+
                                   | (192.168.0.254)
                                   |
                                   |   +-----------------------+
                                   |   |                       |
                                   |   |  OpenVPN              |  eth0: 192.168.0.1/24
                                   +---{ eth0     server       |  tun0: 10.8.0.1/24
                                   |   |                       |
                                   |   |        {tun0}         |
                                   |   +-----------------------+
                                   |
                          +--------+-----------+
                          |                    |
                          | Other LAN clients  |
                          |                    |
                          |  192.168.0.0/24    |
                          |  (internal net)    |
                          +--------------------+




So basically, I want the router to accept and forward incoming VPN traffic to the internal OpenVPN box. Then I want the OpenVPN box to take the traffic from the eth port and send it to tun.



Here is what I tried:



iptables on the router:



iptables -A INPUT -i tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -j ACCEPT




# Allow udp 1194 #
iptables -A INPUT -p udp --dport 1194 -j ACCEPT



# Allow traffic initiated from VPN to access LAN
iptables -I FORWARD -i tun0 -o eth2 \
-s 10.8.0.0/24 -d 192.168.0.0/24 \
-m conntrack --ctstate NEW -j ACCEPT






iptables -I FORWARD -m conntrack --ctstate RELATED,ESTABLISHED \
-j ACCEPT




iptables -t nat -I POSTROUTING -o eth0 \
-s 10.8.0.0/24 -j MASQUERADE



iptables on the OpenVPN box:



Can anyone give me a pointer on how to fix this problem?
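One piece that appears to be missing above is an actual port-forward (DNAT) on the router; the rules shown only accept and masquerade. A minimal sketch, assuming the router is a Linux box running iptables and the interface names match the diagram:

# On the router: forward incoming OpenVPN traffic (UDP 1194) to the internal server
iptables -t nat -A PREROUTING -i eth1 -p udp --dport 1194 \
         -j DNAT --to-destination 192.168.0.1:1194
iptables -A FORWARD -i eth1 -o eth2 -p udp -d 192.168.0.1 --dport 1194 -j ACCEPT

# On the OpenVPN box: make sure kernel forwarding between eth0 and tun0 is on
sysctl -w net.ipv4.ip_forward=1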

How would I query, then toggle Windows updates off and on via a batch file?

Please take it easy on me if I speak out of place, as this is my first post. :-) I've been looking for a way to query the Windows Update service through a batch file and, based on its current state, either turn it off or on. Ultimately, I want to be able to query whether the service is started, and then stop it. On top of that, I want to query whether it's set to auto-start with Windows, and disable that. Then I want the same batch file to go the other way: if it's on, turn it off; if it's off, turn it on. Or possibly query, then give the option to leave it as-is or change the state and startup type of the service.



I found out how to do these things separately (mostly through this site) by using the following commands:





  • sc start wuauserv

  • sc stop wuauserv

  • sc query wuauserv

  • sc config wuauserv start= auto

  • sc config wuauserv start= disabled



So with these I can create two files: one to stop the service and disable it on startup, and another to start the service and set the startup type to auto. But I would like to do all this with one file if possible.




Sorry this was so long, but I wanted to make sure I got my goal across and show that I did do some preemptive research.
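A toggle like that can be done in a single .bat file by parsing the sc query output. A minimal sketch, assuming the stock wuauserv service name and an elevated prompt:

@echo off
rem Toggle the Windows Update service based on its current state.
sc query wuauserv | find /i "RUNNING" >nul
if %errorlevel%==0 (
    rem Running: stop it and disable auto-start
    sc stop wuauserv
    sc config wuauserv start= disabled
) else (
    rem Stopped: re-enable auto-start and start it
    sc config wuauserv start= auto
    sc start wuauserv
)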



Update:
Had to zoom in a little bit on your example:
[screenshot]
This is what mine looks like when I right-clicked on the .bat file I created and ran it as admin:
[screenshot]



I noticed that you appear to be using Windows 8.1 and I'm on Windows 7. Does that make a difference?

windows xp - what can cause a folder to become indestructible?


I have a directory that I want to delete, but windows (xp home sp3) is giving me the run-around and the folder is now effectively indestructible.


Attempts to open the folder, either via explorer or cmd.exe are met with 'd:/temp/foo Is Not Accessible. Access is denied'.


Attempts to delete the folder result in 'Cannot delete foo: The directory is not empty'


So I can't delete it because supposedly it's not empty, but windows won't let me in it for some reason, so I can't clean it out first. There's nothing in it of consequence, and basically I just want to delete it at this point.


Thinking that some other process must have a lock on it, I used the SysInternals 'handles' and Process Explorer to look for open handles with the directory name. These turned up no matches. (The directory name is not actually 'foo', it is something more unique but 'foo' is easier to type here).


I put the machine through a restart, and the problem persists. I did a search for the folder name with regedit, to see what other apps might be aware of it. No match.


The properties dialog was mildly interesting. The Read-Only attribute is 'semi-checked', i.e., the grayish check mark you get when some parts are and some parts aren't. Naturally I immediately unchecked this, and tried to delete the folder. No go. Opening properties again reveals the gray check mark next to Read-Only has returned. All the stats, size, size on disk, files, folders, all these are zero. There do not appear to be any shares on the folder, so that's not it either.


Finally, I tried opening the partition's properties, and running the Tools/Error Checking utility. This didn't turn up any problems either.


Fwiw, this directory was created by [a popular gui zip tool] when I tried to unpack a tar-and-zipped archive created on another system with command line utils. The archive was definitely corrupt, but I've never seen such a file do anything worse than crash the zip app, and certainly never leave permanent glitches in the file system.


So what else can possibly be going on to make this folder behave this way?


Answer



It could also be security. Right-click the subdirectory, go to Properties, then Security. What users/groups have rights to the subdirectory? Try adding Everyone and giving it full rights, then save and see if you can open the subdirectory. If you can, try to delete it.


php - Curl POST - 411 Length Required



We have a RestFUL API we build in PHP. If we make the request:




curl -u api-key:api-passphrase https://api.domain.com/v1/product -X POST


We get back:



411 - Length Required


However, if we simply add -d "" to the request, it works with no 411 error. Is there a way to avoid having to add -d to the curl command?




We are using lighttpd web server, and believe its lighttpd NOT php who is returning the 411 error.


Answer



You are correct -- lighttpd doesn't support POST requests with an empty message body without a 'Content-Length' header set to zero, and CURL sends such a request. There's argument back and forth about who's right, but in my opinion, lighttpd is broken. A POST with no Content-Length and no Transfer-Encoding is perfectly legal and has no message body.



Adding -d "" causes CURL to send a Content-Length: 0 header, which resolves the problem.
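If adding -d "" is undesirable, the same zero-length indication can presumably be sent by setting the header explicitly, e.g.:

curl -u api-key:api-passphrase -H "Content-Length: 0" -X POST https://api.domain.com/v1/product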



You could modify lighttpd: find the code that issues the 411 error and instead have it assume a content length of zero.


cron - Unable to send external email via crontab

I'm trying to send email out from crontab. I've tried making crontab run a basic shell script, as well as specifying the actual command within the crontab, and I've tried doing this with both mpack and ssmtp. I've noticed that if I execute the command or shell script in a terminal, it works fine. It only fails when I try to schedule it.




This is the basic essence of the command I need to run, where I'm emailing abc@abc.com the contents of a file. The file is generated daily and is named after the year, month, and day.



echo -e "to: abc@abc.com\nsubject: abc123\n" | ssmtp abc@abc.com < `date +%y%m%d`.txt


Similar thing with mpack



mpack -s "abc123" `date +%y%m%d`.txt abc@abc.com



I've figured out that it has something to do with the date command. If I substitute it with the actual name of the file, then it all works fine. I've made sure to escape the % symbol, and have tried replacing the backticks with $(date +%y%m%d), with no luck.



Crontab looks like this



10 10 * * * /home/user/./script.sh


Also tried this method



10 10 * * * echo -e "to: abc@abc.com\nsubject: abc123\n" | ssmtp abc@abc.com < `date +\%y\%m\%d`.txt



I've made sure that the shell script includes #!/bin/sh, checked all file permissions, and extended the PATH to include the directories for ssmtp and mpack.



Any suggestions why the date substitution is making this fail? Do I need to escape anything else?



Thanks
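One more thing worth checking: cron runs jobs from the user's home directory with a minimal environment, so a relative filename like `date +%y%m%d`.txt may not point at where the file actually lives. A sketch with an explicit cd (the directory path is a placeholder):

10 10 * * * cd /path/to/reports && mpack -s "abc123" "$(date +\%y\%m\%d).txt" abc@abc.com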

amazon web services - AWS - Redirect Naked Domain to WWW



I'm hosting a static website on AWS and want the naked domain to redirect to www.

For example, if the user enters example.com I want it to show as www.example.com.



I found this question which is the exact same question as mine, but I want to ask a few more specifics before I take my site offline to change this.



I followed the AWS tutorial to deploy a static website. So if I want my root bucket to redirect to the www bucket, I will deploy the HTML/CSS/JS files to the www bucket and then set the root bucket to redirect?



Lastly, when I set the bucket policy. This is how the tutorial explained to do it for the root domain:



{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Allow Public Access to All Objects",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example.com/*"
    }]
}



I wouldn't need this anymore and instead would put this on the www bucket. However, do I change the Resource to www.example.com/*?



{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Allow Public Access to All Objects",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::www.example.com/*"
    }]
}


So it would be



"Resource": "arn:aws:s3:::www.example.com/*"



Is that how you would set it up?


Answer



Create your www.example.com bucket and set it up for hosting a static website. Apply your policy (pasted here for completeness):



{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Allow Public Access to All Objects",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::www.example.com/*"
    }]
}


The Resource property is updated with the current bucket.




Once that's in place, deploy your website to this bucket. Update your Route 53 records to point www.example.com to this bucket. You should then test www.example.com to ensure it's working. If not, fix it. Only when it's working should you continue.



Once the www.example.com is working, then you would modify your example.com bucket to simply redirect to www.example.com. No need to modify your Route 53 records since it's already pointing to your example.com bucket.
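As a rough illustration (not part of the tutorial), the redirect itself can be set in the S3 console's static website hosting panel ("Redirect all requests to another host name"), or with the AWS CLI along these lines:

aws s3api put-bucket-website --bucket example.com \
    --website-configuration '{"RedirectAllRequestsTo": {"HostName": "www.example.com", "Protocol": "http"}}'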



Once this is done, your browser should hit example.com and then be redirected to www.example.com.



When you're all done, you can delete all the objects from your example.com bucket since they're not being accessed anymore. You can also remove any custom permissions/policies on that bucket.


email - Should I reject mail to test domains?

RFC6761 states about example domains (such as example.com):




Application software SHOULD NOT recognize example names as special and SHOULD use example names as they would other domain names.





Currently, these example domains are set up with a webserver that explains their purpose. The domains lack MX records; they don't have a null MX record either. Because of this, an MTA will try to deliver mail to the A record, which is the IP of the web server; it doesn't accept mail, so messages queue on my MTA until they eventually expire.



Clearly, following RFC6761 doesn't work so well if you're a postmaster.



Are there disadvantages to rejecting all mail to example addresses? Are there any sources that have a recommendation about this?







EDIT For context: We automatically check queue size, and if it gets too big, someone has to check manually why that is happening. Lately it happens because one application sends to example domains. Naturally, it should not do that, and the correct solution is to fix the application, but that's not going to happen for reasons I won't get into. Given that situation, I feel rejecting these mails in our mail filters is a better solution than ignoring them in our alerting software.



FWIW: I agree with the "you should not need to do this" sentiment, but sometimes the world is not perfect.
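If rejecting is the route taken, one way to do it (a sketch, assuming Postfix; other MTAs have equivalents) is a transport map that fails these domains immediately instead of queueing:

# /etc/postfix/transport
example.com    error:5.1.2 example domains do not accept mail
example.net    error:5.1.2 example domains do not accept mail
example.org    error:5.1.2 example domains do not accept mail

together with transport_maps = hash:/etc/postfix/transport in main.cf, followed by postmap /etc/postfix/transport and a reload.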

Tuesday, October 28, 2014

macos - How can I use Boot Camp again now that there is a recovery partition between my Mac and Windows partitions?


It's been a very long time since I booted natively into Boot Camp (I've been using virtual machines for a while), but I'd like to do it again to play Windows-exclusive games. When I tried, though, I got a message about the partition not being bootable.


So, I thought I would just reinstall Windows, since I now have Windows 8 and there's nothing important on my Windows 7 partition. So I fired up the Boot Camp assistant and told it to merge my Windows partition into my Mac OS partition again, and it failed. It told me, descriptively, that "[my] disk cannot be restored to a single partition". The Windows partition is gone, but my Mac OS partition is not bigger than it was before.


I tried again from the Disk Utility, and I got a message that told me, instead, that this is because of a filesystem error.


So I went and looked at a lower level at what's going on and found out that either Lion or Mountain Lion took a 650 MB chunk at the end of my Mac OS partition to put the recovery partition in, which is now sandwiched between the Mac OS partition and the (now defunct) Windows partition:


$ sudo gpt -r show disk0
      start       size  index  contents
          0          1         PMBR
          1          1         Pri GPT header
          2         32         Pri GPT table
         34          6
         40     409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
     409640  720414896      2  GPT part - 48465300-0000-11AA-AA11-00306543ECAC
  720824536    1269544      3  GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
  722094080  254679055
  976773135         32         Sec GPT table
  976773167          1         Sec GPT header

Partition index 1 is the 200 MB EFI partition; index 2 is my Mac OS partition; index 3 is the recovery partition; and that free block is the place where my Windows partition was before the Boot Camp assistant trashed it.


Obviously, with a new partition in the way, the system can't reintegrate the free space into my Mac OS volume.


I don't really care about the partition layout since I planned giving Windows 8 the exact same size. However, I'm concerned that I won't be able to install Windows again with this current setup.


How can I install Windows 8 on my Mac, without breaking my Mac OS install, under these circumstances?


Here are some things I have been thinking about:



  1. Use gpt to create a partition in the free space, then newfs to create something Windows can install on, but this does not modify the MBR in a Boot Camp-compatible way;

  2. Move the recovery partition to the far end of the disk (right before the secondary GPT table) and then use Disk Utility to merge the contiguous free space into my Mac OS partition, and then use the Boot Camp assistant normally, but I have no idea how to move partitions;

  3. Back up everything, bulldoze everything on the disk, freshly reinstall Mac OS, freshly install Windows, restore backups on Mac OS, but that's gonna take forever, and I can't do it until I get a big enough drive (I'm temporarily not doing any back ups, unfortunately).


Any guidance?


Answer



There are two basic approaches to installing Windows on your system with minimal disruption. Both begin with creating GPT partition(s) for Windows in that big block of free space. You can then do one of two things:



  • Create a fresh hybrid MBR on the disk that refers to the Windows partitions alone. Although most of the "traditional" tools for doing this, like gptsync, won't work in your situation, some will. My own GPT fdisk (gdisk) is one that can do the job. You can then boot the Windows installer in BIOS mode and install in that way. You should then be able to boot using the Option key to enter Apple's boot manager or use a third-party boot manager like rEFIt or rEFInd.

  • Wipe any hybrid MBR data that might exist on the disk and restore a traditional legal protective MBR. You can then boot the Windows 8 installer in EFI mode and install it that way. You may also want rEFIt or rEFInd as a boot manager. This approach is theoretically cleaner, but AFAIK Apple doesn't support it, and it may not work well on all computers. This lengthy forum thread describes efforts to do this, initially with Windows 7 but later with Windows 8. There are probably other forum posts that describe how to do it, but I don't have references offhand.


I strongly recommend you read that first link I provided on hybrid MBRs; they're an ugly hack that break down easily, and your configuration requires doing things with them that could easily lead you into trouble. If you understand them, you're less likely to have problems with a hybrid MBR -- whether it's to create a new one or verify that all traces of your old one are gone.
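For the first approach, a gdisk session to build the fresh hybrid MBR might look roughly like this (a sketch; prompts abbreviated, and the partition number depends on what you create in the free space):

$ sudo gdisk /dev/disk0
Command (? for help): r                           # recovery & transformation menu
Recovery/transformation command (? for help): h   # make a hybrid MBR
  (enter the GPT partition number(s) of the Windows partition(s) to include)
Recovery/transformation command (? for help): w   # write the table and exit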


windows 10 - Administration rights lost

I appear to no longer be an administrator. When I try to change anything I get the message "...is managed by your system administrator".


Can anyone help me get my status back?


(I'm using win10).

Maintenance (TRIM) of SSDs in HW RAIDs

I have 2 ARECA 8040 HW-RAIDs, with 8 SSDs each. One is RAID10 with Intel 520 SSDs, the second is RAID5 with Samsung 840 SSDs. Both are connected to the Server with one shared LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 card.



I am heavily reading/writing/deleting on the RAIDs. From my measurements I am convinced that a TRIM command or a SECURE ERASE must be issued to restore the original performance.




The HW-RAID neither accepts the command nor passes it through to the SSDs:



fstrim -v /media/ssdraid1/ 
fstrim -v /media/ssdraid2/


both fail as unsupported.



The KISS solution I see is: Move all data to other disks. Unmount the RAID volume. Shut it down. Take out all SSDs and connect them to SATA directly. Issue the TRIM (preferred) or SECURE ERASE (if TRIM is not supported) command. Put all the SSDs back into the RAID and move all data back onto it.




What I don't like about the KISS solution is that I have to move all the data off and back onto the RAID. It will take long, and I will need free disk space for it. One can do this without stopping the PostgreSQL database running on these RAIDs by using tablespaces, but that would mean some "touching" of a running system.



I read in the Areca manual:




A disk can be disconnected, removed, or replaced with a different disk
without taking the system off-line. The ARC-8040 RAID subsystem
rebuilding will be processed automatically in the background. When a
disk is hot swapped, the ARC-8040 RAID subsystem may no longer be
fault tolerant. Fault tolerance will be lost until the hot swap drive
is subsequently replaced and the rebuild operation is completed.




So.. now I have the following idea:



for (N = 1 to 8) {
    * Remove the Nth SSD from the running RAID
    * Connect it directly to SATA on a desktop machine
    * Issue TRIM (preferred) or SECURE ERASE (if TRIM is not supported) to restore initial performance
    * Plug it back into the RAID
    * Wait for the HW-RAID to resync the disk
}


My question(s): Is this a good idea, and if not, why not? Will it work? Do you see any problems with the RAID5 or RAID10 configuration? Should I "tell the RAID" that I will remove the drive beforehand?
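For the per-disk step on the desktop machine, something along these lines should work (a sketch, assuming Linux; /dev/sdX is a placeholder, and both variants destroy all data on that disk):

# TRIM the entire device in one go (blkdiscard is part of util-linux):
blkdiscard /dev/sdX

# Fallback if TRIM is not supported: ATA Secure Erase via hdparm
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX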

Windows Anytime Upgrade unsuccessful

I tried to upgrade my Windows 7 Home Premium to Professional but the upgrade failed and here's the error in the upgrade.log file:
DoTransmogrify failed due to error 0x80070002


I did a quick search and afterwards disabled UAC as well as my antivirus. UAC asked me to restart the computer, so I did. However, to my surprise, while the computer was booting, the upgrade process continued and finished successfully (I'm using Professional now). So I want to know why this happened. Originally it said the upgrade was unsuccessful and exited, yet when I restarted the computer, everything continued just fine. Will I run into any problems in the future?

linux - Why ext filesystems don't fill entire device?


I've just noticed that the ext{2,3,4} filesystems I'm trying to create on a 500G HDD don't use all of the available space (466G). I've also tried reiser3, xfs, jfs, btrfs, and even vfat. All of them create a fs of size 466G (as shown by df -h). However, ext* creates a fs of 459G. Disabling reserved blocks increases the space available to the user, but the size of the fs is still 459G.


The same goes for a 1 TB HDD: 932G reiserfs, 917G ext4.


So, what is this 1.5% difference? Why does it happen, and is there a way to make ext fill the whole volume?


UPD:
All tests were done on the same machine, on the same HDD, etc. It doesn't matter how 466G differs from the marketed 500G. The problem is that the size differs between filesystems.


About df - it shows total FS size, used size and free space. In this case I have:


for reiserfs:



/dev/sda1 466G 33M 466G 1% /mnt



for ext4:



/dev/sda1 459G 198M 435G 1% /mnt



If I turn root block reservation off, 435G changes to 459G -- the full size of the fs (minus 198M). But the fs itself is still 459G for ext4 and 466G for reiser!


UPD2:
Filling volumes with real data via dd:


reiserfs:



fs:~# dd if=/dev/zero of=/mnt/1
dd: writing to '/mnt/1': No space left on device
975702649+0 records in
975702648+0 records out
499559755776 bytes (500 GB) copied, 8705.61 s, 57.4 MB/s

ext2 with block reservation turned off (mke2fs -m 0):



fs:~# dd if=/dev/zero of=/mnt/1
dd: writing to '/mnt/1': No space left on device
960356153+0 records in
960356152+0 records out
491702349824 bytes (492 GB) copied, 8870.01 s, 55.4 MB/s

Either way, the dd output is clear enough.


So, it turns out that mke2fs really does create a smaller filesystem than the other mkfs tools.


Answer



There are two reasons this is true.


First, for some reason or another OS writers still report free space in terms of a base-2 system, and hard drive manufacturers report free space in terms of a base-10 system. For example, an OS writer will call 1024 bytes (2^10 bytes) a kilobyte, and a hard drive manufacturer would call 1000 bytes a kilobyte. This difference is pretty minor for kilobytes, but once you get up to terabytes, it's pretty significant. An OS writer will call 1099511627776 bytes (2^40 bytes) a terabyte, and a hard drive manufacturer will call 1000000000000 bytes a terabyte.


These two different ways of talking about sizes frequently leads to a lot of confusion.


There is a spottily supported ISO prefix for binary sizes. User interfaces that are designed with the new prefix in mind will show TiB, GiB (or more generally XiB) when showing sizes with a base 2 prefix system.


Secondly, df -h reports how much space is available for your use. All filesystems have to write housekeeping information to keep track of things for you. This information takes up some of the space on your drive. Not generally very much, but some. That also accounts for some of the seeming loss you're seeing.


After you've edited your post to make it clear that none of my answers actually answer your question, I will take a stab at answering your question...


Different filesystems use different amounts of space for housekeeping information and report that space usage in different ways.


For example, ext2 divides the disk up into cylinder groups, then pre-allocates space in each cylinder group for inodes and free-space maps. ext3 does the same thing, since it's basically ext2 + journaling. And ext4 does the exact same thing, since it's a fairly straightforward (and almost backwards-compatible) modification of ext3. And since this metadata overhead is fixed at filesystem creation or resize, it's not reported as 'used' space. I suspect this is also because the cylinder group metadata is at fixed places on the disk, and so is simply implied as being used and hence not marked off or accounted for in free-space maps.
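You can see this fixed overhead directly. A quick sanity check (a sketch; the figures assume mke2fs defaults of one inode per 16 KiB and 256-byte inodes, not necessarily the actual values here):

# Show the fixed metadata parameters of the filesystem:
dumpe2fs -h /dev/sda1 | egrep -i 'inode count|block count|inode size'

# Back-of-the-envelope: the inode tables alone on a 500 GB disk cost about
#   (500e9 / 16384) inodes * 256 bytes/inode = roughly 7.8 GB
# which is about the 466G - 459G gap seen above.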


But reiserfs does not pre-allocate any metadata of any kind. It has no inode limit that's fixed on filesystem creation because it allocates all of its inodes on-the-fly like it does with data blocks. It, at most, needs some structures describing the root directory and a free space map of some sort. So it uses much less space when it has nothing in it.


But this means that reiserfs will take up more space as you add files because it will be allocating meta-data (like inodes) as well as the actual data space for the file.


I do not know exactly how jfs and btrfs track metadata space usage, but I suspect they track it more like reiserfs does. vfat in particular has no inode concept at all. Its free-space map, whose size is fixed at filesystem creation (the infamous FAT table), stores much of the data an inode would, and the directory entry (which is dynamically allocated) stores the rest.


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...