Thursday, April 30, 2015

linux mint - Is read-only BIOS flashable?

I would like to replace Chrome OS with Linux Mint on an Acer Chromebook. The problem is that it has a read-only BIOS (thanks...!). Would it be possible/feasible to flash this, make it a read-write BIOS, and then change the OS? (I am newbie-ish, please forgive me if this is obviously not possible.)

linux - HDD appears to have both MBR and GPT


I have an external HDD which appears to have both an MBR and a GPT. The result is that Windows reads different partitions than Ubuntu and OS X do. The GPT seems correct to me (I can access and use the disk fine in Ubuntu and OS X), while the MBR has an old partition table. Is there a way to remove the MBR/fix this issue without wiping the drive?


Output from fdisk -l:


Disk /dev/sdb: 3,7 TiB, 4000787029504 bytes, 7814037167 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: 6D14A59C-0E35-4D79-AFC2-DEC63ACAA2E2
Device          Start        End    Sectors  Size Type
/dev/sdb1        2048 6176047103 6176045056  2,9T Microsoft basic data
/dev/sdb3  6176047104 7813774983 1637727880  781G Apple HFS/HFS+

From OS X diskutil list


/dev/disk2
   #:                       TYPE NAME       SIZE       IDENTIFIER
   0:      GUID_partition_scheme           *4.0 TB     disk2
   1:       Microsoft Basic Data maxntfs    3.2 TB     disk2s1
   2:                  Apple_HFS TMm        838.5 GB   disk2s3



Screenshots from Windows 10 disk management (as links, too low rep for images):


screen1
screen2


Answer



gdisk (“GPT fdisk”) has a one-step(-ish) option to create a protective MBR.


$ gdisk /dev/sdb
x
n
w

This will:



  1. Launch gdisk

  2. Enter expert mode

  3. “create a new protective MBR”

  4. “write table to disk and exit”


It may even detect the problem and offer to fix it right away. You currently have a so-called Hybrid MBR, though it’s out of sync.
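If you want to double-check before and after, gdisk can also print how it interprets the disk. This is a hedged example; the exact wording of the scan report may vary between gdisk versions:

sudo gdisk -l /dev/sdb

# In the "Partition table scan:" section you would expect to see something
# like "MBR: hybrid" while the stale hybrid MBR is present, and
# "MBR: protective" after writing a fresh protective MBR.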


Why can't I boot off another drive or change BIOS options after hibernating?



I hear that many people running Windows are able to hibernate their computer,
temporarily boot some other operating system,
then shutdown that other OS and resume Windows right where they left off.




How do I make sure that the next computer I get has this ability?



Is there some way I can get my laptop to do that, or is my laptop simply incapable of doing that?



If this laptop is incapable of doing that, is there a name for
"the kind of PC that has the ability to let me hibernate Windows, and boot some other OS, and then later resume Windows where I left off"
to distinguish them from
"the kind of PC that won't let me do that" (such as my current laptop)
?




details



On some computers (such as my HP Pavilion dm1 laptop),
after I tell Windows to hibernate and wait for all the lights to go dark, pressing the power button boots the computer directly into Windows without ever giving me the option to select another boot disk or change BIOS options.



I thought "hibernate" turned the computer completely off.
I thought turning on a computer always gave the "press ESC for BIOS options" (or some similar message) every time it was turned on after being completely off.
But that doesn't seem to be the case with some computers, such as my laptop.




(I can tell "hibernate" is different from "sleep".
When I tell Windows to "sleep", the light on the power button blinks, and then when I press any key on the keyboard, it quickly wakes up directly into windows.
When I tell Windows to "hibernate", all the lights go dark, and none of the keyboard buttons do anything except for the power button.)



Out of all the options "sleep", "hybrid sleep", "hibernate", and "shutdown",
only when I tell Windows to "shutdown" do I later get the "press ESC for BIOS options" message the next time it starts up -- only then can I choose to boot this laptop off the flash drive with my other OS.
All the other options boot directly into windows, completely bypassing the "press ESC for BIOS options" message.


Answer



What you need to do is drain all power from the machine, by unplugging it and either waiting for a while (at least 5 mins) or pressing the power button a couple of times (this attempts to start it, which will drain any capacitors in the PSU - typically, fans might spin up or LEDs might light for a bit).







This is an interesting quirk of how different motherboards (specifically, the BIOS/UEFI firmware) handle displaying the menu when hibernating. ACPI defines a special power state for hibernation, S4 (under G1, as are all other standby modes), which the BIOS can decide to handle differently, for example by booting from the same device without displaying a menu.



What you need to do is put the machine into the powered off state, S5 (under G2, normally entered when 'shut down'). The easiest way is by going to G3 first, by removing all power.



There does not appear to be an option within Windows to tell it to use S5 for hibernation - but then, that's what S4 was designed for anyway. Windows is not doing anything wrong. It may be useful to take a look at your BIOS/UEFI options/manual, since there may be an option to change which state the computer enters when an S4 signal is received.
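As a quick check from the Windows side (a hedged aside, not from the original answer), powercfg can report which standby states the system exposes, and it can also turn hibernation off entirely if you decide you would rather use a full shutdown:

rem Run these in an elevated command prompt.
rem List the sleep states Windows and the firmware expose (S1-S3, Hibernate/S4, ...):
powercfg /a

rem Optional: disable hibernation altogether (this also removes hiberfil.sys):
powercfg /h off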



The only real way to ensure that your next motherboard has this option or always displays the boot menu is through experience, whether yours or others'. This is not usually an advertised feature.




One thing to note is that the BIOS (I'm not familiar with the UEFI booting process) might load a certain device (e.g. a hard drive), but it is up to the bootloader on that device to resume from hibernation or provide additional options. The Windows bootloader will always resume when hibernated, as far as I know. I believe GRUB may be configurable to provide the boot menu every time.



(source)


hard drive - Spiking Disk Usage

So recently my HP Envy 15 was sent away to be serviced because the BIOS was corrupted by an automatic BIOS update. To fix this, they replaced the motherboard, and that is supposedly the only change they made to the computer. Since I got my HP back, it has been running a lot slower than it should. I did the obvious by running CCleaner and Malwarebytes and cleaning out the TEMP files, but none of it is improving the system performance.



My computer has 8GB RAM, an Intel Core i7-4700MQ CPU @ 2.40GHz and a 1 TB hard drive, running Windows 8.1, and the whole setup is less than 6 months old.



This is what the disk usage looks like about 20 minutes after start-up, but for the first 5-10 minutes after start-up the disk is running at 100% for most of the time.



I have been looking at the processes that are causing the most disk usage and it is System most of the time (when I start another program like Photoshop, that then accounts for most of the disk usage).



My understanding of disk usage is really lacking: when I read the stats, it says 100% (or near it), but then only 1.6MB/s write speed (for example).




It looks like that this user also has a similar situation: Windows 8.1 Update 1 Disk Usage 100%




macos - Installing Ubuntu on VirtualBox from iso


I'm having some trouble. I have VirtualBox installed on both an OS X Lion MacBook Pro and a Windows XP Lenovo laptop, and they both behave the same way. I've downloaded Ubuntu as an .iso from here: http://www.ubuntu.com/download/desktop


I set up the virtual machine and everything looks great, but when it gets to boot time it just sits there with a black screen and blinking prompt. The file command on the iso says it's a bootable image, so I'm not sure what's going wrong.


Ubuntu


Answer



When you boot the Ubuntu CD, you will first see this screen:


Ubuntu 12.04 Boot Screen


Press Ctrl while you see this screen. You will then get the Ubuntu boot menu where you can choose a language and boot options.


Select your language, then press F6 for boot options. Use your arrow keys to select nomodeset so that an x appears next to it.


Ubuntu 12.04 Boot Screen with nomodeset selected


Then press Esc. Now, use your arrow keys to scroll down to Install Ubuntu, and press Return.
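If the installed system later shows the same black screen, the nomodeset option can be made permanent. This is a hedged sketch assuming a standard Ubuntu install with the default GRUB_CMDLINE_LINUX_DEFAULT value; it is not part of the original answer:

# Append nomodeset to the kernel command line and regenerate the GRUB config
sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"/' /etc/default/grub
sudo update-grub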


windows 10 - How can I efficiently recover a permanently deleted folder at once?


There was a folder on my hard drive that I deleted and now want to recover. How can I restore an entire folder from the file system?


I have tried some tools to recover deleted files listed in these articles:


However, these programs seem wasteful because they recover files without a directory structure. I don't want to preview and then recover each file individually; I would just like to specify a folder to be restored.


How can I restore a deleted folder at once?


Answer



Most tools shown on those websites are file carvers. In order to develop a strategy for data recovery you need to understand the two main different categories of tools for recovering files:



  1. File carvers → They scan any kind of disk and try to recover known file types by checking for specific signatures. For instance, JPEG files always start with bytes FF D8. This method only works for non-fragmented files and you don't get any clue about a file's name or location.


  2. Tools that work at the file system level → They read (possibly damaged) partitions by looking at the directory tree and then use the information specified there to access files. For this reason they can access any file as long as it is listed in the file system.



In principle you might think that carvers are basically useless, due to their limitations. However, this is not correct. Carvers can recover non-fragmented files on any kind of file system, even if you don't know its format. Also, they can recover non-fragmented files after their metadata (file records) have been completely removed from the file system.
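As a small illustration of the signature idea (the filename is just an example, and the output line is only illustrative; xxd ships with vim on most distributions), you can look at the first bytes of a JPEG yourself:

xxd -l 4 photo.jpg
# 00000000: ffd8 ffe0                                ....
# The leading FF D8 is the JPEG signature a carver scans for.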


In your case, the scenario is the following:



  • you have a recently deleted folder

  • you want to rebuild its directory structure

  • you need to restore all the elements inside


Thus you won't make any use of file carvers and you should avoid them. You need a tool that "speaks" NTFS (the file system used by Windows).


You could try to recover the files from Windows directly, however that would be a terrible idea. The more you use your OS, the more likely you are to overwrite them with new data.


For this reason, stop using Windows now and boot your PC using a Linux live DVD or USB (basically any modern version will do, no matter if it is Ubuntu, Fedora or anything else). If you don't have a live DVD or USB ready, use another computer to create it or buy a magazine that includes a Linux DVD. Do not use your PC to create the bootable medium as that would write a lot of stuff on your hard disk during the operation.


When you have loaded the system, connect an external USB drive to store the recovered files.



Disclaimer: I am the developer of RecuperaBit. Moreover, the following part is based on my previous answers posted here on Unix & Linux and here on Ask Ubuntu.



Identification of the correct drive


Run sudo lsblk to identify your main NTFS partition (let's say the C: drive). The output might look a bit like this example:


$ sudo lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   32G  1 disk
├─sda1   8:1    0  500M  1 part
└─sda2   8:2    0 31.5G  1 part
sr0     11:0    1  2.8G  0 rom  /cdrom
loop0    7:0    0  2.1G  1 loop /rofs

This tells me that this drive has a small 500 MB partition (the Windows boot loader) and a larger one of 31.5 GB over the whole 32 GB disk. Hence I now know that the C: drive of the virtual machine I am testing is /dev/sda2.
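As an optional extra check (not in the original answer), blkid can confirm that the partition you identified really carries an NTFS filesystem; the output shown is only illustrative:

sudo blkid /dev/sda2
# /dev/sda2: LABEL="Windows" UUID="..." TYPE="ntfs" PARTUUID="..."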


Using TestDisk


Your partition is not damaged, since you only deleted some files. Therefore you can try to use TestDisk which is an excellent piece of software for data recovery.


If you are running a Debian-based OS (including Ubuntu) you can install it with the following command:


sudo apt-get install testdisk

After this step, run it on the drive:


sudo testdisk /dev/sda2

Follow the on-screen instructions. Basically you need to press Enter until it asks you for a partition table type (None because we are scanning a single partition).


When it shows you a list stating that the partition is NTFS, you will see some options at the bottom. Select List to show its contents. You should be able to browse the files and navigate where the original directory was.


Note that, due to how Windows handles the recycle bin, the directory might be found in C:\$Recycle.Bin and not in its original place. Basically, look for it until you find it.


If you find it, highlight it with the arrow keys and then press C. This will enter the copy mode. You need to navigate to the external USB drive (it will be somewhere in /media/, i.e. inside media in the root directory of the Linux system) and then press C again to select it as the destination directory.


Done, you have copied the whole folder!


If you don't find it, the index records of the parent directory of the deleted folder might have been cleared so the folder you are looking for is not listed anymore.


In that case, follow the next section.


Using RecuperaBit


My MSc thesis was about reconstructing heavily damaged NTFS drives. When index records get damaged or overwritten, files and directories disappear from the directory tree even though they can still be recovered.


This is why I developed RecuperaBit, which uses a bottom-up approach for NTFS reconstruction. Follow these steps to recover your folder:



  • Create a directory named recuperabit_output in your external USB drive.

  • Download RecuperaBit from GitHub and extract it into a folder.

  • Run it passing the drive and the path where to store the recovered files as arguments:


    sudo python /path/to/RecuperaBit/main.py /dev/sda2 -o /path/to/the/external/USB/drive/recuperabit_output

  • Let it scan the drive by pressing Enter.


  • Type csv 0 list.csv to generate a list of files.

  • Open the resulting CSV file with LibreOffice to find the identifier of the directory. Example:




    If I wanted to recover System Volume Information, that would be directory 31.


  • Go back to the RecuperaBit console and type restore 0 31 where 0 means the first partition, i.e. the only one you are analyzing.



There you go, you now have your files in the external USB drive, under recuperabit_output/Partition0.


Google Cloud Compute MongoDB Deployment security issue and keyFile issue


I've run into this super annoying issue today.


Basically, I have setup a MongoDB database using the GCP Marketplace offering. It sets up a primary node, secondary, and an arbiter. Which is super cool.
What it doesn't do is security. Like, at all. So only natural I had to set it up myself. Well, now 20 hours later and a few good punches to my own face I am still struggling to get it running.


Basically, this is my partial config:


security:
  authorization: enabled
  keyFile: '/etc/mongodKey'

If I comment out the keyFile the instance runs. But it cannot connect to any other nodes, because of the security being enabled. And no, I cannot disable it, are you mad?


The thing about the keyFile though... As I understand, mongod cannot open it, so it won't start. I suppose /etc is not a good place to put it in? I tried other folders, but to no avail. Nothing works.


And I need to have that security measure, since the database needs to be connected to by my colleagues using Robo 3T. So dropping the external IP address is out of the question.


What am I doing wrong? Please help, as I'm pulling my own hair out.


This is the output of sudo service mongod status:


● mongod.service - MongoDB Database Server
   Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2019-08-21 15:28:08 UTC; 4min 29s ago
     Docs: https://docs.mongodb.org/manual
  Process: 1024 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)
 Main PID: 1024 (code=exited, status=1/FAILURE)

Aug 21 15:28:08 m-vm-0 systemd[1]: Started MongoDB Database Server.
Aug 21 15:28:08 m-vm-0 systemd[1]: mongod.service: Main process exited, code=exited, status=1/F
Aug 21 15:28:08 m-vm-0 systemd[1]: mongod.service: Unit entered failed state.
Aug 21 15:28:08 m-0 systemd[1]: mongod.service: Failed with result 'exit-code'.

Edit:


I checked the mongod.log. Yes it is a permission issue. And I cannot solve it.


I tried doing sudo chmod 400 /etc/mongodKey but it doesn't do anything.
Please, someone, where do I put the key file so it is readable by mongodb? This is very important!


Answer



If you use the GCP MongoDB Marketplace deployment named "MongoDB", the one that allows you to set up replication, know the following:


They do not set up security in the initial configuration, thus there are 2 options:



  1. Turn off the External IP

  2. Enable authorization in the mongod.conf


If you go for the first solution, you won't be able to easily connect to the database from any other external sources.


If you go for the second solution, you'll need to do the following:



  1. Generate a key, the whole process can be found here: https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/


  2. Copy the file contents


  3. SSH into all of your Compute Engine instances

  4. Choose a directory

  5. sudo touch

  6. sudo nano

  7. Paste the key you generated on your computer and save

  8. sudo chmod 600

  9. sudo chown mongodb:

  10. Update your mongod.conf which is found under /etc/mongod.conf

  11. Uncomment security, authorization, keyFile

  12. Provide the path under key keyFile to your keyfile

  13. Stop all instances and start them again


Now MongoDB has access to the keyfile.
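For reference, here is a condensed, hedged sketch of steps 1-13. The key path and the mongodb user/group name are assumptions based on typical Debian/Ubuntu packages; adjust them to your deployment.

# 1-2: generate the key (run once, then copy the same file to every member)
openssl rand -base64 756 | sudo tee /etc/mongodKey > /dev/null

# 8-9: make it readable by the mongod service account only
sudo chown mongodb:mongodb /etc/mongodKey
sudo chmod 600 /etc/mongodKey          # 600 worked here; the docs suggest 400

# 10-12: in /etc/mongod.conf, uncomment/add:
#   security:
#     authorization: enabled
#     keyFile: /etc/mongodKey

# 13: restart every member
sudo systemctl restart mongod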


What a nightmare. And chmod 400 didn't work for me as specified in the documentation; I had to set it to chmod 600.


linux - Cannot properly read files on the local server

I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure.



I've been trying to import SQL files into the database using phpMyAdmin to read them from a directory on my server. phpMyAdmin lists the files fine in the drop down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents(); on the file, it also returns a "failed to open stream: Permission denied" error.



In fact, when my brother was attempting to import the SQL files using MySQL "SOURCE" as an authenticated MySQL user with ALL PRIVILEGES, he was getting an error reading the file. It seems that we are unable to read/import these files with ANY method other than root under SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration.



Some things I've tried after searching Google and StackExchange:





  • CHMOD 0777 both directory and files,

  • CHOWN root, apache (only two users I can think of that PHP would use),

  • Importing SQL files with total size under both upload_max_filesize and post_max_size,

  • PHP open_basedir commented out, or = "/var/www" (my sites are using Apache VirtualHosts within that directory, and all the SQL files are deep within that directory),

  • PHP safe mode is OFF (it was never ON)



At the moment I have solved this issue with the smaller files by using the FILE UPLOAD method directly to phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files as I don't have a stable Internet connection.



Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!

Redirect all incoming traffic from a secondary public IP to an internal IP address using iptables



I'm currently trying to figure out how I can forward traffic from a secondary public IP address of my dedicated server to an internal IP of my network using iptables in order to make e.g. webservers and the like visible from outside.



My setup is a dedicated server containing three virtual machines which form a "private LAN". The connection between those is established and the virtual machines can connect to the internet through a bridge between the isolated LAN and the physical server. Allowing outgoing traffic is established using the following rule (LAN: 192.168.x.x, Example Public Address: 8.8.8.8):



iptables -t nat -A POSTROUTING -s 192.168.1.101 -j SNAT --to-source 8.8.8.8



This works fine - if I open an internet browser and go to whatismyip.com it will now no longer show the server's main IP address, but instead it will show the secondary IP just the way it's supposed to do.



However, now I'd love to do the other way around and install e.g. a web server on one of the virtual machines and make it available to the public through my secondary IP. I was searching for the answer and found I'm supposed to add a PREROUTING rule in order to accomplish this, thus I tried the following:



iptables -t nat -A PREROUTING -d 8.8.8.8 -j DNAT --to-destination 192.168.1.101


Connecting to port 80 of the public IP will time out, though. It seems like I'm still missing something or there's a mistake in the way I do the rules.




Please note: Rather than opening only a specific port, I'd like to forward all incoming traffic on that specific IP to the virtual machine and handle security over there.



Any advice would be appreciated - perhaps I'm just missing something minor.


Answer



You will need a combination of DNAT and SNAT, and you need ip_forwarding active.



First, check ip_forwarding:



cat /proc/sys/net/ipv4/ip_forward



If it is 1 (enabled), go ahead. If not, you will have to put net.ipv4.ip_forward=1 in /etc/sysctl.conf and run sysctl -p.
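As a compact sketch of that check-and-enable step (run as root):

# enable forwarding immediately...
echo 1 > /proc/sys/net/ipv4/ip_forward
# ...and make it persistent across reboots
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p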



The first rule is DNAT (assume 8.8.8.8 as the external IP and 192.168.0.10 as the internal):



iptables -t nat -A  PREROUTING -d 8.8.8.8 -j DNAT --to-destination 192.168.0.10


When an external system (e.g. 200.100.50.25) sends a packet to 8.8.8.8, the packet will have its DESTINATION changed to 192.168.0.10 and be sent on its way. But the source will still be 200.100.50.25, so the packet will be processed and the response packet can:





  1. Be dropped by 192.168.0.10 that may not know how to route it. Not desirable.


  2. Be sent by 192.168.0.10 to the default gateway and to the internet. As soon as it reaches 200.100.50.25, this system will never have heard of 192.168.0.10 and will drop the packet. Not good.


  3. Be dropped on the first hop, as 192.168.0.10 is a private address and not routable on the Internet.




To solve this, you need the second rule, SNAT:



iptables -t nat -A POSTROUTING -s 192.168.0.10 -j SNAT --to-source 8.8.8.8



With this rule, every packet that comes from 192.168.0.10 will have the source changed to 8.8.8.8 and sent away.



The collateral effect is that every log on 192.168.0.10 will show 8.8.8.8 as the client, not the real client. Tracking abusers will be a little harder.


Drupal install and permissions



So I'm really stuck on this issue. An install process is complaining about write permissions on settings.php and sites/default/files/. However, I've temporarily set these files to read/write (chmod 777) and changed the owner/group to "apache" as shown below.



-bash-4.1$ ls -hal
total 28K
drwxrwxrwx. 3 richard richard 4.0K Aug 23 15:03 .
drwxr-xr-x. 4 richard richard 4.0K Aug 18 14:20 ..
-rwxrwxrwx. 1 apache  apache  9.3K Mar 23 16:34 default.settings.php
drwxrwxrwx. 2 apache  apache  4.0K Aug 23 15:03 files
-rwxrwxrwx. 1 apache  apache     0 Aug 23 15:03 settings.php


However, the install is still complaining about write permissions. I followed steps one and two of the INSTALL.txt but no luck.



Update:



To further explore the situation, I created sites/default/richard.php with the following code:




<?php
// Minimal test script: try to create a directory, then report which
// user the web server runs as and from which directory.
error_reporting(E_ALL);
ini_set('display_errors', '1');
mkdir('files');
print("\nUser is ");
passthru("whoami");
passthru("pwd");
?>



Run from the command line (under user "richard"), no problem. The folder is created everything is a go. Run from the web, I get the following:




Warning: mkdir(): Permission denied in
/var/www/html/sites/default/richard.php on line 9
User is apache
/var/www/html/sites/default





Update 2:



Safe mode appears to be off...



-bash-4.1$ cat /etc/php.ini | grep safe | grep mode | grep -v \;
safe_mode = Off
safe_mode_gid = Off
safe_mode_include_dir =
safe_mode_exec_dir =
safe_mode_allowed_env_vars = PHP_

safe_mode_protected_env_vars = LD_LIBRARY_PATH
sql.safe_mode = Off

Answer



There are a couple of things to consider:




  1. Turn SELinux off or set it to permissive mode.


  2. Check the SELinux context of the directory that needs read/write permission (a sketch covering points 1 and 2 follows this list).


  3. Clear the cache of your browser and try again.


  4. Restart Apache and try again.


  5. Check whether the directory has a disk quota and has exceeded the limit.
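A minimal sketch for points 1 and 2, assuming a stock RHEL/CentOS layout with the site under /var/www/html (paths and the target type are assumptions):

getenforce                                  # Enforcing / Permissive / Disabled
sudo setenforce 0                           # temporarily switch to permissive to test point 1
ls -Z /var/www/html/sites/default           # inspect the current SELinux context (point 2)

# If SELinux turns out to be the culprit, relabel the writable paths instead of leaving it off:
sudo chcon -R -t httpd_sys_rw_content_t /var/www/html/sites/default/files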



battery - USB A to C cable with USB 2.0 charger, low current


Note: this is not about a specific device but the general aspects of charging a USB device using accessories with different USB standards so I hope this fits here.


I've got a Samsung Galaxy Smartphone with a USB 2.0 Type C charging port.


Using the stock Samsung wall plug (9 V, 1.7 A output or 5 V, 2 A fall-back) and charging cable (USB A to C) the device gets charged utilizing Samsung's "adaptive fast charging" technology.


Using the included charging cable with a generic USB A wall plug without Power Delivery (5 V, 2 A output) nets a very slow charging speed of approx. 900 mA. Incidentally the same current as specified for USB 3.0 ports.


The exact same charger and a regular Micro USB cable would charge any old smartphone at a maximum of 2 A without utilizing USB-PD. Why doesn't it work with the OEM Samsung USB A-to-C cable? The USB 2.0 specification also states a maximum of 500 mA, and every charger in recent years has exceeded that. The Samsung device doesn't even have a USB 3.0 port; it's just USB 2.0 with a Type-C connector.


I'm trying to understand this from a technical viewpoint. Can you point me into the right direction? USB 3.0+ and Type C is honestly pretty confusing.


Answer



USB Type-C is simply the connector – the protocol in use is USB 3.x. Of course, the protocol affects the connector, but the limitation is in the phone's support of USB 2.0 and not in the Type-C connector itself.


Also, AFAIK Samsung's "adaptive fast charge" actually uses Qualcomm's solution of delivering a 9 V load instead – this is then converted on the phone's side. VOOC (or OnePlus Dash) charging works with current only; this is where you may get confused. I think the device should still take 2 amps, but it may be limited to work only with Samsung fast chargers – for safety or for exclusivity, maybe both.


If you look at OnePlus cables, they are comparably quite thick – this is to accommodate the thicker wires, which of course are needed for the 4 amps sent to the phone. Your cable might not be thick enough to support the larger currents – try another USB-C cable if you can, preferably one rated for a larger current. The included cable is good for 9 V at a lower current, but may not support currents larger than the fallback of 2.0 A. That's why it's recommended to try different chargers, cables, etc. Also, to make it a fair test, try letting the battery discharge to something like 20%, so that the device isn't software-limiting the charge (it reduces it to a scale of milliamps at 80% to protect the battery).


python - Replicating Chrome http authentication for a website

At work, I have a process that requires me to build a table based on information I find on an intranet website. So far I have done this by hand: I get the information using the form on the website and I input it into an Access table which I upload to our company database. I thought I would try to automate this procedure using Python's get command from the requests library. However, the get request returned a 401 status code. Apparently I need authentication to access that information. Google Chrome and Internet Explorer both do that authentication automatically, it seems. I can't quite figure out how to do it though. The headers variable of the GET response states that the authentication being used is "Negotiate, NTLM." My question is, is there an easy way to determine what credentials Chrome/Explorer are providing to the server?


Thanks

windows 8.1 - How can I prevent a folder from being inadvertently deleted by myself?




I have a very important folder on my desktop. I occasionally clean up my desktop and I am very concerned that I might inadvertently delete the mentioned folder. Is there a way to prevent such a disaster without limiting my frequent read-write operations on the folder's content? Note that I don't mind deleting the content inside the folder one by one on occasion, but the folder itself matters to me. If it is deleted, I lose a lot of effort.


Answer



Don't try to avoid the inevitable. Use backups and version control.



You could deny yourself the Delete permission though. Deleting files and folders within that directory is a separate permission that you could also disable when required.
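One hedged way to do that from an elevated command prompt (the folder path is an example): icacls can deny just the Delete right on the folder itself, while files inside remain fully readable, writable and individually deletable.

rem Deny yourself deletion of the folder itself
icacls "C:\Users\Me\Desktop\ImportantFolder" /deny "%USERNAME%:(DE)"

rem To undo it later, remove the deny entry again
icacls "C:\Users\Me\Desktop\ImportantFolder" /remove:d "%USERNAME%"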


domain name system - how to solve ERROR: No reverse DNS (PTR) entries. The problem MX records are:



I have remote access to my Windows Server 2008 R2 machine (DNS-IIS-FTP-MAIL);
please see the link below for my web site:
http://www.intodns.com/polyafzar.com



How can I fix the error below on my server:





ERROR: No reverse DNS (PTR) entries. The problem MX records are:
234.60.7.31.in-addr.arpa -> no reverse (PTR) detected
233.60.7.31.in-addr.arpa -> no reverse (PTR) detected
You should contact your ISP and ask him to add a PTR record for your ips




Thanks in advance.


Answer



Just like it says:




"You should contact your ISP and ask him to add a PTR record for your
ips"





You have to get whoever looks after your hosting to provide rDNS records in order to eliminate this error.


Wednesday, April 29, 2015

hard drive - How to disable S.M.A.R.T. warnings at computer startup?


My computer recently presented me with the possibility that my hard drive is about to fail. On further investigation I found out that it was a S.M.A.R.T. status failure. I've got a WD hard drive, so I went and got their WD Diagnostics Tool. It confirmed the S.M.A.R.T. warning, but an extended test passed, as you can see in this picture:


[screenshot]


I've read a bit about S.M.A.R.T. and realized that my HDD can fail. This is not my primary computer though, and it was recently formatted, so it does not contain anything really important. It's a laptop, and recently I've mostly used it to watch movies on my TV screen.


With that said, I was wondering if I can disable the annoying warning the BIOS shows at startup? I've looked in the BIOS but found no such setting; it actually has very few settings. The laptop runs some version of Phoenix BIOS.


Oh, and as a side question. If I leave the disk in my laptop and it fails, can it damage any other components?


UPDATE (Jan 11, 2015): If anybody is reading this out of concern for their drive: after more than 3 years my drive is still doing fine. I haven't used the laptop heavily for about 2 of those years, but for the last few months it's been running an Ubuntu-based media server and the drive isn't showing any signs that it might stop working.


Answer



There should be an option called Internal HDD. Go into it, and at the very bottom there should be a SMART Monitoring option that you can disable.


The good news:



137 Relocated Sector Sector Relocated. There may be repairable media
errors on a platter. The automatic repair feature can attempt a repair
if possible. You may need to rescan to ensure that the repairs were
effective. Replace the drive if the error repeats.



I would recommend grabbing a copy of Hiren's Boot CD and trying to repair it.


ubuntu - Every windows 8 boot breaks grub


I have Windows 8 and Ubuntu 13.04 dual booting with UEFI and grub. (re-used the windows UEFI partition)
After I used boot-repair, everything seemed fine.
After I boot into Windows, however, I can't boot into Ubuntu anymore. Grub still allows me to select it, but the screen just goes blank and there are no error messages.


Here's the really weird part: If I have the liveUSB stick plugged in (but not being booted from), then Ubuntu boots. And after Ubuntu boots, I can remove the live USB stick and continue to boot normally into Ubuntu... until I boot into Windows 8. Then I need to have the live USB key to "unlock" the ability to boot into linux again.


I've heard of Dell's software writing to the EFI partition, but this is an Asus machine and I haven't heard of their shovelware doing that.


Answer



Seems like the answer might be in this other post: run the following in Windows 8 to prevent corruption on shutdown:


powercfg /h off

Unfortunately, I can't test it because EasyBCD broke the Windows bootloader, then EasyRE failed to repair Win8 but disabled my ability to boot Linux. The break was so bad that the Windows recovery drive can't do anything either. So thanks to NeoSmart, I went from a small problem with two OSes to a machine that's less useful than a potato.


Rather than try to dig my way out of this mess, I think I'm just going to wipe the system and start from scratch. If it is applicable, I'll try the above command and report back.


performance - Electrical Surges And Power Outages - What Is The Proper Laptop Battery Care While Running Solely On Battery?




For convenience, I had to move my laptop to another room, away from the room where I always ran the laptop on a UPS without using the battery. The UPS is specifically there for protection from surges and power outages. Since I now always run the laptop on battery, I wonder about the proper usage to prolong battery life.


Currently I run the laptop on battery with the power supply connected, so the battery is constantly being charged until it is a full 100%; when it is, I disconnect the power supply and continue working until the battery meter shows 10% remaining. That's when I plug in the power supply and let it charge to 100% once again while I work. But it takes a lot of time to fully charge the laptop while working, since my power supply is 60W, which should be the reason for such a slow charge; I think the kind of charger I use is an express charger.


The thought of charging the laptop to full while doing my work makes me think: if it takes that much more time to charge, it might keep the battery running warm for the whole charging period. That brings me to the question of whether I should keep running the laptop as described above, or whether it would be better to leave the power supply constantly connected to keep the battery between 99% and 100%. On one hand that won't keep the battery warm, but it will frequently top the battery up from 99% back to 100% (which might reduce battery life?). On the other hand, if I keep working solely on battery and recharge it when below 10%, the battery will get warm, but only while charging.

Can anybody suggest the correct way of running a laptop on battery to ensure better battery life? Also, since keeping my laptop in good condition is important to me, I no longer leave it plugged into the mains without the battery, and since the UPS served me as a backup in power outages and as a surge protector, I now rely on my battery to substitute for the UPS, at least as a backup in power outages.


Dell Latitude E6420
Windows 7 64-bit

windows server 2012 r2 - Remote Office Domain Controller - Redirected Folders




My first question here - I tried my best to find the answer before posting.



I currently manage a small health center (25 employees) that has a single domain (health.local for this purpose) running on Server 2012 R2. The health center is planning to open a secondary location in 6 weeks or so. These two buildings will be connected by a site-to-site IPsec connection.



In the current domain, all users have redirected folders with offline files enabled. Makes it easy for workstation replacement.



What would be some recommendations for the new office? It will be about the same size (25 employees or so). I plan to put in a domain controller. Should this be linked to the current domain? My concerns would obviously be the redirected folders running over the IPsec. Employees can be in either office. I've researched DFS and found that it is not recommended for redirected folders because of cases where an account can be logged into in two locations which would create a DFS conflict.



I've also considered a brand new domain with an established trust between the two domains, but this would bring up the issue of users having a password that is different at each location (which would be a problem lol).




I've read about read-only DCs, but that doesn't solve the folder redirection issue.



What do you guys think? Thanks for any help / suggestions in advance.


Answer



This is possible but has its limitations.



You can have the second domain controller as a member DC. Then you can replicate the fileshare with the roaming profiles using whatever kind of replication technology you want between machines at site1 and site2. You could use DFS for example, or put the profiles on a NAS and replicate that to a second NAS somehow. This way, you would have all the profiles on both sites at all times, and the machines would get their copy from the local machine.



The limitation is that due to limited bandwidth this will take time, and if someone quickly changes from one site to the other it might not replicate in that time frame. It will also take bandwidth away from using whatever software they have to use. So maybe you have to replicate at night, but then employees couldn't quickly switch at all.


windows 7 - My SSD drives have a very slow write speed (800MB read / 5MB write!)

I've done a lot of digging to try and figure out why I am experiencing slow write speeds to my SSD drives. Their model number is: PNY XLR8 SSD9SC240GMDA-RB 2.5" 240GB SATA III Internal Solid State Drive (SSD).



They are running on SATA ports 1 and 2, and TRIM is enabled. I can't get SMART data because they're set up in a RAID 0 (though the problem still persists with individual drives).




  1. I've run defrag using O&O's offline defrag tool.

  2. I took my SSD drives off of RAID 0 and tested them individually with the same results.


  3. My external USB hard drive is faster at writing than my SSDs!

  4. I've made sure that my drive is set to AHCI (before, it was RAID 0)



I am using Windows 7 and these drives were running just fine before. The funny thing is that now the problem is occurring with both drives.



FWIW, I upgraded to Windows 10 a few weeks ago and started noticing the slowness. Thinking it was Windows 10 that was the cause, I downgraded back to Windows 7 Ultimate.







Here's the AS SSD result (clearly not normal):



[screenshot]

Double Clicking Batch File Windows cannot find file

I've found variants of this question being asked all over the place, and I've attempted all of the actual answers I've found out there (including changing the registry for associations). The basic issue is this:



  1. I create a batch file (simple batch that has an echo and a pause) on the desktop or in any folder in the computer.

  2. I double click the batch file to run it and get:


[screenshot]


Workarounds include:



  1. Right click and run as administrator

  2. Open command prompt and call the batch file by name


I was dealing with the issue by just using those workarounds for a while, but I'm starting to think my inability to get certain programs to work properly (android.bat in Android Studio won't run because of file not found despite being there, for instance) is related to this issue. If I can't get it fixed in the next week, I'm likely going to reformat.


Additional info:



  1. This is Windows 8.1 Pro 64-bit

  2. There are no other file types which have this double-click execution issue

  3. If I call a batch file from another batch file using the workarounds, it works

  4. This happens regardless of the folder I'm in, and does not happen on a Hyper-V virtual machine using the same copy of windows

  5. All windows updates are run and I've done virus scans and such - the only other thing that's been going wrong simultaneously is that the machine seems to be slowing down a bit (in particular when I try to open file dialogs in ANY program - which now take ~5 seconds to open instead of being instantaneous).


Any ideas would be much appreciated! It may just be time for a reformat (it's been a year or so).


Edit: Wasn't aware of SuperUser. Seems like this question may be more appropriate over there!


Edit 2: Anyone over here have an ideas?

Unable to see scheduled custom tasks after upgrading to windows 10


I had added some basic tasks to display a message at regular intervals using Task Scheduler. After upgrading to Windows 10, I am unable to view or edit them. The tasks are working fine (the message pop-up is coming as scheduled).


When I tried to create a new task, I found that the "Display a message" option is deprecated in Windows 10. Is that the reason the tasks are not listed in the Task Scheduler library? I am able to view the newly created task (with the "Start a program" option). Is there any way to delete the old tasks I created prior to the upgrade?


Update:


As @Peter suggested, I could list the tasks using the schtasks command. The tasks I created prior to the upgrade are there in the list. But when I try to delete a task using the schtasks /Delete /TN "\mytask" command, it throws an error:


ERROR: The specified task name "\mytask" does not exist in the system.


I could delete newly created tasks using the command btw.


Update:


Deleting the tasks from the C:\Windows\System32\Tasks folder fixed the issue for me. But for my coworker, a simple machine restart fixed the issue :)


Answer



Do a machine restart; most of the time it fixes the issue, especially if you are facing it immediately after upgrading to Windows 10.


If it doesn't work, you may have to delete the obsolete task using the Registry Editor as mentioned in comment by @w32sh, which involves deleting the tasks from C:\Windows\System32\Tasks as the final step.
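A hedged sketch of that cleanup, using "mytask" as a placeholder name; run from an elevated command prompt and consider exporting the registry key before deleting anything:

rem Confirm the stale entry is still listed
schtasks /Query /FO LIST | findstr /I "mytask"

rem Remove the task definition file and its TaskCache registry entry
del "C:\Windows\System32\Tasks\mytask"
reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tree\mytask" /f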


performance - Using two DDR-2 800 MHz instead of two DDR-2 533 MHz RAM



My PC has following hardware right now:





  • Intel Core 2 Duo E6600 2.4 GHz

  • ASUS P5B-VM Motherboard

  • 1 GB DDR 2 533 MHz RAM

  • 2 GB DDR 2 800 MHz RAM

  • 7200 RPM SATA 2 HDD

  • NVIDIA GeForce GTS 250 1 GB




My motherboard supports a maximum of DDR2 800 MHz RAM. Since one of my RAM modules is rated 533 MHz, the motherboard adjusted both modules' speed to 533. (I bought the 2 GB module 2 years after I bought the whole system.)



I use Windows 7 Ultimate. The system rating index for my PC is 5.5, which is the slowest one among all parameters. This 5.5 rating comes from RAM speed.



I want to know: if I replace the 533 RAM with 800 RAM, will the overall performance increase be noticeable? For your information, my PC is used for gaming, graphics design (AutoCAD, SketchUp, Corel Draw etc.), video editing (Corel VideoStudio Pro X3) and conversion (VideoStudio & HandBrake), and other general purposes (internet, listening to music, watching movies etc.).



It is not possible for me to benchmark the upgrade until I buy the RAM. So I'd like to know in advance if the upgrade will be noticeable and then I can buy it.



Thanks in advance.


Answer




Your machine will be running all your RAM at the speed of the slowest (the 533). Whether there will be much difference running at 800 depends on what latencies the RAM and memory controller support between them and which benchmark you run, but it is highly unlikely to be slower.



If you take the slower DIMM out and run with just the 2Gb@800 the motherboard should run at that speed - you can then rerun any benchmarks and check some of your usual software tasks (though not tasks that would usually need the extra RAM).



I suspect that you will see very little real-world difference unless you are number-crunching over large in-memory data sets. Most of your tasks will be more I/O bound than CPU/RAM bound, just plain CPU bound (if your CPU has gobs of cache), or (in the case of games) far more sensitive to graphics card performance (and the performance of the bus the graphics card is connected through). Of the tasks you list in your question, I think video conversion is the most likely place you'll notice a difference, but even that is probably more I/O bound (when reading large files to process) and CPU bound (doing the hard work on each frame in tight loops over cached data), so I think the difference will be small.



If you do notice a boost from the RAM running faster, then upgrade the 1Gb DIMM to a faster model to get your full 3Gb back (or get 2Gb if you are running a 64-bit OS that can use it all - you'll probably find the price difference between 1Gb and 2Gb very small at the moment) - if not just put the current 1Gb DIMM back in.


windows - Disabling the prompt to "Click Continue to permanently get access to this folder" (e.g. via GPO)

http://support.microsoft.com/en-us/kb/950934 describes the manner in which, when a member of the Administrators group uses Explorer to navigate to a folder to which the Administrators group has permission, the user will be prompted to "Click Continue to permanently get access to this folder".



When they do this, Explorer alters the ACL of the folder to grant that specific user Full Control to the folder. The MS link describes exactly the design constraint that requires it to be this way.



However, it ruins the permission set for that folder and makes central management of permissions effectively impossible. For example, if the named user is later removed from the Administrators group, that ACL entry still exists to permit them access to that folder.




I'm not looking to disable UAC (I actually like the distinction between elevated and non-elevated), and I am happy to use alternative tools to navigate and view files in an elevated fashion.



The eventual intent is to run one of the workarounds described in the MS link (either using a separate file navigator that can run elevated, or defining a separate group to control access to the whitelisted folders) but, all the time Explorer continues to clobber the ACLs of the folder, at will, it makes it impossible to identify where these workarounds need to be applied (short of regularly auditing every folder for ACL changes).



I would simply prefer to have the standard "access denied" message, if I attempt to access a restricted folder when running non-elevated in Explorer.



Is there a setting (either one-time on each box, or via GPO) that removes this "permanently get access" prompt, while retaining the other facilities of UAC?



NB: I fully understand why this prompt exists, what it means and why the behaviour is as it is (although I don't necessarily agree with the design decision). However, I should point out that I am not looking to discuss workarounds relating to the working practice of my users, nor the merits/pitfalls of UAC or Administrators group membership.

How risky is to install windows on external hard drive?

I have an HP laptop which currently has no HDD installed in it, so I decided to install Windows to my Western Digital My Passport 2TB external hard drive.



I found two articles on how to install Windows on an external hard drive,



one using the WAIK and one an easy install with WinToUSB (both articles are available on the Into Windows website).




Now, in both articles, at some point it says I should format the hard drive.



So my first question is: can I make another partition, separate from my data partition, on the same hard drive and use that as the Windows partition without losing any data?



Secondly, what are the chances that my data gets lost at some point (considering that I have no backup of my data and the data is much more crucial than having Windows)?



I currently have Ubuntu 15.04 installed on the same external hard drive, on its own partition.

Tuesday, April 28, 2015

windows - Securely erasing all data from a hard drive


I am about to sell my old desktop PC and I am cautious about some of my sensitive information being available to the purchaser, even after reformatting the hard-drive, using data recovery software.


How can I securely wipe the hard drive so that the data on it cannot be recovered?


Although I specifically want help with my Windows PC, it wouldn't hurt if there were suggestions for Macs as well.


Answer



Look into Darik's Boot and Nuke. It's a bootable CD which lets you securely erase your hard drives.


boot - Windows 8.1 Laptop with UEFI - How CSM Mode really works?


I have Asus X200MA Laptop with UEFI Firmware & Windows 8.1 pre-installed in UEFI mode (CSM Disabled). Now if I turn CSM on, I find that it still successfully boots into Windows 8.1 which otherwise uses GPT partition scheme.


I am curious to know how this is possible. If I understand correctly, CSM emulates BIOS-mode booting in the firmware. So it should look for an MBR disk and, since it does not find one, should not allow booting Windows 8.1 off GPT.


Thanks.


Answer



In most firmwares, enabling CSM mode doesn't fully revert the system to 1990's behavior – it merely adds special "BIOS disk" entries to the regular boot menu. If those entries fail, however – e.g. if the disk has no bootloader 'signature' – the firmware still keeps trying the next entry, until it eventually reaches the "UEFI: Windows" item.


Remember how even actual BIOS systems have a "boot selection menu" for choosing which device to boot from – they're not forced to stick to a single boot source. So UEFI CSM doing the same is nothing new.




(Also, BIOS does not typically care about actual partitions, just the bootcode, and a GPT-partitioned disk can very well contain BIOS bootcode in the 'protective' MBR. And vice versa, EFI bootloaders can live on MBR-partitioned disks.)


security - Enabling cipher TLS_RSA_WITH_3DES_EDE_CBC_SHA (0xa) on Windows Server 2003+ISA 2006



I have been given a task to disable all "weak" ciphers/protocols on our very old ISA server based on Windows Server 2003. I have disabled all protocols but TLS 1.0, and all ciphers but RC2/128, RC4/128 and Triple DES 168/168. But the Qualys SSL Labs test utility does not show that I have 3DES encryption available on my ISA server. The only cipher suites listed are:




TLS_RSA_WITH_RC4_128_MD5 (0x4)  
TLS_RSA_WITH_RC4_128_SHA (0x5)


This KB says that when the Triple DES 168 cipher is enabled, the TLS_RSA_WITH_3DES_EDE_CBC_SHA cipher suite is available. However, it is not. We need this cipher suite to allow a Windows 8.1 phone to connect to ActiveSync published by this ISA. What could be the reason for 3DES encryption being unavailable in this configuration, and what should we do in order to allow the connection for a Windows 8.1 phone without being vulnerable to POODLE?



EDIT: There was apparently a server-side malfunction of some sort, a reboot fixed 3DES availability, although the same KB states that registry change should have worked at once. I've got another server with the same problem, got it fixed with registry modification only, though.


Answer



If your registry change didn't take effect immediately, then just restart your computer.
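For reference, the registry value from that KB can be applied with a .reg file like the following (a sketch for Server 2003's SCHANNEL key; merge it and then reboot, as noted above):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168/168]
"Enabled"=dword:ffffffff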


windows - How to clone a Win7 boot partition onto the SAME hard drive?

I want to have two or even three clones of my operating system partition on the same hard drive. No I don't want the clones on different hard drives. I want three partitions on one hard drive, two of which are the clones of my current system partition.


I want to power up my computer, and see 3 "Win 7" operating systems to select to boot from.


No, I don't want to install Windows 7 three times and reinstall all my programs/settings for each install. I want to specifically clone my current partition, because I have already set up programs/configs over the past year.


I have "easy to do backup" (free software), but it seems it only allows me to restore my OS partition ("boot partition") onto another physical hard disk. There is no option to do what I want.


I have searched all over the internet, and nobody wants to do what I want. They only refer to cloning an OS/boot partition onto another physical hard disk.


Anyone have a solution?

Should I remove my laptop battery?






I keep my Thinkpad T60 laptop plugged in all the time. Should I remove my battery or if not, should I drain it occassionally?


If it should be drained, how often?


Thanks.


Answer



No - you can leave it in.


People who think it should be removed, or people who advise it, are usually thinking about the "memory effect". NiCd (nickel-cadmium) batteries were affected by this; however, newer Li-ion (lithium-ion) batteries are not.


On some laptops, (especially "green" laptops with specialised power chips) you will get poor performance with the battery removed as it under-performs. For example MacBook Performance Plunges When Battery Removed.


Any intelligent laptop from the past few years should be smart enough to discharge and cycle the battery periodically, meaning that you do not need to manually drain it. For example, my laptop is plugged in (usually) 24x7, and every 24-48 or so hours, it drops to 95% and recharges.


Lastly, one bonus of having a laptop is that you get a "free" UPS! Of course, if you have a separate UPS, forget this point, but if there is a power cut, you may be happy that you left it in.


android - Fix SD card that cannot be formatted


I have, not one but two microSD cards that my phone (Samsung Galaxy Young, Gingerbread OS) seem to have broken. One is 1GB and the other is 2GB. The 1GB one won't be formatted.


When I put the 1GB one into the computer the computer prompts for a formatting. I don't care for the content so I tried to format it, but to no avail; the format fails and I have no idea what to do to make it work again.


I tried using the SDformatter software, but it can't format the card as it is write protected. I'm googling to solve it but so far no success.


My computer OS is win7 if that's of any relevance.


Answer



Golden Rule #1


As soon as an SD card [or USB stick] starts to play up - bin it.
They're not worth the effort once they error.


I go through literally hundreds of them for work. Low write count, high read count.


If they error once, they will error again. Quality control on them is, let's say… variable.


Some of them have a controller chip that will permanently lock them to read only if they detect a write error, as a preservation measure. There is no way to unlock them once this happens.


Golden Rule #2


Don't use them to store anything valuable.


Edit:
If the data on an SD card was truly valuable, it is theoretically possible to replace the controller chip, or even directly access the memory itself. This service can be performed by data recovery specialists, but they charge a lot for their efforts & still can make no guarantees.


Rules 1 & 2 are still 'best practise'


Does glusterFS allow you to use multiple "classes" of storage?



I am used to traditional SANs. On a traditional SAN, we're able to create tiers of storage (fast disks, slow disks), and then allocate volumes from those tiers of storage. This way a device that needs ultra low latency and ultra high throughput can get volumes built out of flash disks, and a data warehouse can get volumes built on slow spinning media, etc. Now obviously this is for block storage, but I am trying to figure out if gluster supports anything similar.



I've been reading through the Gluster docs, and I can't seem to find anything that directly supports classes of service.



If I deploy 9 storage nodes with 10 disks each, and the first 3 have "fast" NVMe disks, the next 3 have "medium" enterprise SAS disks (15k rpm), and the last 3 have "slow" 7200rpm SATA disks, would gluster support providing different storage pools for each "class" or "tier" of service? I know a single pool can be heterogeneous, but I am looking for separate pools.



The only thing I have found so far are mentions of "tiering" inside the gluster docs, but in their parlance they define "tiering" as a single pool with multiple types of disks in it, where slow disks act like cold storage for data that is rarely accessed, and fast disks act like caches.




Does gluster support classifying tiers of storage and making those tiers available to different endpoints?


Answer



Yes, you can create these kinds of 'classes'. What you would do with Gluster is create one volume (basically a network filesystem, share or export) that has all of its bricks on one type of media, and another volume with bricks on another type.



Many users name the volumes after projects or departments, but there is no reason that you could not call a volume 'fast', 'medium' or 'slow'.
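
As a rough sketch (the hostnames and brick paths here are made up for illustration, not taken from the question), two separate volumes on different media might be created like this:

# "fast" volume: bricks on the NVMe nodes
gluster volume create fast replica 3 nvme1:/bricks/nvme/brick nvme2:/bricks/nvme/brick nvme3:/bricks/nvme/brick
gluster volume start fast

# "slow" volume: bricks on the SATA nodes
gluster volume create slow replica 3 sata1:/bricks/sata/brick sata2:/bricks/sata/brick sata3:/bricks/sata/brick
gluster volume start slow

# clients then mount whichever class of storage they need
mount -t glusterfs nvme1:/fast /mnt/fast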



The tiering feature in Gluster is not what you are looking for. In addition, it is not stable enough for most users and is slowly on its way out.


macos - Two exact same USB flash drives(sticks) not exactly the same


I just bought two 8 GB USB flash drives, SanDisk Cruzer Fit (http://www.sandisk.com/products/usb/drives/cruzer-fit/); they are exactly the same.
Since I planned to use them as Mac OS X install drives, I erased and formatted them during a Mac OS X installation (booted from DVD).


After erasing and formatting, one of the drives has an orange/yellowish icon and the other a white icon. Even after I restart my Mac, boot into the installation again and format them again (in a different or the same order), one (the same one) always gets this orange icon. The one with the orange icon also has a peculiar issue when I click the eject button: it does not eject. The partition gets unmounted, but that is it; it just stays grayed out and never disappears from the Disk Utility menu. The one with the white icon ejects normally.


I've tested the same drive (the orange one) under Windows and it behaves perfectly normally. Safe removal works as expected. I've run some tests and everything appears to function as it should.
So it only bothers me why there is this difference on the Mac. What does this orange icon represent anyway? Does it mean something?


Here are two photo-screenshots I snapped, first from one drive and then from the other:
The white one


The orange one


UPDATE:


By chance I found out that I had kept the packaging from these drives, and only now do I see that the graphic design on the front is actually different, which would indicate that they are from different series. I don't know which package belongs to which drive, though! Anyhow, everything else seems pretty much the same. Here are some photos (the drive on the right is the "orange" drive; as for the packaging, as I said, I don't know):


sticks1
sticks2
package1
package2


Answer




  • The orange icon indicates a "removable" disk.

  • The white icon indicates a "fixed" disk.


Both can be mounted and unmounted and will function almost identically.


As far as Mac OS and other Unix-like OSes are concerned, the difference is cosmetic except when trying to create bootable devices.
Both are equally fast, both can still be unmounted or ejected.


Unfortunately, there is no driver or utility that can fake those flags or change the controller on the drive to report it as a fixed disk instead of a removable disk (or vice versa); the flag is hard-coded into the device's controller by the manufacturer.
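
If you want to confirm which flag a given stick reports, diskutil shows it; a quick check (assuming the drive is disk2, as in the question's output) is:

diskutil info /dev/disk2 | grep -i removable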


Why did SanDisk and others do this?
To expand on what @chirality stated:


If your flash drive was created after 2012, there is a high likelihood that it is a Windows 8 certified flash drive, which (according to this) means that it is listed as a "Fixed disk" in Disk Management and that write caching is disabled by default.
Windows 8 certified flash drives are designed to allow removal at ANY time without damage to the drive's contents. This was designed to support Windows To Go's "resiliency and unintended removal" feature:



The resiliency and unintended removal feature of Windows To Go
automatically froze my computer screen upon removal of the drive,
giving me 60 seconds to re-insert. If the Windows To Go drive is
reinserted into the same port it was removed from, Windows will resume
at the point where the drive was removed – without the loss of in
process work or data. If the USB drive is not reinserted, or is
reinserted into a different port, the host computer will turn off
after 60 seconds.



Even more information is available in this Technet FAQ and this Microsoft blog post.


windows - How can I remove malicious spyware, malware, adware, viruses, trojans or rootkits from my PC?


What should I do if my Windows computer seems to be infected with a virus or malware?



  • What are the symptoms of an infection?

  • What should I do after noticing an infection?

  • What can I do to get rid of it?

  • How can I prevent infection by malware?



This question comes up frequently, and the suggested solutions are usually the same. This community wiki is an attempt to serve as the definitive, most comprehensive answer possible.


Feel free to add your contributions via edits.



Answer



Here's the thing: Malware in recent years has become both sneakier and nastier:


Sneakier, not only because it's better at hiding with rootkits or EEPROM hacks, but also because it travels in packs. Subtle malware can hide behind more obvious infections. There are lots of good tools listed in answers here that can find 99% of malware, but there's always that 1% they can't find yet. Mostly, that 1% is stuff that is new: the malware tools can't find it because it just came out and is using some new exploit or technique to hide itself that the tools don't know about yet.


Malware also has a short shelf-life. If you're infected, something from that new 1% is very likely to be one part of your infection. It won't be the whole infection: just a part of it. Security tools will help you find and remove the more obvious and well-known malware, and most likely remove all of the visible symptoms (because you can keep digging until you get that far), but they can leave little pieces behind, like a keylogger or rootkit hiding behind some new exploit that the security tool doesn't yet know how to check. The anti-malware tools still have their place, but I'll get to that later.


Nastier, in that it won't just show ads, install a toolbar, or use your computer as a zombie anymore. Modern malware is likely to go right for the banking or credit card information. The people building this stuff are no longer just script kiddies looking for fame; they are now organized professionals motivated by profit, and if they can't steal from you directly, they'll look for something they can turn around and sell. This might be processing or network resources in your computer, but it might also be your social security number or encrypting your files and holding them for ransom.


Put these two factors together, and it's no longer worthwhile to even attempt to remove malware from an installed operating system. I used to be very good at removing this stuff, to the point where I made a significant part of my living that way, and I no longer even make the attempt. I'm not saying it can't be done, but I am saying that the cost/benefit and risk analysis results have changed: it's just not worth it anymore. There's too much at stake, and it's too easy to get results that only seem to be effective.


Lots of people will disagree with me on this, but I contend that they are not weighing the consequences of failure strongly enough. Are you willing to wager your life savings, your good credit, even your identity, that you're better at this than crooks who make millions doing it every day? If you try to remove malware and then keep running the old system, that's exactly what you're doing.


I know there are people out there reading this thinking, "Hey, I've removed several infections from various machines and nothing bad ever happened." Me too, friend. Me too. In days past I have cleaned my share of infected systems. Nevertheless, I suggest we now need to add "yet" to the end of that statement. You might be 99% effective, but you only have to be wrong one time, and the consequences of failure are much higher than they once were; the cost of just one failure can easily outweigh all of the other successes. You might even have a machine already out there that still has a ticking time bomb inside, just waiting to be activated or to collect the right information before reporting it back. Even if you have a 100% effective process now, this stuff changes all the time. Remember: you have to be perfect every time; the bad guys only have to get lucky once.


In summary, it's unfortunate, but if you have a confirmed malware infection, a complete re-pave of the computer should be the first place you turn instead of the last.




Here's how to accomplish that:


Before you're infected, make sure you have a way to re-install any purchased software, including the operating system, that does not depend on anything stored on your internal hard disk. For this purpose, that normally just means hanging onto cd/dvds or product keys, but the operating system may require you to create recovery disks yourself.1 Don't rely on a recovery partition for this. If you wait until after an infection to ensure you have what you need to re-install, you may find yourself paying for the same software again. With the rise of ransomware, it's also extremely important to take regular backups of your data (plus, you know, regular non-malicious things like hard drive failure).


When you suspect you have malware, look to other answers here. There are a lot of good tools suggested. My only issue is the best way to use them: I only rely on them for the detection. Install and run the tool, but as soon as it finds evidence of a real infection (more than just "tracking cookies") just stop the scan: the tool has done its job and confirmed your infection.2


At the time of a confirmed infection, take the following steps:



  1. Check your credit and bank accounts. By the time you find out about the infection, real damage may have already been done. Take any steps necessary to secure your cards, bank account, and identity.

  2. Change passwords at any web site you accessed from the compromised computer. Do not use the compromised computer to do any of this.

  3. Take a backup of your data (even better if you already have one).

  4. Re-install the operating system using original media obtained directly from the OS publisher. Make sure the re-install includes a complete re-format of your disk; a system restore or system recovery operation is not enough.

  5. Re-install your applications.

  6. Make sure your operating system and software is fully patched and up to date.

  7. Run a complete anti-virus scan to clean the backup from step three.

  8. Restore the backup.


If done properly, this is likely to take between two and six real hours of your time, spread out over two to three days (or even longer) while you wait for things like apps to install, Windows updates to download, or large backup files to transfer... but it's better than finding out later that crooks drained your bank account. Unfortunately, this is something you should do yourself, or have a techy friend do for you. At a typical consulting rate of around $100/hr, it can be cheaper to buy a new machine than to pay a shop to do this. If you have a friend do it for you, do something nice to show your appreciation. Even geeks who love helping you set up new things or fix broken hardware often hate the tedium of clean-up work. It's also best if you take your own backup... your friends aren't going to know where you put what files, or which ones are really important to you. You're in a better position to take a good backup than they are.


Soon even all of this may not be enough, as there is now malware capable of infecting firmware. Even replacing the hard drive may not remove the infection, and buying a new computer will be the only option. Thankfully, at the time I'm writing this we're not to that point yet, but it's definitely on the horizon and approaching fast.




If you absolutely insist, beyond all reason, that you really want to clean your existing install rather than start over, then for the love of God make sure whatever method you use involves one of the following two procedures:



  • Remove the hard drive and connect it as a guest disk in a different (clean!) computer to run the scan.


OR



  • Boot from a CD/USB key with its own set of tools running its own kernel. Make sure the image for this is obtained and burned on a clean computer. If necessary, have a friend make the disk for you.


Under no circumstances should you try to clean an infected operating system using software running as a guest process of the compromised operating system. That's just plain dumb.




Of course, the best way to fix an infection is to avoid it in the first place, and there are some things you can do to help with that:



  1. Keep your system patched. Make sure you promptly install Windows Updates, Adobe Updates, Java Updates, Apple Updates, etc. This is far more important even than anti-virus software, and for the most part it's not that hard, as long as you keep current. Most of those companies have informally settled on all releasing new patches on the same day each month, so if you keep current it doesn't interrupt you that often. Windows Update interruptions typically only happen when you ignore them for too long. If this happens to you often, it's on you to change your behavior. These are important, and it's not okay to continually just choose the "install later" option, even if it's easier in the moment.

  2. Do not run as administrator by default. In recent versions of Windows, it's as simple as leaving the UAC feature turned on.

  3. Use a good firewall tool. These days the default firewall in Windows is actually good enough. You may want to supplement this layer with something like WinPatrol that helps stop malicious activity on the front end. Windows Defender works in this capacity to some extent as well. Basic Ad-Blocker browser plugins are also becoming increasingly useful at this level as a security tool.

  4. Set most browser plug-ins (especially Flash and Java) to "Ask to Activate".

  5. Run current anti-virus software. This is a distant fifth to the other options, as traditional A/V software often just isn't that effective anymore. It's also important to emphasize the "current". You could have the best antivirus software in the world, but if it's not up to date, you may just as well uninstall it.


    For this reason, I currently recommend Microsoft Security Essentials. (Since Windows 8, Microsoft Security Essentials is part of Windows Defender.) There are likely far better scanning engines out there, but Security Essentials will keep itself up to date, without ever risking an expired registration. AVG and Avast also work well in this way. I just can't recommend any anti-virus software you have to actually pay for, because it's just far too common that a paid subscription lapses and you end up with out-of-date definitions.


    It's also worth noting here that Mac users now need to run antivirus software, too. The days when they could get away without it are long gone. As an aside, I think it's hilarious I now must recommend Mac users buy anti-virus software, but advise Windows users against it.


  6. Avoid torrent sites, warez, pirated software, and pirated movies/videos. This stuff is often injected with malware by the person who cracked or posted it — not always, but often enough to avoid the whole mess. It's part of why a cracker would do this: often they will get a cut of any profits.

  7. Use your head when browsing the web. You are the weakest link in the security chain. If something sounds too good to be true, it probably is. The most obvious download button is rarely the one you want to use any more when downloading new software, so make sure to read and understand everything on the web page before you click that link. If you see a pop up or hear an audible message asking you to call Microsoft or install some security tool, it's a fake.
    Also, prefer to download software and updates/upgrades directly from the vendor or developer rather than from third-party file hosting websites.




1 Microsoft now publishes the Windows 10 install media so you can legally download and write to an 8GB or larger flash drive for free. You still need a valid license, but you don't need a separate recovery disk for the basic operating system any more.


2 This is a good time to point out that I have softened my approach somewhat. Today, most "infections" fall under the category of PUPs (Potentially Unwanted Programs) and browser extensions included with other downloads. Often these PUPs/extensions can safely be removed through traditional means, and they are now a large enough percentage of malware that I may stop at this point and simply try the Add/Remove Programs feature or normal browser option to remove an extension. However, at the first sign of something deeper — any hint the software won't just uninstall normally — and it's back to repaving the machine.


php fpm - nginx / php-fpm error logging

I'm trying to figure out where the PHP errors are going in my setup. I'm running nginx as the reverse proxy to PHP-FPM, but I'm not seeing the various E_NOTICE or E_WARNING messages my app is producing. The only reason I know they're happening is failed responses and NewRelic catching stack traces.



Here's the logging config:



nginx.conf



proxy_intercept_errors on;
fastcgi_intercept_errors on;



php.ini



error_reporting  =  E_ALL
display_errors = Off
display_startup_errors = Off
log_errors = On
log_errors_max_len = 1024
ignore_repeated_errors = Off
ignore_repeated_source = Off

report_memleaks = On
track_errors = On
error_log = syslog


php-fpm.conf



[global]
error_log = /var/log/php-fpm/fpm-error.log


[www]
access.log = /var/log/php-fpm/access.log
access.format = "%t \"%m %r%Q%q\" %s %{mili}dms %{kilo}Mkb %C%%"
catch_workers_output = yes

php_flag[display_errors] = on
php_admin_flag[log_errors] = true


rsyslog.conf




:syslogtag, contains, "php" /var/log/php-fpm/error.log


I've configured PHP to log to syslog; however, FPM has no syslog function, so it's logging to a file. I don't really care where the errors end up, just that they end up somewhere.
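
(For reference, a minimal pool-level override that is commonly used to force worker errors into a file that FPM itself writes looks roughly like the following; the path is illustrative only and not part of the setup above:)

[www]
catch_workers_output = yes
php_admin_flag[log_errors] = on
php_admin_value[error_log] = /var/log/php-fpm/www-error.log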



Any clues on how I might get this to work?

Monday, April 27, 2015

boot - Creating NTFS bootable USB drive

I am trying to create a bootable USB drive from an MSDN Windows Server 2016 ISO file, but one of the files in the image, install.wim, is 4.38 GB in size, so a FAT32 drive will not work; nor can I create a bootable DVD, because the size of the image is 5.26 GB.


When I create the drive using the Windows 7 USB/DVD Download Tool, the drive remains NTFS but it is not bootable.
When I use UltraISO, it always formats the drive as FAT32; the result is bootable, but the install fails because the install.wim file is invalid.


What is the solution to this problem? Is there a tool that will create a bootable NTFS USB drive I can use to install Windows Server?
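
For what it's worth, one commonly used manual approach is sketched below. It assumes the mounted ISO is drive D:, the USB stick is Disk 1 and gets letter U: - all assumptions you must verify first, since clean wipes the selected disk.

diskpart
list disk
select disk 1
clean
create partition primary
format fs=ntfs quick
active
assign letter=U
exit

robocopy D:\ U:\ /e
D:\boot\bootsect.exe /nt60 U: /mbr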

kvm virtualization - What does take all the cpu here?

On a small SSD VPS I have 2 GB of RAM and 2 vCPU cores (dedicated to my server), virtualized via KVM. So far so good. The server is mainly used for databases (MySQL) and fast network file storage (via sshfs). Currently around 5 folders are mounted to a remote server via sshfs.



When I look at htop I can see 100% CPU load, even though when sorting processes by CPU usage none of the processes takes up that much CPU on its own, nor do they combined. Also, the load average indicates that the server is mainly dozing around. From this question I found out that the blue CPU bar indicates that a "low priority thread" is taking up the CPU.




Here are some screenshots:
System load via htop
System load via top
CPU usage from Munin



How can I find out which process is using up all the CPU power? Is it even using CPU power, or is that just a visual bug caused by KVM? Does sshfs use CPU power that cannot be tracked from userspace?
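
A few commands that are often used to narrow this kind of thing down (a sketch only; mpstat and pidstat come from the sysstat package, which may need to be installed):

# per-CPU breakdown, including %steal (time taken by the hypervisor) and %nice
mpstat -P ALL 2 5

# per-process CPU usage sampled over time, split into user and system time
pidstat -u 2 5

# processes sorted by CPU, with their nice values (low-priority threads show up here)
ps -eo pid,ni,pcpu,comm --sort=-pcpu | head -20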

performance - How to make `rm` faster on ext3/linux?



I have an ext3 filesystem mounted with default options. On it I have some ~100 GB files.



Removing any one of these files takes a long time (8 minutes) and causes a lot of I/O traffic, which increases the load on the server.



Is there any way to make the rm less disruptive?


Answer




The most interesting answer was originally buried in a comment on the question. Here it is as a first-class answer to make it more visible:




Basically no method from here worked, so we developed our own. Described it in here:
http://www.depesz.com/index.php/2010/04/04/how-to-remove-backups/ – depesz Apr 6 '10 at 15:15




That link is an incredibly thorough analysis of the exploration for and discovery of a workable solution.



Note also:




The article says:




As you can see, I used -c2 -n7 options to ionice, which seem sane.




which is true, but user TafT says that if you want no disruption then -c3 'idle' would be a better choice than -c2 'best-effort'. He has used -c3 for background builds and has found it to work well without causing the build to wait forever. If you really do have 100% I/O usage then -c3 will never let the delete complete, but he doesn't expect that is the situation you have, based on the test described.
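
In practice that boils down to something like the following (a sketch; the file path is illustrative):

# best-effort class, lowest priority within that class
ionice -c2 -n7 rm /path/to/large-file

# or, if the delete may take as long as it likes, the idle class
ionice -c3 rm /path/to/large-file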


disk space - Windows 7 Deployment Image Servicing and Management (DISM) tool fails when trying to clean C:windowswinsxs


I am running Windows 7 Home Premium 64-bit edition and I am trying to run the Windows 7 Deployment Image Servicing and Management (DISM) tool, DISM.exe, to try to clean/reclaim some space from the C:\windows\winsxs folder.


These are my results per following the instructions of @GvS's answer from this SO post: Why does the /winsxs folder grow so large, and can it be made smaller?



C:\>DISM /online /cleanup-Image /spsuperseded
Deployment Image Servicing and Management tool Version: 6.1.7600.16385


Image Version: 6.1.7601.18489


Service Pack Cleanup can't proceed: No service pack backup files were
found. The operation completed successfully.



Can anyone tell me what's going on here?


Answer



You've already cleaned the RTM files, or you installed Windows 7 from a DVD that already has SP1 included. In either case you can't shrink WinSxS with this command any further.


You can only clean up WinSxS by running Disk Cleanup after installing update KB2852386.
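
Once KB2852386 is installed, the extra "Windows Update Cleanup" option can also be driven from an elevated command line using cleanmgr's stored-settings switches (a sketch; 65535 is just an arbitrary settings-slot number):

cleanmgr /sageset:65535
cleanmgr /sagerun:65535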


Searching for files within a number of folders in command line in Windows 10

I'm trying to find the proper syntax for finding all the files with a specific name that are spread across a multitude of folders. I have a directory with 100+ folders; in each folder there are files that are uniquely named but share a common string (AC_DATA). I want to find the names and directories of all those AC_DATA* files. I've tried many combinations; this one works if I have the name of the folder:



dir -r C:\DATA[foldername] /b | findstr /s /i AC_DATA*



but this does not work when I want to find all the files across those folders. I need to find these files while not being in the C:\DATA\ directory. I can do this in Windows 7 and Unix, but Windows 10 is stumping me.
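
For what it's worth, two approaches that normally work on Windows 10 are sketched below (assuming the folders all live under C:\DATA):

rem plain cmd: recursive search, bare output (full paths)
dir /s /b C:\DATA\*AC_DATA*

rem PowerShell equivalent
powershell -Command "Get-ChildItem -Path C:\DATA -Recurse -Filter '*AC_DATA*' | Select-Object -ExpandProperty FullName"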

Debian Testing (Jessie) custom kernel and ATI driver installation borked

I am on Debian Jessie/Sid 64-bit and trying to use:




3.12.0-customkernel
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Seymour [Radeon HD 6400M/7400M Series]


I installed the ATI drivers based on https://wiki.debian.org/ATIProprietary#configure. I am getting "Oh no! Something has gone wrong" instead of the proper GDM greeter. I removed the .Xauthority file, the temporary X files, etc., but none of that was any help.



Then I installed the amd-catalyst-13.11-beta6-linux-x86.x86_64.run driver by forwarding my display to another computer, because somehow the AMD driver was failing to install without an X display. I created the initial xorg.conf with aticonfig. That did not work either.



I am not sure if I am hitting a bug here, or something is corrupt in my system. At this point I am out of ideas and I cannot find any leads on the web either. So SU is my last hope I guess.
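
(For context, the "Cannot establish any listening sockets" errors in the logs below usually just mean another X server - here GDM - is still running. The usual workaround when generating a config looks roughly like the following sketch, assuming GDM3 is the display manager:)

# stop the running display manager first
sudo service gdm3 stop
# then generate a skeleton xorg.conf on an unused display number
sudo Xorg :1 -configure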




Here are some logs



Xorg -configure
_XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed
_XSERVTransMakeAllCOTSServerListeners: server already running
(EE)
Fatal server error:
(EE) Cannot establish any listening sockets - Make sure an X server isn't already running(EE)
(EE)

Please consult the The X.Org Foundation support
at http://wiki.x.org
for help.
(EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
(EE)
(EE) Server terminated with error (1). Closing log file.


[ 98.053] _XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed
[ 98.053] _XSERVTransMakeAllCOTSServerListeners: server already running

[ 98.053] (EE)
Fatal server error:
[ 98.053] (EE) Cannot establish any listening sockets - Make sure an X server isn't already running(EE)
[ 98.053] (EE)
Please consult the The X.Org Foundation support
at http://wiki.x.org
for help.
[ 98.053] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
[ 98.053] (EE)
[ 98.053] (EE) Server terminated with error (1). Closing log file.


xorg.1.log
[ 4717.378] X Protocol Version 11, Revision 0
[ 4717.379] Build Operating System: Linux 3.12.0-rc6-patser+ x86_64 Debian
[ 4717.380] Current Operating System: Linux hitit 3.12.0-customkernel #1 SMP Fri Dec 20 23:05:55 CST 2013 x86_64
[ 4717.380] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.12.0-customkernel root=UUID=3ac264da-5290-4bf0-a5dc-4efb7c65e9bd ro quiet
[ 4717.381] Build Date: 25 November 2013 01:54:46PM
[ 4717.382] xorg-server 2:1.14.3-5 (Maarten Lankhorst )
[ 4717.383] Current version of pixman: 0.30.2
[ 4717.385] Before reporting problems, check http://wiki.x.org

to make sure that you have the latest version.
[ 4717.385] Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 4717.388] (==) Log file: "/var/log/Xorg.1.log", Time: Sat Dec 21 13:05:06 2013
[ 4717.389] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[ 4717.389] (==) No Layout section. Using the first Screen section.
[ 4717.389] (==) No screen section available. Using defaults.
[ 4717.389] (**) |-->Screen "Default Screen Section" (0)
[ 4717.389] (**) | |-->Monitor ""

[ 4717.389] (==) No monitor specified for screen "Default Screen Section".
Using a default monitor configuration.
[ 4717.389] (==) Automatically adding devices
[ 4717.389] (==) Automatically enabling devices
[ 4717.389] (==) Automatically adding GPU devices
[ 4717.389] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist.
[ 4717.389] Entry deleted from font path.
[ 4717.389] (==) FontPath set to:
/usr/share/fonts/X11/misc,
/usr/share/fonts/X11/100dpi/:unscaled,

/usr/share/fonts/X11/75dpi/:unscaled,
/usr/share/fonts/X11/Type1,
/usr/share/fonts/X11/100dpi,
/usr/share/fonts/X11/75dpi,
built-ins
[ 4717.389] (==) ModulePath set to "/usr/lib/xorg/modules"
[ 4717.389] (II) The server relies on udev to provide the list of input devices.
If no devices become available, reconfigure udev or disable AutoAddDevices.
[ 4717.389] (II) Loader magic: 0x7fbeb4527d00
[ 4717.389] (II) Module ABI versions:

[ 4717.389] X.Org ANSI C Emulation: 0.4
[ 4717.389] X.Org Video Driver: 14.1
[ 4717.389] X.Org XInput driver : 19.1
[ 4717.389] X.Org Server Extension : 7.0
[ 4717.389] (II) xfree86: Adding drm device (/dev/dri/card0)
[ 4717.389] (II) xfree86: Adding drm device (/dev/dri/card1)
[ 4717.391] (--) PCI:*(0:0:2:0) 8086:0116:104d:907b rev 9, Mem @ 0xc0000000/4194304, 0xb0000000/268435456, I/O @ 0x00008000/64
[ 4717.391] (--) PCI: (0:1:0:0) 1002:6760:104d:907b rev 0, Mem @ 0xa0000000/268435456, 0xc8400000/131072, I/O @ 0x00007000/256, BIOS @ 0x????????/131072
[ 4717.391] (II) Open ACPI successful (/var/run/acpid.socket)
[ 4717.392] Initializing built-in extension Generic Event Extension

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...