Tuesday, September 30, 2014

macos - Could not open the vmdk file

I don't know what I have done, but for over a month now I have not been able to open my virtual machine. I'm using a MacBook Pro with Fusion 2.0.6. I recently upgraded my system to Snow Leopard, but it was working fine after that. Now I get a message that says "File not found: Windows XP.vmdk".


I went through all my backups, and when I unpack the files in the virtual machine, there is no such file. The only file that looks like it may contain the data is the .vmem file, which shows 1.2 GB of data.


I have tried and tried to contact VMware support, but it has become an impossible task. Could someone give me some ideas on how to recover my virtual machine?

domain name system - Multihomed Server or What?

The first thing I need to say is that English isn't my native language, so I hope you guys can understand me.
I want to start hosting a website and mail services at home. The problem is that I don't have many computers, just an old but "good" computer able to run VMware with 3 or 4 Windows 2008 instances.




The first idea I have is this setup:



Host: Windows 2008 DC
Run an Exchange server in one virtual machine
Run a web server in one virtual machine
Run a database server in one virtual machine



My question starts with the basics: DNS. Reading the Microsoft docs, I see it is good practice to create a subdomain for the DC and set the main domain on the web server, but I didn't get how this works. The link where I got this info:



http://technet.microsoft.com/en-us/library/cc759036.aspx




So the point is: I set up the DC, install the web server, start a DNS service, and set the DC's DNS to forward to the web server's DNS? Does the DC's DNS then become something like a secondary DNS?



Is the setup I'm planning the best option?
Could someone give me a hint on whether I should start with this setup, or point me to more info for my limited infrastructure?



My PC Hardware:



16 GB RAM
3 HDs, 1 TB each, in RAID 1+0
Intel Core 2 Quad Extreme 3.2 GHz processor

windows 7 - Caps lock key behavior on PS/2 keyboard is reversed


I'm on a Windows 7 desktop with an external PS/2 keyboard, and its Caps Lock key is behaving in the opposite manner: when I turn Caps Lock on and the Caps Lock light is lit, it types in lowercase, and vice versa. It works fine for a while after rebooting the system, but the problem reappears, so I'm trying to find the cause.


I've gone through several forums, and here are the things I've tried:



  1. Press both shift keys down.

  2. Press Tab and check whether a Shift key is stuck down.

  3. Check for sticky keys.


I've also read that it could be some program modifying the way Caps Lock works, but that's not the case either: I've checked Task Manager, and there is no strange process running that could do so.


I've also tried the On-Screen Keyboard, and it showed Caps Lock as pressed when on my physical keyboard it wasn't, and vice versa. What could be the cause? Thanks in advance for any suggestions or help.


Answer



Just a shot in the dark from this forum post, which is the only one I found that didn't suggest a stuck Shift key (since your On-Screen Keyboard displays Caps Lock as on):



Try this trick. Go into Microsoft Word and type "tHANKS" and it will autocorrect it back to "Thanks" and your keyboard should be in sync with your monitor. I've noticed that my screen says the caps lock is off but my keyboard says it is on and after I try this trick it works again.



It sounds weird, but it has 38 upvotes and a whole lot of "thanks"; I do not know what the root cause is.


filesystems - Map linux folders by size



I have an AWS Linux instance that currently contains many folders.
I would like to map all the folders and their sizes, so that I can come back in a month and check which folders occupy a large amount of storage (maybe the logging folder).




What is the best way to achieve this, so that I can compare this month's sizes with next month's more easily?
Thanks.


Answer



Use the ncdu utility. Record the values, then come back and check again in a month. :)



ncdu 1.7 ~ Use the arrow keys to navigate, press ? for help
--- /ppro ----------------------------------------------------------------------------------------------------------
170.0GiB [##########] /data
104.6GiB [######    ] /sldata
 54.4GiB [###       ] /isam
 48.8GiB [##        ] /slisam
 27.8GiB [#         ] /hist
 15.4GiB [          ] /prt
 12.1GiB [          ] /jmail
 10.1GiB [          ] /zephyr2
  9.7GiB [          ] /edi
  7.9GiB [          ] /savdata2
  6.2GiB [          ] /io
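If you'd rather have something scriptable than ncdu's interactive view, a snapshot-and-diff with du achieves the same comparison (a sketch; the snapshot path /root/du-*.txt is just an example):

```shell
# Record per-directory sizes in MiB, largest first, into a dated file.
du -m --max-depth=1 / 2>/dev/null | sort -rn > "/root/du-$(date +%F).txt"

# A month later, take another snapshot and compare the two files:
# diff /root/du-2014-09-30.txt /root/du-2014-10-30.txt
```

Numeric output diffs cleanly, which makes the month-to-month comparison easy to automate.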

computer building - Use the CDs that came with the hardware, or download drivers from the manufacturer's website?



I am in the process of building a computer and wondered: should I just use the drivers that shipped with the hardware, or is it worth the effort to go to the manufacturers' websites and download the latest versions of everything?



Does anyone have any thoughts or recommendations?


Answer



Get them from the manufacturer's website; they will be the most up to date. Keep the CDs just in case: if your Ethernet driver isn't installed, you can't go online to download drivers, so you may need the CD just to get online in the first place. Usually, though, your operating system will auto-detect a driver for your network interface.


windows 7 - How do I check what processes are actively using the network?




Recently, I discovered that something is downloading on my computer even when I am not doing anything with it.


Is there any program that can show which process or application is using the network?

windows 8.1 - How Do I Allow A Standard User to Install Programs?


Yes, I realize that defeats the purpose of a standard user. But here is the situation: a friend of mine wants to set up time limits for his home-schooled son, and his son needs to be a standard user in order to use Family Safety. At the same time, his son (who I personally think is way too old to have time limits) needs to be able to update and install some of his games on his computer. So it's just sort of "light" parental control (something to help him focus and not get distracted by hours of gaming).


It is the basic version of Windows 8.1, so there is no Group Policy Editor (gpedit.msc). I'm fairly good with computers, and I figured out how to install one, but it doesn't have a lot of the features that the "real" gpedit seems to have. Here is an example of what is in my gpedit:


(screenshot of the installed gpedit)


Answer



YES!!! I did it!!! Here is what I did:



  1. Click Start and type cmd. When cmd.exe shows up, right-click it and select Run as Administrator (this runs Command Prompt elevated).

  2. Type net localgroup "Power Users" /add /comment:"Standard User with ability to install programs." and hit Enter (the quotes around "Power Users" are needed because the name contains a space).

  3. Now you need to assign user/group rights. Download ntrights.exe from here. These are the instructions from sevenforums:



A) Open the downloaded .zip file, and extract (drag and drop) the
ntrights.exe file to your desktop.


B) Right click on the ntrights.exe file, click on Properties, General
tab, and click on the Unblock button if available. NOTE: If you do not
have a Unblock button under the General tab, then the file is already
unblocked and you can continue on to step 1C.


C) Right click on the ntrights.exe file and click on Move.


D) Open Windows Explorer and navigate to and open the
C:\Windows\System32 folder, then Paste the ntrights.exe file to move
it here.


E) If prompted, click on Continue and Yes to approve moving the
ntrights.exe file into the System32 folder, then close the Windows
Explorer window.




  4. In an elevated command prompt (see step 1), type ntrights -U "Power Users" +R SeNetworkLogonRight and hit Enter. Then run the same command again, replacing SeNetworkLogonRight with something else. You can try the following:

    • SeInteractiveLogonRight

    • SeChangeNotifyPrivilege

    • SeSystemtimePrivilege

    • SeTimeZonePrivilege

    • SeCreatePagefilePrivilege

    • SeCreateGlobalPrivilege

    • SeCreatePermanentPrivilege

    • SeIncreaseWorkingSetPrivilege

    • SeIncreaseBasePriorityPrivilege

    • SeLoadDriverPrivilege

    • SeSystemEnvironmentPrivilege

    • SeManageVolumePrivilege

    • SeProfileSingleProcessPrivilege

    • SeSystemProfilePrivilege

    • SeShutdownPrivilege



For a complete list of User Rights and explanations, see my comment below (I can't post more than two links; if someone wants to edit this to add the link, please feel free).



  5. Once that is complete, you need to give your new "Power Users" group permission to write to the C: drive. Open My Computer, right-click the C: drive, and go to Properties. Click the Security tab, click Edit..., then Add..., and in the big box under Enter the object names to select, type Power Users, click Check Names, and click OK.

  6. Under the heading Group or user names, you will see "Power Users." Click it, then check the box beside Full Control. It should automatically check everything else; if not, check the rest manually. The only thing you can't check is Special Permissions, which is grayed out. Click Apply, and it will assign the group permissions. Ignore any errors that come up and continue anyway (I think assigning the user rights in step 4 took care of this).


  7. Open an elevated command prompt again (step 1), and type net localgroup "Users" "Power Users" /add. This nests your Power Users group within Users, so that it is basically a standard user account, but with additional privileges.



  8. Type net localgroup "Power Users" user_000 /add (user_000 being the user name for the account you are trying to keep as a standard user while allowing it to install programs). This keeps your user in the Users group, but also adds it to the new Power Users group (so it is part of multiple groups). Note: If your user signs in with a Microsoft account, you can find the user name by clicking Start, typing control userpasswords2, and hitting Enter. Then click the user account in question, click Properties, and you'll see the actual user name.


ALL DONE! You will notice that Family Safety is still enforced, yet the user can't change its settings, give additional time, or unblock websites. Nor can the user add another user with the User Accounts feature. Yet the user can install programs. :)


Mac Mini drive problems but SMART verified: bad hard drive or controller?

I have a 3-year-old Intel Mac Mini (EDIT: pretty sure it's the 1.66 GHz Core Duo T2300) at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80 GB). I tried booting from the Install Disc to repair the filesystem, but Disk Utility was unable to do so ("invalid node structure"). I was also unable to access the hard drive from the Terminal on the Install Disc or from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error, and copying files failed. At this point I was sure the filesystem was hosed and I'd want to reformat at least.


DiskWarrior let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems (the "speed reduced by disk malfunction" count was over 2000) while trying to rebuild the directory for the drive. It also would not let me replace the directory on the drive with the rebuilt one; it claimed the disk errors prevented recovery that way.


Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, command-line tools and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer.


Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive in another machine or popping another hard drive inside the Mini.

linux - Multiple Reverse SSH Tunnels using Single Port

I am able to accept reverse SSH connections from multiple remote systems on a single server, using one port per connection:




Remote A: ssh -fN -R5000:localhost:22 user@server-ip -p22
Remote B: ssh -fN -R5001:localhost:22 user@server-ip -p22
Remote C: ssh -fN -R5002:localhost:22 user@server-ip -p22


I can access these systems from my local client as needed:



Access Remote A: ssh root@server-ip -p5000
Access Remote B: ssh root@server-ip -p5001



This requires forwarding one port per remote system on the server. When 100+ remotes connect, do I have any options other than opening 100+ ports in the server firewall and statically assigning each remote a port, as above? My goal is to let multiple remotes create tunnels on demand, while I can query who is connected.
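For the "query who is connected" part, one possible sketch (assuming, as above, that the reverse tunnels bind server ports 5000-5099) is to list which of those ports currently have a listener:

```shell
# Each remote's reverse tunnel shows up as a listening socket on the
# server; print the local address:port of every listener in 5000-5099.
ss -tln | awk '$4 ~ /:50[0-9][0-9]$/ { print $4 }'
```

Mapping a port back to a specific remote still requires the static port assignment.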



I found that sslh is a multiplexer that can differentiate between kinds of traffic on a single port, but only between different protocols, e.g. SSL vs. SSH. Is there a solution that allows multiple tunnels on a single port?



Example:



Remote A: ssh -fN -R5000:localhost:22 user@server-ip -p22 -identifier abc123
Remote B: ssh -fN -R5000:localhost:22 user@server-ip -p22 -identifier def456

access Remote A: ssh root@server-ip -p5000 -identifier abc123
access Remote B: ssh root@server-ip -p5000 -identifier def456

domain name system - DKIM check=pass but DomainKeys check=neutral

I got DKIM set up and sent a test mail to check-auth@verifier.port25.com; the reply was:



SPF check:          pass
DomainKeys check:   neutral
DKIM check:         pass
SpamAssassin check: ham



Looks good, but why is the DomainKeys check only neutral? In the mail header I can see the DKIM signature. Is it a big problem that it is just neutral, and how do I fix it?

hard drive - How to interpret S.M.A.R.T and Badblocks results


I have bought a used SSHD (a Seagate Laptop SSHD, ST500LM000-1EJ162) on eBay. According to S.M.A.R.T., the disk might be damaged somehow; I am not sure. I need your help to interpret the S.M.A.R.T. values correctly.


According to S.M.A.R.T., I have a tremendous number of raw read errors and seek errors. I have read a lot of different threads on this topic, and what I have found is that these two values are almost irrelevant, because there is no standardization of what kind of errors have to occur for these two values (Raw_Read_Error_Rate and Seek_Error_Rate) to rise; the manufacturer decides. Generally speaking, Seagate drives tend to have high raw values for raw read and seek errors, while Western Digital drives tend to have low raw values here. Because of this, I have read that it is useless to try to interpret the raw values of these two attributes; instead I should compare the columns named VALUE, WORST and THRESH.
And here the next problem comes in: now it is the opposite, and a VALUE higher than THRESH is what you want.


To make things clearer, have a look at the smartctl -a /dev/sdb snippet below:



ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 120 099 006 Pre-fail Always - 237676480

According to S.M.A.R.T., I have a Raw_Read_Error_Rate with a raw value of 237676480. That looks dangerous at first glance. But looking at the VALUE, WORST and THRESH columns, I have a current(?) VALUE of 120, the WORST case was once 099, and only if the value falls below the THRESH of 006 should the disk be considered broken.


The same goes for reallocated sectors: the lower the column values compared to the THRESH value, the worse the disk's condition.


So according to the S.M.A.R.T. snippet below, my disk has never reallocated anything.



ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0

Now let's look at reported uncorrected errors. As far as I understand, these are counted whenever the disk fails to reallocate a bad sector, with the result that the data stored in that sector is/was lost.



ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
187 Reported_Uncorrect 0x0032 099 099 000 Old_age Always - 1

According to the S.M.A.R.T. snippet above, the disk had one uncorrected sector in its lifetime. Judging by the VALUE and WORST columns, there is no need to fear an imminent disk failure.


Another attribute is Airflow_Temperature_Cel. At first I installed the disk in my 12-year-old laptop and ran badblocks to check it. After badblocks had been running for several hours, I checked the S.M.A.R.T. temperature attribute and saw that the VALUE column was equal to WORST and both had fallen below THRESH; next to the RAW_VALUE there was a statement like DISK IS FAILING. So I decided to turn off the laptop, install the SSHD in my home server, which has better airflow, and restart badblocks. Checking this attribute now, the WORST column records what happened the day before in my laptop, while the VALUE column reflects the current temperature. Comparing VALUE with THRESH, the temperature is fine. What I have trouble with is interpreting the RAW_VALUE. Here is the snippet:



ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
190 Airflow_Temperature_Cel 0x0022 068 037 045 Old_age Always In_the_past 32 (0 120 37 26 0

Last but not least, there is some S.M.A.R.T. information that I have never seen in any S.M.A.R.T. output before, and I have absolutely no clue how to interpret it:



Error 4 occurred at disk power-on lifetime: 521 hours (21 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 71 03 80 04 11 40
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ea 00 00 00 00 00 00 00 00:13:30.508 FLUSH CACHE EXT
61 00 08 00 09 9c 40 00 00:13:30.507 WRITE FPDMA QUEUED
61 00 08 78 e1 42 40 00 00:13:30.507 WRITE FPDMA QUEUED
61 00 28 f0 44 9d 40 00 00:13:30.507 WRITE FPDMA QUEUED
61 00 08 00 6f 71 47 00 00:13:29.805 WRITE FPDMA QUEUED
Error 3 occurred at disk power-on lifetime: 519 hours (21 days + 15 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 51 00 a0 25 e7 06
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ea 00 00 00 00 00 00 00 00:11:47.000 FLUSH CACHE EXT
61 00 08 88 c4 a0 40 00 00:11:45.863 WRITE FPDMA QUEUED
60 00 08 40 d4 08 49 00 00:11:45.863 READ FPDMA QUEUED
61 00 08 00 09 9c 40 00 00:11:45.863 WRITE FPDMA QUEUED
60 00 12 19 47 5a 40 00 00:11:45.863 READ FPDMA QUEUED
Error 2 occurred at disk power-on lifetime: 519 hours (21 days + 15 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 40 d4 08 09 Error: WP at LBA = 0x0908d440 = 151573568
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
61 00 08 78 e1 42 40 00 00:10:28.019 WRITE FPDMA QUEUED
61 00 08 e0 96 a0 40 00 00:10:27.914 WRITE FPDMA QUEUED
61 00 08 98 95 a0 40 00 00:10:27.914 WRITE FPDMA QUEUED
61 00 08 70 95 a0 40 00 00:10:27.914 WRITE FPDMA QUEUED
61 00 08 58 95 a0 40 00 00:10:27.914 WRITE FPDMA QUEUED
Error 1 occurred at disk power-on lifetime: 426 hours (17 days + 18 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 71 03 80 04 11 40
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ea 00 00 00 00 00 00 00 00:35:26.857 FLUSH CACHE EXT
61 00 08 00 09 9c 40 00 00:35:26.856 WRITE FPDMA QUEUED
61 00 08 ff ff ff 4f 00 00:35:26.161 WRITE FPDMA QUEUED
61 00 08 ff ff ff 4f 00 00:35:26.161 WRITE FPDMA QUEUED
61 00 08 ff ff ff 4f 00 00:35:26.160 WRITE FPDMA QUEUED

From the postings I have read on various forums, people tend to advise replacing disks before things get worse. I have also read comments from a few people who were able to use such disks for several years before they finally died. For me this is new territory; I have never had a disk with so many errors. Probably the previous owner treated the disk badly, for example by shaking the laptop a lot, or the SATA connectors did not fit perfectly, causing errors too. As I said, I have no clue how to interpret these parameters. It's like an experiment I am running with this disk.


I checked the disk with badblocks -wvs -b 4096 -o badblox.result /dev/sdb and had no errors. (DO NOT COPY & PASTE THAT BADBLOCKS COMMAND: the -w option is a destructive write test that overwrites everything on the disk!) But when comparing the output of smartctl -a /dev/sdb before and after running badblocks, the Raw_Read_Error_Rate and Seek_Error_Rate values increased a lot, while all the other attribute values remained the same. See the snippets below:


Before running badblocks:



ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 104 099 006 Pre-fail Always - 6995776
7 Seek_Error_Rate 0x000f 059 055 030 Pre-fail Always - 107395771838

After badblocks had finished:



ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 120 099 006 Pre-fail Always - 237676480
7 Seek_Error_Rate 0x000f 059 055 030 Pre-fail Always - 107395783395

The whole S.M.A.R.T Output can be reviewed on PasteBin:


So my questions are:



  • How seriously damaged is this disk?

  • Is my interpretation of the raw read and seek errors correct?

  • Is having zero reallocated sectors a good thing?

  • Is having only one uncorrectable error not too bad?

  • Does zero errors from badblocks mean the disk is in good shape?

  • How do I interpret Error 1 through Error 4?

  • Are there any more tests I should run, apart from the self-test smartctl -t long /dev/sdb that is currently running?


Answer



Very quickly:



  • Raw values mean nothing. They can vary from firmware to firmware, and unless you know exactly what a raw value means for your specific hardware, don't try to interpret it. Sometimes it's obvious (temperature in Celsius); often it isn't.


  • The normalized values are scaled to 100; lower is worse. If a value is 100 or above, there is no need to worry. If it's below 100, the hard disk is showing a bit of wear. If it gets close to the threshold, or falls under it, start to worry.


  • All hard disks have raw read errors. That's a consequence of the high data density of today's drives, and that's what the built-in error correction is for.


  • So: your raw read rate looks fine. Your reallocated sector count is excellent, meaning nothing serious has happened yet. Even a few reallocated sectors would be nothing to worry about.


  • Your temperature is too high for some reason; check that the hard drive is cooled properly. The seek error rate is too high, too. This may be a consequence of the excessive temperature, which makes the metal expand a bit and can move the head position out of spec.



So the one thing you need to worry about is proper cooling. If you can fix that, the seek errors should go down, and in your place I'd keep the disk. (But, of course, you are doing backups, aren't you?)
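The normalized-value rule above can be checked mechanically. A sketch, assuming smartctl's usual attribute table where columns 4, 5 and 6 are VALUE, WORST and THRESH:

```shell
# Flag attributes whose normalized VALUE is at or below THRESH (failing
# now), or whose WORST ever dropped to THRESH (failed in the past).
smartctl -A /dev/sdb | awk '
  $1 ~ /^[0-9]+$/ {
    if ($4 + 0 <= $6 + 0)      print "failing now:", $2
    else if ($5 + 0 <= $6 + 0) print "failed in the past:", $2
  }'
```

On the output quoted in the question, this would flag only Airflow_Temperature_Cel, matching the In_the_past entry in its WHEN_FAILED column.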


Edit


Errors 1-4 come from a log of the five most recent errors reported on the ATA layer. Usually you get a header like:


SMART Error Log Version: 1
ATA Error Count: xxx (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]

So one could look up the command and feature values in the ATA standard to find out more about what happened. But having errors occur from time to time is, by itself, nothing to worry about: the embedded controller is complex, the interaction with the host is complex, the timing is complex; when odd circumstances coincide, that's one way to get an error. Another is a bug in the embedded controller firmware that only triggers under those odd circumstances.


Only when errors occur frequently, right now, and keep occurring is it time to worry, especially if it's always the same error.


You have three errors that occurred after a cache flush, and one after a write (LBA = logical block address). Two happened together, probably as a consequence of the same problem, and the one before and the one after happened independently. In your place, I'd completely ignore them: whatever caused them is over, and it's not happening again.


ubuntu - Convert apache config to nginx equivalent for logs




I have this custom log configuration in Apache:



SetEnvIfNoCase User-agent "ELB-HealthChecker/2.0" skiplog
LogFormat "%t \"%r\" %>s %O | client:%a | Local:%A | Host:%v | %H | %m | %P(pid) | TimeTaken:%T | %q |DataReceived:%I-Sent:%O-Total:%S | %l | %u | \"%{Referer}i\" \"%{User-Agent}i\"" mylog
LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined_mylog
CustomLog /var/log/apache2/access.log mylog env=!skiplog
CustomLog /var/log/apache2/access_org.log combined_mylog env=!skiplog



Basically, I want to skip logging if the user agent is "ELB-HealthChecker/2.0".



Now I have switched to nginx, and I am not sure how to do the same there.


Answer



The nginx access_log directive accepts an if= parameter, which causes it to skip logging whenever the given condition evaluates to "0" or an empty string.



So you can set a variable that defaults to 1 and is set to 0 when that user agent appears. You can do this most easily in a map:



map $http_user_agent $loggable {
    default                    1;
    "ELB-HealthChecker/2.0"    0;
}


Then modify access_log to check the variable:



access_log /var/log/nginx/whatever-access.log log_format_name if=$loggable;

windows server 2008 - How to get LDAP connection string for my ActiveDirectory

I am trying to get Grails LDAP plugin to work with my Active Directory.




The plugin requires a lot of settings I'm not really familiar with, as I don't know much about Active Directory.



Here are the things required by the plugin:



// LDAP config
grails.plugins.springsecurity.ldap.context.managerDn = '[distinguishedName]'
grails.plugins.springsecurity.ldap.context.managerPassword = '[password]'
grails.plugins.springsecurity.ldap.context.server = 'ldap://[ip]:[port]/'
grails.plugins.springsecurity.ldap.authorities.ignorePartialResultException = true // typically needed for Active Directory
grails.plugins.springsecurity.ldap.search.base = '[the base directory to start the search. usually something like dc=mycompany,dc=com]'

grails.plugins.springsecurity.ldap.search.filter="sAMAccountName={0}" // for Active Directory you need this
grails.plugins.springsecurity.ldap.search.searchSubtree = true
grails.plugins.springsecurity.ldap.auth.hideUserNotFoundExceptions = false
grails.plugins.springsecurity.ldap.search.attributesToReturn = ['mail', 'displayName'] // extra attributes you want returned; see below for custom classes that access this data
grails.plugins.springsecurity.providerNames = ['ldapAuthProvider', 'anonymousAuthenticationProvider'] // specify this when you want to skip attempting to load from db and only use LDAP

// role-specific LDAP config
grails.plugins.springsecurity.ldap.useRememberMe = false
grails.plugins.springsecurity.ldap.authorities.retrieveGroupRoles = true
grails.plugins.springsecurity.ldap.authorities.groupSearchBase ='[the base directory to start the search. usually something like dc=mycompany,dc=com]'

// If you don't want to support group membership recursion (groups in groups), then use the following setting
// grails.plugins.springsecurity.ldap.authorities.groupSearchFilter = 'member={0}' // Active Directory specific
// If you wish to support groups with group as members (recursive groups), use the following
grails.plugins.springsecurity.ldap.authorities.groupSearchFilter = '(member:1.2.840.113556.1.4.1941:={0})' // Active Directory specific


I'm using Windows 2008 Server and know the following:



IP = 10.10.10.90
Name = bold.foo.bar (this is what I see under Active Directory Users and Computers)
Domain = BOLD
Group = MANAGERS
Users = USERA (part of the MANAGERS group) and USERB (not part of the MANAGERS group)


Question



Can I get some help filling in some/most of the required configuration? I have access to Active Directory Domain Services in Server Manager, so if most of the information comes from there, I can get it.



PS: I don't have the luxury of a sysadmin helping me on this, so I'm the developer left filling both roles. :)
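Based only on the details given (IP 10.10.10.90, AD DNS name bold.foo.bar), a plausible starting point might look like the sketch below. The bind account cn=grailsbind and the default LDAP port 389 are assumptions, not taken from the question; substitute a real service account's distinguished name and password:

```groovy
// Hedged sketch; for the DNS domain bold.foo.bar, the search base is
// conventionally dc=bold,dc=foo,dc=bar.
grails.plugins.springsecurity.ldap.context.managerDn = 'cn=grailsbind,cn=Users,dc=bold,dc=foo,dc=bar' // hypothetical bind account
grails.plugins.springsecurity.ldap.context.managerPassword = 'changeme' // placeholder
grails.plugins.springsecurity.ldap.context.server = 'ldap://10.10.10.90:389/' // 389 is the default LDAP port
grails.plugins.springsecurity.ldap.search.base = 'dc=bold,dc=foo,dc=bar'
grails.plugins.springsecurity.ldap.authorities.groupSearchBase = 'dc=bold,dc=foo,dc=bar'
```

You can verify the distinguished names in Active Directory Users and Computers before committing to them.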

filesystems - My system became suddenly read-only


System: Linux Mint 17 x64.


My local filesystem suddenly became read-only, and I had to restart.


Why could this happen?


Could I fix it without restart?


syslog file


Answer



The reason the system became read-only is that I was trying to mount a problematic NTFS partition; when that failed, the local filesystem was made read-only, probably for safety/integrity reasons.


Using setenv in Apache (Windows) DocumentRoot




Stuck trying to migrate a configuration from Linux to Windows Apache 2.2 (via WAMP).




We are trying to set an Apache environment variable to be used as the DocumentRoot (and in the rest of the directives thereafter), so we can use the same set of confs on the server and in developers' local working copies.



In Debian Linux, Apache has an envvars file that is loaded with "export" directives. That does not work on Windows, but



SetEnv ROOT_TO_FILES "C:/wamp/www/test"


seems to work, as shown by phpinfo().




But when we use it inside the DocumentRoot directive of a vhost:



DocumentRoot ${ROOT_TO_FILES} 


Apache looks for that literal text under its own root.



This is the way we use it on Linux, but we've also tried the env=ROOT_TO_FILES syntax.



The Apache documentation seems clear about how to "define" a variable, but not about how to use one. I also see that Apache 2.4 includes a new Define directive that seems to do exactly this, which makes me think it might not be possible in earlier versions.



Any samples of use under Windows Apache would be appreciated.


Answer



Apache's SetEnv directive just defines a variable that PHP or Perl scripts can use; for example, such variables show up in PHP's $_SERVER[] global array.



Neither SetEnv nor its cousins have anything to do with the use of variables in Apache directives.



http://httpd.apache.org/docs/current/mod/mod_env.html#setenv




DocumentRoot usually needs a literal string...



What you want to use is mod_macro (included since Apache 2.4.5; before that, you can find it as an extra module to download*):
http://httpd.apache.org/docs/current/mod/mod_macro.html



*For Apache 2.2, search for "mod_macro Apache 2.2 VC9". The TS/thread-safe build is for mod_php; the NTS/non-thread-safe build is for PHP-FCGI.
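For illustration, a mod_macro sketch (untested here; the macro name and paths are made up, the directives are the ones documented for mod_macro, and the access-control lines use Apache 2.2 syntax):

```apache
# Define a macro that takes the document root as a parameter.
<Macro DevVHost $root>
    <VirtualHost *:80>
        DocumentRoot "$root"
        <Directory "$root">
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>
</Macro>

# Expand it with the machine-specific path, then drop the macro.
Use DevVHost "C:/wamp/www/test"
UndefMacro DevVHost
```

Each developer only changes the argument of the Use line; the rest of the configuration stays identical.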


filesystems - Should the nodatacow mount option be used in btrfs in a database server? Does it disable bit corruption checksums?

I am looking at implementing Btrfs in a RAID 10 configuration for a database server, and I am confused about the nodatacow option.


According to https://btrfs.wiki.kernel.org/index.php/Gotchas:



Files with a lot of random writes can become heavily fragmented
(10000+ extents) causing thrashing on HDDs and excessive multi-second
spikes of CPU load on systems with an SSD or large amount of RAM. On
servers and workstations this affects databases and virtual machine
images. The nodatacow mount option may be of use here, with associated
gotchas.



The documentation then states that nodatacow option is:



Do not copy-on-write data for newly created files, existing files are
unaffected. This also turns off checksumming! IOW, nodatacow implies
nodatasum. datacow is used to ensure the user either has access to the
old version of a file, or to the newer version of the file. datacow
makes sure we never have partially updated files written to disk.
nodatacow gives slight performance boost by directly overwriting data
(like ext[234]), at the expense of potentially getting partially
updated files on system failures. Performance gain is usually < 5%
unless the workload is random writes to large database files, where
the difference can become very large. NOTE: switches off compression !



Does this mean that this option should be selected for disks in database servers, and that using this option will disable corruption checksums?
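For context, the two common ways I could apply this would look as follows (paths and the UUID are illustrative; note that chattr +C only affects files created after the flag is set):

```
# fstab sketch: whole-filesystem nodatacow (also disables data checksums
# and compression for newly created files on this mount)
UUID=<fs-uuid>  /var/lib/mysql  btrfs  nodatacow,noatime  0  0

# Per-directory alternative: disable CoW just for the database directory,
# keeping checksums on the rest of the filesystem:
#   chattr +C /var/lib/mysql
```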

macos - How to delete old $PATH echo in Mac


After I changed my Mac username (using the procedure supplied by Apple's documentation), I found that echo $PATH still shows some old paths:


/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/Cellar/tomcat/7.0.42/bin:/Users/WaterWood/eclipse/android-sdk-macosx/platform-tools/:/Users/majie/.rvm/bin

WaterWood is my old username and no longer exists (I also deleted the folder /Users/WaterWood).


I checked all the configuration files, such as .bashrc, .bash_profile, .zshrc, /etc/paths, and /etc/paths.d/, but found nothing containing "/Users/WaterWood".


How can I delete these from my $PATH? Thanks.


================


Update:


First I thought it was some bug in oh-my-zsh, so I reinstalled it. That works at first, but when I log in again the problem reappears.


Changed the system shell to zsh (in System Preferences) and ran /usr/libexec/path_helper; the result is wrong:



PATH="/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/Cellar/tomcat/7.0.42/bin:/Users/WaterWood/eclipse/android-sdk-macosx/platform-tools/:/Users/majie/.rvm/bin"; export PATH;



Changed the system shell to bash and ran path_helper; the result is correct:



PATH="/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin"; export PATH;



I deleted all shell configuration files in my home folder (.zshrc, etc.), but there is still a "waterwood" in my terminal window, which drives me crazy.


[screenshot]


Answer



Check this link if you're facing the same problem.


Which configuration files are read depends on the shell you're using. Back to my question: I had forgotten to check ~/.zprofile ("waterwood" was set in that file).
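If it helps anyone hunting the same ghost, here is a quick sketch for scanning candidate startup files for the stale string (the file list is illustrative, and the throwaway demo file just keeps the sketch self-contained):

```shell
#!/bin/sh
# Report file:line for every startup file that still mentions the
# stale path component ("WaterWood" in the question).
scan_for_stale_path() {
    pattern=$1; shift
    for f in "$@"; do
        [ -f "$f" ] && grep -Hn "$pattern" "$f"
    done
    return 0
}

# Self-contained demo against a throwaway file:
demo=$(mktemp)
echo 'export PATH="$PATH:/Users/WaterWood/bin"' > "$demo"
matches=$(scan_for_stale_path WaterWood "$demo" ~/.zprofile ~/.zshrc /etc/zprofile)
echo "$matches"
rm -f "$demo"
```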


windows 7 - Left shift key displays right click menu




Whenever I press the left shift key, the right click menu pops up.



This is similar to the problem that I found discussed here: Shift key pops up a menu



I ran a keyboard test, and it shows that whenever I press the left shift key, it registers that as shift + the menu key, which also brings up the menu (see screenshot below).



[screenshot]



I know shift + F10 does this, and I am fairly certain the F10 key is not stuck.




Has anyone encountered this problem before, or can anyone suggest some methods to solve it?


Answer



As this issue only happens with the one keyboard, it may be worthwhile just getting a new keyboard to save any headaches.






You can download the free tool AutoHotkey and map the Shift+Menu combination to just Left Shift, to see if this overrides the fault.



Obviously, this will only work if you never need to use the Shift+Menu combination.
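A sketch of such an AutoHotkey script, assuming the keyboard really emits Shift+Menu as the keyboard test indicated:

```
; Swallow the Menu (Apps) key whenever Shift is held, so the faulty
; Shift+Menu chord registers as plain Shift.
+AppsKey::return
```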



Monday, September 29, 2014

APACHE SetEnv directive (from .htaccess) not send to CGI process

I don't understand Apache2's mechanism in this scenario:



1. In this location: /var/www/cgi-bin/ (group rights: www-data)

I have a CGI script (php-cgi) that executes the PHP version selected by the PHP_VERSION environment variable:



#!/bin/bash
# file: /var/www/cgi-bin/php-cgi
exec "/usr/bin/php-cgi$PHP_VERSION"


This script has chmod a+x and is executed at runtime by user www-data (the Apache server).



All PHP versions are located in :





/usr/bin/php-cgi5.6



/usr/bin/php-cgi7.0



/usr/bin/php-cgi7.1



/usr/bin/php-cgi7.2





All versions work fine.



2. In the Apache server's .conf file, I use:




Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Require all granted
AllowOverride All



SetEnv PHP_VERSION 7.1
ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/
Action php-cgi /cgi-bin-php/php-cgi
AddHandler php-cgi .php


When Apache2 restarts, I can see in the browser that PHP Version 7.1 is loaded (works fine).



3. If I put a different PHP_VERSION value in an .htaccess file, it is not sent:




File : /var/www/html/.htaccess



SetEnv PHP_VERSION 5.6


In this case the .htaccess file does nothing at all.
In the browser I still see PHP Version 7.1.



Question :

Why does SetEnv PHP_VERSION X.X work at startup (when I start Apache2)?



And why can't I set a new value (SetEnv PHP_VERSION X.X) from an .htaccess file?
I think Apache doesn't pass the $PHP_VERSION variable to the shell environment (Ubuntu Server 16.04).



If anyone can help me...
Thanks a lot.



Source : https://www.codejam.info/2014/08/apache-get-php-version-from-environment.html

cmd.exe - CMD or BAT to copy/replace files from a relative path to a .lnk target location



Example :




I will have the following :




  • Folder that contains :




    1. The CMD or BAT file

    2. the file/files needed to be copied ( Ex: file1.exe and file2.exe )



  • A .lnk located at the desktop ( Ex: C:\Users\Home\Desktop\Example.lnk) which is a shortcut for ( EX: D:\folder\Example.pdf )




I need the CMD/BAT file to copy file1.exe and file2.exe from their current relative location and paste (or replace) them at the .lnk target location, which after reading the shortcut is D:\folder\.



Edit






I have tried the following to replace Gravity.pdf with another version of Gravity.pdf located in the same folder as the .bat file:




@echo off
setlocal
rem get the .lnk target directory
for /f "tokens=* usebackq" %%i in (`type "C:\Users\Abdo\Desktop\Gravity.lnk ^| find "\" ^| findstr/b "[a-z][:][\\]"`) do (
set _targetdir=%%~dpi
)
rem copy the files
copy /y Gravity.pdf %_target%
endlocal



but I get the error "The syntax of the command is incorrect."



I am trying to understand the code; I cannot see how %_target% refers to the full target path of Gravity.lnk, which in my case is D:\Books\.



Edit 2







I have removed some inserted lines from the code, and now an empty black cmd window opens, but nothing changes:



code :



@echo off
setlocal
rem get the .lnk target directory
for /f "tokens=* usebackq" %%i in (`type "C:\Users\Abdo\Desktop\Gravity.lnk ^| find "\" ^| findstr/b "[a-z][:][\\]"`) do (set _targetdir=%%~dpi)
rem copy the files
copy /y Gravity.pdf %_target%

endlocal

Answer



How do I copy/replace files from a relative path to a .lnk target location?



Use the following batch file:



@echo off
setlocal
rem get the .lnk target directory

for /f "tokens=* usebackq" %%i in (`type "C:\Users\Home\Desktop\Example.lnk" ^| find "\" ^| findstr/b "[a-z][:][\\]"`) do (
set _targetdir=%%~dpi
)
rem copy the files
copy /y file1.exe %_targetdir%
copy /y file2.exe %_targetdir%
endlocal






My code gives an error "the syntax of the command is incorrect".




  • You are missing the " after lnk in the for command.


  • %_target% should be %_targetdir% (that was a mistake in my batch file - now fixed).




Here is the corrected version of your batch file:




@echo off
setlocal
rem get the .lnk target directory
for /f "tokens=* usebackq" %%i in (`type "C:\Users\Abdo\Desktop\Gravity.lnk" ^| find "\" ^| findstr/b "[a-z][:][\\]"`) do (
set _targetdir=%%~dpi
)
rem copy the files
copy /y Gravity.pdf %_targetdir%
endlocal






Further Reading




  • An A-Z Index of the Windows CMD command line - An excellent reference for all things Windows cmd line related.

  • find - Search for a text string in a file & display all the lines where it is found.

  • findstr - Search for strings in files.

  • parameters - A command line argument (or parameter) is any value passed into a batch script.


  • set - Display, set, or remove CMD environment variables. Changes made with SET will remain only for the duration of the current CMD session.

  • type - Display the contents of one or more text files.

  • for /f - Loop command against the results of another command.


multi boot - Possible to restore Windows Partition disk image?

I have a machine that was dual-booting Windows 7 and Ubuntu 12.04. I used the Disks program in Ubuntu to make a disk image of my windows partition which I saved on an external disk. I then reformatted the main drive and installed Ubuntu 12.10. I created a spare partition and restored that Windows disk image to that partition.


Is it possible to boot Windows? I have run boot-repair and Grub now gives me both Ubuntu and Windows options but when I select Windows I get only a black screen with blinking cursor. I ran Startup Repair from Windows Recovery (via USB stick) but still no luck, though it did find the Windows partition successfully. I also tried running lilo inside Ubuntu but still can't boot Windows. Any other ideas?


PS: I'm on a netbook with no DVD drive, so I can't just reinstall Windows, though if that is my only option I can borrow an external DVD drive eventually.

How to prolong the life of an external hard drive?



In ten years, across several different machines, different companies, and different operating systems I've noticed a trend that external hard drives die long before their internal counterparts. Everyone I've spoken to who has used external hard drives for any period of time has shared the same experience.



At first I thought it was because they are moved a lot more than internal drives, but my laptop internal hard drives seem to last as long as my desktop ones, and I've had the same short lifespan when external drives are never moved.



Is this a known issue? Do external hard drives have substantially shorter lifespans than internal drives? If so, what can be done?


Answer



If you put an external harddrive next to the server/desktop and:





  • Do not carry it around in your bag like it is a piece of rock.

  • Do not toss it around like a piece of fruit.

  • Do not expose it to cold, wet outdoor weather followed by dry, hot room temperatures.

  • Do not unplug it and pick it up while it still is spinning.

  • And mount it in a proper case with a decent PSU and sufficient cooling,

  • ...



then I see no reason why they should not last just as long as internal drives.




Of course, there is a reason why people buy external drives. And they often do get exposed to one or more of the conditions I mentioned above.


RC4 cipher not working on Windows 2008 R2 / IIS 7.5



I have Schannel configured to disallow insecure protocols and ciphers as per standard recommendations, but SSLScan only reports AES and 3DES as available cipher options.
Although RC4 should be enabled, and is set up as the preferred cipher, it just doesn't come up as an option.



The schannel registry settings are configured as follows:



HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL
Ciphers
AES 128/128: Enabled (1)

AES 256/256: Enabled (1)
DES 56/56: Enabled (0)
NULL: Enabled (0)
RC2 128/128: Enabled (0)
RC2 40/128: Enabled (0)
RC2 56/128: Enabled (0)
RC4 128/128: Enabled (1)
RC4 40/128: Enabled (0)
RC4 56/128: Enabled (0)
RC4 64/128: Enabled (0)

Triple DES 168/168: Enabled (1)
Protocols
PCT 1.0
Server: Enabled (0)
SSL 2.0
Server: Enabled (0)
SSL 3.0
Server: Enabled (1)
TLS 1.0
Server: Enabled (1)

TLS 1.1
Server: DisabledByDefault (0), Enabled (1)
TLS 1.2
Server: DisabledByDefault (0), Enabled (1)
HKLM\SYSTEM\CurrentControlSet\Control\


The output of SSLScan is:



Supported Server Cipher(s):

Rejected SSLv2 168 bits DES-CBC3-MD5
Rejected SSLv2 56 bits DES-CBC-MD5
Rejected SSLv2 128 bits IDEA-CBC-MD5
Rejected SSLv2 40 bits EXP-RC2-CBC-MD5
Rejected SSLv2 128 bits RC2-CBC-MD5
Rejected SSLv2 40 bits EXP-RC4-MD5
Rejected SSLv2 128 bits RC4-MD5
Failed SSLv3 256 bits ADH-AES256-SHA
Failed SSLv3 256 bits DHE-RSA-AES256-SHA
Failed SSLv3 256 bits DHE-DSS-AES256-SHA

Failed SSLv3 256 bits AES256-SHA
Failed SSLv3 128 bits ADH-AES128-SHA
Failed SSLv3 128 bits DHE-RSA-AES128-SHA
Failed SSLv3 128 bits DHE-DSS-AES128-SHA
Failed SSLv3 128 bits AES128-SHA
Failed SSLv3 168 bits ADH-DES-CBC3-SHA
Failed SSLv3 56 bits ADH-DES-CBC-SHA
Failed SSLv3 40 bits EXP-ADH-DES-CBC-SHA
Failed SSLv3 128 bits ADH-RC4-MD5
Failed SSLv3 40 bits EXP-ADH-RC4-MD5

Failed SSLv3 168 bits EDH-RSA-DES-CBC3-SHA
Failed SSLv3 56 bits EDH-RSA-DES-CBC-SHA
Failed SSLv3 40 bits EXP-EDH-RSA-DES-CBC-SHA
Failed SSLv3 168 bits EDH-DSS-DES-CBC3-SHA
Failed SSLv3 56 bits EDH-DSS-DES-CBC-SHA
Failed SSLv3 40 bits EXP-EDH-DSS-DES-CBC-SHA
Accepted SSLv3 168 bits DES-CBC3-SHA
Failed SSLv3 56 bits DES-CBC-SHA
Failed SSLv3 40 bits EXP-DES-CBC-SHA
Failed SSLv3 128 bits IDEA-CBC-SHA

Failed SSLv3 40 bits EXP-RC2-CBC-MD5
Failed SSLv3 128 bits RC4-SHA
Failed SSLv3 128 bits RC4-MD5
Failed SSLv3 40 bits EXP-RC4-MD5
Failed SSLv3 0 bits NULL-SHA
Failed SSLv3 0 bits NULL-MD5
Failed TLSv1 256 bits ADH-AES256-SHA
Failed TLSv1 256 bits DHE-RSA-AES256-SHA
Failed TLSv1 256 bits DHE-DSS-AES256-SHA
Accepted TLSv1 256 bits AES256-SHA

Failed TLSv1 128 bits ADH-AES128-SHA
Failed TLSv1 128 bits DHE-RSA-AES128-SHA
Failed TLSv1 128 bits DHE-DSS-AES128-SHA
Accepted TLSv1 128 bits AES128-SHA
Failed TLSv1 168 bits ADH-DES-CBC3-SHA
Failed TLSv1 56 bits ADH-DES-CBC-SHA
Failed TLSv1 40 bits EXP-ADH-DES-CBC-SHA
Failed TLSv1 128 bits ADH-RC4-MD5
Failed TLSv1 40 bits EXP-ADH-RC4-MD5
Failed TLSv1 168 bits EDH-RSA-DES-CBC3-SHA

Failed TLSv1 56 bits EDH-RSA-DES-CBC-SHA
Failed TLSv1 40 bits EXP-EDH-RSA-DES-CBC-SHA
Failed TLSv1 168 bits EDH-DSS-DES-CBC3-SHA
Failed TLSv1 56 bits EDH-DSS-DES-CBC-SHA
Failed TLSv1 40 bits EXP-EDH-DSS-DES-CBC-SHA
Accepted TLSv1 168 bits DES-CBC3-SHA
Failed TLSv1 56 bits DES-CBC-SHA
Failed TLSv1 40 bits EXP-DES-CBC-SHA
Failed TLSv1 128 bits IDEA-CBC-SHA
Failed TLSv1 40 bits EXP-RC2-CBC-MD5

Failed TLSv1 128 bits RC4-SHA
Failed TLSv1 128 bits RC4-MD5
Failed TLSv1 40 bits EXP-RC4-MD5
Failed TLSv1 0 bits NULL-SHA
Failed TLSv1 0 bits NULL-MD5


Prefered Server Cipher(s):
SSLv3 168 bits DES-CBC3-SHA
TLSv1 256 bits AES256-SHA




As you can see, RC4 is not accepted as an option.
I've used the same configuration (except for TLS 1.1-1.2) on Windows 2003R2/IIS6 servers before and RC4 hasn't been a problem.



Can anyone help me find why RC4 128/128 is not working?



Thanks!


Answer



The reason RC4 isn't working is that it has to be set to 0xffffffff (4294967295) in the registry, not 1, to be enabled.




Here are some PowerShell functions we used to set up our IIS installs to be PCI compliant.



This function enables/disables the required protocols:



function Set-IISSecurityProtocols {
$protopath = "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols"
& reg.exe add "$protopath\PCT 1.0\Server" /v Enabled /t REG_DWORD /d 00000000 /f
& reg.exe add "$protopath\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 00000000 /f
& reg.exe add "$protopath\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 00000001 /f
& reg.exe add "$protopath\TLS 1.0\Server" /v Enabled /t REG_DWORD /d 00000001 /f

& reg.exe add "$protopath\TLS 1.1\Server" /v Enabled /t REG_DWORD /d 00000001 /f
& reg.exe add "$protopath\TLS 1.1\Server" /v DisabledByDefault /t REG_DWORD /d 00000000 /f
& reg.exe add "$protopath\TLS 1.2\Server" /v Enabled /t REG_DWORD /d 00000001 /f
& reg.exe add "$protopath\TLS 1.2\Server" /v DisabledByDefault /t REG_DWORD /d 00000000 /f
& reg.exe add "$protopath\TLS 1.1\Client" /v Enabled /t REG_DWORD /d 00000001 /f
& reg.exe add "$protopath\TLS 1.1\Client" /v DisabledByDefault /t REG_DWORD /d 00000000 /f
& reg.exe add "$protopath\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 00000001 /f
& reg.exe add "$protopath\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 00000000 /f



}



And this function sets which ciphers are and are not allowed to be used:



function Set-IISSupportedCiphers {
$cipherpath = "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers"
& reg.exe add "$cipherpath\NULL" /v Enabled /t REG_DWORD /d 00000000 /f
& reg.exe add "$cipherpath\DES 56/56" /v Enabled /t REG_DWORD /d 00000000 /f
& reg.exe add "$cipherpath\RC2 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
& reg.exe add "$cipherpath\RC2 56/128" /v Enabled /t REG_DWORD /d 00000000 /f

& reg.exe add "$cipherpath\RC2 128/128" /v Enabled /t REG_DWORD /d 00000000 /f
& reg.exe add "$cipherpath\RC4 40/128" /v Enabled /t REG_DWORD /d 00000000 /f
& reg.exe add "$cipherpath\RC4 56/128" /v Enabled /t REG_DWORD /d 00000000 /f
& reg.exe add "$cipherpath\RC4 64/128" /v Enabled /t REG_DWORD /d 00000000 /f
& reg.exe add "$cipherpath\RC4 128/128" /v Enabled /t REG_DWORD /d 4294967295 /f
& reg.exe add "$cipherpath\Triple DES 168/168" /v Enabled /t REG_DWORD /d 4294967295 /f
& reg.exe add "$cipherpath\AES 128/128" /v Enabled /t REG_DWORD /d 4294967295 /f
& reg.exe add "$cipherpath\AES 256/256" /v Enabled /t REG_DWORD /d 4294967295 /f



}



Once these changes have been made (a reboot is required, AFAIK), you can then set the priority in which the ciphers are used.
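For what it's worth, on 2008 R2 the server-side priority can be pinned via the SSL cipher suite order policy; a sketch (the suite list here is illustrative only, and the same setting is exposed in gpedit.msc under Computer Configuration → Administrative Templates → Network → SSL Configuration Settings):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002]
"Functions"="TLS_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA"
```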



To mitigate the BEAST vulnerability it's recommended you prioritise RC4 first, as outlined at http://www.phonefactor.com/blog/slaying-beast-mitigating-the-latest-ssltls-vulnerability.php



This will be the case until either




  • All browsers patch the BEAST vulnerability


  • Or everyone starts supporting TLS1.2


How to use a retail Windows 7 Professional license key to upgrade an installed Windows 7 Home Premium machine



I purchased a retail version of Windows 7 Professional and installed it on a computer. That machine has died, and I replaced it with a new computer which came with Windows 7 Home Premium already installed.



I'd like to have Professional, not Home Premium, on the new machine, and I don't want to pay for an "Anytime Upgrade" because I already have a valid Windows 7 Professional license (for the dead computer).




Is there a way to legally upgrade using my Professional license key? I've already installed programs, data, etc on the new machine, so I don't want to reformat and start from scratch.


Answer



Just run Anytime Upgrade, and when you're prompted for a license key, enter your retail key. It should work just as well.


ubuntu - what permissions should I give to a folder on apache when it demands write and execute permissions



I am trying to get a few Content Management Systems up and running.
But I have security concerns with respect to them.




1) Please see the following link:
http://www.dokeos.com/doc/installation_guide.html — section 2 says:
The following directories need to be readable, writable, and executable for
everyone:




  • dokeos/main/inc/conf/

  • dokeos/main/upload/users/

  • dokeos/main/default_course_document/

  • dokeos/archive/


  • dokeos/courses/

  • dokeos/home/



I am not very happy with this idea of having directories readable, writable, and executable for everyone.



2) http://doc.claroline.net/en/index.php/Install_general_information



the section

Rights on folders says



" If you don't want to set write access on the whole folders, which is
recommended for security reasons, give to the web server user write access on
these folders : "



Is this a recommended practice?



3) Another LMS (Learning Management System) also asked, during installation, to make some folders writable and executable for everyone.

here is a link
http://atutor.ca/atutor/docs/installation.php
While installing it I got a message




“The directory you specify must be created if it does not already exist
and be writeable by the webserver. On Unix machines issue the command
chmod a+rwx content; additionally, the path may not contain any symbolic
links. chmod a+rwx /var/www/atutor/content”




4) Another LMS, DoceboLMS, asked for write permissions on:



files/doceboCore/photo
files/common/users
files/doceboLms/course

files/doceboLms/forum
files/doceboLms/item
files/doceboLms/message
files/doceboLms/project
files/doceboLms/scorm
files/doceboLms/test


I checked its documentation
http://www.docebo.org/doceboCms/index.php?mn=docs&op=docs&pi=5_4&folder=7

but was not that helpful.



I am not at all convinced by the idea of granting read, write, and execute
permissions to everyone, as these Learning Management Systems suggest.
What do you people have to say?
What is the best practice in such situations?


Answer



You're right to be concerned, and too many application vendors resort to telling you you need to grant full permissions to every user in order to avoid problems. They do this to minimize support calls, rather than to maximize security.



It does make sense that the web server account would need write access to certain directories to store uploads or generated files. And execute permission on a directory is required in Unix for a user to traverse it and access its contents, so that will also be necessary.




Ultimately, what you want is for the user account that is running the web server process (most likely www-data if you are using a packaged web server on Ubuntu) to own the folders in question, and then the standard permissions of 755 (rwxr-xr-x) are sufficient, or if you are on a shared system with other untrusted users, you'd want 700 (rwx------).



So, in your first example, assuming those directories already exist, you would need to do this:



$ sudo chown -R www-data:www-data dokeos/main/inc/conf/ dokeos/main/upload/users/ dokeos/main/default_course_document/ dokeos/archive/ dokeos/courses/ dokeos/home/
$ sudo chmod 755 dokeos/main/inc/conf/ dokeos/main/upload/users/ dokeos/main/default_course_document/ dokeos/archive/ dokeos/courses/ dokeos/home/


Again, if you are on a shared system, you may wish to replace "755" with "700" on the second line. If you know that www-data is not the user running your web server, replace that item with the correct value. You can run the same two commands on the directories for the second system as well. In both cases, write access is probably necessary, but only for the single user running the web server, not for everyone.




Good luck.


centos - Weird access log on my server




Every day, the IP 58.218.204.110 tries to get a non-existent file, hxxp://216.245.205.74/judge.php, from my server. The IP 216.245.205.74 is not my server's IP. Should I just ignore it, or is there a problem? Thanks.



Wordpress stats:



Date Time IP Threat Page OS Browser



August 4, 2010 13:23:07 58.218.204.110 0 hxxp://216.245.205.74/judge.php Windows XP Internet Explorer 6



August 4, 2010 10:08:53 58.218.204.110 0 hxxp://216.245.205.74/judge.php Windows XP Internet Explorer 6




August 4, 2010 06:58:07 58.218.204.110 0 hxxp://216.245.205.74/judge.php Windows XP Internet Explorer 6



Access Log:



58.218.204.110 - - [30/Jul/2010:01:01:25 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 404 286 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"



58.218.204.110 - - [30/Jul/2010:03:49:36 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 404 286 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"



58.218.204.110 - - [30/Jul/2010:06:46:42 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 404 286 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"




58.218.204.110 - - [30/Jul/2010:09:27:22 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 404 286 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"



58.218.204.110 - - [30/Jul/2010:12:20:24 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 404 286 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"



58.218.204.110 - - [30/Jul/2010:14:56:25 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 404 286 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"



58.218.204.110 - - [31/Jul/2010:22:36:58 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 404 286 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"



58.218.204.110 - - [03/Aug/2010:01:42:46 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 301 - "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"




58.218.204.110 - - [04/Aug/2010:10:08:52 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 301 - "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"



58.218.204.110 - - [04/Aug/2010:13:23:06 -0700] "GET hxxp://216.245.205.74/judge.php hxxp/1.1" 301 - "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"


Answer



I guess you substituted http with hxxp in the messages (it isn't clear). If so, someone is probing your server to see if it is configured to act as an open proxy. Since you don't seem to be running mod_proxy, it returns 404 (Not Found).



Usually, there is no need to worry. If you have servers publicly visible on the Internet, you are going to see this every single day. You will also see people trying to exploit all kinds of vulnerabilities in all kinds of software (phpMyAdmin probes are particularly annoying), even for software you don't have installed. And then there are DFind scans (see the ISC/SANS diaries)...



However, those 301 (Redirect) responses are strange...
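If the noise bothers you, you can tally the probing IPs straight from the log and, if you like, drop them at the firewall. A sketch (the sample log lines are from the question; the iptables line is illustrative and commented out):

```shell
#!/bin/sh
# Count requests for judge.php per source IP from an access log.
# A throwaway log keeps the sketch self-contained.
log=$(mktemp)
cat > "$log" <<'EOF'
58.218.204.110 - - [30/Jul/2010:01:01:25 -0700] "GET http://216.245.205.74/judge.php HTTP/1.1" 404 286
58.218.204.110 - - [30/Jul/2010:03:49:36 -0700] "GET http://216.245.205.74/judge.php HTTP/1.1" 404 286
EOF
counts=$(grep 'judge\.php' "$log" | awk '{print $1}' | sort | uniq -c | sort -rn)
echo "$counts"
# To block a persistent offender at the firewall (run as root):
#   iptables -A INPUT -s 58.218.204.110 -j DROP
rm -f "$log"
```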


hard drive - Windows 8 Pro 100% Disk Usage on Startup

I had Windows 7 on this laptop before, and this didn't happen. Disk usage goes all the way up to 100% and everything is slow, even though the actual throughput is only about 0.8–1 MB/s.


What could be wrong? I already ran both disk checks, although the second one took a while because it got stuck at 28%, but it finished later. There is a firmware update on the Acer website saying "Will increase hard drive performance", yet I'm scared of bricking my hard drive.


What should I do? By the way, I defragment very frequently, and this happens even on a clean boot. It only does this on startup, but it's really annoying.


It's a Western Digital Scorpio Blue WD7500BPVT

boot - Installing Windows XP (SP3) from USB stick on a Netbook with only FreeDOS

Recently I bought an Asus X201E-KX179D netbook and it has no Windows installed on it, only the BIOS. No DVD Drive, no OS installed.


When I start my netbook, I get three options:



  1. Load FreeDOS with maximum RAM free using EMM386

  2. Load FreeDOS including HIMEM XMS-memory driver

  3. Load FreeDOS without drivers


And I get the option to press:


F5=Bypass startup files
F8=Confirm each line of CONFIG.SYS/AUTOEXEC.BAT


I copied the Windows XP (SP3) .iso image to my pen drive (SanDisk, 8 GB) using Rufus, which also made the USB stick bootable.


However, when I insert the USB stick and enter the BIOS setup to change the boot priority, USB Boot is not shown in the boot options; the USB stick is not recognised. When I restart the netbook with the USB stick inserted, it again boots into FreeDOS.


Kindly help me install Windows XP (SP3) or Windows 7 (which is the better option?).


Additional info: USB Ports (Unlocked) in BIOS Setup

Sunday, September 28, 2014

hardware - HP ProLiant DL360 G7 hangs at "Power and Thermal Calibration" screen



I have a new HP ProLiant DL360 G7 system that is exhibiting a difficult-to-reproduce issue. The server randomly hangs at the "Power and Thermal Calibration in Progress..." screen during the POST process. This typically follows a warm-boot/reboot from the installed operating system.



[screenshot]



The system stalls indefinitely at this point. Issuing a reset or cold-start via the ILO 3 power controls makes the system boot normally without incident.



When the system is in this state, the ILO 3 interface is fully accessible and all system health indicators are fine (all green). The server is in a climate-controlled data center with power connections to PDU. Ambient temperature is 64°F/17°C. The system was placed in a 24-hour component testing loop prior to deployment with no failures.




The primary operating system for this server is VMWare ESXi 5. We initially tried 5.0 and later a 5.1 build. Both were deployed via PXE boot and kickstart. In addition, we are testing with baremetal Windows and Red Hat Linux installations.



HP ProLiant systems have a comprehensive set of BIOS options. We've tried the default settings in addition to the Static high-performance profile. I've disabled the boot splash screen and just get a blinking cursor at that point versus the screenshot above. We've also tried some VMWare "best-practices" for BIOS config. We've seen an advisory from HP that seems to outline a similar issue, but did not fix our specific problem.



Suspecting a hardware issue, I had the vendor send an identical system for same-day delivery. The new server was a fully-identical build with the exception of disks. We moved the disks from the old server to the new. We experienced the same random booting issue on the replacement hardware.



I now have both servers running in parallel. The issue hits randomly on warm-boots. Cold boots don't seem to have the problem. I am looking into some of the more esoteric BIOS settings like disabling Turbo Boost or disabling the power calibration function entirely. I could try these, but they should not be necessary.



Any thoughts?




--edit--



System details:




  • DL360 G7 - 2 x X5670 Hex-Core CPU's

  • 96GB of RAM (12 x 8GB Low-Voltage DIMMs)

  • 2 x 146GB 15k SAS Hard Drives

  • 2 x 750W redundant power supplies




All firmware up-to-date as of latest HP Service Pack for ProLiant DVD release.



Calling HP and trawling the interwebz, I've seen mentions of a bad ILO 3 interaction, but this happens with the server on a physical console, too. HP also suggested power source, but this is in a data center rack that successfully powers other production systems.



Is there any chance that this could be a poor interaction between low-voltage DIMMs and the 750W power supplies? This server should be a supported configuration.


Answer



So, after bringing a third system into the mix, and experiencing the same issue, we began to question the environment. I dug up a copy of the HP ProLiant Servers Troubleshooting Guide and found the POST problems flowchart shown below.



[flowchart]




Carefully running through the steps in the chart, we realized that the one constant across all of the servers was a KVM switch attached to the data center crash cart. This was a consumer-class, USB-enabled KVM. As per the highlighted node in the flowchart, "Do you have a known-good KVM?", I could not answer conclusively.



So, we unplugged the servers from the KVM switch and ran an automated boot loop (sleep 300; reboot in rc.local). The servers had no issues with this, regardless of normal vs. low-voltage DIMMs, PSU wattage, etc.



This was all the result of a poor interaction with a USB KVM switch. Because the KVM was the console, we were guaranteed to see the failure whenever we looked for it. Self-fulfilling...


cron - crontab day of month not working



My crontab is firing too often: today is the 21st (of November, 2015) and both of these lines get executed. I really cannot figure out why.




* * 1 * 0,6 echo "test in dom" >> /opt/testweekend
* * * * 0,6 echo "test" >> /opt/testweekend

Answer



Today, 21 Nov 2015, is a Saturday, so the second is clearly eligible to run. But so is the first; the crontab(5) man page says:




Note: The day of a command's execution can be specified in the following two fields — 'day of month' and 'day of week'. If both fields are restricted (i.e., do not contain the "*" character), the command will be run when either field matches the current time. For example, 30 4 1,15 * 5 would cause a command to be run at 4:30 am on the 1st and 15th of each month, plus every Friday.




Thus your first entry will run every minute of every Saturday and Sunday, and every minute of the first of every month.
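If the intent was the opposite (run only when it is both the 1st of the month and a weekend), a common workaround is to leave the day-of-month field unrestricted and test it inside the command instead. A sketch, assuming GNU date:

```shell
# Crontab line (note: % must be escaped as \% inside a crontab):
#   * * * * 0,6  [ "$(date +\%d)" = "01" ] && echo "test in dom" >> /opt/testweekend
# With day-of-month left as '*', cron applies only the day-of-week
# restriction, and the date test enforces "1st of the month".
# Demonstrating the date test itself:
[ "$(date -d 2015-11-01 +%d)" = "01" ] && echo "1 Nov 2015: would run"
[ "$(date -d 2015-11-21 +%d)" = "01" ] || echo "21 Nov 2015: would not run"
```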


windows - Keyboard strange behavior

I have an ASUS R900V laptop. A year ago, I spilled a small amount of water over the keyboard and it started behaving strangely, writing "aq" when I press the a or q key, and other things like that.
I started using an external keyboard, and after a month the internal one started working well again.
Now, after nearly a year, the keyboard has started doing this crazy thing by itself again. I thought it was a hardware issue, so I brought out my external keyboard again, but the strangest thing is that the internal keyboard works fine for about a minute after I turn the laptop on, and sometimes randomly, so maybe it's more of a software issue. The problem persists even after formatting my PC.

windows - How to detect why I cannot delete directory?



I cannot delete an empty directory, and I would like to know why. That directory contained a movie that was being played by my custom player, which might somehow still be blocking the directory; however, the player is closed and not visible in the process list under Ctrl+Alt+Delete.



I have installed Process Monitor, and when I execute from the console




rmdir directory


the command says the directory cannot be deleted, and Process Monitor shows



Operation: CreateFile
Result: SHARING VIOLATION
Desired Access: Read Attributes, Delete, Synchronize
Disposition: Open
Options: Directory, Synchronous IO Non-Alert, Open Reparse Point
Attributes: n/a
ShareMode: Read, Write, Delete
AllocationSize: n/a

What can I do to discover why I cannot delete that directory, and how can I delete it?



P.S. I know I will be able to delete it after restarting the computer, but I would like to know why my player is blocking that directory.



Answer



You can use Process Explorer's "find handle" feature with (part of) the directory name to see what processes, if any, have a handle on that directory.
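A command-line alternative, if Process Explorer's GUI is inconvenient, is Sysinternals' Handle utility (a sketch; handle.exe is assumed to be downloaded and on your PATH):

```shell
# Windows-only: run from an elevated prompt. Lists every process holding
# a handle whose path contains the given string (use your folder's name).
handle.exe directory
```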


Dual Booting Windows 7 and Ubuntu


I'm planning on dual booting Windows 7 and Ubuntu on one of my old computers. When I was looking at Disk Management, I saw this:


enter image description here


My question is, if I install Ubuntu on the D: drive, will it work properly or would I get errors? I haven't done a dual boot in years and never had something like this before. Thanks for the answers in advance.


Answer



I don't think that you are going to have problems with that partition scheme.
Just remember that the data stored there will be deleted, so make a backup first.
Also, during the Ubuntu installation you can format that partition as ext4 instead of NTFS, which is better suited for Linux, and add another small partition to use as swap.


windows - SCCM Client Image deployment options



At my organization, we are using SCCM to manage OS deployments. Right now, it's rather complicated to get an image out to a client and I'm looking for an easier way. Here is what we have to do right now to get this to work:



We first have to gather the computer name and MAC address so we can properly target the machine. All of the computers we want to reimage are added to a collection that has one required task. This task reboots the machine into a PXE environment, wipes the hard drive, and drops the OS onto the drive.



The problem with this is that when you add a computer to the collection, the process doesn't run until the next scheduled task check. This can take up to 30 minutes depending on when you add it to the collection. On top of that, there is no way to set a timer so techs have to wait until off hours to drop any images.




What's worse is that we don't have a way to manually kick off the imaging process. If we have a clean hard drive, we have a choice on how to proceed. We can use non-SCCM images to get an OS on it, install the SCCM client, and then reimage. We can also hunt for the MAC address of the computer (assuming there is a label somewhere) and PXE boot it once we add it to the collection.



Do you have a headache? Because I do. SCCM is managed by another department, and it's like pulling teeth to get any of our 500 computers reimaged. They even flat out told me that a first-pass image run has an average success rate of 60%.



There has got to be a better way to do this.


Answer



The SCCM client polling interval is a configurable setting that applies to all clients in the site. It sounds like it's set to 30 minutes at your site, but (depending on network limitations and server load) someone with the appropriate SCCM site access can safely lower it to 15 minutes.



There are ways to force a machine to check outside of its normal polling schedule.




There's an open-source tool called SCCM Client Center that you can use to connect to a client machine and check or set a lot of the SCCM details (as long as you have the appropriate permissions).



What you can do is this: once you've placed a machine in a collection, rather than waiting for it to poll, connect to it using Client Center, select Client Actions -> Download Machine Policy, then wait a minute or two and select Client Actions -> Apply Machine Policy. This forces the client to connect to the SCCM server and collect any pending policy changes (e.g. new adverts); once they're downloaded, you tell it to apply them.
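For what it's worth, the same download-then-apply pair can also be triggered without Client Center, via the SCCM client's WMI interface on the machine itself. A sketch, run as an administrator on the client; the GUIDs are the standard machine-policy schedule IDs:

```shell
# Windows-only (cmd or PowerShell prompt, run as administrator):
# fires the Machine Policy Retrieval (...021) and Machine Policy
# Evaluation (...022) cycles on the local SCCM client.
wmic /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000021}" /NOINTERACTIVE
wmic /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000022}" /NOINTERACTIVE
```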



Setting a timer on the job is down to whoever set up the advert in the first place; they had a number of scheduling options to pick from and in this case presumably chose "As soon as possible". You can also set up Maintenance Windows on machines to control when jobs can and cannot run, which would stop builds from happening during working hours if that's what you want, but it sounds like these parts are outside of your control, unfortunately.



I could be mistaken about how you're set up, but normally once a machine is registered in SCCM you don't need to know its MAC address, and if you ever do, you can find the machine in the console, right-click it, and look at its details. Machines can be added to a collection by name, MAC, IP, or pretty much any criterion; you only need to know one unique thing about each.



New 'bare metal' builds are obviously slightly different, but we don't currently use that part of SCCM (though we are planning to move over to it), so I can't tell you much there.



sublime text 2 - Edit both opening and closing html tag


Is there a way (core or plugin) to have the closing HTML tag name change accordingly when renaming the opening one? A behaviour similar to multi-selection...


Answer



The Zen coding plugin has a command "select matching tag name" which selects the nearest opening and closing tags relative to the cursor.


command line - Batch rename files in linux using folder name



I have a load of directories (2005 - 2012), each with files (01.jpg - 100.jpg).




If I wanted to move all the files into the base directory, renamed to, for example, Folder 2005 - 01.jpg, what would be the easiest way of doing this from the command line in Linux?



For example from



/home/mark/images/2005/01.jpg
/home/mark/images/2005/02.jpg
/home/mark/images/2005/03.jpg
/home/mark/images/2006/01.jpg
/home/mark/images/2006/02.jpg
/home/mark/images/2006/03.jpg



to



/home/mark/images/Folder 2005 - 01.jpg
/home/mark/images/Folder 2005 - 02.jpg
/home/mark/images/Folder 2005 - 03.jpg
/home/mark/images/Folder 2006 - 01.jpg
/home/mark/images/Folder 2006 - 02.jpg
/home/mark/images/Folder 2006 - 03.jpg



Surely there must be a simple one-liner for this? I know that you can use, e.g., {2005..2012} to access the multiple directories, but I'm not sure how to then access that value later when renaming.


Answer



#!/bin/bash
for year in 20??; do
  [ -d "$year" ] || continue   # skip anything that is not a year directory
  pushd "$year" > /dev/null
  for file in *; do
    echo mv "$file" ../"Folder ${year} - ${file}"
  done
  popd > /dev/null
done


Remove the echo if the output looks good to you.
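If you specifically want a one-liner, the same idea collapses into a single loop using parameter expansion (a sketch; run it from /home/mark/images, and drop the echo once the printed commands look right):

```shell
# ${f%/*} is the directory part (the year); ${f##*/} is the file name.
for f in 20*/*.jpg; do echo mv "$f" "Folder ${f%/*} - ${f##*/}"; done
```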


windows 7 - Renaming multiples files with only a part of the original file name


I want to rename 40 PNG files in one folder; they have very long names. They are named serially like this: "blah...blah...blah160.png", "blah...blah...blah200.png", i.e. after 40 alphanumeric characters comes the serial number in three digits (160). I want only the last three digits to remain in the file name, so "blah...blah...blah160.png" should become "160.png". Is there a simple one-line command for cmd.exe in Windows 7?


Answer



The following cmd file should do the job:


@echo off & setlocal
for %%F in (*.png) do call :doIt "%%F"
goto xit
:doIt
set name=%~n1
set num=%name:~-3%
set ext=%~x1
set lentest=%name:~40,3%
if not [%lentest%]==[] (
  copy %1 "%num%%ext%"
  rem del %1
)
goto :EOF
:xit
endlocal

Remove the rem from the "del" line to actually delete the version with the long name. (rem is used rather than ::, because a ::-style comment inside a parenthesized block can break cmd's parsing.)


It is possible to squeeze this in fewer lines, but this would make it less comprehensible.


What is the easiest method of checking SMART status for your hard drive?


I've seen programs in the past that could check the SMART status of a hard disk drive, but they weren't easy to find. Also, I think I had to boot from a CD in order to check it. What is your preferred method for getting this data, to hopefully preempt any disk failures?


Answer



I think "smartmontools" (S.M.A.R.T. Monitoring Tools) is the one I've used before; it gives you all the parameters.


If you are going to be fooling with SMART, I'd recommend looking at the Google paper on drive failures. They are one of the few groups on the planet that have enough drives to do any real analysis, so their comments on the usefulness of SMART are probably the best research you'll find on the subject.
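For reference, smartmontools' command-line tool is smartctl; typical usage looks like this (a sketch, assuming a Linux system with the smartmontools package installed; the device path /dev/sda is an assumption, so adjust it to your drive):

```shell
# Overall pass/fail verdict from the drive's self-assessment:
sudo smartctl -H /dev/sda
# Full attribute table (reallocated sectors, pending sectors, etc.):
sudo smartctl -a /dev/sda
```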


windows 7 - Move boot partition from TrueCrypt to new drive?


I have a laptop with a 500 GB HDD. The C partition takes up almost all of the drive. I have 2-3 other small partitions for manufacturer recovery software or system stuff; these came with the laptop.


The C partition is encrypted with TrueCrypt.


I want to migrate everything to a new, 750 GB HDD. I am not concerned about the recovery partition.


It looks like I can't use the easier options (GParted, DriveImage XML, EASEUS Disk Copy, etc.) because of TrueCrypt. Even if they worked, I'd end up with a bunch of unallocated space, and all I could do is make another partition, which I don't want. extcv claims to resize TrueCrypt volumes, but its last update was in 2010, and it's only compatible with older TrueCrypt volumes.


Do I have options besides:



  1. Set up new OS install on 750 GB HDD, then copy all my data over and set everything back up.

  2. Decrypt the 500 GB volume, then use the easier options to copy everything over to the 750 GB volume.


Answer



If you're moving data to a new drive, what needs to be resized? And why bother decrypting?


Option 1 sounds correct.


raid - HP DL380 G5 Predictive failure of a new drive

Consolidated Error Report:
Controller: Smart Array P400 in slot 3

Device: Physical Drive 1I:1:1
Message: Predictive failure.



We have an HP DL380 G5 server with two 72 GB 15k SAS drives configured in RAID 1. A couple of weeks ago, the server reported a drive failure on Drive 1. We replaced the drive with a brand-new HDD with the same spare part number. A few days ago, the server started reporting a predictive drive failure on the new drive, in the same bay.



Is it likely the new drive is bad... or more likely we have a bay failure problem?



This is a production server, so any advice would be appreciated. I have another spare drive, so I can hot-swap it if this is a fluke and the new drive is just bad.



THANKS!

CharlieJ

acronis trueimage - I just deleted my backup file! How do I save it?

I just accidentally deleted a backup file that I need to restore my system. It's an Acronis True Image TIB file. It was stored at H:\My backups and the name of the file was File_backup_2012-10-18.tib.


I did a quick scan with Recuva 1.43.623 and it found the file using the recovery wizard, but it was unable to recover it. The "state" of the file is "unrecoverable", so the resulting file is 0 bytes.


I am trying to do a deep scan with Recuva right now, but it takes a lot of time. If it fails, what other recovery options do I have? Is there any other good file-recovery software that's free for home users?


I do have a second copy of the whole system partition, but I need this file backup because it is more up to date.


recuva 1.43


That's the file, right there! But why is Recuva unable to recover it?

Saturday, September 27, 2014

laptop - AC not working properly


Yesterday evening I used my laptop without any issue; this morning I went to work and noticed that the AC charger is not working properly. This is the behavior:



  • If I connect the charger while the laptop is off, the "power" LED lights up; if the battery is in, the LED blinks from red to white (I think this means the battery is charging)


  • If I try to power up the laptop, the "power" LED lights up, but the laptop uses the battery as the power source rather than the AC. If I unplug the AC and plug it back in, it charges for a few seconds and then stops. I have to unplug the AC from the laptop to make it recognize that the charger has been unplugged; if I only unplug the charger from the power socket, it doesn't notice


  • If I try to power the laptop with the battery removed but the charger connected, it powers up for one second and then shuts down again



Now I want to know what exactly is not working (probably the charger, but I'm not sure). What tests can I run to rule things out?
The PC is an HP dv6-3011el, and it's no longer under warranty


Answer



The most likely cause is the charger, and it's also the simplest part to replace. The easiest way to test would be to try another HP charger if you can get hold of one. It may not need to be from the exact same laptop model; charger output ratings often don't change between different laptops from the same manufacturer.


My HP charger output (for what it's worth) is 19.5V 3.33 A 65W.


Microsoft Storage Spaces without ReFS




I'm testing a new build of a surveillance/video-recording server. My OS (Windows 10 Professional) is on an SSD, and my data is stored on 8 spinning disks. I wanted to try out Storage Spaces and noticed that ReFS is no longer an option with Windows 10 Professional. Is it "safe" to use Storage Spaces with NTFS, or would it be worth going with Windows 10 Pro for Workstations to get the ReFS option?


Answer



There isn't a clear answer here, as it depends on your requirements. Can you lose the data? How long can your PVR "server" be down while you repair NTFS offline?



ReFS integrates with Storage Spaces so that it can recover files from mirror/parity blocks, making repairs extremely fast and online, without any interruption.



If you want to use ReFS (and I think you should), then your options are:




  • Windows Server - since you're saying "video recording server"


  • Windows 10 Enterprise

  • Windows 10 Pro for Workstations


virus - Can Windows reset save my computer?


I've gone through some kind of virus and malware apocalypse after downloading some stuff, and antivirus programs fail to clean everything. What is worse, after deleting what seemed to me like some malware remnants, Microsoft Edge stopped working.


So I thought that the only way to recover my PC is a full Windows reset, but I have neither an installation disc nor a restore point. I can still reset my PC, but I'm not sure whether Edge will work again. I just don't want to end up with a clean computer without any access to the internet.


Does anyone know if resetting my PC will help?


Answer



Microsoft offers the Media Creation Tool free of charge. Get an empty 8 GB or larger USB stick and use the tool to turn it into installation media.


Download your network drivers and save them on the USB stick.



  1. Back up your important files to an external USB hard drive.

  2. Boot from the USB stick.

  3. Format and reinstall Windows.

  4. Re-apply your network driver.

  5. Use a web browser to download everything else.


http://devid.info/ will help


What happens to the recovery partition from Dell after Windows 10 Upgrade?

I bought a Dell Studio XPS 8100 desktop back in 2010, which had Windows 7 installed and came with a partition for Dell Factory Restore.




After having installed Windows 10, what happened to that partition? Did the installation get rid of it? If not and I were to use it to do a Dell Factory Restore, would it "reinstall" Windows 7?



Sorry if this is a duplicate of a question somewhere; I didn't see one exactly like this asking about the Windows 10 upgrade and Dell's recovery partition.

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...