Monday, November 30, 2015

windows 7 - win 7 installation issue "No device drivers were found..."

Just bought a new PC and I'm trying to install Win 7 from USB. I had my ISO file on my other laptop, and I used the Windows USB tool to copy the ISO to the USB after formatting it to exFAT (I tried NTFS first, but the copy failed... why is that?).


Anyway, the UEFI BIOS menu recognizes my USB drive (4 GB) and starts loading the Windows Boot Manager, but then I get this message:



"No device drivers were found. Make sure that the installation media contains the correct drivers, and then click ok"



Now, I've seen some solutions to this problem, like swapping USB ports (I did; it didn't help. I have 2 blue USB 3 ports and a whole bunch of regular USB 2 ports, and neither works). I also saw advice to turn something off in the BIOS so the boot process can work with USB 3, but as I see it, the BIOS already recognized my USB drive, so why wouldn't it continue with the installation?


The USB drive is still recognized while in the boot manager, I think. After I get this message, I can browse for the drivers, and inside my PC there is a directory for the boot drive X: (my USB)...


Can you help pls? Thanks!


To make things clearer:


I see a screen like this one, but with no items on the list...


http://en.community.dell.com/resized-image.ashx/__size/550x0/__key/communityserver-discussions-components-files/906/driver1.JPG

windows 10 - CPU Usage jumps down from 100% to normal levels after opening task manager



I've been searching for an answer to this and have tried all the options described in similar questions.



My CPU usage sits at 100% and then jumps down to regular levels as soon as I open Task Manager.



I've scanned for malware using Malwarebytes, AdwCleaner, ESET Online Scanner, and Windows Defender, and they have all come back clean.



I also used process explorer to find out what process was causing this but couldn't find anything out of the ordinary.



Answer



Task Manager has a lot to do when it loads; it has become quite complicated in recent years.



The Task Manager has to scan through hundreds of processes:




  • For each process


    • Get process name


    • Get executable name

    • PID

    • Memory usage - Working set, private, etc

    • various other information


  • Get the current statistics for each process (Process tab)


    • Memory Use

    • Disk use


    • Network use

    • Group processes by executable and get memory totals, overall and per process


  • Get the historical statistics for each process (App History tab)


    • Memory Use

    • Disk use

    • Network use



  • Get current system statistics (Performance)


    • CPU usage

    • CPU information

    • Disk usage and utilisation

    • Ethernet/Wifi addresses and utilisation

    • Graphics cards details (memory, type, shared memory) and utilisation


  • Startup tasks from various locations


  • Logged in users and processes

  • Services details


    • Name

    • State - running, stopped, etc

    • PID

    • User, etc





And then it has to build and populate the UI with all that information. It could be loading up various tabs at the same time as grabbing information from the system.



In order to do it quickly it could well be using multiple threads to get all the information from the system as quickly as possible. It does have a substantial amount of information to grab and (for me) still loads in under a second or two. Granted some information is duplicated across the Process and Details tabs, but there's a lot of rearrangement and processing to group and display and collate information for the Process tab.



I don't consider this a sign of malware.


Root Login Message

Does anybody have an idea how to do the following on Linux?



$ ssh root@
Please login as the user "ec2-user" rather than the user "root".



Not only disable the root login, but show a message when someone logs in as the root user and then terminate the SSH connection?




Thank you.
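For reference, the quoted EC2 behaviour is implemented with a forced command in root's authorized_keys file, which runs instead of a shell and then closes the connection. A sketch (the key material after the command= option is elided):

```
# /root/.ssh/authorized_keys
command="echo 'Please login as the user \"ec2-user\" rather than the user \"root\".';echo;sleep 10" ssh-rsa AAAA... key-comment
```

An alternative is a `Match User root` block with `ForceCommand` in /etc/ssh/sshd_config, which applies the same treatment to every key root might have.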

Windows 10 MBR boot - bootrec /rebuildbcd says "requested system device cannot be found"

During the cleanup after building my PC I managed to break my Windows installation and I can't figure out how to fix it without starting over.


I have a Samsung 960 M.2 SSD in an Asus Strix X370-F and 2 SATA HDDs. I installed Windows 10 Pro on the SSD from an old DVD via a USB DVD drive. I accidentally booted the DVD in legacy mode instead of UEFI for the install, and it formatted the SSD as MBR. For some reason it also formatted one of the HDDs as MBR and made a roughly 500 MB system partition on it.


After setting up everything, I nuked the HDD and reformatted it as GPT with a single partition, deleting the system partition. So now I have an MBR SSD with the OS and a recovery partition, and 2 GPT HDDs with some user files but nothing related to the system.


The PC comes up with a blank prompt. I tried running the boot recovery from the DVD, booting it both as UEFI and legacy, but it did nothing. I tried bootrec /fixmbr and bootrec /fixboot; they completed successfully, but since then I get a missing bootmgr error at boot. I tried bootrec /rebuildbcd; it finds the Windows install but then says "requested system device cannot be found". I tried the suggestion to export and delete my BCD first, but that failed too; I don't even have a Boot folder.


All the guides I find for next steps suggest creating an EFI partition, but I can't do that on an MBR drive. What can I do instead?


Thanks a lot!
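For an MBR/BIOS install, the usual alternative to an EFI partition is to put the boot files directly onto the Windows partition with bcdboot. A sketch from the recovery environment's command prompt, assuming Windows is on C: as the first partition of disk 0 (adjust to your own layout; this is an outline, not a guaranteed fix):

```
rem Mark the Windows partition active
diskpart
  select disk 0
  select partition 1
  active
  exit

rem Recreate bootmgr and the BCD store on C: for BIOS (MBR) boot
bcdboot C:\Windows /s C: /f BIOS

rem Ensure standard MBR boot code
bootrec /fixmbr
```

The /f BIOS switch requires the bcdboot from Windows 8 or later media, which the Windows 10 DVD provides.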

smart - Hard Drive Failure - Next Steps and Process


I have a 2TB Barracuda that has some bad sectors (detected via CrystalDiskInfo). When I turn on my computer, I cannot see my D:, E:, F:... drives, which are all mapped to the failed disk. That is, the partitions are missing when I go to 'This PC':
A screenshot of Windows Explorer showing available drives C: and K:


Here is the latest CrystalDiskInfo report, from last year when I first noticed the bad sectors. Since then, the bad sector count has not increased.
Screenshot from CrystalDiskInfo. Disk's status is indicated as


Unfortunately I have a lot of software and data on this failed drive. Reinstalling the software will be a pain, and I didn't back up this month's data. I am wondering what alternatives I have for recovering my hard drive.


My first thought is that I should somehow clone the drive. If I do so, will the software clone the drive exactly (i.e., even clone the partitions)? Would I need another healthy hard drive of exactly the same size (2TB)? What happens when the cloning software encounters an unhealthy sector?


I also noticed that Windows 10 is taking an extremely long time to restart/shut down. I have Windows 10 installed on an SSD and it normally takes < 15 seconds to restart/shut down. Now it is taking several minutes. Is this because Windows is trying to resolve something on the missing/failed drive?


Update - I have unplugged my failed drive and Windows has been loading within 15 seconds (my OS is installed on my SSD).


Answer



HD Clone was able to clone all but my failed partition to another drive. Note that the trial version supports up to 2TB disk clones.


hard drive - Windows vista reinstall on a new HDD


I have a Dell OptiPlex 320 with Vista Business installed. Unfortunately the HDD is packing up, so I need to get a new one to put in it. I have created a system image backup, so I should be able to get back to the current state once Vista is installed on the new HDD. The issue is that Dell put the 'recovery discs' on the HDD that is currently failing. Does anyone have suggestions for how I could install Vista Business on the new HDD using this partition? I doubt Dell would give me any free support. Thanks =)


Answer



If you have a system image backup, restore it to the new drive then restore from the second partition of the new drive?


Alternatively, install the new drive alongside the old one and try a restore onto the new drive (preferably before the disk fails!).


hard drive - Can I incrementally transfer all files in a disk while at the same time changing partition schemes and sizes?

Currently I have an Ext4, 1TB hard drive in a Linux system where personal files are saved. The system files, including home folder, are in another (SSD) drive.


I want to format the SSD to install Windows, but then the Ext4 drive would be read-only at best (of course, I now regret having chosen Ext4 back then...)


Then I got the following idea, but I don't know how dangerous that may be:



  1. Make sure to free a lot of space and leave the 1TB disk with at least 40% free space;

  2. Use GParted to shrink the Ext4 partition;

  3. Use GParted to create a NTFS partition occupying all free space;

  4. Mount both partitions and move some folders from Ext4 to NTFS

  5. Unmount both, shrink Ext4 further, and expand NTFS to take up the newly freed space;

  6. Repeat steps 4 and 5 until every file has been transferred;

  7. Delete (now empty) Ext4 partition;

  8. Expand NTFS to occupy all available space.


Specifically, I'd like to know about risk of losing some files, or worse, losing all content from a partition or from the whole disk.


Besides that, I'd like to know how I could perform this operation in a safer way, if you think that is advisable. (I don't have spare disks around. I have a paid Dropbox account, but still I'd like to avoid the weeks I would need to upload everything.)

windows 7 - How to create an admin account from non-admin privileges

I want to create an Administrator account from a non-admin account (such as guest) in Windows 7.


Is it possible?

Sunday, November 29, 2015

domain name system - Is it possible to have a secondary managed DNS provider to quickly delegate to when DDOS attack on our *primary* external DNS provider happens?

So our DNS provider, every so often, experiences DDOS attacks on their systems that causes our front-facing web sites to go down.



What are some options in terms of reducing dependency on a SINGLE external managed DNS provider? My first thought was using lower expire TTL and other SOA TTLs, but it feels like these affect secondary DNS server behavior more than anything else.




I.e., if you experience a DNS outage (due to a DDoS, in this example) that lasts more than, say, 1 hour, delegate everything to a secondary provider.



What do people do out there when it comes to their external DNS and using another managed DNS provider as backup?



Note to our friendly moderators: this question is much more specific than the generic "mitigate DDoS attack" questions out there.



EDIT: 2016-05-18 (A few days later): So, first off thank you AndrewB for your excellent answer. I have some more information to add here:



So we reached out to another DNS service provider and had a chat with them. After thinking it over and doing a bit more research, it's actually a LOT more complicated than I thought to go with two DNS providers. This is not a new answer; it's more meat/info for the question! Here is my understanding:




-- A lot of these DNS providers offer proprietary features like 'intelligent DNS': for example, DNS load balancing with keepalives, or logic chains that configure how responses are handed back (based on geolocation, various record weights, etc.). So the first challenge is to keep the two managed providers in sync, and that syncing has to be automated by the customer against both providers' APIs. Not rocket science, but an ongoing operational cost that can be painful (given changes on both sides in terms of features and APIs).



-- But here is an addition to my question. Let's say someone did use two managed providers as per AndrewB's response. Am I correct that there is no 'primary' and 'secondary' DNS here as per the spec? I.e., you register your four DNS server IPs with your domain registrar; two of them belong to one of your DNS providers and two to the other. So you would essentially just be showing the world your four NS records, all of which are 'primary'. So, is the answer to my question "No"?

hard drive - Bootable USB stick with Truecrypt

I want to boot the TrueCrypt rescue disk from a USB flash drive. I have Windows 7 64-bit, so I cannot use Grub4Dos (it only works on 32-bit systems), and 99% of the documentation on how to create this rescue USB stick involves Grub4Dos.



So I tried a program called FlashBoot but couldn't figure it out. I performed some operation on the USB stick with FlashBoot, to make it bootable from a DVD ISO or something. However, as I got lost and wasn't sure what I was doing, I abandoned FlashBoot and just formatted the USB stick.



Then I found the following instructions for using syslinux for this purpose:




mephisto wrote:

Ok, this is how it worked for me:




  1. Format the USB-Stick with FAT

  2. Download the newest SYSLINUX package.

  3. Extract the syslinux archive (in my case the newest one was syslinux-3.70.zip)

  4. The only 2 files you actually need from the archive are syslinux.exe from the win32 directory and memdisk from the memdisk directory.

  5. Assuming your USB-Stick has the drive letter X, execute the following command: syslinux X:

  6. After that there should be a (hidden) file on your USB-Stick called ldlinux.sys

  7. Download BBIE or (any other image extractor you know works).


  8. Assuming your USB-Stick has the drive letter X, execute the following command: bbie TruecryptRescueDisk.iso. This process should have created a file called image1.bin

  9. Rename image1.bin to something like tc.img

  10. Copy (the previously extracted file) memdisk and tc.img to your USB-Stick

  11. Create a file on the USB-Stick called syslinux.cfg with the following content: default memdisk initrd=tc.img




I followed those instructions to the letter. Before doing so I formatted the USB stick again, this time as FAT32 with a 16k cluster size. Then I tried to restart the system. This is what I saw (large version):



enter image description here




Now why on Earth would it say "FlashBoot loader" there? It seems very bizarre. I did a full format of the USB stick, which took about 5 minutes. Not only that, but after seeing this I formatted it about 5 more times and redid the above instructions, and I still see this screen.



Does anyone have an idea where I am going wrong here?



I'm asking this question here and not on the TrueCrypt forums because they really frown upon people asking it there, as it has been answered many times. They simply will not help. However, my case is not covered by the numerous tutorials on the internet.

boot - Installing secondary hard drive

I wanted to install a secondary hard drive, so I mounted it in the computer and connected the power and data cables. I powered up the computer and chose the master hard drive in the boot menu (the secondary drive appeared there as well). The OS (Windows 10 Pro) started, but it didn't show the secondary hard drive.


How can I install the secondary hard drive correctly?


Edit: The hard drive doesn't appear in the Disk Management tool, but it does appear in the BIOS menu.


This is the


I am very sorry about my computer's language. You have to believe me that this is the Disk Management tool.....

iis - Anticipating/preventing patch or upgrade problems on database/web servers

I maintain 2 environments in my current project, each with 2 servers (1 web server and 1 SQL Server): production and test. Last month we installed the latest Microsoft patches/security updates, and Report Manager from Reporting Services stopped working on the test database server. After days of troubleshooting I found that the ASP.NET 2.0 Web Service Extension in IIS had been completely removed and everything was set to Prohibited.



I can't say for sure, but I think this was caused by the patching I'd done on that server a few days prior. What can one do to anticipate or prevent these types of impacts on SQL Server, Reporting Services, IIS, and ASP.NET when a patch is installed?

Keyboard keys not working properly

Some of the keys on my laptop keyboard are not working properly. These are the issues:



  1. When pressing o, the keyboard prints 9o. 9o is also printed when pressing 9.


  2. Also, pressing 0 (zero) automatically makes the screen go full screen, which is the function of F11.


  3. Pressing 2 performs the function of Ctrl+F, which opens the find function on the screen.



I have tried the suggestions from Some keyboard keys not working properly, but they do not work for me.

virtualization - Should I split up an application in multiple, linked, Docker containers or combine them into one?



Background



I'm currently working on building an application which I want to deploy to Docker containers.



The containers will run on a server of mine. I want the ability to run other applications on the same server without bloating the number of Docker images being run.




The different parts / containers as of now are:




  • Nginx (Reverse proxy, static resources)

  • Node (App frontend)

  • Node (App backend / API)

  • Mongo (Database)




Thoughts



The general idea I have is that each distinct part should run as its own container. My concern is that if I run another application on the same machine, I will end up with an unmanageable number of linked images.



This could be solved by making one image per application, so that the aforementioned services would all be part of one image. Does this conflict with the overall security model or purpose of Docker in the first place?



Clarification



Does having multiple services in one Docker image conflict with the purpose of Docker?




Will the overall security benefits of containers be removed when running the services from one image?


Answer



Docker themselves make this clear: You're expected to run a single process per container.



But their tools for dealing with linked containers leave much to be desired. They do offer docker-compose (formerly known as fig), but my developers report that it is finicky and occasionally loses track of linked containers. It also doesn't scale well and is really only suitable for very small projects.
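For a sense of scale, the four containers from the question would look something like this in a docker-compose file (image names, ports, and entry points here are assumptions, not a tested configuration):

```yaml
nginx:
  image: nginx
  ports:
    - "80:80"
  links:
    - frontend
    - backend
frontend:
  image: node
  command: node /app/frontend.js   # hypothetical entry point
backend:
  image: node
  command: node /app/api.js        # hypothetical entry point
  links:
    - mongo
mongo:
  image: mongo
```

Four services per application is perfectly manageable; the pain starts when many such applications share one host and their link graphs have to be maintained by hand.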



Right now I think the best available solution is Kubernetes, a Google project. Kubernetes is also the basis of the latest version of OpenShift Origin, a PaaS platform, as well as Google Container Engine, and probably other things by now. If you're using Kubernetes, you'll be able to deploy to such platforms easily.


How can I disable the IMAPI.exe service in Windows Vista and Windows 7?


I want to disable the IMAPI CD Burner service because I keep getting Power Calibration Error messages.


In Windows XP, it's through services.msc, but I can't seem to find it anywhere in Windows 7 or Windows Vista.


Does anyone know how to disable or kill that service?


Answer



That service doesn't exist in Windows 7 or Vista. The only thing you can do is disable burning from the shell through Group Policy. As for your error, there are a few things you can try:



  1. See if there are any firmware upgrades for your burner

  2. Try to clean the burner.

  3. One person on the following post recommended downloading the following file, which he says fixed the problems that led to the error.
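The Group Policy route mentioned above corresponds to Explorer's NoCDBurning policy. As a .reg sketch (note this only removes the shell's burning UI; it does not disable the underlying IMAPI service):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoCDBurning"=dword:00000001
```

Log off and back on (or restart Explorer) for the policy to take effect.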


recycling - is there any data that supports SSDs as more environmentally friendly?


I'm trying to make a case for replacing our laptop HDDs with SSDs in our IT department. Besides saving a lot of developer time, is there any data out there to support my argument that they are more environmentally friendly? Especially with regard to manufacturing and power consumption. Can you think of anything I'm missing?


Update 1: I am routinely slowed down by my HDD. I'm on a laptop, so my swap file sits on a 5400 rpm HDD. I typically sit at 80% memory used when developing, so I hit the swap a lot. I have the option of going to a 64-bit OS (minimal gain, really, considering I only have 1 memory slot free) or upgrading to an SSD. So I'm already losing time constantly. Assuming I will replace the drives, is there an environmental bonus over the long term to doing so?


Update 2: What about power over a year? How much power would a laptop consume being used 40 hours a week while hitting the swap very frequently, on an HDD vs. an SSD?


Answer



No; they don't save as much power as you might expect. See Tom's Hardware's analysis of SSD vs. HDD power, specifically this page, at least for laptops under Windows.


Update 12/2011: "A disk-based drive will always consume more power absolutely. At the system level, an SSD increases power consumption because CPU and memory utilization rises in response to increased I/O activity (they're not sitting there, waiting on a hard drive to send data)".


macos - Always opening external links in incognito window Chrome on Mac OS

Until version 68, when you clicked an external link, Google Chrome would open it in whichever window was currently active, incognito or not. Now, when you click an external link while an incognito window is the active one, Chrome opens it in the current non-incognito window instead.


This is not a bug; this is the way Chrome is supposed to work, as stated here: https://productforums.google.com/forum/#!topic/chrome/yefXCMUfjz8


Is there a way to change how external links open in Google Chrome on Mac OS? I would like external links to always open in an incognito window.


In Windows 10 you can do this by editing the following registry value in regedit.exe:


HKEY_LOCAL_MACHINE\SOFTWARE\Classes\ChromeHTML\shell\open\command


From
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" -- "%1"


To
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" -incognito -- "%1"


Can something similar be done on Mac OS?

Saturday, November 28, 2015

Laptop will boot to some usb flash drives but not others


Laptop: HP Compaq 6710b


I can boot from usb just fine with the following usb flash drives:



  • Cruzer micro 4GB

  • HP 4GB


The flash drive that will not boot:



  • Flash Voyager 8GB


To knock out variables I did the following:



  • Using Hard Disk Low Level Format Tool I performed a low-level format

  • Full erase with Flash Memory Toolkit

  • In Windows 7 I formatted the drive to FAT32

  • Used USB-Boot-Tester to write to the drive

  • Also used UNetbootin with various distros to see if that would make a difference


My guesses on what could be preventing the drive from booting:



  • The laptop does not support booting to USB flash drives larger than 4GB

  • The drive is defective in some way


Answer



Have you tried HPUSBFW.EXE (the HP USB Disk Storage Format Tool)? Try FAT16 (which I think it calls FAT), FAT32, and NTFS. I see it available here: http://codinguniverse.com/files/HPUSBFW.EXE (a slightly newer version is 2MB as opposed to 440KB).


Or try the much newer versions of the HP tool, which they call the HP Drive Key Boot Utility (about 44MB!).


FAT16 is limited to 2GB, so see how that goes. Try FAT32, and try NTFS. I haven't tried it in a long while.


partitioning - How to remove setup partition ( volume ) after OS install

I was installing Windows on my new SSD from a USB drive, but the common error "We couldn't create a new partition..." appeared, so I used cmd and diskpart to create a new 5GB volume on my SSD, copied the installation files there, and ran the installation again from that partition.




It worked; I installed the OS and everything works fine... except that the partition with the OS installation files is still there. I want to remove it, but it seems that it's not that simple.



It is marked as "active" and "system" in the Windows disk management tool (picture below).



disks



I tried marking my C: drive, where the OS is actually installed, as active, but then Windows can't boot anymore. Also, when my PC boots it shows an annoying screen where I can choose to enter Windows 10 or go to Windows Setup again (which obviously boots the installer from that volume again).




How the hell do I get rid of this thing?

windows 8.1 - How can I move my Documents folder back outside my Downloads folder?


I'm not sure what happened, but somehow the Documents folder got moved inside my Downloads folder. I'm trying to move it back, but I keep getting "The folder or file is open in another program." It does the same thing in Safe Mode.


Answer



This is a bit strange since Downloads should already be inside Documents. So if you click into Documents, do you find a new Downloads folder inside it, and inside it the Documents folder again? If this is so, you have gotten yourself a looped junction problem. Try running the junction utility to list and remove the unwanted junction.


Otherwise you must have moved Downloads somewhere other than where it should be, and Documents too. To untangle the problem I'd try first moving Documents, then Downloads, into a totally different branch of the directory tree, then moving Documents back into C:\Users\YourUser, then Downloads into Documents. And then read on for a possibly necessary REGEDIT fix.


But before you do anything, this is a good moment to ask yourself, "Do I value my data, and have a full and adequate backup?" ("No" is an allowed answer, by the way. It's your data. You can do with it as you see fit). If on the other hand you do value your data, and don't have a backup, make one now.


Then you can run a disk check, just to rule out some easy, automatically fixable problem. The backup will come in handy in the "theoretically impossible" case where the easy fix determined by Windows actually consists of totally blasting the whole folder and recreating it shiny, new, and empty.


If there was no junction and the disk checks OK, you seem to have a tangled folder structure problem. Uncommon but solvable.


So now, for example, you create an empty folder in C:\ called Movable and move Documents in there. Then you move Downloads in there too, alongside it (so Movable contains two folders, Documents and Downloads). Then you can move them back to where they should be.


A less easy but surer way to do this is to download a Linux Live CD with NTFS support, such as SystemRescue. You want the graphical interface. Once you boot into it, you just drag and drop Documents back, then reboot into Windows. It will probably ask to run another disk check. Let it do so.


Finally, the difficult way is to change the My Documents pointer in the Registry. This also applies if the pointer already got changed, and/or if, after moving the folder with any of the other methods, Documents no longer behaves as it should. Fire up Registry Editor and check out:


HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders

In both keys you want to verify the contents of "Personal". It should be something like


C:\Users\YourUser\Documents

and it should point to a directory that exists.


In a pinch, you can create a new C:\Users\YourUser\Documents, move the hidden desktop.ini and other content from the "bad" Documents folder to the new one, and reset its permissions. Then you can try deleting the old folder once it's empty. Remember to turn on the "Show hidden and system files" option.


Since you have a backup, if the Documents folder fits in the Recycle Bin, you can try moving it to the Recycle Bin (this often bypasses several "open resource" problems), then drag and drop it from the Recycle Bin into the Users\ folder. This might damage the registry entry above, which would then need to be reset.


A useful page on the folder location topic is http://www.askvg.com/tips-tweak-and-customize-windows-8-1-explorer-this-pc/.


security - Looking for a easy to use Live CD to wipe a hard drive






A remote family member has a computer that is going in for service, but she wants to make sure the hard drive is securely wiped.


I am looking for a Live CD that has a simple menu option to do a full and secure system wipe.


Answer



DBAN (Darik's Boot and Nuke) is without a doubt the best: http://www.dban.org/ Type autonuke at the boot prompt to wipe every detected drive automatically.


sendmail - Configure Exim to send to internal & external addresses

I inherited a web site that's apparently using Exim as its MTA. Let's say that we can access the site at:




http://example.com/



The users who work at Example Corp. noticed that they did not receive email when the PHP web application attempted to send mail to addresses like:



jane.doe@example.com
support@example.com
etc.



The Question




The SPF records seem to work best when the server sends mail from a hostname of example.com. However, we cannot email anyone at example.com when we have that as the hostname.



I changed the hostname on the server, but now it doesn't work with the existing SPF records (details below).



I think I need advice on configuring either the hostname or Exim.



Background



Email sent to external addresses at GMail, Yahoo, Mailinator, etc. went through just fine. I use Mailinator for testing emails because you can email any address without having to create a full account. I ran tests using syntax like this.




This test would succeed.



echo "This is message body." | mail -s "SMTP Test 1" -r "from_address@example.com" to_address@mailinator.com


This test would fail.



echo "This is message body." | mail -s "SMTP Test 1" -r "from_address@example.com" to_address@example.com



Some simple routing tests can be done using Exim's address testing option (-bt). This test would succeed.



exim -bt to_address@mailinator.com
to_address@mailinator.com
router = dnslookup, transport = remote_smtp
host mail.mailinator.com [2600:3c03::f03c:91ff:fe50:caa7] MX=10
host mail.mailinator.com [23.239.11.30] MX=10


This test would fail.




exim -bt support@example.com
support@example.com is undeliverable


This post was helpful and pointed me in the direction of the hostname setting.
http://jblevins.org/log/hostname



I realized that the public DNS had an entry called "store.example.com" that pointed to the correct IP address. I entered that as the hostname.




sudo hostname store.example.com



Ensure store.example.com is inside the network file. This should ensure the hostname sticks after reboot.



sudo nano /etc/sysconfig/network

sudo service exim restart


The problem is that now Google complains about the lack of an SPF record.




Received-Spf: none (google.com: user@store.example.com does not designate permitted sender hosts) client-ip=xxx.xxx.xxx.xxx;



I realize I could create an SPF record, but it would be simpler to use the existing one for example.com. When that was the hostname, the header in Gmail said:



Received-Spf: pass (google.com: domain of user@example.com designates xxx.xxx.xxx.xxx as permitted sender) client-ip=xxx.xxx.xxx.xxx;
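For completeness, if a separate record were created for the new hostname, it would be a single TXT entry; something like the following sketch, with a placeholder IP standing in for the server's real address:

```
store.example.com.  IN  TXT  "v=spf1 ip4:192.0.2.10 -all"
```

That would let mail sent as user@store.example.com pass SPF without touching the existing example.com record.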



Server Environment



CentOS release 6.6




ls /etc/alternatives/ -l | grep mta

lrwxrwxrwx. 1 root root 23 Feb 23 09:28 mta -> /usr/sbin/sendmail.exim
lrwxrwxrwx. 1 root root 19 Feb 23 09:28 mta-mailq -> /usr/bin/mailq.exim
lrwxrwxrwx. 1 root root 29 Feb 23 09:28 mta-mailqman -> /usr/share/man/man8/exim.8.gz
lrwxrwxrwx. 1 root root 24 Feb 23 09:28 mta-newaliases -> /usr/bin/newaliases.exim
lrwxrwxrwx. 1 root root 15 Feb 23 09:28 mta-pam -> /etc/pam.d/exim
lrwxrwxrwx. 1 root root 19 Feb 23 09:28 mta-rmail -> /usr/bin/rmail.exim
lrwxrwxrwx. 1 root root 19 Feb 23 09:28 mta-rsmtp -> /usr/bin/rsmtp.exim

lrwxrwxrwx. 1 root root 18 Feb 23 09:28 mta-runq -> /usr/bin/runq.exim
lrwxrwxrwx. 1 root root 22 Feb 23 09:28 mta-sendmail -> /usr/lib/sendmail.exim

exim -bV
Exim version 4.72 #1 built 10-Oct-2014 09:23:33
Copyright (c) University of Cambridge, 1995 - 2007
Berkeley DB: Berkeley DB 4.7.25: (September 9, 2013)
Support for: crypteq iconv() IPv6 PAM Perl Expand_dlfunc TCPwrappers OpenSSL Content_Scanning DKIM Old_Demime
Lookups (built-in): lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz dnsdb dsearch ldap ldapdn ldapm nis nis0 nisplus passwd sqlite
Authenticators: cram_md5 cyrus_sasl dovecot plaintext spa

Routers: accept dnslookup ipliteral manualroute queryprogram redirect
Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp
Fixed never_users: 0
Size of off_t: 8
OpenSSL compile-time version: OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL runtime version: OpenSSL 1.0.1e-fips 11 Feb 2013
Configuration file is /etc/exim/exim.conf

linux - How to break a line in vim with auto-wrap paragraph turning on?


I am trying to use vim's auto-wrap functionality to automatically wrap my paragraphs into lines no longer than 80 characters, in real time as I type. This can be done with set textwidth=80 and set fo+=a. The a flag of vim's formatoptions (fo) tells vim to re-wrap the entire paragraph while typing.


However, there is a very annoying side-effect, that I can no longer break a line by simply pressing enter.



This is a sample sentence.



Say for the above sentence, if I want to make it into:



This is


a sample sentence.



Usually I can just move the cursor to "a" and enter insert mode and then press enter. But after set fo+=a, nothing will happen when I press enter in the insert mode at "a". One thing I do notice is that if there is no space between "is" and "a", pressing enter will insert a space. But nothing else will happen after that.


So what am I missing here? How do I stop this annoying behavior?


Answer



After some exploration, I find a workaround that can solve the problem to some extent, though not perfect.


The basic idea is to disable auto-wrapping temporarily while entering the line break, and to resume auto-wrapping right after. There are multiple ways of doing that, and the best one as far as I know is using paste mode, since you don't have to exit insert mode to enter paste mode. So just bind the following commands to any insert-mode key you like; right now I am using an inoremap mapping for this.


The reason why I think this is not optimal is that for some reason I cannot bind <CR> itself in this way, but have to use another key.


If <CR> (or <Enter>) could be configured in this way, the problem would be 100% solved.
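As a concrete sketch of the toggle idea, here is a variant that suspends the 'a' formatoptions flag instead of using paste mode (this is my guess at a workable mapping, not the exact one from the answer; <F2> is an arbitrary key choice, and I have not verified it on every vim version):

```vim
" Press <F2> in insert mode to break the line: temporarily drop the
" 'a' flag (which suspends paragraph auto-formatting), insert the
" newline, then restore the flag -- all without leaving insert mode.
inoremap <F2> <C-o>:setlocal fo-=a<CR><CR><C-o>:setlocal fo+=a<CR>
```

Note that once fo+=a is restored, a later edit in the same paragraph can still re-wrap the break away; inserting a blank line (starting a new paragraph) avoids that.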


Not able to access a folder with read and write permission for group



I can't access a directory even though the group my current user belongs to has read and write (rw-) permission on it; the others column also shows rw-.



[ec2-user@host]$ ls -la
drwxrw-rw- 8 apache apache 4096 Nov 4 14:16 .git


[ec2-user@host]$ cd .git
-bash: cd: .git: Permission denied

[ec2-user@host]$ cat /etc/group |grep ec2-user
wheel:x:10:ec2-user
ec2-user:x:500:
apache:x:48:ec2-user



Why can't I descend into this directory?


Answer



In the Unix permissions model, in order to enter (descend into) a directory, you need (somewhat unintuitively) execute permission on the directory.



In order to list the files in a directory, you need read permission on the directory. (This can be conceptualized by considering a directory as a file that holds a list of other files and their locations. In fact, that's pretty much what a directory is at the conceptual level.)



It doesn't really make any sense to have read permission but not execute permission on a directory, but having execute permission without read permission has a valid use case (where you want to be able to access files knowing their names, but not to be able to enumerate the files).



Your directory is owned by user apache, group apache; judging by your prompt, you are ec2-user; those two are not the same. Hence, either group or other permissions apply. Based on your /etc/group snippet, ec2-user belongs to the apache group, so group permissions apply. Group permissions are read and write, but not execute, and hence you cannot descend into the directory.




First, add some group execute permissions. Second, you might want to remove some write permissions. World writable is almost never what you want.
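The two fixes can be applied in one chmod invocation. A sketch using a throwaway directory with the same mode as the .git directory above (drwxrw-rw- = 766; the paths here are hypothetical, and on the real box you would need sudo since apache owns the directory):

```shell
# Reproduce the problematic mode on a scratch directory, then fix it.
d="$(mktemp -d)/repo.git"
mkdir -p "$d"
chmod 766 "$d"       # drwxrw-rw-, as in the question
chmod g+x,o-w "$d"   # add group execute, drop world write
stat -c '%a' "$d"    # prints 774 (drwxrwxr--)
```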



Note that in order to change the permissions on the directory, you need to own it (or be root); write permission on its parent directory is not enough, since that only lets you rename or remove the entry. This works the same for files and directories alike.


Friday, November 27, 2015

macos - Importing photos on Mac OS X

I recently switched from a PC to a Mac. In Windows I used to download photos from my camera into folders I created in the pictures directory. When I started using iPhoto 09, I imported those folders into iPhoto and it seems like I created duplicates, wasting valuable disk space. Whenever I connect my camera, iPhoto pops up automatically and offers to import the images. These images are then stored in folders which are not readily visible in the iPhoto library. I would like to be able to keep the pictures in general folders that are unrelated to any software and to be able to view them, tag them and manipulate them with iPhoto.


How do I do it?


Thanks
Zvi

windows server 2012 r2 - DISM /add-package syntax

I'm having difficulty with the syntax for dism /add-package on Windows Server, and what it requires as arguments in some cases. I can't find good (unambiguous) reference material for this online, I've tried.


The situation is that I'm trying to service the /online system; I managed to /remove-package an entire package, so I tried to re-add it using /add-package, which should be simple, from install.wim on the DVD. But the needed arguments for /add-package against a live system are not clearly explained on Microsoft's websites, and surprisingly I couldn't get it right. I also tried mounting the install.wim as a folder and running /add-package from that, but again could not find the syntax to make it work. Help would really be appreciated.



  1. DISM /get-feature needs a .WIM or a folder as a source where the feature can be found. What would count as a valid location, and especially, must a .WIM be mounted or is pointing to the install.wim (or install.wim:index) enough? If a .WIM + index can be directly referenced, what is the syntax?


  2. When adding a package using /add-package, is the package path/file itself a sufficient identifier, or must one provide a package name or other identifier as well? If so, what identifiers are valid and how are they found?


  3. If the package files are within a wim (eg the install DVD's install.wim) does one need to specify a path within that .WIM, or is specifying the .WIM (or .WIM+index) alone, enough?


  4. dism /image:X:\MOUNTEDWIM /get-packages on a mounted windows install.wim, only shows the few packages that seem to be relevant to the install; many packages that I expected to be in the source weren't listed. But trying to be more specific, using dism /image:X:\MOUNTEDWIM\Windows\servicing\Packages /get-packages, fails completely. What's wrong?


  5. What is the syntax to add an entire removed package to the live /online system, from say, install.wim:2 (from DVD or mounted folder, or either), if the package was accidentally /removed? What identifier or path, and other arguments, would I use?



Failed syntax I tried (using source DVD -> install.wim file):



  • dism /online /add-package /packagepath:"Microsoft-Windows-PACKAGE~amd64~~6.3.9600.16384" /limitaccess /source:"X:\sources\install.wim"

  • dism /online /add-package /packagename:NAME /packagepath:"X:\sources\install.wim\"

  • dism /online /enable-feature /featurename:NAME /All /Source:"X:\sources\install.wim" /LimitAccess

  • dism /online /get-features /Source:"X:\sources\install.wim" /LimitAccess

  • dism /online /add-package /packagepath:"Microsoft-Windows-PACKAGE~amd64~~6.3.9600.16384" /source:install.wim

  • dism /online /add-package /packagepath:install.wim

  • dism /online /add-package /packagepath:install.wim /ignorecheck


Failed syntax (same install.wim file, mounted as a folder):



  • dism /online /add-package /packagename:NAME /all /packagepath:X:\MOUNTEDWIM\Windows

  • dism /online /add-package /packagename:NAME /packagepath:X:\MOUNTEDWIM\Windows

  • dism /get-packages /image:X:\MOUNTEDWIM\Windows

  • dism /get-packages /packagepath:X:\MOUNTEDWIM\Windows

  • dism /image:X:\MOUNTEDWIM /get-packages


(Not one was correct!)
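For what it's worth, DISM's documented way of pointing at a WIM without mounting it uses a wim: prefix plus an image index on the /Source switch. A sketch of the general pattern (NetFx3 is just an illustrative feature name, and this is shown as the documented /Source syntax, not a confirmed fix for the removed-package problem above):

```
dism /online /enable-feature /featurename:NetFx3 /all /source:wim:X:\sources\install.wim:1 /limitaccess
```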

Nginx rewrite rule (subdirectory to subdomain)



I would like to redirect admin subdirectory to a subdomain. I tried to create this rule for Nginx however it's not working:



location ^~ /admin/ {
rewrite ^/admin(.*) http://admin.example.com$uri permanent;
}



Thank you
Regards


Answer



Something like this should do the job. ($uri in your rule still contains the leading /admin, which is why the redirect kept pointing at admin.example.com/admin/…; capturing the remainder and appending $1 avoids that.)



location ^~ /admin/ {
rewrite ^/admin/(.*) http://admin.example.com/$1 permanent;
}


hosting - Can an organisation take over a domain?




Basically I am working with a church that has a new pastor. The church's website hosting is paid from the church bank account, and the domain name is up for renewal. The company is telling us that the previous pastor (the registrant) must be the one who renews the domain. As the old pastor is in another country, we can't get hold of him.



I looked on whois and the "Trading as" tag was set to the name of the church. So although the old pastor is the owner of the domain, is there no way that the church can renew the domain? We can prove that the church pastor has changed within the last two years. The hosting company have no problem billing them for the hosting, it's just the domain name we can't do anything with. We are not trying to transfer the domain either.



I will be looking forward to hearing back from you all.



UPDATE



We don't have access to the cPanel, and we don't have a username or password for the website; the old pastor has those.




Peter


Answer



Domain ownership (and hence, who has the authority to effect changes to its registration) is maintained by ICANN (or RIPE in Europe).



A whois query will show you the administrative contact for the domain; this can be an individual, or an organisation, or a RIR or LIR (these have handles instead of names).



Any of the registered contacts for your domain should be able to contact the registrar and ask them to update the registry details.



If the registry entry does not have multiple contacts listed, you can apply to the registrar to alter the registration by showing them you are authorized to act on behalf of the domain owner.




If one person who is no longer available is the domain owner, I doubt they will allow you to trivially change it.



Collect as much detailed information as you can and make a case that you really need to take over the domain; it is ultimately in the hands of the registrar, but if the domain has expired, you have a much better chance of transferring ownership.



And yes, you DO want to transfer the domain; that's what changing ownership means.


browser - Weird redirects

We are getting weird redirects from black-face.com (which I needed for a school assignment) to some other pages, always flashing through zeroredirct1 (dot) com. On other computers, it happens for other sites too.


Our network runs from a netgear r7000


It is happening on



  • Android 5.1 Nexus 6 on both data and our WiFi network

  • OSx El Capitan on our WiFi network

  • iOS iPad 9.1 on our WiFi network

  • Windows 10 on our WiFi Network


Deleting caches and cookies seems to be only a temporary fix, as the problem comes back within about a day. We tried changing the DNS on our router to Google's public DNS. This may have solved it, at least temporarily, as right now it is going to the correct site, but I am not sure how to get rid of it entirely.


It has not changed our default homepage. I don't think it is rooted in an extension (although it may be) because it is also happening on mobile devices which don't support them. All firmware and OS software are up to date.


Any recommendations would be much appreciated.

windows 7 - Need help diagnosing my machine

I have something that just slows my computer to a crawl sometimes. Not running anything big. Yesterday all I had running (besides background apps) were Firefox & Windows Explorer and could barely even switch screens.



Nothing showing up in the task manager as hogging CPUs.
I have all non-essential services stopped (MySQL & MSSQL) unless I need them.



I made some restore points not long ago, but they disappeared.

This is a development machine with a LOT of apps installed, so I really, really do not want to re-install Windows.



So, what I'm looking for are ideas or tools I can use to help diagnose this problem.
The only clues I have is this started right after I




  • installed Office 2013 (with Office 2010 still installed as well)

  • installed Visual Studio 2012 (also keeping 2010 as a co-install)

  • and installed MSSQL 2012 (upgrade from 2008, no co-install)




Also, the computer runs fine in Safe Mode. I've just run out of ideas of what to check.



Any help / suggestions would much appreciated.



Thanks



P.S. I'm running Win 7 Pro (x64). Office is also 64 bit. Visual Studio & MSSQL are 64 bit if that option was available (not sure). 77GB free space on the hard drive. 4GB RAM installed.

How to survive anonymous DDOS attack?




Every time the Anonymous group targets a website, they are able to take it down, even for large corporations and governments with professional security staff.



I have read (basic theory) about dealing with normal DDoS attacks and about DDoS protection techniques.



But why do these techniques fail in case of Anonymous group attacks?



Are there any success stories about surviving a really well-organized DDoS attack?



Answer



Most mechanisms to identify and mitigate attacks like those run by Anonymous are well known, and most anti-DoS products and services can deal with them with high rates of success.
However, organizations and enterprises sometimes do not have tuned or updated protection policies. Furthermore, I was amazed to discover that many of them have no anti-DoS protection at all, neither by product nor by service.



Anonymous usually use well-known tools, so there is no reason that a local SOC/NOC or a service provider's SOC/NOC would not be able to block their attacks. The question is whether detection and blocking are accurate enough to avoid false positives that block legitimate traffic as well, since the consequence of that is, in effect, a successful DoS/DDoS...



In general there are three paths of dealing with DDoS/DoS attacks:




  1. Having enough resources (bandwidth, servers, etc.) - not a realistic option, as the attack volume can exceed the bandwidth you have, and the cost of having unlimited computation power is huge.


  2. 'Renting' security service provider services - a good solution, depending on the specific provider's capabilities. However, you should note that most MSSPs work with scrubbing centres in out-of-path mode. This means they rely in many cases on traffic-analysis protocols, such as NetFlow, to identify the attacks. While this option works well for DDoS or large volumetric attacks, it cannot identify low-and-slow attacks. You can overcome this limitation if you are ready to call the MSSP yourself once you detect problems with the traffic. Another limitation of the scrubbing-centre approach is that usually only one direction of the traffic is inspected.

  3. Having your own anti-DoS solution, installed inline. Though sometimes more expensive, this option will provide the best security, as scanning attempts, brute-force attempts and many other security threats can be dealt with by an inline device. An inline device is effective as long as the attack's volume doesn't exceed your pipe's bandwidth. Working in inline mode guarantees detection of low-and-slow attacks, and even intrusions, depending on the equipment you want to use.



As you can see, there is no clear answer to the question, as it depends on many parameters; budget is only one of them. The quality of the service or product is a significant aspect as well:

  • Can it generate 'real-time' signatures for accurate mitigation without affecting legitimate traffic, keeping the false-positive ratio down?

  • Does it include behavioural learning and detection modules, or does it use only rate-based thresholds?

  • Does it include authentication options (for HTTP/DNS and other protocols), again to reduce the chance of false positives?

  • Does it include an action escalation mechanism, i.e. a closed feedback loop that can automatically apply more aggressive mitigation actions based on the success of the current mitigation action?

  • What is the mitigation rate the service/product can offer, regardless of the legitimate traffic rates?

  • Does the product include a 24/7 emergency service? (Most MSSPs have one; not all products do.)



Cheers,


linux - Mint 17 and Windows 8.1 Dual Boot: No EFI partition

I am working on the installation for Mint 17 and need to understand what is going on with my Windows 8.1 booting. Here is an image showing all partitions from Windows:


http://i.imgur.com/Xvf5OjE.png


As you can see, the EFI partition is 100% free space, and "Boot" is included in the status message for the C partition.


In these instructions the author suggests changing the "efi" partition to an EFI boot partition using the "Change" option. However, I do not have an "efi" partition under my available partitions in the Mint installer.


If I create root, home, and swap partitions, and click "Install Now" I get the following error:



The partition table format in use on your disks normally requires you to create a separate partition for boot loader code. This partition should be marked for use as a "Reserved BIOS boot area" and should be at least 1MB in size.



UPDATE >> It looks like I do have an EFI system partition, but Mint isn't recognizing it as such, and is not providing the option to "Change" it to an EFI boot partition.


mint@mint ~ $ sudo parted -l
Model: ATA TOSHIBA THNSNS25 (scsi)
Disk /dev/sda: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 1049kB 316MB 315MB ntfs Basic data partition hidden, diag
2 316MB 588MB 273MB fat32 EFI system partition boot
3 588MB 722MB 134MB Microsoft reserved partition msftres
4 722MB 123GB 123GB ntfs Basic data partition msftdata
5 245GB 246GB 472MB ntfs hidden, diag
6 246GB 254GB 8389MB ntfs Basic data partition hidden, diag
7 254GB 256GB 2147MB Basic data partition hidden

How should I proceed?

windows - What should the order of DNS servers be for an AD Domain Controller and Why?




This is a Canonical Question about Active Directory DNS Settings.




Assuming an environment with multiple domain controllers (assume that they all run DNS as well):




  • in what order should the DNS servers be listed in the network adapters for each domain controller?

  • Should 127.0.0.1 be used as the primary DNS server for each domain controller?

  • Does it make any difference, if so what versions are affected and how?


Answer



According to this link and the Windows Server 2008 R2 Best Practices Analyzer, the loopback address should be in the list, but never as the primary DNS server. In certain situations like a topology change, this could break replication and cause a server to be "on an island" as far as replication is concerned.




Say that you have two servers: DC01 (10.1.1.1) and DC02 (10.1.1.2) that are both domain controllers in the same domain and both hold copies of the ADI zones for that domain. They should be configured as follows:



DC01
Primary DNS 10.1.1.2
Secondary DNS 127.0.0.1

DC02
Primary DNS 10.1.1.1
Secondary DNS 127.0.0.1


Change the Windows 8 product key after installation?


I just completed a fresh install of Windows 8 Pro as it was released to MSDN. Installation went without a hitch however I can't find out where to enter the correct product key for the copy I have.


Clicking on the Activate this Computer in Activity Centre displays the following screen, but no option to change the product key:


Windows 8 Activation Screen


Answer



Just follow these steps to add/change product key using Command Prompt and slmgr.vbs:



  1. Launch Command Prompt as an Administrator.


  2. At the command prompt, type in slmgr.vbs -ipk <your product key> and press Enter.


  3. To activate Windows, type in slmgr.vbs -ato and press Enter.
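Put together, the whole session looks like this (the key below is a placeholder, not a real product key):

```
slmgr.vbs -ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr.vbs -ato
```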



All information is from this source.


Product Key for windows 7

Hello, can anyone tell me about the Windows product key?
I have bought a DVD to install Windows 7. Please tell me where the product key is written: there is a barcode on its cover; is that the product key, or is it written somewhere else? Also, please tell me about the discs and how to handle them during installation.

Dell laptop keyboard doesn't work

I'm trying to fix my in-laws laptop, it's a Dell Studio 1745 that's running Windows 7 64 bit.


The problem is that most of the keys on the keyboard do not work. The function keys work and the caps lock and numpad keys work, but no other keys do.


If I hit the F2 key enough times when starting up, I can get to the BIOS, but after that even the function keys stop working.


If I let it go all the way to the Windows login screen, I can see that the caps lock and num lock keys work - little images actually appear on screen - but they don't toggle the state of the key, i.e., caps lock is always off, num lock is always off.


Using the fn+function combo works, so changing the brightness, etc. works fine. I'm stumped.
I've tried disconnecting power and battery and leaving it for an hour or so before starting up but that hasn't helped either.


Also - this might be a red herring - the touchpad is failing as well, the MS Device Manager says that it's failing with status 10, "unable to start device"

exchange - Failure to match name to address list

I have a multi-tenant Exchange environment. I am trying to migrate from 2007 to 2013, and all but one customer is working fine.



When I try to set up Outlook on this one customer's computers (in their office), Outlook says, "The action cannot be completed. The name cannot be matched to a name in the address list." When I click OK, Outlook shows me the user's mailbox server in the Microsoft Exchange server field (the Exchange 2007 server).



I verified that the customer has an Autodiscover SRV record in internal and external DNS, and that the test user can log into webmail internally. I also verified that I can telnet to the 2013 CAS's external IP address over 443. Finally, I verified that webmail.hostedDomain.com resolves to the correct IP (and responds to ping).




When I try to setup the same account on a laptop outside of the customer's network (specifically, the same domain as the Exchange servers), Autodiscover works fine and I can log into the mailbox.



From the test machine (on the customer's network), Remote Connectivity Analyzer shows:




Attempting to send an Autodiscover POST request to potential Autodiscover URLs.
Autodiscover settings weren't obtained when the Autodiscover POST request was sent.



Test Steps




The Microsoft Connectivity Analyzer is attempting to retrieve an XML Autodiscover response from URL https://webmail.hostedDomain.com:443/Autodiscover/Autodiscover.xml for user dtest@customerDomain.com.
The Microsoft Connectivity Analyzer failed to obtain an Autodiscover XML response.



Additional Details



An HTTP 401 Unauthorized response was received from the remote Unknown server. This is usually the result of an incorrect username or password. If you are attempting to log onto an Office 365 service, ensure you are using your full User Principal Name (UPN).
HTTP Response Headers:
request-id: 6a387132-e372-4bf9-9833-779286820a61
Set-Cookie: ClientId=HMCLPHFOUYPIWAYOVXSW; expires=Fri, 05-Aug-2016 16:57:49 GMT; path=/; > HttpOnly
Server: Microsoft-IIS/8.5

WWW-Authenticate: Negotiate,NTLM,Basic realm="webmail.hostedDomain.com"
X-Powered-By: ASP.NET
X-FEServer: E2013ServerName
Date: Thu, 06 Aug 2015 16:57:49 GMT
Content-Length: 0




What gives?

Thursday, November 26, 2015

Disable Windows 10 Preview Builds


On my laptop I installed a fresh copy of Windows 10 (had to swap the hard drive anyways, so I didn't upgrade from Windows 7). Then I activated the preview builds and got the th2-preview. In this version Windows 10 can be activated directly with a Windows 7 key, this is why I did that.


Up to there everything worked as expected, but now I am getting frequent Bluescreens that I don't get on my desktop PC that runs the regular non-preview Windows 10, so I want to disable preview builds on my Laptop as well and go back to the regular Windows 10. So I go to the Updates-Screen under Settings, and press "Stop receiving preview builds". Now it tells me that I can disable preview builds for a few days, but if I want to disable them permanently, I have to do something else. There is a link where they should tell me what to do. I click the link and then I get to a website, where they tell me, I need to go to the updates-screen and need to press "Stop receiving preview builds", which of course, leads me back to the very same website. So as you see there is a bit of recursivity going on.


Do you know any way to break that loop? ;) Is there any way to stop receiving preview builds?


Answer



Apparently you can only get off the Insider Program if your current build is the same as the most recent non-Insider build. Until then you have to wait for the non-Insider builds to catch up to your Insider build (for that, better get onto the slow update track), and then you can leave the Insider Program.


windows 7 - How to automatically shorten names in a folder and its subfolders?

I'm trying to do a backup on my Dropbox folder but I get the following message:



Source Path Too Long


The source file name(s) are larger than is supported by the file system. Try moving to a location which has a shorter path name, or try renaming to shorter name(s) before attempting this operation.



My Dropbox folder contains 115752 files, so I wonder if there's a way to automatically shorten the names that are too long?


I'm running on Windows 7.
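There is no built-in Windows tool for this, but a useful first step is locating the offending files. A sketch using GNU find and awk (e.g. from Git Bash or Cygwin on Windows 7; the Dropbox path and the 240-character cutoff are assumptions, the latter leaving headroom under the ~260-character MAX_PATH limit):

```shell
# Print the length and full path of every file whose absolute
# path is longer than 240 characters.
find "$HOME/Dropbox" -type f 2>/dev/null | awk 'length > 240 { print length, $0 }'
```

Renaming is risky to automate blindly (name collisions, shared links inside Dropbox), so reviewing this list and shortening the worst directory names by hand is probably safer.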

mysql server, open 'dead' connections



My basic question is what kind of impact does this have on the server?




Let's say, for example, there is an older program in my company that opens connections to a MySQL database server at a high rate (basically everything users do in the application opens a server connection). However, this application was not designed to dispose of the connections after they were created. A lot of the time the connections remain open but are never used again: open 'dead' connections, I guess you could say.



They just remain connected until the server times them out, or until an admin goes in and removes the sleeping connections manually. I'm guessing this could be responsible for the occasional "unable to connect" errors we receive from other systems that try to access the MySQL database (connection limit reached)?



Could this slow down the server as well? Curious what all this could exactly cause.


Answer



You could play some games with the timeout values in MySQL.



For example, the default value for 'wait_timeout' and 'interactive_timeout' is 28800 (that's 8 hours)




You can see what they are set to by running this:



SHOW VARIABLES LIKE 'interactive_timeout';
SHOW VARIABLES LIKE 'wait_timeout';


If you want to lower these to, say, 1 minute, a MySQL restart is not required.



Run these as the root user:




SET GLOBAL interactive_timeout=60;
SET GLOBAL wait_timeout=60;


This will ensure that any new MySQL connections time out in 60 seconds.
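To gauge how bad the problem currently is, you can count the idle connections before and after changing the timeouts (the information_schema query assumes MySQL 5.1 or later, where the PROCESSLIST table exists):

```sql
-- Count connections currently idle in the "Sleep" state:
SELECT COUNT(*) FROM information_schema.PROCESSLIST WHERE COMMAND = 'Sleep';

-- Or list them individually (Id, User, Host, db, Command, Time, ...):
SHOW PROCESSLIST;
```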



then add these lines to /etc/my.cnf under the [mysqld] section



interactive_timeout=60
wait_timeout=60



Of course, it is easier to restart mysql to remove the remaining sleeping connections. All connections, going forward from there, will timeout in 60 seconds.



Give it a try and let us know !!!


storage area network - SAN design: File & Block level access?



The short question: can I share file and block level traffic on the same SAN? Perhaps more importantly, should I? The gory details are below...



I'm hopefully putting the finishing touches on a new SAN design, and our new planned storage (EMC VNXe3100) will support being an iSCSI target, our original goal. It also supports file-level storage as well via CIFS and NFS. Some of the features we hope to use (particularly deduplication) are only available via file-level shares.



The VNXe3100 has 2 controllers with 2 NICs per controller. Each NIC is going to a different switch, so either the controller or the switch can fail, and we should still be in business. This means that both file and block traffic would need to be enabled on each NIC. I'm assured by our rep that this is possible.




My plan is to put the VNXe and the 5 host servers on the same VLAN and subnet (call it 192.168.1.x). This should keep my block-level iSCSI stuff only in that VLAN with no route out. But I would have a route out to the rest of the network for the file-level traffic on a different subnet (192.168.55.x). So each NIC would have an IP address for block traffic in the 1.x range and another for file traffic in the 55.x range.



Since we are new to the world of iSCSI and the world of SAN/NAS devices, I want to make sure this isn't some horrible intermingling. But it would be really nice to expose our VMWare as NFS and get the VMs deduplicated on our hardware, and not having to maintain another file server would also be a bonus.



If there's something else I'm overlooking, I'm all ears.


Answer



I'm not familiar with the inner workings of EMC's arrays, but I've been led to believe that they are a block SAN with a file-level NAS controller bolted on -- you can have iSCSI LUNs that go directly to your servers, or you can export them to the NAS head and share them out as NFS/CIFS. You can have different LUNs set up with different access types, but a single LUN can be one or the other (block- or file-level access), not both.



Other systems (ie, NetApp) work in reverse; NAS is their native format, and an iSCSI or FC LUN is just a single huge file that it serves out with those protocols (with some protection to keep you from inadvertently messing them up if you access the parent directory with NFS.)




With only 2 NICs per controller, you might run into some issues trying to mix block and file access. File-level access (being based on IP) relies on the underlying protocol stack for redundancy (typically you configure the ports in a failover bond group, with a single IP across the pair), whereas iSCSI descends from the storage world and expects redundancy to be handled above it in the stack, by way of a multipathing driver on the attached hosts. It's likely that a port on the EMC can't both be configured with its own IP for multipathing and have a virtual IP in a failover group (notwithstanding failover of the whole controller; I'm not sure how the EMCs handle that). Doing iSCSI on top of a bonded interface can work, but you won't get the added performance of multipathing.


Microsoft Edge doesn't get updated with Windows updates



I have Microsoft EdgeHTML 15.15063, but I want to update to the latest stable release of EdgeHTML. I regularly apply Windows updates, but the browser hasn't been getting updated automatically, despite the fact that it's the only way I know of to update the browser.




Is there a way to download a manual patch? Or is there something that might prevent Edge from getting updates?



I have Windows 10 Enterprise.



My solution: Thanks for the information. I have accepted the answer that led me to discover the actual problem. I found that my version update was failing. I ran chkdsk /r (which found and repaired a problem with Windows Update) and rebooted. My updates were able to be successfully installed after that.


Answer




I want to update to the latest stable release of EdgeHTML.





If you want the current stable release of EdgeHTML, you must upgrade your installation to Windows 10 1709 or Windows 10 1803. EdgeHTML 17 is technically the current version, but EdgeHTML 16 is considered the current stable version.




  • Windows 10 April 2018 Update includes EdgeHTML 17


  • Windows 10 Fall Creators Update includes EdgeHTML 16






Is there a way to download a manual patch?




You can manually download the current 1709 or 1803 Windows 10 ISO. However, EdgeHTML 16 and EdgeHTML 17 can only run on their respective versions of Windows.



In other words, it is not possible to update EdgeHTML 15, which is included in the Windows 10 Creators Update (1703), to a newer version unless you upgrade Windows to the respective version.





I regularly apply Windows updates, but the browser hasn't been getting
updated automatically, despite the fact that it's the only way I know
of to update the browser.




If you have installed all the currently released cumulative updates for Windows 10 Version 1703 then you have the current version of EdgeHTML 15. If you have KB4103731 installed, then you are running 15063.1088, which means you are using the current version of EdgeHTML 15.
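To check which Edge build is actually installed (a hedged sketch; the cmdlet is standard on Windows 10, but the exact output format varies by build):

```powershell
# Packaged Edge app version; on Windows 10 it tracks the EdgeHTML/OS build
Get-AppxPackage -Name Microsoft.MicrosoftEdge | Select-Object Name, Version

# OS build, e.g. 15063.1088 on 1703 with KB4103731 installed
cmd /c ver
```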


Windows 7 warns of low memory condition when plenty of memory is available - can I set a threshold?


My Windows 7 machine with 16 GB of physical RAM occasionally warns of a low-memory condition, asking me to close programs to free up memory. When I check Task Manager or Resource Monitor I find that nearly 8 GB is free. One or two processes (an RDBMS, a Tomcat server, etc.) consume large amounts of memory (~4 GB each), but the machine does not seem to be running low on memory when this warning is displayed.


My questions:



  1. Why is this warning being displayed if the amount of physical memory seems to be more than adequate for the tasks at hand?

  2. Is there a way to set thresholds for when these low memory warnings are issued?


Answer



The problem is that while the memory is free, Windows 7 has already promised it to applications. The solution is to make sure you have a large enough pagefile. This lets Windows continue to make commitments without fear that, should all the commitments be claimed at once, it won't have enough memory to meet them.


I explained this phenomenon in more detail here. You can have plenty of free memory but Windows can still be unable to allocate more because that free memory is already promised to applications that will probably never use it.


windows - How to get rights of admin after I disabled all admin accounts in my computer


I accidentally disabled my admin account.


After I login to another account I found I cannot get admin rights, because all admin accounts on my computer are disabled.


I clicked on 'Run as administrator' but only see a smart-card prompt (all the admin accounts are disabled, so no account choice is provided).


enter image description here


I don't want to re-install my OS, help!




Details:


I have account A on my computer.


I got a new computer so I want to give the old one to my mother.


I created a new user account B for her and disabled account A.


Then I logged out and restarted.


I logged in to B successfully, but found I could not get admin rights because no account choice is provided.




More information about my machine:


OS is Windows 10 and my admin account is a Microsoft Account.


I have Arch Linux installed on my computer and dual-boot using GRUB, so maybe I cannot use Safe Mode.


Answer



The following tutorial will allow you to enable the default Administrator. There are other ways to change the permission of an existing user and/or change the password of an existing Administrator. Those methods are not covered by this tutorial and are considered out of scope for the purpose of this question.


This tutorial assumes you know how to create an installation disk, boot to that disk, and enter WinRE which is contained on that disk. This tutorial won't cover how to do that. This tutorial does not require access to an Administrator account, the entire purpose of this tutorial is to enable the built-in Administrator account which by default is disabled.


Enable or Disable Built-in Administrator in Command Prompt at Boot




  1. Download the Windows 10 .ISO


  2. Within WinRE at a command prompt, type regedit, and press Enter.


  3. In the left pane of Registry Editor, click/tap on the HKEY_LOCAL_MACHINE key.



enter image description here



  4. Click/tap on File (menu bar), and on Load Hive.


enter image description here



  5. Open the drive that you have Windows 10 installed on, and browse to the location below.



X:\Windows\System32\config




  6. Select the SAM file, and click/tap on Open.


enter image description here



  7. In the Load Hive dialog, type REM_SAM, and click/tap on OK.


enter image description here



  8. In the left pane of Registry Editor, navigate to and open the key below.



HKEY_LOCAL_MACHINE\REM_SAM\SAM\Domains\Accounts\Users\000001F4



enter image description here



  9. In the right pane of the 000001F4 key, double click/tap on the F binary value to modify it.


  10. In line 0038, change 11 to 10, and click/tap on OK.



enter image description here



  11. Close Registry Editor and the command prompt.


  12. Click/tap on Continue to start back up into Windows 10.




Source


Related Question: Where can I get a clean ISO of the Windows 10 Anniversary update?


TCP performance differences between RH Linux and Solaris in java?



While comparing Java TCP socket performance between RH Linux and Solaris, one of my tests uses a Java client sending strings and reading the replies from a Java echo server. I measure the time spent to send and receive the data (i.e. the loopback round trip).




The test is run 100,000 times (more iterations give similar results). From my tests, Solaris is 25-30% faster on average than RH Linux, on the same computer with default system and network settings, the same JVM arguments (if any), etc.



I don't understand such a big difference; are there some system/network parameters I am missing?



The code used (client and server) is shown below if anybody is interested in running it (the iteration count has to be given on the command line):



import java.io.*;
import java.net.*;
import java.text.*;

public class SocketTest {

    public final static String EOF_STR = "EOF";
    public final static String[] st = {
        "toto",
        "1234567890",
        "12345678901234567890",
        "123456789012345678901234567890",
        "1234567890123456789012345678901234567890",
        "12345678901234567890123456789012345678901234567890",
        "123456789012345678901234567890123456789012345678901234567890"
    };

    public static void main(String[] args) throws UnknownHostException, IOException, InterruptedException {
        double mean = 0.0;
        int port = 30000;
        int times = Integer.parseInt(args[0]);
        String resultFileName = "res.dat";
        new EchoServerSimple(port); // instantiate and run
        Socket s = new Socket("127.0.0.1", port);
        s.setTcpNoDelay(true);
        PrintWriter pwOut = new PrintWriter(s.getOutputStream(), true);
        BufferedReader brIn = new BufferedReader(new InputStreamReader(s.getInputStream()));
        long[] res = new long[times];

        int j = 0;
        for (int i = 0; i < times; i++) {
            if (j >= st.length) j = 0;
            long t0 = System.nanoTime();
            pwOut.println(st[j++]);
            brIn.readLine();
            res[i] = System.nanoTime() - t0;
            mean += ((double) res[i]) / times;
        }
        pwOut.println(EOF_STR);
        s.close();
        print(res, resultFileName);
        System.out.println("Mean = " + new DecimalFormat("#,##0.00").format(mean));
    }

    public static void print(long[] res, String output) {
        try {
            PrintWriter pw = new PrintWriter(new File(output));
            for (long l : res) {
                pw.println(l);
            }
            pw.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    static class EchoServerSimple implements Runnable {

        private ServerSocket _serverSocket;

        public EchoServerSimple(int port) {
            try { _serverSocket = new ServerSocket(port); }
            catch (IOException e) { e.printStackTrace(); }
            new Thread(this).start();
        }

        public void run() {
            try {
                Socket clientSocket = _serverSocket.accept();
                clientSocket.setTcpNoDelay(true);
                PrintWriter pwOut = new PrintWriter(clientSocket.getOutputStream(), true);
                BufferedReader brIn = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
                try {
                    while (true) {
                        String s = brIn.readLine();
                        // readLine() returns null if the client closes without sending EOF_STR
                        if (s == null) { clientSocket.close(); break; }
                        pwOut.println(s);
                        if (s.equals(EOF_STR)) { clientSocket.close(); break; }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                    try { clientSocket.close(); } catch (IOException e1) { e1.printStackTrace(); }
                }
            } catch (IOException e) { e.printStackTrace(); }
        }
    }
}


I'm using JRE 1.6.0_18 for both OSes, on a single 2.3 GHz dual-core Nehalem.
The two OSes are Solaris 10 and RH Linux 5.4 with RT kernel 2.6.24.7.



Thanks a lot.


Answer




On Solaris this is called "TCP Fusion": two local TCP endpoints are "fused" and bypass the TCP data path entirely.



Try to disable it and run your test again:




# echo do_tcp_fusion/W 0 | mdb -kw
do_tcp_fusion: 0x1 = 0x0
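The mdb write only lasts until the next reboot. To make it persistent, the usual Solaris mechanism (an assumption; confirm the tunable's module name on your release) is an /etc/system entry followed by a reboot:

```
* /etc/system (fragment): disable TCP fusion across reboots
set ip:do_tcp_fusion = 0x0
```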


You should probably try to create a test environment which mimics your production as closely as possible. That probably means setting up a real network with network adapters.




If you want to play with connection throttling or other complex network situations, I suggest you place a FreeBSD box between your two endpoints and play with ipfw/dummynet or pf/altq.


apache 2.4 - apache2.4 - ubuntu 16.04 - different php versions for different vhosts



I'm just a developer, but I can't think of a more appropriate forum for my question:
After upgrading to Ubuntu 16.04 the standard version of PHP is 7.0. For my purposes it would be best anyway to have a setup that allows me to





  • either choose in the vhost-config-files, which php-version to use

  • or switch easily from one version to another



I would need PHP 5.4 and 5.5 as options. So I asked Google and tried the solutions I found, but couldn't get any of them working.



I'm stuck at this situation:
I tried some solutions with ppa:ondrej/php, but this broke my package management.
I installed apache-dev, php7.0 and phpbrew, managed to build PHP 5.5.38 via phpbrew, and tested it in the shell.

Then I tried to adapt this answer to my actual situation.
But a lot of things are different in Ubuntu 16.04, and after several days of reading and trying I come back to this question:

How can I install, keep up to date, and use the three PHP versions 7.0, 5.5, and 5.4 with Apache 2.4 on Ubuntu 16.04?

Thanks
Ejoo



P.S. offline for some hours now



Answer



There has to be a very good reason to use unsupported versions of PHP, such as 5.4 and 5.5, which are not even receiving security updates any more.



In any case, and ignoring that fact, nowadays, the easiest way by far to achieve this is by using containers, because it completely eliminates the dependency problems, and keeps the host OS clean from PHP.



The Docker Hub official PHP image supports versions from 5.6.29 to 7.1.0.



It is trivial to have a web server (containerised or not) acting as a proxy using Virtual Hosts to front those PHP containers running different versions of PHP.
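As a sketch of that proxy wiring (the hostname, port, and paths below are illustrative, and mod_proxy_fcgi must be enabled): a vhost hands .php requests to a PHP-FPM container for the legacy version, e.g. one published on 127.0.0.1:9056.

```
# /etc/apache2/sites-available/legacy-php.conf (sketch)
<VirtualHost *:80>
    ServerName legacy.example.com
    DocumentRoot /var/www/legacy

    # Forward PHP execution to the containerised PHP-FPM, e.g. started with:
    #   docker run -d -p 127.0.0.1:9056:9000 -v /var/www/legacy:/var/www/legacy php:5.6-fpm
    ProxyPassMatch "^/(.*\.php)$" "fcgi://127.0.0.1:9056/var/www/legacy/$1"
</VirtualHost>
```

A second vhost pointing at a container on another port gives you a second PHP version side by side, which is the per-vhost switching the question asks for.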



If you really want to run an unsupported version of PHP, you will need to write a custom Dockerfile. You can use the upstream repository as a reference.



networking - disabling hyper-v in windows 10 causes complete network failure


I have disabled (uninstalled) Hyper-V so I can run VMware, and have completely lost the network. It shows the cable connected but there is no internet connection. I've tried a different cable and have reset my switch and router with no success. I've run the netsh commands to reset IP, IPv4, IPv6, and Winsock, and I have deleted the two Winsock keys in the registry. I was unable to edit the nettcpip.inf file; access was denied (even after running Notepad as administrator).


I am running out of internet searches and ideas. Does anyone have an idea of what I need to do (short of re-installing windows)?


Answer



I finally ended up restoring from a restore point. I am sure there is something else that can be done instead, unfortunately, I needed the network on that computer immediately.


windows 7 - Cancel AVG through task manager gives "Access Denied"

When I start up my computer (Windows 7), AVG starts up. Usually this isn't a bad thing, but recently it's been taking up close to 40 percent of CPU according to Task Manager. When I right-click on the process and select End Process, it just says "Access Denied". I'm on an administrator account, and I should have full access.

A while ago I took "ownership" of the whole System32 folder, so I assume that extends to the AVG subfolder. Anyway, I tried to edit the permissions (through Properties > Security > modify permissions) but that still said access denied. I then tried through Windows Explorer, and got the same result, even though it said my account was the owner of the file. No user had full permissions: not TrustedInstaller, System, Administrator, or Users.

As I only have close to 50 GB left on my current hard drive, I'd like to solve this without downloading anything, though if I need to I will. Any recommendations about how to do this?



Thanks
~Keelen

ESXi 6.7 VMWare Backup to External USB Drive

Using VMware ESXi 6.7.0 with Essentials licensing, I would like to back up my VMs to a plugged-in external HD. Previously we did it with Veeam because we were using a Windows Server, but now with their hypervisor we couldn't find a way to do it.



What I've already seen:



Veeam - Couldn't find an appropriate solution for what we need. Is it possible to install it or use it remotely without paid agent solutions?



OVF Export - From what I understand from reading, this feature would use the network to export the VM. That's not exactly what I'm looking for (if that's how it works).



ghettoVCB.sh - I'm not sure whether this one works with this version; from what I read, it works with versions 4 and 5.




Is there any other way to achieve this using free alternatives to communicate with ESXi 6.7?



Thank you very much!

power supply - Asus laptop doesn't turn on

My girlfriend has an Asus laptop with a battery that doesn't work, so she uses it while it's connected to the wall.




This laptop has been in use for 2+ years, I think, and today it suddenly stopped working.
A few LED lights still light up, but that's it. Other than that, the screen is black,
and nothing works.



I searched the web for solutions and tried something that seems to work for a lot of people:



1. Unplug the battery (that's easy, it doesn't have one).



2. Take out the charger cord.




3. Hold the power button for 30 seconds.



4. Insert the power cord.



5. Power up the laptop.



That didn't work for me.



I made a video of the lights that still work, when I turn on the machine.




https://www.youtube.com/watch?v=6r4mWA0z67A&feature=youtu.be



I should add that some of the noises we used to hear when the machine started are gone.
Could it be that it just died?



Any idea what happened?

windows - Installed Google Chrome 64 bit, but am I launching the 32 or 64 bit version?



I installed Chrome 64-bit, but am I running the 64-bit version?



I went to Control Panel\System and Security\System to confirm that I'm running a 64-bit version of Windows on an x64 processor.




I don't have a folder C:\Program Files\Google, but I do have a folder C:\Program Files (x86)\Google\Chrome\Application.



When I go to Task Manager, I see that Google Chrome (32 bit) is running.



When I go to Control Panel\Programs\Programs and Features, I only see Google Chrome listed once.



This is the 2nd computer that I'm seeing this on.


Answer



This happened to me as well. Confirm that you downloaded the correct (64-bit) installer. Then quit Chrome completely and restart it. Go to the settings menu (hamburger icon) > About Chrome (chrome://chrome/). Under the version number, there will be a message about an update and a button that says "Relaunch." Click on that button to update Chrome properly.




When you open that page again afterwards, the version number should say "Version 37.0.x (64-bit)". I believe this happens because Chrome doesn't really close itself and keeps running in the background, so a manual relaunch is required to make the switch.


Wednesday, November 25, 2015

windows 7 - Run a batch file on a remote computer as administrator

I am trying to run a batch file (to install some software) on a remote computer. To do this, I am using PSExec.



psexec.exe \\COMPUTER C:\swsetup\install.bat



This works fine, apart from some of the installs failing due to the script not running as an administrator (if I log on, right-click, and select "Run as Administrator", the script runs and installs successfully).


I have tried running as administrator with the /runas command, with no luck



psexec.exe \\computer cmd



and then



runas /user:computer\administrator C:\swsetup\install.bat



The system flicks up with "Enter password for account" and then jumps back to the cmd prompt without letting me type the password in. The same issue happens if I try and do



runas /user:myaccount@domain.int C:\swsetup\install.bat



Is there a way around this, or am I going to have to visit the machine, log on, and then run the script on each machine?
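One commonly suggested alternative (untested here; the account name and password are placeholders) is to let PsExec supply explicit credentials and request elevation itself, instead of layering runas on top. On Vista and later, the -h switch asks PsExec to run the process with the account's elevated token:

```
psexec.exe \\COMPUTER -u DOMAIN\adminaccount -p P@ssw0rd -h C:\swsetup\install.bat
```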

Resetting Windows Vista Ultimate Password






I have received a laptop with Windows Vista Ultimate installed, but I never received the password to login, is there a way I can reset the password?


Answer



Here's a tutorial on how you can reset the password. There are about 4 ways listed on the page on how you can do this so it shouldn't be that difficult.


permissions - Can't create files with `ubuntu` user under `/var/www`

I've added ubuntu user to the www-data group and set the folder permissions as follows:



sudo gpasswd -a "$USER" www-data
find /var/www -type f -exec chmod 0640 {} \;
sudo find /var/www -type d -exec chmod 2750 {} \;


I can verify that ubuntu has been added to the group (running groups shows ubuntu www-data). I can access and read any files and directories in the /var/www directory as ubuntu.




I want to grant write permissions to ubuntu user in certain directories. Running sudo chmod -R g+w /var/www/public/uploads/ gives ubuntu access to write into this folder.



The problem is that when www-data creates new directories in /var/www/public/uploads/, ubuntu does not have permission to write in these newly created directories.



That is, when www-data creates /var/www/public/uploads/some-new-folder/, ubuntu cannot touch files in some-new-folder.



How can I change the permissions so that any files and directories created by www-data in specific paths will be writable by ubuntu as well?
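The setgid half of this is easy to verify on any Linux box; the paths below are scratch directories standing in for /var/www (group inheritance works the same way for www-data):

```shell
#!/bin/sh
# A setgid parent directory propagates its group and the setgid bit to
# subdirectories created inside it, no matter which user creates them.
set -e
demo=$(mktemp -d)

mkdir "$demo/uploads"
chmod 2775 "$demo/uploads"                     # the leading 2 is the setgid bit

mkdir "$demo/uploads/some-new-folder"
stat -c '%a' "$demo/uploads/some-new-folder"   # mode starts with 2: setgid inherited
```

Note that setgid only fixes group *ownership*; the group write bit on new entries still depends on the creating process's umask. The usual complement is a default ACL, e.g. `setfacl -d -m g:www-data:rwX /var/www/public/uploads` (requires the acl package and ACL support on the filesystem), so anything www-data creates there is group-writable too.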

linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...