Wednesday, December 31, 2014

windows 7 - Win7 x64 Pro Fresh stuck on Installing KB3020369

I've got a new PC from Lenovo and it is just hanging when installing KB3020369. I have tried a heap of troubleshooting steps including using wsusoffline, but it is still stuck.


I can't find any logs in Event Viewer or WindowsUpdate.log explaining why it's stuck on this MSU (the latter may be irrelevant when using wsusoffline).
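For what it's worth, the component-based servicing log (CBS.log) usually has more detail than WindowsUpdate.log for stuck .msu installs. A sketch of checking it, where the .msu path is just an example of wherever the standalone package was downloaded to:

:: apply the standalone package manually, without the GUI or an automatic reboot
wusa.exe C:\Temp\Windows6.1-KB3020369-x64.msu /quiet /norestart

:: then look for the KB number in the servicing log
findstr /i "KB3020369" %windir%\Logs\CBS\CBS.log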


Thanks for your help in advance.


Cheers

windows 7 - Laptop display dims when running on battery power


I recently bought a Lenovo IdeaPad Y580 and it has one minor annoyance: It continues to dim the screen when I unplug it from the power cord. I have tried setting the screen brightness in the advanced power settings but no matter what I set this to, the laptop always gets dimmer when unplugged.




So how can I keep the display brightness at 15% at all times?


Answer



Unfortunately, all the power settings in Windows may be completely overridden by Lenovo's own power-management application.
Look for "Lenovo Energy Management"; it should have some customisable settings.


If there are no such settings, then try disabling "Lenovo Energy Management" from Task Manager.


If nothing works and you are desperate to achieve custom settings, uninstall the Energy Management software. (Not recommended.)
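If you do end up uninstalling it, it is worth double-checking what Windows itself will apply on battery. A powercfg sketch (the setting GUID below should be the stock "Display brightness" setting, but verify it against the /query output; the value is in percent, and Energy Management may still override this while it is installed):

:: show the current display-brightness values for the active power plan
powercfg /query SCHEME_CURRENT SUB_VIDEO

:: force 15% brightness on battery (DC) and re-apply the plan
powercfg /setdcvalueindex SCHEME_CURRENT SUB_VIDEO aded5e82-b909-4619-9949-f5d71dac0bcb 15
powercfg /setactive SCHEME_CURRENT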


storage - DL180 G6 12 bay backplane connections



I have an HP DL180 G6 12-bay server with a P212 RAID card. I cannot open the server to see inside, but I would like to know what connections the backplane has. Right now it has six 1 TB HDDs attached. According to the HP RAID array configuration program, the six drives are on one SFF-8087 connection. (I cannot find any HP documents about the backplane.)




What I would like to do is add another HP RAID card with 2x SAS connectors to create an 8-drive RAID 10. Is this possible, and how is the backplane set up?


Answer



The HP ProLiant DL180 G6 12-bay LFF drive backplane has a single 4-lane SFF-8087 SAS connector (See #4 on the graphic below).



The backplane has an integrated SAS expander that accommodates the 12 bays. The expander actually supports 14 ports, with two backplane-mounted SATA connectors to interface to the rear-mount 2-drive cage option (HP #488234-B21).



If you're interested in replacing your Smart Array P212 controller, you can safely use a Smart Array P410 or Smart Array P812 controller to accomplish your goal. Only one SFF-8087 port will be used.
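If you want to confirm from the OS how the drives currently present behind that single connector, the HP Array CLI can list them. A sketch, assuming hpacucli is installed; the slot number is an example, so use the slot reported by the first command:

hpacucli ctrl all show config
hpacucli ctrl slot=1 pd all show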



Diagram of the DL180 G6 12-bay LFF backplane (item #4 is the SFF-8087 connector)




Also see the DL180 G6 Service Guide.


windows - Use QWERTZ keyboard as a QWERTY


I want to buy a laptop with Windows 8 with a German QWERTZ keyboard, but I will only use it for English and Greek.
So, I will change the language from Control Panel to English and work like this. I don't care if the Z and Y letters physically appear swapped, or if some other strange symbols are on the keys.


What I really care about and want to be clear on is:


If I switch to English/Greek language using the German QWERTZ keyboard, will all the keys work exactly the same as on a QWERTY keyboard? Will all the letters/shortcuts/symbols/Shift-Ctrl-Alt combinations etc. work the same?


Answer



Yes, the keys will work the same, provided the physical layout (i.e. number and placement of keys) is the same between your German QWERTZ and English/Greek QWERTY keyboard.


For my localized Danish keyboard, there is the same number and placement of keys, so I can use a US keyboard layout with no problems on my physical DK (Danish) keyboard.


Why does my USB stick not show up in Windows Vista?


Just reinstalled a Lenovo laptop with Vista.


Two separate USB sticks, that work fine on another computer, will not show up on this Vista computer. USB ports work fine for other stuff. USB sticks worked before I reinstalled the computer.


After looking around, I tried going into Disk management, to see if they appear there - they don't.


Is there some sort of service, that might be disabled, not allowing removable drives to mount? Or what else can be wrong?


Answer



After trying all kinds of tricks, also the stuff John Rudy suggested - I eventually gave up and installed windows 7 on the pc. Then everything was fine.


For future reference, the machine was a brand new Lenovo W500 - and it was the preinstalled Vista.


hardware - Is it possible to use server PCIe cards/PCIe RAID controller on desktop motherboard?



I've found myself in possession of an MSI 785GT-E63 motherboard. I need to put a PCIe x8 RAID controller in it, but the x16 PCIe slot was certainly originally intended for graphics cards.




Would putting a RAID controller instead of a graphics card on there work?


Answer



It is going to work; I put an x4 RAID card in one of the x16 slots :).


windows 7 - Disable Recent Items in Jump Lists for Certain Programs

Is there any way to turn off the recent items in the jump lists of specific programs in Windows 7? This feature is useful for some programs (like my text editor), but there are other programs for which I don't want everyone to see what I've opened recently (like my video player). I've searched around for a solution to this and I've found two "solutions":





  1. Turn off recent items in all jumplists (open the Taskbar and Start Menu Properties and uncheck the "Store and display recently opened items in the Start menu and the taskbar").

  2. Manually clear the recent items history.



Neither of these options seems very useful to me. #1 seems like the better solution if you really don't want someone to see your recent documents, but then you lose that functionality for all programs instead of just the ones you want, while #2 seems like something that's way too easy to forget about.

iis 7 - How failover should work in IIS cluster with Application Request Routing?



I have set up several servers with IIS and connected them to the load balancer - a server with IIS Application Request Routing installed. I have created a server farm and added two servers. Then I stopped IIS on the first server and tried to open my web site. It returned an error:




502 - Web server received an invalid response while acting as a gateway or proxy server.
[There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.]



But if instead of stopping IIS I shut down the first server, I get a response from the next server, which is online. The question is: what should the expected failover behaviour be with ARR? Should it switch me to the next server if IIS is stopped but the server is online?



Additional info:
I've tried shutting down each server and I was able to open the web site in both cases, so there is no problem with connections or configuration. I'm using even distribution and the round-robin load-balance algorithm. The problem appears only if I stop IIS and leave the server online.


Answer



In order to let the ARR server know when not to route requests to a server, you need to set up a Health Test.





  • Select your server farm in the IIS Manager

  • Select the Health Test Feature.

  • Here you can fill in a URL to run the health check against (i.e. a text file containing a good word, or the index page of the web site you're trying to load balance).



Once you have configured a health test, you should be able to stop IIS on one of the servers and see that requests to the ARR server are getting answered by the other.
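The same health-check URL can also be set from the command line with appcmd, roughly like this (a sketch - the farm name and URL are placeholders, and the exact property path may differ between ARR versions; the GUI steps above are the documented route):

%windir%\system32\inetsrv\appcmd.exe set config -section:webFarms "/[name='MyFarm'].applicationRequestRouting.healthCheck.url:http://myfarm.example.com/healthcheck.txt" /commit:apphost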



For maintenance scenarios, you can gracefully pull a web server out of load balancing using the Monitoring and Management feature. Select the server you need to drain, and select Disallow new connections from the Actions pane.


raid - File History not working Windows 8.1

I have file history turned on yet no data is saved.

When I turn FH off and then on I get an error message: "File History has found files that are encrypted with Encrypting File System, on a network location, or on a drive that doesn't use the NTFS file system. These files won't be backed up."

When I turn on file history I get the message that File History is saving copies of my files but nothing is written to the backup disk.

I have tried backing up to my C drive but that doesn't work either.

I have a Dell Studio XPS 435t/9000 running Intel Rapid Storage RAID 1 with four 1 TB drives as C and D. I updated from Windows 7 Pro to Windows 8 and then Windows 8.1.
I have a WD external 2 TB drive with 1.3 GB free. (I have a system image of both my drives.)
I have searched for EFS files or directories but none exist. None of the drives are a network location. Everything is NTFS. All the security permissions are correct (as far as I can tell.)
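One more place to look is File History's own event-log channels, which record why files are skipped; a sketch from an elevated command prompt, where CHANNEL_NAME_FROM_ABOVE is a placeholder for whatever the first command reports:

wevtutil el | findstr /i FileHistory
wevtutil qe CHANNEL_NAME_FROM_ABOVE /c:20 /f:text /rd:true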

When I press Run Now the date saved is not updated.
I cannot find anyone with this problem.

Help!

Regards,
Chris

Dim Specific Windows

Is there a way to dim specific, individual windows?





I often find the ubiquitous white or almost-white backgrounds of many programs and pages I browse unpleasantly bright. Often they either cannot be configured at all, or not enough, or have too many special cases to be configured efficiently. Images or videos can introduce excess brightness.



Lowering whole-screen/system brightness is not a solution, because a bright window next to a bunch of dark windows is bad no matter how much the global settings are changed.




I want to dim just the windows that are too bright for me so that they are tolerable next to my preferred/configured dark windows.





Ideally, I would like to:




  1. quickly dim a window when it becomes too bright,

  2. configure some programs to always have all or some of their windows dimmed by default,

  3. be able to manually toggle the dimming on individual windows, and


  4. retain full functionality.



At the bare minimum, if you give me a programmatic way to dim a window, I can and will do the rest with some programming language.





I would like to do it without stealing focus or otherwise blocking keystrokes, mouse events, and other window events, so that it stays dimmed even as I interact with it, including resizing, moving, and hiding the window (which I usually do with my own AutoHotKey scripts, but often enough with the built-in keyboard and mouse ways too).



I would prefer a portable solution that I can reuse between Windows 7 and Windows 10 (and if it works on Windows 8 too that would be a plus), but I will gladly accept anything at this point. My most pressing daily need is Windows 7.




The less I have to install to make it work, the better, but I'll install anything remotely reasonable at this point.



I am a libre software zealot at heart, but in a moment of weakness I'd maybe even use a shady binary off the internet if it did the job well enough, and I would definitely pay good money for a polished closed solution from a reputable source.



I would really enjoy it if there were no weird visual artifacts or noticeable delays, and I would be willing to tolerate some constant CPU load to make that happen, but even a solution that only works while the window is stationary and that I have to manually reapply and even clean up after would be an improvement.



Would be nice if it didn't leak memory, but if I have to restart it once a week or daily, it would still be worth it.

.htaccess - Advanced Redirect 301 htaccess



I need to do the following with a 301 redirect in .htaccess:



From this:



https://ejemplo.com/post-noticia/octubre-2019/122-nombre-de-mi-noticia



To this:



https://ejemplo.com/noticia/nombre-de-mi-noticia


I have tried this:



Redirect 301 /post-noticia/octubre-2019/ https://ejemplo.com/noticia/



But the result is:



https://ejemplo.com/noticia/122-nombre-de-mi-noticia


I need the result to be:



https://ejemplo.com/noticia/nombre-de-mi-noticia



That is, omit: 122-



Thank you


Answer



Have you tried Redirect 301 /post-noticia/octubre-2019/122- https://ejemplo.com/noticia/?



If 122- is a variable (you haven't given a clue), you have to use either RedirectMatch, which accepts regexps:



RedirectMatch 301 "/post-noticia/octubre-2019/\d+-(.+)$" "https://ejemplo.com/noticia/$1"



or fiddle with mod_rewrite.
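For reference, a mod_rewrite sketch of the same rule (assuming it lives in the site's root .htaccess, where the leading slash is stripped from the matched path):

RewriteEngine On
RewriteRule ^post-noticia/octubre-2019/\d+-(.+)$ https://ejemplo.com/noticia/$1 [R=301,L]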


git - File Sync (Mirror/Replicate) From source to multiple targets

My situation is this: I need to find a way of mirroring a folder with 60 GB+ of MP3 files from a server (or cloud) to at least 100 clients based around the country. All of these clients are internet connected, and before a client leaves the place where the server resides, the latest copy of the 60 GB is copied onto its hard drive, to avoid having to download the full 60 GB once it is installed in its new location. Changes are regularly made on the server side, from new files being added to ID3 tags being altered (which may not change the file size). I need some sort of solution that will probably involve the clients being scheduled to look at the server and initiate a download of changes and additions, whilst skipping matching files on server and client.
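In other words, each client would essentially run a scheduled one-way pull. As an illustration of the shape of it (a sketch - the host name and paths are placeholders, and this assumes rsync is available on both ends; ID3-tag edits change the file's modification time, so they are picked up even when the size stays the same):

rsync -av --delete user@music.example.com:/srv/music/ /var/local/music/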


Ideally, a one-way Dropbox would be perfect; however, I can't find anything like this. I've looked at backup solutions, however these seem to be the opposite of what I wish to do (uploading from many to one rather than one to many). I've come across Git and NAS, but not being that technically proficient, I can't work out whether they're right for what I need.


If anyone could provide any advice or suggestions on this, it would be great.


Also, if there are any details I've not mentioned, please do ask.


Thanks!

ntfs - "You need to format the disk in drive J: before you can use it" is shown, disk is not corrupted

I have a 7-8 year old laptop (HP EliteBook 6930p) running Windows XP with a 500 GB SATA hard drive. The computer is painfully slow and I'm about to give it away as a donation, but first I would like to back up all my data on it.



I thought the easiest way to do this would be to take the HD out of the computer and connect it to my new laptop using a SATA-to-USB adapter I have. I unscrewed the HDD, connected it via the adapter to my new Windows 10 laptop, and got a popup saying "You need to format the disk in drive J: before you can use it". At first I panicked that I had somehow got the disk corrupted, but when I connected it back to the old laptop it booted just fine and my data was there.




Why do I get this message saying the disk is not formatted, even though it obviously is? How can I connect this old HDD to my new computer to copy my files?



Some more information: The HDD has only one NTFS partition. I tried running testdisk command line tool and it told me that the boot sector is corrupted. I did not manage to "fix" it using testdisk tool (maybe because there's nothing to fix, the disk is just fine. It's just that my new computer can't read it). My SATA to USB adapter is not damaged because when I use it with another HDD everything works just fine.



Please do not suggest other ways to copy the files such as over LAN or cloud storage service. These won't work for me as the old computer can't connect to any network.

Windows XP Installation from USB




Is there an easy way to make a Windows XP installation from USB? Looking through Google doesn't give me anything. Any clues?

ubuntu 10.04 - KVM guests lose connectivity after networking restart



We're setting up an Ubuntu Server 10.04 host with KVM. The host is set up with a bond and bridged interfaces to allow the guests access to the network without NAT.
Our current configuration is working fine, except when we restart the network with /etc/init.d/networking restart.
After restarting the network, the guests lose connectivity. The only way to restore it is to halt the guest and start it again.




I've been looking around but I can't seem to find any known bug/issue/report of this behavior.



Here follows our network configuration file:



auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
bond-slaves none

bond-mode active-backup
bond-downdelay 250
bond-updelay 120
bond-miimon 100

auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0 eth1


auto eth1
iface eth1 inet manual
bond-master bond0
bond-primary eth0 eth1

#bridge used by host
auto br-vlan180
iface br-vlan180 inet static
address 10.0.0.200
netmask 255.255.255.0

gateway 10.0.0.1
vlan-raw-device bond0
bridge_ports vlan180
bridge_maxwait 0
bridge_fd 0
bridge_stp off
#bridge without address, used by vm
auto br-vlan120
iface br-vlan120 inet manual
vlan-raw-device bond0

bridge_ports vlan120
bridge_maxwait 0
bridge_fd 0
bridge_stp off


Thank you



ADDENDUM - brctl show output before and after nw restart:




BRCTL SHOW BEFORE NW RESTART



brctl show
bridge name bridge id STP enabled interfaces
br-vlan120 8000.984be1644072 no vlan120
vnet0
vnet1
br-vlan180 8000.984be1644072 no vlan180
virbr0 8000.000000000000 yes



BRCTL SHOW AFTER NW RESTART



brctl show
bridge name bridge id STP enabled interfaces
br-vlan120 8000.984be1644072 no vlan120
br-vlan180 8000.984be1644072 no vlan180
virbr0 8000.000000000000 yes



Apparently, the two virtual interfaces fail to come back after the network restart.



PS BEFORE NW RESTART



ps -ef | grep qemu
root 1784 1 6 11:45 ? 00:00:40 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name test02 -uuid ee6d84b6-dbf8-d93c-b32f-8ae6b7d9b80e -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/test02.monitor,server,nowait -monitor chardev:monitor -boot c -drive file=/dev/sysvg/test02,if=virtio,index=0,boot=on,format=raw -drive file=/root/ubuntu-10.04.2-server-amd64.iso,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=52:54:00:2c:d1:26,vlan=0,name=nic.0 -net tap,fd=48,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -vnc 127.0.0.1:0 -vga cirrus -soundhw es1370
root 2711 1 89 11:55 ? 00:00:14 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 2 -name nttest -uuid 04ca381e-0510-7d3c-c7e2-8f7d7b6ea58f -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/nttest.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/dev/sysvg/nttest,if=ide,index=0,boot=on,format=raw -drive file=/root/SW_DVD5_Windows_Svr_DC_EE_SE_Web_2008_R2_64Bit_English_w_SP1_MLF_X17-22580.ISO,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=52:54:00:62:1b:2e,vlan=0,name=nic.0 -net tap,fd=51,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -vga cirrus -soundhw es1370


PS AFTER NW RESTART




ps -ef | grep qemu
root 1784 1 4 11:45 ? 00:00:59 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name test02 -uuid ee6d84b6-dbf8-d93c-b32f-8ae6b7d9b80e -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/test02.monitor,server,nowait -monitor chardev:monitor -boot c -drive file=/dev/sysvg/test02,if=virtio,index=0,boot=on,format=raw -drive file=/root/ubuntu-10.04.2-server-amd64.iso,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=52:54:00:2c:d1:26,vlan=0,name=nic.0 -net tap,fd=48,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -vnc 127.0.0.1:0 -vga cirrus -soundhw es1370
root 2711 1 39 11:55 ? 00:03:51 /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 2 -name nttest -uuid 04ca381e-0510-7d3c-c7e2-8f7d7b6ea58f -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/nttest.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/dev/sysvg/nttest,if=ide,index=0,boot=on,format=raw -drive file=/root/SW_DVD5_Windows_Svr_DC_EE_SE_Web_2008_R2_64Bit_English_w_SP1_MLF_X17-22580.ISO,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=52:54:00:62:1b:2e,vlan=0,name=nic.0 -net tap,fd=51,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -vga cirrus -soundhw es1370

Answer



Well, there is the problem: when you restart networking, the vnetX tap devices are not reconnected, causing the VMs to lose connectivity with the bridge.



I guess you could manually reconnect them to the bridge, since they are still running, but the right way to do this would be to migrate the VMs away from a host where you make network changes, or take the VMs down if you're in single-host mode. In most corporate-level systems this is called "maintenance mode", and changing the network config is definitely maintenance.
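If you do want to reconnect them by hand, something like this should re-attach the two guest interfaces that disappeared from the brctl show output above (the vnetX names are taken from that output and may differ after the guests are restarted):

brctl addif br-vlan120 vnet0
brctl addif br-vlan120 vnet1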


capacity planning - MongoDB and datasets that don't fit in RAM no matter how hard you shove



This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules-of-thumb exist for a good RAM to Disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get.



But now for some performance details!



During the normal workflow of a single project run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely read-heavy as it needs to deduplicate records identified in the first half of processing. This is the workflow that "keep your working set in RAM" is made for, and we're designing around that assumption.



The entire dataset is continually hit with random queries from end-user derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so will be very likely to incur disk hits.




A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old, and is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect.



Finished document sizes vary widely, but the median size is about 8K.



The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB ratio is a good rule of thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks.



I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.


Answer



This is going to be a bunch of small points. There is sadly no single answer to your question, however.




MongoDB allows the OS kernel to handle memory-management. Aside from throwing as much RAM as possible at the problem, there are only a few things that can be done to 'actively manage' your Working Set.



The one thing that you can do to optimize writes is to first query for that record (do a read), so that it's in working memory. This will avoid the performance problems associated with the process-wide global lock (which is supposed to become per-DB in v2.2).
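A sketch of that pattern in the mongo shell (the collection and field names are made up for illustration):

// read first so the document is pulled into the working set...
db.records.findOne({ _id: someId });
// ...then the update touches memory instead of faulting to disk
db.records.update({ _id: someId }, { $set: { state: "deduped" } });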



There is no hard-and-fast rule for RAM vs SSD ratio, but I think that the raw IOPS of SSDs should allow you to go with a much lower ratio. Off the top of my head, 1:3 is probably the lowest you want to go with. But given the higher costs and lower capacities, you are likely going to need to keep that ratio down anyway.



Regarding 'write vs read phases', am I reading correctly that once a record is written, it is seldom updated ("upserted")? If that is the case, it may be worthwhile to host two clusters: the normal write cluster, and a read-optimized cluster for "aged" data that hasn't been modified in [X time period]. I would definitely enable slave-read on this cluster. (Personally, I'd manage that by including a date-modified value in your db's object documents.)



If you have the ability to load-test before going into Prod, perf-monitor the hell out of it. MongoDB was written with the assumption that it would often be deployed in VMs (their reference systems are in EC2), so don't be afraid to shard out to VMs.


windows 10 - Hard disk bit rot?

One bit on my 2 TB hard disk occasionally changes from a 0 to a 1 or vice versa. This seems to be an occasional read error.


I found this out because I have a program which runs at startup, computes SHA1 hashes of all my data files, and reports any that are different from hash values saved the last time.


I happened to have an extra copy of the affected file, so I am able to perform a bitwise comparison. The file is a JPEG image file and I can view it without any reported problem (so the bit must be in the RGB data values rather than any crucial image metadata or header part).


My disk's SMART info suggests there is no problem.


C:> wmic
wmic:root\cli>diskdrive get status
Status
OK
OK

The disk is an ST2000DM001-1ER164 in device manager. All partitions show as "Healthy" in Disk Management.
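The coarse status above only reflects the drive's overall SMART pass/fail; smartmontools (if installed) can dump the individual attributes, where the reallocated and pending sector counts are the ones to watch. A sketch (the device name is an assumption - the first command lists the real ones):

smartctl --scan
smartctl -a /dev/sda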


This is what my startup file-checker says:


Filecheck report
Some previously "inactive" files have been modified!
C:\Users\RGB\Pictures\From-Phone\Images\IMG_20160719_090140630.jpg

I can check its logfiles to see the SHA1 hash values for the two copies of the image:


C:> findstr 140630 filecheck.dat
2016-07-19 EMzHG9bZUqA1OkuiouZoN+mD8X4= C:\Users\RGB\Pictures\From-Phone\Images\IMG_20160719_090140630.jpg
2016-07-19 DhbuPVUu6A4Eo7BIkQww17iCakk= C:\Users\RGB\Pictures\2016\2016-07\2016-07-19\IMG_20160719_090140630.jpg

I can do a binary comparison to see what changed


C:> cd \Users\RGB\Pictures
C:> fc /b From-Phone\Images\IMG_20160719_090140630.jpg 2016\2016-07\2016-07-19\IMG_20160719_090140630.jpg
Comparing files [...]
0013B232: 40 00

That's a one-bit difference. It doesn't look like crypto malware; maybe the HD is going bad?


The next day


Filecheck report
Some previously "inactive" files have been modified!
C:\Users\RGB\Pictures\From-Phone\Images\IMG_20160719_090140630.jpg

Now both files are the same


C:> findstr 140630 filecheck.dat
2016-07-19 DhbuPVUu6A4Eo7BIkQww17iCakk= C:\Users\RGB\Pictures\2016\2016-07\2016-07-19\IMG_20160719_090140630.jpg
2016-07-19 DhbuPVUu6A4Eo7BIkQww17iCakk= C:\Users\RGB\Pictures\From-Phone\Images\IMG_20160719_090140630.jpg

I have both networked and offline backups of my data. I have created a Windows 10 system recovery drive on a USB flash stick.


What can I do to assess whether I need to replace the hard disk urgently?

Catch-All for MS Exchange 2013 SP1 on specific Authoritative Domains




I have an MS Exchange 2013 SP1 Environment with an Edge Server in the DMZ.
I have several email domains added to the accepted domains and all are authoritative.



For the purposes of this query, we will call them:



yyy.com (catch-all)
zzz.com (catch-all)
123.com (catch-all)



abc.com (mailbox email policy)




Three of these domains do not receive many emails and I am trying to get every email coming to these domains into my personal mailbox in the form of a catch-all.



I've set up a transport rule as follows:



If the message...recipient's address domain portion belongs to any of these domains: 'yyy.com' or 'zzz.com' or '123.com'

Do the following...Redirect the message to 'admin@abc.com'
and Stop processing more rules


Except if...Is sent to 'Inside the organization'


I've disabled (to the best of my knowledge) the recipient filtering on both the MBX and Edge servers, but when I send to test@yyy.com, it still bounces back saying that the user does not exist.



I have restarted the transport service after each change, still to no avail.



These are the commands I ran:



[PS] C:\>Set-RecipientFilterConfig -Enabled $false

[PS] C:\>Disable-TransportAgent "Recipient Filter Agent"


These succeeded on the Edge server but returned the following error on the Mailbox Server:



Transport agent "Recipient Filter Agent" isn't found.
Parameter name: Identity
+ CategoryInfo : InvalidArgument: (:) [Disable-TransportAgent], ArgumentException
+ FullyQualifiedErrorId : [Server=SV-EXCH-01,RequestId=564e806d-465e-40e9-b120-6e7ae554f1f1,TimeStamp=13/08/2014 8
:31:56 AM] [FailureCategory=Cmdlet-ArgumentException] 11DD97EF,Microsoft.Exchange.Management.AgentTasks.DisableTra

nsportAgent


Any help or feedback would be much appreciated!


Answer



It seems that when you set the domain as Authoritative, Exchange will do a recipient look-up and then bounce when it doesn't find the user regardless of what the mail-flow rules are. To remedy this, the domain needs to be set to Internal Relay.
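A sketch of that change in the Exchange Management Shell, repeated for each catch-all domain (this assumes the accepted-domain entry is named after the domain itself):

Set-AcceptedDomain -Identity "yyy.com" -DomainType InternalRelay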



The other part of the problem was the mail-flow rule itself which states "Except if...Is sent to 'Inside the organization'". If the domain is part of the accepted domains list, it is considered "inside the organization". This exception needs to be taken out.



If you require a catch-all to complement users that do exist on that particular domain, a dynamic distribution group can be set up to list all existing emails on that domain.



Tuesday, December 30, 2014

SOLVED - No admin account windows 10

I just started with a fresh machine (W10 Pro), and I just ******* it up. I created a user for this computer and it got admin rights. Also, from the users and groups menu, I have seen that there is another "administrator" account there. What I need to do is to have any account with admin rights. I don't have access to a recovery DVD/USB.


So, I decided to remove admin rights from the user account, but after doing that, I realised that the administrator account is blocked, so I can't change anything, nor create or give admin rights to any account.


I've tried several things from google, like:


Net user administrator /active:yes


which says that I don't have admin rights (yes, windows, I know that). Also, I've tried to use lusrmgr.msc and, again, no admin rights.


I've also found that I can use Shift while rebooting to access some system options; the problem is, whatever I choose, the computer seems to hang and stays there for a while, doing nothing (no LEDs blinking), and I need to kill the computer using the power button.


Tried to go to uefi settings using F5, F7, F10 and F11. No luck.


EDIT:


The system gets stuck while trying to boot from USB/DVD or into the UEFI/boot options using F9 (as suggested by HP support).


EDIT2: I was able to fix this issue. First, I downloaded a recovery tool from the manufacturer that boots before the OS and allowed me to disable the fast-boot "feature". With fast boot disabled, I was able to use the manufacturer's recovery tool to restore the factory image.


Thanks everyone for the help.

Disabling a laptop keyboard in place of a USB keyboard?

I frequently plug a USB keyboard into my laptop to use instead of the laptop keyboard. However, when I do this, the OS (Windows Vista) then accepts input from both the laptop keyboard and the USB keyboard. I want to place the USB keyboard on top of the laptop keyboard and that might result in accidental keypresses if the laptop keyboard is still enabled. So, is there any way to disable the laptop's built-in keyboard when a USB keyboard is plugged in? My laptop is a Dell Inspiron.

Cannot Create Key: Error Writing to the Windows 7 Registry



I have Windows 7 Ultimate 64-bit. Things were going well until I had to install Outlook 2007 and Visio 2007 on my machine for some client work. After that, Microsoft Office 2007 started trying to reconfigure itself every time it was launched. After some uninstalls, registry cleaning, re-installs, and various other experimental changes, I was able to correct the "Configure" issue [for all programs except Visio, and I'm willing to accept that].



However, during the process I lost the ability to do "File-->New-->Word Document" and "File--> New-->Excel Document", etc..




I tried repairing Office, but that did not add the menu items back in.



After some searching it appears this issue can be fixed by adding registry keys, as described here. Unfortunately I am unable to add those registry keys. The reg files from the link give an error: "Error Accessing Registry".



I opened up RegEdit and tried to add the keys manually, but I get the error "Cannot Create Key: Error Writing to the Registry."



I have also tried some programs such as Creative Elements Power Tools and FileTypesMan to address this issue, but neither one was able to solve it. I didn't get any errors from those tools, but it did not add items back into the "new" menu.



For the most part my experiments have been with trying to get excel in the file new menu, but long term I want to get them all back there.




I am running regedit as an administrator. I have re-assigned ownership of the key in question to the Administrators group. I have also given the Administrators group, my login account, the system account, and the Everyone account full access to the "HKEY_CLASSES_ROOT\.xlsx" key [and the "HKEY_CLASSES_ROOT" key]. That had no effect.



I also tried to use subinacl.exe to give access to those registry keys, but that did not address the issue.



I'm assuming I did something during my initial attempt to solve the problem that somehow blocked off access to that set of keys. I just have no idea what that would be.



I'm at a loss. While googling has provided plenty of possible solutions to my various problems, none of them have worked.



Any ideas?


Answer




While trying to fix this I eventually hosed the registry completely, and the computer wouldn't boot. In the end I re-installed the OS from scratch and reinstalled all programs. Things have been working better than ever since doing so.


windows - Will "chkdsk /r" scan the free area of harddisk for physical damages?


From the Microsoft documentation of the chkdsk command, these are its commonly used switches:



/f
Fixes errors on the disk. The disk must be locked. If chkdsk cannot lock the drive, a message appears that asks you if you want to check the drive the next time you restart the computer.


/r
Locates bad sectors and recovers readable information. The disk must be locked. /r includes the functionality of /f, with the additional analysis of physical disk errors.


/b
NTFS only: Clears the list of bad clusters on the volume and rescans all allocated and free clusters for errors. /b includes the functionality of /r. Use this parameter after imaging a volume to a new hard disk drive.




Q1:
Does this mean the /r switch will scan for both logical errors in files (logical file corruption) and physical HDD damage (like bad sectors)?

Q2:
If the /r switch does scan for bad sectors, will it scan the entire HDD (both used and free areas)?

Q3:
Does the difference between /r and /b lie in the fact that /r will skip the sectors previously marked as bad, while /b will scan all sectors (whether marked bad or not)?


Therefore, /b will update the list of marked bad sectors, which means releasing false-positive bad sectors for normal usage (this often happens when cloning an old HDD with bad sectors to a brand-new HDD, which should have no bad sectors in the ideal case). Am I correct?

Q4:
If my understanding in Q3 is correct, then I wonder about the mechanism for determining bad sectors.


Suppose there is a bad sector (already marked as bad) on the old HDD and it is not 100% dead in practice, so it can be read once in several attempts. Then I clone the old HDD to a brand-new one, so the bad-sector records are also copied to the new HDD.


If I now run chkdsk /b on the brand-new HDD, is there a chance that this abnormal sector will be released as a normal sector for read/write? That sounds dangerous and unreliable.


Is it worth using /b on the brand-new HDD after cloning?


Answer



Firstly, credit to Akina and Moab.


The answers to all 4 questions are yes.


Furthermore, the /b switch will scan the entire disk surface.
After the first failed attempt (write/verify or chkdsk /r), the sector is marked as bad and will never be used again until a format, chkdsk /b, or a similar action.
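For example (run from an elevated prompt against the volume in question; D: is a placeholder):

chkdsk D: /r
chkdsk D: /b

The first re-tests every sector and recovers what it can from newly found bad ones; the second (NTFS only) additionally re-evaluates sectors already on the bad-cluster list, which is what you would run after cloning to a new drive.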


Windows 7 does not detect USB mass storage devices anymore


In an attempt to get a WD USB HDD to work I followed a suggestion to uninstall all devices of type Mass Storage using USBDeview. Now, Windows 7 does not show any mass storage device, even the ones that worked previously. For example, a connected USB pen drive is listed in USBDeview as not connected:


Screenshot of USBDeview


In the Device Manager it appears with an exclamation mark:


Screenshot of Device Manager


It is not shown at all in Disk Managment.


I already tried reinstalling my ThinkPad T420si's Intel chipset drivers, but to
no avail.


How do I get USB mass storage to work again?


Answer



Someone pointed me to a post in sevenforums.com by Difusal.
After following the steps in the post, I deleted the entry with the exclamation
mark in the Device Manager, then replugged the device. The mass storage driver
installed automatically, and the drive is now detected.


Steps, copied from the aforementioned forum post, reformatted and with typos
and spelling corrected:



  1. Open Windows Explorer.


  2. Go to C:\Windows\System32\DriverStore.


    You will have a couple of folders and files.


    You will have *.dat files and another file named: infcache.1


  3. Right click every file (don't touch the folders!) and choose properties.


  4. Go to the Security tab.


  5. Click Edit.


  6. Choose your account and check the Full Control box.


  7. Click OK.


  8. Repeat for every file.


  9. Select all the files (*.dat and infcache.1).


  10. Press Shift + Del.


  11. Click OK.


  12. Now, go to C:\Windows\System32\DriverStore\FileRepository.


  13. Search folders containing usbstor.inf.


  14. Open it (if you have more than one, choose the most recent).


  15. Copy usbstor.inf and usbstor.PNF.


  16. Paste those two files to C:\Windows\inf.


  17. Reboot your PC and voilà! :b



If I've upgraded Windows 7, which product key is stored in the registry?

I have a PC that came with an OEM copy of Windows 7 Home Premium. I bought a Professional upgrade product key from Best Buy. I have since lost the upgrade key.


I know that I can use Magical Jelly Bean Keyfinder to get the key. The question is, which key is it finding: the Home or Pro key?

slipstream - How do I integrate SP1 and SP2 for Windows Vista?

I would like to integrate SP1 and SP2 in one single Windows Vista DVD setup disc. There is one popular article on TechRepublic on how to "reverse integrate" SP1 and SP2 for Vista. But the article is linking to a Microsoft download page for "Automated Installation Kit (AIK) for Windows Vista SP1 and Windows Server 2008".


But there is also another download for "Automated Installation Kit" on the Microsoft download website.


Which one should I get? Do I need AIK for SP1 to reverse integrate SP1 and SP2 for Vista? Or do I get the regular one (one that's not for SP1)?


My Vista media came with no Service Pack whatsoever. So I'm thinking maybe the first one is for Vista with integrated SP1?


Also, does it matter which language version I get? I know that I need "all language" standalone SP1 and SP2 for Vista, because my Vista version is in Swedish. But is it the same with the AIK, do I need AIK for Swedish version of Windows? Or is this just the language of the AIK interface?


Is there any other way to do this? Is there perhaps a legal way of obtaining a DVD image of Vista with SP1 and SP2 already integrated? Except for becoming a MSDN or TechNet Plus member?


I just need a way of re-installing Windows Vista with as many updates pre-installed as possible, so I would prefer to have the SP1 and SP2 installed at the same time. Is that too much to ask? Why won't Microsoft make it simple and make the Windows Vista ISO files with integrated SP2 available for all?


Why is that not a legitimate way of obtaining it? You know, bearing in mind that to download pretty much anything from the Microsoft Download Center they are now enforcing the Windows validation process, and as it is a licensed and genuine copy of Windows Vista that I have, I see no reason why I would not be allowed to download it... or why I would need to get myself an MSDN or TechNet Plus membership just for this sake.


Update:
I followed the TechRepublic guide and everything went fine until I came to the step where I'm supposed to make a bootable ISO using OSCDIMG.


C:\Program Files\Windows AIK\Tools\PETools>oscdimg /b "c:\program files\windows
aik\tools\petools\x86\boot\etfsboot.com" /n /o /m /l "FRTMCxFRE_SV_DVD_WAIK" "L:
\slipstream3\Temp VIC\Vista x64 SP2" "L:\slipstream3\Temp VIC\ISO\Vista Home Pre
mium x64 SP2.iso"
OSCDIMG 2.45 CD-ROM and DVD-ROM Premastering Utility
Copyright (C) Microsoft, 1993-2000. All rights reserved.
For Microsoft internal use only.
ERROR: Could not open boot sector file ""
Error 3
C:\Program Files\Windows AIK\Tools\PETools>

What the hell is error 3? And why is it trying to open "" (double quote marks) and not the actual file, \boot\etfsboot.com? What am I doing wrong here? I went over this and repeated the command several times. There appears to be no problem with the syntax, and the file paths are correct.


I have booted into the working OS (Vista Home Premium 64-bit). System disk drive letter is C. According to diskmgmt.msc it is located on disk 1, partition 1. I have reinstalled Vista on disk 2, partition 5. This is the only primary partition (not active) on that disk. While in the working OS this partition is given the drive letter Z.


Drive letter L is located on disk 2, partition 4. I use this disk for storage. This is where I copied the DVD disc to and also the location of the modified install.wim file (using imagex).


so...



  • disk 1, part 1: Vista 64-bit (working OS)

  • disk 2, part 5: Vista 64-bit (reinstall location)

  • disk 2, part 4: Vista RTM DVD mod (imagex)


Location of oscdimg.exe:


C:\Program Files\Windows AIK\Tools\PETools>dir oscdimg.exe /b
oscdimg.exe
C:\Program Files\Windows AIK\Tools\PETools>

As you can see, the path to OSCDIMG is correct.


C:\Program Files\Windows AIK>dir imagex.exe /s
Volymen i enhet C har etiketten Vista (ST1PT1)
Volymens serienummer är AAAA-AAAA
Innehåll i katalogen C:\Program Files\Windows AIK\Tools\amd64
2006-11-02 01:08 466 944 imagex.exe
1 fil(er) 466 944 byte
Innehåll i katalogen C:\Program Files\Windows AIK\Tools\ia64
2006-11-02 00:57 968 704 imagex.exe
1 fil(er) 968 704 byte
Innehåll i katalogen C:\Program Files\Windows AIK\Tools\x86
2006-11-02 00:34 381 440 imagex.exe
1 fil(er) 381 440 byte
Totalt antal filer:
3 fil(er) 1 817 088 byte
0 katalog(er) 9 287 438 336 byte ledigt
C:\Program Files\Windows AIK>

Here, we see that the file (external command) imagex.exe is located in 3 different folders, and they have different sizes. I didn't notice this before. Is it in fact necessary to use the EXE file from the x64 folder if you are capturing an image of a 64-bit Vista? I ran it from C:\Program Files\Windows AIK\Tools\PETools>.


So I didn't change directory to C:\Program Files\Windows AIK\Tools\amd64 or \ia64 (this one is for Itanium processors if I'm not mistaken). But at the start of PE Tools Command Prompt it does a path update as you can see below.


Updating path to include peimg, oscdimg, imagex
C:\Program Files\Windows AIK\Tools\PETools\
C:\Program Files\Windows AIK\Tools\PETools\..\AMD64
C:\Program Files\Windows AIK\Tools\PETools>

There is probably an environment variable added in Windows so that one could run imagex independent of where you're at in Command Prompt. So I wouldn't expect this to be the problem.


Now, the "boot sector" it's looking for should be located somewhere in the \Windows AIK folder. Is this in fact the etfsboot.com file?


C:\Program Files\Windows AIK>dir etfsboot.com /s
Volymen i enhet C har etiketten Vista (ST1PT1)
Volymens serienummer är AAAA-AAAA
Innehåll i katalogen C:\Program Files\Windows AIK\Tools\PETools\amd64\boot
2006-09-18 13:27 2 048 etfsboot.com
1 fil(er) 2 048 byte
Innehåll i katalogen C:\Program Files\Windows AIK\Tools\PETools\x86\boot
2006-09-18 13:27 2 048 etfsboot.com
1 fil(er) 2 048 byte
Totalt antal filer:
2 fil(er) 4 096 byte
0 katalog(er) 9 274 441 728 byte ledigt
C:\Program Files\Windows AIK>

As you can see, there is one in \PETools\amd64 and one in \PETools\x86. I used the one in the x86 folder, but they both appear to be the same (according to file size).


So what did I miss? It's probably something obvious but I'm too blind to see it. I would prefer to use the built-in OSCDIMG command to make the bootable ISO file. I am not motivated to purchase a copy of UltraISO for this task as suggested by the VistaForums.


Update 2:
Like I stated before I have reinstalled Vista Home Premium 64-bit to disk 2, partition 5 (drive Z). While using imagex to capture the Windows image, do I point it to the Z:\ or the folder Temp VIC\Vista x64 SP2folder where I copied the DVD disc?


I noticed a difference in the imagex command shown on TechRepublic and the VistaForums.


TechRepublic:



imagex /compress maximum /flags Ultimate /capture H:\ "N:\Temp
VIC\Vista x64 SP2\sources\install.wim" "Ultimate x64 SP2"



VistaForums:



imagex /compress maximum /flags "Ultimate" /capture d: c:\install.wim
"Ultimate"



Update 3: It looks like they are doing a move-and-replace operation at a later step, as opposed to overwriting the existing install.wim file as suggested by the TechRepublic guide. This is because on VistaForums they are not copying the files from the Vista DVD disc to the HDD using Windows Explorer. Instead, they are loading the DVD disc in UltraISO and then saving an image of it on the HDD.


Note that this way they are preserving not only the files you normally see in Explorer when you load the disc, but they are also able to keep the boot information from the disc. Perhaps this is why the OSCDIMG command is complaining about not being able to open the boot sector file? Could this be it? If so, then there is no other way but to make an image of the Vista DVD disc and edit it in place with the modified install.wim file using software like UltraISO.


Hmmm... so complicated...


I will second my call for Microsoft to start making the Windows DVD images available online. It's useless without a valid product key anyway, so why resist? Is it better to download it from places like TPB and get a virus that then spreads to all Windows users (even those who pay their licenses)? These days Microsoft is offering digital delivery of Windows, and this is a good start, but it's just not enough.


Those who already have licensed copy of Windows should be allowed to download it from Microsoft whenever or how often they want. With no need to become a TechNet Plus or MSDN subscriber first! Hell, it's no more complicated than downloading the latest version of any software program, like Adobe Photoshop or Lightroom.


I for instance have a licensed copy of Lightroom 4. But I am currently using version 4.0. Now to get the latest updates, pre-packed in the installer, I would only need to download the 4.1 installer EXE file. So that next time I install it I would have the latest updates from the start.


This is what we are asking for with Windows - to be able to download a DVD image of Windows with more up-to-date features, service packs, windows updates, latest version of WMP and Internet Explorer, etc. So that when you install it you have all the latest stuff. Why is this not possible with Windows? Where is the difference? Yes, Windows is the operating system, but what is it really? It's a program! On which of course other programs are then running. It's sort of a "middleware" that has direct hardware access. But it's a program!


If you ask me, I think they are just being ignorant. They are like the music companies of the computer world. Slow, lazy and ignorant. It will probably take them another decade before they start making Windows images freely available. And less complicated!


Thanks guys for your help and support so far! I will let you know if or when I figure this out. I will try some of your other suggestions, but if everything else fails I will just have to accept that I must spend a day or two downloading and installing service packs and updates for Vista every time I reinstall it (and I do that at least 4 times a year).


Update 5: Right! The good news is that I have now finally managed to make the OSCDIMG command work. So now I have an ISO image of Vista and I have burned it to a DVD for testing. I haven't installed it yet, but so far it seems to be OK, it boots and the setup also starts while in Windows.


The first problem, with error 3, was that I had included a space between the /b switch and the path to the boot file.


So instead of:


oscdimg /b "c:\program files\windows aik\tools\petools\x86\boot\etfsboot.com"

it should be:


oscdimg /b"c:\program files\windows aik\tools\petools\x86\boot\etfsboot.com"

After getting rid of error 3 I then got error 5! I got rid of one of them and got another one!


C:\Program Files\Windows AIK\Tools\PETools>oscdimg /b"c:\program files\windows a
ik\tools\petools\x86\boot\etfsboot.com" /n /o /m /l "FRTMCxFRE_SV_DVD_WAIK" "L:\
slipstream3\temp vic\vista x64 sp2" "L:\slipstream3\temp vic\iso\Vista Home Prem
ium x64 SP2 (3).iso"
OSCDIMG 2.45 CD-ROM and DVD-ROM Premastering Utility
Copyright (C) Microsoft, 1993-2000. All rights reserved.
For Microsoft internal use only.
ERROR: Could not delete existing file "L:\slipstream3\temp vic\vista x64 sp2"
Error 5

For some strange reason... the OSCDIMG command only accepts the command if you paste it in! It doesn't like it when you type in the command. So you have to copy and paste it into the command prompt. So if you first type it in notepad and then copy and paste it to the command prompt it should work.


Update 6: I hope this will be the last update. Now, the second error I got appears to be caused by yet another space in the wrong place. I had a space between the /l switch and the label text string. You have to remove it.


Compare this:


oscdimg /b"C:\Program Files\Windows AIK\Tools\PETools\x86\boot\etfsboot.com" /n /o /m /l "FRTMCxFRE_SV_DVD_WAIK" "L:\slipstream3\Temp VIC\Vista x64 SP2" "L:\slipstream3\Temp VIC\ISO\Vista Home Premium x64 SP2 (3).iso"

to this:


oscdimg /b"C:\Program Files\Windows AIK\Tools\PETools\x86\boot\etfsboot.com" /n /o /m /l"FRTMCxFRE_SV_DVD_WAIK" "L:\slipstream3\Temp VIC\Vista x64 SP2" "L:\slipstream3\Temp VIC\ISO\Vista Home Premium x64 SP2 (3).iso"

You just have to watch out for these... I would like to call them traps, actually! If you make sure you type (or copy and paste) the command in correctly, it should work. I think they did this on purpose! Because... what other CMD or DOS command will not allow you to add a space before the attribute that follows the switch?... Right! So I feel like they did this on purpose just to screw with us, sort of to make sure that you were actually using the original Microsoft guidelines from MSDN or TechNet that describe Windows Vista deployment and imaging in detail.


Don't get me wrong here, I'm not really against Microsoft, I just think that they sometimes... well, most of the time, they make things... well, let's just say that they could have done it better. These are trivial things, but they are important. I mean, why would you want to type the path to the El Torito boot file as "/bc:\program files" with no space in between? Come on! Could it be that "space" was not invented at the time?


I will try to sum up the whole process and post it as an answer to this question.

domain name system - NetworkManager is not changing /etc/resolv.conf after openvpn dns push



I've got a problem which is "NetworkManager is not updating /etc/resolv.conf after openvpn connection with dns push configured".



Here's my OpenVPN server config (I've changed the domain name to ABC.COM for security reasons ;)):



########################################
# Sample OpenVPN config file for
# 2.0-style multi-client udp server

#
# Adapted from http://openvpn.sourceforge.net/20notes.html
#
# tun-style tunnel

port 1194
dev tun

# Use "local" to set the source address on multi-homed hosts
#local [IP address]


# TLS parms
tls-server
ca keys/ca.crt
cert keys/static.crt
key keys/static.key
dh keys/dh1024.pem
proto tcp-server

# Tell OpenVPN to be a multi-client udp server

mode server

# The server's virtual endpoints
ifconfig 10.8.0.1 10.8.0.2

# Pool of /30 subnets to be allocated to clients.
# When a client connects, an --ifconfig command
# will be automatically generated and pushed back to
# the client.
ifconfig-pool 10.8.0.4 10.8.0.255


# Push route to client to bind it to our local
# virtual endpoint.
push "route 10.8.0.1 255.255.255.255"

push "dhcp-option DNS 10.8.0.1"

# Push any routes the client needs to get in
# to the local network.
#push "route 192.168.0.0 255.255.255.0"


# Push DHCP options to Windows clients.
push "dhcp-option DOMAIN ABC.COM"
#push "dhcp-option DNS 192.168.0.1"
#push "dhcp-option WINS 192.168.0.1"

# Client should attempt reconnection on link
# failure.
keepalive 10 60


# Delete client instances after some period
# of inactivity.
inactive 600

# Route the --ifconfig pool range into the
# OpenVPN server.
route 10.8.0.0 255.255.255.0

# The server doesn't need privileges
user openvpn

group openvpn

# Keep TUN devices and keys open across restarts.
persist-tun
persist-key

verb 4


As you can see, it's basically the sample config with a little tuning.




Now..



On my machine (the OpenVPN client), I can see that DNS is OK:



{17:12}/etc/NetworkManager ➭ nslookup git.ABC.COM 10.8.0.1
Server: 10.8.0.1
Address: 10.8.0.1#53

Name: git.ABC.COM

Address: 10.8.0.1

{17:18}/etc/NetworkManager ➭ nslookup ABC.COM 10.8.0.1
Server: 10.8.0.1
Address: 10.8.0.1#53

Name: ABC.COM
Address: 18X.XX.XX.71



The OpenVPN logs on the server side say (if I understand correctly) that the DNS has been pushed:



openvpn[13257]: TCPv4_SERVER link remote: [AF_INET]83.30.135.214:37658
openvpn[13257]: 83.30.135.214:37658 TLS: Initial packet from [AF_INET]83.30.135.214:37658, sid=3251df51 915772f3
openvpn[13257]: 83.30.135.214:37658 VERIFY OK: depth=1, C=XX, ST=XX, L=XXX, O=XXX, OU=XXX, CN=XXX, name=XXX, emailAddress=mail@ABC.COM
openvpn[13257]: 83.30.135.214:37658 VERIFY OK: depth=0, C=XX, ST=XX, L=XXX, O=XXX, OU=XXX, CN=XXX, name=XXX, emailAddress=mail@ABC.COM
openvpn[13257]: 83.30.135.214:37658 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
openvpn[13257]: 83.30.135.214:37658 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
openvpn[13257]: 83.30.135.214:37658 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
openvpn[13257]: 83.30.135.214:37658 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication

openvpn[13257]: 83.30.135.214:37658 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
openvpn[13257]: 83.30.135.214:37658 [jacek] Peer Connection Initiated with [AF_INET]83.30.135.214:37658
openvpn[13257]: jacek/83.30.135.214:37658 MULTI_sva: pool returned IPv4=10.8.0.10, IPv6=(Not enabled)
openvpn[13257]: jacek/83.30.135.214:37658 MULTI: Learn: 10.8.0.10 -> jacek/83.30.135.214:37658
openvpn[13257]: jacek/83.30.135.214:37658 MULTI: primary virtual IP for jacek/83.30.135.214:37658: 10.8.0.10
openvpn[13257]: jacek/83.30.135.214:37658 PUSH: Received control message: 'PUSH_REQUEST'
openvpn[13257]: jacek/83.30.135.214:37658 send_push_reply(): safe_cap=940
openvpn[13257]: jacek/83.30.135.214:37658 SENT CONTROL [jacek]: 'PUSH_REPLY,route 10.8.0.1 255.255.255.255,dhcp-option DNS 10.8.0.1,dhcp-option DOMAIN ABC.COM,ping 10,ping-restart 60,ifconfig 10.8.0.10 10.8.0.9' (status=1)



OpenVPN logs on my side:



Aug 05 17:13:55 localhost.localdomain openvpn[1198]: TCPv4_CLIENT link remote: [AF_INET]XXX.XX.37.71:1194
Aug 05 17:13:55 localhost.localdomain openvpn[1198]: TLS: Initial packet from [AF_INET]XXX.XX.37.71:1194, sid=89cc981c d57dd826
Aug 05 17:13:56 localhost.localdomain openvpn[1198]: VERIFY OK: depth=1, C=XX, ST=XX, L=XXX, O=XXX, OU=XXX, CN=XXX, name=XXX, emailAddress=mail@ABC.COM
Aug 05 17:13:56 localhost.localdomain openvpn[1198]: VERIFY OK: depth=0, C=XX, ST=XX, L=XXX, O=XXX, OU=XXX, CN=XXX, name=XXX, emailAddress=mail@ABC.COM
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication

Aug 05 17:13:58 localhost.localdomain openvpn[1198]: Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
Aug 05 17:13:58 localhost.localdomain openvpn[1198]: [static] Peer Connection Initiated with [AF_INET]XXX.XX.37.71:1194
Aug 05 17:14:00 localhost.localdomain openvpn[1198]: SENT CONTROL [static]: 'PUSH_REQUEST' (status=1)
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: PUSH: Received control message: 'PUSH_REPLY,route 10.8.0.1 255.255.255.255,dhcp-option DNS 10.8.0.1,dhcp-option DOMAIN ABC.COM,ping 10,ping-restart 60,ifconfig 10.8.0.10 10.8.0.9'
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: OPTIONS IMPORT: timers and/or timeouts modified
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: OPTIONS IMPORT: --ifconfig/up options modified
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: OPTIONS IMPORT: route options modified
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: ROUTE_GATEWAY 10.123.123.1/255.255.255.0 IFACE=wlan0 HWADDR=44:6d:57:32:81:2e
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: TUN/TAP device tun0 opened

Aug 05 17:14:01 localhost.localdomain openvpn[1198]: TUN/TAP TX queue length set to 100
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: /usr/sbin/ip link set dev tun0 up mtu 1500
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: /usr/sbin/ip addr add dev tun0 local 10.8.0.10 peer 10.8.0.9
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: /usr/sbin/ip route add 10.8.0.1/32 via 10.8.0.9
Aug 05 17:14:01 localhost.localdomain openvpn[1198]: Initialization Sequence Completed


It looks like everything's fine.




But I also checked /var/log/messages... and found this line:



Aug  5 17:14:01 localhost NetworkManager[761]:  /sys/devices/virtual/net/tun0: couldn't determine device driver; ignoring...


ip a returns:



5: tun0:  mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
link/none
inet 10.8.0.10 peer 10.8.0.9/32 scope global tun0

valid_lft forever preferred_lft forever


route -n returns:



# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.123.123.1 0.0.0.0 UG 0 0 0 wlan0
10.8.0.1 10.8.0.9 255.255.255.255 UGH 0 0 0 tun0

10.8.0.9 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
10.123.123.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0


So basically everything works, except that the pushed DNS is not applied... Oh! Right, and my /etc/resolv.conf:



# Generated by NetworkManager
domain home
search home
nameserver 10.123.123.1



Where's the issue?



(I got a response from a Windows user running the OpenVPN client that DNS works fine on his side, so it's an issue on my end.



OK, now I have another response (after I restarted the OpenVPN service on the server side) - it's not working anymore.



I must say that it worked yesterday on my machine too... so have I screwed something up on the server? What could it be?)




Edit:
Okay, I've got another response from the Windows user (the same one as before) - it's working now. So I guess it was just the OpenVPN restart and some delay after it; I haven't changed anything since then. So we're back to my machine.



I also traced that the weird tun0 message appeared yesterday too, and yesterday it worked. Or maybe I added the entry to resolv.conf myself? I don't remember... (damn it)


Answer



This works for me: http://www.softwarepassion.com/solving-dns-problems-with-openvpn-on-ubuntu-box/



The important step is adding the following two lines of configuration to your client OpenVPN config file:



up /etc/openvpn/update-resolv-conf

down /etc/openvpn/update-resolv-conf


Also ensure the resolvconf package is installed on the client, because that update-resolv-conf script depends on it.



It works whether the OpenVPN client is started via the service or manually from the command line.
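For reference, here is a minimal sketch of where those lines sit in a client config (assuming a typical Linux client.conf; the remote host is a placeholder and the cert/key lines are omitted). Depending on the OpenVPN version you may also need script-security 2 so that user-defined up/down scripts are allowed to run:

client
dev tun
proto tcp
remote vpn.ABC.COM 1194
# allow OpenVPN to execute the user-defined up/down scripts below
script-security 2
# rewrite /etc/resolv.conf with the pushed DNS servers on connect, restore it on disconnect
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf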



However, the Ubuntu NetworkManager doesn't do this. It's a known issue so far: https://bugs.launchpad.net/ubuntu/+source/openvpn/+bug/1211110


amazon ec2 - Port 443 set up SSL on Nginx + Ubuntu + EC2

I've tried everything and searched Google for a solution, but I cannot configure SSL (HTTPS) on my Nginx server, which runs on Ubuntu 14.04.2 LTS on Amazon EC2.
My website works perfectly on port 80 over HTTP, but I would like to make it more secure by adopting HTTPS.



Considerations:




  1. whenever I try to access it via https://, I get the error: ERR_CONNECTION_TIMED_OUT


  2. the command curl -v https://www.mywebsite.com/ returns:
    curl: (7) Failed to connect to www.mywebsite.com port 443: Connection timed out


  3. the command nc -vz localhost 443 returns: Connection to localhost 443 port [tcp/https] succeeded!



  4. the command nc -vz myserverIP 443 returns:
    nc: connect to myserverIP port 443 (tcp) failed: Connection timed out


  5. TCP port 443 for HTTPS is open to anywhere in the Security Groups (the Amazon EC2 firewall), both inbound and outbound.


  6. netstat -ntlp | grep LISTEN returns:



    tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN 1244/proftpd: (acce
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1130/sshd
    tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 5633/nginx
    tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 5633/nginx
    tcp6 0 0 :::22 :::* LISTEN 1130/sshd

    tcp6 0 0 :::443 :::* LISTEN 5633/nginx
    tcp6 0 0 :::80 :::* LISTEN 5633/nginx




Nginx configurations:




  1. nginx.conf: http://pastebin.com/ebSaqabh


  2. ssl.conf (pulled in via the conf.d include in nginx.conf):
    http://pastebin.com/FzVAtjGz



  3. sites-available/default:



    server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;
        charset utf-8;
        root /usr/share/nginx/html;
        index index.php index.html index.htm;

        server_name mywebsite.com www.mywebsite.com;

        #return 301 https://mywebsite.com$request_uri;
        #rewrite ^(.*) https://www.mywebsite.com$1 permanent;

        location / {
            #try_files $uri $uri/ =404;
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_read_timeout 300;
        }

        #include /etc/nginx/common/w3tc.conf;
        include /etc/nginx/common/wordpress-seo-plugin-support.conf;
    }




I do not know what else to do to resolve this. Could someone help me? Is there something wrong in my Nginx configuration? Or do I need to change anything else on the Amazon side?
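Since the ssl.conf from the pastebin link isn't reproduced here, the sketch below shows what a typical minimal Nginx HTTPS server block looks like, purely for reference; the certificate paths are placeholders, not the asker's actual configuration:

server {
    listen 443 ssl;
    server_name mywebsite.com www.mywebsite.com;

    # placeholder paths - point these at the real certificate and key
    ssl_certificate     /etc/nginx/ssl/mywebsite.crt;
    ssl_certificate_key /etc/nginx/ssl/mywebsite.key;

    root /usr/share/nginx/html;
    index index.php index.html index.htm;
}

Given that nginx is already listening on 443 locally, a block like this is presumably in place, so the timeout is more likely something outside the instance (for example a network ACL), but that is only a guess.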

bash - Use sed to modify and execute previous command


I've got a bunch of PDF files sitting in a directory that get joined with a very long command line. Some are in English and some are in French, differentiated by _e.pdf and _f.pdf.


Because they're joined in a specific order, the command line can't be shortened, but I'd like to modify and re-execute it, simply replacing _e with _f. How can I use sed (or something else) for this?


Let's say the command is


pdfjoin file1_e.pdf file2_e.pdf file3_e.pdf

and in history it's command 10.


I've got as far as


echo !10 | sed 's/_e.pdf/_f.pdf/g'

which echoes the command I want to run. But I actually want to run that, not just display the command.


Answer



Have you tried backticks?


`echo !10 | sed 's/_e.pdf/_f.pdf/g'`

Though I can't help feeling that you should be using make.
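A side note that is not part of the original answer: bash's history expansion can apply the substitution and run the result in one step, without echo or sed. The g modifier makes the s substitution apply to every occurrence:

# re-run history entry 10 with every _e.pdf replaced by _f.pdf
!10:gs/_e.pdf/_f.pdf/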


eAccelerator Causes PHP Include to Fail in Wordpress

SERVER: Linux CENTOS 6
PLESK 10.4.4



I have been installing Wordpress on many subdomains on our dedicated server. All of them run CRON jobs every 10 minutes.



Long story short, the time to first byte was getting to over 10 seconds.



I did some research and found that eAccelerator helps with speed issues for PHP-intensive websites, plus another website that gives some instructions on how to install it.




http://imanpage.com/code/how-install-yum-zend-optimizer-eaccelerator-and-apc



After installing the Atomic repo and doing a YUM update I installed eAccelerator like this:



yum install php-eaccelerator.x86_64


I checked the PHP version after the install and found this:



PHP 5.3.14 (cli) (built: Jun 14 2012 16:34:56)

Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
with eAccelerator v0.9.6-svn358-dev, Copyright (c) 2004-2007 eAccelerator, by eAccelerator
with the ionCube PHP Loader v4.0.10, Copyright (c) 2002-2011, by ionCube Ltd.


So I was like...YAY, that was easy.



THEN I started noticing ALL the PROBLEMS.




First, a few of my MySQL tables crashed and had to be repaired. The only way to get the REPAIR to work in phpMyAdmin was to first log in over SCP and change the owner of the actual database files to mysql (it was blank). After this the repair worked and the tables are fine.



Next, a job I am running which connects to an external MySQL server suddenly stopped working with a password authentication error. I changed the connect string from DBHOST to the actual IP:port, and now the CRON job reports:



PHP Warning:  mysql_connect(): Lost connection to MySQL server at 'reading initial communication packet', system error: 111 in /usr/local/bin/video-queue.php on line 230


FINALLY, and the reason why I did this in the first place was that all my Wordpress installs after working FINE for a long time suddenly stopped being able to call a CLASS that I know gets included (because it worked fine before). So now I get this:



Fatal error: Class 'PPT_Widgets_ARTICLES2' not found in /var/www/vhosts/md1network.com/albany/wp-includes/widgets.php on line 324



That particular class is located in another directory, but it is there. Another piece of information is that the files that contain the CLASS were placed there by extracting a zipped file (via Plesk) using PHP unzip. This unfortunately messed up the owner and permissions, but the sites were OK.



I noticed that the YUM update rewrote my PHP.ini file so I thought it screwed up the include path. I still don't know if that is the case.



I have tried altering the owner and permissions on the file where the CLASS is, and on the widgets.php file as well. None of these worked and it still thinks the CLASS doesn't exist. As a matter of fact, any time you include wp-config.php (which DEFINEs the MySQL db, user, and password), it throws the same error about that stupid class (which I wrote and which was working fine, so it can't be all that stupid).



Before this the only mods to the server were the installation of FFMPEG and PHP TIDY.




ALSO...ANOTHER STRANGE THING is that all the CRON jobs are running FLAWLESSLY and they use the same INCLUDE of wp-config.php.



It's like the problem is isolated when PHP tries to run from the browser (eAccelerator?)
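If the suspicion about eAccelerator is right (and that is only a guess), a quick way to confirm it would be to disable the extension temporarily and reload Apache. On a yum-based install the extension normally has its own ini file under /etc/php.d/, though the exact filename may differ:

# find the ini file the package installed (path and name may vary)
grep -rl eaccelerator /etc/php.d/

# in that file set:
#   eaccelerator.enable = "0"
# then reload the web server and retest the site from a browser
service httpd reload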



I have another Wordpress site that is running fine on the same server after I repaired a few of its tables. The sites that are having a problem do not have any corrupt tables.



I hope that's enough information.



PLEASE HELP.
Rick

Convert long file names to short names windows 10?


Is there a way to convert long file names into a truncated short form on a PC? Is there a simple way to do this? I'm new to using the command prompt / PowerShell and don't fully understand what long scripts are doing or how to modify them.


I want to transfer all of my files to an external hard drive but many of the files are from a mac with long names and I receive an error when I try to transfer them.


Answer



Save the following into a file Set-DosFileName.ps1


[CmdletBinding(SupportsShouldProcess=$true)]
Param(
    [parameter(Mandatory=$true)]
    [string]$folder,
    [switch]$recurse
)

# The Scripting.FileSystemObject COM object exposes each file's 8.3 short name.
$fso = New-Object -ComObject Scripting.FileSystemObject

Get-ChildItem -Path $folder -File -Recurse:$recurse | ForEach-Object {
    $shortName = $fso.GetFile($_.FullName).ShortName
    # Only rename files whose long name differs from their 8.3 short name.
    if ($shortName -ne $_.Name)
    {
        $fullShortName = Join-Path $_.Directory -ChildPath $shortName
        Move-Item -LiteralPath $_.FullName -Destination $fullShortName
    }
}

To use this open a PowerShell window and change into the directory where you saved the file:


cd "D:\folder where you saved the script"

then:


.\Set-DosFileName.ps1 -folder "D:\myfiles\Foo Bar" -whatif

The script should show how it would rename your files.


To include all files in subdirectories add the -recurse switch:


.\Set-DosFileName.ps1 -folder "D:\myfiles\Foo Bar" -whatif -recurse

If everything looks fine, remove the -whatif switch to actually rename the files. I would still keep a backup of the original files just in case anything goes wrong.


I haven't tested this with a large number of files; be aware that the resulting short names may be pretty ugly.
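One extra caveat from me rather than the original answer: the script relies on 8.3 short names existing. If short-name generation is disabled on the volume, ShortName typically just returns the long name and nothing gets renamed. You can check the volume setting from an elevated command prompt (the drive letter is an example):

fsutil 8dot3name query D: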


Monday, December 29, 2014

windows - How can I uninstall K9 Web Protection without K9 admin password?


I'm trying to uninstall K9 from a Windows Vista 32-bit computer, and I do not have the K9 admin password or know the K9 admin email. I do however have administrative access to the computer. I also can install Cygwin or boot to a live USB if that would help. Is there any way to uninstall this program without the password and email information?


Edit: I forgot to mention that I tried several methods to uninstall, and so far none have worked:



  • I tried the usual uninstall method from control panel.

  • I tried reinstalling K9 and then trying to uninstall.

  • I tried disabling K9 from msconfig.

  • I tried installing my own license file like it says in this link (K9 doesn't allow you to download license files anymore).

  • I tried deleting C:\Windows\System32\drivers\bckd.sys and restarting the computer.


Edit 2: I figured it out and the answer is below, with one caveat: it requires paid software (albeit a free trial of paid software if you have never used it before). I wish I could give an answer using free software, but this is the only solution that worked so far. If anyone has a method that is simpler or doesn't require the use of paid software, feel free to post it, and I'll try it out and select it as the answer if it works. and31415 has posted an answer which requires no extra (paid) software.


Answer



No third party software required. Tested with K9 Web Protection version 4.4.268 on Windows Vista SP2 (32-bit). Confirmed by @stiemannkj1 to be working on Windows 7. Should also work on Windows 8.x.



  1. Create a new text file. Copy and paste the following batch script code, then save:



    @echo off
    set keys=^
    "HKLM\SOFTWARE\Blue Coat Systems"^
    "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\k9filter.exe"^
    "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Blue Coat K9 Web Protection"^
    "HKLM\SYSTEM\CurrentControlSet\Services\bckd"^
    "HKLM\SYSTEM\CurrentControlSet\Services\bckwfs"
    REM remove registry keys
    for %%G in (%keys%) do reg delete "%%~G" /f >nul
    set folders=^
    "%programfiles%\Blue Coat K9 Web Protection"^
    "%programdata%\Microsoft\Windows\Start Menu\Programs\Blue Coat K9 Web Protection"
    REM remove folders
    for %%G in (%folders%) do rd /s /q "%%~G"
    set files=^
    "%windir%\System32\drivers\bckd.sys"
    REM remove files
    for %%G in (%files%) do del "%%~G"
    pause
    exit /b

  2. Make sure file extensions are shown and rename it to RemoveK9.cmd (or whatever you like, as long as it has the .cmd extension).


  3. Restart Windows in Safe Mode.


  4. Right-click the .cmd file and select Run as administrator from the context menu. Wait for the batch script to finish.


  5. Start regedit.exe and navigate to the following registry key:


    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\Root\LEGACY_BCKD

    Right-click the LEGACY_BCKD key and select Permissions from the context menu. Click Advanced and select the Owner tab. Then select Administrators from the owner list, tick the Replace owner on subcontainers and objects option, and then click OK. Then select Everyone and tick the Allow checkbox for the Full Control permission. Click OK, right-click the key and finally choose Delete.


  6. Restart Windows.



linux - How can users log in when LDAP is down?



If a linux server uses LDAP for authentication, and the LDAP server is down for some reason, how can users log in?




I guess the answer is that they can't? So I suppose what I'm really asking is what should the system administrators do to protect against this situation? What are the best practices?



My particular situation is that we have a small group of developers working on a small number of servers. Individual user accounts on each server are becoming a nuisance so we're looking to implement centralised authentication via LDAP. I'm concerned about the scenario where some issue on our LDAP server means no-one can log into anything. So I'm trying to figure out what we should do about this.



My thoughts so far:




  • Having multiple replicated LDAP servers so that we don't have a single point of failure seems like a good idea but it will add a lot of complexity which we really want to avoid.

  • Should we just make sure that there's always one user configured locally on each server which we could use as a back door if LDAP isn't working? Is that a serious security compromise?


  • Would the users see any difference between the LDAP server being down and them just entering the wrong password?


Answer




  • Linux (like Windows with AD) has the capability to cache successful logins and can use this cache in case of an LDAP outage (this is either done via SSSD or nscd - if you are on RHEL/CentOS/Fedora, I recommend using SSSD; a minimal SSSD sketch follows after this list). Naturally, this works only if the user has recently logged in successfully on that machine. Also, of course, this doesn't work for services not using PAM but using an LDAP server directly to authenticate, e.g. some web service.


  • Adding replication in such a simple setup is not very difficult and I don't believe it adds a lot of complexity but the added resilience is well worth the effort in my view.


  • Having a working local user with sudo rights is mandatory in my view, even with caching (at the very least if you turn off root login via ssh).

  • Whether the users see a difference depends on the client implementation. With PAM and caching, the users wouldn't even notice, unless the user is not in the cache (which would look like a wrong password until you look into the logs).
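To illustrate the caching mentioned in the first bullet, here is a minimal sketch of /etc/sssd/sssd.conf; the domain name and LDAP URI are placeholders and a real setup needs more than this (TLS, search bases, and so on):

[sssd]
config_file_version = 2
services = nss, pam
domains = example

[pam]
# how many days cached credentials stay usable while offline (0 = no limit)
offline_credentials_expiration = 0

[domain/example]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
# keep hashed credentials locally so logins keep working during an LDAP outage
cache_credentials = True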


Easy way to switch power plan in Windows 10


I have a 27" external monitor connected to my laptop and I use the laptop's screen as a secondary display. I have both screens turned on while I'm working. When I'm watching movies, I just keep the external monitor on and close the lid of the laptop.


I've created a power plan in Power Options called Laptop Screen off which basically does nothing when the lid is closed, so I can use my external monitor as my only screen.


When I'm working, I activate another power plan which supports high performance for programming and running virtual machines. This power plan puts the laptop to sleep when the lid is closed.


Anyways, I keep switching between these plans depending on what I'm doing. This was all easy in Windows 7/8.1 as I just clicked the battery icon and switched it.


I upgraded to Windows 10 last week and now I have to dig deeper to get there. There should be an easier way. Is there a small tool I can use to do this in fewer steps?



  1. Click battery icon in task-bar --> power and sleep settings


    enter image description here


  2. Then additional power settings


    enter image description here


  3. Switch power plan



Answer



Open a command prompt and type in the following command:


powercfg /l

This will show your power schemes with their GUIDs (example):


Existing Power Schemes (* Active)
-----------------------------------
Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e (Balanced)
Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c (High performance) *
Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a (Power saver)

Select the GUID you want to work with and right-click to copy that text to the clipboard.


Now create a new text document and name it, for example, Scheme - Balanced.cmd
(the .cmd extension is important; what comes before it is up to you)


Right-click the file and choose edit.


In the file write:


powercfg /s xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

where the x's are replaced by the GUID you copied to your clipboard earlier.


So in my example that'd be:


powercfg /s 381b4222-f694-41f0-9685-ff5bb260df2e

Save the file. Now, each time you execute that file, your power scheme will be set to that scheme.
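As an aside that goes beyond the original answer: for the built-in schemes, powercfg also understands fixed aliases, which you can list with powercfg /aliases. Custom plans like the ones described in the question still need their GUIDs:

powercfg /aliases
powercfg /s SCHEME_BALANCED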


Tools for load-testing HTTP servers?





I've had to load test HTTP servers/web applications a few times, and each time I've been underwhelmed by the quality of tools I've been able to find.




So, when you're load testing a HTTP server, what tools do you use? And what are the things I'll most likely do wrong the next time I've got to do it?


Answer



JMeter is free.



Mercury Interactive Load Runner is super nice and super expensive.
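For what it's worth, a typical non-GUI JMeter run looks like this; the test plan and result file names are just placeholders:

jmeter -n -t testplan.jmx -l results.jtl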


Can I use a Windows 7 or 8 key to do a clean Windows 10 install using an ISO image?


Can I use a Windows 7 or 8 key to do a clean Windows 10 install using an ISO image?



I haven't tried it yet, so I'm asking if any of you have experience using a Windows 7 or 8 key to do a clean Windows 10 install?


I've not yet had the chance to upgrade from inside Windows 8.1, so I'm considering the option of just upgrading myself using a downloaded ISO image.

linux - Swapping special keys

I am on Arch Linux and trying to swap the left Alt key with the left Ctrl key, for convenience in Emacs as well as in bash command-line editing. I use the following ~/.xmodmap:



remove mod1 = Alt_L
remove control = Control_L
keycode 37 = Alt_L
keycode 64 = Control_L
add mod1 = Control_L
add control = Alt_L

For some reason unclear to me, although the xev output shows that the two keys are indeed swapped, no application actually uses the new bindings. Can someone enlighten me?
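A hedged observation rather than an answer from the original thread: the usual xmodmap swap recipe re-attaches each modifier to the keysym that logically keeps that role after the swap, i.e. control gets Control_L and mod1 gets Alt_L (the opposite of the add lines above), so a variant worth trying is:

remove control = Control_L
remove mod1 = Alt_L
keycode 37 = Alt_L
keycode 64 = Control_L
add control = Control_L
add mod1 = Alt_L

If the xkeyboard-config version on the system ships it, setxkbmap -option ctrl:swap_lalt_lctl achieves the same swap without xmodmap.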

sendmail - Email forwarding from my domain to gmail - FAIL




[There are numerous similar questions on ServerFault but I couldn't find one that was exactly on point]



Background: I use Gmail for my email client. My email is example@gmail.com. However the email that people communicate to me with is me@example.com. I run the server that hosts www.example.com and other domains, at ServerBeach.



Up to yesterday, I had sendmail painlessly forwarding emails sent to me@example.com on to example@gmail.com, and everything was fine, for several years in fact.



Suddenly my email stopped working - that is, my gmail account stopped receiving emails via the forward from my server.



Looking into it I found a bunch of emails sitting on my server with content like this:




... while talking to gmail-smtp-in.l.google.com.:
>>> RCPT To:
<<< 450-4.2.1 The user you are trying to contact is receiving mail at a rate that
<<< 450-4.2.1 prevents additional messages from being delivered. Please resend your
<<< 450-4.2.1 message at a later time. If the user is able to receive mail at that
<<< 450-4.2.1 time, your message will be delivered. For more information, please
<<< 450 4.2.1 visit xxxxxx://mail.google.com/support/bin/answer.py?answer=6592 u15si37138086qco.76
pitosalas@gmail.com... Deferred: 450-4.2.1 The user you are trying to contact is
receiving mail at a rate that
>>> DATA

<<< 550-5.7.1 [64.34.168.137 1] Our system has detected an unusual rate of
<<< 550-5.7.1 unsolicited mail originating from your IP address. To protect our
<<< 550-5.7.1 users from spam, mail sent from your IP address has been blocked.
<<< 550-5.7.1 Please visit xxxxx://www.google.com/mail/help/bulk_mail.html to review
<<< 550 5.7.1 our Bulk Email Senders Guidelines. u15si37138086qco.76
554 5.0.0 Service unavailable
... while talking to alt1.gmail-smtp-in.l.google.com.:


From what I've been researching, I think someone has hijacked or is hijacking my domain name or something, and this has somehow caused Gmail's servers to notice and cut me off. But I don't really know what's going on, nor do I see whatever emails might be involved.




I've read stuff on zoneedit.com that sounds like they might have a solution in their service for what I am trying to do. I also read a lot about administering DNS and sendmail and tried various things, but nothing works.




  1. Can you tell from my description what caused Gmail's servers to stop accepting email from my server, and is there a way to stop it?

  2. What is the 'correct' way to configure things so that emails to me@example.com behave as if they were sent to example@gmail.com?


Answer



On average, how many emails would you say are forwarded from your ServerBeach server to Google?




Do you have reverse DNS set up correctly with a matching "A" record for your ServerBeach Server? You can test that by doing an nslookup, but using your server's IP address for the query. I'm not sure how much you know about DNS so let me give you a brief overview:




  • An A record associates a domain name to an IP (so google.com's A record would be 1.2.3.4, for example.)

  • A Reverse DNS record does the opposite - so a query for 1.2.3.4 would return "google.com" to continue the previous example.

  • Most of the time, rDNS is irrelevant. However, some mail servers (google for example) like to see a matching rDNS record as an indication that you're not a spammer. Having an incorrect or mismatching rDNS record could cause your mail to bounce.
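For the reverse DNS check mentioned above, here is a quick example using the IP that appears in the bounce; either tool works:

# the reverse lookup should return a hostname whose A record points back at the same IP
dig -x 64.34.168.137 +short
nslookup 64.34.168.137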



Sending too many messages or misconfigured DNS could cause you to be tagged as a spammer.




Also, head over to CheckOR.com and test to see if your mail server is an "Open Relay," meaning that anyone can use your server to send email to whoever they want (That's bad - and spammers have tools to scan for open relays to use them to send their spam.)


linux - How to SSH to ec2 instance in VPC private subnet via NAT server

I have created a VPC in aws with a public subnet and a private subnet. The private subnet does not have direct access to external network. S...