Have you seen this warning when you try to click a link in Outlook or Word? "This operation has been canceled due to restrictions in effect on this computer. Please contact your system administrator." Here is a screen shot:

There are many reasons this warning can appear. Typically, the cause is that some setting in the registry (the database of configuration data on a Windows computer) has become corrupted. How exactly it became corrupted is often an open question, but one completely foolproof way to corrupt it is to install and then uninstall Google Chrome. I discovered this when I realized that the error goes away if you reinstall Google Chrome, even if you do not set it as your default browser. It turns out that when Chrome is uninstalled, it leaves behind pointers to a ChromeHTML file type handler for web pages but removes the file type handler itself.

To make it easier to repair a computer that has had Chrome installed, I decided to write a registry file that restores the registry to its original state and makes your computer work again. The file ended up being quite long: Google leaves a lot of detritus in the registry after uninstalling Chrome, including pointers to file icons in the now-removed Chrome program.

If you have this error, you can use the registry script below. Or you can do just the essential surgery by removing these two registry keys:

HKEY_CURRENT_USER\Software\Classes\.htm
HKEY_CURRENT_USER\Software\Classes\.html

That will at least make your computer mostly functional again. To really restore full functionality after you uninstall Google Chrome you need to run the registry file, or reinstall Windows.

Technical Details

When Outlook or Word starts, it reads certain registry values to learn which program to use as the file handler for web pages. Those registry keys include HKCR\.htm and HKCR\.html, along with the keys listed above. The values in those keys do not actually point to the file handler; rather, they name the file type of the handler. Normally, that file type is htmlfile. When you click a link in an e-mail, your computer looks up the program that handles the htmlfile file type and opens the link using that program.

When you install Google Chrome and set it as the default browser, it creates the keys above, along with many others. Those keys set the file type for .htm and .html files to ChromeHTML. The ChromeHTML file type, understandably, points to chrome.exe as the program to invoke. As long as Chrome is the default browser, the operating system takes the route link -> HKCU\Software\Classes\.html -> HKCU\Software\Classes\ChromeHTML -> chrome.exe. If Chrome is not the default browser for that user, the operating system knows to launch the default browser instead.

When you uninstall Google Chrome, the uninstaller deletes the ChromeHTML key, but not the keys listed above and many others. When Outlook launches, it reads those keys and tries to find the ChromeHTML file type they define. That file type, however, has been deleted. Outlook (or rather Word, which is the e-mail rendering program) catches the failure but does not have a good error message to show the user; it has been programmed to display the "operation has been canceled" error when it cannot find the file type in the registry. Thus, Outlook and Word (and any other program that handles HTML links) work correctly while Chrome is installed but not set as default, and fail after you uninstall Chrome and the linkages in the registry are left incomplete.
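The lookup chain just described, and the way it breaks, can be sketched with an ordinary dictionary standing in for HKCU\Software\Classes. This is a toy model of the resolution logic, not an actual registry read; the key names mirror the real ones.

```python
def resolve_handler(classes: dict, extension: str) -> str:
    """Follow extension -> file type -> handler, the way the shell does."""
    file_type = classes.get(extension)      # e.g. ".html" -> "htmlfile"
    if file_type is None:
        raise LookupError(f"no file type registered for {extension}")
    handler = classes.get(file_type)        # e.g. "htmlfile" -> a browser
    if handler is None:
        # This dangling pointer is exactly what a Chrome uninstall leaves:
        # ".html" still says ChromeHTML, but ChromeHTML itself is gone.
        raise LookupError(f"file type {file_type} is not defined")
    return handler

healthy = {".html": "htmlfile", "htmlfile": "iexplore.exe"}
after_uninstall = {".html": "ChromeHTML"}   # ChromeHTML key was deleted

print(resolve_handler(healthy, ".html"))    # iexplore.exe
try:
    resolve_handler(after_uninstall, ".html")
except LookupError as err:
    print(err)  # the condition Outlook reports as "operation has been canceled"
```

The second lookup fails not because anything blocks the file type, but because the pointer now points at nothing.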

 

If you have ever taken a survival course you have probably heard the instructor talk about how you need to be aware of your surroundings. Much of survival is about recognizing where you are, what is safe, and what is not. The Internet is no different. By far the most important factor in safe use of the web is recognizing where you are, and making appropriate decisions about what is safe and what is not; what is to be expected, and what is extraordinary. Unfortunately, most people either do not know how to tell where they are, or do not do so on a regular basis.

In the first post, we discussed fake e-mail and how to recognize it. There are similar cues that tell you which web site you are visiting. Consider this picture:

First, look at the address bar, the part that says http://www.microsoft.com/en/us/default.aspx. This is called the Uniform Resource Locator (URL). You probably already know that this is the address of the server you are connected to. What you may not know is which parts of it matter when deciding whether to trust a site. The first part of the URL is the "protocol" used: http in this case. A "protocol" is essentially the "language" that defines the words a client (in this case your web browser) can use to ask the server (in this case the web site) to send it some information (in this case a web page). http is the standard protocol used on the web. Unless the protocol is https, this part is of no use in making a safety decision.

The second part you see is the name of the web server: www.microsoft.com. It has three parts. Reading from right to left, ".com" originally meant "commercial", to distinguish it from edu (educational), gov (government), and so on. These days it is really just what is known as a "top-level domain," or TLD, shared by many sites on the Internet. There are generic TLDs, such as .org (typically used by non-profits), .com, and .edu, and a lot of national ones, such as .uk (United Kingdom), .fr (France), .ru (Russia), .cn (China), and many others. The TLD is not, by itself, useful in making a safety decision.

The rest of the server name, however, is very useful. The "microsoft" part in particular identifies the organization or service you are connected to. In other words, you are connected to a Microsoft server.
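To make the protocol/server-name split concrete, here is how Python's standard library pulls a URL apart; the URL is the one from the picture above.

```python
# Dissect a URL with the standard library to see the parts discussed above.
from urllib.parse import urlparse

url = "http://www.microsoft.com/en/us/default.aspx"
parts = urlparse(url)

print(parts.scheme)    # "http": the protocol; only https helps a safety decision
print(parts.hostname)  # "www.microsoft.com": the name of the web server

# The label just to the left of the TLD identifies the organization.
# (Naive split: country-code TLDs such as .co.uk need more care.)
labels = parts.hostname.split(".")
print(labels[-2])      # "microsoft"
```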

When you use the URL to decide where you are, be careful what you look for, though. Criminals will often mimic a legitimate name. For example, it may be "m1crosoft.com" (a 1 instead of an i), "micorsoft.com" (a misspelled name), or "g00gle.com" (zeros instead of o's). Any time you see a misspelled domain name, it is almost certainly a fake!
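As a toy illustration (a heuristic of my own devising, not a real security control), you can catch the digit-for-letter tricks above by mapping the common substitutions back and comparing against names you already trust:

```python
# Map the usual digit-for-letter swaps back and compare against trusted names.
# (A "1" can stand in for i or l; only i is handled in this sketch.)
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i"})
TRUSTED = {"microsoft.com", "google.com"}

def looks_like_spoof(domain: str) -> bool:
    normalized = domain.lower().translate(SUBSTITUTIONS)
    # Spoof: normalizes to a trusted name, but isn't actually that name.
    return normalized in TRUSTED and domain.lower() not in TRUSTED

print(looks_like_spoof("m1crosoft.com"))  # True
print(looks_like_spoof("g00gle.com"))     # True
print(looks_like_spoof("microsoft.com"))  # False
```

Real lookalike domains use far more tricks than digits (misspellings, extra words, foreign look-alike characters), so nothing replaces actually reading the name carefully.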

There is only one reliable way to verify which site you are on, however, and that requires the site to use https as the protocol. Consider this picture:

The protocol used now is https, which is http layered on top of another protocol (either Secure Sockets Layer, SSL, or its successor, Transport Layer Security, TLS). It is not crucial right now to know the technical details, merely what features it provides, which are two things. First, https provides a way for the web browser and the web server to encrypt all traffic between them so it cannot be intercepted and read in transit; http, by contrast, is unencrypted. Second, and most importantly, https gives YOU a way to identify the web server you are connecting to. When you use https, you get the little padlock in the address bar. Most people ignore the padlock, but if you go to a URL you do not recognize, such as perhaps "live.com", it can give you the crucial information you need to make a decision. If you click on the padlock you get a screen like this:

This screen is the really important part. It tells you who issued the digital certificate and the name of the entity it was issued to. In this case you can tell that VeriSign has issued a certificate to Microsoft Corporation. This is the conclusive information about which business you are connecting to. Of course, you can only trust it as far as you trust the issuer of the certificate. Generally, however, it is safe to trust any certificate that does not cause your browser to warn that it should not be trusted. The browser vendor has already decided which issuers you should trust, and unless you have good reason to, there is usually no point in doubting that decision. If a certificate should get stolen, your web browser will throw a warning, because a certificate is only valid for the site it was originally issued to. In other words, while the criminals may be able to register "m1crosoft.com" and even get a certificate for it, they cannot usefully present a certificate that says "microsoft.com" in it. The browser will complain if they do.
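The information in that dialog can also be read programmatically. The sketch below uses a hand-built dict that mimics the shape Python's ssl.SSLSocket.getpeercert() returns (the values are illustrative, not taken from a real Microsoft certificate), and pulls out the issued-to and issued-by organizations the dialog displays.

```python
def field(rdns, name):
    """Pull one attribute (e.g. organizationName) out of a subject/issuer."""
    for rdn in rdns:
        for key, value in rdn:
            if key == name:
                return value
    return None

# Same nested-tuple shape as ssl.SSLSocket.getpeercert(); values invented.
cert = {
    "subject": ((("commonName", "www.microsoft.com"),),
                (("organizationName", "Microsoft Corporation"),)),
    "issuer":  ((("organizationName", "VeriSign, Inc."),),),
}

print(field(cert["subject"], "organizationName"))  # who the cert was issued to
print(field(cert["issuer"], "organizationName"))   # who issued it
```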

In the picture above, the address bar is green. It is not always green; that only happens when the site uses a special kind of identifier (known as a "Digital Certificate", which I will explain in a later post) called an Extended Validation certificate (EV cert). An EV cert means the business has paid a lot more for the certificate than for a standard one, and in return the issuer has performed some additional validation, such as ensuring that the business has a physical office somewhere. The color of the address bar actually provides little extra value when you are trying to determine which site you are on, though. Some browsers, such as Internet Explorer, will show you information such as the business name in the little popup when the site uses an EV certificate. However, the same information is typically available on the Details tab if you click the "View certificates" link in the popup. Click the "Subject" row and you will see it, as in this picture:

Using the padlock and evaluating which organization you are connected to is the only safe way to decide which site you are on. In many cases you probably do not need that, but in some situations, such as when you are downloading software, shopping, or banking online, it is crucial.

Problems with Site Identification

Some sites use techniques other than https to help you identify them. For example, some banks have you type your username on a form that does not have a password field. Once you do, the site shows you a picture that you selected along with the password field. The idea is that you selected the picture when you set up the account, and since the bank now shows you this picture, you are supposed to trust that you are connected to the bank. Unfortunately, this system is not secure at all. Any attacker can probably guess your username (is it first initial + last name, or last name + first initial?). If an attacker can guess your username, which is not really a secret, they can retrieve your picture and show it to you on their own site. The criminals can even automate the entire process by tricking you into visiting their site and typing your username; they then submit your username to your bank, retrieve your picture, and show it to you on their site. It is trivial to circumvent the system of using pictures to identify sites. The only trustworthy way to identify a site is to inspect its certificate.
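The relay trick is simple enough to sketch in a few lines of Python. All names here are made up for the illustration; the point is only that the attacker's site can do exactly what your browser would do.

```python
# The real bank's picture lookup: username in, picture out.
BANK_PICTURES = {"jsmith": "sailboat.png"}

def bank_picture_for(username: str) -> str:
    return BANK_PICTURES[username]

def phishing_site(username: str) -> str:
    # The attacker's server simply forwards the username to the real
    # bank and relays the picture back, so the victim sees the
    # "right" picture on the wrong site.
    return bank_picture_for(username)

print(phishing_site("jsmith"))  # sailboat.png
```

Because the picture is fetched by username alone, it authenticates nothing: anyone who knows (or guesses) the username gets the same picture.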

Unfortunately, many sites, including some very large credit card issuers, do not use https to serve the login form. Strictly speaking, serving the form over https is not required to encrypt the password when you send it, since the form can submit to an https address. But that misses the second half of what https gives you: the part where the site identifies itself to you. If the site does not serve the login form over https, you cannot verify where you are sending your password, because you do not yet have a certificate to verify it with. If a company does this, complain to them and request that they serve the login form over https. If they refuse, take your business elsewhere. For example, Discover Card not only refuses to serve the logon form over https, it even redirects you from https to http if you type https in the address bar. After repeated complaints I decided to just cancel my Discover Card and use American Express instead, which redirects you to https should you accidentally type http in the address bar. Discover Card does not care about my privacy and safety, while American Express actually helps protect me against my own typos.

October is National Cyber Security Awareness Month, and as I stated in the last post, I decided to celebrate by writing some Security Awareness posts. Almost as if they knew what I was going to write about, I received this spam comment on my last post this morning:

"such a very informative and valued article, regards"

The poster's name, which is undoubtedly fake, was hotlinked to hxxp://www.antivirus-finder.blogspot.com. That, in turn, turns out to be a blog that links to various unknown and quite possibly shady anti-malware programs. ("Malware" is a collective term for malicious software, such as viruses, worms, trojan horses, spyware, and adware. Consequently, "anti-malware" is software that removes or stops malware, or at least purports to.) The latest post on the site points to something called "ClamWin Antivirus," which I had never heard of. I tried scanning it using a public malware scanner, but it was so large that it could not be scanned. A quick analysis was unable to tell me whether it was malicious, but I would never install it based on these tell-tale signs:

  1. The underhanded way in which the link was sent to me, hidden in a comment on an unrelated blog-post
  2. Never having heard of it before
  3. It is too large to scan, which could be intentional to make it more difficult to tell whether it is malicious
  4. It installed additional unwanted software when I put it on a test system:

    Any software that automatically tries to install additional software you did not ask for should be immediately considered suspicious.

It turns out that in this case I was being a little extra paranoid. ClamWin, a Windows front end for the open-source ClamAV engine, is legitimate; but given the choice, I will always tend toward not installing something.

Malicious anti-malware is epidemic on the Internet. I wrote an article about it a couple of years ago. The problem has not gone away, however, and the authors have become craftier than ever in their attempts to get you to install their wares. My all-time favorite is "Green AV," which claims to donate part of the money you pay to the rainforests.

There are some very simple rules of thumb you can follow, however, to protect yourself against fake anti-malware:

  1. No web site can scan your computer for malware merely by your going to it. Many web sites claim to, and that is how they try to fool you into thinking you are infected and need to pay for a new anti-malware program. There are a few legitimate ones that do scan your computer, such as Microsoft's OneCare, but they all require you to agree to install something to complete the scan. That leads us to the second rule of thumb:
  2. NEVER permit a web site to install software unless you consider a site trustworthy. You have to look at the address bar to see where you are. In a future post, I will talk about how to recognize fake software and sites.
  3. Never install software that just showed up and that you did not ask for. In fact, be extremely selective about what software you install. The less software you install from the Internet, the less likely you are to get malware.
  4. If you feel you need to install something, don't do it unless you have scanned it using a reputable anti-malware scanner. A good one is http://virustotal.com. Make sure you type the link correctly: virtually every variant of virustotal.com is registered to malware purveyors or domain squatters. VirusTotal scans files you upload using almost every commercial anti-malware engine. Here is an example report from VirusTotal.
  5. Use real anti-malware. The list in the example report from VirusTotal is not a bad starting point. Perhaps an even easier approach is to simply buy something from a reputable online merchant, such as Amazon. Getting it from Amazon guarantees that you get something real.
  6. If you absolutely feel the need to install something, do a quick web search on it first. If you find hundreds of pages dedicated to removing it, chances are it is fake!

In summary, remember these key points: install only the software you absolutely need, and make sure you get it from a reputable supplier.


The U.S. President has declared October 2010 to be "National Cyber Security Awareness Month." While the term "cyber" may not be particularly clear to most people, what this really is about is How To Stay Safe Online; and not just in America. Staying safe online is crucial everywhere.

To celebrate, I thought I'd try to jam in as many little advice posts as possible between now and, well, whenever everyone knows how to stay safe online. Thus, without further ado:

Advice #1: No, you really haven't won the U.K. Lottery.

Nor have you won the Microsoft Lottery. Nor does anyone really want you to share the fortune their deceased husband/uncle/father/president/iguana left them. These are all scams. What they are trying to do is get you to pay them for the information that supposedly will reward you with untold riches.

These scams are sometimes known as "Nigerian scams" because many of them originate in Nigeria. More technically, they are known as advance-fee fraud. The objective is to get you to pay some amount now in return for riches later. Of course, there are no riches later. What there is, instead, is a horde of people in Internet cafes all over the developing world (and probably Toronto and Iowa as well) who make a living by fooling people into sending them advance fees in return for the winning lottery ticket.

This may sound too elementary to many, but the truth is that these scams work! The hordes are out there because there are people who pay them enough to sustain the business model. Ask around and see how many of your parents, relatives, children, friends, neighbors, and fellow commuters on the bus realize that these are scams. I think you will be shocked to see how many do not realize that all these are fake. I certainly was shocked when I found out that one of my neighbors lost $5,000 to one of these scammers.

But, of course, none of us would ever be fooled by these scams. So I have one favor to ask of you: please make sure that everyone you know won't be fooled either. Let's put the scum behind advance-fee fraud out of business once and for all. All it takes is for each of us to make sure that none of our acquaintances falls prey.

It appears Apple is the only company around that doesn't use Microsoft Exchange. Apple's recently released iOS 4 (not to be confused with Cisco's IOS) apparently wasn't tested against Exchange at all. Many users are reporting slow e-mail sync, and apparently Exchange server admins are none too happy with the load these devices are putting on their servers, much more than the old OS generated.

Of course, you cannot downgrade a device that has been upgraded to iOS 4. iPhone operating system builds must be digitally signed by Apple at install time, and Apple now refuses to sign anything below iOS 4. If you upgrade your device you are stuck, unless you are willing to jailbreak it; and right now, you can't jailbreak an iOS 4 device that was not jailbroken prior to the upgrade.

That leaves you with Apple's solution: a configuration profile that modifies the settings on your device.

Unfortunately, the configuration profile is unsigned. Configuration profiles make critical changes to how your device operates, so Apple supports signing them to allow their source to be authenticated. Too bad Apple doesn't bother to do so itself. Rather, Apple's recommendation appears to be that users download and install random unsigned configuration profiles found on the Internet.

If you have upgraded iTunes to version 9.2, you may have run into the problem that your computer no longer recognizes your iPhone, iPad, or iPod. I had the problem on one computer, but not the other. When you connect the device, Windows starts installing the driver, then the installation fails, and iTunes never sees your device.

After a fruitless 45 minutes on the phone with an Apple support technician who seemed new to Windows and who eventually hung up on me, and a web search that turned up nothing, I decided to take matters into my own hands. Here is how to fix this problem if you have it.

First, open Computer Management. You can do this by clicking the little Windows logo that is your Start menu, right-clicking "Computer", and clicking "Manage."

Click the Device Manager node. At the bottom you will see Universal Serial Bus controllers. It may already be expanded; otherwise, expand it.

Now connect your device. It will start loading the driver, the Device Manager window will go blank once or twice, and eventually you will see the Apple Mobile Device USB Driver with a yellow exclamation mark superimposed on it. That signifies that it failed to load. If it doesn't show up, disconnect the device and try again. I had to try a couple of times before it stuck long enough to do anything with it.

Right-click Apple Mobile Device USB Driver and select Properties. Click the Driver tab, then click the Uninstall button. You will get a dialog asking whether you also want to delete the driver. You do: the driver is faulty and you want to get rid of it. It appears the iTunes 9.2 installer actually destroys that driver's configuration. I have not investigated exactly how, but I would expect the registry configuration is flawed. Perhaps someone can post the flawed and correct versions and we can come up with an easier way to fix this problem?

Once you have deleted it, close Computer Management and open the Programs and Features Control Panel applet. You now want to uninstall just about everything made by Apple. I found it easiest to sort by publisher so I could see everything: Bonjour, QuickTime, Apple Software Update, Apple Application Support, Apple Mobile Device Support, and iTunes itself. Theoretically there ought to be some subset of this that you could remove, but the Apple support technician who was unable to solve the problem was adamant that Apple provides no way to install just the Apple Mobile Device Support component, which is all you actually need.

At this point you have an Apple-free computer. You may want to seriously consider leaving it in that state and finding a mobile device from a manufacturer that does not treat the operating system it programs for as the enemy. If you want to keep your Apple device, however, go ahead and reinstall iTunes. Unlike the uninstall process, a single installer lays everything down for you. You will not need to restart afterward unless you had Outlook open during the installation. Once the reinstall finishes, plug your device back in, and the driver should load properly.

It's worth noting that during the iTunes installation you will be presented with two User Account Control (UAC) prompts. This is because Apple deliberately designed its installer that way. Obviously, making Windows seem more annoying than Mac OS has been a corporate goal of Apple's for some time now, and this is just one small part of it. Technically, this happens because the installer installs both iTunes and QuickTime, and rather than elevating the entire installer with a single UAC prompt and launching both component installers from there, Apple elevates each component installer with its own prompt. Considering it takes considerably less code to do this with a single elevation, one can only assume this was a deliberate decision made to annoy Windows users.

Posted Sat, Jun 26 2010 11:26 by Jesper's Blog

A very commonly required feature for mobile access to e-mail is remote wipe: the ability to reach out and wipe all corporate data off a mobile device. Exchange ActiveSync supports this feature and has for several versions now. You, as the Exchange or security administrator, can issue a remote wipe command to a compliant device, or the user can do it themselves through Exchange; the next time the device connects, it will be wiped.

There are two major flaws in that design. One is the well-understood "next time the device connects" part: you cannot reach out to the device and wipe it immediately. The devices do support receiving remote commands through SMS, but for some reason there is no function in Exchange that uses that channel to somehow, securely, trigger a remote wipe.

It turns out, however, that there is another, possibly even larger, flaw in the implementation of remote wipe in Exchange ActiveSync. Here is the workflow:

  1. Device connects to Exchange Server
  2. Device transmits DeviceID
  3. Exchange server asks for authentication
  4. Device authenticates
  5. Exchange server checks if a remote wipe command has been issued for the device
  6. ...

Spot the flaw yet? Consider this scenario:

  1. Bob failed to sufficiently internalize the sexual harassment training and racks up enough points to get fired
  2. Bob is walked to the door with his shiny personal Windows Phone 7 Smartphone or whatever in his pocket
  3. IT Department is notified that Bob has been terminated and disables/deletes his account
  4. IT Department, following the security policy, issues a remote wipe command to Bob's phone

Pop quiz: What happens to all the company confidential data on Bob's device?

Answer: Nothing! It will stay there for as long as Bob decides it should. Go back and look at the connection workflow again. The Exchange server only sends the remote wipe command to Bob's device after Bob has already authenticated. The IT department did the absolutely logical thing and disabled Bob's account; therefore, he will never successfully authenticate again. The way remote wipe is implemented in Exchange ActiveSync means the company just lost the ability to wipe its data off Bob's mobile phone.
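The broken ordering is easy to demonstrate with a toy model of the server. The class and method names below are hypothetical, invented for the illustration; only the order of the checks mirrors the real workflow.

```python
class ExchangeServer:
    """Toy model: the wipe check sits *after* authentication."""
    def __init__(self):
        self.active_accounts = {"bob"}
        self.pending_wipes = set()

    def issue_remote_wipe(self, device_id: str):
        self.pending_wipes.add(device_id)

    def sync(self, user: str, device_id: str) -> str:
        if user not in self.active_accounts:    # steps 3-4: authentication
            return "auth failed"
        if device_id in self.pending_wipes:     # step 5: wipe check
            return "wipe device"
        return "mail"

server = ExchangeServer()
server.active_accounts.discard("bob")           # IT disables the account...
server.issue_remote_wipe("bobs-phone")          # ...and issues the wipe
print(server.sync("bob", "bobs-phone"))         # "auth failed": data survives
```

Because the disabled account never gets past the authentication step, the pending wipe is never delivered; re-enable the account and the very next sync would wipe the device.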

The alleged solution to this is to reverse steps 3 and 4 in your firing process: leave Bob's account active until his device gets wiped. If that makes you just a little queasy, you are not alone. In my opinion, this is a major feature miss. Remote wipe in Exchange ActiveSync is only useful when a user loses his or her device, and even then it is lacking, since you cannot reach out to the device and wipe it. Remote wipe in Exchange ActiveSync is utterly useless when people are terminated from their employer.

Munir Kotadia, an IT Journalist in Australia, has finally managed to figure out how to blame Microsoft for the fake anti-malware epidemic. Apparently, the reason is that "Microsoft could save the world from fake security applications by introducing a whitelist for apps from legitimate security firms" and, presumably, has neglected to do so out of sheer malice.

I'm clearly not a thinker at the same level as Munir; maybe that is why I don't fully get this whitelist he proposes. Does he want one only of security software? How would you identify security software? I can see only two ways. The first is to detect software that behaves like security software: if you scan files for viruses, hook certain APIs, quarantine things occasionally, and throw frequent incomprehensible warnings, you must be security software. The problem is, the fake ones do only the last of those four. If you use heuristic detection of security software, it would be absolutely trivial for the fake packages to avoid tripping the warnings; they just have to avoid behaving like security software. Of course, if they actually DID behave like security software, we would not have this problem, would we?

The second approach I can think of is to have all security software identify itself as such, both the fake and the real. It could set some bit in the application manifest, the file that describes the application. I propose that it should look like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32"
                    name="RBU.FakeAntiMalware.MyCurrentVersion"
                    version="6.0.0.0"
                    processorArchitecture="x86"
                    publicKeyToken="0000000000000000"
                    securitySoftware="True"
  />
</assembly>

Note the flag in the manifest above that identifies this package as security software. Now Microsoft can just compare the name of the package against a list of known good software and, if it does not match, block it. This extremely simple mechanism works just as well as the "evil bit" (RFC 3514: http://www.ietf.org/rfc/rfc3514.txt). In fact, if we simply change the manifest like this, we can avoid the whole whitelist altogether:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32"
                    name="RBU.FakeAntiMalware.MyCurrentVersion"
                    version="6.0.0.0"
                    processorArchitecture="x86"
                    publicKeyToken="0000000000000000"
                    malicious="True"
  />
</assembly>

There you have it! Microsoft should make it part of the logo guidelines to require all malicious software to identify itself as malicious. Problem solved! You may go back to surfing the intarwebs now.

The sharp-eyed security experts in the crowd may have spotted a minor flaw in this scheme, however. What if the malicious software refuses to identify itself? Curses to them! Maybe we need something better. Perhaps Munir's whitelist is to be a whitelist of all software? That would be simpler, to be sure. In fact, using Software Restriction Policies (SRP), which have been built into Windows for years, we can restrict which software can run. Now all we need is our whitelist. Of course, as Munir points out, it is Microsoft's responsibility to produce that whitelist.

Producing the whitelist would be conceptually simple. Microsoft would simply have to create a division that ingested all third-party software, tested it, and validated it as non-malicious. DOMUS (The Department of Made Up Statistics) estimates the number of third-party applications for Windows at somewhere between 5 and 10 million, including shareware, freeware, open source, commercial applications, in-house developed applications, line-of-business applications, and the kiosk applications that drive your ATM, your gas pump, your car, and probably a spacecraft or two. In order to avoid becoming an impediment to deployment, Microsoft would have to test all such software for malice with an SLA of 24-48 hours, yet guarantee that the software does not turn malicious after several weeks or months. It would also need to ensure that any updates do not introduce malicious functionality. In other words, to meet these requirements, Microsoft would need to do just two things: (a) develop a method of time travel, and (b) hire and train all of China to analyze software for malicious action. I'm sure the Trustworthy Computing division is working on both problems.

I am not arguing that reputation scoring does not have some promise, which is what Symantec's Rob Pregnall was actually talking about, and which Munir turned into an indictment of Microsoft. However, reputation systems are fallible and can be relatively easily manipulated. And unless consumers actually understand what the reputation score means, and learn to value it over the naked dancing pigs, it will never help. Again, it comes down to how we educate consumers on how to be safe online and why, instead of scaring them into buying more anti-malware software. I may be mistaken, but I was under the impression that the reason Freedom of the Press is a cherished human right is that the Press is there to educate the public. Why is the press, along with government and the IT industry, not doing more to educate the public on how to tell real from fake?

One of the last areas where more tool support is needed is in monitoring the various attributes in Active Directory (AD). Recently I got curious about the delegation flags, and, more to the point, how to tell which accounts have been trusted for delegation. This could be of great import if, for instance, you have to produce reports of privileged accounts.

KB 305144 gives a certain amount of detail about how delegation rights are presented in Active Directory. However, it is unclear from that article how to discover accounts trusted for full delegation, as opposed to those trusted only for constrained delegation; and the various flags with "DELEGATION" in them are not as clearly explained as I would like. Nor was I able to glean any insight into this from the various security guides and recommendations for Windows. I asked around, and got great answers from Ken Schaefer. By spinning up a Windows Server 2003 Domain Controller in Amazon EC2 and running a few tests, I was able to verify that Ken was indeed correct.

Delegation rights are represented in the userAccountControl attribute on the account object in AD, whether a user or a computer account. There are a couple of different flags involved, however. Here are the values set in various circumstances:

For a computer account, the default userAccountControl value is 0x1020, which is WORKSTATION_TRUST_ACCOUNT (0x1000) combined with PASSWD_NOTREQD (0x20). A user account is set to 0x200 (NORMAL_ACCOUNT) by default.

When you enable full delegation, 0x80000, or TRUSTED_FOR_DELEGATION, gets ORed into the userAccountControl value. This is irrespective of domain functional level. In other words, in a Windows 2000 compatible domain, checking the "Trusted for delegation" box; and, in higher functional levels, checking "Trust this computer for delegation to any service" using the "Kerberos Only" setting; both result in the same flag being set. The same flag is set on user accounts when you check the "Account is trusted for delegation" checkbox.

In a Windows Server 2003 or higher functional level domain you gain the ability to trust an account for delegation only to specific services: constrained delegation. If you configure constrained delegation using Kerberos only, the userAccountControl value is not changed at all. The account simply gets a list of services it can delegate to in the msDS-AllowedToDelegateTo flag.

However, if you configure constrained delegation using any protocol, the userAccountControl value gets ORed with 0x1000000, or TRUSTED_TO_AUTH_FOR_DELEGATION.

There is also a flag in userAccountControl called NOT_DELEGATED. This flag is set when you check the box "Account is sensitive and cannot be delegated."

This tie-back to the graphical user interface, as well as explanation of the various flags, should help an auditor construct a query that lists all accounts trusted for delegation in an arbitrary domain. Obviously, any account with TRUSTED_FOR_DELEGATION set should be considered extremely sensitive; as sensitive as a Domain Controller or Enterprise Admin account. An account with TRUSTED_TO_AUTH_FOR_DELEGATION set is probably less sensitive, depending on which specific services it can connect to, but still quite sensitive as it can use other protocols than Kerberos. Finally, and least sensitive of those accounts trusted for some form of delegation, are those that are only permitted to delegate to specific services using Kerberos.
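The flag arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not an actual AD query: the flag values are the documented userAccountControl bits from KB 305144, but the function name, labels, and the sample values fed to it are mine.

```python
# Sketch: classify an account's delegation trust from its userAccountControl
# value, using the flag values documented in KB 305144.

TRUSTED_FOR_DELEGATION = 0x80000            # full (unconstrained) delegation
TRUSTED_TO_AUTH_FOR_DELEGATION = 0x1000000  # constrained delegation, any protocol
NOT_DELEGATED = 0x100000                    # "Account is sensitive and cannot be delegated"

def delegation_sensitivity(uac, allowed_to_delegate_to=()):
    """Return a coarse sensitivity label for auditing purposes."""
    if uac & TRUSTED_FOR_DELEGATION:
        return "extremely sensitive (full delegation)"
    if uac & TRUSTED_TO_AUTH_FOR_DELEGATION:
        return "quite sensitive (constrained, any protocol)"
    if allowed_to_delegate_to:
        # Kerberos-only constrained delegation leaves userAccountControl
        # untouched; only msDS-AllowedToDelegateTo is populated.
        return "sensitive (constrained, Kerberos only)"
    return "not trusted for delegation"

# A default computer account: WORKSTATION_TRUST_ACCOUNT | PASSWD_NOTREQD
print(delegation_sensitivity(0x1020))                    # not trusted for delegation
print(delegation_sensitivity(0x1020 | 0x80000))          # extremely sensitive (full delegation)
print(delegation_sensitivity(0x1020, ("cifs/server",)))  # sensitive (constrained, Kerberos only)
```

In a real audit you would pull these attributes from AD; for the full-delegation case, the LDAP bitwise-AND matching rule does the flag test server-side, e.g. (userAccountControl:1.2.840.113556.1.4.803:=524288), where 524288 is 0x80000 in decimal.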

It's official. I just received an e-mail from Thawte notifying me that, as of November 16, 2009, the most innovative and useful idea in PKI since its inception, the Web of Trust, will die.

Thawte was founded 14 years ago by Mark Shuttleworth. The primary purpose was to get around the then-current U.S. export restrictions on cryptography. Shuttleworth also had an idea that drew from PGP: rather than force everyone who wanted an e-mail certificate to get verified by some central entity - and pay for the privilege - why not have them verified by a distributed verification system, similar to the key-signing system used by PGP, but more controlled? This was the Web of Trust. Anyone could get a free e-mail certificate, but to get your name in it instead of the default "Thawte FreeMail User" you had to get "notarized" by at least two people (or one, if you managed to meet Shuttleworth himself or a few select others). The Web of Trust was a point-based system, and once you received 100 points (requiring at least three notary signatures) you became a notary yourself. The really cool idea was that it created a manageable system of trust based not so much on the six degrees of separation as on the fact that most of us are inherently trustworthy beings.

In 1999 Shuttleworth sold Thawte to Verisign for enough money to take a joyride into space, found the Ubuntu project, and live without worries about money for the rest of his own life and those of several of his descendants. Verisign, of course, is in the business of printing money, only in the form of digital certificates, and certainly not in the business of giving anything away for free. Not that there is anything inherently wrong with that, but it is certainly at odds with Thawte's free service, so it was really just a matter of time before the latter was disbanded. With it goes the Web of Trust.

Finally, on November 16, 2009, the Web of Trust will be removed as a free competitor to Verisign's paid service that does the same thing. It will be a sad day indeed.

At least for the short to medium term. That is the quite obvious conclusion drawn in a Newsweek article entitled "Building a Better Password." The article goes inside the CyLab at Carnegie Mellon University to understand how passwords may one day be replaced. It is interesting reading all around.

The article is not without some "really?" moments though, such as this quote:

The idea of passphrases isn't new. But no one has ever told you about it, because over the years, complexity—mandating a mix of letters, numbers, and punctuation that AT&T researcher William Cheswick derides as "eye-of-newt, witches'-brew password fascism"—somehow became the sole determinant of password strength.

Actually, I do believe someone did tell you about it. Five years ago now, in fact.

Today I finally got wind of my first piece of true standard user malware. MS Antispyware 2008 has turned standard user. The version in question installs the binaries in c:\documents and settings\all users\application data\<something>, and makes itself resident by infecting HKCU\...\Run. Curiously, the legitimate anti-malware program (one of the top 3) failed to detect the infector.

Obviously, this version is much easier to remove than the ones that require admin privileges. However, MS Antispyware is not about being hard to remove. It just needs to run until the user pays for the privilege, and more than likely, even as a standard user, many people will fall for it.

On a somewhat unrelated note, just as I was wondering who would fall for these types of scams, I met a real person who did: a not-particularly-well-off disabled retiree who was scammed out of $5,000 by an organized crime ring claiming she had won a lottery that she could collect as soon as she paid them for the ticket. That particular scam was run partially by phone and partially online. And the scumbags apparently didn't think they had scammed her out of enough money, so they kept calling her even after she sent them the money. I advised her to call Rob McKenna's office (Attorney General of Washington State). Mr. McKenna's office stated that they felt horrible for her. Apparently that was about all the comfort they could give. I must say that level of action was not particularly impressive, and does not really live up to Mr. McKenna's campaign promises of cracking down on scammers.

In an absolutely astonishing move, Microsoft's Polish subsidiary decided to do some photoshopping on its Business Productivity Infrastructure page to tailor it to the Polish market. Here you can see the U.S. original. In one of the least sensitive moves this year, the Polish subsidiary decided that black people in Poland do not need to be empowered, so here you can see what its version of that page looked like for a few hours today. As you can see from the current version on the Polish site, someone with a bit more human sensitivity than a teaspoon, and an I.Q. that is at least room temperature (Celsius), decided to fix it. This evening Microsoft empowers everyone equally - even in Poland.

Last week, an expert from Verizon, née Cybertrust, posted a note about the Active Template Library (ATL) security vulnerability over on the Verizon Business Security Blog. For home users, the phone company now advises you to use a different browser, ostensibly because IE and ActiveX are inherently insecure. I felt that quite missed the point that (a) browsers are software, (b) all software has vulnerabilities, and (c) extension technologies in browsers add functionality, which (d) is implemented in the form of software, and therefore (e) introduces additional vulnerabilities. Just because Internet Explorer's extension technology is called ActiveX does not mean it inherently has any more, or fewer, vulnerabilities than the extension technologies in other browsers. ActiveX received a deservedly horrible reputation when it first came out about ten years ago. Since then Microsoft has actually put a lot of effort into securing the user's browsing experience, but for some reason, people keep dragging up old vulnerabilities from many years ago as proof that Microsoft does not care about security. Doing so is unfair and denigrates what is probably the most comprehensive software security program in the industry.

So, I decided to try to make that claim in the comments. That generated a response from "Nathan Anderson," who did not bother really reading what I wrote, used a flawed interpretation of data to "prove" that Firefox and Chrome are far more secure than IE, ignored Low Rights IE, and concluded by, in essence, calling me an idiot.

My comment also generated a response from Dave Kennedy, who appears to have been the original poster, and who thinks I went too far.

At this point, I'd probably do better to ignore the discussion, but Mr Kennedy posited a very interesting question, and I thought I'd like to explore it a little. Here it is:
"How many millions of dollars have been lost and thousands of individuals have become the victims of identity fraud that can be laid squarely at the feet of criminal exploitation of vulnerable ActiveX controls?"

I don't know. How many? And how does it compare with the number of millions of dollars lost because users click on things they shouldn't, while running as admins? How does it compare with the number of millions of dollars lost due to vulnerable versions of Flash and Acrobat; which are vulnerable on all browsers? All of those would be fantastic statistics to have. If anyone has them, I'd love to see them.

To the Nathans of the world: I never said Firefox and Chrome are less secure than IE. All I pointed out was that they do not benefit from a sandbox the way IE does on Vista and Win7. They could. Easily. Stripping privileges out of a token and setting an integrity level is quite simple. The difficult part is really just building an escalation method to perform tasks outside the sandbox. It is just that their respective manufacturers have chosen not to implement this functionality. I really wish they had. It would greatly increase the difficulty of exploiting either browser.

In addition, Firefox, etc, may not have ActiveX, but they have other extension mechanisms, and a vulnerable extension is a vulnerable extension, whether it is ActiveX or not. It is correct that Chrome has fewer vulnerabilities than either Firefox or IE, but a reasonable argument can be made that it is because of how long it has been out and the amount of attention from security researchers it has received so far. Chrome is not yet a year old. In that time, Chrome 1.x and 2.x have racked up 9 advisories (12 vulnerabilities), according to Secunia. I included both versions because of how fast they were released. It provides a more accurate measure of the impact on the end user. Chrome 3.x is still considered a preview release as far as I can tell, so I excluded it. Firefox 3 (the only supported Firefox version for most of the one-year timeframe) had 9 advisories in 2009 so far, and an additional 5 in late 2008. Internet Explorer 7 in that timeframe has 6. 

Based on these figures, I would submit there is no statistically significant difference between the three browsers. I am not trying to minimize the ATL vulnerability, which was sloppy in the extreme, and I am not trying to denigrate either Firefox or Chrome, as I use and enjoy both, although mostly Firefox, which I used to write this post. I am simply saying that all software has vulnerabilities, and that the attackers will be opportunistic enough to exploit any or all of them if it is necessary to meet their needs.

Vulnerability counting misses the point entirely, though. All the bad guys need is one unpatched vulnerability. Furthermore, that vulnerability can reside in the browser, or in anything else running in the browser. The common add-ins, such as Flash and Acrobat, have vulnerabilities regardless of which browser they are running in. Even if the user has a fully patched and non-vulnerable browser, all the attacker needs is one unpatched add-in. Adding a new browser requires adding new add-ins, so now you have two copies of Flash to maintain, two copies of Acrobat to maintain, and another browser. Simply adding more software to maintain does not make people more secure. Most users would probably be far better off maintaining only one browser and spending the additional effort on learning how to browse more securely.

Finally, whether a computer is fully patched or not; whether a browser or its extensions are full of holes or not; the most vulnerable part of any system is almost always the user. Humans are still at v. 1.0, and not a single security patch has been issued for them yet. There has been no Trustworthy Computing Initiative to stamp out security vulnerabilities in people. Therefore, the easiest way to hack anything is almost always to ask a legitimate user to do it for you. Simply present the user with something he values more than an intangible and incomprehensible security benefit, and your job is done. Many of the attacks today do not even use software vulnerabilities. It is more reliable and less expensive to exploit the user directly.

This morning I talked to my dad. After a few minutes of polite small talk, I heard the 10 little words I have come to dread: “I had some problems with my computer the other day.” The video card on his laptop had died. The screen was just black. He has a Dell Vostro, so he called Dell Technical Support. They sent a contractor technician out with a motherboard. The technician, having no real qualifications other than the need for a job, and no real training other than how to fill out the repair paperwork, installed the motherboard. Three days later he returned with the video card the computer actually needed, and the computer started again.

At this point, the following conversation ensued:

Dad: When I started the computer I got an error message

Me: What did the message say?

Dad: How should I know? It was written for “people” like you. I didn’t understand a word of it. It just said something about some software not working and it should be reinstalled

Me: Which software?

Dad: I don’t know. I told you, I didn’t understand it.

Me: So what did you do?

Dad: I figured it must have been Windows. Windows never works properly, so that made sense. I thought if I reinstalled Windows it would all work.

Me: And…?

Dad: Now Office doesn’t work.

Me: When you say “reinstalled Windows” did you do an in-place upgrade?

Dad: Can you restate that again in Human?

Me: Did you upgrade Windows?

Dad: No, the upgrade option was grayed out.

At this point, if, like me, you are a cubicle-dwelling, bespectacled nerd with the social skills of a turnip you know exactly what happened. He created a new side-by-side installation of Windows. Sure enough, in the C:\Windows.Old folder were his old Users folder, his old Windows folder, his Program Files folder, and all the other contents of his hard drive. I pointed this out to him to explain what happened.

This is when Dad drew the completely logical assumption: “OK, so if I just copy the Microsoft Office folder from there to C:\Program Files it will work?”

No. It won’t. It would if software were designed for the humans who actually use it. Unfortunately, it is not. It is designed by and for the same people: cubicle-dwelling bespectacled nerds with the social skills of turnips; people who have never spent any significant time interacting with humans, and who have never met any of the real users who will use the products they design. If we had actually met and interacted at length with real people at any point over the past 15 years, we probably would have realized already that designing a “program” that consists of 3,829 files, spread over 60 folders, is not how people expect it to work. That, by the way, is not a random figure. It is the number of files and folders in C:\Program Files\Microsoft Office on my laptop. Lest you say that someone else knows better, iTunes vomits 2,718 files over 1,064 folders, in two different hierarchies. Why don’t you try to move either to your cavernous external hard drive to save space and see how well that works?

Is it that my dad was being illogical? No. Moving the Office folder would indeed be incredibly logical; totally rational, in fact. If you bought a new file cabinet, you could easily take the files out of the old file cabinet, put them in the new one, and they would actually still remain readable! You could even take one of your old pens, scribble a note on them in the process, and a year later you could still read the note! Amazing, ain't it? If file cabinets worked like computers, you could certainly try to remove the file from the old cabinet. It would prompt you with a dialog asking if you really wanted to do that, once per character on the page. Once you accepted the prompts, you could insert the file into the new cabinet. When you tried to read it, however, you would find that the ink had fallen onto the floor between the two file cabinets. The magic fixative that keeps the ink on the paper works only as long as the paper stays in the old file cabinet.

We have a mental model consisting of physical, tangible things. There is a school of thought in Cognitive Science that believes the basic wiring of the human brain was forged in caves. Our brains were designed to address the biggest concerns of the day: evading the saber-toothed cat, spearing a wooly mammoth for dinner, and, for at least half the population, clubbing a suitable mate to drag home to the cave. (Presumably, the other half of the population lived in fear of getting clubbed and dragged away). Our brains were not exactly wired to understand the convoluted product management decisions that resulted in almost four thousand files and thousands of directories. And they certainly were not wired to understand that all those files and directories are utterly useless without the settings, which are stored elsewhere – in a place that does not really exist – and are joined to the file system manifestation of the software only in the very loosest sense of the word.

Every time I boot Windows these days – and especially Windows 7 – I feel like the software is designed to be some kind of punishment, meant to exact revenge on us for the designers being bullied in elementary school. So much of the software we software engineers design feels vindictive, counter-intuitive, and illogical. When the users finally figure out basic interaction styles, we change it all. When people finally learn that they can click on things in the Quick Launch area to start them, we get the bastardized task bar in Windows 7 that only activates existing copies. When we finally figure out how to find things on the Start menu, it becomes polluted with several hundred useless icons like iSCSI Initiator. Rather than adding features that make software easy to use, we bloat it with new features because that is what the computer journalists look for. I keep hoping for a release of a major piece of software that just works; that is elegant, that shows thoughtfulness in how the software was plumbed together, and that is designed from the ground up not to add new features but to be intuitive to the poor people who have to use it. Unfortunately, I probably never will see it. “Intuitive”, “elegant”, and “just works” are words you never see in computer journals, except maybe in Macworld.

Sometimes I feel like the only piece of software ever designed to work EXACTLY the way its intended users expected it to work is Solitaire. Predictably, my sources tell me that Microsoft laid off the guy who wrote it in May.

In May, in one of the more inexplicable moves this year, Microsoft laid off my good friend Steve Riley, four days before he was to deliver half a dozen presentations at TechEd. Fortunately, it did not take Steve long to find a new gig. This Monday, he starts as the latest Evangelist & Strategist for Amazon Web Services!

I'm very very happy for Steve, and very excited about what he can do in that role. Web Services are where the future is, and Steve is extremely well suited to the role. Please join me in wishing him good luck!

For the past few days I've been following the Microsoft Video Control Vulnerability with interest. Basically, it's another vulnerable ActiveX control that needs to be kill-bitted. Last night, Microsoft posted a work-around which involves using a Group Policy ADM template (ADM is the template format that was deprecated in Vista and Windows Server 2008). Unfortunately, the template tattoos the registry, which is not really recommended.

I contemplated for a while writing a work-around for this issue, but then remembered that I actually did, almost three years ago. The workaround I wrote then, for another ActiveX vulnerability, will not tattoo the registry, and is much simpler to deploy with an Enterprise Management System. Just take the CLSIDs from the advisory (there are 45 of them) and run my script that many times with the -k switch. If you wish to revert the change, run the same script with the -r switch.
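For background, the kill bit itself is just a registry value: under HKLM\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{CLSID}, a DWORD named "Compatibility Flags" with the 0x400 bit set (see KB 240797). As an illustration only, here is a hypothetical generator of a .reg file for a list of CLSIDs; the placeholder CLSID is made up, and this is not a substitute for a revertible deployment script.

```python
# Sketch: generate .reg file contents that set the kill bit
# (Compatibility Flags = 0x400) for each CLSID, per KB 240797.

KEY = r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility"

def killbit_reg(clsids):
    """Return the text of a .reg file that kill-bits every CLSID given."""
    lines = ["Windows Registry Editor Version 5.00", ""]
    for clsid in clsids:
        lines.append("[%s\\%s]" % (KEY, clsid))
        lines.append('"Compatibility Flags"=dword:00000400')
        lines.append("")
    return "\n".join(lines)

# Placeholder CLSID; substitute the 45 CLSIDs from the advisory.
print(killbit_reg(["{00000000-0000-0000-0000-000000000000}"]))
```

Reverting is then a matter of restoring or deleting those same values, which is exactly what a -r style switch on a proper script automates.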

The Consumer Federation of America just published a report on identity theft services entitled "Are Identity Theft Services Worth The Cost?" The conclusion is that many are not, and that regulation is needed in that industry. It is a very interesting read.

Recently I had a very interesting incident. I wrote an article some time in 2008 and the publisher paid me a little bit of money for it. That means the publisher must send a report to the Internal Revenue Service (IRS - the U.S. tax department) reporting that they paid me, as well as send me a form called a 1099 form that I can use to report this money on my tax return.

A few days ago the comptroller for the publisher sent me an e-mail asking for my social security number (my national ID number for any non-Americans that are unfamiliar with the term). As is my custom, I responded that I really do not care to e-mail my social security number, but if he gives me a phone number I will gladly call him and let him know. This he did. I called, and within 15 minutes of the call I received a form California DE 542 in the mail. The DE 542 is required by the state of California when money is paid to a contractor, or a contract is entered into to pay money to a contractor. Its purpose is to permit the state to track payments to parents who do not pay their child support. Not only do I not need this form as I am not a resident of California; it also contains, you guessed it:

my social security number.

At this point I started wondering what part of "I do not wish to have my social security number transmitted by clear-text e-mail" was unclear. I sent a message to the sender that informed him that this could quite possibly be considered a data breach and require notification under Washington State SSB 6043, which requires formal breach notification. As of today, I am still awaiting a response. Any response.

Just because I felt like griping to someone, I forwarded the e-mail to my favorite accountant. Her response was "yeah, I know lots of CPA firms who e-mail around unencrypted 1040s." (A "1040" is the U.S. federal tax return form.) I was absolutely floored. Last week credit card processor Heartland reported that it had experienced what may very well be the largest data breach in world history. Many banks are replacing every single one of their credit cards because of it. In fact, I took a call from a distressed bank manager just this morning asking whether it would be prudent to do so (the answer was "yes"). Yet, does that not pale in comparison to the number of unencrypted 1040s e-mailed around by tens of thousands of accountants every year, and the untold millions of other tax-related forms that traverse unencrypted network channels?

If you steal my credit card number, I can call the bank and ask them to issue me a new number. A few days later, I have a new card. The bad guy can, at worst, incur a few hundred dollars in charges, maybe a few thousand if they are really lucky. Yet, credit card data is somehow seen as the primary piece of data that needs protection. How many news reports have you read that discuss a computer breach and include the words "no credit card numbers appear to have been compromised?" Have we completely lost sight of the fact that there may be other pieces of information that need protection?

Consider the corollary. If you steal my social security number, you can take over my house, get any number of credit cards in my name, give me a criminal record, get a driver's license in my name... And, how do I clean it up? If I call the Social Security Administration and ask for a new number because my existing number has been compromised they would simply laugh at me. Only in exceptionally rare circumstances do they issue new numbers. In some states I am permitted, if my social security number has been compromised, to put in a credit report freeze, but the burden is on me, as the victim, to prove that my information has been compromised before I can get a freeze. If I am deemed worthy of getting the barn door closed after the horses have fled, I get to pay $30-60, per freeze, per credit bureau, requested by certified mail. And each freeze may only be good for 90 days. That's only in some states. Other states prohibit credit freezes, and a few, more progressive ones, actually permit consumers to close the barn door before the horses run away. The freeze usually still costs money, and usually is still time-limited, and usually still requires that you use certified mail to each credit bureau to request it. Fortunately, you can "thaw" the freeze by making a single phone call and typing in a four-digit pin.

What is wrong with this picture? Why are accountants and comptrollers still e-mailing around the source data - social security numbers; while we as consumers only seem to care about the derived data - the credit card number? Why is there a Payment Card Industry (PCI) Data Security Standard that, while widely ignored, attempts to set data protection standards for cardholder data; but no Social Security Number security standard that establishes requirements for protection of social security numbers and liability for anyone who compromises someone else's Social Security Number?

Why do we not see any Attorneys General up in arms over that one? Who is going to help me protect the source data?

 
