I’ve been playing a lot lately with cross-site scripting (XSS) – you can tell that from my previous blog entries, and from the comments my colleagues make about me at work.
Somehow, I have managed to gain a reputation for never leaving a search box without injecting code into it.
And to a certain extent, that’s deserved.
But I always report what I find, and I don’t blog about it until I’m sure the company has fixed the issue.
So, coffee store, we’re talking Starbucks, right?
Right, and having known a few people who’ve worked in the Starbucks security team, I was surprised that I could find anything at all.
Yet it practically shouted at me, as soon as I started to inject script:
Well, there’s pretty much a hint that Starbucks have something in place to prevent script.
But it’s not the only thing preventing script, as I found with a different search:
So, one search takes me to an “oops” page, another takes me to a page telling me that nothing happened – but without either one executing the script.
The oops page doesn’t include any of my script, so I don’t like that page – it doesn’t help my injection at all.
The search results page, however, that includes some of my script, so if I can just make that work for me, I’ll be happy.
Viewing source is pretty helpful, so here’s what I got from doing that and searching for my injected script:
At this point, I figure that I need to find some execution that is appropriate for this context.
Maybe the XSS fish will help, so I search for that:
Looks promising – no “oops”, let’s check the source:
This is definitely working. At this point, I know the site has XSS; I just have to demonstrate it. If I were a security engineer at Starbucks, this would be enough to send me off to knock some heads together.
I think I should stress that. If you ever reach this point, you should fix your code.
This is enough evidence that a site has XSS issues to make a developer do some work on fixing it. I have escaped the containing quotes, I have terminated/escaped the HTML tag I was in, and I have started something like a new tag. I have injected into your page, and all we’re debating now is how much I can do once I’ve broken in.
And yet, I must go on.
I have to go on at this point, because I’m an external researcher to this company. I have to deliver to them a definite breach, or they’ll probably dismiss me as a waste of time.
The obvious thing to inject here is “"><script>prompt(1)</script>” – but we saw earlier that this produced an “oops” page. We’ve seen that “prompt(1)” isn’t rejected, and that the angle-brackets (chevrons, less-than / greater-than signs, call them what you will) aren’t rejected either – so it must be the word “script” that trips the filter.
That, right there, is enough to tell me that instead of encoding the output (which would turn those angle-brackets into “&lt;” and “&gt;” in the source code, while still displaying them as angle-brackets), this site is using a blacklist of “bad words to search for”.
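To make the difference concrete, here’s a minimal sketch in Python (I don’t know what Starbucks’ actual stack looks like; this just illustrates the two approaches) showing what proper output encoding does to my payload, next to the kind of bad-word check the site appeared to use:

```python
import html

payload = '"><script>prompt(1)</script>'

# Output encoding: special characters become HTML entities, so the
# browser displays them as text instead of parsing them as markup.
encoded = html.escape(payload, quote=True)
print(encoded)  # &quot;&gt;&lt;script&gt;prompt(1)&lt;/script&gt;

# The blacklist approach instead reflects the input untouched unless
# it contains a banned word (hypothetical word list, for illustration).
BANNED = ("script",)
print(any(word in payload.lower() for word in BANNED))  # True -> "oops" page
```

The encoded version is harmless in any context; the blacklist version stakes everything on the word list being complete.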
Why is a blacklist wrong?
That’s a really good question – and the basic answer is that you just can’t make most blacklists complete. A blacklist can only be complete if you have a very limited character set and a good reason to believe the list covers everything dangerous in it.
One blacklist that might work: surround every HTML tag’s attributes with double quotes, so that your blacklist consists of the double quote (which you encode) plus the characters used to do the encoding (which you also encode).
I say it “might work” because, in the wonderful world of Unicode and evolving HTML standards, there might be another character that escapes the encoding, or a sequence of Unicode code points that the browser treats as the encoding character or the double quote.
Easier by far to use a whitelist – only these few characters are safe, and ALL the rest get encoded.
You might have an incomplete whitelist, but that’s easily fixed later, and at its worst is no more than a slight inefficiency. If you have an incomplete blacklist, you have a security vulnerability.
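A whitelist encoder is short enough to sketch in full. This is an illustration, not a production library – the safe set here is an assumption you’d tune to your context:

```python
import string

# Only characters explicitly known to be safe pass through;
# everything else is entity-encoded. (Illustrative safe set.)
SAFE = set(string.ascii_letters + string.digits + " .,-")

def whitelist_encode(text):
    return "".join(c if c in SAFE else f"&#x{ord(c):X};" for c in text)

# An attacker's markup never survives as markup:
print(whitelist_encode('"><svg onload=prompt(1)>'))
# A forgotten-but-harmless character just gets encoded unnecessarily:
print(whitelist_encode("it's fine"))  # it&#x27;s fine
```

Note the failure modes: a character missing from `SAFE` costs you a few extra bytes of output, nothing more. A word missing from a blacklist costs you the site.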
Back to the story
OK, so having determined that I can’t use the script tag, maybe I can add an event handler to the tag I’m in the middle of displaying, whether it’s a link or an input. Perhaps I can get that event handler to work.
Ever faithful is the “onmouseover” event handler. So I try that.
You don’t need to see the “oops” page again. But I did.
The weirdest thing, though, is that the “onmooseover” event worked just fine.
Except I didn’t have a moose handy to demonstrate it executing.
So, that means that they had a blacklist of events, and onmouseover was on the list, but onmooseover wasn’t.
Similarly, “onfocus” triggered the “oops” page, but “onficus” didn’t. Again, sadly I didn’t have a ficus with me.
You’re just inventing event names.
Sure, but then so is the community of browser manufacturers. There’s a range of “ontouch” events that weren’t on the blacklist, but are supported by a browser or two – and then you have to wonder if Google, maker of the Chrome browser and the Glass voice-controlled eyewear, might not introduce an event or two for eyeball tracking. Maybe a Kinect-powered browser will introduce “onwaveat”. Again, the blacklist isn’t future-proof. If someone invents a new event, you have to hope you find out about it before the attackers try to use it.
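The behaviour is consistent with a filter along these lines – my guess at the mechanism, not Starbucks’ actual code, and the banned list here is invented for illustration:

```python
# Hypothetical reconstruction: reject only event-handler names that
# appear on a fixed list, and reflect everything else.
BANNED_EVENTS = {"onmouseover", "onfocus", "onclick", "onload", "onerror"}

def passes_filter(name):
    return name.lower() not in BANNED_EVENTS

print(passes_filter("onmouseover"))   # False -> "oops" page
print(passes_filter("onmooseover"))   # True  -> reflected into the page
print(passes_filter("ontouchstart"))  # True  -> real event, never listed
```

The misspelling and the genuinely new event are indistinguishable to this filter – both sail through, and only one of them is harmless.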
Again, back to the story…
Then I tried adding characters to the beginning of the event name. Curious – that works.
And, yes, the source view showed me the event was being injected. The browser, of course, wasn’t executing it, because “?onmouseover” isn’t an event handler name – the HTML spec just doesn’t recognise it.
Eventually, I made my way through the ASCII table to the forward-slash character.
Yes, that’s it, that executes. There’s the prompt.
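Why does a leading slash work? In HTML5-style tag parsing, a stray “/” inside a tag is treated as a separator between attributes, so “/onmouseover” ends up as a perfectly ordinary onmouseover attribute. Python’s standard-library parser, which follows similar tokenisation rules, shows the effect:

```python
from html.parser import HTMLParser

class AttrSniffer(HTMLParser):
    """Record the attributes the parser sees on each start tag."""
    def __init__(self):
        super().__init__()
        self.attrs = []

    def handle_starttag(self, tag, attrs):
        self.attrs = attrs

sniffer = AttrSniffer()
# The "/" between the closing quote and "onmouseover" is swallowed as
# an attribute separator, leaving a live event handler on the element.
sniffer.feed('<input value=""/onmouseover="prompt(1)">')
print(sniffer.attrs)  # [('value', ''), ('onmouseover', 'prompt(1)')]
```

So the blacklist was checking for “onmouseover” preceded by whitespace, and the slash slipped past it while still parsing as a real attribute.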
Weirdly, if I used “alert” instead of “prompt”, I got the “oops” page. Clearly, “alert” is on the blacklist and “prompt” is not.
I still want to make this a ‘hotter’ report before I send it off to Starbucks, though.
Well, it’d be nice if it didn’t require the user to find the page element the flaw is in and wave their mouse over it.
Fortunately, I’d also recently found a behaviour in Internet Explorer that allows a URL to set focus to an element on the page by its ID or name. And there’s an “onfocus” event I can trigger with “/onfocus”.
So, there we are – automated execution of my chosen code.
Anything else to make it more sexy?
Sure – how about something an attacker might try – a redirect to a site of their choosing. [But since I’m not an attacker, we’ll do it to somewhere acceptable]
I tried to inject “onfocus=’document.location=”//google.com”’” – but apparently, “document” and “location” are also on the banned list.
“ownerDocu”, “ment”, “loca” and “tion” aren’t on the blacklist, so I can do “this["ownerDocu"+"ment"]["loca"+"tion"]=” …
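The trick works because a substring blacklist inspects the literal payload, not the value the expression evaluates to. A quick sketch (again, my guess at the filter’s shape, with an invented word list):

```python
BANNED_WORDS = ("alert", "document", "location", "script")

def passes_filter(payload):
    # Substring blacklist, as the site's behaviour suggested.
    return not any(word in payload.lower() for word in BANNED_WORDS)

direct = 'this.ownerDocument.location="//google.com"'
split  = 'this["ownerDocu"+"ment"]["loca"+"tion"]="//google.com"'

print(passes_filter(direct))  # False: "document" and "location" both match
print(passes_filter(split))   # True: the concatenated halves match nothing
```

In the browser, both strings evaluate to exactly the same property access and assignment – JavaScript concatenates the pieces back together at runtime, long after the filter has finished looking.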
Very quickly, this URL took the visitor away from the Starbucks search page and on to the Google page.
Now it’s ready to report.
Hard part over, right?
Well, no, not really. It took me a couple of months to get this reported. I tried the security@ address, the conventional mailbox for reporting security issues.
An auto-reply comes my way, informing me this is for Starbucks staff to report [physical] security issues.
I try the webmaster@ address, and that gets me nowhere.
The “Contact Us” link takes me to a customer service representative, and an entertaining exchange that results in them telling me that they’ve passed my email around everyone who’s interested, and the general consensus is that I should go ahead and publish my findings.
So you publish, right?
No, I’m not interested in self-publicising at the cost of someone else’s security. I do this so that things get more secure, not less.
So, I reach out to anyone I know who works for Starbucks, or has ever worked for Starbucks, and finally get to someone in the Information Security team.
This is where things get far easier – and where Starbucks does the right things.
The Information Security team works with me politely, quickly, and calmly, and addresses the problem promptly and completely. The blacklist is still there, and still takes you to the “oops” page – but it’s no longer the only protection in place.
My “onmooseover” and “onficus” events no longer work, because the correct characters are quoted and encoded.
The world is made safer and more secure, and half a year later I post this article, so that others can learn from this experience too.
By withholding publishing until well after the site is fixed, I ensure that I’m not making enemies of people who might be in a position to help me later. By fixing the site quickly and quietly, Starbucks ensure that they protect their customers. And I, after all, am a customer.
The Starbucks Information Security team have also promised that there is now a route from security@ to their inbox, as well as better training for the customer service team to redirect security reports their way, rather than telling reporters to go ahead and publish. I think they were horrified that anyone had suggested that. I know I was.
And did I ever tell you about the time I got onto Google’s hall of fame?