Thursday, August 16, 2012

Simple Way to Increase Security and Privacy and Reduce Spam

A few years ago I came up with a technique to cut down on spam. I'm sure I wasn't the first to think it up, but it's worked very well for years and I've never missed a real message or wasted much time on spam. After IOActive released some privacy research this week, I realized the same technique helps protect your privacy too.

If you haven't followed the story, IOActive did some automated scanning of popular web services for high-profile executives. They wanted to find out whether people like Steve Ballmer use Dropbox, or whether the CEO of Zappos uses Nikeplus.com (yes, in both cases). This was accomplished by attempting to register for these sites using the executives' official corporate email addresses.

Their approach was a pretty clever way to get the information. There may be a perfectly valid reason for some of the findings - for example, an executive who publicly announces his and his company's support for another service. But the number of results - 930 accounts across 840 executives - suggests that at least some of these are for personal use.

My Technique

I use a different email address for each new account I set up. But I don't have to create tons of new free accounts at Gmail or Hotmail. I own several domain names, one of which exists just for creating throwaway email addresses. Any email sent to that domain gets redirected to my primary email account, where it's filed into a folder without ever hitting my inbox.

It sounds like it might be tricky to remember all these addresses, but it's not really. I just use a consistent formula for coming up with each address - for example, "site.com@domain.com" for an account at site.com. To remember the account name, just look at the site's domain in the browser's address bar.
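If you like, you can even script the formula. Here's a minimal sketch in Python; the domain example.com stands in for whatever throwaway domain you own:

    from urllib.parse import urlparse

    def signup_address(site_url: str, my_domain: str = "example.com") -> str:
        """Turn whatever is in the browser bar into a per-site address."""
        host = urlparse(site_url).netloc or site_url  # handle bare hostnames too
        if host.startswith("www."):
            host = host[len("www."):]
        return f"{host}@{my_domain}"

    print(signup_address("https://www.dropbox.com/register"))  # dropbox.com@example.com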

And ever since I started using a password manager it's gotten even easier and more secure. I just generate a random address and password and store them away. The software remembers and fills in my username and password; I just have to click a couple of buttons.

Fighting Grey-Mail

If you're not familiar with grey-mail, it's the email you get from services you've signed up for on the Internet. It isn't quite spam, because it comes from known senders to addresses you provided, but it's also not something you want to wade through constantly.

I woke up this morning to about a half-dozen new pieces of grey-mail. But I didn't have to look at any of it; I only know the number because I clicked on the folder that collects it automatically. The system I use works perfectly: it's automated, I have total control and it never misjudges an email.

I simply route any message that isn't addressed to me directly into a separate folder. I check that folder every once in a while and try unsubscribing from the biggest offenders - it usually works, but sometimes it doesn't. And of course if I'm expecting something, I go look in that folder.
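If you want to automate the same rule yourself, here's a minimal sketch using Python's imaplib. The server name, credentials, folder name and primary address are all placeholders for whatever your own setup uses - and most mail providers can do the same thing with a simple server-side filter instead:

    import imaplib

    PRIMARY = "me@example.com"      # my real address (placeholder)
    GREYMAIL_FOLDER = "Greymail"    # the folder that collects everything else

    # Connect and select the inbox (server and credentials are placeholders).
    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login(PRIMARY, "app-password-here")
    conn.select("INBOX")

    # Find messages NOT addressed to my primary address...
    status, data = conn.search(None, "NOT", "TO", f'"{PRIMARY}"')
    for num in data[0].split():
        # ...file them in the grey-mail folder and mark the originals for deletion.
        conn.copy(num, GREYMAIL_FOLDER)
        conn.store(num, "+FLAGS", "\\Deleted")

    conn.expunge()
    conn.logout()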

Increasing Security and Privacy

This also adds a little more security to your accounts. It's security through obscurity, so don't rely on it alone. But if you're the kind of person who reuses passwords - and just about everybody does this to some extent - then you get some additional protection against password reuse attacks. If an attacker has the email address and password for one of your accounts, they can't get into your other accounts without a little extra work. That won't stop a determined attacker, but it will protect you against somebody just running a list.

The Result

I still get spam. Even with this system a handful of messages show up in my spam folder every day. But it's not many - far fewer, in fact, than the grey-mail count. In the last month I have gotten 9 spam messages, but over 150 grey-mail messages.

The only people who have my personal email address are my friends. So either a friend's account has been compromised or somebody guessed the address. Still, only 9 spam messages a month and no time wasted wading through grey-mail is pretty spectacular! And as a side benefit I'm protecting my privacy and security a little bit more.

Monday, July 16, 2012

Wall of Creeps

Lately there's been a lot of conversation about how to curb creepy behavior at Defcon. Last year women and goons had "red" and "yellow" cards that they gave to guys who were acting like asses. The plan backfired when the cards became sought-after swag. The idea was floated again this year and has been nearly universally shot down as ineffective, counter-productive and immature.

I propose a different tack - a Wall of Creeps. Creeps - men, women or otherwise - would have their photo posted on a physical or virtual wall, outing them. The idea is that public shame would act as a deterrent, reining in people who are clearly over the line without forcing conformity. This tactic removes the incentive (the scarcity and perceived exclusivity of the cards) and mixes in a strong social disincentive. It won't stop all acts of creephood, but it should help cut down on the truly aberrant behavior.

The photo would be a mug shot taken by a goon, which means somebody has to be creepy enough to get dragged to a goon, and the goon has to stop what they're doing long enough to take the photo. That reduces false positives. There should also be some criterion for redemption, such as a donation to a cause, a handwritten apology or whatever else would make the offended party forgive. Maybe a TTL or a minimum sentence too. After three infractions, though, the creep's photo would stay posted for the rest of the con.

Con-goers would be invited to heckle and deride offenders for as long as their face is up there, but not to engage in physical violence, doxing, harassment or general creep/ass behavior of their own. The board could also be used by party organizers to blacklist certain people. I'd love to hear feedback - what do you think?

Friday, June 08, 2012

Interesting Conversation from Gold Farmer

I saw this interesting conversation posted on a Diablo III fansite today, and it has a lot of relevance to information security. The interview centers on gold farming - using automated bots to amass in-game gold and items that can then be sold for cash. But at one point the conversation turns to how online game accounts are compromised.

The gold farmer claims that most game account compromises come from one source: forums. Attackers compromise a fan forum site and grab the usernames, email addresses and passwords (or password hashes). Those credentials are then used to attempt to log into the game, as well as email accounts, PayPal, banks, credit cards and other online services. The entire process of checking accounts is automated through tools. The accounts that work can then be used by the original criminals or sold to others.
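To make the "combo list" concrete: it's nothing more than a file of email:password pairs harvested from a breached site, which account-checker tools then replay against other services. A defender who obtains a copy can run the same matching in reverse - find which of their own users appear in it and force password resets. A minimal sketch, with the file format and names being my own assumptions:

    # Find which of our users appear in a leaked combo list.
    # Combo lists are typically just "email:password" per line (real dumps vary).

    def load_combo_list(path: str) -> dict:
        combos = {}
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                if ":" in line:
                    email, password = line.rstrip("\n").split(":", 1)
                    combos[email.strip().lower()] = password
        return combos

    def users_to_reset(our_users: set, combo_path: str) -> set:
        leaked = load_combo_list(combo_path)
        # Anyone whose address shows up in the dump gets a forced reset --
        # there's no need to test the leaked password against our own login.
        return {u for u in our_users if u.lower() in leaked}

    if __name__ == "__main__":
        ours = {"player1@example.com", "player2@example.com"}
        print(users_to_reset(ours, "combo_list.txt"))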

See below for the relevant text or see the entire interview with a Diablo III gold farmer.

MeD: Do you have any information on the account hacking that people are reporting even with having the authenticator?
Farmer: Yeah, I know everything about that.
MeD: Would you be willing to share that information with us?
Farmer: They don’t hack the computers, the passwords.
MeD: When you say they don’t hack the computers, they don’t have the player’s computers or they don’t hack Blizzard’s computers?
Farmer: They hack forums and such and take the same email and password and test it on Blizzard.
MeD: That’s what I thought. And that is testament to all of you guys out there who are using the same email and password for forums and such for your game.
Farmer: If they have 1 million stolen emails and passwords they might get 1% to 10%
MeD: What type of websites are targets for this?
Farmer: Diablo websites or Blizzard in general.
MeD: So you are talking about Diablo fansites that have forums, that you know have been successfully hacked, and they get the logins and passwords.
Farmer: Yeah, correct, it’s easy.
MeD: And in the forums of Blizzard are you able to get anything out of there?
Farmer: No. Blizzard is bullet proof, logically.
MeD: I ran forums quite a while ago and we had 130k+ members and we had issues with hack attempts at our forum accounts quite often. We were very puzzled about it. There was one time when they got everyone’s log in and password but they didn’t log into anyone’s forum account. Do you suppose that when they got into our forums do you think they were just looking to match up
Farmer: Yeah. They used it to try on people, mail and Blizzard and such. It’s called combo.
MeD: Is that a mispronunciation of your program or is that what it’s actually called?
Farmer: Nah. It’s made to make combo lists.
MeD: We reset everyone's password, we did that for them. We were worried they were trying to hack into the forum accounts. This was many years ago by the way. What I didn't realise then but I'm realising now is that this was all about accessing the game accounts and it had nothing to do with our forums. I bet that a lot of these forums that are getting compromised are getting compromised over and over again. Would you say that is correct?
Farmer: Yeah and Paypal and banks, Facebook and so forth and small percent Russian spammers.
MeD: They are testing this against multiple things, they are not just testing this against Diablo account they also test against Paypal and their bank log ins.
Farmer: They test it against everything and sell it.
MeD: How much do they sell these for?
Farmer: It depends on what’s on them.
MeD: 10c an account, $10 an account? Do you know the range there?
Farmer: ??? Doesn’t sell.

Wednesday, June 06, 2012

LinkedIn Password Hash Redux

This LinkedIn password hash leak has become a real storm of activity today. This post might not have much longevity, but I hope to quickly recap and summarize what we know, what we don't, what we guess and what we recommend. Everything here comes from correspondence on Twitter, blogs and what have you, so it should all be taken with a grain of salt (pun not intended).

What we know:
  • 6.5 million password hashes were posted on a password cracking website. The poster said they were from LinkedIn and were in unsalted SHA-1 format. Some of the hashes had several digits zeroed out.
  • No account names were included with the post, meaning it's not possible to link the passwords to accounts with the data found.
  • LinkedIn has been investigating whether there was an internal breach, but has not yet publicly acknowledged anything they have found.
  • LinkedIn has said that "some of the passwords that were compromised correspond to LinkedIn accounts." However, this statement is sufficiently vague that it could mean nothing more than common passwords are used for LinkedIn and found in the compromised data.
  • Many security researchers who use unique passwords for LinkedIn and no other site have found those passwords in the leaked data. These passwords are said to be highly unlikely to be used by anyone else.
  • An Android app update occurred shortly after the breach was discovered. However, it's unclear if the two events are related.
  • A security vulnerability in the LinkedIn iOS app reported today does not call out password security as an issue.
What we don't:
  • We don't know whether there was a breach at LinkedIn or not. Likely they haven't yet completed their internal investigation.
  • We don't yet know if more information was leaked, such as account names, credit card numbers or other private information.
  • We don't know if more accounts have been exposed than those found in the original source.
  • We don't know if there is an active vulnerability that could be exploited again to gain access to more password hashes.
What we guess:
  • Mikko Hypponen has suggested that the list may have come from a LinkedIn web interface vulnerability, but that was simply speculation based on past breaches.
  • Researchers have speculated that the hashes with digits zeroed out correspond to passwords that have already been cracked, or to passwords on a banned list.
  • There has been speculation that some password hashes are not from LinkedIn, though it's hard to find evidence either way.
  • There has been speculation that the 6.5 million passwords may cover all accounts on LinkedIn, since many people use the same passwords. However, a number of people have reported that their password was not found among those leaked.
  • Some reports suggest the leaked passwords may be 6 months old.
What we recommend:
  • If you have a LinkedIn account, change your password soon. Make it something strong. LinkedIn published some very generic account and password security suggestions, but I prefer the excellent xkcd panel on passwords.
  • Many security professionals have called for LinkedIn to add a salt value to their password hashing process, in order to strengthen security.
  • Other security professionals have recommended purpose-built password storage mechanisms such as bcrypt, scrypt and PBKDF2, which represent the current state of the art in thwarting password cracking. Using a well-tested library also reduces the risk of an improper home-grown implementation, which could itself lead to security issues. (A minimal sketch of salted versus unsalted hashing appears after this list.)
  • Two sites have been set up to check your password against the list. The sites appear to be safe, in that they won't steal your password, but for the paranoid you can also submit the password hash. I don't personally recommend that anyone do this, unless you have already changed your LinkedIn password and it was unique. But it's fun to look for possible passwords!  
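To make the salting point concrete, here's a minimal sketch in Python using the third-party bcrypt package - illustrative only, not anything LinkedIn actually runs. With unsalted SHA-1 (what the leaked list reportedly used), identical passwords always produce identical hashes, which is what makes precomputed cracking tables so effective; bcrypt bakes a random salt into every hash and makes each guess expensive:

    import hashlib
    import bcrypt  # third-party: pip install bcrypt

    password = b"correct horse battery staple"

    # Unsalted SHA-1: the same password always yields the same hash,
    # so attackers can look it up in precomputed tables.
    print(hashlib.sha1(password).hexdigest())

    # bcrypt: a random salt is embedded in every hash, and the work factor
    # slows down every cracking attempt.
    hash1 = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
    hash2 = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
    print(hash1 != hash2)                   # True: same password, different hashes
    print(bcrypt.checkpw(password, hash1))  # True: verification still works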

Sunday, May 27, 2012

On the Recent Blizzard and Diablo 3 Account Compromises

As an avid Diablo fan, I eagerly watched and waited for Blizzard to create Diablo 3. My first impression is that they did a masterful job creating it. Yes, there are some initial frustrations, but it definitely has that Diablo feel to it and despite the running jokes about Error 37 as a new prime evil, I've found that the most powerful boss enchantment has been Time Thief - the ability to suck hours off the clock without me realizing it. Bravo, Blizzard, Diablo 3 is a triumph!

But recently there has been a lot of controversy around compromised accounts in Diablo 3. Many players have found their characters stripped of gold and high-level gear. That's as much a tragedy as being robbed in the physical world - the possessions you worked so long for and felt so happy to acquire are taken from you by an unknown assailant. People feel violated and angry, which is understandable - it's human nature. Many have lashed out at the closest target.
The most common and convenient target of anger has been Blizzard's security and practices. Many accusations have sprung up that Blizzard, its servers, the game or other technology has been "hacked" and that essentially any player or account could be compromised because of that. In an interesting parallel, this is commonly the first thing people assume when their bank account has been compromised. 
 
The banking world has long confronted security challenges for online services. For as long as online banking has been a reality, malicious individuals have been hoping to compromise accounts and steal money from them. And so banking has come a long way in combating those threats. I've performed dozens of audits for financial institutions around their information security practices, including a component dealing with authentication in online banking (FIL-103-2005, FIL-77-2006 and FIL-50-2011 if you want to look it up). 
Today, banking is one of the safest activities you can engage in online, although it is also one of the most targeted. Cybercriminals from around the world target banks, banking sites and accounts and it has become every bit as disciplined and efficient as any business. The complexity and innovation is staggering. Yet excellent security measures taken by banks effectively thwart almost any attack out there, when used as intended on both the bank's side and on the account holder's side. 

Most bank account compromises in the last decade or so haven't happened because the bank was hacked - they've happened with legitimate account credentials. It used to be that most online banking accounts were compromised by the victim giving away their username and password or other sensitive information after clicking on links in fake emails. But banks improved the security and attackers responded by becoming more sophisticated. Now most of the time compromises happen because the account holder logs into their account from a computer that has malicious software installed. And it's highly likely that this is what has happened with most of the Diablo 3 account compromises. 
So how does this relate to Blizzard and to Diablo 3?
Blizzard has, in fact, said that malware has been the root cause in nearly all of their compromise investigations. Today's cybercriminals have become very sophisticated in their methods. As Blizzard has also pointed out, there is no one way that they get the information and access necessary to compromise accounts. Essentially they use whatever means they need to, in order to get what they want. In practice, this means there are likely multiple groups, each using many different types of attacks to get as many accounts as they can. 

As with bank account holders, gamers have gotten savvier about not giving away information that would allow someone else to access their accounts. But the attackers have adapted as well, and they now use other ways of getting that information than sending fake emails. Here are some of the more creative and sophisticated ways the thieves operate.
  • By calling you, if you can believe it! There's a good video walking through a typical attack in which a cybercriminal calls you on the phone.
  • Text messages or emails directing you to call a phone number, usually about an account compromise, expiration or closure. The phone number plays a recording asking you to enter your information. You never even have to talk to a person, and you've already given up too much information.
  • If you use the same email address and password on another site and that site is compromised, your Diablo 3 account may be too. These compromises happen fairly frequently - the Gawker Media compromise a couple of years ago, for example.
  • It's possible to buy compromised systems from cybercriminals. Many of the more sophisticated networks have millions of computers that are infected - far too many for the original criminals to take advantage of. So they sell access to others.
  • It's also possible to buy accounts from cybercriminals. Often they have account credentials for systems they don't typically target - for example if they only target bank accounts, they may sell gaming accounts for some additional profit.
  • Newly compromised accounts are prioritized. The criminals have so many accounts that they target the ones with the highest net worth first. There are stories of operations centers with account queues, where each new account is evaluated and ranked according to the amount of money the thieves can get from it.
By far the most common way bank accounts - and likely Diablo 3 accounts - are compromised is malware installed on your computer without your knowledge. Without going into the myriad ways this can happen, suffice it to say that you don't have to visit the shadier side of the Internet to run into malware. Most sites that distribute malware are legitimate sites that have themselves been compromised; in fact, more than 90% of infected sites find out they're compromised from someone else. Even some of the most mainstream sites have become malware distributors at times - ESPN, NASA and the Wall Street Journal have all infected their visitors with malware. Many of these compromises use standard malware toolkits that exploit dozens of vulnerabilities, generate a new malware package for each site visitor and test it against the common antivirus suites before sending it along. It sounds like science fiction, but it's not.
How to protect yourself? 
Security is hard. That's what makes it so hard for an organization like Blizzard to give you one simple answer. But that's not what a lot of people want to hear - even the people in charge of security for companies with huge budgets to protect their information assets often ask "What's the one thing I should do?" So it's not a surprise that most individuals would look for the "silver bullet" solution, if you will.
It's hard to describe how to protect yourself much better than Blizzard themselves did. So instead of rehashing it, I'll just link to Blizzard's excellent article on keeping yourself safe from account theft. But if you're in a hurry I'd say the top 3 things you can do are:
  1. Use the authenticator. Banks use similar technology to protect millionaires and billionaires. If you value your stuff, you can't get a better bargain than this! Even the physical token is inexpensive compared to what it protects. Blizzard modestly says they're selling these at cost, but that really means they're taking a loss once you count all the infrastructure and personnel they deploy on the back end. If you're looking for a "silver bullet" to protect your Diablo 3 account, this is the closest you'll come. (A rough sketch of how these one-time codes work appears after this list.)
  2. Don't reuse passwords. If you use the same password for your email, Battle.net and bank, odds are you're practicing poor password security. My recommendation is to use something like LastPass or KeePass, which make good password security easy.
  3. Update your OS, browser and plugins. Most modern operating systems and browsers will update automatically for you. But it's easy to see the update notification and procrastinate. Don't - don't wait more than a day or two once you see the notification. Plugins are sometimes harder because they don't always announce their updates. Adobe Flash, Adobe Reader and Oracle/Sun Java are the most-attacked plugins out there, and they're getting better about notifying you of updates.
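For the curious, authenticators like this generally work along the lines of a time-based one-time password (TOTP): the token and the server share a secret, and both derive a short code from the current time. The sketch below is a generic RFC 6238-style illustration, not Blizzard's actual implementation, and the secret shown is made up:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Generic RFC 4226/6238-style one-time code from a shared secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time() // interval)           # changes every 30 seconds
        msg = struct.pack(">Q", counter)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # A made-up shared secret; in real life the server generates it at enrollment.
    print(totp("JBSWY3DPEHPK3PXP"))

Because the code changes every 30 seconds and depends on a secret that never leaves the token and the server, a stolen password alone is no longer enough to log in.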
How can Blizzard do more to protect you?
I want to preface this section by saying that I don't know the details of what Blizzard is doing on their end to protect player accounts. I'd guess there's a lot going on that they don't talk about, or at least that I haven't read about. That doesn't mean they can't improve, but I know they're already doing a lot to secure accounts - in many cases, more than your bank does! Things like forcing stronger passwords, investigating many of the reported thefts, publishing and linking to a great deal of information, offering the authenticators and proactively communicating security steps. It even seems like they're refunding some gamers whose accounts were compromised, even after determining that Blizzard wasn't at fault - that's got to be some of the best response ever from a gaming company!
What follows are a few ideas I've taken from other industries that may help Blizzard improve. (Or not - again, I don't know for sure what they're doing on their end.) A rough sketch of a couple of these checks appears below.
  • Look at metadata associated with each previous login for the account. Often this metadata will differ between legitimate and malicious login attempts. Things like geolocation, keyboard layout, OS or game language or other data will be significantly different between a player and a thief.
  • Watch the common locations where compromised accounts are publicly posted for any gamer accounts that use the same account name or email address.
  • Drop a unique "cookie" that identifies the system a player logs in from. If the cookie has changed since the last login, or the cookie has been used with multiple accounts, this should raise a flag.
  • If there are multiple logins in rapid succession from a single IP or IP block, this should raise a flag.
All of these items can be indicators of a potentially compromised account or of a potential cybercriminal. Of course these measures consume personnel and system resources, meaning they cost more to administer - but then how much do reputation damage and time spent answering questions cost? They will also result in some frustrated players unable to log in - but then you can take the stance of "we're sorry you're unable to log in, but it's for your own security," which is hard to argue with. And in conjunction with an email address, phone number, Skype or Twitter account, or other contact mechanism, these false positives can be resolved very quickly.
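As a rough illustration of the last two ideas - device cookies and rapid logins from one IP - here's a minimal sketch. The names and thresholds are my own assumptions, not anything Blizzard actually does:

    import time
    from collections import defaultdict

    # Device "cookie" seen at each account's last successful login, plus a rolling
    # window of login timestamps per source IP. Thresholds are illustrative only.
    last_device_cookie = {}
    logins_by_ip = defaultdict(list)

    RAPID_WINDOW_SECONDS = 60
    RAPID_LOGIN_LIMIT = 5

    def login_risk_flags(account: str, device_cookie: str, source_ip: str) -> list:
        flags = []
        now = time.time()

        # Flag a device change since the last login for this account.
        previous = last_device_cookie.get(account)
        if previous is not None and previous != device_cookie:
            flags.append("device-changed")
        last_device_cookie[account] = device_cookie

        # Flag many logins from one IP in a short window (possible account checker).
        recent = [t for t in logins_by_ip[source_ip] if now - t < RAPID_WINDOW_SECONDS]
        recent.append(now)
        logins_by_ip[source_ip] = recent
        if len(recent) > RAPID_LOGIN_LIMIT:
            flags.append("rapid-logins-from-ip")

        return flags

In a real system these flags wouldn't block a login outright; they'd feed the kind of step-up verification and manual follow-up described above.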
And for our part, players should really be more tolerant of security measures. Adding an authenticator to your account takes an extra five minutes to set up and five seconds to use in practice, but it cuts the odds of account compromise dramatically, even if your system is infected. And if you're like most people I know, you appreciate it when your bank stops an apparently fraudulent transaction, even if it turns out to be legitimate. So do what's needed to be more proactive about your own security. A little initial setup can save you a lot of frustration in the end.

Is there anything I've missed? Do you have a different opinion? I'd love to hear about it so I can address the concern or amend my article. Constructive feedback is always welcome.

UPDATE: In an interview, a Chinese gold farmer claims to know the source of the compromised accounts. According to him, forums are being compromised and the email addresses and passwords from them are used to try to log in to Battle.net. This is a pretty common tactic, and it underscores the importance of using unique passwords across sites and games. And if you're not willing to do that, get the Authenticator, which will prevent this kind of attack.

Wednesday, May 23, 2012

New Research Published on Mobile Malware

Researchers at NCSU have started the Android Malware Genome Project, which is designed to identify and classify known malware samples for study. The results were recently presented at the 33rd IEEE Symposium on Security and Privacy in San Francisco, California, and published in its proceedings. The paper, entitled Dissecting Android Malware: Characterization and Evolution (PDF link), analyzes roughly 1,200 samples collected between August 2010 and October 2011. For each sample, the researchers attempt to determine how it is installed (its infection vector), how it updates itself, what its primary activities are on the mobile device and how it relates to other samples.

The research groups infection vectors into several categories. Far and away the largest is repackaging: modified versions of legitimate applications are rebuilt with malware inside and redistributed. The second group is spying applications - software for one person to watch another person's activities. And some malicious software purports to do one thing (which it may or may not actually do) while installing malware in addition - the classic Trojan horse.

There were also several primary types of activity the samples performed. Many attempted to elevate privileges on the device by exploiting a flaw in the Android operating system, giving the application greater access to the device's functionality. Nearly all of the samples attempted to connect the device to a larger group of compromised devices controlled by the malware authors - a botnet. Another common activity was contacting premium-rate services, such as premium SMS numbers. Many of the samples also collected information, such as user accounts, text messages and phone numbers.

The researchers also looked at the evolution of the malware samples and families over time, examining two families in depth to illustrate the rest: DroidKungFu and AnserverBot. These families show that authors have incorporated many sophisticated features to circumvent detection and frustrate researchers attempting to study the samples, among other things. Their analysis showed that mobile malware is rapidly maturing.

Some other interesting analysis was performed on the samples. The researchers ran all of the collected samples against four mobile anti-virus packages. Detection rates ranged from roughly 20% to 80%, with a big-name A/V company firmly at the back of the pack. Unknown malware is likely much more successful than these results indicate, meaning anti-virus software really needs to catch up.

Wednesday, May 09, 2012

Securely Deleting Data Before Donating or Recycling Your Devices

Katherine Boehret has a good article over on All Things D about recycling your technology. But it overlooks one crucial point - you need to make sure your information is deleted before you hand the device over. If you don't, your information, including financial data, could wind up in someone else's hands. A recent case in point: a number of refurbished Motorola Xoom devices were sold with their previous owners' data still on them. When that happens it can lead to embarrassment (think private photos and videos), identity theft, financial fraud or other unpleasant things.

To avoid any of these calamities, you'll want to take steps to wipe out your data. You should do this regardless of what the company or person you're giving it to tells you. But don't worry, securely erasing your information has never been easier! Many devices have mechanisms built in to do just that. And there are some good tools out there for your desktops and laptops.

Securely Erasing your iPhone, iPod Touch or iPad

Apple's website has simple instructions on how to securely erase an iPhone, iPod Touch or iPad. Here are the steps from Apple's support site:
You can remove all settings and information from your iPhone, iPad, or iPod touch using "Erase All Content and Settings" in Settings > General > Reset.
For even more security, plug your device into your computer and use iTunes to restore the device to its factory settings (but do not restore from a previous backup) before using the Erase All Content and Settings feature. Here are the restore steps from Apple's support site:
  1. Verify that you are using the latest version of iTunes before attempting to update.
  2. Connect your device to your computer.
  3. Select your iPhone, iPad, or iPod touch when it appears in iTunes under Devices.
  4. Select the Summary tab.
  5. Select the Restore option.
  6. When prompted to back up your settings before restoring, select the Back Up option. If you have just backed up the device, it is not necessary to create another backup.
    Prompt text: "Do you want to back up the settings for the iPod before restoring the sofware?"
  7. Select the Restore option when iTunes prompts you (as long as you've backed up, you should not have to worry about restoring your iOS device).
    Prompt text: "Are you sure you want to restore the iPod to its factory settings? All of your mnedia and other date will be erased."
  8. When the restore process has completed, the device restarts and displays the Apple logo while starting up:
    Prompt text: "Your iPod has been successfully restored to factory settings, and is restarting. Please leave your iPod connected. It will appear in the iTunes window after it restarts."
    After a restore, the iOS device displays the "Connect to iTunes" screen. For updating to iOS 5 or later, follow the steps in the iOS Setup Assistant. For earlier versions of iOS, keep your device connected until the "Connect to iTunes" screen goes away or you see "iPhone is activated."

Securely Erasing your Android Device

If you have an Android phone or tablet, you also have an easy option to securely erase the data, though it's not quite as simple as with Apple devices since Android spans many versions and many supported devices. Within the Privacy settings dialog there is an option to delete all the data. The Google online manual for Android describes the option this way:
Opens a dialog where you can erase all of your personal data from internal tablet storage, including information about your Google Account, any other accounts, your system and application settings, any downloaded applications, as well as your music, photos, videos, and other files.
You should make sure that you have selected the options to delete internal memory and any memory on an SD card. If you don't have that option, the easiest thing to do is simply remove the SD card before donating, recycling, selling or giving the device away.

Securely Erasing your Blackberry Device

Research In Motion's Blackberry devices differ in the steps to wipe them. Instead of trying to mention all versions and models, I'll suggest you look through Settings, Options or Device Settings to find something that mentions Security. In there, you should find something that says Security Wipe or Wipe Handheld. For specific directions you may be able to search the web and find help.

The same advice for SD cards applies to the Blackberry as to Android phones. If the device doesn't offer you the option, you should probably just keep the card.

Securely Erasing your Windows Mobile Device

Microsoft Windows Mobile also has multiple versions with different ways to securely erase your data. And again, my advice is to explore on your own. Look through Settings or Options until you find something called System Tab, Security, or About. In there you should see Reset, Reset your Phone or Clear Storage. Again, for specific directions you may be able to search the web and find help.

And for SD cards on Windows Mobile devices, you should likely just keep it.

Securely Erasing your Desktop or Laptop

There is some great free software called DBAN that will securely erase data from your desktop or laptop. I've personally used it for years and highly recommend it. Simply download the ISO file, burn it to CD or DVD, then boot the computer from that disc and let DBAN wipe the drive. The method for burning an ISO varies across operating systems; if you need help, search the Internet and you'll find lots of information.

Off Topic: Traveling with Technology

There was a Twitter conversation with Martin McKeay and Jerry Gamblin today talking about how geeks handle traveling with all our technology. Jerry suggested that Martin write a blog post, but I decided to beat him to the punch. ;) This is part of an upcoming series of posts to my travel blog under the heading of Traveling Skills: The Art of Packing. In this post I describe how and what I pack as a geek who travels with technology, as well as why. Even though it's a bit off topic, I'm mentioning it here since so many of the folks who read this blog travel a lot.

I hope you enjoy it! Tips for Traveling with Technology

Detecting DNS Changer Infection with CloudFlare and OpenDNS

If you're using CloudFlare to enhance speed and security (it's a great, free service, by the way!), you'll want to check out one of their latest apps, created in conjunction with OpenDNS. The app will notify your website visitors if they are infected with the DNS Changer malware.

If you're not familiar with the DNS Changer malware, it modifies the DNS settings of victim computers, rerouting traffic to banks and other sites of interest through systems controlled by the botmasters. This means sensitive information could be compromised.

Last year the FBI was able to legally take over the DNS Changer rerouting systems, protecting the victims to some degree. However, the FBI has to relinquish control in July, meaning victim systems which have not been fixed will be unable to access websites as normal. The FBI has an in-depth writeup on the DNS Changer malware (PDF link), along with information on how to find out if you're infected and how to fix the problem.
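If you'd rather check a machine by hand, the heart of the test is simple: see whether the configured DNS resolvers fall inside the rogue address ranges listed in the FBI writeup. Here's a minimal sketch for a Unix-like system; the two ranges shown are examples only, so populate the list from the FBI document rather than trusting them:

    import ipaddress

    # Rogue resolver ranges -- fill this in from the FBI's DNS Changer writeup.
    # The two entries below are examples only.
    ROGUE_RANGES = [
        ipaddress.ip_network("85.255.112.0/20"),
        ipaddress.ip_network("67.210.0.0/20"),
    ]

    def is_rogue(resolver_ip: str) -> bool:
        addr = ipaddress.ip_address(resolver_ip)
        return any(addr in net for net in ROGUE_RANGES)

    # On Linux and macOS the configured resolvers usually live in /etc/resolv.conf.
    with open("/etc/resolv.conf") as f:
        for line in f:
            if line.startswith("nameserver"):
                ip = line.split()[1]
                print(ip, "-> possibly infected" if is_rogue(ip) else "-> ok")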

Enter the CloudFlare application. If you enable it, CloudFlare will notify visitors to your website who are infected with DNS Changer, and it provides a link with instructions on how to fix the problem.

Friday, May 04, 2012

Firewalls and Anti-Virus Aren't Dead - Should They Be?

Over the last several years, firewalls and anti-virus have been losing effectiveness. Many in the information security community have recognized this; unfortunately, many business and operations people haven't. The threats these technologies (tools that assist in a solution, not the solution themselves) were designed to counter have changed. That's not to say they do nothing - they can still be useful - but your organization needs to know what they're meant to counter and how to use them properly.

I was inspired to finally write this down by a story Wendy Nather contributed to Infosec Island, entitled Why We Still Need Firewalls and AV. While I agree with her general premise, I think the article doesn't get to the real heart of the issue. When firewalls and anti-virus were all we had and effectively countered the threats we faced, they tended to be used more as they were designed. But now, firewalls and anti-virus don't counter the majority of the threats and aren't used very well.

Firewalls were invented a couple of decades ago to keep Internet-borne threats out. The firewall has its roots in the early 1990s, a time when commerce was prohibited on the Internet and most companies didn't have any presence there. As computer networking grew in popularity, connecting to the Internet became a way to share information across organizations as well as internally. Within a decade, however, Internet attacks were prevalent and organizations needed a way to protect the devices on their networks. The firewall was popularized as a way to enforce a hard separation between the outside and the inside. The major advantage of this approach was that it was much cheaper than securing every single device - and at the time it was just as effective, since most devices had no need to communicate over the Internet and only a small set of connections was allowed to pass through the firewall.

The Internet landscape has changed drastically since then, and with it the Internet threats. Modern business processes are highly dependent upon and thoroughly integrated with the Internet. Organizations invite masses of Internet-connected devices into their networks to deliver web pages and email, support mobile devices and serve dozens of other purposes. At the same time, devices within the network routinely initiate communications to the Internet and pass data back and forth. Firewalls have gotten better, but they simply can't handle the new ways in which organizations work on the Internet, nor the more sophisticated threats. They still have a use as a tool to protect networks, but more tools are needed.

Similarly, anti-virus was first developed to detect, prevent and remove individual viruses. These software packages were simplistic, identifying malicious programs and files by looking for indicators or "signatures" that were unique to each virus. This was, again, before the Internet was widely used and most virus transmission was very slow. The anti-virus industry was easily able to keep up with new viruses and forms of existing viruses. This was a time when the number of specimens was very small and they didn't change very often. Updating the signatures was a task done once a year or so, and in fact when the subscription-based licensing model for anti-virus was initially launched it was widely viewed as somewhat of a betrayal of trust - paying continually for the same software. It was a different time.

But today's situation is vastly different from what anti-virus was designed to deal with. Because of the proliferation of Internet connectivity, malicious software spreads very quickly - instead of taking months to reach thousands of systems, it takes seconds to reach millions. The malware itself has also become much better at avoiding detection, taking steps to hide its signature. Most viruses today are obfuscated a number of times and checked to make sure no anti-virus software can detect them - all in a matter of seconds, and all before they're sent out to their victims. Viruses are often created and distributed in such a way that anti-virus vendors never get a copy before the malicious software finds its victim. And once it infects a device, it frequently disables any anti-virus software and hides in such a way that it can't be detected except by sophisticated, usually manual techniques. Anti-virus software still relies largely on signature-based detection, which is increasingly ineffective and often slows down system performance.
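To see why signatures fall behind so quickly, here's a deliberately oversimplified sketch. Real engines match byte patterns rather than whole-file hashes, but the weakness is the same: the smallest change to a sample produces something the signature no longer recognizes.

    import hashlib

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # The vendor captures a sample and ships its fingerprint as a "signature".
    captured_sample = b"...malicious payload..."
    known_bad = {fingerprint(captured_sample)}

    def flagged(payload: bytes) -> bool:
        """Signature-based detection: exact match against known-bad fingerprints."""
        return fingerprint(payload) in known_bad

    repacked_variant = captured_sample + b"\x00"   # trivially repacked: same behavior

    print(flagged(captured_sample))    # True  -- the known sample is caught
    print(flagged(repacked_variant))   # False -- one byte of change evades the signature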

Further decreasing the effectiveness of firewalls and anti-virus in organizations is the way they're used. Because of the massive number of connections in and out of a network, definitions of what is and is not allowed and exactly how to allow or deny network connections have become a sprawling mess. And underneath all this complexity, many organizations don't even do the basics right - properly configuring and managing these tools. And administering anti-virus often means running daily reports of issues and sending a technician onsite to manually investigate what's gone wrong. Firewalls and anti-virus cost many organizations millions of dollars a year and are failing to do what they should.

So why should we keep these things around? In the case of firewalls, they do exactly what they are supposed to do and do it quite well; organizations just need to get smarter about using them. That means limiting firewalls' purpose to what they do well and handing off other duties to other tools. In addition, organizations - even small ones - need a good firewall management program. And anti-virus should be rethought as part of a broader concept of endpoint protection: securing configurations and access, restricting software to that which is known to be safe and putting tools in place to detect anomalous behavior. Anti-virus packages can help fulfill that last piece - telling systems administrators that a known threat has been detected or that suspicious activity is happening.

But one thing I think we as security professionals should be advocating is reducing the amount of money and resources spent on these technologies and shifting to more effective ways to secure an organization - for example, better training for IT staff on using the existing tools and technologies, or improved security awareness programs so that viruses (not to mention many other types of attacks) are less likely to be effective. In the end, this allows an organization to maintain the same level of security at a lower cost or to increase security at the same cost.

Wednesday, May 02, 2012

What Infosec Can Learn from Enron

Enron's financial auditors and management conspired against their investors. The system that was supposed to protect against this kind of fraud instead worked against the people it was supposed to protect, and there was hell to pay when the organizations collapsed and the fraud was exposed. The Harvard Business Review today makes the point that just because an auditor approves something, that doesn't always mean it's right.

Information security professionals, take a lesson from Enron: auditors aren't the sole authoritative voice, and they can be fooled or coerced just like anyone else. Too often internal and external auditors are trusted as the arbiters of what's right and wrong. But this can fail an organization if the executives don't understand what role the auditors should be playing.

Auditors serve as an important check on the system by assessing against a known framework. But there is always room for interpretation in any standard. That's especially true in areas where standards are evolving quickly or where a new field is opening up. That was the case for Enron with the "mark to market" strategy, and that is true today in Infosec.

How do auditors fail the organizations they serve?
Let's use the Payment Card Industry Data Security Standard (PCI-DSS) as an example. The PCI-DSS has done a lot of good over the years it has been around. But as IT, payment systems and threats have changed, it has had a hard time keeping up. As an instructor famously said in a class I attended, the DSS only changes once every two years; but the Security Standards Council (SSC) can change the meaning of the words they use at any time.

The PCI-DSS has also been heavily misinterpreted. The standard is meant to be flexible so organizations can find the right security controls, rather than blindly following what's written. However, many auditors stick staunchly to the standard, verbatim. That means the company either has to jump through hoops to get its official compliance stamp, or can game the system to fit within the narrow definitions. Other auditors are so easily influenced or coerced by their clients that virtually any control is deemed adequate.

And there's room for abuse of the standards, as well. Some audit companies are well-known for providing "clean" or "green" reports to their clients (sometimes those who spend above a certain dollar level), regardless of what the actual security looks like. Breaches have left several organizations wondering why they paid high fees to auditors who didn't find the security flaws.

So it's important to know how much to trust your auditors and what role they serve. You can't give them authority to make your decisions for you, but you can use them as advisers. In the Enron case, their auditors had huge amounts of business in other areas, meaning there was a conflict. In your organization the auditor may be trying to get a big contract, unseat a competitor, make a name for himself or whatever. In these cases the bad advice is almost always unintentional, but still present.

Probably once every month or two I speak with a high-level executive looking to hire someone to check behind their auditor, usually because the executive suspects one of the failures above. In reviewing the auditor's work I usually find that the executive's instinct is right.

How can you help your audit succeed?
Choose your auditors carefully and use the right process. I helped write the SANS whitepaper How to Choose a Qualified Security Assessor (PDF link), and there are other good questions to ask when choosing an auditor elsewhere. But it goes beyond just choosing the right auditor; you also have to have the right audit process in place. Here are some tips to avoid the pitfalls that got Enron into trouble.
  • Evaluate Reputation. Not just whether they've done a lot of audits before, or whether all their clients pass, but whether they are perceived to have high integrity, technical capabilities and security knowledge. Don't get this from the auditor or their hand-picked references; ask around. Reputations follow companies and people, and they spread quickly.
  • Evaluate Skillset. Auditors who fall too far toward leniency or rigidity often do so because they are not well-versed in IT and security. That means they don't understand the intent of the standards they're auditing against, which means they can't possibly give you good advice that goes beyond the letter of the standard.
  • Oversight. Make sure there is good oversight of the auditors that are performing the work. This could be done by an internal audit group, a CISO or even a CIO. The point is that someone needs to make sure the work is not just done, but done right - thorough, accurate and independent.
  • Use Auditors as Checks. Don't forget that auditors should be checks on what you're doing; they shouldn't be telling you exactly how to do it. They've often got a lot of good advice, but you have to weigh that advice carefully within the context of your business and your ethics.
But even a thorough due-diligence process can leave you stuck with a bad auditor. That happens. When it does, you've always got the option to get a second opinion. And I'll leave you with a great video on some signs you've got a bad auditor.

Saturday, April 28, 2012

What Biosecurity Can Learn From Infosec

Introduction
Recently there has been debate over research on the H5N1 strain of influenza, the strain sometimes called avian flu or bird flu. Many researchers have been studying all they can about the disease, while other researchers, institutes and governments are trying to prevent further research. The arguments on both sides are complex and nuanced, and each side has many valid points.

While I don't want to recap the entire argument, I'll try to summarize each position in a sentence or two. The pro side believes that legitimate research will help us deal with any eventual version of the virus that can spread from human to human. The con side believes that the research makes it more likely that a very deadly strain will make its way out of a lab, or that terrorists or governments will be able to develop a weaponized strain more quickly.

Current Situation
While public science on H5N1 may cease, the virus itself will keep evolving. Life will always experiment with new forms, and eventually one of these may be a variant of H5N1 that spreads successfully from human to human - that is, one that finds a new evolutionary niche it can exploit.

And certainly it's to be expected that organizations currently researching biotechnology for warfare would continue. What we saw with chemical weapons in the First World War was private research done by corporations being co-opted for use by the military. Bioweapons research groups may even reduce or discontinue their work in the presence of public research, because any effective weapon is likely to be much less effective once it is well understood and can be combated.

Comparison to Infosec
There has always been a lot of research into finding security vulnerabilities in software. Some researchers look for vulnerabilities so issues can be fixed; some look for vulnerabilities so they can break into systems. And so the Infosec community used to have discussions similar to those going on in the Biotechnology and Biosecurity community now.

That debate always used to remind me of a bad movie chase scene. When the person fleeing sees a big rock coming up quickly, he stays calm and turns at just the right angle - a near miss. When the person chasing sees it he panics and throws his arms in front of his face - a gratuitous explosion ensues.

Fortunately, the Infosec industry has mostly moved toward the first course - staying calm and taking just the right angle. But for a while we, too, had lots of people who tried to make the rock go away by hiding from it. Often it was software developers who reacted more violently to the legitimate research than to the criminal research! In the end, though, software developers have benefited, with much more robust and secure products.

Benefits of research and publication
Research helps in that it:
  • identifies potential issues before they are found in the wild
  • allows us to prepare for likely strains before we see them
  • lets us refine methods of doing this kind of research
  • gives us a better idea of the actual threat level - more or less severe than imagined
  • shows us indicators of what an outbreak might look like
  • shows us indicators of what an attacker might need to create a bioweapon
Publication helps in that it:
  • publicizes the fact that these risks exist and are being studied
  • attracts more scientists
  • attracts more funding
  • allows results to be peer reviewed
  • identifies those working in the field to facilitate collaboration
Alleviating fears
One fear is that the research may be co-opted by a nation state for biowarfare. But I would argue that the antidote for that is more research, not less. The same fear exists in Infosec, and more research is exactly how we've dealt with it: hundreds of people doing security research, banging away on software. They're not going to find all of the bugs, but they're likely to find quite a few of them. Looking at it from the other direction, if governments quash open, public research, the only people looking for a bug will be the ones looking for a weapon. And of course you can't legislate nature out of finding a virulent strain.

But like in the Infosec community, there is still a question of the right amount of disclosure. Some in both fields advocate a full disclosure stance. That is, every detail should be published as soon as it is known. Others advocate a more limited disclosure policy, only publicly releasing enough information to describe the issue and to protect against it. Releasing certain technical details only to those who will be a part of the solution to the problem.

Publishing technical details is important for a couple of reasons. First, it ensures that the results can be replicated. This part of the scientific process is critical to the reliability of the results, as well as to identifying potentially significant but unknown variables or mistakes. Second, it provides a foundation upon which future scientists can improve their techniques. Process and methodological innovation are critical to the scientific process, especially in this case, where nature and bioterrorists are continually improving their results.

In many minds, the biggest fear is that we do nothing. If nature or a bioweapons group creates a viable threat, our lack of preparation will doom us to a greater impact. But if we understand the H5N1 virus well, we will either have defenses in place or be able to take action quickly. And as previously mentioned, public research may discourage groups from trying to develop weaponized versions in the first place.

Summary
Biotechnology should be looking (particularly in the case of Avian H5N1 Influenza) to increase scientific study and publication, rather than suppress it. The more scientists who work on it and publish their results, the more likely we are to find a way to defeat both a natural and unnatural strain of the disease. But certain technical details should be limited to a smaller group who rigorously review those details to make sure legitimate researchers stay ahead of the alternatives.

Articles
http://www.virology.ws/2012/01/03/should-we-fear-avian-h5n1-influenza/
http://www.economist.com/node/21553448
http://arstechnica.com/science/news/2012/02/maybe-avian-flu-isnt-that-deadly.ars
http://arstechnica.com/science/news/2012/02/study-of-deadly-flu-sparks-debate-amidst-fears-of-new-pandemic.ars

Update 2012-05-07: Since I published this article, I've had some discussions and there have been some new developments.
  • First, the paper in question has been published, and Nature has written a good article explaining the circumstances around publishing it.
  • Second, I want to make clear that what I'm advocating is for Biosecurity to review our discussions and debate and apply it to their own situation. In other words, learn from our mistakes, successes and thought processes to speed up and improve their own.
  • Third, I've replaced "Biotechnology" with "Biosecurity" where it seems appropriate, in order to clarify to whom I am referring. I know Biotech spends billions and has well-developed research processes in place; we in Infosec can probably learn from their process.

Friday, April 20, 2012

Cybercrime Does(n't) Pay?

Earlier in the week a couple of Microsoft researchers released a study of cybercrime financial loss statistics (Sex, Lies and Cybercrime Surveys - PDF link). Effectively their research indicated that bad sampling, survey and statistical methods have led to a number of dubious results. I think most of us who are involved in the industry have known this intuitively for a while. Any time you have metrics purportedly for the same thing that vary by factors of 10-1000, that says something isn't quite right. 
 
The conclusion of the paper is essentially that estimates of the cybercrime economy are grossly exaggerated. They make the point well enough that I won't belabor it here - go read the actual article (linked above). I'm more interested in how this applies to other areas and studies. Here are a few points from the paper that I think are particularly relevant, along with a couple of my own.
  1. Heavy Tails. Means (averages) are most useful when the data cluster closely around that number. When the distribution is very wide - especially when a few enormous values dominate - the mean can badly mislead. For example, saying that the average cost of a DVD player is $100 tells you almost nothing meaningful about the market for DVD players, because prices range so broadly that the mean lands almost arbitrarily somewhere in the middle. (A short simulation after this list shows the effect.)
  2. Garbage In, Garbage Out (GIGO). Since the data in these studies is typically collected through surveys, it's impossible to verify its integrity. In some cases people outright lie; in others they simply don't know the true costs and are guessing. Individual guesses may be higher or lower than the actual figure, but since losses are never negative, the errors almost necessarily push the total higher than the true value - by how much, it's impossible to know.
  3. Attribution. It's not easy to know where fraud came from. How do you know somebody stole your credit card number from an online database, rather than going through your trash or copying the card at the restaurant down the street? This kind of attribution is especially hard for consumers, who often only learn about an incident after actual fraud occurs or when they are issued a new card. If both things happen within a year or so, the consumer is likely to think one caused the other, though as we know, correlation does not imply causation.
  4. Self-Selecting Population. The people who respond have at least one thing in common - they return surveys. They may have other things in common too, like a tendency to overestimate numbers or a particular susceptibility to cybercrime, along with dozens of other traits that could influence the validity of these studies.
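As a quick illustration of the heavy-tail problem from point 1, here's a small simulation with made-up numbers (using numpy). The mean comes out far above the median, so the "average loss" describes almost nobody, and removing a single large response visibly moves it:

    import numpy as np

    rng = np.random.default_rng(42)

    # Made-up "survey responses": most report small losses, a few report huge ones.
    losses = rng.pareto(a=1.2, size=10_000) * 100

    print(f"mean loss:   ${losses.mean():12,.0f}")
    print(f"median loss: ${np.median(losses):12,.0f}")

    # Drop just the single largest response and watch the 'average' move.
    trimmed = np.sort(losses)[:-1]
    print(f"mean without the top respondent: ${trimmed.mean():12,.0f}")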
This isn't just a problem that affects cybercrime statistics, though. The Ponemon Institute annually puts out a similar report on losses due to breaches (as well as a report on cybercrime). Their methodology is similar to the ones discussed in the Microsoft paper, and it therefore suffers from some of the same flaws. To get results over time that show a trend consistent with expectation, I suspect some data manipulation goes on, which would add yet another layer of bad science (if true - I only have my gut instinct to go on, not any facts).
 
One group that tries obsessively to get the science right is Verizon Business, which puts out an annual breach report. It uses much better science and statistics and can be counted on to have some rigor. Results can vary wildly year over year because new data populations are always being introduced (several groups contribute their figures, most serving a different demographic), but that will normalize somewhat over the next five years as the data set gets big enough that new populations skew it less. The raw data is collected and published openly so there can be good peer review of it - an important step for ensuring the validity of results and conclusions.

But these studies don't have to be done poorly. By changing the way they go about it, researchers could get much better results. For example, if instead of surveying consumers the authors had been able to get the information from banks, the numbers would likely have been very different. Banks have objective measurements of customer losses, a properly large sample size and a more representative population, and they usually know more about the source of the fraud.

Although many of these studies fail at basic science, I'm hopeful that the information security industry will get better - both at true academic research and at coming up with accurate public metrics for the data that matter most. We'll get there as we mature as an industry, but it will take a while. Until then, stay skeptical.

Thursday, April 19, 2012

Back: Better, Faster, Stronger

I'm back! After about a four-year hiatus in this space, I plan on remaking my place in the security blogosphere. Not that I haven't been active in security since then - I have! And I've been involved in the community, too. But this space has been conspicuously vacant as I've tried to maintain a relatively low profile.

But now I'll be back to saying what I think publicly, rather than filtering it through a corporate lens or self-censoring. I'll be posting as often as I find the time to cobble something together. If the past four years of output are anything to judge by, that will probably be a lot of stuff coming your way! I'm also going to play around with the content, format and delivery - keeping it loose and entertaining as well as informative.

One key to that, I think, will be to make better use of social media. I'm going to start off with Twitter, as that tends to be where most of my colleagues and peers gather. So if you haven't already, hit me up @beauwoods.