Trustworthy Computing & The Security Development Lifecycle

On January 15, 2002, Bill Gates distributed a memo titled ‘Trustworthy Computing’. The memo outlined his vision for making security a first-class concern during application development, declaring that when facing a choice between adding features and resolving security issues, security must always be prioritised. An excerpt:

“If we discover a risk that a feature could compromise someone’s privacy, that problem gets solved first. If there is any way we can better protect important data and minimize downtime, we should focus on this. These principles should apply at every stage of the development cycle of every kind of software we create, from operating systems and desktop applications to global Web services.”

From this, the Security Development Lifecycle (SDL) was eventually born: a process for reducing software maintenance costs and improving security-related reliability through documented phases such as threat modelling and attack surface analysis.

On the 13th of May, I used part of my scholarship from the Royal Academy of Engineering to fly out to the United States to attend Microsoft’s Security Development Conference 2013. Held in San Francisco, California, the two-day event served to raise awareness of the technical and developmental challenges faced when writing secure software, and to promote use of the SDL.

With this being my first time in the US, I quickly gained an appreciation for the cultural and climatic differences between London and SF, and my experience was most gratifying. I was skeptical that a place could exist where rain wasn’t as predictable as the sun coming up, but I was swiftly proven wrong. I landed sometime in the afternoon, and though my watch told me it was 11pm London time, I didn’t feel like behaving as though it were true.

I was fortunate enough to have a long-lost friend living relatively nearby, and we used the opportunity to catch up, which shaped my US experience with definitive positivity. I quickly learned that I don’t get jet lag. We ate unhealthy American burgers and healthier (and real!) Mexican food, thoroughly experienced SF culture, and saw the sights.

It’s like being back in uni again!

But when we weren’t out being tourists, I was sat in the InterContinental, in true lecture fashion, hearing talks about the SDL from a variety of organisations including Adobe, Cisco, IBM, Twitter, Verizon and HP. The conference kicked off with Steve Lipner, Scott Charney and Howard Schmidt giving an account of the early days of trustworthy computing at Microsoft and beyond.

The day continued with a strong emphasis on the ISO 27034-1 standard, with talks split into three tracks: Engineering for Secure Data, SDL & Data Security, and Business Risk & Data Security. I chose to attend mostly the talks in the engineering track, and though they weren’t as technical as I’d hoped, they offered useful considerations for programmers developing secure software.

“Security at development time is rapidly becoming conventional wisdom”

Standards shouldn’t be prescriptive, or industry will invent its own improvements faster than the standards can catch up. An example we were given was the Data Encryption Standard (DES), which took a significant amount of time to move from 56-bit keys to 3DES and, eventually, AES. By the time this had happened, industry had moved on and solved the encryption-strength problem itself.

“A 472% increase in Android malware since 2011”

A number of Gartner studies were referenced in the talks on mobile application security, with a reported 5.6 billion mobile connections existing today. It was therefore very important that the basics of security be honoured by developers, who are often guilty of over-provisioning the permissions for their apps. They should:

  • Use HTTPS not only for posting sensitive data, but also for presenting the forms, to resist snooping
  • Use a suitable cryptographic cipher suite with appropriately strong keys for encryption
  • Sandbox applications, for robustness against datamining and brute-force attacks
  • Apply static application security testing for XSS / SQLi remediation
  • Identify taint propagation by threat modelling and tracing inputs from source to sink (a sketch follows this list)
  • Keep interesting information out of the source code
  • Mitigate side-channel data leakage by checking caches, logs and clipboards
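
To make the source-to-sink idea concrete, here is a minimal C sketch of taint flowing into an injectable query; db_exec is a hypothetical stand-in for a real database driver, not anything shown at the conference:

    #include <stdio.h>
    #include <string.h>

    /* Sink: hypothetical stand-in for handing a query to a database. */
    static void db_exec(const char *query) {
        printf("executing: %s\n", query);
    }

    int main(void) {
        char name[64];
        char query[256];

        /* Source: untrusted user input enters the program here. */
        if (fgets(name, sizeof name, stdin) == NULL)
            return 1;
        name[strcspn(name, "\n")] = '\0';

        /* Taint propagation: the input reaches the sink unmodified, so
           an input like  x' OR '1'='1  turns the query into SQLi. */
        snprintf(query, sizeof query,
                 "SELECT * FROM users WHERE name = '%s'", name);
        db_exec(query);
        return 0;
    }

The remediation is to keep code and data separate at the sink – in practice, the parameterised queries or prepared statements that real database APIs provide.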

Elevation of Privilege – A threat modelling card game

These points, though seemingly obvious, appear to be forgotten in some high-profile cases out there today. Vulnerability reports are still dominated by stored XSS and SQLi exploits, and though it has taken roughly a decade (2000–2012) to go from a ‘do nothing’ approach to a reactive one, we are only just beginning the transition to the proactive.

Security without usability, however, is paralyzing. We heard about the importance of keeping software functional, secure and usable in a talk on cloud security, measured boot and UEFI, and were given a demonstration of the Trusted Platform Module (TPM) being used to whitelist or blacklist firmware changes. The implication was that, in an enterprise environment, one could deny access to internal resources to any device that did not comply with policies such as being on the latest OS update. I was rather skeptical about this talk, since I felt there was a contradiction between the promise of usability and the practice of firmware-based blocking for security.

We must keep security transparent to the user, and if many users cannot get online because an OS update came out the previous day, both usability and productivity are impacted. Yet we learn nothing about the security of the devices themselves – we simply block them for not being up to date. This, though preventative, borders on software-based paranoia to me, and feels like it would yield far more problems than it hopes to solve.

The most technical talk I experienced was one on Integral Security, given by Robert C. Seacord from CERT. He had been asked to give, in one hour, a talk which he said would typically need a full day, so we got an extremely fast-paced whirlwind tour of the integer types in the C standard, with discussions of wraparound and overflow. Hearing the story of a wraparound error in a 16-bit counter causing Comair to ground 1,100 flights on Christmas Day in 2004 helped emphasise the need for careful bounds checking and type safety.
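
As a hypothetical sketch of that failure mode (illustrative values only, not Comair’s actual code), a 16-bit counter simply runs out of room:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* A 16-bit signed counter approaching its maximum of 32767. */
        int16_t changes = 32765;

        for (int i = 0; i < 4; i++) {
            printf("change count: %d\n", changes);
            /* The increment itself happens in int, but converting the
               out-of-range result back to int16_t is implementation-
               defined; on common platforms it wraps to -32768. */
            changes++;
        }
        return 0;
    }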

Most programmers wouldn’t think twice before adding an unsigned char to a signed char and storing the result in an int, yet mixing signedness without understanding the implicit conversions involved is terribly bad practice. We saw examples of arithmetic overflow in operations where it might not be obvious, such as division, modulo and comparison. As C programmers we must avoid conversions that lose value or sign, and be mindful that only conversions to a type of greater rank and the same signedness are guaranteed to be safe. The bottom line:
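
Two of those surprises, sketched in C with illustrative values of my own:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* Mixed-signedness comparison: -1 is converted to unsigned and
           becomes UINT_MAX, so this prints "false". */
        int si = -1;
        unsigned int ui = 1;
        printf("-1 < 1u : %s\n", (si < ui) ? "true" : "false");

        /* Overflow hides in division too: INT_MIN / -1 cannot be
           represented in an int and is undefined behaviour, so guard. */
        int a = INT_MIN, b = -1;
        if (b != 0 && !(a == INT_MIN && b == -1))
            printf("%d\n", a / b);
        else
            printf("division would overflow\n");
        return 0;
    }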

“In C world, know what you’re doing and be careful”

On the second day of the conference, we heard talks on Single Sign-On and Claims-Based Authentication, a roundup of SDL adoption, and a rather curious talk by Brad Arkin, Chief Security Officer at Adobe, titled “Accepting Defeat and Changing The Battle Plan”.

He called out his experience that ‘making software more secure by finding and fixing vulnerabilities in code’ was a ‘complete waste of time’, and backed up his claim with cases where Adobe Reader and Flash Player suffered their greatest series of zero-day vulnerabilities after extensive fuzz-testing and fixing. His emphasis here was on tackling dogma in the security profession, explaining that “you don’t have to outrun the bear, just the last person”.

Adobe Reader zero days after fuzz-testing

This talk was somewhat unsettling for me, as I couldn’t help but feel a tone of defeatism about the topic. Though fuzz-testing had no impact on the number of attacks Adobe suffered, I would argue there could have been significantly more exploits against them had there been no testing at all. He explained that by outrunning the last person rather than the bear, they reached a point where “vulnerabilities were still there, but mitigations made it harder to exploit, so attackers moved to exploiting SWF in Office”. It seemed the priority here was not to write secure code, but to shift hackers’ targets onto others. By dismissing fuzz-testing and fixing as pointless, the talk could, I felt, be misread as ammunition against adopting SDL practices, which would be counterproductive.

Though it may be the case that, in practice, developers won’t write mathematically provably secure code as they did at university, the attitude that it will fail from the outset is toxic. I believe the same level of effort we put into reasoning about concurrent code for deadlock prevention should go into secure development, and the attitude from the outset should be to strive obsessively for robust, functional and secure code, from the ground up. Fuzz-testing may not change the fact that the security model of an application is flawed, but its value in the first few stages is significant and essential. I do, however, understand that it isn’t a one-size-fits-all solution to closing vulnerabilities – careful work and investment will be necessary to fix what’s left. It is these concepts, rather than the dogma, that I feel the talk argued against.
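
For the record, the technique itself is conceptually simple. Here is a hypothetical sketch in C, with parse_record standing in for whatever input-handling code is under test (the bug is deliberate):

    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Hypothetical parser under test: treats the first byte as a payload
       length. The missing bounds check is exactly the kind of bug that
       fuzzing finds within seconds. */
    static void parse_record(const unsigned char *buf, size_t len) {
        unsigned char payload[16];
        if (len < 1)
            return;
        size_t payload_len = buf[0];
        /* BUG: payload_len may exceed both len - 1 and sizeof payload. */
        memcpy(payload, buf + 1, payload_len);
    }

    int main(void) {
        unsigned char buf[64];
        srand((unsigned) time(NULL));

        /* Hammer the parser with random inputs; sooner or later the
           corruption crashes the process, and running under a memory
           checker turns that crash into a precise report. */
        for (;;) {
            size_t len = (size_t)(rand() % sizeof buf);
            for (size_t i = 0; i < len; i++)
                buf[i] = (unsigned char)(rand() % 256);
            parse_record(buf, len);
        }
    }

Every crash that loop produces is a concrete, fixable bug, which is precisely the value I’d hate to see dismissed.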

Google I/O at the Moscone West Conference Center

Ultimately, I gained enough from the Security Development Conference to further appreciate the significance of considering security from the ground up when designing software. The talks were motivating and useful, with a bit of fun in the form of PHP-bashing too. I met developers from various startups, and some cool folks from NIST. It also turned out that we were two blocks from the Google I/O conference, which explained the numerous sightings of people wearing Google Glass.

I returned to London eagerly anticipating my next trip to the US, as the conference and my experiences have given me much to reflect on. One quote in particular that I liked, from Robert C. Seacord:

“Compiler warnings are the first line of defence. The zeroth line of defence is knowing how to code correctly.”

So I guess ‘real’ programmers do pay attention to warnings after all…

– Alex Kara

All views expressed in this post are those of the author and are not representative of any associated entity.
