
Software Security Liability Regimes: The Good, The Bad and the Buggy

Untangling the nuances of liability regimes for software makers that rush code to market.

The United States is exploring liability regimes to hold software makers accountable for security bugs. But whilst the concept is a no-brainer on paper, Danny Bradbury finds that it’s just not that simple.

Most of us skip past software end-user license agreements (EULAs) without a second thought, eager simply to get our job done or immerse ourselves in the latest entertainment experience. Those ‘agreements’ often include waivers that shield vendors from legal action should their software cause damage.

This sour-tasting ‘take-it-or-leave-it’ approach might be about to change. The White House has signalled that it’s progressing with a controversial tech policy: a software liability regime.

“We must drive toward a future where defects in our technology products are a shocking anomaly,” said CISA director Jen Easterly, testifying before the House Select Committee on Strategic Competition Between the United States and the Chinese Communist Party in January 2024. “A future underpinned by a software liability regime based on a measurable standard of care and safe harbor provisions for software developers who do responsibly innovate by prioritising security.” At this point, we’ll drop in the Wikipedia definition of a safe harbor: “a provision of a statute or a regulation that specifies that certain conduct will be deemed not to violate a given rule.”

A week later, National Cyber Director Harry Coker said that he’s working to implement some of these requirements.

“The Strategy says we need to hold software manufacturers accountable when they rush insecure code to market: so, we’re working with academic and legal experts to explore different liability regimes,” he said, speaking on February 7 at the Information Technology Industry Council’s Intersect tech policy summit. He promised engagement to hear the industry’s perspective “soon”.

Neither the National Cybersecurity Strategy itself nor the implementation plan that the Office of the National Cyber Director (ONCD) released last July goes into much detail on what a software liability framework might look like. The plan calls for a legal symposium to explore possible approaches to the issue, including discussion of a safe harbour framework to protect companies that demonstrate secure software development practices.

A long-debated topic

Sandy Carielli, principal analyst at Forrester, says that software liability has been a long-debated topic.

“They’ve been batting about this question of how they can drive better ownership of security at the software vendor level,” she says. “I think it’s come up a lot more recently pertaining to some of the NIST guidance and executive orders around just application security and security by design.”

Carielli cites a Forrester blog post about the February cyber-attack on pharmacy claims processor Change Healthcare, which delayed prescription processing and disrupted healthcare operations. With incidents like these having an immediate material effect on people, she warns that the stakes are rising: “The reason these conversations are happening is because there’s the recognition that software outages can kill people.”

These liability discussions are emerging outside the US, too. This April, the Product Security and Telecommunications Infrastructure Act 2022 comes into force in the UK. It applies to companies that make, import, or distribute ‘smart’ connected products in the UK, and allows fines of up to £10 million or 4% of annual global revenue (whichever is higher) for companies that violate the rules, plus up to £20,000 a day for continuing violations.

Untangling the nuances

While appealing at the headline level, the idea of holding software vendors liable for flaws is fraught with questions, warn experts. For one thing, what constitutes liability? Some cases might be more obvious than others.

In many instances, customers might have failed to install available patches. “More often than not, when it is a software vulnerability, it’s an old one,” says Paul Holland, head of research at the Information Security Forum. “It’s very rare for a zero-day vulnerability that’s completely unknown to the market to be used in an attack.”


Failing to patch software might seem to put all the liability on the user, but Carielli counters: “What is the timeline through which an organisation should be expected to upgrade a third-party library once a vulnerability is discovered in order not to be liable if they’re breached?”

Holland points out that patch requirements can change on a per-industry basis. He gives connected devices in critical infrastructure applications as an example. “A lot of them have properties built into them so that they can only work safely in that environment,” he says. “Change anything, and you have to completely revalidate that.” That process might take months.

Even when a company uses the most up-to-date version of a third-party library, zero-day bugs can still emerge. Who is liable, then, and does this change if the software involved is open source?

Who is liable for enforcing security controls?

Holland points out that software flaws aren’t the only security problem facing companies. “Most of the attacks come via some kind of phishing email,” he says. Carielli also points to insider attacks and credential stuffing as common attack vectors. Even in these cases, though, some liability could arguably still fall on the vendor for not supporting or demanding strong security controls. Did they at least support multi-factor or passkey authentication in their product, if not enforce it? Did they make least-privilege access easy to configure?

Deciding where liability begins and ends between a software vendor and a user will be a complex, case-by-case affair, says Jim Dempsey, senior policy adviser to the Stanford Program on Geopolitics, Technology and Governance and a lecturer at the UC Berkeley School of Law, in his Standards for Software Liability paper for the non-profit law publication Lawfare. He argues it will involve a “fact-finding process that allocates responsibility between them”. He says we’ll rely on common law negligence and sector-specific rules for user liability.

When considering a standard of care for software vendors, he suggests establishing a baseline set of features and controls. The US government seems to be leaning in that direction with its safe harbour concept, which would define a set of practices organisations could follow to minimise liability risk under any legislation. But what would those practices be?

Security by Design

Security by design is a vital part of CISA’s discussions. In October 2023, the agency updated the six-month-old draft of its Secure by Design guide. It includes three principles, each of which recommends several measures (a selection of which appears below):

Take Ownership of Customer Security Outcomes

  • Reduce the size of software hardening guides
  • Discourage the use of unsafe legacy features
  • Create secure configuration templates
  • Vet open-source packages and help to sustain/improve their source code
  • Align with zero-trust architecture

Lead from the Top

  • Include details of a secure by design programme in financial reports
  • Create a secure by design council

Embrace Radical Transparency and Accountability

Transparency requirements on vendors would doubtless be key, forcing them to reveal what they’re doing to make their software secure. In the US, the FDA already requires medical device manufacturers to publish a software bill of materials, for example. CISA’s document focuses on transparency with measures including:

  • Publish aggregate data on trends like customer MFA adoption and how widely software patches have been applied
  • Publish a Software Bill of Materials (SBOM)
  • Publish a vulnerability disclosure policy (and publish complete information on all vulnerabilities)

Memory safety

Another measure in the guide has been one of Easterly’s biggest pushes: the use of memory-safe languages. Speaking at Carnegie Mellon University in February 2023, she advocated for memory-safe languages as a way to stamp out buffer overflows and other memory-corruption bugs – a common cause of security flaws. This February, the ONCD released a detailed technical report reinforcing that call.
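
To make the bug class concrete, here is a minimal, hypothetical C snippet of the kind of buffer overflow that memory-safe languages rule out (an illustration, not an example taken from CISA’s or the ONCD’s documents):

    #include <stdio.h>
    #include <string.h>

    /* A classic stack buffer overflow: the buffer holds 16 bytes, but
       strcpy copies however many bytes the caller supplies. */
    void greet(const char *name) {
        char buffer[16];
        strcpy(buffer, name);   /* no bounds check: overruns buffer if name is 16+ chars */
        printf("Hello, %s\n", buffer);
    }

    int main(int argc, char **argv) {
        if (argc > 1) {
            greet(argv[1]);     /* attacker-controlled input reaches the unsafe copy */
        }
        return 0;
    }

Memory-safe languages such as Rust, Go or Java enforce bounds checking, so the equivalent code either fails to compile or halts safely at runtime instead of silently corrupting memory.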


Carielli argues that while many newer programs are likely already written in memory-safe languages, revisiting millions of lines of legacy code would be beyond many companies. “There’s only so much we’re going to be able to do to address all the technical debt and it never entirely goes away. So you do risk mitigation,” she says. CISA’s Secure by Design guide advises companies to put memory-safe wrappers around legacy code they can’t rewrite outright.

Another measure advanced by CISA is to document conformance to a secure software development life cycle (SDLC) framework. It suggests NIST’s Secure Software Development Framework as a possible assurance standard to lean on. Still, Dempsey argues that this is heavily process-based – it covers how software gets made rather than what is delivered at the end.

We need something that focuses more on a baseline set of security features and outcomes, he says – a list of must-haves and no-nos. We see some of this in CISA’s own document. A call for non-default passwords (in 2024, are we really still discussing this?) is one example; another is the recommendation to provide secure configuration templates as guardrails for customers.

Pushback on secure design

While some suggest that security by design will hold vendors to a higher standard, not everyone is convinced. Robert Graham, founder of cybersecurity consultancy Errata Security, numbers internet-scale scanning tool Masscan among his creations. He argues that security by design is harder than we think.

“Software engineers are already diligent. The reason for software bugs is complexity and the sheer amount of software we demand programmers write,” he says in his newsletter. “Security is a tradeoff. The only way to make software more secure is to dramatically increase its cost, change the market, and reduce the amount of software being written.”

As an example, Graham cites a path traversal bug mentioned in Dempsey’s paper, a class of flaw that is already well understood but extremely difficult to remove. “There is no easy fix. Whatever you come up with won’t actually reduce the number of path traversal bugs. What it will do is add a lot of cost to the software development process,” he says.
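
As a rough, hypothetical sketch of the class Graham is describing (not drawn from his newsletter or Dempsey’s paper), the bug appears wherever a file path is built from user input, and simple filters rarely close it off completely:

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Builds a path under a base directory from user-supplied input. */
    void serve_file(const char *user_path) {
        char requested[PATH_MAX];
        snprintf(requested, sizeof(requested), "/var/www/files/%s", user_path);

        /* Naive defence: reject anything containing "../". This misses
           Windows-style "..\" separators and input that only gets
           URL-decoded after this check runs. */
        if (strstr(user_path, "../") != NULL) {
            fprintf(stderr, "rejected: %s\n", user_path);
            return;
        }

        /* A sturdier fix canonicalises the path and confirms it still sits
           under the base directory, but that logic has to be repeated at
           every spot where paths are assembled, which is where the extra
           cost Graham describes comes in. */
        char resolved[PATH_MAX];
        if (realpath(requested, resolved) == NULL ||
            strncmp(resolved, "/var/www/files/", 15) != 0) {
            fprintf(stderr, "rejected: %s\n", user_path);
            return;
        }

        printf("serving %s\n", resolved);
    }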

“It would be very easy to go down the path of all the corner cases that security by design won’t solve,” counters Carielli. “But security by design forces you to identify and solve the big architectural problems upfront. If you don’t do that, then if something is discovered later on, it’s a hell of a lot harder to fix.”

Debugging the legal and technical minefield surrounding software liability is every bit as difficult as sanitising buggy software in the first place. That is why no comprehensive solution yet exists – and why any forthcoming policies and frameworks will face intense scrutiny.
