Choices and responsibility

Brief notes on tools we choose to use, their impacts and risks, and who bears the cost.


Every time new technology comes out and becomes widely available, there is a difficult balance to be found between the exuberance of early adopters—keen to drive broad usage and land early wins unlocked by the new thing—¹and those who would seek to better understand the downsides before going too deep. There are essential (and obvious) risks no one disagrees on, but these are few. The real challenge is in negotiating the middle ground, all the second+ order effects. It’s fair that neither side prevails unchallenged².

My primary perspective comes from doing Security Engineering work across companies of different sizes (and cultures), some defense but mostly offense. On either side though, I’ve largely thought about what the new technology amplifies in terms of risk. Today, of course, it’s LLMs, but the problem space is fractal: we run into variants³ of the same meta issues all the time in security, because that’s the nature of our work. Something shiny, often useful, almost never designed with safety⁴ as a primary property.

Yet, in accepting that, given certain time constraints, we cannot wait for good-enough guardrails around new technology, we also do not get to abdicate our responsibility for the choice to use such tech. We might not be aware of all the specific second+ order effects, but we can be sure they exist⁵.

So it is with LLMs and all they’ve recently unlocked in terms of working with software and systems (e.g., code assistance, vulnerability discovery, semi- or fully autonomous personal agents). Broadly, we’ve been able to convert A LOT of ideas into running code, and have gotten some pretty funny and clever things out, too. Quite fast, generally. Yet all of this software remains the responsibility of whatever human actor is ultimately at the top of the pyramid, and we should neither pretend otherwise nor enable unaccountability in this regard.

There are situations where simply using whatever code the LLM generated is fine without paying it too much attention: prototypes, one-off scripts, even tools that would only ever impact that human if there was a problem (i.e., skin in the game), etc.

Yet for code that’s to be shared with other people, it remains the human’s responsibility to review it and ensure its quality, out of both respect and empathy, before sharing it in the first place. That human is still responsible.


I really enjoyed reading (and recommend) both Russ Cox’s post to golang-dev@ with suggestions for how to approach aspects of this issue in that project, and the Oxide RFD (576) he mentions.

Footnotes

  1. I happen to know how to use em-dashes; this article is 100% organic.

  2. Even this concession is uncomfortable for some cultures, which would much rather build up a high degree of confidence/containment before allowing anything. This moves into a different discussion on risk, and while that conversation is interesting and worth iterating on, it is not how things currently work in practice.

  3. Injections of all kinds are as old as computing, as is the unsafe mixing of code and data.

  4. I think of safety in general, not just security. They are deeply intertwined, yet the actual, meaningful property we should strive to deliver upon has to be safety; security is a critical and related, but not identical, attribute.

  5. At a macro scale this gets complex, and we see plenty of imbalances in who pays for these externalities; it’s why we have regulations for some things, frequently the result of socialized negative outcomes we want to avoid repeating.