Should ChatGPT censor its output? Interestingly, there are very vocal advocates on both sides of the debate. So who's right?
As I've thought about it, I've found this debate to be directly analogous to debates in dozens of other areas, including FDA regulations, SEC regulations, App Store restrictions, social media censorship, and many others.
In all cases, it seems as though people are trying to grapple with a trade-off between two particular types of harm: visible harm and invisible harm. Visible harm is something obviously bad that happens, usually from not enough regulation or censorship. For example, people dying from a "bad drug" getting out to the public would be a "visible harm" because it's easy to see and measure the deaths that could have been avoided by censoring it. Meanwhile, people dying because of "fewer good drugs" getting approved, e.g. due to an overly-burdensome approval process, would be an invisible harm because it's not something that's easy to measure, as we'll discuss later on.
In all cases it seems like there are two kinds of people:
- Decelerators are people who want to slow things down because they prioritize reducing visible harm. People who want to censor ChatGPT, or who prefer tighter regulations in general, would be in this camp because they believe the damage done by "harmful use-cases" proliferating would outweigh the benefit of "more good use-cases" being discovered from a totally open and free platform.
- Accelerators are people who generally believe that invisible harm outweighs visible harm. In the case of ChatGPT, they would likely argue that the "more novel business models" discovered by reducing censorship would be worth an increase in "harmful use-cases" emerging earlier.
In all cases, it seems there is a balance to strike. The internet is clearly rife with porn, money laundering, violent video games, and other vices (visible harms). Yet, over time, the internet clearly demonstrated that some amount of this degeneracy is worth tolerating because it means we also get instant messaging, Amazon, food delivery services, telehealth, and other unambiguously valuable use-cases. Decelerating the early internet would likely have caused an invisible harm at the time: slowing down the entrepreneurs creating these genuinely useful services. And today, most would probably look back and say that it was worth it to allow the internet to run in an accelerated way.
But what did the argument for accelerators look like when computers were really expensive and the only tangible use-case for the internet was porn, as it was in the early nineties? Here we see a very clear asymmetry between the case an accelerator has to make vs the case a decelerator has to make.
Generally, a decelerator can readily point to a myriad of visible harms that would be caused by allowing a technology to proliferate. Porn, money laundering, violent video games, etc... are obvious vices that existed early on in the case of the internet. Meanwhile, the accelerator's case requires imagining invisible harms such as the absence of global instant messaging, online food delivery, etc..., which didn't even exist yet. Not only that, but the accelerator has to prove that the invisible harm outweighs the visible harm, while the decelerator only needs to sufficiently fear-monger a few obvious visible harms in order to win the day.
As such, this asymmetry between the accelerator's burden of proof and the decelerator's burden of proof seems to stem from two core issues:
- The first is a difficulty in measuring invisible harm because, well, it's invisible.
- The second is a human bias to reduce visible harm to zero, even at the expense of increasing invisible harm arbitrarily. Put another way, even if you could perfectly measure visible and invisible harm, there is likely still a bias toward preferring a reduction in visible harm over a reduction in invisible harm.
- To use drug regulations as a more visceral example, it feels like many would rather live in a world where "nobody dies from a bad drug" than in a world where "more people are saved by good drugs than die from bad drugs." The latter involves taking part in a societal-level trolley problem of sorts, which humans seem inherently averse to.
I believe this causes societies to bias more heavily toward deceleration, or "reducing visible harm at all costs," than would be optimal if one is seeking to minimize the sum of visible and invisible harm. Practically, this means society will tend to have more regulations than is optimal, tolerate less innovation than is optimal, etc etc...
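To make this bias concrete, here is one way to write the trade-off down. This is just a sketch of the framing above, not a formal model; $H_v$, $H_i$, and $\lambda$ are made-up symbols for a policy's visible harm, its invisible harm, and the weight society implicitly places on invisible harm:

$$
\begin{aligned}
\text{what someone minimizing total harm would choose:}\quad & \min_{p}\; H_v(p) + H_i(p) \\
\text{what society tends to choose in practice:}\quad & \min_{p}\; H_v(p) + \lambda\, H_i(p), \qquad \lambda < 1
\end{aligned}
$$

Whenever $\lambda < 1$, the policy that wins the second contest can carry more total harm than the one it beats, and that gap is exactly the sense in which visible harm gets "over-killed."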
And thus we have the accelerator's dilemma. Because the accelerator's case generally involves increasing visible harm, which people hate, and reducing invisible harm, which people have trouble properly accounting for, we should generally expect decelerators to prevail. This is unfortunate, because the prevalence of deceleration is likely an indicator that visible harm is being "over-killed" at the expense of a significant reduction in innovation, new business models and, potentially, human lives.
In what follows, I walk through several examples where the accelerator's dilemma seems to be playing a significant part in society's decision-making. My hope is not to convince you that society should be making different trade-offs than it is today, but rather simply to draw attention to the asymmetry between the accelerator's case and the decelerator's case. Armed with the knowledge of this bias, you can decide for yourself if a different trade-off could make sense.
With that, let's dive into some examples that I personally find interesting:
- Pharma. Everyone can see when a "bad drug" results in more deaths. But "fewer good drugs" is virtually impossible for society to account for, and yet it is the natural result of implementing a costly approval process (i.e., prioritizing the reduction of visible harm).
- Moreover, suppose that you believed in loosening drug approval regulations because you believed that the negative impact of "fewer good drugs" was currently higher than the negative impact from "more bad drugs" would be. How would you convince someone of this?
- Unfortunately, you can't prove and measure the impact of "fewer good drugs" without actually running the experiment of loosening regulations and seeing what the drug market produces. This is because market-driven questions like, "how many more early-stage drugs would get funded and hit the market if there was no approval process?" are impossible to accurately measure without allowing all investors in the market to recalibrate all of their decisions, likely over a period of years. Meanwhile, it is trivial to prove a lower bound on the harm caused by "more bad drugs" simply by looking at drugs that get rejected by the current approval process. The visible harm is obvious and significant, while the invisible harm is hard-to-measure (but could still be quite significant).
- The end result is an asymmetry between the arguments made by accelerators and decelerators. Simply put, decelerators have an easier time arguing their case because the harm they're reducing is visible and measurable, while the harm accelerators are reducing is invisible and impossible to measure, and reducing it is likely to increase visible harm in the process. And indeed, even if the lives lost to "more bad drugs" were strictly outweighed by the lives saved by the "more good drugs" you get from loosening regulations, you would likely still have many opposed to the loosening because they prefer to reduce visible harm to zero (at the expense of increasing invisible harm arbitrarily). In other words, many would rather live in a world where "nobody dies from a bad drug" than in a world where "more people are saved by good drugs than die from bad drugs."
- Investment. Everyone can see when a "bad fundraise" scams grandma out of her life savings. But "fewer good companies" being formed is impossible for society to account for, and yet it is the natural result of implementing stringent regulations around fundraising, as the SEC enforces today.
- These regulations clearly decrease visible harm by making it harder for scammers to go after grandma, and it is virtually impossible to make the case for loosening them without being comfortable with an increase in this visible harm. Namely, without loosening regulations and seeing what the market does, it is impossible to measure the impact of "fewer good companies," and properly weigh it against the impact of "more scammed investors."
- Some would argue that we've already run the experiment of "looser securities laws" in the 1920's, before the SEC was formed. One could argue that, back then, we decided, as a society, that the increase in capital formation didn't justify the increased scamming. But consider the argument that "so much has changed since the 1920's, especially around the flow of information and money caused by the internet and crypto respectively, that it could be time to revisit such regulations." Unfortunately, it is impossible to convincingly prove the veracity of this statement, precisely because it is impossible to prove and measure the impact of "fewer good companies" without actually running the experiment, and inevitably increasing the number of "scams" in the process.
- Meanwhile, it is much easier to get a sense of the value of "fewer bad companies" by just looking at the scams that arise today, which occur in both highly-regulated and highly-unregulated regimes. Thus, people who believe in "looser securities laws" have an inherently more difficult time demonstrating their case than people who believe in "tighter securities laws" or "less freedom." The natural result is likely a societal bias toward more regulation, which could be a good or bad thing depending on how you weigh "scammed grandmas" against "fewer Tesla-level companies existing."
- App store approvals. Everyone can see when a "bad app" proliferates, e.g. one full of viruses and ransomware. But "fewer good apps" is the natural consequence of the "app store approval process" we have today, and yet it is virtually impossible to account for.
- As a thought experiment, suppose you believed that stifling crypto apps, like Apple has been doing for years now, has caused the loss of new business models that could have generated potentially billions in societal value. Well, we can't prove and measure that loss the way we can prove and measure the "harm" from crypto scams etc..., so we may never get an opportunity to run this experiment under Apple's regime.
- And finally... AI. Everyone can see the downside of AI generating "illegal" images, or enhancing the spread of misinformation. But how do you account for all of the new, legal business models that are potentially being stifled by overzealous content restrictions? Imagining them today would be like imagining Airbnb in the 80's, when computers cost one year of household income. Unless you run the experiment by allowing uncensored content, which would be guaranteed to increase visible harm, it will be very difficult to get a sense of how much invisible harm the censorship is causing.
- The unintended consequences of censorship cannot be overstated. In the early internet, for example, the tremendous popularity of porn arguably helped to materially subsidize the adoption of "real" use-cases like Amazon. You might not buy a computer just to order things off of Amazon, but if you bought it for some other reason, even if that reason is related to a vice, then a business model like Amazon's suddenly makes sense. It's possible that similar arguments apply to AI as well: vice-driven use-cases could help subsidize the improvement of models for more general ones.
And so we come back to the question we posed at the beginning: Who's right? Accelerators or decelerators?
Unfortunately, like most societal issues, it's not black and white. Decelerators have a point, as do accelerators, and which you prefer ultimately comes down to personal preference to some extent. But, I will leave you with this string of logic that should hopefully make you at least more sympathetic to your seemingly reckless accelerator friends:
- If you value all human lives equally, then it does seem like invisible harm should be treated as equivalent to visible harm. Not valuing it this way means you are basically deciding "not to pull the lever" in the trolley problem. This isn't a "wrong" choice, but it would arguably be inconsistent with valuing all human lives equally.
- If you value invisible harm the same as visible harm, then understand that you are probably "unusual" from a societal standpoint because it is an inherent human bias to overweight visible harm relative to invisible harm.
- As such, if you value invisible harm the same as visible harm, then you would probably disagree with the amount of "deceleration" enacted by most democratic societies today. This is for the simple reason that democracies are governed by their people who, by and large, exhibit a bias toward reducing visible harm even at the cost of potentially-higher invisible harm.
- Mature governments like the United States have historically been pretty good about retroactively mitigating the harmful effects of "too much acceleration." Can you think of a problem we have in the US today that the government hasn't adequately solved by passing regulations to make it illegal, and using its power to enforce those regulations?
- Finally, technological innovation tends to surprise to the upside. If you tried to explain the vast splendor we have today to someone alive in 1980, they would have literally thought you were describing Wonderland (and even Wonderland didn't have iPhones). That is not to say you couldn't get the same result with more deceleration than the early internet had; it is just to say that you could be giving up a lot by slowing down!
As such, the reason I think accelerators have a "dilemma" compared to decelerators is simply that accelerators have an unusual tendency both to see more upside in the proliferation of a new technology and to care significantly more about reducing the invisible harm of it not proliferating fast enough.
This is not to convince you that accelerators are "right," but rather simply to point out that their case is an inherently more difficult one to make to most people today, even if they are right. If society were dominated by reckless, overly-optimistic dreamers who consistently underestimate the dangers of new technology, then we'd likely be calling it "the decelerator's dilemma" instead.