
Don’t Pause AI, Finance, Or Thinking. Break Free From Constraints.

Corrected, Sept 27, 2023: A previous version of this article referred to the petition generically as a pause on AI and cited the seven critical questions as applying to “AI systems.” It has been corrected to note that the petition specifically called for a pause on training of AI systems more powerful than GPT-4 and that the seven critical questions apply to “more and more powerful AI systems.”

Six months ago, thousands of people petitioned for a six-month pause on training artificial intelligence (AI) systems more powerful than GPT-4. This month, coinciding with what would have been the end of the pause that did not happen, the Future of Life Institute, which launched the original petition for that pause, has released what it considers to be seven critical questions [1] that must be answered about “more and more powerful AI systems,” loosely defined, to ensure the safety and survival of the human race.

The thought-provoking questions may have no immediately obvious answers if we constrain our attention to the latest AI models. But there are AI systems with a rich history and fairly obvious answers: AI in finance.

We have had AI in finance for a long time, though it often goes by the name machine learning (ML) instead. It’s an excellent application area because of the vast amount of data that finance provides.

The presumption is that poor answers to these critical questions would argue for pauses or even outright bans. Let’s answer them for AI in finance and see if that broader presumption makes sense.

1. “Could the systems you are building destroy civilization?”

You could write an AI in finance research paper yourself today. Pick any of the dozens or hundreds of AI/ML algorithms. Pick a geographical market. Pick a time period. Pick a trading frequency. Train and test your model and report the results. Did your research just usher in an end to civilization?
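The recipe above can be sketched in a few lines of code. This is a minimal, illustrative version only: the “market” is synthetic autocorrelated daily returns rather than real data, and the “algorithm” is the simplest possible momentum rule, learned in-sample and applied out-of-sample.

```python
# Minimal sketch of the research recipe above. The data is synthetic
# (no real market feed), and the strategy is deliberately trivial.
import random

random.seed(42)

# "Pick a time period": 500 synthetic daily returns with mild momentum.
returns = []
prev = 0.0
for _ in range(500):
    r = 0.2 * prev + random.gauss(0, 0.01)  # autocorrelated returns
    returns.append(r)
    prev = r

# "Pick an algorithm": trade the sign of yesterday's return.
# "Train": learn whether following momentum was profitable in-sample.
train, test = returns[:250], returns[250:]
train_pnl = sum((1 if train[i - 1] > 0 else -1) * train[i]
                for i in range(1, len(train)))
direction = 1 if train_pnl > 0 else -1  # follow or fade momentum

# "Test": apply the learned direction out of sample and report results.
test_pnl = sum(direction * (1 if test[i - 1] > 0 else -1) * test[i]
               for i in range(1, len(test)))
print(f"in-sample PnL: {train_pnl:.3f}, out-of-sample PnL: {test_pnl:.3f}")
```

However the numbers come out, a report of these results is a perfectly ordinary research paper, not an extinction event.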

Sometimes, the AI in finance results look good. Probably the best ones are never submitted to any journal but are instead implemented. Eventually, AI in finance could conceivably become so good that whoever wields it will amass enormous wealth and move from being a marginal price taker to a dominant price maker.

Even without any trading volume, simply releasing the predictions into the world could destroy modern finance as we know it. If an infallible oracle revealed tomorrow’s prices today, prices would adjust immediately. Financial markets thrive due to differences in opinion; if that is curtailed, then finance, and perhaps civilization itself, would be destroyed. Should we pause research on AI in finance?

2. “Do you understand how these systems work?”

The one thing we know about AI/ML systems in general is that on one level nobody understands how they work, while on another level, simultaneously, everybody understands how they work. This is true for models in finance, art, language, and even sports analytics. We built the models and can build them again. We know that a neural network forward propagates inputs according to simple weights and activation functions. We know that a random forest model allows many random decision subtrees to vote on the outcome. We know that a regression linearly combines inputs.
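The “everybody understands” level really is this simple. Here is the entire forward pass of a single neural-network neuron, with illustrative weights chosen arbitrarily:

```python
# A single neuron: weighted sum of inputs, plus a bias, through a
# sigmoid activation. Weights and inputs here are arbitrary examples.
import math

def neuron(inputs, weights, bias):
    # Forward-propagate: weighted sum, then squash into (0, 1).
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
print(out)  # always strictly between 0 and 1
```

A full network is just many of these stacked and composed; the mystery lies not in the mechanism but in what millions of such weights collectively encode.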

But we don’t know how they will evolve over time, and we may not have assessed their outputs for all possible new inputs. Even for a simple linear regression, what happens if the inputs stray outside the trained ranges? Or what if the regression is routinely updated on new data, which could include outliers that substantially change the weights used?
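The outlier problem is easy to demonstrate. In this sketch (pure-Python least squares, with made-up numbers), one bad data point far outside the trained range flips the fitted slope from strongly positive to negative:

```python
# How one outlier in a routine "update" can swing a regression's weights.

def fit_slope(xs, ys):
    # Ordinary least-squares slope on mean-centered data.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Clean training data: y = 2x over the trained range x in [0, 9].
xs = list(range(10))
ys = [2 * x for x in xs]
slope_before = fit_slope(xs, ys)  # exactly 2.0

# A routine update arrives with one bad tick far outside that range.
xs.append(100)
ys.append(-50)  # an outlier, nowhere near 2 * 100
slope_after = fit_slope(xs, ys)  # now negative

print(slope_before, round(slope_after, 3))
```

The model is still a “fully understood” linear regression; it is the data pipeline feeding it that produced the surprise.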

The attempt to understand and build an intuition for how your model works is a critical part of research, but failing to fully understand does not mean the apocalypse is nigh. The only model that could never surprise you is a short and simple lookup table.

3. “What would make you pause, and how would you do it?”

A financial trading system running on AI would typically pause after losing money—though not necessarily, especially if the poor returns indicate increased opportunities. However, there is a natural limit in bankruptcy. Usually, a “pause” would mean evaluating the models and running some decomposition on realized profits and losses to ensure that the assumptions of the model are still being met and that it is continuing to work as expected. Yet this is not really a “pause” any more than a check-in at the doctor or a tune-up of your car.

4. “Could your systems be used to kill and hurt people?”

For an AI in finance system, killing seems preposterous but hurting seems routine. Almost surely, someone else will have earned less money if your system earned you more. Are they hurt? Could others lose enough money to kill themselves or others? It’s not impossible. Does that mean AI in finance needs to be shut down?

5. “Are you responsible for harms?”

It’d be hard to argue that you’re responsible for other people’s losses. If your trading system makes money legally, and reduces other people’s profits, you are probably under no obligation to make them whole.

6. “Could we lose control?”

Even human-run trading systems have lost control and made mistakes. These are often disparagingly called “fat finger” errors [2]. Automated AI systems in finance can surely lose control just as easily.

7. “Is it democratic?”

Arguably not. AI in finance could destabilize democracies. Then again, so can humans, if they bet enough against the currencies of various countries [3].


AI in finance fails every question of AI safety.

Does this mean we should pause the fields of quantitative finance and algorithmic finance and all of financial engineering? Probably not. The more reasonable conclusion is that the questions do not help determine whether a pause should happen.

Here are a few other systems that could be fun to put through those questions:

  1. AI regulations. The government system of regulating anything and everything has already set its sights on AI. But we know that regulations often backfire [4, 5]. Could the regulations themselves cause more harm and danger? Perhaps we should pause regulation until we are sure it will not do more harm than good.
  2. Population growth. The system of human development means babies are born every day who can wield enormous influence on the future, both for good and for bad. Should parenting, pregnancies, mating, and dating all be paused?
  3. Thought. You don’t have to be a newborn baby. Any human is capable of new creative thought. Your next idea might destroy the world. And there are billions of people having trillions of ideas. Should we pause thought? Who would remain to tell us when we can start up again? An AI?

[1] Future of Life Institute (Sept 2023). “AI Answers Now.”

[2] Chen, James (Sept 2022). “Fat Finger Error: What it is, How it Works, Examples.” Investopedia.

[3] O’Brien, Timothy L. (Dec 6, 1998). “He’s Seen the Enemy. It Looks Like Him.” New York Times.

[4] Maymin, Philip Z. (Mar 7, 2011). “Why Financial Regulation is Doomed to Fail.” Econlib.

[5] Maymin, Philip Z. and Maymin, Zakhar G. (2012), “Any Regulation of Risk Increases Risk,” Financial Markets and Portfolio Management 26:3, 299-313.