The newest job on Wall Street doesn’t involve early mornings or late nights, or even Wall Street itself. It’s fully remote, pays $150 an hour, and involves teaching AI how to do the work of investment-banking analysts.
That’s the premise of Project Mercury, a secretive OpenAI effort to automate the grunt work of finance typically done by real, live investment-banking analysts. According to Bloomberg, OpenAI has hired more than 100 former bankers from JP Morgan, Morgan Stanley, Goldman Sachs, and similar firms to train its models to run discounted cash-flow analysis, formulate financial rationale for deals, and put together pitchbooks in the same way their former bosses once demanded.
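For the uninitiated, a discounted cash-flow analysis boils down to a few lines of arithmetic: project a company's future cash flows, discount each one back to today's dollars, and tack on a terminal value for everything beyond the forecast window. The sketch below, written in Python with invented cash flows, a hypothetical 10% discount rate, and a 2% terminal growth rate, is only meant to illustrate the mechanics of the task; it is not drawn from OpenAI's training material.

```python
# Illustrative only: a bare-bones discounted cash-flow (DCF) valuation.
# All cash flows, the discount rate, and the growth rate are invented numbers.

def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Present value of projected cash flows plus a Gordon-growth terminal value."""
    pv_flows = sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(cash_flows, start=1)
    )
    # Terminal value: final-year cash flow grown once, capitalized, then discounted back.
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(cash_flows)
    return pv_flows + pv_terminal

# Five years of projected free cash flow, in millions (hypothetical).
projections = [100, 110, 121, 133, 146]
print(f"Enterprise value: ${dcf_value(projections, 0.10, 0.02):,.0f}M")
```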
Such tasks have, until now, formed the basis of the apprenticeship in high finance. Young analysts spend a few brutal years deep in the trenches: checking numbers, formatting decks, anticipating what a vice president will ask for next, getting yelled at by senior players, and not getting much sleep. After that, they move up the ladder to earn more money and do more interesting work. Many, of course, leave investment banking altogether as soon as they’re fully trained; pushing brutalized trainees out of the industry is a widely acknowledged function of the system.
The “bulge bracket” — the handful of global investment banks that dominate dealmaking — employs roughly a million people worldwide, though only a sliver of those are actual bankers churning out models and pitchbooks. Against this backdrop, Bloomberg columnist Matt Levine pointed out Project Mercury’s peculiar circularity. Most junior bankers leave after two years, worn down by the 100-hour weeks and the tyranny of small errors. “Once you’re out anyway, you might be perfectly happy to train a robot to replace junior bankers,” he wryly concluded.
The hourly rate is an added incentive: $150 an hour works out to roughly $300,000 a year at full-time hours, more than most entry-level banking salaries, even at the Goldman Sachses of the world.
But if AI handles the entry-level workload, what happens to the entry level itself?
“The idea that AI will fully replace junior bankers is overstated,” Rob Langrick, chief product advocate at the CFA Institute, told Quartz. “We’re seeing an evolution, not an elimination, of the entry-level role.” He pointed to the organization’s 2025 Graduate Outlook Survey, which found that “only 13% think AI will make it significantly more difficult to land the job they want.”
“AI has the potential to help investment professionals work more efficiently, but it also raises the bar for skills required,” Langrick said. “It’s creating a greater focus on interpersonal skills and ethical implementation. We see this as an opportunity for those who adapt early, leveraging both AI intelligence and human intelligence. We believe that a human must stay in the loop for AI to be successful in the investment process.”
Langrick added that banking remains “a zero-defect game where spotting errors in a deck before a client meeting is a prized skill. Senior bankers have no tolerance for defects in client materials. And any time saved with these tools will almost certainly be reinvested in working on more pitches.”
He also warned about a potential generational gap. “We’ve started to pick up signals of concern from some professionals that any missing junior generation at a firm may cause succession issues down the road,” Langrick said. “When you talk to a lot of banks and asset managers, however, there have not been too many examples at large firms of reductions in graduate intake numbers.”
He added: “We do expect junior analysts to be increasingly overseeing automated processes, ensuring accuracy, upholding the high ethical standards the industry demands and frankly ‘signing off’ on output to send up the chain.” But there’s upside to this, too. Rather than losing the apprenticeship experience, “analysts are gaining more exposure to decision-making earlier in their careers. This shift will require new analysts to use both technical and judgmental skills earlier than past generations.”
What finance veterans really think
Among those who’ve worked in the industry, reactions range from enthusiasm to skepticism. One former Morgan Stanley analyst, who asked not to be named, said, “I think it’s good because it’ll free up a lot of really smart people to work on more exciting problems if it works. Past a point, financial modeling is kind of rote and it’s a waste of time for some of our brightest minds to be doing it if it’s unnecessary.”
Another former analyst, also speaking anonymously, was more doubtful: “Given how prone AI is to fabrications, I wouldn’t trust the output for anything important, and ‘checking to make sure what the AI did is right’ seems like low-status work. And of course, a model doesn’t do anything magic, what makes it valuable is the validity of the assumptions.”
For now, OpenAI’s models remain in the training phase, still learning the delicate etiquette of margins and the appropriate use of italics. The company says its goal is to “improve and evaluate capability across different domains,” not to replace humans outright. But even that statement hints at a future where young finance types enter the industry not to build models, but to audit the models machines have built for them.
That could sound efficient, even humane. But it also risks draining away the firsthand knowledge that only comes from building things yourself, piece by piece: the feel for the numbers, the finely attuned B.S. detector one develops by watching inputs become outputs and absorbing the old lesson of GIGO, “garbage in, garbage out.” Even a perfect model is only as good as the assumptions that go into it.
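To make that concrete with purely hypothetical numbers, the same toy DCF arithmetic shows how sharply the answer swings when a single assumption moves; the spreadsheet, or the AI filling it in, does the dividing either way.

```python
# Hypothetical sensitivity check: the same projected cash flows, three discount rates.
# The point is not the numbers (they are invented) but how much the answer moves.

def dcf_value(cash_flows, discount_rate, terminal_growth=0.02):
    pv_flows = sum(cf / (1 + discount_rate) ** yr for yr, cf in enumerate(cash_flows, 1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv_flows + terminal / (1 + discount_rate) ** len(cash_flows)

projections = [100, 110, 121, 133, 146]  # millions, made up
for rate in (0.09, 0.10, 0.11):
    print(f"Discount rate {rate:.0%}: ${dcf_value(projections, rate):,.0f}M")
# A one-point change in the discount rate moves this toy valuation by hundreds of millions.
```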
For OpenAI, of course, the objective is clear. Project Mercury is part of a larger push to make its technology more useful to businesses. More profitable, too. Despite a private-market valuation north of $500 billion, the company still hasn’t turned a profit. Teaching its GPT models to do relatively well-compensated corporate work is part of the plan to change that, up to and including the work of thousands of junior banking analysts — potentially, anyway.