Recursive prompt engineering: how Gemini 3.0 Pro refined and executed its stock-picking playbook. We made Gemini 3.0 Pro re-engineer its own deep research workflow using results from the previous run.
Re-running one of our best stock-picking workflows: using GPT-5.2 to spot information asymmetry in public markets. Can GPT-5.2 find the "hidden" edge in public data again?
We let GPT-5.2 design its own stock-picking methodology. The model built a 3-prompt workflow that hunts for overlooked public signals, validates catalysts in filings, links macro and micro tailwinds, and picks a single 12-month stock using scenario-weighted expected returns, landing on $MP.
We reran our most successful investment methodology using Gemini 3 Pro. In 2023 we used GPT-4 to design a “most investable” hypothetical stock and found a real-world match that returned 117% vs 15.9% for the S&P 500. Here we rerun that process with Gemini 3 Pro and share the new thesis it surfaces.
We used OpenAI's latest flagship reasoning model, o3-mini, paired with its newest product, Deep Research, to identify the most investable mid-cap stock in America. We tasked o3-mini with finding the highest-potential mid-cap stock.
We gave OpenAI's latest flagship model, o1-preview, data on 3,704 U.S. stocks and asked it to select the best one for a one-year investment. This is the latest stock recommendation for The GPT Investor; in this essay, we outline the technology, methodology, and result of the process.
We used ChatGPT to create a formula for rating value stocks, applying it to 3,708 U.S. stocks to pinpoint the current best-value stock. A stock recommendation from The GPT Investor.
We used BabyElfAGI and GPT-4 to generate stock recommendations. Stock recommendations from The GPT Investor.
Sentiment Analysis: We used BabyDeerAGI and GPT-4 to generate stock recommendations. Sentiment Analysis Using Autonomous Agents.
We used GPT-4 and BabyDeerAGI to generate stock recommendations. A deep dive into the technology, methodology, and results.