How Tech, Business, and Culture Are Quietly Redefining the Future

Observations from a small island, connecting micro-signals to larger shifts in tech, business, and culture.

Latest Small Island Research Notes

Could AI’s Next Growth Phase Be Faster Than Expected?

2026-04-01

Executive Summary

A recent remark by Groq founder Jonathan Ross raises an important question. If models begin to improve the quality of their own learning signals, then the AI growth logic we have become familiar with may no longer follow the same path of diminishing returns.

This article does not ask whether Ross's claim should be accepted at face value. It asks whether the idea behind it is already supported by a set of meaningful weak signals. From Google DeepMind's continued push into reasoning, to OpenAI's gradual formalization of high-quality feedback, to NVIDIA's inclusion of post-training, test-time scaling, and agentic scaling in its next-generation platform narrative, these developments suggest that AI progress may no longer be only a story of static pretraining and scale expansion.

Still, weak signals do not mean a trend has already been established. At this stage, we still lack cross-task, repeatable proof. We also lack public evidence that a long-cycle flywheel has truly formed. At the same time, issues such as reward hacking, misalignment risk, and the real-world challenges of deployment and demand absorption remain unresolved.

The core judgment of this article is therefore a cautious one. Ross’s argument is not an unfounded exaggeration, but neither is it enough to support the view that AI has already entered a new phase of self-accelerating growth. What matters now is whether these weak signals will gradually connect into a structure that can be repeatedly observed and validated. If that happens, then AI’s next growth phase may indeed turn out to be faster than many currently expect.

Explore more notes from Small Island Research Notes on Tech and Future, a project by Researcher and Research.