Can Industry Research Really Predict the Future?

Industry researchers are often asked to predict the future: next quarter’s market share, five-year growth trajectories, the next destination in the global supply chain. But are such expectations realistic? Without systems for timely feedback, institutional validation, or long-term credibility building, can industry analysis truly bear the burden of forecasting?

This essay reframes the issue from a structural perspective. It argues that the difficulty in making accurate predictions stems not from a lack of skill or effort, but from the absence of institutions capable of supporting, verifying, or rewarding such predictions. In this context, the real value of research may not lie in calling future events. It may instead reside in identifying early misalignments between belief and reality, and in preserving records of those invisible fractures before they surface.

When no system exists to reward or remember, the role of the researcher shifts. We are not prophets, but witnesses. We leave behind observations not because they are guaranteed to be remembered, but because someone, somewhere, will need them when the narrative begins to turn.

In the course of doing research, we are often asked questions like:

“What do you think this company will look like five years from now?”

“Do you expect this industry to reverse course next quarter?”

“How much longer can Taiwan hold its position in this supply chain?”

These questions are hard to avoid. In fact, they seem perfectly natural. After all, we have grown used to thinking of research as a way to forecast the future. It often feels as if the ability to see ahead is the ultimate source of value.

But this article begins from a different place.

It is not about the accuracy of investment models or the precision of specific forecasts.

While industry research is often referenced by investors and can influence capital flows, our focus here is not on returns. It is on something else:

When forecasts cannot be institutionalized, and when there is no system for validation or feedback, is industry research still worth doing? And if so, what kind of value does it leave behind?

This article tries to answer that question by asking something deeper:

In a world where systems fall short, how can researchers find their place and understand their responsibility?

1.  Why Are We So Drawn to Prediction?

Across industries, in investment circles, and even in media and academia, there is a persistent obsession with forecasting the future.

What will the market share look like next quarter?

Can this company double its growth over the next five years?

Which country will the supply chain move to next?

We often hope that industry analysis can offer answers as precise as a weather forecast. The expectation seems reasonable. After all, the more data we have and the more sophisticated the models become, the more accurate our predictions should be.

But in reality, moments of true predictive clarity are rare.

If we take an honest look at how industry analysis works in practice, we often find that forecasts are vague, tentative, and filled with assumptions. This isn’t because researchers aren’t trying hard enough. It is because the environment they operate in has never been built to reward precision in prediction.

2.  The Trouble with Prediction Is Really a Problem of Institutions

The difficulty of industry research is not just a matter of technical limitations. It is also a consequence of institutional gaps.

We do not have systems in place that allow predictions to be received, verified, or translated into lasting credibility.

  • No feedback or verification mechanism: Unlike financial markets where price serves as real-time feedback, industry forecasts are rarely evaluated. No one is held accountable for being right or wrong in a measurable way.
  • No space for revision or reputation-building: Most industry reports end once they are published. There are few opportunities to revisit, revise, or track their accuracy over time (a minimal sketch of what such tracking could look like follows this list). Even when a prediction turns out to be correct, it is hard to prove that the research got it right.
  • A mismatch between forecasting timelines and institutional expectations: Many forecasts aim to capture trends over three to five years. But institutions and markets often expect results on a quarterly or even monthly basis. This misalignment marginalizes long-term observations and makes it difficult for them to carry weight.
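
What would even a minimal feedback mechanism look like? As a thought experiment, here is a small sketch in Python: a ledger that records probabilistic forecasts and, once outcomes are known, scores them with the Brier score, a standard accuracy measure for probability forecasts. The names and structure here are my own illustrative assumptions; no such system exists in our field, which is precisely the point.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Forecast:
    """One probabilistic claim, recorded so it can be scored later."""
    claim: str                   # e.g. "Company X ships product Y by Q4"
    probability: float           # stated probability that the claim comes true
    due: date                    # when the claim can be resolved
    outcome: bool | None = None  # filled in once reality is known

class ForecastLedger:
    """A minimal feedback mechanism: log forecasts, resolve them, score them."""

    def __init__(self) -> None:
        self.entries: list[Forecast] = []

    def log(self, claim: str, probability: float, due: date) -> Forecast:
        entry = Forecast(claim, probability, due)
        self.entries.append(entry)
        return entry

    def resolve(self, entry: Forecast, outcome: bool) -> None:
        entry.outcome = outcome

    def brier_score(self) -> float:
        """Mean squared gap between stated probabilities and actual outcomes.
        0.0 is perfect; 0.25 is what blind 50/50 guessing earns."""
        resolved = [e for e in self.entries if e.outcome is not None]
        if not resolved:
            raise ValueError("no resolved forecasts to score yet")
        return sum((e.probability - float(e.outcome)) ** 2 for e in resolved) / len(resolved)

# Usage: record a call today, resolve it when the due date arrives, then score.
ledger = ForecastLedger()
call = ledger.log("Supply chain node relocates within two years", 0.6, date(2027, 1, 1))
ledger.resolve(call, outcome=True)
print(ledger.brier_score())  # (0.6 - 1.0)**2 = 0.16
```

Nothing in that sketch is technically hard. The gap is institutional: no one is asked to keep such a ledger, and no reputation is built or lost by what it would show.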

Some have suggested using crowdsourcing or prediction markets to close these gaps. Yet even in areas with high information flow and strong incentives, such as finance or elections, these mechanisms have proven hard to operate well at scale. In industry research, which is far less structured, they are even harder to sustain.

And so we return to a central question:

Without institutional support, are we still making predictions at all?

Or are we actually doing something else entirely?

3.  If We Can’t Predict Events, What Can We Do Instead?

Perhaps it’s time to let go of the expectation that industry research should predict specific events. Instead, we can begin to see its role in a different light. The value of research may not lie in telling us what will happen next, but in helping us see where the current structures are starting to show signs of strain or misalignment.

This way of seeing is closer to George Soros’s theory of reflexivity:

  • Markets reflect not reality itself, but the beliefs shared by many.
  • When those beliefs drift too far from reality, that’s when reversals tend to occur.
  • What matters most is not the exact timing of the reversal, but the ability to notice the divergence early. (A toy sketch of this dynamic follows this list.)
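
To make that loop concrete, here is a toy simulation, my own illustrative sketch rather than anything drawn from Soros. Price simply expresses consensus belief, rising prices feed back into belief, and the divergence from fundamentals widens until it snaps. Every parameter is an arbitrary assumption chosen to exhibit the dynamic, not a calibrated market model.

```python
# Toy reflexivity loop: price mirrors belief, and rising prices
# reinforce belief. All numbers are arbitrary illustrative assumptions.
fundamental = 100.0  # the underlying reality, held flat for clarity
belief = 100.0       # the market's shared story about that reality
feedback = 0.15      # how strongly a rising price reinforces belief
drift = 1.0          # steady optimism nudging the story upward
snap_at = 0.5        # divergence (fraction of fundamentals) that triggers reversal

for t in range(40):
    price = belief   # price reflects the consensus belief, not reality itself
    divergence = (price - fundamental) / fundamental
    print(f"t={t:2d}  price={price:7.1f}  divergence={divergence:+.3f}")
    if divergence > snap_at:
        belief = fundamental  # the story has drifted too far: abrupt correction
    else:
        belief += drift + feedback * (price - fundamental)  # self-reinforcement
```

Run it and the pattern is plain: divergence climbs step by step for a long stretch before each snap. The early signal is available to anyone tracking it; what stays hard is the exact timing of the reversal.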

From this perspective, industry research doesn’t need to promise precision.

Instead, it should focus on recognizing when the market starts to believe in a story that may never come true.

As we saw in the case of Wolfspeed, trust collapsed before the industry fundamentals did. And in Broadcom’s story, structural consistency allowed the company to maintain credibility without leaning on exaggerated narratives.

Will the market eventually correct such misalignments? There is no way to know for sure. But if it does, the shift may come faster than expected.

4.  Outside the System: The Researcher’s Role and Responsibility

Some may ask: if predictions lack institutional support and cannot be verified or reinforced, what is it that researchers are still doing?

This, I believe, is precisely where the researcher’s role becomes clearest.

We are not prophets of the market. We are witnesses and quiet observers of the narratives that shape it.

Our responsibility has never been to predict the most accurately. Rather, it is to ask:

  • Can we recognize the break between belief and reality before others do?
  • Can we remember what the supply chain used to look like, and explain why the narrative turned when it did?
  • Can we remain that steady pair of eyes when institutions grow short-sighted?

This kind of work is not rewarded by the market. When capital retreats, narratives collapse, and systems are rewritten, only a few people will look back and search for those who once spoke with clarity and remembered the details.

The value of research does not lie in predicting future numbers. It lies in preserving our sensitivity to change and our understanding of structure.

These observations may never be fully acknowledged by formal systems. But perhaps that is what allows them to endure.

Extended Conclusion: If Prediction Fails, What Remains of Industry Analysis?

If there is anything you choose to take away from this piece, perhaps it could be these four layers of reflection:

1.  At the level of knowledge: Understanding why predictive systems struggle to take root

You might see more clearly that the difficulty of institutionalizing industry forecasts does not stem from a lack of analytical effort. Rather, it comes from the absence of a foundation that can hold judgments, verify perspectives, and build trust over time. The issue is not that predictions are too weak, but that systems are too shallow.

2.  At the level of method: Reframing what we expect from research

The value of research has never been about precision in prediction. It lies in recognizing when belief and reality begin to drift apart. What matters is not who made the most accurate call, but who first noticed the fracture forming.

3.  At the level of reflection: Rethinking the role of the researcher

For those who do this work, this essay may serve as a quiet reminder. Even when systems offer no feedback and our judgments go untested, we can still be the ones who remember the structure and can explain why the narrative shifted. This may not earn rewards from the market, but over the long term it may be remembered by the few who matter.

4.  At the level of worldview: On systems, trust, and the flow of knowledge

Finally, you might begin to ask different questions. What kind of knowledge is worth preserving? How is knowledge really accumulated? When systems cannot hold truth, are we still willing to remain observers?

If no one is tasked with passing judgment, then what we leave behind are small but persistent traces: observations that continue to be recorded and quietly passed along.

We do not know if they will be remembered. They may fade into the background, or one day be rediscovered in a moment no one expected.

This is what research looks like when there is no system to respond. It is lonely. But it may also be the most honest form it can take.

We leave these notes behind because, perhaps, you will be the one who finds them.

This article is part of our Future Scenarios and Design series.
It explores how possible futures take shape through trend analysis, strategic foresight, and scenario thinking, including shifts in technology, consumption, infrastructure, and business models.

Note: AI tools were used both to refine clarity and flow in writing, and as part of the research methodology (semantic analysis). All interpretations and perspectives expressed are entirely my own.