By Inder Gopal & D Manjunath, respectively professors at Indian Institute of Science, Bangalore, and IIT Bombay
Anthropic’s release of Claude Cowork and Claude Code triggered a sharp dip in Indian IT stocks, signalling a vastly changed technology ecosystem. India’s manpower-intensive services companies are clearly under threat from artificial intelligence (AI), and a transition to an intellectual property-driven future leveraging AI is now an existential necessity. When confronted in Davos recently with the label “second-tier AI power”, Union Technology Minister Ashwini Vaishnaw gave a thoughtful riposte, laying out a layered AI taxonomy and arguing that leadership in many of the layers makes India decidedly not second tier.
While this assertion is sound, the underlying anxiety is real: Indians aspire to an indigenous or desi AI that is an unequivocal global leader. The key question about realising this aspiration is not if the government should be involved, but how. We answer this through a careful examination of historical successes and failures in the governance of technology development, from Tokyo to Washington, and from the Centre for Development of Telematics (C-DoT) to the Unified Payments Interface (UPI).
Many well-intentioned government projects fail due to a misunderstanding of technological evolution. In the early 1980s, Japan's manufacturing engine, powered by the Ministry of International Trade and Industry (MITI), seemed unstoppable. Buoyed by this success, MITI launched the ambitious Fifth Generation Computer Systems (FGCS) project with a massive 10-year budget, mobilising universities and corporations to develop native hardware for AI. Initial momentum was high, and fear of Japanese prowess caused tremors in a US that contemplated losing its technological lead. However, FGCS ended in failure, set Japan's computer industry back, and contributed to its "lost decade".
What went wrong? FGCS was driven by planners and policymakers building for the present, not the future. To borrow a football metaphor, MITI was "running to where the ball is, not to where the ball will be". Their hubris, and a docile community following their lead, drove the project down a dead-end path. This provides a stark lesson: non-technologists cannot and should not micro-manage technology innovation.
The development of the internet in the US offers a contrasting blueprint for success. In the 1960s, the Defense Advanced Research Projects Agency (DARPA) sought a resilient communication system and funded a radical proposal for a "packet switched" network from Paul Baran, a technologist at the RAND Corporation. Amid widespread scepticism, DARPA handed control to technologists such as David Clark of the Massachusetts Institute of Technology, who collectively established the Internet's core principles. Clark's guiding mantra, "rough consensus and running code", prioritised practical functionality and adaptability over rigid, bureaucratic specifications and standards, and it remains the governing principle of Internet evolution.
Taiwan followed a similar trajectory with its semiconductor industry. It recruited Morris Chang, a Texas Instruments executive, to lead its investment in chip foundries. His insight, to pair bleeding-edge R&D with a "pure-play foundry" model, paid off at Taiwan Semiconductor Manufacturing Company (TSMC), transforming global chip manufacturing in a way no industry outsider could have. The lesson is clear: government must appoint the right experts to decision-making roles. DARPA routinely hires external experts, rather than generalist administrators, to manage its programmes. India must similarly ensure that experts are in charge, not relegated to the role of advisers with neither power nor accountability for failure.