Canadian Tech and the AI Bubble Debate: Why Infrastructure, Demand, and Depreciation Matter

Introduction — a provocative thesis for Canadian tech leaders

The AI conversation has a new antagonist: a famous investor betting that the AI industry is overvalued. His thesis is not a simple “AI is useless” claim. It centers on accounting, asset life, and the economics of infrastructure. For the Canadian tech community this matters because the debate touches the reality of data centers, power, chip supply, and enterprise adoption across the GTA and beyond. Leaders in Canadian tech need to understand whether they face a speculative bubble or a once-in-a-generation infrastructure wave that demands strategy and capital. This article explains the arguments, evidence, and practical steps for Canadian businesses, CIOs, and investors.

What defines a bubble?

An economic bubble forms when asset prices detach from their sustainable economic value because of excessive speculation and hype. Classic frameworks break the cycle into stages:

  • Displacement — a new innovation attracts attention and capital.
  • Boom — rapid price appreciation as investors pile in.
  • Euphoria — valuations detach from fundamentals and narratives dominate.
  • Profit taking — informed participants sell at the top.
  • Panic — prices collapse as everyone rushes to exit.

Two historical patterns stand out. One is the pre-1929 credit-fueled bubble that crashed into a decade-long depression because leverage and speculation outpaced real economic capacity. The other is the late 1990s internet buildout: huge investment in infrastructure—many fiber networks lay underused for years—yet the eventual payoff transformed the economy. Both patterns are instructive for Canadian tech: is the AI investment boom primarily speculation, or is it a necessary infrastructure build that will deliver long-term value?

How big is the AI infrastructure wave?

The scale is startling. Global hyperscale spending on AI infrastructure is projected to jump sharply over the next few years. Analysts forecast enormous capital expenditures for servers, GPUs, data centers, and power. One respected banking house projects hyperscale spending will rise by double-digit percentages year-on-year for several years, with total outlays reaching into the hundreds of billions.

McKinsey and other consultancies estimate that by 2030 the global cost of data centres and related infrastructure to support AI compute needs could reach into the multiple trillions. That magnitude of buildout is historically comparable to railroads, electricity grids, and the early internet. For Canadian tech, this creates both opportunity and risk: major projects, new data centre campuses in Ontario and Quebec, and an intense demand for skills, real estate, and utility capacity.

Demand is the crucial variable

Infrastructure investment makes sense only if demand justifies it. Demand for AI comes in two distinct layers:

  • Inference providers — companies that build and host large models like generative chat systems.
  • Model consumers — enterprises that embed models into products and workflows, and individuals who use them directly.

On the consumer side, uptake has been dramatic. A conversational AI platform that launched in late 2022 scaled to tens of millions of weekly active users within months and to hundreds of millions within a couple of years. That level of consumer demand is a leading indicator that many users find real, repeatable value from these systems.

On the enterprise side, adoption is still in its early phases. Surveys by major consultancies show most organizations remain in experimentation and pilot stages; only a minority have scaled AI widely across operations. That matters for Canadian tech leaders because enterprise modernization cycles move more slowly than consumer habits. The business case for AI in a bank, a hospital, or a manufacturer requires integration, security, governance, and measurable ROI, all of which take time and resources.

Real revenue traction: evidence beyond hype

Not all growth is market-cap storytelling. Some AI companies are generating strong revenue run rates from enterprises, especially via API-driven services that let customers embed models inside their applications. One fast-growing model provider reported a run rate rising from about $1 billion to over $5 billion within eight months—numbers that represent contractually recurring revenue, not pure paper valuations.

For Canadian tech firms, the takeaway is clear. There are genuine commercial models in AI: subscription-driven consumer services, APIs for enterprise integration, and specialized AI services for vertical use cases such as healthcare and finance. The presence of paying customers at scale weakens the case for a “pure bubble” label, but it does not eliminate other systemic risks.

Nvidia, circular capital, and the risk of trading value

At the centre of the AI stack sit a few dominant hardware suppliers. One company in particular has become essential for modern AI compute. Its GPUs are the workhorses for model training and inference. That dominance creates a circular flow of capital that merits scrutiny.

Large model builders, cloud providers, and neocloud GPU renters buy chips and services from the same hardware vendors. Sometimes those hardware vendors take equity stakes in their customers. Those customers then spend a large portion of that invested capital buying more hardware and services from the vendor. The result is a complicated web of dollars that can look like a closed loop: vendor invests in customer; customer buys vendor hardware; vendor sells services to customer and others.

This arrangement is efficient in one sense: it accelerates buildout. It is concerning in another: it may amplify valuations without clear incremental value creation beyond hardware sales. The observation is not a condemnation of the technology. It is a call to examine whether money is primarily financing productive capacity or is simply turning capital in a circle.

The depreciation argument: why a short-seller thinks AI is inflated

The most salient attack on AI valuations focuses on how companies treat asset life. Corporations buying expensive GPUs must expense that capital over the chips’ useful life through depreciation. If a company assumes an asset will be useful for eight to twelve years, its annual depreciation expense is small. That improves short-term reported earnings because the hardware cost is spread out over many years.

The short-seller’s thesis is that GPUs and specialized accelerators will become functionally obsolete sooner than companies claim. If the actual useful life is three to four years instead of eight to twelve, reported earnings are artificially inflated. When the mismatch corrects, valuation multiples could compress dramatically. That is the core of the short position: not a bet that AI will fail, but a bet that the accounting assumptions are optimistic and will revert.
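
To make the arithmetic concrete, here is a minimal sketch in Python of straight-line depreciation under two useful-life assumptions. The dollar amounts are illustrative assumptions, not figures from any company.

```python
# Illustrative straight-line depreciation comparison.
# All dollar figures are hypothetical assumptions, not reported numbers.

def annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Straight-line depreciation: spread the capital cost evenly over the asset's life."""
    return capex / useful_life_years

gpu_capex = 10_000_000_000                  # $10B of accelerators (assumed)
income_before_depreciation = 4_000_000_000  # $4B operating income before depreciation (assumed)

for life in (10, 4):  # optimistic vs. conservative useful-life assumption, in years
    dep = annual_depreciation(gpu_capex, life)
    reported = income_before_depreciation - dep
    print(f"{life}-year life: depreciation ${dep / 1e9:.1f}B/yr, "
          f"reported operating income ${reported / 1e9:.1f}B")
```

With the same spending, shortening the assumed life from ten years to four cuts reported operating income in half in this example. That gap is exactly the mismatch the short position is betting will eventually correct.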

Evidence on both sides

The depreciation debate is not purely theoretical. Tech operators and cloud providers report that older-generation accelerators remain in service for many years. Several major cloud vendors say their five-to-eight-year-old silicon still runs at high utilization. That suggests hardware can remain productive beyond the shortest depreciation assumptions.

On the other hand, the pace of innovation in AI models and training approaches is rapid. New architectures and specialized accelerators can deliver substantial performance and efficiency gains. If next-generation chips reduce energy consumption per inference by a substantial factor, then older chips will offer lower economic value and may be displaced faster than expected. The true useful life of a chip is a function of the economics of running models on it, not just whether it can technically perform tasks.
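
A rough way to see this is to compare the electricity cost of doing the same work on old versus new silicon. The sketch below is a hypothetical illustration only: the energy-per-inference figures and the power price are assumptions, not vendor data.

```python
# Hypothetical cost-per-inference comparison between chip generations.
# Energy-per-inference figures and the electricity price are illustrative assumptions.

def cost_per_million_inferences(joules_per_inference: float, price_per_kwh: float) -> float:
    """Electricity cost of running one million inferences."""
    kwh = joules_per_inference * 1_000_000 / 3_600_000  # joules -> kWh
    return kwh * price_per_kwh

price = 0.06  # $/kWh, roughly in the range of low-cost hydro power (assumed)

old_gen = cost_per_million_inferences(2.0, price)  # assumed 2.0 J per inference
new_gen = cost_per_million_inferences(0.5, price)  # assumed 4x efficiency gain

print(f"Old generation: ${old_gen:.3f} per million inferences")
print(f"New generation: ${new_gen:.3f} per million inferences")
print(f"Old silicon costs {old_gen / new_gen:.1f}x more to run for the same output")
```

Once power, cooling, and rack space are constrained, every slot occupied by a chip that costs several times more to operate represents forgone output, which is how economics can retire hardware well before it physically fails.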

Power constraints: a real-world limiter

One non-accounting factor that disrupts the simple story is power availability. Large cloud providers report they are unable to plug in all the GPUs they have in inventory because of insufficient power capacity. The physical constraints of electricity supply, cooling, and real estate mean that high GPU inventory does not equal immediate compute capacity.

For Canadian tech, this is an opportunity and a liability. Canada’s electricity profile varies by region. Provinces with abundant hydroelectric supply and low-carbon grids present attractive locations for AI data centres. Quebec, Manitoba, and parts of British Columbia can offer competitive power economics and sustainability advantages. That plays directly into corporate site selection decisions and the growth of Canadian tech data centre campuses.

Where this matters for Canadian tech companies and investors

Whether the AI market is a bubble or an infrastructure wave, Canadian businesses must make pragmatic decisions. The implications span procurement, strategy, finance, and workforce planning.

  • Procurement and capital allocation: finance teams should stress-test depreciation assumptions. What happens to earnings and cash flow if useful life is shorter? Scenario planning can prevent valuation surprises.
  • Site strategy for data centres: Canadian regions with stable, low-carbon power should be prioritized for proof-of-concept and production deployments. Energy scarcity at scale is real; selecting locations with resilient grids mitigates that risk.
  • Vendor risk management: when vendors take equity and re-buy hardware from ecosystem partners, procurement teams should demand clarity on total cost of ownership and contract terms that avoid circular exposure.
  • Enterprise adoption roadmap: Canadian organizations should focus on high-impact pilots that produce measurable ROI. The enterprise adoption story will be won one use case at a time—fraud detection in financial services, automated claims triage in insurance, productivity automation in legal and professional services.
  • Skills and retention: the acceleration of AI creates fierce demand for ML engineers, cloud architects, and data ops talent. Canadian tech hubs—Toronto, Montreal, Waterloo, Vancouver—need coordinated training pipelines to avoid talent gaps.

How to evaluate AI investments — a practical checklist for CIOs

For Canadian tech leaders making allocation decisions, a clear evaluation framework reduces exposure to speculative shocks.

  1. Quantify user value: estimate the monthly recurring value per end user for any deployment and compare it to capital costs and operating expenses (a worked sketch follows this list).
  2. Stress-test depreciation: model outcomes with conservative useful life assumptions and shorter upgrade cycles.
  3. Assess power supply risk: confirm grid capacity, peak demand charges, and redundancy at candidate data centre locations.
  4. Inspect vendor capital flows: are large vendors recirculating investment? Ask for transparent disclosure of related-party arrangements.
  5. Measure adoption velocity: require proof that pilots scale to measurable process changes or revenue lift within a defined timeframe.
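
As a concrete starting point for items 1 and 2, the following sketch estimates the payback period of a deployment and checks it against conservative and optimistic hardware-life assumptions. Every input is a placeholder assumption, not a benchmark.

```python
# Hypothetical payback-period check for an AI deployment.
# All inputs are placeholder assumptions chosen only to illustrate the calculation.

def payback_months(capex: float, monthly_value: float, monthly_opex: float) -> float:
    """Months of net benefit needed to recover the upfront capital cost."""
    net_monthly = monthly_value - monthly_opex
    if net_monthly <= 0:
        return float("inf")  # the project never pays back
    return capex / net_monthly

users = 20_000            # end users touched by the deployment (assumed)
value_per_user = 40.0     # estimated monthly recurring value per user, $ (assumed)
monthly_opex = 250_000.0  # hosting, licences, support, $ per month (assumed)
capex = 6_000_000.0       # hardware and integration spend, $ (assumed)

months = payback_months(capex, users * value_per_user, monthly_opex)
print(f"Estimated payback: {months:.1f} months")

for life_months in (48, 120):  # conservative 4-year vs. optimistic 10-year hardware life
    verdict = "pays back within asset life" if months <= life_months else "does NOT pay back within asset life"
    print(f"{life_months}-month useful life: {verdict}")
```

If a project only clears the optimistic life assumption, it is effectively a bet on the depreciation schedule itself rather than on the use case.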

Why the “bubble” label might miss the nuance

Labeling the entire AI industry a bubble simplifies a complex transition. The present moment mixes legitimate commercial adoption, enormous infrastructure investment, fast-paced technical innovation, and the human tendency to extrapolate winners. The right lens distinguishes between:

  • Assets and companies that are building productive capacity and delivering measurable value.
  • Financial flows driven by narrative and circular capital that obscure real margins.

Canadian tech stakeholders should prepare for both possibilities: short-term volatility and long-term structural change. Infrastructure buildouts can look overheated in the near term but still be essential for long-term productivity gains. Historic analogues—the internet buildout of the late 1990s or the electrification era—show that timing mismatches do not necessarily invalidate the underlying technological shift.

Scenario planning for Canadian tech organizations

Designing strategies for multiple futures makes organizations resilient.

  • Optimistic scenario: AI continues to deliver productivity gains. Canadian tech exporters build new AI services, local data centres expand, and talent pools grow. Investments in compute pay off over longer horizons.
  • Mid scenario: Enterprise adoption lags but advances steadily. Some hardware becomes obsolete sooner than expected. Companies that focused on high-ROI use cases thrive, while others face write-downs.
  • Pessimistic scenario: A rapid market contraction forces sharp revaluation. Firms with stretched depreciation assumptions and circular exposure suffer most. Those with strong product-market fit and diversified revenue survive and consolidate market share.

What Canadian investors should watch

Investors must separate companies that provide fundamental end-user value from those that are primarily resellers of hardware or caught in an investing loop. Watch for:

  • Revenue composition: recurring API revenue and enterprise contracts versus one-time hardware sales.
  • Capital intensity: high capex without a clear path to revenue growth is a red flag.
  • Depreciation policies: conservative useful life assumptions, frequent refresh cycles, and transparent maintenance schedules matter.
  • Geographic advantages: companies leveraging Canada’s energy profile and proximity to research hubs may have durable competitive edges.

Canadian tech opportunities that remain compelling

Despite the controversy, there are concrete areas where Canadian tech companies can capture outsized value:

  • Edge AI for industry: manufacturing and natural resources firms in Canada can benefit from low-latency, on-premise models tailored to operational data.
  • AI for regulated sectors: healthcare and financial services require models that meet privacy and compliance needs. Canadian providers with strong governance can win these contracts.
  • Sustainable data centres: leveraging hydroelectric and renewable energy, Canadian data centre operators can offer differentiated green compute for conscious customers.
  • AI tooling and MLOps: Canadian startups that reduce deployment friction, manage model lifecycle, and optimize cost per inference will find enterprise demand.

Checklist for boardrooms and finance committees

Boards in Canadian tech companies should ask management the following:

  • How conservative are our asset life estimates for AI hardware?
  • What is our expected payback period for AI-related capex under different usage scenarios?
  • Do we have contractual protections against vendor capture and circular investment risk?
  • Have we secured reliable, long-term power contracts or redundancy for our compute footprint?
  • Which AI projects will produce measurable revenue improvement or cost reduction within 12 to 24 months?

Conclusion — pragmatic optimism for Canadian tech

The AI era is not a simple boom-or-bust story. It is a complex economic transformation combining infrastructure, software, and human adoption. For Canadian tech, the stakes are high: data centres, talent, and industrial use cases are on the table. The short-seller’s warning about depreciation and circular capital is a valuable discipline for executives and investors. It forces rigorous financial modelling and attention to real unit economics.

At the same time, large-scale adoption and consumer traction demonstrate that AI delivers tangible value in many contexts today. The Canadian technology ecosystem should approach the coming years with pragmatic optimism: invest in high-return applications, choose data centre locations wisely, stress-test financial assumptions, and build partnerships that align incentives rather than create opaque capital loops.

Leaders who take these steps will convert the current moment from speculative mania into durable competitive advantage for Canadian tech companies and the national economy.

FAQ

Is AI really in a bubble?

Not necessarily. The market displays some characteristics of a bubble, notably rapid valuation growth and investor euphoria. However, strong consumer adoption and meaningful enterprise revenue in many segments indicate substantial underlying value. The immediate risk is mismatched timing between infrastructure investment and enterprise implementation rather than a categorical failure of the technology.

What does the depreciation argument mean for Canadian tech companies?

It means finance teams should model more conservative useful lives for GPUs and accelerators when forecasting earnings and return on investment. If hardware becomes outdated faster than expected, companies will see higher annual depreciation and potentially lower reported profits. Canadian tech firms with capital-intensive AI investments should plan for faster refresh cycles and prioritize projects with clear payback.

How do power constraints affect AI deployments in Canada?

Power constraints can limit the ability to activate purchased GPUs or expand capacity quickly. Canada has competitive advantages in regions with abundant hydroelectric power, making those locations attractive for large-scale deployments. However, site selection must account for peak demand, grid stability, and long-term contracts with utilities to avoid stranded hardware.

Are investments in AI infrastructure wasteful if enterprise adoption lags?

Not necessarily. Infrastructure builds can be a long-term bet that demand will grow into capacity that sits unused today. The internet and electrification are historical examples where early overbuild eventually paid off. The key is to align infrastructure investments with credible demand signals and to structure financing to withstand timing mismatches.

What should Canadian investors focus on right now?

Investors should prioritize companies with recurring revenue, transparent capital allocation, strong governance, and operations in regions with favorable energy economics. Business models that directly monetize model usage or deliver measurable enterprise ROI are preferable to those primarily dependent on hardware turnover.

How can Canadian tech firms create durable AI value?

Build vertically integrated solutions for regulated industries, focus on MLOps and cost optimization, leverage Canada’s energy advantage for sustainable data centres, and invest in talent pipelines in Toronto, Montreal, and Waterloo. Prioritize projects with short- to medium-term measurable impact to create a defensible pathway to scale.

What is the one immediate action Canadian CIOs should take?

Run a cross-functional scenario plan that stress-tests depreciation, power availability, and scaled adoption timelines for all major AI investments. Use the results to prioritize projects with the strongest risk-adjusted returns and to negotiate vendor contracts that limit circular capital exposure.

Will Canadian tech lose out to global players in AI?

Not inevitably. Canada’s research strengths, strong academic hubs, and access to low-carbon power create unique advantages. Success depends on strategic focus: domestic firms must deliver compliant, high-value solutions for local industries while competing globally on specialized services and sustainability.

How should boards adjust oversight for AI projects?

Boards should demand clear KPIs tied to ROI, require conservative accounting assumptions, verify power and real estate risk, and ensure disclosure of related-party investments. Regularly review progress and be prepared to reallocate capital away from low-return projects.

What is the bottom line for Canadian tech?

AI represents a major structural shift with both hype and substance. Canadian tech leaders should prepare for volatility, focus on measurable value creation, and use Canada’s energy and research advantages to capture a durable share of the next wave of AI-driven growth.

 
