This article distills and expands that demonstration into a comprehensive guide for Canadian tech readers: why Strands matters, how to set it up with AWS Bedrock or alternative LLM providers, how to build tools and agents, and how to design multi-agent graph and swarm workflows tailored to business intelligence use cases. The aim is to leave C-suite and tech leaders with a clear path to prototype and pilot agent-driven intelligence in their organizations.
Table of Contents
- Why Strands matters for Canadian tech
- High-level overview: What the demo shows
- Setting up Strands: environment, credentials, and the first agent
- Model choice: a critical decision for Canadian tech teams
- Custom tools: turning code into capabilities
- Multi-agent collaboration: math and text agents as a primer
- Designing a multi-agent research team for business intelligence
- Graph vs. swarm: choosing the right orchestration pattern
- Shared memory and state management
- Model orchestration: mixing and matching models
- Security, privacy, and Canadian compliance considerations
- Integration patterns: combining Strands with enterprise stacks
- Operationalizing an agentic BI pipeline: a practical roadmap
- Cost considerations for Canadian tech organizations
- Extending Strands: CrewAI, LangChain, and custom MCP tools
- Real-world applications for Canadian tech sectors
- Example: from headlines to executive brief
- Best practices: prompts, tools, and governance
- Common pitfalls and how to avoid them
- Why Canadian organizations should act now
- Implementation checklist for Canadian tech teams
- Frequently Asked Questions (FAQ)
- Conclusion: a practical path for Canadian tech to harness agentic BI
Why Strands matters for Canadian tech
Strands is open-source, free, and model-agnostic — three characteristics that make it compelling for Canadian tech teams seeking control, flexibility, and enterprise readiness. It supports multiple orchestration paradigms (graph and swarm), baked-in memory, and a diverse set of tools. For Canadian tech firms that prize sovereignty, extensibility, and predictable pipelines, these features are particularly attractive.
Here’s why Strands is a strategic choice for the Canadian tech scene:
- Open-source transparency: Teams in the GTA and across Canada can inspect, customize, and audit the framework — important for regulatory and vendor-risk assessments.
- Model-agnostic architecture: Strands plays well with AWS Bedrock models, OpenAI, Claude, and others, allowing Canadian tech organizations to choose models based on compliance, cost, and capability.
- MCP and memory support: The framework supports the Model Context Protocol (MCP) for externalized tools and computations, plus shared memory, simplifying multi-agent coordination for business intelligence tasks.
- Integration-friendly: Strands can be combined with LangChain, CrewAI, and existing MLOps or data engineering stacks commonly used by Canadian enterprises.
For technology leaders weighing whether to integrate agentic systems into procurement roadmaps or pilot innovation projects, Strands presents a low-risk, flexible option that aligns with Canadian tech governance expectations.
High-level overview: What the demo shows
Matthew Berman’s walkthrough begins with a minimal Strands project that executes a simple calculator tool and rapidly escalates to a sophisticated multi-agent business intelligence pipeline. The demo touches on critical topics for Canadian tech practitioners:
- Setting up a project environment and AWS credentials using IAM and Bedrock permissions.
- Running a minimal agent with a built-in calculator tool.
- Creating custom tools and decorating them as Strands tools so agents can call them.
- Composing multiple agents — math and text agents — to demonstrate message passing and cooperative workflows.
- Building a multi-agent research team that fetches live news headlines, simulates social sentiment, compiles background intelligence, analyzes market dynamics, scores sentiment, generates recommendations, and synthesizes an executive report.
- Choosing between graph and swarm orchestration patterns depending on the use case.
Below, each stage is expanded with practical advice, architectural considerations, and ways Canadian tech teams should think about adoption and scale.
Setting up Strands: environment, credentials, and the first agent
The starting point is simple: create a project folder with a requirements file and an environment variables file. Strands looks for credentials by default in the environment, and in Matthew’s example, Bedrock (AWS) is the initial provider. Canadian tech organizations that already use AWS will find this familiar, while others can substitute OpenAI or any supported LLM.
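As a concrete starting point, the scaffold might look like this. The package names and region value are assumptions based on the demo, not verified specifics; confirm them against the Strands documentation:

```text
# requirements.txt: the Strands SDK and its bundled tools
strands-agents
strands-agents-tools

# .env: local only, never commit credentials to source control
AWS_ACCESS_KEY_ID=<your access key>
AWS_SECRET_ACCESS_KEY=<your secret key>
AWS_DEFAULT_REGION=ca-central-1   # Canadian region, if residency matters
```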
IAM best practices for Canadian enterprises
Before running the demo, the video walks through the AWS Console: attaching a Bedrock access policy to an IAM user and generating programmatic access keys. Canadian tech teams should follow enterprise-grade practices:
- Use IAM roles with least-privilege policies rather than long-lived user keys where possible.
- Store secrets in an enterprise secrets manager (AWS Secrets Manager, HashiCorp Vault) rather than plaintext .env files in source repositories.
- Enable CloudTrail monitoring and alerting for anomalous API activity.
- Consider regional restrictions to satisfy Canadian data residency needs where applicable.
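To make the least-privilege point concrete, here is a sketch of an inline IAM policy scoped to Bedrock model invocation only. Treat the action list and resource ARN as a starting point to verify against current AWS documentation, not a production-ready policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:ca-central-1::foundation-model/*"
    }
  ]
}
```

Scoping `Resource` to specific model IDs, rather than `*`, tightens this further for production use.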
With credentials in place, a minimal Strands script can import an Agent class and a tool (like calculator), instantiate the agent, run a simple prompt such as “What is the square root of 1764?”, and get an answer. This micro-example demonstrates the control flow: tools are functions with descriptive docstrings that the agent can discover and call.
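That control flow can be sketched framework-independently. The code below is a toy stand-in, not the Strands API: a real agent lets the LLM choose tools based on their docstrings, while this sketch hard-codes the routing so it runs anywhere:

```python
import math

def calculator(expression: str) -> float:
    """Evaluate a simple arithmetic expression.

    Toy stand-in for a built-in calculator tool. Builtins are stripped
    from eval(); even so, never eval() untrusted input in production.
    """
    return eval(expression, {"__builtins__": {}}, {"sqrt": math.sqrt})

class ToyAgent:
    """Minimal agent loop: inspect the prompt, pick a tool, call it."""

    def __init__(self, tools):
        # Index tools by name, as a framework would after discovery.
        self.tools = {t.__name__: t for t in tools}

    def __call__(self, prompt: str):
        # A real agent would delegate tool selection to the LLM;
        # hard-coded routing keeps this sketch self-contained.
        if "square root" in prompt:
            number = int("".join(c for c in prompt if c.isdigit()))
            return self.tools["calculator"](f"sqrt({number})")
        raise ValueError("no tool matches this prompt")

agent = ToyAgent(tools=[calculator])
print(agent("What is the square root of 1764?"))  # → 42.0
```

The essential idea survives the simplification: tools are plain functions, the agent holds a registry of them, and a prompt is resolved by selecting and invoking the right tool.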
Model choice: a critical decision for Canadian tech teams
One of the most important design decisions in agent systems is model selection. Strands is model-agnostic: Matthew shows switching between Bedrock models (e.g., Nova Pro, Nova Lite, Claude Sonnet 4) and OpenAI’s GPT-3.5 Turbo without changing the orchestration code.
Canadian tech teams should evaluate models using criteria beyond raw capability:
- Compliance and data residency: Does the provider offer data handling that complies with Canadian privacy regulations?
- Latency and cost: Model invocation patterns and per-token costs matter when you scale multi-agent queries.
- Robustness and safety: Different models have different propensities for hallucination — choose models with stronger guardrails where regulatory risk is high.
- Specialization: Use models that fit the task (e.g., summarization vs. strategic analysis vs. code generation).
Matthew’s demo includes a neat pattern: a small coding agent that chooses the best model dynamically based on the task and model strengths. While this is custom code in the example, Canadian tech teams may embed model-selection logic in orchestration layers to optimize for cost, capability, and compliance.
Custom tools: turning code into capabilities
Where Strands shines is the ease of converting Python functions into agent-accessible tools. A tool in Strands is a function annotated with a decorator and a triple-quoted docstring that explains its inputs, outputs, and intent. This docstring acts as the agent-facing contract. Let’s walk through how this maps to enterprise patterns:
- Tool definition: A function that fetches headlines, reads a CSV from a data lake, or calls an internal analytics API.
- Tool contract: The docstring tells the agent exactly what arguments to pass and what to expect back — essential for predictable orchestration.
- Tool security: Access to tools should be gated; production systems often restrict which agents can call which tools based on roles.
- Testability: Tools are regular code units and can be unit-tested, ensuring reliability as the agentic system scales.
In the demo, Matthew writes a “Get AI Headlines” tool that scrapes TechCrunch for headlines relevant to a topic. He describes the tool’s inputs and outputs in the docstring so agents know what to expect. The tool then returns a pipe-separated string of headlines as promised — a simple contract that prevents ambiguity during agent orchestration.
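A minimal version of this contract pattern can be sketched in plain Python. The decorator and canned headlines below are illustrative stand-ins, not the demo's actual code:

```python
TOOL_REGISTRY = {}

def tool(fn):
    """Stand-in for a framework @tool decorator: registers the function
    and keeps its docstring available as the agent-facing contract."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@tool
def get_ai_headlines(topic: str) -> str:
    """Fetch recent AI headlines mentioning `topic`.

    Args:
        topic: keyword to filter headlines by (case-insensitive).

    Returns:
        A pipe-separated string of matching headlines.
    """
    # Canned data stands in for the live TechCrunch scrape in the demo.
    headlines = [
        "OpenAI ships new reasoning model",
        "Canadian AI startup raises Series B",
        "Chip makers race to meet AI demand",
    ]
    matches = [h for h in headlines if topic.lower() in h.lower()]
    return " | ".join(matches)

print(get_ai_headlines("AI"))
```

The docstring promises a pipe-separated string and the function delivers exactly that, so any downstream agent can split on the delimiter without guessing at the format.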
Multi-agent collaboration: math and text agents as a primer
To demonstrate inter-agent coordination, the video constructs a simple workflow with a math agent and a text agent. The math agent supports add and multiply; the text agent supports counting words and characters. The demo orchestrates a small multi-step task across these agents, showing how specialized agents can collaborate to solve a composite problem.
There are important architectural lessons here for Canadian tech implementers:
- Modularity: Small, single-purpose agents are easier to test and reason about than monolithic agents.
- Tool ownership: Assign tools to agents by domain expertise (text tools to NLP specialists, numeric tools to math/analytics agents).
- Message contracts: Use structured outputs (JSON, delimited strings) to ensure downstream agents can parse results reliably.
- Memory and state: Shared memory in Strands reduces the need to pass redundant context manually across agents.
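These lessons can be illustrated with a plain-Python sketch of the primer, assuming nothing about the Strands API: two specialized agents, each owning its own tools, composed by a simple orchestrator:

```python
class MathAgent:
    """Owns the numeric tools."""
    def add(self, a, b): return a + b
    def multiply(self, a, b): return a * b

class TextAgent:
    """Owns the text tools."""
    def count_words(self, text): return len(text.split())
    def count_chars(self, text): return len(text)

def composite_task(sentence: str) -> int:
    """Multi-step task spanning both domains: count the words,
    then square the count. Each agent handles only its own step."""
    text_agent, math_agent = TextAgent(), MathAgent()
    words = text_agent.count_words(sentence)    # text agent's step
    return math_agent.multiply(words, words)    # math agent's step

print(composite_task("agents collaborate on composite problems"))  # → 25
```

The orchestrator, not the agents, decides the order of operations; each agent stays small, single-purpose, and independently testable.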
Designing a multi-agent research team for business intelligence
Matthew’s signature demo builds a full team of agents to produce a business intelligence (BI) report on any topic. This is the section where Canadian tech readers will find the most practical ideas to adopt within their organizations.
The agent roles in the demo are:
- Content Agent: Fetches and processes live news headlines from sources such as TechCrunch.
- Social Media Agent: Simulates social sentiment and generates sample posts; in the demo, social scraping is simulated rather than performed live, for ethical and practical reasons.
- Research Agent: Compiles background intelligence, identifies key players, and constructs timelines.
- Strategic Expert Agent: Analyzes market dynamics and competitive landscapes.
- Sentiment Agent: Scores emotional tone, provides psychological insights and stakeholder sentiment breakdowns.
- Recommendations Agent: Produces actionable strategies with step-by-step implementation advice.
- Executive Synthesizer: Combines outputs into a concise, executive-level report.
This breakdown mirrors the functional roles of a human BI team and demonstrates how agent specialization creates an efficient assembly line for intelligence. For Canadian tech enterprises pursuing digital transformation, this model provides a blueprint for augmenting human analysts with agentic automation.
Practical example: reporting on AI funding in Canada
Imagine a Toronto-based venture capital firm wants a weekly briefing on AI funding in Canada. The multi-agent pipeline can be parameterized to focus on “AI funding – Canada” and run on a schedule with the following workflow:
- Content Agent scrapes headlines from domestic and international outlets, with emphasis on Canadian sources and regional outlets in the GTA.
- Research Agent compiles background intelligence on involved companies, funding rounds, and policy announcements.
- Social Media Agent simulates or measures social chatter among Canadian tech influencers and venture accounts.
- Strategic Agent analyzes implications for market entrants and competitive threats in the Canadian market.
- Sentiment Agent measures tone from public and investor-facing channels.
- Recommendations Agent prescribes responses: deal pursuit, PR strategies, or market entry tactics for Canadian startups.
- Executive Synthesizer produces a concise briefing for partners and the investment committee.
This use case demonstrates how agents can compress days of human research into repeatable, rapid insights — a compelling value proposition for the Canadian tech investment community.
Graph vs. swarm: choosing the right orchestration pattern
Strands provides two orchestration modes: graph and swarm. Choosing between them depends on whether the workload needs predictability or exploration.
Graph (flow-chart style):
- Deterministic pipelines: each agent’s execution depends on previous outputs.
- Best for compliance-sensitive BI where traceability and reproducibility are essential.
- Clear mapping between input and output — helpful for audit trails and regulatory reporting in Canadian industries.
Swarm (parallel exploration):
- Concurrent agents operate in parallel without fixed ordering.
- Good for brainstorming, creative analysis, and redundancy — multiple agents may approach the same problem from different angles.
- Useful for market exploration tasks typical in early-stage product-market fit discovery among Canadian startups.
Canadian tech teams should choose graph when deterministic pipelines and audits are required, and swarm when the problem benefits from parallelization and diverse hypotheses. Organizations can also combine patterns across different stages of a workflow — for example, using swarm for exploratory research, then graph for structured recommendations.
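The contrast can be sketched in plain Python, with functions standing in for agents and `concurrent.futures` standing in for Strands' swarm mode; this is an illustration of the two shapes, not the framework's actual primitives:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(topic):    return f"headlines about {topic}"
def research(data):  return f"background for [{data}]"
def summarize(data): return f"brief: {data}"

def run_graph(topic):
    """Graph: a deterministic chain in which each step consumes
    the previous step's output. Fully reproducible and auditable."""
    return summarize(research(fetch(topic)))

def run_swarm(topic):
    """Swarm: independent agents explore the same topic in parallel;
    their results are collected and merged afterwards."""
    agents = [fetch, research, summarize]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda agent: agent(topic), agents))

print(run_graph("AI funding"))
print(run_swarm("AI funding"))
```

In the graph run, each agent sees enriched output from its predecessor; in the swarm run, every agent sees only the raw topic, which is exactly why swarm suits divergent exploration and graph suits traceable pipelines.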
Shared memory and state management
Strands features built-in shared memory across agents. This feature simplifies cross-agent context passing and reduces the boilerplate that would otherwise be required to persist intermediate state. For distributed enterprise systems, this has practical implications:
- Reduced orchestration overhead: Agents can consult shared memory rather than being fed the full context on every call.
- Context consistency: Shared memory provides a single source of truth for agent states, reducing synchronization errors.
- Governance: Memory usage can be audited, and retention policies can be enforced to meet Canadian privacy rules.
However, teams should harden memory usage by implementing retention policies, access controls, and encryption to satisfy corporate governance and privacy requirements. Canadian tech organizations operating in regulated verticals (e.g., finance, healthcare) must be especially vigilant about what data is persisted and for how long.
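A governed shared-memory store can be sketched as follows. This is an illustrative pattern, not the actual Strands memory API; the audit trail is the hook for the retention and access controls discussed above:

```python
import time

class SharedMemory:
    """Single store all agents read from and write to,
    with an audit log recording who touched what, and when."""

    def __init__(self):
        self._store = {}
        self.audit_log = []

    def write(self, agent: str, key: str, value):
        self._store[key] = value
        self.audit_log.append((time.time(), agent, "write", key))

    def read(self, agent: str, key: str):
        self.audit_log.append((time.time(), agent, "read", key))
        return self._store.get(key)

memory = SharedMemory()
memory.write("content_agent", "headlines", ["headline A", "headline B"])
# A downstream agent consults memory instead of being re-fed the context:
print(memory.read("research_agent", "headlines"))
print(len(memory.audit_log))  # → 2
```

Production hardening would add per-agent access rules on `read`/`write`, encryption at rest, and a retention job that expires keys on schedule.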
Model orchestration: mixing and matching models
One intriguing pattern in the demo is dynamically selecting a model based on task needs. Matthew shows how different Bedrock models — Nova Pro, Nova Lite, Claude Sonnet 4 — can be chosen for their respective strengths. Canadian tech teams can formalize this approach in production:
- Task-based model routing: Summarization tasks route to a fast, cheaper summarizer; high-stakes strategic analysis routes to higher-quality models with better alignment tools.
- Model fallback: For critical tasks, implement a fallback model for redundancy if the primary model is unavailable.
- Cost optimization: Route high-volume, low-complexity requests to lighter models to control spend.
Architecturally, model routing can be embedded in an orchestration layer that evaluates task metadata and selects the appropriate model or ensemble of models.
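A task-based router with fallback might be sketched like this. The model identifiers are illustrative labels, not verified Bedrock model IDs:

```python
# Preference-ordered routes: cheap first for volume work,
# high-quality first for high-stakes analysis, each with a fallback.
ROUTES = {
    "summarize": ["nova-lite", "gpt-3.5-turbo"],
    "strategy":  ["claude-sonnet-4", "nova-pro"],
}

def route_model(task_type: str, unavailable: set = frozenset()) -> str:
    """Return the first available model for a task type,
    falling back down the preference list as needed."""
    for model in ROUTES.get(task_type, ["nova-lite"]):
        if model not in unavailable:
            return model
    raise RuntimeError(f"no model available for task {task_type!r}")

print(route_model("summarize"))                                   # → nova-lite
print(route_model("strategy", unavailable={"claude-sonnet-4"}))   # → nova-pro
```

In a real orchestration layer, the `unavailable` set would be populated from health checks or provider error responses, and the routing table could also encode cost ceilings and compliance constraints per task type.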
Security, privacy, and Canadian compliance considerations
Agentic systems can surface sensitive information if not designed carefully. For Canadian tech organizations, the following considerations are essential:
- Data residency: Determine whether model providers and the Strands deployment meet Canadian data residency or sovereignty requirements.
- PII handling: Ensure tools and agents sanitize or avoid transmitting personally identifiable information to third-party models where necessary.
- Access controls: Implement role-based access for agents and tools; audit which agents have permission to call external APIs.
- Monitoring: Set up logging and monitoring for agent outputs to catch hallucinations or biased conclusions.
- Human-in-the-loop: For high-risk recommendations (e.g., regulatory strategy), require human sign-off before publication.
These guardrails are not optional for enterprises operating in regulated sectors within Canada. They are necessary to manage legal risk and build stakeholder trust.
Integration patterns: combining Strands with enterprise stacks
Strands integrates well with enterprise ecosystems. Below are practical integration patterns that Canadian tech teams should consider when operationalizing agentic workflows:
- Data pipelines: Connect content agents to enterprise data lakes (S3, GCS) and event-driven ingestion for real-time intelligence.
- Analytics and BI tools: Output structured data from agents into dashboards (Power BI, Looker) to enable human analysts to interrogate model outputs.
- Workflows and orchestration: Use existing workflow engines (Airflow, Step Functions) to schedule and monitor multi-agent jobs.
- ML Ops: Integrate with model governance and monitoring stacks for drift detection and performance tracking.
- Collaboration tools: Publish executive summaries to internal tools (Confluence, Slack) for seamless distribution.
These integration patterns make agent outputs actionable and ensure alignment with corporate IT policies in Canadian organizations.
Operationalizing an agentic BI pipeline: a practical roadmap
For Canadian tech teams ready to pilot Strands for business intelligence, here’s a recommended step-by-step plan:
- Define a high-value pilot: Pick a clear business question that BI can answer weekly (e.g., competitor funding, regulatory updates in AI for healthcare in Canada).
- Scope agents narrowly: Start with a 3-agent setup (content fetcher, research synthesizer, executive summarizer).
- Secure the pipeline: Use least-privilege IAM roles, secure storage for secrets, and policy-based access to tools.
- Instrument for monitoring: Log agent calls, tool usage, and model responses to build an audit trail.
- Human review gates: Introduce review steps before executive distribution.
- Measure value: Track time saved, decision velocity improvements, and accuracy of recommendations compared to manual research.
- Iterate and expand: Add specialized agents (sentiment, strategic analyst) and consider moving exploratory tasks to swarm mode.
This phased approach reduces risk and demonstrates value to stakeholders in the Canadian tech ecosystem.
Cost considerations for Canadian tech organizations
Agentic systems can be cost-effective or costly depending on execution. Key levers to control cost include model selection, request batching, and caching of intermediate computations. Here are recommended practices:
- Use lighter models for high-volume tasks: Summaries and routine extracts can often be handled by cheaper models.
- Cache news and intermediate results: Avoid re-calling models for unchanged inputs.
- Batch requests: Combine similar prompts to reduce API overhead.
- Monitor token usage: Set caps and alerts to detect runaway usage.
Cost control is especially important for Canadian startups and SMEs that need predictable burn rates while experimenting with agentic capabilities.
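The caching lever is straightforward to sketch: key the cache on a hash of the prompt so that unchanged inputs never trigger a second billed call. The model call below is a stand-in, not a real API:

```python
import hashlib

_cache = {}
calls = {"count": 0}

def expensive_model_call(prompt: str) -> str:
    """Stand-in for a billed LLM invocation."""
    calls["count"] += 1
    return f"summary of: {prompt}"

def cached_call(prompt: str) -> str:
    """Return a cached result for a previously seen prompt,
    invoking the model only on a cache miss."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = expensive_model_call(prompt)
    return _cache[key]

cached_call("today's AI headlines")
cached_call("today's AI headlines")   # served from cache, no second call
print(calls["count"])  # → 1
```

The same keying trick extends to intermediate agent outputs: hash the inputs to each pipeline stage, and re-run a stage only when its inputs actually change.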
Extending Strands: CrewAI, LangChain, and custom MCP tools
Strands is designed to be composable. It can interoperate with CrewAI, LangChain, and custom MCP tools. Canadian tech teams can leverage these integrations to accelerate development and introduce sophisticated capabilities:
- LangChain: Use familiar chaining patterns and retrievers for complex retrieval-augmented generation (RAG) tasks.
- CrewAI: Leverage team-oriented agent orchestration, particularly useful for collaboration across analyst groups in the GTA.
- Custom MCP: Write domain-specific tools (financial models, legal checkers) and expose them as Strands tools for agents to call.
These extensions help align the framework with organizational practices, enabling larger teams to collaborate on agentic pipelines.
Real-world applications for Canadian tech sectors
Agentic BI pipelines are not theoretical — they map directly to Canadian industry needs. Below are examples of actionable applications across sectors:
- Venture capital and private equity: Automated deal scouting, weekly investment briefs on target sectors in Canada.
- Financial services: Regulatory monitoring for domestic policy changes and market impact analysis.
- Telecom and infrastructure: Competitive intelligence on policy shifts and vendor activity across the GTA.
- Healthcare tech: Surveillance of research breakthroughs, funding announcements, and regulatory guidance.
- Supply chain and manufacturing: News-driven risk assessments and supplier monitoring for Canadian manufacturers.
These use cases highlight how agentic systems can accelerate decision-making and provide differentiated competitive intelligence for Canadian tech players.
Example: from headlines to executive brief
In the demo, the multi-agent pipeline synthesizes a two-page executive report on a topic entered interactively. The steps look like this:
- User inputs the topic (e.g., “What is happening with OpenAI right now?”)
- Content Agent scrapes relevant headlines from TechCrunch and other sources.
- Social Media Agent simulates sentiment and generates representative posts.
- Research Agent pulls background info, timelines, and key players.
- Strategic Agent analyzes market implications.
- Sentiment Agent scores tone and psychological drivers.
- Recommendations Agent formulates next actions.
- Executive Synthesizer consolidates everything into a concise briefing.
The final output is an actionable, human-readable report that can be shared with executives. For Canadian tech firms, this report can be tailored to include regional priorities, such as market dynamics in the GTA or implications for federal policy and funding programs.
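The steps above can be sketched as a deterministic, graph-style chain, with stub functions standing in for the real agents and a shared report dict accumulating each stage's contribution:

```python
def content_agent(topic):
    return [f"headline about {topic}"]

def research_agent(headlines):
    return f"background drawn from {len(headlines)} headlines"

def sentiment_agent(headlines):
    return "neutral"   # a real agent would score tone from the text

def synthesizer(report):
    """Consolidate all sections into a concise, readable brief."""
    return (f"EXECUTIVE BRIEF: {report['topic']}\n"
            f"- {report['background']}\n"
            f"- sentiment: {report['sentiment']}")

def run_pipeline(topic):
    report = {"topic": topic}
    headlines = content_agent(topic)
    report["background"] = research_agent(headlines)
    report["sentiment"] = sentiment_agent(headlines)
    return synthesizer(report)

print(run_pipeline("OpenAI"))
```

Swapping these stubs for real agents preserves the shape: each stage reads what it needs from the shared report, adds its own section, and the synthesizer sees everything at the end.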
Best practices: prompts, tools, and governance
Implementing a robust agentic BI capability requires attention to prompts, tool design, and governance:
- Prompt engineering: Define clear system prompts for agent roles to ensure consistent behavior across runs.
- Tool documentation: Treat tool docstrings as formal contracts and version them to maintain backward compatibility.
- Auditability: Log agent decisions and tool calls for traceability and post-hoc analysis.
- Human oversight: Establish approval gates for high-impact outputs and train staff to validate model findings.
These practices give Canadian tech organizations control over their agentic pipeline and build confidence among stakeholders.
Common pitfalls and how to avoid them
Several common mistakes can derail pilot projects. Here are practical tips to avoid them:
- Overcomplicating early: Start small with narrow agents and expand once value is proven.
- Ignoring security: Treat agent tools as first-class attack surfaces and secure them accordingly.
- No human validation: Humans should review initial outputs to catch hallucinations and bias.
- Underestimating cost: Monitor model spend and optimize by selecting appropriate models.
- Poor observability: Implement logging, monitoring, and alerting from day one.
Why Canadian organizations should act now
The AI landscape is evolving rapidly. Firms that incorporate agentic intelligence into their workflows early can unlock efficiency gains and strategic insights that translate into competitive advantage. For Canadian tech players — from startups in the GTA to national enterprises — Strands offers a pragmatic, low-friction way to experiment with agentic systems without being locked into a single model provider.
“Strands is open source, model agnostic, and comes with built-in memory and MCP integration — a compelling option for Canadian tech teams looking to build robust agentic pipelines.” — Matthew Berman
Implementation checklist for Canadian tech teams
- Define a high-value pilot with clear KPIs (time saved, decision impact).
- Secure IAM and secrets management; prefer short-lived credentials and roles.
- Start with a small, focused agent set and explicit tool contracts.
- Select models based on task needs, cost, and compliance.
- Instrument monitoring and logging for audit and observability.
- Introduce human review steps for high-risk outputs.
- Iterate and expand into swarm workflows for exploratory tasks.
Frequently Asked Questions (FAQ)
What is Strands and how does it differ from other agent frameworks?
Strands is an open-source multi-agent framework designed to be model-agnostic and integrate with multiple orchestration patterns (graph and swarm). Unlike some closed frameworks, Strands allows developers to plug in any model provider, create custom tools with descriptive docstrings, and benefit from built-in memory and MCP tool support. This makes it a flexible choice for Canadian tech teams that require transparency, extensibility, and enterprise-grade controls.
Can Canadian tech companies use Strands with models other than AWS Bedrock?
Yes. Strands supports multiple model providers. While Matthew’s demo uses AWS Bedrock and provides guidance for IAM and model selection, developers can use OpenAI models or other providers. The framework’s model-agnostic design lets teams adopt models based on performance, compliance, and cost considerations relevant to Canadian tech organizations.
How can Canadian enterprises protect sensitive data when using Strands?
Enterprises should implement least-privilege IAM policies, use secure secrets managers, enforce data residency requirements when necessary, sanitize inputs to avoid transmitting PII to third-party models, and maintain audit logs for all model invocations and tool calls. Additionally, retention policies and encryption for shared memory should be standard practice.
What’s the difference between graph and swarm orchestration?
Graph orchestration is deterministic and sequential, suitable for pipelines where step-by-step traceability is paramount. Swarm orchestration runs agents in parallel, useful for exploration and generating diverse hypotheses. Canadian tech teams should choose graph for compliance-heavy workflows and swarm when seeking creative or explorative outcomes.
How can smaller Canadian startups minimize cost while experimenting with agentic systems?
Startups should begin with a narrow pilot, use cheaper models for high-volume tasks, batch requests where possible, cache intermediate results, and monitor token usage tightly. Using lighter models for routine tasks and reserving higher-quality models for strategic analysis can dramatically reduce costs.
Can Strands integrate with existing analytics and BI tools?
Yes. Strands can export structured outputs to data lakes, feed dashboards (Power BI, Looker), and integrate with workflow engines and ML Ops pipelines. These integrations make agent output accessible and actionable across enterprise systems used by Canadian tech firms.
Is Strands suitable for regulated industries in Canada?
Yes, with proper governance. Strands can be adapted to meet regulatory requirements through data residency planning, strict IAM controls, human-in-the-loop checkpoints, and robust auditing. For industries like finance and healthcare, additional safeguards around PII and model usage are necessary.
Conclusion: a practical path for Canadian tech to harness agentic BI
Strands represents a practical, flexible platform for building agentic business intelligence systems. Matthew Berman’s walkthrough demonstrates how quickly teams can move from a simple calculator agent to a sophisticated multi-agent research team that collects headlines, simulates sentiment, conducts research, analyzes strategy, and produces executive-ready reports.
For Canadian tech leaders, the framework aligns with the needs of the national tech ecosystem: it supports transparency, allows model choice for compliance and cost control, and integrates well with existing enterprise tools. Whether the goal is giving analysts superpowers in the GTA or providing the boardroom with fast, reliable intelligence across the country, Strands provides a viable foundation.
Now is the time to pilot agentic workflows: start small, secure thoroughly, and measure impact rigorously. Canadian tech organizations that adopt these practices will accelerate decision-making, reduce research overhead, and gain a competitive edge in an increasingly AI-driven marketplace.
Is your business ready to pilot an agentic BI system? Share your plans and challenges — and consider starting with a tightly scoped use case where measurable wins can be achieved within weeks.