
Spec-Driven Power Automate: Parallel Flow Generation From a Single Spec

How a precise markdown spec, tag-based architecture, and parallel AI agents turn a batch of notification flows into a clean Dataverse solution import. Pattern walkthrough.

Alex Pechenizkiy 11 min read

A batch of notification flows turned around in minutes, packaged into a solution ZIP, imported clean into a Dataverse environment. No designer. No copy-paste. Specs in git, AI agents producing flow JSON in parallel from those specs.

That outcome is realistic, but it is not free. The execution time is short only because the architecture work that makes it possible has already been done. Naming conventions, machine-readable specs, separation of concerns, and a packaging pipeline are the prerequisites. With those in place, a notification batch is largely an execution exercise. Without them, no AI assistant can produce consistent output.

This article walks through the pattern using a fictional example: a Performance Management application at Northwind, a made-up firm, with a custom evaluation and signing workflow on Dataverse. Schema names use the placeholder publisher prefix app_*. The example is illustrative, but every pattern below is the one you would use on a real build.

Figure: Timeline showing governance foundation leading to AI-powered flow generation

The Starting Point: Not a Blank Canvas

The pattern only works when the environment already follows governance. In the Northwind example, the Performance Management application has business flows already running before notifications are added. They follow a tag convention.

  • REV (Review Cycle) for cycle-creation and access-team flows
  • EVL (Evaluation) for evaluator assignment, signer resolution, and signing config
  • STP (Signing Step) for the signing chain, step advancement, and rejections

Every business flow is named consistently. Specs live in git as markdown files, version-controlled and diffable. Each flow does exactly one thing. No multi-concern flows. No flow that both processes business logic and sends emails.

This foundation is not built for AI. It is built because it makes maintenance possible. The fact that it is also machine-readable is the dividend.

When an AI assistant reads the spec for the first time, it understands the tag convention immediately. It can see that REV01 creates evaluation records, that EVL04 stamps signing configuration, that STP01 advances the signing chain. The naming convention is the documentation.
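The convention is machine-parseable as well as human-readable. A minimal sketch of pulling the tag out of a flow's display name, assuming a "TAG## - Title" format (the article specifies only the tag-plus-sequence prefix; the exact separator here is a hypothetical):

```typescript
interface FlowTag {
  tag: string;   // functional group, e.g. "REV", "EVL", "STP"
  seq: number;   // sequence number within the group
  title: string; // human-readable remainder
}

// Parse names like "STP01 - Advance Signing Chain" into their parts.
// Returns null for names that do not follow the convention.
function parseFlowName(name: string): FlowTag | null {
  const m = name.match(/^([A-Z]+(?:-[A-Z]+)?)(\d{2})\s*-\s*(.+)$/);
  if (!m) return null;
  return { tag: m[1], seq: parseInt(m[2], 10), title: m[3] };
}

console.log(parseFlowName("STP01 - Advance Signing Chain"));
// tag "STP", seq 1, title "Advance Signing Chain"
```

A name like "Send notification email" parses to null, which is exactly the kind of inventory gap a convention check should flag.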

Deep-dive: Tag-Based Flow Architecture.

The Requirement: A Notification Batch

The example requirement is a notification layer for the existing signing workflow. People need to know when forms are assigned, when signatures are due, when deadlines approach, and when rejections happen.

The product owner enumerates around a dozen distinct email types in the work item:

  • Form assignment alerts to authors when evaluations are assigned
  • Signing step activation alerts when a signer’s turn arrives
  • Self-sign acknowledgment notices, with distinct wording from standard signature requests
  • Reminder emails at 30, 14, and 7 days before due dates
  • Past-due escalation to supervisors when authors or signers miss deadlines
  • Rejection notifications to the author and to all previous signers in the chain
  • Evaluation completion confirmations
  • Review cycle launch and close announcements to all participants

The work item also answers a handful of design questions from a prior planning session. Those answers become binding decisions.

Reconciling Spec Drift

A draft notification spec already exists in the repo. The product owner’s answers do not fully align with that draft. Conflicts have to be resolved before any flow is built.

| Conflict | Original Spec | Decision | Resolution |
| --- | --- | --- | --- |
| Digest content | List all open items | New items + summary count of older pending | Update draft to match decision |
| Past-due frequency | Daily to author only | Weekly to author + daily supervisor escalation | Add supervisor escalation flow |
| Rejection scope | Notify author only | Notify author + all previous signers | Add rejection-to-signers flow |
| Completion notification | Not specified | Yes, with full signer chain details | Add evaluation-complete flow |
| Self-sign wording | Generic signing language | Distinct acknowledgment wording | Split into two flows |

The last row is worth a note. Distinct wording for self-sign vs standard signature is not a UI detail. It means splitting one flow into two. Different recipient logic, different subject lines, different body text. One flow cannot serve both purposes without becoming a branching mess.

Reconciliation expands the original count. Every conflict is documented in the spec before any JSON is generated. Documentation first. Always.

Deep-dive: Spec-First Development.

The Architectural Decisions That Make This Work

Four decisions, made before flow generation, determine whether the build succeeds: notifications never write to Dataverse, daily digests replace real-time triggers, each flow handles exactly one email template, and a tag convention groups flows by function. These constraints give the AI clear boundaries and make the output production-ready without rework.

1. Notifications Never Write to Dataverse

Every notification flow is read-only. It queries Dataverse tables, builds an email body, sends mail through a shared mailbox, and exits. It never updates a record. It never sets a status. It never modifies the signing chain.

A notification failure must never break the business process. If the “Ready for Signature” email fails to send, the signing step is still awaiting the signer. The signer can still open the form and sign. The business flow is unaffected.

The reverse is also enforced. Existing business flows (REV, EVL, STP) are never modified when notifications are added. Zero changes to production logic. Zero regression risk.

2. Daily Digest, Not Real-Time

When the cycle-creation flow opens a review cycle, it bulk-creates evaluation records. Real-time triggers on evaluation creation would flood a supervisor with many emails in seconds. That is not a notification. That is spam.

The daily digest pattern solves this. Most flows run on a schedule, once per day at a fixed business-hours time, weekdays only. Each flow queries for records that need attention, groups them by recipient, and sends one consolidated email per person.
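The grouping step is the heart of the digest pattern. A sketch under assumed field names (recipientEmail and description are illustrative, not from the spec):

```typescript
interface PendingItem {
  recipientEmail: string; // who should be notified (assumed field name)
  description: string;    // one line describing the pending record
}

// Group the day's query results so each recipient gets exactly one
// consolidated email instead of one email per record.
function buildDigests(items: PendingItem[]): Map<string, string[]> {
  const byRecipient = new Map<string, string[]>();
  for (const item of items) {
    const list = byRecipient.get(item.recipientEmail) ?? [];
    list.push(item.description);
    byRecipient.set(item.recipientEmail, list);
  }
  return byRecipient; // one map entry = one email to send
}
```

Ten bulk-created evaluations for one supervisor collapse into a single entry with ten lines, which is the difference between a digest and spam.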

The exceptions are rejection flows. Rejections are real-time because they are rare, urgent, and result from deliberate human action. A signer rejected a form. The author needs to know now, not tomorrow morning.

3. One Flow Per Email Template

A separate flow for each email type, not one flow with many branches. This is non-negotiable.

Each flow’s run history shows exactly one email type. When the product owner asks “are reminder emails going out?”, the answer is in one specific flow’s run history. When the same owner wants to disable a single preview email during a pilot, that one flow is turned off without touching anything else.

One flow per template also means one set of FetchXML queries per flow, one trigger configuration, one run history. Debugging is trivial. Inventory tracking is clean.

4. The NTF-EMAIL Tag Convention

The tag NTF-EMAIL is not arbitrary. It leaves room for NTF-INAPP (in-app notifications) and NTF-TEAMS (Teams adaptive cards) in the future. The tag also gives the AI a grouping signal. “Build all NTF-EMAIL flows” carries the scope, the naming pattern, and the architectural constraints in three words.

Deep-dive: Notification Architecture.

The Build: Parallel Agents, One Spec

With specs updated and architecture locked, flow generation is the shortest part of the process.

The work parallelizes by functional group. Each AI agent thread takes one group:

| Agent batch | Functional group |
| --- | --- |
| Author-facing scheduled flows | Form assignment, reminders, past-due to author |
| Signer-facing scheduled flows | Signing step ready, self-sign acknowledgment, signer heads-up |
| Event-driven and completion flows | Rejections, completion confirmation |
| Cycle broadcasts and escalations | Cycle launch, cycle close, supervisor escalation |

Each agent produces Power Automate Editor format JSON files. The patterns are consistent across all agents because they are documented in the spec before parallel execution begins. Every agent reads the same spec. Every agent follows the same conventions.

The consistency comes from the spec, not from the AI. The AI is the executor. The spec is the source of truth.

What Every Flow Shares

All notification flows in the example use identical patterns:

  • FetchXML queries instead of OData $filter, because temporal operators (last-x-hours, olderthan-x-days) and linked entity joins cannot be expressed in OData
  • Variable initialization at the top-level action sequence, as Power Automate requires
  • SharedMailboxSendEmailV2 action sending from a shared no-reply mailbox, not the flow owner’s personal mailbox
  • Sequential concurrency on every Apply to each loop (parallelism = 1) to prevent race conditions when building HTML email bodies
  • Deep links in every email using appid and forceUCI=1 parameters for direct navigation to the record
  • Environment variables for environment URL, notifications mailbox, and app ID, plus two connection references (Dataverse and Office 365 Outlook)
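Two of these shared patterns sketched in TypeScript. The table and column names (app_evaluation, app_duedate) are placeholders in the spirit of the article's app_* prefix, and next-x-days stands in for whichever temporal operator a given reminder needs:

```typescript
// A FetchXML query using a temporal operator that OData $filter cannot
// express: evaluations whose due date falls within the next N days.
function reminderQuery(days: number): string {
  return `
<fetch>
  <entity name="app_evaluation">
    <attribute name="app_name" />
    <attribute name="app_duedate" />
    <filter>
      <condition attribute="app_duedate" operator="next-x-days" value="${days}" />
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>`.trim();
}

// A deep link into the model-driven app: appid plus forceUCI=1 lands the
// recipient directly on the record.
function recordDeepLink(envUrl: string, appId: string, table: string, recordId: string): string {
  return `${envUrl}/main.aspx?appid=${appId}&pagetype=entityrecord&etn=${table}&id=${recordId}&forceUCI=1`;
}
```

In the flows themselves, envUrl and appId come from the environment variables listed above rather than hardcoded strings.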

When the output from all agents lines up side by side, the flows are structurally identical where they should be, and appropriately different where business logic requires it. That is what spec-driven development looks like.

Deep-dive: FetchXML in Power Automate.

The Packaging: From JSON to Importable Solution

A folder of flow JSON files is not a deployable solution. The packaging pipeline turns those files into something Dataverse can import.

  1. Base solution export as template. Export an existing solution as the starting point. This provides the correct XML schemas, content types, and publisher information.

  2. Format conversion. PA Editor format JSON is not the same as solution export format. Conversion adds a properties wrapper, simplifies connection references, and sets the correct schemaVersion.

  3. Deterministic GUIDs via UUIDv5. Same input always produces the same GUID. Re-running the build updates existing flows rather than creating duplicates. This is critical for iterative development.

  4. XML manifest updates. customizations.xml gets a Workflow entry for each flow. solution.xml gets a version bump and RootComponent entries.

  5. ZIP creation with forward slashes. Node.js archiver with forward-slash path separators. PowerShell Compress-Archive creates backslashes, which causes silent import failures in Dataverse.

A clean ZIP imports on the first attempt. All flows appear in the solution, linked to the correct connection references and environment variables. None of this works with loose flows. Every flow must be solution-aware from the start.
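The deterministic-GUID step is worth seeing concretely. A minimal UUIDv5 (RFC 4122 name-based, SHA-1) implementation using Node's crypto module; the namespace value here is the RFC's DNS namespace as an arbitrary example, not what a real pipeline would necessarily use:

```typescript
import { createHash } from "node:crypto";

// RFC 4122 version 5 UUID: SHA-1 of namespace bytes + name, with the
// version and variant bits forced into the hash output.
function uuidv5(namespace: string, name: string): string {
  const nsBytes = Buffer.from(namespace.replace(/-/g, ""), "hex");
  const hash = createHash("sha1").update(nsBytes).update(name, "utf8").digest();
  const b = Buffer.from(hash.subarray(0, 16));
  b[6] = (b[6] & 0x0f) | 0x50; // version 5
  b[8] = (b[8] & 0x3f) | 0x80; // RFC 4122 variant
  const hex = b.toString("hex");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}

const NS = "6ba7b810-9dad-11d1-80b4-00c04fd430c8"; // RFC 4122 DNS namespace, as an example

// Same flow name in, same GUID out: re-running the build updates the
// existing flow in Dataverse instead of creating a duplicate.
console.log(uuidv5(NS, "NTF-EMAIL-01") === uuidv5(NS, "NTF-EMAIL-01")); // true
```

Hashing the flow's logical name (plus a solution-specific namespace) is what turns "re-run the build" into an update rather than a duplicate import.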

Deep-dive: Building Solution ZIPs.

What AI Tends to Get Wrong

The success path is not the whole story. AI-generated Power Automate flows have a recognizable set of mistakes that show up early.

| AI Proposes | Why It Is Wrong | Correction |
| --- | --- | --- |
| Real-time triggers for every notification | Bulk-create operations would flood recipients with many emails in seconds | Daily digest pattern with a 24-hour lookback window |
| Adding email actions inside existing business flows | Mixes concerns: a notification failure can break the signing chain | Completely separate notification flows |
| Generic flow names like "Send notification email" | Impossible to monitor, debug, or selectively disable individual notification types | Tagged convention with one flow per template |
| Action types used where trigger types are required | Schema validation fails at import; the flow does not appear in the solution | Replace the action type with the matching trigger type and re-run packaging |
| Lowercase or invalid GUID formats | Solution import silently drops the workflow component | UUIDv5 with the canonical hyphenated format and uppercase hex where required |
| Inconsistent connection reference logical names | Flows fail to bind on import in a downstream environment | Single shared connection reference name across the entire solution |

Every one of these mistakes is reasonable from a surface-level perspective. Real-time notifications sound better than digests. Adding an email action to an existing flow sounds simpler than building a new one. A single notification flow sounds more efficient than several separate flows.

But every one of these proposals creates problems in production. The AI is optimizing for apparent simplicity. The architect is optimizing for maintainability, isolation, and operational clarity.

The correction pattern matters. When you explain the reasoning once for one agent, the other agents can be told to read the corrected spec and never repeat the same mistake. Correction is not a one-off fix. It is a permanent architectural adjustment.

Deep-dive: What AI Gets Wrong.

Verification Before Calling It Done

Generated flows are not finished flows. The verification step is short but mandatory.

  1. Count check. The solution contains exactly the flows the spec lists. Nothing extra. Nothing missing.

  2. Schema validation. Each flow JSON parses as valid Power Automate Editor format. Trigger types match trigger usage. Action types match action usage.

  3. Smoke test in dev. Each flow runs at least once in a dev environment with a representative record. The run history shows the expected query result, the expected email template, and a successful send.

  4. Connection references and env vars resolve. After import, the flows bind cleanly to the Dataverse and Office 365 connections, and to the env vars carrying environment URL, mailbox address, and app ID.

Smoke test failures are usually not flow-logic bugs. They are connection-binding issues, env-var misconfiguration, or shared-mailbox permission gaps. Walking the verification checklist surfaces them in minutes.
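The count check is easy to script. A sketch that diffs the spec's flow list against the Workflow entries in customizations.xml; the regexes assume simple, well-formed inputs (bulleted flow names in the spec, a Name attribute on each Workflow element), and a real pipeline would use a proper XML parser:

```typescript
// Flow names the spec promises, assuming they appear as "- NTF-EMAIL-NN ..." bullets.
function specFlowNames(specMarkdown: string): string[] {
  return [...specMarkdown.matchAll(/^- (NTF-EMAIL-\d+.*)$/gm)].map((m) => m[1].trim());
}

// Flow names the packaged solution actually contains.
function solutionFlowNames(customizationsXml: string): string[] {
  return [...customizationsXml.matchAll(/<Workflow[^>]*Name="([^"]+)"/g)].map((m) => m[1].trim());
}

// Nothing extra, nothing missing: report both directions of the diff.
function countCheck(spec: string, xml: string): { missing: string[]; extra: string[] } {
  const wanted = new Set(specFlowNames(spec));
  const found = new Set(solutionFlowNames(xml));
  return {
    missing: [...wanted].filter((n) => !found.has(n)),
    extra: [...found].filter((n) => !wanted.has(n)),
  };
}
```

A non-empty missing or extra list fails the build before anyone wastes time smoke-testing an incomplete solution.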

The Governance Payoff

Every governance practice that exists before AI-assisted generation becomes a force multiplier when AI enters the picture.

| Governance Practice | Pre-AI Value | Post-AI Value |
| --- | --- | --- |
| Tag-based naming (REV/EVL/STP/NTF) | Clean inventory and monitoring | AI uses tags to batch flows for parallel generation |
| Specs in git (markdown, version-controlled) | Human documentation and PR reviews | AI reads the spec and generates flow JSON directly from it |
| One flow per logical operation | Clear run history and debugging | AI follows the same pattern across the entire batch |
| Solution-aware flows | Clean ALM and deployment | Packaging pipeline produces deterministic GUIDs |
| Environment variables | Multi-environment deployment | Every flow uses the same env vars with zero hardcoded values |
| Source control for flow JSON | Diffable PRs and rollback | Every flow tracked in git, reviewable before import |

Without tags, the AI has no grouping signal. Without specs, the AI has no instructions. Without separation of concerns, the AI modifies the business flows. Without naming conventions, the AI produces flows named “Send notification.”

A short execution time is not an AI achievement. It is a governance achievement that AI makes visible.

What to Do Differently Next Time

Two improvements consistently pay back the investment.

First, write the notification spec earlier. A complete spec before the work item arrives removes most of the conflict reconciliation. The build time stays short. The prep time drops to near-zero.

Second, build the solution packaging pipeline once and reuse it. A reusable pipeline means every subsequent batch goes from spec to importable ZIP without rebuilding the tooling. The pipeline is governance investment, not AI investment.

The Takeaway

A notification batch turned around in minutes sounds like an AI story. It is not. It is a governance story.

The AI does not know that real-time triggers will flood recipients. You tell it. The AI does not know that notifications should be separated from business logic. You tell it. The AI does not know that each notification should be independently disableable. You tell it.

What the AI does is execute. Fast. Consistently. Across parallel threads. It takes the patterns established over months of governance work and replicates them across the batch without deviation, without fatigue, and without the inconsistencies that creep in when the same person hand-builds flow number twelve at 4 PM on a Friday.

A short execution time is the dividend. The governance is the investment.

If you are building on Power Platform and thinking about AI-assisted development, start with governance. Name your flows. Write specs. Put your flow JSON in source control. Build a flow inventory.

Then, when the AI arrives, it has something to work with.


Spec-Driven Power Platform Series

This article is part of a series on building Power Automate solutions with specs, governance, and AI:

  1. Series Overview
  2. Tag-Based Flow Architecture
  3. Spec-First Development
  4. Notification Architecture
  5. FetchXML in Power Automate
  6. Building Solution ZIPs
  7. What AI Gets Wrong

AZ365.ai - Azure and AI insights for architects building on Microsoft. Follow Alex on LinkedIn for architecture deep dives.
