Spec-First Power Automate Development: Why Your Flow Specs Should Exist Before the Designer Opens

Designer-first Power Automate is opaque to source control and AI. Spec-first development makes flow generation reviewable, diff-able, and parallelizable.

Alex Pechenizkiy 9 min read

A Power Automate flow built in the designer is opaque to source control. The decisions that shaped it (why this trigger, why this filter, why this recipient) live in the developer’s head and the designer’s undo stack. Both are gone the moment the tab closes.

A Power Automate flow generated from a machine-readable spec is the opposite. The decisions live in markdown, in git, in pull requests, with a diff history. The flow JSON is output. The spec is source.

This article argues for spec-first Power Automate development as a methodology: write the spec before opening the designer, treat the spec as the source of truth, and use AI to translate the spec into flow JSON deterministically.

Spec-first pipeline: spec document feeds AI agents that generate Power Automate flow JSON, packaged into a solution, imported to the target environment

The Designer-First Trap

Most Power Automate projects follow a predictable pattern. The product owner describes what they need. The developer opens the designer. Actions get dragged onto the canvas. Details get figured out in real time. Documentation, if it happens at all, comes after the flow works.

By then, the spec is already stale. It describes what was planned, not what was built.

This is the default because it feels productive. You are “building.” But you are also making architectural decisions in real time with no record of why. Every decision lives in the designer’s undo history, which disappears when you close the tab.

Microsoft’s own coding guidelines recommend adding descriptive notes to actions “just as you would add comments to lines of code.” That advice is sound, but it is backwards. Comments describe code that already exists. Specs describe code that should exist.

The difference matters when you have more than two or three flows to build, and it matters even more when you want AI to help build them.

What a Machine-Readable Spec Looks Like

A machine-readable spec is a structured markdown file stored in git that AI agents and humans can both parse without ambiguity. It uses consistent table columns, exact Dataverse schema names, and precise option-set values rather than vague descriptions.

A spec is not a Word document in SharePoint. We covered why in Living Documentation in Git. Word docs cannot be diffed, reviewed in pull requests, or branched. They have “last modified by” but not “what specifically was modified.”

A flow spec lives in one or two markdown documents alongside the solution code:

  • Notification Requirements Spec. Every product-owner decision captured: digest vs real-time delivery cadence, email wording variants by recipient role, escalation paths, and recipient scoping rules. This is the human-readable design intent.

  • Flows Spec. The complete flow inventory. Every flow cataloged with its tag, display name, trigger type, table queried, recipient logic, email subject pattern, and priority tier. This is the implementation intent.

Both live in the project’s git repository alongside the solution code, not on a wiki, not on SharePoint, not in the designer.

The Flow Inventory Table

The flow inventory is the core of the spec. Every flow on a single page:

| Tag | Display Name | Trigger | Table | Recipient | Email Subject | Priority |
| --- | --- | --- | --- | --- | --- | --- |
| NTF-01 | App \| [NTF-01] Form Assigned, Daily Digest | Recurrence (8:00 AM ET, weekdays) | app_evaluation | Author | Action Required: {Form Name} Assigned | P1 |
| NTF-02 | App \| [NTF-02] Ready for Signature, Daily Digest | Recurrence (8:00 AM ET, weekdays) | app_signingstep | Signer (non-self) | Action Required: Sign {Form Name} | P1 |
| NTF-05 | App \| [NTF-05] Rejection to Author | Step status changed to Rejected | app_signingstep | Author | Rejected: {Form Name} | P1 |

Tag-Based Flow Architecture introduced the tagging convention. The spec is where that convention is formally documented: every tag mapped to its flow, table, and trigger, with no ambiguity left to runtime.
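
A table this regular is machine-consumable by construction. As a minimal sketch (Python, assuming the inventory is a standard markdown pipe table), a parser that turns each row into a dictionary an agent or a validation script can iterate:

```python
def parse_flow_inventory(markdown: str) -> list[dict]:
    """Parse a markdown pipe table into one dict per flow row."""
    lines = [ln.strip() for ln in markdown.strip().splitlines()
             if ln.strip().startswith("|")]
    header = [h.strip() for h in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:                      # skip header + separator rows
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

inventory = parse_flow_inventory("""
| Tag    | Trigger    | Table           | Priority |
| ------ | ---------- | --------------- | -------- |
| NTF-01 | Recurrence | app_evaluation  | P1       |
| NTF-05 | Dataverse  | app_signingstep | P1       |
""")
print(inventory[0]["Table"])   # app_evaluation
```

Nothing about the parser is clever; that is the point. A spec that survives a dumb parser is a spec an AI agent cannot misread.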

Trigger Definitions

Scheduled flows use the Recurrence trigger with explicit time zone, day-of-week mask, and a corresponding lookback window expressed in the FetchXML query (for example, last-x-hours operator with a value matching the recurrence interval).

Real-time flows use the Dataverse trigger on the relevant table, filtered by attribute change to a specific option-set value (referenced by code, not by display label, so the spec is unambiguous across environments).

Action Sequence Pattern

A spec defines the canonical action sequence so every flow of the same class is structurally identical:

Initialize variables (sequential chain at top level)
  -> FetchXML query (List Rows with FetchXML, or Dataverse trigger payload)
    -> Apply to each (sequential, concurrency = 1)
      -> Resolve recipient email
      -> Build HTML body fragment (AppendToStringVariable)
      -> Detect recipient change (group break)
        -> Send email (SharedMailboxSendEmailV2 or equivalent)
        -> Reset accumulator

Every spec-driven flow uses this skeleton. The variables differ. The shape does not.
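
The group-break step is the piece most easily miswired in the designer, so it is worth seeing the logic in isolation. A Python sketch of what the Apply to each implements, with illustrative row data (field names here are assumptions, not the flow's actual schema):

```python
def build_digests(rows):
    """Emulate the flow's group break: rows arrive sorted by recipient;
    body fragments accumulate until the recipient changes, then one email
    per recipient is emitted."""
    emails, fragments, current = [], [], None
    for row in rows:
        if current is not None and row["recipient"] != current:
            emails.append({"to": current, "body": "".join(fragments)})  # send email
            fragments = []                                              # reset accumulator
        current = row["recipient"]
        fragments.append(f"<li>{row['name']}</li>")                     # append HTML fragment
    if current is not None:
        emails.append({"to": current, "body": "".join(fragments)})      # flush final group
    return emails

digests = build_digests([
    {"recipient": "a@contoso.com", "name": "Form 1"},
    {"recipient": "a@contoso.com", "name": "Form 2"},
    {"recipient": "b@contoso.com", "name": "Form 3"},
])
```

This is also why the FetchXML sorts by the recipient attribute and why the loop runs with concurrency 1: the group break only works on sorted input processed in order.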

FetchXML Queries

Each flow’s FetchXML is fully specified in the spec: table, attributes, filter conditions, linked entities, sort order. Generic skeleton:

<fetch version="1.0" output-format="xml-platform"
       mapping="logical" distinct="false">
  <entity name="app_evaluation">
    <attribute name="app_evaluationid" />
    <attribute name="app_name" />
    <attribute name="app_duedate" />
    <attribute name="app_assignee" />
    <filter type="and">
      <condition attribute="app_status"
                 operator="eq"
                 value="{ACTIVE_STATUS_GUID_PLACEHOLDER}" />
      <condition attribute="app_assignee"
                 operator="not-null" />
      <condition attribute="modifiedon"
                 operator="last-x-hours" value="24" />
    </filter>
    <order attribute="app_assignee" />
  </entity>
</fetch>

Status GUIDs and other environment-specific values are placeholders in the spec and are resolved from environment variables at import time.
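
Placeholder resolution is mechanical string substitution, which is exactly why it belongs in a script rather than a human's memory. A minimal sketch, assuming the `{NAME_PLACEHOLDER}` convention shown above and a dict of environment-variable values (the function name is illustrative):

```python
import re

def resolve_placeholders(fetchxml: str, env: dict) -> str:
    """Replace {SOMETHING_PLACEHOLDER} tokens with environment-specific
    values; fail loudly if the spec references a value the target
    environment does not define."""
    def lookup(match: re.Match) -> str:
        key = match.group(1)
        if key not in env:
            raise KeyError(f"Unresolved placeholder: {key}")
        return env[key]
    return re.sub(r"\{([A-Z0-9_]+_PLACEHOLDER)\}", lookup, fetchxml)

query = '<condition attribute="app_status" operator="eq" value="{ACTIVE_STATUS_GUID_PLACEHOLDER}" />'
resolved = resolve_placeholders(
    query,
    {"ACTIVE_STATUS_GUID_PLACEHOLDER": "00000000-0000-0000-0000-000000000000"},  # dummy GUID
)
```

Failing loudly on an unresolved placeholder matters: a silently missing GUID produces a flow that imports cleanly and returns zero rows forever.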

Email Templates and Environment Variables

Email subject and body patterns are documented with merge field placeholders so the spec describes the email contract, not just the trigger. Environment variables are listed by schema name and purpose:

| Schema Name | Purpose |
| --- | --- |
| app_EnvironmentURL | Base URL for deep links in emails |
| app_NotificationsMailbox | Shared mailbox “send from” address |
| app_ModelDrivenAppID | Model-driven app ID for deep link construction |

This pattern is consistent with Microsoft’s guidance on environment variables in solutions, where references that vary by environment are externalized rather than hardcoded.

Specs as AI Instructions

Here is the insight that changes the economics: when a spec is precise enough, it stops being documentation. It becomes a prompt.

A spec containing exact table names with schema prefixes, exact option-set values, complete FetchXML queries, email subject patterns with merge fields, and a canonical action sequence is precise enough for an AI coding assistant to generate correct flow JSON without clarifying questions. The AI is not designing. It is translating.

That distinction matters. Design lives in the spec. Translation is mechanical, and mechanical work is what AI does well.

The methodology supports parallel generation. Group flows that share a table, a trigger class, or a recipient pattern, and dispatch each batch to a separate AI agent. Every agent reads the same spec. Every agent emits structurally identical JSON. The action sequence pattern, the FetchXML shape, the email envelope conventions are consistent across batches because the spec defined them before any generation began.
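
Batch grouping can itself be mechanical. A sketch that groups inventory rows by table and trigger class so each batch can be handed to a separate agent (the column names match the inventory table; the scheduled/realtime split is one reasonable grouping, not the only one):

```python
from collections import defaultdict

def batch_flows(flows):
    """Group flows sharing a table and a trigger class into one batch each."""
    batches = defaultdict(list)
    for flow in flows:
        trigger_class = ("scheduled" if flow["Trigger"].startswith("Recurrence")
                         else "realtime")
        batches[(flow["Table"], trigger_class)].append(flow["Tag"])
    return dict(batches)

flows = [
    {"Tag": "NTF-01", "Table": "app_evaluation",  "Trigger": "Recurrence (8:00 AM ET)"},
    {"Tag": "NTF-02", "Table": "app_signingstep", "Trigger": "Recurrence (8:00 AM ET)"},
    {"Tag": "NTF-05", "Table": "app_signingstep", "Trigger": "Step status changed"},
]
batches = batch_flows(flows)   # one (table, trigger_class) key per batch
```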

Without the spec, AI-assisted Power Automate development is a conversation: explain the same patterns in every chat thread, correct the same mistakes, get inconsistent results. With the spec, AI-assisted development is a build pipeline: input spec, output flow JSON, repeatable.

The spec is the interface contract between the architect (who makes the design decisions) and the AI agents (which execute those decisions at scale).

The Pipeline Shape

End-to-end, spec-first looks like this:

spec (markdown in git)
  -> AI generation (parallel batches)
    -> flow JSON definitions
      -> solution ZIP (packed alongside other components)
        -> import to target environment
          -> verify against spec

Each stage is reviewable. The spec is reviewable in a pull request. The generated JSON is reviewable as a diff against the previous spec output. The solution import is reviewable as part of the deployment pipeline. Verification compares runtime behavior against the spec, not against memory.

For solution packaging, see Building Solution ZIPs. For the source-control practices that wrap the JSON, see Flow Versioning and Source Control.
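
The verification stage can be scripted too. A hedged sketch that cross-checks the spec inventory against the generated flow files; the tag-in-filename convention matches the `flows/` layout shown later, but is an assumption, not a fixed rule:

```python
import pathlib

def verify_against_spec(inventory, flows_dir):
    """Return the spec tags that have no matching flow JSON file on disk."""
    stems = [path.stem for path in pathlib.Path(flows_dir).glob("*.json")]
    return {row["Tag"] for row in inventory
            if not any(row["Tag"] in stem for stem in stems)}

# e.g. verify_against_spec([{"Tag": "NTF-01"}, {"Tag": "NTF-02"}], "flows/")
# returns the set of tags whose flows were never generated
```

A non-empty result fails the build: the spec said a flow should exist and no agent produced it.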

Living Documentation That Cannot Go Stale

The governance repo described in The Power Platform Governance Repo has a docs/ folder. That is where flow specs live, versioned, diffable, reviewable.

project-root/
  docs/
    notification-requirements.md     <- what to build and why
    power-automate-flows-spec.md     <- every flow cataloged
  flows/
    App-NTF-01-FormAssigned.json
    App-NTF-02-ReadyForSignature.json
    ...

When the spec changes, the commit diff shows exactly what changed. When flow JSON changes, the corresponding spec update appears in the same pull request. Reviewers see both the “what changed” (spec) and the “how it changed” (code) in one review.

Microsoft’s ALM basics call source control the “single source of truth” for solutions. Spec-first extends that practice to specifications themselves: specs in git, not specs in SharePoint.

The Spec-Update-Then-Code Rule

This is the discipline that makes everything else work. A strict rule: update the spec before changing code. Always.

When a new requirement arrives, the process is:

  1. Read the requirement. Capture it in a tracked work item with explicit answers to design questions.
  2. Reconcile against the existing spec. Compare the new answers to the current spec and list every conflict.
  3. Update the spec in git. Resolve every conflict, document each decision, and update the flow inventory table.
  4. Then write code. Only after the spec reflects the full, reconciled truth.

The reconciliation step is where spec-first development pays for itself. Conflicts surface against text, not against deployed flows. A typical reconciliation table:

| Conflict | Original Spec | New Decision | Resolution |
| --- | --- | --- | --- |
| Digest content | List all open items | New items plus summary count | Update spec to new decision |
| Past-due cadence | Daily to assignee | Weekly to assignee plus daily to supervisor | Add supervisor escalation flow |
| Rejection scope | Assignee only | Assignee plus prior signers | Add additional rejection flow |
| Completion notice | Not specified | Yes, with signer summary | Add completion flow |
| Self-action wording | Generic | Distinct copy for self vs others | Split into two flow variants |

Under designer-first, each of these rows would have surfaced mid-build or, worse, in production. In the spec, each one is a paragraph and a table edit. Spec conflicts are cheap. Code conflicts are expensive.

The spec is also intentionally a living document. As the build progresses, ambiguities surface, edge cases appear, and the spec is refined. That iteration is captured in commits, not in side conversations.

Designer-First vs Spec-First

| Dimension | Designer-First | Spec-First |
| --- | --- | --- |
| Starting point | Open designer, drag actions | Open spec, catalog every flow |
| Decision record | Decisions live in undo history (lost on close) | Decisions recorded in versioned markdown |
| Conflict detection | Conflicts surface during testing or production | Conflicts surface during spec reconciliation |
| AI compatibility | AI infers intent from conversation | AI reads structured spec, generates precise JSON |
| Parallel development | One person, one flow at a time | Multiple agents execute batches simultaneously |
| Onboarding | New dev reverse-engineers intent from canvas | New dev reads the spec |
| Change tracking | Modified-by timestamp, no diff | Git diff shows exact changes in the same PR |
| Documentation debt | Written retroactively, if ever | Documentation is the starting artifact |

Where to Start

Adopting spec-first across an entire portfolio is not the goal. Adopting it on one project is.

Pick an active project with three or more flows. Create a docs/flow-spec.md file in the repo. Build the flow inventory table: tag, display name, trigger, table, recipient, subject, priority. One row per flow. Commit it.

The next time a requirement comes in, update the spec first. Then build. Then verify the code matches the spec. That is the whole loop.

Microsoft’s adoption guidance recommends standardizing “how your workload team writes, reviews, and documents code by using naming conventions and a style guide.” A flow spec is that style guide made concrete: not a PDF on a wiki, a living document in git that every pull request touches.

The whole point is that the spec exists before AI gets involved. No spec, no parallel generation. No precision, no correct output. The spec is not overhead; it is the prerequisite.

For the architectural patterns the spec encodes, see Tag-Based Flow Architecture and Notification Architecture.


Spec-Driven Power Platform Series

This article is part of a series on building Power Automate solutions with specs, governance, and AI:

  1. Tag-Based Flow Architecture, how prefix conventions make flow inventories manageable
  2. Spec-First Development, why specs should exist before the designer opens
  3. Notification Architecture, notifications that cannot break business logic
  4. FetchXML in Power Automate, when OData $filter is not enough
  5. Building Solution ZIPs, the undocumented packaging guide
  6. What AI Gets Wrong, and why human correction is the point

AZ365.ai, Azure and AI insights for architects building on Microsoft. Follow Alex on LinkedIn for architecture deep dives.
