
What AI Gets Wrong on Power Platform: Common Anti-Patterns and How to Correct Them

AI proposes the most common Power Automate patterns by default, which is wrong for many real solutions. Here are the anti-patterns to watch for and how to correct them via spec discipline.

Alex Pechenizkiy 10 min read

AI is good at executing well-defined patterns. It stumbles on Power Platform specifics that are not well documented: the gap between designer-format and export-format JSON, action-versus-trigger type confusion, connection-reference shape, deterministic GUID seeding for idempotent imports, and naming conventions that scale past three flows. These are the places AI tends to produce wrong defaults, and the places where a clear spec plus deliberate correction pays back many times over.

AI-proposed embedded notification architecture compared with corrected separated notification flows

The Anti-Patterns

Anti-Pattern 1: Real-Time Triggers for Everything

AI tends to propose real-time triggers for every notification flow. On the surface it looks reasonable. Real-time is the most common Power Automate trigger pattern. Someone creates a record, a flow fires, an email goes out.

It breaks under bulk operations. When a setup or onboarding routine generates dozens or hundreds of records simultaneously, each app_* create fires every downstream notification. A supervisor responsible for fifteen direct reports receives fifteen separate emails within seconds. Not a digest. Not a summary. An inbox flood.

The corrected pattern for most notification flows is a daily digest. One scheduled run per recipient per day, at a fixed time, listing all new items since the last run. A single email with a table of assignments instead of fifteen emails with deep links.

The exception is rare, urgent, deliberate human actions. Rejection of a submission, for example, is one at a time, immediate, and always intentional. Those notifications stay real-time. Most of the rest move to schedule.

Correction prompt template: “Default to a daily digest at a fixed time per recipient. Use real-time only for events that are rare, urgent, and triggered by a deliberate human decision. State the trigger type and reasoning in the spec for each flow.”
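Pinning the corrected default as a concrete shape in the spec helps the AI reproduce it. A minimal sketch of a once-daily Recurrence trigger in workflow definition language, expressed as a Python dict (the hour, minute, and time zone values are illustrative assumptions, not values from any real flow):

```python
# Sketch of a scheduled digest trigger: one run per recipient per day at a
# fixed time. Hour/time-zone values below are placeholders to adjust per spec.
daily_digest_trigger = {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
            "frequency": "Day",                              # one run per day...
            "interval": 1,
            "schedule": {"hours": ["7"], "minutes": [0]},    # ...at 07:00
            "timeZone": "W. Europe Standard Time",           # illustrative
        },
    }
}
```

Dropping a shape like this into the spec gives the AI a target to copy instead of a pattern to infer.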

Anti-Pattern 2: Embedding Email Actions in Business Flows

AI tends to propose adding email steps directly into the business logic flow. The signing-step advancement flow already knows when a step becomes “Awaiting.” Why not add an email action right there?

Because a notification failure must never break the business chain. If the email connector throws a throttling error, the shared mailbox is temporarily unavailable, or the HTML template has a rendering issue, none of that should prevent the signing step from advancing. The business process is the primary concern. Notifications are secondary. They need to be deployable, disableable, and testable independently.

The corrected pattern is strict separation. Notification flows share zero logic with the business flows they observe. Notifications can be disabled in production without affecting business chains. Notifications can be redeployed without touching business logic. Notification development can be handed to a different team entirely.

Correction prompt template: “Notification flows must be separate from business flows. They share no logic and no dependencies. A notification failure must never block a business action. Spec each notification flow as its own deployable unit.”

Anti-Pattern 3: Generic Naming

AI tends to propose descriptive but generic names. “Send Email When Form Assigned.” “Notify Signer of Ready Step.” That works for three flows. It falls apart at fourteen. It completely breaks when planning for future channels.

The corrected pattern is a structured prefix convention. Channel type prefix, sequence number, human-readable suffix. Something like <CATEGORY>-<CHANNEL>-<NN> - <Description>. When new channels (in-app, Teams) ship later, they slot into the same architecture without renaming or reorganization. The tag-based architecture covers this in depth.

Correction prompt template: “Use a <CATEGORY>-<CHANNEL>-<NN> - <Description> naming convention for all flows. Reserve channel codes for future channels even if not implemented today.”
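A convention only scales if it is enforced. A small validation sketch for the <CATEGORY>-<CHANNEL>-<NN> - <Description> pattern (the two-to-four-letter code lengths and the example names are assumptions; adjust the pattern to whatever codes your spec reserves):

```python
import re

# Hypothetical enforcement of <CATEGORY>-<CHANNEL>-<NN> - <Description>.
# Code lengths (2-4 uppercase letters) are an assumption, not from the article.
FLOW_NAME = re.compile(
    r"^(?P<category>[A-Z]{2,4})-(?P<channel>[A-Z]{2,4})-(?P<nn>\d{2}) - (?P<description>\S.*)$"
)

def is_valid_flow_name(name: str) -> bool:
    return FLOW_NAME.match(name) is not None

print(is_valid_flow_name("NTF-EML-01 - Daily digest for signers"))  # → True
print(is_valid_flow_name("Send Email When Form Assigned"))          # → False
```

Running a check like this in CI catches the generic-name regression before it reaches a solution export.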

Anti-Pattern 4: Designer Format vs Export Format Confusion

AI tends to produce flow JSON in the format Power Automate’s designer accepts when you copy-paste a flow definition. That format is not the same as what a Dataverse solution export expects. The export format wraps the definition in a properties object, simplifies connection-reference shape, and adds a schemaVersion field. Microsoft’s documentation does not spell this out cleanly. AI trained primarily on designer-format snippets will reach for the wrong shape.

The corrected pattern is to explicitly tell the AI which format to produce, and to keep a known-good exported flow in the spec as a reference template. Once the AI sees the export-format target, it can transform consistently across many files.

Correction prompt template: “Produce flow JSON in solution export format, not designer format. Wrap the definition in a properties object. Use connection references in the simplified shape used by exported solutions. Include schemaVersion. Use this exported flow as the structural template: <paste known-good flow>.”
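The wrapping itself is mechanical once the target shape is known. A sketch of the designer-to-export transformation, assuming the general structure described above; the exact field set varies by solution version, so diff the result against a known-good export before trusting it:

```python
# Sketch: wrap a designer-format definition in the export-format envelope.
# Field names follow the general shape of exported Dataverse solutions;
# verify against a known-good export from your own environment.
def to_export_format(designer_definition: dict,
                     connection_references: dict,
                     schema_version: str = "1.0.0.0") -> dict:
    return {
        "schemaVersion": schema_version,           # export format adds this
        "properties": {                            # ...and this wrapper
            "connectionReferences": connection_references,
            "definition": designer_definition,
        },
    }

exported = to_export_format(
    {"triggers": {}, "actions": {}},               # stand-in definition
    {"shared_commondataserviceforapps": {          # simplified ref shape
        "connection": {"connectionReferenceLogicalName": "new_dataverse_ref"}}},
)
```

The point is not this exact function but that the transformation is deterministic, which is why AI applies it consistently across many files once it has a template.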

Anti-Pattern 5: Action vs Trigger Type Confusion

AI sometimes mixes up trigger and action operation types in connector references, particularly for Dataverse. A “When a row is added” trigger and an “Add a new row” action share connector branding but differ in operationId and metadata. AI that has seen many designer snippets can blur the distinction, producing JSON that imports but fails at runtime with cryptic schema errors.

The corrected pattern is to enumerate the exact operationId, apiId, and connection-reference style in the spec for each trigger and action you use. Once enumerated, AI uses them consistently.

Correction prompt template: “For each trigger and action, specify the exact operationId and apiId from the connector. Do not infer them from action names. Trigger operations and action operations are distinct even when their UI labels overlap.”
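Enumerating the operations can be as literal as a lookup table in the spec. A sketch with a kind guard that fails loudly when a trigger label is used where an action belongs; the operationId values shown are examples only, to be copied from a known-good exported flow rather than trusted:

```python
# Illustrative spec table pinning connector operations so nothing is
# inferred from UI labels. operationId values are examples; copy yours
# from a known-good exported flow.
DATAVERSE_API = "/providers/Microsoft.PowerApps/apis/shared_commondataserviceforapps"

OPERATIONS = {
    "When a row is added": {"kind": "trigger",
                            "operationId": "SubscribeWebhookTrigger",  # example value
                            "apiId": DATAVERSE_API},
    "Add a new row":       {"kind": "action",
                            "operationId": "CreateRecord",             # example value
                            "apiId": DATAVERSE_API},
}

def operation_for(label: str, expected_kind: str) -> str:
    """Fail loudly if a trigger label is used where an action belongs."""
    op = OPERATIONS[label]
    if op["kind"] != expected_kind:
        raise ValueError(f"{label!r} is a {op['kind']}, not {expected_kind!r}")
    return op["operationId"]
```

A table like this turns "imports but fails at runtime" into "fails at generation time", which is where you want the failure.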

Anti-Pattern 6: Non-Deterministic GUIDs

AI by default generates random GUIDs for workflow IDs and component IDs. That is fine for one-shot generation. It is wrong for any pipeline where you re-import a solution and expect updates rather than duplicates. Random GUIDs cause Dataverse to create new flow records on every import instead of updating existing ones.

The corrected pattern is deterministic GUID seeding. UUID v5 with a stable namespace and a stable input string (such as <solution>:<flow-name>) produces the same GUID on every run. Re-import updates in place. The solution ZIP packaging guide covers the mechanics.

Correction prompt template: “Generate workflow IDs with UUID v5 using a project namespace constant and <solution>:<flow-name> as the input. Never use UUID v4 or random GUIDs for components that need to update in place across re-imports.”
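The seeding mechanism fits in a few lines. A sketch using Python's standard-library uuid.uuid5; the namespace domain string is a hypothetical project constant, and the point is that it is picked once and never changed:

```python
import uuid

# Deterministic GUID seeding: UUID v5 over a stable namespace and a stable
# input string. "flows.example.com" is a hypothetical project constant.
PROJECT_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "flows.example.com")

def workflow_id(solution: str, flow_name: str) -> str:
    # Same solution + flow name -> same GUID on every machine, every run.
    return str(uuid.uuid5(PROJECT_NAMESPACE, f"{solution}:{flow_name}"))

print(workflow_id("NotifSolution", "NTF-EML-01"))
# Re-running the build yields the identical ID, so a re-import updates
# the existing flow record instead of creating a duplicate.
```

UUID v5 is a SHA-1 hash of namespace plus name, so determinism is guaranteed by the algorithm, not by any build-machine state.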

Anti-Pattern 7: Connection Reference Shape Drift

AI often produces inconsistent connection-reference shapes across files in the same solution. One flow uses an inline connection object, another uses a logical-name reference, a third hardcodes a connection ID. The solution imports but connections fail to bind in the target environment, requiring manual fix-up after every deploy.

The corrected pattern is one connection-reference shape across the entire solution, defined once in the spec and applied to every flow. Connection references should resolve by logical name, not by ID, so the same package can move between environments.

Correction prompt template: “All flows in this solution use connection references by logical name. Define each connection reference once in the spec. Every flow that uses a given connector references the same logical name. No inline connections, no hardcoded IDs.”
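Shape drift is easy to lint for once the canonical shape is fixed. A minimal checking sketch over export-format flow JSON; the key names follow the exported-solution shape described above, and the sample flows are illustrative:

```python
# Sketch of a lint pass: flag any connection reference that does not
# resolve by logical name. Key names follow the exported-solution shape;
# verify against your own exports.
def connection_ref_issues(flows: dict) -> list:
    issues = []
    for flow_name, flow in flows.items():
        refs = flow.get("properties", {}).get("connectionReferences", {})
        for ref_key, ref in refs.items():
            conn = ref.get("connection", {})
            if "connectionReferenceLogicalName" not in conn:
                issues.append(f"{flow_name}: {ref_key} not bound by logical name")
    return issues

flows = {
    "NTF-EML-01": {"properties": {"connectionReferences": {
        "shared_office365": {"connection": {"connectionReferenceLogicalName": "new_o365_ref"}}}}},
    "NTF-EML-02": {"properties": {"connectionReferences": {
        "shared_office365": {"connection": {"connectionId": "hardcoded-id"}}}}},  # drifted shape
}
print(connection_ref_issues(flows))  # flags only NTF-EML-02
```

Run against every flow in the solution before packaging, this catches the drift that otherwise surfaces as manual fix-up after each deploy.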

Why Does AI Get Power Platform Architecture Wrong?

This is not an AI quality problem. It is a training data distribution problem. AI optimizes for the most common pattern. The most common Power Automate pattern is real-time triggers. Most flows fire on record creation or update. Most notification examples in Microsoft Learn show an email action embedded in the same flow that does the work. Most tutorials use descriptive names without a tagging system. Most snippets are designer-format, not export-format.

AI gives the statistically most likely answer. That answer is wrong when the domain has constraints the most common pattern does not account for:

  • Bulk operations. Most Power Automate examples create records one at a time. Real solutions often create hundreds at once during an onboarding or cycle-open routine.
  • Failure isolation. Most tutorials treat notifications as part of the business flow. Many real solutions require notifications to be completely isolated from business logic so that throttling and template errors cannot block business advancement.
  • Future channels. Most examples solve for one channel today. Real solutions plan for in-app, Teams, and email through the same naming and architecture.
  • Idempotent deploys. Most examples assume one-shot creation. Real solutions deploy repeatedly and must update in place.

AI has no way to know these constraints from a generic prompt. Microsoft’s own Copilot documentation is explicit: “All changes done by copilot should be reviewed in the designer.” That is not a disclaimer. It is an accurate description of how AI-assisted development works. The output needs human review, every time.

The Correction Pattern

The interesting behavior is not that AI gets these wrong. It is what happens after a correction.

When you correct one anti-pattern, the way you frame the correction matters. “Change real-time to scheduled” is not as effective as “Default to scheduled because real-time triggers flood inboxes during bulk creation. The exception is rare urgent human actions like rejection.” The reasoning is what makes the correction generalize.

Once a correction is given with reasoning, the AI tends to apply that reasoning to every subsequent flow in the same session, and (if the correction is captured in the spec) every subsequent session. Correct once, the correction propagates.

This is fundamentally different from manual fix-up. A junior developer who makes a mistake on flow one might make it again on flow seven and forget by flow twelve. Human consistency degrades across repetitive work. AI consistency, once corrected, holds.

The pattern is simple. Correct with reasoning, capture the reasoning in the spec, watch the AI propagate it. Not “do it differently.” Rather, “do it differently because X.” The “because” is what makes the correction stick.

Where AI Excels

After the anti-patterns are corrected and the spec is updated, AI executes the corrected patterns at scale with high consistency. The work that benefits most:

Consistent flow definitions across many flows:

  • Variable initialization chains at the top level (a Power Automate platform constraint).
  • FetchXML queries instead of OData $filter for temporal conditions and linked-entity joins.
  • Sequential Apply-to-each loops with concurrency set to one for predictable email body construction.
  • SharedMailboxSendEmailV2 from a shared mailbox, not SendEmailV2 from the flow owner.
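The FetchXML point is worth a concrete shape, since it is the kind of structure AI reproduces consistently once shown. A sketch of a digest-style temporal query with a linked-entity join; the entity and attribute names are hypothetical placeholders, while last-x-hours is a real FetchXML operator:

```python
import xml.etree.ElementTree as ET

# Hypothetical digest query: rows created in the last day, joined to the
# assigned user's email. Entity/attribute names are illustrative, not from
# any real schema; "last-x-hours" is a standard FetchXML operator.
fetch_xml = """
<fetch>
  <entity name="app_assignment">
    <attribute name="app_name" />
    <attribute name="createdon" />
    <filter>
      <condition attribute="createdon" operator="last-x-hours" value="24" />
      <condition attribute="statuscode" operator="eq" value="1" />
    </filter>
    <link-entity name="systemuser" from="systemuserid" to="app_assignedto">
      <attribute name="internalemailaddress" />
    </link-entity>
  </entity>
</fetch>
"""

root = ET.fromstring(fetch_xml)          # sanity-check well-formedness
print(root.find("entity").get("name"))   # → app_assignment
```

Temporal operators and linked-entity joins like these are exactly where OData $filter runs out of road, which is why the spec standardizes on FetchXML for them.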

Format conversion. Designer-format to export-format transformation across many files, once a known-good template is in the spec. The kind of repetitive structural rewrite humans make small mistakes on by file three.

Deterministic GUID generation. UUID v5 with the project namespace, ensuring repeatable builds. Import the solution twice and it updates existing flows rather than creating duplicates.

XML manifest updates. customizations.xml workflow entries. solution.xml version bumps and RootComponents. The kind of tedious, error-prone editing that humans get wrong through fatigue.

ZIP packaging. Node.js archiver with forward-slash path separators. PowerShell’s Compress-Archive creates backslashes, which causes silent import failures in Dataverse. The full packaging pipeline is documented in Building Dataverse Solution ZIPs Programmatically.
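The separator constraint is easy to guard in any language. A sketch in Python, whose zipfile module writes archive entry names verbatim, so validating the input is enough; the file contents below are stand-in placeholders:

```python
import io
import zipfile

# Sketch of the packaging constraint: archive entry names must use forward
# slashes. Python's zipfile writes the arcname verbatim, so guarding the
# input is sufficient. Contents are stand-in placeholders.
def build_solution_zip(files: dict) -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        for arcname, content in files.items():
            if "\\" in arcname:
                raise ValueError(f"backslash in {arcname!r}: risks silent import failure")
            z.writestr(arcname, content)
    return buf.getvalue()

pkg = build_solution_zip({
    "solution.xml": "<ImportExportXml />",
    "customizations.xml": "<ImportExportXml />",
    "Workflows/NTF-EML-01.json": "{}",
})
```

The same guard belongs in a Node.js archiver pipeline; the failure mode is identical regardless of which tool writes the ZIP.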

This is the work AI transforms. Not the architectural decisions, but the execution of those decisions at scale, with consistent shape across a volume of repetitive work.

Parallel Agent Execution

A structured naming convention does more than organize flows for humans. It enables batching for AI agents. Multiple agents can work in parallel, each responsible for a functional group of flows (author-facing scheduled, signer-facing scheduled, event-driven and completion, escalation and broadcast). Each agent reads the same spec. Each produces flows following the same patterns. The output is consistent not because the agents coordinate, but because they all read the same document.

This is the scalability argument for spec-first development. Without a spec, each agent makes different assumptions about FetchXML structure, variable naming, email formatting, and error handling. With a spec, independent agents produce output that looks like one person wrote it.

The Honest Assessment

The conditions under which AI-assisted Power Platform development pays off are specific. They are worth stating plainly.

Correction time is front-loaded. The first portion of a session goes to fixing wrong architectural defaults. If the AI defaults are accepted at face value, the resulting solution ships with email floods, fragile coupling, brittle imports, and naming that does not scale. An inexperienced developer who skips correction ships a broken system faster than they could ship a manual one.

The spec is the bridge. AI cannot learn a domain in one chat session. It can read a spec. The documentation-first approach is what enables parallel agent execution and what makes corrections stick across sessions. Spec-writing time is not overhead. It is the interface between human judgment and AI execution.

Consistency is the value. Writing one flow takes roughly the same time with or without AI. Writing many flows with identical patterns, consistent naming, matching FetchXML structure, and uniform error handling is where AI provides compounding returns. The acceleration is real, but its magnitude depends on the team’s spec-writing discipline. Vague specs produce inconsistent output regardless of which tool generates it.

The developer must know what right looks like. AI proposes real-time triggers because that is the most common pattern. Knowing it is wrong for a bulk-creation domain requires experience the AI does not have. The acceleration only materializes when the human can catch wrong defaults.

Neither party can do the other’s job efficiently. The human cannot match AI’s structural consistency across many files in one sitting. The AI cannot know domain constraints from a generic prompt. Both roles are essential. That is the honest assessment.

Once AI generates the flows, quality gates still belong before anything reaches production. The AI-powered flow review patterns catch structural issues that even well-corrected AI output can introduce.


Spec-Driven Power Platform Series

This article is part of a series on building Power Automate solutions with specs, governance, and AI:

  1. Tag-Based Flow Architecture - How structured prefixes make many flows manageable
  2. Spec-First Development - Why specs should exist before the designer opens
  3. Notification Architecture - Notifications that cannot break business logic
  4. FetchXML in Power Automate - When OData $filter is not enough
  5. Building Solution ZIPs - The undocumented packaging guide
  6. What AI Gets Wrong (this article) - Anti-patterns and how to correct them
  7. AI-Powered Flow Review - Quality gates before production

AZ365.ai - Azure and AI insights for architects building on Microsoft. Follow Alex on LinkedIn for architecture deep dives.
