Build and test your first agent: Build Agentforce AI Agent Part 4

This is the fifth blog in the series “Design Your First Salesforce AI Agent with Agentforce: From Sales Persona to Automation”.

💡 Why This Matters
This is where strategy becomes working automation. All the effort you’ve put into use case scoping, topic design, and action planning now turns into a real, functioning Salesforce-native AI agent. A structured build process ensures your agent is reliable, scalable, and aligned with Salesforce standards. It also reduces the risk of brittle logic, rework, or drift from business value over time.

🔑 What You’ll Learn
In this post, you’ll learn how to:
Organize scoped use cases into modular agent topics
Define agent behaviors, data requirements, memory, and fallbacks
Implement actions using Flow, Apex, or prompt templates
Configure your agent in Agentforce and validate it against real-world examples
  1. Identify Standard Actions & Build Actions in Flow/Apex, as applicable
    Create and test each Flow, Apex method, or prompt needed to support your use cases. Each should be reliable, isolated, and handle edge cases.
  2. Create Agentforce Assets from Actions
    Wrap each core action as an Agentforce asset with clearly defined inputs, outputs, and a description. The reasoning engine uses this metadata to decide when and how to call it.
  3. Configure the Agent
    Write scoped instructions for each topic that define how the agent should behave, which actions to call, and how to handle guardrails and memory.
  4. Test the Agent
    Use real-world scenarios and sample utterances to validate behavior, check fallback coverage, and ensure flow continuity across turns.

Now, let’s apply these principles to build your first agent.

🔹 Identify Standard Actions & Build Actions in Flow/Apex, as applicable

💡Why This Matters
This step is where you build the core building blocks your agent will eventually use. Each action represents a discrete capability — like pulling meeting data, generating summaries, or creating records — and needs to be implemented before the agent can call it. This work happens before any agent configuration: you're laying the technical groundwork so that, when it's time to wire up the agent, every behavior has a reliable, callable action behind it.

🔑What You’ll Learn
How to design, build, and test agent-ready actions using Flow, Apex, or prompt logic
How to scope actions around outcomes, not rigid logic paths
What makes an action modular, testable, and reusable
Why it’s important to decouple action development from agent logic until each behavior is validated

The first step in the build is defining the set of actions your agent will call to complete specific tasks — like retrieving CRM data, generating a meeting summary, logging notes, or sending a follow-up. These can be built using Apex, Flows, or prompts, depending on what the action needs to do.

When designing agent actions, remember: this isn’t traditional software or app development.

LLMs are smart — they can infer and structure meaning from language without requiring rigid rules or step-by-step logic.
For example, if you want to extract specific fields from a user’s input, you don’t need to write parsing logic — the LLM can do that directly.
So when scoping a capability, think in terms of what outcome you want, not every specific step to get there. Let the model handle the nuance.
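To make outcome-based scoping concrete, here is a minimal, illustrative Python sketch (not actual Agentforce code; the names `MeetingPrepRequest` and `prepare_meeting_brief` are hypothetical). The point is that the action declares the typed inputs it needs, and the reasoning engine, not hand-written parsing logic, maps the user's phrasing onto them.

```python
# Illustrative sketch only: in Agentforce the action itself would be a Flow,
# Apex method, or prompt template. This models the outcome-oriented contract.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MeetingPrepRequest:
    """The fields the action needs, regardless of how the user phrased it."""
    account_name: Optional[str] = None   # e.g. "Acme Partners"
    meeting_index: Optional[int] = None  # e.g. 2 for "the second meeting"

def prepare_meeting_brief(req: MeetingPrepRequest) -> str:
    # The action defines the outcome; the LLM fills MeetingPrepRequest from
    # language like "help me prep for my 2 PM Acme meeting" -- no regex needed.
    if req.account_name is None and req.meeting_index is None:
        return "Which account or meeting should I prepare for?"
    target = req.account_name or f"meeting #{req.meeting_index}"
    return f"Prep brief for {target}"
```

Notice there is no step-by-step parsing path: the capability is scoped purely by the outcome (a prep brief for an identified meeting), with one clarifying fallback when nothing identifiable was provided.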

Who builds the actions?

This part is typically owned by a Salesforce admin, platform engineer, or prompt engineer — anyone responsible for building Flows, Apex methods, or prompt-based logic. Each action should be implemented and tested in isolation, with clear inputs and outputs, so it can be wired cleanly into the agent’s topic flow.

What makes an action reliable?

  • ✅ Built and tested independently before LLM integration
  • ✅ Deterministic output — success/failure, clear return values
  • ✅ Lightweight and reusable — callable across multiple topics
  • ✅ Handles edge cases with structured fallback logic

The actions below are organized by topic and form the foundation of the agent’s capabilities. Each one supports a specific task the agent needs to perform.
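The reliability checklist above can be sketched as follows. This is an illustrative Python model, not Flow or Apex; the function name `log_meeting_notes` and the result shape are assumptions. The pattern to copy is the deterministic, structured return value: a success flag plus a clear message, including for write operations, with edge cases handled inside the action rather than left to the agent.

```python
# Illustrative sketch of a "reliable action": deterministic output,
# structured fallbacks for edge cases, and a clear success message for writes.
from typing import Optional

def log_meeting_notes(event_id: Optional[str], notes: str) -> dict:
    # Edge case: nothing to log -- return a structured failure, not an exception.
    if not notes.strip():
        return {"success": False, "message": "No notes were provided."}
    # Edge case: no matching record to write to.
    if event_id is None:
        return {"success": False, "message": "No matching event was found."}
    # (The real action would update the record here.) Write actions must
    # return a confirmation so the agent can report completion to the user.
    return {"success": True, "message": f"Notes added to event {event_id}."}
```

Because the output is the same shape in every branch, the action is testable in isolation and reusable across topics.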

Before building a new action, check the Agentforce Assets tab to see if a standard action already exists for your use case. If a standard action already exists, you don’t need to build or configure anything here.

Salesforce adds new actions with each release, so you may be able to reuse what’s already available and save time during the build. Refer to the help documentation for the list of out-of-the-box actions: https://help.salesforce.com/s/articleView?id=ai.copilot_actions_ref.htm&type=5

Note:

  • Some of the actions used in this build are custom, as they weren’t available out of the box at the time of implementation. For example, the ability to query Events via API is not currently available out of the box. As a result, any actions related to querying, updating, or creating Events in this blog are handled using custom code. That said, the Sales Cloud product team is actively working on pre-built agents that will support these capabilities natively.
  • Every action must have an output. For actions that perform write operations (like creating a record or sending an email), the output should be a clear success message confirming that the operation was completed.

🗓️ Topic 1: Meeting Preparation

🗓️ Topic 2: Post-Meeting Follow-Up

🗓️ Topic 3: CRM Record Management & Pipeline Support

🔹Create Agentforce Assets from Actions

💡Why This Matters
This step connects your backend actions to the agent’s reasoning engine. By registering each action as an Agentforce asset, you’re telling the agent what the action does, what it needs, and what it returns. This is what enables the agent to reason over available options, choose the right behavior, and pass the right data. Without this step, your actions exist — but the agent has no way to access or use them.

🔑What You’ll Learn
How to register custom actions as Agentforce assets
How to define action inputs, outputs, and descriptions for LLM compatibility
How to specify which outputs should be shown in the conversation
How the reasoning engine uses these asset definitions to call actions and manage memory

Once your actions are built and tested in the core platform using the traditional methods for testing Flows, Apex, and prompt templates, the next step is to register them as Agentforce assets. This step is critical — it’s where you define the inputs, outputs, and descriptions that the reasoning engine relies on to understand what each action does. If the action returns a value that should be stored in context (like a list of events or a generated summary), you’ll specify that here too. A well-defined asset ensures the agent can select and execute the right behavior at the right time.

Note:

  • This step needs to be done only for custom actions.
  • At this point, you aren’t yet specifying which actions the agent will have access to; you’re simply creating the Agentforce actions as assets.
  • For any output you want the agent to display after execution, make sure to check the “Show in Conversation” box for at least one output. This not only surfaces the result to the user but also allows the agent to preserve the output in context for downstream reasoning and follow-up actions.
  • The reasoning engine can automatically format user input and conversation history into structured values that match the expected inputs of your actions.
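To picture what an asset registration carries, here is an illustrative sketch of the metadata as a plain dictionary. The field names (`inputs`, `outputs`, `show_in_conversation`) are simplified stand-ins, not the exact platform schema; the real configuration happens in the Agentforce Assets UI. What matters is that every input and output has a description the reasoning engine can use.

```python
# Illustrative sketch only: a simplified model of the metadata an
# Agentforce asset carries. The reasoning engine uses the descriptions
# to decide when to call the action and how to map conversation data
# onto its inputs.
get_upcoming_meetings_asset = {
    "name": "Get_Upcoming_Meetings",
    "description": "Retrieves the user's upcoming meetings from Salesforce.",
    "inputs": [
        {"name": "userId", "description": "The Id of the requesting user."},
    ],
    "outputs": [
        {
            "name": "meetingList",
            "description": "Upcoming meetings with date, time, and account.",
            # Surfacing the output also keeps it in context for
            # follow-ups like "prep me for the second one".
            "show_in_conversation": True,
        },
    ],
}
```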

🔹Configure the Agent

💡Why This Matters
This step is where you connect planning to execution — wiring in the actions, instructions, context handling, and guardrails that make each topic work end-to-end. Without proper configuration, the agent won’t know how to interpret user input, what actions to take, or how to respond across turns. This is what transforms a set of actions into a coherent, usable capability.

🔑What You’ll Learn
How to configure agent topics using actions, instructions, and memory
How to write scoped instructions that direct agent behavior across user intents
How to use context variables to pass data across steps without re-parsing
How to define fallback logic to handle ambiguity and maintain user trust

Now that you’ve completed the planning and built the required actions, it’s time to put everything together in Agent Builder — Salesforce’s no-code interface for creating AI agents using natural language.

The first step is to create a new Employee Agent, since this is an internal-facing agent for Salesforce users. Because this agent is built for a sales persona, we’ll name it “Sales Agent.”

🧩 Create Topics

Next, start by creating the topics you defined during the planning phase (e.g., Meeting Preparation, Post-Meeting Follow-Up). For each topic, you’ll specify:

  • Description – Used by the LLM reasoning engine to match user intent to the correct topic
  • Scope – Describes what the agent is allowed to do within that topic
  • Instructions – Step-by-step guidance on how the agent should handle specific intents
  • Actions – The underlying Flows or Apex methods that enable the agent to take action

Here’s how the Atlas Reasoning Engine processes each request:

  1. It uses the description to determine which topic matches the user’s intent
  2. It reads the scope to understand the boundaries of what it can do
  3. It processes the instructions to figure out how to respond
  4. It then selects and executes the appropriate action
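The routing step above can be made concrete with a small sketch. In reality the Atlas Reasoning Engine uses an LLM to match intent against each topic's description; the keyword-overlap scoring below is only an illustrative stand-in, and the topic names and descriptions are abbreviated from this build.

```python
# Illustrative sketch: topic selection driven by descriptions.
# A real agent uses LLM reasoning, not keyword overlap -- this stand-in
# just makes the description-to-topic matching flow concrete.
TOPICS = {
    "Meeting Preparation": "prepare brief upcoming meetings prep",
    "Post-Meeting Follow-Up": "log notes follow-up task email after meeting",
    "CRM Record Management": "pipeline opportunity contact account update create",
}

def route(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the topic whose description shares the most words with the utterance.
    return max(TOPICS, key=lambda t: len(words & set(TOPICS[t].split())))

print(route("Show me my pipeline"))  # -> CRM Record Management
```

This is also why the Description field deserves care: it is the primary signal the engine has for choosing a topic before it ever reads the scope or instructions.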

✍️ Add Instructions and Link Actions

Start with one user intent at a time. For each, write clear natural language instructions that explain when to trigger a specific action and what it should do. Then, attach the relevant action to the topic. If your action requires context variables (as input or output), you’ll specify that when attaching the action to the topic. Agent Builder supports trial-and-error development, so you can write, test, and refine as you go. You’ll also have access to Agent Assist, which helps you polish and improve your instructions in real time.

🤖 A Different Kind of Development

Unlike traditional software development, building agents doesn’t follow hard-coded logic or fixed programming rules. Instead, it’s about giving clear natural language instructions that guide the reasoning engine. AI-assisted tools in the platform help you refine and iterate — making this a more intuitive, test-driven way to develop automation.

🪢 When to use context variables?

Context variables are typically used to either filter which actions are available to the agent or to pass the output of one action directly into another without modification. Since LLMs often rephrase content, context variables help maintain consistency and accuracy across steps.
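The pass-through behavior can be sketched as follows. This is an illustrative Python model (the function names `generate_prep_brief` and `email_notes` are hypothetical, not real Agentforce actions): one action's output is stored under a context key and handed verbatim to the next action, so the LLM never gets a chance to rephrase it.

```python
# Illustrative sketch: a context variable carries one action's output
# unchanged into the next action's input, instead of letting the LLM
# re-describe it (which risks rephrasing dates, amounts, or names).
context: dict = {}

def run_action(fn, output_key, **inputs):
    result = fn(**inputs)
    context[output_key] = result   # stored verbatim for downstream actions
    return result

def generate_prep_brief(account):
    return f"Prep brief for {account}: 5 active opportunities."

def email_notes(body):
    return f"Emailed: {body}"

run_action(generate_prep_brief, "prepBrief", account="Acme Partners")
# "Mail me the notes" -> the exact stored brief is passed through, not a paraphrase.
run_action(email_notes, "emailResult", body=context["prepBrief"])
```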

The instructions are grounded in your earlier intent-to-behavior mapping and guardrails. When building, start progressively: first define what the agent should do when the topic is activated, then layer on additional flows and edge cases until the topic is fully covered.

Use the user utterances from your earlier analysis to continuously test as you build. This isn’t traditional software development — you’re configuring reasoning logic, response patterns, and context handling that must adapt across natural language inputs.

🧠 A Note on the Reasoning Engine

Unlike traditional development, where you need to parse every input and define each step in a deterministic sequence, the LLM-powered reasoning engine works more like a smart collaborator than a rule-based processor. You don’t have to script out every possible path — instead, you describe what should happen in clear natural language, and the model uses reasoning to decide which action to take.

The key is to give it just enough structure: intent, scope, instruction, and context. From there, you can test, iterate, and refine. Think of it less like writing code and more like coaching a smart assistant — the better your instructions, the better the outcome.

🏗️ Live Agent Setup Topic 1: Meeting Preparation

🏗️ Live Agent Setup Topic 2: Post-Meeting Follow-Up

🏗️ Live Agent Setup Topic 3: CRM Record Management & Pipeline Support

🔹Test the Agent

💡Why This Matters
Testing is the final gate before launch — the moment where everything comes together. It confirms that your agent behaves correctly, handles real-world scenarios gracefully, and preserves trust by responding reliably across a range of inputs. Without thorough testing, even a well-built agent can fail in production due to mismatched expectations, edge-case breakdowns, or missing fallback logic.

🔑What You’ll Learn
How to validate agent behavior across multiple turns, fallback scenarios, and memory usage
How to use intent-to-behavior mappings to create realistic test cases
How to test topic coverage end-to-end, from activation to action execution
How to identify and resolve gaps in configuration or logic before go-live

With each topic configured, the final step is to validate that the agent behaves as expected — end to end. This includes verifying that the correct actions are called, instructions are followed accurately, fallbacks are triggered when needed, and memory is maintained across turns.

Use the user utterances captured during your earlier intent-to-behavior mapping as the foundation for test cases. These examples represent how real users will interact with the agent and help ensure coverage across both typical and edge-case scenarios.

For each topic, test:

  • How the agent responds when first triggered
  • Whether multi-turn interactions behave correctly (e.g., “the second meeting,” “send the notes”)
  • How it handles missing, vague, or incorrect inputs
  • Whether context is preserved and used appropriately

We recommend testing in two ways:

  1. Using the Agentforce Testing Center for fast iteration and trace-level visibility
  2. Manual scenario testing using a curated set of utterances tied to expected behaviors

For manual testing, refer to the utterance-to-behavior mapping you created earlier. Each scenario should validate that the agent routes to the correct topic and behaves consistently with the defined instructions.
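One lightweight way to keep that mapping honest is a simple test matrix: each utterance paired with the topic you expect it to activate. The sketch below is illustrative Python, not a platform feature; in practice `route` stands in for running the utterance through Agent Builder or the Testing Center and recording which topic fired.

```python
# Illustrative sketch of a manual test matrix built from the
# intent-to-behavior mapping. `route` is whatever runs an utterance
# through the agent under test and reports the activated topic.
TEST_CASES = [
    ("Show me my upcoming meetings", "Meeting Preparation"),
    ("Log my meeting notes", "Post-Meeting Follow-Up"),
    ("Show me my pipeline", "CRM Record Management & Pipeline Support"),
]

def run_suite(route):
    failures = []
    for utterance, expected_topic in TEST_CASES:
        actual = route(utterance)
        if actual != expected_topic:
            failures.append((utterance, expected_topic, actual))
    return failures

# With a stub router that always answers correctly, the suite passes:
lookup = {utterance: topic for utterance, topic in TEST_CASES}
assert run_suite(lambda u: lookup[u]) == []
```

Keeping the expected topic explicit for every utterance makes regressions obvious when you later edit descriptions or instructions.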

Below are test scenarios for the three topics. Be sure to walk through each one to confirm expected outcomes and surface any logic gaps.

See the agent in action – https://drive.google.com/file/d/1uj2uIH9yDb3YMJadRw-pdWPKwZUIm-uR/view?usp=sharing

✅ Test Scenarios – Topic 1 – Meeting Prep

These scenarios demonstrate how the agent responds to a variety of meeting-related requests, including list retrieval, context-aware prep, indexed references, account lookups, and graceful fallback handling.

Scenario 1: View and Prepare for Upcoming Meetings

User: “Show me my upcoming meetings”
Agent:
“Here are your upcoming meetings:
June 26 at 2:00 PM – Acme Partners
July 3 at 9:00 AM – Advanced Communications”

  • Test Passed?

User: “Help me prepare for the 2nd meeting”
Agent:
“Here is your meeting prep brief for Advanced Communications:
• 5 active opportunities
• Prioritize those in negotiation stage
• Key points: multilingual onboarding, pricing, operations sync
• Action items include sending pricing, reviewing case studies, and syncing internally”

  • Test Passed?

User: “Mail me the notes”
Agent:
“The meeting prep notes have been emailed to you successfully. Anything else you need help with?”

  • Test Passed?

Scenario 2: Named Account Prep

User: “Help me prepare for my meeting with Omega Technologies”
Agent:
“Here is your meeting prep brief for Omega Technologies:
• 2 open opportunities
• Stakeholders: John Smith (CTO), Sarah Johnson (Procurement)
• Focus areas: innovation, efficiency, tailored solutions
• Prep actions include creating a proposal deck, confirming strategic alignment, and reviewing feedback”

  • Test Passed?

Scenario 3: Delayed Account Mention

User: “Help me prepare for a meeting”
Agent:
“Sure, I can help with that. Could you provide the name of the account?”

User: “Acme Partners”
Agent:
“Here is your prep summary for Acme Partners…”

  • Test Passed?

Scenario 4: Invalid Account Name

User: “Help me prepare for a meeting with Acme Corp”
Agent:
“It looks like there are no accounts found for ‘Acme Corp.’ Would you like to try a different name?”

  • Test Passed?

Scenario 5: Indexed Reference with Only One Meeting

User: “Show me my upcoming meetings”
Agent:
“Here’s your upcoming meeting:
July 3 at 9:00 AM – Advanced Communications”

User: “Help me prep for the second meeting”
Agent:
“You only have one meeting scheduled — I’ll use that.
Here’s your prep summary for Advanced Communications…”

  • Test Passed?

Scenario 6: Indexed Reference Out of Bounds (Three Meetings Requested, Two Exist)

User: “Show me my upcoming meetings”
Agent:
“Here are your upcoming meetings:
June 26 at 2:00 PM – Acme Partners
July 3 at 9:00 AM – Advanced Communications”

User: “Help me prep for the third meeting”
Agent:
“You only have two meetings scheduled. Please let me know which one you’d like to prep for — Acme Partners or Advanced Communications?”

  • Test Passed?
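The graceful-degradation behavior in Scenarios 5 and 6 can be sketched as a small resolution function. This is illustrative Python, not how the agent is actually implemented (in the live build, the instructions and the reasoning engine produce this behavior); the function name and message wording are assumptions.

```python
# Illustrative sketch of index-fallback behavior: resolve "the Nth meeting"
# against the stored meeting list, degrading gracefully at the boundaries.
from typing import Optional, Tuple

def resolve_indexed_meeting(meetings: list, index: int) -> Tuple[Optional[str], str]:
    if not meetings:
        return None, "You have no upcoming meetings."
    if len(meetings) == 1 and index > 1:
        # Scenario 5: only one meeting exists -- use it, but say so.
        return meetings[0], "You only have one meeting scheduled, so I'll use that."
    if index > len(meetings):
        # Scenario 6: out of bounds -- ask the user to choose instead of guessing.
        options = " or ".join(meetings)
        return None, f"You only have {len(meetings)} meetings scheduled. Which one: {options}?"
    return meetings[index - 1], ""
```

The design choice worth copying: when the reference is recoverable (one meeting), proceed with a transparent correction; when it is genuinely ambiguous (index past the end), stop and ask.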

✅ Test Scenarios — Topic 2: Post-Meeting Follow-Up

These scenarios demonstrate how the agent handles common and edge-case post-meeting workflows, including logging notes, updating or creating events, assigning follow-up tasks, and drafting emails.

Scenario 1: Log and Update Notes on an Existing Event

User: “Log my meeting notes”
Agent:
“Sure. What were the key discussion points?”
User: “We agreed to a pilot starting next week and discussed onboarding timelines.”
Agent:
“Here’s a summary of the notes:
• Pilot to begin next week
• Onboarding timeline discussed
Would you like to make any edits?”
User: “These notes are perfect.”
Agent:
“Which account was this meeting with?”
User: “Acme Partners”
Agent:
“Here are your recent meetings with Acme Partners:

  1. June 10 – Discovery Call
  2. June 14 – Budget Review
    Which one would you like to update?”

User: “Update the second one.”
Agent:
“Got it. Notes have been added to the Budget Review meeting. Would you like to create a follow-up task?”

  • Test Passed?

Scenario 2: No Matching Event → Create a New One

User: “These notes are perfect, but this meeting isn’t in Salesforce.”
Agent:
“Let’s create a new event. What account was this with?”
User: “Omega Technologies”
Agent:
“Great. When was the meeting?”
User: “Yesterday at 3 PM”
Agent:
“What should the subject be?”
User: “Kickoff Call”
Agent:
“Event created for Omega Technologies — Kickoff Call on [date]. Would you like to create a follow-up task?”

  • Test Passed?

Scenario 3: Follow-Up Task and Email

User: “Yes, create a follow-up task”
Agent:
“Here’s the task:
• Subject: Kickoff Call
• Due Date: Tomorrow
• Description: Follow up on pilot and onboarding.
Would you like help writing a follow-up email to the client?”
User: “Yes”
Agent:
“Here’s a draft email:
Subject: Follow-Up from Our Kickoff Call
Body: Thanks for the meeting yesterday. As discussed, we’ll begin the pilot next week…”

  • Test Passed?

Scenario 4: Unrecognized Account

User: “Update the meeting for Acme Corp”
Agent:
“I couldn’t find any accounts named ‘Acme Corp’. Would you like to try a different name?”

  • Test Passed?

Scenario 5: No Notes Provided

User: “Log my meeting notes”
Agent:
“Sure! What were the key discussion points?”
User: “…”
Agent:
“I didn’t catch any notes. Could you share at least one or two things discussed during the meeting?”

  • Test Passed?

Scenario 6: Invalid Event Index

User: “Update the third meeting with Acme”
Agent:
“You only have two recent meetings with Acme. Should I update the second one?”

  • Test Passed?

✅ Test Scenarios — Topic 3: CRM Record Management & Pipeline Support

These scenarios validate core and edge-case workflows across viewing, updating, and creating CRM records—including opportunities, contacts, and accounts.

Scenario 1: View Opportunities in Pipeline

User: “Show me my pipeline”
Agent:
“Here are your current open opportunities. Let me know if you want to filter or update any of them.”

  • Test Passed?

Scenario 2: View Opportunities by Account

User: “Show me pipeline opportunities with Omega Inc”
Agent:
“Here are the open opportunities for Omega Inc:
• $44K – Discovery – 7/6
• $40K – Discovery – 11/6
• $128K – Discovery – 6/11”

  • Test Passed?

Scenario 3: View Opportunity Details

User: “What’s the status of the Acme deal?”
Agent:
“The Acme deal has been closed. Here are the related opportunities:
• Acme Partners – Add-On Business – $22K – Closed Won – May 25
• Acme Partners – New Business – $20K – Closed Won – May 20
• Acme Partners – New Business – $18K – Closed Won – May 20”

  • Test Passed?

Scenario 4: Update Opportunity Field by Index

User: “Update the first opportunity close date to 7/28”
Agent:
“You want to update the close date of the first opportunity for Omega Inc. to July 28, 2025. Should I go ahead?”
User: “Yes”
Agent: “Close date updated to July 28.”

  • Test Passed?

Scenario 5: Create Opportunity with All Fields Provided

User: “Create a $50K opportunity for Acme in Discovery”
Agent:
“Here’s what I have:
• Name: Acme – Discovery Opportunity
• Amount: $50,000
• Stage: Discovery
• Close Date: (default or extracted)
• Type: (default or prompted)
Should I go ahead and create this opportunity?”
User: “Yes”
Agent: “Opportunity created successfully.”

  • Test Passed?

Scenario 6: Create Opportunity with Missing Fields

User: “Create a new opportunity for Omega”
Agent:
“Let’s get the details: What’s the amount, stage, close date, and type?”
(… collects data …)
“Here’s what I have. Should I go ahead?”
User: “Yes”
Agent: “Opportunity created.”

  • Test Passed?

Scenario 7: View Contacts for an Account

User: “Show me contacts for Omega Inc”
Agent:
“Here are the contacts:
• Lauren Bailey
• Chris Post
• Mark Kingston”

  • Test Passed?

Scenario 8: View Contact Field (with Missing Value)

User: “What’s the role of the first one?”
Agent:
“The role information for Lauren Bailey is not available. Would you like to update it?”

  • Test Passed?

Scenario 9: View Account Field (Industry + Account Type)

User: “What’s the industry and account type for Omega Inc?”
Agent:
“Industry: Technology
Account Type: Enterprise”

  • Test Passed?

Scenario 10: View Account Field (Missing Value)

User: “What’s the account type for Omega Inc?”
Agent:
“That field isn’t available. Would you like to update it?”

  • Test Passed?

Scenario 11: Find Similar Opportunities

User: “Show me similar opportunities to the Omega deal”
Agent:
“Here are some opportunities similar to Omega:
• $42K – Discovery – Add-On Business
• $47K – Qualification – New Business”

  • Test Passed?

Scenario 12: View Activity Timeline

User: “Show me recent activity for Omega Inc”
Agent:
“Recent activity for Omega Inc includes:
• June 3 – Discovery Call
• June 10 – Pricing Review
• June 18 – Task: Send Proposal”

  • Test Passed?

Scenario 13: Unsupported Intent (Create Contact)

User: “Add a new contact to Acme Partners”
Agent:
“I can’t create new contacts right now, but I can help you update or view existing ones.”

  • Test Passed?

Check out the next blog in the series here.
