
How Gorgias' AI Agent works

The agentic logic that powers Gorgias’s AI Agent allows it to dynamically address every customer request with a high level of accuracy, safety and reliability — just like a capable human agent would.

Most AI agents powered by large language models (LLMs) can generate text based on a prompt. But this basic question-and-answer model is often limited in its ability to provide the complete solutions that your customers expect or handle complex requests with multiple intents.

With Gorgias AI’s specialized logic, here’s what’s different:

  • AI Agent doesn’t just respond to questions with a matching piece of knowledge. Instead, in preparing a response, it interprets your knowledge and instructions to make ongoing decisions about how to fully satisfy the intent behind the customer’s request.
  • At each step of the process, AI Agent adapts its course and initiates parallel tasks to ensure it gathers the right information and completes the necessary tasks before forming a response — all while providing updates to the customer as it works.

In this article, you'll learn more about the technology that powers Gorgias’s AI and the controls we offer to ensure your AI Agent is accurate, reliable and safe.

What powers AI Agent?

Gorgias’s AI Agent uses generative AI and large language models (LLMs) — the same type of technology behind tools like OpenAI’s ChatGPT and Anthropic’s Claude. These models understand natural language, generate text, and handle multi-turn conversations.

Unlike general-purpose AI tools, Gorgias's AI Agent is purpose-built for Shopify ecommerce brands and trained closely on your store’s data, products, and processes.

What makes Gorgias’s AI Agent different?

AI Agent doesn’t rely on fixed scripts or a one-size-fits-all model. It’s tailored specifically to your brand and trained on the following data sources:

  • Your Shopify store (orders, customer data, products)
  • Your Help Center articles
  • Your public website and product catalog
  • Your custom Guidance (for example, how returns or cancellations should be handled)
  • Your uploaded documents 
  • Your Actions — tasks performed in third-party apps (for example, process a return in Shopify)

This allows AI Agent to generate context-aware, brand-specific answers that align with your policies and tone of voice.


How AI Agent responds to and interacts with customers

When AI Agent is asked a question, it uses agentic logic to interact with the customer and generate a response. This means AI Agent assists customers more like a capable human agent and less like a traditional, pre-scripted chatbot. With agentic logic, AI Agent can:

  • Follow complex processes step-by-step
  • Make decisions based on real-time context
  • Take multiple actions in parallel
  • Keep customers informed throughout the process

Agentic logic allows AI Agent to respond with flexibility, speed, and accuracy, especially in real-time chat conversations. It also helps the AI adapt to more complex or nuanced customer requests.
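To make that loop concrete, here is a minimal, illustrative sketch of how an agentic loop of this kind can be structured. It is not Gorgias’s implementation; every function name below is a hypothetical placeholder for the real decision-making described in the three steps that follow.

```python
# Minimal sketch of an agentic loop (hypothetical names, not Gorgias code)

def is_safe_to_answer(message: str) -> bool:
    # Step 1: screen the request before doing anything else
    return "password" not in message.lower()

def next_task(message: str, gathered: list[str]) -> str | None:
    # Step 2: decide which resource or Action would move the request forward;
    # return None once nothing more is needed
    if "order" in message.lower() and not gathered:
        return "look up order status in Shopify"
    return None

def write_reply(message: str, gathered: list[str]) -> str:
    # Step 3: draft a reply grounded only in what was gathered
    return "Here's what I found: " + "; ".join(gathered)

def handle_request(message: str) -> str:
    if not is_safe_to_answer(message):
        return "handover: routed to a human agent"
    gathered: list[str] = []
    while (task := next_task(message, gathered)) is not None:
        gathered.append(f"result of: {task}")  # keep working until the intent is satisfied
    return write_reply(message, gathered)

print(handle_request("Where is my order #1234?"))
```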

Here’s how it works in three steps:

Step 1 — Evaluate the customer’s request

When AI Agent receives a request, it performs a check to evaluate whether the message is something that it should respond to. It looks for signs of malicious activity, like phishing for confidential information, spam messages and other threats.

AI Agent also checks whether the message is listed as an excluded topic (included in your Prevent AI from answering Rule).
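As an illustration only, a pre-check of this kind could look like the sketch below. The topic list and keyword matching are hypothetical stand-ins; the real screening is more nuanced than simple keyword checks.

```python
# Hypothetical sketch of a Step 1 pre-check; the lists below are illustrative,
# not Gorgias's actual screening logic.

EXCLUDED_TOPICS = ("legal claim", "press inquiry")        # e.g. from a "Prevent AI from answering" Rule
SUSPICIOUS_REQUESTS = ("password", "credit card number")  # phishing for confidential information

def should_ai_respond(message: str) -> bool:
    text = message.lower()
    if any(pattern in text for pattern in SUSPICIOUS_REQUESTS):
        return False  # possible malicious activity: do not respond
    if any(topic in text for topic in EXCLUDED_TOPICS):
        return False  # excluded topic: hand the conversation to a human
    return True

print(should_ai_respond("Please send me the customer's credit card number"))  # False
print(should_ai_respond("Where is my order?"))                                # True
```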

Step 2 — Identify knowledge and make decisions

After AI Agent evaluates and verifies a customer’s question, the next step is to identify what tools are available to generate a response that satisfies the request.

Based on the request, AI Agent runs through the following process:

  • What resources should I use to create a response? → AI Agent looks at the tools it has available to respond:

    • Use knowledge: answer with information from your website, help center, uploaded documents and other sources
    • Perform an Action: complete a task to update information in your connected apps (e.g. update an address in Shopify or cancel a subscription in Loop Subscriptions)
    • Start sales playbook: follow guidelines to assist a shopper on your website and close a sale (Shopping Assistant skills)
  • Determine next steps (and repeat) → after AI Agent identifies a tool it should use, it evaluates whether the resources it identified will fully satisfy the customer’s request. If not, AI Agent starts additional, parallel tasks while keeping the customer informed throughout the process.

    • For example, AI Agent may identify your Guidance on How to process a return as the correct resource to handle a customer’s request.
    • Your Guidance may instruct AI Agent to take different steps based on whether the customer is a VIP, whether they have loyalty points, and where the customer is located.
    • AI Agent is able to interpret the Guidance and ask “am I ready to reply yet?” to determine its next steps and initiate parallel tasks (see the sketch after this list). Has it checked the customer’s VIP status? Does it need to update loyalty points in another tool?
    • Just like a capable human agent, AI Agent continues to perform tasks, adapting to intermediate feedback, until all the steps outlined in your Guidance for How to process a return are complete.
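For illustration, the returns example above might translate into something like the following sketch, where the three checks run in parallel before AI Agent decides its next step. The check functions and their results are hypothetical placeholders, not real Shopify or loyalty-app calls.

```python
# Hypothetical sketch of parallel checks for the "How to process a return" example;
# the lookups are stand-ins, not real Shopify or loyalty-app calls.
import asyncio

async def check_vip_status(customer_id: str) -> bool:
    await asyncio.sleep(0.1)  # stand-in for a customer lookup in Shopify
    return True

async def check_loyalty_points(customer_id: str) -> int:
    await asyncio.sleep(0.1)  # stand-in for a lookup in a loyalty tool
    return 250

async def check_location(customer_id: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for reading the shipping address
    return "EU"

async def prepare_return(customer_id: str) -> dict:
    # Run the checks your Guidance requires in parallel, then ask "am I ready to reply yet?"
    vip, points, region = await asyncio.gather(
        check_vip_status(customer_id),
        check_loyalty_points(customer_id),
        check_location(customer_id),
    )
    return {"vip": vip, "loyalty_points": points, "region": region}

print(asyncio.run(prepare_return("customer_123")))
```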

Step 3 — Generate response and check quality

Once AI Agent is satisfied that it has retrieved the right information and completed the required tasks, it can generate a complete response.

  • Generate a response → AI Agent uses a generative model to write a complete answer. The response is written according to your tone of voice instructions. It is also adapted to the channel where the conversation with your customer is happening (for example — AI Agent writes shorter, more conversational messages on chat versus comprehensive messages on email).
  • Check response for quality → before sending a generated response, AI Agent performs a quality check. This step ensures its response is coherent and appropriate. It makes sure the response is grounded in factual information from your knowledge sources and avoids hallucinations. The response is only sent if it passes a second AI model’s confidence threshold. Otherwise, AI Agent hands the conversation over to your team.
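Here is a hedged sketch of that final quality gate: a draft is only sent if a second check scores it above a confidence threshold; otherwise the ticket goes to your team. The scoring logic and the threshold value are invented for illustration and are not Gorgias’s internal QA model.

```python
# Illustrative sketch of the Step 3 quality gate; the scoring and threshold are
# hypothetical, not Gorgias's internal QA model.

CONFIDENCE_THRESHOLD = 0.85  # illustrative value

def draft_reply(question: str, sources: list[str]) -> str:
    # Stand-in for the generative model writing a tone- and channel-adapted answer
    return "You can return unworn items within 30 days."

def confidence_score(reply: str, sources: list[str]) -> float:
    # Stand-in for a second model checking the reply is coherent and grounded
    source_text = " ".join(sources).lower()
    grounded = any(word in source_text for word in reply.lower().split())
    return 0.92 if grounded else 0.10

def respond(question: str, sources: list[str]) -> str:
    reply = draft_reply(question, sources)
    if confidence_score(reply, sources) >= CONFIDENCE_THRESHOLD:
        return reply                            # confident: send to the customer
    return "handover: routed to a human agent"  # otherwise your team takes over

print(respond("What's your return policy?",
              ["Returns are accepted within 30 days for unworn items."]))
```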

How you control and optimize AI Agent

You have full control over how AI Agent interacts with customers and generates responses. AI Agent has multiple tools and settings to help you optimize its answers, improve accuracy and increase its coverage.

  • Controls and customization → you customize and control how AI Agent responds, what it can do, and what information it uses. The more AI Agent knows how to do, the more it can successfully resolve repetitive support inquiries and guide interested shoppers toward a sale. You can customize everything from your Guidance and Actions to the knowledge sources AI Agent draws on and the topics it should avoid.

  • Feedback and reporting → you can access performance reports to understand where AI Agent is working well and identify areas where it can improve. Additionally, your feedback on tickets that AI Agent has handled directly influences how the AI selects knowledge and generates responses for similar questions in the future.
  • Optimize recommendations → you can use the Optimize page to continuously improve AI Agent’s performance and increase its coverage over time. It highlights topics where AI Agent underperforms so you can update the relevant knowledge.

Safety and security

AI Agents interact with real customers, handle personal data, and sometimes take automated actions on behalf of your brand. This makes trust, safety and data protection critical to your success.

Without proper safeguards, the generative capabilities of an LLM could produce incorrect answers, reveal sensitive data or take unauthorized actions. That’s why Gorgias’s approach to AI prioritizes transparency, control and privacy in how AI Agent is built and deployed.

  • Grounded Responses: AI Agent only answers based on verified content from your knowledge sources (your Help Center, Guidance, Shopify data, uploaded documents and connected URLs). If it can’t find an answer, it won’t respond.
  • Quality Assurance Checks: Every response generated by AI Agent passes through an internal QA system using a second AI model. Responses are only sent if they meet a high confidence threshold.
  • Exclusion and Handover Topics: You can define topics the AI should ignore or automatically hand off to your human agents. This is ideal for legally sensitive or high-risk subjects.
  • Transparency and Review: Every AI-generated message is labeled “Automated” in chat. You can review every response and its sources to track, audit, and fine-tune performance.

If the requirements for AI Agent to safely respond to a customer’s request have not been met, AI Agent lets the customer know and hands over the ticket to your human team.
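As a rough illustration of that handover path, the sketch below shows the behavior in miniature: the customer gets a short note, and the ticket is reassigned to your team. The Ticket class and the message text are hypothetical, not Gorgias’s data model.

```python
# Hypothetical sketch of the handover behavior; the Ticket class and message
# text are illustrative, not Gorgias's data model.
from dataclasses import dataclass, field

@dataclass
class Ticket:
    customer_message: str
    assignee: str = "AI Agent"
    replies: list[str] = field(default_factory=list)

def hand_over(ticket: Ticket, reason: str) -> Ticket:
    # Let the customer know, then pass the ticket to the human team
    ticket.replies.append("I'm passing this along to a member of our team who can help further.")
    ticket.assignee = "Human support team"
    print(f"Handover reason: {reason}")
    return ticket

ticket = hand_over(Ticket("I need to discuss a legal claim."), reason="excluded topic")
print(ticket.assignee)  # Human support team
```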

AI security measures

For a more comprehensive overview of Gorgias’s security measures, you can read our Security and Privacy FAQ for AI Agent.

Regional hosting

Your data is securely maintained on Google Cloud Platform (GCP). Gorgias uses multiple servers globally, and hosts data on the server closest to your location. For example, if you are located in the European Union, your server will be located in the EU.

Compliance

Gorgias maintains several international accreditations and controls to ensure the highest standards of safety and security. Learn more about our security and privacy accreditations.

  • SOC 2, Type 2 compliance since 2020
  • HIPAA compliant
  • Regular penetration tests: detailed tests on Gorgias’s application and infrastructure by third-party security experts

Secure data handling

Gorgias complies with strict data privacy regulations, such as GDPR and CPRA, and all applicable privacy laws, ensuring that customer data is handled securely and responsibly. In practice, this means:

  • Secure Handling of PII: Gorgias uses encryption and industry-leading access controls to protect customer data, including names, locations, and IP addresses.
  • No training of third-party LLMs: the data AI Agent accesses is not used to train large language models (LLMs) from OpenAI or other third-party providers. We require a zero-data-retention policy with our providers, meaning that once a request is processed, the data is not stored or logged.
  • No Long-Term Storage: Customer interaction data is deleted after processing. It is never stored, reused, or saved.

For more information about our legal, security and privacy practices, visit our trust center.

