# Prophecy AI
This page introduces Prophecy's AI capabilities, including descriptions of its task-oriented and agentic features.
While Prophecy's AI features are enabled by default, they can be disabled upon request for Dedicated SaaS and self-hosted deployments.
## Introduction
Prophecy's Copilot is built to help data teams speed up data pipeline development. Beyond code, Copilot assists with visual pipeline development, so less technical users can contribute without writing SQL or Python. For technical users, Copilot accelerates development by generating expressions, suggesting transformations, and creating scripts.
Copilot works by understanding your project metadata. It learns from information like project descriptions, table names and descriptions, column names and descriptions, and other metadata.
For SQL projects only, Prophecy generates knowledge graphs that enable additional functionality, such as the AI agent. In addition to project metadata, knowledge graphs contain information about fabrics, including dataset details used for data exploration.
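To illustrate the kind of metadata Copilot can learn from, here is a minimal, hypothetical sketch. The structure and field names are assumptions for illustration only and do not reflect Prophecy's internal representation:

```python
# Hypothetical illustration only: not Prophecy's internal format,
# just an example of the kind of project metadata Copilot learns from.
project_metadata = {
    "project": {
        "name": "customer_360",
        "description": "Unify customer records across CRM and billing.",
    },
    "tables": [
        {
            "name": "crm.customers",
            "description": "One row per customer from the CRM system.",
            "columns": [
                {"name": "customer_id", "description": "Unique customer key"},
                {"name": "signup_date", "description": "Date the account was created"},
            ],
        }
    ],
}
```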
Prophecy only tests prompts in English. Other languages may work, but results depend on the LLM provider; use them at your own discretion.
## Capabilities
Copilot helps you with one-click tasks such as generating documentation, fixing errors, and creating unit tests and data quality checks. Additionally, Prophecy's AI agent can help you build SQL pipelines through natural language: ask the agent to add gems, explore datasets in your SQL warehouse, or visualize data.
To learn more, see Copilot for SQL projects and Copilot for Spark projects.
## LLM providers and model families
Prophecy integrates with multiple LLM providers and model families. This gives you flexibility in choosing the right models depending on your deployment type and performance needs.
SaaS deployments use a Prophecy-managed OpenAI subscription with GPT-4o and GPT-4o mini. Dedicated SaaS and self-hosted deployments connect to customer-managed endpoints instead.
Each AI endpoint configuration requires two models:

- Smart LLM for complex tasks, such as `gpt-4o`.
- Fast LLM for lightweight tasks, such as `gpt-4o-mini`.
For Dedicated SaaS deployments, contact Prophecy to configure a custom endpoint.
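To make the two-model setup concrete, here is a minimal, hypothetical sketch of what an AI endpoint configuration might look like. The field names (`provider`, `base_url`, `smart_llm`, `fast_llm`) are illustrative assumptions, not Prophecy's actual configuration schema:

```python
# Hypothetical illustration only: these field names are assumptions,
# not Prophecy's actual configuration schema.
ai_endpoint_config = {
    "provider": "openai",                                 # LLM provider backing the endpoint
    "base_url": "https://example-endpoint.internal/v1",   # customer-managed endpoint
    "smart_llm": "gpt-4o",                                # complex tasks (e.g., pipeline generation)
    "fast_llm": "gpt-4o-mini",                            # lightweight tasks (e.g., quick suggestions)
}

def pick_model(task_complexity: str) -> str:
    """Route a request to the smart or fast model based on task weight."""
    key = "smart_llm" if task_complexity == "complex" else "fast_llm"
    return ai_endpoint_config[key]

print(pick_model("complex"))  # -> gpt-4o
```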
### Supported providers
### Supported models
While Prophecy can connect to all providers shown in the diagram, the following models are officially tested and supported:
- `gpt-4o`
- `gpt-4o-mini`
- `gemini-2.5-flash`
- `gemini-2.5-flash-lite`
## Security
Prophecy employs rigorous industry practices to safeguard the security of the Prophecy application and maintain the privacy of customer data. Below are just a few components of our comprehensive security strategy and system structure:
- Prophecy does not store or send your data to any third-party large language model (LLM) providers. Instead, Prophecy uses rich metadata to construct its knowledge graph. As a result, Prophecy can interface with LLM providers while keeping your data private.
- Prophecy conducts annual penetration tests to assess its security posture and identify vulnerabilities. For our latest penetration test report, see the Pentest Report.
- Prophecy maintains SOC 2 compliance, as audited by Prescient Assurance.
Read more details on Prophecy’s security and compliance posture at our Security Portal.