Security Summary for Ask
Ask Security, Data Handling, and Responsible Use
Ask is designed to give you AI-assisted analytics support while protecting customer data. It uses stateless model inference, regional data storage, tenant isolation, encryption, access controls, monitoring, and responsible AI safeguards. Customer data is not used to train public or shared AI models.
What is Ask?
Ask is conversational AI inside the HappySignals platform. It helps authorized users ask natural-language questions about platform analytics, experience data, and operational insights.
Ask uses Microsoft Azure OpenAI services and is designed with security, data residency, monitoring, and responsible AI controls in mind.
Who can use Ask?
Ask is an add-on for the HappySignals platform and is included in the Drive package.
Ask is available to authenticated HappySignals platform users, such as administrators and analysts. Employees who only answer surveys do not have access to Ask.
What data is processed?
When a user submits a question, Ask may process:
- the user’s prompt
- relevant feedback data
- conversation history, when needed
- curated knowledge content used to improve answer quality
The prompt is handled through the HappySignals GenAI orchestration layer and processed with Azure OpenAI models.
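The orchestration step described above can be pictured as assembling the model input from the prompt, recent history, and grounding content before inference. This is a minimal illustrative sketch only; the function name and pipeline shape are assumptions, not the actual HappySignals architecture.

```python
# Illustrative sketch of a prompt passing through an orchestration
# layer before model inference. All names and the pipeline shape are
# hypothetical; this is not the HappySignals implementation.
def orchestrate(prompt: str, history: list[str], knowledge: str) -> str:
    """Assemble a single model input from grounding content,
    conversation history, and the user's prompt."""
    parts = [f"Knowledge: {knowledge}"]
    parts += [f"History: {turn}" for turn in history]
    parts.append(f"User: {prompt}")
    return "\n".join(parts)
```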
Is customer data used to train AI models?
No. Customer data processed through Ask is not used to train public or shared foundation models.
Where is conversation data stored?
Conversation data is stored in the HappySignals tenant database. Stored information may include:
- user prompts
- AI-generated responses
- conversation history
- interaction metadata
- timestamps
Each customer environment uses a logically separated database schema for tenant-level isolation.
Users cannot see other users’ conversations.
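The isolation model above can be sketched as a data-access function that always filters by both tenant and user. This is an illustrative example under assumed names (`Conversation`, `get_conversations`, the field names), not the actual HappySignals implementation.

```python
# Illustrative sketch of tenant- and user-scoped conversation access.
# All names here are hypothetical, not the HappySignals implementation.
from dataclasses import dataclass

@dataclass
class Conversation:
    tenant_id: str
    user_id: str
    prompt: str
    response: str

_STORE = [
    Conversation("tenant-a", "alice", "How did satisfaction trend?", "..."),
    Conversation("tenant-a", "bob", "Show happiness by team", "..."),
    Conversation("tenant-b", "carol", "Top lost-time factors?", "..."),
]

def get_conversations(tenant_id: str, user_id: str) -> list[Conversation]:
    """Return only the caller's own conversations.

    Every lookup is scoped by tenant (logical separation) and by user
    (users cannot see other users' conversations).
    """
    return [c for c in _STORE
            if c.tenant_id == tenant_id and c.user_id == user_id]
```

The key design point is that the tenant and user filters are applied inside the access layer itself, so no caller can request another tenant's or user's data.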
How is model processing handled?
Model inference is stateless. This means the AI model processes a request only temporarily to generate a response; the underlying LLMs do not retain prompts or responses after processing. Prompts are, however, stored in the HappySignals platform and retained in HappySignals backend logs for 12 months.
For UK-hosted customers, AI model inference may be performed in the EU region because the required AI capabilities are not currently available in the UK region. In this case, processing is temporary and stateless, and no customer data is stored outside the UK platform environment.
What is the RAG knowledge store?
Ask may use Retrieval-Augmented Generation, or RAG, to ground responses in curated HappySignals knowledge content.
This knowledge content:
- is managed through HappySignals SharePoint
- uses version control and publishing workflows
- is stored in Azure Blob Storage
- does not contain customer-specific data
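The RAG pattern described above can be sketched as: retrieve the most relevant curated knowledge snippet, then ground the prompt with it. Production RAG systems typically use vector embeddings rather than keyword overlap; this simplified version, and every name and snippet in it, is illustrative only.

```python
# Minimal Retrieval-Augmented Generation sketch: pick the curated
# knowledge snippet with the largest keyword overlap, then ground the
# prompt with it. Real systems usually use vector embeddings; all
# names and content here are illustrative, not HappySignals' own.
KNOWLEDGE = [
    "Happiness Score is calculated from end-user survey responses.",
    "Lost time measures productivity lost while waiting for IT support.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_grounded_prompt(question: str) -> str:
    """Combine the retrieved snippet with the user's question."""
    context = retrieve(question, KNOWLEDGE)
    return f"Context: {context}\nQuestion: {question}"
```

Because the knowledge store holds only curated, published content, grounding a prompt this way never injects customer-specific data into the model input.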
How is interaction data used?
Interaction data may be used by HappySignals to improve prompt orchestration, support service development, strengthen safety mechanisms such as hallucination detection, and maintain overall system health, performance, and reliability.
What security controls are in place?
Ask uses several security and governance controls, including:
- encryption in transit
- encryption at rest
- SSO authentication
- role-based access controls for operational access
- content filtering and safety controls to prevent harmful, abusive, unlawful, or inappropriate inputs and outputs
- auditable logging
- monitoring through observability tooling
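The role-based access control listed above can be sketched as a role-to-permission lookup. The role and permission names below are hypothetical and do not reflect the actual HappySignals role model.

```python
# Illustrative role-based access control (RBAC) check. Role and
# permission names are hypothetical, not the HappySignals role model.
ROLE_PERMISSIONS = {
    "administrator": {"ask:query", "ask:view_logs", "ask:configure"},
    "analyst": {"ask:query"},
    "survey_respondent": set(),  # survey-only users have no Ask access
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```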
What should users avoid submitting?
Users should not submit sensitive personal data, regulated information, or confidential information.
Are AI responses always correct?
No. AI-generated responses may be inaccurate, incomplete, or unsuitable for a specific situation.
Users should treat Ask responses as advisory. Operational and business decisions should be reviewed by a person and should not rely solely on AI-generated output.