Responsible AI

Last Updated: April 15, 2026

At Smalt AI, we believe that AI should augment human capabilities, not replace human judgement. As an AI platform serving the financial services industry, we hold ourselves to the highest standards of responsibility, transparency, and ethical practice.

Our AI Principles

1. Human-in-the-Loop

AI is a powerful tool, but critical decisions require human oversight. We design our platform so that:

2. Transparency

You deserve to understand how our AI works:

3. Data Privacy and Protection

Your data is sacred to us:

4. Fairness and Bias Mitigation

We are committed to minimising bias in our AI outputs:

5. Safety and Reliability

We build with safety as a core requirement:

6. Accountability

We take responsibility for our AI systems:

How Our AI Works

Architecture Overview

Smalt AI uses a multi-model architecture to deliver the best results:

| Component | Description |
| --- | --- |
| Foundation Models | We use leading AI models (Anthropic Claude, Google Gemini) via their enterprise APIs. These models provide the core reasoning and language capabilities. |
| Intelligent Routing | Our system routes each query to the most appropriate model based on the task type, optimising for quality and efficiency. |
| Context Engineering | We use advanced context management to provide models with relevant information while minimising unnecessary data exposure. |
| Specialised Skills | Domain-specific capabilities (financial modelling, document generation, research) are built as structured skill modules that guide AI behaviour. |
| Output Validation | Generated outputs pass through validation layers to catch formatting issues, calculation errors, and policy violations. |
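To make the routing and validation stages in the table above concrete, here is a minimal sketch of how a task-based router and an output-validation layer could fit together. The model names, task labels, and validation rules are illustrative assumptions, not a description of Smalt AI's actual implementation.

```python
# Hypothetical sketch of task-based model routing and output validation.
# Model identifiers, task types, and checks are illustrative assumptions.

TASK_MODEL_MAP = {
    "financial_modelling": "claude-enterprise",
    "document_generation": "claude-enterprise",
    "research": "gemini-enterprise",
}
DEFAULT_MODEL = "claude-enterprise"


def route(task_type: str) -> str:
    """Pick the model best suited to the task, with a safe fallback."""
    return TASK_MODEL_MAP.get(task_type, DEFAULT_MODEL)


def validate(output: str) -> list[str]:
    """Run simple validation checks and return any issues found."""
    issues = []
    if not output.strip():
        issues.append("empty output")
    if "TODO" in output:
        issues.append("placeholder text left in output")
    return issues


# Example: a research query is routed, and its output is validated
# before being returned to the user.
model = route("research")
problems = validate("Quarterly revenue grew 12%.")
```

In a real system the validation layer would also run calculation checks and policy filters, and any flagged output would be held for human review rather than delivered directly, consistent with the human-in-the-loop principle above.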

What Our AI Does NOT Do

Regulatory Alignment

We monitor and align with emerging AI regulations globally:

| Regulation / Framework | Our Approach |
| --- | --- |
| EU AI Act | We classify our system as a general-purpose AI application and comply with transparency and documentation requirements. We monitor regulatory guidance for financial services-specific requirements. |
| UK AI Regulation | We follow the UK's pro-innovation framework and sector-specific guidance from the FCA and other regulators. |
| NIST AI RMF | Our risk management practices are informed by the NIST AI Risk Management Framework. |
| ISO/IEC 42001 | We are aligning our AI management practices with this standard for AI management systems. |

Known Limitations

We believe in being upfront about what AI cannot do:

Feedback and Reporting

We actively welcome feedback on our AI systems:

Our Commitment

We are committed to evolving our responsible AI practices as the technology and regulatory landscape develops. We will:

Questions about our AI practices?
Contact us at support@smaltai.com or speak to your account manager.