Enterprise AI Gateway

AI in Pharma. Without the Compliance Risk.

Your teams are already using AI. The question is whether it's governed. DNXT's AI Gateway is the compliance infrastructure that sits between every AI interaction and your regulated data — enforcing PII detection, content policy, and 21 CFR Part 11 audit logging before a single token leaves your environment.

[Gateway architecture diagram: every AI request from the DNXT application features (Smart Linking, Search, Document AI) is proxied through the DNXT AI Gateway compliance layer (PII filter: active; content policy: STRICT; rate limiter: enforcing) before reaching the selected provider (OpenAI GPT-4o, Anthropic Claude 3.5, Azure AI GPT-4 Turbo, Google Gemini Pro). A 21 CFR Part 11 audit log panel shows entries such as "✓ [14:32:07] req-a8f2 · tenant:acme · PASS · tokens:847" and "✗ [14:32:01] req-b3c1 · PII BLOCKED · patient_id detected".]
  • 4 AI providers natively supported
  • AES-256 encrypted API key storage per tenant
  • 100% of AI requests audited in a tamper-proof log
  • 3 content policy levels: MINIMAL / STANDARD / STRICT

Who This Is Built For

AI in regulated environments fails not because the technology is wrong, but because the governance infrastructure doesn't exist. This changes that for three distinct stakeholders.

🏛️
VP of Regulatory Operations
Mid-size Pharma · Biotech · CRO

You've fielded three requests from submission teams to "just use ChatGPT" for document drafting, and you've said no every time — not because AI isn't useful, but because you have no visibility into what data is being pasted into public AI interfaces. You know shadow IT is happening anyway, and when an audit comes, you'll be the one explaining why patient data from a Phase III trial ended up in an external model's training pipeline. You need a way to say yes to AI without creating a compliance time bomb.

  • Full audit trail of every AI interaction, per user, per session — defensible in an FDA inspection
  • PII and proprietary data blocked at the gateway before it reaches any external model
  • Replace shadow IT with a governed, sanctioned AI channel your QA team can sign off on
  • Set content policy to STRICT mode for submission workflows, STANDARD for internal drafting
🔒
Chief Information Officer
Pharma · Specialty Biopharma

Your legal team has sent you three memos about AI vendor data retention policies. Your security team flagged that fourteen employees have entered drug formulation details into free-tier AI tools in the last ninety days. You want a single, centralized control point — one place to approve AI providers, set spend caps, rotate API keys, and demonstrate to auditors that AI use across the organization is governed. Right now, that control point doesn't exist, and every new AI feature request from a product team creates a new risk surface.

  • One gateway controls all AI provider integrations — no per-application API key sprawl
  • AES-256 encrypted key vault with per-tenant isolation and rotation capability
  • Hard cost caps and rate limits prevent runaway AI spend from any single tenant or feature
  • Real-time cost analytics with token usage broken down by tenant, provider, and feature
📄
Head of Regulatory Affairs
Biotech · Specialty Pharma · CRO

Your submission specialists are spending 6–8 hours manually cross-referencing ISS tables against individual study reports to build a coherent Section 2.5 Clinical Overview. You know AI-assisted linking and summarization would cut this to under 90 minutes — you've seen the demos. But every time you bring it to legal and QA, the conversation stalls on three questions: Can AI see our unpublished IND data? Who logs what the AI generated? What happens if the model injects incorrect citations? You need AI that comes with answers to those questions baked in.

  • AI-assisted document linking and cross-reference generation within a fully audited session
  • Prompt injection detection prevents adversarial inputs from corrupting AI-generated regulatory content
  • Every AI-generated suggestion stored in audit log with model version, temperature, and input hash
  • Regulatory teams finally get AI productivity without waiting 18 months for an internal IT project

How It Works

Every AI request in DNXT passes through a deterministic, multi-stage pipeline. Here's exactly what happens between the moment a user invokes an AI feature and the moment a response reaches the application layer.

1
Tenant Identity & Configuration Resolution

Before any AI logic executes, the gateway resolves the requesting tenant's configuration record from the platform database. This record contains the tenant's approved AI provider, content policy level (MINIMAL, STANDARD, or STRICT), enabled feature set, rate limit thresholds, and cost cap settings. If a feature is not explicitly enabled for that tenant, the request is rejected at this stage with a structured error — no AI model is ever contacted. This means configuration is the first and primary control gate, not an afterthought enforced at the application layer.
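For concreteness, here is a minimal sketch of this gate in Python. The fields mirror the configuration record described above, but the TenantConfig shape and gate_request helper are illustrative assumptions, not DNXT's actual schema or API.

```python
from dataclasses import dataclass, field

# Illustrative only: field names mirror the configuration record described
# above, not DNXT's actual schema.
@dataclass
class TenantConfig:
    tenant_id: str
    provider: str                  # e.g. "openai", "anthropic", "azure", "vertex"
    content_policy: str            # "MINIMAL" | "STANDARD" | "STRICT"
    enabled_features: set = field(default_factory=set)
    requests_per_minute: int = 60
    monthly_cost_cap_usd: float = 500.0

class FeatureNotEnabled(Exception):
    """Structured rejection raised before any model is ever contacted."""

def gate_request(config: TenantConfig, feature: str) -> TenantConfig:
    # Configuration is the first control gate: an unlisted feature never
    # reaches PII scanning, let alone a provider API.
    if feature not in config.enabled_features:
        raise FeatureNotEnabled(f"{feature} is not enabled for {config.tenant_id}")
    return config
```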

2
Rate Limit & Budget Pre-Check

The gateway queries the tenant's current usage counters — requests per minute, tokens consumed in the current billing window, and cumulative cost against the configured hard cap. These counters are maintained in a fast in-memory store and updated atomically to prevent race conditions under concurrent load. If the request would breach either the rate limit or the cost cap, it is queued or rejected with an appropriate status code before any PII scanning or model API calls begin, so cost governance adds nothing to a valid request beyond a fast in-memory lookup.
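A sketch of the atomic check-and-increment behind this pre-check, assuming a single-process counter guarded by a lock; a real deployment would hold these counters in a shared store so they stay atomic across gateway instances. All names and limits are illustrative.

```python
import threading
import time

class UsageCounters:
    """In-process stand-in for the gateway's fast in-memory store."""

    def __init__(self, rpm_limit: int, cost_cap_usd: float):
        self._lock = threading.Lock()
        self.rpm_limit = rpm_limit
        self.cost_cap_usd = cost_cap_usd
        self.window_start = time.monotonic()
        self.requests_this_minute = 0
        self.cost_so_far = 0.0

    def precheck(self, estimated_cost: float) -> tuple[bool, str]:
        with self._lock:  # check-and-increment must be atomic
            now = time.monotonic()
            if now - self.window_start >= 60:
                self.window_start, self.requests_this_minute = now, 0
            if self.requests_this_minute + 1 > self.rpm_limit:
                return False, "rate_limited"       # caller returns 429 + Retry-After
            if self.cost_so_far + estimated_cost > self.cost_cap_usd:
                return False, "cost_cap_exceeded"  # hard cap: reject before any scan
            self.requests_this_minute += 1
            self.cost_so_far += estimated_cost
            return True, "ok"
```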

3
PII Detection & Automatic Redaction

The full prompt — system message, user message, and any injected context — is passed through the PII scanner before the external API call is constructed. The scanner uses pattern-matching rules and entity recognition to detect patient identifiers (names, DOBs, subject IDs), unpublished trial identifiers, email addresses, and common regulatory document metadata that should not leave the platform. Under STRICT policy, any detected PII causes the request to be blocked entirely and logged. Under STANDARD policy, detected entities are replaced with typed placeholders (e.g., [PATIENT_ID]) and the redacted prompt is forwarded. The redaction mapping is stored server-side so responses can be re-hydrated if appropriate, and the redaction event is recorded in the audit trail regardless of policy outcome.
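The redaction path might look like the following sketch. The three patterns are toy stand-ins for the pharma-tuned pattern library and entity recognition described above (the SUBJ- subject-ID format is invented), and blocking under STRICT is shown as an exception the caller would log.

```python
import re

# Toy patterns for illustration; the production scanner uses a much larger
# pharma-specific library plus entity recognition, not three regexes.
PII_PATTERNS = {
    "PATIENT_ID": re.compile(r"\bSUBJ-\d{4,}\b"),          # hypothetical subject-ID format
    "EMAIL":      re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DOB":        re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def redact(prompt: str, policy: str):
    """STRICT: block on any hit. STANDARD: swap hits for typed placeholders
    and keep the mapping server-side for optional response re-hydration."""
    mapping, redacted = {}, prompt
    for entity_type, pattern in PII_PATTERNS.items():
        # dict.fromkeys de-duplicates matches while preserving order
        for i, match in enumerate(dict.fromkeys(pattern.findall(redacted))):
            placeholder = f"[{entity_type}_{i}]"
            mapping[placeholder] = match
            redacted = redacted.replace(match, placeholder)
    if mapping and policy == "STRICT":
        raise ValueError("PII detected; request blocked and logged")
    return redacted, mapping   # the mapping never leaves the platform
```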

4
Prompt Injection Detection

Prompt injection — where malicious or accidental input attempts to override the system prompt and redirect model behavior — is a specific risk in regulatory workflows where AI-generated content may be incorporated into submissions. The gateway applies a set of heuristic and structural checks to identify injection patterns: instruction override phrases, role-switching attempts, delimiter manipulation, and base64-encoded payloads that encode instructions the PII scanner would miss. Flagged prompts are either blocked (STRICT) or sanitized and logged (STANDARD). The injection detection runs independently of the PII scanner so that both checks complete before the API call is assembled.
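A minimal version of the heuristic pass, showing why encoded payloads get a second look: base64-looking runs are decoded and re-scanned. The two patterns are illustrative; the production rule set described above is broader.

```python
import base64
import re

# Illustrative heuristics; the gateway's actual rule set also covers
# role-switching, delimiter manipulation, and structural checks.
OVERRIDE_PHRASES = re.compile(
    r"ignore (all )?previous instructions|disregard the system prompt|you are now",
    re.IGNORECASE,
)
B64_RUN = re.compile(r"[A-Za-z0-9+/=]{40,}")  # long base64-looking runs

def scan_for_injection(prompt: str) -> list[str]:
    flags = []
    if OVERRIDE_PHRASES.search(prompt):
        flags.append("instruction_override")
    # Decode base64-looking spans and re-check: encoded payloads can hide
    # instructions that the plain-text pass (and the PII scanner) would miss.
    for run in B64_RUN.findall(prompt):
        try:
            decoded = base64.b64decode(run, validate=True).decode("utf-8", "ignore")
        except Exception:
            continue
        if OVERRIDE_PHRASES.search(decoded):
            flags.append("encoded_instruction_override")
    return flags  # STRICT: any flag blocks; STANDARD: sanitize + log at warning
```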

5
Secure API Key Retrieval & Provider Routing

The tenant's AI provider API key is stored in the platform's encrypted key vault using AES-256 encryption at rest, isolated by tenant. The gateway retrieves and decrypts the key at runtime for the duration of the API call only — keys are never persisted in application memory beyond the request lifecycle. The gateway then constructs the provider-specific API request (OpenAI, Anthropic, Azure OpenAI, or Google Vertex AI) using the appropriate SDK, including model selection, temperature, and max token limits configured per feature. This abstraction means a tenant can switch AI providers by updating a single configuration record without any application code changes.
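The routing abstraction can be pictured as a dispatch table keyed by the tenant's configured provider. Everything here is an assumption for illustration: the ProviderAdapter contract, the vault.decrypt_key call, and the config fields are invented names, and real adapters would wrap the respective official SDKs.

```python
from typing import Protocol

class ProviderAdapter(Protocol):
    """Uniform internal contract; one adapter per provider SDK
    (OpenAI, Anthropic, Azure OpenAI, Google Vertex AI)."""
    def complete(self, api_key: str, model: str, prompt: str,
                 temperature: float, max_tokens: int) -> str: ...

ADAPTERS: dict[str, ProviderAdapter] = {}   # populated at startup, one per SDK

def call_provider(tenant_cfg, vault, prompt: str) -> str:
    # Decrypt the key for the duration of this call only; the plaintext
    # reference goes out of scope with the request frame.
    api_key = vault.decrypt_key(tenant_cfg.tenant_id)   # hypothetical vault API
    adapter = ADAPTERS[tenant_cfg.provider]             # routing = one config field
    return adapter.complete(api_key, tenant_cfg.model, prompt,
                            tenant_cfg.temperature, tenant_cfg.max_tokens)
```

Because every adapter satisfies the same contract, switching a tenant from one provider to another changes only the configuration record, exactly as described above.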

6
Response Validation & Content Filtering

When the AI provider returns a response, the gateway does not pass it directly to the application. The response content is validated against the tenant's content policy: checking for hallucinated regulatory citations, inappropriate content flags from the provider's own safety layer, and structural compliance with the expected response schema for the invoked feature. Under STRICT mode, responses containing unverifiable external citations, or any content flagged by the policy engine, are returned to the application as filtered: the original model output is preserved in the audit log but excluded from the user-facing result. This creates a clean separation between what the model said and what the application surface shows.
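A sketch of the citation portion of this gate, under an invented [doc:...] citation syntax and a hypothetical known_citations registry standing in for the platform's real reference index:

```python
import re

# Hypothetical citation syntax for illustration only.
CITATION = re.compile(r"\[doc:([\w.-]+)\]")

def validate_response(text: str, policy: str, known_citations: set) -> dict:
    unverifiable = set(CITATION.findall(text)) - known_citations
    if policy == "STRICT" and unverifiable:
        # "Filtered": the original output goes to the audit log only, so the
        # application never surfaces what the model actually said.
        return {"disposition": "filtered", "user_facing": None,
                "audit_payload": text, "unverifiable": sorted(unverifiable)}
    return {"disposition": "pass", "user_facing": text,
            "audit_payload": text, "unverifiable": sorted(unverifiable)}
```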

7
Tamper-Proof Audit Log Commit

After the request-response cycle completes — regardless of outcome — the gateway commits a complete audit record to the platform's immutable audit store. The record includes: timestamp (UTC), tenant ID, user ID and session token hash, the feature invoked, the AI provider and model version used, input token count, output token count, estimated cost, PII scan result, injection scan result, content policy applied, final disposition (pass/block/redact), and a SHA-256 hash of the sanitized prompt. This audit record cannot be modified or deleted by tenant administrators. It is formatted to satisfy 21 CFR Part 11 requirements for electronic records, including the ability to generate a human-readable audit trail report for regulatory inspection.
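Assembled as data, such a record might look like the sketch below. The field names track the list above; the helper signature and JSON serialization are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_audit_record(*, tenant_id, user_id, session_token, feature,
                       provider, model, tokens_in, tokens_out, cost_usd,
                       pii_result, injection_result, policy, disposition,
                       sanitized_prompt) -> str:
    """One immutable record per request-response cycle, committed to the
    append-only store regardless of outcome."""
    record = {
        "ts_utc": datetime.now(timezone.utc).isoformat(),
        "tenant_id": tenant_id,
        "user_id": user_id,
        "session_hash": hashlib.sha256(session_token.encode()).hexdigest(),
        "feature": feature,
        "provider": provider,
        "model": model,
        "tokens_in": tokens_in,
        "tokens_out": tokens_out,
        "cost_usd": round(cost_usd, 6),
        "pii_scan": pii_result,
        "injection_scan": injection_result,
        "policy": policy,
        "disposition": disposition,   # pass / block / redact
        "prompt_sha256": hashlib.sha256(sanitized_prompt.encode()).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)
```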

Platform Capabilities

Six production-grade controls that make AI deployment in regulated environments something your QA team will actually approve — not reluctantly tolerate.

🔄

Multi-Provider AI Architecture

The gateway natively supports OpenAI (GPT-4o, GPT-4 Turbo), Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku), Azure OpenAI Service, and Google Vertex AI (Gemini Pro). All four are abstracted behind a single internal API, so application features — smart linking, document summarization, semantic search — are written once and work regardless of which provider the tenant has selected. When OpenAI changes pricing or a client's security team mandates Azure-hosted models only, the switch is a configuration change, not a development project. Tenants are never locked into a single AI vendor's roadmap or pricing trajectory.

⚙️

Per-Tenant Configuration & Feature Flags

Every tenant in the platform operates under a discrete AI configuration record that controls exactly which AI features are active in their environment. A CRO with 20 sponsor clients can enable AI-assisted document linking for clients with signed AI addenda and disable it entirely for clients in jurisdictions with restrictive AI policies — without any code deployment. Configuration changes take effect at the gateway level within seconds, meaning QA teams can activate or suspend AI features in response to evolving guidance (EU AI Act, FDA's AI/ML action plan) without waiting for a software release. This is the difference between AI governance and AI hope.
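For illustration, two hypothetical tenant records showing this flag-level granularity; the schema is assumed, not DNXT's actual configuration format.

```python
# Hypothetical per-tenant configuration records: one sponsor with a signed
# AI addendum, one without. Flipping "ai_linking" requires no deployment;
# the gateway reads configuration per request.
TENANT_CONFIGS = {
    "sponsor-alpha": {
        "provider": "azure",           # security team mandates Azure-hosted models
        "content_policy": "STRICT",
        "features": {"ai_linking": True, "ai_summarization": True},
    },
    "sponsor-beta": {
        "provider": "openai",
        "content_policy": "STANDARD",
        "features": {"ai_linking": False, "ai_summarization": False},  # no AI addendum
    },
}
```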

💰

Rate Limiting & Cost Controls

AI cost overruns are a real operational risk: a poorly scoped feature that passes large regulatory documents as context can consume thousands of dollars in tokens within hours. The gateway enforces configurable rate limits (requests per minute, requests per hour) and hard cost caps (daily and monthly, per tenant) maintained in atomic counters that survive concurrent load without race conditions. When a tenant approaches their configured cap, the gateway returns a structured rate-limit response with retry-after headers rather than silently failing. Cost analytics are available in the platform admin dashboard broken down by tenant, feature, provider, and time window — giving operations teams the visibility to optimize model selection and context window usage before costs escalate.
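Conceptually, the analytics view is a group-by over per-request usage rows; a sketch with assumed field names:

```python
from collections import defaultdict

def cost_breakdown(usage_records, by=("tenant_id", "feature", "provider")):
    """Roll raw per-request usage rows up into the dashboard view described
    above. Record fields are illustrative."""
    totals = defaultdict(lambda: {"tokens": 0, "cost_usd": 0.0, "requests": 0})
    for rec in usage_records:
        key = tuple(rec[k] for k in by)
        totals[key]["tokens"] += rec["tokens_in"] + rec["tokens_out"]
        totals[key]["cost_usd"] += rec["cost_usd"]
        totals[key]["requests"] += 1
    return dict(totals)
```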

🔍

PII Detection & Automatic Redaction

Regulatory documents contain patient identifiers, subject randomization codes, investigator names, and unpublished compound identifiers that must not be transmitted to external AI providers. The gateway's PII scanner evaluates every prompt — including injected document context from platform features — using a pattern library specifically tuned for pharmaceutical and clinical trial data, including ICH E6 GCP identifiers, CIOMS form fields, and FDA Form 3500A field structures. Under STRICT policy, PII causes immediate request blocking with the detection event logged. Under STANDARD policy, identified entities are replaced with typed tokens before the external call is made, and the mapping is stored server-side for response re-hydration. All redaction events are written to the audit trail with the entity type detected, not the raw value.

🛡️

Prompt Injection Prevention

Prompt injection attacks in a regulatory context are not theoretical — a document containing adversarial text could instruct an AI to generate fabricated cross-references, override system behavior, or leak the system prompt that defines the AI's regulatory constraints. The gateway's injection scanner applies heuristic rules to detect instruction override patterns, role confusion attempts (e.g., "ignore previous instructions"), delimiter exploitation, and encoding-based evasion before the external API call is assembled. This runs as a separate scan pass from PII detection, meaning both checks complete independently and both results are recorded. In STRICT mode, any injection flag terminates the request; in STANDARD mode, the flagged content is sanitized and the incident is logged at warning severity for security review.

📋

21 CFR Part 11 Audit Trail

Every AI interaction generates an immutable audit record that satisfies the core requirements of 21 CFR Part 11 for electronic records: it is attributable (user ID, session token), legible (human-readable structured format), contemporaneous (committed within the same transaction as the request), original (SHA-256 hash of sanitized prompt), and accurate (includes model version, token counts, cost, and policy disposition). Tenant administrators can access their AI audit trail through the platform's regulatory reporting interface and export it as a structured CSV or PDF for inclusion in a quality system audit or inspection readiness package. The audit store is append-only — neither tenant administrators nor platform operators can modify or delete records — making it defensible under data integrity expectations articulated in FDA's 2018 data integrity guidance.
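Export can be pictured as flattening those append-only records into CSV; the column set below is an illustrative subset of the fields listed above, not the exact report layout.

```python
import csv
import io

def export_audit_csv(records: list[dict]) -> str:
    """Render audit records into a human-readable, inspection-ready CSV."""
    columns = ["ts_utc", "tenant_id", "user_id", "feature", "provider",
               "model", "tokens_in", "tokens_out", "cost_usd",
               "policy", "disposition", "prompt_sha256"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```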

DNXT vs The Alternative

The pharma industry's existing platforms were not designed with AI governance as a first principle. Here's what that looks like in practice when you try to bolt AI onto them.

Each capability below compares the DNXT AI Gateway against Veeva Vault RIM, EXTEDO / LORENZ (eCTDmanager · docuBridge), and internal / shadow IT (OpenAI direct or homegrown).

AI Provider Flexibility
  • DNXT AI Gateway: ✓ Multi-Provider. OpenAI, Anthropic, Azure AI, Google Vertex, configurable per tenant without code changes.
  • Veeva Vault RIM: ⚠ Proprietary Only. Veeva AI features use Salesforce Einstein infrastructure. No ability to use alternative models or maintain provider optionality.
  • EXTEDO / LORENZ: ✗ None Native. EXTEDO and LORENZ offer no AI capabilities within their submission management tools as of 2024. AI would require external integration with no governance layer.
  • Internal / Shadow IT: ⚠ Direct API. Flexible but ungoverned. Individual teams pick providers independently, creating key sprawl and no central cost visibility.

PII Detection & Redaction
  • DNXT AI Gateway: ✓ Automated. Pharma-specific PII scanner on every prompt. Redaction under STANDARD, blocking under STRICT. All events audited.
  • Veeva Vault RIM: ✗ Not Available. Vault's AI Assist features (document summarization, content generation) do not include outbound PII scanning before content is sent to Salesforce Einstein endpoints.
  • EXTEDO / LORENZ: ✗ Not Applicable. No AI layer exists to scan.
  • Internal / Shadow IT: ✗ None. Users paste content directly into model interfaces. No automated protection; relies entirely on user awareness and policy documentation.

Prompt Injection Protection
  • DNXT AI Gateway: ✓ Dedicated Scanner. Independent injection detection pass with heuristic and structural analysis. Runs before the API call on every request.
  • Veeva Vault RIM: ✗ No Evidence. Veeva has not published technical documentation on prompt injection controls for Vault AI features. Assumed to rely on OpenAI/Salesforce platform-level moderation only.
  • EXTEDO / LORENZ: ✗ Not Applicable. No AI layer exists.
  • Internal / Shadow IT: ✗ None. No scanning. Adversarial document content can override AI behavior undetected.

21 CFR Part 11 Audit Trail
  • DNXT AI Gateway: ✓ Full Compliance. Immutable, per-interaction audit record: user, timestamp, model, tokens, cost, policy result, input hash. Exportable for inspection.
  • Veeva Vault RIM: ⚠ Partial. Vault maintains audit trails for document actions, but AI interaction-level logging (what was sent to the model, what response was generated, what policy was applied) is not documented as Part 11-compliant at the AI layer.
  • EXTEDO / LORENZ: ✗ Not Available. Document audit trails exist, but no AI-specific audit infrastructure.
  • Internal / Shadow IT: ✗ None. Chat history in commercial AI tools is controlled by the vendor's retention policy, not the pharma company's. Not defensible in an inspection.

Cost Controls & Spend Caps
  • DNXT AI Gateway: ✓ Hard Caps. Per-tenant daily and monthly cost caps with rate limiting. Real-time usage analytics by feature and provider. Prevents runaway spend from any single integration.