In 2026, AI features are mandatory marketing in every GRC platform, but only a few are actually useful. SecTepe.Core has three AI building blocks that have proven effective in practice, and one clear boundary where we deliberately do not use AI.
1. Policy Generation with Contextual Refinement
From a wizard step with roughly ten questions (industry, size, cloud usage, regulatory context), an LLM generates a complete policy draft already tailored to your organization. Three properties matter:
- Templates as foundation: the LLM supplements and refines curated templates rather than generating from scratch, which significantly reduces hallucination risk.
- Compliance anchors: every paragraph is linked to the requirements it covers (ISO 27001 A.X.Y, BSI building block Z), so the auditor sees its purpose at a glance.
- Approval workflow: every draft requires human approval; nothing goes live automatically.
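The template-plus-refinement pattern can be sketched in a few lines. Everything here is an illustrative assumption, not SecTepe.Core's actual API: the function name `build_policy_prompt`, the template text, and the prompt wording are invented for this sketch.

```python
# Hypothetical template snippet -- a real template library would be curated
# per framework and far longer.
TEMPLATE = """\
1. Purpose
   This policy defines {scope} requirements for {org_name}.
2. Supplier assessments
   Suppliers are assessed {assessment_cycle}.
"""

def build_policy_prompt(answers: dict, template: str = TEMPLATE) -> str:
    """Combine wizard answers with a curated template.

    The LLM is asked to refine the template, not to write a policy from
    scratch -- this constraint is what keeps hallucination risk low.
    """
    context = "\n".join(f"- {k}: {v}" for k, v in sorted(answers.items()))
    return (
        "Refine the policy template below for this organization.\n"
        "Do NOT invent new sections; only adapt wording and fill gaps.\n"
        f"Organization context:\n{context}\n\n"
        f"Template:\n{template}"
    )

prompt = build_policy_prompt({
    "industry": "healthcare",
    "size": "250 employees",
    "cloud_usage": "Azure only",
    "regulatory_context": "ISO 27001, NIS-2",
})
```

The design choice worth noting: the instruction forbids new sections, so the model's degrees of freedom are limited to wording within a structure the compliance team already trusts.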
2. Retrieval-Augmented Audit Assistant
During an audit, the auditor asks questions like "How is your supplier security assessment documented?". The RAG assistant semantically searches your own ISMS (policies, procedures, evidence) and delivers a direct answer with source citations. Advantages over a plain LLM answer:
- Answers based on actual documents, not on training data.
- Sources are output too – auditor and compliance officer can verify.
- Cross-framework: the same question draws on a different evidence pool depending on the active framework.
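The retrieval step behind the properties above can be sketched with a toy ranking function. A real deployment would use embeddings and a vector store; the bag-of-words cosine here only stands in for semantic search, and the document names are invented examples:

```python
from collections import Counter
from math import sqrt

# Invented corpus: document ID -> indexed text.
DOCS = {
    "POL-07 Supplier Security Policy": "supplier security assessment annual questionnaire",
    "PROC-12 Onboarding Procedure": "employee onboarding laptop provisioning",
    "EV-2025-031 Supplier Audit Evidence": "supplier assessment evidence signed questionnaire 2025",
}

def _vec(text: str) -> Counter:
    # Bag-of-words stand-in for an embedding.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k best-matching document IDs -- these become the citations."""
    q = _vec(question)
    ranked = sorted(DOCS, key=lambda d: _cosine(q, _vec(DOCS[d])), reverse=True)
    return ranked[:k]

sources = retrieve("How is your supplier security assessment documented?")
```

The retrieved IDs are exactly what gets cited alongside the generated answer, which is what makes the result verifiable for auditor and compliance officer.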
3. STRIDE-Based Threat Modelling
For a new system (e.g. a new API or a new cloud service), the LLM generates a first draft of a STRIDE threat model (spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege). The architect or security engineer reviews, supplements, and discards, and in 30 minutes has what would otherwise have cost two half-day workshops.
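The review loop can be made concrete with a small data-model sketch. The `Threat` class and the stub descriptions are assumptions for illustration; in the real workflow the LLM would fill each description:

```python
from dataclasses import dataclass

# The six STRIDE categories.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

@dataclass
class Threat:
    category: str
    description: str
    status: str = "draft"  # reviewer flips this to "accepted" or "discarded"

def draft_threat_model(component: str) -> list[Threat]:
    """Produce one draft threat per STRIDE category for a component.

    Here the descriptions are placeholders; an LLM would generate them.
    Every entry starts as "draft" so nothing enters the model unreviewed.
    """
    return [Threat(c, f"{c} threat against {component} (LLM draft)") for c in STRIDE]

model = draft_threat_model("payments API")
```

The point of the `status` field is the workflow, not the data: the human decision (accept or discard per threat) stays explicit and auditable.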
Where We Deliberately Do Not Use AI
- Risk evaluation: no automatic scoring of a risk by an LLM. Risk evaluation is a deliberate management decision with acceptance responsibility, not "the AI says it's 7/10".
- Audit sign-off: no LLM may set compliance status. A human signs off; the LLM only supports the preparation.
- Incident classification: critical classifications (e.g. "NIS-2 reportable yes/no") are made by deterministic rules with a clear audit trail, not by an LLM.
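The shape of such a rule-based classification can be sketched as follows. The thresholds and field names are invented illustrations, not the actual NIS-2 criteria or SecTepe.Core's rules; what matters is the pattern of deterministic rules plus a human-readable audit trail:

```python
def classify_incident(incident: dict) -> tuple[bool, list[str]]:
    """Decide reportability by explicit rules and record why.

    Every matched rule appends a trail entry, so the decision can be
    reconstructed exactly -- something an LLM answer cannot guarantee.
    """
    trail: list[str] = []
    reportable = False
    # Illustrative rule R1: prolonged service outage.
    if incident.get("service_outage_hours", 0) >= 24:
        trail.append("rule R1: outage >= 24h -> reportable")
        reportable = True
    # Illustrative rule R2: large number of affected users.
    if incident.get("affected_users", 0) >= 100_000:
        trail.append("rule R2: >= 100k affected users -> reportable")
        reportable = True
    if not trail:
        trail.append("no rule matched -> not reportable")
    return reportable, trail

decision, audit_trail = classify_incident(
    {"service_outage_hours": 30, "affected_users": 5_000}
)
```

Because the rules are code, a regulator or auditor can read them directly; the trail doubles as the evidence record for the classification.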
Data Protection: What Happens to the Content?
In self-hosted operation, the LLM can optionally run against a local Ollama instance, so the data never leaves your own infrastructure. Anyone who wants to use GPT-4 for higher quality can enable it; all requests then run through a preconfigured OpenAI data processing agreement (DPA) with EU-region processing.
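A minimal sketch of that backend selection, assuming invented configuration keys (`LLM_BACKEND`, `OLLAMA_URL`) and a placeholder cloud URL; none of these are SecTepe.Core's actual settings. The default is the local endpoint, so the privacy-preserving path requires no opt-in:

```python
def resolve_llm_endpoint(env: dict) -> str:
    """Pick the LLM endpoint: local Ollama by default, cloud only on opt-in."""
    if env.get("LLM_BACKEND", "ollama") == "openai_eu":
        # Reached only when an operator explicitly opts in; requests would
        # go through the preconfigured EU-region setup. Placeholder URL.
        return "https://api.openai.example/eu/v1"
    # Default: local Ollama instance (11434 is Ollama's default port).
    return env.get("OLLAMA_URL", "http://localhost:11434")

endpoint = resolve_llm_endpoint({})
```

Making the local backend the default encodes the data-protection stance in code: sending content off-premises is a deliberate configuration change, never an accident.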
Realistic Expectation
AI makes compliance faster, not better. A generated policy is not "ready"; it is a qualitatively good first draft that a human with domain knowledge has to sharpen. A RAG assistant doesn't replace a compliance officer, but it turns 30-minute research tasks into 30-second answers.
Conclusion
AI in compliance workflows is valuable exactly when it is designed as an accelerator with human approval, and dangerous when it is sold as a replacement for human decisions. SecTepe.Core holds this line consistently and delivers measurable hour savings per compliance cycle where AI genuinely helps.