PROJECT.01 — MSc PROJECT // CASE STUDY

AI Teaching Assistant For LMS

A RAG-powered academic assistant delivering contextual, course-specific support within a Learning Management System.

TYPE    MSc Computing Project
ROLE    Full Stack Developer
DATE    2025 — 2026
STACK   Next.js 15, TypeScript, PostgreSQL, Ollama
STATUS  Completed

01 — THE CHALLENGE

Traditional Learning Management Systems lack intelligent, context-aware academic support. Students often struggle to find relevant information within course materials, and general-purpose AI assistants provide answers that aren't grounded in the specific curriculum. The challenge was to build a system that could answer student questions using only their module-specific learning materials while maintaining academic integrity through governance controls.

02 — THE APPROACH

The solution was architected around Retrieval-Augmented Generation (RAG), a technique that grounds AI responses in actual course documents. The pipeline comprises four stages: document upload and parsing (PDF/DOCX); automated chunking and tokenisation; vector-embedding generation using Ollama; and semantic similarity search using pgvector on Neon PostgreSQL. A governance layer detects and refuses requests that violate academic integrity, such as direct assignment completion.
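The chunking and retrieval stages of a pipeline like this can be sketched as below. This is a minimal illustration, not the project's actual code: it assumes embeddings are plain number arrays (as returned by Ollama's embeddings endpoint), and the function names and chunking parameters are illustrative.

```typescript
// Split a document into overlapping chunks of roughly `size` words.
// Overlap preserves context that would otherwise be cut at chunk boundaries.
function chunkText(text: string, size = 200, overlap = 50): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += size - overlap) {
    chunks.push(words.slice(i, i + size).join(" "));
    if (i + size >= words.length) break;
  }
  return chunks;
}

// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks by similarity to the query embedding; keep the top k.
// In the real system this ranking happens inside PostgreSQL via pgvector.
function topK(
  query: number[],
  store: { chunk: string; embedding: number[] }[],
  k = 3,
): string[] {
  return store
    .map((e) => ({ chunk: e.chunk, score: cosineSimilarity(query, e.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((e) => e.chunk);
}
```

The retrieved chunks are then passed to the model as grounding context, so answers are constrained to module-specific material.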

03 — KEY FEATURES

  • Natural language query interface with module-aware chat sessions
  • RAG pipeline using embeddings and vector similarity search for semantic document retrieval
  • Document ingestion supporting PDF and DOCX with automated chunking, tokenisation, and embedding
  • Governance layer for academic integrity — detects and blocks misuse
  • Local AI inference using Ollama for contextual academic responses
  • Session persistence and interaction history logging
  • Model abstraction layer supporting multiple AI providers
  • Research logging for evaluation and analysis
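A governance check like the one listed above might start with a pattern-based first pass before any model call. The sketch below is hypothetical: the patterns, names, and refusal message are illustrative, not the project's actual rules.

```typescript
// Hypothetical first-pass misuse patterns; a production system would
// combine these with model-based classification and human-set policy.
const REFUSAL_PATTERNS: RegExp[] = [
  /\b(write|do|complete|finish)\b.*\b(my|the)\b.*\b(assignment|coursework|essay)\b/i,
  /\banswer\b.*\bexam\b/i,
];

type GovernanceResult = { allowed: boolean; reason?: string };

// Return whether a student query passes the academic-integrity check.
function checkQuery(query: string): GovernanceResult {
  for (const pattern of REFUSAL_PATTERNS) {
    if (pattern.test(query)) {
      return {
        allowed: false,
        reason: "Request appears to ask for direct assignment completion.",
      };
    }
  }
  return { allowed: true };
}
```

Blocked queries can be logged alongside allowed ones, which also feeds the research-logging feature.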

04 — TECHNICAL STACK

Built with Next.js 15 and TypeScript for a scalable full-stack architecture. The database layer uses Neon PostgreSQL with pgvector extension for vector similarity search, managed through Drizzle ORM. Ollama handles local AI inference and embedding generation, enabling the system to run without external API dependencies. The frontend uses Tailwind CSS for responsive, accessible interfaces across all devices.
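With pgvector, similarity search runs inside PostgreSQL using its `<=>` cosine-distance operator. The sketch below shows how such a query might be assembled; the table and column names (`document_chunks`, `embedding`) are assumptions, and in the project the query would be issued through Drizzle ORM rather than built as a raw string.

```typescript
// Build a parameterised pgvector similarity query. $1 is bound to the
// query embedding; `<=>` is pgvector's cosine-distance operator, so
// 1 - distance gives a similarity score.
function buildSimilarityQuery(table: string, limit: number): string {
  return (
    `SELECT content, 1 - (embedding <=> $1) AS similarity ` +
    `FROM ${table} ORDER BY embedding <=> $1 LIMIT ${limit}`
  );
}

// pgvector expects embeddings serialised as a '[v1,v2,...]' literal.
function toVectorLiteral(embedding: number[]): string {
  return `[${embedding.join(",")}]`;
}
```

Because both the ranking and the data live in PostgreSQL, retrieval needs no separate vector database, which keeps the stack small enough to self-host alongside Ollama.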

05 — OUTCOME

The system successfully demonstrates how RAG can be applied to educational contexts, providing accurate, curriculum-grounded responses while enforcing academic integrity. The modular architecture supports future integration with existing LMS platforms, and the governance framework provides a foundation for institutionally controlled AI usage in education.