CIMPS 2025: LLM Optimization Techniques for Compliance Assessment

PhD student Victor Terrón presented his research on LLM optimization techniques for compliance assessment at CIMPS 2025 in Lima, Peru.

Victor Terrón presenting at CIMPS 2025

Research Overview

The work presents a comprehensive analysis of nine optimization techniques for large language models (LLMs) in the context of software compliance assessment:

Techniques Analyzed

  1. RAG (Retrieval-Augmented Generation) - Enhancing LLM responses with retrieved context
  2. LoRA (Low-Rank Adaptation) - Efficient fine-tuning of large models
  3. Prompt Engineering - Optimizing input prompts for better outputs
  4. Fine-tuning strategies - Domain-specific model adaptation
  5. Chain-of-thought prompting - Structured reasoning approaches
  6. And more…
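To give a flavor of the first technique, here is a minimal, self-contained sketch of the RAG idea applied to compliance questions: retrieve the most relevant clause from a small corpus, then prepend it to the prompt sent to an LLM. The clause texts, the overlap-based retriever, and the prompt template are all illustrative assumptions, not material from the presented research.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word-set tokenizer used for simple overlap scoring."""
    return set(re.findall(r"\w+", text.lower()))

# Toy corpus of compliance clauses (hypothetical examples, not real excerpts).
CLAUSES = {
    "ISO-27001 A.12.1": "Operating procedures shall be documented and made available.",
    "GDPR Art. 32": "Personal data shall be protected with appropriate technical measures.",
    "PCI-DSS 3.4": "Stored cardholder data must be rendered unreadable.",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank clauses by word overlap with the query and return the top k."""
    return sorted(
        CLAUSES.items(),
        key=lambda item: len(tokenize(query) & tokenize(item[1])),
        reverse=True,
    )[:k]

def build_prompt(query: str) -> str:
    """Augment the query with retrieved clauses before sending it to an LLM."""
    context = "\n".join(f"[{cid}] {text}" for cid, text in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer citing clause IDs."

print(build_prompt("Is stored cardholder data rendered unreadable?"))
```

In a production system, the word-overlap ranking would be replaced by dense embedding search, but the shape of the pipeline (retrieve, augment, generate) is the same.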

Key Findings

The research demonstrates how these techniques can be applied to improve the accuracy and reliability of LLMs when assessing software compliance with industry standards and regulations.

Congratulations to Victor on this excellent presentation!


Originally shared on LinkedIn



