This project will investigate how trust in generative models such as Large Language Models (LLMs) and Large Multimodal Models (LMMs) can be enhanced by integrating them with a symbolic reasoning engine that embodies subject matter expert (SME) knowledge. We will explore neurosymbolic methods, neural augmentation (model specialization), and uncertainty quantification methods to provide measures of the credibility of model predictions. The project will contribute to GAIA (Generative AI Assistant), a secure LLM/LMM integrated with symbolic logical constraints, to reason over large national security datasets.
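As a purely illustrative sketch of what a credibility measure could look like (not the project's chosen method), the snippet below implements one common uncertainty quantification idea: sample a generative model several times and convert the disagreement among its answers into a score, where low agreement flags low credibility. The names `query_model` and `credibility_score` are hypothetical placeholders, not part of GAIA.

```python
import math
from collections import Counter
from typing import Callable, List


def credibility_score(query_model: Callable[[str], str], prompt: str, n_samples: int = 10) -> float:
    """Estimate credibility of a model's answer via sampling agreement.

    Draws n_samples answers from a (stochastic) model and converts the
    normalized entropy of the answer distribution into a score in [0, 1],
    where 1.0 means all samples agree and 0.0 means maximal disagreement.
    """
    answers: List[str] = [query_model(prompt).strip().lower() for _ in range(n_samples)]
    counts = Counter(answers)
    probs = [c / n_samples for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)   # Shannon entropy of sampled answers
    max_entropy = math.log(n_samples)                # entropy if every sample differed
    return 1.0 - (entropy / max_entropy if max_entropy > 0 else 0.0)


if __name__ == "__main__":
    # Toy stand-in for an LLM: deterministic here, so all samples agree and credibility is 1.0.
    mock_model = lambda prompt: "42"
    print(credibility_score(mock_model, "What is 6 * 7?"))
```

In practice such a score would be one input among several; the neurosymbolic component could, for example, further penalize answers that violate the symbolic logical constraints encoding SME knowledge.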