Citations:
Ye, X., et al. (2024). Effective Large Language Model Adaptation for Improved Grounding and Citation Generation. arXiv preprint arXiv:2311.09533. https://arxiv.org/abs/2311.09533

Healy, K., et al. (2026). Internal Representations as Indicators of Hallucinations in Agent Tool Selection. arXiv preprint arXiv:2601.05214. https://arxiv.org/abs/2601.05214

Bai, Y., et al. (2022). Constitutional AI: Harmlessness from AI Feedback. arXiv preprint arXiv:2212.08073. https://arxiv.org/abs/2212.08073


HGModernism is always great! The narwhal Wikipedia page deep dive is an all-time favourite of mine.