Author: Just Summit Editorial Team
Source: AllianceBernstein
AI’s growing role in investment research and portfolio management brings powerful new capabilities, but also the risk of hallucinations that can distort analysis and decisions. Firms are responding by tightening how models are used: experts craft precise prompts, confine models to vetted research and market data, and require transparent citations so every claim can be traced back to a source. They are also adopting “maker-checker” style controls for AI, using multiple models and human specialists to cross-check outputs before they inform client recommendations or risk views.
Crucially, experienced teams feed model errors back into development cycles, improving robustness over time while maintaining strong governance. Some firms are even harnessing controlled hallucination as a creative tool, using it for scenario design or to fill gaps in imperfect datasets, always with human judgment as the final arbiter for capital allocation and client portfolios.