Like other AI models, foundation models come with risks. Enterprises should weigh these risks before adopting foundation models as the technology underpinning internal workflows or commercial AI applications.
Bias: A model can learn the human biases present in its training data, and those biases can trickle down into the outputs of models fine-tuned from it.
Computational costs: Even using an existing foundation model requires significant memory, advanced hardware such as graphics processing units (GPUs) and other computational resources to fine-tune, deploy and maintain it.
Data privacy and intellectual property: Foundation models might be trained on data obtained without the knowledge or consent of its owners. Exercise caution when feeding data into these models to avoid infringing on others' copyrights or exposing personally identifiable or proprietary business information.
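One practical mitigation for the privacy risk above is to redact sensitive fields before a prompt ever leaves your environment. The sketch below is illustrative only: the regex patterns and placeholder format are assumptions, and real deployments should rely on a dedicated PII-detection library or service rather than hand-written patterns.

```python
import re

# Illustrative patterns only; these regexes are assumptions,
# not a complete or standard PII taxonomy.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a labeled placeholder before the
    text is sent to a third-party model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."
print(redact(prompt))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED], SSN [SSN REDACTED].
```

Redacting at the boundary means the sensitive values never reach the model provider's logs or training pipelines, regardless of that provider's own data-handling policies.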
Environmental toll: Training and running large-scale foundation models involves energy-intensive computations that contribute to increased carbon emissions and water consumption.
Hallucinations: Foundation models can confidently generate plausible but false statements, so verifying their outputs for factual accuracy is essential.
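One lightweight verification tactic for the hallucination risk above is to check whether a model's statements are supported by a trusted source document. The sketch below uses a naive word-overlap heuristic; the threshold, tokenization, and example text are all assumptions, and production systems pair retrieval with human or model-based fact-checking.

```python
import re

def content_words(text: str):
    """Crude content-word extraction: lowercase alphanumeric tokens
    longer than three characters (a simplifying assumption)."""
    return [w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3]

def unsupported_sentences(output: str, source: str, threshold: float = 0.5):
    """Flag sentences whose content words are mostly absent from the source."""
    supported = set(content_words(source))
    flagged = []
    for sentence in re.split(r"[.!?]", output):
        words = content_words(sentence)
        if words and sum(w in supported for w in words) / len(words) < threshold:
            flagged.append(sentence.strip())
    return flagged

source = "The Eiffel Tower is in Paris and was completed in 1889."
output = "The Eiffel Tower was completed in 1889. It was designed by aliens."
print(unsupported_sentences(output, source))
# -> ['It was designed by aliens']
```

Even a crude check like this can route low-confidence outputs to a human reviewer instead of letting them reach end users unverified.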