With 20.5 million customers in France, Bouygues Telecom is the third largest B2C and B2B telecom provider for mobile telephony, Internet and IPTV services. Its business model is deeply grounded in the belief that digital innovation is key to elevating customer experience and driving business growth. To drive sustained growth, the company set its ambition on accelerating the use of artificial intelligence (AI) for decision support across several processes and enterprise operations.

 

Setting the stage

With this vision, Bouygues Telecom strove to establish an AI culture across the company by democratizing AI in all business units and facilitating the scaling up of AI-related projects. Bouygues Telecom partnered with IBM in 2019 to accelerate the pace of innovation with AI. The primary goal of the collaboration was to co-create and co-develop enterprise AI capabilities leveraging cloud-native AI apps. After the initial AI innovation phase, Bouygues Telecom pivoted its focus to driving business value creation at scale by tapping into the full potential of AI.

 

Scaling AI for value creation

Through its strategic partnership with IBM, Bouygues Telecom established the right platform and technology foundation to operationalize and scale AI across hyper marketing, financial flow control, prevention, lead triage and qualification, intelligent routing, contract management and logistics analysis. As the company progressed through its AI transformation, it defined the following set of imperatives to enable the enterprise journey:

  • Establish key capabilities required for enterprise-grade AI
  • Quickly experiment with and pilot AI solutions to drive business value creation at scale
  • Accelerate time-to-value with AI while minimizing operational risk
  • Provide an open, scalable, cost-efficient, secure infrastructure and platform for AI
  • Upskill AI talent across both IT and business

 

Navigating the challenges

About 90 percent of companies have difficulty scaling AI across their enterprises, and data is a key reason why AI expansion fails. Bouygues Telecom is one of the bold leaders developing an AI vision, and its thoughtful approach, from experimentation through engineering, ensured a successful implementation of AI at scale.

To accelerate its data and AI architecture, Bouygues Telecom opted to migrate to a cloud infrastructure. In the process, the company faced a significant set of challenges: a complex data landscape, a proof-of-concept approach to AI that was not aligned with IT standards, and a pressing need to develop AI capabilities leveraging cloud-native AI apps from MVP to scale, with faster time to value.

IBM’s AI-at-scale service capabilities supported Bouygues Telecom’s transformation. IBM co-designed a custom data and AI reference architecture covering multiple cloud scenarios that can be extended to all AI and data projects on the AWS cloud as well as other cloud and on-premises platforms. In the implementation phase, enterprise subsystems were integrated with best-of-breed external services to deliver higher quality user experiences and outcomes. This allowed Bouygues Telecom to securely scale the use of AI and machine learning across the organization and across multiple use cases, for nearly every business application and work-process optimization: in effect, the democratization of AI. The platform provided an efficient and scalable infrastructure for training, testing and deploying integrated AI and data services while addressing the variety of data systems and standard IT process constraints. It also provided a robust, trusted and secure environment for over 10 teams to build and infuse AI solutions into business processes.

 

AI built on AWS

The AI platform structure blueprint was based on four governed machine learning environments, each in a different AWS account:

  • Application development environment, used by developers to build applications without access to production data. This environment builds artifacts by running AWS CodeBuild jobs and creating AWS CloudFormation templates
  • ML training environment for building and training machine learning models, with read-only access to production data available only from SageMaker, used to run Amazon SageMaker training pipelines and baseline processing jobs
  • Evaluation environment for testing and staging, used to deploy SageMaker endpoints and run batch transform and processing jobs
  • Production environment for serving, inference and continuous improvement with real data. Models can be deployed as serverless APIs through an Amazon API Gateway endpoint and an AWS Lambda function in front of Amazon SageMaker endpoints, as sketched after this list
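To make the serving pattern in the last bullet concrete, here is a minimal sketch of an AWS Lambda handler that fronts a SageMaker endpoint behind API Gateway, written in Python with boto3. The endpoint name, environment variable and payload format are hypothetical illustrations, not Bouygues Telecom’s actual configuration.

```python
import json
import os

import boto3

# SageMaker runtime client used to invoke the deployed model endpoint
runtime = boto3.client("sagemaker-runtime")

# Hypothetical endpoint name, injected via a Lambda environment variable
ENDPOINT_NAME = os.environ.get("SAGEMAKER_ENDPOINT", "example-prod-endpoint")


def handler(event, context):
    """Lambda handler invoked by API Gateway; forwards the request body
    to a SageMaker real-time endpoint and returns the model's prediction."""
    # With API Gateway proxy integration, the request body arrives as a string
    payload = event.get("body") or "{}"

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=payload,
    )

    prediction = response["Body"].read().decode("utf-8")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": prediction,
    }
```

Keeping the Lambda function stateless like this lets API Gateway scale request handling independently of the SageMaker endpoint’s instance fleet.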

 

MLOps Framework: The platform’s MLOps tooling helps teams orchestrate the deployment of models between the environments. Services include Amazon SageMaker, AWS CloudFormation and AWS CodePipeline. This tooling provisions infrastructure, stages the model, manages dependencies, orchestrates the multiple steps that occur when a model is called, and serves the model with robustness, scalability and high availability.
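As an illustration of what such orchestration can look like, the following sketch defines a minimal Amazon SageMaker pipeline with a single training step using the SageMaker Python SDK. The role ARN, S3 path, script name and pipeline name are hypothetical placeholders, and a real pipeline would add processing, evaluation and model-registration steps around this one.

```python
from sagemaker.inputs import TrainingInput
from sagemaker.sklearn.estimator import SKLearn
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

# Hypothetical values; replace with the account's actual role and bucket
role = "arn:aws:iam::123456789012:role/example-sagemaker-role"
train_data = ParameterString(
    name="TrainData", default_value="s3://example-bucket/train/"
)

# Estimator wrapping a training script; SageMaker provisions the compute,
# runs the job and stores the resulting model artifact in S3
estimator = SKLearn(
    entry_point="train.py",        # hypothetical training script
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    instance_count=1,
    role=role,
)

# A single training step; the pipeline definition itself is versioned and
# promoted across environments by the CI/CD tooling described above
train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(s3_data=train_data)},
)

pipeline = Pipeline(
    name="example-ml-pipeline",
    parameters=[train_data],
    steps=[train_step],
)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # kick off an execution
```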

 

DevOps Framework: DevOps practices such as continuous integration and continuous delivery let organizations deliver rapidly in a safe and reliable manner. Infrastructure automation practices, including infrastructure as code and configuration management, help keep computing resources elastic and responsive to frequent changes. AWS CodePipeline uses AWS CodeCommit for version control and AWS CodeBuild for installing dependencies.
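The sketch below shows what such a delivery pipeline could look like: it uses boto3 to define a two-stage AWS CodePipeline that pulls source from CodeCommit and hands it to a CodeBuild project. All names, ARNs and bucket names are hypothetical placeholders, not the project’s actual resources.

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Minimal two-stage pipeline: pull source from CodeCommit, then run a
# CodeBuild project that installs dependencies and builds artifacts
pipeline_definition = {
    "name": "example-ml-delivery-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/example-codepipeline-role",
    "artifactStore": {"type": "S3", "location": "example-artifact-bucket"},
    "stages": [
        {
            "name": "Source",
            "actions": [
                {
                    "name": "CodeCommitSource",
                    "actionTypeId": {
                        "category": "Source",
                        "owner": "AWS",
                        "provider": "CodeCommit",
                        "version": "1",
                    },
                    "configuration": {
                        "RepositoryName": "example-ml-repo",
                        "BranchName": "main",
                    },
                    "outputArtifacts": [{"name": "SourceOutput"}],
                }
            ],
        },
        {
            "name": "Build",
            "actions": [
                {
                    "name": "CodeBuildStep",
                    "actionTypeId": {
                        "category": "Build",
                        "owner": "AWS",
                        "provider": "CodeBuild",
                        "version": "1",
                    },
                    "configuration": {"ProjectName": "example-ml-build"},
                    "inputArtifacts": [{"name": "SourceOutput"}],
                }
            ],
        },
    ],
}

codepipeline.create_pipeline(pipeline=pipeline_definition)
```

Defining the pipeline in code rather than through the console keeps it reviewable and reproducible, in line with the infrastructure-as-code practice above.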

 

Production Environment: The Prod-Train and Prod-Run environments are equipped with tooling to monitor important production model metrics and to alert when models may need to be retrained. The tooling also accelerates retraining by automatically capturing and selecting new training data into the Prod-Train environment. Amazon SageMaker collects training metrics and real-time inference data from the endpoints using Amazon CloudWatch, which collects raw data and processes it into readable, near-real-time metrics. Alarms are configured against thresholds and send notifications or take actions when those thresholds are breached. The monitoring solution leverages SageMaker Model Monitor for data quality and model quality monitoring in training and for bias drift in production. It also uses SageMaker Clarify explainability monitoring, which includes a scalable and efficient implementation of SHAP.
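As one concrete example of this alerting pattern, the sketch below uses boto3 to configure a CloudWatch alarm on a SageMaker endpoint’s ModelLatency metric. The endpoint name, variant, threshold and SNS topic are hypothetical placeholders chosen for illustration.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on average model latency for a SageMaker endpoint; notifications
# are published to an SNS topic when the threshold is breached
cloudwatch.put_metric_alarm(
    AlarmName="example-endpoint-high-latency",
    Namespace="AWS/SageMaker",
    MetricName="ModelLatency",
    Dimensions=[
        {"Name": "EndpointName", "Value": "example-prod-endpoint"},
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    Statistic="Average",
    Period=300,               # evaluate in 5-minute windows
    EvaluationPeriods=3,      # three consecutive breaches trigger the alarm
    Threshold=500000.0,       # ModelLatency is reported in microseconds
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:example-ml-alerts"],
)
```

The same pattern applies to data-quality and drift metrics emitted by SageMaker Model Monitor, which publishes its violation metrics to CloudWatch as well.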

 

Accelerated time-to-value

While many peer organizations take about a year to scale AI from concept to production, Bouygues Telecom completed the effort in just four months. In June 2021, the first version of the production-ready AI platform was released on AWS, allowing the deployment of four cloud-native AI projects across the following business processes:

  • B2B incoming lead triage
  • Smart alerts for data consumption
  • Contracts analyzer for procurement
  • Invoice validation for finance

The new AI platform enabled Bouygues Telecom to:

  • Experiment more easily, by simplifying data access through local data governance, reducing data-availability lead time to only a few days using the Datalake Cloud, and providing a catalog of ready-to-use cloud AI tools
  • Reach production in minimal time, through simple and compliant deployment integrated with the CI/CD (continuous integration and continuous deployment) chain in MLOps and with IT applications, with the aim of rolling out a multi-cloud IS strategy in 2022

 

For more information about IBM services for AI, please visit https://www.ibm.com/services/artificial-intelligence.

