With 20.5 million customers in France, Bouygues Telecom is the third-largest B2C and B2B telecom provider for mobile telephony, Internet and IPTV services. Its business model is deeply grounded in the belief that digital innovation is key to elevating customer experience and driving business growth. To sustain that growth, the company set out to accelerate the use of artificial intelligence (AI) for decision support across enterprise processes and operations.

 

Setting the stage

With this vision, Bouygues Telecom strove to establish an AI culture across the company by democratizing AI in all business units and facilitating the scaling up of AI-related projects. Bouygues Telecom partnered with IBM in 2019 to accelerate the pace of innovation with AI. The primary goal of the collaboration was to co-create and co-develop enterprise AI capabilities leveraging cloud-native AI applications. After the initial AI innovation phase, Bouygues Telecom pivoted its focus to driving business value creation at scale by tapping into the full potential of AI.

 

Scaling AI for value creation

Through its strategic partnership with IBM, Bouygues Telecom established the right platform and technology foundation to operationalize and scale AI across hyper marketing, financial flow control, prevention, lead triage and qualification, intelligent routing, contract management and logistics analysis. As the company progressed through its AI transformation, it defined the following set of imperatives to enable the enterprise journey:

  • Establish key capabilities required for enterprise-grade AI
  • Quickly experiment with and pilot AI solutions to drive business value creation through AI at scale
  • Accelerate time-to-value with AI while minimizing operational risk
  • Provide an open, scalable, cost-efficient, secure infrastructure and platform for AI
  • Upskill AI talent across both IT and business

 

Navigating the challenges

About 90 percent of companies have difficulty scaling AI across their enterprises, and data is a key reason why AI expansion fails. Bouygues Telecom is one of the bold leaders to develop an AI vision, and its thoughtful approach, from experimentation through engineering, ensured a successful at-scale AI implementation.

To accelerate its data and AI architecture, Bouygues Telecom opted to migrate to cloud infrastructure. Along the way, the company faced a significant set of challenges: a complex data landscape, a proof-of-concept approach to AI that was not aligned with IT standards, and a pressing need to develop cloud-native AI capabilities that could move from MVP to scale with faster time to value.

IBM’s AI at scale service capabilities supported Bouygues Telecom’s transformation. IBM co-designed a custom data and AI reference architecture covering multiple cloud scenarios that can be extended to all AI and data projects on the AWS cloud as well as other cloud and on-premises platforms. In the implementation phase, enterprise subsystems were integrated with best-of-breed external services to generate higher-quality user experiences and outcomes. This allowed Bouygues Telecom to securely scale the use of AI and machine learning across the organization and across multiple use cases, for nearly every business application and work-process optimization, that is, the democratization of AI. The platform provided an efficient and scalable infrastructure for training, testing and deploying integrated AI and data services while addressing the variety of data systems and standard IT process constraints. It also provided a robust, trusted and secure environment for over 10 teams to build and infuse AI solutions into business processes.

 

AI built on AWS

The AI platform blueprint was based on four governed machine learning environments, each in a separate AWS account:

  • Application development environment, used by developers to build applications without access to production data. This environment builds artifacts by running AWS CodeBuild jobs and creating AWS CloudFormation templates
  • ML training environment, for building and training machine learning models, with read-only access to production data available only from within SageMaker, used to run Amazon SageMaker training pipelines and baseline processing jobs
  • Evaluation environment, for testing and staging by deploying SageMaker endpoints and running batch transform and processing jobs
  • Production environment, for serving, inference and continuous improvement with real data. Models can be deployed as serverless APIs through an Amazon API Gateway endpoint and an AWS Lambda function in front of Amazon SageMaker endpoints, as shown in the sketch after this list
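
To illustrate that serving pattern, here is a minimal sketch of a Lambda handler that forwards an API Gateway request to a SageMaker endpoint through boto3. The endpoint name, environment variable and JSON payload format are illustrative assumptions, not details from Bouygues Telecom's platform.

import json
import os

import boto3

# SageMaker runtime client; the endpoint name below is a hypothetical example
runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = os.environ.get("ENDPOINT_NAME", "lead-triage-prod")


def lambda_handler(event, context):
    """Forward the API Gateway request body to the SageMaker endpoint and return its prediction."""
    payload = event.get("body") or "{}"

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=payload,
    )
    prediction = json.loads(response["Body"].read())

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(prediction),
    }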

 

MLOps Framework: The platform’s MLOps tooling helps teams orchestrate the deployment of models between environments. Services included Amazon SageMaker, AWS CloudFormation and AWS CodePipeline. This allowed provisioning of infrastructure, staging the model, managing dependencies, orchestrating the multiple steps that occur when a model is called, and serving the model with robustness, scalability and high availability.
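
As a concrete sketch of this orchestration, the following example defines a one-step Amazon SageMaker pipeline with the SageMaker Python SDK. The role ARN, S3 paths, container version and pipeline name are hypothetical placeholders rather than details of the Bouygues Telecom platform.

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical execution role

# Training data location is a pipeline parameter so each run can point at fresh data
train_data = ParameterString(name="TrainDataS3Uri", default_value="s3://example-bucket/train/")

estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/models/",  # hypothetical artifact bucket
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(s3_data=train_data, content_type="text/csv")},
)

pipeline = Pipeline(
    name="bt-lead-triage-train",  # hypothetical pipeline name
    parameters=[train_data],
    steps=[train_step],
    sagemaker_session=session,
)

pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # launch a training run

In the multi-account blueprint described above, a pipeline like this would run in the ML training environment and publish its model artifact for promotion to the evaluation and production environments.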

 

DevOps Framework: DevOps practices such as continuous integration and continuous delivery let organizations deliver rapidly in a safe and reliable manner. Infrastructure automation practices, including infrastructure as code and configuration management, help keep computing resources elastic and responsive to frequent changes. The AWS CodePipeline service uses AWS CodeCommit for version control and AWS CodeBuild for installing dependencies.
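
As a small, hypothetical illustration of how such a pipeline can be driven programmatically, the boto3 calls below trigger and inspect a CodePipeline run; the pipeline name is an assumed placeholder.

import boto3

codepipeline = boto3.client("codepipeline")

# Trigger a run of the (hypothetical) pipeline that builds and deploys the model package
codepipeline.start_pipeline_execution(name="bt-ml-deploy-pipeline")

# Inspect recent executions to confirm the build and deployment stages succeeded
executions = codepipeline.list_pipeline_executions(
    pipelineName="bt-ml-deploy-pipeline",
    maxResults=5,
)
for summary in executions["pipelineExecutionSummaries"]:
    print(summary["pipelineExecutionId"], summary["status"])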

 

Production Environment: The Prod-Train and Prod-Run environments are equipped with tooling to monitor important production model metrics and to alert when models may need to be retrained. The platform also accelerates retraining by automatically capturing and selecting new training data into the Prod-Train environment. Amazon SageMaker collects training metrics and real-time inference data from the endpoints using Amazon CloudWatch, which collects raw data and processes it into readable, near real-time metrics. Alarms are configured against thresholds and send notifications or take actions when those thresholds are breached. The monitoring solution leverages SageMaker Model Monitor for data quality and model quality monitoring in training and for bias drift in production. It also uses SageMaker Clarify explainability monitoring, which includes a scalable and efficient implementation of SHAP.
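
The sketch below shows what a data-quality monitoring schedule of this kind can look like with SageMaker Model Monitor in the SageMaker Python SDK. The role ARN, S3 URIs, schedule name and endpoint name are hypothetical placeholders, not Bouygues Telecom's actual configuration.

from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical execution role

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# Profile the training data once to produce baseline statistics and constraints
monitor.suggest_baseline(
    baseline_dataset="s3://example-bucket/train/train.csv",  # hypothetical training set
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://example-bucket/monitoring/baseline/",
)

# Compare captured endpoint traffic against the baseline every hour and emit CloudWatch metrics
monitor.create_monitoring_schedule(
    monitor_schedule_name="lead-triage-data-quality",  # hypothetical schedule name
    endpoint_input="lead-triage-prod",                 # hypothetical production endpoint
    output_s3_uri="s3://example-bucket/monitoring/reports/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
    enable_cloudwatch_metrics=True,
)

The CloudWatch metrics emitted by such a schedule are what the alarms described above can be configured against; bias drift and explainability monitors from SageMaker Clarify follow the same baseline-and-schedule pattern.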

 

Accelerated time-to-value

While many peer organizations take about a year to scale AI from concept to production, Bouygues Telecom completed the effort in just four months. In June 2021, the first version of the production-ready AI platform was released on AWS, allowing the deployment of four different cloud-native AI projects across the following business processes:

  • B2B incoming lead triage
  • Smart alerts for data consumption
  • Contracts analyzer for procurement
  • Invoice validation for finance

The new AI platform enabled Bouygues Telecom to:

  • Experiment faster by simplifying data access with local data governance, reducing the time to make data available to only a few days using the Datalake Cloud, and providing a catalog of ready-to-use cloud AI tools
  • Reach production in minimal time through simple, compliant deployment integrated with the MLOps CI/CD (continuous integration and continuous deployment) chain and with IT applications, with the aim of rolling out a multi-cloud IS strategy in 2022

 

For more information about IBM services for AI, please visit https://www.ibm.com/services/artificial-intelligence.
