With 20.5 million customers in France, Bouygues Telecom is the third largest B2C and B2B telecom provider for mobile telephony, Internet and IPTV services. Its business model is deeply grounded in the belief that digital innovation is key to elevating customer experience and driving business growth. To sustain that growth, the company set its ambition on accelerating the use of artificial intelligence (AI) for decision support across several processes and enterprise operations.

 

Setting the stage

With this vision, Bouygues Telecom strove to establish an AI culture across the company by democratizing AI in all business units and facilitating the scaling up of AI-related projects. Bouygues Telecom partnered with IBM in 2019 to accelerate the pace of innovation with AI. The primary goal of the collaboration was to co-create and co-develop enterprise AI capabilities leveraging cloud-native AI apps. After the initial AI innovation phase, Bouygues Telecom shifted its focus to driving business value creation at scale by tapping into the full potential of AI.

 

Scaling AI for value creation

Through its strategic partnership with IBM, Bouygues Telecom established the platform and technology foundation needed to operationalize and scale AI across hyper marketing, financial flow control, prevention, lead triage and qualification, intelligent routing, contract management and logistics analysis. As the company progressed through its AI transformation, it defined the following set of imperatives to enable the enterprise journey:

  • Establish key capabilities required for enterprise-grade AI
  • Quickly experiment with and pilot AI solutions to drive business value creation at scale
  • Accelerate time-to-value with AI while minimizing operational risk
  • Provide an open, scalable, cost-efficient, secure infrastructure and platform for AI
  • Upskill AI talent across both IT and business

 

Navigating the challenges

About 90 percent of companies have difficulty scaling AI across their enterprises, and data is a key reason why AI expansion fails. Bouygues Telecom is one of the bold leaders developing an AI vision, and its thoughtful approach from experimentation through engineering ensured a successful AI-at-scale implementation.

To accelerate its data and AI architecture, Bouygues Telecom opted to migrate to a cloud infrastructure. In the process, the company faced a significant set of challenges: a complex data landscape, a proof-of-concept approach to AI that was not aligned with IT standards, and a pressing need to develop AI capabilities leveraging cloud-native AI apps from MVP to scale, with faster time to value.

IBM’s AI at scale service capabilities supported Bouygues Telecom’s transformation. IBM co-designed a custom data and AI reference architecture covering multiple cloud scenarios that can be extended to all AI and data projects on the AWS cloud as well as other cloud and on-premises platforms. In the implementation phase, enterprise subsystems were integrated with best-of-breed external services to generate higher-quality user experiences and outcomes. This allowed Bouygues Telecom to securely scale the use of AI and machine learning across the organization and across multiple use cases, optimizing nearly every business application and work process, in effect democratizing AI. The platform provided an efficient and scalable infrastructure for training, testing and deploying integrated AI and data services while addressing the variety of data systems and standard IT process constraints. It also provided a robust, trusted and secure environment for over 10 teams to build and infuse AI solutions into business processes.

 

AI built on AWS

The AI platform blueprint was based on four governed machine learning environments, each in its own AWS account:

  • Application development environment, used by developers to build applications without access to production data. This environment builds artifacts by running AWS CodeBuild jobs and creating AWS CloudFormation templates
  • ML training environments for building and training machine learning algorithms, with read-only access to production data available only through SageMaker, used to run Amazon SageMaker training pipelines and baseline processing jobs
  • Evaluation environment for testing and staging, used to deploy SageMaker endpoints and run batch transform and processing jobs
  • Production environment for serving, inference and continuous improvement with real data. Models can be deployed as serverless APIs through an Amazon API Gateway endpoint and an AWS Lambda function in front of Amazon SageMaker endpoints (a minimal sketch of this pattern follows the list)
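
The serverless inference pattern in the production environment can be illustrated with a short, hypothetical sketch: an AWS Lambda function behind Amazon API Gateway that forwards a JSON payload to a SageMaker endpoint. The endpoint name and environment variable below are illustrative assumptions, not Bouygues Telecom's actual resources.

```python
# Hypothetical Lambda handler: API Gateway proxies a JSON request to this
# function, which forwards the payload to a SageMaker real-time endpoint.
import json
import os

import boto3

# Endpoint name is illustrative; in practice it would be injected by the
# CloudFormation template that provisions the production environment.
ENDPOINT_NAME = os.environ.get("SAGEMAKER_ENDPOINT_NAME", "example-endpoint")

runtime = boto3.client("sagemaker-runtime")


def handler(event, context):
    """Invoke the SageMaker endpoint with the request body and return its prediction."""
    payload = event.get("body", "{}")
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=payload,
    )
    prediction = response["Body"].read().decode("utf-8")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prediction": json.loads(prediction)}),
    }
```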

 

MLOps Framework: The platform’s MLOps tooling helps teams orchestrate the deployment of models between the environments. Services included Amazon SageMaker, AWS CloudFormation and AWS CodePipeline. This allowed provisioning of infrastructure, staging the model, managing dependencies, orchestrating the multiple steps that occur when a model is called, and serving the model with robustness, scalability and high availability.
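
To make that promotion flow concrete, here is a minimal, hypothetical sketch of one MLOps step: deploying an AWS CloudFormation stack that stands up a SageMaker model and endpoint in a target environment. The stack name, template location, parameters and region are illustrative assumptions rather than the actual pipeline definition.

```python
# Illustrative promotion step: deploy a CloudFormation stack that stands up a
# SageMaker model and endpoint in the target environment. Stack, template and
# parameter names are assumptions, not Bouygues Telecom's actual resources.
import boto3

cloudformation = boto3.client("cloudformation", region_name="eu-west-3")


def promote_model(stack_name: str, template_url: str, model_data_url: str, image_uri: str):
    """Create the endpoint stack for one environment."""
    return cloudformation.create_stack(
        StackName=stack_name,
        TemplateURL=template_url,  # template built by CodeBuild in the dev account
        Parameters=[
            {"ParameterKey": "ModelDataUrl", "ParameterValue": model_data_url},
            {"ParameterKey": "ImageUri", "ParameterValue": image_uri},
        ],
        Capabilities=["CAPABILITY_NAMED_IAM"],  # the template creates an execution role
    )


# Example: promote an approved model artifact to the evaluation account
# promote_model(
#     "churn-model-eval",
#     "https://s3.amazonaws.com/example-bucket/endpoint-template.yaml",
#     "s3://example-bucket/models/churn/model.tar.gz",
#     "123456789012.dkr.ecr.eu-west-3.amazonaws.com/churn:latest",
# )
```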

 

DevOps Framework: DevOps practices like continuous integration and continuous delivery let organizations deliver rapidly in a safe and reliable manner. Infrastructure automation practices, including infrastructure as code and configuration management, help keep computing resources elastic and responsive to frequent changes. The AWS CodePipeline service uses AWS CodeCommit for version control and AWS CodeBuild for building and installing dependencies.
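
As a rough illustration of that CI/CD chain, the sketch below starts an AWS CodePipeline execution and reads back the status of its stages with boto3. The pipeline name and stage layout are assumptions; in the setup described above, a commit to AWS CodeCommit would normally trigger the pipeline automatically.

```python
# Minimal sketch of kicking off and checking the CI/CD chain programmatically.
# The pipeline name is hypothetical.
import boto3

codepipeline = boto3.client("codepipeline")


def release(pipeline_name: str = "ml-platform-release") -> str:
    """Start a pipeline execution (CodeCommit source -> CodeBuild -> deploy) and return its ID."""
    execution = codepipeline.start_pipeline_execution(name=pipeline_name)
    return execution["pipelineExecutionId"]


def stage_status(pipeline_name: str = "ml-platform-release"):
    """Return the latest status of each stage, e.g. Source, Build, Deploy."""
    state = codepipeline.get_pipeline_state(name=pipeline_name)
    return {
        stage["stageName"]: stage.get("latestExecution", {}).get("status")
        for stage in state["stageStates"]
    }
```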

 

Production Environment: The Prod-Train and Prod-Run environments are equipped with tooling to monitor important production model metrics and alert when models may need to be retrained. The platform also accelerates retraining by automatically capturing and selecting new training data into the Prod-Train environment. Amazon SageMaker collects training metrics and real-time inference data from the endpoints using Amazon CloudWatch, which collects raw data and processes it into readable, near real-time metrics. Alarms are configured with thresholds and send notifications or take actions when those thresholds are met. The monitoring solution leverages SageMaker Model Monitor for data quality and model quality monitoring in training and for bias drift in production. It also uses SageMaker Clarify explainability monitoring, which includes a scalable and efficient implementation of SHAP.
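
The alerting pattern described here can be sketched as a CloudWatch alarm on a model-quality metric published by SageMaker Model Monitor, notifying an Amazon SNS topic when the metric crosses a threshold. The metric namespace, endpoint, schedule and topic names, and the threshold are illustrative assumptions and should be checked against the actual monitoring schedule.

```python
# Hedged example of the alerting pattern: a CloudWatch alarm on a model-quality
# metric emitted by SageMaker Model Monitor, notifying an SNS topic so the team
# can decide whether to retrain. All names and thresholds are illustrative.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="churn-model-auc-degradation",
    # Namespace typically used by Model Monitor model-quality jobs; verify
    # against the monitoring schedule configuration (assumption).
    Namespace="aws/sagemaker/Endpoints/model-metrics",
    MetricName="auc",
    Dimensions=[
        {"Name": "Endpoint", "Value": "churn-endpoint"},
        {"Name": "MonitoringSchedule", "Value": "churn-model-quality-schedule"},
    ],
    Statistic="Average",
    Period=3600,                      # evaluate hourly monitoring results
    EvaluationPeriods=1,
    Threshold=0.75,                   # alert if AUC drops below 0.75
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:eu-west-3:123456789012:model-quality-alerts"],
)
```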

 

Accelerated time-to-value

While many peer organizations take about a year to scale AI from concept to production, Bouygues Telecom completed the effort in just four months. In June 2021, the first version of the production-ready AI platform was released on AWS, allowing the deployment of four different cloud-native AI projects across the following business processes:

  • B2B incoming lead triage
  • Smart alerts for data consumption
  • Contracts analyzer for procurement
  • Invoice validation for finance

The new AI platform enabled Bouygues Telecom to:

  • Experiment by simplifying data access with local data governance, reducing the time to make data available to just a few days using the Datalake Cloud, and equipping teams with a catalog of ready-to-use cloud AI tools
  • Minimize time to production with simple, compliant deployment through integration with the CI/CD (continuous integration and continuous deployment) chain in MLOps and with IT applications, with the aim of rolling out a multi-cloud IS strategy in 2022

 

For more information about IBM services for AI, please visit https://www.ibm.com/services/artificial-intelligence.
