Faster and better decisions with AI-driven self-serve Insights

Incedo LighthouseTM with Self-Serve AI is a cloud-based solution that is creating significant business impact in Commercial Effectiveness for clients in the pharmaceutical industry. Self-serve means empowering business users with actionable intelligence to serve their business needs by leveraging the low-code AI paradigm. This reduces dependency on data scientists and engineers, allowing business users to iterate faster on actionable decisions and to monitor their outcomes themselves.

As internal and external enterprise data continues to grow in size, frequency, and variety, classical challenges intensify: siloed information across business units, the lack of a single source of truth, unclear accountability, and data quality issues (missing data, stale data, etc.).

For IT teams that own diverse data sources, provisioning enterprise-scale data in the requisite format, quality, and frequency becomes an added workload. It also impedes meeting the ever-growing analytics needs of the various BU teams, each of which treats its own request as the priority. Think of the many dashboards floating around organizations, created at the behest of various BU teams: even when, with great effort, they are kept updated, it is still tough to extract the insights that help take direct action on critical issues and measure their impact on the ground. Different teams have different interaction patterns, workflows, and unique output requirements, making it very hard for IT to provide canned solutions in a dynamic business environment.

Self-service intelligence is therefore imperative for organizations to enable business users to make their critical decisions faster every day leveraging the true power of data.

Enablers of a self-service AI platform – Incedo LighthouseTM

Our AWS cloud-native platform, Incedo LighthouseTM, is a next-generation, AI-powered Decision Automation platform that arms business executives and decision-makers with actionable insight generation and assimilation into daily workflows. It is developed as a cloud-native solution leveraging several services and tools from AWS that make the journey of executive decision-making highly efficient at scale. Key features of the platform include:

  • Customized workflow for each user role: Incedo LighthouseTM caters to the different needs of enterprise users based on their role, addressing the specific needs of each:
    • Business Analysts: Define KPIs as business logic on the raw data, and define the inherent relationships among KPIs as a tree structure for identifying interconnected issues at a granular level.
    • Data Scientists: Develop, train, test, implement, monitor, and retrain the ML models specific to enterprise use cases on the platform, in an end-to-end model management workflow.
    • Data Engineers: Identify data quality issues and define remediation, feature extraction, and serving via online analytical processing as a connected process on the platform.
    • Business Executives: Consume the actionable insights (anomalies, root causes) auto-generated by the platform, define action recommendations, test the actions via controlled experiments, and push confirmed actions into implementation.
  • Autonomous data and model pipelines: A common pain point for business users is the slow journey from data to insight delivery and action recommendation, which can take weeks even for simple questions asked by a CXO. To address this, the process of generating insights from raw big data through to action recommendation via controlled experimentation has been made autonomous in Incedo LighthouseTM, using combined data and model pipelines that are configurable in the hands of business users.
  • Integrable with external systems: Incedo LighthouseTM can be easily integrated with multiple Systems of Record (e.g. various DBs and cloud sources) and Systems of Execution (e.g. SFDC), based on client data source mapping.
  • Functional UX: The design of Incedo LighthouseTM is intuitive and easy to use. The workflows are structured and designed in a way that makes it commonsensical for users to click and navigate to the right features to supply inputs (e.g. drafting a KPI tree, publishing the trees, training the models, etc.) and consume the outputs (e.g. anomalies, customer cohorts, experimentation results, etc.). Visualization platforms such as Tableau and PowerBI are natively integrated with Incedo LighthouseTM thereby making it a one-stop shop for insights and actions.

Incedo LighthouseTM in Action: Pharmaceutical CRO use case

In a recent deployment of Incedo LighthouseTM, the key users were the Commercial and Business Development team of a leading pharma CRO. The company had drug manufacturers as its customers. Their pain point revolved around low conversion rates, leading to lost revenue and added inefficiencies in the targeting process. A key reason was incorrect prioritization of leads from a conversion-propensity and total-lifetime-value perspective, driven mainly by manual, human-judgment-based, ad hoc, static, rule-based identification of leads for the Business Development Associates (BDAs) to work on.

Specific challenges that came in the way of the application of data science for lead generation and targeting were:

  • The raw data related to the prospects – the foundation for predictive lead generation modeling – was in silos inside the client’s tech infrastructure. In the absence of a common platform to bring the data and models together, high-accuracy predictive lead generation models could not be developed.
  • Even in the few exceptional cases where the data was stitched together by hand and predictive models were built, the team found it difficult to keep the models updated in the absence of integrated data and model pipelines working in tandem.

To overcome these challenges, the Incedo LighthouseTM platform was deployed. The deployment of Incedo LighthouseTM in the AWS cloud environment not only brought real improvements in target conversions but also helped transform the workflow for the BDAs. By harnessing the power of data and AI, and by leveraging essential AWS native services, we achieved efficient deployments and sustained service improvements. Specifically, the deployment enabled the team to:

  • Combine the information from all data sources for a 360-degree customer view, enabling the BDAs to look at the bigger picture effortlessly. To do so effectively, Incedo LighthouseTM leveraged AWS Glue which provided a cost-effective, user-friendly data integration service. It helped in seamlessly connecting to various data sources, organizing data in a central catalog, and easily managing data pipeline tasks for loading data into a data lake.
  • Develop and deploy AI/ML predictive models for conversion propensity using the Data Science Workbench, part of the Incedo LighthouseTM platform, after building data engineering pipelines that create a ‘single version of the truth’ every time raw data is refreshed. Pre-built model accelerators helped the BDAs sort prospects in descending order of conversion propensity, thereby maximizing the return on the time invested in developing them (a minimal sketch of this ranking step follows the list). The Data Science Workbench also helps operationalize the various ML models built in the process, while connecting model outputs to KPI Trees and powering other custom visualizations. Using Amazon SageMaker Canvas, Incedo LighthouseTM enables machine learning model creation for non-technical users, offering access to pre-built models and enabling self-service insights without extensive technical expertise.
  • Deliver key insights in a targeted and attention-driving manner, enabling BDAs to make the most of the information in a short span of time. Incedo LighthouseTM leverages Amazon QuickSight, a key element in delivering targeted insights, which provides well-designed dashboards, KPI Trees, and intuitive drill-downs to help BDAs and other users act on the information quickly. Leads can be ranked by model-reported conversion propensity, time-based priority, and custom filters such as geography and area of expertise. BDAs can double-click on individual targets to understand deviations from expectations, review comments from previous BDAs, and decide on the next best actions. QuickSight offers cost-effective, scalable BI, interactive dashboards, and natural language queries for a comprehensive and efficient user experience. The result was an increased prospect conversion rate, driven by data-driven, AI-powered decisions delivered to BDAs in a highly action-oriented way.
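
The ranking step can be illustrated with a minimal, hedged sketch: a generic scikit-learn classifier stands in for the platform's pre-built model accelerators, and the column names (engagement_score, pipeline_value, converted, prospect_id) and file name are assumptions, not the actual client data model.

```python
# Illustrative sketch only: ranking prospects by predicted conversion propensity.
# Column names and the model choice are assumptions, not the Incedo Lighthouse accelerators.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

leads = pd.read_csv("prospects_360.csv")          # hypothetical Customer-360 extract
features = ["engagement_score", "past_study_count", "pipeline_value", "therapy_area_fit"]
X_train, X_test, y_train, y_test = train_test_split(
    leads[features], leads["converted"], test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Score every open prospect and hand BDAs a list sorted by conversion propensity.
leads["conversion_propensity"] = model.predict_proba(leads[features])[:, 1]
ranked = leads.sort_values("conversion_propensity", ascending=False)
print(ranked[["prospect_id", "conversion_propensity"]].head(10))
```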

The financial services industry has undergone immense disruption in recent years, with fintech innovators and digital giants eroding the market share of traditional banking institutions. These new players are championing enhanced customer experiences through personalization as a common strategy to drive user adoption and build market share. In this cloud-centric narrative, we’ll explore the value of personalization in retail banking and how AWS services can be leveraged to empower this transformation with Incedo LighthouseTM.

The Value of Personalization for Retail Banking

The adoption of personalization strategies has become a central focus for banks to enhance customer experience and deliver significant business impact. This includes:

  1. Building a Growth Engine for New Markets and Customer Segments
    New-age fintech companies have leveraged data-driven products for expedited underwriting, utilizing data like bureau scores, digital KPIs, and social media insights to assess a prospect’s creditworthiness. Traditional banks must swiftly adopt similar data-driven approaches for faster loan fulfillment and to attract new customer segments. AWS cloud services can facilitate this transition at speed by offering scalable, flexible, and secure infrastructure.
  2. Maximizing Customer Lifetime Value
    To maximize the share of customers’ wallets, banks are now focusing on improved cross-selling, superior servicing, and higher retention rates. Next-generation banks are employing AI-driven, next-best-action recommendations to determine which products to offer at the right time and through the most effective channels. This shift involves transitioning from reactive retention strategies to proactive, data-driven personalized approaches.
  3. Improved Risk Assessment and Mitigation Controls
    Personalization is not confined to marketing; it extends to risk management, fraud detection, anti-money laundering, and other control processes. Utilizing sophisticated AI models for risk, fraud, and AML detection, combined with real-time implementation, is crucial to establishing robust risk defense mechanisms against fraudsters.

The impact of personalization in retail banking is transformative, with opportunities to enhance experiences across all customer touchpoints. Incedo’s deployment of solutions for banking and fintech clients showcases several use cases and potential opportunities within the cloud-based landscape.

[Figure: Incedo personalization solution]

Building a Personalization Engagement Engine with AWS

A successful personalized engagement engine necessitates integrated capabilities across Data, AI/ML, and Digital Experiences, all hosted on the AWS cloud. The journey begins with establishing a robust data foundation and a strategy to enable a 360-degree view of the customer:

  1. Data Foundation to Support Decision Automation
    Traditional banks often struggle to consolidate a holistic customer profile encompassing product preferences, lifestyle behavior, transactional patterns, purchase history, preferred channels, and digital engagement. This demands a comprehensive data strategy, which, given the extensive storage requirements, may require building modern digital platforms on AWS Cloud. AWS services and tools facilitate various stages of setting up this platform, including data ingestion, storage, and transformation.
    AWS Glue: The large volumes of frequently updated data in various source systems that the ML models need are brought into common analytical storage (data mart, data warehouse, data lake, etc.) using AWS Glue jobs, which implement ETL or ELT logic along with value-add processes such as data quality checks and remediation.
  2. AI, ML, and Analytics-Enabled Decisioning
    Identifying the right product or service to offer customers at the ideal time and through the preferred channel is pivotal to delivering an optimal experience. This is achieved through AI and ML models built on historical data. AWS offers services like Amazon SageMaker to develop predictive models and gain deeper insights into customer behavior.
    Amazon SageMaker: The brain of personalization lies in the machine learning models that take advantage of the wealth of customer-level data across demographic, behavioral, and transactional dimensions to develop insights and recommendations that significantly enhance the customer experience, as explained in the use cases above (a minimal training-job sketch follows this list).
  3. Optimal Digital Experience
    Personalization goes beyond data; it requires the right creatives and effective communication to drive customer engagement. AWS services for data integration and analytics enable A/B testing of digital experiences, ensuring the creation of best-in-class customer journeys. While data, AI, and digital experiences are the core building blocks of a personalized engagement layer, the orchestration and integration of these capabilities are essential for banks to realize the full potential of personalization initiatives. Building these capabilities from scratch can be time-consuming, but the AWS Cloud provides the scalability and flexibility required for such endeavors.
    AWS Compute: AWS provides scalable and flexible computing resources for running applications and workloads in the cloud. AWS Compute services allow companies to provision virtual servers, containers, and serverless functions based on the application’s requirements, with a pay-for-what-you-use model that makes it a cost-effective and scalable solution. Key compute services used in Incedo LighthouseTM are Amazon EC2 (Elastic Compute Cloud), AWS Lambda, Amazon ECS (Elastic Container Service), and Amazon EKS (Elastic Kubernetes Service).
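
As a minimal, hedged sketch of the decisioning step (not the actual implementation), the SageMaker Python SDK can train and deploy a propensity model; the entry-point script name, S3 path, and IAM role below are placeholders.

```python
# Minimal sketch of training and deploying a propensity model with the SageMaker
# Python SDK. The entry-point script, S3 path, and IAM role are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role

estimator = SKLearn(
    entry_point="train_next_best_action.py",   # hypothetical training script
    framework_version="1.2-1",
    instance_type="ml.m5.xlarge",
    role=role,
    sagemaker_session=session,
)

# Train on the Customer-360 features prepared upstream (placeholder S3 path),
# then expose the model behind a real-time endpoint for the engagement layer to call.
estimator.fit({"train": "s3://example-bucket/customer360/train/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```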

Turning Personalization into Reality with Incedo LighthouseTM and AWS

Building personalization capabilities is just the first step; embedding personalization recommendations into enterprise workflows is equally critical. This integration ensures that personalized experiences are not just theoretical but are actively implemented and drive customer engagement.

value-potential-personalization

Incedo’s LighthouseTM solution for CX personalization accelerates the journey, offering an enterprise-grade solution that significantly reduces time-to-market for data/AI-enabled marketing personalization. It automates AI/ML-enabled decisions from data analysis to customer outreach, ensuring personalized offerings are delivered to customers at the right time. Incedo’s solution includes a prebuilt library of Customer 360-degree data lakes and AI/ML models for rapid implementation, supported by digital command centers to facilitate omnichannel engagement.

No matter where banking clients are in their personalization journey, Incedo’s solution ensures that they experience tangible benefits within weeks, not years. The implementation is complemented by a personalization roadmap that empowers organizations to build in-house personalization capabilities.

In the fast-paced world of banking, personalization is essential for acquiring new customers, maximizing their value, and retaining the best customers. Trust, combined with personalization capabilities, ensures traditional banks maintain their competitive edge against fintech players and digital giants.

In the ever-evolving landscape of financial services, personalization powered by AWS offers banks a strategic advantage in acquiring and retaining customers. Incedo’s LighthouseTM solution, hosted on AWS Cloud, enables rapid implementation and ensures that banks can quickly harness the benefits of personalization. This approach is not just a trend but a necessity for banks looking to stay competitive and provide a superior banking experience.

Complexity of decision making in the VUCA world

In today’s VUCA (Volatile, Uncertain, Complex, and Ambiguous) business environment, decision-makers are increasingly required to make decisions at speed in a dynamic, ever-evolving, and uncertain environment. Contextual knowledge, including cognizance of dynamic external factors, is critical, and decisions need to be made iteratively with a ‘test and learn’ mindset. This can be effectively achieved through Decision Automation solutions that leverage AI and ML to augment the expert human-driven decision-making process.

Incedo LighthouseTM for Automated Decision Making

Incedo LighthouseTM, an AWS cloud-native platform, has been designed and developed from the ground up to automate the entire process of decision-making. It has been developed with the following objectives:

  1. Distill signal from noise: The right problem areas to focus on are identified by organizing KPIs into a hierarchy from lagging to leading metrics. Autonomous Monitoring and Issue Detection algorithms are then applied to identify anomalies that need to be addressed in a targeted manner, effectively surfacing the crucial problem areas the business should focus its energy on, using voluminous datasets that are updated at frequent intervals (typically daily).
  2. Leverage context: Intelligent Root Cause Analysis algorithms are applied to identify the underlying behavioral factors through specific micro-cohorts. This enables action recommendations that are tailored to specific cohorts as opposed to generic actions on broad segments.
  3. Impact feedback loop: Alternate actions are evaluated with controlled experiments to determine the most effective actions – and use that learning to iteratively improve outcomes from the decisions.

Incedo LighthouseTM is developed as a cloud-native solution leveraging several services and tools from AWS that make the process of executive decision-making highly efficient and scalable.

Incedo LighthouseTM implements a powerful structure and workflow to make the data work for you via a virtuous problem-solving cycle, aiming to deliver consistent business improvements through the automation of a 6-step functional journey, from Problem Structuring and Discovery through Performance Improvement to Impact Monitoring.

[Figure: The 6-step functional journey, from problem structuring and discovery to impact monitoring]

Step 1: Problem Structuring – What is the Problem?

In this step, the overall business objective is converted into specific problem statement(s) based on Key Performance Indicators (KPIs) that are tracked at the CXO level. The KPI Tree construct is leveraged to systematically represent problem disaggregation. This automation enhances the decision-making process by enabling a deeper understanding of the issue and its associated variables. Incedo LighthouseTM provides features that aid the KPI decomposition step, such as a KPI repository and self-serve functionality for defining the structure of KPI trees and automatically publishing them with the latest raw data. A minimal sketch of a KPI tree definition is shown below.
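
As a hedged illustration of the KPI Tree construct (the field names, formulas, and sample data are assumptions, not the platform's actual KPI repository schema), a tree can be modeled as nodes that carry business logic over raw data and decompose into child KPIs:

```python
# Illustrative sketch of a KPI tree: each node carries a business-logic formula over
# raw data and child KPIs that decompose it. Field names are assumptions only.
from dataclasses import dataclass, field
from typing import Callable, List
import pandas as pd

@dataclass
class KPINode:
    name: str
    formula: Callable[[pd.DataFrame], float]   # business logic on raw data
    children: List["KPINode"] = field(default_factory=list)

    def evaluate(self, df: pd.DataFrame, depth: int = 0) -> None:
        """Recursively compute the KPI and its decomposition for the latest data."""
        print("  " * depth + f"{self.name}: {self.formula(df):,.2f}")
        for child in self.children:
            child.evaluate(df, depth + 1)

# Decomposing a lagging CXO metric (revenue) into leading drivers (units, price).
revenue_tree = KPINode(
    name="Net Revenue",
    formula=lambda df: (df["units"] * df["unit_price"]).sum(),
    children=[
        KPINode("Units Sold", lambda df: df["units"].sum()),
        KPINode("Average Unit Price", lambda df: df["unit_price"].mean()),
    ],
)

sales = pd.DataFrame({"units": [120, 80, 95], "unit_price": [9.5, 10.0, 9.8]})
revenue_tree.evaluate(sales)
```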

Step 2: Problem Discovery – Where is the problem?

Here the objective is to attribute the anomalies observed in performance, i.e. significant deviations from the performance trend, to a set of customers, accounts, or subscribers. Incedo LighthouseTM provides features, combining rule-based and anomaly detection algorithms, that aid in identifying the most critical problem areas in the KPI trees, such as Time Series Anomaly Detection, Non-Time Series Anomaly Detection, Cohort Analyzer, and Automated Insights. A minimal sketch of time-series anomaly detection is shown below.
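
The sketch below is a generic rolling z-score illustration of time-series anomaly detection, not the platform's proprietary algorithms; the window, threshold, and sample data are assumptions.

```python
# Minimal time-series anomaly detection sketch using a rolling z-score.
import pandas as pd

def detect_anomalies(kpi: pd.Series, window: int = 28, threshold: float = 3.0) -> pd.DataFrame:
    """Flag points that deviate more than `threshold` std-devs from the rolling mean."""
    rolling_mean = kpi.rolling(window, min_periods=window // 2).mean()
    rolling_std = kpi.rolling(window, min_periods=window // 2).std()
    z_score = (kpi - rolling_mean) / rolling_std
    return pd.DataFrame({"kpi": kpi, "z_score": z_score, "is_anomaly": z_score.abs() > threshold})

# Example: a daily KPI with one obvious dip on day 41.
daily_kpi = pd.Series(
    [100] * 40 + [55] + [100] * 10,
    index=pd.date_range("2024-01-01", periods=51, freq="D"),
)
flags = detect_anomalies(daily_kpi)
print(flags[flags["is_anomaly"]])
```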

Step 3: Root Cause Analysis – Why is there a problem?

Once the problem is discovered at the required level of granularity, identifying the root causes that drive business performance becomes critical. To automate root cause identification for every new or updated data set, the Root Cause Analysis must be packaged into a set of pre-defined, pre-coded model sets that are configurable and can be fine-tuned for specific use case scenarios. Incedo LighthouseTM enables this using pre-packaged configurable model sets, whose output is presented in a format conducive to the next step, action recommendations. These model sets include Clustering, Segmentation, and Key Driver Analyzer. A minimal clustering sketch is shown below.
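
As a minimal, hedged sketch of the clustering idea (feature names, the file name, and the cluster count are illustrative assumptions, not the platform's configured model sets), the accounts behind an anomalous KPI can be grouped into micro-cohorts whose profiles hint at root causes:

```python
# Minimal root-cause sketch: cluster impacted accounts into micro-cohorts and
# profile them; large deviations from the overall mean suggest underlying drivers.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

accounts = pd.read_csv("impacted_accounts.csv")   # hypothetical extract of impacted accounts
features = ["tenure_months", "discount_pct", "support_tickets", "usage_trend"]

X = StandardScaler().fit_transform(accounts[features])
accounts["cohort"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Profile each micro-cohort against the feature set.
print(accounts.groupby("cohort")[features].mean().round(2))
```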

Step 4: Recommended Actions

However sophisticated the algorithms are, if the workflow stops at delivering insights from anomaly detection and root cause analysis, it would still be a lost cause. Why? Because the executives are not supported with recommendations to take corrective, preventive, or corroborative actions based on the insights delivered. Incedo LighthouseTM incorporates the Action Recommendation module, which enables actions to be created at each cohort (customer microsegment) level for a targeted corrective or improvement treatment based on its individual nuance. The Action Recommendation module helps define and answer, for each cohort: What the action is, Who the target of the action should be, When the action should be implemented, and the Goal of the action in terms of a KPI improvement target. A minimal sketch of this structure is shown below.
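
The What/Who/When/Goal structure of a cohort-level recommendation can be illustrated with a small, hedged sketch; the field names and example values are assumptions for illustration only.

```python
# Illustrative sketch of a cohort-level action recommendation (What/Who/When/Goal).
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionRecommendation:
    cohort_id: str          # Who: the micro-cohort the action targets
    action: str             # What: the corrective or improvement treatment
    start_date: date        # When: when the action should be implemented
    goal_kpi: str           # Goal: the KPI the action is expected to move
    goal_uplift_pct: float  # Goal: targeted improvement on that KPI

recommendation = ActionRecommendation(
    cohort_id="cohort-03-high-tickets",
    action="Proactive outreach with a dedicated support specialist",
    start_date=date(2024, 7, 1),
    goal_kpi="30-day retention rate",
    goal_uplift_pct=5.0,
)
print(recommendation)
```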

Step 5: Experimentation

Experimentation means testing alternative actions on a smaller scale and selecting the optimal action variant, the one likely to produce the highest impact when implemented at full scale. Incedo LighthouseTM has a Statistical Experimentation engine that supports business executives in making informed decisions on the actions to be undertaken. Key features of the module include: choice of experiment type from options such as A/B testing and pre vs. post, finalization of the target population, and identification of the success metrics and their targets. A minimal sketch of evaluating an A/B experiment is shown below.
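
The sketch below evaluates a simple A/B experiment with a two-proportion z-test from statsmodels; the counts are made up, and the platform's experimentation engine may use different statistical machinery.

```python
# Minimal sketch of evaluating an A/B experiment with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

# Treatment = new action variant, control = business as usual (made-up counts).
conversions = [230, 190]     # successes in treatment and control
exposed = [2000, 2000]       # cohort sizes

z_stat, p_value = proportions_ztest(conversions, exposed)
lift = conversions[0] / exposed[0] - conversions[1] / exposed[1]

print(f"absolute lift: {lift:.3%}, z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant: roll the treatment action out at full scale.")
else:
    print("Not significant: keep iterating on action variants.")
```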

Step 6: Impact Monitoring

After full-scale implementation of actions, through their seamless integration into the organization’s operating workflows, tracking their progress on an ongoing basis is critical for timely interventions. Our platform ensures that actions are not merely implemented but are continuously monitored for their impact on key performance indicators and business outcomes.

A two-way handshake is required between Incedo LighthouseTM and the System of Execution (SOE), which is used as an operations management system to continually monitor the impact of the actions on the ground. Incedo LighthouseTM covers the following activities in this step: Push Experiments/Actions, Monitor KPIs, and Experiment Summary.

Incedo LighthouseTM in AWS environment

Infrastructure to host the Incedo LighthouseTM platform plays an important role in the overall impact that the platform creates on business improvements through better, automated decision-making. Where clients are already leveraging the AWS Cloud, the Incedo LighthouseTM implementation takes advantage of the following AWS native services, which provide significant efficiencies for successive deployments and ongoing service to business users. A few of the AWS services prominently used by Incedo LighthouseTM are:

AWS Compute: AWS provides scalable and flexible compute resources for running applications and workloads in the cloud. AWS Compute services allow companies to provision virtual servers, containers, and serverless functions based on the application’s requirements, with a pay-for-what-you-use model, making it a cost-effective and scalable solution. Key compute services used in Incedo LighthouseTM are Amazon EC2 (Elastic Compute Cloud), AWS Lambda, Amazon ECS (Elastic Container Service), and Amazon EKS (Elastic Kubernetes Service).

Amazon SageMaker: Various ML models are the brains behind Incedo LighthouseTM modules such as Anomaly Detection, Cohort Analyzer, Action Recommendation, and Experimentation. All these models are developed, trained, validated, and deployed via Amazon SageMaker.

AWS Glue: The large volumes of frequently updated data in various source systems that the ML models need are brought into common analytical storage (data mart, data warehouse, data lake, etc.) using AWS Glue jobs that implement ETL or ELT logic along with value-add processes such as data quality checks and remediation. A minimal Glue job sketch is shown below.
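
The following is a minimal, hedged sketch of a Glue (PySpark) job that lands source data into the analytical store with a basic quality step; the database, table, and S3 path names are placeholders, and a real job would be configured per client source mapping.

```python
# Minimal AWS Glue (PySpark) job sketch: catalog source -> quality step -> data lake.
import sys
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog (placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="crm_raw", table_name="accounts"
)

# Value-add step: a simple data-quality remediation before loading downstream.
cleaned = DropNullFields.apply(frame=source)

# Write to the data lake in Parquet for the ML models to consume (placeholder path).
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-datalake/curated/accounts/"},
    format="parquet",
)
job.commit()
```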

Incedo LighthouseTM boosts effectiveness and efficiency of executive decision making with the power of AI. As a horizontal cloud-native platform powered by AWS, it is the key to achieving consistent business improvements across domains and use cases.

The explosion of data is a defining characteristic of the times we are living in. Billions of terabytes of data are generated every day, and millions more algorithms scour this data for patterns for our consumption. And yet, the more data we have, the harder it becomes to process it into meaningful information and insights.

With the rise of generative AI technologies, such as ChatGPT, knowledge workers are presented with new opportunities in how they process and extract insights from vast amounts of information. These models can generate human-like text, answer questions, provide explanations, and even engage in creative tasks like writing stories or composing music. This breakthrough in AI technology has opened up possibilities for knowledge workers.

Built on powerful LLMs, generative AI has taken the world by storm and led to a flurry of companies keen to build with this technology. It has indeed revolutionized the way we interact with information.

And yet, in this era of ever-increasing information overload, the ability to ask the right question has become more critical than ever before.

While the technology has evolved faster than we imagined, its potential is limited by the ways we use it. And while there is scope for immense benefit, there is also a risk of harm if users don’t exercise judgment or the right guardrails are not provided when building Gen AI applications.

As knowledge workers and technology creators, empowering ourselves and our users relies heavily on the ability to ask the right questions.

Here are three key considerations to keep in mind:

1. The Art of Framing Questions:

To harness the true potential of generative AI, knowledge workers must master the art of framing questions effectively. This involves understanding the scope of the problem, identifying key variables, and structuring queries in a way that elicits the desired information. A poorly constructed question can lead to misleading or irrelevant responses, hindering the value that generative AI can provide.

Moreover, knowledge workers should also consider the limitations of generative AI. While these models excel at generating text, they lack true comprehension and reasoning abilities. Hence, it is crucial to frame questions that play to their strengths, allowing them to provide valuable insights within their domain of expertise.

2. The Need for Precise Inquiry:

Despite the power of generative AI, it is essential to remember that these models are not flawless. They heavily rely on the input they receive, and the quality of their output is heavily influenced by the questions posed to them. Hence, the importance of asking the right question cannot be overstated.

Asking the right question is a skill that knowledge workers must cultivate to extract accurate and relevant insights from generative AI. Instead of relying solely on the model to generate information, knowledge workers need to approach it with a thoughtful mindset.

3. Collaboration between Humans and AI:

Generative AI should be viewed as a powerful tool that complements human expertise rather than replacing it. Knowledge workers must leverage their critical thinking, domain knowledge, and creativity to pose insightful questions that enable generative AI to augment their decision-making processes. The synergy between human intelligence and generative AI has the potential to unlock new levels of productivity and innovation.

Think of Gen AI as a powerful Lego block, a valuable component within the intricate structure of problem-solving. It’s not a replacement but an enhancement, designed to work in harmony with human capabilities to solve a problem.

In conclusion, in the age of generative AI, asking the right questions is fundamental. Careful framing of queries unlocks generative AI’s true power, enhancing our decision-making. Cultivating this skill and fostering human-AI collaboration empowers knowledge workers to navigate the information age and seize new growth opportunities.

A Complementary Partnership

“Data is the new currency.” The phrase has gained immense popularity in recent years as data is now a highly valuable and sought-after resource. Over time, data continues to accumulate and is becoming increasingly abundant. The focus has now shifted from acquiring data to effectively managing and protecting it. As a result, the design and structure of data systems have become a crucial area of interest, and research into the most effective methods for unlocking their potential is ongoing.

While innovation and new approaches keep coming to the fore, the strongest current ideas consist of two distinct approaches: data mesh and data fabric. Although both aim to address the challenge of managing data in a decentralized and scalable manner, they differ in their philosophy, implementation, and focus.

Data Mesh

This architectural pattern was introduced by Zhamak Dehghani for data management platforms that emphasize decentralized data ownership, discovery, and governance. It is designed to help organizations achieve data autonomy by empowering teams to take ownership of their data and providing them with the tools to manage it effectively. Data mesh enables organizations to create and discover data faster through data autonomy. This contrasts with the more prevalent monolithic, centralized approach, where data creation, discovery, and governance are the responsibility of just one or a few domain-agnostic teams. The goal of data mesh is to promote data-driven decision-making, increase transparency, break down data silos, and create a more agile and efficient data landscape while reducing the risk of data duplication.

Building Blocks of Data Mesh

[Figure: Data mesh building blocks]

Data Mesh Architecture

Since data mesh involves a decentralized form of architecture and is heavily dependent on the various domains and stakeholders, the architecture is often customized and driven as per organizational needs. The technical design of a data mesh thus becomes specific to an organization’s team structure and its technology stack. The diagram below depicts a possible data mesh architecture.

[Figure: Data mesh architecture]

It is crucial that every organization designs its own roadmap to data mesh with the conscious and collective involvement of all teams, departments, and lines of business (LoBs), each with a clear understanding of its own responsibilities in maintaining the data mesh.

[Figure: Data mesh management teams]
Data mesh is primarily an organizational approach, and that's why you can't buy a data mesh from a vendor.

Data Fabric

Data Fabric is not an application or software package; it’s an architectural pattern that brings together diverse data sources and systems, regardless of location, for enabling data discovery and consumption for a variety of purposes while enforcing data governance. A data fabric does not require a change to the ownership structure of the diverse data sets like in a data mesh. It strives to increase data velocity by overlaying an intelligent semantic fabric of discoverability, consumption, and governance on a diverse set of data sources. Data sources can include on-prem or cloud databases, warehouses, and data lakes. The common denominator in all data fabric applications is the use of a unified information architecture, which provides a holistic view of operational and analytical data for better decision-making. As a unifying management layer, data fabric provides a flexible, secure, and intelligent solution for integrating and managing disparate data sources. The goal of a data fabric is to establish a unified data layer that hides the technical intricacies and variety of the data sources it encompasses.  

Data Fabric Architecture

It is an architectural approach that simplifies data access in an organization and facilitates self-service data consumption. Ultimately, this architecture facilitates the automation of data discovery, governance, and consumption through integrated end-to-end data management capabilities. Irrespective of the target audience and mission statement, a data fabric delivers the data needed for better decision-making.


Principles of Data Fabric

[Figure: Principles of data fabric]
Parameters | Data Mesh | Data Fabric
Data Ownership | Decentralized | Agnostic
Focus | High data quality and ownership based on expertise | Accessibility and integration of data sources
Architecture | Domain-centric and customized as per organizational needs and structure | Agnostic to internal design, with an intelligent semantic layer on top of existing diverse data sources
Scalability | Designed to scale horizontally, with each team having its own scalable data product stack | Supports a unified layer across the enterprise, with the scalability of the managed semantic layer abstracted away in the implementation

Both data mesh and data fabric aim to address the challenge of managing data in a decentralized and scalable manner. The choice between the two will depend on the specific needs of the organization, such as the level of data ownership, the focus on governance or accessibility, and the desired architecture.

It is important to consider both data mesh and data fabric as potential solutions when looking to manage data in a decentralized and scalable manner.

Enhancing Data Management: The Synergy of Data Mesh and Data Fabric

A common prevailing misunderstanding is that data mesh and data fabric infrastructures are mutually exclusive, i.e., only one of the two can exist. Fortunately, that is not the case. Data mesh and data fabric can be architected to complement each other so that the strengths of both approaches are brought to the fore, to the organization’s advantage.

Organizations can implement data fabric as a semantic overlay to access data from diverse data sources while using data mesh principles to manage and govern distributed data creation at a more granular level. Thus, data mesh can be the architecture for the development of data products and act as the data source, while data fabric can be the architecture for the data platform that seamlessly integrates the different data products from the data mesh and makes them easily accessible within the organization. The combination of a data mesh and a data fabric can provide a flexible and scalable data management solution that balances accessibility and governance, enabling organizations to unlock the full potential of their data.

Data mesh and data fabric can complement each other by addressing different aspects of data management and working together to provide a comprehensive and effective data management solution.

In conclusion, both data mesh and data fabric have their own strengths but are complementary and thus can coexist synergistically. The choice between the two depends on the specific needs and goals of the organization. It’s important to carefully evaluate the trade-offs and consider the impact on the culture and operations of the organization before making a decision.

The client is one of the top 5 leading providers of technology, communications, information, and entertainment products and services, and a global leader in 5G technologies.

The client was looking for a competent modernization partner to enable effective metrics collection and analysis, providing ways to improve automated flows in network troubleshooting and trouble ticket management for its 5G network.

See how Incedo’s solution helped the client with:


Monetizing 5G services in short time


Improved delivery effectiveness through higher productivity and Sprint predictability


Significantly reduced lead time by 40% in solving customer issues

The client is one of the top 5 leading providers of technology, communications, information, and entertainment products and services, and a global leader in 5G technologies.

The client was looking to enhance customer experience on existing customer channels through service-oriented solutions for its 5G network. It wanted to sunset the onshore-heavy teams and transition the program offshore, with complete program ownership resting with the supplier partner.

See how Incedo’s solution helped the client with:


Improved delivery effectiveness through higher productivity and Sprint predictability


Significantly reduced lead time by 40% in solving customer issues


Transitioned complete program offshore and sunset the onshore teams for further cost optimization

Payer contracting is a key function for any life sciences organization to ensure its drugs have the desired market access against the competition. Over the next 5 years, the gross contracted sales market is expected to grow from $120B to $170B. This makes it critical for pharma contracting teams to stop being purely reactive, shift gears on payer contracting, define market access strategies, and seek opportunities to make payer contracting decision-making accurate and efficient.

The payer contracting lifecycle is long and complex, with multiple questions that the contracting team and other stakeholders need to answer to decide whether or not to contract. If they decide to contract, will it be financially profitable (i.e. deliver the desired GTN, a positive ROC, etc.)? Alternatively, will it be a strategic contract for the contracting team?

By answering each of these qualitative and quantitative questions, a contracting team can decide whether or not to contract with the payer for the brand in question. The answers to the qualitative questions rest entirely on the subject matter expertise of the contracting team, but the analytical questions can be answered using reporting, analytics, and AI/ML features on state-of-the-art decision engines and decision science platforms. Implementing RPA, BI (Business Intelligence), and AI across the payer contracting lifecycle can not only improve the team’s decision-making efficiency (optimize rebate payouts, improve ROC, impact pull-through) but also improve the accuracy and speed of the overall process.

In this point of view, we take a deep dive into the current payer–pharma contracting space and the key intrinsic and macro challenges pharma organizations face in it (e.g. loss of exclusivity, PBM consolidation). Finally, we lay out opportunities for pharma, showing how AI, RPA, and analytics can change the future of payer contracting strategy for a pharma organization.

About the Author

Debjit Ghosh

Director – Life Sciences Solutions, Incedo Inc.

Rebate payout is one of the biggest sources of cash outflow for any pharma organization. With ever-changing market dynamics and pressure from payers, PBMs, and states, it is critical to take a closer look at the rebate payout process so that any leakage of revenue due to inefficiencies in the rebate validation and reconciliation process can be reduced. Even a 10% improvement in reducing leakage can lead to overall cost savings of billions of dollars for the pharma industry.

In the following POV sections, we look at the pharma rebate payout process in more detail and highlight opportunities for pharma to use AI, RPA, and analytics to optimize the rebate adjudication process and, in turn, deliver cost savings for the industry.

Key Area of Impact: Pharma – Commercial

About the Author

Debjit Ghosh

Director – Life Sciences Solutions, Incedo Inc.

When the GDPR became law in Europe, American companies not doing business in Europe heaved a sigh of relief that they did not have to comply with the stringent requirements laid out in the law. However, as the Arab Spring showed us, movements have a way of spreading beyond borders. The privacy spring has now come to the USA in the form of the CCPA (California Consumer Privacy Act). CCPA provides protection to all California residents and lays down markers on how consumer data can be used by various businesses. The act gives California residents the right to know what kind of data is being collected about them and for what purpose it is being used.

In this whitepaper, Mr. Gaurav Sehgal (VP, Financial Services, Incedo) explains the impact of CCPA on the wealth management industry and, more narrowly, on wealth managers. Learn how wealth managers should approach the legislation and prepare for its implementation.

As the race for top financial advisor talent heats up, broker-dealers are increasingly setting aside higher recruiting budgets to attract top financial advisors from their competitors. In any given month, AUM north of $500 million moves from one broker-dealer to another. A financial advisor’s first interaction with the prospecting firm is the advisor onboarding process.

A good workflow-driven online onboarding process can deliver a transformational experience to advisors, and home office teams can drastically cut down affiliation cycle times. Drawing on what has worked for our clients (broker-dealers, banks, TAMPs) in implementing workflow-driven processes, this whitepaper distils the key tenets of a good onboarding system from various engagements.

Download this whitepaper to get insights into these key tenets.

The investor protection framework has been a key focus area for regulators for a long time. While the Securities and Exchange Commission’s (SEC) fiduciary rule governs financial advisors, brokers have been regulated by the Financial Industry Regulatory Authority (FINRA) Rule 2111 (Suitability). The Department of Labor (DOL) fiduciary rule tried to create a common governance framework for brokers and advisors, but the rule was revoked following stiff opposition from brokers and industry groups. In June 2019, the SEC passed Regulation Best Interest (REG BI), which establishes that broker-dealers and financial advisors need to work in the best interest of the consumer and eliminate conditions that further a firm’s interest over a client’s interest.

This white paper will try to assess how the regulations will impact the industry, brokers, and the various systems and applications.

With a focus on customer experience and a reduction of end-user pain points, our client, one of the world’s leading providers of technology, communications, information, and entertainment products and services with a presence around the world, wanted a robust system to program-manage its product development roadmap and build deep knowledge of the 5G iEN product portfolio while addressing challenges to accelerate the 5G rollout. Incedo brought its advanced AI/ML capabilities to devise long-term solutions that improved the client’s business outcomes.

Learn how our solutions helped the client with:

 

Cost Optimization and Reduction by engineering resource optimization, superior engineering delivery and engineering depth


Revenue Enhancement and Monetization through increased product innovation velocity, changing the channel mix, and establishing an analytics- and data-driven enterprise – a Digital and Analytics CoE with AI/ML capability


Improved operations efficiency by virtue of Scale and Speed of Delivery

In a world of rapidly changing customer experiences and content consumption habits, a full-service video streaming platform provider was looking to ensure the best connectivity, quality, and user experience for its product platform to maintain its competitive edge.

Incedo brought its considerable expertise in data science, analytics, cloud and design to create a specialised media lab as an incubation centre to support development, enhancement and re-engineering of the various product applications including the development of content sharing apps for mobile and OTT devices. The efforts resulted in the following:

Increased scalability and reduced functional time, cutting go-to-market time and operations cost by 50%.

Zero downtime through proactive service monitoring, reduced consumer complaints, and increased download rates for the apps; revenue maximized from day one, with OTT platform build release time reduced from 12 hours to 1 hour.

Enhanced viewer experience with smoother streaming and progressive downloads, through adaptive protocols for high-quality streaming (HLS, RTMP, MPEG-DASH).

Successfully initiated an app for Apple TV through the incubation centre.

Improved and extended device compatibility, leading to increased new client wins.

A US-based mortgage solution provider and full-service lender, serving customers for more than three decades, wanted a scalable solution that optimizes agility at minimal cost: a self-service loan solution that adapts to the changing load from concurrent users.

So when our client wanted a self-service loan solution for its end customers while reducing customer service overheads, Incedo rose to the challenge, helping them build a platform that reduced frequent downtime and eliminated capacity- and load-related issues while radically cutting processing time.

With new solutions and a platform in place, the client generated results through:

Cost saving of $1 million, spread over a three-year period, in license fees.


Agility to scale and seamlessly manage and optimize underutilized services as well as demand spikes, reducing infrastructure costs with timely reviews and reporting

Self-service portal for customers to have better control over their accounts, reducing manual intervention by the customer care team and process time, and leading to improved customer acquisition and retention and lower customer service overheads.

Intuitive, superior UX for a better customer experience.

100% availability of the loan portal, with high performance, low latency, and fast response times.

Our client, a leading biotechnology company, had a Commercial Copay Program. These copay programs are designed to address patient financial barriers to treatment, promote patient adherence to therapy, and enable prescribers to provide the most appropriate treatment option without patient financial concerns. So when the client was looking to improve adoption and utilization of its Commercial Copay Program, Incedo helped them gain insights for strategic account planning and provided solutions around program enrollment and copay utilization.

Incedo’s solution significantly improved the efficiency of identifying and matching prescribers based on prescriber demographics using fuzzy logic. This mapping was used to report accurate copay enrollments and claims metrics. After analyzing and understanding the business metrics and refining them to report relevant information, the solution resulted in the following:

Empowered field reps in new expanded roles with adoption and utilization insights through business intelligence reports to significantly increase customer acquisition. Field reps were enabled to provide proactive support to prescribers by identifying areas of low co-pay program enrollment and/or utilization.

Copay optimization for each brand and geography, leading to lower costs with the right coverage amount for each brand and insurance plan.

Increased customer engagement with better collaboration and coordination across a territory board, by identifying opportunities where copay program utilization occurs for a product but not for other products.

By introducing fuzzy logic, IDs for 26% (9,519) additional records were mapped compared to the traditional logic.


Manual intervention was reduced thanks to a fully automated Python fuzzy-match system; a minimal sketch of the approach is shown below.
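
The sketch below is a hedged illustration of fuzzy prescriber matching using the rapidfuzz library; the demographic fields, similarity threshold, and DataFrame names are assumptions, not the production matching logic.

```python
# Minimal sketch of fuzzy prescriber matching with rapidfuzz. Field names,
# weights, and the threshold are illustrative assumptions only.
import pandas as pd
from rapidfuzz import fuzz

def prescriber_key(row: pd.Series) -> str:
    """Combine demographic attributes into a single comparable string."""
    return f"{row['first_name']} {row['last_name']} {row['city']} {row['specialty']}".lower()

def match_prescribers(claims: pd.DataFrame, master: pd.DataFrame, threshold: int = 88) -> pd.DataFrame:
    """Map each claims record to the best master prescriber above a similarity threshold."""
    master_keys = master.apply(prescriber_key, axis=1)
    matches = []
    for _, claim in claims.iterrows():
        key = prescriber_key(claim)
        scores = master_keys.map(lambda m: fuzz.token_sort_ratio(key, m))
        best = scores.idxmax()
        matches.append(master.loc[best, "prescriber_id"] if scores[best] >= threshold else None)
    claims = claims.copy()
    claims["matched_prescriber_id"] = matches
    return claims

# claims_df / master_df would come from copay claims and the prescriber master file.
```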

The COVID-19 pandemic has significantly changed the way financial advisors manage their practices, clients, and home office communication. Along with data-driven client servicing platforms, smooth transitions, and good compensation, advisors are closely evaluating their firm’s digital quotient and its ability to provide service and support in times of such crisis; if not satisfied, they may look at options for switching affiliation during or after this crisis.

The pandemic is not the trigger; rather, it has given advisors additional reasons to keep looking for a firm that better fits their pursuit of growth and better client service.

The 2018 Fidelity Advisor Movement Study says that 56% of advisors have either switched or considered switching from their existing firms over the last 5 years. financial-planning.com reports that one fifth of advisors are aged 65 or above and that, in total, around 40% of advisors may retire over the next decade.

[Figure: Advisor movement study]

A Cerulli report anticipates the transition of almost $70 trillion from baby boomers to Gen X, Gen Y, and charities over the next 25 years. Soon, the reduction in the advisor workforce will create a big advice gap that wealth management firms will have to bridge by acquiring and retaining the right set of advisors.

[Figure: Expected wealth transfer]

We are observing a changing landscape of advisor and client populations, mounting cost pressure due to zero-commission fees, and the need for scalable operations. COVID-19 has further accentuated the need for firms to better understand the causal factors behind changes in advisor affiliation and to optimize the resources deployed for engagement across the advisor lifecycle. Wealth management firms are increasingly realising that a one-size-fits-all solution may not deliver optimal returns.

Data and analytics can help firms segment their advisors better and drive better results throughout the advisor lifecycle. Advisor personalization, using specific data attributes, can significantly improve results by dynamically curating contextual, targeted, and personalized engagements across the advisor lifecycle.

A good data-driven advisor engagement framework defines and measures key KPIs for each stage of the advisor lifecycle; it not only provides insights on key business metrics but also addresses the “so what” question about those insights. As wealth management firms collect and aggregate data from multiple sources, they are also increasingly using AI/ML-based models to further refine advisor servicing.

Let us look at the key goals and business metrics for each stage of the advisor lifecycle below, and see how a data- and analytics-driven approach helps at each stage.

[Figure: Key goals and business metrics across the advisor lifecycle]

Prospecting & Acquisition

To attract and convert more high producing advisors, recruitment teams should be tracking key parameters through the prospecting journey of the advisors so that they can identify:

  • What is the source of most of their prospective advisors: RIAs, wirehouses, or other BDs?
  • Which competitors are consistently attracting high-producing advisors?
  • What % of advisors drop from one funnel stage to another and finally affiliate with the firm?
  • What are the common patterns and characteristics among the recruited advisors?

A data-driven advisor recruitment process that relies on a feedback loop helps in the early identification of potential converts, thereby balancing the effort spent on recruited vs. lost advisors. It also improves the volume and quality of the recruited assets.

For example, analysis of one year of recruitment data for a large wealth management firm revealed that prospects dealing with variable insurance did not eventually join the firm, due to the firm’s restricted approved product list. Another insight revealed that prospects with a higher proportion of fee revenue versus brokerage revenue increased their GDC and AUM at a much faster rate after one year of affiliation. Our machine learning lead scoring model used multiple such parameters to score a recruit’s joining probability and one-year relationship value, helping the firm precision-target high-value advisors (a minimal sketch of this kind of scoring is shown below). These insights allowed the firm to narrow down its target segment of advisors and improved conversion of high-value advisors.
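
The sketch below is a hedged illustration of such a lead-scoring model: joining probability multiplied by expected first-year relationship value. The feature and file names are assumptions based on the parameters described above, not the firm's actual data model.

```python
# Illustrative recruit lead-scoring sketch: P(join) x expected 1-year value.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingRegressor

recruits = pd.read_csv("recruitment_history.csv")   # hypothetical one-year recruitment data
features = ["fee_revenue_pct", "ttm_gdc", "aum", "variable_insurance_mix", "approved_product_overlap"]

# Model 1: probability that a prospect joins the firm.
join_model = LogisticRegression(max_iter=1000).fit(recruits[features], recruits["joined"])

# Model 2: expected 1-year relationship value, trained on recruits who joined.
joined = recruits[recruits["joined"] == 1]
value_model = GradientBoostingRegressor().fit(joined[features], joined["year1_value"])

# Blend the two into a single precision-targeting score for open prospects.
prospects = pd.read_csv("open_prospects.csv")       # hypothetical prospect pipeline
prospects["score"] = (
    join_model.predict_proba(prospects[features])[:, 1]
    * value_model.predict(prospects[features])
)
print(prospects.sort_values("score", ascending=False).head(10))
```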

Growth & Expansion

A lot of focus during the growth phase of the advisor lifecycle is on tracking business metrics such as TTM GDC, AUM growth, and commission vs. fee splits. These metrics, however, have now become table stakes, and advisors expect their firms to provide more meaningful insights and recommendations to improve their practices. Some of the ways firms are using data to enhance advisor practices are by:

  • Using data from data aggregators and providing insights on advisor’s wallet share and potential investment opportunities
  • Providing peer performance comparisons to the advisors
  • Providing next best action recommendations based on the advisor and client activities

For example, our recommendation engine analysed advisor portfolios and trading patterns and determined that most high-performing advisors showed similar patterns in investment distribution, asset concentration, and churning %. This enabled the engine to provide targeted investment recommendations for other advisors based on their current investment basket and client risk profile (a minimal similarity-based sketch is shown below). Wealth management firms are also using advisor segmentation and personalization models, based on clients, investment patterns, performance, digital engagement, and content preferences, to send personalized marketing and research content matched to advisor personas, thus driving better engagement.
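
As a hedged sketch of the similarity idea (not the actual engine; the feature names and file name are assumptions), each advisor's portfolio profile can be compared with the centroid of high-performing advisors, with the largest gaps surfaced as recommendation talking points:

```python
# Minimal similarity-based recommendation sketch: compare advisors to the
# high-performer centroid and surface the biggest feature gap for each.
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

advisors = pd.read_csv("advisor_profiles.csv")      # hypothetical portfolio/trading features
features = ["equity_pct", "fixed_income_pct", "alternatives_pct", "asset_concentration", "churn_pct"]

# Centroid of advisors flagged as high performers.
top_profile = advisors.loc[advisors["is_high_performer"] == 1, features].mean().to_frame().T

advisors["similarity_to_top"] = cosine_similarity(advisors[features], top_profile)[:, 0]

# For the least-similar advisors, the largest feature gap becomes the talking point.
gaps = advisors[features] - top_profile.values
advisors["top_gap_feature"] = gaps.abs().idxmax(axis=1)
print(advisors.sort_values("similarity_to_top").head(10)[["advisor_id", "similarity_to_top", "top_gap_feature"]])
```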

Maturity and Retention

It is always more difficult and costly to acquire new advisors than to grow with the existing advisor base. Firms pay extra attention to ensuring that their top producers’ needs are always met. Yet despite their best efforts, large offices leave their current firms for greener pastures or higher pay-outs. Firms run periodic NPS surveys with their advisor population, which indicate overall satisfaction levels but do not generate insights for proactive attrition prevention. Data and analytics can help identify patterns that predict advisor disengagement and enable targeted, proactive interventions.

For example, our attrition analysis study for a leading wealth manager indicated that a large portion of advisors over the age of 60 were leaving the firm and selling their books of business. This enabled the firm to proactively target succession planning programs at this age demographic. Our analysis also indicated a clear pattern of decreased engagement with the firm’s digital properties and a decreasing mail open rate for the advisors leaving the firm. Based on factors such as age, length of association with the firm, digital engagement trends, and outlier detection, our ML-based attrition propensity model created attrition risk scores for advisors and enabled retention teams to proactively engage with at-risk advisors and improve retention (a minimal sketch of such a model is shown below).
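
The sketch below is a hedged illustration of an attrition-propensity model built from factors like those described above; the column names, file name, and model choice are assumptions, not the production model.

```python
# Illustrative attrition-propensity sketch: score each advisor's risk of leaving.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

advisors = pd.read_csv("advisor_engagement.csv")    # hypothetical engagement history
features = ["age", "tenure_years", "digital_logins_90d", "mail_open_rate", "gdc_trend"]

X_train, X_test, y_train, y_test = train_test_split(
    advisors[features], advisors["attrited"], test_size=0.2, random_state=7
)
model = GradientBoostingClassifier(random_state=7).fit(X_train, y_train)
print("validation AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))

# Publish risk scores so retention teams can prioritise proactive outreach.
advisors["attrition_risk"] = model.predict_proba(advisors[features])[:, 1]
at_risk = advisors.sort_values("attrition_risk", ascending=False).head(25)
print(at_risk[["advisor_id", "attrition_risk"]])
```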

As per a study from JD Power, wealth management firms have been making huge investments in new advisor workstation technologies designed to aggregate market data, client information, account servicing tools, and AI-powered analytics into a single interface. While the firms are investing heavily in technology, only 48% of advisors find the technology their firm is currently using to be valuable. Although only 9% of advisors use AI tools, advisor satisfaction is 95 points higher on a 1,000-point scale when they do. Advisors perceive a disconnect between the technology and the value derived from it.

This underscores the need for personalized solutions for advisors and for an AI-driven advisor personalization platform that provides curated insights to firms. Such a platform enables targeted, personalized services and support across the advisor lifecycle, optimal utilization of the firm’s resources, and significant growth potential.

The firms that understand the potential of data-driven decision making for advisor engagement and start adopting such tools early will thrive in these uncertain times and emerge as winners once the dust settles.

The COVID-19 pandemic brought a complete disruption to normal operating procedures in most industries. This unprecedented situation has struck some business functions disproportionately hard, above all those where the workforce relied heavily on an on-the-field presence, compared to functions that could easily be converted to a remote working setup.

From the Life Sciences industry standpoint, drug promotion via Medical Reps (MRs) falls into the former category. Although the industry as a whole has seen rapid adoption of digital solutions across workstreams over the current decade, marketing efforts toward Health Care Providers (HCPs) still rely heavily on face-to-face (F2F) interaction between Reps and physicians.

This status quo, however, has been challenged by the ongoing COVID pandemic and the social-distancing norms now in place. Estimates point to a 92% drop in F2F HCP engagements in April 2020 compared with six months earlier[1]. It is also estimated that in the post-pandemic normal, the frequency of F2F engagements will shift by as much as 65% to quarterly or annual, rather than the weekly/monthly norms prevalent pre-COVID[2]. This is a massive blow to the existing pharma sales and marketing approach, and many companies have rapidly scaled up their digital engagement channels to fill the gap. The use of these digital channels for HCP engagement has seen a 2x increase from pre-pandemic levels.[2]

The current COVID driven environment has several key implications for the Lifesciences organizations in their effort to meaningfully engage with HCPs.

  1. Impact on Sales & Marketing Channel Mix – Restrictions on in-person meetings have led to reduced access to HCPs, canceled or postponed training sessions, and canceled conferences and events, all of which were major marketing methods until now. Pharma and other Life Sciences companies have to accelerate their sluggish digital transformation initiatives and enable a true omnichannel digital experience for HCPs
  2. Digital engagement channel optimization – The digital omnichannel push needs to account for varying physician preferences for the type of digital channel engagement, based on factors like therapeutic area, demography, and personal preferences.
  3. Personalized, contextual messages for better engagement – Physicians at the front lines have to balance innovation and efficiency while dealing with the increased pandemic workload. As a result, engagement and interaction frequency with HCPs have dropped abruptly. With this sudden shift, communication needs to be crisp and contextual to be effective.

This brings us to an important question of how the Bio-Pharmaceutical companies should navigate the current shock concerning HCP engagement and what lies ahead for them. Pharma Commercial Teams would need a strategic HCP engagement approach that manages the immediate COVID situation as well as builds capabilities for the new digital-driven normal.

As the Bio-pharma companies scramble to optimize their marketing efforts in the current times, they need to formulate a strategy which tackles the problem in phases:

  • Now: Immediate Priorities to manage COVID situation (next 1-3 months) – Set of tactical initiatives and workarounds to the existing HCP engagement methodologies, meant to strictly tackle only the immediate priorities around COVID-19 impact
  • Next: Accelerate digital capabilities build-up to drive omnichannel HCP engagement (in 3-6 months) – Strategic initiatives to accelerate and deliver a highly engaging digital experience for HCPs. These will fundamentally shift and realign biopharma omnichannel engagement capabilities to post-COVID realities.

(Now) Immediate Priorities to manage HCP promotions in COVID situation

As an immediate measure, Bio-Pharma companies need to evaluate the impact of COVID-19 on HCPs’ practice – Rx, patient counts, geographical impact, etc. – and on Field Reps’ access to HCPs. It is imperative that Biopharma companies create a COVID control room, which integrates external trigger impact data with internal data sources to truly assess the impact of the COVID situation (and potentially other external triggers and shocks) on their sales & marketing plans.

[Figure: COVID-19 geographic risk assessment]

As the COVID impact is quantified, bio-pharma companies can synthesize it to adjust the tactical call plans for their promotional activities. The critical parameters to consider while making changes to the call action models are listed below (a brief illustrative sketch follows the list):

  • Incorporating external COVID impact triggers at the geo and HCP level
  • Defining and quantifying the digital affinity of physicians
  • Optimizing cross-channel (Digital & Rep) targeting frequency
  • Dynamically adjusting the call plan (digital mix, frequency) as the COVID situation evolves
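
The sketch below shows one simple way these parameters could combine into an adjusted call plan; the column names, weights, and caps are illustrative assumptions:

```python
# Minimal sketch of a COVID-adjusted call plan: blend an HCP's digital affinity and a
# geo-level COVID impact trigger to re-weight Rep visits vs. digital touches.
import pandas as pd

hcps = pd.DataFrame({
    "hcp_id": [101, 102, 103],
    "baseline_f2f_calls_per_month": [4, 3, 2],
    "digital_affinity": [0.8, 0.3, 0.6],     # 0 = prefers in-person, 1 = prefers digital
    "geo_covid_impact": [0.9, 0.4, 0.7],     # external trigger, 0 = low, 1 = severe
})

# Scale back F2F in proportion to local COVID impact; shift the remainder to digital,
# amplified for HCPs with higher digital affinity
hcps["adjusted_f2f"] = (hcps["baseline_f2f_calls_per_month"]
                        * (1 - hcps["geo_covid_impact"])).round(1)
hcps["digital_touches"] = ((hcps["baseline_f2f_calls_per_month"] - hcps["adjusted_f2f"])
                           * (1 + hcps["digital_affinity"])).round(1)
print(hcps[["hcp_id", "adjusted_f2f", "digital_touches"]])
```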

(Next) Accelerate digital capabilities build-up to drive Omnichannel HCP engagement

Once the immediate priorities related to the pandemic are addressed, companies can use the learnings and key insights from the pandemic period to further advance their digital engagement strategy. An evaluation of what went right and what was missed in the earlier stage should also feed a long-term digital and omnichannel engagement strategy. There is also a lot to learn from digital natives, who have leveraged digital channels highly effectively to drive customer engagement.
Bringing these best practices from digital natives together with the bio-pharma context can help accelerate the digital transformation of the industry’s HCP engagement approach.

Best practices and learnings from digital natives, in the Life Sciences ecosystem context:

  • Focus on differentiated HCP experience – Physicians have different interaction points, interests, and requirements, including clinical content, CMEs, studies, samples, copay coupons, patient counseling material, etc.; a differentiated experience is what enables engagement.
  • Volume and variety of data – Pharma has access to multi-dimensional physician data in terms of demography, preferences, prescription patterns, patient/payer mix profiles via claims, and digital affinity, which can be used to micro-segment physicians and uncover preferences, behaviors, and personalized needs.
  • HCP/customer journey management and personalization – Advanced analytics and ML-based approaches can leverage the available data to predict intents, recommend interventions, and seamlessly deliver them via physician engagement platforms and processes.
  • Omnichannel execution – Multi-channel interaction provides a foundation platform for delivering these experiences across digital as well as non-digital channels.
  • Measure, learn & improve – A/B-testing-driven digital engagement experimentation anchored on performance-driven, yet responsive, targeting strategies.

To accelerate their digital transformation journey, biopharma companies need to inculcate these best practices into their HCP digital marketing capability. An integrated Digital Engagement solution will help biopharma companies create and deliver omnichannel, personalized experiences for HCPs by enabling real-time AI/ML-driven next-best-action recommendations and precision targeting strategies based on physician preferences and intents.
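
A minimal sketch of the next-best-action step is shown below; the candidate actions and response scores stand in for the output of a trained response model and are purely illustrative:

```python
# Illustrative next-best-action selection: for each HCP, rank candidate engagements by a
# predicted response score and pick the top action. Names and values are assumptions.
import pandas as pd

candidates = pd.DataFrame([
    # hcp_id, action, predicted_response (e.g. probability of meaningful engagement)
    (201, "email_clinical_update", 0.32),
    (201, "rep_video_call",        0.41),
    (201, "webinar_invite",        0.18),
    (202, "email_clinical_update", 0.22),
    (202, "rep_video_call",        0.15),
    (202, "webinar_invite",        0.37),
], columns=["hcp_id", "action", "predicted_response"])

# Highest-scoring action per HCP becomes the next best action
next_best = (candidates.sort_values("predicted_response", ascending=False)
                       .groupby("hcp_id", as_index=False)
                       .first())
print(next_best)
```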

[Figure: HCP digital engagement framework]

The COVID pandemic is an unprecedented global event that will radically alter our behaviors, expectations, and interactions. Earlier rules of engagement are becoming irrelevant faster than ever before. To maintain (and grow) their share of voice and engagement with HCPs, bio-pharma organizations can no longer afford the “digital add-on” approach. They have to fundamentally re-design their HCP engagement framework around a digital-driven strategy to stay relevant, stay ahead, and keep growing.

[1] Sermo COVID-19 Survey, April 2020
[2] Sermo COVID-19 Survey, April 2020

While the reckless overextension of credit lines by lenders and banks was the root cause of the 2007-09 financial crisis, with the US as its primary epicenter, this time the financial crisis has been caused by a virus with rapidly evolving geographical centers covering almost the entire world. Banks are in a catch-22 situation: they need to support the government’s lending and loan-relief measures while also maintaining low credit-loss rates and sufficient capital provisioning for their balance sheets. Effective risk management and credit policy decisioning has never been as challenging for banks as it is now in the post-COVID-19 world.

COVID-19 implications and challenges for banks and lending institutions

Sudden shift in the risk profile of retail and commercial customers – The surge in unemployment, deteriorating cash flows for businesses, and similar shocks have led to a sudden shift in customers’ credit profiles. The data that banks leveraged before COVID might not provide an accurate picture of a consumer’s risk profile in the current environment.

Narrow window of opportunity to re-define credit policies – Banks’ credit policies for origination, existing customer management, collections, etc. have been designed over years with a lot of rigor, market tests, and the design and application of credit risk models and scorecards. The coronavirus has caught bankers and Chief Risk Officers by surprise, and there is a narrow window of opportunity to make changes to existing models and risk strategies. While many banks had built a practice of stress testing for unfavorable macroeconomic scenarios, the pace and impact of the coronavirus have been unprecedented, requiring an immediate response from banks to mitigate the expected risks.

Government relief programs like payment moratoriums – Payment holidays and moratorium programs are effective in taking some burden off consumers, but they prevent banks from identifying high-risk customers, as there is no measure of delinquency that banks can capture from existing data.

Four-point action plan and strategy to navigate through the COVID-19 crisis

Banks will need to go back to the drawing board, re-imagine their credit strategy, and put in accelerated war-room efforts to leverage data and create personalized risk decisioning policies. Based on Incedo’s experience supporting several mid-tier banks in the US with post-COVID risk management, we believe the following could help banks and lenders make a fast shift to enhanced credit policies and mitigate portfolio risk.

  1. COVID situational risk assessment – As a starting point, risk managers should identify the distress indicators that capture the situational risk posed by COVID-19. These indicators could be a firsthand source of a customer’s situational risk (e.g. a drop in payroll income) or surrogate variables like higher utilization or use of the cash advance facility on a credit card. Banks would need to leverage a combination of internal and external parameters, such as industry, geography, employment type, and customer payment behavior, to quantify COVID-based situational risk for a given customer.

    [Figure: COVID situational risk assessment]

  2. Early warning alerts & heuristic risk scores based on recent behavioral shifts in a customer’s risk profile – A sudden change in financial distress signals should be captured to create automated alerts at the customer level; this, in combination with the customer’s historical (pre-COVID) risk, should be a key input into the overall risk decisioning process (see the sketch after this list). The early warning system should issue alerts, notifying the credit risk system of abnormal fluctuations and potential stress-prone behavior for a given account.

    [Figure: Early warning alerts & heuristic risk scores]

  3. Executive Command Centre for COVID Risk Monitoring – The re-defined heuristic customer risk scores should be used to quantify the bank’s overall post-COVID risk exposure. Banks need to monitor the rapidly changing credit behavior of customers on a periodic basis and identify key opportunities. The rapid risk-monitoring command center should cover risk across the customer lifecycle and the various risk strategies, and help the bank’s management team answer questions such as:
    • What is the overall current risk exposure, and what is the forecasted risk exposure over the short term?
    • How has the overall credit quality of the existing customer base changed, and are there patterns across different credit product portfolios?
    • What types of customers are using payment moratoriums, and what is the expected default risk of those segments?
    • What is the estimated drop in income at an overall portfolio level, and how could it affect other credit interventions?
    • Which models are showing significant performance deterioration and may need re-calibration as high-priority models?

    [Figure: Executive command centre for COVID risk monitoring]
  4. Personalized credit interventions strategy (whom to Defend vs. Grow vs. Economize vs. Exit) – To manage credit risk while optimizing the customer experience, banks should use a data-driven personalized interventions framework of Defend, Grow, Economize & Exit. Using a framework based on the customer’s historical risk, post-COVID risk, and potential future value, the optimal credit intervention strategy should be carved out. This framework should enable banks to help customers with a short-term liquidity crunch through government relief programs, loan re-negotiation, and settlement offers, while building a better portfolio by extending credit to creditworthy customers in the current low-interest-rate environment.
    [Figure: Personalized credit interventions strategy]
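
The sketch referenced in point 2 above illustrates how a heuristic post-COVID risk score might combine the pre-COVID score with situational distress triggers; the weights, thresholds, and data are illustrative assumptions:

```python
# Sketch of a heuristic post-COVID risk score: combine the pre-COVID score with situational
# distress triggers such as a payroll-income drop or a utilization spike.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "pre_covid_risk_score": [0.10, 0.25, 0.05],    # historical probability-of-default style score
    "payroll_income_drop_pct": [0.60, 0.05, 0.30],
    "utilization_increase_pct": [0.40, 0.10, 0.55],
    "cash_advance_used": [1, 0, 1],
})

# Situational overlay: each distress indicator contributes to an additive uplift
overlay = (0.5 * customers["payroll_income_drop_pct"]
           + 0.3 * customers["utilization_increase_pct"]
           + 0.2 * customers["cash_advance_used"])
customers["post_covid_risk_score"] = (customers["pre_covid_risk_score"]
                                      + overlay * (1 - customers["pre_covid_risk_score"])).clip(0, 1)
customers["alert"] = overlay > 0.4   # early-warning flag for an abnormal behavioral shift
print(customers[["customer_id", "post_covid_risk_score", "alert"]])
```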

The execution of the above-mentioned action plan should help banks not only mitigate the expected surge in credit risk but also build a competitive advantage as we move toward the new normal. Rapid credit decisioning should be backed by more informed decision making, and the framework should be fine-tuned on an ongoing basis to reflect the real pattern of delinquencies.

Incedo, with its team of credit risk experts and data scientists, has set up post-COVID early monitoring systems, heuristic post-COVID risk scores, and COVID command centers for a couple of mid-tier US-based banks over the last few weeks.

Learn more about how Incedo can help you with credit risk management.

The magnitude of the spread of the COVID-19 pandemic has forced the world to come to a virtual halt, with a sharp negative impact on economies worldwide. The last few weeks have seen one of the most brutal global equity collapses, a spike in unemployment numbers, and negative GDP forecasts. With the crisis posing a major systemic financial risk, effective credit risk management is the key imperative for banks, fintechs, and lending institutions.

Expected spike in delinquencies and credit losses post COVID-19

The creditworthiness of banking customers in both retail and commercial portfolios has decreased drastically due to the sudden negative impact on their employment and income. If the epidemic continues for a longer period, defaults and credit losses for banks could be far higher than those observed in the global financial crisis of 2008.
[Figure: Expected spike in delinquencies and credit losses post COVID-19]

Need for an up-to-date, agile and analytics driven credit decisioning framework:

The existing models that banks rely upon simply did not account for such a ‘black swan’ event. A credit decisioning framework based on existing risk models and business criteria would be suboptimal in assessing customer risk, putting the reliability of these models in doubt. There is an immediate need for banks to adopt a new credit lending framework to quickly and effectively identify risks and make changes to their credit policies.

Incedo’s risk management framework for the post COVID-19 world

To address the challenges thrown up by COVID-19, it is important to assess the short-, medium-, and long-term impact on banks’ credit portfolio risk and define a clear roadmap as a strategic response, focusing on changes to risk management methodologies, credit risk models, and existing policies.

We propose a six-step framework for banks and lending institutions comprising the following approaches.

[Figure: Roadmap for post-COVID credit risk management]

  1. COVID Risk Assessment & Early monitoring Systems

Banks and lending institutions should focus on control-room efforts and carry out a rapid re-assessment of customer and portfolio risk. This should be based on COVID situational-risk distress indicators and anomalies observed in customer behaviour post COVID-19. For example, a sudden spike in utilization, little or no salary credit in the payroll account, or use of the cash advance facility by a transactor persona could all signal increasing situational risk for a given customer. In the absence of real delinquencies (due to moratorium or payment holiday facilities), such triggers should enable banks to understand customers’ changing profiles and create automated alerts around them.

[Figure: COVID risk assessment and early monitoring systems]

  2. Credit risk tightening measures

Whether you are a chief risk officer or a credit risk practitioner, by now you will have heard many times that your previous credit risk models and scorecards no longer hold and validate. While that is true, it has also been observed that directionally, most of these models still rank-order risk with only a few exceptions. These exceptions, or business overrides, can be captured through early-monitoring signals and overlaid on top of existing risk scores as a very short-term plan (a simple sketch of this overlay follows the figure below). Customers with a low risk score but situational risk deterioration based on early-monitoring triggers are the segments where credit policy needs to be tightened. As delinquencies start getting captured, banks should re-build these models and identify the most optimal cutoffs for credit decisioning.

[Figure: Credit risk tightening measures]
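
The overlay sketch referenced above crosses existing score bands with the situational-deterioration flag; the band labels, triggers, and policy actions are illustrative assumptions:

```python
# Sketch of the tightening logic: cross existing (pre-COVID) score bands with the
# situational-deterioration flag from early-warning triggers, and tighten policy only
# where a previously low- or medium-risk customer now shows distress.
import pandas as pd

portfolio = pd.DataFrame({
    "customer_id": [11, 12, 13, 14],
    "pre_covid_score_band": ["low", "low", "medium", "high"],   # from existing scorecards
    "situational_deterioration": [True, False, True, False],    # from early-warning triggers
})

def policy_action(row):
    # High-risk customers were already restricted under pre-COVID policy
    if row["pre_covid_score_band"] == "high":
        return "already restricted"
    if row["situational_deterioration"]:
        return "tighten limits / review line"
    return "business as usual"

portfolio["credit_policy_action"] = portfolio.apply(policy_action, axis=1)
print(portfolio)
```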

  3. Personalized Credit Interventions

There are still customers with superior creditworthiness waiting to borrow for their financial needs. It is very important for banks to discern such customers from those with a low ability to pay back. To do this, banks need personalized interventions that reduce risk exposure while ensuring an optimal customer experience. Banks should help customers facing a liquidity crunch through government relief programs, loan re-negotiation, and settlement offers, while building a better portfolio by extending credit to ‘good’ customers in the current low-rate environment.

  4. Models Re-design and Re-calibration

A wait-and-watch approach over the next 2-3 months, to understand the shifts in customer profile and behavior, is a precursor to re-designing the existing models. This would enable banks to better understand the effect of the crisis on customer profiles and build intelligent scenarios around the future trend of delinquencies. Existing models will need to be re-calibrated or re-designed, and periodic re-monitoring of new models will be a must, given the expected economic volatility over at least the next 6-12 months.
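
One simple way to operationalize the re-monitoring step is a Population Stability Index (PSI) check on score distributions, sketched below with synthetic data and an illustrative 0.25 threshold:

```python
# Sketch of rapid model monitoring ahead of re-calibration: a Population Stability Index (PSI)
# on the score distribution flags models whose scores have drifted post-COVID.
import numpy as np

def psi(expected_scores, actual_scores, bins=10):
    # Decile bins from the development-time (expected) distribution
    edges = np.quantile(expected_scores, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], np.min(actual_scores)) - 1e-9
    edges[-1] = max(edges[-1], np.max(actual_scores)) + 1e-9
    e = np.histogram(expected_scores, edges)[0] / len(expected_scores)
    a = np.histogram(actual_scores, edges)[0] / len(actual_scores)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(3)
pre_covid = rng.beta(2, 5, 10_000)      # development-time score distribution
post_covid = rng.beta(2, 3, 10_000)     # shifted distribution observed now

drift = psi(pre_covid, post_covid)
print(f"PSI = {drift:.3f} ->", "re-calibrate" if drift > 0.25 else "monitor")
```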

  5. Model Risk Management through Risk Governance and Rapid Model Monitoring

There is an urgent need for banks to identify and quantify the risks emerging from the use of historical credit risk models and scorecards through model monitoring. While the risk associated with credit products has increased, delinquencies have not yet started appearing in banks’ databases because of the payment-holiday facilities introduced by governments in most countries. In such a situation, it is critical to design risk-governance rules for new models that may not have accurate information on dependent variables (e.g. delinquency).

  6. Portfolio Stress Tests aligned with dynamic macro-economic scenarios

Banks and lending institutions need to leverage and further build on their stress-testing practice by running dynamic macro-economic scenarios on a periodic basis. Stress testing has already helped banks in the US improve their capital provisioning; the COVID crisis should push banks across geographies to use stress tests to guide their future roadmap, depending on how their financials would fare under different scenarios, and to take remedial actions.
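
A minimal stress-test sketch along these lines re-computes expected loss (EL = PD x LGD x EAD) under scenario multipliers; the portfolio, multipliers, and parameters are illustrative assumptions:

```python
# Sketch of a portfolio stress test: expected loss re-computed under macro scenarios that
# scale default probabilities. All figures are illustrative.
import pandas as pd

portfolio = pd.DataFrame({
    "segment": ["prime_retail", "near_prime_retail", "small_business"],
    "ead_musd": [500, 200, 300],      # exposure at default, $m
    "pd": [0.01, 0.04, 0.06],         # baseline probability of default
    "lgd": [0.40, 0.55, 0.50],        # loss given default
})

scenarios = {"baseline": 1.0, "moderate_downturn": 1.8, "severe_downturn": 3.0}
for name, pd_multiplier in scenarios.items():
    stressed_pd = (portfolio["pd"] * pd_multiplier).clip(upper=1.0)
    expected_loss = (stressed_pd * portfolio["lgd"] * portfolio["ead_musd"]).sum()
    print(f"{name}: expected loss = ${expected_loss:.1f}m")
```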

The execution of the above-mentioned framework should ensure that banks and fintechs are able to respond to immediate priorities to protect the downside, while emerging stronger as we enter the new normal of the credit lending marketplace.

Incedo is at the forefront of helping organizations transform the risk management post COVID-19 through advanced analytics, while supporting broader efforts to maximize risk adjusted returns.

Our team of credit risk experts and data scientists has set up post-COVID early monitoring systems, heuristic post-COVID risk scores, and COVID command centres for a couple of mid-tier US-based banks over the last few weeks.

Learn more about how Incedo can help with credit risk management.

Quantum Computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. Quantum computers perform calculations based on the probability of an object’s state before it is measured – instead of just 1s or 0s – which means they have the potential to process more data compared to classical computers.

Why do companies need to reimagine their customer service? And why do they need to learn from digital natives like Google and Amazon? Because these digital natives are setting the standard for customer expectations – in a recent survey, when customers were asked which company they would want to take telecom services from, 60% of respondents said Google or Amazon!
So what are the key differences in the way Digital Natives approach Customer Service?

  • Fix at Source – While traditional organizations look to call deflection to save costs, digital natives believe that customer service indicates a customer pain point that should be fixed “at source”.
  • Use Product Thinking and Tech to solve issues – Too many processes and policies at legacy organizations are driven by risk, legal, and finance, making them high friction. Digital natives start with the voice of the customer to design the right customer experiences and use technology to manage risks
  • Put AI and Technology at heart of everything – Not as siloed solutions to micro-problems but for driving end-to-end orchestration of customer experiences

To build next gen customer service capabilities, Incedo recommends 5 key initiatives:

  1. Use Voice of Customer to drive business priorities
  2. Fix root cause at source using Product Design Thinking
  3. Personalise Service Channel Mix
  4. Leverage AI to increase machine and self-serve digital channels
  5. Use Cloud based architecture to enable AI driven Customer Service at scale

Voice of Customer to drive business priorities

KPIs to optimize: NPS

Customers talk about products and services through multiple mediums – they leave reviews on product pages, social media, and app stores; call customer care; write emails; or escalate to senior management. Often, the focus of customer service teams is on “managing” these inputs – dousing the fire if a review is negative. However, there is a wealth of information in these customer inputs about what is working and what is not; the challenge is that there is a lot of noise, and traditional approaches have been inadequate. Advanced NLP and AI techniques can help organizations extract very actionable insights from these VOC channels.
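
A minimal sketch of this kind of VOC mining is shown below, using TF-IDF plus NMF topic extraction over a tiny illustrative corpus; production systems would use richer NLP models and real review/call data:

```python
# Sketch of mining VOC text: TF-IDF + NMF topic extraction over review/call notes to
# surface recurring themes. The corpus and topic count are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

reviews = [
    "app keeps crashing when I try to pay my bill",
    "billing amount is wrong again, second month in a row",
    "waited 40 minutes on the phone to reach an agent",
    "payment failed and I was still charged a late fee",
    "the new app update is slow and logs me out constantly",
    "agent was helpful but the wait time was unacceptable",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(reviews)
nmf = NMF(n_components=3, random_state=0).fit(X)

# Top terms per theme hint at the underlying pain points (billing, app stability, wait times)
terms = tfidf.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top_terms = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"theme {i}: {', '.join(top_terms)}")
```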

Fix root cause at source using Product Design Thinking

KPIs to optimize: Calls/Incidents per Unit/Order

Most customer service issues require a cross-functional approach and product design thinking to resolve at the root cause. For example, when faced with fraud, most organizations end up putting strong checks and balances in place that also add a lot of friction to genuine customer journeys. Digital natives, on the other hand, approach it differently:

  • They build robust tech and AI based preventive and corrective mechanisms and continuously refine them
  • They take a ROI based approach – compensating customers for small ticket breaches rather than adding friction

Personalize Service Channel Mix

KPIs to optimize: NPS, CSAT

A recent study showed that digital channels lead to the highest customer satisfaction for service. However, not all customers are equal, nor are the issues they face. Personalization of the service channel based on the following key parameters is recommended (a simple routing sketch follows the list):

  • Customer lifetime value – High LTV customers expect white glove treatment best provided by high-quality agents
  • Digital affinity – Forcing low digital affinity customers towards digital channels and vice-versa can lead to dissatisfaction
  • Anxiety Levels – Some issues cause high anxiety; customers reaching out with these issues should be routed to the channels with the best resolution rates
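
The routing sketch referenced above turns these parameters into a simple channel decision; the thresholds and channel names are illustrative assumptions:

```python
# Routing sketch: pick the service channel from customer lifetime value, digital affinity
# and issue anxiety. Thresholds and channel names are assumptions for illustration.
def recommend_channel(ltv: float, digital_affinity: float, high_anxiety_issue: bool) -> str:
    if high_anxiety_issue:
        return "priority voice agent"          # best-resolution channel for stressful issues
    if ltv > 10_000:
        return "dedicated relationship agent"  # white-glove treatment for high-LTV customers
    if digital_affinity > 0.6:
        return "self-serve / chatbot"
    return "standard voice agent"

print(recommend_channel(ltv=15_000, digital_affinity=0.9, high_anxiety_issue=False))
print(recommend_channel(ltv=2_000, digital_affinity=0.2, high_anxiety_issue=True))
```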

Leverage AI to increase machine and self-serve digital channels

KPIs to optimize: Resolution Rate, MTTR (Mean Time To Resolution), Operating Cost

What should you automate or move to self-service? The choice should be driven by the volume and resolution complexity of issues. High-volume, low-complexity issues lend themselves well to self-serve channels, whereas high-complexity issues will require a human touch. The design of chatbot and self-serve solutions should begin with design thinking applied to customer journeys (a simple prioritization sketch follows).
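
The prioritization sketch below ranks issue types on this volume-versus-complexity screen; issue names, scores, and the cut-off are illustrative assumptions:

```python
# Sketch of the volume-vs-complexity screen: rank issue types by automation fit, sending
# high-volume / low-complexity issues to self-serve. Scores are illustrative.
import pandas as pd

issues = pd.DataFrame({
    "issue_type": ["password reset", "billing dispute", "order status", "fraud claim"],
    "monthly_volume": [12000, 3000, 9000, 400],
    "complexity": [0.1, 0.7, 0.2, 0.9],   # 0 = trivial, 1 = needs expert judgment
})

# Automation fit = relative volume weighted by how simple the issue is
issues["automation_fit"] = issues["monthly_volume"].rank(pct=True) * (1 - issues["complexity"])
issues["target_channel"] = issues["automation_fit"].apply(
    lambda s: "self-serve / chatbot" if s > 0.4 else "human agent")
print(issues.sort_values("automation_fit", ascending=False))
```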

Use Cloud based architecture to enable AI driven Customer Service at scale

KPIs to optimize: Time to Market

The solutions and approaches outlined in the previous four initiatives require building real-time AI/ML models that evolve continuously. Traditional data and technology architectures cannot keep up with the velocity of change and the volume of data. Cloud-based architectures are key to solving this problem, given their inherent scalability and vast, growing libraries of reusable components.
However, transforming existing legacy architectures to cloud-based ones is a daunting task. Organizations can follow a 2-speed approach to this transformation:

  • Speed 1: End to End Cloud transformation use case by use case
  • Speed 2: Building out the cloud architecture that can support multiple use cases and future needs

In conclusion, customer service as most organizations know it is transforming and Digital natives are at the forefront. Leaders of traditional organizations can drive this transformation by undertaking 5 key initiatives that put the customer at the heart of the service – to begin this journey a cross-functional empowered team that can own and drive these initiatives is recommended. It is either that or slow death as customers abandon sub-par experiences for better ones.

In a world of constant and extensive technology disruption, with organizations engaged in a battle for survival, the urgency to digitally transform is well understood by almost every large enterprise.

That everyone is trying to go digital is well established. Yet organizations continue to grapple with achieving breakthrough business impact from digital transformation programs.


In recent years, promoters of blockchain have pushed the technology as a major disrupter of existing digital payments and transaction systems. Indeed, it offers tremendous promise as a key building block of the digital economy, but the technology has fallen victim to massive hype and irrational exuberance in the past, driven largely by a Bitcoin-buying frenzy.

The talent gap is often a talking point in the industry. To discuss the typical analytics hiring scenario in India and steps that can be taken to bridge the talent gap, Analytics India Magazine caught up with Nitin Seth, CEO of Incedo Inc., who shared that the talent gap is primarily driven by the sharp rise in analytics and AI-based solutions needed across industries. “The supply side has not been able to cope up,” he said.

Customer expectations have reached an all-time high and industry competition is ever increasing — putting businesses under constant pressure to increase efficiency and improve results

Step outside the digital natives of Silicon Valley and Seattle, and AI as a source of competitive advantage begins to look like smoke and mirrors. In our conversations with multiple Fortune 100 executives, we see increasing levels of frustration. A not-uncommon refrain: “We are being asked to spend millions on AI initiatives: is this the best way to allocate capital?” We believe that this is the right question to ask – after all, there is no dearth of investments driven by technology hype cycles. Why should AI be any different?

While the pundits talk breathlessly about AI being responsible for the 4th Industrial Revolution, we believe the reality is far more nuanced – and a good place to start is to ask the right questions to better understand the current state of AI in your enterprise. So here goes – 10 questions. Like all good questions, these are meant to provoke a dialogue within your organization and, through that, a better assessment of whether AI is ‘real’ and, more importantly, of the journey that you and your organization need to embark on to make AI real.

We follow that up with a strawman Manifesto on what it will take to make AI real: you should create one for your own organization.

10 Questions

AI for the sake of AI
1. Are the AI projects focused on delivering measurable business outcomes?
2. Do you have the right instrumentation integrated to monitor the impact of AI projects?

Nurturing AI Talent
3. Is there a core AI capability under a CDO/CTO? Or is it a bolt-on as part of the CIO org?
4. Are there long-term career paths for AI/ML Data Scientists & Engineers?

Data as a Core Asset
5. Is there a Data Governance team with a CXO commitment to truly enable Data Democratization?
6. Is the legacy BI/EDW environment the main data platform for AI projects?

The Legacy of Deterministic thinking
7. Does the organization have an appetite for Experimentation across the Enterprise: not just cosmetic website changes?
8. Does the business accept the idea of Probabilistic Recommendations?

Crossing the AI Chasm
9. Do you have an enterprise AI platform infrastructure?
10. Have AI projects been integrated into transaction systems (e.g., ERP, RPA) in the last 12 months?

The Manifesto

To make AI truly real in your organization, you need to spark some kind of a revolution. And revolutions obviously (!) need manifestos – a series of bold, declarative statements that set the tone for the entire organization. Only then do you have a shot at genuinely making data-driven decision making real and driving competitive advantage.

Here’s the Manifesto:

1. Be Ruthless About Outcomes: Quantified outcomes should drive AI project prioritization – not the other way around; mandate the instrumentation that can be linked back to an outcome KPI as part of each AI project.

2. Invest in Building Organizational Capability: Invest in a centralized AI/ML Data Science and Engineering capability; balance that with an ecosystem of ‘Citizen Data Scientists’ who can provide capability at the edges in an organization. Create a career path that encourages mobility between the edges and the centralized teams.

3. Elevate Data to be a First-Class Citizen: Data is an Asset. Treat it like one: it deserves a governance structure; invest in a ‘Data as a Service’ architecture that goes beyond just data provisioning.

4. Integrate Probabilistic Systems into Operating Processes: Get the organization comfortable with the idea of probabilistic recommendations; ensure AI systems get better over time – where you don’t have enough observations to learn from, use experiments.

5. Invest in ‘AI Platform as a Service’: Invest in an AI@Scale platform that standardizes AI model lifecycle management; move away from monolithic systems to a marketplace of modular ‘code blocks’ that can be used to assemble solutions.

Two key points are clear:

1. AI is here to stay – it is no longer about the Why or What, but increasingly about the How.

2. AI, like all technology driven transformation, is not a one-size fits all strategy.

Our end-to-end suite of AI services includes AI/ML implementation, business process & digital integration, customer 360 view, and continuous A/B testing, among others.

Learn More: https://www.slideshare.net/IncedoInc/ai-in-the-enterprise-hype-vs-reality

Management jargon like ‘Extreme Experimentation’ and ‘Fail Fast’ has been around for some time now. Much of this thinking, and consequently the success, has come from the software industry. But once you step outside Silicon Valley, you will be hard pressed to find successful instances of experimentation translating into actual shareholder value. In my years of working with Fortune 500 leaders, I have found a stubborn chasm between desire and execution – one that goes beyond systems and processes.

It is clear that this is, above all, a challenge of cultural transformation – one that moves away from trusting expert judgment to a more incremental approach informed by faster customer feedback. This is a journey that, if executed well, will create not just a data and system architecture but will also impact the organization structure and KPIs. In other words, it should trigger a wholesale cultural change. And like all transformation ideas, there needs to be a series of initiatives.

  1. Invest in a Cross-functional Design of Experiments Team: Most organizations have digital platforms. And many of them run basic experiments limited to testing out multiple website changes (e.g. A/B testing). This thinking needs to expand beyond these ‘cosmetics’ to experiment with deeper changes – e.g. pricing, product offering changes. Such initiatives require changes not just to the website – here are just a couple of examples:
    1. Product offering experiments: This will require a change in how the product structures are created – instead of individual SKU Bills of Material (BOMs), you will need to create option BOMs, with dynamic optional combinations
    2. Pricing experiments: This will require a change in pricing methods – instead of overall product price, you will need to set up a line structure that prices individual feature combinations

This will require a cross-functional team with a mandate to build this capability. An initial manifesto could look something like this:

  • Design multiple experiments in line with business goals. This requires a heavy dose of Data Science (see below)
  • Implement process changes – from changing say, how pricing gets done to how product changes can be deployed across physical and digital channels
  • Design the right set of KPIs to track not just the lift from individual experiments, but also to track the impact of these changes through implementation
  • Orchestrate the IT infrastructure to deploy these experiments

In our opinion, this is best owned and orchestrated through the Marketing Strategy or Corporate Strategy function. A leading home improvement retailer invested in this capability within the CFO organization – and used this function to drive experiments across channels – from in-store experiments (e.g. store-level promotions) to experiment with omni-channel implementation scenarios (e.g. Buy Online Pick up in Store).

  2. Not all learning needs to come from field experiments: The proverbial data haystack has many needles. To begin with, historical product and pricing changes can provide signals on customers’ stated preferences – i.e. the traditional lift from these changes. Even more, data provides the opportunity to tease out revealed preferences – signals that customers communicate through indirect mechanisms, e.g. relative preference for specific attributes (e.g. storage capacity vs. processing power) expressed through features (response to memory upgrades vs. RAM upgrades). Discrete Choice Models often help understand the value customers assign to product attributes (i.e. decompose a product price into its individual attributes). This can be a good starting point to understand the price-value of product features, and then to abstract the features to attributes – which can then be imputed back to new features. A B2B tech manufacturer used this strategy to understand the feature-level price-value of its server product portfolio. This formed the basis of option pricing for the next generation of server products. Needless to say, this was the only viable approach, given that it was not possible to run field pricing tests in a highly competitive market.
  3. Build Data Science capability to extract value from data: It is clear that both of the above will require Data Science capability. And this capability becomes even more important given multiple challenges around not just the quality, but often surprisingly, the quantity of data.
    • Data Quality: Experiment data is more often than not, notoriously noisy. There are often multiple factors at play – both external (e.g. competitor launch, market dynamics) and internal (e.g. marketing promotion calendars, Supply chain considerations around availability etc.). Solving for these truly requires a blend of Art and Science:
      • Experiment design: Design the right test/control methods and the right measurement approach – from A/B testing to sophisticated methods like MAB (Multi-Armed Bandit); a minimal sketch of the MAB approach follows this list
      • Attribution modeling: Deploy Machine Learning models to tease out the attribution of the lift to a specific set of experiments from all other factors.
    • Data Quantity and Context: Most companies do not have the luxury of massive data sets the way Facebook, Amazon or Google do. More often than not, experiments need to deal with sparse datasets (e.g. small samples, poor response rates). And in some cases, there is not enough information in the incoming data to be able to easily execute experiments. For instance, without any prior information about the visitor to a website, how do you decide the right page to serve in an A/B test?
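
The MAB sketch referenced above uses Thompson sampling with Beta-Bernoulli posteriors over two variants, so traffic shifts toward the better performer as evidence accumulates; the true conversion rates are synthetic assumptions:

```python
# Minimal Thompson-sampling multi-armed bandit over two page/offer variants.
import numpy as np

rng = np.random.default_rng(42)
true_conversion = [0.05, 0.08]            # unknown in practice; synthetic here
alpha = np.ones(2)                        # Beta prior successes per variant
beta = np.ones(2)                         # Beta prior failures per variant

for _ in range(5_000):                    # visitors
    sampled = rng.beta(alpha, beta)       # draw a plausible conversion rate for each variant
    arm = int(np.argmax(sampled))         # serve the variant that looks best right now
    converted = rng.random() < true_conversion[arm]
    alpha[arm] += converted
    beta[arm] += 1 - converted

print("posterior mean conversion:", (alpha / (alpha + beta)).round(3))
print("traffic share per variant:", ((alpha + beta - 2) / 5_000).round(2))
```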

As companies across industries try to improve engagement with their end consumers, building a design-and-execution-of-experiments capability is no longer a nice-to-have restricted to the company’s website changes. We believe the time is right to invest in building the right organization, data, and technology ecosystem that can create, launch, and sustain this process across the enterprise – product design and launch, and pricing, are two areas with an immediate opportunity to invest in building this capability.

I have often said that the most valuable thing I have built over my years in analytics consulting is a ‘failure portfolio’. Each failed project has taught a lot, and it comes down to some foundational issues:

1. Are you solving the right problem? Call center operations are always trying to cut down call time (Average Handle Time), and needless to say, there are multiple improvement opportunities in the entire process. A telecom company wanted to solve the problem of auto-classification of calls using AI. The idea was to shave a few seconds from an agent’s workflow on every call. This required transcribing the call by converting the audio to text, extracting features using text mining, and then combining them with other call-related data to classify the call using a pre-defined taxonomy. Several thousand dollars later, they had an AI engine with an acceptable level of accuracy. At the end of this exercise, they had managed to cut a few seconds of agent time at the end of each call. When the solution was demonstrated to the call center agents, they had a much simpler alternative – training and simple tweaks to the workflow for better routing to the right agents. As it turns out, agents are already organized by problem area (billing, upgrade options, device management, etc.), and a few simple training sessions would get them to further classify calls within their domain. In the end, the AI engine was shelved. The moral of the story: it is important to focus on the right problem. Choice at origin matters – pick the wrong problem and it is easy to go down a rabbit hole.

2. Have you thought of the overall business process? One problem automobile manufacturers have long struggled with is parts re-use. As multiple engineering teams work on different vehicle platforms, they tend to create new part designs instead of re-using existing parts. The downstream effects are obvious – part proliferation leads to inventory holding and procurement costs. Engineering teams are also very good at capturing part specifications – both detailed designs and attributes. Only that most of them are drawings – from scanned documents (PDFs, TIFFs et al.) to CAD files. There is clearly an opportunity to use AI – more specifically, computer vision – to extract features from these documents and create a matching solution that, with a few simple questions about the engineer’s intent, would suggest a list of matching parts. A Tier-1 auto manufacturer invested in exactly that and developed a solution that would do any Data Science team proud. Then came the next step – how does this fit into the overall business process? How do you make it part of the engineer’s workflow? And then there was the issue of systems – engineers work in CAD/CAE and PLM systems – how does this solution fit into that ecosystem? These questions were not thought through fully to begin with. Too often, we forget that AI solutions more often than not solve a very specific problem, and unless they are pieced together with the relevant process tweaks, chances are the AI solution will end up as a proof of concept.

3. Have you engineered the solution to work at scale? Every retailer is always on the hunt to extract cost savings from the system – and one big area of focus is safety stock. Retailers have typically lived with a normative method (i.e. a formula that makes a lot of theory-driven assumptions) of computing safety stocks. Along came Big Data and AI, and the idea was to develop an empirical method to compute safety stocks using reinforcement learning. The solution worked beautifully – there were significant improvements in safety stock levels in test after test. Then came the issue of scaling – to make a real dent of even a few basis points to the bottom line, the solution had to work for over 2,000 stores, each carrying 50,000 SKUs on average. It is no secret that AI solutions are compute and storage intensive. Despite that, the solution, elegant though it was for a small subset of SKUs, was just not designed to operate at scale.

4. Are you trying to fit a technique into a use case? For those of us who have seen technology hype cycles, we are painfully aware of their early stages – the temptation to take the new hammer out to look for a nail is too strong to pass up. And it was thus that the Customer Care function in a technology firm took it upon itself to leverage ‘cutting edge AI’. The idea was to go where no one had chosen (yet) to go – and as we all know, unstructured data is the new frontier. So the best minds got together and invented a use case: using speech-to-text, voice modulation features, and NLP to assess, in real time, the mood of a caller. The idea: instead of relying on the call center representative to judge the need to escalate a call, why not let machines make the recommendation in real time? By now, it should be obvious where this all landed. In hindsight, it seems almost laughable that we could dream up such a use case: machines listening in to a human-to-human interaction and intervening in real time if the conversation was not likely to result in a desirable outcome. But that is exactly what happens – there is a thin line separating an innovative idea from a ludicrous one.

And here’s the interesting thing – you may have noticed that these are not necessarily Big Data or AI specific issues; they are fundamental issues relevant to any transformation initiative. And that’s the good news.

And does this mean that AI is all hype? Of course not – there is absolutely no doubt that AI and Big Data present a tremendous opportunity to drive game-changing value in organizations. And to be sure, there will be many more such failures – but so long as we approach this thoughtfully, start with outcomes in mind, move with ‘deliberate speed’, and are always willing to learn, we can truly unlock the potential of AI and Big Data.

Innovation likely ranks in the top 10 most overused words in our industry today. But what drives the need for innovation – cost, new products, competitors, or something else? And how does one execute – run experiments, launch new pilots, set up innovation ventures?

I found that Harvard Business School Prof. Sunil Gupta’s book, “Driving Digital Strategy”, brings this issue to the forefront in a compelling way. Prof. Gupta describes the need to fundamentally rethink a business through the lens of its customers, not around products or competitors. One of my favorite quotations from the book is: “Starting an Innovation unit in a large company is like launching a speedboat to turn around a large ship; often the speedboat takes off but does little to change the course of the ship.”

So, how does one succeed? I reflected on my own experiences helping customers in their innovation journeys. Innovation should first focus on answering a simple question: what is the compelling pain or gain I can deliver to my customers? Answering this question begins the process of rethinking how a product or service should position itself for growth by specifically addressing customer problems and taking advantage of data and new digital technologies.

Here is an example from one of my experiences where we assisted a Medical Devices & Diagnostics manufacturer address two key questions:

– How to shift value creation from being an equipment manufacturer to a full-service provider?
– How to increase the share of wallet in my customer base (hospitals)?

This manufacturer historically focused on the strengths of the equipment, the efficacy, and clinical value, delivering differentiation through its hardware. It left a big gap. Third-party software and service providers were providing surrounding solutions that leveraged the data from this manufacturer’s diagnostic equipment along with other data assets to solve specific customer problems (diagnosis aids, improving care workflows, improving the patient experience and improving clinician productivity).
Consequently, the manufacturer was leaving untapped value on the table, operating as a participant instead of owning a larger share of its customer ecosystem. The message was clear — the company had to Innovate or risk getting left behind.

Clarity on the purpose of innovation and what problems to solve is an essential first step, but it does not guarantee success!

It brings us to the next step — Execution. Often lack of effective execution is the reason Innovation efforts fail.

Let’s look at some of the reasons for poor execution:

  • Jumping too quickly into what to innovate and poor definition of the specific business problem (use case) to solve for
  • Getting carried away by Technology — Digital technologies, Data, AI, Machine Learning becomes the focal point of identifying new capabilities, but not the customer need (In my diagnostics example, adding a voice capability to clinical diagnostic equipment had excellent marketing appeal and sounded different, but it did not go too far in the absence of a compelling pain or gain to solve.)
  • Starting with a platform strategy too early and investing too much time and money in building platform capabilities (This introduces confirmation bias before market validation and slows down execution.)
  • Not thinking upfront of downstream changes that will need to occur if an Innovation pilot succeeds (In my Diagnostics example, the pilot use case excited everyone, but an early assessment that it may introduce changes to the regulatory framework and new security considerations led to a more informed, better thought-out approach.)
  • Lack of a clear business impact framework and measurement mechanisms to continually drive alignment between the Innovation strategy and execution (It is not sufficient to come up with top-level goals! Developing a framework to track granular level execution metrics brings tight alignment with the business problem.)

To avoid these pitfalls, we should address vital questions upfront:

  • Are we solving a real customer need, and are we able to define it clearly?
  • How well do we understand the end-user journey? Often, there is a low tech but high impact answer to the problem if we can humanize the experience.
  • How will we measure success? What are the Key Performance Indicators (KPIs) that will be a lead indicator of positive change and how to track them?
  • What technologies, capabilities will we need to execute?
  • What changes will have to occur in different parts of the value chain to commercialize such a new product or service?
  • How to simultaneously demonstrate value in the short term while building for scale in the long run?

What does Innovation mean for you, and what determines its success? I’d love to get your comments and learn from your experiences!
