* This content is AI generated. It is suggested to read the full transcript for further clarity.
Ladies and gentlemen, we also have the very valuable presence of Mr Sanjay Agrawal, CTO and Head – Technology Sales, Hitachi Vantara India and SAARC region. I request sir to kindly join us on the dais and talk about the responsibility framework for data and AI. Ladies and gentlemen, sir brings with him over 30 years of industry experience. He leads a team of solution consultants and technical experts in India to help customers focus on solutions and initiatives around digital transformation. His interest areas include analytics, IoT, content platforms, public safety solutions, enterprise storage, and server solutions. We welcome you, sir.
Thank you very much. Good afternoon everyone, and thank you Mr Kochar for giving me this opportunity. Before I start with the framework, let me share some background and the factors driving it.
Today, we are in the digital era, and when we talk about digital transformation, enterprises and corporates are focusing a lot on exceptional customer experience, improving operations and automation of processes, and creating new business models that can redefine the business and its go-to-market strategy. Digital technologies are being used to drive all of this.
When we look at digital technologies, AI and data are at the core. That is why every corporate is trying to become AI-enabled and data-driven. In all these initiatives, AI and data play a significant role. However, we have also heard that AI is not giving the right results or meaningful output; we hear about hallucinations, false predictions, negative responses, and inconsistent outputs. All of these issues point to one thing: the data.
We also talk about bias, transparency, and explainability. The core of all these problems lies in data. If AI is not getting the right data, the outcome will not be meaningful. AI and data go hand in hand. AI is not useful if the data is not good, and without AI, data cannot be monetized.
To get the best out of AI, we need the right culture, the right foundation, and a responsible framework for data operations in an organization. With this objective in mind, we collectively focused on creating a framework. It is a high-level framework; we can deep dive further, but the idea is that it should cover the entire lifecycle of data—from data onboarding to monetization and governance.
Given the time constraint, I will not go into every pillar, but I will highlight a few important ones.
The first pillar is data management. This includes data architecture, lifecycle management, disaster recovery, data loss recovery, and master data management. Why do we focus so much on data management? One of the top reasons under Corporate Digital Responsibility is sustainability. IT contributes significantly to sustainability concerns, and data infrastructure is a major contributor.
Corporates must ensure that data is managed efficiently—no duplicate data, no redundant data, and no unnecessary data—because all of this increases IT infrastructure load and contributes to greenhouse gas emissions.
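As a minimal illustration of eliminating duplicate data, here is a hedged Python sketch that groups files by content hash so exact duplicates can be reviewed and removed. This is not from the talk; the approach (SHA-256 over file contents) is one common technique, and a real deduplication pipeline would also handle near-duplicates and records inside databases.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under `root` by content hash; any group with more
    than one entry holds byte-identical duplicates."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    # Keep only the hashes that occur more than once.
    return {h: paths for h, paths in groups.items() if len(paths) > 1}
```

Each surviving group can then be reduced to a single canonical copy, directly cutting storage footprint and the associated energy use.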
Data quality is another critical pillar. From a data perspective, this means right data, complete data, and consistent data. From an AI perspective, it means having the right samples, the right data types, and enough data to support correct decision-making while avoiding bias. This requires identifying the right data sources, extracting and transforming data correctly so it contributes meaningfully to decision-making and AI outcomes.
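The completeness and consistency checks described above can be sketched as simple rule-based validation. The field names and thresholds below are illustrative assumptions, not part of the talk; production data-quality tooling would draw such rules from a governed rule catalogue.

```python
REQUIRED_FIELDS = {"customer_id", "email", "age"}  # assumed schema for illustration

def check_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in one record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    age = record.get("age")
    if age is not None and not (0 <= age <= 120):
        issues.append(f"age out of range: {age}")
    email = record.get("email")
    if email is not None and "@" not in email:
        issues.append(f"malformed email: {email}")
    return issues

def quality_report(records: list[dict]) -> dict:
    """Summarise completeness/validity issues across a batch of records."""
    flagged = {r.get("customer_id", i): check_record(r)
               for i, r in enumerate(records)}
    flagged = {k, v} if False else {k: v for k, v in flagged.items() if v}
    return {"total": len(records), "flagged": len(flagged), "issues": flagged}
```

Running such checks at ingestion time is one way to catch the incomplete or inconsistent records before they reach model training.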
Analytics is another key pillar. Once data is prepared, it is used for analysis—ranging from basic data discovery to advanced machine learning and deep learning. Creating, training, and tuning AI models all depend on data quality, formats, and architecture. Dark data assessment is also important—data that exists but is not visible or used. Organizations should assess whether value can be derived from it.
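A first pass at the dark-data assessment mentioned above can be as simple as finding data nobody has touched in a long time. The sketch below flags files unmodified for a given number of days; the threshold is an illustrative assumption, and a real assessment would also consult access logs and catalogue metadata.

```python
import time
from pathlib import Path

def stale_files(root: str, days: int = 365) -> list[Path]:
    """List files not modified in `days` days: candidates for a
    dark-data review (archive, derive value, or delete)."""
    cutoff = time.time() - days * 86400
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]
```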
Governance plays a critical role. The difference between data governance and data compliance can be summarized simply: you govern data to make it compliant. Data governance involves defining policies and standards across data quality, risk management, usage, storage, and lifecycle management. Governance frameworks and policy documents must be updated regularly to ensure compliance with regulations such as India’s DPDP Act and global standards like GDPR, supported by regular audits.
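One way such policies become auditable is policy-as-code. Below is a hedged sketch of a retention-policy check; the dataset types and retention periods are invented for illustration and do not reflect any specific DPDP or GDPR requirement.

```python
from datetime import date

# Illustrative retention policy (days); real values come from the
# organisation's governance framework and applicable regulation.
RETENTION_DAYS = {"transaction_log": 365 * 7, "marketing_consent": 365}

def past_retention(dataset_type: str, created: date, today: date) -> bool:
    """True if a dataset has exceeded its retention period and should
    be purged or escalated for review."""
    limit = RETENTION_DAYS.get(dataset_type)
    if limit is None:
        return False  # no policy defined; a real system would flag this gap
    return (today - created).days > limit
```

A scheduled audit job can run such checks across the data catalogue and report violations, which is exactly the kind of regular audit the framework calls for.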
Data security is another major pillar. Security exists at multiple levels—endpoint security, network security, identity and access management, encryption, and data masking. Ransomware is currently the biggest threat. In a recent CIO workshop, 80% identified ransomware as their top security concern.
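To make the data-masking point concrete, here is a minimal sketch of masking two common PII fields for display or logging. The masking rules (last four digits of a card number, first character of an email local part) are common conventions used here as assumptions, not a compliance standard.

```python
def mask_pan(card_number: str) -> str:
    """Mask a card number, keeping only the last four digits visible."""
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def mask_email(email: str) -> str:
    """Mask an email's local part, keeping the first character visible."""
    local, _, domain = email.partition("@")
    visible = local[0] if local else ""
    return f"{visible}{'*' * max(len(local) - 1, 0)}@{domain}"
```

Masking like this complements, rather than replaces, encryption at rest and in transit: masked values are safe to show to operators who do not need the raw data.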
Continuous monitoring is essential. Threat detection, response mechanisms, honeypots, fake assets, and incident response plans must be in place. None of this is possible without a strong data organization and culture, which must be driven from the top—at the CEO and leadership level.
Data governance teams, including Chief Digital Officers and Chief Data Officers, play a key role. Organizations should adopt a "data-first" policy: decisions must be data-driven, and if data is missing, decisions should be revisited. Leadership must provide vision, ensure data literacy at all levels, and define how data is created, consumed, maintained, accessed, and shared across the enterprise.
One important concept is data products. For example, in a bank, different departments often create their own versions of a “customer 360” view, leading to inconsistency. A single, shared customer 360 data product across the organization ensures consistency and avoids duplication of effort.
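The single shared customer 360 view can be sketched as one merge function that every department consumes, instead of each building its own. The department sources and field names below are illustrative assumptions for the banking example.

```python
from collections import defaultdict

def build_customer_360(*sources: list[dict]) -> dict[str, dict]:
    """Merge records from several departmental sources into one
    consolidated view per customer_id, so all consumers see the
    same data product."""
    view: dict[str, dict] = defaultdict(dict)
    for source in sources:
        for record in source:
            cid = record["customer_id"]
            view[cid].update(record)  # later sources override earlier ones
    return dict(view)
```

Because every team calls the same function over the same sources, the inconsistency between departmental "customer 360" copies disappears by construction; conflict-resolution order (which source wins) becomes an explicit, governed choice.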
All of this leads to data-driven decision-making and forms the foundation for an effective AI framework. The goal of building a data foundation is not just governance, but also business acceleration—shifting focus from data operations to data innovation.
With a strong data foundation, AI frameworks can deliver better outcomes. Key AI principles include fairness and non-discrimination, transparency in model design and assumptions, explainability of AI outputs, accountability, and liability. Environmental sustainability is also important—AI systems consume significant resources, and every AI query contributes to energy use and emissions.
We must take responsibility for how AI is used and how much it consumes. That is all from my side. I have tried to keep it high level, and I would be happy to discuss details further. Thank you.