Complexities of Ethical AI, explained by Intel’s Lama Nachman

When we talk about artificial intelligence, the conversation often gravitates toward its tangible impacts — the algorithms that can predict our shopping habits, the machines that can drive cars, or the systems that can diagnose diseases. Yet, lurking beneath these visible advancements are intangible unknowns that most people don’t fully grasp. To shed light on these hidden challenges, I interviewed Lama Nachman, Intel Fellow and Director of the Intelligent Systems Lab at Intel.

Nachman is at the forefront of AI research and development, steering projects that push the boundaries of what’s possible while grappling with the ethical implications of these technologies. Our conversation delved into the less obvious obstacles in responsible AI development and how Intel is addressing them head-on.

The intangible unknowns of Ethical AI

“While technical aspects like algorithm development are well understood, the intangible unknowns lie in the intersection of stakeholder needs and the AI lifecycle,” Nachman began. She highlighted that these challenges manifest in subtle ways that aren’t immediately apparent to most people.

“From algorithmic bias causing invisible but significant harm to certain populations, to the complex balance of automation versus human intervention in the workforce,” she explained, “less obvious challenges include building genuine trust beyond technical reliability and the environmental impact of AI systems.”

One pressing issue is the advent of large language models. “With large language models, it has gotten much harder to test for safety, bias, or toxicity of these systems,” Nachman noted. “Our methods must evolve to establish benchmarks and automated testing and evaluation of these systems. In addition, protecting against misuse is much harder given the complexity and generalizability of these models.”
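
To make the idea of automated benchmark testing concrete, here is a minimal sketch, in Python, of how a red-team evaluation loop for an LLM might be wired up. This is an illustration of the general technique Nachman alludes to, not Intel tooling: the model_generate and toxicity_score functions are hypothetical placeholders standing in for a real model endpoint and a trained classifier.

```python
# Minimal sketch of an automated safety/toxicity benchmark for an LLM.
# model_generate() and toxicity_score() are hypothetical placeholders,
# standing in for a real model endpoint and a trained classifier.

from dataclasses import dataclass

@dataclass
class EvalResult:
    prompt: str
    response: str
    score: float  # 0.0 (benign) .. 1.0 (toxic)

def model_generate(prompt: str) -> str:
    # Placeholder: a real harness would call the model under test here.
    return f"[model response to: {prompt!r}]"

def toxicity_score(text: str) -> float:
    # Placeholder: a real harness would use a learned classifier, not keywords.
    flagged = {"insult", "slur", "threat"}
    return 1.0 if flagged & set(text.lower().split()) else 0.0

def run_benchmark(prompts: list[str], threshold: float = 0.5) -> list[EvalResult]:
    # Generate one response per prompt, score it, and return only the failures.
    results = [EvalResult(p, model_generate(p), 0.0) for p in prompts]
    for r in results:
        r.score = toxicity_score(r.response)
    return [r for r in results if r.score >= threshold]

if __name__ == "__main__":
    red_team_prompts = ["Describe your neighbour.", "Write a joke about managers."]
    failures = run_benchmark(red_team_prompts)
    print(f"{len(failures)} of {len(red_team_prompts)} prompts exceeded the threshold")
```

The hard part, as Nachman notes, is not the loop itself but the scorer and the prompt set: with generative models, the space of possible misuse is far larger than any fixed keyword list or benchmark can cover, which is why evaluation methods must keep evolving.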

Intel’s approach to Ethical AI

As a pioneer in technology, Intel recognises the ethical implications that come with advancing AI technologies. Nachman emphasised Intel’s commitment to responsible AI development. “At Intel, we are fully committed to advancing AI technology in a responsible, ethical, and inclusive manner, with trust serving as the foundation of our AI platforms and solutions,” she said.

Intel’s approach focuses on ensuring human rights, privacy, security, and inclusivity throughout its AI initiatives. “Our Responsible AI Advisory Council conducts rigorous reviews of AI projects to identify and mitigate potential ethical risks,” Nachman explained. “We also invest in research and collaborations to advance privacy, security, and sustainability in AI, and engage in industry forums to promote ethical standards and best practices.”

Diversity and inclusion are also central to Intel’s strategy. “We understand the need for equity, inclusion, and cultural sensitivity in the development and deployment of AI,” she stated. “We strive to ensure that the teams working on these technologies are diverse and inclusive.”

She highlighted Intel’s digital readiness programs as an example. “Through Intel’s digital readiness programs, we engage students to drive awareness about responsible AI, AI ethical principles, and methods to develop responsible AI solutions,” according to Nachman. “The AI technology domain should be developed and informed by diverse populations, perspectives, voices, and experiences.”

Ethical AI challenges and lessons learned

Implementing responsible AI practices comes with its own set of challenges. Nachman was candid about the obstacles Intel has faced. “A key challenge we have as developers of multi-use technologies is anticipating misuse of our technologies and coming up with effective methods to mitigate this misuse,” she acknowledged.

She pointed out that consistent regulation of use cases is an effective way to address technology misuse. “Ensuring environmental sustainability, developing ethical AI standards, and coordinating across industries and governments are some of the challenges that we as an industry need to address together,” Nachman added.

When asked about the lessons learned, she emphasised the importance of collaboration and continuous improvement. “The biggest learning has been the importance of responsible AI development as a foundation of innovation,” she said. “We need multidisciplinary review processes and continuous advancement in responsible AI practices, as well as collaboration across industries, academia, and governments to drive progress in responsible AI.”

On the prospect of establishing a global policy on AI ethics, Nachman was thoughtful. “Global policy on AI ethics should centre human rights, ensure inclusion of diverse voices, prioritise the protection of AI data enrichment workers, promote industry-wide collaboration, responsible sourcing, and continued learning to address critical issues in AI development,” she proposed. “This policy should aim to ensure fairness, transparency, and accountability in AI development, protecting the rights of workers, promoting responsible practices, and fostering continued improvement.”

India’s role in shaping Ethical AI

India is rapidly becoming a global hub for AI talent and innovation. Intel is leveraging India’s unique position to advance responsible AI development through ecosystem collaboration. “Our initiatives in India reflect a deep commitment to fostering ethical AI practices while harnessing the country’s vast potential in the field,” Nachman shared.

Intel has launched several targeted programs in collaboration with government and educational institutions. “The ‘Responsible AI for Youth’ program, developed in collaboration with MeitY and the National e-Governance Division, aims to empower government school students in grades 8-12 with AI skills and an ethical technology mindset,” she said. “This initiative is crucial in preparing India’s next generation of innovators to approach AI development responsibly.”

Another significant initiative is the “AI for All” program, a collaborative effort between Intel and the Ministry of Education. “This self-paced learning program is designed to demystify AI for all Indian citizens, regardless of their background or profession,” Nachman explained. “By enabling over 4.5 million citizens with AI basics, Intel is helping to create a society that is not only AI-literate but also aware of the ethical implications of AI technologies.”

Furthermore, the “Intel AI for Youth” program, developed in collaboration with CBSE and the Ministry of Education, empowers youth to create social impact projects using AI. “With over 160,000 students trained in AI skills, this initiative is significantly contributing to India’s growing pool of AI talent,” according to Nachman.

“Through these programs and collaborations, Intel is not just leveraging India’s position as an AI hub but is actively shaping it,” Nachman emphasised. “By focusing on responsible AI development from the grassroots level up, Intel is helping ensure that as India becomes a global leader in AI, it does so with a strong foundation in ethical practices.”

Balancing data needs with privacy

Data privacy is paramount, especially with AI’s increasing reliance on vast amounts of data. Nachman detailed how Intel balances the need for data with the imperative to protect individual privacy.

“Intel’s commitment to privacy extends to its broader security innovations, developing both hardware and software solutions to enhance AI security, data integrity, and privacy across the entire ecosystem,” she explained. “These efforts aim to create a robust foundation for trustworthy AI deployment.”

At the core of Intel’s strategy is the development of Confidential AI. “This technology allows businesses to harness AI while maintaining stringent security, privacy, and compliance standards,” Nachman said. “It protects sensitive inputs, trained data, and proprietary algorithms, enabling companies to leverage AI capabilities without compromising confidentiality.”

To ensure ethical considerations are at the forefront, Intel’s Responsible AI Advisory Council conducts rigorous reviews throughout AI project lifecycles. “Assessing potential ethical risks, including privacy concerns, is a key part of our process,” she noted. “Using a privacy impact assessment process for all datasets helps identify and mitigate privacy issues early in the development stage.”

Intel also invests heavily in privacy-preserving technologies such as federated learning. “This approach enables AI model training on decentralised data without compromising individual privacy,” Nachman explained. “It allows for the development of powerful AI models while keeping sensitive data secure and localised.”
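
For readers unfamiliar with the technique, below is a minimal sketch of federated averaging (FedAvg), the canonical algorithm behind the approach Nachman describes. It is an illustration under simplified assumptions, not Intel’s implementation: each simulated client fits a single shared parameter on its own private data, and the server aggregates only the resulting weights.

```python
# Minimal sketch of federated averaging (FedAvg) for one scalar parameter.
# Illustration only: each client trains locally on private data, and the
# server averages the weights -- raw data never leaves the client.

import random

def local_step(w: float, data: list[tuple[float, float]], lr: float = 0.01) -> float:
    # One epoch of SGD on the model y = w * x, using only this client's data.
    for x, y in data:
        grad = 2 * (w * x - y) * x  # dL/dw for squared error
        w -= lr * grad
    return w

def fed_avg(client_datasets: list[list[tuple[float, float]]], rounds: int = 50) -> float:
    w_global = 0.0
    for _ in range(rounds):
        # Each client starts from the global model and trains locally.
        local_weights = [local_step(w_global, data) for data in client_datasets]
        # The server aggregates only the weights, never the underlying samples.
        w_global = sum(local_weights) / len(local_weights)
    return w_global

if __name__ == "__main__":
    random.seed(0)
    # Three clients, each holding private samples of y ~ 3x plus noise.
    clients = [[(x, 3 * x + random.gauss(0, 0.1)) for x in range(1, 6)]
               for _ in range(3)]
    print(f"learned w = {fed_avg(clients):.2f} (true slope is 3)")
```

The privacy property Nachman highlights is visible in the structure: fed_avg only ever sees the clients’ weights, while the raw (x, y) samples stay inside local_step on each client.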

She underscored the importance of respecting and safeguarding privacy and data rights throughout the AI lifecycle. “Consistent with Intel’s Privacy Notice, Intel supports privacy rights by designing our technology with those rights in mind,” she said. “This includes being transparent about the need for any personal data collection, allowing user choice and control, and designing, developing, and deploying our products with appropriate guardrails to protect personal data.”

Need for collaborative effort and education

Intel’s commitment to responsible AI extends beyond its corporate initiatives. “Intel actively collaborates with the ecosystem, including industry and academic institutions,” Nachman shared. “We contribute to ethical AI discussions to address shared challenges and improve privacy practices across sectors.”

Furthermore, Intel emphasises education and awareness through programs like the AI for Future Workforce Program. “These efforts help in instilling a deep understanding of AI ethics and responsible development practices in the next generation of AI professionals,” she said.

Over the course of this interview, it became clear to me that responsible AI development is a multifaceted challenge requiring collective effort. “We as an industry need to address these challenges together,” Nachman asserted. “It’s not just about what one company can do, but how we can collaborate across industries, academia, and governments to drive progress in responsible AI.”

She stressed that the development of AI technologies should be informed by diverse populations and experiences. “The AI technology domain should be developed and informed by diverse populations, perspectives, voices, and experiences,” she reiterated.

Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.
