Responsible AI

Balancing Ethics, Equity and Sustainability to Achieve Responsible AI

Rackspace Technology is setting standards and policies, and providing governance and oversight, to ensure the responsible and ethical use of AI. We are committed to using technology in a way that is consistent with the ethical principles of accountability, fairness, transparency and trustworthiness.

ChatGPT from OpenAI has been instrumental in bringing AI into the mainstream consumer market. The record pace of adoption of the product, and of the underlying generative AI technologies built on other foundational models covering language, code and visuals, has simultaneously accelerated both innovation and misuse of these technologies.

This has led some companies to impose all-out bans on the use of these technologies and products on corporate systems. At the current rate of adoption, and given the benefits of the responsible use of AI, complete bans are likely to be ineffective. Instead, it is more prudent for organizations to implement controls and create a culture of responsible AI use by raising awareness and providing training that includes guidance on the best uses of these technologies and warnings against potential misuses.

How do we define responsible use of AI?

Responsibility, unlike accountability, is not limited to individual or co-ownership; it is collective ownership. Responsibility exists when one is authentically aligned with one's word. Applied to AI, this means requiring developers and users of AI systems to consider the potential impact of these systems on individuals and society, to take ownership, and to take steps to mitigate any negative consequences, so that collectively, as an organization, we make the right choices.

Responsible AI refers to the development, deployment and use of AI in a way that is ethical, trustworthy, fair and unbiased, transparent, and beneficial to individuals and society as a whole. The focus should span innovation, improved productivity and the use of the technology as a tool to eliminate inherent social biases. It is about using AI as a decision-support system, not as a decision-maker.

Building a chain of trust in AI

AI systems inherently encode multiple layers of information as part of the deployed models. This makes it challenging to clarify which information is shared with the underlying platforms behind services that integrate AI functionality. Foundational models are available from a variety of sources, ranging from proprietary to open source. These are then extended and incorporated into various functional models, which are in turn incorporated into products.

Take, for example, GitHub Copilot. At the end-user level, developers using Copilot for AI pair programming need to be cognizant of the proprietary IP and data they share with the platform to co-create code. Individual users are the first link in the chain. Organizations must adopt policies and governance for the usage of GitHub Copilot; that is the second layer of trust, the trust an organization places in the use of a software product.

In turn, GitHub uses OpenAI Codex as the underlying foundational model, so GitHub places the next link in the chain of trust with OpenAI. To use the service responsibly, we must understand what GitHub collects, as well as what the underlying platform, OpenAI in this case, collects. This necessitates building that chain of trust, where GitHub trusts that OpenAI is doing the right thing, and we trust that GitHub and our co-workers are using the service responsibly.
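
To make the layering concrete, the chain can be modeled as a simple data structure. The sketch below is purely illustrative: the `TrustLink` type, the layer names and the obligations are our own assumptions for discussion, not part of any GitHub or OpenAI API.

```python
from dataclasses import dataclass


@dataclass
class TrustLink:
    """One link in the chain of trust around an AI-enabled service."""
    party: str               # who is being trusted at this layer
    obligations: list[str]   # what they are trusted to do


# Illustrative chain for the GitHub Copilot example above; the wording
# of each obligation is hypothetical, for discussion only.
copilot_chain = [
    TrustLink("Individual developer",
              ["Keep proprietary IP and sensitive data out of prompts"]),
    TrustLink("Organization",
              ["Adopt usage policies and governance for Copilot"]),
    TrustLink("GitHub",
              ["Disclose what code context and telemetry it collects"]),
    TrustLink("OpenAI (Codex)",
              ["Handle submitted data according to its published terms"]),
]

# Walk the chain so each layer's obligations are explicit and auditable.
for i, link in enumerate(copilot_chain, start=1):
    print(f"Link {i}: {link.party}")
    for obligation in link.obligations:
        print(f"  - {obligation}")
```

Making each layer's obligations explicit in this way is the precondition for auditing whether every link in the chain is actually holding up its end.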


Our approach to the responsible use of AI within the organization

There is a lot of excitement about the new capabilities introduced by generative AI. There is increased demand to improve productivity in the hybrid workplace through the use of AI co-pilots, as well as a desire to build an intelligent enterprise with secure, affordable and scalable AI models. We want to do all of this in a responsible manner.

We’ve taken a few steps to make our policies applicable, actionable and tied to our use cases. Here are some guidelines we followed to do this:

  • Keep policies simple and easy to understand.
  • Define data classification policies and provide guidance that includes concrete examples of information classification and the secure use of data (see the sketch following this list).
  • Educate and empower teams with responsible AI principles and guidelines, and contextualize the policies with real-world examples.
  • Implement a process for monitoring the ethical usage of AI.
  • Create a governance council that can triage and validate the application of policies and make regular updates to them.
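
As one way to make the data classification guidance actionable, the minimal sketch below gates a prompt before it leaves the organization for an external AI service. It is a hypothetical example: the labels, regex patterns and function names are assumptions, not part of any Rackspace tooling, and a production system would rely on a proper data-loss-prevention or classification service rather than hand-written patterns.

```python
import re

# Hypothetical classification labels; a real policy would map these to the
# corporate information-classification standard.
BLOCKED_LABELS = {"confidential", "regulated"}

# Naive illustrative patterns, standing in for a real DLP/classification service.
PATTERNS = {
    "confidential": re.compile(r"(?i)\b(internal only|trade secret)\b"),
    "regulated": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g., US SSN format
}


def classify(text: str) -> set[str]:
    """Return the set of classification labels that match the text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}


def safe_to_send(text: str) -> bool:
    """Allow a prompt to leave the organization only if no blocked label matches."""
    return not (classify(text) & BLOCKED_LABELS)


if __name__ == "__main__":
    prompt = "Summarize this document marked INTERNAL ONLY."
    if safe_to_send(prompt):
        print("OK to send to the AI service.")
    else:
        print("Blocked: prompt matches a restricted classification.")
```

A gate like this could sit in a proxy or client in front of approved AI services; the point is that the concrete examples in the guidance can translate directly into enforceable checks.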


Our core AI policies:

  • Governance and oversight: We’ve formed a committee and defined owners to provide oversight, compliance, auditing and enforcement of the AI standard.
  • Authorized use of software: The use of AI software is subject to the same global purchasing and internal use oversight that we apply to other software applications.
  • Responsible and ethical use: We encourage the ethical use, supervision and explainability of AI models. This includes ensuring validity, reliability, safety, accountability and transparency, as well as explainability and interpretability, fairness, and the management of harmful bias.
  • Confidential and sensitive information: We have implemented information classification standards and provided clear guidance on the usage of AI services to ensure the proper protection of intellectual property, regulated data and confidential information.
  • Data retention, privacy and security: We follow data management and retention policies and maintain compliance with corporate security and data privacy policies.
  • Reporting: Employees are encouraged to report violations of the AI standard in good faith.


Conclusion

In summary, here are the core guiding principles that help us create policies and build a socially responsible environment, one that fosters innovation and prevents the misuse of AI and its negative implications.

  • AI for good – Create and use AI for the collective good and limit harmful factors.
  • Eliminate bias – Be fair and remove bias through algorithms, datasets and reinforcement learning.
  • Accountable and explainable – We must hold ourselves accountable for any use of AI and its derivative uses, and employ explainability as a foundation for any model-building process.
  • Privacy and IP – We will maintain secure use of corporate data and intellectual property.
  • Transparency – The use of models and datasets will be well cataloged and documented.
  • Ethical use – We will monitor and validate the ethical use of datasets and AI.
  • Improved productivity – We will focus AI adoption efforts on improving productivity, increasing operational efficiencies and spurring innovation.



Capitalize on the power of AI, quickly and responsibly with Foundry for AI by Rackspace Technology (FAIR™).

FAIR™ is at the forefront of global AI innovation, paving the way for businesses to accelerate the responsible adoption of AI solutions. FAIR aligns with hundreds of AI use cases across a wide range of industries while allowing for customization through the creation of a tailor-made AI strategy that’s applicable to your specific business needs. Capable of deployment on any private, hybrid or hyperscale public cloud platform, FAIR solutions empower businesses worldwide by going beyond digital transformation to unlock creativity, unleash productivity and open the door to new areas of growth for our customers.



About the Authors

Nirmal Ranganathan

Chief Architect - Data & AI


Nirmal Ranganathan is the Chief Architect – Data & AI at Rackspace Technology, responsible for the technology strategy and roadmap of Rackspace's Public Cloud Data & AI solutions portfolio, working closely with customers, alliances and partners. Nirmal has worked with data for the past two decades, solving distributed-systems challenges involving large volumes of data, advocating for customers and helping them solve their data challenges. He consults with customers on large-scale databases, data processing, data analytics and data warehousing in the cloud, providing solutions for innovative use cases across industries that leverage AI and machine learning.

Joanne Flack

VP, Deputy General Counsel & Chief Privacy Officer


Joanne Flack is Vice President, Deputy General Counsel & Chief Privacy Officer at Rackspace Technology. As part of her role, Joanne is responsible for all privacy, cybersecurity, enterprise risk management, intellectual property, product development and technology legal matters at Rackspace Technology. This includes overseeing appropriate corporate governance and compliance strategy for the responsible use of AI, both within Rackspace Technology and in Rackspace Technology’s multicloud solution portfolio. Joanne is an award-winning legal executive with two decades of in-depth experience in the technology sector. She is a US-licensed attorney, a solicitor in England & Wales, an experienced privacy and cybersecurity professional (IAPP Fellow of Information Privacy - FIP, CIPP/US, CIPP/E, CIPT, CIPM; (ISC)² CC, & CISSP candidate), and a COSO certified enterprise risk manager. Joanne is an experienced public company executive, holds an MBA from Imperial College London, and is an NACD certified corporate director.
