Women in AI: Miriam Vogel highlights the need for responsible AI
To shine a much-deserved and long-overdue spotlight on women researchers and others focused on AI, TechCrunch has been publishing a series of interviews highlighting notable women who have contributed to the AI revolution. As the AI boom continues, we’ll be publishing these stories throughout the year to highlight important work that often goes unrecognized. Find more profiles here.
Miriam Vogel is CEO of EqualAI, a nonprofit founded to reduce unconscious bias in AI and promote responsible AI governance. She also chairs the recently launched National AI Advisory Committee, which is mandated by Congress to advise President Joe Biden and the White House on AI policy, and teaches technology law and policy at Georgetown University Law Center.
Vogel previously served as Associate Deputy Attorney General at the Department of Justice, where she advised the Attorney General and Deputy Attorney General on a wide range of legal, policy, and operational issues. A board member of the Responsible AI Institute and senior advisor to the Center for Democracy and Technology, Vogel has advised White House leadership on initiatives ranging from women’s, economic, regulatory, and food safety policy to criminal justice issues.
Briefly, how did you get your start in AI? What attracted you to this field?
I started my career in government by interning for the Senate the summer before 11th grade. I developed an interest in policy and spent the next few summers working in Congress and then the White House. My focus at that point was civil rights. This is not a traditional path into artificial intelligence, but in retrospect, it makes perfect sense.
After law school, I transitioned from being an entertainment attorney specializing in intellectual property to working on civil rights and social impact issues in the Executive Branch. While working in the White House, I had the honor of leading the Equal Pay Task Force, and while serving as Associate Deputy Attorney General under former Deputy Attorney General Sally Yates, I led the creation and development of implicit bias training for federal law enforcement.
I was asked to lead EqualAI based on my experience as a technology lawyer and my background in policy addressing bias and systemic harm. I was drawn to the organization because I recognize that AI presents the next civil rights frontier: if we’re not vigilant, decades of progress could be undone in a few lines of code.
I have always been excited by the possibilities created by innovation, and I still believe that AI can provide incredible new opportunities for more people to thrive. But it can do so only if we are careful, at this critical juncture, to ensure that more people are meaningfully involved in its creation and development.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Fundamentally, I believe we all have a role to play in making AI as effective, efficient, and beneficial as possible. That means doing more to support women’s voices in AI development (women, incidentally, drive over 85% of consumer purchases in the US, so ensuring their interests and safety are incorporated is smart business strategy), as well as the voices of underrepresented populations of different ages, geographies, ethnicities, and nationalities.
As we work towards gender equality, we need to ensure that more voices and perspectives are taken into account so that we develop AI that works for all consumers, not just for its developers.
What advice do you have for women looking to enter the AI field?
First, it’s never too late to get started. Never. I encourage all grandparents to try OpenAI’s ChatGPT, Microsoft’s Copilot, or Google’s Gemini. To succeed in an AI-driven economy, we all need to become AI literate, and that’s a great thing: we all have a role to play. Whether you’re starting a career in AI or using AI to support your work, I encourage women to try AI tools, see what these tools can and can’t do, see whether they can help you, and become AI-savvy.
Second, responsible AI development requires more than just ethical computer scientists. Many people think the AI field requires a computer science or other STEM degree, but in reality, AI needs the perspectives and expertise of women and men from all backgrounds. Get involved. Your voice and perspective are needed. Your involvement matters.
What are the most pressing issues facing AI as it evolves?
First, we need to increase AI literacy. EqualAI is “AI net positive,” meaning we believe AI will bring unprecedented opportunities to our economy and improve everyday life, but only if these opportunities are equally available to and beneficial for the broader population. We need the current workforce, the next generation, and our grandparents, all of us, to acquire the knowledge and skills to reap the benefits of AI.
Second, we need to develop standardized measures and metrics for evaluating AI systems. Standardized evaluation is essential to build trust in AI systems and enable consumers, regulators, and downstream users to understand the limitations of the AI systems they are using and determine whether they are worthy of trust. Understanding who the system is being built for and what the intended use cases are can help answer the important question: “For whom could this fail?”
What issues should AI users be aware of?
Artificial intelligence is exactly that: artificial. AI is built by humans to “mimic” human cognition and assist in human pursuits. When using this technology, we must exercise appropriate skepticism and due caution to ensure we are placing our trust in systems that deserve it. AI can augment humanity, but it cannot replace it.
We must always keep in mind the fact that AI is made up of two main elements: algorithms (created by humans) and data (reflecting human conversations and interactions). As a result, AI reflects and adapts to human shortcomings. Bias and harm can be embedded throughout the AI’s lifecycle, whether through algorithms created by humans or through data, which is a snapshot of human lives. However, every touchpoint with humans is an opportunity to identify and mitigate potential harm.
Because people can only imagine as broadly as their own experience allows, and AI programs are limited by the constructs under which they are built, the more people with diverse perspectives and experiences there are on a team, the more likely the team is to spot biases and other safety concerns embedded in its AI.
What is the best way to build AI responsibly?
Building trustworthy AI is the responsibility of all of us. We can’t expect someone else to do it for us. We must start by asking three fundamental questions: (1) Who is this AI system being built for? (2) What are the intended use cases? (3) For whom could it fail? Even with these questions in mind, there will always be pitfalls. To mitigate these risks, designers, developers, and adopters must follow best practices.
At EqualAI, we promote good “AI hygiene,” which includes planning frameworks, ensuring accountability, standardizing testing, documentation, and regular audits. We also recently published a guide to designing and operationalizing a responsible AI governance framework, which lays out the values, principles, and structures for implementing AI responsibly in an organization. The white paper serves as a resource for organizations of any size, sector, or maturity level seeking to adopt, develop, use, and implement AI systems responsibly, both internally and publicly.
How can investors promote responsible AI?
Investors have a big role to play in ensuring AI is safe, effective, and responsible. Investors can ensure that companies seeking funding are aware of and thinking about mitigating the potential harms and liabilities of AI systems. Simply asking the question, “How have you implemented AI governance practices?” is a meaningful first step in ensuring better outcomes.
This effort is not only in the public interest, but also in the best interest of investors who want to ensure that the companies they invest in or partner with are not subject to bad publicity or litigation. Trust is one of the few non-negotiables for a company’s success, and a commitment to responsible AI governance is the best way to build and maintain public trust. Robust and trustworthy AI also makes good business sense.