Q: Merve, can you give me an idea of what it is like being an AI ethicist?

A: Being an AI ethicist is definitely an uphill battle. You see the impact of AI-powered decision-making tools all around us, but people don't necessarily understand the consequences of that. I think we need to draw a distinction between the people who are involved in this from a professional or advocacy point of view: people who have a better understanding of the technologies and are more literate in terms of the technology, the impact of big data and the consequences of the use of that data; versus people in the mainstream who are interacting with these technologies, voluntarily or involuntarily, intentionally or unintentionally, but without necessarily understanding what kind of impact the collection of data, as well as the processing and use of that data through AI systems, has on them, and what kind of decisions are being made about them. So being an AI ethicist, at least for me, is a lot about AI advocacy and awareness raising.

Q: When you speak about AI ethics, I think one argument that you generally hear is a lack of a multicultural approach. What are your thoughts on that? 

A: I agree with that statement: there's a lack of diversity. I actually wrote a journal article on that, reviewing some of the principles. Right now, you see that a lot of them are Western- or US-centric. A lot of them are coming from research institutes or organizations funded by private companies, or from private companies themselves. In addition, among the frameworks that are coming out, you don't necessarily see much that falls outside of that Western or US-centric approach. That is also reflected in the people involved in this work, when you look at authorship and at the people you mainly see at conferences, representation at conferences, and so on. So you don't see different ethical perspectives either. You don't see much of the impact, or use, of, say, Buddhism or Confucianism, or Latin American perspectives and social approaches from those regions, which I think we should be very careful and very intentional about changing. This is because, at the end of the day, we say garbage in, garbage out. I'm not saying garbage in the sense of input from people, but diversity really matters. If the perspectives you bring in are focused on certain areas, then the outputs of those teams are definitely reflected in the principles, the products and the services that come out. If you have never had experiences from different regions, from different cultures, you will not be able to reflect on, or even think about, the impact of your technologies on those groups.

Q: You are the founder of an organisation called AIethicist.org. Can you give me a brief overview of what your organisation does and the bigger role it will be playing going forward?

A: AIethicist.org was kind of born out of my frustration when I first came into the field and was interested in AI ethics and responsible AI. It was really hard and time-consuming to get a holistic picture of the discussions and debates on bias, fairness, justice and AI systems. So first of all, I created the platform to help people such as researchers or advocates coming into the field of AI ethics, or interested in responsible tech or responsible AI, to get a view of where the ecosystem is, and then to focus on a few areas, say bias, fairness, explainability. I created a list of research papers and reports on these areas that one should be aware of. My work under AIethicist.org is three-pronged. One is awareness raising and advocacy. Like I said, trying to spread this knowledge of the consequences and implications of AI on individuals, as well as on society in general, and what that means for social justice. The second is capacity building. I do training, whether that's self-paced online courses or going into organisations and building capacity within teams, to help people build these considerations into their projects and their product life cycles. I try to give them the tools and methods to have these discussions in their own organisations, so they don't always depend on consultants or external people to come in. And the third prong is governance: how can I contribute towards the governance of AI systems and autonomous systems? As you might be aware, there are not many methods or established regulations or standards around AI systems yet. It's definitely a work in progress, and there's not much out there that has been widely accepted. So I contribute to a number of international organizations that are building international standards on AI systems. I'm also developing an AI audit framework. So, like I said, AIethicist.org is very much there to help people who are interested in research or in coming into the field.
I also come from an HR technology background. As a VP of HR responsible for recruitment technologies and diversity recruitment at Bank of America Merrill Lynch, I was in London for five years, co-heading the team that recruited from colleges across the region, as well as from the Asia-Pacific countries where we had offices. So there's also a niche focus on AI in HR and recruitment systems: how can we have better systems, and what are the ethical impacts of these systems not only on individual candidates, but on the labour force and the labour market in general?

Q: What will be your advice for a student who wants to pursue a career as an AI ethicist?

A: First of all, being an AI ethicist, or trying to build these systems and methods inside organisations, really requires a multidisciplinary approach and multidisciplinary interaction. So if you're already in a computer science or engineering degree, I would suggest trying to be more conscious of classes where you can get the social, racial and historical context of data, or of systems of technology, as well. If you're coming from a humanities or social sciences background, then try to get a basic understanding of what AI is, how machine learning works, and what the biggest tensions in these technologies are: basically, try to get an understanding of the technology itself. I'm lucky that I come from a social science background, but my career took me through a lot of technology work. So I can speak the language – I can't code, but I can speak the language – and I understand the interactions of the technology. That really helps me when I'm talking to project managers, developers or product managers, and I can then turn around and translate it to the others in the team in more mainstream language. But if you're going to be an AI ethicist, you are going to be one of those people sitting in the middle. So having that multidisciplinary background, whether from college or from coming into an organisation and getting a taste and understanding of different departments, is extremely crucial, so you don't end up talking past each other when you're trying to explain something.

Q: Does the job of an AI ethicist stop at addressing bias or fairness, or are we also supposed to be advocating for AI tools not to be misused by bad actors?

A: I think it's both; they're not mutually exclusive. One of the things I like to say when I'm talking about my role, whether in an organisation or as a consultant, is that I try to make myself obsolete eventually. As someone trying to build capacity, my dream is to one day have this built and embedded in project life cycles and in business culture – like second nature. I think it's going to take a bit of time. But once you reach that critical mass of investors, CEOs and developers, as well as consumers and citizens, who understand the consequences of AI systems, I think there will be more demand for change and for more responsible and ethical design and deployment of systems. And then it will be more mainstream in organisations. That's when the AI ethicist would eventually go away, because everyone would have that lens, whether you're a project manager, a developer or a CEO. I sincerely would like to see the AI ethicist job become obsolete at some point. But KPMG lists it as one of the top five jobs of the 2020s.
