So on your first question, I think you're right that policymakers should actually define the guardrails, but I don't think they need to do that for everything. I think we should pick the areas that are most sensitive. The EU has called them high-risk. And maybe we can take some models from that which help us think about what is high-risk, where we should spend more time, and potentially where policymakers should spend that time together with us.
I'm a big fan of regulatory sandboxes when it comes to co-design and co-evolution of feedback. I have an article coming out in an Oxford University Press book about an incentive-based rating system that I could talk about in a moment. But I also think, on the other hand, that all of you have to consider your reputational risk.
As we move into a much more digitally advanced society, it is also the duty of developers to do their due diligence. As a company, you can't afford to go out there and put out an algorithm, or an autonomous system, that you think is the best idea and then end up on the front page of the newspaper. Because what that does is undermine your consumers' trust in your product.
So what I'm saying, to both sides, is that I think it's worth a conversation where we have certain guardrails when it comes to facial recognition technology, because we don't have the technical accuracy when it's applied to all populations. When it comes to disparate impact with financial products and services, there are great models that I've found in my work in the banking industry, where they actually have triggers because they have regulatory bodies that help them understand which proxies actually produce disparate impact. There are areas where we've seen this, right, in the housing and appraisal market, where AI is being used to replace a kind of subjective decision-making but actually contributes more to the discrimination and predatory assessments that we see. There are certain cases where we really need policymakers to put guardrails in place, but more than that, to be proactive. I always tell policymakers that you can't blame data scientists if the data is terrible.
Anthony Green: Right.
Nicol Turner Lee: Put more money into R&D. Help us create better data sets that aren't overrepresented in certain areas or underrepresented when it comes to minority populations. Most importantly, we have to work together. I don't think we'll have a good winning solution if policymakers lead this on their own, or if data scientists lead it on their own in certain areas. I think you really need people working together and collaborating on what those principles are. We create these models. Computers don't. We know what we're doing with these models when we create algorithms or autonomous systems or ad targeting. We know! We in this room cannot sit back and say we don't understand why we use these technologies. We know, because they've set a precedent for how they've expanded in our society, but we need some accountability. And that's really what I'm trying to get at: who holds us accountable for these systems we're creating?
It's so interesting, Anthony, that over the last few weeks many of us have been watching the conflict in Ukraine. My daughter, because I have a 15-year-old, has come to me with a variety of TikToks and other things she's seen, saying, "Hey mom, did you know this is happening?" And I've had to pull myself back a bit, because I got really caught up in the conversation, not knowing that once I went down that path with her, I'd go deeper and deeper and deeper down that rabbit hole.
Anthony Green: Yes.