Apr 17, 2019 | 5 min read

Conversation with Professor Sander Klous

Podcast #55: Accounting as a Model For AI Ethics

Sander Klous is Professor of Big Data Ecosystems at the University of Amsterdam and Partner in charge of Big Data and Analytics for KPMG in the Netherlands. Our conversation explored the key issues of trust, governance and ethics in analytics and AI, following on his books We are Big Data and Building Trust in a Smart Society. He is currently working on creating an ethical framework that would enable auditing of analytics and algorithms in a way similar to how audited financial statements represent a trusted presentation of business results. He shares insights into the current thinking around ethics in AI, outlining key risks from analytics and the value of trust. Lastly, he shares examples of how the city of Amsterdam is leading the field in applying ethical frameworks to analytics.  

 

Recommendations:

Building Trust in a Smart Society by Sander Klous and Nart Wielaard 

We are Big Data: The Future of the Information Society by Sander Klous and Nart Wielaard 

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O'Neil 

AI in Control website 

 


We'll notify you bi-weekly about new podcast episodes, upcoming guests, and news. You can subscribe to the podcast and if you'd like to be considered to appear on the podcast contact us.

 

View Transcript

Welcome to another episode of our Momenta Podcasts. This is Ed Maguire, Insights Partner at Momenta, and today our guest is Professor Sander Klous, who is Professor of Big Data Ecosystems at the University of Amsterdam, and a partner in charge of Big Data and Analytics for KPMG in the Netherlands. He’s been doing quite a lot of work around the concept of, and the need for, ethical frameworks around big data analytics, and we’re excited to be able to dive into the topic and hear a bit more about the work he’s doing.

Professor, it’s great to have you join us.

Thank you very much.

Just to set the stage with a bit of context, could you share a bit of your background, and what have been the core experiences that have led you down your current path, and the current interest in big data and analytics?

I started to write about big data and advanced analytics a couple of years ago. I wrote a book called ‘We are Big Data’, where I was basically emphasizing the enormous impact of data, and the analysis of data, on our society; the key message of that book is in the title of course: we actually want this. And of course in the years that followed, you could see all kinds of side-effects that came with this data analysis. Privacy was one of the most familiar ones that became a discussion, but there are others like discrimination and reliability. So, these situations led to people distrusting the results of algorithms, and that for me was a very interesting notion. We wrote a report about it called Guardians of Trust, and there were some quite shocking results coming out of it; for example, it said that over ninety percent of executives didn’t have trust in the analytics within their own organization.

In my new book I started to think about how you actually do that: how do you build trust in such a smart society? In there I discuss what the potential solutions are, and interestingly enough people talk about solutions in terms of, we need to make better algorithms, or we need to become more transparent about the algorithms used, or we need to explain better what an algorithm does, and then people will probably start trusting it. But that’s not how trust works. Trust typically comes from experience, and then experience builds reputation, and then reputation builds brand. I always compare it to a pack of milk that you buy in the supermarket: you look at the expiration date and you trust it, and the question is why, because nobody ever researches how that date was calculated; you trust it because you buy a pack of milk of a certain brand, and you trust the brand, and the reputation of that brand, because you have had many positive experiences with that brand.

So, if that’s how trust works then you need to start doing something else, something that we are very familiar with at KPMG from an accountancy background. So, I started putting the pieces together, and then I thought, let’s come up with a different method to build trust. This is how I got my inspiration for working on building trust in a smart society.

Going even further back, in the early stages of my career I was a high-energy physicist. I worked for 15 years on an experiment at CERN in Geneva, colliding protons together; it is actually the highest energy that we as human beings are able to create, by colliding these protons together. I was part of the ATLAS collaboration that was searching for the Higgs boson, and we found it in 2012; it led to the Nobel Prize for Peter Higgs in 2013.

This is really an analysis factory, it’s almost like an industrial approach to science, if you want. There are 3,500 physicists working on that experiment. And that level of professionalism is something that I still carry with me, and it helps me to understand the difference between a proof of concept, where you try to demonstrate an idea, and a production environment, where the performance of an organization can depend on the outcome of algorithms. So, you need something very different in an industrial environment than you need in a small experimental environment.

I think these are the things that shaped me most.

Could you talk about what can go wrong, what are some of the issues that face us when we’re dealing with algorithms? You can think abstractly about this, but are there some common examples, or prominent examples that could illustrate what can go wrong with poorly designed analytics, or predictive algorithms?

I actually did some work on that. I wouldn’t call it scientific work, but at a certain point I was collecting headlines from the news about everything that went wrong with data or with analytics, and then I tried to group them into several topics, and it turned out that you can arrange them into six topics. The first one is a very familiar one, cyber-security related: something gets hacked and then information gets out on the street. It’s something that has been around for a long time, it’s not something you immediately associate with big data or artificial intelligence, but it is evolving in the era of big data and artificial intelligence; the incidents are becoming more impactful, if you like.

The second one is something which is also quite familiar to us, it’s data governance related; think about who can do what with the data, and who can do what with an analysis. That’s a very common topic that we already know from the past. What you see is that data and analysis are now stretching the boundaries of an organization further, and it becomes harder and harder to decide who can do what with data. So, you see typical mistakes there that involve, for example, a business developer who has a cool new idea of what can be done with data and starts implementing it, and it turns out that often the clients of that organization don’t expect it. We had an incident here in the Netherlands with a navigation system provider, where that data showed up at the national police, who in turn started to optimize the placement of their radar controls based on that data.

So, even though the business model there is interesting (you have data from navigation systems that the police is interested in, so you can build a new business model around it), they didn’t consider the side-effect of what would happen when that actually got out there, and what the impact would be on the sales of these navigation systems. So, here you can see a typical example of a data management issue: somebody had responsibility for the data, and it was not properly evaluated; that kind of issue didn’t exist in pre-big-data, pre-artificial-intelligence times.

The third one is very familiar, IT architecture-related stuff. What you see there is that because we are now living in a world with the Internet of Things, devices are in our daily lives and touch our daily lives many times during the day. Things like maintenance and system upgrades, things that you typically associate with servers in the basement, now become something that can affect your daily life; if you have a smart thermostat in your room, or a smart lock on your door, and an upgrade goes wrong, then the impact is completely different than when it hits a server in the server room of your office.

So, these three are classic things that evolve, but there’s no revolution around them. The other three themes that you see are much bigger steps, if you want. Usually they are grouped together under ‘ethics’, but I don’t think they are all ethics-related. One of them is more about reliability, I would say; think about how good an algorithm actually needs to be before you can deploy it in real life. Does it need to be just as good as a human being doing that task? A self-driving car: can it cause any accidents? Humans cause accidents, so what is the criterion for the algorithm?

The other thing is the interaction aspect of it. I remember a while ago there was an experiment in the US where Amazon Alexa was put in a classroom, and it turned out that the kids started to talk to Amazon Alexa in very clear commands, which is of course what Alexa responds best to, but they also started to interact with each other in clear commands, which is an unforeseen side-effect, and something that you probably don’t want with your kids. Which is why Amazon put the Magic Word mode on Alexa, so it only responds if you say ‘please’, and it only responds to the next question if you said ‘thank you’ to the previous answer. By that you can actually influence behavior in a positive way, or at least that’s the objective.

The last category is probably what you would refer to as ethics. This is something I learnt through a conversation I had with the head psychiatrist of the University Medical Centre of Utrecht, which is a city here in the Netherlands. I asked her, ‘Now that you have a computer that gives you advice on the treatment of a patient, that must make you feel uncomfortable as a doctor, because you don’t completely understand what’s behind that advice coming from the computer?’ And she said, ‘Well, actually that’s not the biggest problem I have, because currently I’m also using research from my colleagues, and that’s also something that I might not completely understand because I wasn’t involved in that research, but I still use their advice and their answers. So, for me it’s not much different whether advice I don’t completely understand comes from my colleagues, and I still use it in my treatments, or whether it comes from a computer. The biggest difference is that the computer gives me completely new insights, predictive insights. It will tell me for example that a patient will become violent within the next 24 hours’, and she gave an example where the computer system predicts with over 80 percent confidence that the patient will actually become violent within 24 hours.

Then the question becomes, ‘What do I do with that information?’ Of course you tell the patient, but the patient can respond in different ways; the patient can say, ‘Well, I don’t feel I’m getting violent at all, so I’m probably part of the other 20 percent’. Then should you be able to force somebody to take their medicine? And if they don’t take the medicine and they do become violent, it probably has a huge impact on the nurses that are treating that patient. If you look at the history of nurses who interact with patients that become violent, these are incidents that have a heavy impact on them, and they often lead to significant time off before they can get back to work.

So, do you tell these people that there is this score, even though it’s not a hundred percent score, which in turn triggers behavior with them, which will have an effect on the patient again? So, you get into this loop of all kinds of ethical situations that you now suddenly have to take into account. The interesting thing is, you cannot even avoid it; you have to make ethical decisions here because the technology is available, so even if you decide not to use the technology, that’s already an ethical decision. If the technology is not there, there is no ethical choice to make, but as soon as it is, every doctor has to make ethical decisions around this topic. That’s what is very interesting.

That’s fascinating. I want to turn back to the findings of the trust and analytics study, where the lack of awareness of potential risks was rather surprising. I would love to get your insights on what you see as a disconnect: on the one hand we have an enormous wave of interest in and reliance on analytics, and if you go back 10 years to Tom Davenport’s ‘Competing on Analytics’, and ‘Moneyball’, in many respects the vision of applying analytics for better business results runs all the way through GE, the Industrial Internet, and the Internet of Things. But on the flip side, as you’ve highlighted, there are real disconnects between how much trust people put in these systems and their awareness of the risks involved.

  1. How do you assess the level of awareness of risk?

 

  2. What are some steps that may need to be taken, in your view, to raise awareness and help close some of these gaps?

In my opinion, algorithms are all around us, and they are so embedded in our daily lives that sometimes you don’t even notice it anymore. There is even a term for it, it’s called ‘Invisible barbed wire’.

I think that’s a good term!

Yes, it’s like a barbed wire that is sort of guiding you through life without you even noticing that it’s there. Maybe what’s wrong with that phrase is that the barbed wire is not yet completely invisible. Sometimes you realize that you’re impacted by an algorithm in a way you don’t like. I always like the example of the navigation system that sends you into a one-way street from the wrong direction; at that moment you’re really frustrated with your navigation system, and you’re feeling annoyed. That annoyance that you’re feeling is actually the barbed wire: it’s an algorithm that is trying to impact your decision-making in a way that you don’t like. But give it five years, give it ten years, and that annoyance is gone; the algorithms will have developed beyond that point, and then the barbed wire is really invisible.

For me that means that we are living in a crucial moment in time, we are the generation that needs to think about the autonomy in decision-making; what decisions do we want to make ourselves? And, what decisions are we fine with if they’re taken over by algorithms, or technology? There are a couple of decisions that you’d probably want to take yourself, thinking about political decisions, or partner choice, and even there you already know that you’re impacted by algorithms, but at least you want to know that you’re impacted, and how you are impacted.

As I said, we are the generation that has to define what the boundaries are, and how we are going to make sure that the algorithms that are implemented comply with these boundaries. You don’t have the option to step out; it’s not that you can say, ‘Let’s not do this’. This is part of who we are, it’s part of how society develops. It’s almost like language: you cannot say, ‘Well, language can be used to do harm, so let’s not use language’. It’s just there, you have to deal with it and you have to make the most of it, and the same goes for algorithms and technology and the way we deal with them in our daily lives.

The approach that we chose is of course very closely associated with the approach that we have also seen with financial statements. About 100 years ago we had similar situations around financial statements: basically, there were mistakes in financial statements that led to distrust in society about those statements. That’s how the audit function appeared; you need an independent auditor that looks at these financial statements, and you need to put a framework in place that shows you are in control of how you built the financial statement. That’s the kind of thing we are developing right now for algorithms; we have developed it over the years, and we will be developing it further over the next years.

How do you actually put such a controlled environment in place, and what kind of checks and balances do you need? There are quite some similarities: the mechanisms that you use for auditing financial statements are similar to what you need to have in place if you want to have a look at algorithms. Think about materiality and relevance, or the four-eyes principle, where you have independence between the review that is done and the person that actually built the algorithm. Or the three lines of defense, where you have an operational layer, a risk management layer, and then an internal and external audit layer. They are just as applicable to financial statements as they are to algorithms.

So, yes, I think we can learn from what has been done in financial statement audits, and we can try to create similar frameworks, and apply them to algorithms as well.

That’s a fascinating comparison, and I hadn’t thought about it that way, but it makes perfect sense; a framework of generally accepted accounting principles, or IFRS principles. I’d be interested to get your perspective on what components or what inputs one would need to build an effective framework that can be applied in different situations. I guess I’m asking whether you might need different frameworks, or different applications, whether you’re doing an operational audit versus an autonomous system, versus a hiring system where there are people involved, or say a government system. What do you need to put together a framework that people can work with?

What we see is that there is one gigantic difference between annual financial statement audits and algorithms, which is the way in which they are created. A financial statement is typically produced in quite a thorough, well-documented process, and the controls mostly have to do with traceability and with being able to reconstruct exactly how a certain piece of information entered into that financial statement. Algorithms, on the other hand, are typically developed in an agile process. That means if you tried to apply similar operational procedures you would probably kill innovation altogether, and if you asked a team building an algorithm to document everything, and every decision they made while building it, there’s no way they could actually keep that up.

So, you need to connect to the controls that are already in the agile development methodology; for example, something like a definition of done. What you put in the definition of done can be discussed, and it can be associated with various types of risks, and at the end of each sprint you give product demos. So, you can adjust your product demos to make sure that you are also touching on all the risk areas that you need to touch on at the right time. So, there are all kinds of handles in the agile methodology to make sure that they connect to your risk management layer, and your second and third lines of defense.
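[Editor’s note: to make the idea concrete, here is a minimal sketch of how risk-related controls could be expressed inside an agile team’s definition of done. All check names, risk areas, and the data format are illustrative assumptions, not KPMG’s or any standard framework.]

```python
# Hypothetical sketch: risk controls embedded in a team's definition of done,
# so each sprint demo covers the relevant risk areas instead of relying on
# after-the-fact documentation. Names and checks are assumptions.
DEFINITION_OF_DONE = [
    {"check": "unit and regression tests pass", "risk_area": "reliability"},
    {"check": "bias/disparity report attached to sprint demo", "risk_area": "fairness"},
    {"check": "data sources and consent basis recorded", "risk_area": "data governance"},
    {"check": "independent reviewer sign-off (four-eyes principle)", "risk_area": "independence"},
]

def sprint_is_done(completed_checks):
    """completed_checks: set of check descriptions the team has evidence for.
    Returns (done?, missing checks) as evidence for the risk-management layer."""
    missing = [c for c in DEFINITION_OF_DONE if c["check"] not in completed_checks]
    return len(missing) == 0, missing

# Example: one check is missing, so the sprint is not 'done' yet.
done, missing = sprint_is_done({
    "unit and regression tests pass",
    "data sources and consent basis recorded",
    "independent reviewer sign-off (four-eyes principle)",
})
print(done, [m["risk_area"] for m in missing])  # False ['fairness']
```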

Now, to answer your question, it’s different for every algorithm and every team, right? At the moment, data science and the development of algorithms are still quite immature, I would say; there is no standardized approach, so everybody has developed their own implementation, and we feed that back when we are building these kinds of control frameworks, because we need to sit together with the teams and discuss with them, ‘How do you make sure that the algorithm is reliable?’ for example, or ‘How do you make sure that there is no unintended bias in the algorithm?’ Based on their answers you can build the control objectives, and you can build the operating procedures to actually make sure these control objectives are met. That’s different for an advanced algorithm, something with deep learning or machine learning, compared to a rule-based algorithm that is explainable on an individual level, and it’s also different from team to team; if it’s a team of three people that develops the algorithm, you see different procedures appearing than if it’s a team of 30 people.
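[Editor’s note: as one hedged example of an operating procedure behind a control objective such as “no unintended bias”, a team might compare outcome rates across groups and fail the control above a tolerance. Group labels, the data format, and the 1.25 threshold below are illustrative assumptions only.]

```python
# Minimal sketch of a disparity check a team could attach to its review.
from collections import defaultdict

def outcome_rates(decisions):
    """decisions: iterable of (group_label, positive_outcome: bool)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        if positive:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparity_control(decisions, max_ratio=1.25):
    """Fail if the highest group rate exceeds the lowest by more than max_ratio."""
    rates = outcome_rates(decisions)
    lowest, highest = min(rates.values()), max(rates.values())
    passed = lowest > 0 and highest / lowest <= max_ratio
    return {"rates": rates, "passed": passed}

sample = [("group_a", True), ("group_a", False), ("group_b", True), ("group_b", True)]
print(disparity_control(sample))  # rates differ 2x, so 'passed' is False
```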

So, yes, I completely agree with you that the standardization that is already there in financial statements, the control framework for financial statements, it’s not there yet for algorithm development. I’m pretty sure it will go that way and it will be mature, and there will be standardization at a certain point, at least far more than there is now. But it will always be an agile way of working, so you will have different controls, similar control objectives, but with different operating procedures to meet these controls.

As you look forward and these frameworks begin to be adopted, how do you see a company, whether compelled or voluntarily, disclosing that their AI or their analytics have been vetted for ethical concerns? Going forward, what do you see as the drivers of adoption of ethical frameworks? And do you see a role for a framework, or some sort of verification that companies have gone through a process, as important to a brand, or important to financial auditors, for instance? I think you’ve raised some really interesting points here because, when you look at financial statements that have been audited by a firm like KPMG, or one of the big four, you can be pretty confident that those numbers have gone through a disciplined vetting process, and then for investors that becomes table stakes to trust that the numbers are accurate.

But when you’re dealing with increasing automation of processes in an organization, do you expect there may be some sort of certification that, for instance, insurance companies may require, just as a part of risk management? Or would this even become an assertion that is a positive aspect of a brand, particularly in the case of the big Internet companies, where there’s growing concern about the impact of their algorithms as well?

The answer I think is both, so we see this driven from two sides. On the one hand you see regulatory requirements appearing, especially of course now around privacy; in Europe you have the General Data Protection Regulation, which covers a very small subset of the controls we’re talking about right here, but it’s a start. On the other hand, and for me this is the most interesting one, and I was surprised by the adoption rate, this is very much client-driven. So it’s exactly what you said: clients are demanding these kinds of statements from their providers. We’re working with the City of Amsterdam, and the City of Amsterdam already made a statement a couple of months ago that if you are a provider of an algorithm to the City of Amsterdam, they won’t buy it anymore if you don’t have an independent statement, or a statement from an independent party, that says the algorithm is actually doing what you say it’s doing.

This adoption actually goes much further than the regulatory part, because for the regulatory part you have to have independent standards, and it takes a long time to decide what these standards need to be, and how good an algorithm needs to be before you’re compliant. But from a client point of view it’s much more flexible: individual clients will start asking for this, and if sufficient individual clients start asking for it then the providers will probably reverse the process. That’s what you saw with data centers, for example. If you want to store your data in a data center, you want the right certification for how data is actually stored in that data center. The way that appeared was, first there were individual clients asking for it, and when there were too many individual clients asking for the certifications, the data centers said, ‘Well, let’s just generalize this. We will make sure that we have this type of certification, and then you can all see that we have done our job properly’.

So, that’s actually what I expect to happen here as well, first there will be major clients that will start asking for it, and if sufficient clients have started to ask for it, then the companies will say, ‘Well, we’re not going to do audits every time that you ask for it, so we will just do it once. We will have a standardized statement, and then that’s the statement that you can trust’.

Are there any industries or particular use cases that you anticipate may lead the effort to be able to show validation of their own processes?

I think the City of Amsterdam is actually a very interesting one. One of the reasons why I find it so fascinating is that usually government is lagging a bit behind with developments around technology, but cities especially are ahead of things, because they’re worried about the digital rights of their citizens, and they are pretty close to their citizens, so they see every day what this does to their citizens. So they are at the forefront of the developments. For example, in the City of Amsterdam there are various algorithms that are really impactful, and one is a school selection algorithm. Every major city has the same issue: the popular schools are all in high demand, so you cannot place everybody in the school they want to go to. That means that you need some sort of placement algorithm, and then of course you get questions: does the placement algorithm do what it’s supposed to do, and is there any bias in it?

How about the Mayor of the city: do the mayor’s children go through the same process, or is there some back door that they can use? These kinds of questions pop up, and these are serious questions that need to be addressed by the cities, and that’s why they are at the forefront of this. Another example: there’s an algorithm in the City of Amsterdam where, if you have any complaints about things that go wrong in the city, you can register them in various ways, and then an algorithm identifies what the complaint is about, where it needs to go, and how much priority it needs to get. In principle it sounds like quite an innocent algorithm, it’s a natural language processing algorithm, but if you think about it a bit longer there are all kinds of trickiness in there; for example, the areas in the City of Amsterdam that complain the most are the better parts of town, which is maybe unexpected, but it’s true. That means that there is a tendency to bias towards the better parts of town, and they are already better off than the rest of the town.

Also, the complaints that come from the better parts of town are written in a language that is more understandable to the natural language processing algorithm. So, the identification and the ranking of those complaints goes better than the ranking and identification of the problems coming from the other areas of Amsterdam. That also leads to a bias tendency, and you need to compensate for that to make sure that the algorithm remains fair in its assessment of priority per area. So, even with algorithms where you would ask at the beginning, ‘What can go wrong with that?’, if you think about it a little bit harder there are quite a few tricky elements in there.
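[Editor’s note: a hypothetical sketch of the compensation idea described above: if some areas structurally report more complaints, or their complaints are parsed better by the NLP model, priority scores can be re-weighted so the ranking stays fair across areas. Area names, reporting rates, and the inverse-rate rule are assumptions, not the City of Amsterdam’s actual method.]

```python
# Assumed compensation rule: weight each area inversely to its reporting rate.
def area_weights(reports_per_1000_residents):
    """Weight each area inversely to its reporting rate, scaled by the mean rate."""
    mean_rate = sum(reports_per_1000_residents.values()) / len(reports_per_1000_residents)
    return {area: mean_rate / rate for area, rate in reports_per_1000_residents.items()}

def adjusted_priority(raw_model_score, area, weights):
    """Combine the model's raw priority score with the area's fairness weight."""
    return raw_model_score * weights[area]

rates = {"area_high_reporting": 40.0, "area_low_reporting": 10.0}  # assumed rates
weights = area_weights(rates)
print(adjusted_priority(0.8, "area_high_reporting", weights))  # scaled down
print(adjusted_priority(0.8, "area_low_reporting", weights))   # scaled up
```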

That’s a fantastic example. I think you make a great point too, that certainly government is not always the most advanced in terms of adopting technologies but being able to require that vendors and people comply with policy, that’s certainly a very efficient way to ensure that adoption proceeds.

I’d also be interested to get your perspective on some of the broader efforts to establish a future ethical framework. I know we’ve been talking about algorithms and very practical near-term AI, but as you look a little further out, there have been some high-profile concerns about the impact of artificial general intelligence if it in fact emerges, and about what should be an appropriate set of ethics or principles for governing, for instance, autonomous systems in warfare and other critical life or death decisions. The 23 Asilomar principles, I think, provide at least a starting point there; but I’d love to get your perspective on the relevance of that type of effort, at least to the broader perception of AI, and also whether there are lessons or useful insights from those types of efforts that can apply to companies and organizations that are much more concerned with the here and now.

If I start talking about general AI: I’m not somebody who is a big fan of talking about it. It’s not that I think it will never appear, but for me it’s still a bridge too far to consider the consequences of it. I remember a phrase that one of my colleagues once said, which was, ‘People worry about algorithms being too smart and impacting their lives in ways beyond their imagination. But the real problem is that algorithms are too stupid, and they are already impacting our lives beyond imagination’. So, I think that’s the situation we are in right now. Having said that, to give you a couple of examples of why I think we are still far away from a situation where AI is going to be meaningful in a broad sense of the word, let’s take the definition of success. One of the greatest celebrations of artificial intelligence over the last couple of years was of course the AlphaGo machine, developed by DeepMind, that defeated the top Go player. If you think about it in terms of the definition of success, that’s a really easy one, because in a game, success is winning the game, and if you lose the game you don’t have success. So, it’s a black and white decision and it’s very easy to measure.

Real-life cases like the ones you’ve already mentioned, such as how you select the best candidate: you have to come up with a definition of the best candidate, and then it becomes very, very tricky, because what is the best candidate? If you cannot give a proper definition of the best candidate, how can you translate that into objective requirements? If you cannot do that, then basically your next best step is to take historical data and look at how humans made those decisions. But then of course you’re getting into a tricky area, because you already know the decisions of these humans have not been perfect either, so you need to think about all kinds of criteria which you need to filter out. Then the question is, how are you going to filter them out, because some of them might actually be appropriate. So, these kinds of discussions show that we are still a long way off understanding how AI would work, if you’re talking about general AI.
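[Editor’s note: a hedged sketch of the “filter out criteria” step mentioned above: before learning from historical hiring decisions, drop attributes that should not influence the model. Field names are assumptions for illustration; the hard part, deciding which correlated proxies also have to go, remains a human judgment.]

```python
# Hypothetical attribute filter applied to historical decision records.
EXCLUDED_ATTRIBUTES = {"gender", "age", "nationality", "postal_code"}

def strip_excluded(candidate_record):
    """Return a copy of the record without the excluded attributes."""
    return {k: v for k, v in candidate_record.items() if k not in EXCLUDED_ATTRIBUTES}

historical_records = [
    {"gender": "f", "age": 41, "years_experience": 12, "hired": True},
    {"gender": "m", "age": 29, "years_experience": 3, "hired": False},
]
training_data = [strip_excluded(r) for r in historical_records]
print(training_data)  # only years_experience and the historical decision remain
```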

I wrote a blog for the World Economic Forum a while back where we were discussing empathy, and whether AI can ever have empathy. You can of course have different interpretations of what empathy is; if empathy means it needs to listen very carefully, have a good understanding of what you’re saying, and respond with meaningful answers, then it’s just an evolution of what’s already going on, I think. But if empathy means that in some cases you have to go against rules and regulations, because it’s inhumane to actually take the decision according to those rules and regulations, then it becomes a very different discussion. It’s already in the word ‘inhumane’: AI is not human, so it also doesn’t understand what inhumane is. Then it becomes very hard to even define what we mean by inhumane, and putting it in another black box makes it even trickier.

We had a situation a while back here in the Netherlands with a couple of Armenian kids who had applied for asylum here; their applications were denied, but they had lived here for their entire lives and would have to be sent back to Armenia, where they had never lived before. The question was, are we going to do that, are we actually going to send these kids back? These were Dutch kids, and the general consensus of Dutch society was that this is inhumane, so they should stay here in the Netherlands, but according to every rule in the book they should go back. So, the Ministry eventually reversed the decision. My question then is, can we expect AI to reverse those decisions? That’s a very difficult definition and a very difficult direction to follow.

So, yes, I would like to keep it closer to the here and now. Then I’m thinking about how we deal with the definition of success. Cathy O’Neil is somebody that we work a lot with, she wrote the preface to my book, and she has developed something called ‘The Ethical Risk Matrix’, which gives you some handle on what is acceptable and what’s not acceptable, but it still puts a lot of responsibility on the organization itself. So, as an organization, in the end you have to decide for yourself what the limits are, because there is no general consensus on what these limits are.

Well, that’s been a really interesting discussion, fascinating. I’d like to ask one final question: could you share your final thoughts on the outlook for the work that you’re doing on the progress of an ethical framework, and what you’re optimistic about? And could you share any resources that our listeners could access for further interest, to learn more about the work that you’re doing?

The outlook for me is that in the years to come we will see some sort of short-cycle approach to developing these ethical standards. The reason I think it is going to be a short cycle is that I don’t expect any government or any organization to come up with a holistic standard framework. So, what’s probably going to happen is that there are going to be incidents, the incidents are going to go to court, and then the judge is going to assess, for example, who is responsible in the chain of AI decisions: is it the organization that built the algorithm, is it the party that put it into the device that you’re using, or is it the user of the device? Which is interesting in itself, because I talked to a few judges and they said, ‘Well, at the moment we’re not equipped to make that decision, so we also need to get additional knowledge and additional capability, and we need to think about what kind of information we need before we can take those decisions’.

Then with case law these boundaries are put in place, and some of these boundaries might not be consistent with what we as a society think is needed, and then politics will probably step in to put corrections in place so that it meets our standards. That will go on for a decade or so, and after a decade we will get close to a regulatory framework, or a compliance framework, that is more generally accepted. That’s the way I think this will go.

In terms of my research in this, we are doing a lot of research into what it actually means when you claim that news is spreading hatred, and what you mean when you make the statement that news is true or false, because these notions are also often subjective. So, that’s an interesting part of the research. The part I myself find most interesting is the data-management part, because you need to explain not only what an algorithm does (and probably you don’t even need to explain that, it’s the example we gave with the pack of milk); what you do need to explain is how much a person can trust an algorithm, and that’s never going to be 100 percent. You can make a statement that says, ‘We have not been able to identify any flaws in this algorithm’, which doesn’t mean that it’s never going to cause any harm. If it does cause harm, then basically people will have a negative experience with such an algorithm, and with the statements around the reliability of the algorithm, which in turn leads to reputation damage, eventually brand damage, and maybe even distrust.

So, this is something in the area of explainability that we really need to start tackling in the next few years: how do you manage expectations, not only around the performance of algorithms, but also around the statements that are made about the reliability, or the performance, of such algorithms?

Do you have a couple of books that are available? We’ll include links to that in the show notes as well.

Yes, you were asking about resources for further information, and of course I should plug my own two books: the one about big data and how it impacts our lives, ‘We are Big Data’, from 2015, and ‘Building Trust in a Smart Society’, which I published a couple of months ago. The latter dives into the details of how this trust mechanism works and associates it with a couple of organizational measures that you can take, in terms of modularity, agility, and decentralization, which contribute to trustworthy ways of developing algorithms and decision-making.

And of course, I shouldn’t forget Cathy’s book, ‘Weapons of Math Destruction’. As I said, we work quite closely together, and I think it adds a useful dimension on the topic as well, because she dives more into the ethical element, the ethical consequences, and how you should deal with those.

Those are terrific recommendations, and I look forward to getting them on my list.

So, this has been Ed Maguire, Insights Partner at Momenta Partners. We’ve been speaking with Professor Sander Klous, who is Professor of Big Data Ecosystems at the University of Amsterdam, and partner in charge of Big Data and Analytics at KPMG in the Netherlands.

Thank you once again for a terrifically informative conversation, really appreciate it.

My pleasure, thank you very much.

 

[End]

 

 
