Mar 29
AI and bias: Cui bono?
Who benefits is never a bad question to ask. In my mind, after long years in the information industry, it is a question closely related to “follow the money”. And it is much on my mind at the moment, since I have been reading the UK Information Commissioner’s consultation (https://ico.org.uk/about-the-ico/what-we-do/our-work-on-artificial-intelligence/generative-ai-second-call-for-evidence/) on the use of personal data in AI training sets and research data. The narrative surrounding the consultation invokes, for me, all sorts of ideas about the nature of trust.
Let me try to explain my ideas about trust, since I think the subject is becoming so controversial that each of us needs to state a position before we begin a discussion. For example, I trust the brand of marmalade to which I am fairly addicted. My father was an advocate of Frank Cooper’s Oxford Marmalade, and this is probably the only respect in which I have followed him. We certainly have over 100 years of male Worlock usage of this brand of marmalade. Furthermore, in modern times, the ingredients are listed upon the jar, together with any chemical additives. Should I suffer a serious medical condition as a result of my marmalade addiction, I can clearly follow the trail and find where it was made and the provenance of its ingredients. And in the 60 or so years that I have been enjoying it, it has not varied significantly in flavour, taste or ingredients.
I also believe, being a suspicious countryman, in something that I call “the law of opposites”. Therefore, when people say that they “do no evil” or claim that they practise “effective altruism”, I wonder why they need to tell me this. My bias then becomes the reverse of their intentions: I tend to think that they are telling me they are good because they are trying to disguise the fact that they are not. This becomes important as we move from what I would term an open trust society – exemplified by the marmalade – into a blind trust society – exemplified by “black box” technology which, we are told, is what it is, and cannot be tracked, audited or regulated in any of the normal ways.
The UK Information Commissioner has similar problems to mine, but naturally at a greater level of intellectual intensity. In the latest consultation document, his office asks whether personal data can be used in a context without purpose. Under data privacy rules, the use of personal data, where permitted, has to be accompanied by a defined purpose: whether the data is used to detect shifts in consumer attitudes or to demonstrate the efficacy of a drug therapy, the data use is defined by its purpose. General models of generative AI, with no stated or specific purposes, violate current data protection regulation if they use personal data in any form, and this should set us wondering about the outcomes, and the way in which they should earn our trust.
The psychologist Daniel Kahneman, who died this week, earned his Nobel prize in economics for his work on decision-making behaviours. His demonstration that decisions are seldom made on a purely rational basis, but are usually derived from preferences based on bias and experience (whether relevant or not), should be ever present in our minds when we think about the outputs of generative AI. Our route to trusting those outcomes should begin with questions like: what is the provenance of the data used in the training sets? Do I trust that data and its sources? Can I, if necessary, audit the bias inherent in that data? How can I understand or apply the output from the process if I do not understand the extent and representativeness of the inputs?
I sense that there will be great resistance to answering questions like this. In time there will be regulation. I think it is a good idea now for data suppliers and providers to annotate their data with metadata which demonstrates provenance, and provides a clear record of how it has been edited and utilised, as well as what detectable bias was inherent in its collection (a sketch of what such a record might look like follows at the end of this post). One day, I anticipate, we shall have AI environments that are capable of detecting bias in generative AI environments, but until then we have to build trust in any way that we can. And where we cannot build trust we cannot have trust, and the lack of it will be the key factor in slowing the adoption of technologies that may one day even surpass the claims of the current flood of press releases about them. Meanwhile, cui bono? Mostly, it seems to me, Google, Microsoft, OpenAI and Meta. Are they ethically motivated or are they in it for the money? For myself, I need them to demonstrate clearly, through self-regulation, that they are as trustworthy as Frank Cooper’s Oxford Marmalade.
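By way of illustration, here is a minimal Python sketch of the kind of provenance record a data supplier might attach to a dataset. The schema, field names and example values are hypothetical assumptions of mine, not drawn from the ICO consultation or from any existing standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EditEvent:
    """One entry in the dataset's edit history."""
    when: date
    action: str        # e.g. "deduplicated", "redacted personal data"
    performed_by: str

@dataclass
class ProvenanceRecord:
    """Hypothetical metadata a supplier could attach to a training dataset."""
    source: str                    # where the data came from
    collected: date
    collection_method: str         # e.g. "web crawl", "opt-in survey"
    licence: str
    contains_personal_data: bool
    stated_purpose: str            # the defined purpose data protection rules require
    known_biases: list[str] = field(default_factory=list)
    edit_history: list[EditEvent] = field(default_factory=list)

# Example: annotating a fictitious consumer-attitudes dataset.
record = ProvenanceRecord(
    source="example-survey-panel.co.uk",
    collected=date(2024, 1, 15),
    collection_method="opt-in online survey",
    licence="research use only",
    contains_personal_data=True,
    stated_purpose="detect shifts in consumer attitudes",
    known_biases=["UK respondents only", "skews to ages 25-44"],
    edit_history=[EditEvent(date(2024, 2, 1), "redacted personal data", "data team")],
)
print(record.stated_purpose, record.known_biases)
```

Even a record as simple as this would let a downstream user ask my questions above – who collected the data, for what purpose, and with what detectable bias – before deciding whether to trust what a model built on it produces.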