Jun 27
Education: AI's finest hour?
The chief technology officer at OpenAI is a hugely impressive woman. Born in Albania, she won a scholarship to a famous Canadian school, and then took two university degrees at Dartmouth in the USA. Now in her mid-30s, she gave a recent interview on being awarded an honorary doctorate by her alma mater. The interview proceeded along fairly normal lines, with the interviewee staying, for the most part, safely within industry discretion rails and PR requirements, until, that is, we reached the Q&A at the end. Then, for me, two really interesting statements made the previous hour worthwhile. In the first instance, Mira Murati made the interesting statement that she did believe that those whose data had been used to train models should be recompensed, and that she and her colleagues were working on a recompense engine which would assess the value created by individual datasets as part of the whole LLM construct. This is obviously important and I will return to comment further on it later.
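She did not describe how such a recompense engine might work, and nothing below is based on anything OpenAI has disclosed. Purely as a hypothetical illustration of the idea, one crude approach would be leave-one-out attribution: pay each contributor in proportion to how much model quality drops when their dataset is withheld. The sketch below, with invented names and figures, shows only that arithmetic.

```python
# Hypothetical sketch only: a toy leave-one-out attribution scheme for
# splitting a recompense pool across training datasets. Nothing here is
# based on OpenAI's actual system; names and figures are invented.

def leave_one_out_payouts(full_score, scores_without, pool):
    """Split `pool` in proportion to each dataset's marginal contribution.

    full_score     -- evaluation score of a model trained on all data
    scores_without -- {dataset_name: score of a model trained without it}
    pool           -- total money available for recompense
    """
    marginal = {
        name: max(full_score - score, 0.0)  # drop in quality when removed
        for name, score in scores_without.items()
    }
    total = sum(marginal.values())
    if total == 0:
        return {name: 0.0 for name in marginal}
    return {name: pool * value / total for name, value in marginal.items()}


if __name__ == "__main__":
    # Invented numbers, purely to show the arithmetic.
    payouts = leave_one_out_payouts(
        full_score=0.82,
        scores_without={"news_archive": 0.78, "fiction_corpus": 0.80, "forum_posts": 0.815},
        pool=1_000_000,
    )
    for name, amount in payouts.items():
        print(f"{name}: ${amount:,.0f}")
```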
However, another questioner asked her where, in her view, AI was likely to make its greatest and most lasting impact. She unhesitatingly pointed to education, and in elaborating her answer touched on so many of the issues that the teaching profession, along with the producers of educational content and services, have tackled for so long. As she responded, I was personally taken back to my own first contact with AI, as I have rehearsed here and elsewhere many times before. In 1985, in a Marvin Minsky seminar at the MIT Media Lab, I first heard him defend AI, in this case from a librarian who thought it would kill books and libraries, by saying that he wanted to promote the world of books and libraries by filling libraries with books that were able to speak to each other, update each other and argue with each other. All this in pursuit of an educational process which, he said, and I think we all know, is essentially a matter of individual learning: of the ability of the individual to respond to different stimuli, at different times in a lifetime of learning, and in different states of learning readiness. In her answer, I felt that Ms Murati was pointing to a world where the inequalities and waste caused by teaching 30 people of different abilities in a classroom at the same time, to pass an examination which by definition was standardised to one level of learning achievement, could at last draw to an end. Each learner could learn at their own speed, because a machine environment would be able to assess their learning ability and readiness and present them with the appropriate next learning materials at the point at which they were ready to progress, and in the media that enabled them to absorb them best: becoming, in effect, a personal tutor.
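Neither Murati nor Minsky specified how such a personal tutor would be built, so the following is only my own minimal sketch of the loop described above, assuming a simple per-topic mastery estimate and a catalogue of materials tagged by topic, difficulty and medium. All names, data and thresholds are hypothetical.

```python
# Minimal sketch of an adaptive "personal tutor" loop: pick the item just
# beyond the learner's current mastery, in their preferred medium, then
# update the mastery estimate from the observed result. Hypothetical only.

from dataclasses import dataclass

@dataclass
class Material:
    topic: str
    difficulty: float      # 0.0 (introductory) to 1.0 (advanced)
    medium: str            # e.g. "text", "video", "interactive"

def next_material(mastery, preferred_medium, catalogue, step=0.1):
    """Choose the weakest topic, then an item just above current mastery,
    in the learner's preferred medium where possible."""
    topic, level = min(mastery.items(), key=lambda kv: kv[1])
    suitable = [m for m in catalogue
                if m.topic == topic and level <= m.difficulty <= level + step]
    if not suitable:
        return None  # learner is ready to move on, or the catalogue has a gap
    preferred = [m for m in suitable if m.medium == preferred_medium]
    return (preferred or suitable)[0]

def update_mastery(mastery, material, score, rate=0.3):
    """Nudge the mastery estimate for the material's topic towards the score."""
    current = mastery[material.topic]
    mastery[material.topic] = current + rate * (score - current)
    return mastery

# Example with invented data: two topics, a tiny catalogue, one observed score.
mastery = {"fractions": 0.2, "decimals": 0.5}
catalogue = [Material("fractions", 0.25, "video"), Material("decimals", 0.55, "text")]
item = next_material(mastery, "video", catalogue)
if item is not None:
    mastery = update_mastery(mastery, item, score=0.6)
```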
Whenever I write things like this, my teacher friends rise up in revolt and point to the immense value of the teacher-pupil relationship at all levels of education. It is important therefore to say that I agree with every word of this. I believe that if AI really works in education, then it will free teachers to be real teachers. Not markers of papers. Not distracted from the brilliant pupils and their needs by the requirements of slow or inadequate learners. Not neglecting slow and needy learners because the reputation of the school depends upon the success of the brilliant, who must be personally coached to ensure brand distinction. If AI works in education, it will give teachers at all levels the time to be the people they need to be: the gurus, the thought leaders, the people in charge of pastoral care, the listeners and the advisory voice of experience. Over the longer term, machine intelligence will be able to monitor and know very effectively what levels of learning have been accomplished. Feedback given to teachers will grow in quality and reliability. AI in education could eventually release us all from examination systems that are grossly inadequate and mostly measure which members of society are good at examinations. It will prevent us from penalising the individual who had an off day, a headache or a period pain, and it will measure everything that the learner has learnt, in all dimensions, not simply the ability to answer a multiple-choice or essay question which may not be representative either of the course or of the learning process.
As a young man in educational publishing in the late 1960s, I remember my puzzlement about educational resources and what worked and what didn't. Sitting at the back of classrooms in London comprehensive schools gave little enlightenment, but it did demonstrate the boredom induced in very many people by the group learning process. Yet those young people, bored or engaged, all had talent, and that quality, I knew, had to be released into society if society were to flourish and develop. Over the years I've worked with start-ups and entrepreneurs in a variety of different ways on schemes of learning built around learning pathways and learning journeys. Some good things have been done, but until now we have never had AI technology that looked like making a real impact on the problem.
In the past two years, with the rise of generative AI, it seems to me that this idea of a real man-machine relationship in guided human learning, supervised and overseen by human teachers, has at last become a real possibility. How we reward the contributors of learning material, how we ensure that the range of data provided gives the ability stretch and complexity of content needed, and how we recompense contributors for the use of materials in the network remain huge problems, and ones which will be difficult to break down and tackle effectively. But I hope (and education is fundamentally a triumph of hope over experience) that one day employers will be able to make hiring decisions based upon really knowing what the candidate knows; that the professional in one country will be able to get a job in another without a clash of professional qualification standards; and that we will cease to talk about "slow learners" and talk instead of people at different and measurable levels of learning engagement and attainment. And who knows, it might even be a world where teachers enjoy teaching again.
Government health warning: while the prospect here is glorious, it does carry huge risks. These systems are subject to political interference. In a world where book burning and the removal of books from educational library shelves are now sadly prevalent, we will need to protect the man-machine educational interface from political distortion.
Jun 11
My data + Your data = Their data
The announcement yesterday by Tim Cook of the Apple Intelligence initiative may mark a really important point in the development of machine-intelligent solutions. Commentary upon it has taken several different directions, which may mean that the pundits and the journalists do not quite know what to concentrate upon. Is this Apple joining the AI battle really for the first time, while trying to differentiate itself sharply from its Big Tech rivals? Is this Apple being forced to do something with AI by virtue of the fact that its market value has sunk below that of Microsoft? Or is this Apple, aware that it is lagging behind in the AI PR wars, playing catch-up while not deploying its own technology but using OpenAI's instead?
It may be one of these things, or it may be all three of them at once. What Big Tech companies say is very much about markets and valuations; what these companies do is very much more about easing customers along the line of upgrade to the next price point. I see nothing much in the Apple announcement that departs from this settled formula. But there is one important difference that does mean a lot. In order to develop its Apple Intelligence program, Apple is going to have to acquire, learn from and reuse its own customers' data on a massive scale. In the past, Apple has set its face against storing, reusing and reselling customer data. In fact, it has rather positioned itself as the company of data privacy and the protector of user rights and privileges. Despite protestations that nothing will change, this surely has to change if Apple is going to be storing and reusing data about customer preferences, modes of usage, workflow and leisure habits in order to provide more intelligent solutions for the users themselves. The early announcements indicate that there will be "personal" storage and security. The early announcements do not say that Apple is now abandoning its earlier position as the protector of privacy for its users.
Does this matter? If we all get smart services which integrate apps and create real speed, time savings and value for us, will we care at all? Possibly many of us will not, and it is indicative that Apple Intelligence is being rolled out in the USA before it moves to Europe. In Europe it will face more difficult challenges, in a way that takes me back to Luxembourg in the mid-1980s, when the first debates on European data privacy were getting underway. I think of it as a time when a European dream was confronting an American economic reality. And, as a member of the EU Legal Observatory of DG XIII, I was privileged to watch. My fellow UK delegate, Charles Clark (the U.K.'s leading copyright authority), and I sat through lengthy and impassioned statements of the droit moral principles, which covered the inalienable rights of each individual to the ownership of every item of data created by that individual or spun out from his activities as an individual or as a participant in society. It might be said that in the European copyright legislation of those years the French view lost the battle, but its proponents got their revenge with the introduction of data privacy law and its dissemination in such a way that it in effect became global despite its European origins.
The part of the world which has seen information and data protection in terms of economic rights, but not in terms of personal privacy, has also been the part of the world mostly concerned with the development of commercial AI. OpenAI in particular has gained the reputation of being scraper-in-chief and re-user in general of everything that can be obtained without payment. Whether that reputation is fair or not, it is certainly true that the torrent of lawsuits is forcing AI developers to look more carefully at the status of the data that they use. And since Apple will be using the personal data of its own users to create services which, while highly beneficial, it will monetise to the fullest extent possible, we may expect to see Apple Intelligence launched on a string of revised terms and conditions, in which users surrender entirely even those minimal rights that they thought they might have had.
Back in Luxembourg all those years ago, the Europeans had a dream. They imagined that if each and every one of us were in control of our own data, then part of the new world might be selling that data back to the developers who were going to use it. Selling it to get lower prices, or premium treatment. A bit like selling the excess power generated by your solar panels back to the electricity company. Selling your data in this way would keep the technology companies honest, it was argued. They would then have to tell you how your data had been reused and who had bought it. They could be audited, and steps could be taken to ensure that the provenance of data was checked and that misinformation in the network was better controlled. Naturally, the dreamers were dismissed as fantasists!
My own hope is that Apple today will set its sights higher than improving my holiday travel booking process and getting the erratic predictive-text punctuation out of my dictation, and will embark on a program of user undertakings that includes prohibiting the sale of my data to anyone who, in any circumstances, would use it against my interests.