Last week I was required by my university to complete an online training module. Several, actually, but a new one was on the use of generative AI. I was expecting it to be full of warnings, and maybe some advice about how to spot its use in essays. To my surprise, though not necessarily dismay, I was wrong. It was pretty positive about the idea of Leeds academics using generative AI in their work. In fact it was encouraging us to use it to plan meetings, improve our lecture slides, and even develop funding proposals. There were lots of caveats about only using it as a starting point and treating it as fallible. But it heartily advised us to play around with it, do some prompt engineering, and see how it might be able to help.
Overall I was pleased (and surprised) to see Leeds deviating from its usual cautious, wait-and-see-what-others-do approach. I'm generally skeptical of the idea that LLMs are likely to cross a singularity point, turn evil, and destroy us all. Sci-fi lover though I am, my reasoning is that being evil requires the possession of goals, and things can only acquire goals of their own if they evolve by natural selection. Without that, they can only have the goals that humans program into them. And we already knew that humans are evil, so there is nothing new to see here.
Anyway, one of the interesting things about the training module was that it included much-repeated instructions to use only one particular, university-approved brand of Gen-AI. Presumably this is because they've paid a hefty license fee and need to justify it. But they also explained that they've acquired a license under which anything you feed into the LLM stays within the university's own zone of it. The thought is that you can trust the system not to steal your work. I think there is also an option to keep your material private, so you don't share it across the university network either.
This gave me an idea. Wouldn't it be cool if different universities each developed their own firewalled LLM, encouraged all their staff to feed their work into it, and then competed with the other universities to solve hard problems and develop new theories?
My basic understanding is that each LLM would be slightly different, distinguished by its different inputs, and would constitute a sort of idiosyncratic super-brain for each university, melding the linguistic habits of all its different researchers. It would be fun to see how they then produced different approaches to, for example, the problem of reconciling quantum physics and general relativity. It would be like a scaled-up marketplace of ideas. A marketplace of superintelligences.
We'd need careful mechanisms to stop powerful/wealthy universities from just favouring their own solutions, of course. And it would be terrible for cooperation between universities. But maybe we could move universities away from being nationalistic/regional and towards each championing a different set of ideologies, like in Robert Nozick's utopia. Academics could choose which university to offer their work to, based on their individual sympathies and goals.
I wonder what would emerge?