From Greek myth to improving medical diagnostics, final-year undergraduate Andrej Zukov Gregoric explains the basis of the PhD he is starting in September, in a branch of artificial intelligence called ‘machine learning’.
History is full of tales of animate beings created from inanimate matter. It is said that, to compensate for his limp, Hephaestus, the ancient Greek god of blacksmiths, created two little golden robots to help him with his work. What once belonged to myth slowly began to feature as a theme of science. In his 1948 paper “Intelligent Machinery”, Alan Turing outlined what in hindsight can be regarded as the first manifesto of artificial intelligence (AI). Ever since, researchers have pursued AI in two ways. One school of thought, starting in the 1950s, tried to bring about human-like AI directly. The other, which became popular later, focused instead on the immediately applicable sub-problems of AI. A subset of these sub-problems is what we today call machine learning.
Machine learning, as succinctly defined by Tom M. Mitchell, is the ability of a program to “learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E”. Some of these tasks fall under the banner of supervised learning, where our task is to predict the labels of unlabelled examples that are revealed to us one by one. If we take an example to be a set of symptoms, a possible label is the type of disease. Our task is then to label each successive example with the correct disease, updating our prediction rule at every step along the way.
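The disease-diagnosis setting above can be sketched in a few lines of code. The following toy online learner uses invented symptom measurements and disease names (everything in the data is illustrative, not real medical data): it predicts each new example with a simple nearest-neighbour rule, and updates itself once the true label is revealed.

```python
# A minimal sketch of online supervised learning: a 1-nearest-neighbour
# classifier labels each new example, then learns from its true label.
# The symptom vectors (temperature, cough, sneezing) and disease names
# below are invented for illustration.

def nearest_label(history, example):
    """Predict the label of the previously seen example whose features
    are closest (squared Euclidean distance) to the new example."""
    best, best_dist = None, float("inf")
    for features, label in history:
        dist = sum((a - b) ** 2 for a, b in zip(features, example))
        if dist < best_dist:
            best, best_dist = label, dist
    return best

# A stream of (symptoms, true disease) pairs, revealed one by one.
stream = [
    ((38.5, 1, 0), "flu"),
    ((36.8, 0, 1), "allergy"),
    ((39.0, 1, 0), "flu"),
    ((37.0, 0, 1), "allergy"),
]

history = [stream[0]]      # we need one labelled example to start
correct = 0
for features, true_label in stream[1:]:
    guess = nearest_label(history, features)  # predict before the label is revealed
    correct += (guess == true_label)
    history.append((features, true_label))    # label revealed: update the rule

print(f"{correct} of {len(stream) - 1} predictions correct")  # → 2 of 3 predictions correct
```

The first prediction is wrong (only one disease has been seen so far), but the rule improves as more labelled examples accumulate, which is exactly the "improves with experience E" of Mitchell's definition.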
Various performance measures exist for this type of machine learning, but none of them yields a valid probability that our next prediction will be correct. Conformal prediction, a method recently developed by Vladimir Vovk and Alexander Gammerman of Royal Holloway, and Glenn Shafer of Rutgers University, provides precise and intuitive measures of confidence under a mild assumption about the data (essentially, that the examples are exchangeable). Using conformal prediction, we can therefore augment the predictions made by naive machine learning algorithms with a measure of how confident we are that they are correct. In my final-year project, under the supervision of Zhiyuan Luo, I applied conformal predictors to a number of naive machine learning algorithms.
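To illustrate the idea (this is a toy sketch, not the method as implemented in my project), here is a small conformal predictor in Python. It uses a standard nearest-neighbour nonconformity score on an invented one-dimensional dataset: for each candidate label it computes a p-value, and the confidence in the favoured label is one minus the largest p-value among the alternatives.

```python
# A minimal sketch of a conformal predictor built on the classic
# 1-nearest-neighbour nonconformity score. The one-dimensional
# measurements and labels below are invented for illustration.

def nonconformity(others, x, label):
    """How 'strange' x looks with this label: distance to the nearest
    same-label example divided by distance to the nearest other-label one."""
    same = min((abs(x2 - x) for x2, y2 in others if y2 == label),
               default=float("inf"))
    diff = min((abs(x2 - x) for x2, y2 in others if y2 != label),
               default=float("inf"))
    return same / diff

def p_value(training, new_x, candidate_label):
    """Fraction of examples (the new one included, tentatively labelled)
    that are at least as nonconforming as the new one. A small p-value
    means the candidate label fits the data badly."""
    bag = training + [(new_x, candidate_label)]
    scores = [nonconformity(bag[:i] + bag[i + 1:], x, y)
              for i, (x, y) in enumerate(bag)]
    return sum(s >= scores[-1] for s in scores) / len(bag)

training = [(0.9, "healthy"), (1.1, "healthy"), (5.0, "sick"), (5.3, "sick")]
p = {label: p_value(training, 1.0, label) for label in ("healthy", "sick")}

prediction = max(p, key=p.get)      # the label with the largest p-value
confidence = 1 - min(p.values())    # how firmly the alternative is rejected
print(prediction, confidence)
```

For the new measurement 1.0 the p-value of "healthy" is much larger than that of "sick", so the predictor outputs "healthy" together with a numerical confidence, which is exactly what a bare nearest-neighbour classifier cannot provide.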
Conformal predictors have wide real-world applicability, and various applications are currently being researched. For example, they can be used to improve diagnostics and prognostics in medicine, and to make predictions of everything from wind speed to indoor location more useful. Furthermore, conformal predictors are now beginning to be picked up by industry.
By the 1970s it had become clear that AI was not as easy to achieve as had originally been thought. Nowadays research is mostly geared towards extending AI’s sub-problems; the ongoing research in conformal prediction is part of this effort. Perhaps one day the two schools will meet halfway and little golden robots will finally be with us once more.