Photo by Marcel Scholte on Unsplash
Human senses are nothing to be sniffed at (pardon the pun) – after all, the human eye can detect a single photon of light in absolute darkness. As well as excellent vision, humans are blessed with a two-million-year history of exceptional manual dexterity dating from the first fossil evidence of thumb opposition, a key anatomical difference between ourselves and other members of the ape family. In combination, such natural gifts enable humans to carry out incredible feats: playing violin concertos, writing plays and novels, and of course, performing exquisitely complicated surgeries. Yet as surgical procedures grow ever more complex, so too does the fineness of motion required to perform them. In spite of our considerable innate capacity for fine motor control, there nevertheless seems to be an eventual limit to what we can achieve surgically, imposed by the constraints of our biological reality.
That’s where robotics comes in. Before continuing, I feel the need to dispel a common misconception about what surgical robotics refers to: not humanoid, retro-futuristic cyborgs autonomously performing whole surgeries, but rather skilled surgeons using a robotic interface that transfers their motion into an action applied to the patient’s tissue (the da Vinci system being a perfect current example). That said, for certain sections of a surgical procedure it might be considered expedient to let a semi-autonomous program ‘take over’, so to speak, and carry out a highly specific action that a human alone could not manage. One possible manifestation of this could be following the margin of a brain tumour with pinpoint accuracy where it runs right alongside a major vessel or nerve.
So far, surgical robots have fallen far short of the mark when it comes to performing these specific automatic actions due to a critical limitation: their lack of non-visual sensing capabilities. In other words, up to now robots have been unable to act on what they cannot see. To rectify this, an international research project called FAROS (Functionally Accurate RObotic Surgery), involving Balgrist University Hospital, KU Leuven in Belgium, Sorbonne University in France and our own university, has formed to attempt to emulate traditional robotic navigation systems with widefield mapping, auditory and haptic sensors [1]. The project cites the following key elements: ‘(1) a rich set of non-visual sensors…; (2) functional models that relate non-conventional sensor signals to functional parameters…; and (3) functional controllers, obtained through reinforcement learning…’. In short, surgeons of the future may be able to use such technology to perform superhuman feats of surgery. Might this set the stage for a total supersession of surgeons in the distant future? Perhaps, but as with most speculation about the future, not much can be said until we are actually there. Either way, the quality and scope of surgical procedures are only set to improve with the aid of technologies such as these.
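To make the second of those elements a little more concrete, here is a deliberately toy sketch of what a ‘functional model’ might look like: a mapping from non-visual sensor readings to a functional parameter such as tissue type. Every feature name and threshold below is hypothetical and chosen purely for illustration – it is not based on the FAROS project’s actual models.

```python
# Toy "functional model": map non-visual sensor signals (haptic + auditory)
# to a functional parameter (tissue type). All values are hypothetical.

from dataclasses import dataclass


@dataclass
class SensorReading:
    stiffness_n_per_mm: float   # haptic: tool-tissue contact stiffness
    acoustic_peak_khz: float    # auditory: dominant frequency of tool sound


def classify_tissue(reading: SensorReading) -> str:
    """Illustrative thresholds only; a real model would be learned from data."""
    if reading.stiffness_n_per_mm > 50 and reading.acoustic_peak_khz > 2.0:
        return "cortical bone"
    if reading.stiffness_n_per_mm > 10:
        return "cancellous bone"
    return "soft tissue"


# A functional controller could act on this parameter, e.g. halting a drill
# the moment the model reports soft tissue instead of bone:
print(classify_tissue(SensorReading(stiffness_n_per_mm=80, acoustic_peak_khz=3.5)))  # cortical bone
print(classify_tissue(SensorReading(stiffness_n_per_mm=2, acoustic_peak_khz=0.5)))   # soft tissue
```

The point of the sketch is simply the pipeline shape: sensors produce signals, a model turns signals into a clinically meaningful parameter, and a controller acts on that parameter – the three elements the project describes.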
[1] Cordis.europa.eu. 2021. CORDIS | European Commission. [online] Available at: <https://cordis.europa.eu/project/id/101016985> [Accessed 6 February 2021].