This course will provide a broad overview of current topics in improving the capabilities of surgical robots. With robot-assisted surgical procedures, we can capture data about the surgeon’s motions and register them to the patient’s anatomy in ways that were simply unavailable before. This provides an unprecedented opportunity for machine learning to improve patient care. While some specialties have made use of this by automating certain treatments (with LASIK being a well-known application), most general surgery robots are still manually teleoperated. In this course, we will discuss why this is and what steps we can take to use machine learning to improve surgeries and the training of surgeons.
Topics covered include modeling robot motions and signals, modeling the patient anatomy, and how pre-operative imaging may be used to guide intraoperative decisions. By the end of the course, you should have an understanding of different aspects of surgical robots and how they fit together.
Location: Featheringill Hall 306
Time: Tuesdays and Thursdays 11:00am - 12:15pm
Instructor: Jie Ying Wu
Office Hours: TBD
Homeworks and projects will be submitted and graded through Brightspace. Please use Brightspace discussion for homework questions. Readings are posted on Perusall (access through Brightspace) and discussions on readings will take place there.
Prerequisites: This course is aimed at graduate students and advanced undergraduate students with an interest in applying computer science to healthcare. It assumes a background in deep learning (some familiarity with how neural networks work and experience implementing one in any framework) and computer vision (calibration and frame transformations). We will be using PyTorch for homeworks, so familiarity with it is useful, though the goal of the first homework will be to bring everyone up to speed on the specifics of PyTorch.
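As a rough self-check of the expected background level, you should be comfortable with code along these lines: building a rigid-body (homogeneous) transform and applying it to points with PyTorch tensors. This is an illustrative sketch, not course-provided code.

```python
import math
import torch

def make_transform(rotation: torch.Tensor, translation: torch.Tensor) -> torch.Tensor:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = torch.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# 90-degree rotation about the z-axis, followed by a translation along x
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
Rz = torch.tensor([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
T = make_transform(Rz, torch.tensor([1.0, 0.0, 0.0]))

# Apply the transform to a point in homogeneous coordinates:
# (1, 0, 0) rotates to (0, 1, 0), then shifts by 1 along x -> (1, 1, 0)
p = torch.tensor([1.0, 0.0, 0.0, 1.0])
p_new = T @ p
```

If composing transforms like `T_camera_robot @ T_robot_tool` feels familiar, you have the frame-transformation background the course assumes.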
If you do not have access to a GPU, there will be Google Cloud credits available (thanks to their generous academic program!) to run homeworks and projects.
Tentative Schedule
Week 1 – Aug 22nd: Intro to surgical robots. Homework 1 released
Week 2 – Aug 27th, Aug 29th: Kinematics and teleoperation, Python and PyTorch tutorial
Week 3 – Sep 3rd, Sep 5th: Intro to devices and calibration, field trip to VISE. Project 1 released
Week 4 – Sep 10th, Sep 12th: Neural networks for robot instrument segmentation
Week 5 – Sep 17th, Sep 19th: Graph learning and tissue tracking
Week 6 – Sep 24th, Sep 26th: Autonomous suturing
Week 7 – Oct 1st, Oct 3rd: Project 1 presentations. Homework 2 released
Week 8 – Oct 8th: Guest lecture from Florian Richter
Week 9 – Oct 15th, Oct 17th: Unsupervised representation of surgical gestures. Homework 3 released
Week 10 – Oct 22nd, Oct 24th: Video gesture analysis and unsupervised modeling
Week 11 – Oct 29th, Oct 31st: Kinematics-based surgical gestures and skill identification. Project 2 released
Week 12 – Nov 5th, Nov 7th: Imitation learning for surgical subtask automation
Week 13 – Nov 12th, Nov 14th: Virtual fixtures and controls
Week 14 – Nov 19th, Nov 21st: AI meets medical robotics - what’s next?
Week 15 – Happy Thanksgiving!
Week 16 – Dec 3rd, Dec 5th: Project 2 presentations