This serves as an ongoing post tracking companies attempting to bring deep learning to radiology. We welcome companies of any size in this arena to list their names, and we’d like to keep the companies’ bios up to date as they advance in the market.
Lunit is one of the oldest (founded 2013) deep learning startups, and the one to watch going forward. They appear on short-lists of deep learning companies overall, even outside of medicine (Forbes, VentureBeat, CB Insights, etc.). A few features make Lunit stand out from the rest:
a. The founders are technically savvy, having published eight scientific articles on deep learning. Their approach is to build scientific credibility and validate results before marketing the technology.
b. They’ve won numerous awards, including ranking highly in the ImageNet competition and earning first place in the MICCAI 2016 Tumor Proliferation Assessment Challenge. These successes are impressive considering that their focus was on pathology data and they only recently (2015) entered the radiology space.
ContextVision has been developing image processing technologies in radiology for over 30 years. Their product line consists of algorithmic solutions for improving the visibility of radiographic imaging features, so they are well-positioned to build another type of image processing pipeline incorporating deep learning. Notably, they placed second (behind only Lunit) in the MICCAI 2016 Tumor Proliferation Assessment Challenge. Apart from the large device vendors, they are the established company to watch.
Imagia’s work targets early cancer detection and diagnosis. Lesion segmentation and tracking is an important problem in radiology, and Imagia’s product uses deep learning to automatically detect, segment, and track lesions across studies. They are also interested in automatic classification of tumor type from imaging data.
Not entirely devoted to radiology, Enlitic aims to bring deep learning to all aspects of medical care, including physicians’ notes, radiology, and pathology. They claim to be able to detect fractures as small as 0.01% of an X-ray image.
Zebra Medical Vision: https://www.zebra-med.com/
Zebra is one of the older (founded in 2015) and better-funded ($12M+) deep learning radiology startups. They issue frequent press releases, but it’s unclear whether their technology works as well as advertised; I’m not aware of published studies comparing their algorithms against established metrics.
GE Healthcare: https://www.ucsf.edu/news/2016/11/404956/ucsf-ge-healthcare-launch-deep-learning-partnership-advance-care-globally
The existing vendors are also interested in deep learning for radiology. We’re seeing a similar approach from the big players – partnering with large medical centers and offloading much of the research to practitioners at those institutions. GE has partnered with UCSF, with an initial focus on detecting traumatic injuries in the acute setting.
It’s important to note that IBM Watson was not originally a deep learning system. The IBM Watson project began back in 2006, before deep learning hit the scene (around 2012), as a rules-based system, albeit the best in the world. Rules-based systems are powerful, but they rely on a fundamentally different technology that limits their ability to analyze the raw image data in radiology. The hype around Watson at RSNA 2016 was due to new additions that combine the rules-based text processor (e.g., for clinical notes) with new deep learning algorithms for examining images. You can read more at the link above.
Device manufacturers are interested in building deep learning models directly into their hardware. We’ve all heard the criticisms of automated EKG readers; arguably, EKG strips should be easy for deep learning to read. On the radiology side of things, Samsung Medison (a Samsung Electronics affiliate) has added a pre-trained deep learning model to its breast ultrasound device to help with lesion detection.