The Threat of Artificial Intelligence

March 29, 2017

Joyce Yu Cahoon, PhD Candidate

Now that the break has begun, I’m getting around to watching Black Mirror, a series on Netflix, and my god, it’s thought-provoking. I stopped last night at the episode “The Entire History of You” because it left me so unsettled. It presents an alternate reality where everyone is implanted with a 'grain' that records everything seen, done, and heard, and gives each individual a means to replay memories on a screen, which, I’ve got to admit, is really neat. Growing up in this age of social media, where everyone publicizes every mundane moment of their life, the recording and sharing of every second of one’s life is not unimaginable. The repercussions of that technology are what left me clammy; the protagonist of this episode ultimately rips out the grain implanted behind his ear because he is unable to cope with the memory of losing his wife.

I’ve fantasized about working in artificial intelligence (AI) for years but have yet to stop and think about the drawbacks. Yeah, I believe inventing such a 'grain' would be highly beneficial for society. No, I don’t believe everyone should be mandated to have one; that was my Pavlovian reaction to watching “The Entire History of You.” The eeriness this one episode elicited has led me to take on a greater awareness of my own work: solving problems in computer vision and in mammography may not result in the gains I imagined. In fact, could it be beneficial that no one adopts the technology at all? That no one has the ability to abuse such powerful AI systems? To abuse and misuse them for aims not necessarily as noble as the detection of breast cancer? What the heck is 'noble' anyway? How pedantic! I digress, but the repercussions of our work are important to explore.

For the past semester, I’ve been working towards improving the detection of tumors in routine mammograms. What does that entail? (1) Learning how to use deep learning libraries like TensorFlow and Theano; (2) reading research papers on deep learning systems; and (3) testing new ideas on a small set of digital mammograms and running some type of cross-validation to check whether those ideas work. So, what’s the point? I’m drawn to this problem in mammography because I know individuals who have died from breast cancer as well as individuals who have survived it. One in eight women will be diagnosed with it in her lifetime, but a whopping 61% of cases are detected early, and those early-detected cases have a 5-year survival rate of 98.6%. What does that mean? It means that while breast cancer is highly prevalent, early detection can avert death, hence the frenzy among scientists to improve the accuracy of the screening process, digital mammography.
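To make step (3) a bit more concrete, here is a minimal sketch of what one of those small cross-validation experiments might look like in Python with scikit-learn. The data is randomly generated stand-in data and the logistic-regression model is just a placeholder; none of this is the actual project code or dataset.

```python
# Minimal sketch of step (3): evaluate an idea on a small set of images
# with k-fold cross-validation. Data and model are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stand-in for a small set of digital mammograms: 200 images flattened to
# feature vectors, with binary labels (1 = diseased tissue, 0 = healthy).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64 * 64))
y = rng.integers(0, 2, size=200)

aucs = []
for train_idx, test_idx in StratifiedKFold(
        n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    scores = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"mean AUC over 5 folds: {np.mean(aucs):.3f}")
```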

The majority of solutions today rely on domain experts to manually identify diseased breast tissue in thousands (if not more) of mammograms, then use these 'labeled' images to train computational models known as 'convolutional neural nets' (CNNs) in the hope of identifying patients with or without breast cancer more accurately than a physician can. That ability, to outperform radiologists, has yet to be achieved. Experts have attributed the shortfall to CNNs’ reliance on intractably large sets of labeled mammograms for training; such a dataset, if it exists, must encapsulate all features of tumors that are relevant to any patient. Many published CNNs have failed to generalize to other breast cancer datasets.
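For readers who haven’t seen one, the sketch below shows roughly what such a supervised CNN looks like in code, written with tf.keras (the post mentions TensorFlow). The layer sizes, input shape, and training call are illustrative assumptions only, not the architectures from the papers described above.

```python
# Rough sketch of a supervised CNN for binary mammogram classification.
# Layer sizes and input shape are illustrative; real mammography models are
# far larger and trained on expert-labeled images.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(256, 256, 1)):
    model = models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # diseased vs. healthy tissue
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

model = build_cnn()
model.summary()
# Training would then require a set of labeled mammograms, e.g.:
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```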

My work thus centers on developing a model that takes in unlabeled (raw) mammograms and provides an indication of diseased breast tissue. How? It uses 'smarter' weights in the CNN, eliminating the need to provide millions of labeled images. The models I work on are essentially simple visual cortices: you give one an arbitrary mammogram, and each layer processes the image a little more and understands it in some deeper sense, until the very last layer holds an abstract understanding of the mammogram and gives way to the binary outcome of diseased or healthy breast tissue. Like a newborn baby, our CNN starts off as a blank slate and becomes more and more specialized over time as it is exposed to more stimuli (mammograms, in our case). Whether or not our work can be adapted to more sinister applications… well, only time will tell, but right now, I can’t imagine a scenario in which improved screening could be in any way nefarious. Someone please prove me wrong.
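I won’t go into the details of how those 'smarter' weights are learned, but to give a flavor of the general idea, here is a rough sketch of one common way to get useful weights without labels: unsupervised pretraining with a convolutional autoencoder, whose encoder can then seed a classifier like the one sketched earlier. This is only an illustration of that idea, not the exact method used in my work; the layer sizes and the build_autoencoder helper are made up for the example.

```python
# Illustrative only: a convolutional autoencoder pretrained on raw, unlabeled
# mammograms. Its encoder weights could later initialize a supervised CNN.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(input_shape=(256, 256, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: each layer 'understands' the image a little more abstractly.
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D()(x)
    # Decoder: reconstruct the original mammogram from the abstract features.
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu",
                               padding="same")(encoded)
    x = layers.Conv2DTranspose(16, 3, strides=2, activation="relu",
                               padding="same")(x)
    outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

    autoencoder = models.Model(inputs, outputs)
    encoder = models.Model(inputs, encoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder

autoencoder, encoder = build_autoencoder()
autoencoder.summary()
# Pretraining needs only unlabeled images:
# autoencoder.fit(raw_mammograms, raw_mammograms, epochs=10)
```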

Joyce is a PhD Candidate whose research focuses on machine learning. We thought this post was a great excuse to get to know a little more about her, so we asked her a few questions!

  • What do you find most interesting/compelling about your research?

    Reverse engineering human intelligence.

  • What do you see are the biggest or most pressing challenges in your research area?

    From the thought leaders @ DARPA: “building systems capable of contextual adaptation.”

  • If there were a hell for ponies, what do you think it would look like? Your answer should be in the form of a picture drawn using Microsoft Paint.