The Importance of Personalized Medicine and Sample Sizes

Eric Rose, PhD Candidate

Suppose you had no idea what the odds of winning the lottery are, or how to calculate them. You decide to ask strangers how likely it is to win, and of the 10 people you ask, one turns out to be a lottery winner. You might conclude that winning is not all that uncommon and start buying thousands of tickets. If you did, though, your money would be gone before you knew it. The problem is that you based your conclusion on the results of only a handful of people.

Now suppose you get a disease and need to start taking some drug to help cure it. Wouldn’t you want to make sure that the drug was tested on a large enough group to be able to conclude that it truly is the best way to treat whatever disease you have? My research is about calculating the minimum number of people needed in a study to be able to make clear conclusions about what the best way to treat a patient is.

Traditionally, different drugs would be tested on a large group of people, and the drug that worked the best on average on all of the people would be concluded to be the best way to treat all patients. Everyone is different though, and what is best for some people may not be the same as what would work best for you. This has led to the field of personalized medicine where the goal is to find the best technique for treating every individual and not just finding the best way to treat an entire population in general.

We may be also interested in estimating the best way to treat an individual patient over a period of time in which multiple treatments are assigned to the patient. We could create a set of rules to select optimal treatments for individual patients at each time period. This is called a dynamic treatment regime. To estimate a dynamic treatment regime, a specific type of clinical study called a sequential multiple assignment randomized trial (SMART) is commonly used.
The main goal of my research is to find the minimum number of patients that need to be included in a SMART to meet two criteria. First, we want enough patients in our study to ensure that our estimated dynamic treatment regime is close to the true, unknown optimal treatment regime. This criterion is similar to the lottery example, where we want our estimate of the proportion of winning lottery tickets to be close to the true, unknown proportion. Second, we want enough patients to conclude whether or not a personalized approach is significantly better than the standard way of treating patients. If we can find a reasonably small sample size that meets both criteria, then we know we can effectively find improved ways to treat patients. This has the potential to greatly improve how patients are treated for many different illnesses.
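To make the lottery analogy concrete, here is a minimal Python sketch of the classical sample-size formula for estimating a single proportion within a given margin of error. (This is only the simple analogue of the first criterion; the actual calculations for SMARTs are considerably more involved, and the function name and numbers here are illustrative assumptions.)

```python
import math

def sample_size_for_proportion(p_guess, margin, z=1.96):
    """Smallest n such that a sample proportion lands within
    `margin` of the truth with roughly 95% confidence (z = 1.96)."""
    return math.ceil(z**2 * p_guess * (1 - p_guess) / margin**2)

# Worst case (p = 0.5), estimate within 3 percentage points:
print(sample_size_for_proportion(0.5, 0.03))  # 1068
```

Note how far this is from asking 10 strangers: a small sample simply cannot pin the proportion down.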


Eric is a PhD Candidate whose research interests include machine learning and statistical computing. His current research focuses on sample size calculations for dynamic treatment regimes. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?
A: It is not only a difficult problem with several statistical challenges but also has an important application in improving the implementation of SMART trials.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: The biggest challenge in this area is that we frequently have to deal with non-regular parameters, which cause a lot of difficulties for conducting any statistical inference.

Q:  Please respond to at least one of the following:

1.)  Provide a linear scoring rule for ranking human beings from best to worst.  
2.)  Explain which of your siblings your parents love the least.  Justify their feelings (be specific).
3.)  Tell us about your favorite breed of dog.

Your answer should be constructed using letters cut and paste from a newspaper like an old school serial killer.

A: Growing up I had golden retrievers and a lab, so I've always had a strong bias for them. Also, they're adorable.

Computers Thinking Like Humans

Isaac J. Michaud, PhD Candidate

Sitting in traffic leaves you plenty of time to think about whether there is a faster way of getting to work. Inching along thinking about the problem, you imagine the myriad routes you could take. How would you ever find the best one?

If we thought like computers, we would approach the problem in a straightforward and inefficient way. Every day we would pick a different route between our home and work and see how long it takes us. Over time we would try every route and would know which is the best. It would take us years or perhaps even decades to get our answer. By that time, the answer would be useless because we’d have switched to a new job or moved.

This is all quite silly, you say to yourself; no one would ever try every route, and you are right! The point I am making is that computers are only as intelligent as the instructions we give them. They are simple machines, like pulleys and levers, for the human mind. They magnify our ability to solve problems. If we can turn a complex problem into a long series of easier tasks, then we can feed it to a computer. The computer’s tremendous speed then augments our problem-solving.

Going back to our original problem, I bet you already have a better strategy to find the best route. You may have thought of some principles that would guide our search. Here are two that I think are reasonable:

(1) Shorter routes (in miles driven) are better than long routes. If you could fly, the problem would be simple: a beeline to work would have the shortest travel time. But you have to travel along roads. Even so, the fastest path is probably among the shortest paths; we would not consider a cross-country trip a plausible commute. This principle filters out possibilities we wouldn’t want to waste our time testing.

(2) Similar routes will have similar travel times. Small differences between routes result in only small differences in travel time. Clearly, using an interstate will be very different from using secondary roads, but two routes that share most of the same secondary roads will take nearly the same time. Therefore, we can infer how long a route will take if we have already driven a similar one in the past.

Now we can start the process of sifting through all the routes. Principle (1) tells us whether a route is plausible. Principle (2) says that we can change our beliefs about the plausibility of a route based on those routes we have already tested.

To begin, you may take a route that is short but drives straight through downtown. You get snarled in traffic and are late for work. The next day, when picking a new way, you know that routes that go through downtown are slow, so you pick one that avoids downtown. Repeating this winnowing process, you will find the best route in a few days.

This solution approaches the problem in the same way that humans learn. People use their logical reasoning to explore and create generalizations about the world. They adapt to new information without being reprogrammed. Is it possible to get a computer to do the same?

The trick is to translate the problem into terms that a computer can understand. The details can become complicated because you need to mathematize the problem. We must embed the rules mentioned above into a statistical model. This model provides the computer with the language it needs. It can then describe a plausible solution and update these descriptions with new information. The computer is free to use its speed to do the exploration and updating of beliefs.
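As a toy illustration of how those two principles might be encoded, here is a small Python sketch. Every route, road segment, and travel time below is invented for the example, and this is a crude caricature of the real statistical machinery, not the full model:

```python
# Toy commute search encoding the two principles above.
# All routes, segments, and times here are made up for illustration.
routes = {
    "downtown":   {"main_st", "5th_ave", "market"},
    "highway":    {"ramp", "interstate", "exit"},
    "mixed":      {"main_st", "elm_st", "mill_rd"},
    "back_roads": {"oak_ln", "elm_st", "mill_rd"},
}
length_mi = {"downtown": 9, "highway": 10, "mixed": 11, "back_roads": 12}
true_time = {"downtown": 35, "highway": 21, "mixed": 30, "back_roads": 32}

def similarity(a, b):
    """Principle (2): routes sharing more road segments are more alike."""
    return len(routes[a] & routes[b]) / len(routes[a] | routes[b])

# Principle (1): prior belief about travel time is proportional to length.
belief = {r: 2.5 * length_mi[r] for r in routes}

tried = []
for _ in range(3):                        # three days of commuting
    # Drive the untested route we currently believe is fastest.
    pick = min((r for r in routes if r not in tried), key=belief.get)
    observed = true_time[pick]            # today's observed commute time
    tried.append(pick)
    # Update beliefs: pull similar routes toward the observation.
    for r in routes:
        w = similarity(pick, r)
        belief[r] = (1 - w) * belief[r] + w * observed

best = min(tried, key=true_time.get)
print(best)
```

Run this and the searcher tries the short downtown route first, learns it is slow, and lands on the fastest route without ever driving all four, which is exactly the winnowing described below.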

The term for this algorithm is Bayesian Optimization, and it represents the current cutting edge of solving optimization problems. Beyond finding the best route to work, many areas of our lives are touched by optimization. Maximization and minimization are instrumental in providing the quality of life we enjoy today. How well Amazon can cut overhead determines the price of the products we buy. If Ford engineers can maximize the MPG of your car, you will consume less gasoline. Without optimization, we would always be doing things inefficiently!

These important optimization problems are always growing in complexity. They are too complex for humans to solve by hand and too complex for a computer to brute-force. But by changing how a computer approaches optimization, we can solve problems that were once impossible.


Isaac is a PhD Candidate whose research interests include epidemiology, differential equation modeling, and reinforcement learning. His current research focuses on pursuit-evasion and cooperative reinforcement learning. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?
A: I enjoy working on hard problems. They give me something to always be thinking about.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: The development of theory is the most pressing challenge. Theory tells us why things work; without it, we will be lost when things break.

Q: The Archangel Gabriel is known as the left-hand of god. How did he get that nickname? (Hint: it was in middle school.) Your answer should be exactly 666 characters.

A: My answer has been encrypted using a one-time pad. It is impossible to crack unless you are omniscient. I did not encrypt the space or punctuation characters, to retain a semblance of the original text. Even though this significantly reduces the entropy of the final ciphertext, it is still uncrackable.

Yxf fgzll nf xjv yz xp mlywg gfxke iqxlfw? Yvkw kgezu bg sae istyplmm heiqz wroe Aiggg pfs grzr nyqu iu re askac. Y ak ywtoa xy ypomtkyg flri kgukslbe ko kedj xbv cpvcg. Iabfbn, Pik hs pky nerdpaltc vs ewazw btzpch odufwg. Ca xao ndqis es gqaxj kgi fftetqvu cjjptjkk. Nda qlkqszbi qsb Lobvuef. Dyhspze kqekn gvvm Nas clsttgxc uh wud eszznsu’p arzhiy. Ddgygkz yvh qf hknh xete Bjo mrmqvrrpd gwy rjy ijoa veob xy Oic htdbscg jvy mxmi wxqk og gbvlv if frw ppukk wlzw bie Rrcyfzs hxb tvf qhrp rlxbess cdkg avlhc rzsw keo dhejq xji xo Tmc. Ue trcfr bkf rwqe rm! Djx igz a nzqximqdi, fnu Kecxuzo vqh o abaedylm puwfxz owouk. D igld tgbw Zseit dkxr vdlp. Kcgn ze rjq! Yyin!

The Threat of Artificial Intelligence

Joyce Yu Cahoon, PhD Candidate

Now that the break has begun, I’m getting around to watching Black Mirror, a series on Netflix, and my god, it’s thought-provoking. I stopped last night at the episode “The Entire History of You” because it left me so unsettled. It presents an alternate reality where everyone is implanted with a ‘grain’ that records everything seen, done, and heard, and gives each individual a means to replay memories on-screen, which I’ve got to admit is really neat. Growing up in this age of social media, where everyone publicizes every mundane moment of their life, the recording and sharing of every second of one’s life is not unimaginable. The repercussions of that technology are what left me clammy; the protagonist of this episode ultimately rips out the grain implanted behind his ear because he is unable to cope with the memory of losing his wife.

I’ve fantasized about working in artificial intelligence (AI) for years but have yet to stop and think about the drawbacks. Yes, I believe inventing such a ‘grain’ could be highly beneficial for society. No, I don’t believe everyone should be mandated to have one–my Pavlovian reaction to watching “The Entire History of You.” The eeriness this one episode elicited has led me to take on a greater awareness of my work: solving problems in computer vision and in mammography may not result in the gains I imagined. In fact, could it be beneficial that no one adopts the technology at all? That no one has the ability to abuse such powerful AI systems? To abuse and misuse them for aims not necessarily as noble as the detection of breast cancer? What the heck is ‘noble’ anyway? How pedantic! I digress, but the repercussions of our work are important to explore.

For the past semester, I’ve been working towards improving the detection of tumors in routine mammograms. What does that entail? (1) Learning how to use deep learning libraries like TensorFlow and Theano; (2) reading research papers related to deep learning systems; and (3) testing new ideas on a small set of digital mammograms, using some form of cross-validation to check whether they work. So, what’s the point? I’m drawn to this problem in mammography because I know individuals who have died from breast cancer as well as those who have survived. One in eight women will be diagnosed with it in their lifetime, but a whopping 61% of cases are detected early, and those early-detected cases have a 5-year survival rate of 98.6%. What does that mean? It means that while breast cancer is highly prevalent, early detection can avert death, hence the frenzy among scientists to improve the accuracy of the screening process, digital mammography.

The majority of solutions today rely on domain experts to manually identify diseased breast tissue in thousands (if not more) of mammograms, then use these ‘labeled’ images to train computational models known as ‘convolutional neural nets’ (CNNs), with the goal of identifying a patient with or without breast cancer more accurately than a physician can. That ability of CNNs to outperform radiologists has yet to be achieved. Experts attribute this deficiency to CNNs’ reliance on being trained with intractably large sets of labeled mammograms; such a dataset, if it exists, must encapsulate all features of tumors that are relevant to any patient. Many published CNNs have failed to generalize to other breast cancer datasets.

My work thus centers on developing a model that takes in unlabeled (raw) mammograms and provides an indication of diseased breast tissue. How? It uses ‘smarter’ weights in the CNN, eliminating the need to provide millions of labeled images. The models I work on are essentially simple visual cortexes: you give one an arbitrary mammogram, and each layer processes the image a little more and understands it in some deeper sense, until the very last layer, which has an abstract understanding of the mammogram, gives way to the binary outcome of diseased or undiseased breast tissue. Like a newborn baby, our CNN starts off as a blank slate and becomes more and more specialized over time as it is exposed to more stimuli (mammograms, in our case). Whether or not our work can be adapted to more sinister applications… well, only time will tell, but right now, I can’t imagine a scenario in which improved screening could be in any way nefarious. Someone please prove me wrong.
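To give a flavor of what “each layer processes the image a little more” means, here is a tiny NumPy sketch of one convolution, nonlinearity, and pooling step, the basic building block of any CNN. This toy edge detector is purely illustrative; it is not the model described above, and the image and kernel are invented:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution: slide the kernel across the image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity: keep positive responses, zero out the rest."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample: keep the strongest response in each size-by-size patch."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A tiny 6x6 "image" that is dark on the left, bright on the right.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

edge_kernel = np.array([[-1.0, 1.0]])   # responds to dark-to-bright edges
features = max_pool(relu(conv2d(img, edge_kernel)))
print(features.shape)
```

The output is a smaller, more abstract map that simply marks where the edge is; a real CNN stacks many such layers, and learns the kernels instead of hand-picking them.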


Joyce is a PhD Candidate whose research focuses on machine learning. We thought this posting was a great excuse to get to know a little more about her, so we asked her a few questions!

Q: What do you find most interesting/compelling about your research?
A: reverse engineering human intelligence

Q: What do you see are the biggest or most pressing challenges in your research area?
A: from the thought leaders @ DARPA: “building systems capable of contextual adaption”

Q: If there were a hell for ponies, what do you think it would look like?
Your answer should be in the form of a picture drawn using microsoft paint.

A:

From Terminator to Skynet

Longshaokan (Marshall) Wang, PhD Candidate

If you’ve seen the movie Terminator, you’ll probably remember the scary robot who would stop at nothing to kill the protagonist because it was programmed to do so. The mastermind behind the Terminator is Skynet, the Artificial Intelligence (AI) that broke the grip of its human creators and evolved on its own to achieve super-human intelligence. Skynet could think independently and issue commands, whereas the Terminator merely received them. We call the former a self-evolving AI and the latter a deterministic AI. It isn’t difficult to see which one is the smarter AI.

Deterministic AIs are much easier to create. For example, in the video game LaserCat, you control a cat that shoots lasers at mice. You gain points when you kill a mouse and lose points when you collide with a mouse or let one escape. The points for each kill are proportional to the distance between the cat and the right boundary, because it’s harder to react when the cat is far from the right side.

LaserCat

If we build a deterministic AI to control the cat, we can extract the speed of the mice, lasers, and cat, as well as the frequency of lasers. Then, we can move the cat in front of each mouse, as close as possible but without running into it, when the next laser appears. Notice that we are using a lot of prior knowledge about the game including rules and the objects’ mechanics. When building AIs to solve real world problems, it is often difficult to acquire such domain knowledge.

Is it possible, then, to build a self-evolving AI that knows nothing about the rules or mechanics but can practice and get better on its own based on what appears on the screen, the way a human plays? Such an AI is not science fiction. We can build it with a Reinforcement Learning algorithm. The concept, unsurprisingly, originated from human learning. If we receive a reward after performing some action, that action is reinforced and we are more likely to perform it again. For instance, if the food tastes amazing when we try a new recipe, we are likely to reuse that recipe in the future. If we receive a penalty (a negative reward) instead, we are less likely to perform the corresponding action again. A Reinforcement Learning algorithm makes the AI behave in a similar fashion. In the beginning, the AI cat just explores random actions. If an action leads to colliding with a mouse, it observes a decrease in score and avoids that action in the future. If an action leads to killing a mouse and an increase in score, it tries to repeat that action. It will even figure out that shooting a mouse farther from the right boundary is more desirable. Through trial and error and constant updates of the best action in each scenario, the AI cat becomes smarter and smarter and achieves a super-human level on its own.
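The trial-and-error loop just described can be sketched with tabular Q-learning, one of the simplest Reinforcement Learning algorithms. Rather than LaserCat itself, this toy uses a five-state corridor (all names and numbers below are invented for the example): the agent starts at one end, earns +1 for reaching the other end, and reinforces whichever actions led to reward.

```python
import random

random.seed(0)

N_STATES, GOAL = 5, 4                      # states 0..4 along a corridor
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2      # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; 0 = left, 1 = right

def step(state, action):
    """Environment rules (hidden from the learner): reaching GOAL pays +1."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):                       # 500 practice games
    state, done = 0, False
    while not done:
        # Explore a random action sometimes; otherwise exploit the best known.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = max(range(2), key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Reinforce: nudge Q toward reward plus discounted future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)  # the learned strategy: always move right -> [1, 1, 1, 1]
```

The agent is never told the rules; it discovers “always move right” purely from rewards, just as the AI cat learns to hunt.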

Building a self-evolving AI for a video game might not seem very important, given that we can build a deterministic AI that performs reasonably well. But as mentioned, in real life it can be very challenging to acquire domain expertise and hand-craft AIs. AIs powered by Reinforcement Learning algorithms, in contrast, can discover optimal strategies that humans may not think of. This is evidenced by AlphaGo, the AI trained in part by playing the game Go against itself, which eventually beat a human world champion. Reinforcement Learning allows us to move from building the Terminator to building Skynet. We can only hope that the AIs will still remain friendly after their evolutions.


Marshall is a PhD Candidate whose research focuses on artificial intelligence, machine learning, and sufficient dimension reduction. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?
A: Reinforcement learning and deep learning have very broad applications. I can be designing an algorithm to play video games one day and using that algorithm to search for cancer cures the next.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: There aren’t enough statisticians dedicated to AI research, even though building AIs can involve regression, classification, optimization, model selection, and other statistical topics. Right now it seems that computer scientists are taking most of the cake.

Q: What is your deepest darkest fear? Please answer in the form of a haiku.
A: Nova of nuke
      blossoms from alien ship–
      July Fourth

Ultimate Game of Hide-and-Seek

Nick Meyer, PhD Candidate

All of us remember quietly creeping around our homes in our childhood years searching for a friend who was likely hiding in a closet somewhere. The friend was supposed to remain in the same spot, but we all know they secretly moved after they heard us search a nearby room. To find our friend, some of us randomly checked certain parts of each room and, consequently, overlooked a nook here and there. Others, perhaps unknowingly, searched the rooms in an organized and strategic fashion, and they likely found their elusive friend more easily. It turns out that we weren’t just goofing around like our parents said, but we were actually preparing for possible careers later in life. Hide-and-seek applies to many real-world problems. Perhaps a high-ranking government official is tasked with tracking down an adversary who is carrying nuclear material. Or maybe law enforcement is responding to the scene of a crime and the suspect has fled. Whatever the situation might be, experts in the field typically use empirically proven strategies and knowledge of the scenario to inform their search patterns. We hope to develop analytical tools to complement expert knowledge and inform decisions in unknown hide-and-seek scenarios.

This project was motivated by my experience as a summer intern at a national laboratory. My mentor had worked on projects involving patrolling borders and scanning incoming and outgoing vehicles, and most of my research involves sequential decision problems; hide-and-seek lies in the intersection. We frame the problem as a group of search agents pursuing a single, evasive adversary. The units move in discrete time and discrete space: currently, each agent occupies a square in a grid, and its actions are to move forward, backward, left, or right. When pursuing an adversary, critical information can be provided by a third party. In our simulations, an informant appears at random time points and reports a location for the evader. This information may or may not be reliable, so we must determine the credibility of the informant. Our goal is to combine the search agents’ observation histories with the informants’ reports to estimate the strategy that minimizes time-to-capture. Experts will be able to use this methodology to inform real-time decisions in the field.
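As one small, illustrative piece of this, judging an informant’s credibility can be framed as a Beta-Bernoulli update. The sketch below is a toy version under that assumption (the class and numbers are invented here), not the actual methodology, which must also fold in the agents’ search histories:

```python
class Informant:
    """Toy Beta-Bernoulli belief about how reliable an informant is."""

    def __init__(self):
        self.alpha, self.beta = 1.0, 1.0   # uniform prior: no idea yet

    def update(self, tip_was_correct):
        """Called once we learn whether a tip matched the evader's true spot."""
        if tip_was_correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def credibility(self):
        """Posterior mean probability that the next tip is reliable."""
        return self.alpha / (self.alpha + self.beta)

informant = Informant()
for outcome in [True, True, False, True, True]:   # five confirmed tips
    informant.update(outcome)

print(round(informant.credibility(), 3))   # 0.714, i.e. (1+4)/(2+5)
```

After four good tips and one bad one, the searchers weight the informant’s next report at about 71%, and the belief keeps sharpening as more tips are confirmed or refuted.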

Lego Robots

To demonstrate the work, we are developing an interactive viewer using Lego robots. One of the robots will be the adversary while the rest are the search agents. A human will be able to control the adversary from a tablet in a separate room where they cannot see the pursuers. They will tell the evader where to move and see how long they can survive without getting caught. We are very excited about this work and hope the interactive demonstration will facilitate others’ understanding of the methodology.


Nick is a PhD Candidate whose research focuses on reinforcement learning, machine learning, and robotics. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?
A: The vast areas of application for the research are what I find most interesting. Seeing practical applications reminds me of the work’s importance and provides motivation.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: Figuring out a solution that harmonizes nice theoretical properties with computational efficiency is the largest challenge. In large scale problems, solutions that have nice theoretical properties often are computationally expensive or infeasible and vice versa.

Q: What is your deepest darkest fear? Please answer in the form of a question.
A: Alex, what is the singularity?

Welcome to the Laber-Labs student blog!

Eric Laber, PhD

This blog is intended to serve as a venue for our students to practice writing about their research without relying on technical jargon and to provide an accessible overview of some of our current research projects.

In our lab, we focus on developing methodology for data-driven decision-making that is statistically rigorous but also driven by an urgent need in science or society. Thus, we train our students to move quickly between high-level scientific questions and statistical methods for answering those questions. For example, the question may be: does a safety-plan app lead to reduced suicide attempts among those with major depressive disorder? To address this, we might model a patient’s health trajectory as a semi-Markov Decision Process and test the effect of adding the app. This skill is critical for statisticians (and other quantitative researchers) working in decision-making, as much of our work is motivated by problems in other disciplines; poor communication can lead to incorrect results or the development of useless methodology.

I also hope that this blog will serve as a catalyst for new collaborations, ideas, and insights. Trying to explain research that one has been thinking about for months or years to someone who has thought about it for only a few minutes is an excellent way to evaluate your own understanding and to identify blind spots that arise from being too close to the work. If anyone reading this blog finds a project interesting (or thinks we are out to lunch!), please let us know. We’d love to hear from you!