Grad School is a Miserable Experience

Joyce Yu Cahoon
Joyce Yu Cahoon, PhD Candidate

I’m kidding. Your time in graduate school can be challenging, but like so many things in life, it’s how you take on those challenges that matters. My resolution to succeed was tested after I jumped to the conclusion that two semesters of research would never see the light of day. I had a bit of an identity crisis. I questioned my life decisions. I was bitter and resentful. But getting through moments like these has made me realize the intrinsic value of a PhD.

Everyone has some theory of the world in which they conceptualize themselves. At least I’d like to think so. When someone or something dear to us objects to our theory or lurks outside our structure, chaos ensues. Luckily, I was fortunate to have such a formative experience and to gain this perspective through the wide gamut of projects in our lab. I even had a short stint as a data scientist this summer at a local start-up. Over the past year, the projects I’ve worked on include:

  • Monitoring food safety violation rates.
  • Using digital mammography to predict breast cancer.
  • Text mining Twitter data to identify incidences of food poisoning.
  • Developing a means to detect age from facial and body markers.
  • Reconciling disparate data sources.
  • Building a simulation tool to illuminate the benefits and costs of microtransit.

If you aren’t familiar with these topics, let me assure you that this year was a random walk through research areas – no one topic naturally flowed from the last. In hindsight, it was interesting to encounter the broad divide among statisticians in understanding the nature of these problems and the best approaches to solve them—and no, this is not another one of those frequentist vs. Bayesian posts. If there is a common thread to these projects, it is that we have some set of inputs, x, to which we hope to apply some statistical magic so that we arrive at the response of interest, y. But what magic do we use? How do we get from x to y?

In one camp are those who generally assume that the input data are generated by some stochastic model and can be fit using a class of parametric models. By applying this template, we can elegantly conduct our hypothesis tests, arrive at our confidence intervals, and get the asymptotics we desire. This tends to be the lens provided by our core curriculum. The strength of this approach lies in its simplicity. However, with the rise of Bayesian methods, Markov chains, and the like, this camp is beginning to lose the “most interpretable” designation. Moreover, what if the data model doesn’t hold? What if it doesn’t emulate nature at all?

In the other camp are the statisticians whose magic relies on the proverbial “black box” to get from x to y. They use algorithmic methods, such as trees, forests, neural nets, and SVMs, which can achieve high prediction rates. I must admit, most of the projects I’ve worked on fall in this camp. But despite its many advantages, there are issues: multiplicity, interpretability, and dimensionality, to name a few. Case in point, the team I worked with on the digital mammography project was provided a pilot data set of 500 mammograms from 58 patients with and without breast cancer. Our goal was to design a model that could flag a patient as having cancer or not. But how can we make rich inferences from such limited training data? Some argue our algorithmic models can be sufficiently groomed to learn representations that match those of a human mind—in this case, a radiologist identifying mammograms associated with patients at risk of breast cancer. However, our team tinkered with a variety of adaptations of often-cited convolutional neural nets, none of which was able to fully capture the representations we desired in identifying radiological features. The tools at hand were simply not designed to achieve the objective; grooming was not the solution.

So now that I’ve come through this experience and am again looking forward — I have to ask — in which camp do I fall? Perhaps it’s not either-or; perhaps it’s not even a combination, but something entirely new. Whatever the path forward is, I’m excited to be playing a part. It’s been an intense year, but the level of intellectual growth and personal self-discovery made it all the more worthwhile.

Leo Breiman. Statistical Modeling: The Two Cultures. Statistical Science, 2001.

Joyce is a PhD Candidate whose research focuses on machine learning. We asked a fellow Laber-Labs colleague to ask Joyce a probing question —

Q:  Propose a viable strategy to Kim Jong-un on how to take over the world in the next 5 years. — Marshall Wang


With just nuclear capability, NK is left with a route with low odds but a high payout. They should continue to do missile tests that inflate their nuclear capability. Kim should also ramp up the disparaging comments against Trump, for Trump’s inaction insinuates Americans would never use an atomic bomb. Such comments would likely not draw sanctions from strong allies, namely China and Russia. Kim should then leak intel on a planned nuclear weapon launch as close to the SK border as possible. If the stars align, Trump could justify nuking NK, but the collateral damage in SK would likely draw political ire from the global community. If successful to this point, the US would fall into great political turmoil as Trump would be demonized as worse than Putin. Kim would then need the US and Russia to somehow engage with one another in WW3. While they are preoccupied on that front, Kim could start a violent civil war within Korea. This may involve bombing highly populated areas in SK, though the US would HAVE to be fiercely preoccupied on other fronts AND China would have to be involved, perhaps on the Eastern front, for Kim to execute such a scheme successfully. NK should at this time revert to a defensive strategy in order to move towards a united Korea. Over the course of a few years, Kim should hope both sides take heavy casualties, provide help to China when he can, and win over other Asian allies of the US. And since Korea is among the most advanced nations in technological production, these years would leave a destructive gap in technological progress.


This is Joyce’s second post! To learn more about her research, check out her first article here!

Teaching human language to a computer

Longshaokan (Marshall) Wang
Longshaokan (Marshall) Wang, PhD Candidate

Have you ever learned to write code? If so, you were learning a “computer language.” But have you ever considered the reverse: teaching a computer to “understand” a human language? With the advancement of machine learning techniques, we can now build models to convert audio signals to text (Automatic Speech Recognition), detect emotions carried by sentences (Sentiment Analysis), identify intentions from text (Natural Language Understanding), translate text from one language to another (Machine Translation), synthesize audio signals from text (Text-To-Speech), and more! In fact, you have probably already been using these models without knowing, because they are the brains of popular artificial intelligence (AI) assistants such as Amazon’s Alexa, Google Assistant, Apple’s Siri, Microsoft’s Cortana, and most likely Iron Man’s Jarvis. If you have ever wondered how these AI assistants interact with you, then you are in luck! We are going to take a high-level look at how these models are built.

Many of the language-processing tasks listed above use variations of a machine-learning model called a Recurrent Neural Network (RNN). But let’s start from the very beginning. Say you spotted an animal you haven’t seen before, and you attempt to classify it. Your brain is implicitly considering multiple factors (or features): Is it big? Does it make a noise? Does it have a tail? And so on. You weight these factors differently because maybe the color of its fur is not as important as the shape of its face. Then, your guess will be the “closest” animal you know. A machine-learning model for classification works similarly. It maps a set of input features (e.g., big, purrs, has a tail, …) to a classification label (cat). First, the model needs to be trained using samples with correct labels, so that it knows which features correspond to each label. Then, given the features of a new sample, the model can assign it the “closest” label it knows.

A simple example of a classification model is a perceptron. This model uses a weighted sum of the input features to produce a binary classification based on whether the sum passes a threshold:


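As a minimal sketch (my illustration, not code from the original post), a perceptron is only a few lines; the weights and threshold below are hand-picked rather than learned:

```python
# A perceptron: weighted sum of inputs, thresholded to a binary output.
def perceptron(features, weights, threshold):
    """Return 1 if the weighted sum of features reaches the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, features))
    return 1 if total >= threshold else 0

# With hand-picked weights, this perceptron computes logical AND.
def and_gate(a, b):
    return perceptron([a, b], weights=[1, 1], threshold=2)

print([and_gate(a, b) for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)]])  # [0, 0, 0, 1]
```

Training would adjust the weights from labeled examples; here they are fixed by hand to keep the sketch minimal.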
But a perceptron is too simple for many tasks, such as the “Exclusive Or (XOR)” problem. In the XOR problem with 2 input variables, the correct classification is 1 if exactly one input variable is 1, and 0 otherwise:

Values of inputs A and B    True output (correct classification)
A = 0, B = 0                0
A = 1, B = 0                1
A = 0, B = 1                1
A = 1, B = 1                0

However, this classification rule is impossible for a perceptron to learn. To see this, note that with only two input features, a perceptron essentially draws a line in the plane to separate the two classes, and in the XOR problem no line can classify the labels correctly (separate the yellow and gray dots):


To handle more complicated tasks, we need to make our model more flexible. One method is to stack multiple perceptrons to form a layer, stack multiple layers to form a network, and add non-linear transformations to the perceptrons:


The result is called an Artificial Neural Network (ANN). Instead of learning only a linear separation, this model can learn extremely complicated classification rules. We can increase the number of layers to make the model “deep” and more powerful, which we refer to as a Deep Neural Network (DNN) or Deep Learning.
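To make the XOR discussion concrete, here is a minimal sketch (hand-picked weights, my illustration rather than a trained network) showing that stacking threshold units into two layers solves XOR, which a single perceptron cannot:

```python
# One threshold unit (a perceptron without learning).
def unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Two hidden units feed one output unit: XOR(a, b) = AND(OR(a, b), NAND(a, b)).
def xor_net(a, b):
    h_or = unit([a, b], [1, 1], 1)        # fires if at least one input is 1
    h_nand = unit([a, b], [-1, -1], -1)   # fires unless both inputs are 1
    return unit([h_or, h_nand], [1, 1], 2)

print([xor_net(a, b) for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)]])  # [0, 1, 1, 0]
```

In practice the weights are learned from data, and smooth nonlinearities (e.g., tanh) replace the hard threshold so the network can be trained by gradient descent.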

Despite the flexibility of the DNN model, language processing remains a challenging classification task. For starters, sentences can have different lengths. In cases like Machine Translation, the output is not just a single label. What’s more, how can we train a model to extract useful linguistic features on its own? Just think about how hard it is for a human to become a linguist. So, to handle language processing, we need a few more twists on our DNN model.

To deal with the variable lengths of sentences, we process each word of a sentence individually: each word is mapped to a numeric vector of a fixed length, known as a word embedding. A good word embedding tends to put words with related meanings, such as “dolphin” and “SeaWorld,” close to one another in the vector space and words with distinct meanings far apart:


The embeddings are then fed to the DNN for classification.
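A toy sketch of the idea (the vectors below are made up for illustration; real embeddings such as word2vec or GloVe have hundreds of dimensions learned from text):

```python
import math

# Hypothetical 3-dimensional embeddings; related words point in similar directions.
embeddings = {
    "dolphin":  [0.9, 0.8, 0.1],
    "seaworld": [0.8, 0.9, 0.2],
    "calculus": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: near 1 for similar directions, near 0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(embeddings["dolphin"], embeddings["seaworld"]))  # high
print(cosine(embeddings["dolphin"], embeddings["calculus"]))  # low
```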

But a word’s meaning and function also depend on its context in the sentence! How can we preserve the context when processing a sentence word by word? Instead of using only the current word as our DNN’s input, we also use the output of our DNN for the previous word as an additional input. The resulting structure is called a Recurrent Neural Network (RNN) because the previous output becomes part of the current input:


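A bare-bones sketch of that recurrence (random weights for illustration; a real RNN learns them from data):

```python
import math
import random

random.seed(0)
DIM = 3  # size of word vectors and hidden state

def rnn_step(x, h, w_x, w_h):
    """One RNN step: mix the current word vector x with the previous state h."""
    return [math.tanh(sum(wx * xi for wx, xi in zip(row_x, x)) +
                      sum(wh * hi for wh, hi in zip(row_h, h)))
            for row_x, row_h in zip(w_x, w_h)]

# Random weight matrices (learned in a real model).
w_x = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]
w_h = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]

sentence = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # three toy "word vectors"
h = [0.0] * DIM                               # initial state: no context yet
for word in sentence:
    h = rnn_step(word, h, w_x, w_h)           # previous output feeds back in

# h now summarizes the whole sentence and could feed a classifier.
print(h)
```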
Now we know how to make our model “read” a sentence, but how do we format all the language-processing tasks as classification problems? It’s straightforward in Sentiment Analysis, where we use the output of an RNN for the last word as a summary of the sentence and add a simple classification model on top of the summary. The labels can be [“positive”, “neutral”, “negative”] or [“happy”, “angry”, “sad”, …]. In Machine Translation, we have an encoder RNN and a decoder RNN. The encoder reads and summarizes the sentence in language A; the decoder sequentially generates the translation word by word in language B. Given what you’ve learned so far, can you figure out how to use an RNN for Natural Language Understanding, Automatic Speech Recognition, and Text-To-Speech?

On this journey, we started with the basic classification model, the perceptron, and finished with the bleeding-edge classification models that can process human language. We have peeked into the brains of the AI assistants. Exciting research in language processing is happening as we speak, but there is still a long road ahead for the AI assistants to converse like humans. Language processing is, as mentioned before, not easy. At least next time you get frustrated with Siri, instead of yelling “WHY ARE YOU SO DUMB?” you can yell “YOU CLASSIFIED MY INTENTION WRONG! DO YOU NEED A BETTER EMBEDDING?”

[1] Danilo Bargen. Programming a Perceptron in Python. 2013.

[2] Ivan Vasilev. A Deep Learning Tutorial: From Perceptrons to Deep Networks. 2014.

[3] Jagreet. Overview of Artificial Neural Networks and Its Applications. 2017.

[4] Madrugado. Wonderful World of Word Embeddings: What Are They and Why Are They Needed? 2017.

[5] Colah. Understanding LSTM Networks. 2015.

Marshall is a PhD Candidate whose research focuses on artificial intelligence, machine learning, and sufficient dimension reduction. We asked a fellow Laber-Labs colleague to ask Marshall a probing question —

Q:  If you were running a company in Boston and had summer interns coming from out of town, what would be the best way to scam some money off of them? — James Gilman


Call my company Ataristicians and ask for seed money.

Just kidding. On a more serious note, if I were a scammer, I would take advantage of the fact that in Boston, gifting weed is legal but selling is not. The way the transaction works is that the buyer “accidentally” drops his money and then picks up the “gift bag” from the seller. The employees of my company would go to all the intern events, establish contacts with the interns, find the potential customers, and pose as discreet weed dealers. Then we would simply put garbage in the gift bag and take the interns’ “dropped” money. There’s nothing illegal about gifting garbage, so those interns can’t get help from the police. And because they came from out of town, they are unlikely to have connections with local gangs. Now, if we wanted to make more money, we would record the price negotiations and the transactions, then blackmail the interns, threatening to email the recordings to their managers and ruin their careers.

This is Marshall’s second post! To learn more about his research, check out his first article here!

Variable Selection using LASSO

Wenhao Hu
Wenhao Hu, PhD Candidate

How do we identify a gene related to cancer? What factors are correlated with graduation rates across NCAA universities? To answer such questions, statisticians usually use a method called variable selection. Variable selection is a technique to identify significant factors related to the response, e.g., graduation rates. One of the most widely used variable selection methods is the LASSO, a standard tool among quantitative researchers working across nearly all areas of science.

LASSO can handle data with lots of factors, e.g., thousands of genes. In the era of big data, this is extremely useful. For example, suppose that there are 50 patients with cancer and another 50 healthy people, and scientists sequence each subject’s genome at ~100k positions. To identify the genes related to cancer, one needs to check all ~100k positions. Traditional regression methods fail in this case because they usually require that the number of subjects be larger than the number of variables. LASSO avoids this problem by introducing regularization, an idea that has since been adopted by many other machine learning and deep learning algorithms. LASSO has been implemented in most statistical software environments: R has a package called glmnet, and SAS has a PROC called glmselect.
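To illustrate in Python (a sketch with scikit-learn’s `Lasso` and simulated data; the R and SAS tools mentioned above work analogously), here is a problem with four times more variables than subjects, where LASSO still recovers a sparse model:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 200                      # 50 subjects, 200 candidate variables
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]         # only the first three variables matter
y = X @ beta + 0.1 * rng.standard_normal(n)

# Ordinary least squares is ill-posed here (p > n); LASSO's penalty
# shrinks most coefficients exactly to zero.
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"{len(selected)} of {p} variables selected")
```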

To achieve good performance with LASSO, it is vital to choose an appropriate tuning parameter, which balances model complexity against model fit. Classical methods usually focus on selecting a single optimal tuning parameter that minimizes some criterion, e.g., AIC or BIC. However, researchers usually ignore the uncertainty in tuning parameter selection. Our research studies the distribution of the tuning parameter, and thus provides scientists with information about the variability of model selection. Furthermore, we are developing an interactive R package for LASSO. Using the package, scientists can dynamically see the selected model and the corresponding false selection rates. This allows them to explore the dataset and to incorporate their own subject knowledge into model selection.
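As a rough sketch of the variability being described (a hypothetical setup, not the authors’ method), one can refit cross-validated LASSO on bootstrap resamples and examine the spread of the selected tuning parameter:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 60, 100
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] - X[:, 1] + rng.standard_normal(n)

alphas = []
for _ in range(10):                          # a few bootstrap resamples
    idx = rng.integers(0, n, n)              # resample subjects with replacement
    fit = LassoCV(cv=5).fit(X[idx], y[idx])
    alphas.append(fit.alpha_)                # tuning parameter chosen by CV

print(f"selected tuning parameter: mean {np.mean(alphas):.3f}, sd {np.std(alphas):.3f}")
```

The spread of `alphas` across resamples is exactly the kind of selection uncertainty that reporting a single “optimal” tuning parameter hides.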

Illustration of the interactive R package under development for variable selection.

Wenhao is a PhD Candidate whose research interests include variable selection and statistical learning. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?

A: My research gives me a better understanding of the theory of linear models, which are among the most widely used statistical models.

Q: What do you see are the biggest or most pressing challenges in your research area?

A: One of the biggest challenges is model interpretability and inference after model selection. Meanwhile, users usually have little freedom to incorporate their domain knowledge into the process of model selection.

Q: Finish this parable:
A Tiger is walking through the jungle whereupon he sees a python strangling a lemur. The Tiger asks the python, “Why must you kill in this way? It is slow and painful. We all must eat, but have you no compassion for your fellow animals?” To which the python replied, “Why must you kill with teeth and fangs? The gore and violence of it is scarring to all who are unfortunate enough to see it.” The tiger considered this for a moment and finally said, “Let us ask the Lemur. Lemur, which is your preferred way to go?”

A: The python relaxed his grip slightly so that the Lemur could speak. “I don’t know which way is better. But if I can choose, I prefer to be killed by the strongest animal. Is Python or Tiger stronger?” The Tiger answered confidently, “I am the strongest animal in the jungle. Python, you should leave the Lemur to me.” The Python was very unhappy and started to debate with the Tiger and the Lemur. After several minutes, the Python and Tiger started fighting with each other. The Lemur escaped…

Following the money trail… To catch the bad guys

Yeng Saanchi
Yeng Saanchi, PhD Candidate

Imagine a world devoid of human exploitation, a world free from the fear of being trapped under the yoke of slavery. Sounds like a perfectly splendid world, if you ask me! Alas, this is a world that remains elusive because there are people who refuse to accept that owning their fellow human being is the epitome of evil. Modern-day slavery, also known as human trafficking, involves the abuse of power by certain individuals or groups to coerce the victims and exploit them through the use of threats or force. It is usually characterized by the giving or receipt of payments or benefits in order to assume control over a person. Forms of modern-day slavery include sex trafficking, labor trafficking, domestic servitude, forced marriage, bonded labor and child labor.


Human trafficking ranks as the third most profitable crime in the world and generates about $32 billion a year. But there is hope! The illicit trade of humans is a problem that is acknowledged by many governments and organisations in the world, and the battle against it has been ongoing for decades now. Although there have been many attempts to curb human trafficking activities, many have come to the realization that these criminal acts cannot be disrupted by conventional policing methods. The inefficacy of conventional methods has given rise to what is referred to as “follow the money” techniques. These methods target illicit assets, such as the financial assets of criminal organisations. It is claimed that targeting illicit assets demonstrates that crime does not pay, disrupts criminal networks and markets, and acts as a deterrent through reduced returns. While the confiscation systems developed have been partly effective, there is an ongoing discussion as to whether these current methods are truly achieving the objective of curbing these crimes.

The purpose of the current human trafficking project being undertaken by Laber Labs is to map out connections between people based on personal and geographic information, as well as their financial transactions. This is to aid in devising a means of detecting, with reasonable accuracy, which transactions appear suspicious and are more likely to be associated with criminal activities. The long-term goal of the project is to help law enforcement apprehend the criminals as well as stop these crimes before they are committed, if possible.

The dark web plays an important role in exploring this method of apprehending criminals involved in illegal activities, primarily human trafficking. The dark web is the content on the world-wide web that can only be accessed with specific software, configurations, or authorisation. Though the dark web is deemed a treasure trove of criminal activity, a study conducted by Terbium Labs showed that about 48% of the activities that take place on the dark web are legal. Interesting, right? The dark web is actually patronised by many people who simply wish for privacy or anonymity.

The task of “catching the bad guys” perpetrating these inhuman acts is no doubt a challenging one and hopefully the outcome of this project will provide an effective method for curbing this canker.

I will end this post with a quote by William Wilberforce: “If to be feelingly alive to the sufferings of my fellow creatures is to be a fanatic, I am one of the most incurable fanatics ever permitted to be at large.”

Yeng is a PhD Candidate whose research interests include predictive modeling and variable selection. We thought this posting was a great excuse to get to know a little more about her, so we asked her a few questions!

Q: What do you find most interesting/compelling about your research?

A: What I find most compelling about my research is the potential of saving lives by helping to put a stop to modern-day slavery.

Q: What do you see are the biggest or most pressing challenges in your research area?

A: The most pressing challenge at the moment is building a statistical model for age prediction using body poses in order to help in distinguishing between underage and adult victims.

Q: Give five tips for starting a successful doomsday cult! One tip should be about fostering the deviancy amplification spiral in your potential followers.

A: i) Run for student body president as a way of getting students on board. Could insert subtle messages about an imminent robot apocalypse in the numerous emails that the student president is allowed to send to students.

ii) Reach out to the fraternities and sororities as a way of garnering more support

iii) Put out a story online about a prominent figure in the academic community who is working on helping law enforcement curtail human trafficking and yet has his own coffle of slaves in the guise of a lab, with proof and all (made-up or not). This should elicit moral outrage and help foster the deviancy amplification spiral somewhat, I think.

iv) Work on getting a couple of notable figures involved, probably someone from the academic community. For instance, convincing EBL that an apocalypse is imminent will be a step in convincing many. How to go about that, I’m not certain.

v) The least probable tactic will be to convince the most powerful man in the world that unlike global warming, a robot apocalypse is real and imminent.

Spatial Analysis of College Basketball

Nick Kapur
Nick Kapur, PhD Candidate

For a few weeks each March, the country is captivated by March Madness. Brackets are filled out, bets are placed, and occasionally prayers are answered. Professional sports are wonderful, but college sports generate the purest form of passion: a passion derived from people’s lives being intricately and inexorably tied to the school they attend. At NC State, we are at the epicenter of college basketball. NC State plays in the best basketball conference in the country (the ACC), mere minutes from Duke and UNC, two of the greatest college basketball programs of all time. Constantly competing against the very best schools in the country requires the flexibility and adaptability found in any “underdog” story. I believe that this requirement can lead to the perfect union of NC State basketball and an unlikely partner: the Department of Statistics.

Since the early 2000s, professional sports organizations have slowly embraced the use of statistics and analytics to help drive performance increases. College basketball’s professional equivalent, the National Basketball Association (NBA), has even gone so far as to install special cameras in each arena that record every player’s spatial location 24 times per second. College sports teams, due primarily to a lack of resources, have been far slower to embrace analytics. In college, there are no fancy cameras, leaving most studies to use simple statistics such as points, rebounds, and assists. Meanwhile, the most important offensive concept in the game, the ability to shoot the ball, is captured only by field goal percentage. Field goal percentage is a misleading statistic because it cannot account for where on the court shots originate. This shortcoming lets players who take easier or fewer shots post higher field goal percentages. This is problematic because it doesn’t truly capture the best shooters; it simply captures the most opportunistic ones.

That is where the Statistics department can help make major strides. In a recent project, I created a web application that allows easy tracking of college basketball shots. It does not give all players’ locations 24 times a second like the NBA’s system, but it does allow easy capture of shot location, a glaring missing piece of data for most college programs. In addition, after leading a team of undergraduates to collect data for 20 NC State games from the 2016-17 season, I performed a spatial analysis of the data. This analysis led to several interesting insights. First, we found no evidence for the conventional wisdom that players tend to shoot more (or better) toward the side of their dominant hand. Second, the belief that shooting 3-pointers is significantly better than shooting long 2-pointers was reaffirmed. And finally, likelihood comparisons could be drawn for each player. This is important, as it can be used to determine where certain players are likely to shoot, which is wonderful information for a coach trying to create a game plan.

Overall, this recent project was able to accomplish several interesting tasks in the world of college basketball that will hopefully allow the influence of statistical thinking to soon become an integral part of the game. If this union is embraced by NC State (as it has been thus far), our university can be a leader in driving the field of sports statistics to a higher level while at the same time winning in front of the entire country every March.

Nick is a PhD Candidate whose research interests include machine learning and statistical genetics. His current research focuses on pursuit-evasion and cooperative reinforcement learning. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?

A: I love the ability to work on problems from a diverse set of fields. The ability to do statistical research in sports and then take that research and apply it to national security, robotics, and medicine is incredibly appealing to me.

Q: What do you see are the biggest or most pressing challenges in your research area?

A: I wrote this blog post on sports statistics, so I will answer about that as a research area. I think the most challenging aspect is gaining the trust of the sports community. Like many communities, it tends to be insular and resistant to change. There are still many athletes, coaches, and administrators who do not see the value in listening to people who have not played their sport at a high level. This is slowly changing for the better; however, the area of sports statistics still needs many practitioners who intimately know the sport they are studying, can communicate effectively with the people within that sport’s community, and are open-minded to compromise.

Q: Explain the benefits of Scientology.

A: The founder of Scientology, L. Ron Hubbard, once said “For a Scientologist, the final test of any knowledge he has gained is, ‘did the data and the use of it in life actually improve conditions or didn’t it?’” The question posed in this quote is phenomenal. It is something a statistician should ask themselves every time they are working on a problem. While the statistical methodology of Scientologists may be less rigorous than that of trained statisticians, at least they are asking themselves the appropriate questions (something statisticians don’t always do).

Your Own Path

Robert Pehlman
Robert Pehlman, PhD Candidate

“Follow your own path.” Useful advice when navigating the ups and downs of life. It is also a useful primer for functional data! Sometimes when researchers collect data, they are interested in a “path” or trajectory of some measurable quantity over time. Imagine that you were able to know your heartbeat at any given moment during the day and visualize it as a graph (trajectory), with time on the x-axis and beats per minute on the y-axis. It slows down when you rest, and it speeds up when you play LaserCats™.

Your pulse is a continuous process, which means that if it jumps from 70 bpm to 150 bpm it must visit every value in between. Furthermore, it seems reasonable to assume that your heart rate in the current moment depends on your heart rate from a few minutes ago. In the statistics world, we would say that your heart rate now is correlated with your heart rate in the past. Just as in life, every individual’s heart rate must follow its own path, and no two trajectories are identical. However, it may be reasonable to believe that there are underlying characteristics of your pulse that are similar to other humans’: heart rates tend to be elevated during the day and lower at night. The correlation between your heart rate 5 minutes ago and right now may be the product of a rhythm that is common to all humans. If we make a few assumptions, we can fit a model that describes the average value of this process over time and how the process is correlated with itself in the past and future.
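A toy simulation (my illustration, not the author’s model) makes this kind of correlation concrete: a heart-rate path that follows a daily rhythm while each minute depends on the minute before (an AR(1) process around a time-varying mean):

```python
import numpy as np

rng = np.random.default_rng(42)
minutes = np.arange(24 * 60)
# Daily rhythm: lowest around midnight, highest mid-afternoon.
daily_mean = 70 + 15 * np.sin(2 * np.pi * minutes / (24 * 60) - np.pi / 2)

phi = 0.95                       # how strongly "now" depends on "just before"
hr = np.empty_like(daily_mean)
hr[0] = daily_mean[0]
for t in range(1, len(minutes)):
    hr[t] = daily_mean[t] + phi * (hr[t - 1] - daily_mean[t - 1]) + rng.normal(0, 1)

# Consecutive minutes are highly correlated, as described above.
lag1 = np.corrcoef(hr[:-1], hr[1:])[0, 1]
print(f"lag-1 correlation: {lag1:.2f}")
```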

If you had a high quality model, you could even make predictions about a future heart rate given enough information about the process up until the current time.  Imagine that your goal was to maximize your heart rate at the end of an hour of working out, and you could choose whether to spend your next hour doing crossfit or jogging.  Either one of these options could create the target rate, but the decision about which one performs better might be dependent on prior information, like the path of your heart rate through the day.  The choice about whether to do crossfit or jogging could be tailored to an individual based on how their heart beat progression has looked up until the moment the decision needs to be made.

My research examines the underlying principles involved in modeling a continuous process and using it to predict a future outcome.  I am currently working to apply it to a practical problem — depression in humans.  Mental health workers use existing methodology to assign numerical scores to indicate severity of depression.  Like heart rate, this is a value that may be constantly changing over a day, a week, or a year.  We can tailor a treatment strategy using past information about the trajectory of someone’s depression score to find a medication that works best for that particular individual. I hope that the results of this research could be useful in improving the quality of life of people suffering from depression and that the underlying statistical tools could have broader applications in other fields.

Robert is a PhD Candidate whose research interests include computational statistics and machine learning. His current research focuses on functional Q-learning. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?
A: I enjoy the challenge inherent in solving a problem that doesn’t have a standard solution. A lot of elementary statistics involves applying tools that are already well understood, but research allows you to develop something rigorous that could become a standard practice in the future.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: Sequential decision making needs a lot of data; nonparametric problems tend to need a lot of data; so multi-stage, sequential problems estimated using nonparametric methods might need a lot^2 of data to be good.

Q: The poem Antigonish begins:
      As I was going up the stair
      I met a man who wasn’t there!
      He wasn’t there again today,
      Oh how I wish he’d go away!

Write a 200 word sitcom pitch for a family comedy based on this snippet.
A: Theodore and His Imaginary Frenemy

4th grader Theodore Giffel had always been an outsider and had a hard time connecting with others. He always wanted a friend who would never leave his side, but after he fell down the stairs and got a concussion, he woke up with more friend than he bargained for. Ricky Morton, his imaginary friend, began causing trouble in Theo’s life as soon as he entered it. Theo would be blamed for the havoc caused by Ricky, who seemed to be motivated only by mayhem. Despite this, Ricky stuck by Theo all the time and always kept him busy, even when Theo wished he wouldn’t. Coming this fall on Laber Labs TV!

Reinforcement Learning in Education


Lin Dong
Lin Dong, PhD Candidate


I always prefer video to a pile of reading materials, but I am not sure which one helps me learn better. – Lin Dong


‘Reinforcement’ has become a buzzword in the machine learning and artificial intelligence communities. It has wide applications, from winning video games to automating cars.

If you are not familiar with reinforcement learning, here is the idea. First of all, it is a sub-area of machine learning. In supervised learning, the task is to learn to predict or classify something from a labeled training dataset. For example, suppose you want to decide whether an image shows an apple: you receive some pictures labeled as apples and learn what an apple looks like. In reinforcement learning, the task is not simply to predict or classify but to learn what to do to maximize a reward in a complex dynamic system. In this setting, we don’t have a nicely labeled training set to teach us. So what should we do? Well, we can learn through trial-and-error interactions with the system. This is like learning to make an apple pie: try different kinds of apples and various amounts of sugar. You may puke several times, but eventually you will learn to make a perfect-tasting apple pie.

You may wonder how this is related to education. Think of students taking a course to learn some skills. The complex system is the interaction between the instructor and the students, as well as how students learn the material. The reward of this system is how much the students actually learn.

Nowadays, the common practice in education is a one-size-fits-all method. That is, every student in a course is treated identically across all teaching activities – the same content, the same way of teaching, and the same tests. However, some students may learn better from a video illustration, whereas others may learn more from a well-organized handout. Some students may perform better on a project, while others shine on exams. A better strategy would be a personalized educational scheme that takes into account the inherent differences between students and changes dynamically according to feedback from each student.

The study process can be formally modeled as a Markov decision process. Each student entering the course has his/her own initial status, which may include the student’s characteristics and prior proficiency level. The process starts with an assessment (A), say a quiz. The assessment is essential because instructors normally cannot read minds; they need a quiz to estimate how much a student really understands the content. The result of the quiz is observed (X) and serves as an estimate of the student’s true proficiency level. The instructor then gives an intervention (I) by choosing one of the teaching resources for the student. This intervention moves the student to a new proficiency level. The data triples (A, X, I) accumulate until the student reaches the end of the course. What we care about most is the final assessment result, a measurement of the student’s proficiency level after the course.

This process is slightly different from ordinary Markov decision processes in the sense that there are two completely different decisions to make: how to assess the student’s understanding and how to select the teaching resources for the student. Therefore, our goal is to maximize the final outcome by finding the optimal policy of both assessment and instructor’s intervention.
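As a toy illustration of what “finding the optimal policy” means, the sketch below runs tabular Q-learning on an invented, drastically simplified version of the problem: a single intervention choice (video vs. handout), three proficiency levels, and made-up transition probabilities under which videos help novices most and handouts help intermediate students most. It is not the actual two-decision method, just the flavor of the computation.

```python
import random

random.seed(0)

LEVELS = 3                      # proficiency 0 (novice) .. 2 (mastered)
ACTIONS = ["video", "handout"]

# Invented dynamics: videos help novices most, handouts help intermediates most.
P_IMPROVE = {("video", 0): 0.8, ("video", 1): 0.3,
             ("handout", 0): 0.3, ("handout", 1): 0.8}

def step(level, action):
    """One teaching intervention; reward 1 only when the student reaches mastery."""
    if level < LEVELS - 1 and random.random() < P_IMPROVE[(action, level)]:
        level += 1
    return level, (1.0 if level == LEVELS - 1 else 0.0)

Q = {(s, a): 0.0 for s in range(LEVELS) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2

for _ in range(5000):                       # each episode = one simulated student
    s = 0
    for _ in range(10):                     # at most 10 interventions per course
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == LEVELS - 1:
            break

# The learned rule: which resource to assign at each proficiency level.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(LEVELS - 1)}
print(policy)
```

Under these invented dynamics, the learned policy assigns videos to novices and handouts to intermediate students, exactly the kind of personalized rule the paragraph above describes.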

The next step is to solve for the optimal policy using the accumulated student data. We will use approximate dynamic programming, a tool from reinforcement learning, to learn the optimal teaching plan. Check out my next post for details!

Lin is a PhD Candidate whose research interests include dynamic treatment regimes, reinforcement learning, and survival analysis. Her current research focuses on shared decision making in resource allocation problems. We thought this posting was a great excuse to get to know a little more about her, so we asked her a few questions!

Q: What do you find most interesting/compelling about your research?
A: I can always simulate fake subjects and manipulate their imaginary behaviors. In a larger view, I may change the world of education.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: Inference is hard. That’s why the world needs statisticians.

Q:  Explain, as you might to a child, that just because mommy and daddy are splitting up it doesn’t mean they love him any less.  This is *not* his fault, but, if we’re being honest, he didn’t help.

A: The poor kid’s name is Snow.

“Snow, come here!”

Snow comes to Daddy.

“Kid, here is something you need to know. You know that daddy and mommy both fear cold weather right? Well, two people that both hate cold cannot live together, because they make each other colder. Now it is winter and snowy. You know, it is cold now, but it is not because of the snow outside. Snow just does not help warm up the weather. So daddy and mommy have to split for a while.”

The Importance of Personalized Medicine and Sample Sizes

Eric Rose
Eric Rose, PhD Candidate

Suppose you had no idea what the odds of winning the lottery are or how to calculate them. You decide to ask strangers how likely winning is and find that, of the 10 people you asked, one was a lottery winner. You might conclude that winning is not all that uncommon and start buying thousands of tickets. If you did, before you knew it all of your money would be gone, because you based your conclusion on the results of only a handful of people.
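The lottery story is really a statement about estimating a proportion from too few observations. A quick simulation sketch (with an invented “true” win rate of 0.1%) shows how badly a 10-person survey can mislead compared with a 10,000-person one:

```python
import random
import statistics

random.seed(42)

TRUE_P = 0.001   # invented "true" chance that a random stranger is a lottery winner

def estimate(n):
    """Ask n strangers; return the observed fraction of winners."""
    return sum(random.random() < TRUE_P for _ in range(n)) / n

small = [estimate(10) for _ in range(2000)]     # many repeats of the 10-person survey
large = [estimate(10_000) for _ in range(200)]  # many repeats of a 10,000-person survey

# With n = 10, a single lucky winner in your sample yields an estimate of 0.10,
# a hundred times the truth. With n = 10,000 the estimates hug the true value.
print(max(small), max(large))
print(statistics.stdev(small), statistics.stdev(large))
```

The spread of the 10-person estimates dwarfs that of the 10,000-person estimates, which is exactly why a study needs a large enough sample before its conclusions can be trusted.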

Now suppose you get a disease and need to start taking some drug to help cure it. Wouldn’t you want to make sure that the drug was tested on a large enough group to be able to conclude that it truly is the best way to treat whatever disease you have? My research is about calculating the minimum number of people needed in a study to be able to make clear conclusions about what the best way to treat a patient is.

Traditionally, different drugs would be tested on a large group of people, and the drug that worked best on average would be declared the best way to treat all patients. Everyone is different, though, and what is best for most people may not be what works best for you. This has led to the field of personalized medicine, where the goal is to find the best treatment for each individual rather than the best way to treat the population on average.

We may also be interested in estimating the best way to treat an individual patient over a period of time in which multiple treatments are assigned. We could create a set of rules that selects an optimal treatment for an individual patient at each time period. This is called a dynamic treatment regime. To estimate a dynamic treatment regime, a specific type of clinical study called a sequential multiple assignment randomized trial (SMART) is commonly used.

The main goal of my research is to find the minimum number of patients that need to be included in a SMART to meet two specific criteria. First, we want enough patients in our study to ensure that our estimated dynamic treatment regime is close to the true, unknown optimal regime. This criterion is similar to our lottery example, where we want our estimate of the proportion of winning tickets to be close to the true, unknown proportion. Second, we want enough patients to conclude whether or not a personalized approach is significantly better than the standard way of treating patients. If we can find a reasonably small sample size that meets these criteria, then we know we can effectively find improved ways to treat patients. This has the potential to greatly improve the way patients are treated for many different illnesses.

Eric is a PhD Candidate whose research interests include machine learning and statistical computing. His current research focuses on sample size calculations for dynamic treatment regimes. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?
A: It is not only a difficult problem with several statistical challenges but also has an important application in improving the implementation of SMART trials.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: The biggest challenge in this area is that we frequently have to deal with non-regular parameters, which cause a lot of difficulties for conducting any statistical inference.

Q:  Please respond to at least one of the following:

1.)  Provide a linear scoring rule for ranking human beings from best to worst.  
2.)  Explain which of your siblings your parents love the least.  Justify their feelings (be specific).
3.)  Tell us about your favorite breed of dog.

Your answer should be constructed using letters cut and paste from a newspaper like an old school serial killer.

A: Growing up I had golden retrievers and a lab, so I've always had a strong bias for them. Also, they're adorable.

Computers Thinking Like Humans

Isaac J. Michaud
Isaac J. Michaud, PhD Candidate

Sitting in traffic leaves you plenty of time to think about whether there is a faster way of getting to work. Inching along thinking about the problem, you imagine the myriad routes you could take. How would you ever find the best one?

If we thought like computers, we would approach the problem in a straightforward and inefficient way. Every day we would pick a different route between our home and work and see how long it takes us. Over time we would try every route and would know which is the best. It would take us years or perhaps even decades to get our answer. By that time, the answer would be useless because we’d have switched to a new job or moved.

This is all quite silly, you say to yourself; no one would ever try every route – and you are right! The point I am making is that computers are only as intelligent as the instructions we give them. They are simple machines, like pulleys and levers, for the human mind. They magnify our ability to solve problems. If we can turn a complex problem into a long series of easier tasks, then we can feed it to a computer. The computer’s tremendous speed then augments our problem-solving.

Going back to our original problem, I bet you already have a better strategy to find the best route. You may have thought of some principles that would guide our search. Here are two that I think are reasonable:

(1) Shorter routes (in miles driven) are better than long routes. If you could fly to work the problem would be simple because you could take a beeline to work. It would have the shortest travel time, but you have to travel along roads. Even so, the fastest path is probably among the shortest paths. In other words, we would not consider a cross-country trip as a plausible commute. This principle filters out possibilities which we wouldn’t want to waste our time testing.

(2) Similar routes will have similar travel times. Small differences between routes will only result in small differences in travel time. Clearly, using an interstate will be very different from using secondary roads. But two routes going over mostly the same secondary roads will take nearly the same time. Therefore, we can infer the time a route will take if we have already driven a similar one in the past.

Now we can start the process of sifting through all the routes. Principle (1) tells us whether a route is plausible. Principle (2) says that we can change our beliefs about the plausibility of a route based on those routes we have already tested.

To begin, you may take a route that is short but drives straight through downtown. You get snarled in traffic and are late for work. The next day, when picking a new way, you know that routes that go through downtown are slow, so you pick one that avoids downtown. Repeating this winnowing process, you will find the best route in a few days.

This solution approaches the problem in the same way that humans learn. People use their logical reasoning to explore and create generalizations about the world. They adapt to new information without being reprogrammed. Is it possible to get a computer to do the same?

The trick is to translate the problem into terms that a computer can understand. The details can become complicated because you need to mathematize the problem. We must embed the rules mentioned above into a statistical model. This model provides the computer with the language it needs. It can then describe a plausible solution and update these descriptions with new information. The computer is free to use its speed to do the exploration and updating of beliefs.

The term for this algorithm is Bayesian Optimization. It represents the current cutting-edge of solving optimization problems. Beyond finding the best route to work, there are many areas of our lives that are touched by optimization. Maximization and minimization are instrumental in providing us with the quality of life we enjoy today. How well Amazon can cut overhead determines the price of the products we buy. If Ford engineers can maximize the MPG of your car, you will consume less gasoline. Without optimization, we would always be doing things inefficiently!
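Real Bayesian Optimization places a Gaussian process over the unknown function; the sketch below is a deliberately simplified stand-in (the commute function, the kernel, and every constant are invented) that keeps the two principles visible. A similarity kernel predicts a route’s time from routes already driven, and an uncertainty bonus pushes the search toward unexplored routes:

```python
import math
import random

random.seed(1)

def commute_time(x):
    """The unknown "truth": minutes of travel for a route parameterized by x in [0, 1].
    (Entirely invented; in reality each evaluation costs a whole morning's drive.)"""
    return 20 + 30 * (x - 0.6) ** 2 + random.gauss(0, 0.3)

def kernel(a, b, length=0.15):
    """Principle (2): similar routes (|a - b| small) should have similar times."""
    return math.exp(-((a - b) / length) ** 2)

tried = [0.0, 1.0]                          # two arbitrary starting routes
times = [commute_time(x) for x in tried]
grid = [i / 200 for i in range(201)]        # candidate routes to consider

for _ in range(15):                         # each iteration = one more commute
    def lcb(x):
        w = [kernel(x, t) for t in tried]
        mean = sum(wi * ti for wi, ti in zip(w, times)) / (sum(w) + 1e-9)
        uncertainty = 1.0 / (1.0 + sum(w))  # few nearby observations => worth exploring
        return mean - 5.0 * uncertainty     # optimistic "lower confidence bound"
    x_next = min(grid, key=lcb)             # most promising route under current beliefs
    tried.append(x_next)
    times.append(commute_time(x_next))

best = tried[min(range(len(times)), key=lambda i: times[i])]
print(f"best route found near x = {best:.2f} (true optimum at x = 0.60)")
```

After only 17 commutes out of 201 candidate routes, the search settles near the true optimum: the plausibility filter and the similarity assumption do the winnowing that brute force would spend months on.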

These important optimization problems are always growing in complexity. They are too complex for humans to solve by hand and too complex for a computer to brute-force. But by changing how a computer approaches optimization, we can solve problems that were once impossible.

Isaac is a PhD Candidate whose research interests include epidemiology, differential equation modeling, and reinforcement learning. His current research focuses on pursuit-evasion and cooperative reinforcement learning. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?
A: I enjoy working on hard problems. They give me something to always be thinking about.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: The development of theory is the most pressing challenge. Theory tells us why things work; without it, we will be lost when things break.

Q: The Archangel Gabriel is known as the left-hand of god. How did he get that nickname? (Hint: it was in middle school.) Your answer should be exactly 666 characters.

A: My answer has been encrypted using a one-time pad. It is impossible to crack unless you are omniscient. I did not encrypt the space or punctuation characters, to retain a semblance of the original text. Even though this significantly reduces the entropy of the final ciphertext, it is still uncrackable.

Yxf fgzll nf xjv yz xp mlywg gfxke iqxlfw? Yvkw kgezu bg sae istyplmm heiqz wroe Aiggg pfs grzr nyqu iu re askac. Y ak ywtoa xy ypomtkyg flri kgukslbe ko kedj xbv cpvcg. Iabfbn, Pik hs pky nerdpaltc vs ewazw btzpch odufwg. Ca xao ndqis es gqaxj kgi fftetqvu cjjptjkk. Nda qlkqszbi qsb Lobvuef. Dyhspze kqekn gvvm Nas clsttgxc uh wud eszznsu’p arzhiy. Ddgygkz yvh qf hknh xete Bjo mrmqvrrpd gwy rjy ijoa veob xy Oic htdbscg jvy mxmi wxqk og gbvlv if frw ppukk wlzw bie Rrcyfzs hxb tvf qhrp rlxbess cdkg avlhc rzsw keo dhejq xji xo Tmc. Ue trcfr bkf rwqe rm! Djx igz a nzqximqdi, fnu Kecxuzo vqh o abaedylm puwfxz owouk. D igld tgbw Zseit dkxr vdlp. Kcgn ze rjq! Yyin!

The Threat of Artificial Intelligence

Joyce Yu Cahoon
Joyce Yu Cahoon, PhD Candidate

Now that the break has begun, I’m getting around to watching Black Mirror, a series on Netflix, and my god, it’s thought-provoking. I stopped last night at episode “The Entire History of You” because it left me so unsettled. It presents an alternate reality where everyone is implanted with a `grain’ that records everything seen, done and heard and gives each individual a means to replay memories on-screen – which I’ve got to admit, is really neat. Growing up in this age of social media, where everyone is publicizing every mundane moment of their life, the recording and sharing of every second of one’s life is not unimaginable. The repercussions of that technology are what left me clammy; the protagonist of this episode ultimately rips out the grain implanted behind his ear because he is unable to cope with the memory of losing his wife.

I’ve fantasized about working in artificial intelligence (AI) for years but have yet to stop and think about the drawbacks. Yeah, I believe inventing such a `grain’ would be highly beneficial for society. No, I don’t believe it should be mandated that everyone has one–my Pavlovian reaction to watching the “The Entire History of You.” The eeriness this one episode elicited has led me to take on a greater awareness of my work: solving problems in computer vision and in mammography may not result in the gains I imagined. In fact, could it be beneficial that no one adopts the technology at all? That no one has the ability to abuse such powerful AI systems? To abuse and misuse for aims not necessarily as noble as the detection of breast cancer? What the heck is `noble’ anyway? How pedantic! I digress, but the repercussions of our work are important to explore.

For the past semester, I’ve been working toward improving the detection of tumors in routine mammograms. What does that entail? (1) Learning how to use deep learning libraries like TensorFlow and Theano; (2) reading research papers on deep learning systems; and (3) testing new ideas on a small set of digital mammograms, using some form of cross-validation to see whether they work. So, what’s the point? I’m drawn to this problem in mammography because I know individuals who have died from breast cancer as well as those who have survived. One in 8 women will be diagnosed with it in her lifetime, but a whopping 61% of cases are detected early, and those cases have a 5-year survival rate of 98.6%. What does that mean? It means that while breast cancer is highly prevalent, early detection can avert death – hence the frenzy among scientists to improve the accuracy of the screening process, digital mammography.

The majority of today’s solutions rely on domain experts to manually identify diseased breast tissue in thousands (if not more) of mammograms, then use these `labeled' images to develop computational models known as `convolutional neural nets' (CNNs) that aim to identify patients with or without breast cancer more accurately than a physician can. That ability has yet to be achieved, and experts attribute the gap to CNNs’ reliance on intractably large sets of labeled mammograms for training; such a dataset, if it exists, must encapsulate every tumor feature relevant to any patient. Many published CNNs have failed to generalize to other breast cancer datasets.

My work thus centers on developing a model that takes in unlabeled (raw) mammograms and provides an indication of diseased breast tissue. How? It uses `smarter' weights in the CNN, eliminating the need to provide millions of labeled images. The models I work on are essentially simple visual cortexes: you give one an arbitrary mammogram, and each layer processes the image a little more, understanding it in some deeper sense, until the very last layer distills that abstract understanding into a binary outcome: diseased or undiseased breast tissue. Like a newborn baby, our CNN starts off as a blank slate and becomes more and more specialized as it is exposed to more stimuli (mammograms, in our case). Whether our work can be adapted to more sinister applications… well, only time will tell, but right now, I can’t imagine a scenario in which improved screening could be in any way nefarious. Someone please prove me wrong.
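To make “each layer processes the image a little more” concrete, here is a pure-Python toy of one CNN building block (no deep learning library, and a 6×6 synthetic “image” standing in for a mammogram): a convolution filter that looks for vertical edges, a ReLU activation, and 2×2 max-pooling. A real CNN stacks dozens of such blocks, with the filter weights learned from data rather than hand-written.

```python
# A toy 6x6 "image": a bright region meeting a dark region along a vertical edge.
image = [[0, 0, 0, 9, 9, 9]] * 6

# A hand-written 3x3 filter that responds to dark-to-bright vertical edges.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

def convolve(img, ker):
    """Slide the filter over the image; each output pixel is a local weighted sum."""
    n, k = len(img), len(ker)
    out = n - k + 1
    return [[sum(img[i + a][j + b] * ker[a][b] for a in range(k) for b in range(k))
             for j in range(out)] for i in range(out)]

def relu(fmap):
    """Keep positive responses, zero out the rest."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool(fmap):
    """Downsample by taking the max of each 2x2 block: a coarser, more abstract map."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap) - 1, 2)] for i in range(0, len(fmap) - 1, 2)]

edge_response = relu(convolve(image, kernel))
feature_map = max_pool(edge_response)
print(edge_response[0])  # [0, 27, 27, 0]: the filter fires only where the edge is
print(feature_map)       # the pooled, more abstract summary passed to the next layer
```

Deeper layers repeat this recipe on the feature maps of earlier layers, which is how the network works its way from raw pixels to the abstract “diseased or not” judgment described above.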

Joyce is a PhD Candidate whose research focuses on machine learning. We thought this posting was a great excuse to get to know a little more about her, so we asked her a few questions!

Q: What do you find most interesting/compelling about your research?
A: reverse engineering human intelligence

Q: What do you see are the biggest or most pressing challenges in your research area?
A: from the thought leaders @ DARPA: “building systems capable of contextual adaption”

Q: If there were a hell for ponies, what do you think it would look like?
Your answer should be in the form of a picture drawn using microsoft paint.