Facing Missing Data

 

Lin Dong, PhD Candidate

This is a detour from my last post about education. It turns out that I have been working for several months on a project about sequential decision making in the face of missing data, so why not talk about that? Missing data arises in all sorts of settings. For data with a sequential structure, like data from sequential multiple assignment randomized trials (SMARTs), the problem is that patients often drop out. Q-learning and other techniques for estimating the optimal strategy cannot be applied directly to data sets containing missing values, so we need a way to work around them.

The first question we may ask is: why is missing data a problem? Can we not just throw out the missing entries? Why do people care so much about it and develop sophisticated methods to deal with it? Things are not that simple.

Missing data is not a big issue if the data are missing completely at random (MCAR). Yes, that’s jargon. MCAR means that missingness is completely random and is independent of the data. Suppose we have a typical n by p data matrix with n rows corresponding to subjects and p columns corresponding to variables. A quick and dirty way to handle missing data is to throw away all rows containing missing values. This is not a great idea if a large proportion of your rows contain missing values. Suppose you have an unbiased estimator for the full data. Under MCAR, the estimator remains unbiased, but you may lose a lot of efficiency (you become less certain about your estimates).
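
To make the efficiency point concrete, here is a minimal simulation sketch (my own illustration, not from the original post): under MCAR, the complete-case mean is still unbiased, but it is noticeably noisier than the full-data mean.

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_sims = 200, 2000
    full_means, cc_means = [], []

    for _ in range(n_sims):
        x = rng.normal(loc=5.0, scale=2.0, size=n)   # full data
        miss = rng.random(n) < 0.4                   # MCAR: missingness independent of x
        full_means.append(x.mean())                  # estimator with no missingness
        cc_means.append(x[~miss].mean())             # complete-case estimator

    print("bias (full data):", np.mean(full_means) - 5.0)
    print("bias (complete case):", np.mean(cc_means) - 5.0)   # both near zero under MCAR
    print("variance ratio (cc / full):", np.var(cc_means) / np.var(full_means))  # > 1: efficiency loss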

Another type of missingness is called missing at random (MAR), which means that missingness is not completely random but depends only on the observed data. If you throw away incomplete rows under this scenario, you will obtain a biased estimator. For example, cautious and wealthy people tend to avoid answering questions about their income, so an income estimate based only on those who respond would be lower than the truth because the sample over-represents less wealthy respondents. Nonetheless, MAR is actually a very handy assumption because the missingness mechanism is tractable and thus can be modeled. The methods I introduce later are all based on MAR.

If one is unwilling to assume MCAR or MAR, there is a third and final missingness assumption called missing not at random (MNAR), which says that missingness depends on the things you did not observe. A very important paper that introduced these assumptions is Rubin (1976) [1].

So, we cannot simply throw away incomplete rows. What, then, are the alternatives? One option is the general class of imputation methods. Imputation methods are intuitive: they fill in the missing entries based on the researcher’s best knowledge. The simplest imputation fills in the mean or median of the covariate. If we are willing to assume MAR, a more principled approach is to build a model for the variable with missing values and use the model-based prediction as the fill-in value. Instead of filling in a single predicted value, one can estimate the conditional distribution of each variable given all other observed variables and then draw samples from the estimated conditional distribution to fill in the missing values. To account for the uncertainty introduced by sampling, we can repeat the sampling procedure several times so that we have multiple imputed data sets, perform the analysis on each imputed data set, and then combine the results into one final estimator, for example by averaging them. This is called multiple imputation and is a very popular approach for dealing with missing data.
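
As a rough sketch of the multiple-imputation idea, the snippet below uses scikit-learn’s IterativeImputer with posterior sampling to create several imputed data sets and then averages the resulting estimates. This is just one of many possible implementations, not the specific software behind the post.

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(1)
    n = 500
    x1 = rng.normal(size=n)
    x2 = 2.0 * x1 + rng.normal(size=n)          # x2 depends on x1
    X = np.column_stack([x1, x2])
    X_miss = X.copy()
    drop = rng.random(n) < 0.3                  # make some x2 values missing
    X_miss[drop, 1] = np.nan

    m = 5                                       # number of imputed data sets
    estimates = []
    for seed in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed)
        X_imp = imputer.fit_transform(X_miss)   # draw one imputed data set
        estimates.append(X_imp[:, 1].mean())    # analysis of interest: mean of x2

    print("multiple-imputation estimate of E[x2]:", np.mean(estimates))
    print("complete-data benchmark:", X[:, 1].mean())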

Another method, which is less well known, models the missingness mechanism directly. It leads to the inverse probability weighted estimator, where the probability is the probability of being observed. When the data are not MCAR, bias is introduced because the remaining complete cases are no longer a representative sample of the population. The fix is to give each complete row a weight equal to one over the probability that it is fully observed (not missing). The re-weighted complete cases then mimic the full, representative sample, and the estimation of interest can be performed on this re-weighted sample, which uses only the complete rows. The key to this method is estimating the probability of being observed – the missingness mechanism. Luckily, one can model this probability under the MAR assumption.
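
Here is a minimal sketch of inverse probability weighting under MAR, again a toy example of my own: model the probability that a row is observed, then weight each complete case by one over that estimated probability.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 2000
    z = rng.normal(size=n)                           # always-observed covariate
    y = 3.0 + 1.5 * z + rng.normal(size=n)           # outcome of interest, E[y] = 3
    p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * z)))   # MAR: observation depends on z only
    observed = rng.random(n) < p_obs

    # Model the probability of being observed given z, then weight complete cases by 1 / p_hat.
    fit = LogisticRegression().fit(z[:, None], observed.astype(int))
    p_hat = fit.predict_proba(z[observed][:, None])[:, 1]
    w = 1.0 / p_hat

    print("naive complete-case mean:", y[observed].mean())               # biased upward
    print("IPW estimate of E[y]:", np.average(y[observed], weights=w))   # close to 3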

It was not until recently that I realized I had also encountered and studied the missing data issue in my undergraduate years. We were dealing with sensitive questionnaires, in which people were asked questions so sensitive that they might be reluctant to answer truthfully. The mechanism we used to address this was the following: I wanted to ask about a binary, sensitive status, coded as {No = 0, Yes = 1}. Instead of asking directly, I paired it with a non-sensitive and independent question, e.g., how many times did you catch a flight in the last 3 months (an integer). Then, I asked the respondent to report only the sum of the number of flights and the answer to the sensitive question. For example, if a respondent traveled by air 3 times in the last three months and their sensitive status is “Yes”, they should write down 3 + 1 = 4. As researchers, we only observe the 4, which could also mean the respondent flew 4 times and has no sensitive status. This is believed to increase respondent compliance. In effect, the method turns the sensitive status into missing data, since it is never directly observed. Typically, researchers are only interested in the population-level prevalence of the sensitive status, so we applied maximum likelihood to estimate the expected value of the missing variable (in this case, a proportion). More details on this idea can be found in [2].
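
For illustration, here is a small sketch of the maximum likelihood step, assuming the non-sensitive count (flights) follows a Poisson distribution, as in the Poisson item count technique of [2]; the numbers and variable names are made up.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson

    rng = np.random.default_rng(3)
    n, lam_true, p_true = 1000, 2.0, 0.3
    flights = rng.poisson(lam_true, size=n)          # non-sensitive count
    status = rng.random(n) < p_true                  # sensitive status (never observed directly)
    y = flights + status                             # only the sum is reported

    def neg_log_lik(theta):
        lam, p = theta
        # P(Y = y) = (1 - p) * Pois(y; lam) + p * Pois(y - 1; lam), with Pois(-1; lam) = 0
        lik = (1 - p) * poisson.pmf(y, lam) + p * poisson.pmf(y - 1, lam)
        return -np.sum(np.log(lik + 1e-12))

    fit = minimize(neg_log_lik, x0=[1.0, 0.5],
                   bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
    print("estimated prevalence of sensitive status:", fit.x[1])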

A perfect world would have no missing values. The real world, however, is flawed enough that missing data arises wherever data are generated. Working on this issue gives me the illusion that I am helping to fix the world! A great reference for the general missing data problem is Prof. Marie Davidian’s course [3].

[1] Rubin, D. B. (1976). Inference and missing data. Biometrika 63, 581–592.
[2] Tian, G. L., Tang, M. L., Wu, Q. and Liu, Y. (2017). Poisson and negative binomial item count techniques for surveys with sensitive question. Statistical Methods in Medical Research 26, 931–947.
[3] http://www4.stat.ncsu.edu/~davidian/st790/


Lin is a PhD Candidate whose research interests include dynamic treatment regimes, reinforcement learning, and survival analysis. Her current research focuses on shared decision making in resource allocation problems. We asked a fellow Laber-Labs colleague to ask Lin a probing question —

Q: Explain your favorite statistical method, but from the perspective of a crooked politician running a smear campaign against it.
A: Linear regression. This is definitely my favorite model. It is so simple, pure, yet powerful. You can generalize it, penalize it, and even interpret it.
The human brain should be linear – not some complicated, intricate, twisted, impenetrable, nonlinear, *deep* network. Believe me, the whole world should be linear.

This is Lin’s second post! To learn more about her research, check out her first article here!

Improving Football Play-calls Using Madden

Nick Kapur
Nick Kapur, PhD Candidate

The ability to make crucial decisions in real time is one of the most sought-after attributes of a head coach in any sport. Being able to improve upon these decisions is thus an important problem, as it can improve a team’s chances of winning. In baseball, there have been numerous studies on managerial decisions such as defensive alignments, bullpen usage, bunting, and more. These studies have resulted in managers making more efficient decisions, leading directly to better play. In football, coaches face fundamental decisions every down: the personnel, the formation, and the play their team will run. Unlike in baseball, where data are abundant, it is difficult to determine whether football coaches are making these important decisions effectively, for several reasons. First, obtaining labeled data is extremely expensive and requires hand-labeling by domain experts. Furthermore, with a 16-game schedule and an average of only 130 plays per game, NFL football does not generate nearly enough data to reach reliable conclusions.

The lack of sufficient data is not an uncommon problem in science, and due to the proliferation of computing power, it is a problem commonly remedied by simulation studies. Luckily, there is a realistic NFL simulation environment that has been developed and extensively updated for nearly 30 years, EA Sports’ Madden video game franchise. Madden games can act as a model for the underlying system dynamics of an NFL game. We utilize data generated from Madden 17, the most recent version of the game, to train reinforcement learning algorithms that make every play-calling decision throughout an entire game. We compare the results of these algorithms with a baseline established from the game’s built-in play-calling algorithm, an initial surrogate for real-life coaching decisions.

Controllers with Raspberry Pi Computers

To generate data at rates far greater than actual NFL games can provide, we constructed 4 controllers operated through an interface with Raspberry Pi computers. We ran each of these controllers continuously on a separate Xbox and used optical character recognition to capture the current state of the game from image data. The current state was then used as input to our reinforcement learning algorithms, which returned the play to run. The corresponding button presses were subsequently passed to the Raspberry Pi, resulting in 4 Madden games that could run continuously with no human input, collecting data 24 hours per day.
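
The post does not include code, but the loop below sketches the general shape of such a data-collection pipeline. The screen-reading and button-press functions are hypothetical stand-ins for the OCR and Raspberry Pi interface, and the Q-learning update is a generic placeholder rather than our actual algorithm.

    import random
    from collections import defaultdict

    # Hypothetical stand-ins for the OCR screen capture and the Raspberry Pi button interface.
    def read_game_state():
        down = random.randint(1, 4)
        yards_to_go = random.randint(1, 10)
        return (down, yards_to_go)

    def send_play_to_controller(play):
        # In the real setup this would translate the play into button presses; here it is a no-op.
        pass

    PLAYS = ["inside_run", "outside_run", "short_pass", "deep_pass", "punt"]
    q = defaultdict(float)            # Q-values indexed by (state, play)
    epsilon, alpha, gamma = 0.1, 0.05, 0.95

    state = read_game_state()
    for step in range(1000):
        # Epsilon-greedy play selection from the current game state.
        if random.random() < epsilon:
            play = random.choice(PLAYS)
        else:
            play = max(PLAYS, key=lambda p: q[(state, p)])
        send_play_to_controller(play)

        next_state = read_game_state()
        reward = random.gauss(0, 1)   # placeholder; the real reward would come from the game outcome
        best_next = max(q[(next_state, p)] for p in PLAYS)
        q[(state, play)] += alpha * (reward + gamma * best_next - q[(state, play)])
        state = next_state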

Our results show that the reinforcement learning algorithms outperform the built-in Madden play-calling algorithm, leading to better decision-making and thus more victories. These results can potentially provide a framework for evaluating and improving play-calling in football. Additionally, they can potentially be augmented with real data to produce a model that performs better than one based on the real data alone. With enough evidence, football coaches may be compelled to alter strategic decisions for the better, leading to more efficiently called football games.


Nick is a PhD Candidate whose research interests include machine learning and statistical genetics. His current research focuses on pursuit-evasion and cooperative reinforcement learning. We thought this posting was a great excuse to get to know a little more about him, so we asked a fellow Laber-Labs colleague to ask Nick a probing question —

Q: Explain the countable axiom of choice with an analogy involving hot dogs.  

A: Let’s say you really want a hotdog. You are walking down the street, and suddenly you stumble upon an infinite number of hotdog vendors who each have tubs with many hotdogs in them. You know that you are incredibly hungry right now, and that in the future you may want to go back to the best hotdog vendor. Therefore, you get out your trusty megaphone and announce to the hotdog vendors a rule (some function that allows them to choose…let’s call it a choice function) so that each of them will know exactly which of their hotdogs to give you. This way, you don’t have to go and pick out one hotdog from each of them individually. The axiom of choice has now saved you a lot of valuable time and probably doomed you to a sedentary lifestyle.

This is Nick’s second post! To learn more about his research, check out his first article here!

What is in a Model?

Isaac J. Michaud
Isaac J. Michaud, PhD Candidate

Nothing kills communication like jargon: it signals the tribe you belong to. Jargon makes the distinction between the insiders and the outsiders painfully clear. One particular piece of jargon that has always bothered me is the concept of a “model.” I suppose this has been on my mind recently because I have heard people relay the famous quote from George Box, “All models are wrong, but some are useful.” This is an adage that is hard to escape in Statistics, and like all maxims it becomes trite when overused. I am most annoyed when presenters throw this into their lecture as some legal caveat emptor to mitigate the criticisms of their work….

…But, getting back on topic, what exactly did Box mean by a model? We use this term all the time. Taking a blunt view of Statisticians, all we really do is build models. Of course other scientists also build models, we don’t have a monopoly –yet (insert evil laugh). My definition of a model, albeit inept, is: a description of either an object or a process. Now some descriptions are better than others. A detailed blueprint is a more useful description for building a skyscraper than a poem. This is why mathematical models are so prevalent. They cut directly to a quantitative description without any confusion. Models don’t need to be equations; they can take many different forms, for example a computer program. The important thing is that it is a description.

Joyce’s second blog post discusses two camps of modeling. There are those who want the model to be interpretable and those who do not care about the form of the model but only want it to achieve some result, say winning at Go or chess. Both are valid descriptions, but they illuminate different aspects of the same object. Neither of them is right and neither of them is wrong. The only flawed assumption is that the only correct description is your own model.

My research deals specifically with what are called surrogate models. These are models that are built and calibrated to produce the same results as another model. Now why would anyone want to do this? It seems meta and academic. Well, you’re not wrong! But there are very good reasons to do this. Simpler models, assuming they have enough fidelity, are easier to analyze and understand without losing relevant information. When thinking about surrogate models I always remember the short story “On Exactitude in Science” by Jorge Luis Borges, which describes an empire whose cartographers were so proficient that their maps were the same size as the empire itself. Every detail of the terrain reproduced exactly. Obviously such a map, although accurate, is rather unwieldy. A cut down version would be sufficient for most practical purposes. The tricky issue is how to perform the trimming.

My surrogate modeling falls into a gray region between the two camps Joyce describes. Often the surrogate model takes the form of a Gaussian process model that is inscrutable, while the model being approximated is a computer simulation built up from scientific knowledge. The simulation is understandable but slow, whereas the surrogate is the reverse. The Gaussian process model is not a better description of the reality that the computer code is simulating, but it does make certain information available to us that would otherwise be locked away in a computer program running until the end of time. In my case, one model is not enough to describe everything. I believe this plurality holds across Statistics and the other sciences. We must be flexible so that we are not dogmatically stuck at the expense of progress.
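
As a toy illustration of the surrogate idea (my sketch, not Isaac’s actual setup), one can fit a Gaussian process to a handful of runs of a slow simulator and then query the cheap surrogate anywhere:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(x):
        # Stand-in for a slow, physics-based computer model.
        return np.sin(3 * x) + 0.5 * x

    # Run the slow simulator at a handful of design points.
    X_design = np.linspace(0, 3, 8)[:, None]
    y_design = expensive_simulation(X_design).ravel()

    # Fit a Gaussian process surrogate to those runs.
    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    surrogate.fit(X_design, y_design)

    # The surrogate is cheap to query anywhere, with an uncertainty estimate attached.
    X_new = np.linspace(0, 3, 100)[:, None]
    mean, std = surrogate.predict(X_new, return_std=True)
    print("max predictive std over the input range:", std.max())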

 


Isaac is a PhD Candidate whose research interests include epidemiology, differential equation modeling, and reinforcement learning. His current research focuses on pursuit-evasion and cooperative reinforcement learning. We asked a fellow Laber-Labs colleague to ask Isaac a probing question —

Q: On the table in front of you are two boxes.  1 is clear and contains $1000.  The other is opaque, and contains either $1 million or nothing.  You have two choices: 

1. Take only the opaque box. 
2. Take both boxes. 
The catch is, before you were asked to play this game, a being called Omega, who has nearly-perfect foresight, predicted what you would do.  If Omega predicted you would take one box, they put $1 million in the opaque box.  If Omega predicted you’d take 2 boxes, they put nothing in the opaque box.
Do you choose one or two boxes?

A: Both boxes. What you assume about Omega’s foresight and objectives leads to different conclusions. If I believe that Omega is more likely to be correct than wrong when predicting my actions, then I would choose the opaque box in order to maximize my expected reward. But if I assume that Omega is a rational being who believes I am a rational being and has the goal of maximizing the chance of being correct, then she will know that I am going to choose the opaque box with 100% certainty and will predict it. But I, knowing that Omega will choose this, will maximize my reward by instead choosing both boxes. Omega will know this and adjust accordingly. I would then have no reason to switch to picking the opaque box because it would have nothing in it. Instead, I would settle for taking both boxes while realizing a $1,000 prize, and Omega would be correct in her foresight. Moral of the story: $1,000 on the table is worth $1 million gambled with an omniscient being.

This is Isaac’s second post! To learn more about his research, check out his first article here!

Grad School is a Miserable Experience

Joyce Yu Cahoon
Joyce Yu Cahoon, PhD Candidate

I’m kidding. Your time in graduate school can be challenging, but like so many things in life, it’s how you take on those challenges that matters. My resolve to succeed was tested after I jumped to the conclusion that two semesters of research would never see the light of day. I had a bit of an identity crisis. I questioned my life decisions. I was bitter and resentful. But getting through moments like these has made me realize the intrinsic value of a PhD.

Everyone has some theory of the world in which they conceptualize themselves. At least I’d like to think so. When someone or something dear to us objects to our theory or lurks outside our structure, chaos ensues. Luckily, I had the fortune to have such a formative experience and gain this perspective through the wide gamut of projects in our lab. I even had a short stint as a data scientist this summer at a local start-up. Over the past year, the projects I’ve worked on include:

  • Monitoring food safety violation rates.
  • Using digital mammography to predict breast cancer.
  • Text mining Twitter data to identify incidences of food poisoning.
  • Developing a means to detect age from facial and body markers.
  • Reconciling disparate data sources.
  • Building a simulation tool to illuminate the benefits and costs of microtransit.

If you aren’t familiar with these topics, let me assure you that this year was a random walk through research areas – no one topic naturally flowed from the last. In hindsight, it was interesting to encounter the broad divide among statisticians in understanding the nature of these problems and the best approaches to solve them—and no, this is not another one of those frequentists vs. bayesians posts. If there is a common thread to these projects, it would be that we have some set of inputs, x, to which we hope to apply some statistical magic so that we arrive at the response of interest, y. But what magic do we use? How do we get from x to y?

In one camp are those that generally assume that the input data is generated by some stochastic model and can be fit using a class of parametric models. By applying this template, we can elegantly conduct our hypothesis tests, arrive at our confidence intervals, and get the asymptotics we desire. This tends to be the lens provided by our core curriculum. The strength of this approach lies in its simplicity. However, with the rise of Bayesian methods, Markov chains, etc. this camp is beginning to lose the “most interpretable” designation. Moreover, what if the data model doesn’t hold? What if it doesn’t emulate nature at all?

In the other camp are the statisticians whose magic relies on the proverbial “black box” to get from x to y. They use algorithmic methods, such as trees, forests, neural nets, and SVMs, which can achieve high predictive accuracy. I must admit, most of the projects I’ve worked on fall in this camp. But despite its many advantages, there are issues: multiplicity, interpretability, and dimensionality, to name a few. Case in point, the team I worked with on the digital mammography project was provided a pilot data set of 500 mammograms from 58 patients with and without breast cancer. Our goal was to design a model that could flag a patient as having cancer or not. But how can we make rich inferences from such limited training data? Some argue our algorithmic models can be sufficiently groomed to learn representations that match those of a human mind – in this case, a radiologist identifying mammograms associated with risk of breast cancer. However, our team tinkered with a variety of adaptations of often-cited convolutional neural nets, and no variation was able to fully capture the representations we desired for identifying radiological features. The tools at hand were simply not designed to achieve the objective; grooming was not the solution.

So now that I’ve come through this experience and am again looking forward — I have to ask — in which camp do I fall? Perhaps it’s not either-or; perhaps it’s not even a combination, but something entirely new. Whatever the path forward is, I’m excited to be playing a part. It’s been an intense year, but the level of intellectual growth and personal self-discovery made it all the more worthwhile.

References
Breiman, L. (2001). Statistical modeling: the two cultures. Statistical Science 16, 199–231.


Joyce is a PhD Candidate whose research focuses on machine learning. We asked a fellow Laber-Labs colleague to ask Joyce a probing question —

Q:  Propose a viable strategy to Kim Jong-un on how to take over the world in the next 5 years. — Marshall Wang

A:

With just nuclear capability, NK is left with a route with low odds but a high payout. They should continue to do missile tests that inflate their nuclear capability. Kim should also ramp up the disparaging comments against Trump, for Trump’s inaction insinuates Americans would never use an atomic bomb. Such comments would likely not draw sanctions from strong allies, namely China and Russia. Kim should then leak intel on a planned nuclear weapon launch as close to the SK border as possible. If the stars align, Trump could justify nuking NK, but the collateral damage in SK would likely draw political ire from the global community. If successful to this point, the US would fall into great political turmoil as Trump would be demonized as worse than Putin. Kim would then need the US and Russia to somehow engage with one another in WW3. While they are preoccupied on that front, Kim could start a violent civil war within Korea. This may involve bombing highly populated areas in SK, though the US would HAVE to be fiercely preoccupied on other fronts AND China would have to be involved, perhaps on the Eastern front, for Kim to execute such a scheme successfully. NK should at this time revert to a defensive strategy in order to move toward a united Korea. Over the course of a few years, Kim should hope both sides take heavy casualties, provide help to China when it can, and win over other Asian allies of the US. And since Korea is among the most advanced nations in technological production, these years would leave a destructive gap in technological progress.

 

This is Joyce’s second post! To learn more about her research, check out her first article here!

Teaching human language to a computer

Longshaokan (Marshall) Wang
Longshaokan (Marshall) Wang, PhD Candidate

Have you ever learned to write code? If so, you were learning a “computer language.” But, have you ever considered the reverse; teaching a computer to “understand” a human language? With the advancement of machine learning techniques, we can now build models to convert audio signals to texts (Automatic Speech Recognition), detect emotions carried by sentences (Sentiment Analysis), identify intentions from texts (Natural Language Understanding), translate texts from one language to another (Machine Translation), synthesize audio signals from texts (Text-To-Speech) and more! In fact, you probably have already been using these models without knowing because they are the brains of the popular artificial intelligence (AI) assistants such as Amazon’s Alexa, Google Assistant, Apple’s Siri, Microsoft’s Cortana, and most likely Iron Man’s Jarvis. If you have ever wondered how these AI assistants interact with you, then you are in luck! We are going to take a high-level look at how these models are built.

Many of the language-processing tasks listed above use variations of a machine-learning model called Recurrent Neural Network (RNN). But, let’s start from the very beginning. Say you spotted an animal you haven’t seen before, and you attempt to classify it. Your brain is implicitly considering multiple factors (or features): Is it big? Does it make a noise? Does it have a tail?, etc. You weight these factors differently because maybe the color of its fur is not as important as the shape of its face. Then, your guess will be the “closest” animal you know. A machine-learning model for classification works similarly. It maps a set of input features (e.g., big, purrs, has a tail, …) to a classification label (cat). First, the model needs to be trained using samples with correct labels, so that it knows what features correspond to each label. Then, given the features of a new sample, the model can assign it to the “closest” label it knows.

A simple example of a classification model is a perceptron. This model uses a weighted sum of the input features to produce a binary classification based on whether the sum passes a threshold:

[1]
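
As a concrete sketch (mine, not from the original post), a perceptron is just a weighted sum of the inputs passed through a threshold:

    import numpy as np

    def perceptron(features, weights, bias):
        # Weighted sum of the inputs, thresholded at zero to give a binary classification.
        return int(np.dot(weights, features) + bias > 0)

    # Toy example: classify "is it a cat?" from (purrs, has_tail, is_big).
    weights = np.array([2.0, 1.0, -1.5])
    bias = -0.5
    print(perceptron(np.array([1, 1, 0]), weights, bias))  # 1: looks like a cat
    print(perceptron(np.array([0, 0, 1]), weights, bias))  # 0: probably not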

But a perceptron is too simple for many tasks, such as the “Exclusive Or” (XOR) problem. In the XOR problem with 2 input variables, the correct classification is 1 if exactly one input variable is 1, and 0 otherwise:

Values of input variables A and B    True output / correct classification
A = 0, B = 0                         0
A = 1, B = 0                         1
A = 0, B = 1                         1
A = 1, B = 1                         0

However, this classification rule is impossible for a perceptron to learn. To see this, note that if there are only two input features, a perceptron essentially draws a line in the plane to separate the 2 classes, and in the XOR problem, a line can never classify the labels correctly (separate the yellow and gray dots):

[2]

To handle more complicated tasks, we need to make our model more flexible. One method is to stack multiple perceptrons to form a layer, stack multiple layers to form a network, and add non-linear transformations to the perceptrons:

[3]

The result is called an Artificial Neural Network (ANN). Instead of learning only a linear separation, this model can learn extremely complicated classification rules. We can increase the number of layers to make the model “deep” and more powerful, which we refer to as a Deep Neural Network (DNN) or Deep Learning.
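
To see how stacking helps, here is a tiny two-layer network that solves XOR. The weights are hand-picked for illustration; in a real ANN they would be learned from data.

    def step(x):
        return 1 if x > 0 else 0

    def xor_network(a, b):
        h_or = step(a + b - 0.5)         # hidden perceptron computing OR
        h_and = step(a + b - 1.5)        # hidden perceptron computing AND
        return step(h_or - h_and - 0.5)  # output perceptron: "OR but not AND" = XOR

    for a in (0, 1):
        for b in (0, 1):
            print(f"A={a}, B={b} -> {xor_network(a, b)}")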

Despite the flexibility of the DNN model, language processing remains a challenging classification task. For starters, sentences can have different lengths. In cases like Machine Translation, the output is not just a single label. What’s more, how can we train a model to extract useful linguistic features on its own? Just think about how hard it is for a human to become a linguist. So, to handle language processing, we need a few more twists on our DNN model.

To deal with the variable lengths of sentences, one can employ a method known as word embedding. Here, each word of a sentence is processed individually and mapped to a numeric vector of a fixed length. A good word embedding tends to put words with related meanings, such as “dolphin” and “SeaWorld,” close to one another in the vector space and words with distinct meanings far apart:

[4]

The embeddings are then fed to the DNN for classification.
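
A toy sketch of the idea, with made-up three-dimensional vectors standing in for real learned embeddings: related words end up with high cosine similarity, unrelated words with low similarity.

    import numpy as np

    # Toy embedding table; real embeddings are learned from large text corpora.
    embedding = {
        "dolphin":  np.array([0.9, 0.8, 0.1]),
        "seaworld": np.array([0.8, 0.9, 0.2]),
        "algebra":  np.array([0.1, 0.0, 0.9]),
    }

    def cosine_similarity(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine_similarity(embedding["dolphin"], embedding["seaworld"]))  # high: related meanings
    print(cosine_similarity(embedding["dolphin"], embedding["algebra"]))   # low: unrelated meanings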

But a word’s meaning and function also depend on its context in the sentence! How can we preserve the context when processing a sentence word by word? Instead of using only the current word as our DNN’s input, we also use the output of our DNN for the previous word as an additional input. The resulting structure is called a Recurrent Neural Network (RNN) because the previous output becomes part of the current input:

[5]
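
In code, the recurrence is just a function that takes the current word’s embedding together with the previous output. The sketch below uses random, untrained weights purely to show the structure.

    import numpy as np

    rng = np.random.default_rng(0)
    embed_dim, hidden_dim = 4, 3

    # Randomly initialized weights; in practice these are learned during training.
    W_input = rng.normal(size=(hidden_dim, embed_dim))
    W_hidden = rng.normal(size=(hidden_dim, hidden_dim))
    b = np.zeros(hidden_dim)

    def rnn_step(word_vector, previous_output):
        # The previous output is fed back in alongside the current word's embedding.
        return np.tanh(W_input @ word_vector + W_hidden @ previous_output + b)

    sentence = [rng.normal(size=embed_dim) for _ in range(5)]  # 5 word embeddings
    h = np.zeros(hidden_dim)
    for word_vector in sentence:
        h = rnn_step(word_vector, h)

    print("sentence summary vector:", h)  # e.g., the input to a sentiment classifier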

Now we know how to make our model “read” a sentence, but how do we format all the language-processing tasks as classification problems? It’s straightforward in Sentiment Analysis, where we use the output of an RNN for the last word as a summary of the sentence and add a simple classification model on top of the summary. The labels can be [“positive”, “neutral”, “negative”] or [“happy”, “angry”, “sad”, …]. In Machine Translation, we have an encoder RNN and a decoder RNN. The encoder reads and summarizes the sentence in language A; the decoder sequentially generates the translation word by word in language B. Given what you’ve learned so far, can you figure out how to use an RNN for Natural Language Understanding, Automatic Speech Recognition, and Text-To-Speech?

On this journey, we started with the basic classification model, the perceptron, and finished with the bleeding-edge classification models that can process human language. We have peeked into the brains of the AI assistants. Exciting research in language processing is happening as we speak, but there is still a long road ahead for the AI assistants to converse like humans. Language processing is, as mentioned before, not easy. At least next time you get frustrated with Siri, instead of yelling “WHY ARE YOU SO DUMB?” you can yell “YOU CLASSIFIED MY INTENTION WRONG! DO YOU NEED A BETTER EMBEDDING?”

[1]Programming a Perceptron in Python, 2013, Danilo Bargen.

[2]A deep learning tutorial: from perceptrons to deep networks, 2014, Ivan Vasilev

[3]Overview of artificial neural networks and its applications, 2017, Jagreet.

[4]Wonderful world of word embeddings: what are they and why are they needed?, 2017, Madrugado.

[5]Understanding LSTM networks, 2015, Colah.


Marshall is a PhD Candidate whose research focuses on artificial intelligence, machine learning, and sufficient dimension reduction. We asked a fellow Laber-Labs colleague to ask Marshall a probing question —

Q:  If you were running a company in Boston and had summer interns coming from out of town, what would be the best way to scam some money off of them? — James Gilman

A:

Call my company Ataristicians and ask for seed money.

Just kidding. On a more serious note, if I were a scammer, I would take advantage of the fact that in Boston, gifting weed is legal but selling it is not. The way the transaction works is that the buyer “accidentally” drops his money and then picks up the “gift bag” from the seller. The employees of my company would go to all the intern events, establish contacts with the interns, find the potential customers, and pose as discreet weed dealers. Then we would simply put garbage in the gift bag and take the interns’ “dropped” money. There is nothing illegal about gifting garbage, so those interns can’t get help from the police. And because they came from out of town, they are unlikely to have connections with local gangs. Now, if we wanted to make more money, we would record the price negotiations and the transactions, then blackmail the interns, threatening to email the recordings to their managers and ruin their careers.

This is Marshall’s second post! To learn more about his research, check out his first article here!

Variable Selection using LASSO

Wenhao Hu
Wenhao Hu, PhD Candidate

How do we identify a gene related to cancer? What factors are correlated with graduation rates across NCAA universities? To answer such questions, statisticians usually use a method called variable selection. Variable selection is a technique to identify significant factors related to the response, e.g., graduation rates. One of the most widely used variable selection methods is called the LASSO. The LASSO is a standard tool among quantitative researchers working across nearly all areas of science.

The LASSO can handle data with lots of factors, e.g., thousands of genes. In the era of big data, this is extremely useful. For example, suppose that there are 50 patients with cancer and another 50 healthy people, and scientists sequence each subject’s genome at ~100k positions. To identify the genes related to cancer, one needs to check all of those ~100k positions. Traditional regression methods fail in this case because they usually require the number of subjects to be larger than the number of variables. The LASSO avoids this problem by introducing regularization, an idea that has since been used by many other machine learning and deep learning algorithms. The LASSO is implemented in most statistical software environments: R has a package called glmnet, and SAS has a procedure called GLMSELECT.
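
For a flavor of what this looks like in practice, here is a minimal sketch using scikit-learn’s Lasso in Python (the post mentions glmnet in R; this is just an analogous toy example) on a problem with far more variables than subjects:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 100, 5000                      # far more variables than subjects
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:3] = [2.0, -1.5, 1.0]           # only the first 3 variables matter
    y = X @ beta + rng.normal(size=n)

    model = Lasso(alpha=0.2, max_iter=10000)   # alpha is the tuning parameter discussed below
    model.fit(X, y)
    selected = np.flatnonzero(model.coef_)
    print("number of selected variables:", selected.size)
    print("first few selected indices:", selected[:10])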

To achieve good performance with the LASSO, it is vital to choose an appropriate tuning parameter, which balances model complexity against model fit. Classical methods usually focus on selecting a single optimal tuning parameter that minimizes some criterion, e.g., AIC or BIC. However, researchers usually ignore the uncertainty in tuning parameter selection. Our research studies the distribution of the tuning parameter and thus provides scientists with information about the variability of model selection. Furthermore, we are developing an interactive R package for the LASSO. Using the package, scientists can dynamically see the selected model and the corresponding false selection rates. This allows them to explore the data set and to incorporate their own subject-matter knowledge into model selection.

Illustration of the interactive R package under development for variable selection.


Wenhao is a PhD Candidate whose research interests include variable selection and statistical learning. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?

A: My research gives me a better understanding of the theory of linear models, which is one of the most widely used statistical methods.

Q: What do you see are the biggest or most pressing challenges in your research area?

A: One of the biggest challenges is model interpretability and inference after model selection. Meanwhile, users usually have little freedom to incorporate their domain knowledge into the process of model selection.

Q: Finish this parable:
A Tiger is walking through the jungle whereupon he sees a python strangling a lemur. The Tiger asks the python, “Why must you kill in this way? It is slow and painful. We all must eat, but have you no compassion for your fellow animals?” To which the python replied, “Why must you kill with teeth and fangs? The gore and violence of it is scarring to all who are unfortunate enough to see it.” The tiger considered this for a moment and finally said, “Let us ask the Lemur. Lemur, which is your preferred way to go?”

A: The python relaxed his grip slightly so that the Lemur could speak: “I don’t know which way is better. But if I can choose, I prefer to be killed by the strongest animal. Is the Python or the Tiger stronger?” The Tiger answered confidently, “I am the strongest animal in the jungle. Python, you should leave the Lemur to me.” The Python was very unhappy and started to debate with the Tiger and the Lemur. After several minutes, the Python and the Tiger started fighting with each other. The Lemur escaped…

Following the money trail… To catch the bad guys

Yeng Saanchi
Yeng Saanchi, PhD Candidate

Imagine a world devoid of human exploitation, a world free from the fear of being trapped under the yoke of slavery. Sounds like a perfectly splendid world, if you ask me! Alas, this is a world that remains elusive because there are people who refuse to accept that owning their fellow human being is the epitome of evil. Modern-day slavery, also known as human trafficking, involves the abuse of power by certain individuals or groups to coerce the victims and exploit them through the use of threats or force. It is usually characterized by the giving or receipt of payments or benefits in order to assume control over a person. Forms of modern-day slavery include sex trafficking, labor trafficking, domestic servitude, forced marriage, bonded labor and child labor.

 

Human trafficking ranks as the third most profitable crime in the world and generates about $32 billion a year. But there is hope! The illicit trade of humans is a problem that is acknowledged by many governments and organisations in the world, and the battle against it has been ongoing for decades now. Although there have been many attempts to curb human trafficking activities, many have come to the realization that these criminal acts cannot be disrupted by conventional policing methods. The inefficacy of conventional methods has given rise to what is referred to as “follow the money” techniques. These methods target illicit assets, such as the financial assets of criminal organisations. It is claimed that targeting illicit assets demonstrates that crime does not pay, disrupts criminal networks and markets, and acts as a deterrent through reduced returns. While the confiscation systems developed have been partly effective, there is an ongoing discussion as to whether these current methods are truly achieving the objective of curbing these crimes.

The purpose of the current human trafficking project being undertaken by Laber Labs is to map out connections between people based on personal and geographic information, as well as their financial transactions. The aim is to devise a means of detecting, with reasonable accuracy, which transactions appear suspicious and are more likely to be associated with criminal activities. The long-term goal of the project is to assist law enforcement in apprehending the criminals, as well as stopping these crimes before they are committed, if possible.

The dark web plays an important role in exploring this method of apprehending criminals involved in illegal activities, primarily human trafficking. The dark web is the content on the world wide web that can only be accessed with specific software, configurations, or authorisation. Though the dark web is deemed a treasure trove of criminal activity, a study conducted by Terbium Labs showed that about 48% of the activity that takes place on the dark web is legal. Interesting, right? The dark web is actually patronised by a lot of people who simply wish for privacy or anonymity.

The task of “catching the bad guys” perpetrating these inhuman acts is no doubt a challenging one and hopefully the outcome of this project will provide an effective method for curbing this canker.

I will end this post with a quote by William Wilberforce: “If to be feelingly alive to the sufferings of my fellow creatures is to be a fanatic, I am one of the most incurable fanatics ever permitted to be at large.”


Yeng is a PhD Candidate whose research interests include predictive modeling and variable selection. We thought this posting was a great excuse to get to know a little more about her, so we asked her a few questions!

Q: What do you find most interesting/compelling about your research?

A: What I find most compelling about my research is the potential of saving lives by helping to put a stop to modern-day slavery.

Q: What do you see are the biggest or most pressing challenges in your research area?

A: The most pressing challenge at the moment is building a statistical model for age prediction using body poses in order to help in distinguishing between underage and adult victims.

Q: Give five tips for starting a successful doomsday cult! One tip should be about fostering the deviancy amplification spiral in your potential followers.

A: i) Run for student body president as a way of getting students on board. Could insert subtle messages about an imminent robot apocalypse in the numerous emails that the student president is allowed to send to students.

ii) Reach out to the fraternities and sororities as a way of garnering more support

iii) Put out a story online about a prominent figure in the academic community who is working on helping law enforcement to curtail human trafficking and yet has his own coffle of slaves in the guise of a lab, with proof and all(made-up or not). This should elicit moral outrage and help foster the deviancy amplification spiral somewhat, I think.

iv) Work on getting a couple of notable figures involved, probably someone from the academic community. For instance, convincing EBL that an apocalypse is imminent will be a step in convincing many. How to go about that, I’m not certain.

v) The least probable tactic will be to convince the most powerful man in the world that unlike global warming, a robot apocalypse is real and imminent.

Spatial Analysis of College Basketball

Nick Kapur
Nick Kapur, PhD Candidate

For a few weeks each March, the country is captivated by March Madness. Brackets are filled out, bets are placed, and occasionally prayers are answered. Professional sports are wonderful, but college sports are able to generate the purest form of passion; a passion derived from people’s lives being intricately and inexorably tied to the school they attend. At NC State, we are at the epicenter of college basketball. NC State plays in the best basketball conference in the country (the ACC), mere minutes from Duke and UNC, 2 of the greatest college basketball programs of all time. Competing constantly against the very best schools in the country requires a flexibility and adaptability often necessary in any “underdog” story. I believe that this requirement can lead to the perfect union of NC State basketball and an unlikely partner: the Department of Statistics.

Since the early 2000s, professional sports organizations have slowly embraced the use of statistics and analytics to help drive performance gains. The professional equivalent of college basketball, the National Basketball Association (NBA), has even gone so far as to install special cameras in each arena that record every player’s spatial location 24 times per second. College sports teams, due primarily to a lack of resources, have been far slower to embrace analytics. In college, there are no fancy cameras in place, leaving most studies to rely on simple statistics such as points, rebounds, and assists. Meanwhile, the most important offensive concept in the game, the ability to shoot the ball, is captured only by field goal percentage. Field goal percentage is a misleading statistic because it says nothing about where on the court shots originate. This shortcoming allows players who take easier or fewer shots to have higher field goal percentages, which is problematic because it doesn’t truly capture the best shooters; it simply captures the most opportunistic ones.

That is where the Statistics department can help make major strides. In a recent project, I created a web application that allows for easy tracking of college basketball shots. This does not give all players’ locations 24 times a second like the NBA system, but it does allow easy capture of shot location, a glaring missing piece of data for most college programs. In addition, after leading a team of undergraduates to collect data for 20 NC State games from the 2016–17 season, I performed a spatial analysis of the data. This analysis led to several interesting insights. First, the conventional wisdom that players tend to shoot more (or better) to the side of their dominant hand found no support in the data. Second, the belief that shooting 3-pointers is significantly better than shooting long 2-pointers was reaffirmed. And finally, likelihood comparisons could be drawn for each player. This is important, as it can be used to determine where certain players are likely to shoot, which is wonderful information for a coach trying to create a game plan.

Overall, this recent project was able to accomplish several interesting tasks in the world of college basketball that will hopefully allow the influence of statistical thinking to soon become an integral part of the game. If this union is embraced by NC State (as it has been thus far), our university can be a leader in driving the field of sports statistics to a higher level while at the same time winning in front of the entire country every March.


Nick is a PhD Candidate whose research interests include machine learning and statistical genetics. His current research focuses on pursuit-evasion and cooperative reinforcement learning. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?

A: I love the ability to work on problems from a diverse set of fields. The ability to do statistical research in sports and then take that research and apply it to national security, robotics, and medicine is incredibly appealing to me.

Q: What do you see are the biggest or most pressing challenges in your research area?

A: I wrote this blog post on sports statistics, so I will answer about that as a research area. I think the most challenging aspect is gaining the trust of the sports community. Like many communities, it tends to be insular and resistant to change. There are still many athletes, coaches, and administrators who do not see the value in listening to people who have not played their sport at a high level. This is slowly changing for the better; however, the area of sports statistics still needs many practitioners who intimately know the sport they are studying, can communicate effectively with the people within that sport’s community, and are open-minded to compromise.

Q: Explain the benefits of Scientology.

A: The founder of Scientology, L. Ron Hubbard, once said “For a Scientologist, the final test of any knowledge he has gained is, ‘did the data and the use of it in life actually improve conditions or didn’t it?’” The question posed in this quote is phenomenal. It is something a statistician should ask themselves every time they are working on a problem. While the statistical methodology of Scientologists may be less rigorous than that of trained statisticians, at least they are asking themselves the appropriate questions (something statisticians don’t always do).

Your Own Path

Robert Pehlman
Robert Pehlman, PhD Candidate

“Follow your own path”.  Useful advice when navigating the ups and downs of life.  It is also a useful primer for functional data!  Sometimes when researchers collect data, they are interested in a “path” or trajectory of some measurable quantity over time.  Imagine that you were able to know your heartbeat at any given moment during the day and visualize your heartbeat as a graph (trajectory), with time on the x-axis and beats per minute on the y-axis.  It slows down when you rest, and it speeds up when you play LaserCatsTM.

Your pulse is a continuous process, which means that if it jumps from 70 bpm to 150 bpm it must visit every value in between the two. Furthermore, it seems reasonable to assume that your heart rate in the current moment depends on your heart rate from a few minutes ago.  In the statistics world, we would say that your heart rate now is correlated with your heart rate in the past.  Just as in life, every individual’s heart rate must follow its own path, and no two trajectories are identical.  However, it may be reasonable to believe that there are underlying characteristics of your pulse that are similar to other humans.  They tend to be elevated in the day and lower at night.  The correlation between your heart rate 5 minutes ago and right now may be the product of a rhythm that is common to all humans.  If we make a few assumptions, we can fit a model that describes the average value of this process over time and how the process is correlated with itself in the past and future.
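
As a small illustration of this kind of functional thinking (a toy sketch, not Robert’s actual model), one can estimate a shared mean curve and the covariance across time points from a collection of simulated heart-rate trajectories:

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_times = 50, 24              # hourly heart-rate readings for 50 people
    t = np.arange(n_times)

    # Simulated trajectories: a shared daily rhythm plus smooth individual wiggles.
    shared_rhythm = 70 + 15 * np.sin(2 * np.pi * (t - 6) / 24)
    curves = shared_rhythm + rng.normal(scale=5, size=(n_people, 1)) \
             + np.cumsum(rng.normal(scale=1.5, size=(n_people, n_times)), axis=1)

    mean_curve = curves.mean(axis=0)                      # average heart rate at each time of day
    centered = curves - mean_curve
    covariance = centered.T @ centered / (n_people - 1)   # how the process correlates across times

    print("estimated mean at 6am and 6pm:", mean_curve[6], mean_curve[18])
    print("correlation between noon and 1pm:",
          covariance[12, 13] / np.sqrt(covariance[12, 12] * covariance[13, 13]))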

If you had a high quality model, you could even make predictions about a future heart rate given enough information about the process up until the current time.  Imagine that your goal was to maximize your heart rate at the end of an hour of working out, and you could choose whether to spend your next hour doing crossfit or jogging.  Either one of these options could create the target rate, but the decision about which one performs better might be dependent on prior information, like the path of your heart rate through the day.  The choice about whether to do crossfit or jogging could be tailored to an individual based on how their heart beat progression has looked up until the moment the decision needs to be made.

My research examines the underlying principles involved in modeling a continuous process and using it to predict a future outcome.  I am currently working to apply it to a practical problem — depression in humans.  Mental health workers use existing methodology to assign numerical scores to indicate severity of depression.  Like heart rate, this is a value that may be constantly changing over a day, a week, or a year.  We can tailor a treatment strategy using past information about the trajectory of someone’s depression score to find a medication that works best for that particular individual. I hope that the results of this research could be useful in improving the quality of life of people suffering from depression and that the underlying statistical tools could have broader applications in other fields.


Robert is a PhD Candidate whose research interests include computational statistics and machine learning. His current research focuses on functional Q-learning. We thought this posting was a great excuse to get to know a little more about him, so we asked him a few questions!

Q: What do you find most interesting/compelling about your research?
A: I enjoy the challenge inherent in solving a problem that doesn’t have a standard solution. A lot of elementary statistics involves applying tools that are already well understood, but research allows you to develop something rigorous that could become a standard practice in the future.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: Sequential decision making needs a lot of data, nonparametric problems tend to need a lot of data, so multi-stage, sequential problems that are estimated using nonparametric methods might need a lot^2 of data to be good.

Q: The poem Antigonish begins:
      As I was going up the stair
      I met a man who wasn’t there!
      He wasn’t there again today,
      Oh how I wish he’d go away!

Write a 200 word sitcom pitch for a family comedy based on this snippet.
A: Theodore and His Imaginary Frenemy

4th grader Theodore Giffel had always been an outsider and had a hard time connecting with others. He always wanted a friend who would never leave his side, but after he fell down the stairs and got a concussion, he woke up with more friend than he bargained for. Ricky Morton, his imaginary friend, began causing trouble in Theo’s life as soon as he entered it. Theo would be blamed for the havoc caused by Ricky, who seems to be motivated only by mayhem. Despite this, Ricky stuck by Theo all the time and always kept him busy, even when Theo wished he wouldn’t. Coming this fall on Laber Labs TV!

Reinforcement Learning in Education

 

Lin Dong, PhD Candidate

 

I always prefer video to a pile of reading materials, but I am not sure which one helps me learn better. — Lin Dong

 

“Reinforcement learning” has become a buzzword in the machine learning and artificial intelligence communities. It has wide applications, from winning video games to automating cars.

If you are not familiar with reinforcement learning, here is what it is. First of all, it is a sub-area of machine learning. In supervised learning, the task is to learn to predict or classify something from a training data set. For example, if you want to decide whether an image shows an apple, you receive some pictures of apples and get an idea of what an apple looks like. In reinforcement learning, the task is not simply to predict or classify but to learn what to do to maximize our reward in a complex dynamic system. In this setting, we don’t have a nicely labelled training set to teach us. So what should we do? Well, we can learn through trial-and-error interactions with the system. This is like learning to make an apple pie: try different kinds of apples and various amounts of sugar. You may puke a few times, but eventually you will learn to make a perfect-tasting apple pie.

You may wonder how this is related to education. Think of students taking a course to learn some skill. The complex system is the interaction between the instructor and the students, as well as how the students absorb the material. The reward of this system is how much the students actually learn.

Nowadays, the common practice in education is a one-size-fits-all method. That is, every student in a course is treated identically across all teaching activities – the same content, the same way of teaching, and the same tests. However, some students may learn better from a video illustration, whereas others may learn more from a well-organized handout. Some students may perform better on a project, while others are good at exams. Therefore, a better strategy would be to develop a personalized educational scheme that takes into account the inherent differences between students and that can change dynamically according to feedback from the student.

The study process can be formally modeled as a Markov decision process. Each student entering the course has his or her own initial status, which may include the student’s own characteristics and prior proficiency level. The process starts with an assessment (A), say a quiz. The assessment is important because instructors normally cannot read minds; they need a quiz to estimate how much a student really understands the content. The result of the quiz is observed (X) and serves as an estimate of the student’s true proficiency level. Then the instructor gives an intervention (I) by choosing one of the teaching resources for this student. This intervention leads the student to a new proficiency level. The data triples (A, X, I) accumulate until the student reaches the end of the course. What we care about most is the final assessment result, which measures the student’s proficiency level at the end of the course.

This process differs slightly from an ordinary Markov decision process in that there are two distinct decisions to make: how to assess the student’s understanding and how to select the teaching resources for the student. Therefore, our goal is to maximize the final outcome by finding the optimal policy for both the assessment and the instructor’s intervention.

The next step is how to solve for the optimal policy using the accumulated data of students. We will use approximate dynamic programming, a tool from reinforcement learning, to learn the optimal teaching plan. Check out my next post for details!
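
To give a flavor of the trial-and-error learning that approximate dynamic programming formalizes, here is a toy tabular Q-learning sketch on a made-up version of the education MDP; the states, actions, dynamics, and rewards are all invented for illustration and are not part of our actual method.

    import random
    from collections import defaultdict

    # Toy version of the education MDP: states are coarse proficiency levels,
    # actions are which teaching resource to assign next.
    STATES = ["low", "medium", "high"]
    ACTIONS = ["video", "handout", "project"]

    def simulate_student(state, action):
        # Made-up dynamics: some resources help more at some proficiency levels.
        improve_prob = {("low", "video"): 0.6, ("low", "handout"): 0.3,
                        ("medium", "handout"): 0.6, ("medium", "project"): 0.4,
                        ("high", "project"): 0.5}.get((state, action), 0.2)
        if random.random() < improve_prob and state != "high":
            state = STATES[STATES.index(state) + 1]
        reward = {"low": 0.0, "medium": 0.5, "high": 1.0}[state]  # proxy for final proficiency
        return state, reward

    q = defaultdict(float)
    alpha, gamma, epsilon = 0.1, 0.9, 0.2
    for episode in range(5000):
        state = "low"
        for step in range(5):                          # a 5-step course
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward = simulate_student(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state

    for s in STATES:
        print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))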


Lin is a PhD Candidate whose research interests include dynamic treatment regimes, reinforcement learning, and survival analysis. Her current research focuses on shared decision making in resource allocation problems. We thought this posting was a great excuse to get to know a little more about her, so we asked her a few questions!

Q: What do you find most interesting/compelling about your research?
A: I can always simulate fake subjects and manipulate their imaginary behaviors. In a larger view, I may change the world of education.

Q: What do you see are the biggest or most pressing challenges in your research area?
A: Inference is hard. That’s why the world needs statisticians.

Q:  Explain, as you might to a child, that just because mommy and daddy are splitting up it doesn’t mean they love him any less.  This is *not* his fault, but, if we’re being honest, he didn’t help.

A: The poor kid’s name is Snow.

“Snow, come here!”

Snow comes to Daddy.

“Kid, here is something you need to know. You know that daddy and mommy both fear cold weather right? Well, two people that both hate cold cannot live together, because they make each other colder. Now it is winter and snowy. You know, it is cold now, but it is not because of the snow outside. Snow just does not help warm up the weather. So daddy and mommy have to split for a while.”