100 Burritos in San Diego: 10-dimensional rating system

WEBSITE MOVED.

See this post at https://srcole.github.io/100burritos/

Original article:

See the details of all >100 burrito reviews

Contribute to the burrito review data set by filling out this form

IPython Notebooks for this analysis

Interactive map

Summary

We have developed a 10-dimensional system for rating the burritos in San Diego. The goal of this project is threefold:

  1. Identify the best and worst burritos in San Diego to share this information with others.
  2. Characterize the variance in burrito quality across the county.
  3. Generate models for what makes a burrito great and investigate correlations among its dimensions.

At this time, 30 reviewers have visited 31 taco shops and critiqued 104 burritos. So far, a general consensus has identified The Taco Stand in downtown La Jolla as having the best California burrito, but there are many more to try. Here, the average burrito costs about $7 and is about 850mL in volume. Further, we explore correlations between burrito dimensions, such as the quality of the meat and nonmeat fillings, and identify a novel correlation between tortillas and Yelp ratings.

 

Motivation

Mexican cuisine is often the best food option in southern California. And the burrito is the hallmark of delicious taco shop food: tasty, cheap, and filling. Though these “majestic cylinders” are consumed at a rate faster than one per second across San Diego county [1], they have been dramatically understudied [2]. This lack of funding to support public burrito knowledge has led millions of people to eat a burrito and subsequently feel dissatisfied, a tragedy that can be avoided. Even the most experienced burrito eaters have experienced the following disappointments:

  • “I just took a bite entirely of sour cream”
  • “This carne asada has the texture of rubber”
  • “THE TEMPERATURE OF THE EGGS IN THIS BURRITO IS TOO DAMN HIGH”
  • “I am not looking forward to the leftover burrito in my fridge”
  • “Where is the meat in this burrito?”
  • “I need a fork”

For this reason, an effort was launched to critique burritos across the county and make this data open to the lay burrito consumer. Armed with an ever-growing database of over 100 burritos from Chula Vista to San Marcos (but mainly around UCSD), consumers can make better-educated decisions about where to get their next fix. We hope this active feedback into consumption choices will also push burrito chefs to continuously improve their methods. This study was predominantly a joint effort between the Neurosciences graduate program at UC San Diego and the amateur beach volleyball group that plays at Muir courts at 5pm. It is an extension of the single-dimensional burrito analysis published last year.

 

Previous work: FiveThirtyEight’s best burrito in America

Anna Maria Barry-Jester, a reporter with the website FiveThirtyEight, traveled across America to identify the best burrito in the country. Their process, described here, was to data mine Yelp in order to narrow the pool of >67,000 restaurants down to just 64. Then, Anna completed a “burrito bracket” in which groups of four restaurants faced off against each other in knockout fashion for 3 rounds until only one taqueria remained: La Taqueria in San Francisco. Their articles are worth reading, as their efforts were much more serious, their qualifications much more qualified, and their methods much better thought-out than my own. In addition to naming a best burrito in America, they also offer interesting insight on crowdsourced reviewing systems and biases.

 

The 10-dimensional burrito

Contrary to popular belief, burritos do not merely exist in 3 dimensions. They transcend the physical limitations of space. From polling several San Diegans, we’ve established the 10 core dimensions of the San Diego burrito.

  1. Volume – “size matters,” “bigger is better,” or whatever your favorite innuendo is; it applies, because there’s nothing more disconcerting than ordering a burrito and not being full.
  2. Tortilla quality
  3. Temperature – the Goldilocks zone
  4. Meat quality
  5. Non-meat filling quality
  6. Meat:Filling – the ratio of meat to non-meat fillings. Perhaps the golden ratio: 1.6180339887…
  7. Uniformity – Bites full of sour cream and cheese with no meat are disappointing.
  8. Salsa quality – and variety!
  9. Flavor synergy – “That magical aspect a great burrito has, making everything come together like it is a gift from the skies” – A wise Dutchman
  10. Wrap integrity – you ordered a burrito, not a burrito bowl.

All of these measures (except for Volume) are rated on a scale from 0 to 5, 0 being terrible, and 5 being optimal. In the future, Meat:Filling and Temperature measures may stray from this subjective scale in order to better quantify these two valuable burrito characteristics. Additionally, acquisition of a portable scale will allow collection of mass. Cost (in USD) and hunger level (on the same 0-5 scale) are measured as potential control factors. In addition to these 10 core dimensions, we also collect two summary statistics:

  1. Overall rating – 0 to 5 stars
  2. Recommendation – Yes/No. If a friend asked you about that burrito with the intent of purchasing one, would you recommend it?
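
To make the rating scheme concrete, a single review can be pictured as a record like the following. This is a hypothetical entry with illustrative field names, not the exact column headers in the spreadsheet:

    # One hypothetical burrito review. Core dimensions are rated 0-5;
    # Volume is in liters, Cost in USD. Field names are illustrative only.
    review = {
        'Location': 'The Taco Stand', 'Burrito': 'California',
        'Cost': 7.49, 'Hunger': 3.5,             # control factors
        'Volume': 0.85,                          # estimated, liters
        'Tortilla': 4.0, 'Temp': 4.5, 'Meat': 5.0, 'Fillings': 4.5,
        'Meat:Filling': 4.0, 'Uniformity': 4.0, 'Salsa': 4.0,
        'Synergy': 5.0, 'Wrap': 4.5,
        'overall': 4.5, 'Rec': True,             # summary statistics
    }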

 

Where can I get the best burrito?

This controversial question is argued by many who hold very strong opinions. However, I believe that there is no single best burrito for a few reasons:

  1. Each burrito at each taco shop varies significantly from sample to sample. Each chef has their own burrito assembly techniques and certainly cannot construct every burrito in exactly the same way.
  2. Each person processes a given burrito in different ways, from their tongue to their higher-level cortices. Also, the optimal burrito for consumption varies across time for a single individual (e.g. a breakfast burrito may be optimal in the morning, and a carne asada burrito for dinner).

Therefore, the best we can do is to see which burritos are consistently rated the best by multiple reviewers. We want to identify the burritos that will be maximally enjoyed by the greatest number of people. The consensus at this time is that the “best” burrito in the area is the California burrito from The Taco Stand in downtown La Jolla. The quality of their carne asada is unmatched and worth the extra cost and lack of seating.

In the future, when more data is collected (requiring multiple burritos from several establishments), we can identify the taco shop with the best meat, best tortilla, most optimal Meat:Filling, etc. Currently, we can share the rankings of the 3 taco shops from which we have rated at least 9 burritos. We find that The Taco Stand is superior in most categories (overall average: 4.1/5) but fares worst in terms of cost, volume, and temperature. Rigoberto’s Taco Shop, on Miramar Road, seems to be the best-value burrito, with the lowest cost, the greatest volume and Meat:Filling, and still an overall rating of 3.8/5.


Table 1. Ranking of the three most-rated taco shops in each burrito dimension. ‘1.5’ indicates a tie.

 

The MNIST (Mexican National Institute for Sustenance Taste) burrito database

As with the MNIST handwritten digit database, all raw data is available in the Google spreadsheet here. The subsequent analyses performed can be found in my GitHub repo for this blog, organized in IPython Notebooks here. As of May 19, 2016, the review system outlined above has been applied by 30 people to rate 104 burritos at 31 unique restaurants. Only 9 of those 31 (29%) taco shops provided free chips. The California burrito was the most commonly rated variety, mainly because it is one of my favorites and a standard in San Diego. However, multiple samples were taken from other common varieties as well as each restaurant’s specialties.
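
For anyone who would rather poke at the data than read the notebooks, the spreadsheet can be pulled straight into pandas. A minimal sketch (the sheet ID and the column names here are placeholders, not the real ones):

    import pandas as pd

    # Placeholder export URL: substitute the real sheet ID from the link above
    url = 'https://docs.google.com/spreadsheets/d/<SHEET_ID>/export?format=csv'
    df = pd.read_csv(url)

    # How many burritos, where, and of what variety?
    print(len(df), 'burritos at', df['Location'].nunique(), 'restaurants')
    print(df['Burrito'].value_counts().head())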

[Figure: distribution of burrito varieties rated]

While burritos are known to be inexpensive, there is significant variance across taco shops. The average burrito was about $7 before tax, but this value ranged from $5 to $10.

[Figure: distribution of burrito costs]

Volume was estimated using a flexible tape measure (Wal-Mart, sewing section) trimmed to a length of 30cm for better portability. First, before any part of the burrito was consumed, the tape measure was extended in front of the burrito, and the length of the burrito-proper (portion of the burrito with approximately the same circumference as the center) was measured with a precision of 5mm. Second, the tape measure was wrapped around the center of the burrito to record the circumference. An estimate of burrito volume was then calculated using these two measures. The average burrito occupied approximately 0.85 liters but varied across the distribution shown below.

[Figure: distribution of estimated burrito volumes]
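
In other words, assuming the burrito-proper is modeled as a cylinder with circumference C and length L, the radius is C/(2π), so V = πr²L = C²L/(4π). A quick sketch of the calculation:

    import numpy as np

    def burrito_volume_liters(length_cm, circumference_cm):
        """Model the burrito-proper as a cylinder: V = pi * r^2 * L,
        where the radius r is recovered from the circumference C."""
        radius_cm = circumference_cm / (2 * np.pi)
        volume_cm3 = np.pi * radius_cm**2 * length_cm
        return volume_cm3 / 1000  # 1000 cm^3 per liter

    # e.g., a 20 cm burrito with a 23 cm circumference:
    print(burrito_volume_liters(20, 23))  # ~0.84 L, near the 0.85 L average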

Linear models to predict overall burrito quality

Of the above dimensions, which are the most important to the overall rating of a burrito? Before attempting to answer this, it is important to note that the metrics are not independent of one another; in fact, there are considerable correlations between numerous dimensions, as is clear in the correlation matrix below. The overall rating correlates positively with almost all measures, but their interdependence makes it difficult to disentangle how each one contributes to the overall rating. This limitation may be rooted in a few possibilities:

  1. Physical limitations of the human gustatory system and subsequent neural processing
  2. Restaurants that perform well in one burrito metric are more likely to perform well in other metrics.
  3. Some metrics will inherently be dependent, such as filling quality and flavor synergy.

[Figure: correlation matrix between burrito dimensions]

Despite the correlations between our burrito features, we fit a general linear model predicting overall burrito rating from 8 of the fundamental burrito dimensions, with Cost and Hunger Level as controlling factors. “Flavor synergy” is an ambiguous term that may be difficult to dissociate from one’s overall rating, so it was removed as a predictor. Additionally, we do not yet have sufficient data on burrito size to include it in the model. The model coefficients for each of the 10 predictors are plotted below. Overall, the 10 features explained 71% of the variance in overall rating.

[Figure: linear model coefficients for each predictor of overall rating]
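
For readers who want to reproduce this, a minimal sketch of the fit with statsmodels, continuing from the DataFrame loaded above (the column names are illustrative; the linked notebooks contain the actual analysis):

    import statsmodels.api as sm

    # 8 burrito dimensions plus Cost and Hunger as controls; Synergy and
    # Volume are excluded, as described above. Column names are illustrative.
    predictors = ['Cost', 'Hunger', 'Tortilla', 'Temp', 'Meat', 'Fillings',
                  'Meat:Filling', 'Uniformity', 'Salsa', 'Wrap']
    data = df[predictors + ['overall']].dropna()

    X = sm.add_constant(data[predictors])   # add an intercept term
    fit = sm.OLS(data['overall'], X).fit()
    print(fit.summary())                    # coefficients and p-values
    print(fit.rsquared)                     # variance explained (~0.71 here)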

The four significant predictors were relatively unsurprising: Non-meat filling, Meat, Salsa, and Meat:Filling. However, what is more interesting is the relative weighting of these features. While I am known to claim that meat quality is the most important aspect of a burrito, the non-meat fillings are actually given more weight in the model. The strong contribution of Salsa in the linear model supports the idea that even if a burrito is lacking in some aspects, a fine salsa can really boost the quality of the meal.

Also interesting is what is not a reliable contributor to overall burrito rating. First, a more expensive burrito does not equate to a tastier burrito. Hunger level is also not a significant predictor of overall rating, contrary to the idea that a burrito will taste better if the consumer is more hungry. Even prior to accounting for other factors, hunger was only weakly positively correlated with the overall rating (Pearson r² ≈ 0.04, p ≈ 0.04). This may speak to the quality of these reviewers’ training: they are not fooled by their physiological state and remain as objective as possible in their burrito ratings.

While the ratings for Fillings, Meat, Salsa, and Tortilla are heavily reliant on the quality of the ingredients, the other measures are more sensitive to the skilled techniques of the burrito chef. It may be counterintuitive that ingredient uniformity, temperature, and wrap integrity were not significant predictors of the overall rating. Naively, one could conclude from this that all that’s important in a burrito is the quality of its ingredients, not the care with which it was prepared. An alternative is that these indications of poor technique are more common at places that use poor ingredients. Another interpretation is that poor preparation (e.g. too low a temperature) can have a negative impact on the subjective ratings of ingredient quality. However, Meat:Filling was a significant predictor of overall burrito rating. Therefore, when burrito artists are making their masterpieces, they should pay close attention to this balance and avoid skimping on the meat.

Is there a recipe for a great burrito? A second linear model was designed to predict overall rating, this time based on the ingredients in each burrito. In order to be included in the model, an ingredient had to appear in at least 10 burritos. Ten ingredients met this qualification: beef, pork, pico de gallo, guacamole, cheese, potatoes, sour cream, rice, beans, and sauce. Though these features had binary values, a linear model was a reasonable first pass for regression analysis [3]. However, this model explained only 12% of the variance in the overall rating, using the same number of features as the previous linear model. In fact, 27% of models trained on the same number of features filled with random values explained more variance. From this, we conclude that the choice of ingredients for a burrito is not critical; it is how the ingredients are prepared that matters.
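
The comparison against chance can be sketched as follows: fit the same model many times with random binary features and ask how often it beats the real ingredients (again with illustrative column names, continuing from the DataFrame above):

    import numpy as np
    import statsmodels.api as sm

    ingredients = ['Beef', 'Pork', 'Pico', 'Guac', 'Cheese', 'Potatoes',
                   'Sour cream', 'Rice', 'Beans', 'Sauce']
    data = df[ingredients + ['overall']].dropna()
    y = data['overall']
    r2_real = sm.OLS(y, sm.add_constant(data[ingredients])).fit().rsquared

    # Null models: the same number of features, but with random binary values
    rng = np.random.default_rng(0)
    r2_null = []
    for _ in range(1000):
        X_rand = sm.add_constant(rng.integers(0, 2, (len(y), len(ingredients))))
        r2_null.append(sm.OLS(y, X_rand).fit().rsquared)
    print(np.mean(np.array(r2_null) >= r2_real))  # ~0.27 reported above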

 

Correlations: Difficult to interpret and possibly spurious

It’s hard to resist looking for correlations after collecting a large multivariate dataset. After all, for every 20 tests I run, at least one will make me stop and think.

Analyzing the 29 burritos for which we have a size estimate, volume is weakly negatively correlated with cost (Pearson r = -0.38, p = 0.04). That is, when ordering a fancy burrito (e.g. the lobster burrito from El Zarape), don’t expect to be full. However, it is hard to believe that this would hold at both extremes: extremely cheap burritos (<$5) probably will not be extremely large, and a “Monster burrito” can run >$10. Though volume is not a significant predictor of overall burrito rating, we’ll keep an eye on this metric in the future to see how size relates to other burrito dimensions, linearly or nonlinearly.

[Figure: burrito volume vs. cost]
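
A sketch of that test with scipy, again assuming the illustrative column names used above:

    from scipy.stats import pearsonr

    sub = df[['Volume', 'Cost']].dropna()   # burritos with a size estimate
    r, p = pearsonr(sub['Volume'], sub['Cost'])
    print(f'r = {r:.2f}, p = {p:.3f}')      # reported: r = -0.38, p = 0.04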

One of the strongest correlations between burrito dimensions was between Meat and Filling. There are a number of possible interpretations of this, including

  1. A restaurant with good meat is more likely to have good filling (mildly interesting)
  2. The meat and fillings interact to enhance or detract from one another’s flavor (most interesting)
  3. It is difficult for a reviewer to rate these two dimensions independently (least interesting, most likely)
  4. A combination of these and other explanations

In order to address hypothesis (1), we performed a case study at my favorite burrito shack, The Taco Stand in downtown La Jolla. Analyzing only California burritos at The Taco Stand, we still find a positive correlation between Meat and Filling (Spearman r = 0.69, p = 0.04, N = 9). The effect was similar when including all burritos rated at The Taco Stand (Spearman r = 0.65, p = 0.007). This suggests that the Meat and Filling correlation is not simply due to hypothesis (1).
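
This within-restaurant test is a simple subset-then-correlate, sketched below with the same illustrative column names:

    from scipy.stats import spearmanr

    # Hold the restaurant (and optionally the variety) fixed
    mask = (df['Location'] == 'Taco Stand') & (df['Burrito'] == 'California')
    rho, p = spearmanr(df.loc[mask, 'Meat'], df.loc[mask, 'Fillings'])
    print(f'rho = {rho:.2f}, p = {p:.3f}, N = {mask.sum()}')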

Testing hypotheses (2) and (3) will require very specialized data sets. For example, reviewing many carne asada and carnitas burritos from a given restaurant would hold the fillings (guac and pico) constant while changing only the meat. Ideally, one meat would be great at this restaurant and the other would be terrible. Then, we could test whether the Filling ratings differ between these two groups (good meat and poor meat). Here the null result would be the interesting one: if there is no difference in Filling rating between the burritos with good meat and those with bad meat, we would reject hypotheses (2) and (3). However, this conclusion will require high power (and so a large sample size) to support.

 

Reviewer ratings vs. Yelp ratings

Lastly, how does this data set relate to aggregate ratings from users on Google and Yelp, both out of 5 stars? While Google and Yelp ratings were highly correlated with each other (Pearson r = 0.66), they were correlated to a lesser extent with the overall burrito rating (Yelp: Pearson r = 0.34; Google: r = 0.27). This makes sense because we are only rating a subset of the menu at these taco shops. To my surprise, the Tortilla rating was a better predictor than the overall burrito rating when these two dimensions were used to predict Yelp rating in a linear model (Tortilla: GLM coefficient = 0.39 +/- 0.13, Z = 2.9, p = 0.003; Overall: GLM coefficient = -0.12 +/- 0.14, Z = -0.8, p = 0.38).

[Figure: Yelp rating vs. Tortilla rating]
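
A sketch of that two-predictor model, aggregating reviews to the restaurant level first (the 'Yelp' column is an assumption about how the aggregate ratings were stored):

    import statsmodels.api as sm

    # Average our ratings per restaurant, then regress Yelp on both predictors
    shops = df.groupby('Location')[['Yelp', 'Tortilla', 'overall']].mean().dropna()
    X = sm.add_constant(shops[['Tortilla', 'overall']])
    fit = sm.GLM(shops['Yelp'], X, family=sm.families.Gaussian()).fit()
    print(fit.summary())  # coefficients with z-statistics, as quoted above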

 

Future plans

We are certainly not satisfied with our sparse sampling of the burritos around San Diego county. Assembling a reliable must-try list for a burrito enthusiast will require visiting many new taco shops and increasing our sampling at the current ones. The burritosofsandiego Tumblr will help here, and perhaps we will integrate some of its data into future analysis. While doing this, we hope to continue to characterize the spectrum of burritos found across San Diego.

While the current analysis was limited to linear models, future analysis will investigate nonlinear effects across the burrito dimensions. For example, is it possible for a burrito to recover from a Meat quality rating of 1/5 to achieve an above-average overall rating? As the data set grows, nonlinear techniques and machine learning approaches can be utilized to extract more insight on the burritos across San Diego. Furthermore, case studies of specific burritos reviewed by many individuals will allow for more controlled analysis.

In writing this, I welcome and hope to receive suggestions on data collection improvements and analytics ideas. Additionally, by opening up this data set, I encourage anyone who is interested to perform their own analysis and share their conclusions! Most importantly, I hope that readers will contribute to this data set by filling out this form.

 

Acknowledgements

Thank you to everyone who rated a burrito and provided feedback to improve this system. I am especially grateful to the multiple-burrito raters including Sage Aronson (4 burritos), Ricardo Serrano (6 burritos), and Emily Cheng (21 burritos). And thank you to the National Science Foundation Graduate Research Fellowship program for providing a stipend with sufficient disposable income to eat a lot of burritos.

 

Footnotes

[1] Estimate: 3.2 million people in San Diego county eating an average of 1 burrito per month comes to roughly 1.2 burritos per second.

[2] A Google Scholar search for “california burrito” yields 15 results, all of which are either inaccessible or irrelevant.

[3] http://stats.stackexchange.com/questions/30820/how-do-you-predict-a-continuous-value-from-many-booleans-a-continuous-value . I may look at other options in the future.

Phase-amplitude coupling: hidden in noise

WEBSITE MOVED

See post here:

https://srcole.github.io/2016/03/06/pac/

Old post:

See IPython Notebook accompanying this brief post.

In my admittedly short experience analyzing neural data, I have learned to appreciate a general principle: whatever signal I am looking for, it is probably masked in a lot of noise within the raw data. This is undoubtedly true for many cases of data analysis. Here, I’m going to show a specific example of this related to my research.

Our lab studies oscillations in neural recordings, and one oscillatory phenomenon that has been identified across the brain is phase-amplitude coupling (PAC). Briefly, PAC is a statistical correlation between the phase of one oscillation and the amplitude of an oscillation of higher frequency. The occurrence of PAC has been associated with improved multi-item working memory, learning, attention, and decision making, with some evidence of a directional relationship in which oscillations in one region drive oscillations downstream.

[Figure: simulated signal with phase-amplitude coupling]

The simulated data above show a high frequency oscillation that has increased amplitude at the peak of a lower frequency oscillation; thus, it has high PAC. Together with Voytek lab postdoc Erik Peterson, I wrote a Python package for quantifying PAC. Using this package, we can visualize the coupling between these two oscillators with a comodulogram, below, which shows that PAC is present between the phase of a 15-40 Hz carrier oscillation and the amplitude of frequencies above 100 Hz.

[Figure: comodulogram of the simulated signal]
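
For the gist of the computation without opening the package, one common PAC estimate (the mean vector length of Canolty et al., 2006) can be sketched in a few lines. This is a simplified illustration, not the package's exact implementation:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def pac_mvl(x, fs, f_phase=(15, 40), f_amp=(100, 200)):
        """Mean-vector-length PAC: average the fast oscillation's amplitude
        as a vector over the slow oscillation's phase; ~0 means no coupling."""
        def bandpass(sig, f_range):
            b, a = butter(3, [f / (fs / 2) for f in f_range], btype='band')
            return filtfilt(b, a, sig)

        phase = np.angle(hilbert(bandpass(x, f_phase)))  # slow-oscillation phase
        amp = np.abs(hilbert(bandpass(x, f_amp)))        # fast-amplitude envelope
        return np.abs(np.mean(amp * np.exp(1j * phase)))

A comodulogram then just evaluates an estimate like this over a grid of (phase frequency, amplitude frequency) pairs.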

However, electrophysiological recordings will rarely show as strong a statistical dependence as our contrived example. In reality, one friend was searching for PAC in this voltage signal:

[Figure: raw voltage recording with prominent artifacts]

Given the clear artifacts in the time series, he was not detecting PAC in his comodulogram:

[Figure: comodulogram of the unprocessed signal, with no clear PAC]

In this IPython Notebook, we show how to process this signal in order to extract the underlying PAC:

[Figure: comodulogram after preprocessing, revealing the underlying PAC]

 

Later this year we will be publishing a paper with novel insight on PAC in the motor cortex of patients with Parkinson’s Disease. So stay tuned!

Empirical mode decomposition (EMD) tutorial

WEBSITE MOVED

See post here:

https://srcole.github.io/2016/01/18/emd/

Old post:

Click here for the IPython Notebook EMD tutorial (executable with binder!)

A while back, I came across a J Neuro Methods paper which outlined an alternative methodology for phase-amplitude coupling (PAC) estimation in neural signals. PAC is a metric I have become intimately familiar with since the start of my PhD, as I have even written a Python package for it, which has since been adopted by MNE, a general library for analysis of magnetoencephalography (MEG) and electroencephalography (EEG) data.

The general point of this paper is that our current standard method for decomposing signals for PAC analysis assumes that biological rhythms are stationary when, in reality, neural oscillations are frequency-modulated over time. The authors suggest using empirical mode decomposition (EMD) to extract two coupled oscillators from a signal, as opposed to the typical bandpass-filter approach.

This raised the question: how does EMD work? The resources currently online weren’t the best, so I’ve made a tutorial that outlines the algorithm and provides the necessary code to apply it to a piece of data.
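
As a preview, the core of EMD (the “sifting” loop that pulls out one intrinsic mode function) is surprisingly compact. A bare-bones sketch, ignoring the stopping criteria and edge effects that the tutorial discusses:

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def sift_one_imf(x, t, n_sifts=10):
        """Extract one intrinsic mode function: repeatedly subtract the
        mean of the cubic-spline envelopes through the maxima and minima."""
        h = x.copy()
        for _ in range(n_sifts):
            maxima = argrelextrema(h, np.greater)[0]
            minima = argrelextrema(h, np.less)[0]
            if len(maxima) < 4 or len(minima) < 4:
                break  # too few extrema to form stable envelopes
            upper = CubicSpline(t[maxima], h[maxima])(t)  # upper envelope
            lower = CubicSpline(t[minima], h[minima])(t)  # lower envelope
            h = h - (upper + lower) / 2                   # remove the local mean
        return h

    # Full EMD: subtract each IMF and sift the residual for the next one,
    # e.g. imf1 = sift_one_imf(x, t); imf2 = sift_one_imf(x - imf1, t); ...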

Is this alternative method always an improvement? In a future notebook, I’ll elaborate on just how well I think EMD functions on electrophysiological data.

Our forgotten memories are still in our heads

So what if I told you: mad scientists have unlocked the secret to probing lost memories in our brains. Well, I hope you got excited, because I’m practicing writing clickbait titles. But that is an extreme extension of a paper published earlier this year in Science out of the Tonegawa lab.

Let’s step back a moment to appreciate some amazing research done in the past couple years in the realm of false memories. It was a popular science story a few years ago when neuroscientists first succeeded in creating a “false memory” in a mouse (Ramirez et al, Science, 2013). Essentially, the experimenters made the mouse think that a certain cage was scary without the mouse ever having a frightening experience in that cage. Usually, we would ingrain this memory using a classical conditioning paradigm, pairing a mouse being in a certain cage (neutral, conditioned stimulus) with the experience of a foot shock (unconditioned stimulus). Then, when the mouse enters the cage on a later day, it will freeze, because apparently that’s what mice do when they’re scared. Instead, researchers have recently started performing classical conditioning by pairing stimulation of the neurons that encode the memory of a certain cage (neutral, conditioned stimulus) with a foot shock (unconditioned stimulus) while the mouse was in a different cage. This resulted in the mice being afraid of the cage encoded by the stimulated neurons, even though they were never actually shocked in that cage. The neurons that the experimenters stimulated were in the dentate gyrus of the hippocampus, a brain region heavily associated with memories of contexts. To make this idea more concrete, I massacred one of their figures below in an attempt at a visual aid.


TOP. First, the mouse was in cage A (red triangle). Second, the mouse was in cage B in which it received foot shocks while its neurons that encode cage A were stimulated. Afterward, these mice were exposed to cage A again as well as a novel cage (cage C). BOTTOM. After this classical conditioning, mice were specifically afraid of cage A (blue). The gray bars are from control mice. (Ramirez et al, Science 2013, Figure 2F)

 

Now that we know that we can stimulate neurons associated with a memory (“engram cells”) in order to reactivate that memory, there are a lot of cool experiments we can run. In their paper, Tomás Ryan and his labmates used a drug, anisomycin, to give mice amnesic symptoms, i.e., to make them forget their recently made memories. In this case, the researchers labelled the cells corresponding to the cage in which fear conditioning was performed. They found that the mice given this amnesic drug were less afraid (compared to non-amnesic mice) when they returned to the cage in which their feet had previously been shocked. Therefore, the amnesic mice had essentially forgotten that this cage is scary.

However, something unexpected happened when they stimulated the cells in the amnesic mice’s dentate gyrus that had been active when the fear memory was initially encoded (see figure below). When these cells were stimulated, the fear response revived, and the amnesic mice exhibited freezing. In other words, the experimenters brought back a forgotten memory to these mice by stimulating their neurons directly. This was surprising because these cells were critical for a memory that had been shown to be forgotten. We might therefore have expected that stimulating these cells would no longer evoke a fear memory. But it does.


TOP. First, the mouse was in cage A (blue). Second, the mouse was in cage B (red) in which it received foot shocks while its activated neurons were labelled. One group of mice received an amnesiac drug after this classical conditioning. Afterward, these mice were 1) exposed to cage B again, and 2) their neurons that encode cage B were stimulated while in cage A. BOTTOM. The amnesic mice were not afraid of cage B after training. However, they were afraid when the neurons that encode cage B were stimulated. (Ryan et al, Science, 2015, Figure 3B)

 

Aside: One experiment that would have been nice to see is how the overall activity of the labelled engram cells (measured with c-fos expression) compared between the amnesic mice and controls. I would expect relatively lower c-fos expression in these engram cells for the amnesic mice. I’m pretty sure they didn’t do this experiment, but maybe I missed it in the supplement. Second, I would be interested in whether the results would differ at all if the engram cells were labelled prior to the fear conditioning experiment. If the amnesic mice lacked a fear response to the stimulation of these engram cells, that would imply that the engram cells need to be labelled at the time of conditioning in order to be functionally connected to the cells that correlate with fear expression, probably in the amygdala.

 

The result highlighted above implies that after forgetting the fearful memory of a specific cage, the amnesic mice used a new ensemble of neurons to re-encode that previously forgotten context. Extending this to humans: when we forget a memory due to problems with retrieval and then re-encode it, we end up with two traces of that memory in the brain; we simply were not able to reactivate the former trace at the time the latter was encoded. This is a reminder that neurons do not have any intrinsic meaning in themselves (e.g. a Pamela Anderson neuron), but rather contribute to the conscious percept through the specific integration of their inputs and their downstream projections.

An even cooler (far-distant future) prospect of this work is the ability for us to re-activate our forgotten memories. Wouldn’t it be great if you could always remember the lyrics to Blank Space and serenade your friends, even though the lyrics now only seem to come to you while singing in the shower? Well, whenever you need to remember something, just label those cells with a light-activated cation channel and activate your implanted fiber optic cable whenever you want to recall that memory! Maybe in its early stages, this technology will be limited so you can only hold one item in this memory system at a time, like Ctrl+C, Ctrl+V. And if you adopt the first generation, that protein expression probably won’t be reversible, so this one memory you choose to encode first will become the single most important memory of your life. If that’s the case, I know I would just be listening to TSwift’s 1989 album throughout that entire encoding period.

Future students will be genetically modified as cfos-TTA lines and live on a diet of doxycycline except on the days of their tests in which they also get viral injections of AAV9-TREChR2 into their hippocampus, right by their fiber optic cable implant. Sounds like a computer is more efficient at this though, so let’s just convert our neural activity to a silicon substrate and call it a day. Maybe in year 2400. Screw you, Kurzweil, the Singularity is not that near.

To be a little bit less ridiculous, I believe there could sooner be alternative strategies to reactivate seemingly lost memories. Hypnosis is a popular example: it has been claimed (and I have always been skeptical) that it can be used to make people remember scenes of a crime. I’m starting to believe there might be some merit to this idea. By altering the brain state (such as by changing the rhythms of the brain), some neural circuits may become more commonly activated, thus more likely eliciting conscious recall of a specific memory. Maybe we’ll develop a strategy to stimulate our brains in many ways in order to produce a diversity of brain activations and widen our search for a lost engram.

 

 

By the way, I only touched on a small part of that paper, so you should also check out their reported differences between control and amnesic mice in terms of the engram cells’ synaptic plasticity. Also check out this nice recent review of engram cells by Susumu Tonegawa.

Is an undergraduate 4.0 GPA worth the effort?

WEBSITE MOVED

See this article at: https://srcole.github.io/2015/09/07/4point0/

Original article:

 

Short answer: no, probably not. But I think there is merit in digging a bit deeper into this question.

“Do you know why she only got the honorable mention?” my professor asked but did not wait for an answer. She then wrote “3.98” and circled it. “But you have a 4.0. You can become the first student from our department to win a Goldwater.” Was the 4.0 key? Who knows.

Many people graduate with 4.0 GPAs, so why do I have the delusion that my opinion is worth writing about? It is motivated by a sentiment echoed by many (including my PhD advisor) that undergraduate GPA is not an indicator of success in graduate school, and so it should not be a large factor in admissions. This makes total sense because the skills needed to excel in graduate research are very different from those needed to study and make good grades. It is also supported by (and sometimes contradicted by) the literature [1].

While I would love to delude myself with the belief that undergraduate GPA is a perfect indicator of graduate school success, that is obviously untrue. And so, I’ve reflected on whether I could have spent my time in undergrad more wisely than ensuring I maintained a 4.0 GPA. That leads me to (hesitantly) write my first personal post.

1. How to 4.0?

If you (understandably) don’t give a shit about my test-taking strategies, please skip to Part 2, which has what I think is the more interesting stuff. However, I’m including Part 1 because it provides insight into my personal perspective, and it might be helpful for a young (STEM) student who’s reading. Caution: #humblebrag

People 4.0 in different ways, and I can only tell you my perspective as a non-socialite with undiagnosed academic anxiety at a state university with rampant grade inflation. I’m going to focus on tests because they’re essentially the entire grade for STEM courses, other than the occasional project. As far as I can tell, I am an unusually good test-taker despite, and maybe thanks to, my anxiety. I think my strategies are best organized in list form:

  1. Most important is test preparation. Unsurprisingly, I started studying early. I found it helpful to review my notes immediately after lecture, write down notes on anything I found confusing, and then address them later with the internet or the prof. I found it mildly amusing when my roommate would declare that he had no homework and therefore had no excuse not to play video games, and would then repeatedly complain about bad test scores. No homework assignment? Review time.
  2. I made a list of all the things I needed to review (e.g. 1. old exam, 2. lecture notes, 3. homework, 4. old exam again). I made sure I understood almost everything, performing subconscious cost-benefit and risk management analyses to optimally spend my time. If I didn’t understand something that I thought would come up on the test, I worked until I figured it out or at least memorized some clues about it.
  3. When possible, I was always absorbed in the material immediately before starting the exam. People would always give me shit, saying that I wasn’t going to learn any more information in the last few minutes. But what I was trying to do was to stay in the mindset of the material. Switching from the Electronics mindset to the Anatomy mindset required spending the intervening 15 minutes reviewing my single-sided page of “everything I think I’ll forget for this exam.” My goal was to sufficiently potentiate the neural networks that needed to be reactivated to access anatomical knowledge in the upcoming stressful situation.
  4. I finished fast. Usually in about half the allotted time. And then I went through it again. I reworked all math to look for calculation errors. When possible, I used a different method that I knew should give the same answer. I had marked each question with my degree of uncertainty in my answer and focused on those until I was satisfied with my best guesses.
  5. I strategically answered open-ended questions to get the maximum amount of credit possible. With widespread grade inflation at many universities [2], professors WANT to give high scores. I would often insert information that I remembered that was not directly relevant to the question being asked and then laugh at how little credit was taken off my score.

It’s a fuck ton of work to assure a 4.0, and I think it comes down to 3 main ingredients:

  1. Discipline. Being awkward, I found it easy to avoid distractions such as parties. I also internalized the belief that TV and video games would not result in any life benefit. However, I was neglecting to consider my other options for meaningful activities (see next section).
  2. Luck. Maybe you get a bad professor. Maybe one exam question is worth 10% of your final grade and it’s obscure. Maybe you’re sick or for some other reason in a suboptimal state of mind during a critical test.
  3. Academic intelligence. Needless to say, but the list would be incomplete otherwise.

But don’t just take my advice; Quora is a great resource [relevant post], and there are countless other sites too.

2. Was assuring a 4.0 worth it?

I described in the last section that I spent an enormous amount of time ensuring that my GPA would not fall. This time investment was considerably more than what would most likely have been required to get a low A. So I could have chosen to slack off a bit, take my chances, and have a more active social life or some cool independent projects to show for it.

And this would have been a very rational decision. I commonly hear from profs that recommendation letters, research experiences, publications, and personal statements are all more important than GPA in graduate admissions. These documents examine skills that are more directly related to research (yet also more subjective), and so deserve higher weight. Similarly, when hiring software engineers, a well-established GitHub is much more valuable than high grades in comp sci courses.

This sentiment is ubiquitous on blogs and forums. Many argue that the small difference in GPA between 4.0 and 3.9 is negligible, and still others go further to say things like: “I don’t want a employee with a 4.0 gpa, with bad social skills.” It seems to be the consensus that there is not much extra value in the final few grade points up to a 4.0.

But if that is so obviously true, why are people even asking this question in the first place? Well, for many other metrics in life, the degree to which we feel impressed by a score is nonlinear (see figure). Everyone knows this. If a score has an upper limit, there is extra perceived value in the absence of any flaw. That’s why no one cares about all of the 279-point bowling games and only about the 300-point games. This is not to say that the difference between these bowling scores is comparable to small differences in GPA (note the arbitrary distances I chose for my plot). So yes, the difference between a 3.99 and a 4.0 is nontrivial because 4.0 is the upper limit.

[Figure: perceived impressiveness as a nonlinear function of score]

Practically, most people with this concern are interested in whether this grade point difference has any impact on their applications. Following the logic above, I am inclined to say: perhaps. This was implied in terms of fellowships by the aforementioned department chair. Secondly (and still anecdotally), a colleague at Caltech echoed this sentiment in regard to (his limited knowledge of) that school’s admissions preferences. These profs explained that a 4.0 gives the impression that the student can handle whatever is thrown at him/her. A perfect record such as this is indicative of the student’s meticulousness and ability to avoid mistakes. It is possible that program admissions and fellowship selection committees have these ideas in mind when considering an applicant’s GPA.

However, I did not come across this sentiment in my blog- and forum-based research. I believe the reason may be that the idea that a 4.0 GPA is distinct from a 3.95 is an unpopular one to hold. First, it applies a nonlinear interpretation to GPA, and we find linear trends much more intuitive. Second, not many people debating this point had a 4.0 undergraduate GPA, so maybe they’re a little bit biased to think that having 0 B’s or 5 B’s is not an important difference.

OF COURSE: Since we cannot control for all factors (such as undergraduate program rigor), the correlation between GPA and future success can only be very limited. I am no smarter than anyone else because I have a 4.0 undergraduate GPA. In fact, it reveals a weakness of me-from-the-past…

I was not an exceptional youth. I did not read as a child/teenager, and I learned to be content toiling my days away playing video games until I got to college. I had no ambition to do anything remotely challenging or to spend the effort to learn something new that was not required. Extracurriculars were minimal. I didn’t have an active role model who wanted to guide me on how I could invest in my future, and I had 0 good ideas, so I often wasted my time or gave in to my instant gratification monkey.

Therefore, when I began undergrad, all I knew to do to succeed was to do well in classes. So it was pretty straightforward to stay focused on that and maintain a 4.0. Somewhere along the line, I just assumed that a PhD was the next step, and so I continued collecting research experience and making presentations. Only rarely did I gather the ambition to go and do something on my own. Maybe I have above-average standards, but the point is: I could have done a lot more cool things. If I could go back in time 5 years and advise me-from-the-past, there are plenty of activities I would suggest in lieu of excessive studying:

  • Learn Python
  • Read neuroscience literature
  • Improve skills in machine learning with Kaggle
  • Learn to use an Arduino
  • Extend that currency trading project
  • Read sci-fi

So do I regret all the seemingly unnecessary effort I made to assure a 4.0 undergraduate GPA? I like how Alex summed it up, “I think getting >3.9 is worth it. Opens doors for a small window, producing dividends down the line.” If I didn’t have a 4.0, I may not have received the Goldwater, according to my aforementioned professor’s judgment. Maybe then I would not have gotten an NSF GRF. And then, it could have been more difficult to join an awesome lab. So… I don’t regret it. However, I am super interested in the alternate reality in which I made the other choice and did cooler things that would have been more directly valuable to my future.

Footnotes

[1] Weiner, O. D. (2014). How should we be selecting our graduate students? Molecular Biology of the Cell, 25(4), 429-430.

supported by (laughably small N): Schwager, I. T., Hülsheger, U. R., Bridgeman, B., & Lang, J. W. (2015). Graduate Student Selection: Graduate record examination, socioeconomic status, and undergraduate grade point average as predictors of study success in a western European University. International Journal of Selection and Assessment, 23(1), 71-79.

contradicted by: Burton, N. W. & Wang, M. M. (2005). Predicting long-term success in graduate school: a collaborative validity study. Educational testing service (ETS) report.

[2] Popov, S. V., & Bernhardt, D. (2013). University Competition, Grading Standards, and Grade Inflation. Economic Inquiry, 51(3), 1764-1778.

Searching for San Diego’s finest burrito

Want to get a good burrito in SD but not sure what new place to try? Here’s a periodically-updated ranking of some places to prioritize and avoid! There are two lists:

  1. Best carne asada burritos (beef, guac, pico)
  2. Best all other burritos

In addition to its ranking, each burrito has a rating (out of 10.0). To give you an idea of the quality compared to some chain Mexican restaurants, Rubio’s scored a 6.0 (too much rice!) and Chipotle a solid 7.8. Most burritos were about $6, while you can expect to pay more at Lucha Libre, in downtown La Jolla, etc.

In the future, we’ll perform a more in-depth, multidimensional analysis of the top burritos.

Check out the Google Map with all rated locations! (deprecated as of Jan 29 2016)

Check out the Google Sheets with all rated burritos!

Best carne asada burritos


#1 (8.8) The Taco Stand

It’s a little small, but the tasty beef and freshly-made tortillas made it the best carne asada burrito I’ve had in SD, which makes it unfortunate that this picture is so out of focus.

#2 (8.7) El Zarape

Awesome guac. And solid mild (don’t hate) salsa. They also have a burrito with pineapple in it!


#3 (8.1) Lolita’s Taco Shop

Awesome guac. First result when Googling “best burrito in San Diego.” Churros good. Horchata bad.


#4 (8.0) Rigoberto’s

Rigoberto’s has a special place in my heart. I had my first California burrito and cup of horchata here when I flew in around midnight for a grad program interview. It’s probably your best option if you’re looking to get a burrito after 10pm. I would also recommend the Campeon burrito, pictured above, which is enormous and only $7. Warning: remember to ask for it without sour cream.


#5 (8.0) El Patron

Free grilled onions! And this place also has a sweet Taco Tuesday deal: an assortment of $1 tacos.


#6 (7.7) California Burritos

Carne asada everything – apparently “everything” means added cheese and sour cream. Mucha carne. Sour cream is never a good decision, but otherwise, it was really good.


#7 (7.7) Los Primos

Good beef flavor. Known for their Monster burritos (pictured). The carne asada Monster was pretty bad, though: too much rice and not enough meat.


#8 (7.5) Los Dos Pedros

High quality traditional burrito for only $5.25. But really, you should go to Oscar’s next door and get some fish tacos and ceviche.


#9 (7.4) Carmen’s Mexican Food

Good tortilla. The meat:nonmeat ratio was off. Not enough meat.


#10 (7.4) Taco Surf PB

An above average burrito with lots of meat and good hot sauce. But the meat wasn’t too flavorful.

Other ratings

Don Carlos Taco Shop (7.4) – The hottest burrito I’ve touched. Mediocre salsa. Kind of expensive, and it’s a block away from The Taco Stand, so just go there.

Kotija Jr. Taco Shop (6.6) – Was an 8.5 before the meat became hard to chew and the tortilla’s structural integrity disappeared.

Los Palmitos (6.6) – Slightly above average burrito. Large water cups!

Estrada’s (6.5) – “American or Mexican Guacamole” … “Mexican” guac tastes just like all the other guac I’ve had.

JV’s (6.0) – average taco shop

El Indio (5.9) – Just go to Lucha Libre down the street.

Belindas Cocina (5.5) – at UCSD “farmer’s market”

Vallarta’s (4.7) – Did not want to finish.

Cotixan (3.0) – I usually look forward to leftover burritos. I was terrified to eat this the next day.

Best other burritos


#1 (9.2/10.0) Lucha Libre California burrito.

The meat flavor is great and their cheese isn’t just filler! They’re well known for their delicious green cilantro sauce. I originally rated their Surf & Turf (7.2/10.0), which commits the sin of using rice as a filler. Free chips are always a plus, though theirs aren’t great.


#2 (9.0) Sushi Freak Sushi burrito

Cheap sushi that’s actually filling! Just $10 for a giant roll with salmon, crab, and tuna. Cucumbers in sesame oil on the side. Nom.


#3 (8.9) Los Panchos Breakfast burrito

I’m usually not a fan of breakfast burritos, but this was exceptional. Just a little egg with beef, cheese, and potatoes. And salsa verde.

 

 

While I assert that these ratings are objective and should certainly extend to everyone else’s tastes, I always like hearing what others think, so let me know if you think I completely misjudged one restaurant, and maybe I’ll re-evaluate. Also inform me of any places I might be missing out on, and I’ll try to schedule them in for an official rating.