# Customer Satisfaction of American Airline Companies

Flying on US domestic airlines is a nightmare. The customer service is pathetic, the staff are unfriendly, the airlines charge for every small thing… the list goes on and on.

The University of Michigan carries out surveys of American customers and publishes the average scores annually as the American Customer Satisfaction Index (ACSI). You can check the scores for several industries on their website. For airlines, the chart looks like this:

American Customer Satisfaction Index for American Airline Companies

The airlines appear in descending order of the 2015 ACSI scores, which range from 81 for JetBlue to 54 for Spirit.

ACSI is published for a given brand only once a year, but companies want to know about customer satisfaction round the clock. So I decided to use Twitter sentiment as a measure of customer satisfaction. This is a very rough exercise to see whether we get any results that have face validity. As my students will know, Twitter is one of the key social networks airlines use to address customer complaints, so it is likely to capture customer satisfaction in real time. In that sense, the validity question is really about ACSI rather than about Twitter sentiment. There is a commonly discussed issue with Twitter: it is not representative of the general population. Still, we must keep in mind that ACSI may not be a good representative of American flyers’ sentiment either.

I decided to focus on the 9 airlines for which the ACSI scores for 2015 are available – JetBlue, Southwest, Alaska, Delta, American Air, Allegiant, United, Frontier, and Spirit. The graph looks as follows:

ACSI Scores for Nine American Airlines

The average score for these 9 airlines is 68.11. As the maximum possible ACSI score is 100, 68 is not a great score. However, I am amazed at how far the expectations of American flyers have fallen. I am sure that if the survey respondents were from Asia, you would get an average of less than 50. But that’s a story for another post, where I will compare the sentiment about the best airlines, including Singapore Airlines, Emirates, Qatar, etc.

Next, I went on Twitter and downloaded tweets that were directed at these airlines. My condition was simply that the Twitter handle of the airline should appear in the tweet. For example, a tweet mentioning @JetBlue would indicate that the tweet is targeted at JetBlue and should therefore be included in the analysis. I carried out this data collection on 2 April 2016 from Singapore. Following this, I categorized the tweets as positive, negative, or neutral. To compare with ACSI, I created a metric similar to the Net Promoter Score (NPS). The formula is as follows:

$\displaystyle \mbox{Net Sentiment Score} = \frac{\mbox{(Total Positive Tweets - Total Negative Tweets)}}{\mbox{Total Tweets}}$
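As a minimal sketch, the score can be computed directly from labelled tweets. The labels and counts below are made up for illustration; the sentiment classification itself is a separate step not shown here.

```python
def net_sentiment_score(labels):
    """Return (positive - negative) / total for a list of 'pos'/'neg'/'neu' labels."""
    pos = labels.count("pos")
    neg = labels.count("neg")
    return (pos - neg) / len(labels)

# Hypothetical batch of 10 classified tweets for one airline
tweets = ["pos", "neg", "neu", "pos", "pos", "neg", "neu", "pos", "neu", "neu"]
score = net_sentiment_score(tweets)  # (4 - 2) / 10 = 0.2
```

Neutral tweets count toward the denominator but not the numerator, so a brand with mostly neutral mentions is pulled toward 0.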

Here is the graph when I plotted net sentiment scores of all the 9 airlines:

Net Sentiment Scores for 9 American Airlines

The score is bounded between -1 and 1. If all the tweets are negative then the score will be -1 and if all the tweets are positive then the score will be 1.

The average score is 0.19, which is around 60% of the scale range ((0.19 + 1)/2 = 1.19/2 ≈ 0.60). Similar to the ACSI graph, 4 airlines (JetBlue, Alaska, Southwest, and Delta) are above the mean, while the remaining 5 are below it. Interestingly, these are the same 4 airlines that have above-average ACSI scores, though the ordering is a bit off. To better compare the two graphs, I decided to plot them in the same space. For that, however, I needed a common scale, so for convenience I used z-scores.¹
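The standardization step can be sketched in a few lines. The ACSI values below are illustrative placeholders, not the actual 2015 scores.

```python
import statistics

def z_scores(values):
    """Standardize a series to mean 0 and standard deviation 1."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population SD; sample SD works too
    return [(v - mean) / sd for v in values]

acsi = [81, 78, 75, 71, 66, 65, 60, 58, 54]  # illustrative scores only
zs = z_scores(acsi)
# Both standardized series now share the same scale,
# so ACSI and Net Sentiment Score can be plotted in the same space.
```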

ACSI and Twitter Net Sentiment Score Correlation

I find that the correlation is high at 0.77. It is also statistically significant, with a p-value of 0.016. However, notice that we have only 9 observations, which means the standard error is likely to be high. Indeed, the 95% confidence interval for the correlation coefficient is quite wide, [0.21, 0.95], but the lower limit is still comfortably above 0.
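For the curious, that interval can be reproduced with the standard Fisher z-transform, using the r = 0.77 and n = 9 figures above:

```python
import math

def pearson_ci(r, n, z_crit=1.96):
    """95% confidence interval for a Pearson correlation via the Fisher z-transform."""
    z = math.atanh(r)                    # transform r to an approximately normal scale
    se = 1 / math.sqrt(n - 3)            # standard error in z-space
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to r-space

lo, hi = pearson_ci(0.77, 9)  # lo ≈ 0.22, hi ≈ 0.95
```

The interval is so wide because the standard error depends only on n, and with n = 9 there is very little information to pin the correlation down.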

I think ACSI is doing a fair job of capturing the satisfaction of American air travellers: it corresponds quite well to the Twitter sentiment. It is worth noting that I am comparing survey results collected over a one-to-two-month period in 2015 with tweets sent on or shortly before 2 April 2016. It would be worth studying how Twitter sentiment fluctuates over time. That is my next assignment, once I am done with the sentiment analysis of the top-ranked airlines.

In case you are interested in the individual airlines’ sentiment charts, you can view them here:

¹ A z-scored variable has a mean of 0 and a standard deviation of 1.

# Instagram Filters and Laziness

I always believed that filters are what made Instagram such a big hit; otherwise, it was just another photo-sharing app. When I started using Instagram, I would try many filters before settling on one and sharing my picture. Over time, out of laziness, I started sharing pictures without any filter, also known as the “Normal” filter. Recently I wondered whether my behavior was peculiar or whether many other people also use the Normal filter when sharing pictures on Instagram. So I did a quick-and-dirty analysis using pictures collected from the Orchard Road area in Singapore. Why Orchard? Well, let’s just say that Orchard is one of the most frequented tourist spots in Singapore, which makes my analysis more representative.

I carried out the analysis on pictures collected in January and February of 2014, 2015, and 2016. This makes the comparison easier and more uniform. As the filters offered by Instagram changed over the two-year period, I am not plotting all the filter counts for the three years on the same graph. Instead, I show three separate graphs, one for each year. Also, I show bars only for filters that were used more than 500 times in the two-month period. The cutoff is arbitrary, but it keeps the bar graphs readable. All the filters with 500 or fewer pictures were combined and labeled “Other”, which shows up in the graphs.
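The bucketing step can be sketched as follows. The filter counts here are invented for illustration; only the thresholding logic reflects what the text describes.

```python
from collections import Counter

def bucket_filters(counts, threshold=500):
    """Keep filters used more than `threshold` times; fold the rest into 'Other'."""
    kept = {name: c for name, c in counts.items() if c > threshold}
    other = sum(c for name, c in counts.items() if c <= threshold)
    if other:
        kept["Other"] = other
    return kept

# Hypothetical two-month counts for one year
counts = Counter({"Normal": 9200, "Clarendon": 1400, "Gingham": 900,
                  "Hudson": 320, "Sierra": 150, "X-Pro II": 90})
buckets = bucket_filters(counts)  # Hudson, Sierra, X-Pro II fold into "Other" (560)
```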

Without further ado, here are the three graphs:

Well, I am not an outlier! It turns out that since at least 2014, people have been using the Normal filter (basically no filter) more than any other filter. The percentage of pictures with the Normal filter was 59% in 2014, 73% in 2015, and 65% in 2016.

It’s tough to say why people selected the filters they did. My hypothesis is that people are lazy, so they go with the default. The “Normal” filter is the default, so it is picked most often. Instagram has been changing its filters, so I don’t know how they were arranged in 2014 or 2015, but for 2016 the ordering was as follows:

Clarendon and Gingham are the next two filters shown by Instagram after Normal, and they are also the next two most commonly used filters in the Orchard area! After that, the filter ranking in the graph loses its correlation with Instagram’s default ordering. Perhaps this indicates that non-lazy people actually hunt down the filter that gives them the best-looking result. Still, Juno and Lark, which are 5th and 6th on my 2016 graph, appear in 7th and 5th position in the Instagram ordering. Hudson, Sierra, and X-Pro II, which are at the end of my bar graph, also appear toward the end of the Instagram ordering. It seems there is some support for the “lazy” hypothesis!

# Marketing Analytics – Summary of Session 1

We started the second trimester today at ESSEC’s Singapore campus. I am teaching Marketing Analytics (Engineering) to two sections of 50 students each. In my first lecture I introduced the fundamental problem facing marketers: how to justify their decisions to others who control the budget. Gone are the days when people could simply use experience, gut feel, intuition, etc. as valid criteria for selecting marketing strategies. Now nobody wants to bet even \$1 on speculative marketing managers. Data-driven marketing is the new norm, and my course is an introduction to this new reality. In any marketing course, ‘brand positioning’, ‘segmentation’, ‘targeting’, ‘media planning’, etc. are common terminology. Professors and students know what these concepts mean. Yet, given a real-life business situation, how many students would actually be able to come up with a strategic solution? Very few indeed.

Our Course Text – Principles of Marketing Engineering

Over the next five weeks, we will take a two-step approach. First, we will clarify a particular marketing concept, e.g., positioning. We will then work out what type of information needs to be collected to plot a perceptual map showing brand positioning in a two- or three-dimensional space. Next, we will use SPSS to do the data analysis, using statistical techniques such as factor analysis. Finally, based on the perceptual maps, students will recommend actions. There will be hard numbers involved. For example, when students suggest launching a new brand to exploit a potential gap in the market, they will need to justify it by projecting the changes in market shares. They will also have to account for the cannibalization of any existing brands from the same company. This will be a complex but fun exercise!

The other topics include decisions on segmentation using probability models, salesforce allocation, and conjoint analysis. As we started working with SPSS today, I used a dataset consisting of accounting information on several US firms for 2010 and 2011. The students’ first task is to build a sales response model and test it against the data. To what extent do sales respond to advertising? The response model will not be very complicated, yet we may end up using a logit-type curve (the ADBUDG model), who knows?
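For reference, the ADBUDG response curve (Little, 1970) has a simple closed form. The parameter values below are illustrative, not estimates from the class dataset.

```python
def adbudg(x, min_resp, max_resp, c, d):
    """ADBUDG sales response at advertising effort x.

    Rises from min_resp (no advertising) toward max_resp (saturation).
    With c > 1 the curve is S-shaped; with c <= 1 it is concave throughout.
    """
    return min_resp + (max_resp - min_resp) * x**c / (d + x**c)

# Illustrative parameters: sales index between 0.3 and 1.0, S-shaped (c = 2)
base = adbudg(0.0, 0.3, 1.0, 2.0, 1.0)   # = 0.3, the no-advertising baseline
mid  = adbudg(1.0, 0.3, 1.0, 2.0, 1.0)   # = 0.65, halfway up the curve
```

In practice, the four parameters are calibrated from managerial judgment or fitted to historical data, which is exactly the kind of exercise the sales response assignment involves.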

I believe that modeling the data is not the most important thing; it is just a small component of decision making. The critical parts are reading the analysis, interpreting it, and then recommending a decision path. I don’t like blind data mining of millions of data points to come up with patterns that everyone believes are true. Unfortunately, this is exactly what is happening in the analytics area. Data mining coupled with intelligent experiments is the way to go (more on this later). Bringing intuition to this party is like inviting Michael Lohan to speak at a conference on responsible parenting!

# Social Media Metrics

On Tuesday we are going to start learning social media metrics. Measuring any marketing strategy is tough because we are rarely able to “control” the experiment as scientists do. For example, when a firm launches an advertising campaign, several other things can be going on at the same time: competitors may be tweaking their ad campaigns, changing their prices, or introducing new products; customer behavior might be shifting; the economy may start tanking; the government may change regulations… any of these things, and many more, might be happening simultaneously. In that case, how does an advertiser isolate the effectiveness of the ad campaign? It is really difficult.[1] The same is true for social media marketing. Therefore, understanding how to measure the effectiveness of a social media campaign is critical.

Read the articles below; they give a fair idea of social media metrics. I will keep updating this post over the next few days by adding more links below.

Book – Web Analytics 2.0 by Avinash Kaushik

CLV Calculator

Net Promoter Score

How to Track Social Media Metrics Like A Rockstar

Top 10 social media dashboard tools

Why Your Friends Have More Friends Than You Do

Ever wondered why your friends seem so much more popular than you are?

The 10 Social Media Metrics Your Company Should Monitor

Social Media Metrics – Chris Brogan

Archive for the ‘Social Media Measurement’ Category – Jeremiah Owyang