Strategic disclosure of information

We live in a world of imperfect information, that is, a world where consumers do not know everything about each competing product on the market. This works to the disadvantage of consumers, but nowadays – especially with the spread of the internet – we have plenty of reviews and rankings at our disposal. Do these really help?

Luca and Smith (2013) investigate whether business schools display rankings on their websites and whether the disclosed rankings are stated precisely or shrouded in some way (e.g. saying “top 20 school” when the school is ranked exactly #20).

While in the age of the internet it is extremely easy to verify a school’s exact ranking, shrouding information is still very common. For instance, many schools merely say that they’re “a top school according to XY” without stating their (easily verifiable) actual position in the ranking; or that they’re “top 5 according to XY”, leaving out the fact that, say, they’re only top 5 in the part-time Master’s ranking and not in the overall ranking.

Besides establishing that schools at all levels that display rankings on their websites tend to shroud information, the authors also find that the relationship between a school’s rank and the probability of disclosing a ranking takes somewhat of an inverted U-shape:

Figure 1: Ranking disclosure by business schools

The authors use OLS to estimate the model

Y_i = \alpha_0 + \alpha_1 D_{11-25} + \alpha_2 D_{26-50} + \alpha_3 D_{51-75} + \alpha_4 D_{76-116} + \alpha_5 D_{UR} + \epsilon_i

where the D variables are dummies for schools in a given rank bracket, D_{UR} stands for unranked schools, and the omitted category – and hence the intercept term – represents top 10 schools. This regression is run with several different (dummy) dependent variables, represented by Y above.
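For concreteness, here is a minimal sketch of what estimating such a linear probability model could look like. The data are simulated and the variable names (rank_bin, discloses) are my own invention, so the estimates it prints are purely illustrative, not the paper’s:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in data: each school gets a rank bracket and a binary
# "discloses any ranking" outcome; the disclosure rates are made up.
brackets = ["top10", "11-25", "26-50", "51-75", "76-116", "UR"]
p_disclose = {"top10": 0.20, "11-25": 0.30, "26-50": 0.86,
              "51-75": 0.76, "76-116": 0.56, "UR": 0.52}
df = pd.DataFrame({"rank_bin": rng.choice(brackets, size=500)})
df["discloses"] = rng.binomial(1, df["rank_bin"].map(p_disclose))

# OLS with top 10 schools as the omitted category, so every coefficient
# is a percentage-point gap in disclosure probability relative to top 10.
lpm = smf.ols("discloses ~ C(rank_bin, Treatment(reference='top10'))",
              data=df).fit(cov_type="HC1")  # robust SEs, usual for LPMs
print(lpm.params.round(2))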

The results of the regression confirm the figure above. With most disclosure-related dependent variables, schools ranked 11-25 are not significantly more likely to display rankings than top 10 schools; schools ranked 26-50 are the most likely (about 66 percentage points more likely than top 10 schools); and then the probability starts declining again, although the gap stays significantly above zero even for unranked schools. Namely, schools ranked 51-75 are 56, those ranked 76-116 are 36, and unranked schools are 32 percentage points more likely than top 10 schools to display any ranking by the five major rankers on their websites.
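A quick note on reading these numbers: in a linear probability model the fitted value is itself a probability, so each coefficient is a percentage-point gap. In the notation of the regression above,

\hat{P}(Y_i = 1 \mid \text{top 10}) = \hat{\alpha}_0, \qquad \hat{P}(Y_i = 1 \mid \text{ranked 26-50}) = \hat{\alpha}_0 + \hat{\alpha}_2,

so an estimate \hat{\alpha}_2 \approx 0.66 means a 66-percentage-point higher probability of disclosure.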

When it comes to shrouding information, we see a similar pattern, again shown by Figure 1: an inverted U-shape. If we take Y to be “any shrouding of information”, then schools ranked 11-25 are not significantly more likely than top 10 schools to shroud information. However, schools ranked 26-50 are 58, those ranked 51-75 are 50, those ranked 76-116 are 42, and unranked ones are 36 percentage points more likely than top 10 schools to do so.

Certain rankers do not give an explicit ordering of schools; they just publish a list of “top schools” – the Princeton Review, for instance. Business schools (good ones, I assume) can also be accredited by an organization called the AACSB. Whether a school displays that it is considered “top” by the Princeton Review, or that it is accredited by the AACSB, is also used as a dependent variable by the authors. Here they find a monotonic relationship: lower-ranked schools are more likely to display such information. For AACSB accreditation, the coefficients become significant from the 26-50 bracket onwards; for the Princeton Review listing, however, only schools ranked 76-116 and unranked schools are significantly more likely than top 10 schools to display it on their websites.

The main conclusion is that the intuition that the best firms will be the most likely to disclose ranking/review information appears to be wrong. Rather, if a firm advertises itself as holding a certain rank somewhere, it is much more likely to be a mediocre firm; and if a firm advertises itself as being certified by some organization, it is much more likely to be a firm with inferior products.

Whether this generalizes to any arbitrary industry is another question (probably not), but it may well hold in areas other than business schools. To answer the question posed at the beginning of the post: rankings publicized by the firms themselves do not really help, at least not in a monotonic fashion. One should instead check the rankings and reviews themselves. So reviews/rankings (probably) help, but only when one acquires them from the primary source – that is, from the ranker itself and not from the firm.

The authors also propose a brief explanation for this phenomenon: mid-ranked schools want to separate themselves from lower-ranked ones, while top-ranked schools want to “countersignal”, which is to say they want to show that they’re so good they don’t need to display any rankings on their website.

A note on the methodology: it’s a little odd that the authors used OLS to estimate a model with a binary dependent variable (a linear probability model) and didn’t even report a logit or probit model, at least as a robustness check.
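Sticking with the simulated example from above (so, again, illustrative rather than the paper’s data), such a check takes only a couple of lines:

import statsmodels.formula.api as smf

# Logit on the same binary outcome; the average marginal effects are
# directly comparable to the LPM coefficients estimated earlier.
logit = smf.logit("discloses ~ C(rank_bin, Treatment(reference='top10'))",
                  data=df).fit(disp=0)
print(logit.get_margeff().summary())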
