
Crowdsourcing Forecasts: competition for sell-side analysts?

Rick Johnston*

Rice University

Michael Wolfe

Virginia Tech

January 6, 2014

Abstract

Prior research documents long horizon optimism and short horizon pessimism in sell-side

forecasts. Recent research suggests that forecasts are increasingly less important to sell-side

analysts. These factors motivate us to explore a new phenomenon, crowdsourcing, as an

alternative source of forecasts. We obtain revenue and earnings forecasts from estimize, an entity

that solicits these forecasts online. Many participants are buy-side or independent analysts. We

find the number of contributors per firm is associated with proxies for professional exposure and

forecast difficulty, consistent with extrinsic and intrinsic incentives. On average, the estimize

forecasts are bolder (deviating more from the consensus) and as accurate as those from the sell-side.

Further, their forecasts show less of the gaming reflected in the extreme pessimism of the sell-

side’s final forecasts. Our market test confirms the superiority of estimize’s short-term forecasts

as proxies for market expectations. These results provide support for the value of crowdsourced

forecasts.

JEL Classification: G28; G29; M41; M43

Keywords: Analyst, Forecast, Earnings Response Coefficients, Crowdsourcing

Acknowledgements: We thank Leigh Drogen from estimize for providing us with their data and

answering questions about their business. We also thank Tony Kang, Shail Pandit, Phil Stocken,

and workshop participants at Loyola Marymount University, McMaster University, and Wake

Forest University for their helpful comments and suggestions.

*Corresponding author, [email protected], 713-348-6302


1. Introduction

Forecasts are fundamental to markets. Revenues, earnings, and cash flows are common

forecast parameters. Sell-side analysts are one publicly available source of such forecasts for

market participants and academics. However, sell-side forecasts do have drawbacks, one being

bias such as long-term optimism and short-term pessimism, perhaps to support management’s

effort to meet or beat expectations (Richardson et al., 2004). Also, Bradshaw et al. (2012) show

that in some circumstances, such as smaller or younger firms, time series forecasts are superior

to sell-side analyst annual forecasts. Perhaps more disconcerting, however, is that recent studies

are beginning to question the importance of forecasting to sell-side analysts (Brown et al., 2013),

raising the possibility of deteriorating quality both now and in the future. This raises the question

of whether there are alternative or better sources of forecasts.

In this paper, we explore a new phenomenon, crowdsourcing of forecasts.

Crowdsourcing, a word that originated in 2006, is an online production model that leverages the

collective intelligence of online communities (crowds) to serve organizational goals (Brabham,

2013). We examine forecasts from estimize. Estimize, which was founded in 2011, is an open

platform that solicits and reports revenue and earnings forecasts from members, a significant

number of whom are employed by buy-side or independent research entities.1 These forecasts are

available on the company’s website and Bloomberg terminals. In our sample from 2012, more

than 2,000 contributors provide forecasts for approximately 1,400 firms. Since the economics of

crowdsourcing is a relatively undeveloped literature, we first explore incentives to participate on

estimize. To provide evidence on the potential value of crowdsourced forecasts, we then examine the characteristics of the estimize forecasts and how well they represent market expectations using firm-matched sell-side forecasts as a basis of comparison.

1 The company describes their contributors as follows: analysts, portfolio managers, and traders from many high-profile hedge funds, mutual funds, and asset management firms. The community also includes a large contingent of professional independent analysts, retail traders and investors, hobbyists, corporate finance professionals, industry professionals, and students. Anyone may join. Recently, they have begun to collect affiliation data, but it is unavailable for our data and the cumulative data is therefore incomplete. The founder of estimize, who himself came from the buy-side, told us that the buy-side represents about 30% of the participants.

Although independent and buy-side analyst forecasts provide an alternative to those of

the sell-side, historically buy-side forecasts were rarely publicly available. Also, previous

research, which is limited, suggests that both are less accurate than sell-side analysts and, in the

case of buy-side, more optimistic as well (Groysberg et al., 2008; Gu and Xue, 2008). However,

the samples for both of these studies are limited and fairly dated. The estimize sample not only

combines multiple information providers, it also offers a newer and perhaps broader selection of

buy-side analysts than was represented in past research, thus opening the possibility of different

findings.2

Early research established sell-side analyst forecast superiority to time series models

(Brown et al., 1987) and linked sell-side analyst career success to forecast accuracy (Stickel,

1992; Mikhail et al., 1999). More recent studies suggest forecast accuracy is increasingly less

important to the sell-side. Groysberg et al. (2011) find no relation between forecast accuracy and

compensation, providing a possible explanation for its reduced importance. Johnston et al.

(2012) show that a simple adjustment to enhance forecast accuracy is overlooked by analysts,

thus providing direct evidence of forecasting’s reduced importance. Brown et al. (2013) find that

industry knowledge is increasingly important to sell-side analysts.

2 Another alternative source for forecasts is “whispers.” The origin or sources of whispers is unknown. Collection

and sourcing of whispers vary, but the coverage appears to be relatively small. For example, Brown and Fernando

(2011) say their sample of 247 firms is “relatively large.” They also summarize the literature on the quality of

whispers as inconsistent. Other research finds Regulation Fair Disclosure (FD) altered the value of whispers (Rees

and Adut, 2005). Whispers also seem to be only earnings forecasts with a short forecast horizon. Therefore, estimize

offers more breadth in terms of firm coverage, horizon coverage, and types of forecasts.


Institutional changes are likely part of the reason for the evolving role of the sell-side

analyst as well as additional motivation to revisit prior findings of sell-side analyst forecast

superiority. The operating practices of sell-side research departments and the research labor

market were altered dramatically by various regulatory and voluntary changes that occurred in

the early 2000s.3 Many good analysts left the sell-side because of the likelihood of reduced

compensation caused by the decoupling of research funding and underwriting, and many sell-

side research departments were downsized for the same reason.4 This exogenous shock to the

sell-side would have altered the pool of available candidates to the buy-side and to independent

research providers not only immediately but also thereafter. Anecdotal evidence of such is found

in the following quote from a Business Week article (Steverman, 2007):

Demand for analysts is strong, but the landscape has shifted. More research

dollars are flowing away from . . . so called “sell side” firms that sell their

research to others. Instead, “buy side” firms such as hedge funds and other money

managers are hiring in-house research staffs, paying top dollar to keep those

investing insights all to themselves.

The Global Analyst Settlement may have leveled the research playing field in another

way as well. By requiring the funding of independent research for five years, the settlement may

have facilitated independent research providers’ ability to hire better talent and broaden their

coverage. Thus, previous findings of the sell-side’s forecast superiority over buy-side and

independent analysts may be less likely in today’s environment.

3 For example, Regulation Fair Disclosure (FD) in 2001, Self-Regulatory Organization rules of the NYSE and NASDAQ in 2002, and the Global Research Analyst Settlement in 2003.

4 In 2004, the number of sell-side analysts had decreased by 15–20% according to John McInerney, senior director at Citigate Financial Intelligence. The Flight of the Sell-Side Analyst, CFO, July 8, 2004.


The crowdsourcing of forecasts on estimize provides an opportunity to revisit the relative

quality of sell-side forecasts. Crowdsourcing is increasingly used by businesses, nonprofits, and

governments. Crowdfunding, a related concept, is receiving increasing attention in the finance

community. The economics of crowdsourcing is a relatively undeveloped literature. In a study of

open source software, Lerner and Tirole (2002, p. 197) write: “To an economist, the behavior of

individual programmers and commercial companies engaged in open source projects is initially

startling.” Crowdsourcing elicits a similar reaction. Motivated by the work of Lerner and Tirole

(2002) on the incentives to participate, we explore whether the level of firm-specific

participation on estimize is associated with proxies for exposure in the investment community as

well as the difficulty of the forecasting task. We further conjecture that more experienced or

skilled individuals are more likely to participate, resulting in bolder forecasts (deviating more

from the consensus). However, based on prior studies of buy-side and independent analysts, which

represent large constituent groups on estimize, we expect estimize forecasts will be less accurate, on average.

Although the evidence is mixed, bolder forecasts have been shown to be more accurate (Clement

and Tse, 2005), thus allowing the possibility of the estimize sample’s accuracy results to differ

from prior studies. The voluntary participation on estimize may also influence the results,

especially if only the more skilled contribute. Those possibilities notwithstanding, we

hypothesize that estimize may at least be as accurate as the sell-side in smaller or less-covered

firms where the payoff to the sell-side analyst and her associated brokerage firm is likely lower.

Further with respect to forecast bias, we expect that the two groups are similar in the longer

horizons but estimize is more optimistic in short horizons due to sell-side incentives to curry

favor with company management and facilitate meeting or beating expectations by being

pessimistic.


To explore participation, we employ a count model of firm-specific forecasters on

estimize. Testing for forecast boldness, we employ Hong et al.’s (2000) measures compared to a

firm-matched IBES (Institutional Brokers’ Estimate System) sample. To evaluate the relative

quality of forecasts, we adopt a dual approach. We assess ex post performance by comparing

accuracy and bias of both revenue and earnings forecasts for estimize versus the sell-side. As a

measure of ex ante performance, we follow Gu and Xue (2008) and compare earnings response

coefficients (ERCs) for the two groups of forecasts. Forecasts that are better proxies for market

expectations should have a stronger association with returns at the time the actual is announced,

reflected by higher ERCs. Previous studies find that the ex ante and ex post measures do not

always produce consistent results (O’Brien, 1988; Gu and Xue, 2008).

We find the number of forecasters per firm is associated with the number of sell-side

analysts after controlling for firm size and market-to-book, suggesting investment community

exposure is an incentive to participate. Further, we find participation is increasing in forecasting

difficulty, again consistent with contributors being motivated by a challenge and wanting to

demonstrate their skill to others. Comparing the estimize forecasts to the sell-side forecasts, we

find the estimize forecasts are bolder and more accurate. However, after controlling for other

determinants, we find no difference in forecast accuracy of either earnings or revenue, on

average. Given previous results of sell-side forecasts’ accuracy superiority over both buy-side

and independent analysts, we believe this finding is noteworthy. Testing for differential bias, we

find estimize to be slightly more optimistic, consistent with prior research on the buy-side. The

optimism is concentrated in short horizon forecasts. Further, in additional analysis, we attribute

this result, at least in part, to the pessimism of the sell-side’s last forecast, consistent with an

incentive to allow firms to meet or beat expectations. Our ERC test confirms the superiority of


the estimize short-term forecasts as they show a stronger association with the earnings

announcement returns. Partitioning the sample by firm size and analyst coverage to examine

potential cross-sectional differences in forecast properties, we find little evidence of differences

in accuracy but do find that the main revenue bias result is concentrated in low analyst coverage

firms.

Our paper provides early evidence on the potential value of crowdsourcing forecasts, thus

addressing an under-researched but promising topic related to information intermediaries,

sources beyond sell-side equity analysts (Berger, 2011). It also extends prior research by

exploring disaggregated earnings forecasts (Ertimur et al., 2011) by examining both revenue and

earnings forecasts. Previous studies limited their tests to earnings forecasts. Our results suggest

that, on average, the inferences are similar for both types of forecasts. The consistency of results

between revenue and earnings also ameliorates concern that the two groups of analysts may be

forecasting different measures of earnings, complicating the comparison. This paper also

introduces a new data source, which—perhaps by including a broader array of forecasters—

suggests that sell-side forecast superiority no longer applies. Whether this is due to the declining

importance of forecasts to the sell-side, an improvement in the talent and skills of the buy-side

and independent analysts, a result of only the better analysts participating on estimize, or a

broader representation of participants on estimize is a question that we are unable to answer with

our limited time series of data. High-quality forecasts are of importance not only to market

participants but also to academics who use them as proxies of market expectations. This new

source of forecasts could alter the conclusions of previous studies; our ERC results are only one

such example.


Our study also has potentially broader implications. Our results suggest that

crowdsourcing is effective in creating a reliable information source. Estimize as a crowdsourcing

platform represents a possible market solution to the shortcomings associated with sell-side

analyst forecasts perhaps resulting from their incentives. If estimize continues to grow and cover

more firms, with more frequent forecasts, they could ultimately compete with IBES as a data

source. This application of technology to enhance the information environment of firms is

innovative and potentially revolutionary.

The remainder of the paper is organized as follows. Section 2 explores the incentives of

crowdsourcing, reviews prior literature and outlines our hypotheses. Section 3 details the

research design. Empirical results are presented in Section 4, and Section 5 concludes.

2. Crowdsourcing: Incentives to Participate, Previous Analyst Research, and Hypotheses

The economics of crowdsourcing is a relatively undeveloped literature. Why would

people give freely? Perhaps for buy-side and independent analysts there is little marginal cost

since they generate forecasts as part of their regular research activities. Although others—

students, for example—may engage in the process of forecasting to participate in estimize, it

seems more likely that, for most contributors, estimize provides an outlet or platform for

exposure. Exposure speaks to the potential benefits of participating. Lerner and Tirole (2002)

identify two possible incentives of participants: career concerns and ego gratification.

Participating on estimize provides an opportunity for forecasters to track their performance

relative to others. Estimize also plans to provide performance reports in the near future for both

individuals and research directors, similar to what StarMine does for the sell-side. Establishing a

track record is potentially beneficial for future job opportunities as well as ego. Lerner and Tirole


(2002) argue from an economic perspective that the two incentives are similar and combine them

into what they call a signaling incentive. Citing Holmstrom (1999), they state the signaling

incentive is stronger

(i) the more visible the performance,

(ii) the higher the impact of effort on performance, and

(iii) the more informative the performance about talent.

Consistent with these views, Brabham (2013), reviewing the crowdsourcing literature,

cites several intrinsic and extrinsic motivations of participants including skill development,

networking, job prospects, challenging oneself and the opportunity to share and have fun.

Estimize’s explanation for participation, information sharing and establishing a track record, is

also consistent with the above.

An estimize contributor is likely interested in visibility within the institutional investor

community or the sell-side analyst community. Larger firms and growth firms are likely to

attract the interest of both of these groups. Estimize participation may therefore be highly

correlated with sell-side coverage, which also proxies for institutional interest. The last two

Holmstrom (1999) factors above also suggest that harder-to-forecast firms may be more

attractive to estimize participants. This leads to our first hypothesis:

H1: The number of firm-specific estimize contributors is associated with investment

community exposure and the difficulty of forecasting that firm.

Incentive effects could influence the firms covered on estimize as well as the participants.

However, firm coverage is less likely affected because estimize currently plays an active role in

soliciting input on firms they choose. As far as contributors, however, perhaps only those with

self-assessed skill or ability participate. Such analysts are more likely to issue bolder forecasts

(Trueman, 1994). Bold forecasts are also consistent with the signaling hypothesis above. Hence,

we conjecture the following:


H2: Estimize forecasts are bolder forecasts.

The role of sell-side equity analysts as information intermediaries is a highly studied topic. In

contrast, there is very little research on buy-side and independent analysts. Given that buy-side

and independent analysts are heavy contributors to estimize, we use previous research on them to

motivate our work. The difference in attention the two groups of analysts have received is most

likely due to the plethora of available data for the sell-side relative to other analysts. Schipper

(1991) notes that buy-side and sell-side analysts work for different types of firms and face

different incentives. Whereas a sell-side analyst works for a brokerage firm and issues publicly

available forecasts and recommendations, a buy-side analyst works for a mutual fund, hedge

fund, or pension fund and provides forecasts and recommendations privately to the employing

firm.

Early research established the value of sell-side analyst forecasts (Brown et al., 1987) and

the importance of forecast accuracy to sell-side analysts (Stickel, 1992; Mikhail et al., 1999).

However, early research also documented an optimistic bias in sell-side forecasts (O’Brien,

1988). Lim (2001) proposes and finds support for a rational trade-off of optimistic forecast bias

to improve management access and hence forecast accuracy. Many studies have examined the

effect of sell-side analyst incentives on their outputs, and most studies find the effect is on

recommendations rather than forecasts (Lin and McNichols, 1998; Irvine, 2004). Further, more

recent research finds little evidence of sell-side optimism, on average, instead suggesting that

early optimism is followed by preannouncement pessimism to facilitate firms’ meeting or

beating expectations (Richardson et al., 2004). Ljungqvist et al. (2007) provide evidence that

institutional ownership is positively associated with sell-side forecast accuracy, rationalizing that

analyst career success is often at institutions’ mercy. Given the large role of institutional


investors in U.S. stocks, they present a powerful counterforce to other potential incentives that

sell-side analysts face. In sum, the majority of past evidence generally supports the importance of

accurate forecasting to sell-side analysts. Perhaps, then, it is not surprising that in comparison,

independent and buy-side analyst earnings forecasts tend to be less accurate and more

optimistically biased (Groysberg et al., 2008; Gu and Xue, 2008).

Prior papers document that sell-side analysts are generally more competent and are

compensated better than buy-side analysts (e.g., Dorfman and McGough, 1993; Williams et al.,

1996), and buy-side firms are also more likely to retain poorly performing analysts (Groysberg et

al., 2008). In addition, the number of firms covered by buy-side analysts generally exceeds the

coverage of sell-side analysts, and sell-side analysts are more likely to focus on a subset of

similar firms. Taken together, such factors likely explain the previously documented superiority

of sell-side analysts over buy-side analysts with respect to forecast accuracy (e.g., Groysberg et

al., 2008). An independent analyst functions more like a sell-side analyst. Their primary role is to

create research reports for clients. However, similar to buy-side analysts, prior research suggests

that independent analysts are of lower quality and have fewer resources relative to the sell-side,

which at least in part may explain their less accurate forecasts (Jacob et al., 2008).

However, a few recent papers begin to question the importance of forecasts for sell-side

analysts, which in turn presents the possibility that the relative quality of their forecasts may

have declined or may do so in the future (Groysberg et al., 2011; Johnston et al., 2012). Brown et

al. (2013) in a survey find that analysts rank forecasts last in importance. Only 24% of analysts

surveyed in their study responded that the accuracy and timeliness of earnings forecasts are very

important to their compensation; rather, industry knowledge is the most important determinant

and the most important input to both earnings forecasts and stock recommendations.


In addition to the evolving role of the sell-side analyst as described above, which may

impact the relative quality of their forecasts, there have been other changes in the way sell-side

research operates since the Groysberg et al. (2008) and Gu and Xue (2008) studies were

conducted, which could further alter the relative advantage of the sell-side.5 Groysberg et al.

(2008) for example, acknowledge that part of the sell-side superiority to the buy-side was a

function of the sell-side’s preferential access in the period prior to Regulation FD.

Moreover, buy-side analysts likely have access to most sell-side analyst reports, conduct

their own independent research, and are likely aware of the historic pattern of pessimistic final

forecasts by the sell-side.6 This all provides further reason to expect the two groups of forecasts

to be closer in accuracy and allow the buy-side to be better with respect to bias.

Finally, the voluntary aspect of crowdsourcing opens the possibility of only the better

analysts participating, raising the possibility that comparisons to the sell-side may differ from

previous studies. However the little research that does exist suggests that the sell-side is more

accurate. Therefore, our third hypothesis is:

H3: Estimize’s earnings forecasts are less accurate than sell-side earnings forecasts.

Prior studies find that revenue increases are often more persistent than expense decreases,

and, as a consequence, investors' reactions to revenue surprises are greater than to earnings surprises

(e.g., Jegadeesh and Livnat, 2006; Keung, 2010; Ertimur et al., 2011). Also, Ghosh, Gu, and Jain

(2005) claim that investors consider revenue growth more than earnings growth in valuing firms.

These studies suggest that investors would value revenue forecasts to complement earnings

forecasts.

5 See footnote 3 above and related discussion.

6 There is also the question of strategic reporting. Mola and Guidolin (2009) find evidence of recommendation optimism among analysts affiliated with mutual funds. In contrast, Irvine et al. (2004) find that affiliated analyst forecasts improve in accuracy as the funds' investment increases. Whether they report publicly on estimize something different for strategic reasons is an issue that we are currently unable to address. To the extent they inject optimism for strategic reasons, it would result in relatively less accurate forecasts.

However, there is a paucity of evidence on the properties of buy-side and independent

analyst revenue forecasts, let alone their accuracy relative to sell-side forecasts. Since we are not

aware of any prior research that points to a differential forecasting performance for revenues by

the two types of analysts, we expect that the properties of revenue forecasts would be similar to

earnings forecasts. Hence our next hypothesis:

H3A: Estimize’s revenue forecasts are less accurate than sell-side revenue forecasts.

On the question of forecast bias, little is known about the incentives of buy-side and

independent analysts. The evidence on sell-side analysts suggests that incentives have little effect

on forecasts with the exception of a short-horizon pessimism that may be a result of managers

attempting to “walk-down” earnings targets to a level that can be more easily met (Richardson et

al., 2004). Given that buy-side forecasts are neither published with the analyst's identity nor released

publicly, there would be little incentive for the analyst to participate in the same walk-down

game as the sell-side analyst. If that is the case, the greater buy-side forecast optimism observed

in previous studies (e.g., Groysberg et al., 2008) is more likely to be observed for forecasts closer

to the announcement date. Hence our next hypotheses:

H4: Estimize contributor’s longer-horizon forecasts (earnings and revenues) are as

optimistic as sell-side forecasts.

H4A: Estimize contributor’s short-horizon forecasts (earnings and revenues) are more

optimistic than sell-side forecasts.

It is not clear whether buy-side or independent analysts would ever have an advantage,

informationally or otherwise, over the sell-side. Past studies suggest that sell-side analysts are of

higher ability (Groysberg et al., 2008). One could imagine that if any analyst chose to focus on

smaller or less-covered firms, they could develop a relative advantage over others. Certainly,


Bradshaw et al.’s (2012) results demonstrate the sell-side’s relative weakness in forecasting

small firms. This poor performance is likely due to the relatively low payoff of small firms to

the sell-side, reflecting, for example, lower share turnover and less exposure to institutional

investors. Perhaps smaller or less-covered firms are the focus of buy-side or independent analysts

to compensate for weaker coverage from the sell-side. Finally, Lys and Soo (1995) find that

accuracy improves with coverage, and so it would be harder for any competing analysts to show

incremental superiority in the highly covered firms. Thus, as our final hypothesis, we conjecture

the following:

H5: Estimize contributor forecasts are as accurate as the sell-side for smaller or less-

covered firms.

3. Sample and Research Design

3.1 Sample

Estimize is an open platform for forecasts. By crowdsourcing earnings and revenue

estimates from contributors ranging from hedge fund and institutional professionals to

independent investors, estimize provides an alternative source of forecasts. We obtain

estimize’s quarterly forecast data for 2012. Table 1 outlines the sample selection. The initial

data contain 18,048 earnings forecasts for 1,415 firms. After requiring IBES forecasts for the

same firm quarter as well as eliminating some observations due to missing or problematic data,

we are left with 17,486 earnings forecasts for 1,387 firms. For our main analysis, we want to

compare these observations with IBES forecasts of similar horizon. We define forecast horizon

for matching purposes using two groups: less than or equal to 30 days, and greater than 30 days. A

final sample reduction is the result of IBES forecasts not existing for the corresponding sample


forecast horizon leaving the final sample of 15,044 earnings forecasts for 1,142 firms.7 We relax

our matching requirement and use the larger sample for robustness testing with the caveat that

inferences might be problematic.
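To illustrate the horizon-matching step, the following is a minimal pandas sketch; the file names and column names (firm_id, quarter, horizon) are assumptions for illustration, not the authors' code.

import pandas as pd
# Hypothetical inputs: one row per forecast, with the days from the forecast
# date to the earnings announcement stored in "horizon".
est = pd.read_csv("estimize_forecasts.csv")
ibes = pd.read_csv("ibes_forecasts.csv")
# Two horizon buckets: 30 days or less, and more than 30 days.
for df in (est, ibes):
    df["bucket"] = (df["horizon"] > 30).map({True: "gt30", False: "le30"})
# Keep only firm-quarter-bucket cells where both sources have a forecast.
keys = ["firm_id", "quarter", "bucket"]
common = est[keys].drop_duplicates().merge(ibes[keys].drop_duplicates(), on=keys)
est_matched = est.merge(common, on=keys)
ibes_matched = ibes.merge(common, on=keys)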

Prior research on buy-side and independent analysts has only explored earnings forecasts.

We have the benefit of revenue forecasts to extend past work. The horizon-matched revenue

forecast sample (untabulated) is similar to the earnings sample detailed above with a slightly

smaller number of firms (1,133) but a slightly higher number of observations (15,467).

The matching IBES forecasts are discussed below. We obtain market variables from the

CRSP (Center for Research in Security Prices) and accounting data from Compustat.

In Table 2, we present some exploratory analysis of the firm coverage within the estimize

data. In Panel A, we compare firm size and firm market-to-book ratios relative to the IBES

universe. Estimize’s data cover fewer firms. The firms are large with average market

capitalizations of $11 billion versus an IBES average of $5.7 billion. The sample firms are also,

on average, more growth oriented than the entire IBES universe, with slightly higher market-to-

book ratios, 3.9 versus 3.1 respectively. Therefore, it does not appear that the sample is limited to

smaller or less-covered firms. We explore analyst coverage in Panel B.

There are, on average, 10 estimize contributors forecasting each firm; however, the

coverage is highly skewed as the median is three. The average IBES coverage of the sample

firms is approximately 17 analysts per firm, and the median is similar at 15. We also note some

other significant differences. The forecast horizon differs dramatically; the average forecast age

for estimize is 19 days versus 187 days for the sell-side. Although these are quarterly forecasts,

the IBES average is high in part because analysts tend to issue a quarterly forecast for each quarter at the beginning of the year, making the age or horizon for latter quarters extremely high.

7 Although estimize's first year of operations was 2011, there are a limited number of forecasts for 2011 fiscal periods. We focus our analysis on fiscal periods ending in 2012, estimize's first full year of operations. For comparison, Groysberg et al. (2008) examine annual forecasts from 37 buy-side analysts at one top-10 money management firm. Their buy-side final sample contains 3,526 forecasts for 337 stocks. As noted earlier, estimize has over 2,000 contributors. Gu and Xue (2008) examine 6,999 quarterly forecasts related to 1,198 firms.

There is also a difference in the number of forecasts per quarter. The sell-side appears to be

updating their forecasts, issuing on average nearly four forecasts per quarter. In contrast, the

estimize contributors have one forecast per quarter. Finally, based on a median of one, many of

the estimize contributors provide forecasts for only one firm, but some cover many more, as the

average is approximately seven.

3.2 Research Design

Our tests explore estimize participation and relative forecast boldness, accuracy, and bias.

We use IBES forecasts as representatives of the sell-side. To evaluate the relative quality of

forecasts, we adopt a dual approach. We assess ex post performance by comparing accuracy and

bias of both revenue and earnings forecasts for the two groups of forecasters. As a measure of ex

ante forecast performance, we compare earnings response coefficients (ERCs) for the two groups

of forecasts.

3.2.1 Estimize Participation

Our dependent variable is a firm-quarter count of unique contributors on estimize. We

use both ordinary least squares (OLS) and a Poisson count model to ensure our inferences are not

affected by the use of OLS.8 Our independent variables of interest are proxies for investment

community exposure and forecast difficulty. For exposure, we use the lagged value of the

number of sell-side analysts on IBES covering the firm each quarter. Since analyst coverage is a

function of firm size and growth opportunities, we include lagged market value of equity and market-to-book for each firm as control variables. Forecast difficulty is measured as the standard deviation of earnings per share for the preceding eight quarters of 2010 and 2011.

8 Poisson-distributed data are intrinsically integer-valued, which makes sense for count data. Ordinary least squares assumes that true values are normally distributed around the expected value and can take any real value, positive or negative, integer or fractional.
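A minimal sketch of the participation count model just described (both the OLS and Poisson variants) using statsmodels; the panel file, column names, and firm identifier below are assumptions for illustration, not the authors' code.

import pandas as pd
import statsmodels.formula.api as smf
# Hypothetical firm-quarter panel: one row per firm-quarter.
panel = pd.read_csv("participation_panel.csv")
formula = ("num_contributors ~ ibes_coverage_lag + forecast_difficulty"
           " + log_mve_lag + mtb_lag")
# Column A style: OLS with standard errors clustered by firm.
ols_fit = smf.ols(formula, data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})
# Column B style: Poisson model for the integer count of contributors.
poisson_fit = smf.poisson(formula, data=panel).fit()
print(ols_fit.summary())
print(poisson_fit.summary())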

3.2.2 Forecast Boldness

We follow Hong et al. (2000) to calculate our boldness measures, deviation from

consensus, and boldness score. We estimate a consensus (mean of all forecasts issued in the

respective window) twice during each quarter: one based on the first 30 days of the quarter, and

the second based on the first 60 days of the quarter. We compare forecasts made in the 30 to 60

day horizon to the first consensus and forecasts made in the less-than-30-day horizon to the

second consensus. DEVIATION FROM CONSENSUS is thus the absolute value of the difference

between each individual forecast and the appropriate consensus. BOLDNESS SCORE ranks each

forecaster by firm-quarter based on their deviation from consensus (boldest is ranked first) and

then averages across all firms forecasted for each quarter. To compare estimize contributors to

IBES analysts, we include as an independent variable the dummy variable ESTIMIZE, which has a

value of 1 for estimize contributor forecasts and zero otherwise. We include

controls for forecast horizon as well as the number of companies and industries an analyst covers

(Clement and Tse, 2005).
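The following pandas sketch is one way to construct the two boldness measures as described; the column names (firm_id, quarter, analyst_id, forecast, day_of_quarter, horizon) are assumptions, and the windows follow the text above rather than the authors' actual code.

import pandas as pd
fc = pd.read_csv("forecasts.csv")  # hypothetical forecast-level file
# Consensus 1: mean of forecasts issued in the first 30 days of the quarter;
# consensus 2: mean of forecasts issued in the first 60 days of the quarter.
cons30 = (fc[fc["day_of_quarter"] <= 30]
          .groupby(["firm_id", "quarter"])["forecast"].mean()
          .rename("cons30").reset_index())
cons60 = (fc[fc["day_of_quarter"] <= 60]
          .groupby(["firm_id", "quarter"])["forecast"].mean()
          .rename("cons60").reset_index())
fc = fc.merge(cons30, on=["firm_id", "quarter"], how="left")
fc = fc.merge(cons60, on=["firm_id", "quarter"], how="left")
# Forecasts in the 30-60 day horizon are compared to consensus 1; forecasts
# with a horizon of 30 days or less are compared to consensus 2.
mid_window = fc["horizon"].between(31, 60)
late_window = fc["horizon"] <= 30
fc.loc[mid_window, "deviation"] = (fc["forecast"] - fc["cons30"]).abs()
fc.loc[late_window, "deviation"] = (fc["forecast"] - fc["cons60"]).abs()
# BOLDNESS SCORE: rank within firm-quarter (boldest = 1), then average each
# forecaster's rank across all firms forecasted in the quarter.
fc["bold_rank"] = (fc.groupby(["firm_id", "quarter"])["deviation"]
                     .rank(ascending=False, method="min"))
boldness_score = fc.groupby(["analyst_id", "quarter"])["bold_rank"].mean()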

3.2.3 Forecast Accuracy and Bias

Our dependent variables are standard measures in the literature. FERR, our measure of

accuracy, is the absolute value of forecast errors (actual less forecast) deflated by the beginning-

of-period stock price for earnings and market value of equity for revenue. Our bias measure


(BIAS) is calculated in the same manner without applying the absolute value. Actuals are taken

from IBES.9 Calculating FERR and BIAS for both revenue and earnings results in four forecast

measures. For our regression analyses, the main independent variable, again, is ESTIMIZE.

Our specifications are as follows:

Forecast Measure = β0 + β1ESTIMIZE + γ Controls + ε (1)

Our multivariate analysis controls for other known determinants of forecast accuracy,

such as forecast horizon, as well as company characteristics. Since information arrives over time,

forecast error should decline as the earnings announcement approaches (O’Brien, 1988).

HORIZON is the number of days from the analyst forecast date to the date of the company’s

earnings announcement. Brown (1999) finds that forecast errors differ between loss and profit

companies, and therefore, we include LOSS to control for forecast differences between profitable

and unprofitable quarters. LOSS is an indicator variable equal to 1 if net income is less than zero and

zero otherwise. We include FIRM SIZE to control for differential information environments

between large and small firms. FIRM SIZE is measured as the natural log of the market value of

equity at the end of the previous period. Lys and Soo (1995) find that analyst forecast accuracy

increases with analyst following. FOLLOWING is the number of analysts who provide forecasts

on a company during the quarter.

We include industry-fixed effects in all of our analyses to control for forecast difficulty

across industries. To control for potential accuracy differences across quarters, we include

quarter-fixed effects as well. All standard errors are clustered at the firm level.

9 Gu and Xue (2008) use the same construct.
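A minimal sketch of Equation (1) for the earnings sample, pooling estimize and IBES forecasts in one DataFrame; the file and column names are assumptions for illustration, and standard errors are clustered by firm as described above.

import pandas as pd
import statsmodels.formula.api as smf
# Hypothetical pooled sample: one row per forecast from either source, with
# ferr (|actual - forecast| / price), bias (signed error), and the controls.
pooled = pd.read_csv("pooled_eps_forecasts.csv")
rhs = ("estimize + horizon + loss + firm_size + following"
       " + C(industry) + C(quarter)")
accuracy_fit = smf.ols("ferr ~ " + rhs, data=pooled).fit(
    cov_type="cluster", cov_kwds={"groups": pooled["firm_id"]})
# The bias regression uses the signed forecast error as the dependent variable.
bias_fit = smf.ols("bias ~ " + rhs, data=pooled).fit(
    cov_type="cluster", cov_kwds={"groups": pooled["firm_id"]})
print(accuracy_fit.params["estimize"], bias_fit.params["estimize"])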


3.2.4 Market Test

As a measure of ex ante performance, we follow Gu and Xue (2008) and compare

earnings response coefficients (ERCs) for the two groups of forecasts. We run a standard ERC

model shown below as Equation 2 for each forecasting group. The dependent variable is the

three-day, size-adjusted buy-and-hold return (BHAR) around each respective earnings

announcement. The main variable of interest is the mean forecast error (FE) of forecasts issued

in the last 30 days of the quarter for each respective forecaster group. The ERC, which is the α1

coefficient on the mean forecast error, is the basis for comparing market association. The group

with the larger ERC suggests closer alignment with market expectations. We run the model both

with raw measures as well as absolute value measures of returns and forecast error. Firm-specific

control variables are not required, but we do control for quarter and industry effects.

BHAR = α0 + α1FE + γ Controls + ε (2)
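To illustrate Equation (2), the sketch below estimates the ERC separately for each forecaster group and compares the α1 coefficients; the sample file and column names are assumptions for illustration, not the authors' code.

import pandas as pd
import statsmodels.formula.api as smf
# Hypothetical announcement-level sample: one row per firm-quarter per group,
# with bhar (three-day size-adjusted return), fe (mean forecast error from
# forecasts in the last 30 days), and a group label ("estimize" or "ibes").
ann = pd.read_csv("erc_sample.csv")
ercs = {}
for name, g in ann.groupby("group"):
    fit = smf.ols("bhar ~ fe + C(industry) + C(quarter)", data=g).fit(
        cov_type="cluster", cov_kwds={"groups": g["firm_id"]})
    ercs[name] = fit.params["fe"]  # alpha_1, the earnings response coefficient
# The group with the larger ERC better proxies for market expectations.
print(ercs)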

4. Results

4.1 Estimize Participation

In Table 3, we present the results of the participation count model. In Column A, we

estimate the model using OLS, and in Column B, a Poisson model is applied. The inferences are

not sensitive to the model type.

The results are consistent with our signaling hypotheses. Estimize participation is

positively associated with our proxies for exposure in the investment community and forecast

difficulty since the coefficients on IBES CONTRIBUTORS and FORECAST DIFFICULTY are

both positive and statistically significant. The control variables for firm characteristics, size, and

market-to-book are also both positive and statistically significant. Their inclusion helps alleviate


concerns that the number of sell-side analysts is merely capturing firm characteristics as opposed

to investment community exposure. These results offer a beginning to our understanding of why

contributors participate on estimize.

4.2 Forecast Boldness

Table 4 presents the results of the forecast boldness analyses, with DEVIATION FROM

CONSENSUS as the dependent variable presented first, followed by BOLDNESS SCORE. The

analyses are partitioned by forecast horizon: the last 30 days of the quarter are presented first,

and then the middle 30 days are presented second, each with its respective consensus measure.

Both earnings (EPS) and revenue forecasts are analyzed.

The results support the boldness of estimize forecasts, as all the coefficients on ESTIMIZE

are positive and statistically significant, thereby providing some support for our conjecture that

estimize contributors are those with self-assessed skill. The only consistently significant control

variable is INDUSTRIES; with negative coefficients, the results are consistent with Clement and

Tse (2005) and intuition that broader coverage diminishes the ability to be bold. Combined, the

participation and boldness results suggest incentives and skill play a role in estimize

participation.

4.3 Univariate Analysis: Accuracy and Bias

In Table 5, we present the univariate analysis of forecast accuracy and bias for the two

groups (estimize versus sell-side). We analyze the full sample as well as partitions by forecast

horizon. The first two rows examine earnings forecasts (accuracy followed by bias), and the last

two rows do the same for revenue forecasts. The full sample contains the 15,044 (15,467)


estimize earnings (revenue) forecasts from Table 1. The corresponding IBES sample contains

73,645 (68,089) earnings (revenue) forecasts. The larger IBES sample is a result of higher

analyst firm coverage as well as more frequent forecasting by IBES analysts.

The first row and column intersection presents the full sample, which is firm and forecast

horizon matched, and shows that sell-side analyst earnings forecasts have larger errors on

average (0.0046 versus 0.0025) compared to estimize. The inference from the median is the

same. The results are similar for revenue forecasts (third row), although both the mean raw errors

and the difference are larger (0.0135 versus 0.0075). This evidence of greater accuracy by the

estimize analysts is surprising based on past research and counter to H3 and H3A. The pattern of

sell-side inferiority is also fairly consistent when we partition by forecast horizon (less than and

equal to 30 days or greater than 30 days).10

However, the medians in the longer horizon, greater

than 30 days, are exceptions. For earnings, the medians show no difference based on the

Wilcoxon test, and for revenue, the IBES median is slightly smaller. The results using the larger

non-horizon-matched robustness sample, shown in the fourth column, are comparable to the

horizon-matched full sample.

One needs to be cautious in interpreting these univariate results given that we know from

Table 2 that the forecast horizon for the two groups differs significantly. Our multivariate tests

control for forecast horizon with greater precision. Moreover, the estimize sample in the longer-

horizon window is relatively small, making generalizable results more problematic. The forecast

bias results are discussed next.

Both the horizon-matched full sample and the robustness sample show that sell-side

forecasts are more optimistic than the estimize forecasts for earnings (second row) and revenue

(fourth row), on average. Optimism is reflected by a negative sign since the forecast is greater

than the actual. The mean earnings forecast bias for IBES is -0.001 versus 0.0003 for estimize.

10 Inferences are similar in the less-than-30-day horizon if we restrict to the last forecast for all forecasters.

Based on an average share price of $50 in our sample, these forecast errors represent 5 cent and

1.5 cent differences between forecast and actual, respectively. The mean bias for revenue

forecasts are -0.0031 (IBES) and -0.0007 (estimize). Medians also suggest sell-side relative

optimism, although the median earnings forecast error for the sell-side is zero, and the estimize

median is slightly pessimistic. For revenue, both medians are slightly optimistic, with the sell-

side slightly more so.
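As a worked check of the cents-per-share figures above, using the $50 average share price stated in the text:

IBES: -0.0010 × $50 ≈ -$0.05 (about 5 cents per share)
estimize: 0.0003 × $50 ≈ $0.015 (about 1.5 cents per share)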

The pattern of greater sell-side optimism across forecast horizon, however, is not

consistent. IBES earnings and revenue forecasts are more (less) optimistically biased for longer-

(shorter-) term forecasts. For example, in the 30-days-or-less column based on means, IBES

earnings forecasts are more pessimistic than estimize (0.0009 versus 0.0004), and the contrast is

more dramatic for revenue forecasts (0.0013 versus -0.0009). This result is consistent with our

H4A hypothesis. In the greater-than-30-days column, the results are similar to the full sample,

perhaps not surprising given that most of the IBES observations are in that group. The sell-side’s

greater optimism in the longer horizon is counter to our expectation of similar bias in hypothesis

H4. Again, however, this comparison is only univariate and is limited by the small estimize long-

horizon sample size. The multivariate results follow.

4.4 Regression Analysis: FERR and BIAS

Tables 6 and 7 present the results. The four columns correspond to the samples presented

in Table 5. Accuracy with FERR as the dependent variable is presented first, followed by the bias

analysis. Within each column, earnings forecasts are analyzed first, and the revenue analyses

follow. After controlling for the previously documented determinants of earnings forecast


accuracy, the ESTIMIZE dummy is statistically insignificant in all four EPS analyses, suggesting

no difference in earnings forecasting performance between the two groups. This finding is in

contrast to the univariate analysis, which suggests superior estimize accuracy but only crudely

controlled for forecast horizon. The regression analyses suggest that estimize earnings forecasts

are, on average, as accurate as sell-side analyst forecasts. Nonetheless, this result is noteworthy

because all previous research found buy-side and independent analysts to be less accurate than

the sell-side.11

The revenue forecast regression results are similar. The main difference is that for

longer horizons (greater than 30 days), the estimize revenue forecasts appear to be less accurate

based on positive and statistically significant coefficients on ESTIMIZE. Also, the robustness

sample results in Table 6 differ slightly from the horizon-matched sample results. The ESTIMIZE

variable is positive and statistically significant, again suggesting less accurate estimize forecasts.

However, as noted earlier, this sample is potentially problematic, so caution is warranted in

interpreting this differing result.12

The control variables, the age of forecasts (HORIZON), FIRM SIZE, and the LOSS

dummy load consistently in the predicted directions. The FOLLOWING variable is statistically

insignificant in all specifications. The analyst following effect is likely subsumed by the firm size

variable. The adjusted r-squareds for the regressions are of reasonable magnitudes, suggesting

that our specification explains a reasonable amount of the variation in forecast error.

In contrast to the evidence of equivalency for accuracy, the ESTIMIZE variable loads

negatively in three of the four earnings forecast bias regressions reported in Table 7, the

exception being for longer-horizon forecasts (> 30 days).

11 Estimize represents that they are more accurate than the Wall Street consensus 69.5% of the time. Our univariate analysis in section 4.3 supports this finding. A major difference is that estimize flags and removes extreme observations, something we do not do. Our multivariate approach differs as well: we focus on average forecast error and control for other determinants of accuracy.

12 To further explore the robustness of our results, we re-run our tests excluding forecasts from contributors that only contribute for one firm. Our results are similar except for revenue FERR, which is no longer statistically significant.

The long-horizon forecasts show no

statistically significant difference between the two groups, consistent with our hypothesis, H4.

The other results suggest greater earnings forecast optimism for estimize analysts, particularly

in shorter horizons. This is consistent with the univariate analysis and supports H4A. Finding

greater optimism for the buy-side or independent analysts is consistent with prior research but

seems at odds with the lack of evidence regarding a difference in forecast accuracy. This

apparent puzzle will be further explored later in the paper.

The corresponding revenue forecast regression results are similar. The main difference

again is that for longer horizons (greater than 30 days), the estimize revenue forecasts appear

more pessimistic based on the positive and statistically significant coefficient on ESTIMIZE.

Taken together, the revenue and earnings forecast results suggest, in general, that

estimize forecasts are as accurate as sell-side forecasts but that they are more optimistically

biased, particularly in short horizons. The optimism finding is consistent with the pessimism of

short-term sell-side analyst forecasts documented in prior studies, something we will explore

further in Section 4.6.

4.5 Cross-Sectional Analysis: Firm Size and Analyst Following

Next, we present our cross-sectional analyses. We partition the observations based on

firm size and analyst following using the median of the variable of interest. Given prior evidence

that firm size relates to predisclosure-period information environment (Atiase, 1985) and analyst

coverage and forecast accuracy (Lang and Lundholm, 1996), we examine whether the

performance of estimize versus the sell-side varies with firm size or analyst following.


We present only a summary of results for these tables. The columns correspond to the

respective samples, which are the same as in the previous tables with the innovation being that

each column is further partitioned into two groups using the median of firm size or analyst

following. The rows present only the coefficients on the ESTIMIZE variable for each respective

regression.13

For both forecast accuracy and bias in Table 8, we find no statistically significant

differences across the firm-size groups for either earnings or revenue forecasts. The results of

Table 9 also provide no evidence of accuracy differences for earnings or revenue forecasts across

the coverage partitions. There is some evidence of greater optimism for estimize in low-coverage

firms, at least for revenues in the short-horizon sample. There is also evidence of incremental

pessimism in the longer-horizon forecasts for these firms.

The main results of Table 6 found the estimize earnings forecasts to be as accurate as the

sell-side. The cross-sectional analyses show that that result holds in all partitions. We had

expected that only for smaller or less-covered firms would the estimize forecasts be as accurate

(H5). The power of our cross-sectional tests may be limited by the sample size, which is also

somewhat skewed to larger firms.

4.6. Revisiting the Accuracy and Bias Result

As noted earlier, we find that estimize analysts, on average, are as accurate as sell-side

analysts despite the fact that they show a slight relative optimistic bias, particularly in the short-

horizon window. We conjecture that a walk-down of their forecasts by sell-side analysts

contributes to this result. To explore that possibility, in Table 10, Panel A, we present univariate

comparisons of the sell-side’s most recent forecast, that is, the one issued closest to the actual

13 Full results are available upon request.


earnings announcement versus all prior forecasts. In terms of accuracy, not surprisingly, the

magnitude of the forecast error declines for the most recent forecast relative to previous ones.

This is true for both earnings and revenue. For example, prior earnings forecasts have a mean

FERR of 0.0052 versus 0.0030 for the last forecast. The same pattern appears for revenue

forecasts, 0.0153 versus 0.0088 respectively. Of greater relevance are the bias measures. For

both earnings and revenue, we see that the mean prior forecasts are negative, supporting

optimism; however, the last or most recent forecast mean is positive, suggesting pessimism. For

earnings, the mean of prior forecasts is -0.0016, in contrast to the most recent forecast of 0.0005.

The comparison for revenue is similar. This pattern is representative of the walk-down first

presented in Richardson et al. (2004).
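For clarity, the short Python sketch below shows one way to construct the Panel A comparison of Table 10: within each firm-quarter, the last IBES forecast before the announcement is labeled most recent and all earlier forecasts are labeled prior, and mean bias is then compared across the two groups. The column names (firm, quarter, forecast_date, bias) are assumptions for illustration, not the fields of our actual dataset.

import pandas as pd
from scipy import stats

def walk_down_summary(ibes: pd.DataFrame) -> pd.DataFrame:
    # Flag the last forecast in each firm-quarter as the most recent one.
    last_date = ibes.groupby(["firm", "quarter"])["forecast_date"].transform("max")
    ibes = ibes.assign(group=(ibes["forecast_date"] == last_date)
                       .map({True: "most_recent", False: "prior"}))
    summary = ibes.groupby("group")["bias"].agg(["mean", "median", "count"])
    # Two-sample t-test of prior versus most recent bias (unequal variances).
    _, p_value = stats.ttest_ind(ibes.loc[ibes["group"] == "prior", "bias"],
                                 ibes.loc[ibes["group"] == "most_recent", "bias"],
                                 equal_var=False)
    summary["t_test_p"] = p_value
    return summary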

Performing a similar analysis for estimize is problematic since many analysts only

forecast once per quarter. However, Table 5 can be used as a crude proxy. For earnings, comparing the greater-than-30-days-horizon group with the 30-days-or-less group in the bias row, we also see the switch from optimism to pessimism (-0.0003 versus 0.0004), but the change is less dramatic than that of the sell-side above.14 For revenue forecast bias, the results are mixed. Based on means, the most recent (<= 30 days) estimize revenue forecasts seem more optimistic than the prior forecasts (-0.0009 versus 0.0002). The medians suggest the opposite (-0.0000 versus -0.0002).

Both the means and medians indicate greater optimism for IBES earnings and revenue

forecasts relative to estimize in the greater-than-30-days horizon. However, for the shorter horizon (<= 30 days), the opposite is true: the results are consistent with greater optimism in the

14 The average short-term pessimism of estimize is somewhat surprising given the participation of buy-side analysts whose employers likely hold the stocks that they forecast and hence may be motivated to boost the stock price with optimistic forecasts. Our results suggest that, on average, such an effect is not dominant in our sample. We leave for future research the exploration of cross-sectional opportunistic behavior.


estimize forecasts. So the sell-side represents both ends of the spectrum, the most optimistic in

long horizons and the most pessimistic in short horizons.

To complement the analysis above, in Panel B we explore the proportion of pessimistic

forecasts by the two groups of forecasters across the two forecast horizons. In long-horizon

forecasts (> 30 days), IBES shows a slight majority of optimistic forecasts for both earnings and

revenue. Estimize is similar for revenues but not earnings. Of greater consequence, however, is

the change that occurs in the short horizon. The proportion of pessimistic IBES forecasts is

greater than that of estimize for both revenue and earnings, and the change in the proportion across horizons is also greater. These results are consistent with the two groups having different incentives, in particular the sell-side's desire to facilitate firms' meet-or-beat objectives.
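A minimal sketch of the Panel B calculation, under assumed column names (source, horizon, forecast, actual), is as follows: a forecast is classified as pessimistic when it falls below the realized value, and the proportion of pessimistic forecasts is tabulated by forecaster group and horizon bucket.

import pandas as pd

def pessimism_proportions(forecasts: pd.DataFrame) -> pd.DataFrame:
    out = forecasts.assign(
        pessimistic=(forecasts["forecast"] < forecasts["actual"]).astype(int),
        bucket=pd.cut(forecasts["horizon"], bins=[-1, 30, 10_000],
                      labels=["<= 30 days", "> 30 days"]))
    # Rows are ESTIMIZE and IBES; columns are the two horizon buckets.
    return out.pivot_table(index="source", columns="bucket",
                           values="pessimistic", aggfunc="mean", observed=True)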

4.7 Market Test

Finally, we report the short-window ERC regression results in Table 11. We report two

sets of results for earnings: one relating signed three-day size-adjusted buy-and-hold returns to mean 30-day forecast bias, and the other relating the absolute value of the short-window stock returns to the absolute value of the mean 30-day forecast errors. Since the estimize forecasts are primarily short-term, we limit our test to forecasts issued within 30 days to allow a fairer comparison. The

results show that announcement-period stock returns are more strongly associated with the

signed earnings surprise calculated using estimize forecasts. The ERC based on estimize

forecasts is 3.093, significantly larger than the 2.413 resulting from IBES forecasts. The

difference in these two ERCs is statistically significant (p-value < 0.05). In contrast, the unsigned

results show little difference in magnitude or statistical significance. We interpret these results in

combination with our previous findings as support for the story that although there is little


difference in accuracy between the two groups, the fact that estimize forecasts are relatively less

pessimistic in the short-horizon results in a better representation of market expectations, as

evidenced by the stronger association with returns.
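For completeness, the sketch below illustrates the form of the Table 11 market test: announcement-window returns regressed on the mean 30-day surprise from each source, with firm-clustered standard errors. It is an illustration under assumed column names (bhar3, bias_ibes, bias_estimize, permno); the time and industry dummies included in Table 11 are omitted here for brevity.

import statsmodels.formula.api as smf

def erc(announcements, surprise_col):
    # Returns the earnings response coefficient and its t-statistic for one surprise proxy.
    fit = smf.ols(f"bhar3 ~ {surprise_col}", data=announcements).fit(
        cov_type="cluster", cov_kwds={"groups": announcements["permno"]})
    return fit.params[surprise_col], fit.tvalues[surprise_col]

# erc(announcements, "bias_ibes")      -> roughly 2.4 in Table 11
# erc(announcements, "bias_estimize")  -> roughly 3.1 in Table 11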

5. Conclusion

We explore a new phenomenon: crowdsourcing of forecasts. We obtain revenue and

earnings forecasts from estimize, which is an open platform to which all members can contribute

forecasts. We explore firm-level participation and find it is increasing in proxies for exposure

within the investment community and forecast difficulty, consistent with intrinsic and extrinsic

motivations. We compare the estimize forecasts to those of sell-side analysts covering the same

firms, found on IBES. Our results support the effectiveness of crowdsourcing. We find estimize forecasts to be bolder and, in univariate comparisons, more accurate. After controlling for other determinants of accuracy, we find the two groups equally accurate. Although we find relative optimism for the

estimize group, especially in the short horizon, our analysis suggests that it is at least in part due

to the extreme pessimism of the final forecasts prior to the earnings announcement of sell-side

analysts. We confirm the superiority of estimize forecasts by demonstrating a stronger

association with returns around the corresponding earnings announcement.

Finally, in cross-sectional analysis, we find that the forecast accuracy equivalency holds

across firm-size and analyst-coverage partitions. Similarly, we find no difference for earnings

forecast bias across the partitions, but there is some evidence that the revenue forecast bias of the

estimize analysts may be concentrated in low-analyst coverage firms. We leave for future

research an explanation for this finding.


Much has been written about sell-side analyst incentives and the impact on their

reporting. Estimize provides a new source of forecast data for both market participants and

academics that is perhaps a better reflection of market expectations because it is free of the

conflicts that sell-side analysts face. Our results also suggest that crowdsourcing may skew participation toward those of greater skill, which may also contribute to its value as an information source. Should it continue to grow and succeed, estimize could rival IBES.

However, our paper is not without caveats. Buy-side and independent analysts may face their own incentives, about which little has been written or is known, and so future research may uncover corresponding shortcomings. Further, given the relatively short life of estimize, our

sample period is limited, and thus our analysis is clustered in a small window of time. Therefore,

its generalizability is an open question. Finally, although it is well known that IBES does not

represent all sell-side analysts, its coverage is fairly broad. Little is known about the universe of

non-sell-side forecasts and forecasters, so it is difficult to generalize our findings to all buy-side

or independent analysts.


References

Atiase, R., 1985. Predisclosure information, capitalization, and security price behavior around

earnings announcements. Journal of Accounting Research (Spring) 493–522.

Berger, P., 2011. Challenges and opportunities in disclosure research—A discussion of ‘the

financial reporting environment: Review of the recent literature.’ Journal of Accounting &

Economics 51, 204–218.

Brabham, D., 2013. Crowdsourcing. MIT Press. Cambridge, Massachusetts.

Bradshaw, M.T., Drake, M.S., Myers, J.N., Myers L.A., 2012. A re-examination of analysts’

superiority over time-series forecasts of annual earnings. Review of Accounting Studies 17(4),

944–968.

Brown, L.D., 1999. Predicting individual analyst earnings forecast accuracy. Working paper,

Georgia State University.

Brown, L.D., Call, A.C., Clement, M.B., Sharp, N.Y., 2013. Inside the “black box” of sell-side

financial analysts. Working Paper, Temple University.

Brown, L.D., Hagerman, R.L., Griffin, P.A., Zmijewski, M.E., 1987. An evaluation of

alternative proxies for the market's assessment of unexpected earnings. Journal of Accounting &

Economics 9, 159–193.

Brown, W.D., Fernando, G.D., 2011. Whisper forecasts of earnings per share: Is anyone still

listening? Journal of Business Research 64, 476–482.

Clement, M., Tse, S., 2005. Financial analyst characteristics and herding behavior in forecasting.

Journal of Finance 60(1), 307–341.

Dorfman, J.R., McGough, R., 1993. Some question if analysts are worth their pay. Wall Street

Journal (Sept. 15), R11.

Ertimur, Y., Mayew, W., Stubben, S., 2011. Analyst reputation and the issuance of disaggregated

earnings forecasts to I/B/E/S. Review of Accounting Studies 16, 29–58.

Ghosh, A., Gu, Z., Jain, P., 2005. Sustained earnings and revenue growth, earnings quality and

earnings response coefficients. Review of Accounting Studies 10, 33–57.

Groysberg, B., Healy, P., Chapman, C., 2008. Buy-side vs. sell-side analysts’ earnings forecasts.

Financial Analysts Journal 64(4), 25–39.

Groysberg, B., Healy, P., Maber, D.A., 2011. What drives sell-side analyst compensation at

high-status investment banks? Journal of Accounting Research 49(4), 969–1000.

Gu, Z., Xue, J., 2008. The superiority and disciplining role of independent analysts. Journal of

Accounting & Economics 45, 289–316.

Holmstrom, B., 1999. Managerial incentive problems: A dynamic perspective. Review of

Economic Studies 66, 169–182.

Hong, H., Kubik, J.D., Solomon, A., 2000. Security analysts’ career concerns and herding of

earnings forecasts. Rand Journal of Economics 31(1), 121–144.


Hong, H., Kubik, J.D., 2003. Analyzing the analysts: Career concerns and biased earnings

forecasts. Journal of Finance 58 (1), 313–351.

Irvine, P., 2004. Analysts’ forecasts and brokerage-firm trading. The Accounting Review 79(1),

125–149.

Irvine, P., Simko, P.J., Nathan, S., 2004. Asset management and affiliated analysts’ forecasts.

Financial Analysts Journal 60(3), 67–78.

Jacob, J., Rock, S., Weber, D.P., 2008. Do non-investment bank analysts make better earnings

forecasts? Journal of Accounting, Auditing & Finance 23(1), 23–61.

Jegadeesh, N., Livnat, J., 2006. Revenue surprises and stock returns. Journal of Accounting &

Economics 41, 147–171.

Johnston, R., Leone, A., Ramnath, S., Yang, Y., 2012. 14-Week Quarters. Journal of Accounting

& Economics 53, 271–289.

Keung, E., 2010. Do supplementary sales forecasts increase the credibility of financial analysts’

earnings forecasts? The Accounting Review 85, 2047–2074.

Lang, M.H., Lundholm, R.J., 1996. Corporate disclosure policy and analyst behavior. The

Accounting Review 71(4), 467–492.

Leone, M., 2004. The Flight of the Sell-Side Analyst. CFO July 8, 2004.

Lerner, J., Tirole, J., 2002. Some simple economics of open source. Journal of Industrial

Economics 50(2), 197–234.

Lim, T., 2001. Rationality and analysts’ forecast bias. Journal of Finance 56, 369–385.

Lin, H., McNichols, M., 1998. Underwriting relationships, analysts’ earnings forecasts and

investment recommendations. Journal of Accounting & Economics 25(1), 101–127.

Ljungqvist, A., Marston, F., Starks, L.T., Wei, K.D., Yan, H., 2007. Conflicts of interest in sell-

side research and the moderating role of institutional investors. Journal of Financial Economics

85(2), 420–456.

Lys, T., Soo, L.G., 1995. Analysts’ forecast precision as a response to competition. Journal of

Accounting, Auditing & Finance 10(4), 751–764.

Mikhail, M.B., Walther, B.R., Willis, R.H., 1999. Does forecast accuracy matter to security

analysts? The Accounting Review 74 (2), 249–284.

Mola, S., Guidolin, M., 2009. Affiliated mutual funds and analyst optimism. Journal of Financial

Economics 93(1), 108–137.

O’Brien, P., 1988. Analysts’ forecasts as earnings expectations. Journal of Accounting &

Economics 10, 53–83.

Rees, L., Adut, D., 2005. The effect of Regulation Fair Disclosure on the relative accuracy of

analysts’ published forecasts versus whisper forecasts. Advances in Accounting 21, 173–197.

Richardson, S., Teoh, S., Wysocki, P., 2004. The walk-down to beatable analyst forecasts: The

role of equity issuance and insider trading incentives. Contemporary Accounting Research 21(4),

885–924.


Schipper, K., 1991. Commentary on analysts' forecasts. Accounting Horizons 5(1), 105–121.

Steverman, B., 2007. Equity research: What’s next. BusinessWeek.com (June 8),

http://www.businessweek.com/stories/2007-06-08/equity-research-whats-next-businessweek-

business-news-stock-market-and-financial-advice.

Stickel, S., 1992. Reputation and performance among security analysts. Journal of Finance 47(5),

1811–1836.

Trueman, B., 1994. Analyst forecasts and herding behavior. Review of Financial Studies 7, 97–

124.

Williams, P.A., Moyes, G.D., Park, K., 1996. Factors affecting earnings forecast revisions for the

buy-side and sell-side analyst. Accounting Horizons 10(3), 112–121.


Table 1

Estimize Forecasts

Sample Selection

                                                          # of Firms   # of Earnings Forecasts
Initial sample                                                1,415            18,048
Less:
  Missing IBES coverage                                         (12)             (151)
  Missing other data                                            (14)             (383)
  Observations with a forecast date after the
    announcement date                                            (2)              (28)
Final sample of firms                                         1,387            17,486
Less:
  Observations without an IBES forecast in the same
    forecast horizon                                           (245)           (2,442)
Final sample of firms with an IBES forecast in the
  same forecast horizon                                       1,142            15,044

This table delineates the sample selection process. We obtain the forecast data from estimize. Footnote 1 above

contains the company’s description of their contributors. We first require firms to have a matching IBES firm. We

then eliminate firms without available data necessary to compute our dependent and control variables. We next

eliminate firms with a negative forecast horizon. This results in a sample of 1,387 firms and 17,486 observations.

Finally, we limit our sample to those observations with an IBES forecast in the same approximate forecast horizon

(within 30 days or greater than 30 days). Horizon is the number of days between the forecast date and the earnings

announcement date. The time-matched sample consists of 1,142 firms and 15,044 observations.


Table 2

Comparison of the ESTIMIZE Sample and IBES

Panel A: Comparison of Firms Covered by ESTIMIZE and IBES Universe

Market Capitalization (in billions)
                          Mean    Median    T-test    Wilcoxon Test
Estimize sample firms    10.97      2.72    0.0000           0.0000
IBES universe             5.67      0.79

Market-to-Book Ratio
                          Mean    Median    T-test    Wilcoxon Test
Estimize sample firms     3.93      2.31    0.0000           0.0000
IBES universe             3.14      1.75

Panel B: Coverage of Firms by ESTIMIZE and IBES Forecasters

ESTIMIZE Forecasts
                                                   N      Mean    Median    Std. Dev.
Forecasters per firm                           1,387     10.33      3.00        29.32
Age of forecasts                              17,486     18.90      2.00        61.11
Forecasts per firm per forecaster per quarter  2,805      1.00      1.00         0.00
Firms covered by forecaster                    1,951      7.43      1.00        36.79

IBES Forecasts (firm and forecast period matched to the buy-side sample above)
                                                   N      Mean    Median    Std. Dev.
Analysts per firm                              1,387     17.17     15.00        10.13
Age of forecasts                             189,866    185.08    188.00       105.39
Forecasts per firm per analyst per quarter     8,825      3.64      3.00         2.63

See Table 1 for estimize sample selection.


Table 3

ESTIMIZE Participation

DEPENDENT VARIABLE:                          (A)              (B)
ESTIMIZE CONTRIBUTORS                  OLS Model    Poisson Model

IBES CONTRIBUTORS                     0.51278***       0.04631***
                                          (6.62)          (13.42)
FIRM SIZE                              1.5105***       0.21459***
                                          (6.93)          (11.44)
MARKET-TO-BOOK                         0.04535**       0.00820***
                                          (2.44)           (3.98)
FORECAST DIFFICULTY                     2.9677**       0.24918***
                                          (2.41)           (6.13)
INTERCEPT                            -26.6817***       -2.6148***
                                         (-5.23)          (-7.50)
TIME AND INDUSTRY DUMMIES                    YES              YES
N                                          2,697            2,697
Adj. R2 / Pseudo R2                        0.259            0.435

This table contains regression results with ESTIMIZE CONTRIBUTORS as the dependent variable. Column (A)

contains results from estimating an ordinary least squares model, and Column (B) contains results from estimating a

Poisson regression. ESTIMIZE CONTRIBUTORS is the total number of Estimize contributors per firm and is

measured for each quarter. IBES CONTRIBUTORS is the total number of IBES contributors per firm during the

previous quarter. FIRM SIZE is measured as the natural log of the market value of equity at the end of the previous

quarter. MARKET-TO-BOOK is the firm’s market capitalization divided by its book value of equity at the end of

the previous quarter. FORECAST DIFFICULTY is measured as the standard deviation of the firm’s earnings per

share for the eight quarters prior to the 2012 sample period. All continuous variables are winsorized at the 1% level.

*, **, and *** indicate statistical significance at the 10, 5, and 1% level, respectively.
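As a rough guide to replication, the following sketch estimates the two participation models of Table 3 with statsmodels; the DataFrame and column names are assumptions for illustration only.

import pandas as pd
import statsmodels.formula.api as smf

def fit_participation_models(firm_quarters: pd.DataFrame):
    # OLS and Poisson counterparts of columns (A) and (B); C(quarter)/C(industry)
    # stand in for the time and industry dummies.
    formula = ("estimize_contributors ~ ibes_contributors + firm_size "
               "+ market_to_book + forecast_difficulty + C(quarter) + C(industry)")
    ols_fit = smf.ols(formula, data=firm_quarters).fit()
    poisson_fit = smf.poisson(formula, data=firm_quarters).fit(disp=False)
    return ols_fit, poisson_fit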


Table 4

Forecast Boldness

DEPENDENT VARIABLE:   Deviation from Consensus (columns 1-4); Boldness Score (columns 5-8)
SAMPLE:               Columns (1)-(2) and (5)-(6): forecasts with horizon <= 30 days
                      (consensus measured from <= 90 days to > 30 days)
                      Columns (3)-(4) and (7)-(8): forecasts with horizon <= 60 days and > 30 days
                      (consensus measured from <= 90 days to > 60 days)

                 (1)        (2)        (3)        (4)        (5)        (6)        (7)        (8)
                 EPS    Revenue        EPS    Revenue        EPS    Revenue        EPS    Revenue
ESTIMIZE    0.0538**    120.66*  0.2084***  501.58***  32.212***  32.093***  33.776***  34.167***
              (2.20)     (1.83)     (3.69)     (3.47)     (2.76)     (2.61)     (4.14)     (4.33)
HORIZON      0.0003*     0.2862    -0.0003     -1.680    0.1568*   0.2131**    -0.0595    -0.1433
              (1.67)     (0.61)    (-0.72)    (-1.62)     (1.75)     (2.32)    (-1.34)    (-1.62)
COMPANIES    0.0001*    0.2813*   0.0001**    1.166**     0.0720     0.1148  0.0642***  0.1183***
              (1.92)     (1.68)     (2.43)     (2.01)     (1.25)     (1.16)     (2.83)     (3.51)
INDUSTRIES -0.0024**  -6.081***  -0.007***  -22.61***    -1.397*     -1.590  -1.399***  -1.746***
             (-2.28)    (-2.08)    (-3.17)    (-2.91)    (-1.74)    (-1.50)    (-4.20)    (-3.89)
INTERCEPT  0.0764***   107.51**  0.1630***  324.94***   7.863***   7.214***   9.187***  12.792***
              (2.71)     (2.02)     (4.91)     (3.91)     (7.61)     (8.58)     (4.69)     (2.84)
TIME AND INDUSTRY
  DUMMIES        YES        YES        YES        YES         NO         NO         NO         NO
N             24,518     24,704      6,847      5,729      3,336      3,009      2,149      1,830
Adj. R2        0.126      0.261      0.230      0.321      0.213      0.194      0.372      0.374

This table contains regression results with the boldness measures as the dependent variables. To calculate the measures, we first estimate a consensus twice during each quarter: once based on the first 30 days of the quarter and again based on the first 60 days. We compare forecasts made in the 30-to-60-day horizon to the first consensus

and the forecasts made in the less than 30 day horizon to the second consensus. The deviation from consensus is the absolute value of the difference between each individual

forecast and the appropriate consensus. The boldness score ranks each forecaster by firm-quarter based on their deviation from consensus and then averages across all firms

forecasted for each quarter. ESTIMIZE is an indicator variable set equal to 1 if the forecast is from an estimize analyst and zero otherwise. HORIZON is the number of days

between the forecast date and the earnings announcement date in the Deviation from Consensus model and the average HORIZON in the Boldness Score model. COMPANIES

AND INDUSTRIES are the number of firms and number of industries covered by the analyst, respectively. *, **, and *** indicate statistical significance at the 10, 5, and 1%

level, respectively.
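A simplified sketch of the deviation-from-consensus construction (using assumed column names, and ignoring the two-window timing details described above) is:

import pandas as pd

def deviation_from_consensus(fcsts: pd.DataFrame) -> pd.Series:
    # Assumed columns: firm, quarter, window ("early" for consensus-period forecasts,
    # "late" for forecasts being scored) and value (the EPS or revenue forecast).
    consensus = (fcsts[fcsts["window"] == "early"]
                 .groupby(["firm", "quarter"])["value"].mean()
                 .rename("consensus"))
    late = fcsts[fcsts["window"] == "late"].join(consensus, on=["firm", "quarter"])
    return (late["value"] - late["consensus"]).abs()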


Table 5

Univariate Comparison of ESTIMIZE and IBES Forecast Accuracy and Bias

                        Full Sample with a
                    Matching Horizon Forecast       HORIZON <= 30          HORIZON > 30         Robustness Sample
                      ESTIMIZE        IBES      ESTIMIZE      IBES     ESTIMIZE      IBES     ESTIMIZE        IBES
FERR (Earnings)
  Mean                  0.0025      0.0046        0.0023    0.0030       0.0038    0.0048       0.0025      0.0054
  Median                0.0010      0.0016        0.0009    0.0012       0.0017    0.0016       0.0010      0.0019
  N                     15,044      73,645        13,005     9,170        2,039    64,475       17,486     189,866
  t-test                      0.0000                    0.0000                0.0000                  0.0000
  Wilcoxon test               0.0000                    0.0000                0.9135                  0.0000
BIAS (Earnings)
  Mean                  0.0003     -0.0010        0.0004    0.0009      -0.0003   -0.0013       0.0002     -0.0018
  Median                0.0002      0.0000        0.0002    0.0004       0.0000    0.0000       0.0002      0.0000
  N                     15,044      73,645        13,005     9,170        2,039    64,475       17,486     189,866
  t-test                      0.0000                    0.0000                0.0000                  0.0000
  Wilcoxon test               0.0000                    0.0000                0.0004                  0.0000
FERR (Revenue)
  Mean                  0.0075      0.0135        0.0068    0.0089       0.0126    0.0142       0.0078      0.0164
  Median                0.0022      0.0038        0.0020    0.0022       0.0045    0.0042       0.0023      0.0050
  N                     15,467      68,089        13,420     9,433        2,047    58,656       17,479     167,356
  t-test                      0.0000                    0.0000                0.0018                  0.0000
  Wilcoxon test               0.0000                    0.0181                0.0475                  0.0000
BIAS (Revenue)
  Mean                 -0.0007     -0.0031       -0.0009    0.0013       0.0002   -0.0038      -0.0008     -0.0044
  Median               -0.0001     -0.0004       -0.0000    0.0002      -0.0002   -0.0006      -0.0000     -0.0007
  N                     15,467      68,089        13,420     9,433        2,047    58,656       17,479     167,356
  t-test                      0.0000                    0.0000                0.0000                  0.0000
  Wilcoxon test               0.0000                    0.0000                0.0000                  0.0000

See Table 1 for buy-side sample selection. FERR is calculated as │Actual - Forecast│and is scaled by the share

price at the end of the previous quarter for earnings and the market value of equity at the end of the previous quarter

for revenue. BIAS is calculated as Actual - Forecast and is scaled by the share price at the end of the previous

quarter for earnings and the market value of equity at the end of the previous quarter for revenue. HORIZON is the

number of days between the forecast date and the earnings announcement date.
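The FERR and BIAS definitions above translate directly into code; the sketch below uses assumed column names (actual, forecast, measure, prior_price, prior_mve) purely for illustration.

import pandas as pd

def add_ferr_and_bias(df: pd.DataFrame) -> pd.DataFrame:
    # Scale by prior-quarter share price for EPS forecasts and by prior-quarter market
    # value of equity for revenue forecasts.
    scale = df["prior_price"].where(df["measure"] == "eps", df["prior_mve"])
    bias = (df["actual"] - df["forecast"]) / scale
    return df.assign(bias=bias, ferr=bias.abs())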


Table 6

Forecast Accuracy

DEPENDENT VARIABLE: FERR

Samples: columns (1)-(2), full sample of firms covered by both Estimize and IBES with a
matching horizon forecast; columns (3)-(4), forecasts with horizon <= 30 days; columns
(5)-(6), forecasts with horizon > 30 days; columns (7)-(8), robustness sample of firms
covered by both Estimize and IBES.

                  (1)         (2)         (3)         (4)         (5)         (6)         (7)          (8)
                  EPS     Revenue         EPS     Revenue         EPS     Revenue         EPS      Revenue
ESTIMIZE      -0.0042      0.0593     0.00242      0.0404      0.0417    0.309***      0.0291     0.1317**
              (-0.23)      (1.07)      (0.16)      (0.84)      (1.36)      (2.83)      (1.53)       (2.38)
HORIZON     0.0011***   0.0040***   0.0028***   0.0098***   0.0010***   0.0038***   0.0012***   0.00438***
               (8.12)      (8.59)      (3.25)      (3.81)      (8.06)      (8.80)     (11.44)      (14.40)
FIRM SIZE  -0.0957***   -0.314***   -0.058***   -0.130***   -0.110***   -0.387***   -0.106***    -0.364***
              (-3.99)     (-3.90)     (-5.23)     (-4.01)     (-3.75)     (-3.80)     (-6.11)      (-6.30)
LOSS         1.380***     1.091**    0.706***    0.697***    1.710***     1.241**    1.620***     1.260***
               (4.23)      (2.56)      (5.20)      (3.04)      (3.92)      (2.15)      (8.22)       (4.65)
FOLLOW        -0.0006     -0.0018      0.0004      0.0003     -0.0010     -0.0029      0.0011       0.0010
              (-0.60)     (-0.76)      (1.13)      (0.38)     (-0.86)     (-1.00)      (1.24)       (0.49)
INTERCEPT    1.080***    3.499***    0.737***    1.587***     1.23***    4.269***   0.0111***     3.988***
               (4.11)      (4.07)      (5.00)      (4.12)      (3.87)      (3.92)      (5.86)       (6.62)
TIME AND INDUSTRY
  DUMMIES         YES         YES         YES         YES         YES         YES         YES          YES
N              88,689      83,556      22,175      22,853      66,514      60,703     207,352      184,835
Adj. R2         0.229       0.187       0.187       0.171       0.255       0.200       0.310        0.196

This table contains regression results with FERR as the dependent variable. See Table 1 for sample selection. FERR is calculated as │Actual -

Forecast│and is scaled by the share price at the end of the previous quarter for earnings and market value at the end of the previous quarter for revenue.

ESTIMIZE is an indicator variable set equal to 1 if the forecast is from an estimize analyst and zero otherwise. HORIZON is the number of days between the

forecast date and the earnings announcement date. FIRM SIZE is measured as the natural log of the market value of equity at the end of the previous quarter.

LOSS is an indicator variable set equal to 1 if actual earnings are less than zero. FOLLOW is the total analyst coverage (ESTIMIZE and IBES) during the

quarter. All continuous variables are winsorized at the 1% level. Standard errors are clustered at the firm level. *, **, and *** indicate statistical significance at

the 10, 5, and 1% level, respectively.


Table 7

Forecast Bias

DEPENDENT VARIABLE: BIAS

Samples: columns (1)-(2), full sample of firms covered by both Estimize and IBES with a
matching horizon forecast; columns (3)-(4), forecasts with horizon <= 30 days; columns
(5)-(6), forecasts with horizon > 30 days; columns (7)-(8), robustness sample of firms
covered by both Estimize and IBES.

                  (1)         (2)         (3)         (4)         (5)         (6)         (7)          (8)
                  EPS     Revenue         EPS     Revenue         EPS     Revenue         EPS      Revenue
ESTIMIZE   -0.0798***    -0.129**   -0.055***   -0.220***     -0.0254    0.255***   -0.132***    -0.200***
              (-3.82)     (-2.48)     (-3.78)     (-4.83)     (-0.83)      (2.96)     (-5.99)      (-3.90)
HORIZON    -0.0012***  -0.0028***     -0.0008     -0.0021   -0.001***   -0.003***   -0.002***   -0.0031***
              (-7.43)     (-5.35)     (-1.11)     (-0.96)     (-6.70)     (-4.92)    (-11.68)      (-9.48)
FIRM SIZE      0.0111      0.105*     -0.0050      0.0325      0.0151      0.123*      0.0064     0.151***
               (0.60)      (1.80)     (-0.51)      (1.08)      (0.65)      (1.68)      (0.42)       (3.22)
LOSS         -1.42***   -1.201***   -0.468***      -1.474   -1.880***   -1.171***    -1.62***     -1.08***
              (-4.46)     (-3.35)     (-4.71)     (-0.82)     (-4.51)     (-3.52)     (-8.30)      (-4.61)
FOLLOW        -0.0006     -0.0021   -0.0007**   -0.0017**     -0.0004     -0.0020     -0.0004      -0.0023
              (-0.93)     (-1.27)     (-1.99)     (-1.98)     (-0.54)     (-1.04)     (-0.93)      (-1.60)
INTERCEPT       0.178      -1.12*       0.138      -0.300       0.177      -1.280       0.259     -1.224**
               (0.74)     (-1.65)      (0.59)     (-0.82)      (0.61)     (-1.51)      (1.35)      (-2.30)
TIME AND INDUSTRY
  DUMMIES         YES         YES         YES         YES         YES         YES         YES          YES
N              88,689      83,556      22,175      22,853      66,514      60,703     207,352      184,835
Adj. R2         0.174       0.086       0.077       0.066       0.221       0.107       0.247        0.105

This table contains regression results with BIAS as the dependent variable. See Table 1 for sample selection. BIAS is calculated as Actual - Forecast and

is scaled by the share price at the end of the previous quarter for earnings and market value at the end of the previous quarter for revenue. ESTIMIZE is an

indicator variable set equal to 1 if the forecast is from an estimize analyst and zero otherwise. HORIZON is the number of days between the forecast date and the

announcement date. FIRM SIZE is measured as the natural log of the market value of equity at the end of the previous quarter. LOSS is an indicator variable set

equal to 1 if actual earnings are less than zero. FOLLOW is the total analyst coverage (ESTIMIZE and IBES) during the quarter. All continuous variables are

winsorized at the 1% level. Standard errors are clustered at the firm level. *, **, and *** indicate statistical significance at the 10, 5, and 1% level, respectively.


Table 8

Forecast Accuracy and Bias for Subsamples Partitioned Based on the Median of Firm Size

                                Full Sample with a
                            Matching Horizon Forecast      Horizon <= 30 Days       Horizon > 30 Days        Robustness Sample
                               Small        Large         Small        Large        Small       Large        Small        Large
                               Firms        Firms         Firms        Firms        Firms       Firms        Firms        Firms
DEPENDENT VARIABLE: FERR (EPS)
ESTIMIZE                      0.0071       0.0195       -0.0019      -0.0009       0.0417    0.0421**       0.0091    0.0400***
                              (0.23)       (1.48)       (-0.08)      (-0.05)       (0.84)      (2.39)       (0.24)       (2.69)
Difference in Coefficients         0.0124                     0.0010                    0.0004                    0.0310
DEPENDENT VARIABLE: BIAS (EPS)
ESTIMIZE                   -0.117***   -0.0731***     -0.0581**   -0.0776***      -0.0257     -0.0268    -0.174***    -0.101***
                             (-3.06)      (-4.24)       (-2.50)      (-3.72)      (-0.42)     (-1.00)      (-4.02)      (-5.06)
Difference in Coefficients         0.0439                    -0.0195                   -0.0011                    0.0730
DEPENDENT VARIABLE: FERR (REVENUE)
ESTIMIZE                      0.0632        0.108        0.0392       0.0190        0.162    0.333***        0.106      0.128**
                              (0.85)       (1.45)        (0.60)       (0.27)       (1.31)      (2.94)       (1.08)       (2.10)
Difference in Coefficients         0.0448                    -0.0202                    0.1710                    0.0220
DEPENDENT VARIABLE: BIAS (REVENUE)
ESTIMIZE                     -0.119*       -0.121     -0.215***    -0.238***      0.297**       0.173     -0.187**    -0.167***
                             (-1.76)      (-1.52)       (-3.42)      (-3.58)       (2.04)      (1.64)      (-2.10)      (-2.95)
Difference in Coefficients        -0.0020                    -0.0230                   -0.124                     0.5210

This table contains regression results for subsamples partitioned based on the median of the market value of equity. See Table 1 for sample selection. FERR is calculated as │Actual - Forecast│and is scaled by the share price at the end of the previous quarter for earnings and the market value of equity at the end of the previous quarter for revenue. BIAS is calculated as Actual - Forecast and is scaled

by the share price at the end of the previous quarter for earnings and the market value of equity at end of the previous quarter for revenue. ESTIMIZE is an indicator variable set equal to 1 if the forecast

is from an estimize analyst and zero otherwise. *, **, and *** indicate statistical significance at the 10, 5, and 1% level, respectively.


Table 9

Forecast Accuracy and Bias for Subsamples Partitioned Based on the Median of Analyst Following

                                Full Sample with a
                            Matching Horizon Forecast      Horizon <= 30 Days       Horizon > 30 Days        Robustness Sample
                                 Low         High           Low         High          Low        High          Low         High
                            Coverage     Coverage      Coverage     Coverage     Coverage    Coverage     Coverage     Coverage
DEPENDENT VARIABLE: FERR (EPS)
ESTIMIZE                     -0.0515       0.0195        0.0174      -0.0236       0.0251      0.0511     -0.00152     0.0557**
                             (-1.47)       (0.81)        (0.80)      (-0.95)       (0.56)      (1.45)      (-0.06)       (2.12)
Difference in Coefficients          0.071                    -0.041                     0.026                     0.0709
DEPENDENT VARIABLE: BIAS (EPS)
ESTIMIZE                   -0.0667**   -0.0912***     -0.064***   -0.0619***       0.0624     -0.0510    -0.100***    -0.172***
                             (-2.43)      (-2.64)       (-3.30)      (-3.18)       (1.25)     (-1.40)      (-4.22)      (-4.78)
Difference in Coefficients        -0.0245                     0.0021                   -0.1134#                  -0.072#
DEPENDENT VARIABLE: FERR (REVENUE)
ESTIMIZE                     -0.0360       0.0643        0.0297       0.0460        0.201     0.330**     -0.00369      0.137**
                             (-0.43)       (0.88)        (0.40)       (0.78)       (1.62)      (2.03)      (-0.04)       (2.48)
Difference in Coefficients         0.1003                     0.0163                    0.129                     0.1407
DEPENDENT VARIABLE: BIAS (REVENUE)
ESTIMIZE                    -0.166**      -0.0726     -0.291***      -0.116*     0.443***       0.118    -0.231***     -0.114**
                             (-2.41)      (-0.93)       (-4.60)      (-1.86)       (2.95)      (0.97)      (-2.83)      (-2.08)
Difference in Coefficients         0.0934                     0.175##                  -0.325#                    0.117

This table contains regression results for subsamples partitioned based on the median of analyst following (FOLLOW). See Table 1 for sample selection. FERR is calculated as

│Actual - Forecast│and is scaled by the share price at the end of the previous quarter for earnings and the market value of equity at the end of the previous quarter for revenue.

BIAS is calculated as Actual - Forecast and is scaled by the share price at the end of the previous quarter for earnings and the market value of equity at end of the previous quarter

for revenue. ESTIMIZE is an indicator variable set equal to 1 if the forecast is from an estimize analyst and zero otherwise. *, **, and *** indicate statistical significance at the 10,

5, and 1% level, respectively. ## indicates statistical significance at the 5% level, and # indicates statistical significance at the 10% level.


Table 10

Panel A: Most Recent versus Prior IBES Forecasts

FERR (EPS)                      Mean       Median      T-test    Wilcoxon Test
Prior Forecasts               0.0052       0.0019      0.0000           0.0000
Most Recent Forecast          0.0030       0.0011

BIAS (EPS)                      Mean       Median      T-test    Wilcoxon Test
Prior Forecasts              -0.0016      -0.0002      0.0000           0.0000
Most Recent Forecast          0.0005       0.0005

FERR (REVENUE)                  Mean       Median      T-test    Wilcoxon Test
Prior Forecasts               0.0153       0.0046      0.0000           0.0000
Most Recent Forecast          0.0088       0.0023

BIAS (REVENUE)                  Mean       Median      T-test    Wilcoxon Test
Prior Forecasts              -0.0044      -0.0010      0.0000           0.0000
Most Recent Forecast          0.0002       0.0003

Panel B: Proportion of Pessimistic Forecasts for ESTIMIZE and IBES

                                         HORIZON <= 30 Days         HORIZON > 30 Days
                                        ESTIMIZE        IBES       ESTIMIZE        IBES
PROPORTION OF PESSIMISTIC FORECASTS (EPS)
Mean                                      0.5736      0.6288         0.5165      0.4814
N                                         12,989       9,158          2,035      64,332
t-test                                          0.0000                     0.0018
PROPORTION OF PESSIMISTIC FORECASTS (Revenue)
Mean                                      0.4923      0.5508         0.4733      0.4463
N                                         13,404       9,411          2,043      58,509
t-test                                          0.0000                     0.0016

Panel A provides results comparing the most recent IBES forecast to prior IBES forecasts for the sample of firms detailed in Table 1. The most recent forecast is the last forecast prior to the earnings announcement, and prior forecasts are all others. FERR is calculated as │Actual - Forecast│ and is scaled by the share price at the end of the previous quarter for earnings and the market value of equity at the end of the previous quarter for revenue. BIAS is calculated as Actual - Forecast and is scaled by the share price at the end of the previous quarter for earnings and the market value of equity at the end of the previous quarter for revenue. Panel B provides results comparing the proportion of pessimistic forecasts (forecasted EPS or revenue is less than the actual EPS or revenue) for both estimize and IBES forecasts.


Table 11

ERCs

                                       (A)           (B)            (C)            (D)
DEPENDENT VARIABLE                   BHAR3         BHAR3      ABS_BHAR3      ABS_BHAR3
INTERCEPT                         -0.0305*      -0.0330*      0.0548***      0.0541***
BIAS_IBES                         2.413***
BIAS_ESTIMIZE                                   3.093***
FERR_IBES                                                      2.474***
FERR_ESTIMIZE                                                                 2.494***
TIME AND INDUSTRY DUMMIES              YES           YES            YES            YES
N                                    1,819         1,819          1,819          1,819
Adj. R2                              0.017         0.031          0.143          0.144
Difference in Coefficients on
  IBES and BUY-SIDE                       0.68##                        0.02

This table contains regression results with BHAR3 and ABS_BHAR3 as the dependent variables. BHAR3 is the

size-adjusted buy-and-hold return during the three trading days around the earnings announcement. ABS_BHAR3 is

the absolute value of BHAR3. BIAS_IBES is the mean forecast bias for IBES forecasts made during the 30 days

prior to the information release; BIAS_ESTIMIZE is the mean forecast bias for estimize forecasts made during the

30 days prior to the information release; FERR_IBES is the mean forecast error for IBES forecasts made during the

30 days prior to the information release; and FERR_ESTIMIZE is the mean forecast error for estimize forecasts

made during the 30 days prior to the information release. When a forecaster has more than one forecast during the

period, only the most recent forecast is included. The variables are scaled by share price at the end of the previous

period. All continuous variables are winsorized at the 1% level. Standard errors are clustered at the firm level. *, **,

and *** indicate statistical significance at the 10, 5, and 1% level, respectively. ## indicates statistical significance

at the 5% level.