Tracks Paper
Persistent Patterns in Stock
Returns, Stock Volumes, and
Accounting Data in the U.S.
Capital Markets
Journal of Accounting,
Auditing & Finance
2015, Vol. 30(4) 541–557
© The Author(s) 2015
Reprints and permissions:
sagepub.com/journalsPermissions.nav
DOI: 10.1177/0148558X15584051
jaf.sagepub.com
Mark J. Nigrini1
Abstract
Benford’s Law gives the expected frequencies of the digits in tabulated data. The expected
frequencies show a large bias toward the low digits. An analysis of the Center for Research
in Security Prices (CRSP) data shows that the daily returns have a near-perfect fit to
Benford’s Law. The daily volumes also have a close fit to Benford’s Law but there are deviations due to round lot trading and the fact that some of the data are rounded to the nearest hundred. An analysis of Compustat data also shows a close fit to Benford’s Law with
some explainable deviations. The expected returns and the abnormal returns used in event
studies over an extended period showed that these numbers also conformed to Benford’s
Law. Recent studies have divided a population into subsets and then tested the subsets for
conformity to Benford’s Law. The conclusions are that the subsets with the weakest fit to
Benford were fraudulent. The problems with this approach are discussed, and these include
statistical considerations, issues with using Compustat data, other plausible explanations for
a lack of conformity, and the fact that there is no clear link between a change in the leading
digit of a number and the materiality of the dollar value of the change.
Keywords
Benford’s Law, stock returns, stock volumes, fraud detection, event studies
Introduction
Ball and Brown (1968) showed that there was a relationship between stock price changes
and the information contained in earnings reports. A few years earlier, Fama (1965a,
1965b) made the conceptual breakthrough of framing the random walks of stock prices as a
function of information flows. The random walk phrase was popularized by Malkiel
(1973). Google Scholar shows that these four studies have been cited more than 15,000
times, confirming that the topics of stock prices and accounting data, taken by themselves or
seen together, are important areas of study. The objective of this study is to show that there
is the same consistent, persistent, and interesting pattern in the random walk of stock
prices, the stock volumes associated with those same stock prices, the expected and
1 West Virginia University, Morgantown, USA
Corresponding Author:
Mark J. Nigrini, College of Business and Economics, West Virginia University, Morgantown, WV 26506, USA.
Email: mark.nigrini@mail.wvu.edu
abnormal returns calculated in event studies, and in the numbers shown in earnings reports.
This regularity, namely, Benford’s Law, has also been seen in naturally occurring earth science data.
In response to the growing number of studies that use Benford’s Law to identify financial statement fraud and economic statistics fraud, this study also shows that there are other
possible non-fraud explanations for nonconformity to Benford's Law. In addition to non-fraud explanations, there are methodological issues with using the first digits to identify
manipulation, and statistical issues with using Compustat data.
Benford’s Law
Benford (1938) hypothesized that more real-world numbers started with 1s and 2s than
started with 8s or 9s. He analyzed the first digits of numbers from diverse sources (such as
the drainage areas of rivers, scientific constants, and population counts), and his results
showed that 1 was the first digit 30.6% of the time and that 2 was the first digit 18.5% of
the time. A positive number x can be written as S(x) × 10^k, where S(x) ∈ [1, 10) is the significand and k is an integer (called the exponent). For example, the number 1,964 can be written as 1.964 × 10^3. The integer part of the significand is the first digit. Zero, by definition, is inadmissible as a first digit. Benford made the assumption that the ordered values
of a data set form a geometric sequence, and using calculus he developed the expected frequencies of the digits in tabulated data. The formulas are shown below, with D1 representing the first digit and D1D2 representing the first-two digits of a number:
Prob(D1 = d1) = log(1 + 1/d1),  d1 ∈ {1, 2, . . . , 9},  (1)

Prob(D1D2 = d1d2) = log(1 + 1/(d1d2)),  d1d2 ∈ {10, 11, 12, . . . , 99},  (2)
where Prob indicates the probability of observing the event in parentheses, and log refers
to the common logarithm. For example, the probability of the first-two digits being 19 is
.0223 (log(1 + 1/19)).
For the first-two digits there is a large bias toward the lower digits (1, 2, and 3). From
the third digit onward, the probabilities are close to being uniform at .10 for any of the possible 10 digits, 0 to 9.
The basis of Benford's Law is that the mantissas of the logs of the numbers are uniformly distributed. For example, the logarithm of 1,964 is 3.2931; the mantissa is the fractional part, 0.2931, and the characteristic is the integer part, 3. Leemis, Schmeiser,
and Evans ("LSE," 2000) state that if W is uniformly distributed U(a, b), where a and b
are real numbers with a < b, and if the interval (10^a, 10^b) covers an integer number of
orders of magnitude, then the first digit of the random variable T = 10^W satisfies Benford's
Law ("Benford") exactly. They presumably meant that the distribution of the digits of the
possible values of T would conform to Benford. So if b − a is an integer, and if the logarithms are uniformly distributed, then the exponentiated numbers (10^W) will conform to
Benford. When the logs of the numbers are uniformly distributed, the numbers themselves
will form a perfect, or a near-perfect, geometric sequence of the form,
Sn = a·r^(n−1),  (3)
where a is the first element of the sequence and r is the ratio of the (n + 1)th element
divided by the nth element. A geometric sequence with N elements will have n spanning
the range [1, 2, 3, . . ., N].
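The LSE result can be checked with a quick simulation (an illustrative sketch, not from the paper): drawing W uniformly over an integer number of orders of magnitude and exponentiating yields first digits that closely follow Benford.

```python
import math
import random

random.seed(1)

# Draw W ~ U(0, 3), so b - a = 3 is an integer number of orders of
# magnitude, then exponentiate: T = 10^W should conform to Benford.
n = 100_000
first_digit_counts = [0] * 10
for _ in range(n):
    w = random.uniform(0, 3)
    t = 10 ** w                      # T lies in [1, 1000)
    first_digit_counts[int(str(t)[0])] += 1

for d in range(1, 10):
    actual = first_digit_counts[d] / n
    expected = math.log10(1 + 1 / d)
    print(d, round(actual, 3), round(expected, 3))
```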
Benford’s Law has some interesting properties. The scale-invariance property
(Pinkham, 1961) states that if the numbers in a Benford Set (a set of numbers that conforms to Benford’s Law) were all multiplied by a (nonzero) constant, then the new data
set would also be a Benford Set. The implication is that if Benford’s Law applies to stock
market or accounting data, then it should do so regardless of the source currency. The
law is also base invariant, which means that if the numbers in a Benford Set were converted to (say) base 8 (where 1,964 becomes 3,654) and if the expected probabilities in
Equations 1 and 2 were recalculated, then the base 8 numbers will conform to the base 8
probabilities (Berger & Hill, 2015). The law is also power invariant in that if each
numeric value in a Benford Set is raised to a power in the sequence {1.5, 2.0, 2.5,
3.0, . . .}, then the new data set would also be a Benford Set. This is a variation on the
scale-invariance property.
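The scale-invariance property can also be demonstrated numerically. The sketch below (illustrative only; the constant 7.39 is arbitrary) builds a synthetic Benford Set from exponentiated uniform logs, multiplies every element by a nonzero constant, and confirms that the first-digit MAD stays small.

```python
import math
import random

random.seed(2)

def first_digit(x):
    # First significant digit via scientific notation (robust to magnitude).
    return int(f"{abs(x):e}"[0])

def first_digit_mad(data):
    """Mean absolute deviation of first-digit proportions from Benford."""
    counts = [0] * 10
    for x in data:
        counts[first_digit(x)] += 1
    return sum(
        abs(counts[d] / len(data) - math.log10(1 + 1 / d))
        for d in range(1, 10)
    ) / 9

# A synthetic Benford Set: exponentiated uniformly distributed logs.
benford_set = [10 ** random.uniform(0, 3) for _ in range(50_000)]
scaled = [x * 7.39 for x in benford_set]  # arbitrary nonzero constant

print(round(first_digit_mad(benford_set), 4))
print(round(first_digit_mad(scaled), 4))  # remains small: still a Benford Set
```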
Nigrini and Miller (2007) analyzed streamflow records for 140 years. Their data conformed almost perfectly to Benford using the Mean Absolute Deviation (MAD) as the conformity measure. The formula is shown in Equation 4:
Mean absolute deviation = (1/K) · Σ(i = 1 to K) |AP − EP|,  (4)
where EP denotes the expected proportion, AP the actual proportion, and K represents the
number of bins (which equals 90 for the first-two digits).
The streamflow MAD of 0.00013 meant that there were only small differences between
the actual proportions and the Benford proportions. There is no measure of significance for
the MAD, but Nigrini (2011) contains a table that states that MAD values from 0 to 0.0012
constitute a close conformity to Benford.
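Equation 4 translates directly into code. The sketch below (illustrative, not the paper's own implementation) computes the MAD over the 90 first-two digit bins, with the 0.0012 close-conformity cutoff from Nigrini (2011) in mind:

```python
import math

# Benford expected proportions for the 90 first-two digit bins (10..99).
BENFORD_F2D = {d: math.log10(1 + 1 / d) for d in range(10, 100)}

def mad(actual_proportions):
    """Mean absolute deviation from Benford for first-two digits (Equation 4)."""
    k = len(BENFORD_F2D)  # K = 90 bins
    return sum(
        abs(actual_proportions.get(d, 0.0) - p) for d, p in BENFORD_F2D.items()
    ) / k

# A data set that matches Benford exactly has a MAD of 0; values from
# 0 to 0.0012 constitute close conformity (Nigrini, 2011).
perfect = dict(BENFORD_F2D)
print(mad(perfect))  # 0.0
```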
Stock Returns
Ley (1996) analyzed the daily returns of the Dow Jones Industrial Average (DJIA) from
1900 to 1993 and the daily returns of the S&P index from 1926 to 1993. The first digits
showed a reasonable conformity to Benford. The MAD values were 0.0047 and
0.0043, which constitute a close conformity result. Rodriguez (2004) analyzed capital
market data, and his results showed that the annual rates of return (from the Ibbotson
stocks, bonds, bills, and inflation data) conformed to Benford, but with only 76 records in
the data set, the chi-square test is somewhat forgiving. His results also showed that the
daily returns of the DJIA did not conform to Benford.
The daily returns of security issues in the Center for Research in Security Prices (CRSP)
database were analyzed. The data used were the Stock/Security Files in the Annual Update
file. The options selected were as follows:
Date range: 1/1/2000 to 12/31/2013.
Company codes: ‘‘Search the entire database.’’
Figure 1. First-two digits and ordered logs of daily returns.
Note. Panel A shows the line of Benford’s Law and the actual proportions as bars of the first-two digits of the daily
returns, and Panel B shows the ordered logs of the daily returns over the same period.
Conditional statements: Share Code (shrcd) < 20.
Time series information: Price, Holding Period Return.
The query produced a table with 20,174,725 records. The stock returns are reported to six
decimal places. Records with an absolute value less than 0.00001 were deleted because
values from 0.000001 to 0.000009 do not have explicit first-two digits. A number recorded
as 0.000008 could be any number from 0.00000750 to 0.00000849. Some 1.15 million
returns were equal to zero, 3,200 returns were values from 0.000001 to 0.000009, and
360,000 (American Stock Exchange [AMEX]) returns were missing (null). This left N =
18,667,795. The first-two digits test, first used by Nigrini and Mittermaier (1997), was chosen
because it is more informative than the first digits test. The results are shown in Panel A of Figure 1.
The monotonically decreasing line in Panel A represents the expected proportions of
Benford’s Law. The Benford proportions start at 0.0414 at 10 on the x axis and decrease
steadily to a low of 0.0044. The bars show the actual proportions and the bar at 50 indicates that the actual proportion for 50 was 0.0098. The first-two digits have a close conformity to Benford with a MAD of 0.00046. There are small visible spikes (excesses) at 20
and 25 and a slight tendency for systematic spikes at the multiples of 10. The absolute
daily returns that occurred most often were 0.04, 0.05, 0.025, 0.041667, 0.033333,
0.066667, and 0.047619. Each of these values occurred between 25,000 and 29,000 times.
With 18.7 million records, the spikes caused by these number duplications were small.
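A hypothetical helper for extracting the first-two digits of a return (the paper does not show its code; string formatting is used here to avoid floating-point edge cases with small values):

```python
def first_two_digits(x):
    """First-two digits of a nonzero number, e.g. 0.00125 -> 12."""
    s = f"{abs(x):.6e}"      # scientific notation, e.g. '1.250000e-03'
    return int(s[0] + s[2])  # the significand's first two digits

print(first_two_digits(0.00125))    # 12
print(first_two_digits(-0.047619))  # 47
print(first_two_digits(1964))       # 19
```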
A plot of the ordered logs of the daily returns is shown in Panel B of Figure 1. Daily
returns present some issues when it comes to an analysis of the logs because the log of a
negative number is undefined. This issue was solved by taking the logs of the absolute
values of the daily return. The "log" of −0.01 was calculated to be −2.00, and so −2.00
was the "log" of both −0.01 and 0.01. The graph of the ordered logs is upward sloping as
would be expected from ordered values. This graph has the same shape as the streamflow
graph in Nigrini and Miller (2007). The digit patterns (and the log patterns) of the daily
returns are the same patterns that were observed in the earth science data.
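The workaround for negative returns described above amounts to taking the log of the absolute value (a minimal sketch):

```python
import math

def return_log(r):
    """'Log' of a signed daily return, as used above: log10 of |r|."""
    return math.log10(abs(r))

# A -1% return and a +1% return map to the same "log" value.
print(return_log(-0.01))  # -2.0
print(return_log(0.01))   # -2.0
```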
The next step in the analysis was the preparation of a histogram of the returns. The histogram was based on 19,819,978 returns after the deletion of the null values and is shown
in Figure 2.
Figure 2. Daily stock returns and a fitted Cauchy distribution.
Note. A histogram of the ordinary share stock returns and a fitted curve from the Cauchy distribution.
The histogram shows 125 intervals (with an interval width of 0.0016) from −0.10 to
+0.10. The large spike in the center of the graph is the [0.0000, 0.0016) interval with a
proportion of 0.086. The proportion of returns that were negative was 0.4741, whereas the
proportion of returns that were positive was 0.4680, with slightly less than 6% of the
returns being equal to exactly 0. The median return was 0.
The data have a near-perfect fit to Benford. Berger and Hill (2015) note that
none of the familiar classical probability distributions or random variables, such as e.g. normal,
uniform, exponential, beta, binomial, or gamma distributions are Benford. Specifically, no uniform distribution is even close to Benford, no matter how large its range or how it is centered.
(p. 36)
They also note that an exponential distribution with a mean equal to 1 comes close to
being Benford, and that ‘‘a log-normal random variable with large variance (compared with
its mean) is practically indistinguishable from a Benford random variable.’’
The histogram was analyzed using the curve fitting function of the software package
TableCurve 2D. The best fitting density function was the Pearson IV distribution. This distribution was ignored because the pdf is complex and can fit almost any set of continuous
data. The best fit from the familiar distributions was the Cauchy distribution with an r2 of
.872. The fitted Cauchy distribution is the line shown in Figure 2, which is a simple, symmetric, unimodal shape that is similar in shape to the standard normal pdf.
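The Cauchy density referred to here is simple. The sketch below shows its pdf (stdlib only; this illustrates the distribution's shape, not the TableCurve fit itself):

```python
import math

def cauchy_pdf(x, loc=0.0, scale=1.0):
    """Density of the Cauchy distribution: symmetric, unimodal, heavy-tailed."""
    z = (x - loc) / scale
    return 1.0 / (math.pi * scale * (1.0 + z * z))

# Symmetric and unimodal, peaking at the location parameter.
print(round(cauchy_pdf(0.0), 4))            # 0.3183  (= 1/pi)
print(cauchy_pdf(-1.0) == cauchy_pdf(1.0))  # True
```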
Stock Volumes
The query using the stock returns options returned 20,174,725 records. Daily volumes less
than 10 shares for the day were deleted because the integers from 1 to 9 do not have
Figure 3. First-two digits and ordered logs of daily stock volumes.
Note. Panel A shows the first-two digits of the daily volumes and Panel B shows the ordered logs of the daily volumes.
explicit first-two digits leaving 19,120,349 records. The digit patterns are shown in Panel A
of Figure 3.
The digits of the daily volumes show a close conformity with a MAD of 0.00070. MADs
less than 0.0012 qualify as close conformity. There are systematic spikes at the multiples of 10 (10, 20, . . ., 90). A review of the number frequencies shows that the daily
volumes that occurred most often were 100, 200, 500, 1,000, 300, 400, and 600. It seems
that the high frequency volumes are amounts that are the result of investors avoiding odd-lot
trading. The odd-lot premium has almost disappeared in recent years. The CRSP documentation also reports that ‘‘our source for the NYSE/AMEX reports the numbers rounded to the
nearest hundred.’’ The systematic spikes are there because of a tendency to trade in multiples
of 100 and the fact that New York Stock Exchange (NYSE) reports rounded numbers.
The ordered (ranked from smallest to largest) logs of the stock volumes are shown in
Panel B of Figure 3. The line is either upward sloping or (as can be seen at y = 2) it has a
short segment with a zero slope. This happens when a number (such as 100 which is 102)
has a high enough frequency to cause a visible horizontal segment. This shape is similar to
the ordered log pattern for the streamflow data in Nigrini and Miller (2007).
Accounting Data
Prior studies have analyzed accounting data for conformity to Benford. The dollar amounts
of the invoices approved for payment by a NYSE-listed company were analyzed in Nigrini
and Mittermaier (1997), and the dollar amounts of the invoices approved for payment by a
software company were analyzed in Drake and Nigrini (2000). Carslaw (1988), Thomas
(1989), and Nigrini (2005) analyzed earnings releases for signs of rounding up around psychological reference points (such as $100 million).
The Compustat data used were the Fundamentals Annual section of the North America/
Monthly Updates. The options selected were all fiscal years from 2000 to 2013 and All 382
Balance Sheet items, 328 Income Statement items, 66 Cash Flow items, and 114
Miscellaneous items.
The data were downloaded in four groups. Each group had 157,000 firm-years.
Companies do not use every item in every group. Some fields were used by 130,000 firms,
whereas others were unused or were used by fewer than 1,000 firms. The digits of the
accounting data are shown in Figure 4.
Figure 4. First-two digits of Compustat data.
Note. The graphs show the first-two digits of the Compustat financial statement items.
Compustat reports amounts in millions to three decimal places. Amounts from 0.001 to
0.009 were deleted because these amounts do not have an explicit second digit. An amount
reported as 0.002 could be any number from 0.00150 to 0.00249. The Compustat data have
MADs ranging from 0.00037 for the Cash Flow items to 0.00166 for the Income Statement
items. If the data are aggregated (N = 19,189,396), the result is a MAD of 0.00085, which is
comfortably below the 0.0012 upper bound for close conformity. A plot of the ordered logs
shows the same patterns as can be seen in Figures 1 and 3.
A number duplications test shows that the following 12 amounts occurred most often:

                All Compustat              Income Statement
  Amount            Count        Amount (US$)       Count
  0.010           179,131            0.01          123,296
  0.020            97,617            0.02           83,069
  0.030            76,473            0.03           64,479
  0.040            62,110            0.04           51,857
  0.050            57,298            0.05           44,771
  0.100            53,114            0.06           38,348
  0.060            46,926            0.07           33,070
  0.070            40,388            0.08           29,869
  1.000            39,429            0.10           29,858
  0.080            37,100            0.09           26,917
  0.090            33,567            0.11           22,198
  0.200            31,903            0.12           21,520
The Income Statement items have disproportionately high counts for the 0.01 to 0.10
numbers. These numbers caused the systematic spikes at 10, 20, 30, . . . , 90. A review of
the items with high 0.01 to 0.10 counts shows that these items are all related to various
Earnings Per Share (EPS) calculations or items showing the EPS effect of transactions
(e.g., GLEPS Gain/Loss Basic EPS Effect). These EPS items caused the Income Statement
to have the largest MAD.
Abnormal Stock Returns
Event studies are used in accounting and finance to study the impact of an event on the
daily (or monthly) return of a security. Events could include the promulgation of a new
accounting rule, a merger, or a dividend announcement. These studies partition the return
into the part due to the new news (the event) and the part due to macroeconomic news.
The part of the daily return that is due to the event is called the abnormal return.
The first step in this analysis was to delete the records for stocks that did not have daily
returns for the full 14-year period from 2000 to 2013 (both years inclusive). Including
stocks with random within-period starting or ending dates would have added a layer of
complexity to the programming that might have introduced errors in the results. The 14-year period had 3,521 trading days. There were 2,405 stock issues with complete daily
return data.
The next step was to delete firms with low trading volumes. The calculation of abnormal
returns is complicated by non-synchronous trading, which occurs when stocks do not trade
all the way to the closing bell. To reduce this source of noise, stocks were deleted that had
more than 250 days with trading volumes of 100 shares or less in the period. This deletion
reduced the number of firms by 270, which left 2,135 firms each with 3,521 daily returns
for 14 years (N = 7,517,335).
For each stock issue, the first 250 days (essentially the calendar year 2000) were used as
the estimation period. Abnormal returns were calculated for all companies for Day 251.
Thereafter, the returns for Day 2 to Day 251 were used to calculate an abnormal return for
all companies for Day 252 and so on. With 3,521 days and a moving 250-day estimation
window, each company had 3,271 abnormal returns. The expected return in each case was
calculated as follows:
E(R_t | RM_t) = Intercept + Slope × RM_t,  (5)
where R is the return for the firm, RM is the return for the market, and the parameters
Intercept and Slope are related to the linear structure of the market model.
The abnormal return is the excess of the actual return, R_t, over the expected return calculated using Equation 5. There were 6,983,585 abnormal returns (2,135 firms with abnormal returns for 3,271 days).
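A minimal sketch of the market-model estimation and abnormal-return calculation described above (ordinary least squares on an estimation window; the variable names and the noise-free check data are illustrative, not from the paper):

```python
def market_model(stock_returns, market_returns):
    """OLS estimates of Intercept and Slope over an estimation window."""
    n = len(market_returns)
    mean_m = sum(market_returns) / n
    mean_r = sum(stock_returns) / n
    cov = sum((m - mean_m) * (r - mean_r)
              for m, r in zip(market_returns, stock_returns))
    var = sum((m - mean_m) ** 2 for m in market_returns)
    slope = cov / var
    intercept = mean_r - slope * mean_m
    return intercept, slope

def abnormal_return(actual, market, intercept, slope):
    """Actual return minus the expected return from Equation 5."""
    return actual - (intercept + slope * market)

# Noise-free check: returns generated exactly by the model recover the
# parameters, so out-of-model returns yield a clean abnormal return.
market = [0.01, -0.005, 0.002, 0.015, -0.01]
stock = [0.001 + 1.2 * m for m in market]
a, b = market_model(stock, market)
print(round(a, 6), round(b, 6))             # 0.001 1.2
print(round(abnormal_return(0.02, 0.01, a, b), 6))  # 0.007
```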
The expected return results are shown in Panel A of Figure 5. There is a near-perfect
conformity to Benford with a MAD of just 0.00032. The actual proportions marginally
exceed the Benford proportions in the 10 to 15 range, and the opposite occurs in the 16 to
47 range. The results are consistent, in that both the actual returns (with a MAD of 0.00046
in Figure 1) that were used to calculate the expected returns and the expected returns themselves conformed to Benford.
Figure 5. First-two digits and histogram of expected returns.
Note. Panel A shows the first-two digits of the expected returns and Panel B shows a histogram of the expected
returns.
Figure 6. First-two digits and histogram of abnormal returns.
Note. Panel A shows the first-two digits and Panel B shows a histogram of the abnormal returns.
A histogram of the expected returns is shown in Panel B of Figure 5. Once again, the
Cauchy distribution provides a good fit to the data with an r2 of .9804. The digits of the
abnormal returns are shown in Panel A of Figure 6.
The abnormal returns in Figure 6 have a remarkably close conformity to Benford with a
MAD of 0.00044. The actual proportions are almost perfectly monotonically decreasing.
A histogram of the abnormal returns is shown in Panel B. The histogram is slightly positively skewed, presumably because returns have no limit on the upside. The Cauchy distribution provides a close fit with an r2 of .9994. These results suggest that researchers could
test their abnormal returns against Benford, and that nonconformity might indicate a systematic error in the calculations. However, conformity to Benford does not mean that the
sample is complete (a random sample of a Benford Set should give a Benford Set), it also
does not confirm that the researcher has correctly identified the event dates, and it also
does not mean that the researcher has chosen the ‘‘best’’ stock index for the regression
models. Benford is not a guarantor of the validity of the study.
The tests were repeated using the CRSP Equal-Weighted (EW) index, and the results
(not shown) were nearly identical. A random index was simulated with a range of returns
uniformly distributed on (−0.10, +0.10). The expected returns and the abnormal returns
were calculated for each of the stocks in the same way as was done previously. The results
were surprising, in that the expected returns showed just a small increase in the MAD. The
fictional abnormal returns also had a close conformity to Benford with a MAD of just
0.00049, which is close to the previous results. Benford can therefore not be used to test
the accuracy of the market returns used in event studies.
The Benford–Cauchy fits in Figures 2, 5, and 6 indicate that there is a relationship
between the simple, symmetric, unimodal Cauchy distribution and conformity to Benford.
This Benford–Cauchy relationship was first documented by Rodriguez (2004).
Analysis of Data Subsets
Recent studies have divided economic populations into subsets. The conclusions were that
the worst fits to Benford were fraudulent. For example, Rauch, Göttsche, Brähler, and
Engel (‘‘RGBE,’’ 2011) analyze macroeconomic data for 16 countries for 11 years with
130 records per year for each country. The Greek data had the worst conformity to
Benford. They conclude that as data issues were identified by the European Commission,
this confirmed the effectiveness of Benford as a detector of such manipulations. They justify their subset approach by noting that ‘‘it is sufficient that the conditional probability of
a Benford distribution is higher for non-manipulated data than for manipulated data.’’
The subset rankings depend, in part, on which reported amounts were included in the
analysis, on the chosen conformity measure, and on the time period analyzed. Also, willful
manipulation would mean that some data categories would be susceptible to overstatement
and others susceptible to an understatement. A country might want to understate its debt
and overstate some categories of social spending. With a small sample of highly aggregated
data, and with incentives to inflate some numbers and to deflate others, it is not clear that
conformity to Benford’s Law has any relationship to fraudulent manipulation. For example,
nonconformity could be caused by an expenditure (or income) line item that starts at (say)
9,000 and ends at 9,800. The amounts infuse the data with extra first digit 9s that inflate
the chi-square statistics. If the numbers had started at (say) 10,000 and had ended at
10,900, then the series would infuse the data with extra first digit 1s, but as 1s have a high
expected count, the effect on the chi-square statistic is muted.
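The asymmetry described above follows from the (O − E)²/E form of the chi-square statistic: the same count of excess records contributes far more when it lands on a low-expectation digit. A sketch with hypothetical counts (the 1,300 records and 100 excess records are illustrative):

```python
import math

n, extra = 1300, 100  # hypothetical record count and excess records

for digit in (1, 9):
    expected = n * math.log10(1 + 1 / digit)  # Benford expected count
    contribution = extra ** 2 / expected      # (O - E)^2 / E
    print(digit, round(contribution, 1))
# 1 25.6   -- excess 1s barely move the statistic
# 9 168.1  -- the same excess of 9s inflates it far more
```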
The RGBE results show that Belgium, Austria, Ireland, and Finland had similar poor fits
to Benford’s Law, while Portugal had the second best fit to Benford. Table 1 shows the
RGBE rankings and adds the rank for each of the countries for the Institutions portion of
the Global Competitiveness Index (World Economic Forum, 2010 at www.weforum.org).
The Institutional rankings in Table 1 take into account the quality of the government’s
management of public finances and the administrative framework within which everyone
interacts to generate income and wealth in the economy. This presumably includes the
quality and accuracy of government statistics. The Spearman Rank Correlation coefficient
is .082, which is an insignificant (p = .762) correlation. The lack of any link between
Benford and fraud was confirmed by Gonzalez-Garcia and Pastor (2009) who tested conformity to Benford against the Reports of the Observance of Standard and Codes (see http://
www.imf.org/external/NP/rosc/rosc.aspx) of the International Monetary Fund (IMF). They
found that macroeconomic data as a whole conform to Benford’s Law but that the
conformity of various subsets was not a reliable indicator of data quality. There was no
‘‘pattern of consistency’’ between conformity to Benford and the data quality ratings in the
IMF’s Reports. They concluded that nonconformity did not reliably signal poor quality
macroeconomic data.
Table 1. Conformity to Benford and Competitiveness Rankings.

  Country         RGBE   Institutions
  Netherlands        1        12
  Portugal           2        54
  Luxembourg         3         9
  Malta              4        34
  France             5        26
  Spain              6        53
  Slovenia           7        50
  Cyprus             8        30
  Italy              9        92
  Germany           10        13
  Slovakia          11        89
  Finland           12         4
  Ireland           13        24
  Austria           14        15
  Belgium           15        29
  Greece            16        84
Note. The table shows the conformity to Benford rankings of RGBE and the Institutional Competitiveness rankings
of the World Economic Forum. RGBE = Rauch, Göttsche, Brähler, and Engel.
Other studies have copied their subset approach. Google Scholar (http://
scholar.google.com) reports that the RGBE paper has 57 citations. Rauch, Göttsche, and
Langenegger (2014) analyzed military expenditures reported to the United Nations Office
for Disarmament Affairs. They concluded that the United States and the United Kingdom
have the lowest quality military data because they had the worst conformity to Benford.
The best fits to Benford were for Romania and Russia.
To show that factors other than fraud can affect the conformity of subsets, the CRSP
subsets were analyzed as if the test was meant to detect ‘‘CRSP fraud.’’ The stock returns
of individual companies were analyzed to see which stock price patterns generated the largest deviations from Benford. The stock prices of the four companies with the highest
MADs are shown in Panels A to D of Figure 7.
The companies in Panels A to D are all mutual funds with large holdings of fixed
income securities. Their stock prices were relatively stable from 2000 to the financial crisis
of 2008 when they showed only small declines compared with the rest of the market. The
stocks showed small steady price gains for 2011, a year when the overall market treaded
water. The results show that long-term price stability with small daily changes is one condition that produces large MADs for the stock returns. A histogram of the stock returns (not
shown) shows that the daily returns of these stocks are less dispersed than the conforming
returns (shown in Panels E and F). A review of the first-two digits of the daily returns for
the AllianceBernstein Income Fund (ACG; the highest MAD) shows large spikes at 12, 24,
and 36. These spikes are caused by the 0.01 changes with a stock price of $8.00 (a daily
return of 0.00125), the 0.02 changes with a stock price marginally above $8.00, and also
the 0.03 changes which are responsible for the spike at 36. The spikes are the result of
0.01, 0.02, and 0.03 changes for a stock price that is stuck just above $8.00.
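The arithmetic behind the ACG spikes is easy to reproduce. In the sketch below, the prices of $8.10 and $8.20 are illustrative stand-ins for "marginally above $8.00" (ACG's actual prices are not given here):

```python
def first_two_digits(x):
    s = f"{abs(x):.6e}"  # scientific notation avoids float edge cases
    return int(s[0] + s[2])

# Penny-sized changes on a price stuck just above $8.00 map to the
# first-two digits 12, 24, and 36 (hypothetical price levels).
print(first_two_digits(0.01 / 8.00))  # 12
print(first_two_digits(0.02 / 8.10))  # 24
print(first_two_digits(0.03 / 8.20))  # 36
```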
The Nuveen Municipal Value Fund (NUV) result in Panel B is also due to a low level
of volatility. The first-two digits showed a large spike at 10, and a review of the data
Figure 7. Stocks with a weak or strong conformity to Benford.
Note. Panels A to D show the stock prices for the four stocks with daily returns that have the weakest
conformity to Benford. Panels E and F show the stock prices of the two stocks (Juniper Networks and Citigroup)
with the best conformity to Benford, each with a MAD of 0.0012.
showed that these were caused by many 0.01 changes with a stock price marginally above
9.00 (e.g., 0.01/9.20). In sharp contrast to the low volatility of the Panels A to D stocks, the
two stocks on Panels E and F have a high volatility as can be seen by looking at the stock
price ranges (Juniper Networks from $4.43 to $344.00 and Citigroup from $1.02 to
$77.44). The best company MAD of 0.0012 exceeds the average MAD of 0.00046. The
deviations from Benford in the 2,135 subset graphs offset each other to give a population
result that is better than the best subset. Stock price volatility (which has nothing to do
with fraud) is the driver for conformity to Benford for the individual companies.
Alali and Romero (2013) divide a decade into five subsets: 2001-2002, 2003-2004, . . .,
and 2009-2010. They use Compustat financial statement numbers and the conformity of the
periods to Benford’s Law to ‘‘find different indicators of manipulation during the periods.’’
The authors note that they are using deviations from Benford, which might indicate manipulation. Following their approach, the daily returns were analyzed on a year-by-year basis
to identify the CRSP calendar years with the most ‘‘potential manipulation.’’ The total
number of records varied from 1.1 million records to 1.6 million records because of
changes in the number of stock issues, the deletion of returns of 0, and the variation in the
number of trading days per year. For each of the 14 years, the fit was excellent with MADs
ranging from 0.00051 to 0.00129. Once again, the best annual MAD of 0.00051 was
greater than the population MAD of 0.00046 (shown in Figure 1). Over the 14-year period,
the market suffered two large declines and two periods of growth, and it is remarkable that
the digit patterns remained stable from year to year. The highest MAD of 0.00129 was for
2000 and the second highest MAD of 0.00081 was for 2001. These high MADs were
caused by trading in eighths and sixteenths, which restricted the possible number of returns
(and the first-two digits of those returns) for any stock for any particular day. The high
MADs had nothing to do with fraud. The annual graphs (not shown) showed regular small
spikes at 20. There were excessive duplications of 0.020408. This return was caused by a
stock at $6.125 increasing to $6.25, $12.25 to $12.50, $24.50 to $25.00, $49.00 to $50.00,
and $98.00 to $100.00, and this ‘‘pricing in eighths increase’’ happened just enough times
to cause a spike (and a deviation on the graph).
The departures from Benford for the volume data can also be explained with non-fraud
reasons. The volume data in Figure 3 showed spikes at the multiples of 10. These MAD-inflating spikes were a result of trading in lots of 100 and also the fact that the NYSE data
were reported to the nearest hundred shares for the day. For example, the actual volume
123 would be reported as 100. This would cause MAD-inflating spikes at the multiples of
10 for all actual daily volumes from 50 (rounded up to 100) to 1,049.
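A small simulation shows why this rounding forces the first-two digits onto the multiples of 10. The raw volumes here are uniform draws, purely for illustration; the exact distribution does not matter for the point:

```python
import random

random.seed(1)

def first_two(n):
    """First-two digits of a positive integer (an int in 10..99)."""
    while n >= 100:
        n //= 10
    return n

# Hypothetical raw daily volumes in the 50..1,049 range described in the text.
raw = [random.randint(50, 1049) for _ in range(10_000)]

# Round half-up to the nearest hundred, as the NYSE daily figures were.
reported = [100 * ((v + 50) // 100) for v in raw]

# Every reported volume becomes a multiple of 100 between 100 and 1,000,
# so its first-two digits are always a multiple of 10.
digits = {first_two(v) for v in reported}
assert digits <= {10, 20, 30, 40, 50, 60, 70, 80, 90}
```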
To show a large MAD difference due to treading water around an average, the daily
stock volumes of Apple Inc. (AAPL) and Facebook, Inc. (FB) were downloaded from
Yahoo Finance (finance.yahoo.com) from their first day of trading to the time of writing.
Apple was listed in December, 1980 and its history includes the success of the Macintosh,
a period of decline, a return to profitability, a record market capitalization, and four stock
splits. An analysis of the stock volumes shows an acceptable conformity with a MAD of
0.00149 (and N = 8,552). In contrast, Facebook was listed in May, 2012 and an analysis of
the daily volumes showed a MAD of 0.00596 (with N = 623), which is above the lower
bound of 0.0022 for nonconformity. The high MAD for Facebook was because one third of
the volumes were in the 30 million to 50 million range, which caused spikes in the 30 to
49 interval. A company whose daily volume is stuck in a narrow range will tend to have a high MAD, the combined result of the compressed range of first-two digits and the small number of records.
There are three conformity-related issues with using firm-year Compustat data. First,
Compustat replaces originally reported data with the new data when a company restates its
past results. Their data are a combination of original data that will remain unchanged,
restated numbers that have replaced some original data, and original data that will be changed at some time in the future. It is not clear how this data mixture affects the conformity
of the subsets (the financial statements for a company for one year). Second, any subset
analysis should avoid the inclusion of totals or subtotals. For example, Microsoft’s 2012
Form 10-K shows that the three components of its inventories amount to $210 million, $96
million, and $831 million, respectively. Compustat shows INVRM, INVWIP, and INVFG
at these values, but it also has another field INVT (Total Inventories) at $1,137 million.
INVT cannot be independently manipulated because it is the sum of the three components, and including this total in any analysis introduces an extra first-two digit combination (11) into the MAD calculation. The effect of including subtotals (and totals
such as total current assets) probably improves the conformity of the subsets overall
because these fields have close conformities to Benford. Third, Compustat data are standardized to ensure "consistent and comparable data across companies, industries and business cycles." The 2012 Microsoft current liabilities include Securities lending payable of $814
and Other of $3,151. Compustat combines these amounts and shows Current Liabilities Other (LCOXDR) of $3,965. Also, long-term unearned revenue of $1,406 and other long-term liabilities of $8,208 are summed to give $9,614 for the field Liabilities-Other-Total
(LO). Standardization has the effect of replacing some financial statement numbers with
other numbers with the result that researchers end up using different digits for the MAD
calculation than would be the case if the actual reported numbers were used.
In an analysis of subsets, high MADs are generally associated with a small number of
records in a subset. A subset with only one record will have a first-two digit MAD of at
least 0.0213. With two records, the MAD will still be at least 0.0205. An analysis of the
382 Compustat Balance Sheet fields showed that, in general, those items that had a low
number of records had the highest MADs. The items that were applicable to the fewest
companies generally had the highest nonconformity. The correlation between N and the
MAD was −.426, meaning that lower counts were associated with higher MADs. The carrying value of common stock (CSTKCV) stood out from the group of Balance Sheet items
because it had a high count of 92,500 and a relatively high MAD of 0.0118. A review of
the data showed that this was because one half of the amounts were either $0.01 or $1.00.
This field is an anomaly because it is not a balance sheet line item that gets included in the
sum of assets or liabilities or equity. These amounts are not ledger balance dollar amounts.
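The one- and two-record lower bounds quoted above can be verified by brute force from the Benford first-two digit proportions, log10(1 + 1/d) for d = 10, ..., 99; a sketch:

```python
from math import log10
from itertools import combinations_with_replacement

# Benford's Law expected proportions for the first-two digits 10..99.
expected = {d: log10(1 + 1 / d) for d in range(10, 100)}

def mad(counts, n):
    """Mean absolute deviation across the 90 first-two digit bins."""
    return sum(abs(counts.get(d, 0) / n - p) for d, p in expected.items()) / 90

# Smallest possible MAD with a single record: it lands on 10, the most
# likely first-two digit combination under Benford's Law.
one_record = min(mad({d: 1}, 1) for d in expected)

# Smallest possible MAD with two records: try every placement of the pair
# (the minimum puts one record on 10 and one on 11).
two_records = min(
    mad({d: pair.count(d) for d in pair}, 2)
    for pair in combinations_with_replacement(range(10, 100), 2)
)

assert round(one_record, 4) == 0.0213
assert round(two_records, 4) == 0.0205
```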
The high MADs for items with only a few records also apply to Compustat’s income
statement fields. The correlation between N and the MAD is −.358, meaning that the items
used by only a few companies had the highest MADs. The exceptions (high MAD and high
N) were all for items related to EPS calculations that were variations on the standard EPS
calculation or the EPS effect of some loss or gain. None of these items were ledger balance
amounts. For example, S&P Core Earnings EPS Diluted (SPCED) had a high MAD
because of a pattern that was similar but more pronounced than Panel B in Figure 4. The
spikes at the multiples of 10 were caused by high counts for 0.01, 0.02, 0.09, and 0.10.
There were no cash flow anomalies (high MAD and high N), and the correlation between
the MAD and the number of records was −0.556. There were three anomalies for the miscellaneous items, and these were all related to the options (the life of options in years, the
risk-free rate, and the volatility assumption as a percentage).
Any subset analysis also suffers from a bluntness issue. A number such as 100 can be
increased by 99.99%, and it will still have the same first digit, namely 1. A number such as
900 can be increased by 11.1%, and it will still have the same first digit, namely 9.
Relatively large increases for 100 or 200 will leave the first digits unchanged. But numbers
such as 800 or 900 will change first digits for comparatively small increases. Using calculus, it can be shown that a random number drawn from a Benford Set can be increased on
average by 22.86% without a change in the first digit. At the extreme, a company with a
close conformity can increase every number to the maximum extent possible (e.g.,
Microsoft can change its Other liabilities of $3,151 to $3,965 or $3,999) and the MAD
would remain unchanged. A close conformity firm can increase every number by an average of 22.86% and the MAD would be unchanged. The scale-invariance property makes
this issue even more serious. If a company has a close conformity to begin with, then every
number can be multiplied by any constant and the MAD would remain unchanged.
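The 22.86% figure can be checked by simulation: draw significands 10^U with U uniform on [0, 1), which is the defining property of a Benford Set, and average the headroom before the first digit changes. A sketch:

```python
import math
import random

random.seed(7)

# For a number with significand s and first digit d = floor(s), the largest
# proportional increase that keeps the first digit is (d + 1)/s - 1:
# a number near 100 can grow almost 100% toward 200, while 900 can grow
# only about 11.1% toward 1,000.
headroom = []
for _ in range(200_000):
    s = 10 ** random.random()  # Benford-distributed significand in [1, 10)
    d = math.floor(s)
    headroom.append((d + 1) / s - 1)

avg = sum(headroom) / len(headroom)
assert abs(avg - 0.2286) < 0.005  # the calculus result quoted in the text
```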
The subset studies fail to demonstrate using mathematics that manipulations will always
increase the MAD. If the manipulation changes a first digit 2 (when the first digit 2s is at
17.6% or less) to a 3 (when the first digit 3s is already at 12.5% or higher), the effect will be
to increase the MAD. But, if the manipulation changes a first digit 2 (when we have a spike
at 2) to a first digit 3 (when the actual first digit 3 proportion is below .125), the effect will
be to decrease the MAD. The effect on the MAD of a 2→3 change is therefore indeterminate. If Microsoft erroneously reported Securities lending payable of $814 and Other Liabilities of $3,965, the first digits of both amounts would be unchanged at 8 and 3.
There is also a disconnection issue because of the lack of a relationship between the
first digit of a number and the materiality of the number. Microsoft’s income statement
shows revenues of $73,723. An error that overstated revenues by $10,000 would change
the first digit 7 to 8. Their balance sheet reports an income tax liability of $789. An error
that overstated the tax liability by $100 would also change that first digit 7 to an 8. The
effect on the company MAD would be the same for a $100 million error and for a $10,000
million error. However, the effect of the tax error on EPS would be small but the effect of
the revenue error would be $1.19 (ignoring any income tax on the extra profits). The large
error is also more likely to change the first digits of other affected numbers (net income, accounts receivable, and retained earnings), but the effect of these secondary
changes on the MAD is indeterminate. For example, the net income would rise from
$16,978 to $26,978, and retained earnings would rise from $566 to $10,566. The loss of
the first digit 1 for net income would be offset by the gain in a first digit 1 for retained
earnings. The net effect would be a loss of a first digit 5 and a gain of a first digit 2, which
could move the MAD in any direction.
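The digit changes in this example can be tabulated directly (figures in $ millions, taken from the text):

```python
def first_digit(n):
    """Leading digit of a positive integer."""
    n = abs(int(n))
    while n >= 10:
        n //= 10
    return n

# Microsoft figures before and after the hypothetical $10,000 million
# revenue overstatement and its secondary effects.
before = {"revenue": 73_723, "net_income": 16_978, "retained_earnings": 566}
after  = {"revenue": 83_723, "net_income": 26_978, "retained_earnings": 10_566}

changes = {k: (first_digit(before[k]), first_digit(after[k])) for k in before}

# Revenue 7->8, net income 1->2, retained earnings 5->1: the first digit 1
# lost from net income is regained by retained earnings, so the net effect
# on the digit counts is losing a 5 and gaining a 2.
assert changes == {"revenue": (7, 8), "net_income": (1, 2),
                   "retained_earnings": (5, 1)}
```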
The disconnection issue is a reminder that financial statement fraud can be achieved by
manipulating only a few financial statement line items. If revenue is inflated, it will inflate
accounts receivable, retained earnings, and perhaps income tax expense and income taxes
payable. It is unlikely that only five numeric changes will inflate a company’s MAD to the
extent that it becomes an outlier with a high MAD.
Conclusion
Daily stock returns and stock volumes have a close conformity to Benford’s Law, which
gives the expected frequencies of the various digits (0-9) in tabulated data. The expected
returns and the abnormal returns generated by accounting and finance event studies also
have a close conformity to Benford as do the financial statement amounts reported on the
Compustat database.
Some recent studies have divided a population into subsets, tested the subsets for conformity to Benford, and concluded that the subsets with the weakest fit to Benford were fraudulent. With this approach, the rankings depend on the time period chosen, the conformity
measure used, and the line items (financial statement line items or economic statistics)
used in the analysis. The rankings are also influenced by any standardization steps performed on the data. This approach ignores valid non-fraud reasons for nonconformity and
the fact that the manipulation of a financial statement amount might have no effect on the
first digits. There is also no relationship between a change in the first digit and the magnitude or the materiality of the error.
Diekmann and Jann (2010) state that to validly use Benford’s Law to detect fraud, one
has to demonstrate that correctly stated data conform to Benford’s Law, while manipulated
data follow a different distribution. Benford’s Law is most applicable to fraud schemes
where one person invents all the numbers and all the numbers are fictitious such as the fraudulent vendor scheme described in Nigrini (1999). Benford’s Law is also applicable when
many people all have the same incentive to manipulate numbers in the same way, and the
effect on the digits of those numbers is predictable. Examples would include the upper and
lower limits in the income tax code, and a good example is described in Christian and Gupta
(1993). Another example is the early rounding-up study of Carslaw (1988).
Author’s Note
The Compustat and the CRSP data are available from Wharton Research Data Services (WRDS) at
https://wrds-web.wharton.upenn.edu/wrds/
Acknowledgments
I hereby extend my thanks to the Editor-in-Chief Bharat Sarath, the associate editor, and the reviewer
for their insightful comments and suggestions. The article also benefitted from the comments of workshop participants at West Virginia University and from discussions with Ted Hill, Steven Miller,
Dick Riley, and Jack Dorminey. I would also like to express my gratitude to my dissertation chairman
Wallace Wood and to Marty Levy on my dissertation committee for believing, all those years ago,
that the largely unknown phenomenon called Benford’s Law was worthy of being a dissertation topic.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/
or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this
article.
References
Alali, F., & Romero, S. (2013). Benford’s Law: Analyzing a decade of financial data. Journal of
Emerging Technologies in Accounting, 10, 1-39.
Ball, R., & Brown, P. (1968). An empirical evaluation of accounting income numbers. Journal of
Accounting Research, 6, 159-178.
Benford, F. (1938). The law of anomalous numbers. Proceedings of the American Philosophical
Society, 78, 551-572.
Berger, A., & Hill, T. (2015). An introduction to Benford’s Law. Princeton, NJ: Princeton University
Press.
Carslaw, C. (1988). Anomalies in income numbers: Evidence of goal oriented behavior. The
Accounting Review, 63, 321-327.
Christian, C., & Gupta, S. (1993). New evidence on "secondary evasion." The Journal of the
American Taxation Association, 15(1), 72-93.
Diekmann, A., & Jann, B. (2010). Benford’s Law and fraud detection: Facts and legends. German
Economic Review, 11, 397-401.
Drake, P., & Nigrini, M. (2000). Computer assisted analytical procedures using Benford’s Law.
Journal of Accounting Education, 18, 127-146.
Fama, E. (1965a). The behavior of stock-market prices. The Journal of Business, 38, 34-105.
Fama, E. (1965b). Random walks in stock market prices. Financial Analysts Journal, 21(5), 55-59.
Gonzalez-Garcia, J., & Pastor, G. (2009, January). Benford’s Law and macroeconomic data quality
(IMF Working Paper No. 09/10). Washington, DC: International Monetary Fund.
Leemis, L., Schmeiser, B., & Evans, D. (2000). Survival distributions satisfying Benford’s Law. The
American Statistician, 54(3), 1-6.
Ley, E. (1996). On the peculiar distribution of the US stock indices first digits. The American
Statistician, 50, 311-314.
Malkiel, B. (1973). A random walk down Wall Street. New York, NY: W.W. Norton.
Nigrini, M. (1999). Fraud detection: I’ve got your number. Journal of Accountancy, 187(5), 79-83.
Nigrini, M. (2005). An assessment of the change in the incidence of earnings management around the
Enron-Andersen episode. Review of Accounting and Finance, 4(1), 92-110.
Nigrini, M. (2011). Forensic analytics: Methods and techniques for forensic accounting investigations. Hoboken, NJ: John Wiley.
Nigrini, M., & Miller, S. (2007). Benford’s Law applied to hydrology data: Results and relevance to
other geophysical data. Mathematical Geology, 39, 469-490.
Nigrini, M., & Mittermaier, L. (1997). The use of Benford’s Law as an aid in analytical procedures.
Auditing: A Journal of Practice & Theory, 16(2), 52-67.
Pinkham, R. (1961). On the distribution of first significant digits. Annals of Mathematical Statistics,
32, 1223-1230.
Rauch, B., Göttsche, M., Brähler, G., & Engel, S. (2011). Fact and fiction in EU-governmental economic data. German Economic Review, 12, 243-255.
Rauch, B., Göttsche, M., & Langenegger, S. (2014). Detecting problems in military expenditure data
using digital analysis. Defense and Peace Economics, 25, 97-111.
Rodriguez, R. (2004). Reducing false alarms in the detection of human influence on data. Journal of
Accounting, Auditing, & Finance, 19, 141-158.
Thomas, J. (1989). Unusual patterns in reported earnings. The Accounting Review, 64, 773-787.
JOURNAL OF FINANCIAL AND QUANTITATIVE ANALYSIS
Vol. 50, No. 3, June 2015, pp. 301–323
COPYRIGHT 2015, MICHAEL G. FOSTER SCHOOL OF BUSINESS, UNIVERSITY OF WASHINGTON, SEATTLE, WA 98195
doi:10.1017/S0022109014000660
Capital Structure Decisions around the World:
Which Factors Are Reliably Important?
Özde Öztekin∗
Abstract
This article examines the international determinants of capital structure using a large sample of firms from 37 countries. The reliable determinants for leverage are firm size, tangibility, industry leverage, profits, and inflation. The quality of the countries’ institutions
affects leverage and the adjustment speed toward target leverage in significant ways. High-quality institutions lead to faster leverage adjustments, whereas laws and traditions that
safeguard debt holders relative to stockholders (e.g., more effective bankruptcy procedures
and stronger creditor protection) lead to higher leverage.
I. Introduction
A growing body of literature employs cross-country comparisons to investigate various aspects of the determinants of capital structure and the role of
particular countries’ institutional characteristics in this determination (Rajan and
Zingales (1995), Booth, Aivazian, Demirgüç-Kunt, and Maksimovic (2001),
Antoniou, Guney, and Paudyal (2008), and Fan, Titman, and Twite (2012)).
However, no research has asked the broader questions: Globally, what are the consistent determinants of capital structure? How do institutional differences affect
the choice of leverage and the ability of firms to adjust to that leverage choice?
In this article, I identify the robust determinants of capital structure by extending the analysis to a larger number of countries and by estimating a dynamic
panel model that allows the impact of country-specific differences on leverage
choices and adjustment speeds to be jointly considered in an econometrically robust setting.
Prior work in this area provides an understanding of leverage determinants
in the United States and restricted international samples. Specifically, Frank and
Goyal (2009) document that the key factors for U.S. firms are industry leverage, market-to-book ratio, tangibility, profits, firm size, and inflation. They also report that the impact of firm size, market-to-book ratio, and inflation is not reliable. Rajan and Zingales (1995) examine the Group of 7 (G-7, comprising Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States) countries and report that the dominant factors are market-to-book ratio, tangibility, profits, and firm size. What is not known is whether the results from major industrial countries extend to a much larger panel of countries. Thus, the primary goal of this study is to identify the reliable patterns in the international data and determine how institutions influence financing decisions around the world.

∗ Öztekin (corresponding author), ooztekin@fiu.edu, College of Business, Florida International University, Miami, FL 33199. I thank Hendrik Bessembinder (the editor), Mark Flannery, Vidhan Goyal (the referee), Jay Ritter, Richard Warr, and seminar participants at the University of Nebraska and the 2010 Financial Management Association meetings for helpful comments and suggestions. All remaining errors are my own.
In a closely related study, Fan et al. (2012) examine how the institutional environment influences capital structure and debt maturity choices in 39 developed
and developing economies. The analysis undertaken herein is complementary to
theirs, with two important differences. First, Fan et al.’s analysis does not focus
on the influences of institutional environments on leverage adjustments and on the
reliability of firm, industry, and macroeconomic determinants for leverage determination. Second, they emphasize supply-side financing (i.e., investors). In contrast, the current investigation employs institutional features that also correspond
to the demand side (i.e., corporations), reflecting various costs (e.g., bankruptcy
costs, agency costs, transaction costs, contracting costs, and information asymmetry costs) that firms face in their respective countries.
First, I evaluate robust determinants of capital structure and find that profitability, tangibility, firm size, industry leverage, and inflation present consistent
signs and statistical significance across a large number of countries. To establish
the robustness of the leverage factors to firm circumstances imposed by country
features, I examine the effects of the firm, industry, and macroeconomic attributes
on capital structure separately for countries with strong and weak institutions.
The selection of core factors and their impact on leverage are generally robust
across firms from diverse institutional environments. However, firm size is not
a reliable factor for leverage. This result seems driven by countries with weak
institutional settings, in which firm size does not have a significant influence on
leverage. In general, the results are consistent with the conclusions of previous
studies, although their samples have limited geographical coverage.
Second, I examine the degree to which variations in the quality of the countries’ institutions can explain cross-country differences in capital structure
adjustments and leveraging choices. I find that legal and financial institutions
are first-order determinants of how fast the average firm adjusts its leverage in
a country, with better institutions resulting in faster adjustments. I also find that
higher leverage is associated with better bankruptcy outcomes; stronger protection of creditors; weaker protection of shareholders; poor contract enforcement,
executive quality, and law and order; weaker accounting, disclosure, liability, and
enforcement standards; and more-prevalent insider trading. These findings reinforce the prior literature on the importance of legal and financial institutions for
capital structure decisions.
The article proceeds as follows: Section II reviews the literature and discusses the association between firm, industry, macroeconomic, and institutional
factors and leverage. Section III introduces the data and empirical method. Section IV presents the results, and Section V draws some conclusions.
II. Literature Review and Hypotheses
This article draws on two broad thrusts in the capital structure literature.
The first is the various competing or complementary theories on capital structure. Although this article is not intended to test capital structure theories in an
international environment, I draw on these theories to help understand the role
of various factors in the capital structure decision. The second area of the capital
structure literature is the set of studies that specifically examine capital structure
determinants and institutional effects on capital structure in a global setting. It is
this literature to which this article contributes.
A. Reliable Firm, Industry, and Macroeconomic Determinants
Theories of capital structure make specific predictions about the influence
of factors such as bankruptcy costs, agency costs, transaction costs, and information asymmetry costs on firms’ capital structures. Several firm, industry, and
macroeconomic proxies have been proposed to account for the relation between
these factors and leverage. I evaluate the reliability of these suggested determinants for the firm’s choice of capital structure in many countries. It is important
to stress that the current investigation assesses consistent patterns in the international leverage data and does not employ structural tests of the capital structure
theories. As elaborated subsequently, the observed signs could be consistent with
multiple theories with various explanations of the coefficient estimates.
One strand in the theoretical literature maintains that a firm’s capital structure
is the outcome of the trade-off between the benefits of debt and the costs of debt.
Classic arguments for this trade-off are based on bankruptcy costs, tax benefits,
and agency costs related to asset substitution (Jensen and Meckling (1976)), underinvestment (Myers (1977)), and overinvestment (Jensen (1986), Stulz (1990)).
This trade-off motivates four broad predictions. First, higher bankruptcy costs will
decrease a firm’s optimal leverage. Accordingly, lower debt ratios should be associated with firms that are smaller and less profitable, firms with greater growth
opportunities, firms with fewer tangible assets, firms operating in industries with
lower leverage, and firms in economies with higher inflation, which are more
likely to have higher bankruptcy costs. A negative sign on profitability could arise
because profits directly add to the equity of the firm. As profitability increases,
the book value of equity also increases because of additions to retained earnings.
Profitability also increases the market value of equity. Firms could respond to
this organic increase in equity by issuing debt, but because of transaction costs,
the adjustment is partial (Strebulaev (2007), Frank and Goyal (2015)). Second, a
higher value of tax shields would cause a firm’s optimal leverage to increase. That
is, higher profitability, higher inflation, and higher tax rates should have a positive impact on leverage. Third, more profitable firms and firms with fewer growth
opportunities, which could possibly face higher agency costs of equity, should
carry more debt. Fourth, larger firms and firms with more tangible assets and
fewer growth opportunities, which are more likely to face lower agency costs of
debt, should also carry more debt.
According to another strand in the theoretical literature, the adverse-selection
costs of issuing risky securities, because of either asymmetric information
(Myers (1984), Myers and Majluf (1984)) or managerial optimism (Heaton
(2002)), lead to a preference ranking over financing sources. To minimize adverse-selection costs, firms first use internal funds, followed by debt and then equity.
This pecking order motivates two broad predictions. First, more internal funds and
fewer investment opportunities lead to less debt. Consequently, holding dividends
fixed, more profitable firms and firms with fewer growth opportunities should
have a lower amount of debt in their capital structures. Second, higher adverse-selection costs result in more debt. If smaller firms and firms with fewer tangible
assets are more prone to adverse-selection costs, they should carry more debt in
their capital structures. Alternatively, if adverse selection is about assets in place,
tangibility may increase adverse-selection costs and result in higher debt (Frank
and Goyal (2009)). Therefore, the effect of tangibility on adverse-selection costs
is ambiguous.
A third strand in the theoretical literature posits that when managers issue securities, they consider the time-varying relative costs of issuances for debt
and equity (Myers (1984), Graham and Harvey (2001), Hovakimian, Opler, and
Titman (2001), Baker and Wurgler (2002), and Huang and Ritter (2009)). This
market timing motivates the prediction that firms alter their leverage to exploit
favorable pricing opportunities. As long as the market-to-book ratio is a reasonable proxy for stock overpricing opportunities, it should be negatively associated
with leverage. Several studies show that the negative relation between market-to-book ratio and leverage is mostly driven by growth opportunities and not by market timing (e.g., Liu (2009)). Thus, one should be cautious in reading too much support for market timing from the negative coefficient on the market-to-book ratio in leverage regressions. Furthermore, higher expected inflation makes
debt issuances cheaper, implying more debt in a firm’s capital structure. In addition, equities may be undervalued in the presence of inflation if investors suffer
from inflation illusion (Ritter and Warr (2002)), resulting in higher leverage.
B. Institutional Effects
Prior research indicates that the institutional environment influences firms’
financing policies (e.g., Rajan and Zingales (1995), Demirgüç-Kunt and
Maksimovic (1999), Booth et al. (2001), Bae and Goyal (2009), and Fan et al.
(2012)). Institutional characteristics could affect capital structure decisions by
altering the costs and benefits of operating at various leverage ratios. First, the
institutional environment might influence the speed with which a firm converges
to its long-term capital structure, given some deviation. If a country’s institutional characteristics make it more expensive to issue debt and equity, firms in
that country would exhibit slower adjustment speeds. Second, country characteristics could influence long-term capital structure. Institutions that safeguard debt
holders (equity holders) would lead to cheaper debt (equity) financing, resulting in
higher (lower) leverage.
Strong institutions form the legal framework that enables more efficient contracting and facilitates economic transactions. They also provide checks against
expropriation by powerful groups. The extent to which these institutional effects
interact makes the interpretation of the cross-sectional comparisons more difficult.
Unbundling different effects of institutional environments is challenging and is
not attempted herein. Nevertheless, existing theoretical and empirical evidence
provides some guidance on the potential channels through which institutions can
influence financing decisions.
Bankruptcy law and procedures constitute an integral element of a debt contract. Court mechanisms governing default on debt contracts could affect the
effectiveness of resolution of financial distress. Firms from countries that administer the bankruptcy process in a manner that is less time consuming (TIME),
less costly (COST), and more efficient (EFFICIENCY) should have lower financial distress and contracting costs, leading to more debt. Similarly, countries with
stronger creditor protection, in which lenders can easily force repayment, repossess collateral, gain control of the firm (CREDITOR), and enforce debt contracts
(FORMALISM), could mitigate bankruptcy costs, agency costs, and contracting
costs, resulting in more debt.1 Debt tax shields play an important role in determining the capital structure (Graham (1996)). Holding personal tax rates constant,
higher corporate tax rates (TAX) should have a positive effect on the value of tax
shields, resulting in more debt.
The degree of agency costs and contracting costs should also greatly depend
on the quality of shareholder protection, as determined by the rights attached to
equity securities (ANTIDIR), their enforcement (PRENF), and disciplinary and
monitoring mechanisms that limit managerial discretion and facilitate financial
contracting. I use executive quality (EXECUTIVE), the strength of law and order
(LAW&ORDER), the quality of government (GOVERNMENT), and the quality
of contract enforcement (ENFORCE) to account for governance and contracting
mechanisms that could correct any conflict between managers and shareholders
and alleviate contracting costs. The growing law and finance literature argues that
capital markets function properly only when good security laws exist (La Porta,
Lopez-de-Silanes, Shleifer, and Vishny (1997), (1998)). A common premise in
this literature is that stronger property rights and enforcement reduce agency costs
and, consequently, the cost of external financing, increasing its supply. Accordingly, stronger shareholder protection should lead to lower leverage through more
equity. The effect of disciplinary and monitoring mechanisms on leverage is not
obvious a priori because they could influence both debt and equity contracts. Fan
et al. (2012) argue that when contracting and agency costs are high as a result of
weak enforcement and/or a poor legal system, debt that allows insiders less discretion is likely to dominate. Similarly, Acemoglu and Johnson (2005) suggest that
poor-quality contracting institutions could result in more debt rather than equity
because debt contracts are cheaper to enforce. Conversely, La Porta et al. (1997),
(1998) and Levine (1999) maintain that in an inferior contracting environment,
debt holders are likely to increase the price of debt and decrease its quantity.
1 Rajan and Zingales (1995) argue that strong creditor rights enhance ex ante contractibility and
give management incentives to avoid bankruptcy. Qian and Strahan (2007) show that creditor protection and legal origins significantly influence the terms and pricing of bank loans. Bae and Goyal
(2009) posit that variation in laws and enforcement affects borrower incentives to expropriate and
increases the riskiness of assets, influencing default and recovery probabilities. They argue that this
variation also influences lender incentives to monitor and lender contracting abilities. They find that
banks charge higher loan spreads when property rights are weaker.
306
Journal of Financial and Quantitative Analysis
A country’s quality of accounting standards (ACCSTDS); regulation of security laws, including mandatory disclosure (EDISCLOSE), liability standards
(ELIABS), and public enforcement (EPUBENF); insider trading laws (INSIDER);
and presence of public credit registries (PUBINFO), which facilitate information
sharing in debt markets, could potentially influence incentive problems, contracting, and information asymmetry costs. Although accounting standards and the
regulation of securities laws might affect both debt and equity costs, the general
consensus in the literature is that equity contracts are relatively more sensitive to
incentive problems, contracting, and information asymmetry costs than are debt
contracts. If so, in weaker institutional settings in which these costs are binding,
the firms should carry higher leverage.2 In contrast, greater information sharing
in debt markets should increase the incentives of investors to hold debt, leading to higher leverage. Stiglitz and Weiss (1981) propose that when lenders are
knowledgeable about the borrowers or other lenders of the firm, the moral hazard
problem of financing nonviable projects is less prominent.
III. Data and Method
I construct my firm-level sample from all nonfinancial and unregulated firms
included in the Compustat Global Vantage database from 1991 to 2006.3 To minimize the potential impact of outliers, I winsorize the firm-level variables at the
1st and 99th percentiles. The sample consists of 15,177 firms from 37 countries,
totaling 101,264 firm-years, an average of 7 years per firm.
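The sample filters and winsorization described above can be sketched as follows. This is an illustrative pandas sketch, not the author's code; the column names ("sic" and whatever firm-level variables are passed in) are assumed for illustration and are not Compustat mnemonics.

```python
# Illustrative sketch of the sample construction: drop financials and
# utilities, then winsorize firm-level variables at the 1st and 99th
# percentiles. Column names are hypothetical.
import pandas as pd

def build_sample(df, firm_vars):
    # Exclude financial firms (SIC 6000-6999) and utilities (SIC 4900-4999).
    keep = ~df["sic"].between(6000, 6999) & ~df["sic"].between(4900, 4999)
    out = df.loc[keep].copy()
    # Winsorize each firm-level variable at its 1st and 99th percentiles.
    for col in firm_vars:
        lo, hi = out[col].quantile([0.01, 0.99])
        out[col] = out[col].clip(lower=lo, upper=hi)
    return out
```

Winsorizing (clipping extreme observations to the percentile bounds) rather than truncating keeps the firm-years in the sample while limiting the influence of outliers.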
A. Leverage Determinants Model
Several recent studies on U.S. and international firm leverage models conclude that adjustment costs are nontrivial and that firm-fixed effects are essential to capture unobserved firm-level heterogeneity (Flannery and Rangan (2006),
Lemmon, Roberts, and Zender (2008), Gungoraydinoglu and Öztekin (2011),
Faulkender, Flannery, Hankins, and Smith (2012), Öztekin and Flannery (2012),
and Warr, Elliott, Koëter-Kant, and Öztekin (2012)). Rather than estimate a static
model based on observed contemporaneous debt ratios, I estimate a dynamic
panel model that produces an estimate of the unobserved target leverage and that
can also provide an estimate of the adjustment speed to the target. The benefit
of the partial adjustment model is that it incorporates rebalancing costs that may
slow down the firm’s rate of adjustment to its optimal leverage.
(1) LEVij,t − LEVij,t−1 = λj (LEV∗ij,t − LEVij,t−1) + δij,t,
2 Verrecchia (2001) argues that tight accounting standards and disclosure requirements increase
the transparency of the firm to outside investors, reducing the cost of equity financing. Hail and Leuz
(2006) show that firms from countries with more extensive securities regulation and stricter enforcement mechanisms have a significantly lower cost of equity capital. Bhattacharya and Daouk (2002)
show that transaction costs are higher in stock markets in which insiders trade with impunity.
3 Following previous researchers, I exclude financial firms (Standard Industrial Classification (SIC)
codes 6000–6999) and utilities (SIC codes 4900–4999).
Öztekin
307
where LEVij,t is firm i’s debt ratio in year t and in a country or institutional setting
j, LEV∗ij,t is the optimal debt ratio, and λj is the adjustment parameter.
The optimal debt ratio is determined by the β coefficient vector to be estimated and Xij,t−1, the vector of firm, industry, and macroeconomic characteristics:

(2) LEV∗ij,t = βj Xij,t−1.
Equation (2) thus provides a model of the determinants of the optimal leverage,
which relies only on observable variables. To control for unobservable factors that
could affect leverage, I include firm- and year-fixed effects, Fi and Yt , respectively.
Because optimal leverage LEV∗ij,t is unobservable, substituting equation (2) into
equation (1) yields the following:
(3) LEVij,t = (λj βj) Xij,t−1 + (1 − λj) LEVij,t−1 + ϑij Fi + ρt Yt + δij,t.
However, equation (3) requires instruments for the endogenous transformed
lagged-dependent variable and a correction for the short panel bias (Blundell and
Bond (1998), Huang and Ritter (2009)). Flannery and Hankins (2013) conclude
that Blundell and Bond’s system generalized method of moments (GMM) estimation method provides adequate estimates in the presence of these estimation
issues. I therefore use a 2-step system GMM to estimate equation (3), and I control for the potential endogeneity of the right-hand-side variables by using lags of
the same variables as instruments.
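The mechanics of recovering the adjustment speed from the coefficient on lagged leverage can be illustrated with a small simulation. This is a sketch on synthetic data, not the paper's estimator: it omits firm-fixed effects, which is precisely why plain OLS suffices here but would be biased in the paper's short panel, motivating the two-step system GMM.

```python
# Simulate the partial adjustment model of equations (1)-(3) and recover
# the adjustment speed lambda as 1 minus the coefficient on lagged leverage.
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 500, 15
lam, beta = 0.21, 0.5                 # true adjustment speed and target slope

x = rng.normal(size=(n_firms, n_years))   # a single leverage determinant
lev = np.zeros((n_firms, n_years))
for t in range(1, n_years):
    target = beta * x[:, t - 1]           # equation (2): LEV* = beta * X
    gap = target - lev[:, t - 1]
    # Equation (1): close a fraction lam of the gap each year, plus noise.
    lev[:, t] = lev[:, t - 1] + lam * gap + 0.02 * rng.normal(size=n_firms)

# Equation (3): regress LEV_t on X_{t-1} and LEV_{t-1}. OLS is consistent
# here only because the simulation has no firm-fixed effects; with them,
# short-panel bias is why the paper uses two-step system GMM instead.
y = lev[:, 1:].ravel()
X = np.column_stack([x[:, :-1].ravel(), lev[:, :-1].ravel(), np.ones(y.size)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
lam_hat = 1.0 - coef[1]               # should be close to the true lam = 0.21
```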
The base adjustment speed, λ, is obtained from the coefficient on the lagged dependent variable, LEVij,t−1, by simply subtracting it from 1. If managers have target (optimal) debt ratios and make proactive efforts to reach them, then λ ≠ 0. In the presence of market frictions, the adjustment is not instantaneous; therefore, λ ≠ 1. Although I do not test capital structure theories, note that the dynamic trade-off theory predicts that λ should be strictly bounded between 0 and 1. In contrast, pecking-order and market-timing theories suggest a coefficient close to 0.
To test which leverage determinants have a robust impact on capital structure according to equation (3), one can test the null hypothesis β = 0. If the leverage factor in question is reliable, β ≠ 0 should hold. Throughout the empirical analysis, I use a measure of (book) leverage (LEV) computed as follows:

(4) LEV = (Long-Term Debt + Short-Term Debt) / Total Assets.
Many potential variables may or may not have a deterministic role in the
capital structure decision; these include a host of firm-specific, industry-specific,
macroeconomic, and institutional features. I analyze which determinants of capital structure are reliably signed and reliably important (i.e., statistically significant) in explaining the firm’s leverage choices. Initially, I employ the specification
equation (3), which does not explicitly control for the institutional environment
but permits ready comparison with a plethora of U.S. studies. I perform two types
of analyses, separate and pooled, to evaluate the impact of the leverage determinants on capital structure around the world.
For the separate methodology, I estimate equation (3) separately for each
country in the sample and obtain an estimate of each country’s capital structure determinants (βs). By allowing different sensitivities and by instrumenting
for the leverage determinants, separate regressions implicitly (partially) account
for the effects of firms’ institutional environments on the estimated coefficients.
I compute the number of countries in which a particular leverage determinant is of
a specific sign and statistically significant at the 90% or higher confidence level.
If the correlations consistently hold for the sample countries, I infer that the determinant in question is a reliable (dominant) factor for the financing decisions.
I require a leverage factor to be significant with a consistent sign at least 50% of
the time, thus in at least 19 countries.
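The separate-method scoring rule can be stated compactly in code. This is a sketch with hypothetical inputs: per-country coefficient and p-value pairs for one determinant, counted as significant at the 90% confidence level, with the 19-of-37 reliability threshold from the text.

```python
# Sketch of the separate-method reliability rule: count countries in which
# a determinant is significant at the 90% level with each sign, and flag
# the determinant reliable if either count reaches the threshold.
def reliable_separate(results, p_cutoff=0.10, min_countries=19):
    """results: list of (coefficient, p_value) pairs, one per country.
    Returns (n_positive, n_negative, is_reliable)."""
    pos = sum(1 for b, p in results if p < p_cutoff and b > 0)
    neg = sum(1 for b, p in results if p < p_cutoff and b < 0)
    return pos, neg, max(pos, neg) >= min_countries
```

The same function covers the institutional partitions by passing min_countries=9 (weak or strong settings, 18 portfolios each) or 18 (all settings).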
For the pooled methodology, I combine the data on all sample countries
to estimate equation (3) as a world model and obtain an estimate of the overall sample’s capital structure determinants (β). To account for the effects of the
firms’ institutional settings on the estimated coefficients, the world regressions
include country-fixed effects in addition to firm- and year-fixed effects. In the
world model, a factor is either significant or not. If a leverage determinant is of a
particular sign and statistically significant at the 90% or higher confidence level,
I assign it a score of 1. I assign a score of 0 to determinants that have insignificant
coefficient estimates. I require a score of 1 to consider a factor reliable.
I employ a similar approach to assess the effect of institutional characteristics
on the reliability of the capital structure determinants. First, I classify the sample
countries into two portfolios according to the median value of 18 indexes representing the quality of legal and financial institutions. Second, I estimate equation
(3) separately for each institutional characteristic for strong and weak institutional
portfolios. Similar to the world regressions, institutional regressions include firm-,
year-, and country-fixed effects. If correlations consistently hold for the partitioning of the data based on country features, I infer that the determinant in question
is a reliable factor for leverage decisions. I require a leverage factor to be significant with a consistent sign at least 50% of the time. Thus, I require at least 9, 9,
and 18 consistent and significant signs on a determinant to consider it reliable in
weak, strong, and all institutional settings, respectively.
B. Institutional Effects Models
An empirical challenge for the cross-sectional tests is to establish a causal relation between international variation in capital structure policies and differences in the quality of institutional environments, beyond a simple correlation. It is possible that types of industries and firms differ across countries. Unobserved country
variables could affect both the quality of institutional environments and the financial policies of firms. The empirical design might not entirely resolve these issues
but it aims to mitigate these concerns: i) The GMM estimators control and instrument for (lagged) firm, industry, and macroeconomic characteristics; ii) the
2-stage regressions isolate the impact of the institutional features from that of the
firm and industry characteristics; and iii) both first- and second-stage regressions
either control for country-fixed effects or employ random country effects and
instrumental variables.
The leverage determinants model in equation (3) is more general than many
prior international comparisons because it accounts for the dynamic nature of
the firm’s capital structure and its unobserved heterogeneity. At the same time, it
includes no information on firms’ institutional environments. To examine whether
institutional factors can explain country-level variations in capital structure
choices, I use a 2-step methodology.
1. Institutions and Adjustment Speeds
In the first step, I estimate equation (3) with the inclusion of country-fixed
effects. The estimated coefficients from equation (3) indicate each firm’s target
ratio (equation (2)) and deviation from its target debt ratio:
(5) DEVij,t = LEV∗ij,t − LEVij,t−1.
Substituting equation (5) into equation (3) gives the following:
(6) LEVij,t − LEVij,t−1 = λj (DEVij,t) + δij,t.
The simplification of equation (6) relaxes the assumption that all firms adjust at
a constant rate. I allow the adjustment speed to depend on institutional characteristics:

(7) λj = ΛZj + μTjt + τt Yt,
where Λ, μ, and τ are vectors of the coefficients; Z is a vector of national institutional cost and a constant term; T is a vector of time-varying macroeconomic
(gross domestic product (GDP) growth) and financial development (stock and
bond market capitalization) control variables; and Y is a vector of year-fixed
effects.
Substituting equation (7) into the partial adjustment model equation (6) and
rearranging yields the following:
(8) LEVij,t − LEVij,t−1 = (ΛZj + μTjt + τt Yt)(DEVij,t) + δij,t.
I estimate equation (8) using a country-fixed-effects estimator (ordinary least
squares estimation yields similar results) with bootstrapped standard errors to account for the generated regressor (Pagan (1984), Faulkender et al. (2012)).
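The bootstrap for the generated regressor can be sketched numerically. This is a simulation, not the paper's estimator: it keeps only the institutional index Z from equation (7), omitting T and the year effects, and treats a simulated DEV as the first-stage estimate.

```python
# Sketch of the second-stage regression of equation (8) with bootstrapped
# standard errors for the generated regressor DEV. Simulated data only.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
dev_hat = rng.normal(size=n)            # first-stage estimate of DEV
z = rng.normal(size=n)                  # standardized institutional index
speed = 0.20 + 0.03 * z                 # lambda_j = Lambda * Z_j, eq. (7)
dlev = speed * dev_hat + 0.01 * rng.normal(size=n)   # equation (8)

def second_stage(y, dev, inst):
    # OLS of the leverage change on DEV and Z*DEV; Z enters only through
    # the adjustment speed, so there is no separate Z term.
    X = np.column_stack([dev, inst * dev])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef                          # [base speed, institutional effect]

boot = []
for _ in range(200):                     # resample firm-years with replacement
    idx = rng.integers(0, n, size=n)
    boot.append(second_stage(dlev[idx], dev_hat[idx], z[idx]))
se = np.array(boot).std(axis=0)          # bootstrapped standard errors
```

Resampling the observations and re-running the second stage each time lets the standard errors reflect the sampling noise that ordinary OLS standard errors would ignore because DEV is itself estimated.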
2. Institutions and Optimal Leverage
In the first stage, I estimate the following reduced-form model of leverage,
where λ is the adjustment parameter; X is a set of firm, industry, and macroeconomic characteristics; F, C, and Y are vectors of firm-, country-, and year-fixed
effects, respectively; and δ is a random-error term:
(9) LEVij,t = (λj βj) Xij,t−1 + (1 − λj) LEVij,t−1 + ϑij Fi + θj Cj + ρt Yt + πj,t (Cj × Yt) + δij,t.
The estimated coefficients from equation (9) indicate variations in capital
structure that cannot be accounted for by firm- and industry-specific factors but
are related to country-level factors:
(10) ωCYjt = θj Cj + ρt Yt + πj,t (Cj × Yt).
In the second stage, the country-level estimates derived in the first stage, ωCYjt, help examine whether country-level variations in leverage can be explained by the institutional factors, Z; macroeconomic and financial development control variables, T; and year-fixed effects, Y, using bootstrapped standard errors to account for generated regressors:

(11) ωCYjt = γZj + μTjt + τt Yt.
Equation (11) must overcome an important empirical challenge; that is, to establish the causal effect of institutional environments, one should account for the
unobserved factors that could be driving the cross-country differences in leverage.
Ideally, country-fixed effects or change regressions should alleviate this concern.
However, because of the time-invariant nature of the institutional variables, these
approaches cannot be undertaken.4 Instead, I use two alternative methodologies to
estimate equation (11). The first is a random-effects approach that would be equivalent to a fixed-effects approach under the assumption that unobserved country
effects are not correlated with the regressors. To the degree that this assumption is
violated, the estimates may be biased. The second methodology is an instrumental variable approach that isolates potentially exogenous sources of variations in
institutions. To the extent that these variables directly influence capital structure
choices or are influenced by types of firms in the country, they would result in
biased coefficient estimates. Although some caution is necessary in interpreting
the results, both methodologies yield the same conclusions.
IV. Analysis and Results

A. Reliable Firm, Industry, and Macroeconomic Determinants
I assess the relative importance of the leverage factors for capital structure
decisions by evaluating their explanatory power. Table 1 reports the relation between the firm, industry, and macroeconomic determinants of capital structure
and leverage using the separate and pooled methods. Panel A documents the consistency of the direction of the relation between leverage and each determinant.
The separate method reports the number of instances (of 37 sample countries) in
which the given determinant of leverage has a particular sign at the 90% confidence level or higher. The pooled method reports whether the given determinant
of leverage has a particular sign at the 90% confidence level or higher in the world
model. Panel B evaluates whether the leverage determinant is a dominant factor
by requiring a minimum score of 19 for each factor using the separate method and
a score equal to 1 using the pooled method.
The results of factor selection indicate that profits, firm size, tangibility,
industry leverage, and inflation are dominant factors across all firms around the
world. Larger firms and firms that have more tangible assets tend to have higher
leverage. These firms potentially have lower financial distress costs and/or lower
4 This is in contrast to adjustment speed equation (8), where DEV and its interaction terms with the
institutional variables are time varying (i.e., leverage targets depend on time-varying firm, industry,
and macroeconomic characteristics).
TABLE 1
Reliable Firm, Industry, and Macroeconomic Determinants

LEVij,t is firm i's debt ratio in year t and in country j; λ is the adjustment parameter; Xij,t−1 is a vector of firm, industry, and macroeconomic characteristics related to the costs and benefits of operating with various leverage ratios; F and C are the unobserved firm and country heterogeneity captured by the firm and country dummies, respectively; Y is a vector of year-fixed effects; and δij,t is the error term. Panel A provides a summary of the consistency of the direction of the relation between leverage and each determinant. Panel B evaluates whether the leverage determinant is a dominant factor. Columns 1 and 2 refer to the core estimation model (equation (3)), run separately for each country:

LEVij,t = (λj βj) Xij,t−1 + (1 − λj) LEVij,t−1 + ϑij Fi + ρt Yt + δij,t.

Columns 3 and 4 refer to the core model estimated by pooling all 37 sample countries:

LEVij,t = (λj βj) Xij,t−1 + (1 − λj) LEVij,t−1 + ϑij Fi + ρt Yt + θj Cj + δij,t.

The first or second column in Panel A reports the number of instances (of 37 sample countries) in separate regressions in which the given determinant of leverage has a positive or negative significant coefficient at the 90% or higher confidence level. The third or fourth column in Panel A reports 1 if the given determinant of leverage has a positive or negative significant coefficient at the 90% or higher confidence level in world regressions, and 0 otherwise. Panel B assigns a leverage determinant as a core factor using a Yes indicator if the score reported in Panel A is at least 19 for columns 1 and 2, and if it is equal to 1 for columns 3 and 4. Variable definitions are provided in the Appendix.

                                        Separate        Pooled
Firm, Industry, and
Macroeconomic Determinants              +       −       +       −
                                        1       2       3       4

Panel A. Number of Significant Correlations
PROFIT                                   2      23       0       1
MARKET-TO-BOOK RATIO                     7      17       0       0
ln(TOTAL ASSETS)                        22       2       1       0
TANGIBILITY                             19       4       1       0
INDUSTRY LEVERAGE                       20       6       1       0
INFLATION                                8      15       0       1

Panel B. Core Factors
PROFIT                                  Yes             Yes
MARKET-TO-BOOK RATIO
ln(TOTAL ASSETS)                        Yes             Yes
TANGIBILITY                             Yes             Yes
INDUSTRY LEVERAGE                       Yes             Yes
INFLATION                                               Yes
agency costs of debt. In addition, tangibility possibly reflects adverse-selection
costs related to assets in place. Similarly, firms that compete in industries in which
the median firm has higher leverage tend to carry higher leverage, consistent with
these firms having a lower probability of default. Firms that have more profits
tend to have lower leverage. This factor possibly reflects transaction costs and/or
information asymmetry costs. Finally, firms in lower inflationary environments
tend to have lower leverage. These firms could have mispriced (undervalued) debt
and/or lower tax benefits.
An advantage of examining the determinants of capital structure in an international context is that the costs and benefits of leverage should depend on each
firm’s institutional environment. I condition the firms’ circumstances on the institutional setting because some leverage determinants may be dominant in certain
types of institutional settings. Table 2 reports the relation between the firm, industry, and macroeconomic determinants of capital structure and leverage for weak,
strong, and all institutional settings. Panel A reports the number of instances
(of 18 partitions made according to the institutional indexes) in which the given
determinant of leverage has a particular sign at the 90% or higher confidence
level. Panel B evaluates whether the leverage determinant is a dominant factor by
requiring a minimum score of 9, 9, and 18 for each factor in each category of
weak, strong, and all institutions, respectively.
TABLE 2
Effects of Conditioning on the Institutional Settings for the Reliability of Firm, Industry, and Macroeconomic Determinants

LEVij,t is firm i's debt ratio in year t and in country j; λ is the adjustment parameter; Xij,t−1 is a vector of firm, industry, and macroeconomic characteristics related to the costs and benefits of operating with various leverage ratios; F and C are the unobserved firm and country heterogeneity captured by the firm and country dummies, respectively; Y is a vector of year-fixed effects; and δij,t is the error term. Panel A provides a summary of the consistency of the direction of the relation between leverage and each determinant. Panel B evaluates whether the leverage determinant is a dominant factor. The rows in Panel A report the number of instances of 18 partitions of the data made according to the institutional indexes for which the given determinant of leverage has a positive or negative significant coefficient at the 90% confidence level or higher in institutional regressions. For each institutional index, I form two portfolios based on its median value. I run the core estimation model separately for each portfolio. Columns 1 and 2 summarize the results of the institutional regressions for the weak institutional portfolio. Columns 3 and 4 summarize the results of the institutional regressions for the strong institutional portfolio. Columns 5 and 6 in Panel A give the gross total of column pairs 1, 3 and 2, 4, respectively. Panel B assigns a leverage determinant as a core factor using a Yes indicator if the score reported in Panel A is at least 9 for columns 1 to 4 and at least 18 for columns 5 and 6:

LEVij,t = (λj βj) Xij,t−1 + (1 − λj) LEVij,t−1 + ϑij Fi + ρt Yt + θj Cj + δij,t.

Variable definitions are provided in the Appendix.

Firm, Industry, and              Weak            Strong          All
Macroeconomic                    Institutions    Institutions    Institutions
Determinants                     +       −       +       −       +       −
                                 1       2       3       4       5       6

Panel A. Number of Significant Correlations
PROFIT                            2      16       1       2       3      18
MARKET-TO-BOOK RATIO              0       2       0       2       0       4
ln(TOTAL ASSETS)                  0       7      10       0      10       7
TANGIBILITY                      17       1      18       0      35       1
INDUSTRY LEVERAGE                 2       3      17       0      19       3
INFLATION                         0      10       0      11       0      21

Panel B. Core Factors
PROFIT                           Yes                             Yes
MARKET-TO-BOOK RATIO
ln(TOTAL ASSETS)                                 Yes
TANGIBILITY                      Yes             Yes             Yes
INDUSTRY LEVERAGE                                Yes             Yes
INFLATION                        Yes             Yes             Yes
Some differences emerge across weak (columns 1 and 2 of Table 2) and
strong (columns 3 and 4) institutional settings. Profits (−) are a core factor only
in weak institutional settings, whereas size (+) and industry leverage (+) are core
factors only in strong institutional settings. Overall, the selection of core factors is
mostly robust, and the direction of their impact is similar across firms from diverse
institutional environments (columns 5 and 6). However, there is one exception:
Firm size is no longer a reliable factor for leverage.
I also evaluate the robustness of the capital structure determinants to alternative definitions of financial leverage. First, I employ a measure of market
leverage, defined as long-term debt plus short-term debt divided by total assets
minus book equity plus market equity. In untabulated results, the selection of
dominant leverage factors remains unchanged. However, the market-to-book
ratio (−) is also selected as a reliable factor for market leverage. In addition,
the signs on profitability and inflation are reversed with market leverage. Some
differences also emerge across firms from diverse institutional environments.
Industry leverage is no longer a reliable factor, with this difference stemming
from firms in weak institutional settings. Second, Welch (2011) argues that leverage should be measured by the ratio of debt to invested capital. Accordingly, I
define book leverage as current debt plus long-term debt divided by invested capital (book debt plus stockholders’ equity plus minority interest) and the market
leverage ratio as the ratio of book debt to the market value of invested capital. In
unreported results, the main conclusions are similar when using these alternative
measures of financial leverage, with two major exceptions: Industry leverage now
has a negative sign when using market leverage in all institutional settings, and
inflation is no longer a reliable factor for book leverage, with this unreliability
driven mainly by firms in weak institutional settings.
B. Institutions and Capital Structure Choices
At this point, the results indicate that the variation in legal and financial institutions is correlated with the variation in the capital structure policies of firms
across countries. Do legal and financial differences cause the observed variations in capital structure policies? To better address this question, I first provide associations between institutions and the cost of transacting in debt and
equity markets. I then test whether country-level financing choices and the international variation in adjustment speeds are systematically affected by transaction costs and institutional differences. I employ 2-stage leverage specifications
to isolate the impact of institutions on leverage from that of firm and industry
characteristics.
1. The Impact of Institutional Environments on Debt and Equity Costs
What factors cause cross-country differences in capital structure choices?
By definition, these factors must relate to some variation in firms’ costs or benefits of leveraging. However, in general, direct measures of these costs are not
available. I continue my exploration of international variations in financial policies by tying them to measures of debt and equity transaction costs in various
countries.
Elkins McSherry (www.elkinsmcsherry.com), a leader in the global financial consulting industry, provides an international comparison of the direct and
indirect costs of engaging in equity and debt transactions. If institutional cost
and benefit indexes influence firms’ debt and equity costs, I expect them to be
similarly related to the Elkins McSherry indexes of transaction costs. In addition, I expect strong institutional settings to result in lower transaction costs of
both debt and equity. Table 3 reports the results of comparing securities trading
costs between weak and strong institutional settings. Institutional characteristics
determine the country-level debt and equity trading costs. Consistent with my
hypotheses, higher trading costs are almost always associated with lower-quality institutions. That is, institutional differences influence the cost of transacting in
bond and equity markets, at least as measured by these trading costs. This relation
between institutions and transaction costs suggests that the institutional environment that affects debt and equity costs should also affect financing choices around
the world.
TABLE 3
Institutional Determinants of the Debt and Equity Trading Costs

Countries are allocated into portfolios according to the sample median of the institutional indexes. Pairwise comparisons of the mean debt and equity trading costs (basis points) of the two portfolios are then conducted with t-tests. *** indicates a significant difference between groups at the 1% level. Variable definitions are provided in the Appendix.

Institutional        DEBT COSTS             EQUITY COSTS
Feature              Weak       Strong      Weak       Strong
TIME                 16.52***     7.22      10.14***     9.35
COST                 16.36***     9.42      16.24***     8.53
EFFICIENCY           16.65***     7.10      13.05***     8.57
TAX                  15.16       19.37       9.61       10.15
CREDITOR             12.89***    10.30       9.28       10.86
FORMALISM            14.64***     9.91      11.22***     9.48
ANTIDIR              13.84***     9.99      12.66***     8.93
PRENF                12.08       12.07      11.74***     9.29
EXECUTIVE            16.64***     7.39      14.76***     8.36
ENFORCE              17.25***     6.44      16.36***     8.39
LAW&ORDER            15.07***     6.97      10.81***     8.97
GOVERNMENT           17.99***     6.50      16.55***     8.42
ACCSTDS              13.88***     8.89      13.17***     8.89
EDISCLOSE            13.08***    10.38      13.19***     9.02
ELIABS               12.75***    11.28      12.14***     9.16
EPUBENF              11.02       13.66      10.09***     9.66
INSIDER              16.48***     6.69      15.77***     8.33
PUBINFO              14.24***     6.69      11.80***     8.33
2. The Impact of Institutional Environments on Adjustment Speeds
Table 3 shows that, in general, stronger institutions have lower debt and
equity costs, which in turn should lead to faster adjustment speeds to optimal
leverage. Do adjustment speeds exhibit international variation consistent with
(the dynamic trade-off) theory? In Table 4, I test how differences in the institutional environment affect the adjustment to optimal leverage. Although evaluating all available indexes concurrently is possible, such an approach could obscure
valuable information because these indexes are likely to be correlated. For this
reason, I estimate equation (8) separately for each country feature. Each column
in Table 4 provides a different institutional effect, controlling for macroeconomic
and financial development indicators and year- and country-fixed effects. To ease
economic interpretation, the institutional variables are normalized to have a mean
of 0 and a standard deviation of 1.
TABLE 4
Effect of the Institutional Setting on Adjustment Speeds
Table 4 reports the impact of each institutional determinant on adjustment speeds using a 2-stage procedure. In the
(unreported) ﬁrst stage, I estimate the following reduced-form model of leverage, where λ is the adjustment parameter;
X is a set of ﬁrm, industry, and macroeconomic characteristics; F, C, and Y are vectors of ﬁrm-, country-, and year-ﬁxed
effects, respectively; and δ is a random-error term:
LEVij,t
=
(λj βj ) Xij,t−1 + (1 − λj ) LEVij,t−1 + ϑij Fi + ρt Yt + θj Cj + δij,t .
This provides an initial set of estimated β and λ, which I use to calculate an initial estimated target leverage ratio
∗
(LEV
ij,t−1 ) and deviation from the target leverage ratio ( DEVij,t ) for each ﬁrm-year. In the second stage, I substitute the
) into the following equation to produce estimates of the determiestimated deviation from the target leverage ratio ( DEV
ij,t
nants of a ﬁrm’s adjustment speed:
LEVij,t − LEVij,t−1
)+δ ,
λj (DEV
ij,t
ij,t
=
∗
where LEVij∗,t = βj Xij,t−1 , DEV
ij,t = LEVij,t − LEVij,t−1 , and λj = ΛZj + μTjt + τt Yt ;
EQUITY COSTS
TIME
COST
EFFICIENCY
TAX
CREDITOR
FORMALISM
ANTIDIR
PRENF
Zj
DEBT COSTS
Z is a vector of an index of national institutional cost and a constant term; T is a vector of time-varying macroeconomic (gross domestic product (GDP)
growth) and ﬁnancial development (stock and bond market capitalization) control variables; Y is a vector of year-ﬁxed
effects; and Λ, μ, and τ (unreported) are vectors of coefﬁcients. Each column in the table represents a separate
estimation of the second-stage regression and reports the coefﬁcient estimates from country-ﬁxed-effects regressions.
Standard errors are bootstrapped to account for generated regressors. The p-values are reported in parentheses below
the coefﬁcient estimates. *, **, and *** indicate signiﬁcant difference between groups at the 10%, 5%, and 1% levels,
respectively. The institutional variables are transformed to standard normal variables. Variable deﬁnitions are provided in
the Appendix.
Columns 1–10 (one institutional variable Zj per column; p-values in parentheses; No. of obs. = 84,294 in every column):

Col  Zj variable  Constant           Zj                  STOCK MARKET CAP   BOND MARKET CAP   GDP GROWTH
1    LAW&ORDER    0.2130*** (0.000)  –0.0180* (0.090)    0.0308*** (0.000)  0.0134 (0.205)    0.0093 (0.145)
2    GOVERNMENT   0.2055*** (0.000)  –0.0320*** (0.000)  0.0248** (0.013)   0.0143*** (0.003) 0.0081 (0.226)
3    ACCSTDS      0.2135*** (0.000)  0.0307*** (0.003)   0.0355*** (0.000)  0.0233 (0.123)    0.0025 (0.618)
4    EDISCLOSE    0.2057*** (0.000)  0.0316*** (0.001)   0.0248*** (0.009)  0.0143 (0.315)    0.0081 (0.121)
5    ELIABS       0.1993*** (0.000)  0.0130** (0.048)    0.0245*** (0.001)  0.0139 (0.289)    0.0069 (0.161)
6    EPUBENF      0.2116*** (0.000)  –0.0630*** (0.000)  0.0468*** (0.000)  0.0154 (0.295)    0.0076 (0.149)
7    INSIDER      0.2107*** (0.000)  0.0421*** (0.002)   0.0293*** (0.000)  0.0288* (0.092)   0.0063 (0.204)
8    PUBINFO      0.1903*** (0.000)  0.0547*** (0.000)   0.0070 (0.164)     0.0192 (0.208)    0.0064 (0.190)
9    ENFORCE      0.1936*** (0.000)  0.0090 (0.313)      0.0138** (0.023)   0.0143 (0.248)    0.0011 (0.808)
10   EXECUTIVE    0.1938*** (0.000)  –0.0230* (0.070)    0.0079 (0.185)     0.0335* (0.054)   –0.0001 (0.998)

Columns 11–20 (Zj variables continued; p-values in parentheses; No. of obs. = 84,294 in every column):

Col  Constant           Zj                  STOCK MARKET CAP   BOND MARKET CAP   GDP GROWTH
11   0.2107*** (0.000)  0.0112 (0.111)      0.0523*** (0.000)  0.0089 (0.480)    0.0020 (0.724)
12   0.1849*** (0.001)  0.0471*** (0.000)   0.0306*** (0.002)  –0.0019 (0.882)   0.0094 (0.119)
13   0.1998*** (0.000)  0.0229*** (0.007)   0.0297*** (0.000)  0.0154 (0.258)    0.0076 (0.145)
14   0.1985*** (0.000)  0.0231** (0.025)    0.0275*** (0.000)  0.0151 (0.272)    0.0096 (0.104)
15   0.1665*** (0.001)  0.0815*** (0.000)   0.0083 (0.135)     0.0273* (0.061)   0.0040 (0.415)
16   0.1718*** (0.001)  0.0538*** (0.000)   0.0103* (0.087)    0.0016 (0.902)    –0.0030 (0.548)
17   0.1881*** (0.000)  0.0340*** (0.000)   0.0225*** (0.001)  0.0044 (0.698)    –0.0024 (0.632)
18   0.2000*** (0.000)  0.0255*** (0.001)   0.0247*** (0.001)  0.0234 (0.117)    –0.0001 (0.986)
19   0.1884*** (0.000)  0.0402*** (0.000)   0.0163*** (0.003)  0.0107 (0.381)    0.0138** (0.029)
20   0.2158*** (0.000)  –0.0008 (0.938)     0.0378*** (0.000)  0.0301** (0.037)  0.0014 (0.765)
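The bootstrapped standard errors described in the table note (needed because the dependent variable, the adjustment speed, is itself a generated regressor) can be illustrated with a pairs bootstrap. The sketch below uses simulated data and hypothetical variable names, not the paper's sample or its exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated second-stage data: adjustment speeds regressed on a
# standard-normal institutional index Z plus controls. All numbers
# here are illustrative assumptions, not estimates from the paper.
n = 200
Z = rng.normal(size=n)            # institutional index (standardized)
stock_cap = rng.normal(size=n)    # stock market capitalization control
gdp_growth = rng.normal(size=n)   # GDP growth control
speed = 0.21 + 0.03 * Z + 0.03 * stock_cap + rng.normal(scale=0.05, size=n)

X = np.column_stack([np.ones(n), Z, stock_cap, gdp_growth])
beta = np.linalg.lstsq(X, speed, rcond=None)[0]  # point estimates

# Pairs bootstrap: resample whole observations with replacement,
# re-run OLS each time, and take the standard deviation of the
# replicated coefficients as the standard error. Resampling the
# full pair (y, X) propagates the noise in the generated regressand.
B = 500
draws = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    draws[b] = np.linalg.lstsq(X[idx], speed[idx], rcond=None)[0]
se = draws.std(axis=0)
```

In a full replication the first stage (estimating firm-level adjustment speeds) would be re-run inside each bootstrap iteration; the loop above only sketches the second-stage resampling.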
I first provide direct evidence on the role of debt and equity transaction costs in explaining cross-country differences in adjustment speeds. A 1-standard-deviation increase in both debt and equity trading costs decreases the typical firm's adjustment speed by approximately 2%, compared with an average adjustment speed of 21% (the coefficient estimate on the constant term). That is, transaction costs significantly influence financing thresholds, indicating that the institutional differences that determine these costs should also influence adjustment speeds.
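The economic magnitude of a 2% slowdown relative to a 21% baseline speed is easiest to see as a half-life: the number of years needed to close half of the gap between actual and target leverage. The helper below applies that standard transformation to the magnitudes quoted above; it is an illustration, not a calculation reported in the paper:

```python
import math

def half_life(speed):
    """Years for half of a leverage deviation to close, given the
    annual adjustment speed (fraction of the gap closed per year)."""
    return math.log(0.5) / math.log(1.0 - speed)

base = half_life(0.21)    # average speed quoted above: about 2.9 years
slowed = half_life(0.19)  # after a 1-SD rise in trading costs: about 3.3 years
```

So a seemingly small 2% drop in the adjustment speed lengthens the half-life of a leverage deviation by roughly four months.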
Firms from countries that administer the bankruptcy process in court in a
manner that is less time consuming, less costly, and more efficient should adjust
more rapidly to their targets because of lower deadweight costs associated with
the insolvency process. Two of the three factors determining the administration of
bankruptcy influence capital structure adjustment in the direction hypothesized.
A 1-standard-deviation increase in the costliness and efficiency of the bankruptcy
process decreases the typical firm’s adjustment speed by 3%. In addition, the tax
benefits of leverage should increase the value of reaching and maintaining the
leverage target. The effect of tax benefits on the adjustment speed is significantly
positive (3%).
The rebalancing costs should be lower in countries with better access to capital markets, in which firms can repeatedly adjust their debt or equity to reach their
optimal leverage rather than waiting until access becomes available or relatively
cheaper. Capit...
