
Financial Crises and Bank Liquidity Creation

Allen N. Berger† and Christa H. S. Bouwman‡

October 2008

Financial crises and bank liquidity creation are often connected. We examine this connection from two perspectives. First, we examine the aggregate liquidity creation of banks before, during, and after five major financial crises in the U.S. from 1984:Q1 to 2008:Q1. We uncover numerous interesting patterns, such as a significant build-up or drop-off of "abnormal" liquidity creation before each crisis, where "abnormal" is defined relative to a time trend and seasonal factors. Banking and market-related crises differ in that banking crises were preceded by abnormal positive liquidity creation, while market-related crises were generally preceded by abnormal negative liquidity creation. Bank liquidity creation has both decreased and increased during crises, likely both exacerbating and ameliorating the effects of crises. Off-balance sheet guarantees such as loan commitments moved more than on-balance sheet assets such as mortgages and business lending during banking crises. Second, we examine the effect of pre-crisis bank capital ratios on the competitive positions and profitability of individual banks during and after each crisis. The evidence suggests that high capital served large banks well around banking crises – they improved their liquidity creation market share and profitability during these crises and were able to hold on to their improved performance afterwards. In addition, high-capital listed banks enjoyed significantly higher abnormal stock returns than low-capital listed banks during banking crises. These benefits did not hold or held to a lesser degree around market-related crises and in normal times. In contrast, high capital ratios appear to have helped small banks improve their liquidity creation market share during banking crises, market-related crises, and normal times alike, and the gains in market share were sustained afterwards. Their profitability improved during two crises and subsequent to virtually every crisis. Similar results were observed during normal times for small banks.

† University of South Carolina, Wharton Financial Institutions Center, and CentER – Tilburg University. Contact details: Moore School of Business, University of South Carolina, 1705 College Street, Columbia, SC 29208. Tel: 803-576-8440. Fax: 803-777-6876. E-mail: [email protected].
‡ Case Western Reserve University and Wharton Financial Institutions Center. Contact details: Weatherhead School of Management, Case Western Reserve University, 10900 Euclid Avenue, 362 PBL, Cleveland, OH 44106. Tel: 216-368-3688. Fax: 216-368-6249. E-mail: [email protected].

Keywords: Financial Crises, Liquidity Creation, and Banking.
JEL Classification: G28 and G21.

The authors thank Asani Sarkar, Bob DeYoung, Peter Ritchken, Greg Udell, and participants at presentations at the Summer Research Conference 2008 in Finance at the ISB in Hyderabad, the International Monetary Fund, the University of Kansas' Southwind Finance Conference, and Erasmus University for useful comments.

Financial Crises and Bank Liquidity Creation

1. Introduction

Over the past quarter century, the U.S. has experienced a number of financial crises. At the heart of these crises are often issues surrounding liquidity provision by the banking sector and financial markets (e.g., Acharya, Shin, and Yorulmazer 2007).
For example, in the current subprime lending crisis, liquidity seems to have dried up as banks seem less willing to lend to individuals, firms, other banks, and capital market participants, and loan securitization appears to be significantly depressed. This behavior of banks is summarized by the Economist: "Although bankers are always stingier in a downturn, [...] lots of banks said they had also cut back lending because of a slide in their current or expected capital and liquidity."1

The practical importance of liquidity during crises is buttressed by financial intermediation theory, which indicates that the creation of liquidity is an important reason why banks exist.2 Early contributions argue that banks create liquidity by financing relatively illiquid assets such as business loans with relatively liquid liabilities such as transactions deposits (e.g., Bryant 1980, Diamond and Dybvig 1983). More recent contributions suggest that banks also create liquidity off the balance sheet through loan commitments and similar claims to liquid funds (e.g., Holmstrom and Tirole 1998, Kashyap, Rajan, and Stein 2002).3 The creation of liquidity makes banks fragile and susceptible to runs (e.g., Diamond and Dybvig 1983, Chari and Jagannathan 1988), and such runs can lead to crises via contagion effects. Bank liquidity creation can also have real effects, in particular if a financial crisis ruptures the creation of liquidity (e.g., Dell'Ariccia, Detragiache, and Rajan 2008).4 Exploring the relationship between financial crises and bank liquidity creation can thus yield potentially interesting economic insights and may have important policy implications.

The goals of this paper are twofold. The first is to examine the aggregate liquidity creation of banks around five financial crises in the U.S. over the past quarter century.5 The crises include two banking crises (the credit crunch of the early 1990s and the subprime lending crisis of 2007 – ?) and three crises that can be viewed as primarily market-related (the 1987 stock market crash, the Russian debt crisis plus the Long-Term Capital Management meltdown in 1998, and the bursting of the dot.com bubble plus the September 11 terrorist attacks of the early 2000s). This examination is intended to shed light on whether there are any connections between financial crises and aggregate liquidity creation, and whether these vary based on the nature of the crisis (i.e., banking versus market-related crisis).

1 "The credit crisis: Financial engine failure" – The Economist, February 7, 2008.
2 According to the theory, another central role of banks in the economy is to transform credit risk (e.g., Diamond 1984, Ramakrishnan and Thakor 1984, Boyd and Prescott 1986). Recently, Coval and Thakor (2005) theorize that banks may also arise in response to the behavior of irrational agents in financial markets.
3 James (1981) and Boot, Thakor, and Udell (1991) endogenize the loan commitment contract due to informational frictions. The loan commitment contract is subsequently used in Holmstrom and Tirole (1998) and Kashyap, Rajan, and Stein (2002) to show how banks can provide liquidity to borrowers.
4 Acharya and Pedersen (2005) show that liquidity risk also affects the expected returns on stocks.
A good understanding of the behavior of bank liquidity creation around financial crises is also important to shed light on whether banks create "too little" or "too much" liquidity, and whether bank behavior exacerbates or ameliorates the effects of crises. We document the empirical regularities related to these issues, so as to raise additional interesting questions for further empirical and theoretical examinations.

The second goal is to study the effect of pre-crisis equity capital ratios on the competitive positions and profitability of individual banks around each crisis. Since bank capital affects liquidity creation (e.g., Diamond and Rajan 2000, 2001, Berger and Bouwman forthcoming), it is likely that banks with different capital ratios behave differently during crises in terms of their liquidity creation responses. Specifically, we ask: are high-capital banks able to gain market share in terms of liquidity creation at the expense of low-capital banks during a crisis, and does such enhanced market share translate into higher profitability? If so, are the high-capital banks able to sustain their improved competitive positions after the financial crisis is over?

The recent acquisitions of Countrywide, Bear Stearns, and Washington Mutual provide interesting case studies in this regard. All three firms ran low on capital and had to be bailed out by banks with stronger capital positions. Bank of America (Countrywide's acquirer) and J.P. Morgan Chase (acquirer of Bear Stearns and Washington Mutual's banking operations) had capital ratios high enough to enable them to buy their rivals at a small fraction of what they were worth a year before, thereby gaining a potential competitive advantage.6 The recent experience of IndyMac Bank provides another interesting example. The FDIC seized IndyMac Bank after it suffered substantive losses and depositors had started to run on the bank. The FDIC intends to sell the bank, preferably as a single entity, but if that does not work, the bank will be sold off in pieces. Given the way the regulatory approval process for bank acquisitions works, it is likely that the acquirer(s) will have a strong capital base.7

5 Studies on the behavior of banks around financial crises have typically focused on commercial and real estate lending (e.g., Berger and Udell 1994, Hancock, Laing, and Wilcox 1995, Dell'Ariccia, Igan, and Laeven 2008). We focus on the more comprehensive notion of bank liquidity creation.
6 On Sunday, March 16, 2008, J.P. Morgan Chase agreed to pay $2 a share to buy all of Bear Stearns, less than one-tenth of the firm's share price on Friday and a small fraction of the $170 share price a year before. On March 24, 2008, it increased its bid to $10, and completed the transaction on May 30, 2008. On January 11, Bank of America announced it would pay $4 billion for Countrywide, after Countrywide's market capitalization had plummeted 85% during the preceding 12 months. The transaction was completed on July 1, 2008. After a $16.4 billion ten-day bank "walk", Washington Mutual was placed into the receivership of the FDIC on September 25, 2008. J.P. Morgan Chase purchased the banking business for $1.9 billion and re-opened the bank the next day. On September 26, 2008, the holding company and its remaining subsidiary filed for bankruptcy. Washington Mutual, the sixth-largest bank in the U.S. before its collapse, is the largest bank failure in U.S. financial history.

A financial crisis is a natural event to examine how capital affects the competitive positions of banks. During "normal" times, capital has many effects on the bank, some of which counteract each other, making it difficult to learn much. For example, capital helps the bank cope more effectively with risk,8 but it also reduces the value of the deposit insurance put option (Merton 1977).
During a crisis, risks become elevated and the risk-absorption capacity of capital becomes paramount. Banks with high capital, which are better buffered against the shocks of the crisis, may thus gain a potential advantage.

7 After peaking at $50.11 on May 8, 2006, IndyMac's shares lost 87% of their value in 2007 and another 95% in 2008. Its share price closed at $0.28 on July 11, 2008, the day before it was seized by the FDIC.
8 There are numerous papers that argue that capital enhances the risk-absorption capacity of banks (e.g., Bhattacharya and Thakor 1993, Repullo 2004, Von Thadden 2004).

To examine the behavior of bank liquidity creation around financial crises, we calculate the amount of liquidity created by the banking sector using Berger and Bouwman's (forthcoming) preferred liquidity creation measure. This measure takes into account the fact that banks create liquidity both on and off the balance sheet and is constructed using a three-step procedure. In the first step, all bank assets, liabilities, equity, and off-balance sheet activities are classified as liquid, semi-liquid, or illiquid. This is done based on the ease, cost, and time for customers to obtain liquid funds from the bank, and the ease, cost, and time for banks to dispose of their obligations in order to meet these liquidity demands. This classification process uses information on both product category and maturity for all activities other than loans; due to data limitations, loans are classified based solely on category ("cat"). Thus, residential mortgages are classified as more liquid than business loans regardless of maturity because it is generally easier to securitize and sell such mortgages than business loans. In the second step, weights are assigned to these activities. The weights are consistent with the theory in that maximum liquidity is created when illiquid assets (e.g., business loans) are transformed into liquid liabilities (e.g., transactions deposits) and maximum liquidity is destroyed when liquid assets (e.g., treasuries) are transformed into illiquid liabilities (e.g., subordinated debt) or equity. In the third step, a "cat fat" liquidity creation measure is constructed, where "fat" refers to the inclusion of off-balance sheet activities. Although Berger and Bouwman construct four different liquidity creation measures, they indicate that "cat fat" is the preferred measure. They argue that to assess the amount of liquidity creation, the ability to securitize or sell a particular loan category is more important than the maturity of those loans, and the inclusion of off-balance sheet activities is critical.

We apply the "cat fat" liquidity creation measure to quarterly data on virtually all U.S. commercial and credit card banks from 1984:Q1 to 2008:Q1. Our measurement of aggregate liquidity creation by banks allows us to examine the behavior of liquidity created prior to, during, and after each crisis.
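In compact form, and previewing the ½ / 0 / -½ weights assigned in Section 3.1, the "cat fat" measure is a weighted sum of the classified activities (this is our restatement of the construction for the reader's convenience, not the authors' notation):

liquidity creation ("cat fat") = ½ x (illiquid assets + liquid liabilities + illiquid off-balance sheet guarantees)
                               + 0 x (semi-liquid assets + semi-liquid liabilities + semi-liquid off-balance sheet activities)
                               - ½ x (liquid assets + illiquid liabilities + equity + liquid off-balance sheet activities)

where each off-balance sheet item receives the weight of the functionally similar on-balance sheet item.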
The popular press has provided anecdotal accounts of liquidity drying up during some financial crises as well as excessive liquidity provision at other times that led to credit expansion bubbles (e.g., the subprime lending crisis). We attempt to give empirical content to these notions of "too little" and "too much" liquidity created by banks. Liquidity creation has quadrupled in real terms over the sample period and appears to have seasonal components (as documented below). Since no theories exist that explain the intertemporal behavior of liquidity creation, we take an essentially empirical approach to the problem and focus on how far liquidity creation lies above or below a time trend and seasonal factors.10 That is, we focus on "abnormal" liquidity creation. The use of this measure rests on the supposition that some "normal" amount of liquidity creation exists, acknowledging that at any point in time, liquidity creation may be "too much" or "too little" in dollar terms.

9 Their alternative measures include "cat nonfat," "mat fat," and "mat nonfat." The "nonfat" measures exclude off-balance sheet activities, and the "mat" measures classify loans by maturity rather than by product category.
10 As alternative approaches, we use the dollar amount of liquidity creation per capita and liquidity creation divided by GDP and obtain similar results (see Section 4.2).

Our main results regarding the behavior of liquidity creation around financial crises are as follows. First, prior to financial crises, there seems to have been a significant build-up or drop-off of "abnormal" liquidity creation. Second, banking and market-related crises differ in two respects. The banking crises (the credit crunch of 1990-1992 and the current subprime lending crisis) were preceded by abnormal positive liquidity creation by banks, whereas the market-related crises were generally preceded by abnormal negative liquidity creation. In addition, the banking crises themselves seemed to change the trajectory of aggregate liquidity creation, while the market-related crises did not appear to do so. Third, liquidity creation has both decreased during crises (e.g., the 1990-1992 credit crunch) and increased during crises (e.g., the 1998 Russian debt crisis / LTCM bailout). Thus, liquidity creation likely both exacerbated and ameliorated the effects of crises. Fourth, off-balance sheet illiquid guarantees (primarily loan commitments) moved more than semi-liquid assets (primarily mortgages) and illiquid assets (primarily business loans) during banking crises. Fifth, the current subprime lending crisis was preceded by an unusually high positive abnormal amount of aggregate liquidity creation, possibly caused by lax lending standards that led banks to extend increasing amounts of credit and off-balance sheet guarantees. This suggests a possible dark side of bank liquidity creation. While financial fragility may be needed to induce banks to create liquidity (e.g., Diamond and Rajan 2000, 2001), our analysis raises the intriguing possibility that the causality may also be reversed in the sense that too much liquidity creation may lead to financial fragility.

We then turn to the second goal of the paper – examining whether banks' pre-crisis capital ratios affect their competitive positions and profitability around financial crises.
To examine the effect on a bank's competitive position, we regress the change in its market share of liquidity creation – measured as the average market share of aggregate liquidity creation during the crisis (or over the eight quarters after the crisis) minus the average market share over the eight quarters before the crisis, expressed as a proportion of the bank's average pre-crisis market share – on its average pre-crisis capital ratio and a set of control variables.11 Since the analyses in the first half of the paper reveal a great deal of heterogeneity in crises, we run these regressions on a per-crisis basis, rather than pooling the data across crises. The control variables include bank size, bank risk, bank holding company membership, local market competition,12 and proxies for the economic circumstances in the local markets in which the bank operates. Moreover, we examine large and small banks as two separate groups since the results in Berger and Bouwman (forthcoming) indicate that the effect of capital on liquidity creation differs across large and small banks.13

11 Defining market share this way is a departure from previous research (e.g., Laeven and Levine 2007), in which market share relates to the bank's weighted-average local market share of total deposits.
12 While our focus is on the change in banks' competitive positions measured in terms of their aggregate liquidity creation market shares, we control for "local market competition" measured as the bank-level Herfindahl index based on local market deposit market shares.
13 Berger and Bouwman use three size categories: large, medium, and small banks. We combine the large and medium bank categories into one "large bank" category.

One potential concern is that differences in bank capital ratios may simply reflect differences in bank risk. Banks that hold higher capital ratios because their investment portfolios are riskier may not improve their competitive positions around financial crises. Our empirical design takes this into account. The inclusion of bank risk as a control variable is critical and ensures that the measured effect of capital on a bank's market share is net of the effect of risk.

We find evidence that high-capital large banks improved their market share of liquidity creation during the two banking crises, but not during the market-related crises. After the credit crunch of the early 1990s, high-capital large banks held on to their improved competitive positions. Since the current subprime lending crisis was not over at the end of the sample period, we cannot yet tell whether high-capital large banks will also hold on to their improved competitive positions after this crisis. In contrast to the large banks, high-capital small banks seemed to enhance their competitive positions during all crises and held on to their improved competitive positions after the crises as well.

Next, we focus on the effect of pre-crisis bank capital on the profitability of the bank around each crisis. We run regressions that are similar to the ones described above with the change in return on equity (ROE) as the dependent variable. We find that high-capital large banks improved their ROE in those cases in which they enhanced their liquidity creation market share – the two banking crises – and were able to hold on to their improved profitability after the credit crunch. They also increased their profitability after the market-related crises.
In contrast, for high-capital small banks, profitability improved during two crises and subsequent to virtually every crisis.

As an additional analysis, we examine whether the improved competitive positions and profitability of high-capital banks translated into better stock return performance. To perform this analysis, we focus on listed banks and bank holding companies (BHCs). If multiple banks are part of the same listed BHC, their financial statements are added together to create pro-forma financial statements of the BHC. The results confirm the earlier change-in-performance findings for large banks: listed banks with high capital ratios enjoyed significantly larger abnormal returns than banks with low capital ratios during banking crises, but not during market-related crises. Our results are based on a five-factor asset pricing model that includes the three Fama-French (1993) factors, momentum, and a proxy for the slope of the yield curve.

We also check whether high capital provided similar advantages outside crisis periods, i.e., during "normal" times. We find that large banks with high capital ratios did not enjoy either market share or profitability gains over the other large banks, whereas for small banks, results are similar to the small-bank findings discussed above. Moreover, outside banking crises, high capital was not associated with high stock returns. Combined, the results suggest that high capital ratios serve large banks well, particularly around banking crises. In contrast, high capital ratios appear to help small banks around banking crises, market-related crises, and normal times alike.

The remainder of this paper is organized as follows. Section 2 discusses the related literature. Section 3 explains the liquidity creation measures and our sample based on data of U.S. banks from 1984:Q1 to 2008:Q1. Section 4 describes the behavior of aggregate bank liquidity creation around five financial crises and draws some general conclusions. Section 5 discusses the tests of the effects of pre-crisis capital ratios on banks' competitive positions and profitability around financial crises and "normal" times. This section also examines the stock returns of high- and low-capital listed banking organizations during each crisis and during "normal" times. Section 6 concludes.

2. Related literature

This paper is related to two literatures. The first is the literature on financial crises.14 One strand in this literature has focused on financial crises and fragility. Some papers have analyzed contagion. Contributions in this area suggest that a small liquidity shock in one area may have a contagious effect throughout the economy (e.g., Allen and Gale 1998, 2000). Other papers have focused on the determinants of financial crises and the policy implications (e.g., Bordo, Eichengreen, Klingebiel, and Martinez-Peria 2001, Demirguc-Kunt, Detragiache, and Gupta 2006, Lorenzoni 2008, Claessens, Klingebiel, and Laeven forthcoming). A second strand examines the effect of financial crises on the real sector (e.g., Friedman and Schwarz 1963, Bernanke 1983, Bernanke and Gertler 1989, Dell'Ariccia, Detragiache, and Rajan 2008, Shin forthcoming). These papers find that financial crises increase the cost of financing and reduce credit, which adversely affects corporate investment and may lead to reduced growth and recessions.

14 Allen and Gale (2007) provide a detailed overview of the causes and consequences of financial crises.
That is, financial crises have independent real effects (see Dell’Ariccia, Detragiache, and Rajan 2008). In contrast to these papers, we examine how the amount of liquidity created by the banking sector behaved around financial crises in the U. S. , and explore systematic patterns in the data. The second literature to which this paper is related focuses on the strategic use of leverage in product-market competition for non-financial firms (e. g. , Brander and Lewis 1986, Campello 2006, Lyandres 2006).This literature suggests that financial leverage can affect competitive dynamics. While this literature has not focused on banks, we analyze the effects of crises on the competitive positioning and profitability of banks based on their pre-crisis capital ratios. Our hypothesis is that in the case of banks, the competitive implications of capital are likely to be most pronounced during a crisis when a bank’s capitalization has a major influence on its ability to survive the crisis, particularly in light of regulatory discretion in closing banks or otherwise resolving problem institutions.Liquidity creation may be a channel through which this competitive advantage is gained or lost. 15 3. Description of the liquidity creation measure and sample We calculate the dollar amount of liquidity created by the banking sector using Berger and Bouwman’s (forthcoming) preferred â€Å"cat fat† liquidity creation measure. In this section, we explain briefly what this acronym stand s for and how we construct this measure. 16 We then describe our sample. All financial values are expressed in real 2007:Q4 dollars using the implicit GDP price deflator. 3. 1. Liquidity creation measureTo construct a measure of liquidity creation, we follow Berger and Bouwman’s three-step procedure (see Table 1). Below, we briefly discuss these three steps. In Step 1, we classify all bank activities (assets, liabilities, equity, and off-balance sheet activities) as liquid, semi-liquid, or illiquid. For assets, we do this based on the ease, cost, and time for banks to dispose of their obligations in order to meet these liquidity demands. For liabilities and equity, we do this 15 Allen and Gale (2004) analyze how competition affects financial stability. We reverse the causality and examine the effect of financial crises on competition. 6 For a more detailed discussion, see Berger and Bouwman (forthcoming). 8 based on the ease, cost, and time for customers to obtain liquid fund s from the bank. We follow a similar approach for off-balance sheet activities, classifying them based on functionally similar on-balance sheet activities. For all activities other than loans, this classification process uses information on both product category and maturity. Due to data restrictions, we classify loans entirely by category (â€Å"cat†). 17 In Step 2, we assign weights to all the bank activities classified in Step 1.The weights are consistent with liquidity creation theory, which argues that banks create liquidity on the balance sheet when they transform illiquid assets into liquid liabilities. We therefore apply positive weights to illiquid assets and liquid liabilities. Following similar logic, we apply negative weights to liquid assets and illiquid liabilities and equity, since banks destroy liquidity when they use illiquid liabilities to finance liquid assets. We use weights of ? and -? 
For example, when $1 of liquid liabilities is used to finance $1 in illiquid assets, liquidity creation equals ½ * $1 + ½ * $1 = $1. In this case, maximum liquidity is created. However, when $1 of liquid liabilities is used to finance $1 in liquid assets, liquidity creation equals ½ * $1 + (-½) * $1 = $0. In this case, no liquidity is created as the bank holds items of approximately the same liquidity as those it gives to the nonbank public. Maximum liquidity is destroyed when $1 of illiquid liabilities or equity is used to finance $1 of liquid assets. In this case, liquidity creation equals -½ * $1 + (-½) * $1 = -$1. An intermediate weight of 0 is applied to semi-liquid assets and liabilities. Weights for off-balance sheet activities are assigned using the same principles.

In Step 3, we combine the activities as classified in Step 1 and as weighted in Step 2 to construct Berger and Bouwman's preferred "cat fat" liquidity creation measure. This measure classifies loans by category ("cat"), while all activities other than loans are classified using information on product category and maturity, and it includes off-balance sheet activities ("fat"). Berger and Bouwman construct four liquidity creation measures by alternatively classifying loans by category or maturity, and by alternatively including or excluding off-balance sheet activities. However, they argue that "cat fat" is the preferred measure since, for liquidity creation, banks' ability to securitize or sell loans is more important than loan maturity, and banks do create liquidity both on the balance sheet and off the balance sheet.

17 Alternatively, we could classify loans by maturity ("mat"). However, Berger and Bouwman argue that it is preferable to classify them by category since, for loans, the ability to securitize or sell is more important than their maturity.

To obtain the dollar amount of liquidity creation at a particular bank, we multiply the weights of ½, -½, or 0, respectively, times the dollar amounts of the corresponding bank activities and add the weighted dollar amounts.

3.2. Sample description

We include virtually all commercial and credit card banks in the U.S. in our study.18 For each bank, we obtain quarterly Call Report data from 1984:Q1 to 2008:Q1. We keep a bank if it: 1) has commercial real estate or commercial and industrial loans outstanding; 2) has deposits; 3) has an equity capital ratio of at least 1%; and 4) has gross total assets or GTA (total assets plus allowance for loan and lease losses and the allocated transfer risk reserve) exceeding $25 million. We end up with data on 18,134 distinct banks, yielding 907,159 bank-quarter observations over our sample period.

For each bank, we calculate the dollar amount of liquidity creation using the process described in Section 3.1. The amount of liquidity creation and all other financial values are put into real 2007:Q4 dollars using the implicit GDP price deflator. When we explore aggregate bank liquidity creation around financial crises, we focus on the real dollar amount of liquidity creation by the banking sector. To obtain this, we aggregate the liquidity created by all banks in each quarter and end up with a sample that contains 97 inflation-adjusted, quarterly liquidity creation amounts.
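To make the weighting scheme of Section 3.1 concrete, the sketch below applies the ½ / 0 / -½ weights to a stylized bank. The category groupings and dollar figures are made-up illustrations, not the paper's actual Call Report mappings, and the code is a minimal sketch of the calculation rather than the authors' implementation.

# Weights by liquidity class, from the asset-side perspective:
# illiquid assets +1/2, semi-liquid items 0, liquid assets -1/2.
WEIGHTS = {"illiquid": 0.5, "semi_liquid": 0.0, "liquid": -0.5}

def liquidity_creation(assets, liabilities_and_equity, off_balance_sheet):
    """Dollar liquidity creation as a weighted sum of classified activities.

    Liabilities and equity take the opposite sign of assets, so liquid
    liabilities (e.g., transactions deposits) get +1/2 while illiquid
    liabilities and equity get -1/2. Off-balance sheet items are weighted
    like functionally similar on-balance sheet items.
    """
    lc = 0.0
    for category, amount in assets.items():
        lc += WEIGHTS[category] * amount
    for category, amount in liabilities_and_equity.items():
        lc += -WEIGHTS[category] * amount
    for category, amount in off_balance_sheet.items():
        lc += WEIGHTS[category] * amount
    return lc

# Hypothetical bank, figures in $ millions: business loans (illiquid) 30,
# residential mortgages (semi-liquid) 20, treasuries (liquid) 10, funded by
# transactions deposits (liquid) 40 and equity (illiquid) 20, plus loan
# commitments (illiquid guarantees) 15.
assets = {"illiquid": 30.0, "semi_liquid": 20.0, "liquid": 10.0}
liabilities_and_equity = {"liquid": 40.0, "illiquid": 20.0}
off_balance_sheet = {"illiquid": 15.0}
print(liquidity_creation(assets, liabilities_and_equity, off_balance_sheet))  # 27.5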
In contrast, when we examine how capital affects the competitive positions of banks, we focus on the amount of liquidity created by individual banks around each crisis. Given documented differences between large and small banks in terms of portfolio composition (e.g., Kashyap, Rajan, and Stein 2002, Berger, Miller, Petersen, Rajan, and Stein 2005) and the effect of capital on liquidity creation (Berger and Bouwman forthcoming), we split the sample into large banks (between 330 and 477 observations, depending on the crisis) and small banks (between 5,556 and 6,343 observations, depending on the crisis), and run all change in market share and profitability regressions separately for these two sets of banks. Large banks have gross total assets (GTA) exceeding $1 billion at the end of the quarter before a crisis and small banks have GTA up to $1 billion at the end of that quarter.19,20

18 Berger and Bouwman (forthcoming) include only commercial banks. We also include credit card banks to avoid an artificial $0.19 trillion drop in bank liquidity creation in the fourth quarter of 2006 when Citibank N.A. moved its credit-card lines to Citibank South Dakota N.A., a credit card bank.
19 As noted before, we combine Berger and Bouwman's large and medium bank categories into one "large bank" category. Recall that all financial values are expressed in real 2007:Q4 dollars.
20 GTA equals total assets plus the allowance for loan and lease losses and the allocated transfer risk reserve. Total assets on Call Reports deduct these two reserves, which are held to cover potential credit losses. We add these reserves back to measure the full value of the loans financed and the liquidity created by the bank on the asset side.

4. The behavior of aggregate bank liquidity creation around financial crises

This section focuses on the first goal of the paper – examining the aggregate liquidity creation of banks across five financial crises in the U.S. over the past quarter century. The crises include the 1987 stock market crash, the credit crunch of the early 1990s, the Russian debt crisis plus Long-Term Capital Management (LTCM) bailout of 1998, the bursting of the dot.com bubble and the Sept. 11 terrorist attacks of the early 2000s, and the current subprime lending crisis. We first provide summary statistics and explain our empirical approach. We then discuss alternative measures of abnormal liquidity creation. Next, we describe the behavior of bank liquidity creation before, during, and after each crisis. Finally, we draw some general conclusions from these results.

4.1. Summary statistics and empirical approach

Figure 1 Panel A shows the dollar amount of liquidity created by the banking sector, calculated using the "cat fat" liquidity creation measure over our sample period. As shown, liquidity creation has increased substantially over time: it has more than quadrupled from $1.369 trillion in 1984:Q1 to $5.06 trillion in 2008:Q1 (in real 2007:Q4 dollars). We want to examine whether liquidity creation by the banking sector is "high," "low," or at a "normal" level around financial crises. Since no theories exist that explain the intertemporal behavior of liquidity creation or generate numerical estimates of "normal" liquidity creation, we need a reasonable empirical approach. At first blush, it may seem that we could simply calculate the average amount of bank liquidity creation over the entire sample period and view amounts above this sample average as "high" and amounts below the average as "low." However, Figure 1 Panel A clearly shows that this approach would cause us to classify the entire second half of the sample period (1996:Q1 – 2008:Q1) as "high" and the entire first half of the sample period (1984:Q1 – 1995:Q4) as "low." We therefore do not use this approach.
The approach we take is aimed at calculating the "abnormal" amount of liquidity created by the banking sector based on a time trend. It focuses on whether liquidity creation lies above or below this time trend, and also deseasonalizes the data to ensure that we do not base our conclusions on mere seasonal effects. We detrend and deseasonalize the data by regressing the dollar amount of liquidity creation on a time index and three quarterly dummies. The residuals from this regression measure the "abnormal" dollar amount of liquidity creation in a particular quarter. That is, they measure how far (deseasonalized) liquidity creation lies above or below the trend line. If abnormal liquidity creation is greater than (smaller than) $0, the dollar amount of liquidity created by the banking sector lies above (below) the time trend. If abnormal liquidity creation is high (low) relative to the time trend and seasonal factors, we will interpret this as liquidity creation being "too high" ("too low"). Figure 1 Panel B shows abnormal liquidity creation over time. The amount of liquidity created by the banking sector was high (yet declining) in the mid-1980s, low in the mid-1990s, and high (and mostly rising) in the most recent years.

4.2. Alternative measures of abnormal liquidity creation

We considered several alternative approaches to measuring abnormal liquidity creation. One possibility is to scale the dollar amount of liquidity creation by total population. The idea behind this approach is that a "normal" amount of liquidity creation may exist in per capita terms. The average amount of liquidity creation per capita over our sample period could potentially serve as the "normal" amount, and deviations from this average would be viewed as abnormal. To calculate per capita liquidity creation, we obtain annual U.S. population estimates from the U.S. Census Bureau. Figure 2 Panel A shows per capita liquidity creation over time. The picture reveals that per capita liquidity creation more than tripled from $5.8K in 1984:Q1 to $18.8K in 2008:Q1. Interestingly, the picture looks very similar to the one shown in Panel A, perhaps because the annual U.S. population growth rate is low. For reasons similar to those in our earlier analysis, we calculate abnormal per capita liquidity creation by detrending and deseasonalizing the data as we did in the previous section. Figure 2 Panel B shows abnormal per capita liquidity creation over time.

Another possibility is to scale the dollar amount of liquidity creation by GDP. Since liquidity creation by banks may causally affect GDP, this approach seems less appropriate. Nonetheless, we show the results for completeness. Figure 2 Panel C shows the dollar amount of liquidity creation divided by GDP. The picture reveals that bank liquidity creation has increased from 19.9% of GDP in 1984:Q1 to 40.4% of GDP in 2008:Q1. While liquidity creation more than quadrupled over the sample period, GDP doubled. Importantly, the picture looks similar to the one shown in Panel A. Again, for reasons similar to those in our earlier analysis, we detrend and deseasonalize the data to obtain abnormal liquidity creation divided by GDP. Figure 2 Panel D shows abnormal liquidity creation divided by GDP over time.

Since these alternative approaches yield results that are similar to those shown in Section 4.1, we focus our discussions on the abnormal amount of liquidity creation (rather than the abnormal amount of per capita liquidity creation or the abnormal amount of liquidity creation divided by GDP) around financial crises.
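The detrending and deseasonalization used in Sections 4.1 and 4.2 is a simple OLS regression of each quarterly series on a time index and three quarterly dummies, with the residuals taken as the "abnormal" series. A minimal sketch of this step follows; the use of pandas/statsmodels and the variable names are our assumptions for illustration, not the authors' code.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def abnormal_series(lc: pd.Series) -> pd.Series:
    """Residuals from regressing a quarterly series on a time trend and
    quarterly dummies (Q1 is the omitted base quarter)."""
    df = pd.DataFrame({"lc": lc.values})
    df["t"] = np.arange(len(lc))                     # linear time index
    quarters = lc.index.quarter                      # requires a Period/DatetimeIndex
    for q in (2, 3, 4):                              # three quarterly dummies
        df[f"q{q}"] = (quarters == q).astype(float)
    X = sm.add_constant(df[["t", "q2", "q3", "q4"]])
    resid = sm.OLS(df["lc"], X).fit().resid          # residuals = "abnormal" series
    return pd.Series(resid.values, index=lc.index, name="abnormal")

The same function can be applied to the dollar amount of liquidity creation, to liquidity creation per capita or divided by GDP, and to the individual components examined in Section 4.4.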
4.3. Abnormal bank liquidity creation before, during, and after five financial crises

We now examine how abnormal bank liquidity creation behaved before, during, and after five financial crises. In all cases, the pre-crisis and post-crisis periods are defined to be eight quarters long.21 The one exception is that we do not examine abnormal bank liquidity creation after the current subprime lending crisis, since this crisis was still ongoing at the end of the sample period. Figure 3 Panels A – E show the graphs of the abnormal amount of liquidity creation for the five crises. This subsection is a fact-finding effort and largely descriptive. In Section 4.5, we will combine the evidence gathered here and interpret it to draw some general conclusions.

21 As a result of our choice of two-year pre-crisis and post-crisis periods, the post-Russian debt crisis period overlaps with the bursting of the dot.com bubble, and the pre-dot.com bubble period overlaps with the Russian debt crisis. For these two crises, we redo our analyses using six-quarter pre-crisis and post-crisis periods and obtain results that are qualitatively similar to the ones documented here.

Financial crisis #1: Stock market crash (1987:Q4)

On Monday, October 19, 1987, the stock market crashed, with the S&P 500 index falling about 20%. During the years before the crash, the level of the stock market had increased dramatically, causing some concern that the market had become overvalued.22 A few days before the crash, two events occurred that may have helped precipitate it: 1) legislation was enacted to eliminate certain tax benefits associated with financing mergers; and 2) information was released that the trade deficit was above expectations. Both events seemed to have added to the selling pressure, and a record trading volume on Oct. 19, in part caused by program trading, overwhelmed many systems.

Figure 3 Panel A shows abnormal bank liquidity creation before, during, and after the stock market crash. Although this financial crisis seems to have originated in the stock market rather than the banking system, it is clear from the graph that abnormal liquidity creation by banks was high ($0.5 trillion above the time trend) two years before the crisis. It had already dropped substantially before the crisis and continued to drop until well after the crisis, but was still above the time trend even a year after the crisis.

Financial crisis #2: Credit crunch (1990:Q1 – 1992:Q4)

During the first three years of the 1990s, bank commercial and industrial lending declined in real terms, particularly for small banks and for small loans (see Berger, Kashyap, and Scalise 1995, Table 8, for details). The ascribed causes of the credit crunch include a fall in bank capital from the loan loss experiences of the late 1980s (e.g., Peek and Rosengren 1995), the increases in bank leverage requirements and implementation of Basel I risk-based capital standards during this time period (e.g.,
Berger and Udell 1994, Hancock, Laing, and Wilcox 1995, Thakor 1996), an increase in supervisory toughness evidenced in worse examination ratings for a given bank condition (e.g., Berger, Kyle, and Scalise 2001), and reduced loan demand because of macroeconomic and regional recessions (e.g., Bernanke and Lown 1991). To some extent, the research supports virtually all of these hypotheses.

Figure 3 Panel B shows how abnormal liquidity creation behaved before, during, and after the credit crunch. The graph shows that liquidity creation was above the time trend before the crisis, but declining. After a temporary increase, it dropped markedly during the crisis by roughly $0.6 trillion, and the decline even extended a bit beyond the crunch period. After having reached a noticeably low level in the post-crunch period, liquidity creation slowly started to bottom out. This evidence suggests that the banking sector created (slightly) positive abnormal liquidity before the crisis, but created significantly negative abnormal liquidity during and after the crisis, representing behavior by banks that may have further fueled the crisis.

22 E.g., "Raging bull, stock market's surge is puzzling investors: When will it end?" on page 1 of the Wall Street Journal, Jan. 19, 1987.

Financial crisis #3: Russian debt crisis / LTCM bailout (1998:Q3 – 1998:Q4)

Since its inception in March 1994, hedge fund Long-Term Capital Management ("LTCM") followed an arbitrage strategy that was avowedly "market neutral," designed to make money regardless of whether prices were rising or falling. When Russia defaulted on its sovereign debt on August 17, 1998, investors fled from other government paper to the safe haven of U.S. treasuries. This flight to liquidity caused an unexpected widening of spreads on supposedly low-risk portfolios. By the end of August 1998, LTCM's capital had dropped to $2.3 billion, less than 50% of its December 1997 value, with assets standing at $126 billion. In the first three weeks of September, LTCM's capital dropped further to $600 million without shrinking the portfolio. Banks began to doubt its ability to meet margin calls. To prevent a potential systemic meltdown triggered by the collapse of the world's largest hedge fund, the Federal Reserve Bank of New York organized a $3.6 billion bail-out by LTCM's major creditors on September 23, 1998. In 1998:Q4, many large banks had to take substantial write-offs as a result of losses on their investments.

Figure 3 Panel C shows abnormal liquidity creation around the Russian debt crisis and LTCM bailout. The pattern shown in the graph is very different from the ones we have seen so far. Liquidity creation was abnormally negative before the crisis, but increasing. Liquidity creation increased further during the crisis, countercyclical behavior by banks that may have alleviated the crisis, and continued to grow after the crisis. This suggests that liquidity creation may have been too low entering the crisis and returned to normal levels a few quarters after the end of the crisis.

Financial crisis #4: Bursting of the dot.com bubble and Sept. 11 terrorist attacks (2000:Q2 – 2002:Q3)

The dot.com bubble was a speculative stock price bubble that built up during the mid to late 1990s. During this period, many internet-based companies, commonly referred to as "dot.coms," were founded.
Rapidly increasing stock prices and widely available venture capital created an environment in which many of these companies seemed to focus largely on increasing market share. At the height of the boom, it seemed possible for dot.coms to go public and raise substantial amounts of money even if they had never earned any profits, and in some cases had not even earned any revenues. On March 10, 2000, the Nasdaq composite index peaked at more than double its value just a year before. After the bursting of the bubble, many dot.coms ran out of capital and were acquired or filed for bankruptcy (examples of the latter include WorldCom and Pets.com). The U.S. economy started to slow down and business investments began falling. The September 11, 2001 terrorist attacks may have exacerbated the stock market downturn by adversely affecting investor sentiment. By 2002:Q3, the Nasdaq index had fallen by 78%, wiping out $5 trillion in market value of mostly technology firms.

Figure 3 Panel D shows how abnormal liquidity creation behaved before, during, and after the bursting of the dot.com bubble and the Sept. 11 terrorist attacks. The graph shows that before the crisis period, liquidity creation moved from displaying a negative abnormal value to displaying a positive abnormal value at the time the bubble burst. During the crisis, liquidity creation declined somewhat and hovered around the time trend by the time the crisis was over. After the crisis, liquidity creation slowly started to pick up again.

Financial crisis #5: Subprime lending crisis (2007:Q3 – ?)

The subprime lending crisis has been characterized by turmoil in financial markets as banks have experienced difficulty in selling loans in the syndicated loan market and in securitizing loans. Banks also seem to be reluctant to provide credit: they appear to have cut back their lending to firms and individuals, and have also been reticent to lend to each other. Risk premia have increased as evidenced by a higher premium over treasuries for mortgages and other bank products. Some banks have experienced massive losses in capital. For example, Citicorp had to raise about $40 billion in equity to cover subprime lending and other losses. Massive losses at Countrywide resulted in a takeover by Bank of America. Bear Stearns suffered a fatal loss in confidence and was sold at a fire-sale price to J.P. Morgan Chase with the Federal Reserve guaranteeing $29 billion in potential losses. Washington Mutual, the sixth-largest bank, became the biggest bank failure in U.S. financial history. J.P. Morgan Chase purchased the banking business while the rest of the organization filed for bankruptcy. The Federal Reserve intervened in some unprecedented ways in the market, extending its safety-net privileges to investment banks. In addition to lowering the discount rate sharply, it also began holding mortgage-backed securities and lending directly to investment banks. Subsequently, IndyMac Bank was seized by the FDIC after it suffered substantive losses and depositors had started to run on the bank. This failure is expected to cost the FDIC $4 billion – $8 billion. The FDIC intends to sell the bank. Congress also recently passed legislation to provide Freddie Mac and Fannie Mae with unlimited credit lines and possible equity injections to prop up these troubled organizations, which are considered too big to fail.

Figure 3 Panel E shows abnormal liquidity creation before and during the first part of the subprime lending crisis.
The graph suggests that liquidity creation displayed a high positive abnormal value that was increasing before the crisis hit, with abnormal liquidity creation around $0.0 trillion entering the crisis, and decreasing substantially after the crisis hit. A striking fact about this crisis compared to the other crises is the relatively high build-up of positive abnormal liquidity creation prior to the crisis.

4.4. Behavior of some liquidity creation components around the two banking crises

It is of particular interest to examine the behavior of some selected components of liquidity creation around the banking crises. As discussed above (Section 4.3), numerous papers have focused on the credit crunch, examining lending behavior. These studies generally find that mortgage and business lending started to decline significantly during the crisis. Here we contrast the credit crunch experience with the current subprime lending crisis, and expand the components of liquidity creation that are examined. Rather than focusing on mortgages and business loans, we examine the two liquidity creation components that include these items – semi-liquid assets (primarily mortgages) and illiquid assets (primarily business loans). In addition, we analyze two other components of liquidity creation. We examine the behavior of liquid assets to address whether a decrease (increase) in semi-liquid assets and/or illiquid assets tended to be accompanied by an increase (decrease) in liquid assets. We also analyze the behavior of illiquid off-balance sheet guarantees (primarily loan commitments) to address whether illiquid assets and illiquid off-balance sheet guarantees move in tandem around banking crises and whether changes in one are more pronounced than the other.

Figure 4 Panels A and B show the abnormal amount of four liquidity creation components around the credit crunch and the subprime lending crisis, respectively. For ease of comparison, the components are not weighted by the weights of +½ (illiquid assets and illiquid off-balance sheet guarantees), 0 (semi-liquid assets), and -½ (liquid assets). The abnormal amounts are obtained by detrending and deseasonalizing each liquidity creation component. Figure 4 Panel A shows that abnormal semi-liquid assets decreased slightly during the credit crunch, while abnormal illiquid assets and especially abnormal illiquid guarantees dropped significantly and turned negative. This picture suggests that these components fell increasingly below the trendline. The dramatic drop in abnormal illiquid assets and abnormal illiquid off-balance sheet guarantees (which carry positive weights) helps explain the significant decrease in abnormal liquidity creation during the credit crunch shown in Figure 3 Panel B.

Figure 4 Panel B shows that these four components of abnormal liquidity creation were above the trendline before and during the subprime lending crisis. Illiquid assets and especially off-balance sheet guarantees moved further and further above the trendline before the crisis, which helps explain the dramatic build-up in abnormal liquidity creation before the subprime lending crisis shown in Figure 3 Panel E. All four components of abnormal liquidity creation continued to increase at the beginning of the crisis.
After the first quarter of the crisis, illiquid off-balance sheet guarantees showed a significant decrease, which helps explain the decrease in abnormal liquidity creation in Figure 3 Panel E. On the balance sheet, during the final quarter of the sample period (the third quarter of the crisis), abnormal semi-liquid and illiquid assets declined, while abnormal liquid assets increased.

4.5. General conclusions from the results

What do we learn from the various graphs in the previous analyses that indicate intertemporal patterns of liquidity creation and selected liquidity creation components around five financial crises?

First, across all the financial crises, there seems to have been a significant build-up or drop-off of abnormal liquidity creation before the crisis. This is consistent with the notion that crises may be preceded by either "too much" or "too little" liquidity creation, although at this stage we offer this as tentative food for thought rather than as a conclusion.

Second, there seem to be two main differences between banking crises and market-related crises. The banking crises, namely the credit crunch and the subprime lending crisis, were both preceded by positive abnormal liquidity creation by banks, while two out of the three market-related crises were preceded by negative abnormal liquidity creation. In addition, during the two banking crises, the crises themselves seem to have exerted a noticeable influence on the pattern of aggregate liquidity creation by banks. Just prior to the credit crunch, abnormal liquidity creation was positive and had started to trend upward, but reversed course and plunged quite substantially to become negative during and after the crisis. Just prior to the subprime lending crisis, aggregate liquidity creation was again abnormally positive and trending up, but began to decline during the crisis, although it remains abnormally high by historical standards. The other crises, which are less directly related to banks, did not seem to exhibit such a noticeable impact.

Third, liquidity creation has both decreased during crises (e.g., the 1990-1992 credit crunch) and increased during crises (e.g., the 1998 Russian debt crisis / LTCM bailout). Thus, liquidity creation likely both exacerbated and ameliorated the effects of crises.

Fourth, off-balance sheet illiquid guarantees (primarily loan commitments) moved more than semi-liquid assets (primarily mortgages) and illiquid assets (primarily business loans) during banking crises.

Fifth, while liquidity creation is generally thought of as a financial intermediation service with positive economic value at the level of the individual bank and individual borrower (see Diamond and Rajan 2000, 2001), our analysis hints at the existence of a "dark side" to liquidity creation. Specifically, it may be more than coincidence that the subprime lending crisis was preceded by a very high level of positive abnormal aggregate liquidity creation by banks relative to historical levels. The notion that this may have contributed to the subprime lending crisis is consistent with the findings that banks adopted lax credit standards (see Dell'Ariccia, Igan, and Laeven 2008, Keys, Mukherjee, Seru, and Vig 2008), which in turn could have led to an increase in credit availability and off-balance sheet guarantees.
Thus, while Diamond and Rajan (2000, 2001) argue that financial fragility is needed to create liquidity, our analysis offers the intriguing possibility that the causality may be reversed as well: too much liquidity creation may lead to financial fragility.

5. The effect of capital on banks' competitive positions and profitability around financial crises

This section focuses on the second goal of the paper – examining how bank capital affects banks' competitive positions and profitability around financial crises. We first explain our methodology and provide summary statistics. We then present and discuss the empirical results. In an additional check, we examine whether the stock return performance of high- and low-capital listed banks is consistent with the competitive position and profitability results for large banks. In another check, we generate some "fake" crises to analyze whether our findings hold during "normal" times as well.

5.1. Empirical approach

To examine whether banks with high capital ratios improve their competitive positions and profitability during financial crises, and if so, whether they are able to hold on to this improved performance after these crises, we focus on the behavior of individual banks rather than that of the banking sector as a whole. Because our analysis of aggregate liquidity creation by banks shows substantial differences across crises, we do not pool the data from all the crises but instead analyze each crisis separately. Our findings below that the coefficients of interest differ substantially across crises tend to justify this separate treatment of the different crises. We use the following regression specification for each of the five crises:

ΔPERFi,j = α + β1 * EQRATi,j + B * Zi,j     (1)

where ΔPERFi,j is the change in bank i's performance around crisis j, EQRATi,j is the bank's average capital ratio before the crisis, and Zi,j includes a set of control variables averaged over the pre-crisis period. All of these variables are discussed in Section 5.2. Since we use a cross-sectional regression model, bank and year fixed effects are not included. In all regressions, t-statistics are based on robust standard errors.

Given documented differences between large and small banks in terms of portfolio composition (e.g., Kashyap, Rajan, and Stein 2002, Berger, Miller, Petersen, Rajan, and Stein 2005) and the effect of capital on liquidity creation (Berger and Bouwman forthcoming), we split the sample into large and small banks, and run all regressions separately for these two sets of banks. Large banks have gross total assets (GTA) exceeding $1 billion at the end of the quarter preceding the crisis and small banks have GTA up to $1 billion at the end of that quarter.

5.2. Variable descriptions and summary statistics

We use two measures of a bank's performance: competitive position and profitability. The bank's competitive position is measured as the bank's market share of overall liquidity creation, i.e., the dollar amount of liquidity created by the bank divided by the dollar amount of liquidity created by the industry. Our focus on the share of liquidity creation is a departure from the traditional focus on a bank's market share of deposits.
5.2. Variable descriptions and summary statistics

We use two measures of a bank's performance: competitive position and profitability. The bank's competitive position is measured as the bank's market share of overall liquidity creation, i.e., the dollar amount of liquidity created by the bank divided by the dollar amount of liquidity created by the industry. Our focus on the share of liquidity creation is a departure from the traditional focus on a bank's market share of deposits. Liquidity creation is a more comprehensive measure of banking activities since it does not just consider one funding item but instead is based on all the bank's on-balance sheet and off-balance sheet activities. To establish whether banks improve their competitive positions during the crisis, we define the change in liquidity creation market share, ΔLCSHARE, as the bank's average market share during the crisis minus its average market share over the eight quarters before the crisis, normalized by its average pre-crisis market share. To examine whether these banks hold on to their improved performance after the crisis, we also measure each bank's average market share over the eight quarters after the crisis minus its average market share over the eight quarters before the crisis, again normalized by its average market share before the crisis.

The second performance measure is the bank's profitability, measured as the return on equity (ROE), i.e., net income divided by stockholders' equity. (We use ROE, the bank's net income divided by equity, rather than return on assets (ROA), net income divided by assets, since banks may have substantial off-balance sheet portfolios. Banks must allocate capital against every off-balance sheet activity they engage in. Hence, net income and equity both reflect the bank's on-balance sheet and off-balance sheet activities. In contrast, ROA divides net income earned from both on-balance sheet and off-balance sheet activities merely by the size of the on-balance sheet activities.) To examine whether a bank improves its profitability during a crisis, we focus on the change in profitability, ΔROE, measured as the bank's average ROE during the crisis minus the bank's average ROE over the eight quarters before the crisis. (We do not divide by the bank's ROE before the crisis since ROE itself is already a scaled variable.) To analyze whether the bank is able to hold on to improved profitability, we focus on the bank's average ROE over the eight quarters after the crisis minus its average ROE over the eight quarters before the crisis.

To mitigate the influence of outliers, ΔLCSHARE and ΔROE are winsorized at the 3% level. Furthermore, to ensure that average values are calculated based on a sufficient number of quarters, we require that at least half of a bank's pre-crisis / crisis / post-crisis observations are available for both performance measures around a crisis. Since the subprime lending crisis was still ongoing at the end of the sample period, we require that at least half of a bank's pre-subprime crisis observations and all three quarters of its subprime crisis observations are available.
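As a rough illustration of how the two performance measures defined above might be constructed from a bank-quarter panel, the sketch below computes ΔLCSHARE (normalized by the pre-crisis share) and ΔROE (not normalized), and winsorizes both at the 3% level. The panel layout and column names are hypothetical, and applying the 3% cut to each tail is an assumption about how the winsorization is implemented, not a detail taken from the paper.

import pandas as pd

def winsorize_3pct(s, level=0.03):
    # Cap extreme values at the 3rd and 97th percentiles (one reading of "winsorized at
    # the 3% level").
    lo, hi = s.quantile(level), s.quantile(1 - level)
    return s.clip(lower=lo, upper=hi)

def performance_changes(panel):
    # panel: hypothetical bank-quarter data with columns ["bank", "period", "lc_share", "roe"],
    # where period is "pre" (the eight pre-crisis quarters) or "crisis".
    pre = panel[panel["period"] == "pre"].groupby("bank")[["lc_share", "roe"]].mean()
    crisis = panel[panel["period"] == "crisis"].groupby("bank")[["lc_share", "roe"]].mean()

    d_lcshare = (crisis["lc_share"] - pre["lc_share"]) / pre["lc_share"]  # normalized change
    d_roe = crisis["roe"] - pre["roe"]  # not normalized: ROE is already a scaled variable

    return pd.DataFrame({"d_lcshare": winsorize_3pct(d_lcshare),
                         "d_roe": winsorize_3pct(d_roe)})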
The key exogenous variable is EQRAT, the bank's capital ratio averaged over the eight quarters before the crisis. EQRAT is the ratio of equity capital to gross total assets, GTA. The control variables include: bank size, bank risk, bank holding company membership, local market competition, and proxies for the economic environment. Bank size is controlled for by including lnGTA, the log of GTA, in all regressions. In addition, we run regressions separately for large and small banks. We include the z-score to control for bank risk. The z-score indicates the bank's distance from default (e.g., Boyd, Graham, and Hewitt 1993), with higher values indicating that a bank is less likely to default. It is measured as the bank's return on assets plus the equity capital/GTA ratio, divided by the standard deviation of the return on assets over the eight quarters before the crisis. To control for bank holding company status, we include D-BHC, a dummy variable that equals 1 if the bank was part of a bank holding company. Bank holding company membership may affect a bank's competitive position because the holding company is required to act as a source of strength to all the banks it owns, and may also inject equity voluntarily when needed. In addition, other banks in the holding company provide cross-guarantees. Furthermore, Houston, James, and Marcus (1997) find that bank loan growth depends on BHC membership. We control for local market competition by including HERF, the bank-level Herfindahl-Hirschman index of deposit concentration for the markets in which the bank is present.
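The z-score described above can be written compactly as z = (ROA + EQRAT) / σ(ROA) over the eight pre-crisis quarters. A small sketch follows; the text does not spell out whether ROA and the capital ratio enter as pre-crisis averages, so treating them as averages here is an assumption for illustration.

import statistics

def pre_crisis_zscore(roa_quarters, eqrat_quarters):
    # roa_quarters, eqrat_quarters: hypothetical lists holding the eight pre-crisis quarterly
    # observations of return on assets and equity capital/GTA for one bank.
    roa_mean = statistics.mean(roa_quarters)
    eqrat_mean = statistics.mean(eqrat_quarters)
    roa_sd = statistics.stdev(roa_quarters)
    # Higher values indicate a bank that is further from default.
    return (roa_mean + eqrat_mean) / roa_sd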

Sunday, September 29, 2019

Unit 6 Essay Exam AP US History P3

Elsa Castro, Period 3, 11-19-2012. Unit 6 Essay Exam

Before the start of the Industrial Revolution, women were treated as insignificant and powerless. It stayed that way until the years between 1790 and 1860, when things began to change drastically. Because of that drastic change we all know as the Industrial Revolution, women were finally given the opportunity to work, earn their own money, and help their families economically; domestically, women could now earn a great amount of admiration in the comfort of their own homes instead of simply being expected to stay in their place. The Industrial Revolution, as we all know, was a period of drastic change in technology, manufacturing, and transportation from the start of the nineteenth century onward. Those changes had a huge effect on economic, social, and cultural conditions. Due to this drastic change, women were finally allowed to work. Before the Industrial Revolution, if women wanted to work they held domestic jobs like sewing or making household materials such as soap. By the end of this period of change, women were working in factories. This radical change was only the beginning of women being able to work, earn their own money, and gain economic independence. Although women were now able to work, they could only do so to a certain extent. Women had to work 13 hours a day and were paid very little. In addition, a single woman would have to leave her job if she was getting married; once married, her husband would be the supporter of the house. Even before, during, and after the Industrial Revolution, a woman was still expected to be at home taking care of her husband and her children. Normally in domestic families women would have to agree with the husband, and both of them were limited to doing certain tasks

Saturday, September 28, 2019

I will explain it in the instructions - Essay Example

Practically, socialism emerged as a consequence of theoretical, logical reasoning triggered by a moral crisis and sustained by intellectual anarchy. A distinct feature of Europe before the Enlightenment era, aristocratic rule buoyed by the concentration of wealth [property] in the hands of the chosen few was treated as inevitable, justifiable and God-given. As a moderating mechanism, Christianity endorsed holy poverty as the clergy rent the air with the gospel of obligatory charity to the majority poor; a balance that leaned heavily on agriculture and whose effects could only get worse as the population expanded. Indeed, as the impact of the Industrial Revolution gradually changed the contours of European civilization, the old aristocracy was slowly rendered irrelevant as the bourgeoisie [the propertied] took effective economic and political control, drafting much of the peasant class into chequered industrial labor recruitment. The new modes of production granted the propertied a near limitless accumulation of wealth, widening the inequality gap even further. The working class in newly industrializing Europe suffered doubly as stepped-up exploitation reached new extremes: the old feudal system that guaranteed places of residence and limited income for peasants was no more; workers could be hired and fired at will; wage rates became driven by market forces and could plummet as low as competition allowed; and factories operating 24/7 ran under the worst inhuman conditions ever witnessed in history. Adding to the misery of the proletariat, women and children became the preferred factory workers because of their cheaper pay. The result was a general decline in standards of living and a subsequent shift in attitudes towards capitalism. Powered by the eighteenth-century maxims of the French pioneers of thought, socialism was a change-inspiring movement dedicated

Friday, September 27, 2019

Assignment - Essay Example

Discussion

Definition of Diversity

Diversity is described as the characteristics and talents of an individual that distinguish him or her from other human beings. An individual might also be distinguished from others on the basis of education, religion, caste, creed, race, culture, age, behavior, nationality, status and living style. Although diversity is present everywhere, we all still believe in living under a single big umbrella called UNITY. Keeping this concept in mind, nowadays the effects of diversity are fading slowly and gradually from workplaces across the entire globe.

Effects of Diversity in a Discussion of 'Female Identity' in the Organization

During a debate session on the topic of female identification within the organization, varied types of ideas and views came into focus. Numerous senior as well as junior members of the organization presented various negative statements or opinions regarding the progress or development of the female community. According to them, progress of the female segment is just a waste of time and money, as ultimately women need to devote their time to the development of their family members. ... so they need to be present within the interiors of the residence, and the females might also be looked after by a male member of the family so that they remain free from any type of trouble. Therefore, female progress or identification is just a nightmare that might never become successful. After hearing such negative and pessimistic statements, it seemed extremely disturbing and distressing to me along with many other group members, and so we all presented the positive sides of female progress. In this age, males and females are equal in all senses, i.e. in terms of education, experience, knowledge and outlook. Moreover, an organization might surely present the opportunity for the female members of society to get recruited so as to enhance their identity in a male-dominated nation or state. Only then might the females present their talents, knowledge, ideas, information and facts in front of numerous male members. This might act as a weapon to reduce the impacts of domestic as well as corporate violence on females, thereby amplifying their reputation and image in the market among others. Along with this, it might also amplify the status and popularity of females within society among male members and within the corporate business world. Thus, such a multicultural organization might prove effective in tackling varied situational challenges (Thyer, 345-378). Therefore, after our group members finished presenting their ideas and statements, all the other pessimistic individuals changed their opinions regarding this topic. All of them stated that female progress or identification is extremely essential in this age of extensive crime and violence. Hence, the concluding statement filled our hearts with extreme contentment and happiness.

Conclusion

Thursday, September 26, 2019

Profile Speech about Trail of Tears - Removal of the Cherokee Essay

IV. (Preview body of speech) Today, I shall tell you what prompted the U.S. government to decide the fate of thousands of Native Americans. Next, I will tell you how tribes such as the Cherokees were affected, following which I shall take you back to the fateful "Trail of Tears." Finally, being an optimist, I will share with you the present conditions of the relocated people and the role of the U.S. government in their lives today.

3. Americans and Cherokees signed a treaty. This treaty was supposed to bring some form of civilization to the tribal men, i.e. they were expected to give up hunting and adopt farming. The Cherokees accepted the terms since it not only meant progress but also meant that the Americans would from then on mind their own business and leave them alone.

During this deadly trek thousands of Cherokees perished; loved ones died before their families' eyes while others stood by helpless. The Cherokees stopped only briefly to bury their dead and continued marching westwards. The Cherokees called this journey "Nunna dual isunyi," meaning the trail where we cried. In English, it became known as the "Trail of Tears" (Fradlin, 2008).

(Transition) It is true that today, as I stand here talking about our Native American brothers and sisters, their plight is still the same. Almost a million of them still remain in abject poverty and lead lives marked by prejudice. The American Indian Relief Council works towards helping Native Americans build a stronger community and bring positive changes into their lives by offering services from literacy to nutrition (American Indian Relief Council).

Wednesday, September 25, 2019

Magical sword, harp, oak tree, grail as archetypal symbols Essay

"It influences all of our experiences and behaviors, most especially the emotional ones, but we only know about it indirectly, by looking at those influences" (Boeree, 2006). Like Freud, Jung felt that dream messages were couched in symbolism, but he differed regarding what these symbols represented. He felt that dreams would continue to present carefully selected symbols as a purposeful means of communicating specific meaning to the dreamer from the unconscious, rather than attempting to hide these concepts. At the same time, he felt unconscious symbols were often used to help us understand and accept those aspects of ourselves that we have ignored or attempted to disown, or to present archetypal figures that help us connect with the collective. "Jung thought that dreams could help us grow and heal through use of archetypal symbols. ... Various archetypes are represented within myths, fairy tales, and religions, as well as dreams" (Bixler-Thomas, 1998). An archetype is described as an "unlearned tendency to experience things in a certain way" (Boeree, 2006), and Jung identified several, such as the mother, mana (or spiritual power), the shadow (or the unknown) and the persona (or public mask). His wife, Emma Jung, took these concepts and applied them to her own interests, specifically as they applied to Celtic myth and the Grail legends. Emma Jung's theories regarding the archetypes of the magic sword, the oak tree, the grail and the harp will be closely examined to demonstrate how these archetypes and Jungian theory have become widely applicable within the Western world. Carl Jung believed the most effective method for dream interpretation was the use of series correlation (Hutchinson, 2000). He gave hope to all dreamers who were looking for the meaning in their dreams without having to hire a 'professional.' Series correlation is a process involving the analysis of dreams over time. Jung suggested keeping a dream

Tuesday, September 24, 2019

Anorexia in the Fashion Industry Research Paper

The voluptuous curves which used to be admired in the past have been replaced by straighter angles and lines on fashion models' bodies. With these skinny fashion models gracing the covers of the most exclusive and trendy magazines, many teenage girls seem to be drawn to the idea that in order to be as beautiful and as accepted as these models, they have to look like them. This obsession has even fed the psychological disorder known as anorexia nervosa. Anorexia is a life-threatening disease which mostly involves the act of starving oneself in order to look as thin and as "beautiful" as the fashion models. Deaths have been reported among some anorexic teenagers and models, as the image that they see in their mirrors never seems to reach acceptable standards. This paper shall investigate how the fashion world and designers influence models and girls to become extremely thin, and how they cause these girls to become anorexic. It will also note the lengths to which these girls go in order to be and to stay as thin as possible, and the impact on these girls' lives. This paper shall also discuss the changing trends surrounding weight issues and fashion modeling, from the earliest days when such trends made an impact on the world through to contemporary times. This discussion shall cover opinions and scholarly studies conducted on the subject matter, focusing on thin fashion models and celebrities and the impact of the skinny obsession on teenagers and on girls in general. The end of the First World War brought about the raising of hems and the lowering of waists in women's fashion. Panati (p. 235) points out how the dresses designed in the 1920s called for women to be flat-chested and boyish, and those who had fuller breasts were forced to bind their chests to flatten them.

Monday, September 23, 2019

Comparative Analysis of German, French and American Human Rights Law Essay

This essay argues that the crucial importance of political rights and liberties in today's evolving and fast-changing world cannot be overemphasized. It has been opined that political rights and liberties are of paramount importance because of their impact on other rights, such as social and economic rights. The universal condemnation of state-sponsored repression is due in large part to the globalized ideal of human rights, under which we see a whittling down of the concept of sovereignty in favor of the acceptance of international norms of human rights. Indeed, the protection of human rights is one of the fundamental aspirations of international law. In international law, the primacy of the State is the core principle of the international legal regime as it is traditionally known. It is the duty of international law, therefore, to interlock authority with power, and to ensure that authorized decision-makers regulate the actions of States. When the United Nations was created in 1945 by a world still reeling from the ravages of the Second World War and intent on healing the wounds wrought by it, it was tasked to become the primary agency in defining and advancing human rights. From then on, various other agencies were created, addressing specific human rights concerns. Notable examples of this are the International Labor Organization and UNICEF. Within the jurisdiction of individual states, however, human rights legislation evolves mainly as a result of case law, i.e., the jurisprudence based on decisions made by the Supreme Court on human rights disputes brought before it. Indeed, society has come a long way towards preserving human rights, and righting the wrongs of the past with justice and accountability. As Abrams and Ratner note:

Societies long reluctant to investigate or prosecute human rights abusers have begun to do so with greater frequency. These include both those inquiring into the abuses of their own officials or former officials, as well as those investigating or prosecuting individuals who have committed abuses in other countries.

This paper attempts to trace the role that case law has played in the legal systems of Germany, France and the United States with respect to the development and evolution of human rights. This paper shall also look into some of the more important and landmark decisions made in the respective jurisdictions and evaluate the degree to which these decisions have impacted on human rights. As the space for this paper is rather limited and the field of human rights is vast, this paper will focus on human rights law as it applies to freedom of religion and the circumstances when it competes with the interests of the state to preserve certain values, e.g., neutrality and national security.

Germany

When people think of Germany, human rights law and religion, thoughts inevitably first turn to the end of the Second World War, when Nazi soldiers were prosecuted for gross war crimes committed against the Jews. The end of World War II ushered in a milestone for international criminal responsibility. The Axis powers were completely annihilated and the Allied powers were now determined not to repeat the mistakes of the past. It was only through punishing the guilty that the horrors and wounds of the victims could be assuaged. The Allied states created the International Military Tribunal (IMT) for the prosecution of the men

Sunday, September 22, 2019

Cognitive Development May Progress Gradually or Through a Series of Stages Essay

Cognitive development can be defined as the growth of our knowledge in understanding the world around us. This growth can occur gradually; in other words, it can be seen as a continuous process of collecting more information. Another way of developing cognitively is through a series of stages, which involves some sort of revolution from one period to another in one's lifetime. Jean Piaget, a cognitive developmentalist, believed that humans go through a series of stages in life in order to reach their full cognitive ability. In this essay, we will briefly discuss Piaget's Stage Theory and its criticisms.

Piaget divided his theory into four different stages of development. The first is known as the sensorimotor stage, which applies to infants for approximately the first two years of their lives. At this stage, infants discover the world mainly through their senses and actions. One of the main concepts Piaget introduced is object permanence. This is the knowledge of the existence of objects even when we cannot directly sense them. Piaget suggested that babies lack this concept, based on his A-not-B task. In this study, the experimenter hides a toy under Box A and the baby searches for it under Box A. This procedure is repeated, and eventually, in front of the baby, the experimenter hides the toy under Box B. The baby searches for it under Box A instead of B even though it saw the experimenter hide it under Box B. Therefore, this study suggests that the baby lacked the concept of object permanence. Such infants are said to be in a state of solipsism, the failure to differentiate between themselves and their surroundings. Based on observations of his own children (1952), Piaget divided this stage into six different sub-stages. However, Piaget's claims on object permanence have been criticized. Baillargeon et al. (1985) found in their research that infants as young as three and a half months have developed object permanence. This was backed up by Bower and Wishart (1972), who discovered that even after the lights were switched off, babies continued to search for the object they had been shown. Hence, they do possess object permanence.

The second stage is the preoperational stage, which occurs when the child is aged 2 to 7. In this stage, the child solves problems by using symbols and develops language skills. According to Piaget, the child is egocentric, which means he sees the world from his own standpoint but not from others'. The solution to this is to apply operational intelligence: the process of solving problems by using logic. Another concept Piaget is concerned with is conservation. It is the understanding that any quantity remains the same even if physical changes are made to the objects holding the medium. Related to these concepts is centration, defined as the focus on a single aspect of a problem at a time. Piaget states that at this stage, the child fails to decenter. Conversely, Borke and Hughes (1975) found evidence contradicting Piaget's claims in his study of the three mountains task. They used the same elements of the task and discovered that children had no problem identifying the perspectives of others when the task was shown in a meaningful context. Hence, from the results obtained, Hughes found that the children did not display the characteristics of egocentrism.
Furthermore, Gelman (1979) found that four-year-olds altered their explanations of things to get their message across more clearly to a blindfolded listener. If Piaget's concept of egocentrism were correct, this should not have happened. In addition, Flavell suggested an alternative to this issue by distinguishing Level 1 and Level 2 perspective-taking abilities. At Level 1, one thinks about viewing objects but not the different perspectives from which the objects can be seen, while at Level 2, one is able to imagine the views of the objects from different angles. Flavell concluded that children do not necessarily think others share the same perspective as themselves, but they do struggle to imagine what others can see. Therefore, Piaget's claim about egocentrism could be either correct or wrong. Moreover, in Children's Minds (1978), Donaldson argued that children misunderstood the questions which Piaget asked while conducting his studies, and that this was the reason Piaget obtained the results he did, especially in his studies of conservation. Donaldson stated that Piaget's tasks had no meaningful context for the children to understand, hence they answered what they thought the experimenter expected of them. This claim was supported by Rose and Blank (1974), who found children often succeeded in the conservation task. Further research was done by Samuel and Bryant (1978), who used conservation of number, liquid quantity and substance and reached a similar conclusion to Rose and Blank's. Donaldson also stated that children were unintentionally forced to produce the wrong answer against their own logical judgment. One explanation is that the same question was asked repeatedly before and after the transformations presented to them, and this in turn caused the children to believe that their original answer was wrong. Thus, the idea that children assume reality changes according to appearance could be incorrect. In addition, Piaget may have underestimated a child's cognitive ability: Mitchell and Robinson's (1992) study demonstrated that children from the age of 4 could locate the correct answer by canceling out the alternatives. This process is also known as inference by elimination. The children were presented with a set of cartoon characters, three of which were well-known. They were asked to identify a superhero which was unknown. The researchers discovered that the majority of the children selected the unknown character without doubt. Another example of a child's ability is their capability with syllogisms, which consist of logical problems accompanied by a general rule that enables people to create a statement. Dias and Harris (1990) stated a general rule that all fish live in trees; if Tiddles is a fish, then it is logical to assume that Tiddles lives in trees. After being presented with this, the children insisted that Tiddles lives in the water instead. However, after the experimenters presented them with another rule, they were ready to use the rule to make inferences. Therefore, this evidence suggests that Piaget may have underestimated the abilities of younger children.

Subsequently comes the stage of concrete operations, which happens to children around the ages of 7 to 12. Now the child is able to solve problems in a logical manner, but the problem has to be either real or concrete. The final stage is formal operations, which takes place when the child turns 12 and continues into adulthood.
In this stage, one is able to solve problems systematically and logically even if the problem is a hypothetical situation. Wason and Johnson-Laird showed, through the selection task, that most intelligent adults do not fulfill Piaget's ideal of the cognitively developed person. This claim is supported by Cheng and Holyoak's (1985) study, where the results strongly show that the majority of participants do not display the reasoning of an adult in the stage of formal operations. In other words, this experiment is a clear indication that the formal operations stage does not exist. One of the critics of Piaget's Stage Theory in general is John Flavell (1982), who claimed that Piaget did not define the cognitive processes clearly. Furthermore, Braine and Rumain (1983), who conducted an analysis of the contents and structure of the theory, found that Piaget's theory could be flawed. These are only a few of the critics of Piaget's Stage Theory. Thus, the theory is constantly being questioned due to its impact on the field of cognitive psychology.

After stating the basic facts of the theory as well as giving some examples of the criticisms of Piaget's concepts and ideas, we are now able to get an overview of the debate. Overall, there is evidence which proposes that some of the concepts should be reviewed again and maybe even rejected. However, out of the research conducted on Piaget's theory, as well as its impact, alternative theories were developed; for example, Vygotsky's theory, which takes a more socially based view of cognitive development. In conclusion, Piaget's theory has been applied in various institutions, especially education, but it is also criticized by many in the field. Therefore, it is only fair to conclude that Piaget's theory may need to be modified in order to create a more accurate theory to explain the way we understand the world.

Saturday, September 21, 2019

Sophomore change Essay

A major event that has changed my life forever is high school. It has affected my life both negatively and positively. I never expected it to go the way it has gone. I can honestly say that if I could re-do high school all over again, I would. It has been a bumpy road and I wish I had done it completely differently. The small events within it have made me realize who my true friends are, look forward to the future, and make the best decisions for myself. High school started out a mystery. I had no idea what I was in for. I came into it in a relationship that lasted half of my high school years. I would not have changed that as a whole, just some of the parts in between. I consider myself to have been very naive in the beginning. Freshman year was confusing for everyone. No one knew who his or her real friends were yet. It changed for almost everyone. Sophomore year was basically the same, just older. Cliques started forming and more friends were made. I made a lot of mistakes in this time of my life that I would change if possible. I should have been a lot of things, but I definitely should have been more considerate. Junior year was pretty much the same. I started defining who I was. I became more aware of situations and started making better decisions. I feel as if I grew up pretty fast. My parents started trusting me more and letting me experience life a little more freely. Junior year was a learning period of high school for sure. Senior year has been the most challenging year. I started realizing life is starting to get real. Everything counts now. College applications were a hard task. Realizing where you want to spend the next chapter of your life is really mind-altering. It is confusing yet exciting. This has changed my life drastically. My mind kept changing over and over again. This was the year I began to realize who my real friends are. A lot of my friends came and went, but I have really started to realize that family is what counts the most. High school has changed my life forever. I have lost people that I love and I have gained a great amount of knowledge. I still have no idea where my life will take me, but I know that with the right support system, I will get where I am supposed to be. Family will always be there and friends are sometimes temporary. As these years have passed by, I've learned a lot about growing up and taking my life more seriously. I wish I had stepped up and applied myself like I know I could have. I should have taken school more seriously so I would have more options for my future. High school has had its ups and downs. It was the biggest life-changer I have had. I look forward to seeing where the next chapter in my life goes. These past few years have definitely changed my life forever.

Friday, September 20, 2019

Issues of social balance and mixed communities

Interest in social balance and mixed communities has arisen as a response both to increased management issues in social housing and to concepts of the underclass and social exclusion. The identification of significant and persistent inequalities between areas at the ward and neighbourhood level in recent research (e.g. Meen et al., 2005) has triggered a shift in housing strategy and policy. Social balance is now entrenched within English housing and planning policy, where it provides a correction to the housing market's natural tendency to segregate (Goodchild and Cole, 2001). Although this state interventionist approach has come under fire from academics such as Cheshire (2007), who argue that spatial policy cannot correct deep-rooted social and economic forces and that the focus of policy should be to reduce income inequality in society rather than just treat the consequences of it, social mixing has gained popular support in urban policy. This literature review outlines the mixed community approach to urban gentrification in urban policy by discussing its latest iteration, the MCI. The MCI's place in UK policy discourse is then analysed as a way of exploring its conceptual and theoretical ideologies for area regeneration. Finally, an in-depth review of the literature is conducted which reengages with mixed communities as an approach to area regeneration.

Since 2005, the mixed communities approach to gentrification and the renewal of disadvantaged neighbourhoods has become firmly embedded in the UK's housing and planning policy. The approach was first announced in January 2005 in the Mixed Communities Initiative (MCI), which formed part of New Labour's five year plan for the delivery of sustainable communities. The MCI has four core components (Lupton et al., 2009): (1) a commitment to the transformation of areas with concentrated poverty, to provide a better housing environment, higher employment, better education, less crime and higher educational achievements; (2) to achieve these through changes in the housing stock and the attraction of new populations, whilst improving opportunities for existing populations; (3) to finance development by recognising the value of publicly owned land and other public assets; and (4) to integrate government policies to produce a holistic approach which is sustainable through mainstream funding.

Initially the MCI was delivered through twelve demonstration projects situated in the most deprived neighbourhoods in the UK. However, more recently the concepts behind the mixed community approach have grown beyond these projects and are now advocated by planning authorities in a diverse range of areas. Consequently, mixed community developments are emerging without demonstration project status, and as such mixed communities have become an approach to area regeneration in addition to being a government policy initiative (Silverman et al., 2006). In response to this policy development, the purpose of this literature review is two-fold. Firstly, through analysis of the theories of poverty, place and gentrification in policy discourse, it is possible to gain an understanding of the rationale behind the mixed communities conception of the causes of place poverty. Secondly, an in-depth review of the literature is conducted which re-engages with mixed communities as an approach to area regeneration.
Theories of Poverty and Place in Urban Policy

Any form of urban regeneration reflects a specific theoretical understanding of the causes of place poverty. Throughout the 20th century, UK urban policy has undergone a transformation in its understanding of the causes of place poverty, and consequently the approach to urban regeneration has altered. A broad distinction can be made in the UK's approaches to regeneration, between early regeneration by the Keynesian welfare state and that advocated by conservative governments. The former looked to correct the crisis of the neighbourhood through neighbourhood improvement. This approach understands the problems of declining areas as a product of the economic structures which cause spatial and social inequality (Katz, 2004). In response it looked to improve living conditions and try to equalise life chances through redistributive social welfare programmes.

In contrast to neighbourhood improvement is the neighbourhood transformation approach, a discernibly neoliberal approach advocated by conservative governments. Here the problems of disadvantaged neighbourhoods are understood as the product of market failures rather than underlying economic structures. The creation of mass social housing estates and overly generous benefit regimes are some of the market failures which reportedly trap the disadvantaged in social cultures of dependency (Goetz, 2003). In the neighbourhood transformation approach these areas are seen as a barrier to market forces, occupying inner city areas with good commercial and residential property investment potential. According to Lupton and Fuller (2009:1016), the transformation approach understands the solution to be "not simply the amelioration of conditions in these neighbourhoods for the benefit of their current residents, but the restoration of market functionality through the physical change and transformation of the position of the neighbourhood in the urban hierarchy". Perhaps the best example of this is the role of Urban Development Corporations, which brought about the transformation of the London Docklands in the 1980s. Their presence instigated a fundamental change in the role of the state in urban development, from a regulator of the market to an agent within the market.

In 1997, New Labour's urban regeneration policy was hailed as a divorce from this transformational approach and a return to the improvement approach. The government pioneered an array of new, enhanced public services under the National Strategy for Neighbourhood Renewal. Included were the Neighbourhood Renewal Unit and the New Deal for Communities (NDC), which facilitated interaction between local agents on neighbourhood improvement. Whilst this strategy had the appearance of a strong local focus which prioritised residents, other elements of New Labour's policies were characteristically neoliberal. As Fuller and Geddes (2008) remark, Labour's urban interventions focus on an equality of opportunity agenda which aspires to greater social cohesion and inclusion by devolving responsibility to local citizens. However, by not matching these responsibilities with appropriate state powers within the NRU and NDC, there has been little support for local citizens except to merely compensate the individuals and places put at risk by market forces. As such, New Labour's initiatives have failed to deliver major redistributional interventions which release local state agents from neoliberal targets, cultures and forms of control (Jessop, 1990).
Neoliberal theories of poverty and place within the MCI

Within this policy discourse the MCI exists as a more characteristically neoliberal initiative. It is clear in its understanding of the problem, concentrated poverty, and the solution, de-concentration through gentrification and neighbourhood transformation. By doing this the MCI subscribes to a policy discourse which understands concentrated poverty as a spatial metaphor (Crump, 2002). This metaphor inherently obscures complex economic, social and political processes and uses the individual failings of the poor within concentrated spaces to justify their dilution or removal. The concentrated poverty thesis originated in the US (e.g. the HOPE VI urban revitalisation programme), where it provides legitimacy to policies which alter cities' spatial structures through market forces. Such influences have encouraged British policy makers to adopt a more radical approach to urban regeneration and advocate extensive demolition and gentrification to restore functioning housing markets, imposing a neoliberal agenda on struggling housing environments (Imbroscio, 2008). The MCI's focus on market restoration is clearly articulated: "the aim is that success measures should be choice. Reputation, choice of staying and that people want to move in... it's about market choice" (Senior CLG official in Lupton et al., 2009:36). The government realises that while public service improvements will help create this market, they are not enough alone: physical change is required to enhance people's attraction to the neighbourhood and its market. The state's role is therefore not just to invest directly but to improve and diversify the housing stock whilst decreasing public housing ratios, with the explicit goal of stimulating market processes. However, a further consequence of this is the re-population of these neighbourhoods by new, better-off residents. The mixed communities approach requires the state to fund the improvement of services, in many cases to attract better-off residents, and to sell or gift land to the private sector. The removal of social housing through its gift to the private sector inherently creates a spatial fix for poverty and incentivises the development of mixed-income housing schemes. In such a situation there is potential for the private sector to change social housing in accordance with market dynamics, and consequently complex and marginal developments will be neglected (Adair et al., 2003).

Impact of Mixed Communities

As long ago as 30 years, Holcomb and Beauregard (1981) were critical of the way it was assumed that the benefits of urban revitalisation through social mixing would trickle down to the poor. Despite the consequent academic debate, which disputed whether gentrification leads to social exclusion, segregation and displacement, it has become increasingly popular in urban policy, where it is assumed that its application leads to a more socially mixed, integrated, and sustainable urban environment. The following review will explore the literature which questions whether moving middle-income populations into low-income neighbourhoods, or vice versa, has a positive impact on residents' urban experience.

Schoon (2001) identifies three rationales behind social mixing in policy debates. Firstly, there is an assumption that the middle class are more likely to attract public resources and as such lower-income households will fare better in socially mixed communities.
Secondly, mixed income developments are in a better position to support a local economy than areas of concentrated poverty. Finally, and most controversially, the networks and contacts argument advocated by Putnam (1995) posits that socially mixed neighbourhoods create an environment which improves the bridging and bonding of social capital between social classes. Consequently, lower-income residents have more opportunities to network and break out of poverty than they would in areas of concentrated deprivation. The Social Exclusion Unit (1998:53) expands on this: "[socially mixed neighbourhoods] often brings people into contact with those outside their normal circle, broadening horizons and raising expectations, and can link people into informal networks through which work is more easily found". These three arguments are the cornerstone of a global policy discourse which has received very little critique in the UK. One of the reasons for this is the way it is framed. The social mixing agenda, which has been prominent in western efforts to decentralise poverty, is a discourse which actively avoids the word gentrification. Instead it uses terms like urban revitalisation, urban regeneration, and urban sustainability to redefine itself as a moral discourse which helps the poor (Slater, 2005; 2006). By doing this the discourse deflects from the class restructuring processes which define its implementation.

Previous Studies

As yet there is little consensus around the ability of gentrification to achieve the goals asked of it, and neither is it clear what type of social mix is most desirable or what the outcomes of different mixes are (Walks and Maaranen, 2008). For instance, Tunstall and Fenton (2006), who claim to amass the best UK research on social mix, conclude that although knowledge gaps exist, the founding arguments for mixed communities remain valid. Yet, in contrast, Doherty et al. (2006) undertook quantitative analysis of the UK census and the Scottish Longitudinal Study and concluded that there is little evidence to support the mixing of housing tenures in developments on the premise of improving social well-being. Randolph and Wood (2003) note that much of the research conducted so far has concentrated on social mixing in public housing estates (Atkinson and Kintrea, 2000; Cole and Shayer, 1998) and there has been little exploration of the social mixing occurring in new build developments.

Does Gentrification bring about social mixing?

Contrary to the assumptions which link gentrification to improved social mixing, most research suggests that gentrification is likely to reduce social mixing at the neighbourhood level. Interviews conducted by Butler (1997) and Butler and Robson (2001; 2003) suggest that local middle-income gentrifiers engaged in little social interaction with lower-income residents. Their research found that gentrifiers generally sought out people with similar cultural and political interests, which often led to little interaction between middle- and low-income residents. Accordingly, they found that interaction was greatest in areas where gentrification had homogenised an area and pushed out other groups. In areas where this had not occurred, Butler and Robson (2001) reported that the differences between tenants resulted in tectonic juxtapositions which polarised social groups rather than integrating them.
In their later research, Butler and Robson (2003) not only reinforced their earlier findings but found that children formed a key facilitator in resident integration: "there was no evidence that the children played outside these middle class networks, our fieldwork strongly suggests that the middle class preschool clubs were highly exclusionary of non-middle class children" (Butler and Robson, 2003:128). Although Butler and Robson's research rightly questions the role of gentrification in a policy discourse which looks to foster a sustainable urban environment, it does so primarily through the experiences of the gentrifier. Davidson's (under review) research on a new build, middle income development on the River Thames, London, engaged with both gentrifiers and non-gentrifiers and reinforces scepticism over the ability of housing type to influence class relations. Davidson found no evidence to suggest that any of the development's desired outcomes had been achieved through the introduction of a middle class population. Both the temporary nature of new build residents and the spatially segregated nature of the development itself meant the development fostered little integration between low and middle income residents, who do not work in the same place, use the same transport or frequent the same restaurants or pubs. In a similar study, Freeman (2006) researched two black gentrifying neighbourhoods in New York City. Like Davidson, Freeman found that social networks rarely crossed and that gentrifiers and longer term residents generally moved in different spaces. Additionally, Freeman found that residents were hesitant to pass comment on social mixing; they rarely expressed their opinions in overly positive or negative tones. In accordance with this literature, it seems unrealistic to assume that different social groups will integrate when living together. As some of the authors have highlighted, increased neighbourhood diversity does not correlate with increased social interaction and can in some cases promote social conflict as much as it does social harmony.

The mixed communities policy agenda has been used to help address inequality in social housing (estates managed by local authorities, housing associations, and other non-profit housing agencies) and, more controversially, to regenerate social housing. This concentration on social housing comes out of a longer history of residualisation. Since its conception, social housing in the UK has experienced slow residualisation: a tendency to house only certain types of household, namely the poor, the unemployed, those in debt, those with a history of mental illness and those experiencing a relationship breakdown (Cole and Furbey, 1994). For much of social housing's history this process has been ignored, and consequently it has been accompanied by a sorting process forcing the most vulnerable households into the most unattractive housing (Willmott and Murie, 1988).

Previous Studies

There are three studies which are relevant to this research. They examine the impact of mixed community housing on social interaction:

Atkinson and Kintrea (2000) conducted an exploratory study which analysed diaries kept by 38 households. The research suggested that patterns of social life vary by tenure and as such little interaction occurred between residents of owner-occupied housing and social housing tenants.
The neighbourhood was seen as a focus of interaction for social housing residents only.

Cole and Shayer's (1998b) survey of 52 residents in a new build, mixed-tenure redevelopment in Sheffield again found only weakly developed social networks.

Jupp's (1999:10-11) analysis of interviews with over 1,000 residents living in ten mixed-tenure estates in England concluded that the street is a more significant social unit than the estate. The case studies analysed often had social and private housing located on different streets, and consequently there was little mixing reported between the two groups. Jupp reported that fostering social interaction would be extremely difficult because of the overwhelming belief among residents that they do not share many common interests with their neighbours.

Individually these studies offer little scope, but taken together they provide a consistent view that mixed tenure developments foster little social interaction between residents of different social backgrounds. However, it must be realised that these studies only examine the grass-roots neighbourhood; that is to say, they often ignore the way external perceptions have a defining role in a development's success. Atkinson and Kintrea (2000) identify this as a key area for future research when they report that residents welcomed the influx of higher income residents because they improve the reputation and appearance of the area. There is one fundamental understanding that underpins urban policy in the UK, as stated in the foreword of the Urban White Paper: "How we live our lives is shaped by where we live our lives"