
Terjemahan

Ultimo aggiornamento: 2014-04-23
Frequenza d'uso: 1
Qualità:
Riferimento: Wikipedia
Avvertenza: Contiene formattazione HTML non visibile

Predicting Australian Takeover Targets: A Logit Analysis

Maurice Peat* and Maxwell Stevenson*
* Discipline of Finance, School of Finance, The University of Sydney

Abstract

Positive announcement-day adjusted returns to target shareholders in the event of a takeover are well documented. Investors who are able to accurately predict firms that will be the subject of a takeover attempt should be able to earn these excess returns. In this paper a series of probabilistic regression models were developed that use financial statement variables suggested by prior research as explanatory variables. The models, applied to in-sample and out-of-sample data, led to predictions of takeover targets that were better than chance in all cases. The economic outcome resulting from holding a portfolio of the predicted targets over the prediction period is also analysed.

Keywords: takeovers, targets, prediction, classification, logit analysis
JEL Codes: G11, G17, G23, G34

This is a draft copy and not to be quoted.

1. Introduction

In this paper our aim is to accurately predict companies that will become takeover targets. Theoretically, if it is possible to predict takeovers with accuracy greater than chance, it should be possible to generate abnormal returns from holding a portfolio of the predicted targets. Evidence of abnormal returns of 20% to 30% made by shareholders of firms on announcement of a takeover bid is why prediction of these events is of interest to academics and practitioners alike.

The modelling approach adopted in this study was based on the discrete choice approach used by Palepu (1986) and Barnes (1999). The models were based on financial statement information, using variables suggested by the numerous theories that have been put forward to explain takeover activity. The performance of the models was evaluated using statistical criteria. Further, the predictions from the models were rated against chance and economic criteria through the formation and tracking of a portfolio of predicted targets. Positive results were found under both evaluation criteria.

Takeover prediction studies are a logical extension of the work of Altman (1968), who used financial statement information to explain corporate events. Early studies by Simkowitz and Monroe (1971) and Stevens (1973) were based on the Multiple Discriminant Analysis (MDA) technique. Stevens (1973) coupled MDA with factor analysis to eliminate potential multicollinearity problems and reported a predictive accuracy of 67.5%, suggesting that takeover prediction was viable. Belkaoui (1978) and Rege (1984) conducted similar analyses in Canada, with Belkaoui (1978) confirming the results of these earlier researchers and reporting a predictive accuracy of 85%. Concerns were raised by Rege (1984), who was unable to predict with similar accuracy. These concerns were also raised in research by others such as Singh (1971) and Fogelberg, Laurent, and McCorkindale (1975).

Reacting to the wide criticism of the MDA method, researchers began to use discrete choice models as the basis of their research. Harris et al. (1984) used probit analysis to develop a model and found that it had extremely high explanatory power, but were unable to discriminate between target and non-target firms with any degree of accuracy. Dietrich and Sorensen (1984) continued this work using a logit model and achieved a classification accuracy rate of 90%. Palepu (1986) addressed a number of methodological problems in takeover prediction.
He suggested the use of state-based prediction samples, where a number of targets were matched with non-targets for the same sample period. While this approach was appropriate for the estimation sample, it exaggerated accuracies within the predictive samples because the estimated error rates in these samples were not indicative of error rates within the population of firms. He also proposed the use of an optimal cut-off point derivation which considered the decision problem at hand. On the basis of this rectified methodology, along with the application of a logit model to a large sample of US firms, Palepu (1986) provided evidence that the ability of the model was no better than a chance selection of target and non-target firms. Barnes (1999) also used the logit model and a modified version of the optimal cut-off rule on UK data. His results indicated that a portfolio of predicted targets may have been consistent with Palepu's finding, but he was unable to document this in the UK context due to model inaccuracy.

In the following section the economic explanations underlying takeover activity are discussed. Section 3 outlines our takeover hypotheses and describes the explanatory variables that are used in the modelling procedure. The modelling framework and data used in the study are contained in Section 4, while the results of our model estimation, predictions, classification accuracy and portfolio economic outcomes are found in Section 5. We conclude in Section 6.

2. Economic explanations of takeover activity

Economic explanations of takeover activity have suggested the explanatory variables that were included in this discrete choice model development study. Jensen and Meckling (1976) posited that agency problems occurred when decision making and risk bearing were separated between management and stakeholders, leading to management inefficiencies. (Stakeholders are generally considered to be both stock and bond holders of a corporation.) Manne (1965) and Fama (1980) theorised that a mechanism existed that ensured management acted in the interests of the vast number of small non-controlling shareholders. (We take the interests of shareholders to be the maximisation of the present value of the firm.) They suggested that a market for corporate control existed in which alternative management teams competed for the rights to control corporate assets. The threat of acquisition aligned management objectives with those of stakeholders, as managers are terminated in the event of an acquisition in order to rectify inefficient management of the firm's assets. Jensen and Ruback (1983) suggested that both capital gains and increased dividends are available to an acquirer who could eliminate the inefficiencies created by target management, with the attractiveness of the firm for takeover increasing with the level of inefficiency.

Jensen (1986) looked at the agency costs of free cash flow, another form of management inefficiency. In this case, free cash flow referred to cash flows in excess of positive net present value (NPV) investment opportunities and normal levels of financial slack (retained earnings). The agency cost of free cash flow is the negative NPV that arises from investing in negative NPV projects rather than returning funds to investors. Jensen (1986) suggested that the market value of the firm should be discounted by the expected agency costs of free cash flow.
These, he argued, were the costs that could be eliminated either by issuing debt to fund an acquisition of stock, or through merger with, or acquisition of, a growing firm that had positive NPV investments and required the use of these excess funds. Smith and Kim (1994) combined the financial pecking order argument of Myers and Majluf (1984) with the free cash flow argument of Jensen (1986) to create another motivational hypothesis, which postulated that inefficient firms forgo profitable investment opportunities because of informational asymmetries. Further, Jensen (1986) argued that, due to information asymmetries that left shareholders less informed, management was more likely to undertake negative NPV projects rather than returning funds to investors. Smith and Kim (1994) suggested that some combination of these firms, such as an inefficient firm and an efficient acquirer, would be the optimal solution to the two respective resource allocation problems. This, they hypothesised, would result in a market value for the combined entity that exceeded the sum of the individual values of the firms. This is one form of financial synergy that can arise in merger situations.

Another form of financial synergy is that which results from a combination of characteristics of the target and bidding firms. Jensen (1986) suggested that an optimal capital structure exists, whereby the marginal benefits and marginal costs of debt are equal. At this point, the cost of capital for a firm is minimised. This suggested that increases in leverage will only be viable for those firms which have free cash flow excesses, and not for those which have an already high level of debt. Lewellen (1971) proposed that in certain situations financial efficiencies may be realised without the realisation of operational efficiencies. These efficiencies relied on a simple Miller and Modigliani (1964) model. It proposed that, in the absence of corporate taxes, an increase in a firm's leverage to reasonable levels would increase the value of the equity share of the company due to a lower cost of capital. A merger of two firms, where either one or both had not utilised their borrowing capacity, would therefore result in a financial gain. This financial gain would represent a valuation gain above that of the sum of the equity values of the individual firms. However, this result is predicated on the assumption that the firms need to either merge or be acquired in order to achieve this result.

Merger waves are well documented in the literature. Gort (1969) suggested that industry disturbances are the source of these merger waves, his argument being that they occurred in response to discrepancies between the valuation of a firm by shareholders and potential acquirers. As a consequence of economic shocks (such as deregulation, changes in input or output prices, etc.), expectations concerning future cash flow became more variable. This results in an increased probability that the value the acquirer places on a potential target is greater than its current owner's valuation. The result is a possible offer and subsequent takeover. Mitchell and Mulherin (1996), in their analysis of mergers and acquisitions in the US during the 1980s, provided evidence that mergers and acquisitions cluster by industries and time.
Their analysis confirmed the theoretical and empirical evidence provided by Gort (1969) and provided a different view, suggesting that mergers, acquisitions, and leveraged buyouts were the least cost method of adjusting to the economic shocks borne by an industry. These theories suggested a clear theoretical base on which to build takeover prediction models. As a result, eight main hypotheses for the motivation of a merger or acquisition have been formulated, along with twenty-three possible explanatory variables to be incorporated into predictive models.

3. Takeover hypotheses and explanatory variables

The most commonly accepted motivation for takeovers is the inefficient management hypothesis. (It is also known as the disciplinary motivation for takeovers.) The hypothesis states that inefficiently managed firms will be acquired by more efficiently managed firms. Accordingly:

H1: Inefficient management will lead to an increased likelihood of acquisition.

Explanatory variables suggested by this hypothesis as candidates to be included in the specifications of predictive models included:
1. ROA (EBIT/Total Assets – Outside Equity Interests)
2. ROE (Net Profit After Tax/Shareholders Equity – Outside Equity Interests)
3. Earnings Before Interest and Tax Margin (EBIT/Operating Revenue)
4. EBIT/Shareholders Equity
5. Free Cash Flow (FCF)/Total Assets
6. Dividend/Shareholders Equity
7. Growth in EBIT over the past year
along with an activity ratio,
8. Asset Turnover (Net Sales/Total Assets)

While there are competing explanations for the effect that a firm's undervaluation has on the likelihood of its acquisition by a bidder, there is consistent agreement across all explanations that the greater the level of undervaluation, the greater the likelihood a firm will be acquired. The hypothesis that embodies the impact of these competing explanations is as follows:

H2: Undervaluation of a firm will lead to an increased likelihood of acquisition.

The explanatory variable suggested by this hypothesis is:
9. Market to book ratio (Market Value of Securities/Net Assets)

The Price Earnings (P/E) ratio is closely linked to the undervaluation and inefficient management hypotheses. The impact of the P/E ratio on the likelihood of acquisition is referred to as the P/E hypothesis:

H3: A high Price to Earnings Ratio will lead to a decreased likelihood of acquisition.

It follows from this hypothesis that the P/E ratio is a likely candidate as an explanatory variable for inclusion in models for the prediction of potential takeover targets.
10. Price/Earnings Ratio

The growth resource mismatch hypothesis is the fourth hypothesis. However, the explanatory variables used in models specified to examine this hypothesis capture growth and resource availability separately. This gives rise to the following:

H4: Firms which possess low growth/high resource combinations or, alternatively, high growth/low resource combinations will have an increased likelihood of acquisition.

The explanatory variables suggested by this hypothesis are:
11. Growth in Sales (Operating Revenue) over the past year
12. Capital Expenditure/Total Assets
13. Current Ratio (Current Assets/Current Liabilities)
14. (Current Assets – Current Liabilities)/Total Assets
15. Quick Assets (Current Assets – Inventory)/Current Liabilities

The tendency of some firms to pay out less of their earnings in order to maintain enough financial slack (retained earnings) to exploit future growth opportunities as they arise has led to the dividend payout hypothesis:

H5: High payout ratios will lead to a decreased likelihood of acquisition.

The obvious explanatory variable suggested by this hypothesis is:
16. Dividend Payout Ratio

Rectification of capital structure problems is an obvious motivation for takeovers. However, there has been some argument as to the impact of low or high leverage on acquisition likelihood. This paper proposes an inefficient financial structure hypothesis, from which the following is derived:

H6: High leverage will lead to a decreased likelihood of acquisition.

The explanatory variables suggested by this hypothesis include:
17. Net Gearing (Short Term Debt + Long Term Debt)/Shareholders Equity
18. Net Interest Cover (EBIT/Interest Expense)
19. Total Liabilities/Total Assets
20. Long Term Debt/Total Assets

The existence of Merger and Acquisition (M&A) activity waves, where takeovers are clustered in wave-like profiles, has been proposed as an indicator of changing levels of M&A activity over time. It has been argued that the identification of M&A waves, with the corresponding improved likelihood of acquisition when the wave is surging, captures the effect of the rate of takeover activity at specific points in time, and serves as valuable input into takeover prediction models. Consistent with M&A activity waves and their explanation as a motivation for takeovers is the industry disturbance hypothesis:

H7: Industry merger and acquisition activity will lead to an increased likelihood of acquisition.

An industry relative ratio of takeover activity is suggested by this hypothesis:
21. Industry relative takeover activity, where the numerator is the total bids launched in a given year, while the denominator is the average number of bids launched across all the industries in the ASX.

Size will have an impact on the likelihood of acquisition. It seems plausible that smaller firms will have a greater likelihood of acquisition due to larger firms generally having fewer bidding firms with the resources to acquire them. This gives rise to the following hypothesis:

H8: The size of a firm will be negatively related to the likelihood of acquisition.

Explanatory variables that can be employed to control for size include:
22. Log (Total Assets)
23. Net Assets

4. Data and Method

The data requirements for the variables defined above are derived from the financial statements and balance sheet date price information for Australian listed companies. The financial statement information was sourced from the AspectHuntley database, which includes annual financial statement data for all ASX listed companies between 1995 and 2006. The database includes industry classifications for all firms, which were used in the construction of industry relative ratios. Lists of takeover bids and their respective success were obtained from the Connect4 database. This information enabled the construction of variables for relative merger activity between industries. Additionally, stock prices from the relevant balance dates of all companies were sourced from the AspectHuntley online database, the SIRCA Core Price Data Set and Yahoo! Finance.
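To make the variable definitions above concrete, the sketch below computes a handful of the candidate ratios from one firm-year of financial statement data. The field names are placeholders for illustration only; they are not AspectHuntley item names, and the exact definitions used in the study (for example, the treatment of outside equity interests) may differ.

```python
from dataclasses import dataclass

@dataclass
class FinancialYear:
    # Placeholder field names; the actual database items may differ.
    ebit: float
    net_profit_after_tax: float
    operating_revenue: float
    total_assets: float
    shareholders_equity: float
    outside_equity_interests: float
    short_term_debt: float
    long_term_debt: float
    market_value_of_securities: float
    net_assets: float

def candidate_ratios(fy: FinancialYear) -> dict:
    """A few of the explanatory variables listed in Section 3 (hypotheses H1, H2 and H6)."""
    equity_ex_oei = fy.shareholders_equity - fy.outside_equity_interests
    return {
        "roa": fy.ebit / (fy.total_assets - fy.outside_equity_interests),
        "roe": fy.net_profit_after_tax / equity_ex_oei,
        "ebit_margin": fy.ebit / fy.operating_revenue,
        "asset_turnover": fy.operating_revenue / fy.total_assets,  # operating revenue standing in for net sales
        "market_to_book": fy.market_value_of_securities / fy.net_assets,
        "net_gearing": (fy.short_term_debt + fy.long_term_debt) / equity_ex_oei,
    }
```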
4.1 The Discrete Choice Modelling Framework

The modelling procedure used is the nominal logit model, made popular in the bankruptcy prediction literature by Ohlson (1980) and, subsequently, in the takeover prediction literature by Palepu (1986). Logit models are commonly utilised for dichotomous state problems. The model is given by equations [1] to [3] below:

P_i = Pr(y_i = 1 | x_i) = 1 / (1 + e^(-Z_i)),  where  Z_i = β_0 + β_1 x_1i + … + β_K x_Ki        [1]

L_i = ln[P_i / (1 − P_i)] = Z_i = β_0 + β_1 x_1i + … + β_K x_Ki                                  [2]

∂P_i / ∂x_ki = β_k P_i (1 − P_i)                                                                 [3]

The logit model was developed to overcome the rigidities of the Linear Probability Model in the presence of a binary dependent variable. Equations [1] and [2] show the existence of a linear relationship between the log-odds ratio (otherwise known as the logit, L_i) and the explanatory variables. However, the relationship between the probability of the event and the explanatory variables is non-linear. This non-linear relationship has a major advantage that is demonstrated in equation [3]. Equation [3] measures the change in the probability of the event as a result of a small increment in an explanatory variable, x_k. When the probability of the event is high or low, the incremental impact of a change in an explanatory variable on the likelihood of the event will be compressed, requiring a large change in the explanatory variables to change the classification of the observation. If a firm is clearly classified as a target or non-target, a large change in the explanatory variables is required to change its classification.

4.2 Sampling Schema

Two samples were used in the model building and evaluation procedure. They were selected to mimic the problem faced by a practitioner attempting to predict takeover targets into the future. The first sample was used to estimate the model and to conduct in-sample classification. It was referred to as the Estimation Sample. This sample was based on financial data for the 2001 and 2002 financial years for firms that became takeover targets, as well as selected non-targets, between January 2003 and December 2004. The lag in the dates allows for the release of financial information, as well as allowing for the release of financial statements for firms whose balance dates fall after 30 June. Following model estimation, the probability of a takeover offer was estimated for each firm in the entire sample of firms between January 2003 and December 2004 using the estimated model and each firm's 2001 and 2002 financial data. Ex-post predictive ability for each firm was then assessed.

A second sample was then used to assess the predictive accuracy of the model estimated with the estimation sample data. It is referred to as the Prediction Sample. This sample includes the financial data for the 2003 and 2004 financial years, which will be used in conjunction with target and non-target firms for the period January 2005 to December 2006. Using the model estimated from the 2001 and 2002 financial data, the sample of firms from 2005 and 2006 were fitted to the model using their 2003 and 2004 financial data. They were then classified as targets or non-targets using the 2005 and 2006 data. This sampling methodology allows for the evaluation of ex-ante predictive ability rather than ex-post classification accuracy. A diagrammatic explanation of the sample data used for both model estimation and prediction can be found below in Figure 1, and in tabular form in Table 1.
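Before turning to Figure 1 and Table 1, a minimal sketch of the discrete choice framework in Section 4.1 is given below using statsmodels. The data are simulated purely for illustration; the ratio names, coefficients and sample size are invented and do not reproduce the study's estimation sample.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for an estimation sample: invented ratios and an invented
# data-generating process, used only to make the code self-contained.
rng = np.random.default_rng(0)
n = 200
data = pd.DataFrame({
    "roa": rng.normal(0.06, 0.05, n),
    "asset_turnover": rng.normal(0.9, 0.3, n),
    "net_gearing": rng.normal(0.8, 0.5, n),
})
z_true = -1.0 - 8.0 * data["roa"] - 0.5 * data["asset_turnover"] + 0.6 * data["net_gearing"]
data["is_target"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-z_true)))

X = sm.add_constant(data[["roa", "asset_turnover", "net_gearing"]])
fit = sm.Logit(data["is_target"], X).fit(disp=0)   # maximum likelihood estimate of equation [2]

probs = fit.predict(X)                              # equation [1]: P_i = 1 / (1 + exp(-Z_i))
marginal_roa = fit.params["roa"] * probs * (1 - probs)   # equation [3] for the ROA variable
print(fit.params.round(2))
```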
Figure 1: Timeline of sample data used in model estimation and prediction.

Table 1: Sample data used in model estimation and prediction
Sample               Financial Data    Classification Period
Estimation Sample    2001 and 2002     2003 and 2004
Prediction Sample    2003 and 2004     2005 and 2006

For model estimation, a technique known as state-based sampling was used. Allison (2006) suggested the use of this sampling approach in order to minimise the standard error of the estimated parameters when the dependent variable states were unequally distributed in the population. All the target firms were included in the estimation sample, along with an equal number of randomly selected non-target firms for the same period. Targets in the estimation sample were randomly paired with the sample of non-target firms for the same period over which financial data was measured. (This approach differs from matched pair samples, where targets are matched to non-targets on the basis of variables such as industry and/or size.)

4.3 Assessing the Estimated Model and its Predictive Accuracy

Walter (1994), Zanakis and Zopounidis (1997), and Barnes (1999) utilised the Proportional Chance Criterion and the Maximum Chance Criterion to assess the predictions of discriminant models relative to chance. These criteria are also applicable to the discrete choice modelling exercise that is the focus of this study and, accordingly, are discussed more fully below.

4.3.1 Proportional Chance Criterion

The Proportional Chance Criterion was utilised to assess whether the overall classifications from the models in this study were better than those expected by chance. This criterion compares the ability of a model to jointly classify target and non-target firms with that expected by chance. Although the criterion does not indicate the source of the classification accuracy of the model (that is, whether the model accurately predicts targets or non-targets), it does allow for comparison with alternative models. A simple Z-score calculation formed the basis of a joint test of the null hypothesis that the model was unable to jointly classify targets and non-targets better than chance. Under a chance selection, we would expect the proportions of targets and non-targets to jointly equal their frequencies in the population under consideration. The null and alternative hypotheses, along with the test statistic, are given below.

H0: Model is unable to classify targets and non-targets jointly better than chance.
H1: Model is able to classify targets and non-targets jointly better than chance.

If the statistic is significant, we reject the null hypothesis and conclude that the model can classify target and non-target firms jointly better than chance.

4.3.2 Maximum Chance Criterion

While the Proportional Chance Criterion indicated whether a model jointly classified target and non-target firms better than chance, it did not indicate the source of the predictive ability. However, under the Maximum Chance Criterion, a similar test of hypotheses does indicate whether a model has probability greater than chance in classifying either a target or a non-target firm. The Z-score statistic to test the null hypothesis that a model is unable to classify targets better than chance is given below.
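The test statistics for both criteria appear as equations in the original document and are not recoverable from this text version. The sketch below implements standard one-proportion forms of the two tests as they are commonly stated in this literature; it is an illustration under that assumption, not the authors' exact expressions. The worked numbers use the Single Raw Model cells reported later in Table 9.

```python
import math

def proportional_chance_z(a00, a01, a10, a11):
    """Z-test of the Proportional Chance Criterion: is the joint accuracy
    (A00 + A11)/T better than that expected under a chance assignment which
    matches the actual and predicted group proportions?"""
    t = a00 + a01 + a10 + a11
    observed = (a00 + a11) / t
    ta1, ta0 = a10 + a11, a00 + a01        # actual targets / non-targets
    tp1, tp0 = a01 + a11, a00 + a10        # predicted targets / non-targets
    expected = (ta1 * tp1 + ta0 * tp0) / t**2
    return (observed - expected) / math.sqrt(expected * (1 - expected) / t)

def maximum_chance_z(a01, a11, p_target):
    """Z-test of the Maximum Chance Criterion: is the Concentration Ratio
    A11/TP1 (Powell, 2001) better than the population proportion of targets?"""
    tp1 = a01 + a11
    concentration_ratio = a11 / tp1
    return (concentration_ratio - p_target) / math.sqrt(p_target * (1 - p_target) / tp1)

# Illustrative calls using the Single Raw Model outcome matrix from Table 9.
print(proportional_chance_z(a00=874, a01=124, a10=27, a11=35))   # roughly 4.0
print(maximum_chance_z(a01=124, a11=35, p_target=62 / 1060))     # roughly 8.7
```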
It is based on the Concentration Ratio defined by Powell (2001), which measures the maximum potential chance of correct classification of a target, or the proportion of correctly classified targets from those firms predicted to be targets.

H0: Model is unable to classify targets better than chance.
H1: Model is able to classify targets better than chance.

These two criteria were used to assess the classification accuracy of the models in the Estimation and Prediction Samples. The focus of this study was on the use of the Maximum Chance Criterion for targets, as it assessed whether the number of correctly predicted targets exceeded that expected by chance among the population of predicted targets (that is, the ratio A11/TP1 in Table 2). The Concentration Ratio was the ratio advocated by Barnes (1999) for maximising returns.

4.3.3 Industry Relative Ratios

Platt and Platt (1990) advocated the use of industry relative variables to increase the predictive accuracy of bankruptcy prediction models, on the premise that these variables enabled more accurate predictions across industries and through time. This argument was based on two main contentions. Firstly, average financial ratios are inconsistent across industries and reflect the relative efficiencies of production commonly employed in those industries. Secondly, average financial ratios are inconsistent throughout time as a result of variable industry performance due to economic conditions and other factors. Platt and Platt (1990) argued that firms from different industries or different time periods could not be analysed without some form of industry adjustment. In this study both raw and industry adjusted financial ratios were used to determine the benefits of industry adjustment.

There are four different model specifications. One was based on raw financial ratios for the single year prior to the sample period (the Single Raw Model). Another was based on averaged raw financial ratios for the two years prior to the acquisition period (the Combined Raw Model). A third specification was based on industry adjusted financial ratios for the single year prior to the sample period (the Single Adjusted Model), while the fourth was based on averaged industry adjusted financial ratios for the two years prior to the sample period (the Combined Adjusted Model). The purpose of using averages was to reduce random fluctuations in the financial ratios of the firms under analysis, and to capture permanent rather than transitory values. This approach was proposed by Walter (1994).

Most researchers used industry relative ratios calculated by scaling firms' financial ratios using the industry average, as defined by equation [4] below:

Industry relative ratio_i = Firm ratio_i / Industry average ratio        [4]

Under this procedure all ratios were standardised to unity. Industry relative ratios such as ROA or ROE that were greater than unity indicated industry over-performance, while those less than unity were consistent with under-performance. Problems were encountered when the industry average value was negative. In this case, those firms that under-performed the industry average also had industry relative ratios greater than one. This was the result of a large negative number being divided by a smaller negative number. Additionally, those firms that over-performed the negative industry average ratio, but still retained a negative financial ratio, had a ratio less than one.
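A tiny numerical sketch of this sign problem, using invented ROA figures: under the unity-standardised adjustment of equation [4], an under-performer in a loss-making industry looks like an over-performer, and a modest over-performer looks like an under-performer.

```python
# Illustration of the sign problem with unity-standardised industry relative
# ratios (equation [4]) when the industry average is negative.
# The ROA figures below are invented purely for demonstration.

industry_average_roa = -0.05   # a loss-making industry on average

firm_a_roa = -0.10             # under-performs the industry average
firm_b_roa = -0.02             # over-performs the average but is still negative

# Equation [4]: firm ratio scaled by the industry average (standardised to unity)
print(firm_a_roa / industry_average_roa)   # 2.0  -> appears to be strong over-performance
print(firm_b_roa / industry_average_roa)   # 0.4  -> appears to be under-performance
```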
This ambiguity in the calculation of industry relative ratios had implications for those models in this study that included variables with negative industry averages for some ratios. This problem may explain the inability of researchers in the recent literature who utilised industry adjustments to accurately predict target and non-target firms, and may have caused the Barnes (1999) model to predict no takeover targets at all. An alternative methodology was implemented to account for negative industry averages. Equation [5] below uses the difference between the individual firm's ratio and the industry average ratio, divided by the absolute value of the industry average ratio:

Industry relative ratio_i = (Firm ratio_i − Industry average ratio) / |Industry average ratio|        [5]

As a result, all ratios are standardised to zero rather than one. Problems relating to the sign of the industry relative ratio are also corrected. Under-performance of the industry results in an industry relative ratio less than zero, with over-performance returning a ratio greater than zero. This approach is similar to the variable scaling methods widely documented in the Neural Network prediction literature. It was used for the two models based on industry relative variables, with industry adjustment based on the 24-industry classification from the old ASX.

4.4 Calculation of Optimal Cut-off Probabilities for Classification

In the case of a logit model, the predictive output for an input sample of the explanatory variables is a probability with a value between 0 and 1. This is the predicted probability of an acquisition offer being made for a specific firm within the prediction period. What is needed is a method to convert these predicted probabilities of an acquisition offer into a binary prediction of becoming a target or not. These methods are known as optimal cut-off probability calculations, and two main methodologies were implemented in this study.

4.4.1 Minimisation of Error Probabilities (Palepu, 1986)

In order to understand the calculation of the optimal cut-off probability, what is needed is an understanding of Type I and Type II errors. A Type I error occurs when a firm is predicted to become a takeover target when it does not (outcome A01 in Table 2 below), while a Type II error occurs when a firm is predicted not to become a target but actually becomes a target (outcome A10). Palepu (1986) assumed that the cost of these two types of errors was identical. To calculate the optimal cut-off probability, he used histograms to plot the predicted probabilities of acquisition offers for targets and non-targets separately on the same graph. The optimal cut-off probability which minimised the total error rate occurs at the intersection of the two conditional distributions. Firms with predicted probabilities of acquisition offers above this cut-off were classified as targets, and those with probabilities below the cut-off were classified as non-targets.

Table 2: An outcome matrix for a standard classification problem
                       Predicted Outcome
Actual Outcome     Non-Target (0)   Target (1)   Total
Non-Target (0)     A00              A01          TA0
Target (1)         A10              A11          TA1
Total              TP0              TP1          T

4.4.2 Minimisation of Error Costs (Barnes, 1999)

Palepu (1986) assumed equal costs of Type I and Type II errors. However, it has been suggested that, due to investment being less likely in predicted non-targets, the cost of investing in the equity of a firm which did not become a takeover target (Type I error) was greater than the cost of not investing in the equity of a firm that became a takeover target (Type II error).
Accordingly, Barnes (1999) proposed minimisation of the Type I error in order to maximise returns from an investment in predicted targets. From Table 2, it can be seen that the minimisation of Type I error is equivalent to the minimisation of the number of incorrectly predicted targets, A01, or alternatively, the maximisation of the number of correctly predicted targets, A11. It follows that a cut-off probability is needed to maximise the number of predicted targets in a portfolio that became actual targets. This involved maximisation of the ratio of A11 to TP1 in Table 2. Figure 2 below is an idealised representation of the Type I and Type II errors associated with the Palepu and Barnes cut-off probability methodologies. As the purpose of this paper was to replicate the problem faced by a practitioner, unawareness of the actual outcomes of the prediction process was assumed. Further, the probabilities that companies will become targets were derived from a prediction model estimated using estimation data on known targets and non-targets. The companies for which these probabilities are calculated comprised the Prediction Sample (recall Table 1).

Figure 2: Idealised Palepu and Barnes cut-off probabilities (relative frequency of non-targets and targets plotted against estimated acquisition offer probabilities, with the Palepu and Barnes cut-offs and the associated Type I and Type II error regions marked).

For the calculation of the optimal cut-off probability according to Palepu, a histogram of predicted acquisition offer probabilities for targets and non-targets was created from the Estimation Sample, and the error minimisation procedure detailed above in Section 4.4.1 was followed. To calculate the optimal cut-off under the Barnes methodology outlined in Section 4.4.2, the ratio of A11/TP1 for all cut-off probabilities between 0 and 1 was calculated to determine the maximum point. A simple grid search from 0 to 1 in increments of 0.05 was used. The classification and prediction accuracies under these two methods of calculating cut-off probabilities were compared for all four models considered in this study.

5. Results

5.1 Multicollinearity Issues

An examination of the correlation matrix and Variance Inflation Factors (VIFs) of the Estimation Sample indicated that five variables needed to be eliminated. They are listed in Table 3. That these variables should contribute to the multicollinearity problem was not a surprise, considering the presence of the large number of potential explanatory variables measuring similar attributes suggested by the hypothesised motivations for takeover. These variables had correlation coefficients that exceeded 0.8 or VIFs that exceeded 10. Exclusion of these five variables eliminated significant correlations in the variance/covariance matrix and reduced the VIF values of all the remaining variables to below 10. The resultant reduced variable set was used in the backward stepwise logit models estimated and reported in the following sub-section.

Table 3: Variables removed due to multicollinearity
ROE (NPAT/Shareholders Equity – Outside Equity Interests)
FCF/Total Assets
Current Ratio (Current Assets/Current Liabilities)
(Current Assets – Current Liabilities)/Total Assets
Total Liabilities/Total Assets

5.2 Backward Stepwise Regression Results

Using the remaining variables after controlling for multicollinearity, backward stepwise logistic regressions were performed for each of the four model specifications.
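A minimal sketch of a backward elimination loop of this kind is given below, using statsmodels and the p-value retention threshold reported in the next paragraph. The data frame and column names are placeholders, not the study's data, and the stopping rule is one reasonable reading of "backward stepwise" rather than the authors' exact procedure.

```python
import pandas as pd
import statsmodels.api as sm

def backward_stepwise_logit(data: pd.DataFrame, target_col: str, threshold: float = 0.15):
    """Repeatedly fit a logit model and drop the least significant variable until
    every remaining variable has a p-value below the retention threshold
    (or only one variable remains)."""
    predictors = [c for c in data.columns if c != target_col]
    y = data[target_col]
    while True:
        X = sm.add_constant(data[predictors])
        fit = sm.Logit(y, X).fit(disp=0)
        pvalues = fit.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] <= threshold or len(predictors) == 1:
            return fit, predictors
        predictors.remove(worst)

# Hypothetical usage: 'is_target' is a 0/1 indicator, other columns are financial ratios.
# model, kept = backward_stepwise_logit(ratios_df, target_col="is_target")
```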
Consistent with the methodology of Walter (1994), the significance level for retention of variables in the analysis was set at 0.15. The results for these models, which were estimated using a common set of target and non-target firms, are presented in Tables 4 to 7, with the results for the combined adjusted model in Table 7 described in more detail in the following sub-section. (Detailed results for each of the models represented in Tables 4 to 7 are available from the authors on request.) The backward stepwise analysis for this model required seven steps, eliminating six of the fifteen starting variables, while retaining nine significant variables.

Tables 4 to 6 (backward stepwise results for the Single Raw, Single Adjusted and Combined Raw models, each reporting parameter estimates and Prob > Chi Sq values) are not fully legible in this copy. Table 7 is reproduced below.

Table 7: Backward Stepwise Results for the Combined Adjusted Model
Variable                                                         Parameter Estimate   Prob > Chi Sq
Intercept                                                        -0.04                (0.92)
ROA (EBIT/Total Assets – Outside Equity Interests)                0.28                (0.09)
Asset Turnover (Net Sales/Total Assets)                          -0.54                (0.05)
Capital Expenditure/Total Assets                                  0.69                (<0.01)
Quick Assets (Current Assets – Inventory)/Current Liabilities     0.93                (0.02)
Dividend Payout Ratio                                            -0.34                (0.02)
Long Term Debt/Total Assets                                      -0.32                (0.07)
Merger Wave Dummy                                                -0.59                (0.06)
Ln (Total Assets)                                                13.34                (<0.01)
Net Assets                                                       -0.21                (0.07)

These results provided evidence concerning six of the eight hypothesised motivations for takeover discussed previously in Section 3. The growth resource mismatch hypothesis was only significant in the two adjusted models. This suggested that growth should be measured relative to an industry benchmark when attempting to discriminate between target and non-target firms.

5.3 Classification Analysis

While the analysis of the final models was of theoretical interest, the primary aim of this paper was to evaluate their classification accuracy. For the purposes of classification, the models were re-estimated using the Estimation Sample with all variables included. The complex relationships between all the variables were assumed to provide us with the ability to discriminate between target and non-target firms. Using financial data from 2001 and 2002, the models were estimated on the basis of 62 targets matched with 62 non-targets, where the targets were identified between January 2003 and December 2004. Following estimation of the model, an in-sample fit was sought for the entire sample of the 1060 firms reporting 2001 and 2002 financial data. To proceed with classification, we derived a cut-off probability using the methods of Palepu (1986) and Barnes (1999). The graph presented in Figure 3 focuses on the combined adjusted model and the Palepu cut-off point. Using a bin range of 0.05, it shows the histograms required for the calculation of the cut-off probability; the cut-off derived under the Palepu methodology was approximately 0.675. This is the probability corresponding to the highest point of intersection of the plots of the estimated acquisition probabilities for target and non-target companies.

Figure 3: Cut-off calculations using the Palepu methodology and 0.05 histogram bin increments.

Table 8: Summary of optimal cut-off probabilities for all models under both methodologies.
Model                       Palepu cut-off   Barnes cut-off
Single Raw Model            0.725            0.85
Single Adjusted Model       0.725            0.90
Combined Raw Model          0.850            0.95
Combined Adjusted Model     0.675            0.95

The optimal cut-off probabilities derived by using both the Barnes and Palepu methodologies for all four models are reported in Table 8. The optimal cut-off probabilities calculated using the Barnes methodology were significantly larger than the cut-offs calculated under the Palepu methodology for all models. (As is noted in the following tables, this is an explanation for the smaller number of predicted targets under the Barnes methodology.)

Table 9 below shows the outcome of the application of all four models to the entire Estimation Sample based on a cut-off derived under the Barnes approach. Included in this table are the outcome matrices for each of the models. An outcome of 0 indicated that the firm was not a target, or was not predicted to be a target, in the sample period. A value of 1 indicated that a firm was, or was predicted to become, a target in the sample period. On the basis of these outcome matrices, a number of performance measures were generated. The first measure was the Concentration Ratio. This is the predictive accuracy measure of the model and corresponds to the Maximum Chance Criterion. It is the proportion of actual targets within the portfolio of predicted target firms for each of the models, and was represented by the ratio A11/TP1 from the outcome matrix depicted previously in Table 2. The next measure indicated the expected accuracy under a chance selection of takeover targets within the sample period (TA1/T). The last measure is a measure of the accuracy of the model relative to chance, quantifying the Proportional Chance Criterion; it is calculated by dividing the first ratio by the second and then subtracting unity. All three measures were expressed as a percentage.

An examination of the statistics corresponding to these measures for all four models in Table 9 indicated that, for the estimation sample with a Barnes cut-off, the combined raw model was the most accurate. Of the 80 firms that this model predicted to become takeover targets in the estimation period, 19 actually became targets. This represented a prediction accuracy of 23.75%. When taken relative to chance, this accuracy exceeded the benchmark by 305%.

Table 9: Outcome matrices for all models for classification of the Estimation Sample (Barnes cut-off probabilities)

Single Raw Model (cut-off probability = 0.85)††
Actual Outcome    Predicted 0   Predicted 1   Total   Predictive Accuracy   Chance Accuracy   Relative to Chance
Non-Target (0)    874           124           998     97.00%                94.15%            3.03%**
Target (1)        27            35            62      22.01%                5.85%             276.24%**
Total             901           159           1060

Single Adjusted Model (cut-off probability = 0.90)††
Actual Outcome    Predicted 0   Predicted 1   Total   Predictive Accuracy   Chance Accuracy   Relative to Chance
Non-Target (0)    906           88            994     96.18%                94.13%            2.18%**
Target (1)        36            26            62      22.81%                5.87%             288.59%**
Total             942           114           1056

Combined Raw Model (cut-off probability = 0.95)††
Actual Outcome    Predicted 0   Predicted 1   Total   Predictive Accuracy   Chance Accuracy   Relative to Chance
Non-Target (0)    935           61            996     95.60%                94.14%            1.55%**
Target (1)        43            19            62      23.75%                5.86%             305.29%**
Total             978           80            1058

Combined Adjusted Model (cut-off probability = 0.95)††
Actual Outcome    Predicted 0   Predicted 1   Total   Predictive Accuracy   Chance Accuracy   Relative to Chance
Non-Target (0)    938           56            994     95.33%                94.13%            1.27%*
Target (1)        46            16            62      22.22%                5.87%             278.54%**
Total             984           72            1056

†† Indicates that the overall predictions of the model are significantly better than chance at the 1% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.
* Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 5% level of significance according to the Maximum Chance Criterion.

For the purpose of comparison, the classification results for the cut-off probabilities calculated using the Palepu cut-off points are presented in Table 10. As was the case when the Barnes methodology was used to determine the cut-off values for classification, the Palepu approach realised similar results. The combined raw model was again the most accurate model for prediction, with a predictive accuracy of 19.59% and a relative to chance figure of 234.3%. However, as was the case for all four models, the use of this cut-off probability approach significantly reduced the Concentration Ratio and, therefore, the classification accuracy of the models under the Maximum Chance Criterion.

Table 10: Outcome matrices for all models for classification of the Estimation Sample (Palepu cut-off probabilities)

Single Raw Model (cut-off probability = 0.725)††
Actual Outcome    Predicted 0   Predicted 1   Total   Predictive Accuracy   Chance Accuracy   Relative to Chance
Non-Target (0)    812           186           998     97.83%                94.15%            3.91%**
Target (1)        18            44            62      19.13%                5.85%             227.01%**
Total             830           230           1060

Single Adjusted Model (cut-off probability = 0.725)††
Actual Outcome    Predicted 0   Predicted 1   Total   Predictive Accuracy   Chance Accuracy   Relative to Chance
Non-Target (0)    787           207           994     97.52%                94.13%            3.60%**
Target (1)        20            42            62      16.87%                5.87%             187.39%**
Total             807           249           1056

Combined Raw Model (cut-off probability = 0.85)††
Actual Outcome    Predicted 0   Predicted 1   Total   Predictive Accuracy   Chance Accuracy   Relative to Chance
Non-Target (0)    840           156           996     97.22%                94.14%            3.27%**
Target (1)        24            38            62      19.59%                5.86%             234.30%**
Total             864           194           1058

Combined Adjusted Model (cut-off probability = 0.675)††
Actual Outcome    Predicted 0   Predicted 1   Total   Predictive Accuracy   Chance Accuracy   Relative to Chance
Non-Target (0)    749           245           994     97.53%                94.13%            3.61%**
Target (1)        19            43            62      14.93%                5.87%             154.34%**
Total             768           288           1056

†† Indicates that the overall predictions of the model are significantly better than chance at the 1% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.

Interestingly, while the Palepu methodology did improve the number of targets accurately predicted (A11), in doing so it also predicted a large number of non-target firms to become targets (A01). The Barnes methodology focused on the maximisation of returns from an investment in predicted targets. Rather than being focused on accurately predicting a large number of targets, it focused on improving the proportion of actual targets in the portfolio of predicted targets. Accordingly, there are a smaller number of targets predicted under the Barnes methodology.
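The Barnes cut-off selection underlying Tables 8 to 10 can be sketched as a simple grid search that maximises the Concentration Ratio A11/TP1, as described in Section 4.4.2; the Palepu rule instead takes the intersection point of the target and non-target probability histograms. The function below is an illustration with placeholder inputs, not the study's code.

```python
import numpy as np

def barnes_cutoff(probabilities: np.ndarray, is_target: np.ndarray, step: float = 0.05):
    """Grid search over cut-off probabilities between 0 and 1, keeping the cut-off
    that maximises the Concentration Ratio A11/TP1 (correctly predicted targets
    divided by all firms predicted to be targets)."""
    best_cutoff, best_ratio = None, -1.0
    for cutoff in np.arange(0.0, 1.0 + step, step):
        predicted_target = probabilities >= cutoff
        tp1 = predicted_target.sum()
        if tp1 == 0:
            continue  # no predicted targets at this cut-off
        a11 = (predicted_target & (is_target == 1)).sum()
        ratio = a11 / tp1
        if ratio > best_ratio:
            best_cutoff, best_ratio = cutoff, ratio
    return best_cutoff, best_ratio

# Hypothetical usage with fitted probabilities from an estimation sample:
# cutoff, concentration = barnes_cutoff(fitted_probs, target_indicator)
```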
As previously noted in Section 4.3.2, the Barnes methodology coincided more with the spirit of the Maximum Chance Criterion than the Proportional Chance Criterion. According to the Proportional Chance Criterion, all four models were able to jointly classify targets and non-targets within the estimation period significantly better than chance. Further, as revealed by the Maximum Chance Criterion, all models also classified targets individually significantly better than chance. Overall, these results indicated high model classification ability. This was expected, given that all targets in the estimation sample were used in the estimation of the model parameters.

5.4 Classification in the Prediction Period

The next step of the analysis was to assess the predictive abilities of our models using the Prediction Sample. Of the total 1054 firms in this sample, 108 became targets during the prediction period. Panel A and Panel B of Table 11 report the predictions from the four estimated models using both the Barnes and Palepu cut-off probability approaches. Under the Barnes cut-off methodology, calculation of the Concentration Ratio indicated that the combined raw and combined adjusted models performed best of all of the models. This confirmed the results from the estimation period. The combined adjusted model predicted 125 firms to become targets during the prediction period, of which 25 actually became targets. Prediction accuracy was 20%. Under a chance selection, we would have expected only 10.30% of those companies predicted to become targets to actually become targets. This meant that the model exceeded a chance prediction by 94.18%. While Walter (1994) was able to predict 102% better than chance, other studies, including those of Palepu (1986) and Barnes (1999), were unable to achieve this level of accuracy. Industry adjustment increased predictive ability for both the single and combined models, suggesting that stability may be achieved through these adjustments. Furthermore, the combination of two years of financial data also appeared to improve predictive accuracy. This suggests that this adjustment eliminates random fluctuations in the financial ratios being used as input to the prediction models.

Table 11: Prediction results for all four models using the Prediction Sample and both Barnes and Palepu cut-off probabilities

Panel A: Barnes cut-off probabilities
Model                                                    Predictive Accuracy   Chance Accuracy   Relative to Chance
Single Raw Model (cut-off probability = 0.90)            15.09%                10.25%            47.22%*
Single Adjusted Model (cut-off probability = 0.95)       15.79%                10.27%            53.75%*
Combined Raw Model (cut-off probability = 0.85)          17.65%                10.25%            72.29%**
Combined Adjusted Model† (cut-off probability = 0.95)    20.00%                10.30%            94.18%**

Panel B: Palepu cut-off probabilities
Model                                                    Predictive Accuracy   Chance Accuracy   Relative to Chance
Single Raw Model (cut-off probability = 0.725)           16.83%                10.25%            64.20%*
Single Adjusted Model (cut-off probability = 0.725)      17.79%                10.27%            73.22%*
Combined Raw Model (cut-off probability = 0.85)          17.51%                10.25%            70.83%**
Combined Adjusted Model (cut-off probability = 0.675)    16.77%                10.30%            62.82%**

† Indicates that the overall predictions of the model are significantly better than chance at the 5% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.
* Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 5% level of significance according to the Maximum Chance Criterion.

The prediction results for the Palepu-derived cut-off probabilities are presented in Table 11 (Panel B). By a comparison of Panel A with Panel B in Table 11, it can be seen that when the Barnes cut-off probability methodology was used for the single models, the Concentration Ratio decreased relative to that of Palepu. However, it improved the ratio for the combined models. This result was reversed when the Palepu cut-off probability approach was used. Further, given the better performance of the combined models using the estimation sample, this provided the rationale for the use of the combined models and the Barnes methodology to calculate the optimal cut-off probabilities.

A different variable selection approach was implemented in an attempt to improve the accuracy of the two best predictive models, namely, the combined raw model and the combined adjusted model. A number of variables that had been insignificant in all estimated models were removed, and the estimation and classification procedures were repeated on the remaining variable data set. (The variables removed were: growth in EBIT over the past year, the market to book ratio (Market Value of Securities/Net Assets), and the Price/Earnings Ratio.) The classification results for the application of these models to both the estimation and prediction periods are given in Table 12.

Table 12: Application of the improved models to both the Estimation Sample and the Prediction Sample (Barnes cut-off probabilities)

Estimation Sample
Model                                                     Predictive Accuracy   Chance Accuracy   Relative to Chance
Combined Raw Model (less variables 7, 9 and 10)††         24.66%                5.86%             320.77%**
Combined Adjusted Model (less variables 7, 9 and 10)††    24.56%                5.87%             318.34%**

Prediction Sample
Model                                                     Predictive Accuracy   Chance Accuracy   Relative to Chance
Combined Raw Model (less variables 7, 9 and 10)           17.54%                10.25%            71.22%**
Combined Adjusted Model (less variables 7, 9 and 10)      22.45%                10.30%            118.05%**

†† Indicates that the overall predictions of the model are significantly better than chance at the 1% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.
* Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 5% level of significance according to the Maximum Chance Criterion.

The elimination of variables resulted in significant improvements in the in-sample classification accuracy using the estimation sample, with accuracies exceeding chance by well over 300%. This improvement in classification accuracy was maintained into the prediction period. The accuracy of the combined adjusted model was 118% greater than chance. This represented a level of statistical accuracy above that reported by any similar published study in the area of takeover prediction. These results can be used to refute the claims of Barnes (1999) and Palepu (1986) that models cannot be implemented which achieve predictive accuracies greater than chance.
They further confirm the results of Walter (1994), while using a wider sample of firms. The combined adjusted model significantly outperformed the other models for predictive purposes, suggesting that this is the most appropriate model for the application of logit analysis to predict takeover targets in the Australian context.

5.5 Economic Outcomes

Although the above methodology provided us with a statistical assessment of model performance, it had nothing to say about the economic usefulness of the model. Palepu (1986), Walter (1994), and Wansley et al. (1983) all implemented an equally weighted portfolio technique to assess whether their predictions of takeover targets were able to earn abnormal risk adjusted returns. The conclusion we drew from the results of the abovementioned studies was that a positive abnormal return was not guaranteed from an investment in the targets predicted from these models. The portfolios of predicted targets in two of these studies were unrealistically large, at 91 in the case of Walter and 625 in the case of the Palepu study. Due to the effect of transaction costs on returns, practitioners would be likely to limit themselves to smaller portfolios in the order of 10 to 15 stocks. To assess the economic usefulness of our modelling approach, we replicated a modified version of the Palepu (1986) and Walter (1994) portfolio technique using our predicted targets.

Only commonly predicted targets across all models were included in the portfolio analysis, for two reasons. The first was to reduce the number of stocks to a manageable level, and the second was to improve the ratio of actual targets in the portfolio. Further, we rejected the equally weighted portfolio approach on the grounds that it was an inefficient strategy for an informed investor who possessed results from our modelling. We reasoned that such an investor could most likely take a leveraged position through derivatives. The portfolio analysed in this study comprised 13 predicted target firms, of which 5 actually became targets. While this is a good result per se, we sought to quantify the economic benefit from an investment in these stocks. The portfolio of predicted targets was held for the entire prediction period of 2005 and 2006, which constituted 503 trading days. Table 13 below presents the Cumulative Average Abnormal Return (CAAR, %) at 20-day intervals during the prediction period.

Table 13: Cumulative Average Abnormal Returns (CAARs) for the portfolio of commonly predicted takeover targets over the Prediction Period of 2005 and 2006

Day    Portfolio (13 stocks) CAAR (%)    Actual Targets (5 stocks) CAAR (%)
20     1.38                              5.36
40     2.84                              10.50
60     -1.98                             5.58
80     -2.53                             6.11
100    -5.52                             -1.15
120    4.40                              25.16
140    3.06                              17.83
160    4.38                              20.70
180    5.51                              24.79
200    9.90                              34.82
220    7.51                              34.87
240    6.40                              29.31
260    5.04                              27.71
280    4.77                              29.64
300    4.67                              32.47
320    3.08                              33.53
340    0.73                              31.96
360    2.89                              26.62
380    5.28                              33.72
400    6.99                              32.02
420    9.78                              37.43
440    11.33                             40.22
460    57.44                             46.00
480    58.38                             47.27
500    68.90                             52.12
503    68.67*                            50.86^

The full prediction period CAAR of 68.67% was significantly greater than zero at the 1% level of significance under the Standard Abnormal Return (SAR) methodology of Brown and Warner (1985). We recognised that these results could have been potentially driven by actual non-target firms within the portfolio of predicted targets.
Such an outcome would suggest that the abnormal return was the result of the chance selection of over-performing non-target firms, rather than an accurate selection of target firms. To answer this question, the same CAAR calculation was applied to the sub-portfolio of firms that actually became targets. The full-period CAAR of 50.86% was also significantly greater than zero at the 1% level. This supported the proposition that the CAAR of the portfolio was driven by the performance of the actual targets within the portfolio. Table 13 also indicated that the CAAR for the portfolio increased sharply between days 440 and 460. This result was driven by the extremely positive returns on the stock ATM, which was a non-target firm predicted by the models to be a target. After repeating the portfolio analysis with this stock eliminated from the portfolio of predicted targets, a significant positive abnormal return of 25% (t = 9.63) was still realised over the entire prediction period. Another observation from Table 13 is that the CAAR was not positive (nor significant) for either portfolio early in the prediction period. From the second column of Table 13, the CAAR after 100 days was negative, and after 340 days the CAAR was barely distinguishable from zero. The real gains to the portfolio were made as mergers and acquisitions were announced and completed in the latter stages of 2006, highlighting the fact that the portfolio had to be held for the entire prediction period in order to realise the potential available returns.

6. Conclusion

The main finding of this paper was that the combined adjusted model, which was based on averaged, industry-adjusted financial ratios across the sample period, emerged as a clear standout with regard to predictive accuracy. Further, the implementation of industry-adjusted data, as described in Section 4.3.3 of this paper, significantly improved the classification accuracy of all but one of the models analysed in both the estimation and prediction periods. Additionally, this paper provided evidence that the inclusion of the Barnes methodology for calculating the optimal cut-off point significantly improved classification accuracy and enabled the successful use of logit models to predict takeover targets within the Australian context. The accuracy of the single best model in this paper exceeded a chance selection by 118% and represented the highest reported accuracy for a logit model. Another important finding of this paper resulted from the examination of a portfolio of predicted targets. We demonstrated that an investment in the predicted targets that were common across the logit models resulted in significant Cumulative Average Abnormal Returns (CAARs) being made by an investor. Several steps were undertaken to ensure that this result was robust against returns on predicted non-target stocks. This suggests that the abnormal returns made are based on the accuracy of the predictions common to the logit models analysed in this study, rather than on any chance selection. We believe our results provide evidence in favour of the proposition that an abnormal return can be made from an investment in the commonly predicted takeover targets from the four logit-based models analysed in this paper. There is a wealth of evidence suggesting that combining forecasts from different models improves forecasting ability.
This is an obvious direction for future research and may well be achieved either by a logit and MDA combination, or through the inclusion of a neural network approach to predict targets.

References

Allison, Paul D, 2006, Logistic Regression Using the SAS System. Cary, NC: SAS Institute.
Barnes, Paul, 1990, The Prediction of Takeover Targets in the UK by means of Multiple Discriminant Analysis, Journal of Business Finance and Accounting 17, 73-84.
Barnes, Paul, 1999, Predicting UK Takeover Targets: Some Methodological Issues and Empirical Study, Review of Quantitative Finance and Accounting 12, 283-301.
Belkaoui, Ahmed, 1978, Financial Ratios as Predictors of Canadian Takeovers, Journal of Business Finance and Accounting 5, 93-108.
Brown, Stephen J, and Jerold B Warner, 1985, Using Daily Stock Returns, Journal of Financial Economics 14, 3-31.
Dietrich, Kimball J, and Eric Sorensen, 1984, An Application of Logit Analysis to Prediction of Merger Targets, Journal of Business Research 12, 393-402.
Fama, Eugene F, 1980, Agency Problems and the Theory of the Firm, Journal of Political Economy 88, 288-307.
Fogelberg, G, CR Laurent, and D McCorkindale, 1975, The Usefulness of Published Financial Data for Predicting Takeover Vulnerability, University of Western Ontario, School of Business Administration (Working Paper 150).
Gort, Michael C, 1969, An Economic Disturbance Theory of Mergers, Quarterly Journal of Economics 83, 624-642.
Harris, Robert S, John F Stewart, David K Guilkey, and Willard T Carleton, 1984, Characteristics of Acquired Firms: Fixed and Random Coefficient Probit Analyses, Southern Economic Journal 49, 164-184.
Jennings, DE, 1986, Judging Inference Adequacy in Logistic Regression, Journal of the American Statistical Association 81, 471-476.
Jensen, Michael C, 1986, Agency Costs of Free Cash Flow, Corporate Finance, and Takeovers, American Economic Review 76, 323-329.
Jensen, Michael C, and William H Meckling, 1976, Theory of the Firm: Managerial Behaviour, Agency Costs, and Ownership Structure, Journal of Financial Economics 3, 305-360.
Jensen, Michael C, and Richard S Ruback, 1983, The Market for Corporate Control: The Scientific Evidence, Journal of Financial Economics 11, 5-50.
Lewellen, Wilbur G, 1971, A Pure Financial Rationale for the Conglomerate Merger, Journal of Finance 26, 521-537.
Manne, Henry G, 1965, Mergers and the Market for Corporate Control, Journal of Political Economy 73, 110-120.
Miller, Merton H, and Franco Modigliani, 1964, Dividend Policy, Growth, and the Valuation of Shares, Journal of Business 34, 411-433.
Mitchell, Mark L, and J Harold Mulherin, 1996, The Impact of Industry Shocks on Takeover and Restructuring Activity, Journal of Financial Economics 41, 193-229.
Myers, Stewart C, and Nicholas S Majluf, 1984, Corporate Financing and Investment Decisions when Firms have Information that Investors do not, Journal of Financial Economics 13, 187-221.
Ohlson, J, 1980, Financial Ratios and the Probabilistic Prediction of Bankruptcy, Journal of Accounting Research 18, 109-131.
Palepu, Krishna G, 1986, Predicting Takeover Targets: A Methodological and Empirical Analysis, Journal of Accounting and Economics 8, 3-35.
Platt, Harlan D, and Marjorie D Platt, 1990, Development of a Class of Stable Predictive Variables: The Case of Bankruptcy Prediction, Journal of Business Finance and Accounting 17, 31-51.
Powell, Ronan G, 2001, Takeover Prediction and Portfolio Performance: A Note, Journal of Business Finance and Accounting 28, 993-1011.
Rege, Udayan P, 1984, Accounting Ratios to Locate Takeover Targets, Journal of Business Finance and Accounting 11, 301-311.
Simkowitz, Michael A, and Robert J Monroe, 1971, A Discriminant Function for Conglomerate Targets, Southern Journal of Business 38, 1-16.
Singh, A, 1971, Takeovers: Their Relevance to the Stock Market and the Theory of the Firm. Cambridge University Press.
Smith, Richard L, and Joo-Hyun Kim, 1994, The Combined Effect of Free Cash Flow and Financial Slack on Bidder and Target Stock Returns, Journal of Business 67, 281-310.
Stevens, David L, 1973, Financial Characteristics of Merged Firms: A Multivariate Analysis, Journal of Financial and Quantitative Analysis 8, 149-158.
Walter, Richard M, 1994, The Usefulness of Current Cost Information for Identifying Takeover Targets and Earning Above-Average Stock Returns, Journal of Accounting, Auditing, and Finance 9, 349-377.
Zanakis, SH, and C Zopounidis, 1997, Prediction of Greek Company Takeovers via Multivariate Analysis of Financial Ratios, Journal of the Operational Research Society 48, 678-687.

I went to Kediri.


Current transformer (from Wikipedia, the free encyclopedia)

A current transformer (CT) is used for measurement of alternating electric currents. Current transformers, together with voltage transformers (VTs), also known as potential transformers (PTs), are known as instrument transformers. When the current in a circuit is too high to apply directly to measuring instruments, a current transformer produces a reduced current accurately proportional to the current in the circuit, which can be conveniently connected to measuring and recording instruments. A current transformer also isolates the measuring instruments from what may be very high voltage in the monitored circuit. Current transformers are commonly used in metering and protective relays in the electrical power industry.

Design

Like any other transformer, a current transformer has a primary winding, a magnetic core and a secondary winding. The alternating current flowing in the primary produces an alternating magnetic field in the core, which then induces an alternating current in the secondary winding circuit. An essential objective of current transformer design is to ensure that the primary and secondary circuits are efficiently coupled, so that the secondary current bears an accurate relationship to the primary current. The most common design of CT consists of a length of wire wrapped many times around a silicon steel ring passed around the circuit being measured. The CT's primary circuit therefore consists of a single "turn" of conductor, with a secondary of many tens or hundreds of turns. The primary winding may be a permanent part of the current transformer, with a heavy copper bar to carry current through the magnetic core. Window-type current transformers (also known as zero-sequence current transformers, or ZSCTs) are also common; circuit cables can be run through the middle of an opening in the core to provide a single-turn primary winding. When conductors passing through a CT are not centered in the circular (or oval) opening, slight inaccuracies may occur. Shapes and sizes vary depending on the end user or switchgear manufacturer. Typical low-voltage single-ratio metering current transformers are either ring type or plastic molded case. High-voltage current transformers are mounted on porcelain bushings to insulate them from ground. Some CT configurations slip around the bushing of a high-voltage transformer or circuit breaker, which automatically centers the conductor inside the CT window. The primary circuit is largely unaffected by the insertion of the CT. The rated secondary current is commonly standardized at 1 or 5 amperes. For example, a 4000:5 CT would provide an output current of 5 amperes when the primary was passing 4000 amperes. The secondary winding can be single ratio or multi-ratio, with five taps being common for multi-ratio CTs.
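The ratio arithmetic described above is simple enough to sketch in a few lines. This is an illustration only: the tap designations and ratios in the multi-ratio example are hypothetical, not taken from any particular CT.

```python
def secondary_current(primary_amps: float, ratio: str) -> float:
    """Ideal secondary current for a CT described by a ratio string such as '4000:5'."""
    primary_rating, secondary_rating = (float(x) for x in ratio.split(":"))
    return primary_amps * secondary_rating / primary_rating

# The 4000:5 example from the text: 4000 A in the primary gives 5 A out of the secondary.
print(secondary_current(4000, "4000:5"))   # 5.0

# A hypothetical multi-ratio CT: the selected tap changes the effective ratio.
taps = {"X1-X2": "600:5", "X1-X3": "1200:5", "X1-X5": "2000:5"}
for tap, ratio in taps.items():
    print(tap, round(secondary_current(1000, ratio), 2))
```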
The load, or burden, of the CT should be of low resistance. If the voltage-time integral area is higher than the core's design rating, the core goes into saturation towards the end of each cycle, distorting the waveform and affecting accuracy.

Usage

Current transformers are used extensively for measuring current and monitoring the operation of the power grid; many digital clamp meters also use a current transformer for measuring AC current. Along with voltage leads, revenue-grade CTs drive the electrical utility's watt-hour meter on virtually every building with three-phase service and on single-phase services greater than 200 amps. The CT is typically described by its current ratio from primary to secondary. Often, multiple CTs are installed as a "stack" for various uses. For example, protection devices and revenue metering may use separate CTs to provide isolation between metering and protection circuits, and to allow current transformers with different characteristics (accuracy, overload performance) to be used for each purpose.

Safety precautions

Care must be taken that the secondary of a current transformer is not disconnected from its load while current is flowing in the primary, as the transformer secondary will attempt to continue driving current across the effectively infinite impedance, up to its core saturation voltage. This may produce a high voltage across the open secondary, into the range of several kilovolts, causing arcing, compromising operator and equipment safety, or permanently affecting the accuracy of the transformer.

Accuracy

The accuracy of a CT is directly related to a number of factors, including: burden; burden class/saturation class; rating factor; load; external electromagnetic fields; temperature; physical configuration; and, for multi-ratio CTs, the selected tap. For the IEC standard, accuracy classes for various types of measurement are set out in IEC 60044-1: Classes 0.1, 0.2s, 0.2, 0.5, 0.5s, 1 and 3. The class designation is an approximate measure of the CT's accuracy. The ratio (primary to secondary current) error of a Class 1 CT is 1% at rated current; the ratio error of a Class 0.5 CT is 0.5% or less. Errors in phase are also important, especially in power measuring circuits, and each class has an allowable maximum phase error for a specified load impedance. Current transformers used for protective relaying also have accuracy requirements at overload currents in excess of the normal rating, to ensure accurate performance of relays during system faults. A CT with a rating of 2.5L400 specifies that, with an output from its secondary winding of 20 times its rated secondary current (usually 5 A x 20 = 100 A) and 400 V (IZ drop), its output accuracy will be within 2.5 percent.

Burden

The secondary load of a current transformer is usually called the "burden" to distinguish it from the load of the circuit whose current is being measured. The current transformer may be mounted on one of the power transformer leads; it can be associated with an LV or HV lead, depending on voltage and current considerations. A section of the lead is demountable locally to enable the current transformer to be removed, should the necessity arise, without disturbing the main connection. The secondary of the CT is connected to the heating coil located directly under the main cover, in the oil. On larger units the various connections may be brought up to terminals in the main cover for external linkage.
The burden, in a CT metering circuit, is the (largely resistive) impedance presented to its secondary winding. Typical burden ratings for IEC CTs are 1.5 VA, 3 VA, 5 VA, 10 VA, 15 VA, 20 VA, 30 VA, 45 VA and 60 VA. ANSI/IEEE burden ratings are B-0.1, B-0.2, B-0.5, B-1.0, B-2.0 and B-4.0. This means a CT with a burden rating of B-0.2 can tolerate up to 0.2 Ω of impedance in the metering circuit before its secondary accuracy falls outside the accuracy specification. These specification diagrams show accuracy parallelograms on a grid incorporating magnitude and phase-angle error scales at the CT's rated burden. Items that contribute to the burden of a current measurement circuit are switch blocks, meters and intermediate conductors. The most common source of excess burden is the conductor between the meter and the CT. When substation meters are located far from the meter cabinets, the excessive length of wire creates a large resistance. This problem can be reduced by using CTs with 1 ampere secondaries, which produce less voltage drop between a CT and its metering devices.

Knee-point core-saturation voltage

The knee-point voltage of a current transformer is the magnitude of the secondary voltage above which the output current ceases to linearly follow the input current within declared accuracy. In testing, if a voltage is applied across the secondary terminals, the magnetizing current will increase in proportion to the applied voltage, until the knee point is reached. The knee point is defined as the voltage at which a 10% increase in applied voltage increases the magnetizing current by 50%. For voltages greater than the knee point, the magnetizing current increases considerably even for small increments in voltage across the secondary terminals. The knee-point voltage is less applicable for metering current transformers, as their accuracy is generally much tighter but constrained within a very small bandwidth of the current transformer rating, typically 1.2 to 1.5 times rated current. However, the concept of knee-point voltage is very pertinent to protection current transformers, since they are necessarily exposed to currents of 20 or 30 times rated current during faults.[1]

Rating factor

Rating factor is a factor by which the nominal full-load current of a CT can be multiplied to determine its absolute maximum measurable primary current. Conversely, the minimum primary current a CT can accurately measure is "light load," or 10% of the nominal current (there are, however, special CTs designed to measure accurately currents as small as 2% of the nominal current). The rating factor of a CT is largely dependent upon ambient temperature. Most CTs have rating factors for 35 degrees Celsius and 55 degrees Celsius. It is important to be mindful of ambient temperatures and the resulting rating factors when CTs are installed inside padmount transformers or poorly ventilated mechanical rooms. Recently, manufacturers have been moving towards lower nominal primary currents with greater rating factors. This is made possible by the development of more efficient ferrites and their corresponding hysteresis curves.

Special designs

Specially constructed wideband current transformers are also used (usually with an oscilloscope) to measure waveforms of high-frequency or pulsed currents within pulsed power systems. One type of specially constructed wideband transformer provides a voltage output that is proportional to the measured current.
Another type (called a Rogowski coil) requires an external integrator in order to provide a voltage output that is proportional to the measured current. Unlike CTs used for power circuitry, wideband CTs are rated in output volts per ampere of primary current.

Standards

Depending on the ultimate client's requirements, there are two main standards to which current transformers are designed: IEC 60044-1 (BS EN 60044-1) and IEEE C57.13 (ANSI), although the Canadian and Australian standards are also recognised.

High voltage types

Current transformers are used for protection, measurement and control in high-voltage electrical substations and the electrical grid. Current transformers may be installed inside switchgear or in apparatus bushings, but very often free-standing outdoor current transformers are used. In a switchyard, live-tank current transformers have a substantial part of their enclosure energized at the line voltage and must be mounted on insulators. Dead-tank current transformers isolate the measured circuit from the enclosure. Live-tank CTs are useful because the primary conductor is short, which gives better stability and a higher short-circuit current withstand rating. The primary winding can be evenly distributed around the magnetic core, which gives better performance for overloads and transients. Since the major insulation of a live-tank current transformer is not exposed to the heat of the primary conductors, insulation life and thermal stability are improved. A high-voltage current transformer may contain several cores, each with a secondary winding, for different purposes (such as metering circuits, control, or protection).[2] A neutral current transformer is used as earth-fault protection to measure any fault current flowing through the neutral line from the wye neutral point of a transformer.[3]
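The burden arithmetic discussed earlier can also be made concrete. The sketch below is an illustration only: the meter burden, lead length and copper resistance per metre are assumed values, and the B-0.2 comparison simply checks the total circuit impedance against a nameplate burden rating as described in the Burden section above.

```python
def lead_resistance(one_way_length_m: float, ohms_per_m: float = 0.00328) -> float:
    """Round-trip resistance of the metering leads (0.00328 ohm/m is roughly 10 AWG copper)."""
    return 2 * one_way_length_m * ohms_per_m

def total_burden_ohms(meter_ohms: float, one_way_length_m: float) -> float:
    return meter_ohms + lead_resistance(one_way_length_m)

# Hypothetical metering circuit: 0.05 ohm meter burden, 80 m one-way lead run.
burden = total_burden_ohms(0.05, 80)
print(f"total burden = {burden:.3f} ohm, within a B-0.2 rating? {burden <= 0.2}")

# Voltage the secondary must drive at full rated current; a 1 A secondary
# produces far less voltage drop than a 5 A secondary, as noted above.
for i_secondary in (5.0, 1.0):
    print(f"{i_secondary:.0f} A secondary -> {i_secondary * burden:.2f} V across the burden")
```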


Sound System Design Reference Manual

Table of Contents

Preface
Chapter 1: Wave Propagation
  Wavelength, Frequency, and Speed of Sound
  Combining Sine Waves
  Combining Delayed Sine Waves
  Diffraction of Sound
  Effects of Temperature Gradients on Sound Propagation
  Effects of Wind Velocity and Gradients on Sound Propagation
  Effect of Humidity on Sound Propagation
Chapter 2: The Decibel
  Introduction
  Power Relationships
  Voltage, Current, and Pressure Relationships
  Sound Pressure and Loudness Contours
  Inverse Square Relationships
  Adding Power Levels in dB
  Reference Levels
  Peak, Average, and RMS Signal Values
Chapter 3: Directivity and Angular Coverage of Loudspeakers
  Introduction
  Some Fundamentals
  A Comparison of Polar Plots, Beamwidth Plots, Directivity Plots, and Isobars
  Directivity of Circular Radiators
  The Importance of Flat Power Response
  Measurement of Directional Characteristics
  Using Directivity Information
  Directional Characteristics of Combined Radiators
Chapter 4: An Outdoor Sound Reinforcement System
  Introduction
  The Concept of Acoustical Gain
  The Influence of Directional Microphones and Loudspeakers on System Maximum Gain
  How Much Gain is Needed?
  Conclusion
Chapter 5: Fundamentals of Room Acoustics
  Introduction
  Absorption and Reflection of Sound
  The Growth and Decay of a Sound Field in a Room
  Reverberation and Reverberation Time
  Direct and Reverberant Sound Fields
  Critical Distance
  The Room Constant
  Statistical Models and the Real World
Chapter 6: Behavior of Sound Systems Indoors
  Introduction
  Acoustical Feedback and Potential System Gain
  Sound Field Calculations for a Small Room
  Calculations for a Medium-Size Room
  Calculations for a Distributed Loudspeaker System
  System Gain vs. Frequency Response
  The Indoor Gain Equation
  Measuring Sound System Gain
  General Requirements for Speech Intelligibility
  The Role of Time Delay in Sound Reinforcement
  System Equalization and Power Response of Loudspeakers
  System Design Overview
Chapter 7: System Architecture and Layout
  Introduction
  Typical Signal Flow Diagram
  Amplifier and Loudspeaker Power Ratings
  Wire Gauges and Line Losses
  Constant Voltage Distribution Systems (70-volt lines)
  Low Frequency Augmentation - Subwoofers
  Case Study A: A Speech and Music System for a Large Evangelical Church
  Case Study B: A Distributed Sound Reinforcement System for a Large Liturgical Church
  Case Study C: Specifications for a Distributed Sound System Comprising a Ballroom, Small Meeting Space, and Social/Bar Area
Bibliography

Preface to the 1999 Edition: This third edition of JBL Professional's Sound System Design Reference Manual is presented in a new graphic format that makes for easier reading and study. Like its predecessors, it presents in virtually their original 1977 form George Augspurger's intuitive and illuminating explanations of sound and sound system behavior in enclosed spaces. The section on systems and case studies has been expanded, and references to JBL components have been updated. The fundamentals of acoustics and sound system design do not change, but system implementation improves in its effectiveness with ongoing developments in signal processing, transducer refinement, and front-end flexibility in signal routing and control. As stated in the Preface to the 1986 edition: The technical competence of professional dealers and sound contractors is much higher today than it was when the Sound Workshop manual was originally introduced. It is JBL's feeling that the serious contractor or professional dealer of today is ready to move away from simply plugging numbers into equations. Instead, the designer is eager to learn what the equations really mean, and is intent on learning how loudspeakers and rooms interact, however complex that may be. It is for the student with such an outlook that this manual is intended.
John Eargle, January 1999

Chapter 1: Wave Propagation

Wavelength, Frequency, and Speed of Sound

Sound waves travel approximately 344 m/sec (1130 ft/sec) in air. There is a relatively small velocity dependence on temperature, and under normal indoor conditions we can ignore it. Audible sound covers the frequency range from about 20 Hz to 20 kHz. The wavelength of sound of a given frequency is the distance between successive repetitions of the waveform as the sound travels through air. It is given by the following equation: wavelength = speed/frequency, or, using the common abbreviations of c for speed, f for frequency, and λ for wavelength: λ = c/f. Period (T) is defined as the time required for one cycle of the waveform: T = 1/f. For f = 1 kHz, T = 1/1000, or 0.001 sec, and λ = 344/1000, or 0.344 m (1.13 ft). The lowest audible sounds have wavelengths on the order of 10 m (30 ft), and the highest sounds have wavelengths as short as 20 mm (0.8 in). The range is quite large, and, as we will see, it has great bearing on the behavior of sound. The waves we have been discussing are of course sine waves, those basic building blocks of all speech and music signals. Figure 1-1 shows some of the basic aspects of sine waves. Note that waves of the same frequency can differ in both amplitude and in phase angle. The amplitude and phase angle relationships between sine waves determine how they combine, either acoustically or electrically. (Figure 1-1: Properties of sine waves.)

Combining Sine Waves

Referring to Figure 1-2, if two or more sine wave signals having the same frequency and amplitude are added, we find that the resulting signal also has the same frequency and that its amplitude depends upon the phase relationship of the original signals. If there is a phase difference of 120°, the resultant has exactly the same amplitude as either of the original signals. If they are combined in phase, the resulting signal has twice the amplitude of either original. For phase differences between 120° and 240°, the resultant signal always has an amplitude less than that of either of the original signals. If the two signals are exactly 180° out of phase, there will be total cancellation. In electrical circuits it is difficult to maintain identical phase relationships between all of the sine components of more complex signals, except for the special cases where the signals are combined with a 0° or 180° phase relationship. Circuits which maintain some specific phase relationship (45°, for example) over a wide range of frequencies are fairly complex. Such wide-range, all-pass phase-shifting networks are used in acoustical signal processing. When dealing with complex signals such as music or speech, one must understand the concept of coherence. Suppose we feed an electrical signal through a high quality amplifier. Apart from very small amounts of distortion, the output signal is an exact replica of the input signal, except for its amplitude. The two signals, although not identical, are said to be highly coherent. If the signal is passed through a poor amplifier, we can expect substantial differences between input and output, and coherence will not be as great. If we compare totally different signals, any similarities occur purely at random, and the two are said to be non-coherent.
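As a quick numerical check of the phase relationships just described, the following sketch (not from the manual) adds two equal-amplitude sine waves of the same frequency as phasors and reports the resultant amplitude.

```python
import cmath

def resultant_amplitude(amplitude: float, phase_deg: float) -> float:
    """Amplitude of the sum of two equal-amplitude sine waves of the same
    frequency whose phases differ by phase_deg (phasor addition)."""
    a = amplitude                                          # reference phasor at 0 degrees
    b = amplitude * cmath.exp(1j * cmath.pi * phase_deg / 180.0)
    return abs(a + b)

for phase in (0, 120, 180):
    print(phase, round(resultant_amplitude(1.0, phase), 3))
# 0 -> 2.0 (in phase: double), 120 -> 1.0 (equal to either original), 180 -> 0.0 (cancellation)
```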
When two non-coherent signals are added, the rms (root mean square) value of the resulting signal can be calculated by adding the relative powers of the two signals rather than their voltages. For example, if we combine the outputs of two separate noise generators, each producing an rms output of 1 volt, the resulting signal measures 1.414 volts rms, as shown in Figure 1-3. (Figure 1-2: Vector addition of two sine waves. Figure 1-3: Combining two random noise generators.)

Combining Delayed Sine Waves

If two coherent wide-range signals are combined with a specified time difference between them rather than a fixed phase relationship, some frequencies will add and others will cancel. Once the delayed signal arrives and combines with the original signal, the result is a form of "comb filter," which alters the frequency response of the signal, as shown in Figure 1-4. Delay can be achieved electrically through the use of all-pass delay networks or digital processing. In dealing with acoustical signals in air, there is simply no way to avoid delay effects, since the speed of sound is relatively slow. (Figure 1-4A: Combining delayed signals. Figure 1-4B: Combining of coherent signals with constant time delay.)

A typical example of combining delayed coherent signals is shown in Figure 1-5. Consider the familiar outdoor PA system in which a single microphone is amplified by a pair of identical separated loudspeakers. Suppose the loudspeakers in question are located at each front corner of the stage, separated by a distance of 6 m (20 ft). At any distance from the stage along the center line, signals from the two loudspeakers arrive simultaneously. But at any other location, the distances of the two loudspeakers are unequal, and sound from one must arrive slightly later than sound from the other. The illustration shows the dramatically different frequency response resulting from a change in listener position of only 2.4 m (8 ft). Using random noise as a test signal, if you walk from Point B to Point A and proceed across the center line, you will hear a pronounced swishing effect, almost like a siren. The change in sound quality is most pronounced near the center line, because in this area the response peaks and dips are spread farther apart in frequency. (Figure 1-5: Generation of interference effects (comb filter response) by a split array. Figure 1-6: Audible effect of the comb filters shown in Figure 1-5.)

Subjectively, the effect of such a comb filter is not particularly noticeable on normal program material as long as several peaks and dips occur within each one-third octave band. See Figure 1-6. Actually, the controlling factor is the "critical bandwidth." In general, amplitude variations that occur within a critical band will not be noticed as such. Rather, the ear will respond to the signal power contained within that band. For practical work in sound system design and architectural acoustics, we can assume that the critical bandwidth of the human ear is very nearly one-third octave wide. In houses of worship, the system should be suspended high overhead and centered. In spaces which do not have considerable height, there is a strong temptation to use two loudspeakers, one on either side of the platform, both fed the same program. We do not recommend this.

Diffraction of Sound

Diffraction refers to the bending of sound waves as they move around obstacles.
When sound strikes a hard, non-porous obstacle, it may be reflected or diffracted, depending on the size of the obstacle relative to the wavelength. If the obstacle is large compared to the wavelength, it acts as an effective barrier, reflecting most of the sound and casting a substantial "shadow" behind the object. On the other hand, if it is small compared with the wavelength, sound simply bends around it as if it were not there. This is shown in Figure 1-7. An interesting example of sound diffraction occurs when hard, perforated material is placed in the path of sound waves. So far as sound is concerned, such material does not consist of a solid barrier interrupted by perforations, but rather of an open area obstructed by a number of small individual objects. At frequencies whose wavelengths are small compared with the spacing between perforations, most of the sound is reflected. At these frequencies, the percentage of sound traveling through the openings is essentially proportional to the ratio between open and closed areas. At lower frequencies (those whose wavelengths are large compared with the spacing between perforations), most of the sound passes through the openings, even though they may account for only 20 or 30 percent of the total area. (Figure 1-7: Diffraction of sound around obstacles.)

Effects of Temperature Gradients on Sound Propagation

If sound is propagated over large distances out of doors, its behavior may seem erratic. Differences (gradients) in temperature above ground level will affect propagation as shown in Figure 1-8. Refraction of sound refers to its changing direction as its velocity increases slightly with elevated temperatures. At Figure 1-8A, we observe a situation which often occurs at nightfall, when the ground is still warm. The case shown at B may occur in the morning, and its "skipping" characteristic may give rise to hot spots and dead spots in the listening area.

Effects of Wind Velocity and Gradients on Sound Propagation

Figure 1-9 shows the effect of wind velocity gradients on sound propagation. The actual velocity of sound in this case is the velocity of sound in still air plus the velocity of the wind itself. Figure 1-10 shows the effect of a cross breeze on the apparent direction of a sound source. The effects shown in these two figures may be evident at large rock concerts, where the distances covered may be in the 200 - 300 m (600 - 900 ft) range. (Figure 1-8: Effects of temperature gradients on sound propagation. Figure 1-9: Effect of wind velocity gradients on sound propagation. Figure 1-10: Effect of cross breeze on apparent direction of sound.)

Effects of Humidity on Sound Propagation

Contrary to what most people believe, there is more sound attenuation in dry air than in damp air. The effect is a complex one, and it is shown in Figure 1-11. Note that the effect is significant only at frequencies above 2 kHz. This means that high frequencies will be attenuated more with distance than low frequencies will be, and that the attenuation will be greatest when the relative humidity is 20 percent or less. (Figure 1-11: Absorption of sound in air vs. relative humidity.)

Chapter 2: The Decibel

Introduction

In all phases of audio technology the decibel is used to express signal levels and level differences in sound pressure, power, voltage, and current.
The reason the decibel is such a useful measure is that it enables us to use a comparatively small range of numbers to express large and often unwieldy quantities. The decibel also makes sense from a psychoacoustical point of view in that it relates directly to the effect of most sensory stimuli.

Power Relationships

Fundamentally, the bel is defined as the common logarithm of a power ratio: bel = log (P1/P0). For convenience, we use the decibel, which is simply one-tenth bel. Thus: Level in decibels (dB) = 10 log (P1/P0). The following tabulation illustrates the usefulness of the concept. Letting P0 = 1 watt:

P1 (watts)    Level in dB
1             0
10            10
100           20
1000          30
10,000        40
20,000        43

Note that a 20,000-to-1 range in power can be expressed in a much more manageable way by referring to the powers as levels in dB above one watt. Psychoacoustically, a ten-times increase in power results in a level which most people judge to be "twice as loud." Thus, a 100-watt acoustical signal would be twice as loud as a 10-watt signal, and a 10-watt signal would be twice as loud as a 1-watt signal. The convenience of using decibels is apparent; each of these power ratios can be expressed by the same level difference, 10 dB. Any 10 dB level difference, regardless of the actual powers involved, will represent a 2-to-1 difference in subjective loudness. We will now expand our power decibel table:

P1 (watts)    Level in dB
1.25          1
1.60          2
2             3
2.5           4
3.15          5
4             6
5             7
6.3           8
8             9
10            10

This table is worth memorizing. Knowing it, you can almost immediately do mental calculations, arriving at power levels in dB above, or below, one watt. Here are some examples:

1. What power level is represented by 80 watts? First, locate 8 watts in the left column and note that the corresponding level is 9 dB. Then, note that 80 is 10 times 8, giving another 10 dB. Thus: 9 + 10 = 19 dB.

2. What power level is represented by 1 milliwatt? 0.1 watt represents a level of minus 10 dB, 0.01 watt represents a level 10 dB lower, and 0.001 watt represents an additional level decrease of 10 dB. Thus: -10 - 10 - 10 = -30 dB.

3. What power level is represented by 4 milliwatts? As we have seen, the power level of 1 milliwatt is -30 dB. Two milliwatts represents a level increase of 3 dB, and from 2 to 4 milliwatts there is an additional 3 dB level increase. Thus: -30 + 3 + 3 = -24 dB.

4. What is the level difference between 40 and 100 watts? Note from the table that the level corresponding to 4 watts is 6 dB, and the level corresponding to 10 watts is 10 dB, a difference of 4 dB. Since the level of 40 watts is 10 dB greater than for 4 watts, and the level of 100 watts is 10 dB greater than for 10 watts, we have: (6 + 10) - (10 + 10) = -4 dB; that is, 40 watts is 4 dB below 100 watts.

We have done this last example the long way, just to show the rigorous approach. However, we could simply have stopped with our first observation, noting that the dB level difference between 4 and 10 watts, 0.4 and 1 watt, or 400 and 1000 watts will always be the same, 4 dB, because they all represent the same power ratio.

The level difference in dB can be converted back to a power ratio by means of the following equation: Power ratio = 10^(dB/10). For example, find the power ratio of a level difference of 13 dB: Power ratio = 10^(13/10) = 10^1.3 = 20.

The reader should acquire a reasonable skill in dealing with power ratios expressed as level differences in dB. A good "feel" for decibels is a qualification for any audio engineer or sound contractor.
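The power-level arithmetic above is easy to verify with a few lines of code. This sketch is not part of the manual; it simply reproduces the worked examples.

```python
import math

def power_to_db(p_watts: float, p_ref: float = 1.0) -> float:
    """Level in dB of a power relative to a reference power (1 W unless stated)."""
    return 10.0 * math.log10(p_watts / p_ref)

def db_to_power_ratio(db: float) -> float:
    """Convert a level difference in dB back to a power ratio."""
    return 10.0 ** (db / 10.0)

print(round(power_to_db(80), 1))                       # ~19 dB above 1 W
print(round(power_to_db(0.001), 1))                    # -30 dB (1 mW)
print(round(power_to_db(0.004), 1))                    # ~-24 dB (4 mW)
print(round(power_to_db(40) - power_to_db(100), 1))    # -4 dB: 40 W is 4 dB below 100 W
print(round(db_to_power_ratio(13)))                    # a 13 dB difference is a power ratio of ~20
```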
An extended nomograph for converting power ratios to level differences in dB is given in Figure 2-1. (Figure 2-1: Nomograph for determining power ratios directly in dB.)

Voltage, Current, and Pressure Relationships

The decibel fundamentally relates to power ratios, and we can use voltage, current, and pressure ratios as they relate to power. Electrical power can be represented as: P = EI, P = I²Z, or P = E²/Z. Because power is proportional to the square of the voltage, the effect of doubling the voltage is to quadruple the power: (2E)²/Z = 4(E²/Z). As an example, let E = 1 volt and Z = 1 ohm. Then P = E²/Z = 1 watt. Now, let E = 2 volts; then P = (2)²/1 = 4 watts. The same holds true for current, and the following equations must be used to express power levels in dB using voltage and current ratios:

dB level = 10 log (E1/E0)² = 20 log (E1/E0), and
dB level = 10 log (I1/I0)² = 20 log (I1/I0).

Sound pressure is analogous to voltage, and levels are given by the equation: dB level = 20 log (P1/P0).

The normal reference level for voltage, E0, is one volt. For sound pressure, the reference is the extremely low value of 20 x 10^-6 newtons/m². This reference pressure corresponds roughly to the minimum audible sound pressure for persons with normal hearing. More commonly, we state pressure in pascals (Pa), where 1 Pa = 1 newton/m². As a convenient point of reference, note that an rms pressure of 1 pascal corresponds to a sound pressure level of 94 dB. We now present a table useful for determining levels in dB for ratios given in voltage, current, or sound pressure:

Voltage, Current or Pressure Ratio    Level in dB
1         0
1.25      2
1.60      4
2         6
2.5       8
3.15      10
4         12
5         14
6.3       16
8         18
10        20

This table may be used exactly the same way as the previous one. Remember, however, that the reference impedance, whether electrical or acoustical, must remain fixed when using these ratios to determine level differences in dB. A few examples are given:

1. Find the level difference in dB between 2 volts and 10 volts. Directly from the table we observe 20 - 6 = 14 dB.

2. Find the level difference between 1 volt and 100 volts. A 10-to-1 ratio corresponds to a level difference of 20 dB. Since 1-to-100 represents the product of two such ratios (1-to-10 and 10-to-100), the answer is 20 + 20 = 40 dB.

3. The signal input to an amplifier is 1 volt, and the input impedance is 600 ohms. The output is also 1 volt, and the load impedance is 15 ohms. What is the gain of the amplifier in dB? Watch this one carefully! If we simply compare input and output voltages, we get 0 dB as our answer. The voltage gain is in fact unity, or one. Recalling that decibels refer primarily to power ratios, we must take the differing input and output impedances into account and actually compute the input and output powers:

Input power = E²/Z = 1/600 watt
Output power = E²/Z = 1/15 watt
Thus, gain = 10 log (600/15) = 10 log 40 = 16 dB.

Fortunately, such calculations as the above are not often made. In audio transmission, we keep track of operating levels primarily through voltage level calculations in which the voltage reference value of 0.775 volts has an assigned level of 0 dBu. The value of 0.775 volts is that which is applied to a 600-ohm load to produce a power of 1 milliwatt (mW). A power level of 0 dBm corresponds to 1 mW.
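The voltage and impedance examples above can be checked the same way. Again, this is an illustrative sketch rather than anything from the manual.

```python
import math

def voltage_db(v1: float, v0: float) -> float:
    """Level difference for a voltage (or pressure) ratio; the same impedance is assumed."""
    return 20.0 * math.log10(v1 / v0)

def power_gain_db(v_in: float, z_in: float, v_out: float, z_out: float) -> float:
    """True power gain in dB when input and output impedances differ."""
    p_in, p_out = v_in ** 2 / z_in, v_out ** 2 / z_out
    return 10.0 * math.log10(p_out / p_in)

print(round(voltage_db(10, 2)))             # 14 dB between 2 V and 10 V
print(round(voltage_db(100, 1)))            # 40 dB between 1 V and 100 V
print(round(power_gain_db(1, 600, 1, 15)))  # 16 dB: unity voltage gain, but real power gain
print(round(voltage_db(1.0, 0.775), 1))     # ~2.2 dBu for a 1 V signal
```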
As noted above, level values in dBu and dBm will have the same numerical value only when the load impedance under consideration is 600 ohms. The level difference in dB can be converted back to a voltage, current, or pressure ratio by means of the following equation: Ratio = 10^(dB/20). For example, find the voltage ratio corresponding to a level difference of 66 dB: voltage ratio = 10^(66/20) = 10^3.3 = 2000.

Sound Pressure and Loudness Contours

We will see the term dB-SPL time and again in professional sound work. It refers to sound pressure levels in dB above the reference of 20 x 10^-6 N/m². We commonly use a sound level meter (SLM) to measure SPL. Loudness and sound pressure obviously bear a relation to each other, but they are not the same thing. Loudness is a subjective sensation which differs from the measured level in certain important respects. To specify loudness in scientific terms, a different unit is used, the phon. Phons and decibels share the same numerical value only at 1000 Hz. At other frequencies, the phon scale deviates more or less from the sound level scale, depending on the particular frequency and the sound pressures. Figure 2-2 shows the relationship between phons and decibels, and illustrates the well-known Robinson-Dadson equal loudness contours. These show that, in general, the ear becomes less sensitive to sounds at low frequencies as the level is reduced. When measuring sound pressure levels, weighted response may be employed to more closely approximate the response of the ear. Working with sound systems, the most useful scales on the sound level meter will be the A-weighting scale and the linear scale, shown in Figure 2-3. Inexpensive sound level meters, which cannot provide linear response over the full range of human hearing, often have no linear scale but offer a C-weighting scale instead. As can be seen from the illustration, the C-scale rolls off somewhat at the frequency extremes. Precision sound level meters normally offer A, B, and C scales in addition to linear response. Measurements made with a sound level meter are normally identified by noting the weighting factor, such as dB(A) or dB(lin). Typical levels of familiar sounds, as shown in Figure 2-4, help us to estimate dB(A) ratings when a sound level meter is not available. For example, normal conversational level in quiet surroundings is about 60 dB(A). Most people find levels higher than 100 dB(A) uncomfortable, depending on the length of exposure. Levels much above 120 dB(A) are definitely dangerous to hearing and are perceived as painful by all except dedicated rock music fans. (Figure 2-2: Free-field equal loudness contours. Figure 2-3: Frequency responses for SLM weighting characteristics. Figure 2-4: Typical A-weighted sound levels.)

Inverse Square Relationships

When we move away from a point source of sound out of doors, or in a free field, we observe that SPL falls off almost exactly 6 dB for each doubling of distance away from the source. The reason for this is shown in Figure 2-5. At A there is a sphere of radius one meter surrounding a point source of sound, with P1 representing the SPL at the surface of the sphere. At B, we observe a sphere of twice the radius, 2 meters.
The area of the larger sphere is four times that of the smaller one, and this means that the acoustical power passing through a small area on the larger sphere will be one-fourth that passing through the same small area on the smaller sphere. The 4-to-1 power ratio represents a level difference of 6 dB, and the corresponding sound pressure ratio will be 2-to-1. A convenient nomograph for determining inverse square losses is given in Figure 2-6. Inverse square calculations depend on a theoretical point source in a free field. In the real world, we can closely approach an ideal free field, but we still must take into account the factors of finite source size and non-uniform radiation patterns. (Figure 2-5: Inverse square relationships. Figure 2-6: Nomograph for determining inverse square losses.)

Consider a horn-type loudspeaker having a rated sensitivity of 100 dB, 1 watt at 1 meter. One meter from where? Do we measure from the mouth of the horn, the throat of the horn, the driver diaphragm, or some indeterminate point in between? Even if the measurement position is specified, the information may be useless. Sound from a finite source does not behave according to inverse square law at distances close to that source. Measurements made in the "near field" cannot be used to estimate performance at greater distances. This being so, one may well wonder why loudspeakers are rated at a distance of only 1 meter. The method of rating and the accepted methods of measuring the devices are two different things. The manufacturer is expected to make a number of measurements at various distances under free field conditions. From these he can establish that the measuring microphone is far enough away from the device to be in its far field, and he can also calculate the imaginary point from which sound waves diverge, according to inverse square law. This point is called the acoustic center of the device. After accurate field measurements have been made, the results are converted to an equivalent one meter rating. The rated sensitivity at one meter is that SPL which would be measured if the inverse square relationship were actually maintained that close to the device.

Let us work a few exercises using the nomograph of Figure 2-6:

1. A JBL model 2360 horn with a 2446 HF driver produces an output of 113 dB, 1 watt at 1 meter. What SPL will be produced by 1 watt at 30 meters? We can solve this by inspection of the nomograph. Simply read the difference in dB between 1 meter and 30 meters: 29.5 dB. Subtracting this from 113 dB: 113 - 29.5 = 83.5 dB.

2. The nominal power rating of the JBL model 2446 driver is 100 watts. What maximum SPL will be produced at a distance of 120 meters in a free field when this driver is mounted on a JBL model 2366 horn? There are three simple steps in solving this problem. First, determine the inverse square loss from Figure 2-6; it is approximately 42 dB. Next, determine the level difference between one watt and 100 watts; from Figure 2-1 we observe this to be 20 dB. Finally, note that the horn-driver sensitivity is 118 dB, 1 watt at 1 meter. Adding these values: 118 - 42 + 20 = 96 dB-SPL.

Calculations such as these are very commonplace in sound reinforcement work, and qualified sound contractors should be able to make them easily.

Adding Power Levels in dB

Quite often, a sound contractor will have to add power levels expressed in dB. Let us assume that two sound fields, each 94 dB-SPL, are combined. What is the resulting level?
If we simply add the levels numerically, we get 188 dB-SPL, clearly an absurd answer! What we must do, in effect, is convert the levels back to their actual powers, add them, and then recalculate the level in dB. Where two levels are involved, we can accomplish this easily with the data of Figure 2-7. Let D be the difference in dB between the two levels, and determine the value N corresponding to this difference. Now, add N to the higher of the two original values. As an exercise, let us add two sound fields, 90 dB-SPL and 84 dB-SPL. Using Figure 2-7, a D of 6 dB corresponds to an N of about 1 dB. Therefore, the new level will be 91 dB-SPL. Note that when two levels differ by more than about 10 dB, the resulting summation will be substantially the same as the higher of the two values; the effect of the lower level will be negligible. (Figure 2-7: Nomograph for adding levels expressed in dB. D is the difference in dB between the outputs of two sound sources; N is added to the higher output to derive the total level.)

Reference Levels

Although we have discussed some of the common reference levels already, we will list here all of those that a sound contractor is likely to encounter. In acoustical measurements, SPL is always measured relative to 20 x 10^-6 Pa. An equivalent expression of this is 0.0002 dynes/cm². In broadcast transmission work, power is often expressed relative to 1 milliwatt (0.001 watt), and such levels are expressed in dBm. The designation dBW refers to levels relative to one watt; thus, 0 dBW = 30 dBm. In signal transmission diagrams, the designation dBu indicates voltage levels referred to 0.775 volts. In other voltage measurements, dBV refers to levels relative to 1 volt. Rarely encountered by the sound contractor will be acoustical power levels. These are designated dB-PWL, and the reference power is 10^-12 watts. This is a very small power indeed. It is used in acoustical measurements because such small amounts of power are normally encountered in acoustics.

Peak, Average, and rms Signal Values

Most measurements of voltage, current, or sound pressure in acoustical engineering work are given as rms (root mean square) values of the waveforms. The rms value of a repetitive waveform equals its equivalent DC value in power transmission. Referring to Figure 2-8A, for a sine wave with a peak value of one volt, the rms value is 0.707 volt, a 3 dB difference. The average value of the waveform is 0.637 volt. For more complex waveforms, such as are found in speech and music, the peak values will be considerably higher than the average or rms values. The waveform shown at Figure 2-8B is that of a trumpet at about 400 Hz, and the spread between peak and average values is 13 dB. In this chapter, we have in effect been using rms values of voltage, current, and pressure for all calculations. However, in all audio engineering applications, the time-varying nature of music and speech demands that we consider as well the instantaneous values of waveforms likely to be encountered. The term headroom refers to the extra margin in dB designed into a signal transmission system over its normal operating level. The importance of headroom will become more evident as our course develops. (Figure 2-8: Peak, average, and rms values. Sine wave (A); complex waveform (B).)
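Before moving on to directivity, here is a short sketch (not from the manual) consolidating the level arithmetic of this chapter: power-based addition of dB levels and inverse-square loss from a 1 watt, 1 meter sensitivity rating. It reproduces the worked examples above to within rounding.

```python
import math

def add_levels_db(*levels_db: float) -> float:
    """Combine levels by summing their underlying powers, then converting back to dB."""
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

def inverse_square_spl(sensitivity_db_1w_1m: float, watts: float, distance_m: float) -> float:
    """Free-field SPL from a 1 W / 1 m sensitivity rating (ideal point-source assumption)."""
    return sensitivity_db_1w_1m + 10 * math.log10(watts) - 20 * math.log10(distance_m)

print(round(add_levels_db(94, 94), 1))              # two 94 dB-SPL fields combine to ~97 dB-SPL
print(round(add_levels_db(90, 84), 1))              # ~91 dB-SPL
print(round(inverse_square_spl(113, 1, 30), 1))     # ~83.5 dB-SPL at 30 m
print(round(inverse_square_spl(118, 100, 120), 1))  # ~96 dB-SPL at 120 m
```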
The data of Figure 3-1 was generalized by Molloy (7) and is shown in Figure 3-3. Here, note that DI and Q are related to the solid angular coverage of a hypothetical sound radiator whose horizontal and vertical coverage angles are specified. Such ideal sound radiators do not exist, but it is surprising how closely these equations agree with the measured DI and Q of HF horns that exhibit fairly steep cut-off outside their normal coverage angles.

As an example of this, a JBL model 2360 Bi-Radial horn has a nominal 90° by 40° pattern, measured between the 6 dB down points in each plane. If we insert the values of 90° and 40° into Molloy's equation, we get DI = 11 and Q = 12.8. The published values were calculated by integrating response over 360° in both horizontal and vertical planes, and they are DI = 10.8 and Q = 12.3. So the estimates are in excellent agreement with the measurements. For the JBL model 2366 horn, with its nominal 6 dB down coverage angles of 40° and 20°, Molloy's equation gives DI = 17.2 and Q = 53. The published values are DI = 16.5 and Q = 46. Again, the agreement is excellent.

Is there always such good correlation between the 6 dB down horizontal and vertical beamwidth of a horn and its calculated directivity? The answer is no. Only when the response cut-off is sharp beyond the 6 dB beamwidth limits, and when there is minimal radiation outside the rated beamwidth, will the correlation be good. For many types of radiators, especially those operating at wavelengths large compared with their physical dimensions, Molloy's equation will not hold.
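The text refers to Molloy's equation without restating it here. The arcsin-product form commonly attributed to Molloy, Q = 180° / arcsin[ sin(h/2) · sin(v/2) ], does reproduce both of the examples above, so the following sketch uses it; treat it as our reconstruction rather than as the manual's own statement of the formula.

```python
import math

def q_from_coverage(h_deg: float, v_deg: float) -> float:
    """Axial Q of an idealized radiator with -6 dB coverage angles h_deg x v_deg (arcsin-product form)."""
    x = math.sin(math.radians(h_deg) / 2) * math.sin(math.radians(v_deg) / 2)
    return 180.0 / math.degrees(math.asin(x))

def di_from_q(q: float) -> float:
    """Directivity index in dB from directivity factor Q."""
    return 10 * math.log10(q)

for h, v in [(90, 40), (40, 20)]:
    q = q_from_coverage(h, v)
    print(h, v, round(q, 1), round(di_from_q(q), 1))
# 90 x 40 -> Q ~ 12.9, DI ~ 11.1  (published: Q = 12.3, DI = 10.8)
# 40 x 20 -> Q ~ 52.9, DI ~ 17.2  (published: Q = 46,   DI = 16.5)
```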
A Comparison of Polar Plots, Beamwidth Plots, Directivity Plots, and Isobars

There is no one method of presenting directional data on radiators which is complete in all regards. Polar plots (Figure 3-4A) are normally presented in only the horizontal and vertical planes. A single polar plot covers only a single frequency or frequency band, and a complete set of polar plots takes up considerable space. Polars are, however, the only method of presentation giving a clear picture of a radiator's response outside its normal operating beamwidth. Beamwidth plots of the 6 dB down coverage angles (Figure 3-4B) are very common because considerable information is contained in a single plot. By itself, a plot of DI or Q conveys information only about the on-axis performance of a radiator (Figure 3-4C). Taken together, horizontal and vertical beamwidth plots and DI or Q plots convey sufficient information for most sound reinforcement design requirements.

Isobars have become popular in recent years. They give the angular contours, in spherical coordinates about the principal axis, along which the response is -3, -6, and -9 dB relative to the on-axis maximum. It is relatively easy to interpolate visually between adjacent isobars to arrive at a reasonable estimate of relative response over the useful frontal solid radiation angle of the horn. Isobars are useful in advanced computer layout techniques for determining sound coverage over entire seating areas. The normal method of isobar presentation is shown in Figure 3-4D.

Figure 3-4. Methods of presenting directional information

Still another way to show the directional characteristics of radiators is by means of a family of off-axis frequency response curves, as shown in Figure 3-5. At A, note that the off-axis response curves of the JBL model 2360 Bi-Radial horn run almost parallel to the on-axis response curve. What this means is that a listener seated off the main axis will perceive smooth response when a Bi-Radial constant coverage horn is used. Contrast this with the off-axis response curves of the older (and obsolete) JBL model 2350 radial horn shown at B. If this device is equalized for flat on-axis response, then listeners off-axis will perceive rolled-off HF response.

Figure 3-5. Families of off-axis frequency response curves

Directivity of Circular Radiators

Any radiator has little directional control at frequencies whose wavelengths are large compared with the radiating area. Even when the radiating area is large compared to the wavelength, constant pattern control will not result unless the device has been specifically designed to maintain a constant pattern. Nothing demonstrates this better than a simple radiating piston. Figure 3-6 shows the sharpening of on-axis response of a piston mounted in a flat baffle as the wavelength varies over a 24-to-1 range. If the piston were, say, a 300 mm (12") loudspeaker, the wavelengths illustrated in the figure would correspond to frequencies spanning the range from about 350 Hz to 8 kHz. Among other things, this illustration points out why "full range," single-cone loudspeakers are of little use in sound reinforcement engineering. While the on-axis response can be maintained through equalization, off-axis response falls off drastically above the frequency whose wavelength is about equal to the diameter of the piston. Note that when the diameter equals the wavelength, the radiation pattern is approximately a 90° cone with -6 dB response at ±45°.

Figure 3-6. Directional characteristics of a circular-piston source mounted in an infinite baffle, as a function of diameter and λ

The values of DI and Q given in Figure 3-6 are the on-axis values, that is, along the axis of maximum loudspeaker sensitivity. This is almost always the case for published values of DI and Q. However, values of DI and Q exist along any axis of the radiator, and they can be determined by inspection of the polar plot. For example, in Figure 3-6, examine the polar plot corresponding to diameter = λ. Here, the on-axis DI is 10 dB. If we simply move off-axis to a point where the response has dropped 10 dB, then the DI along that direction will be 10 - 10, or 0 dB, and the Q will be unity. The off-axis angle where the response is 10 dB down is marked on the plot and is at about 55°. Normally, we will not be concerned with values of DI and Q along axes other than the principal one; however, there are certain calculations involving the interaction of microphones and loudspeakers where a knowledge of off-axis directivity is essential.

Omnidirectional microphones with circular diaphragms respond to on- and off-axis signals in a manner similar to the data shown in Figure 3-6. Let us assume that a given microphone has a diaphragm about 25 mm (1") in diameter. The frequency at which the diaphragm diameter equals λ/4 is about 3500 Hz, and the response will be quite smooth both on and off axis. However, by the time we reach 13 or 14 kHz, the diameter of the diaphragm is about equal to λ, and the DI of the microphone is about 10 dB. That is, it will be 10 dB more sensitive to sounds arriving on axis than to sounds which are randomly incident to the microphone.

Of course, a piston is a very simple radiator (or receiver). Horns such as JBL's Bi-Radial series are complex by comparison, and they have been designed to maintain constant HF coverage through attention to wave-guide principles in their design. One thing is certain: no radiator can exhibit much pattern control at frequencies whose wavelengths are much larger than the circumference of the radiating surface.
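The 3500 Hz and 13-14 kHz figures quoted for the 25 mm diaphragm follow directly from f = c/λ. A quick check, assuming a speed of sound of about 344 m/s (a value not stated in this excerpt):

```python
C = 344.0  # speed of sound in air, m/s (assumed)

def freq_for_wavelength(wavelength_m: float) -> float:
    """Frequency whose wavelength in air equals wavelength_m."""
    return C / wavelength_m

d = 0.025  # 25 mm (1") microphone diaphragm
print(round(freq_for_wavelength(4 * d)))  # diameter = lambda/4 -> ~3440 Hz: response still smooth on and off axis
print(round(freq_for_wavelength(d)))      # diameter = lambda   -> ~13.8 kHz: DI has risen to about 10 dB
```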
The Importance of Flat Power Response

If a radiator exhibits flat power response, then the power it radiates, integrated over all directions, will be constant with frequency. Typical compression drivers inherently have a rolled-off response when measured on a plane wave tube (PWT), as shown in Figure 3-7A. When such a driver is mounted on a typical radial horn such as the JBL model 2350, the on-axis response of the combination will be the sum of the PWT response and the DI of the horn. Observe at B that the combination is fairly flat on axis and does not need additional equalization. Off-axis response falls off, both vertically and horizontally, and the total power response of the combination will be the same as observed on the PWT; that is, it rolls off above about 3 kHz.

Now let us mount the same driver on a Bi-Radial uniform coverage horn, as shown at C. Note that both on- and off-axis response curves are rolled off but run parallel with each other. Since the DI of the horn is essentially flat, the on-axis response will be virtually the same as the PWT response. At D, we have inserted an HF boost to compensate for the driver's rolled-off power response, and the result is now flat response both on and off axis. Listeners anywhere in the area covered by the horn will appreciate the smooth and extended response of the system. Flat power response makes sense only with components exhibiting constant angular coverage. If we had equalized the 2350 horn for flat power response, then the on-axis response would have been too bright and edgy sounding.

Figure 3-7. Power response of HF systems

The rising DI of most typical radial horns is accomplished through a narrowing of the vertical pattern with rising frequency, while the horizontal pattern remains fairly constant, as shown in Figure 3-8A. Such a horn can give excellent horizontal coverage, and since it is "self equalizing" through its rising DI, there may be no need at all for external equalization. The smooth-running horizontal and vertical coverage angles of a Bi-Radial, as shown at Figure 3-8B, will always require power response HF boosting.

Figure 3-8. Increasing DI through narrowing vertical beamwidth

Measurement of Directional Characteristics

Polar plots and isobar plots require that the radiator under test be rotated about several of its axes and the response recorded. Beamwidth plots may be taken directly from this data. DI and Q can be calculated from polar data by integration using the following equation:

DI = 10 log [ 2 / ∫₀^π P(θ)² sin θ dθ ]

Here P(θ) is the pressure response at angle θ relative to the on-axis value, P₀ is taken as unity, and θ is taken in 10° increments. The integral is solved for a value of DI in the horizontal plane and a value in the vertical plane. The resulting DI and Q for the radiator are given as:

DI = (DIh + DIv) / 2    and    Q = √(Qh · Qv)

(Note: there are slight variations of this method, and of course all commonly used methods are only approximations in that they make use of limited polar data.)
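A minimal sketch of that integration follows. It assumes the polar data are stored as relative levels in dB at 10° increments from 0° to 180°; that data layout, and the function names, are our own choices, and the combining rules are the reconstructed ones given above.

```python
import math

def plane_q(levels_db):
    """Q in one plane from relative levels (dB re on-axis) at theta = 0, 10, ..., 180 degrees.
    Approximates Q = 2 / integral from 0 to pi of P(theta)^2 sin(theta) dtheta, with P0 = 1."""
    d_theta = math.radians(10)
    integral = 0.0
    for i, level in enumerate(levels_db):
        theta = math.radians(10 * i)
        p = 10 ** (level / 20)                 # pressure ratio relative to on-axis
        integral += (p ** 2) * math.sin(theta) * d_theta
    return 2.0 / integral

def combine_planes(q_h, q_v):
    """Overall DI and Q from the horizontal- and vertical-plane values."""
    di = (10 * math.log10(q_h) + 10 * math.log10(q_v)) / 2
    return di, math.sqrt(q_h * q_v)

# Sanity check: an omnidirectional radiator (0 dB at every angle) should give Q close to 1 and DI close to 0 dB.
omni = [0.0] * 19
print(round(plane_q(omni), 2))                          # ~1.0
print(combine_planes(plane_q(omni), plane_q(omni)))     # (DI ~ 0 dB, Q ~ 1)
```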
Using Directivity Information

A knowledge of the coverage angles of an HF horn is essential if the device is to be oriented properly with respect to an audience area. If polar plots or isobars are available, then the sound contractor can make calculations such as those indicated in Figure 3-9. The horn used in this example is the JBL 2360 Bi-Radial. We note from the isobars for this horn that the -3 dB angle off the vertical is 14°, and the -6 dB and -9 dB angles are 23° and 30° respectively. This data is for the octave band centered at 2 kHz. The horn is aimed so that its major axis is pointed at the farthest seats; this will ensure maximum reach, or "throw," to those seats.

We now look at the -3 dB angle of the horn and compare the reduction in the horn's output along that angle with the inverse square advantage at the closer-in seats covered along that axis. Ideally, we would like the inverse square advantage to exactly match the horn's off-axis fall-off, but this is not always possible. We similarly look at the response along the -6 and -9 dB axes of the horn, comparing them with the inverse square advantages afforded by the closer-in seats. When the designer has flexibility in choosing the horn's location, a good compromise, such as that shown in this figure, will be possible. Beyond the -9 dB angle, the horn's output falls off so rapidly that additional devices, driven at much lower levels, would be needed to cover the front seats (often called "front fill" loudspeakers). A worked numerical comparison of this trade-off appears at the end of this section.

Aiming a horn as shown here may result in a good bit of power being radiated toward the back wall. Ideally, that surface should be fairly absorptive so that reflections from it do not become a problem.

Figure 3-9. Off-axis and inverse square calculations

Directional Characteristics of Combined Radiators

While manufacturers routinely provide data on their individual items of hardware, most provide little, if any, data on how they interact with each other. The data presented here for combinations of HF horns is of course highly wavelength, and thus size, dependent. Appropriate scaling must be done if this data is to be applied to larger or smaller horns.

In general, at high frequencies, horns will act independently of each other. If a pair of horns are properly splayed so that their -6 dB angles just overlap, then the response along that common axis should be smooth, and the effect will be nearly that of a single horn with increased coverage in the plane of overlap. Thus, two horns with 60° coverage in the horizontal plane can be splayed to give 120° horizontal coverage. Likewise, dissimilar horns can be splayed, the resulting angle being the sum of the two coverage angles in the plane of the splay. Splaying may be done in the vertical plane with similar results. Figure 3-10 presents an example of horn splaying in the horizontal plane.

Figure 3-10. Horn splaying for wider coverage

Horns may be stacked in a vertical array to improve pattern control at low frequencies. The JBL Flat-Front Bi-Radials, because of their relatively small vertical mouth dimension, exhibit a broadening in their vertical pattern control below about 2 kHz. When used in vertical stacks of three or four units, the effective vertical mouth dimension is much larger than that of a single horn. The result, as shown in Figure 3-11, is tighter pattern control down to about 500 Hz. In such vertical in-line arrays, the resulting horizontal pattern is the same as for a single horn. Additional details on horn stacking are given in Technical Note Volume 1, Number 7.

Figure 3-11. Stacking horns for higher directivity at low frequencies (solid line, horizontal -6 dB beamwidth; dashed line, vertical -6 dB beamwidth)
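To make the aiming trade-off of Figure 3-9 concrete: along each isobar the off-axis loss works against the closer seats, while the shorter throw works for them. The distances below are hypothetical, chosen only to show the arithmetic; they are not taken from the figure.

```python
import math

def net_level_db(off_axis_loss_db: float, d_axis_m: float, d_seat_m: float) -> float:
    """Level at a closer seat lying along an off-axis angle, relative to the farthest on-axis seat.
    The off-axis loss subtracts; the inverse square advantage of the shorter throw adds."""
    inverse_square_advantage = 20 * math.log10(d_axis_m / d_seat_m)
    return inverse_square_advantage - off_axis_loss_db

# Hypothetical geometry: farthest on-axis seats at 30 m; seats along the -3, -6 and -9 dB
# vertical isobars at roughly 21, 15 and 11 m.
for loss, d in [(3, 21), (6, 15), (9, 11)]:
    print(loss, round(net_level_db(loss, 30, d), 1))
# Results near 0 dB mean the off-axis fall-off and the inverse square advantage roughly cancel,
# which is the compromise the aiming exercise is looking for.
```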
Chapter 4: An Outdoor Sound Reinforcement System

Introduction

Our study of sound reinforcement systems begins with an analysis of a simple outdoor system. The outdoor environment is relatively free of reflecting surfaces, and we will make the simplifying assumption that free field conditions exist. A basic reinforcement system is shown in Figure 4-1A. The essential acoustical elements are the talker, microphone, loudspeaker, and listener. The electrical diagram of the system is shown at B. The dotted line indicates the acoustical feedback path which can exist around the entire system.

When the system is turned on, the gain of the amplifier can be advanced up to some point at which the system will "ring," or go into feedback. At the onset of feedback, the gain around the electroacoustical path is unity and at a zero phase angle. This condition is shown at C, where a single pulse at the microphone gives rise to a repetitive signal, fed back from the loudspeaker, which quickly builds into sustained oscillation at a single frequency with a period related to Δt. Even at levels somewhat below feedback, the response of the system will be irregular, because the system is "trying" to go into feedback but does not have enough loop gain to sustain it. This is shown in Figure 4-2. As a rule, a workable reinforcement system should have a gain margin of 6 to 10 dB before feedback if it is to sound natural on all types of program input.

Figure 4-1. A simple outdoor reinforcement system
Figure 4-2. Electrical response of a sound system 3 dB below sustained acoustical feedback

The Concept of Acoustical Gain

Boner (4) quantified the concept of acoustical gain, and we will now present its simple but elegant derivation. Acoustical gain is defined as the increase in level that a given listener in the audience perceives with the system turned on, as compared to the level the listener hears directly from the talker when the system is off. Referring to Figure 4-3, let us assume that both the loudspeaker and microphone are omnidirectional (DI = 0 dB and Q = 1) and that the talker produces a level of 70 dB at the microphone, 1 meter away (Ds = 1 meter). Then, by inverse square loss, the level at a listener 7 meters away (D0 = 7 meters) will be:

70 dB - 20 log (7/1) = 70 - 17 = 53 dB

Now we turn the system on and advance the gain until we are just at the onset of feedback. This will occur when the loudspeaker, along the D1 path of 4 meters, produces a level at the microphone equal to that of the talker, 70 dB. If the loudspeaker produces a level of 70 dB at the microphone, it will produce a level at the listener, 6 meters away along D2, of:

70 - 20 log (6/4) = 70 - 3.5 = 66.5 dB

With no safety margin, the maximum gain this system can produce is:

66.5 - 53 = 13.5 dB

Rewriting our equations:

Maximum gain = [70 - 20 log (D2/D1)] - [70 - 20 log (D0/Ds)]

This simplifies to:

Maximum gain = 20 log D0 - 20 log Ds + 20 log D1 - 20 log D2

Figure 4-3. System gain calculations, loudspeaker and microphone both omnidirectional

Adding a 6 dB safety factor gives us the usual form of the equation:

Maximum gain = 20 log D0 - 20 log Ds + 20 log D1 - 20 log D2 - 6

In this form, the gain equation tells us several things, some of them intuitively obvious (a short numerical check follows this list):

1. Gain is independent of the level of the talker.
2. Decreasing Ds will increase gain.
3. Increasing D1 will increase gain.
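The numerical check promised above is a direct transcription of the gain equation; the function name is ours. The worked example quotes 13.5 dB and 7.5 dB because it rounds its intermediate logarithms, while the unrounded arithmetic gives values about 0.1 dB lower.

```python
import math

def max_acoustical_gain(d0, ds, d1, d2, safety_db=6.0):
    """Maximum acoustical gain for omnidirectional microphone and loudspeaker:
    20 log D0 - 20 log Ds + 20 log D1 - 20 log D2 - safety margin."""
    return (20 * math.log10(d0) - 20 * math.log10(ds)
            + 20 * math.log10(d1) - 20 * math.log10(d2) - safety_db)

# Figure 4-3 example: D0 = 7 m, Ds = 1 m, D1 = 4 m, D2 = 6 m
print(round(max_acoustical_gain(7, 1, 4, 6, safety_db=0), 1))  # ~13.4 dB with no safety margin
print(round(max_acoustical_gain(7, 1, 4, 6), 1))               # ~7.4 dB with the 6 dB safety factor
```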
The Influence of Directional Microphones and Loudspeakers on System Maximum Gain

Let us rework the example of Figure 4-3, this time making use of a directional loudspeaker whose midband polar characteristics are as shown in Figure 4-4A. It is obvious from looking at Figure 4-4A that sound arriving at the microphone along the D1 direction will be reduced 6 dB relative to the omnidirectional loudspeaker. This 6 dB translates directly into added gain potential for the system. The same holds for directional microphones, as shown in Figure 4-5A. In Figure 4-5B, we show a system using an omnidirectional loudspeaker and a cardioid microphone with its -6 dB axis facing toward the loudspeaker. This system is equivalent to the one shown in Figure 4-4B; both exhibit a 6 dB increase in maximum gain over the earlier case where both microphone and loudspeaker were omnidirectional.

Finally, we can use both directional loudspeakers and microphones to pick up additional gain. We simply calculate the maximum gain using omnidirectional elements, and then add to that value the off-axis pattern advantage in dB for both loudspeaker and microphone. As a practical matter, however, it is not wise to rely too heavily on directional microphones and loudspeakers for a significant increase in system gain. Most designers are content to realize no more than 4 to 6 dB of overall added gain from the use of directional elements, because microphone and loudspeaker directional patterns are not constant with frequency. Most directional loudspeakers will, at low frequencies, appear to be nearly omnidirectional. If more gain is called for, the most straightforward way to get it is to reduce Ds or increase D1.

Figure 4-4. System gain calculations, directional loudspeaker
Figure 4-5. System gain calculations, directional microphone

How Much Gain is Needed?

The parameters of a given sound reinforcement system may be such that we have more gain than we need. When this is the case, we simply turn things down to a comfortable point, and everyone is happy. But things often do not work out so well. What is needed is some way of determining beforehand how much gain we will need, so that we can avoid specifying a system which will not work. One way of doing this is by specifying the equivalent, or effective, acoustical distance (EAD), as shown in Figure 4-6. Sound reinforcement systems may be thought of as effectively moving the talker closer to the listener. In a quiet environment, we may not want to bring the talker any closer than, say, 3 meters from the listener. What this means, roughly, is that the loudness produced by the reinforcement system should approximate, for a listener at D0, the loudness level of an actual talker at a distance of 3 meters.

The gain necessary to do this is calculated from the inverse square relation between D0 and EAD:

Necessary gain = 20 log D0 - 20 log EAD

In our earlier example, D0 = 7 meters. Setting EAD = 3 meters, then:

Necessary gain = 20 log (7) - 20 log (3) = 17 - 9.5 = 7.5 dB

Assuming that both loudspeaker and microphone are omnidirectional, the maximum gain we can expect is:

Maximum gain = 20 log (7) - 20 log (1) + 20 log (4) - 20 log (6) - 6
Maximum gain = 17 - 0 + 12 - 15.5 - 6 = 7.5 dB

As we can see, the necessary gain and the maximum gain are both 7.5 dB, so the system will be workable.
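The workability test above amounts to comparing the two formulas just quoted. A minimal sketch, again with our own function names and the same rounding caveat as before:

```python
import math

def necessary_gain(d0, ead):
    """Gain needed so the system sounds like a talker at distance EAD: 20 log D0 - 20 log EAD."""
    return 20 * math.log10(d0) - 20 * math.log10(ead)

def max_gain_omni(d0, ds, d1, d2, safety_db=6.0):
    """Maximum gain with omnidirectional elements and a safety margin."""
    return (20 * math.log10(d0) - 20 * math.log10(ds)
            + 20 * math.log10(d1) - 20 * math.log10(d2) - safety_db)

need = necessary_gain(7, 3)            # quiet environment, EAD = 3 m
have = max_gain_omni(7, 1, 4, 6)       # D0 = 7 m, Ds = 1 m, D1 = 4 m, D2 = 6 m
print(round(need, 1), round(have, 1), need <= have)   # ~7.4 dB needed, ~7.4 dB available -> workable
```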
If, however, we were specifying a system for a noisier environment requiring a shorter EAD, the system would not have sufficient gain. For example, a new EAD of 1.5 meters would require 6 dB more acoustical gain. As we have discussed, using a directional microphone and a directional loudspeaker would just about give us the needed 6 dB. A simpler, and better, solution would be to reduce Ds to 0.5 meter in order to get the added 6 dB of gain.

In general, in an outdoor system, satisfactory articulation will result when speech peaks are about 25 dB higher than the A-weighted ambient noise level. Typical conversation takes place at levels of 60 to 65 dB at a distance of one meter. Thus, in an ambient noise field of 50 dB, we would require speech peaks of 75 to 80 dB for comfortable listening, and this would require an EAD as short as 0.25 meter, calculated as follows:

Speech level at 1 meter = 65 dB
Speech level at 0.5 meter = 71 dB
Speech level at 0.25 meter = 77 dB

Let us see what we must do to our outdoor system to make it work under these demanding conditions. First, we calculate the necessary acoustical gain:

Necessary gain = 20 log D0 - 20 log EAD
Necessary gain = 20 log (7) - 20 log (0.25) = 17 + 12 = 29 dB

Figure 4-6. Concept of Effective Acoustical Distance (EAD)

As we saw in the earlier example, our system has only 7.5 dB of maximum gain available with a 6 dB safety factor. By going to both a directional microphone and a directional loudspeaker, we can increase this by about 6 dB, yielding a maximum gain of 13.5 dB, still some 16 dB short of what we actually need. The solution is obvious: a hand-held microphone will be necessary in order to achieve the required gain. For 16 dB of added gain, Ds will have to be reduced to the value calculated below:

16 = 20 log (1/x)
16/20 = log (1/x)
10^0.8 = 1/x

Therefore: x = 1/10^0.8 = 0.16 meter (about 6")

Of course, the problem with a hand-held microphone is that it is difficult for the user to maintain a fixed distance between the microphone and his mouth. As a result, the gain of the system will vary considerably with only small changes in the performer-microphone operating distance. It is always better to use some kind of personal microphone, one worn by the user. In this case, a swivel-type microphone attached to a headpiece would be best, since it provides the minimum value of Ds. This type of microphone is now becoming very popular on stage, largely because a number of major pop and country artists have adopted it. In other cases a simple tie-tack microphone may be sufficient.
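The Ds calculation above is just a rearrangement of the 20 log relationship: each halving of the talker-to-microphone distance buys about 6 dB. A minimal sketch (function name ours):

```python
import math

def ds_for_added_gain(ds_current_m: float, added_gain_db: float) -> float:
    """Talker-to-microphone distance that yields `added_gain_db` more gain than ds_current_m,
    since gain rises by 20 log (Ds_old / Ds_new)."""
    return ds_current_m / (10 ** (added_gain_db / 20))

print(round(ds_for_added_gain(1.0, 16), 2))  # ~0.16 m (about 6"), matching the worked example
print(round(ds_for_added_gain(1.0, 6), 2))   # ~0.5 m, the figure quoted for the quieter case above
```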
Conclusion

In this chapter we have presented the rudiments of gain calculation for sound systems, and the methods of analysis form the basis for the study of indoor systems, which we will cover in a later chapter.

Chapter 5: Fundamentals of Room Acoustics

Introduction

Most sound reinforcement systems are located indoors, and the acoustical properties of the enclosed space have a profound effect on the system's requirements and its performance. Our study begins with a discussion of sound absorption and reflection, the growth and decay of sound fields in a room, reverberation, direct and reverberant sound fields, critical distance, and room constant. If analyzed in detail, any enclosed space is quite complex acoustically. We will make many simplifications as we construct "statistical" models of rooms, our aim being to keep our calculations to a minimum while maintaining accuracy on the order of 10%, or ±1 dB.

Absorption and Reflection of Sound

Sound tends to "bend around" non-porous, small obstacles. However, large surfaces such as the boundaries of rooms are typically partially flexible and partially porous. As a result, when sound strikes such a surface, some of its energy is reflected, some is absorbed, and some is transmitted through the boundary and propagated again as sound waves on the other side. See Figure 5-1. All three effects may vary with frequency and with the angle of incidence, but in typical situations they do not vary with sound intensity. Over the range of sound pressures commonly encountered in audio work, most construction materials have the same characteristics of reflection, absorption and transmission whether struck by very weak or very strong sound waves.

Figure 5-1. Sound impinging on a large boundary surface

When dealing with the behavior of sound in an enclosed space, we must be able to estimate how much sound energy will be lost each time a sound wave strikes one of the boundary surfaces or one of the objects inside the room. Tables of absorption coefficients for common building materials, as well as for special "acoustical" materials, can be found in any architectural acoustics textbook or in data sheets supplied by manufacturers of construction materials. Unless otherwise specified, published sound absorption coefficients represent average absorption over all possible angles of incidence. This is desirable from a practical standpoint, since the random incidence coefficient fits the situation that exists in a typical enclosed space, where sound waves rebound many times from each boundary surface in virtually all possible directions.

Absorption ratings normally are given for a number of different frequency bands. Typically, each band of frequencies is one octave wide, and standard center frequencies of 125 Hz, 250 Hz, 500 Hz, 1 kHz, etc., are used. In sound system design it usually is sufficient to know the absorption characteristics of materials in three or four frequency ranges. In this handbook, we make use of absorption ratings in the bands centered at 125 Hz, 1 kHz and 4 kHz. The effects of mo

Teacher

Last updated: 2013-02-28
Usage frequency: 1
Quality:
Reference: Wikipedia

All Traffic Lights Stay Green - Cheat code: Right, R1, Up, L2, L2, Left, R1, L1, R1, R1
Blow Up Vehicle - Cheat code: L1, L2, L2, Up, Down, Down, Up, R1, R2, R2
Cars On Water - Cheat code: Right, R2, Circle, R1, L2, Square, R1, R2
Chaos Mode - Cheat code: L2, Right, L1, Triangle, Right, Right, R1, L1, Right, L1, L1, L1
Destroy Cars - Cheat code: R2, L2, R1, L1, L2, R2, Square, Triangle, Circle, Triangle, L2, L1
Flying Boats - Cheat code: R2, Circle, Up, L1, Right, R1, Right, Up, Square, Triangle
Full Weapon Aiming While Driving - Cheat code: Up, Up, Square, L2, Right, X, R1, Down, R2, Circle
Hitman In All Weapons - Cheat code: Down, Square, X, Left, R1, R2, Left, Down, Down, L1, L1, L1
Increase Car Speed - Cheat code: Up, L1, R1, Up, Right, Up, X, L2, X, L1
Infinite Ammo - Cheat code: L1, R1, Square, R1, Left, R2, R1, Left, Square, Down, L1, L1
Infinite Health - Cheat code: Down, X, Right, Left, Right, R1, Right, Down, Up, Triangle
Spawn Jetpack - Cheat code: Left, Right, L1, L2, R1, R2, Up, Down, Left, Right
Lower Wanted Level - Cheat code: R1, R1, Circle, R2, Up, Down, Up, Down, Up, Down
Max All Vehicle Stats (Driving, Flying Bike, Cycling) - Cheat code: Square, L2, X, R1, L2, L2, Left, R1, Right, L1, L1, L1
No Traffic or Pedestrians - Cheat code: X, Down, Up, R2, Down, Triangle, L1, Triangle, Left
Pedestrian Riot - Cheat code: Down, Left, Up, Left, X, R2, R1, L2, L1 (Note: this code cannot be turned off.)
Pedestrians Have Weapons - Cheat code: R2, R1, X, Triangle, X, Triangle, Up, Down
Recruits Anyone (with Rockets) - Cheat code: R2, R2, R2, X, L2, L1, R2, L1, Down, X
Spawn Parachute - Cheat code: Left, Right, L1, L2, R1, R2, R2, Up, Down, Right, L1
Spawn Dozer - Cheat code: R2, L1, L1, Right, Right, Up, Up, X, L1, Left
Super Punch - Cheat code: Up, Left, X, Triangle, R1, Circle, Circle, Circle, L2
Weapons Pack 1 - Cheat code: R1, R2, L1, R2, Left, Down, Right, Up, Left, Down, Right, Up
Weapons Pack 2 - Cheat code: R1, R2, L1, R2, Left, Down, Right, Up, Left, Down, Down, Left
Weapons Pack 3 - Cheat code: R1, R2, L1, R2, Left, Down, Right, Up, Left, Down, Down, Down

google translation

Last updated: 2013-01-18
Subject: General
Usage frequency: 1
Quality:

