MyMemory, World's Largest Translation Memory


You searched for: procedure text use laptop

Human contributions

From professional translators, enterprises, web pages and freely available translation repositories.


Indonesian

English

Info

Laptop

Laptop

Last Update: 2013-02-08
Usage Frequency: 1

Laptop

Lappy

Last Update: 2013-02-08
Usage Frequency: 1
Reference: Wikipedia

Laptop

Laptops

Last Update: 2012-12-23
Usage Frequency: 1
Reference: Wikipedia

Laptop

Notebook computer

Last Update: 2012-09-10
Usage Frequency: 1
Reference: Wikipedia

Laptop Centre Dibobol 50 Laptop Hilang

Laptop Centre Burglarized, 50 Laptops Missing

Last Update: 2011-01-06
Subject: General
Usage Frequency: 1

narrative text

narrative text

Last Update: 2014-09-05
Subject: History
Usage Frequency: 1
Reference: Anonymous

This page will allow you to create your own ad products. Use the tabs to create Text, Image, or Video Ads. *NOTE: Advertisers cannot purchase ads on your website until your ad zone code has been placed on your website. Once the AdHitz system verifies placement of the code, your ad zones can then be purchased. After your ad zones and types are set, you should create code to place on your website. Create code by clicking here.

Sick

Last Update: 2014-09-05
Subject: General
Usage Frequency: 1
Reference: Anonymous

1 00:00:00,100 --> 00:00:01,600 Previously on criminal minds... 2 00:00:01,700 --> 00:00:03,400 Elle, 2 weeks of pure heaven. 3 00:00:03,500 --> 00:00:04,600 Do not call me for anything. 4 00:00:04,700 --> 00:00:06,300 Have a great time. You all deserve a break. 5 00:00:06,400 --> 00:00:07,500 Welcome to paradise. 6 00:00:07,600 --> 00:00:09,000 Your resort is beautiful. 7 00:00:09,100 --> 00:00:12,000 A man said there had been a murder in room 19. 8 00:00:12,100 --> 00:00:13,000 Get down! Aah! 9 00:00:13,100 --> 00:00:14,500 Who are you?! 10 00:00:14,600 --> 00:00:15,800 Where is the victim's head? 11 00:00:15,900 --> 00:00:16,900 I'm here on vacation. 12 00:00:17,000 --> 00:00:18,600 From jamaica. Someone sent you a head? 13 00:00:18,600 --> 00:00:20,300 Morgan and elle are in jamaica right now. 14 00:00:20,400 --> 00:00:21,500 Agent greenaway wasn't even here 15 00:00:21,600 --> 00:00:22,600 When this man was killed. 16 00:00:22,600 --> 00:00:23,700 How did he know where we were? 17 00:00:23,800 --> 00:00:25,500 A man came to the door with something 18 00:00:25,600 --> 00:00:26,500 He said you would need right away. 19 00:00:26,600 --> 00:00:27,700 He came to the door? 20 00:00:27,700 --> 00:00:29,100 I was playing a game yesterday. 21 00:00:29,200 --> 00:00:31,500 The hacker could have gotten into my computer first. 22 00:00:31,600 --> 00:00:34,100 I have far less protection on my own laptop. 23 00:00:34,200 --> 00:00:35,500 How could you be that stupid? 24 00:00:35,600 --> 00:00:37,000 I found him. You what? 25 00:00:37,100 --> 00:00:38,100 I know who he is, the hacker, 26 00:00:38,100 --> 00:00:39,100 His name's geist. 27 00:00:39,200 --> 00:00:40,500 Now you're on a quest. 28 00:00:40,600 --> 00:00:42,200 A young girl's life depends 29 00:00:42,300 --> 00:00:43,800 On the successful completion of it. 30 00:00:43,900 --> 00:00:46,100 Aah! Aah! 31 00:00:46,200 --> 00:00:48,900 The one rule is, only the members of your team 32 00:00:49,000 --> 00:00:50,500 May participate in the quest. 33 00:00:51,800 --> 00:00:52,800 He's given us all the clues 34 00:00:52,800 --> 00:00:54,700 Needed to complete this quest, 35 00:00:54,800 --> 00:00:56,300 Including this book code. 36 00:00:56,400 --> 00:00:58,900 Each one of these sets of numbers represents a particular word. 37 00:00:58,900 --> 00:01:00,200 From what book? 38 00:01:00,200 --> 00:01:02,200 Jj, get some reporters here as soon as possible. 39 00:01:02,200 --> 00:01:04,100 Didn't he say that we had to keep this under the team? 40 00:01:04,200 --> 00:01:05,500 We're looking for this man... 41 00:01:05,500 --> 00:01:08,400 No, no, no, no, i said no outsiders. 42 00:01:08,500 --> 00:01:10,000 I'm awake. 43 00:01:10,100 --> 00:01:11,300 Anderson, take greenaway home. 44 00:01:11,400 --> 00:01:12,900 Yes, sir. Get some sleep. 45 00:01:13,000 --> 00:01:15,400 I told you, it was one rule. 46 00:01:15,500 --> 00:01:16,600 One rule! 47 00:01:21,200 --> 00:01:22,900 "The defects and faults of the mind 48 00:01:22,900 --> 00:01:25,200 "Are like wounds in the body. 49 00:01:25,200 --> 00:01:26,300 "After all imaginable care has been taken 50 00:01:26,400 --> 00:01:28,700 "To heal them up, 51 00:01:28,700 --> 00:01:32,400 Still there will be a scar left behind." 52 00:01:32,400 --> 00:01:35,000 French writer francois la rochefoucauld. 53 00:01:41,400 --> 00:01:42,500 Sir, i thought you'd want to know. 54 00:01:42,500 --> 00:01:44,700 We identified the girl in the video... 55 00:01:44,800 --> 00:01:46,400 Rebecca bryant.
56 00:01:46,400 --> 00:01:48,500 Leave it on the desk. 57 00:02:07,000 --> 00:02:09,400 Reid, how many books do you think are published in a year? 58 00:02:09,400 --> 00:02:10,600 In the whole world? 59 00:02:10,600 --> 00:02:11,700 Thousands. 60 00:02:11,800 --> 00:02:14,400 Great, and all we gotta do is find one. 61 00:02:16,100 --> 00:02:17,800 You know, i can see this unsub 62 00:02:17,800 --> 00:02:19,000 Gettin' our phone numbers and addresses 63 00:02:19,000 --> 00:02:20,000 From the bureau personnel files, 64 00:02:20,100 --> 00:02:21,400 But come on, man, 65 00:02:21,500 --> 00:02:23,700 It really says in there that gideon digs nellie fox? 66 00:02:23,800 --> 00:02:25,700 Or that jj collects butterflies? 67 00:02:25,700 --> 00:02:27,600 I didn't even know these things about us. 68 00:02:27,700 --> 00:02:29,200 "Ever would it be night, 69 00:02:29,200 --> 00:02:31,900 But always clear day to any man's sight." 70 00:02:31,900 --> 00:02:34,800 Reid, not again with the poem from the music box, please. 71 00:02:34,800 --> 00:02:35,900 There's something familiar about it. 72 00:02:36,000 --> 00:02:38,800 I think i've heard it somewhere before. 73 00:02:38,900 --> 00:02:40,400 Thought you had a photographic memory. 74 00:02:40,400 --> 00:02:41,700 Eidetic memory, 75 00:02:41,800 --> 00:02:43,100 And that's primarily related to things i read. 76 00:02:43,200 --> 00:02:45,100 Like i said, this is something i think i've heard. 77 00:02:45,200 --> 00:02:46,200 Which leaves us... 78 00:02:46,300 --> 00:02:47,200 Nowhere, that's where it leaves us. 79 00:02:47,300 --> 00:02:48,300 Not necessarily. 80 00:02:48,400 --> 00:02:49,400 How would we proceed 81 00:02:49,500 --> 00:02:52,000 If we didn't have all these clues? 82 00:02:52,100 --> 00:02:53,500 What's the first thing we'd look at? 83 00:02:53,600 --> 00:02:54,900 Victimology. 84 00:02:54,900 --> 00:02:56,500 Why this particular victim in this particular place 85 00:02:56,600 --> 00:02:57,600 At this particular time? 86 00:02:57,600 --> 00:02:59,100 We have a victim, don't we? 87 00:02:59,200 --> 00:03:01,200 Rebecca bryant. 88 00:03:01,300 --> 00:03:03,400 Missin' out of south boston, virginia. 89 00:03:03,400 --> 00:03:05,300 You can get there in a few hours if you hurry. 90 00:03:05,400 --> 00:03:07,900 Take jj. Find out everything there is to know about this girl. 91 00:03:07,900 --> 00:03:08,900 You got it. 92 00:03:08,900 --> 00:03:10,000 Been lettin' him lead us around 93 00:03:10,100 --> 00:03:11,000 Like he's somethin' more than he is. 94 00:03:11,100 --> 00:03:12,600 He's just another unsub. 95 00:03:12,600 --> 00:03:13,800 Let's start puttin' together a profile. 96 00:03:13,900 --> 00:03:15,300 What do you want me to do? 97 00:03:15,400 --> 00:03:16,500 Just keep workin' on this. 98 00:03:16,500 --> 00:03:18,100 If anybody can put it together, you can. 99 00:03:40,600 --> 00:03:42,400 Please help me. 100 00:03:48,000 --> 00:03:49,000 Anderson?

feedback

Last Update: 2014-07-24
Subject: General
Usage Frequency: 1
Reference: Anonymous

The Importance of SOPs (Standard Operating Procedures). The existence of SOPs (Standard Operating Procedures) is very important for the operations of a company. With SOPs we can anticipate the various situations that may arise in running our business. We must work on these SOPs from the moment we establish the company. In the early stages the SOPs will look simple, but as we go on running the business we will flesh them out further. SOPs give company staff direction in carrying out their work. With SOPs in place, employees know the scope of their work. With this clarity of scope, job descriptions become clear and do not overlap. In this way the performance of company staff is well maintained. SOPs can be divided into various areas, for example: 1. SOPs for handling prospective clients; 2. SOPs for carrying out projects; 3. SOPs for after-sales service; 4. SOPs for quality control; 5. SOPs for finance; 6. SOPs for goods handling, and so on. Because these SOPs draw on the various skills of several staff members, we can assign the sub-areas to staff with the relevant expertise. SOPs must be continuously evaluated and developed. At regular intervals, at least once every 6 months, SOPs must be evaluated and updated to improve the performance of the company as a whole.

QUERY LENGTH LIMIT EXCEEDED. MAX ALLOWED QUERY: 500 CHARS

Last Update: 2014-04-03
Subject: General
Usage Frequency: 1
Reference: Anonymous

No.: 080/SK/II/2014, 12 February 2014. Enclosure: none. Subject: Goods Order. To Mr. Wijaya Lee, "Pelangi Elektronik" store, Jl. Gatot Subroto No. 26, South Jakarta. Dear Sir, Based on the information we received from your brochure of 5 February 2014, no. 55/PE/SP/II/2014, we would like to order the following electronic goods from your store: No. / ITEM / BRAND / QUANTITY / UNIT PRICE / TOTAL: 1. Netbook, Apple, 2 units, Rp 5,000,000, Rp 20,000,000; 2. Laptop, Toshiba, 3 units, Rp 4,000,000, Rp 12,000,000; 3. Printer, Canon, 6 units, Rp 400,000, Rp 2,400,000; 4. LCD, Toshiba, 1 unit, Rp 5,000,000, Rp 5,000,000. We hope the ordered goods will arrive at our company no later than one week after the order. Payment will be sent as soon as we receive the goods. Thank you for your attention and cooperation. Yours faithfully, Wahyudi Sri Lokeswara, ST, Marketing Manager

Afrikaans

Last Update: 2014-02-25
Usage Frequency: 1
Reference: Wikipedia

Predicting Australian Takeover Targets: A Logit Analysis

Maurice Peat* and Maxwell Stevenson*

* Discipline of Finance, School of Finance, The University of Sydney

Abstract

Positive announcement-day adjusted returns to target shareholders in the event of a takeover are well documented. Investors who are able to accurately predict firms that will be the subject of a takeover attempt should be able to earn these excess returns. In this paper a series of probabilistic regression models was developed that uses financial statement variables suggested by prior research as explanatory variables. The models, applied to in-sample and out-of-sample data, led to predictions of takeover targets that were better than chance in all cases. The economic outcome resulting from holding a portfolio of the predicted targets over the prediction period is also analysed.

Keywords: takeovers, targets, prediction, classification, logit analysis
JEL Codes: G11, G17, G23, G34

This is a draft copy and is not to be quoted.

1. Introduction

In this paper our aim is to accurately predict companies that will become takeover targets. Theoretically, if it is possible to predict takeovers with accuracy greater than chance, it should be possible to generate abnormal returns from holding a portfolio of the predicted targets. Evidence of abnormal returns of 20% to 30% made by shareholders of firms on the announcement of a takeover bid is why the prediction of these events is of interest to academics and practitioners alike.

The modelling approach adopted in this study was based on the discrete choice approach used by Palepu (1986) and Barnes (1999). The models were based on financial statement information, using variables suggested by the numerous theories that have been put forward to explain takeover activity. The performance of the models was evaluated using statistical criteria. Further, the predictions from the models were rated against chance and economic criteria through the formation and tracking of a portfolio of predicted targets. Positive results were found under both evaluation criteria.

Takeover prediction studies are a logical extension of the work of Altman (1968), who used financial statement information to explain corporate events. Early studies by Simkowitz and Monroe (1971) and Stevens (1973) were based on the Multiple Discriminant Analysis (MDA) technique. Stevens (1973) coupled MDA with factor analysis to eliminate potential multicollinearity problems and reported a predictive accuracy of 67.5%, suggesting that takeover prediction was viable. Belkaoui (1978) and Rege (1984) conducted similar analyses in Canada, with Belkaoui (1978) confirming the results of these earlier researchers and reporting a predictive accuracy of 85%. Concerns were raised by Rege (1984), who was unable to predict with similar accuracy; such concerns were also raised in research by others, including Singh (1971) and Fogelberg, Laurent, and McCorkindale (1975).

Reacting to the wide criticism of the MDA method, researchers began to use discrete choice models as the basis of their research. Harris et al. (1984) used probit analysis to develop a model and found that it had extremely high explanatory power, but they were unable to discriminate between target and non-target firms with any degree of accuracy. Dietrich and Sorensen (1984) continued this work using a logit model and achieved a classification accuracy rate of 90%. Palepu (1986) addressed a number of methodological problems in takeover prediction.
He suggested the use of state-based prediction samples, in which a number of targets were matched with non-targets for the same sample period. While this approach was appropriate for the estimation sample, it exaggerated accuracies within the predictive samples because the estimated error rates in these samples were not indicative of error rates within the population of firms. He also proposed the derivation of an optimal cut-off point which considered the decision problem at hand. On the basis of this rectified methodology, along with the application of a logit model to a large sample of US firms, Palepu (1986) provided evidence that the ability of the model was no better than a chance selection of target and non-target firms. Barnes (1999) also used the logit model and a modified version of the optimal cut-off rule on UK data. His results indicated that a portfolio of predicted targets may have been consistent with Palepu's finding, but he was unable to document this in the UK context due to model inaccuracy.

In the following section the economic explanations underlying takeover activity are discussed. Section 3 outlines our takeover hypotheses and describes the explanatory variables that are used in the modelling procedure. The modelling framework and data used in the study are contained in Section 4, while the results of our model estimation, predictions, classification accuracy and portfolio economic outcomes are found in Section 5. We conclude in Section 6.

2. Economic explanations of takeover activity

Economic explanations of takeover activity have suggested the explanatory variables that were included in this discrete choice model development study. Jensen and Meckling (1976) posited that agency problems occurred when decision making and risk bearing were separated between management and stakeholders1, leading to management inefficiencies. Manne (1965) and Fama (1980) theorised that a mechanism existed that ensured management acted in the interests of the vast number of small non-controlling shareholders2. They suggested that a market for corporate control existed in which alternative management teams competed for the rights to control corporate assets. The threat of acquisition aligned management objectives with those of stakeholders, as managers are terminated in the event of an acquisition in order to rectify inefficient management of the firm's assets. Jensen and Ruback (1983) suggested that both capital gains and increased dividends are available to an acquirer who could eliminate the inefficiencies created by target management, with the attractiveness of the firm for takeover increasing with the level of inefficiency.

1 Stakeholders are generally considered to be both stock and bond holders of a corporation.
2 We take the interests of shareholders to be in the maximisation of the present value of the firm.

Jensen (1986) looked at the agency costs of free cash flow, another form of management inefficiency. In this case, free cash flow referred to cash flows in excess of positive net present value (NPV) investment opportunities and normal levels of financial slack (retained earnings). The agency cost of free cash flow is the negative NPV value that arises from investing in negative NPV projects rather than returning funds to investors. Jensen (1986) suggested that the market value of the firm should be discounted by the expected agency costs of free cash flow.
These, he argued, were the costs that could be eliminated either by issuing debt to fund an acquisition of stock, or through merger with, or acquisition of, a growing firm that had positive NPV investments and required the use of these excess funds. Smith and Kim (1994) combined the financial pecking order argument of Myers and Majluf (1984) with the free cash flow argument of Jensen (1986) to create another motivational hypothesis, which postulated that inefficient firms forgo profitable investment opportunities because of informational asymmetries. Further, Jensen (1986) argued that, due to information asymmetries that left shareholders less informed, management was more likely to undertake negative NPV projects rather than returning funds to investors. Smith and Kim (1994) suggested that some combination of these firms, such as an inefficient firm and an efficient acquirer, would be the optimal solution to the two respective resource allocation problems. This, they hypothesised, would result in a market value for the combined entity that exceeded the sum of the individual values of the firms. This is one form of financial synergy that can arise in merger situations.

Another form of financial synergy is that which results from a combination of characteristics of the target and bidding firms. Jensen (1986) suggested that an optimal capital structure exists, whereby the marginal benefits and marginal costs of debt are equal. At this point, the cost of capital for a firm is minimised. This suggested that increases in leverage will only be viable for those firms which have free cash flow excesses, and not for those which have an already high level of debt. Lewellen (1971) proposed that in certain situations financial efficiencies may be realised without the realisation of operational efficiencies. These efficiencies relied on a simple Miller and Modigliani (1964) model, which proposed that, in the absence of corporate taxes, an increase in a firm's leverage to reasonable levels would increase the value of the equity share of the company due to a lower cost of capital. A merger of two firms, where either one or both had not utilised their borrowing capacity, would therefore result in a financial gain. This financial gain would represent a valuation gain above the sum of the equity values of the individual firms. However, this result is predicated on the assumption that the firms need to either merge or be acquired in order to achieve it.

Merger waves are well documented in the literature. Gort (1969) suggested that industry disturbances are the source of these merger waves, his argument being that they occurred in response to discrepancies between the valuation of a firm by shareholders and by potential acquirers. As a consequence of economic shocks (such as deregulation, or changes in input or output prices), expectations concerning future cash flow became more variable. This results in an increased probability that the value the acquirer places on a potential target is greater than its current owner's valuation. The result is a possible offer and subsequent takeover. Mitchell and Mulherin (1996), in their analysis of mergers and acquisitions in the US during the 1980s, provided evidence that mergers and acquisitions cluster by industries and time.
Their analysis confirmed the theoretical and empirical evidence provided by Gort (1969) and offered a different view, suggesting that mergers, acquisitions, and leveraged buyouts were the least-cost method of adjusting to the economic shocks borne by an industry.

These theories suggested a clear theoretical base on which to build takeover prediction models. As a result, eight main hypotheses for the motivation of a merger or acquisition have been formulated, along with twenty-three possible explanatory variables to be incorporated into predictive models.

3. Takeover hypotheses and explanatory variables

The most commonly accepted motivation for takeovers is the inefficient management hypothesis.3 The hypothesis states that inefficiently managed firms will be acquired by more efficiently managed firms. Accordingly,

H1: Inefficient management will lead to an increased likelihood of acquisition.

Explanatory variables suggested by this hypothesis as candidates for inclusion in the specifications of predictive models included:

1. ROA (EBIT/Total Assets – Outside Equity Interests)
2. ROE (Net Profit After Tax/Shareholders Equity – Outside Equity Interests)
3. Earnings Before Interest and Tax Margin (EBIT/Operating Revenue)
4. EBIT/Shareholders Equity
5. Free Cash Flow (FCF)/Total Assets
6. Dividend/Shareholders Equity
7. Growth in EBIT over the past year

along with an activity ratio,

8. Asset Turnover (Net Sales/Total Assets)

3 It is also known as the disciplinary motivation for takeovers.

While there are competing explanations for the effect that a firm's undervaluation has on the likelihood of its acquisition by a bidder, there is consistent agreement across all explanations that the greater the level of undervaluation, the greater the likelihood a firm will be acquired. The hypothesis that embodies the impact of these competing explanations is as follows:

H2: Undervaluation of a firm will lead to an increased likelihood of acquisition.

The explanatory variable suggested by this hypothesis is:

9. Market to book ratio (Market Value of Securities/Net Assets)

The Price Earnings (P/E) ratio is closely linked to the undervaluation and inefficient management hypotheses. The impact of the P/E ratio on the likelihood of acquisition is referred to as the P/E hypothesis:

H3: A high Price to Earnings Ratio will lead to a decreased likelihood of acquisition.

It follows from this hypothesis that the P/E ratio is a likely candidate as an explanatory variable for inclusion in models for the prediction of potential takeover targets:

10. Price/Earnings Ratio

The growth resource mismatch hypothesis is the fourth hypothesis. However, the explanatory variables used in models specified to examine this hypothesis capture growth and resource availability separately. This gives rise to the following:

H4: Firms which possess low growth/high resource combinations or, alternatively, high growth/low resource combinations will have an increased likelihood of acquisition.

The explanatory variables suggested by this hypothesis are:

11. Growth in Sales (Operating Revenue) over the past year
12. Capital Expenditure/Total Assets
13. Current Ratio (Current Assets/Current Liabilities)
14. (Current Assets – Current Liabilities)/Total Assets
15. Quick Assets (Current Assets – Inventory)/Current Liabilities

The behaviour of some firms in paying out less of their earnings in order to maintain enough financial slack (retained earnings) to exploit future growth opportunities as they arise has led to the dividend payout hypothesis:

H5: High payout ratios will lead to a decreased likelihood of acquisition.

The obvious explanatory variable suggested by this hypothesis is:

16. Dividend Payout Ratio

Rectification of capital structure problems is an obvious motivation for takeovers. However, there has been some argument as to the impact of low or high leverage on acquisition likelihood. This paper proposes the inefficient financial structure hypothesis, from which the following is derived:

H6: High leverage will lead to a decreased likelihood of acquisition.

The explanatory variables suggested by this hypothesis include:

17. Net Gearing (Short Term Debt + Long Term Debt)/Shareholders Equity
18. Net Interest Cover (EBIT/Interest Expense)
19. Total Liabilities/Total Assets
20. Long Term Debt/Total Assets

The existence of Merger and Acquisition (M&A) activity waves, where takeovers are clustered in wave-like profiles, has been proposed as an indicator of changing levels of M&A activity over time. It has been argued that the identification of M&A waves, with the corresponding improved likelihood of acquisition when the wave is surging, captures the effect of the rate of takeover activity at specific points in time, and serves as valuable input into takeover prediction models. Consistent with M&A activity waves and their explanation as a motivation for takeovers is the industry disturbance hypothesis:

H7: Industry merger and acquisition activity will lead to an increased likelihood of acquisition.

An industry relative ratio of takeover activity is suggested by this hypothesis:

21. Relative industry takeover activity, where the numerator is the total bids launched in a given industry in a given year, and the denominator is the average number of bids launched across all the industries in the ASX

Size will have an impact on the likelihood of acquisition. It seems plausible that smaller firms will have a greater likelihood of acquisition, as larger firms generally face fewer bidding firms with the resources to acquire them. This gives rise to the following hypothesis:

H8: The size of a firm will be negatively related to the likelihood of acquisition.

Explanatory variables that can be employed to control for size include:

22. Log (Total Assets)
23. Net Assets

4. Data and Method

The data requirements for the variables defined above are derived from the financial statements and balance-date price information of Australian listed companies. The financial statement information was sourced from the AspectHuntley database, which includes annual financial statement data for all ASX listed companies between 1995 and 2006. The database includes industry classifications for all firms, which were used in the construction of industry relative ratios. Lists of takeover bids and their respective success were obtained from the Connect4 database. This information enabled the construction of variables for relative merger activity between industries. Additionally, stock prices from the relevant balance dates of all companies were sourced from the AspectHuntley online database, the SIRCA Core Price Data Set and Yahoo! Finance.
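Because the explanatory variables above are simple financial statement ratios, their construction from the raw data is mechanical. The sketch below shows how a few of them might be computed with pandas; the input column names (ebit, total_assets, and so on) are illustrative placeholders, not the AspectHuntley field names.

```python
import numpy as np
import pandas as pd

def build_ratios(fin: pd.DataFrame) -> pd.DataFrame:
    """Compute a subset of the hypothesised explanatory variables.

    `fin` is assumed to hold one row per firm-year of raw financial
    statement data; the column names are assumptions for illustration.
    """
    out = pd.DataFrame(index=fin.index)
    out["roa"] = fin["ebit"] / (fin["total_assets"] - fin["outside_equity"])  # variable 1 (H1)
    out["ebit_margin"] = fin["ebit"] / fin["operating_revenue"]               # variable 3 (H1)
    out["asset_turnover"] = fin["net_sales"] / fin["total_assets"]            # variable 8 (H1)
    out["market_to_book"] = fin["market_value"] / fin["net_assets"]           # variable 9 (H2)
    out["net_gearing"] = (
        fin["short_term_debt"] + fin["long_term_debt"]
    ) / fin["shareholders_equity"]                                            # variable 17 (H6)
    out["log_total_assets"] = np.log(fin["total_assets"])                     # variable 22 (H8)
    return out
```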
4.1 The Discrete Choice Modelling Framework

The modelling procedure used is the nominal logit model, made popular in the bankruptcy prediction literature by Ohlson (1980) and, subsequently, in the takeover prediction literature by Palepu (1986). Logit models are commonly utilised for dichotomous state problems. The model is given by equations [1] to [3] below, where \(P_i\) is the probability that firm \(i\) receives a takeover offer, \(x_i\) is the vector of explanatory variables and \(\beta\) is the vector of coefficients:

\[ P_i = \frac{e^{\beta' x_i}}{1 + e^{\beta' x_i}} \qquad [1] \]

\[ L_i = \ln\left(\frac{P_i}{1 - P_i}\right) = \beta' x_i \qquad [2] \]

\[ \frac{\partial P_i}{\partial x_{ik}} = P_i \left(1 - P_i\right) \beta_k \qquad [3] \]

The logit model was developed to overcome the rigidities of the Linear Probability Model in the presence of a binary dependent variable. Equations [1] and [2] show the existence of a linear relationship between the log-odds ratio (otherwise known as the logit, \(L_i\)) and the explanatory variables. However, the relationship between the probability of the event and the explanatory variables is non-linear. This non-linear relationship has a major advantage, demonstrated in equation [3], which measures the change in the probability of the event as a result of a small increment in the \(k\)-th explanatory variable. When the probability of the event is high or low, the incremental impact of a change in an explanatory variable on the likelihood of the event will be compressed; a large change in the explanatory variables is then required to change the classification of the observation. If a firm is clearly classified as a target or non-target, a large change in the explanatory variables is required to change its classification.

4.2 Sampling Schema

Two samples were used in the model building and evaluation procedure. They were selected to mimic the problem faced by a practitioner attempting to predict takeover targets into the future. The first sample was used to estimate the model and to conduct in-sample classification. It is referred to as the Estimation Sample. This sample was based on financial data for the 2001 and 2002 financial years for firms that became takeover targets, as well as selected non-targets, between January 2003 and December 2004. The lag in the dates allows for the release of financial information, as well as for the release of financial statements of firms whose balance dates fall after 30 June. Following model estimation, the probability of a takeover offer was estimated for each firm in the entire sample of firms between January 2003 and December 2004, using the estimated model and each firm's 2001 and 2002 financial data. Ex-post predictive ability for each firm was then assessed.

A second sample was then used to assess the predictive accuracy of the model estimated with the estimation sample data. It is referred to as the Prediction Sample. This sample includes the financial data for the 2003 and 2004 financial years, used in conjunction with target and non-target firms for the period January 2005 to December 2006. Using the model estimated from the 2001 and 2002 financial data, the sample of firms from 2005 and 2006 was fitted to the model using their 2003 and 2004 financial data. They were then classified as targets or non-targets against the 2005 and 2006 outcomes. This sampling methodology allows for the evaluation of ex-ante predictive ability rather than ex-post classification accuracy. A diagrammatic explanation of the sample data used for both model estimation and prediction can be found below in Figure 1, and in tabular form in Table 1.
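A minimal sketch of the estimation step in Python follows, using statsmodels for the logit of equations [1] to [3]. The estimation frame and its column names (target plus a list of ratio columns) are assumptions for illustration, not the authors' implementation.

```python
import pandas as pd
import statsmodels.api as sm

# `estimation` is assumed to hold 2001-02 ratios for the Estimation Sample
# plus a binary `target` column (1 = takeover offer received in 2003-04).
def fit_logit(estimation: pd.DataFrame, explanatory: list[str]):
    X = sm.add_constant(estimation[explanatory])
    return sm.Logit(estimation["target"], X).fit(disp=False)

# Equation [1]: predicted acquisition probabilities for the full universe.
# result = fit_logit(estimation, ratio_columns)
# probs = result.predict(sm.add_constant(all_firms[ratio_columns]))
```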
Figure 1: Timeline of sample data used in model estimation and prediction.

Table 1 Sample data used in model estimation and prediction

Sample               Financial Data    Classification Period
Estimation Sample    2001 and 2002     2003 and 2004
Prediction Sample    2003 and 2004     2005 and 2006

For model estimation, a technique known as state-based sampling was used. Allison (2006) suggested the use of this sampling approach in order to minimise the standard error of the estimated parameters when the dependent variable states were unequally distributed in the population. All the target firms were included in the estimation sample, along with an equal number of randomly selected non-target firms for the same period. Targets in the estimation sample were randomly paired with the sample of non-target firms for the same period over which financial data was measured.4

4 This approach differs from matched pair samples, where targets are matched to non-targets on the basis of variables such as industry and/or size.
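State-based sampling is straightforward to reproduce; a sketch, assuming a firms frame with a binary target column, is given below.

```python
import pandas as pd

def state_based_sample(firms: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """All targets plus an equal number of randomly drawn non-targets,
    per Allison (2006); `firms` is assumed to carry a binary `target`
    column for the relevant classification period."""
    targets = firms[firms["target"] == 1]
    non_targets = firms[firms["target"] == 0].sample(len(targets), random_state=seed)
    return pd.concat([targets, non_targets])
```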
4.3 Assessing the Estimated Model and its Predictive Accuracy

Walter (1994), Zanakis and Zopounidis (1997), and Barnes (1999) utilised the Proportional Chance Criterion and the Maximum Chance Criterion to assess the predictions of discriminant models relative to chance. These criteria are also applicable to the discrete choice modelling exercise that is the focus of this study and, accordingly, are discussed more fully below.

4.3.1 Proportional Chance Criterion

To assess the classification accuracy of the estimated models in this study, the Proportional Chance Criterion was utilised to assess whether the overall classifications from the models were better than those expected by chance; that is, whether a model jointly classified target and non-target firms better than a chance assignment would. Although the criterion does not indicate the source of the classification accuracy of a model (that is, whether the model accurately predicts targets or non-targets), it does allow for comparison with alternative models. A simple Z-score calculation formed the basis of a joint test of the null hypothesis that the model was unable to jointly classify targets and non-targets better than chance. Under a chance selection, we would expect the proportions of targets and non-targets jointly classified correctly to equal their frequencies in the population under consideration. The null and alternative hypotheses are:

H0: The model is unable to classify targets and non-targets jointly better than chance.
H1: The model is able to classify targets and non-targets jointly better than chance.

If the statistic is significant, we reject the null hypothesis and conclude that the model can classify target and non-target firms jointly better than chance.

4.3.2 Maximum Chance Criterion

While the Proportional Chance Criterion indicated whether a model jointly classified target and non-target firms better than chance, it did not indicate the source of the predictive ability. However, under the Maximum Chance Criterion, a similar test of hypotheses does indicate whether a model has probability greater than chance in classifying either a target or a non-target firm. A Z-score statistic tests the null hypothesis that a model is unable to classify targets better than chance. It is based on the Concentration Ratio defined by Powell (2001), which measures the maximum potential chance of correct classification of a target, or the proportion of correctly classified targets among those firms predicted to be targets.

H0: The model is unable to classify targets better than chance.
H1: The model is able to classify targets better than chance.

In order to assess the classification accuracy of the models in the Estimation and Prediction Samples, these two criteria were used. The focus of this study was on the use of the Maximum Chance Criterion for targets, as it assessed whether the number of correctly predicted targets exceeded the number expected under a chance selection from the portfolio of predicted targets.5 The Concentration Ratio was the ratio advocated by Barnes (1999) for maximising returns.

5 That is, the ratio A11/TP1 in Table 2.
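The paper's Z-score formulas did not survive in this copy, so the sketch below uses standard one-proportion forms that are consistent with the surrounding description (joint accuracy against the proportional chance benchmark; Concentration Ratio A11/TP1 against the population target rate). Treat the exact formulas as an assumption rather than the authors' own.

```python
import numpy as np

def chance_criteria(a00: int, a01: int, a10: int, a11: int):
    """Z-statistics for the two chance criteria, computed from the
    outcome matrix of Table 2 (rows: actual, columns: predicted)."""
    t = a00 + a01 + a10 + a11
    tp0, tp1 = a00 + a10, a01 + a11        # predicted totals
    ta0, ta1 = a00 + a01, a10 + a11        # actual totals
    # Proportional Chance Criterion: observed joint accuracy vs. chance,
    # assuming the chance benchmark (tp1*ta1 + tp0*ta0) / t^2.
    p_chance = (tp1 * ta1 + tp0 * ta0) / t**2
    p_obs = (a00 + a11) / t
    z_joint = (p_obs - p_chance) / np.sqrt(p_chance * (1 - p_chance) / t)
    # Maximum Chance Criterion: Concentration Ratio vs. population rate.
    p_pop = ta1 / t
    z_targets = (a11 / tp1 - p_pop) / np.sqrt(p_pop * (1 - p_pop) / tp1)
    return z_joint, z_targets
```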
4.3.3 Industry Relative Ratios

Platt and Platt (1990) advocated the use of industry relative variables to increase the predictive accuracy of bankruptcy prediction models, on the pretext that these variables enable more accurate predictions across industries and through time. This argument was based on two main contentions. The first is that average financial ratios are inconsistent across industries, reflecting the relative efficiencies of production commonly employed in those industries. The second is that average financial ratios are inconsistent throughout time, as a result of variable industry performance due to economic conditions and other factors. Platt and Platt (1990) argued that firms from different industries or different time periods could not be analysed without some form of industry adjustment. In this study both raw and industry adjusted financial ratios were used to determine the benefits of industry adjustment.

There are four different model specifications. One was based on raw financial ratios for the single year prior to the sample period (the Single Raw Model). Another was based on averaged raw financial ratios for the two years prior to the acquisition period (the Combined Raw Model). A third specification was based on industry adjusted financial ratios for the single year prior to the sample period (the Single Adjusted Model), while the fourth was based on averaged industry adjusted financial ratios for the two years prior to the sample period (the Combined Adjusted Model). The purpose of using averages was to reduce random fluctuations in the financial ratios of the firms under analysis, and to capture permanent rather than transitory values. This approach was proposed by Walter (1994).

Most researchers used industry relative ratios calculated by scaling firms' financial ratios using the industry average, as in equation [4] below, where \(R_{ij}\) is financial ratio \(j\) for firm \(i\) and \(\bar{R}_j\) is its industry average:

\[ R^{rel}_{ij} = \frac{R_{ij}}{\bar{R}_j} \qquad [4] \]

Under this procedure all ratios were standardised to unity. Industry relative ratios such as ROA or ROE that were greater than unity indicated industry over-performance, while those less than unity were consistent with under-performance. Problems were encountered when the industry average value was negative. In this case, those firms that under-performed the industry average also had industry relative ratios greater than one, the result of a large negative number being divided by a smaller negative number. Additionally, those firms that over-performed the negative industry average ratio, but still retained a negative financial ratio, had a ratio less than one. This ambiguity in the calculation of industry relative ratios had implications for those models in this study that included variables with negative industry averages for some ratios. This problem may explain the inability of researchers in the recent literature to accurately predict target and non-target firms using industry adjustments, and may have caused the Barnes (1999) model to predict no takeover targets at all.

An alternative methodology was implemented to account for negative industry averages. Equation [5] below uses the difference between the individual firm's ratio and the industry average ratio, divided by the absolute value of the industry average ratio:

\[ R^{rel}_{ij} = \frac{R_{ij} - \bar{R}_j}{\left|\bar{R}_j\right|} \qquad [5] \]

As a result, all ratios are standardised to zero rather than one, and the problems relating to the sign of the industry relative ratio are corrected. Under-performance of the industry results in an industry relative ratio less than zero, with over-performance returning a ratio greater than zero. This approach is similar to the variable scaling methods widely documented in the Neural Network prediction literature. It was used for the two models based on industry relative variables, with industry adjustment based on the 24-industry classification from the old ASX.

4.4 Calculation of Optimal Cut-off Probabilities for Classification

In the case of a logit model, the predictive output for an input sample of the explanatory variables is a probability with a value between 0 and 1. This is the predicted probability of an acquisition offer being made for a specific firm within the prediction period. What is needed is a method to convert these predicted probabilities into a binary prediction of becoming a target or not. These methods are known as optimal cut-off probability calculations, and two main methodologies were implemented in this study.

4.4.1 Minimisation of Error Probabilities (Palepu, 1986)

In order to understand the calculation of the optimal cut-off probability, an understanding of Type I and Type II errors is needed. A Type I error occurs when a firm is predicted to become a takeover target when it does not (outcome A01 in Table 2 below), while a Type II error occurs when a firm is predicted not to become a target but actually becomes one (outcome A10). Palepu (1986) assumed that the costs of these two types of errors were identical. To calculate the optimal cut-off probability, he used histograms to plot the predicted probabilities of acquisition offers for targets and non-targets separately on the same graph. The optimal cut-off probability, which minimises the total error rate, occurs at the intersection of the two conditional distributions. Firms with predicted probabilities of acquisition offers above this cut-off were classified as targets, and those with probabilities below the cut-off were classified as non-targets.

Table 2 An outcome matrix for a standard classification problem

                     Predicted Outcome
Actual Outcome       Non-Target (0)    Target (1)    Total
Non-Target (0)       A00               A01           TA0
Target (1)           A10               A11           TA1
Total                TP0               TP1           T
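The paper finds the Palepu cut-off graphically. The sketch below automates one reading of that procedure, under the assumption that the "intersection" is located by scanning binned densities of the two groups from low to high probability and taking the first bin where the target density overtakes the non-target density.

```python
import numpy as np

def palepu_cutoff(p_targets, p_non_targets, bin_width: float = 0.05) -> float:
    """Cut-off at the intersection of the two conditional histograms of
    predicted acquisition probabilities (Section 4.4.1)."""
    bins = np.arange(0.0, 1.0 + bin_width, bin_width)
    dens_t, _ = np.histogram(p_targets, bins=bins, density=True)
    dens_n, _ = np.histogram(p_non_targets, bins=bins, density=True)
    crossing = np.argmax(dens_t > dens_n)  # first bin where targets dominate
    return float(bins[crossing])           # left edge (0.0 if no crossing found)
```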
4.4.2 Minimisation of Error Costs (Barnes, 1999)

Palepu (1986) assumed equal costs for Type I and Type II errors. However, it has been suggested that, because investment is less likely in predicted non-targets, the cost of investing in the equity of a firm which did not become a takeover target (a Type I error) is greater than the cost of not investing in the equity of a firm that became a takeover target (a Type II error). Accordingly, Barnes (1999) proposed minimisation of the Type I error in order to maximise returns from an investment in predicted targets. From Table 2, it can be seen that minimisation of the Type I error is equivalent to minimisation of the number of incorrectly predicted targets, A01, or, alternatively, maximisation of the number of correctly predicted targets, A11. It follows that a cut-off probability is needed which maximises the proportion of predicted targets in a portfolio that became actual targets; that is, which maximises the ratio of A11 to TP1 in Table 2.

Figure 2 below is an idealised representation of the Type I and Type II errors associated with the Palepu and Barnes cut-off probability methodologies. As the purpose of this paper was to replicate the problem faced by a practitioner, unawareness of the actual outcomes of the prediction process was assumed. Further, the probabilities that companies will become targets were derived from a prediction model estimated using estimation data on known targets and non-targets. The companies for which these probabilities were calculated comprised the Prediction Sample (recall Table 1).

For the calculation of the optimal cut-off probability according to Palepu, a histogram of predicted acquisition offer probabilities for targets and non-targets was created from the Estimation Sample, following the error minimisation procedure detailed above in Section 4.4.1. To calculate the optimal cut-off under the Barnes methodology outlined in Section 4.4.2, the ratio A11/TP1 was calculated for all cut-off probabilities between 0 and 1, using a simple grid search from 0 to 1 in increments of 0.05, in order to determine the maximum point. The classification and prediction accuracies under these two methods of calculating cut-off probabilities were compared for all four models considered in this study.

Figure 2: Idealised Palepu and Barnes cut-off probabilities (relative frequencies of non-targets and targets plotted against estimated acquisition offer probability, with the implied Type I and Type II error regions).
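The Barnes grid search is equally compact; a sketch, assuming arrays of fitted probabilities and actual outcomes, follows.

```python
import numpy as np

def barnes_cutoff(probs, is_target, step: float = 0.05):
    """Grid search for the cut-off maximising the Concentration Ratio
    A11/TP1 over cut-offs 0, 0.05, ..., 0.95 (Section 4.4.2)."""
    probs = np.asarray(probs)
    is_target = np.asarray(is_target, dtype=bool)
    best = (0.0, -1.0)
    for c in np.arange(0.0, 1.0, step):
        predicted = probs > c
        tp1 = predicted.sum()
        if tp1 == 0:
            continue                      # no predicted targets at this cut-off
        ratio = (predicted & is_target).sum() / tp1
        if ratio > best[1]:
            best = (float(c), float(ratio))
    return best                           # (cut-off, concentration ratio)
```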
5. Results

5.1 Multicollinearity Issues

An examination of the correlation matrix and Variance Inflation Factors (VIFs) of the Estimation Sample indicated that five variables needed to be eliminated. They are listed in Table 3. That these variables should contribute to the multicollinearity problem was not a surprise, considering the presence of a large number of potential explanatory variables measuring similar attributes suggested by the hypothesised motivations for takeover. These variables had correlation coefficients that exceeded 0.8, or VIFs that exceeded 10. Exclusion of these five variables eliminated significant correlations in the variance/covariance matrix, and reduced the VIF values of all the remaining variables to below 10. The resulting reduced variable set was used in the backward stepwise logit models estimated and reported in the following sub-section.

Table 3 Variables removed due to multicollinearity

ROE (NPAT/Shareholders Equity – Outside Equity Interests)
FCF/Total Assets
Current Ratio (Current Assets/Current Liabilities)
(Current Assets – Current Liabilities)/Total Assets
Total Liabilities/Total Assets

5.2 Backward Stepwise Regression Results

Using the remaining variables after controlling for multicollinearity, backward stepwise logistic regressions were performed for each of the four model specifications. Consistent with the methodology of Walter (1994), the significance level for retention of variables in the analysis was set at 0.15. The results for these models, which were estimated using a common set of target and non-target firms, are presented in Tables 4 to 7, with the results for the combined adjusted model in Table 7 described in more detail in the following sub-section.6 The backward stepwise analysis for this model required seven steps, eliminating six of the fifteen starting variables, while retaining nine significant variables.

6 Detailed results for each of the models represented in Tables 4 to 7 are available from the authors on request.

Table 4 Backward Stepwise Results for Single Raw Model [only the intercept estimate, -13.14, is legible in this copy]

Table 5 Backward Stepwise Results for Single Adjusted Model [partially legible]

Variable                                      Parameter Estimate    Prob > Chi Sq
Intercept                                     -0.58                 (0.02)
Asset Turnover (Net Sales/Total Assets)       -0.59                 (0.03)
Capital Expenditure/Total Assets               0.34                 (0.07)
Dividend Payout Ratio                         -0.22                 (0.07)
Long Term Debt/Total Assets                   -0.21                 (0.11)
Ln (Total Assets)                             12.07                 [p-value illegible]

Table 6 Backward Stepwise Results for Combined Raw Model [only the intercept estimate, -12.36, is legible in this copy]

Table 7 Backward Stepwise Results for Combined Adjusted Model

Variable                                                         Parameter Estimate    Prob > Chi Sq
Intercept                                                        -0.04                 (0.92)
ROA (EBIT/Total Assets – Outside Equity Interests)                0.28                 (0.09)
Asset Turnover (Net Sales/Total Assets)                          -0.54                 (0.05)
Capital Expenditure/Total Assets                                  0.69                 (<0.01)
Quick Assets (Current Assets – Inventory)/Current Liabilities     0.93                 (0.02)
Dividend Payout Ratio                                            -0.34                 (0.02)
Long Term Debt/Total Assets                                      -0.32                 (0.07)
Merger Wave Dummy                                                -0.59                 (0.06)
Ln (Total Assets)                                                13.34                 (<0.01)
Net Assets                                                       -0.21                 (0.07)

These results provided evidence concerning six of the eight hypothesised motivations for takeover discussed previously in Section 3. The growth resource mismatch hypothesis was only significant in the two adjusted models. This suggested that growth should be measured relative to an industry benchmark when attempting to discriminate between target and non-target firms.
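A sketch of backward stepwise elimination with the 0.15 retention level follows. statsmodels has no built-in stepwise routine, so this loop (with assumed frame and column names) is one plausible rendering rather than the authors' own procedure.

```python
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(df: pd.DataFrame, y: str, candidates: list[str],
                      retention: float = 0.15):
    """Refit the logit, dropping the least significant variable each
    pass, until every remaining p-value is below the retention level."""
    retained = list(candidates)
    while retained:
        fit = sm.Logit(df[y], sm.add_constant(df[retained])).fit(disp=False)
        pvalues = fit.pvalues.drop("const")
        if pvalues.max() < retention:
            return fit, retained
        retained.remove(pvalues.idxmax())  # eliminate the worst variable
    raise ValueError("no variables retained at this significance level")
```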
5.3 Classification Analysis

While the analysis of the final models was of theoretical interest, the primary aim of this paper was to evaluate their classification accuracy. For the purposes of classification, the models were re-estimated using the Estimation Sample with all variables included. The complex relationships between all the variables were assumed to provide the ability to discriminate between target and non-target firms. Using financial data from 2001 and 2002, the models were estimated on the basis of 62 targets matched with 62 non-targets, where the targets were identified between January 2003 and December 2004. Following estimation of the model, an in-sample fit was sought for the entire sample of the 1060 firms reporting 2001 and 2002 financial data. To proceed with classification, we derived a cut-off probability using the methods of Palepu (1986) and Barnes (1999).

The graph presented in Figure 3 focuses on the combined adjusted model and the Palepu cut-off point. Using a bin range of 0.05, it shows the histograms required for the calculation of the cut-off probability; under the Palepu methodology the cut-off was approximately 0.675, the probability corresponding to the highest point of intersection of the plots of the estimated acquisition probabilities for target and non-target companies.

Figure 3: Cut-off calculations using the Palepu methodology and 0.05 histogram bin increments.

Table 8 Summary of optimal cut-off probabilities for all models under both methodologies

                           Palepu    Barnes
Single Raw Model           0.725     0.85
Single Adjusted Model      0.725     0.90
Combined Raw Model         0.850     0.95
Combined Adjusted Model    0.675     0.95

The optimal cut-off probabilities derived using both the Barnes and Palepu methodologies for all four models are reported in Table 8. The optimal cut-off probabilities calculated using the Barnes methodology were significantly larger than the cut-offs calculated under the Palepu methodology for all models.7

Table 9 below shows the outcome of the application of all four models to the entire Estimation Sample, based on cut-offs derived under the Barnes approach. Included in this table are the outcome matrices for each of the models. An outcome of 0 indicates that the firm was not a target, or was not predicted to be a target, in the sample period. A value of 1 indicates that a firm was, or was predicted to become, a target in the sample period. On the basis of these outcome matrices, a number of performance measures were generated. The first measure was the Concentration Ratio, a predictive accuracy measure of the model corresponding to the Maximum Chance Criterion. It is the proportion of actual targets within the portfolio of predicted target firms for each of the models, represented by the ratio A11/TP1 from the outcome matrix depicted previously in Table 2. The next measure indicated the expected accuracy under a chance selection of takeover targets within the sample period (TA1/T). The last measure, which quantified the Proportional Chance Criterion, is the accuracy of the model relative to chance, calculated by dividing the first ratio by the second and then subtracting unity. All three measures were expressed as percentages.

An examination of the statistics corresponding to these measures for all four models in Table 9 indicated that, for the estimation sample with a Barnes cut-off, the combined raw model was the most accurate. Of the 80 firms that this model predicted would become takeover targets in the estimation period, 19 actually became targets. This represented a prediction accuracy of 23.75%. Taken relative to chance, this accuracy exceeded the benchmark by 305%.

For the purpose of comparison, the classification results for the cut-off probabilities calculated using the Palepu method are presented in Table 10. The Palepu approach realised results similar to those obtained when the Barnes methodology was used to determine the cut-off values for classification. The combined raw model was again the most accurate model for prediction, with a predictive accuracy of 19.59% and a relative-to-chance figure of 234.3%. However, as was the case for all four models, the use of this cut-off probability approach significantly reduced the Concentration Ratio and, therefore, the classification accuracy of the models under the Maximum Chance Criterion.

7 As is noted in the following tables, this explains the smaller number of predicted targets under this methodology.
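The three reported measures follow directly from the outcome matrix; a sketch, checked against the Combined Raw Model row of Table 9, is below.

```python
def performance_measures(a11: int, tp1: int, ta1: int, t: int):
    """Concentration Ratio, chance accuracy, and accuracy relative to
    chance (all as percentages), from the Table 2 outcome-matrix cells."""
    concentration = a11 / tp1   # actual targets among predicted targets
    chance = ta1 / t            # chance accuracy: population target rate
    relative = concentration / chance - 1
    return 100 * concentration, 100 * chance, 100 * relative

# Combined Raw Model, Table 9: 19 of 80 predicted targets were among the
# 62 actual targets in 1058 firms -> approximately (23.75, 5.86, 305.29).
print(performance_measures(19, 80, 62, 1058))
```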
Table 9 Outcome matrices for all models for classification of the Estimation Sample (Barnes cut-off probabilities)

(Rows: actual outcome 0/1. Columns: predicted 0, predicted 1, total, predictive accuracy, chance accuracy, relative to chance.)

Single Raw Model (cut-off probability = 0.85) ††
0        874    124     998    97.00%    94.15%    3.03%**
1         27     35      62    22.01%     5.85%    276.24%**
Total    901    159    1060

Single Adjusted Model (cut-off probability = 0.90) ††
0        906     88     994    96.18%    94.13%    2.18%**
1         36     26      62    22.81%     5.87%    288.59%**
Total    942    114    1056

Combined Raw Model (cut-off probability = 0.95) ††
0        935     61     996    95.60%    94.14%    1.55%**
1         43     19      62    23.75%     5.86%    305.29%**
Total    978     80    1058

Combined Adjusted Model (cut-off probability = 0.95) ††
0        938     56     994    95.33%    94.13%    1.27%*
1         46     16      62    22.22%     5.87%    278.54%**
Total    984     72    1056

†† Indicates that the overall predictions of the model are significantly better than chance at the 1% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.
* Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 5% level of significance according to the Maximum Chance Criterion.

Table 10 Outcome matrices for all models for classification of the Estimation Sample (Palepu cut-off probabilities)

(Rows and columns as in Table 9.)

Single Raw Model (cut-off probability = 0.725) ††
0        812    186     998    97.83%    94.15%    3.91%**
1         18     44      62    19.13%     5.85%    227.01%**
Total    830    230    1060

Single Adjusted Model (cut-off probability = 0.725) ††
0        787    207     994    97.52%    94.13%    3.60%**
1         20     42      62    16.87%     5.87%    187.39%**
Total    807    249    1056

Combined Raw Model (cut-off probability = 0.85) ††
0        840    156     996    97.22%    94.14%    3.27%**
1         24     38      62    19.59%     5.86%    234.30%**
Total    864    194    1058

Combined Adjusted Model (cut-off probability = 0.675) ††
0        749    245     994    97.53%    94.13%    3.61%**
1         19     43      62    14.93%     5.87%    154.34%**
Total    768    288    1056

†† Indicates that the overall predictions of the model are significantly better than chance at the 1% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.

Interestingly, while the Palepu methodology did improve the number of targets accurately predicted (A11), in doing so it also predicted a large number of non-target firms to become targets (A01). The Barnes methodology focused on the maximisation of returns from an investment in predicted targets. Rather than aiming to predict a large number of targets accurately, it focused on improving the proportion of actual targets in the portfolio of predicted targets. Accordingly, a smaller number of targets is predicted under the Barnes methodology.
As previously noted in Section 4.3.2, the Barnes methodology coincided more with the spirit of the Maximum Chance Criterion than with the Proportional Chance Criterion. According to the Proportional Chance Criterion, all four models were able to jointly classify targets and non-targets within the estimation period significantly better than chance. Further, as revealed by the Maximum Chance Criterion, all models also classified targets alone significantly better than chance on an individual basis. Overall, these results indicated high model classification ability. This was expected, given that all targets in the estimation sample were used in the estimation of the model parameters.

5.4 Classification in the Prediction Period

The next step of the analysis was to assess the predictive abilities of our models using the Prediction Sample. Of the 1054 firms in this sample, 108 became targets during the prediction period. Panel A and Panel B of Table 11 report the predictions from the four estimated models using the Barnes and Palepu cut-off probability approaches respectively. Under the Barnes cut-off methodology, calculation of the Concentration Ratio indicated that the combined raw and combined adjusted models performed best of all the models, confirming the results from the estimation period. The combined adjusted model predicted 125 firms to become targets during the prediction period, of which 25 actually became targets. Prediction accuracy was 20%. Under a chance selection, we would have expected only 10.30% of those companies predicted to become targets to actually become targets. This meant that the model exceeded a chance prediction by 94.18%. While Walter (1994) was able to predict 102% better than chance, other studies, including those of Palepu (1986) and Barnes (1999), were unable to achieve this level of accuracy.

Industry adjustment increased predictive ability for both the single and combined models, suggesting that stability may be achieved through these adjustments. Furthermore, the combination of two years of financial data also appeared to improve predictive accuracy, suggesting that this adjustment eliminates random fluctuations in the financial ratios used as input to the prediction models.

Table 11 Prediction results for all four models using the Prediction Sample and both Barnes and Palepu cut-off probabilities

Panel A: Barnes cut-off probabilities
Model (cut-off probability)            Predictive Accuracy    Chance Accuracy    Relative to Chance
Single Raw Model (0.90)                15.09%                 10.25%             47.22%*
Single Adjusted Model (0.95)           15.79%                 10.27%             53.75%*
Combined Raw Model (0.85)              17.65%                 10.25%             72.29%**
Combined Adjusted Model (0.95) †       20.00%                 10.30%             94.18%**

Panel B: Palepu cut-off probabilities
Model (cut-off probability)            Predictive Accuracy    Chance Accuracy    Relative to Chance
Single Raw Model (0.725)               16.83%                 10.25%             64.20%*
Single Adjusted Model (0.725)          17.79%                 10.27%             73.22%*
Combined Raw Model (0.85)              17.51%                 10.25%             70.83%**
Combined Adjusted Model (0.675)        16.77%                 10.30%             62.82%**

† Indicates that the overall predictions of the model are significantly better than chance at the 5% level of significance according to the Proportional Chance Criterion.
The prediction results for the Palepu-derived cut-off probabilities are presented in Panel B of Table 11. Comparing Panel A with Panel B, for the single models the Barnes cut-off methodology produced a lower Concentration Ratio than the Palepu methodology, while for the combined models it produced a higher one; the Palepu approach, conversely, favoured the single models. Given the better performance of the combined models in the estimation sample, this provided the rationale for using the combined models, together with the Barnes methodology, to calculate the optimal cut-off probabilities.

A different variable selection approach was then implemented in an attempt to improve the accuracy of the two best predictive models, namely the combined raw model and the combined adjusted model. A number of variables that had been insignificant in all estimated models were removed, and the estimation and classification procedures were repeated on the reduced data set.8 The classification results from the application of these models to both the estimation and prediction periods are given in Table 12.

8 The variables removed were: growth in EBIT over the past year, the market-to-book ratio (market value of securities/net assets), and the price/earnings ratio.

Table 12
Application of the improved models to both the Estimation Sample and the Prediction Sample (Barnes cut-off probabilities)

ESTIMATION SAMPLE

Model                                                     Predictive accuracy   Chance accuracy   Relative to chance
Combined Raw Model †† (less variables 7, 9 and 10)        24.66%                5.86%             320.77%**
Combined Adjusted Model †† (less variables 7, 9 and 10)   24.56%                5.87%             318.34%**

PREDICTION SAMPLE

Model                                                     Predictive accuracy   Chance accuracy   Relative to chance
Combined Raw Model (less variables 7, 9 and 10)           17.54%                10.25%            71.22%**
Combined Adjusted Model (less variables 7, 9 and 10)      22.45%                10.30%            118.05%**

†† Indicates that the overall predictions of the model are significantly better than chance at the 1% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.
* Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 5% level of significance according to the Maximum Chance Criterion.

The elimination of these variables resulted in significant improvements in the in-sample classification accuracy using the estimation sample, with accuracies exceeding chance by well over 300%. This improvement in classification accuracy was maintained into the prediction period, where the accuracy of the combined adjusted model was 118% greater than chance. This represented a level of statistical accuracy above that reported by any similar published study in the area of takeover prediction. These results can be used to refute the claims of Barnes (1999) and Palepu (1986) that models achieving predictive accuracies greater than chance cannot be implemented. They further confirm the results of Walter (1994) while using a wider sample of firms. The combined adjusted model significantly outperformed the other models for predictive purposes, suggesting that it is the most appropriate model for the application of logit analysis to the prediction of takeover targets in the Australian context.
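The variable-elimination step behind Table 12 amounts to refitting the logit after dropping the regressors that were insignificant in every estimated model (variables 7, 9 and 10: EBIT growth, the market-to-book ratio, and the P/E ratio). A minimal sketch using statsmodels; the data frame and column names are placeholders rather than the paper's data.

```python
import pandas as pd
import statsmodels.api as sm

def refit_reduced(X, y, drop_cols):
    """Re-estimate the logit after removing persistently insignificant
    regressors, then classify with the chosen cut-off as before."""
    X_reduced = sm.add_constant(X.drop(columns=list(drop_cols)))
    return sm.Logit(y, X_reduced).fit(disp=0)

# Hypothetical column names for the three dropped ratios:
# result = refit_reduced(X, y, ["ebit_growth", "market_to_book", "pe_ratio"])
# result.predict(...) then feeds the Barnes cut-off classification step.
```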
5.5 Economic Outcomes

Although the above methodology provided a statistical assessment of model performance, it said nothing about the economic usefulness of the models. Palepu (1986), Walter (1994), and Wansley et al. (1983) all implemented an equally weighted portfolio technique to assess whether their predicted takeover targets were able to earn abnormal risk-adjusted returns. The conclusion we drew from these studies was that a positive abnormal return is not guaranteed from an investment in the targets predicted by such models. The portfolios of predicted targets in two of these studies were also unrealistically large: 91 stocks in the case of Walter, and 625 in the case of Palepu. Given the effect of transaction costs on returns, practitioners would be likely to limit themselves to smaller portfolios in the order of 10 to 15 stocks.

To assess the economic usefulness of our modelling approach, we replicated a modified version of the Palepu (1986) and Walter (1994) portfolio technique using our predicted targets. Only targets commonly predicted across all models were included in the portfolio analysis, for two reasons: first, to reduce the number of stocks to a manageable level, and second, to improve the proportion of actual targets in the portfolio. Further, we rejected the equally weighted portfolio approach on the grounds that it is an inefficient strategy for an informed investor in possession of our modelling results; such an investor could most likely take a leveraged position through derivatives.

The portfolio analysed in this study comprised 13 predicted target firms, of which 5 actually became targets. While this is a good result in itself, we sought to quantify the economic benefit of an investment in these stocks. The portfolio of predicted targets was held for the entire prediction period of 2005 and 2006, which constituted 503 trading days. Table 13 below presents the Cumulative Average Abnormal Return (CAAR, %) at 20-day intervals during the prediction period, for both the full portfolio and the sub-portfolio of actual targets.

Table 13
Cumulative Average Abnormal Returns (CAARs) for the portfolio of commonly predicted takeover targets over the Prediction Period of 2005 and 2006

Day    Portfolio (13 stocks)   Actual targets (5 stocks)
       CAAR (%)                CAAR (%)
20      1.38                    5.36
40      2.84                   10.50
60     -1.98                    5.58
80     -2.53                    6.11
100    -5.52                   -1.15
120     4.40                   25.16
140     3.06                   17.83
160     4.38                   20.70
180     5.51                   24.79
200     9.90                   34.82
220     7.51                   34.87
240     6.40                   29.31
260     5.04                   27.71
280     4.77                   29.64
300     4.67                   32.47
320     3.08                   33.53
340     0.73                   31.96
360     2.89                   26.62
380     5.28                   33.72
400     6.99                   32.02
420     9.78                   37.43
440    11.33                   40.22
460    57.44                   46.00
480    58.38                   47.27
500    68.90                   52.12
503    68.67*                  50.86^

The full prediction period CAAR of 68.67% was significantly greater than zero at the 1% level of significance under the Standard Abnormal Return (SAR) methodology of Brown and Warner (1985).
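The CAAR series in Table 13 follows standard event-study mechanics in the spirit of Brown and Warner (1985): compute each stock's daily abnormal return against a benchmark, average across the portfolio each day, and cumulate through time. A minimal sketch using a simple market-adjusted benchmark, which is an illustrative simplification of the paper's risk adjustment; the array names are placeholders.

```python
import numpy as np

def caar(returns, market, interval=20):
    """returns: (days, stocks) array of daily stock returns;
    market: (days,) array of daily benchmark returns.
    Returns the CAAR sampled every `interval` trading days."""
    abnormal = returns - market[:, None]      # market-adjusted abnormal returns
    aar = abnormal.mean(axis=1)               # average abnormal return each day
    caar_path = np.cumsum(aar)                # cumulate through time
    return caar_path[interval - 1::interval]  # e.g. days 20, 40, ..., 500

# For the 13-stock portfolio over the 503 trading days of 2005-06,
# caar(portfolio_returns, market_returns) yields a series comparable to
# the portfolio column of Table 13.
```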
We recognised that these results could potentially have been driven by the actual non-target firms within the portfolio of predicted targets; that is, the abnormal return could have been the result of a chance selection of over-performing non-target firms rather than of an accurate selection of target firms. To address this question, the same CAAR calculation was applied to the sub-portfolio of firms that actually became targets. The full period CAAR of 50.86% for this sub-portfolio was also significantly greater than zero at the 1% level, supporting the proposition that the CAAR of the portfolio was driven by the performance of the actual targets within it.

Table 13 also shows that the CAAR for the portfolio increased sharply between days 440 and 460. This result was driven by the extremely positive returns on the stock ATM, a non-target firm that the models had predicted to become a target. After repeating the portfolio analysis with this stock excluded from the portfolio of predicted targets, a significant positive abnormal return of 25%9 was still realised over the entire prediction period.

9 t = 9.63

Another observation from Table 13 is that the CAAR was neither positive nor significant for either portfolio early in the prediction period. As the second column of Table 13 shows, the CAAR after 100 days was negative, and after 340 days it was still indistinguishable from zero. The real gains to the portfolio were made as mergers and acquisitions were announced and completed in the latter stages of 2006, highlighting the fact that the portfolio had to be held for the entire prediction period in order to realise the potentially available returns.
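The two robustness checks above, the targets-only sub-portfolio and the re-run with ATM excluded, reuse the same calculation on a subset of the portfolio columns. A hypothetical usage of the caar() sketch given earlier; the column indices are placeholders, as the paper does not identify the individual stocks.

```python
# Assumes the caar() helper and the (503, 13) portfolio_returns array
# from the earlier sketch; all column indices below are hypothetical.
target_cols = [0, 3, 5, 8, 11]                 # the 5 actual targets
caar(portfolio_returns[:, target_cols], market_returns)

atm_col = 6                                    # the over-performing non-target
keep = [j for j in range(13) if j != atm_col]
caar(portfolio_returns[:, keep], market_returns)
```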
6. Conclusion

The main finding of this paper was that the combined adjusted model, which was based on averaged, industry-adjusted financial ratios across the sample period, emerged as a clear standout with regard to predictive accuracy. Further, the implementation of industry-adjusted data, as described in Section 4.3.3, significantly improved the classification accuracy of all but one of the models analysed in both the estimation and prediction periods. Additionally, this paper provided evidence that the Barnes methodology for calculating the optimal cut-off point significantly improved classification accuracy and enabled the successful use of logit models to predict takeover targets within the Australian context. The accuracy of the single best model in this paper exceeded a chance selection by 118%, the highest accuracy reported for a logit model in this area.

Another important finding of this paper resulted from the examination of a portfolio of predicted targets. We demonstrated that an investment in the targets commonly predicted across the logit models resulted in significant Cumulative Average Abnormal Returns (CAARs) for an investor. Several steps were undertaken to ensure that this result was robust to the returns on predicted non-target stocks, suggesting that the abnormal returns rest on the accuracy of the predictions common to the logit models analysed in this study rather than on any chance selection. We believe our results provide evidence in favour of the proposition that an abnormal return can be made from an investment in the takeover targets commonly predicted by the four logit-based models analysed in this paper.

A wealth of evidence suggests that combining forecasts from different models improves forecasting ability. This is an obvious direction for future research, and may be pursued either through a combination of logit and MDA, or through the inclusion of a neural network approach to predict targets.

References

Allison, Paul D, 2006, Logistic Regression Using the SAS System. Cary, NC: SAS Institute.

Barnes, Paul, 1990, The Prediction of Takeover Targets in the UK by means of Multiple Discriminant Analysis, Journal of Business Finance and Accounting 17, 73-84.

Barnes, Paul, 1999, Predicting UK Takeover Targets: Some Methodological Issues and an Empirical Study, Review of Quantitative Finance and Accounting 12, 283-301.

Belkaoui, Ahmed, 1978, Financial Ratios as Predictors of Canadian Takeovers, Journal of Business Finance and Accounting 5, 93-108.

Brown, Stephen J, and Jerold B Warner, 1985, Using Daily Stock Returns, Journal of Financial Economics 14, 3-31.

Dietrich, Kimball J, and Eric Sorensen, 1984, An Application of Logit Analysis to Prediction of Merger Targets, Journal of Business Research 12, 393-402.

Fama, Eugene F, 1980, Agency Problems and the Theory of the Firm, Journal of Political Economy 88, 288-307.

Fogelberg, G, CR Laurent, and D McCorkindale, 1975, The Usefulness of Published Financial Data for Predicting Takeover Vulnerability, University of Western Ontario, School of Business Administration (Working Paper 150).

Gort, Michael C, 1969, An Economic Disturbance Theory of Mergers, Quarterly Journal of Economics 83, 624-642.

Harris, Robert S, John F Stewart, David K Guilkey, and Willard T Carleton, 1984, Characteristics of Acquired Firms: Fixed and Random Coefficient Probit Analyses, Southern Economic Journal 49, 164-184.

Jennings, DE, 1986, Judging Inference Adequacy in Logistic Regression, Journal of the American Statistical Association 81, 471-476.

Jensen, Michael C, 1986, Agency Costs of Free Cash Flow, Corporate Finance, and Takeovers, American Economic Review 76, 323-329.

Jensen, Michael C, and William H Meckling, 1976, Theory of the Firm: Managerial Behaviour, Agency Costs, and Ownership Structure, Journal of Financial Economics 3, 305-360.

Jensen, Michael C, and Richard S Ruback, 1983, The Market for Corporate Control: The Scientific Evidence, Journal of Financial Economics 11, 5-50.

Lewellen, Wilbur G, 1971, A Pure Financial Rationale for the Conglomerate Merger, Journal of Finance 26, 521-537.

Manne, Henry G, 1965, Mergers and the Market for Corporate Control, Journal of Political Economy 73, 110-120.

Miller, Merton H, and Franco Modigliani, 1961, Dividend Policy, Growth, and the Valuation of Shares, Journal of Business 34, 411-433.

Mitchell, Mark L, and J Harold Mulherin, 1996, The Impact of Industry Shocks on Takeover and Restructuring Activity, Journal of Financial Economics 41, 193-229.

Myers, Stewart C, and Nicholas S Majluf, 1984, Corporate Financing and Investment Decisions when Firms have Information that Investors do not have, Journal of Financial Economics 13, 187-221.

Ohlson, J, 1980, Financial Ratios and the Probabilistic Prediction of Bankruptcy, Journal of Accounting Research 18, 109-131.

Palepu, Krishna G, 1986, Predicting Takeover Targets: A Methodological and Empirical Analysis, Journal of Accounting and Economics 8, 3-35.

Platt, Harlan D, and Marjorie D Platt, 1990, Development of a Class of Stable Predictive Variables: The Case of Bankruptcy Prediction, Journal of Business Finance and Accounting 17, 31-51.

Powell, Ronan G, 2001, Takeover Prediction and Portfolio Performance: A Note, Journal of Business Finance and Accounting 28, 993-1011.
Rege, Udayan P, 1984, Accounting Ratios to Locate Takeover Targets, Journal of Business Finance and Accounting 11, 301-311.

Simkowitz, Michael A, and Robert J Monroe, 1971, A Discriminant Function for Conglomerate Targets, Southern Journal of Business 38, 1-16.

Singh, A, 1971, Takeovers: Their Relevance to the Stock Market and the Theory of the Firm. Cambridge: Cambridge University Press.

Smith, Richard L, and Joo-Hyun Kim, 1994, The Combined Effect of Free Cash Flow and Financial Slack on Bidder and Target Stock Returns, Journal of Business 67, 281-310.

Stevens, David L, 1973, Financial Characteristics of Merged Firms: A Multivariate Analysis, Journal of Financial and Quantitative Analysis 8, 149-158.

Walter, Richard M, 1994, The Usefulness of Current Cost Information for Identifying Takeover Targets and Earning Above-Average Stock Returns, Journal of Accounting, Auditing, and Finance 9, 349-377.

Zanakis, SH, and C Zopounidis, 1997, Prediction of Greek Company Takeovers via Multivariate Analysis of Financial Ratios, Journal of the Operational Research Society 48, 678-687.
