You searched for: expectancy theory (Indonesian - English)

Human contributions

From professional translators, enterprises, web pages and freely available translation repositories.

Indonesian

English

Info
Indonesian

Hybrid Theory EP

English

Hybrid Theory

Last Update: 2015-06-14
Usage Frequency: 8
Quality:

Reference: Wikipedia

Indonesian

Special Dividends and the Evolution of Dividend Signaling 1. Introduction Dividend signaling plays a prominent role in corporate finance theory, with numerous studies outlining scenarios in which managers use cash dividends to convey information about firm profitability (see, e.g., Bhattacharya (1979), Miller and Rock (1985), John and Williams (1985), and more recent papers cited in Allen and Michaely’s (1995) survey of the dividend literature). However, few empirical studies indicate that signaling is pervasively important, although some research suggests it might be important in limited circumstances (see, e.g., DeAngelo, DeAngelo, and Skinner (1996), Benartzi, Michaely, and Thaler (1997), and many earlier studies cataloged by Allen and Michaely). In their comprehensive survey, Allen and Michaely (1995, p. 825) state that “…the empirical evidence (on dividend signaling) is far from conclusive …. more research on this topic is needed.” The juxtaposition of continued strong theoretical interest in signaling models on the one hand, with limited empirical support on the other, has made the relevance of dividend signaling an important unresolved issue in corporate finance. There are firms in which dividend signaling is inarguably at work, and they are the ones studied by Brickley (1982, 1983), whose managers pay both regular dividends and occasional special dividends (extras, specials, year-ends, etc., hereafter “specials”). As Brickley indicates, the differential labeling of special and regular dividends inherently conveys a warning to stockholders that the “special” payout is not as likely to be repeated as the “regular” payout. Brickley’s evidence indicates that investors treat special dividends as hedged managerial signals about future profitability, in that unanticipated specials are associated with weaker stock market reactions than are regular dividend increases of comparable size. 
One contribution of the current paper is to provide evidence that the historically prevalent practice of paying special dividends has largely failed the survival test, casting further doubt on the overall importance of signaling motivations in explaining dividend policy in general. We document that special dividends were once commonly paid by NYSE firms but have gradually disappeared over the last 40 to 45 years and are now a rare phenomenon. During the 1940s, 61.7% of dividend-paying NYSE firms paid at least one special, while only 4.9% did so during the first half of the 1990s. In the single year 1950, 45.8% of dividend-paying NYSE firms paid specials, while just 1.4% of such firms paid specials in 1995. In years past, special dividends constituted a substantial fraction of total cash dividends. Among NYSE firms that paid specials, these bonus disbursements average 24.3% (median, 16.8%) of the dollar value of total dividends paid over all years between the firm's first and last special. Firms that at one point frequently paid specials include such high-visibility "blue chip" corporations as General Motors, Eastman Kodak, Exxon, Mobil, Texaco, Gillette, Johnson & Johnson, Merck, Pfizer, Sears Roebuck, J.C. Penney, Union Pacific, Corning, International Harvester, McGraw Hill, and Boeing. Today, only a handful of NYSE firms continue to pay frequent special dividends, and these firms are generally not well-known companies. Why have firms largely abandoned the once pervasive practice of paying special dividends? Our evidence suggests that the evolution of special dividends reflects the principle that dividends are a useful signaling mechanism only when they send clear messages to stockholders. Surprisingly, most firms paid specials almost as predictably as they paid regulars, thereby treating the two dividend components as close substitutes and impeding their ability to convey different messages.
Over 1926-1995, more than 10,000 specials were paid by NYSE firms, and virtually all of these were declared by firms that announced specials in multiple years. Remarkably, a full 27.9% of the latter firms skipped paying specials in less than one year out of ten on average (i.e., they paid specials in over 90% of the years between their first and last special dividend). Well over half (56.8%) of the firms that paid specials in multiple years did so more frequently than every other year on average. We find that the only specials that have survived to an appreciable degree -- and that, in fact, have grown in importance -- are large specials whose sheer size automatically differentiates them from regular dividends.[1] When investors view specials and regulars as close substitutes, there is little advantage to differential labeling, and so firms should eventually drop the practice of paying two types of dividends and simply embed specials into the regular dividend. Evidence supporting this prediction comes from our Lintner (1956) model analysis of the dividend decisions of firms that eliminated specials after paying them frequently for many years. This analysis shows that, controlling for earnings, the pattern of regular dividends after the cessation of specials does not differ systematically from the earlier pattern of total (special plus regular) dividends. Other data indicate that these sample firms preserved the relation between earnings and total dividends by shifting to greater reliance on regular dividend increases.
[1] Large specials, like large repurchases, are likely to get stockholders' attention. These large payouts may or may not serve as signals in the conventional sense, however, depending on whether stockholders interpret them as information about the firm's future profitability as opposed, e.g., to information about the success of its current restructuring efforts.
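The Lintner (1956) model referred to above is conventionally estimated in the partial-adjustment form D_t = a + b*E_t + c*D_{t-1} + e_t, where the implied target payout ratio is b/(1 - c) and the speed of adjustment is 1 - c. A minimal sketch of such an estimation, using simulated earnings and dividend series with known parameters (all numbers here are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Estimating form of Lintner's partial-adjustment model:
#   D_t = a + b * E_t + c * D_{t-1} + e_t
# implied target payout ratio r = b / (1 - c); speed of adjustment s = 1 - c.

def fit_lintner(D, E):
    """OLS fit; returns (target payout ratio, speed of adjustment)."""
    y = D[1:]
    Z = np.column_stack([np.ones(len(y)), E[1:], D[:-1]])
    a, b, c = np.linalg.lstsq(Z, y, rcond=None)[0]
    return b / (1 - c), 1 - c

# synthetic earnings/dividend series with known r = 0.5, s = 0.3
rng = np.random.default_rng(0)
T, r, s = 400, 0.5, 0.3
E = 100 + np.cumsum(rng.normal(0, 1, T))   # earnings follow a random walk
D = np.empty(T)
D[0] = r * E[0]
for t in range(1, T):
    D[t] = s * r * E[t] + (1 - s) * D[t - 1] + rng.normal(0, 0.2)

r_hat, soa_hat = fit_lintner(D, E)          # estimates near (0.5, 0.3)
```

In the analysis described above, the comparison is between total (special plus regular) dividends before the cessation of specials and regular dividends afterward, with earnings as the driving variable.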
We also find that firms generally tended to increase regulars when they reduced specials to a still-positive level (and this tendency becomes more pronounced in recent years), further supporting the view that firms treat specials and regulars as reasonably close substitutes. Finally, our data show that the disappearance of specials is part of a general trend toward simple, homogeneous dividend policies in which firms converged on the now standard practice of paying exactly four regular dividends per year. Our event study analysis reveals that the stock market typically reacts favorably to the fact that a special dividend is declared (given a constant regular dividend), but the market response is not systematically related to the sign or magnitude of the change from one positive special dividend payment to another. We observe a significantly positive average stock market reaction of about 1%, both when firms increase specials and when they reduce them to a still-positive level (and leave the regular dividend unchanged). The stock market's favorable reaction to special declarations is significantly greater than the essentially zero reaction when firms omit specials. These empirical tendencies provide some incentive for managers to pay special dividends more frequently than they otherwise would, even if specials must sometimes be reduced. These findings may therefore help explain why managers typically paid specials frequently, effectively converting them into payout streams that more closely resemble regular dividends than one would think based on the nominal special labeling. We also find some empirical support for the notion that the long-term decline in special dividends is related to the shift in dividend clienteles from the mid-century era in which stock ownership was dominated by individual investors to the current era in which institutions dominate.
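Event-study reactions of the kind described above are conventionally measured as market-model abnormal returns: estimate each firm's alpha and beta over a pre-announcement window, take AR = R - (alpha + beta * Rm) on the declaration day, and average across events. A hedged sketch on simulated returns; the 1% announcement effect is built in by construction, and nothing below uses the paper's actual data:

```python
import numpy as np

def announcement_ar(firm_ret, mkt_ret, event_day, est_len=100):
    """Market-model abnormal return on the event day.

    Alpha and beta are fit by OLS over the est_len days before the event.
    """
    est = slice(event_day - est_len, event_day)
    X = np.column_stack([np.ones(est_len), mkt_ret[est]])
    alpha, beta = np.linalg.lstsq(X, firm_ret[est], rcond=None)[0]
    return firm_ret[event_day] - (alpha + beta * mkt_ret[event_day])

rng = np.random.default_rng(1)
n_events, T, day = 200, 150, 120
ars = []
for _ in range(n_events):
    rm = rng.normal(0.0005, 0.01, T)                 # simulated market returns
    r = 0.0002 + 1.0 * rm + rng.normal(0, 0.02, T)   # simulated firm returns
    r[day] += 0.01                                   # build in a 1% effect
    ars.append(announcement_ar(r, rm, day))

ars = np.array(ars)
mean_ar = ars.mean()                                 # close to 0.01
t_stat = mean_ar / (ars.std(ddof=1) / np.sqrt(n_events))
```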
One might reasonably expect this clientele shift to reduce the importance of special dividends, since institutions are presumably more sophisticated than retail investors and are therefore better able to see that most firms treated specials as close substitutes for regulars. At the aggregate level, the secular decline in specials and the increase in institutional ownership occurred roughly in parallel, with both trends proceeding gradually over many years. At the firm level, our logit regressions show a significant negative relation between the level of institutional ownership and the probability that a firm continues to pay special dividends. Finally, we find little support for the notion that special dividends were displaced by common stock repurchases. Theoretically, one might expect a close connection between the disappearance of specials and the adoption of stock repurchases. Both payout methods allow managers to signal their beliefs about company prospects through temporary bonus distributions, with no necessary commitment to repeat today's higher cash payout in future years. Moreover, repurchases are now widely prevalent (much as specials used to be) although historically they were rare events (as specials are now). However, at the aggregate level, the secular decline in specials began many years before the upsurge in repurchase activity, so that any theory which attributes the disappearance of specials to the advent of repurchases faces the difficult task of explaining the long time gap between the two phenomena. Moreover, at the firm level, the number of companies that repurchased stock after they stopped paying special dividends is significantly less than expected if firms simply substituted one form of payout for the other. Finally, repurchase tender offers and large specials have both increased in recent years with the upsurge in corporate restructurings and takeovers.
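A logit of the kind mentioned above relates a binary outcome (whether the firm continues paying specials) to institutional ownership. The sketch below fits such a model by Newton-Raphson on simulated data; the negative true slope, the variable names, and all magnitudes are assumptions for illustration, not the paper's specification or estimates:

```python
import numpy as np

def logit_fit(X, y, iters=25):
    """Maximum-likelihood logit via Newton-Raphson; X includes a constant."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        grad = X.T @ (y - p)                      # score vector
        H = (X * (p * (1 - p))[:, None]).T @ X    # information matrix
        b += np.linalg.solve(H, grad)
    return b

# simulated cross-section: inst = institutional ownership share in [0, 1];
# assumed true model P(keeps paying specials) = logistic(1.0 - 4.0 * inst)
rng = np.random.default_rng(2)
n = 2000
inst = rng.uniform(0, 1, n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(1.0 - 4.0 * inst)))).astype(float)

X = np.column_stack([np.ones(n), inst])
b0, b1 = logit_fit(X, y)   # b1 should come out significantly negative
```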
Perhaps the most important implication of the findings reported here is the challenge they pose for dividend signaling theories. Specifically, the fact that special dividends once flourished, but have largely failed to survive, is inconsistent with the view that these signals serve an economically important function. We discuss this and other implications of our findings for corporate finance research in section 7. We begin in section 2 by documenting the long-term evolution of special dividend payments. Section 3 analyzes the predictability of special dividends, the evolution of large specials, the behavior of total dividends around the time firms stopped paying specials, and firms’ general tendency to increase regulars when they reduce specials. Section 4 presents our event study analysis of the information content of special dividends. Section 5 examines the relation between institutional ownership and the payment of specials. Section 6 investigates the connection between repurchases and the decline in specials.


Last Update: 2015-10-10
Usage Frequency: 3
Quality:

Reference: Anonymous
Warning: Contains invisible HTML formatting

Indonesian

Originally Posted by Ocelot: Hi Radrook, Thanks once again for your response. Indeed I was just checking that I'd located the one article amongst many that you thought addressed my claim. As I detailed, it doesn't. Well actually some people do make that claim. I've seen much talk from various brands of creationist who claim that macroevolution (evolution of new taxa at the species level and above) is impossible. They use arguments similar to yours, so please forgive me for my presumption. I do apologise. Of course the fact that new species have been observed to evolve both in the lab and in the wild makes this claim one of the more ridiculous creationist claims, but it is nonetheless one that I have encountered. I agree that speciation does and has occurred. It's interesting to me that you have raised the bar. You accept that not all individual species need to have been created. Presumably you accept that lions, tigers and the domestic cat all have a common ancestor? Am I correct in my estimation of your beliefs? Yes. If so, do you also accept the more controversial conclusion that Homo sapiens has a common ancestor with chimps, bonobos, gorillas and orangutans? It may be more controversial for its broad implications for theology and philosophy, but perhaps because of this added interest it is a conclusion backed by even greater quantities of genetic evidence. No, that's where we diverge. Of course the genetic evidence that all placental mammals share a common ancestor more recent than the one they share with marsupials is as compelling as the evidence for a common ancestor amongst other genera, families, orders or classes. If your theory is true, it would be interesting to see if the genetic evidence could tell us what the original common ancestors were, beyond which we can find no further link. For example, let's take, for want of a better choice, a red kangaroo named Charles.
You and I both agree that Charles shares a common ancestor with all other red kangaroos; the genetic evidence backs this up. I see no reason to object. According to the genetic evidence, Charles also shares a more distant common ancestor with other species of kangaroo such as grey kangaroos and antilopine kangaroos. Ok. The genetic evidence suggests that further back in time these kangaroos shared a common ancestor with a variety of other species of kangaroo, wallaby and wallaroo of the genus Macropus. Would you agree? Sure. If so, then the genetic evidence further indicates that the genus Macropus shares a common ancestor with all other members of the macropod family, including various other kangaroos and wallabies, the quokka and pademelons. Would you agree that these are all of the same "kind", sharing a common ancestor? That might be acceptable. If so, then the genetic evidence indicates that the macropod family shares a more distant common ancestor with all members of the order Diprotodontia. This includes possums, koalas and wombats. Is it conceivable to you that the genetic evidence is correct and that these creatures all share a common ancestor with one another? Could they all be of the same "kind"? If they are of the same kind. In fact, could the whole Australidelphia superorder of marsupials share a common ancestor, as the genetic evidence would suggest, and if so, are they collectively a "kind"? Or do they, as the empirical evidence would suggest, all share a common ancestor with all other marsupials? Are marsupials a kind? I presume that you do not accept that some time in the Cretaceous there was an early mammal-like reptile or therapsid from whom both you and Charles can claim lineage. But if not, how do you explain the fact that the genetic evidence is so clear? Because I believe that the data is being interpreted to fit into a preconceived notion.
It doesn't matter where you place the bar; the genetic evidence is clear: there is only one "kind" currently on planet Earth, and we are all descended from the same single common ancestor. I too have no problem, when seriously considering a theory of intelligent design, with the idea that the designer might choose to vary their techniques. What I have a problem with is why the techniques should so closely match a picture of common descent, with particular variations being more closely clustered amongst species that appear to be more closely related. Creationists did not make this prediction. Evolutionary biologists did. The examination of the evidence continues to uphold the prediction of the evolutionary biologists. Unless creationism can explain this remarkable coincidence, it is deficient as a theory. I'm afraid your meaning here is not entirely clear to me. However, the assumption that evolution is true is rather the point. If you make that assumption you make a prediction that turns out to be true. If you don't make that assumption you need an alternative explanation for the prediction. I offer the analogy once more. If you assume that I am related to my son you will expect a roughly 50% match between the various genes in highly variable alleles. If you do not make that assumption and otherwise find the 50% match, you must find another explanation (perhaps we are brothers...). If you find more genetic matches amongst placental mammals than between placental mammals and marsupials, this is explained by assuming that placental mammals share a more recent common ancestor amongst themselves than the one they share with marsupials. If you reject that assumption then it would benefit your case to offer an alternative that fits the known facts at least as well. I agree that certain animals share more genetic material in common than other kinds. As I said previously, some of that sharing is due to a common ancestor, called a kind in Genesis.
What I don't agree with is the transformation of one kind into another, or that all living things are ultimately related, or that my ancestor was a one-celled creature which slowly turned into a fish, and later into a reptile, and later into some type of pig-like animal, as the evolutionist interpretations of the data say. Not simply because it is a repulsive thought, but because it all depends on a mindless process which I and most human beings on this earth, including human beings who are scientists, find unbelievable due to its inherent improbability, and because the cause-and-effect phenomena we perceive indicate that machine-like complex things do not make themselves but are the product of a mind, or else are programmed by a mind to replicate themselves. Hi again Radrook, It's good to hear back from you. This appears to be a derail from my original question of how you account for the genetic evidence of common descent if not through common descent. Originally Posted by Radrook: It's not the frequency, it's the mutation process itself that is a dubious choice for the organization of complex organisms. Originally Posted by Radrook: I never denied the occurrence of neutral or beneficial mutations. It is the unlikelihood of a mindless process, with its high probability of being harmful to an organism, ultimately leading to the intricate organization evident in the human eye: an iris to adjust the entry of light, a lens to focus that light on a screen called the retina, which is connected to an optic nerve that reacts to the radiation by coding it into neural impulses, which in turn arrive at a specialized part of the brain that can decode those impulses and turn them into the perception of images. Sorry, but in the presence of such strong evidence to the contrary, I just can't buy into the mindless mutation explanation. First let me congratulate you on your acceptance of the existence of small positive mutations.
This is a major step towards your understanding of what evolution is truly about. It is a step that some creationists are not prepared to make even in the face of reproducible empirical evidence. It appears that you are not sufficiently aware of the intricate complexity that can be produced by undoubtedly mindless processes. Snowflakes have complexity, a rock arch has irreducible complexity, the water cycle is a steam engine. There is nothing you have demonstrated to be beyond the reach of a mindless process. Are you familiar with John Conway's Game of Life? Draw a random pattern in this very simple, purely mechanical 2D universe. The odds are that within a few generations you'll see a small glider pattern. It looks designed, but you know that you didn't design it. Genetic recipes for life allow new increases in complexity to build upon previous ones. This allows many small mutations to add up to a bigger one. As such it offers us the possibility of a pinnacle of "mindless design". It is in fact so good at design that genetic algorithms have been put to good use by human designers in computer simulations. For example, a genetic algorithm produces a shape which is tested virtually for various structural properties. Those algorithms which produce the best designs are then used as the seeds for the next generation of designs. It is not uncommon for such a mindless process to produce "designs" superior in structural efficiency to any of intelligent origin. What evolutionary theory accepts can never evolve is a feature that cannot be broken down into many small neutral or positive stages. The discovery of such a feature would indeed be a problem for evolution. However, it is difficult to demonstrate that a feature could not be the result of an appropriate evolutionary path. To do so would probably require examination of an infinite number of possible paths.
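Conway's Game of Life, referenced above, is easy to make concrete: two mechanical rules (a dead cell with exactly three live neighbours is born; a live cell with two or three live neighbours survives) are enough to produce the glider, which reappears one cell down and to the right every four generations. A short sketch (Python here is a language choice of convenience, not part of the original post):

```python
import numpy as np

def step(grid):
    """One Game of Life generation on a toroidal numpy grid."""
    # live-neighbour count: sum of the eight shifted copies of the grid
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # birth on exactly 3 neighbours; survival on 2 or 3
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# the classic southeast-moving glider
glider = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 1, 1]])

g = np.zeros((10, 10), dtype=int)
g[1:4, 1:4] = glider
h = g
for _ in range(4):
    h = step(h)

# after four generations the glider has moved one cell down and right
expected = np.zeros((10, 10), dtype=int)
expected[2:5, 2:5] = glider
```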
Instead we get the argument from incredulity: "I cannot see how this feature could have evolved, therefore it could not have evolved." I'm sure you don't need me to point out the flaw in this logic. In all cases that I'm aware of, biologists have made progress in discovering possible evolutionary paths for the formation of seemingly problematic features. You bring up the example of the eye as one candidate. This has of course been much discussed, and I'm surprised that you do not acknowledge that the solution to this apparent conundrum has already been provided. In fact it was a topic discussed by Darwin himself, who also provided a solution. From here. The gradual steps listed are, briefly:
• photosensitive cell
• aggregates of pigment cells without a nerve
• an optic nerve surrounded by pigment cells and covered by translucent skin
• pigment cells forming a small depression
• pigment cells forming a deeper depression
• the skin over the depression taking a lens shape
• muscles allowing the lens to adjust
From the same page you can find links detailing how each stage has been observed in the natural world. Since you accept that small positive mutations can occur and be subject to natural selection, it should now be clear to you that the evolution of the eye can be broken down into a series of such steps.

English

English translation into Indonesian

Last Update: 2014-10-27
Usage Frequency: 1
Quality:

Reference: Anonymous

Indonesian

Predicting Australian Takeover Targets: A Logit Analysis
Maurice Peat* Maxwell Stevenson*
* Discipline of Finance, School of Finance, The University of Sydney

Abstract
Positive announcement-day adjusted returns to target shareholders in the event of a takeover are well documented. Investors who are able to accurately predict firms that will be the subject of a takeover attempt should be able to earn these excess returns. In this paper a series of probabilistic regression models were developed that use financial statement variables suggested by prior research as explanatory variables. The models, applied to in-sample and out-of-sample data, led to predictions of takeover targets that were better than chance in all cases. The economic outcomes resulting from holding a portfolio of the predicted targets over the prediction period are also analysed.
Keywords: takeovers, targets, prediction, classification, logit analysis
JEL Codes: G11, G17, G23, G34
This is a draft copy and not to be quoted.

1. Introduction
In this paper our aim is to accurately predict companies that will become takeover targets. Theoretically, if it is possible to predict takeovers with accuracy greater than chance, it should be possible to generate abnormal returns from holding a portfolio of the predicted targets. Evidence of abnormal returns of 20% to 30% made by shareholders of firms on announcement of a takeover bid is why prediction of these events is of interest to academics and practitioners alike. The modelling approach adopted in this study was based on the discrete choice approach used by Palepu (1986) and Barnes (1999). The models were based on financial statement information, using variables suggested by the numerous theories that have been put forward to explain takeover activity. The performance of the models was evaluated using statistical criteria.
Further, the predictions from the models were rated against chance and economic criteria through the formation and tracking of a portfolio of predicted targets. Positive results were found under both evaluation criteria. Takeover prediction studies are a logical extension of the work of Altman (1968), who used financial statement information to explain corporate events. Early studies by Simkowitz and Monroe (1971) and Stevens (1973) were based on the Multiple Discriminant Analysis (MDA) technique. Stevens (1973) coupled MDA with factor analysis to eliminate potential multicollinearity problems and reported a predictive accuracy of 67.5%, suggesting that takeover prediction was viable. Belkaoui (1978) and Rege (1984) conducted similar analyses in Canada, with Belkaoui (1978) confirming the results of these earlier researchers and reporting a predictive accuracy of 85%. Concerns were raised by Rege (1984), who was unable to predict with similar accuracy. These concerns were also raised in research by others such as Singh (1971) and Fogelberg, Laurent, and McCorkindale (1975). Reacting to the wide criticism of the MDA method, researchers began to use discrete choice models as the basis of their research. Harris et al. (1984) used probit analysis to develop a model and found that it had extremely high explanatory power, but they were unable to discriminate between target and non-target firms with any degree of accuracy. Dietrich and Sorensen (1984) continued this work using a logit model and achieved a classification accuracy rate of 90%. Palepu (1986) addressed a number of methodological problems in takeover prediction. He suggested the use of state-based prediction samples where a number of targets were matched with non-targets for the same sample period.
While this approach was appropriate for the estimation sample, it exaggerated accuracies within the predictive samples because the estimated error rates in these samples were not indicative of error rates within the population of firms. He also proposed the use of an optimal cut-off point derivation which considered the decision problem at hand. On the basis of this rectified methodology, along with the application of a logit model to a large sample of US firms, Palepu (1986) provided evidence that the ability of the model was no better than a chance selection of target and non-target firms. Barnes (1999) also used the logit model and a modified version of the optimal cut-off rule on UK data. His results indicated that a portfolio of predicted targets may have been consistent with Palepu's finding, but he was unable to document this in the UK context due to model inaccuracy. In the following section the economic explanations underlying takeover activity are discussed. Section 3 outlines our takeover hypotheses and describes the explanatory variables that are used in the modelling procedure. The modelling framework and data used in the study are contained in Section 4, while the results of our model estimation, predictions, classification accuracy and portfolio economic outcomes are found in Section 5. We conclude in Section 6.

2. Economic explanations of takeover activity
Economic explanations of takeover activity have suggested the explanatory variables that were included in this discrete choice model development study. Jensen and Meckling (1976) posited that agency problems occurred when decision making and risk bearing were separated between management and stakeholders[1], leading to management inefficiencies. Manne (1965) and Fama (1980) theorised that a mechanism existed that ensured management acted in the interests of the vast number of small non-controlling shareholders[2].
They suggested that a market for corporate control existed in which alternative management teams competed for the rights to control corporate assets. The threat of acquisition aligned management objectives with those of stakeholders, as managers are terminated in the event of an acquisition in order to rectify inefficient management of the firm's assets. Jensen and Ruback (1983) suggested that both capital gains and increased dividends are available to an acquirer who could eliminate the inefficiencies created by target management, with the attractiveness of the firm for takeover increasing with the level of inefficiency. Jensen (1986) looked at the agency costs of free cash flow, another form of management inefficiency. In this case, free cash flow referred to cash flows in excess of positive net present value (NPV) investment opportunities and normal levels of financial slack (retained earnings). The agency cost of free cash flow is the negative NPV that arises from investing in negative NPV projects rather than returning funds to investors. Jensen (1986) suggested that the market value of the firm should be discounted by the expected agency costs of free cash flow. These, he argued, were the costs that could be eliminated either by issuing debt to fund an acquisition of stock, or through merger with, or acquisition of, a growing firm that had positive NPV investments and required the use of these excess funds. Smith and Kim (1994) combined the financial pecking order argument of Myers and Majluf (1984) with the free cash flow argument of Jensen (1986) to create another motivational hypothesis that postulated inefficient firms forgo profitable investment opportunities because of informational asymmetries.
[1] Stakeholders are generally considered to be both stock and bond holders of a corporation.
[2] We take the interests of shareholders to be in the maximization of the present value of the firm.
Further, Jensen (1986) argued that, due to information asymmetries that left shareholders less informed, management was more likely to undertake negative NPV projects rather than returning funds to investors. Smith and Kim (1994) suggested that some combination of these firms, such as an inefficient firm and an efficient acquirer, would be the optimal solution to the two respective resource allocation problems. This, they hypothesised, would result in a market value for the combined entity that exceeded the sum of the individual values of the firms. This is one form of financial synergy that can arise in merger situations. Another form of financial synergy is that which results from a combination of characteristics of the target and bidding firms. Jensen (1986) suggested that an optimal capital structure exists, whereby the marginal benefits and marginal costs of debt are equal. At this point, the cost of capital for a firm is minimised. This suggested that increases in leverage will only be viable for those firms who have free cash flow excesses, and not for those which have an already high level of debt. Lewellen (1971) proposed that in certain situations, financial efficiencies may be realized without the realization of operational efficiencies. These efficiencies relied on a simple Miller and Modigliani (1964) model. It proposed that, in the absence of corporate taxes, an increase in a firm's leverage to reasonable levels would increase the value of the equity share of the company due to a lower cost of capital. A merger of two firms, where either one or both had not utilised their borrowing capacity, would then result in a financial gain. This financial gain would represent a valuation gain above that of the sum of the equity values of the individual firms. However, this result is predicated on the assumption that the firms need to either merge or be acquired in order to achieve this result. Merger waves are well documented in the literature.
Gort (1969) suggested that industry disturbances are the source of these merger waves, his argument being that they occurred in response to discrepancies between the valuation of a firm by shareholders and potential acquirers. As a consequence of economic shocks (such as deregulation, changes in input or output prices, etc.), expectations concerning future cash flows became more variable. This results in an increased probability that the value the acquirer places on a potential target is greater than its current owner's valuation. The result is a possible offer and subsequent takeover. Mitchell and Mulherin (1996), in their analysis of mergers and acquisitions in the US during the 1980s, provided evidence that mergers and acquisitions cluster by industries and time. Their analysis confirmed the theoretical and empirical evidence provided by Gort (1969) and provided a different view, suggesting that mergers, acquisitions, and leveraged buyouts were the least-cost method of adjusting to the economic shocks borne by an industry. These theories suggested a clear theoretical base on which to build takeover prediction models. As a result, eight main hypotheses for the motivation of a merger or acquisition have been formulated, along with twenty-three possible explanatory variables to be incorporated into predictive models.

3. Takeover hypotheses and explanatory variables
The most commonly accepted motivation for takeovers is the inefficient management hypothesis, also known as the disciplinary motivation for takeovers. The hypothesis states that inefficiently managed firms will be acquired by more efficiently managed firms. Accordingly,
H1: Inefficient management will lead to an increased likelihood of acquisition.
Explanatory variables suggested by this hypothesis as candidates to be included in the specifications of predictive models included:
1. ROA (EBIT/(Total Assets – Outside Equity Interests))
2. ROE (Net Profit After Tax/(Shareholders Equity – Outside Equity Interests))
3. Earnings Before Interest and Tax Margin (EBIT/Operating Revenue)
4. EBIT/Shareholders Equity
5. Free Cash Flow (FCF)/Total Assets
6. Dividend/Shareholders Equity
7. Growth in EBIT over the past year
along with an activity ratio,
8. Asset Turnover (Net Sales/Total Assets)
While there are competing explanations for the effect that a firm's undervaluation has on the likelihood of its acquisition by a bidder, there is consistent agreement across all explanations that the greater the level of undervaluation, the greater the likelihood a firm will be acquired. The hypothesis that embodies the impact of these competing explanations is as follows:
H2: Undervaluation of a firm will lead to an increased likelihood of acquisition.
The explanatory variable suggested by this hypothesis is:
9. Market to book ratio (Market Value of Securities/Net Assets)
The Price Earnings (P/E) ratio is closely linked to the undervaluation and inefficient management hypotheses. The impact of the P/E ratio on the likelihood of acquisition is referred to as the P/E hypothesis:
H3: A high Price to Earnings ratio will lead to a decreased likelihood of acquisition.
It follows from this hypothesis that the P/E ratio is a likely candidate as an explanatory variable for inclusion in models for the prediction of potential takeover targets:
10. Price/Earnings Ratio
The growth resource mismatch hypothesis is the fourth hypothesis. However, the explanatory variables used in models specified to examine this hypothesis capture growth and resource availability separately. This gives rise to the following:
H4: Firms which possess low growth/high resource combinations or, alternatively, high growth/low resource combinations will have an increased likelihood of acquisition.
The explanatory variables suggested by this hypothesis are:
11. Growth in Sales (Operating Revenue) over the past year
12. Capital Expenditure/Total Assets
13. Current Ratio (Current Assets/Current Liabilities)
14. (Current Assets – Current Liabilities)/Total Assets
15. Quick Assets (Current Assets – Inventory)/Current Liabilities
The behaviour of some firms of paying out less of their earnings, in order to maintain enough financial slack (retained earnings) to exploit future growth opportunities as they arise, has led to the dividend payout hypothesis:
H5: High payout ratios will lead to a decreased likelihood of acquisition.
The obvious explanatory variable suggested by this hypothesis is:
16. Dividend Payout Ratio
Rectification of capital structure problems is an obvious motivation for takeovers. However, there has been some argument as to the impact of low or high leverage on acquisition likelihood. This paper proposes the inefficient financial structure hypothesis, from which the following is derived:
H6: High leverage will lead to a decreased likelihood of acquisition.
The explanatory variables suggested by this hypothesis include:
17. Net Gearing ((Short Term Debt + Long Term Debt)/Shareholders Equity)
18. Net Interest Cover (EBIT/Interest Expense)
19. Total Liabilities/Total Assets
20. Long Term Debt/Total Assets
The existence of Merger and Acquisition (M&A) activity waves, where takeovers are clustered in wave-like profiles, has been proposed as an indicator of changing levels of M&A activity over time. It has been argued that the identification of M&A waves, with the corresponding improved likelihood of acquisition when the wave is surging, captures the effect of the rate of takeover activity at specific points in time, and serves as valuable input into takeover prediction models. Consistent with M&A activity waves and their explanation as a motivation for takeovers is the industry disturbance hypothesis:
H7: Industry merger and acquisition activity will lead to an increased likelihood of acquisition.
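The candidate variables above are mechanical financial-statement ratios. As an illustration, a few of the H1 variables (1, 2, 3 and 8, using the definitions as listed) can be computed directly from statement figures; the numbers below are invented for the sketch, not data from the paper:

```python
# Hypothetical financial-statement figures (illustrative only, $m).
ebit = 12.5
total_assets = 180.0
outside_equity_interests = 5.0
net_profit_after_tax = 8.1
shareholders_equity = 90.0
operating_revenue = 150.0

# Candidate variables for H1 (inefficient management), per the listed definitions:
roa = ebit / (total_assets - outside_equity_interests)                          # variable 1
roe = net_profit_after_tax / (shareholders_equity - outside_equity_interests)   # variable 2
ebit_margin = ebit / operating_revenue                                          # variable 3
asset_turnover = operating_revenue / total_assets                               # variable 8

print(round(roa, 4), round(roe, 4), round(ebit_margin, 4), round(asset_turnover, 4))
```

Low values on ratios like these, relative to industry peers, are what the inefficient management hypothesis treats as a signal of acquisition likelihood.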
An industry relative ratio of takeover activity is suggested by this hypothesis:
21. Industry relative takeover activity, where the numerator is the total number of bids launched in the firm's industry in a given year, while the denominator is the average number of bids launched across all industries on the ASX.
Size will have an impact on the likelihood of acquisition. It seems plausible that smaller firms will have a greater likelihood of acquisition, due to larger firms generally having fewer bidding firms with the resources to acquire them. This gives rise to the following hypothesis:
H8: The size of a firm will be negatively related to the likelihood of acquisition.
Explanatory variables that can be employed to control for size include:
22. Log (Total Assets)
23. Net Assets

4. Data and Method
The data requirements for the variables defined above are derived from the financial statements and balance-sheet-date price information for Australian listed companies. The financial statement information was sourced from the AspectHuntley database, which includes annual financial statement data for all ASX listed companies between 1995 and 2006. The database includes industry classifications for all firms, used in the construction of industry relative ratios. Lists of takeover bids and their respective success were obtained from the Connect4 database. This information enabled the construction of variables for relative merger activity between industries. Additionally, stock prices from the relevant balance dates of all companies were sourced from the AspectHuntley online database, the SIRCA Core Price Data Set and Yahoo! Finance.

4.1 The Discrete Choice Modelling Framework
The modelling procedure used is the nominal logit model, made popular in the bankruptcy prediction literature by Ohlson (1980) and, subsequently, in the takeover prediction literature by Palepu (1986). Logit models are commonly utilised for dichotomous state problems. The model is given by equations [1] to [3] below.
[1] P_i = Prob(y_i = 1 | x_i) = 1/(1 + e^(-Z_i))
[2] L_i = ln(P_i/(1 - P_i)) = Z_i = b_0 + b_1*x_i1 + ... + b_k*x_ik
[3] dP_i/dx_ik = b_k * P_i * (1 - P_i)

The logit model was developed to overcome the rigidities of the Linear Probability Model in the presence of a binary dependent variable. Equations [1] and [2] show the existence of a linear relationship between the log-odds ratio (otherwise known as the logit, L_i) and the explanatory variables. However, the relationship between the probability of the event and the explanatory variables is non-linear. This non-linear relationship has a major advantage that is demonstrated in equation [3]. Equation [3] measures the change in the probability of the event as a result of a small increment in the explanatory variable x_ik. When the probability of the event is high or low, the incremental impact of a change in an explanatory variable on the likelihood of the event will be compressed, requiring a large change in the explanatory variables to change the classification of the observation. If a firm is clearly classified as a target or non-target, a large change in the explanatory variables is required to change its classification.

4.2 Sampling Schema
Two samples were used in the model building and evaluation procedure. They were selected to mimic the problem faced by a practitioner attempting to predict takeover targets into the future. The first sample was used to estimate the model and to conduct in-sample classification. It was referred to as the Estimation Sample. This sample was based on financial data for the 2001 and 2002 financial years for firms that became takeover targets, as well as selected non-targets, between January 2003 and December 2004. The lag in the dates allows for the release of financial information, as well as allowing for the release of financial statements for firms whose balance dates fall after the 30th of June. Following model estimation, the probability of a takeover offer was estimated for each firm in the entire sample of firms between January 2003 and December 2004, using the estimated model and each firm's 2001 and 2002 financial data.
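The logit relationships referred to as equations [1] to [3] can be sketched numerically. The coefficients and regressor values below are invented for illustration only; they are not estimates from the paper:

```python
import math

# Hypothetical logit coefficients and regressors -- for illustration only.
beta = [-2.0, 1.5, -0.8]            # [intercept, beta_1, beta_2]
x = [1.0, 0.6, 0.3]                 # leading 1 for the intercept term

z = sum(b * xi for b, xi in zip(beta, x))   # linear index Z_i
p = 1.0 / (1.0 + math.exp(-z))              # eq. [1]: probability of the event
log_odds = math.log(p / (1.0 - p))          # eq. [2]: the logit, equal to Z_i
dp_dx1 = beta[1] * p * (1.0 - p)            # eq. [3]: marginal effect of x_1

# Near p = 0 or p = 1 the factor p*(1-p) is small, so a clearly classified
# firm needs a large change in the regressors to change its classification.
print(abs(log_odds - z) < 1e-9)  # True
```

Note that the marginal effect in equation [3] scales with p(1 - p), which is exactly the compression effect discussed in the text: the same coefficient moves the probability most near p = 0.5 and hardly at all near the extremes.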
Ex-post predictive ability for each firm was then assessed. A second sample was then used to assess the predictive accuracy of the model estimated with the estimation sample data. It is referred to as the Prediction Sample. This sample includes the financial data for the 2003 and 2004 financial years, which will be used in conjunction with target and non-target firms for the period January 2005 to December 2006. Using the model estimated from the 2001 and 2002 financial data, the sample of firms from 2005 and 2006 was fitted to the model using their 2003 and 2004 financial data. They were then classified as targets or non-targets using the 2005 and 2006 data. This sampling methodology allows for the eva

English

I am going to Kediri

Last Update: 2014-02-23
Usage Frequency: 1
Quality:

Reference: Wikipedia

Indonesian

The Place of Logic in Philosophy. The sciences fall into two broad divisions, viz.: the speculative and the regulative (or normative) sciences. In the speculative sciences, philosophic thought deals with those things which we find proposed to our intelligence in the universe: such sciences have no other immediate end than the contemplation of the truth. Thus we study Mathematics, not primarily with a view to commercial success, but that we may know. In the normative sciences, on the other hand, the philosopher pursues knowledge with a view to the realization of some practical end. "The object of philosophy," says St. Thomas of Aquin, "is order. This order may be such as we find already existing; but it may be such as we seek to bring into being ourselves."¹ Thus sciences exist, which have as their object the realization of order in the acts both of our will and of our intellect. The science which deals with the due ordering of the acts of the will is Ethics; that which deals with order in the acts of the intellect is Logic. ¹St. Thomas in Ethic. I. lect. 1. Sapientis est ordinare. . . . Ordo autem quadrupliciter ad rationem comparatur. Est enim quidam ordo quem ratio non facit sed solum considerat, sicut est ordo rerum naturalium. Alius autem est ordo quem ratio considerando facit in proprio actu, puta cum ordinat conceptus suos ad invicem et signa conceptuum quae sunt voces significativae. Tertius autem est ordo quem ratio considerando facit in operationibus voluntatis. Quartus autem est ordo quem ratio considerando facit in exterioribus rebus, quarum ipsa est causa, sicut in arca et domo. The question has often been raised, whether Logic is a science or an art. The answer to this will depend entirely on the precise meaning which we give to the word 'art.'
The medieval philosophers regarded the notion of an art as signifying a body of rules by which man directs his actions to the performance of some work.² Hence they held Logic to be the art of reasoning, as well as the science of the reasoning process. Perhaps a more satisfactory terminology is that at present in vogue, according to which the term 'art' is reserved to mean a body of precepts for the production of some external result, and hence is not applicable to the normative sciences. Aesthetics, the science which deals with beauty and proportion in the objects of the external senses, is now reckoned with Ethics and Logic as a normative science. By the medieval writers it was treated theoretically rather than practically, and was reckoned part of Metaphysics. It may be well to indicate briefly the distinction between Logic and two other sciences, to which it bears some affinity. Logic and Metaphysics. The term Metaphysics sometimes stands for philosophy in general; sometimes, with a more restricted meaning, it stands for that part of philosophy known as Ontology. In this latter sense Metaphysics deals not with thoughts, as does Logic, but with things; not with the conceptual order but with the real order. It investigates the meaning of certain notions which all the special sciences presuppose, such as Substance, Accident, Cause, Effect, Action. It deals with principles which the special sciences do not prove, but on which they rest, such as, e.g., Every event must have a cause. Hence it is called the science of Being, since its object is not limited to some special sphere, but embraces all that is, whether material or spiritual. Logic on the other hand deals with the conceptual order, with thoughts. Its conclusions do not relate to things, but to the way in which the mind represents things. ²St. Thomas in An. Post. I., lect. x. "Nihil enim aliud ars esse videtur, quam certa ordinatio rationis qua per determinata media ad debitum finem actus humani perveniunt."
Logic and Psychology. The object of Psychology is the human soul and all its activities. It investigates the nature and operations of intellect, will, imagination, sense. Thus its object is far wider than that of Logic, which is concerned with the intellect alone. And even in regard to the intellect, the two sciences consider it under different aspects. Psychology considers thought merely as an act of the soul. Thus if we take a judgment, such as e.g., "The three angles of a triangle are together equal to two right angles," Psychology considers it, merely in so far as it is a form of mental activity. Logic on the other hand, examines the way in which this mental act expresses the objective truth with which it deals; and if necessary, asks whether it follows legitimately from the grounds on which it is based. Moreover, Logic, as a regulative science, seeks to prescribe rules as to how we ought to think. With this Psychology has nothing to do: it only asks, "What as a matter of fact is the nature of the mind's activity?" The Scope of Logic. Logicians are frequently divided into three classes, according as they hold that the science is concerned (1) with names only, (2) with the form of thought alone, (3) with thought as representative of reality. The first of these views — that Logic is concerned with names only — has found but few defenders. It is however taught by the French philosopher Condillac (1715 — 1780), who held that the process of reasoning consists solely in verbal transformations. The meaning of the conclusion is, he thought, ever identical with that of the original proposition. The theory that Logic deals only with the forms of thought, irrespective of their relation to reality, was taught among others by Hamilton (1788 —1856) and Mansel (1820 —1871). 
Both of these held that Logic is in no way concerned with the truth of our thoughts, but only with their consistency. In this sense Hamilton says: "Logic is conversant with the form of thought, to the exclusion of the matter" (Lectures. I. p. xi). By these logicians a distinction is drawn between 'formal truth,' i.e., self-consistency, and 'material truth,' i.e., conformity with the object; and it is said that Logic deals with formal truth alone. On this view Mill well observes: "the notion of the true and false will force its way even into Formal Logic. We may abstract from actual truth, but the validity of reasoning is always a question of conditional truth — whether one proposition must be true if the others are true, or whether one proposition can be true if others are true" (Exam. of Hamilton, p. 399). According to the third theory, Logic deals with thought as the means by which we attain truth. Mill, whom we have just quoted, may stand as a representative of this view. "Logic," he says, "is the theory of valid thought, not of thinking, but of correct thinking" (Exam. of Hamilton, p. 388). To which class of logicians should Aristotle and his Scholastic followers be assigned? Many modern writers rank them in the second of these groups, and term them Formal Logicians. It will soon appear on what a misconception this opinion rests, and how completely the view taken of Logic by the Scholastics differs from that of the Formal Logicians. In their eyes, the aim of the science was most assuredly not to secure self-consistency, but theoretically to know how the mind represents its object, and practically to arrive at truth. The terms Nominalist, Conceptualist, and Realist Logicians are now frequently employed to denote these three classes.
This terminology is singularly unfortunate: for the names, Nominalist, Conceptualist and Realist, have for centuries been employed to distinguish three famous schools of philosophy, divided from each other on a question which has nothing to do with the scope of Logic. In this work we shall as far as possible avoid using the terms in their novel meaning.

English

Wow, the photo is way too sexy

Last Update: 2013-11-29
Usage Frequency: 1
Quality:

Reference: Anonymous

Indonesian

ABSTRACT The background of this research is the phenomenon of the less-than-optimal effectiveness of employees of the General Secretariat of Garut. This study aimed to analyze the influence of human relations on the effectiveness of employees of the General Secretariat of Garut. The method used in this research is descriptive quantitative analysis. The population in this study comprised all 130 employees of the office of the General Secretariat of Garut. Stratified sampling with random sampling techniques was used, giving a sample size of 57 people in total. The theory used as a basis for analyzing the human relations variable is the theory advanced by Jalaluddin (1999), covering psychosocial aspects and physical aspects of the environment. The basic theory used to analyze the employee effectiveness variable is that proposed by Robbin (1994), which consists of the goal attainment approach, the system approach, the strategic constituency approach and the competing values approach. The test results supported the hypothesis: human relations significantly affect employee effectiveness, accounting for 85.15%, while the remaining 14.85% is the influence of other factors not incorporated into the model. The correlation between the two variables was 0.92, indicating a very strong relationship. Based on the results of the study, several issues were considered important, namely: 1) the room layout and furniture are less than good; 2) a friendly and courteous attitude is not yet well ingrained, especially when dealing with constituents; 3) the level of employee adaptation to environmental developments is still lacking; 4) the competitive level of the organization is still not optimal.
The suggestions that the author proposed to solve these problems include: 1) rearranging the room layout and furniture to make it look neat and comfortable, with maximum attention to lighting and air circulation; 2) establishing and sustaining a positive local culture of hospitality and courtesy in every employee; 3) creating a positive atmosphere, removing the fears of employees, and supporting and appreciating creative thinking, even if those ideas are not implemented; 4) creating effective communication, so that everyone in the organization knows and understands the vision that the organization has set, and can understand how to serve the needs of their constituents.

English

translation

Last Update: 2013-01-02
Usage Frequency: 1
Quality:

Reference: Anonymous

Indonesian

When my wife and I arrived at the Italian Hotel, we were warmly welcomed by the hotel keeper, and he escorted us to our room on the second floor. The room was very large, and we could see the monument and the park from our room. It rained heavily that day, and I lay in bed reading a book. My hobby is reading and writing books. Books I have finished writing include Old Testament Translation Problem, A Translator's Handbook on Mark, In Other Words, and A Textbook of Translation; the book I am currently reading is The Theory and Practice of Translation, and I am also busy writing a book entitled Orthography Studies. While the rain poured down, my wife stood at the window looking out at a cat in the rain, and then she told me that she wanted that cat. I do not like cats that are dirty and unkempt.

English

Turkish-Indonesian dictionary translations

Last Update: 2012-10-28
Usage Frequency: 1
Quality:

Reference: Anonymous
