MyMemory, the world's largest translation memory


You searched for: between

Human contributions

From professional translators, companies, web pages and publicly available translation databases.

Add a translation

Indonesian

English

Details

Ceramic materials were discussed briefly in Chapter 1, which noted that they are inorganic and nonmetallic materials. Most ceramics are compounds between metallic and nonmetallic elements for which the interatomic bonds are either totally ionic, or predominantly ionic but having some covalent character. The term "ceramic" comes from the Greek word keramikos, which means "burnt stuff," indicating that desirable properties of these materials are normally achieved through a high-temperature heat treatment process called firing. Up until the past 60 or so years, the most important materials in this class were termed the "traditional ceramics," those for which the primary raw material is clay; products considered to be traditional ceramics are china, porcelain, bricks, tiles, and, in addition, glasses and high-temperature ceramics. Of late, significant progress has been made in understanding the fundamental character of these materials and of the phenomena that occur in them that are responsible for their unique properties. Consequently, a new generation of these materials has evolved, and the term "ceramic" has taken on a much broader meaning. To one degree or another, these new materials have a rather dramatic effect on our lives; electronic, computer, communication, aerospace, and a host of other industries rely on their use. This chapter discusses the types of crystal structure and atomic point defect that are found in ceramic materials and, in addition, some of their mechanical characteristics. Applications and fabrication techniques for this class of materials are treated in the next chapter.


Last updated: 2014-07-16
Usage frequency: 1
Quality:
Reference: Wikipedia

The compound cup-stack carbon nanotube coating reduces friction between the main and cross strings and combines with the thin 0.66 mm gauge high-intensity multifilament nylon to provide effortless power. The next generation of string, producing improved power and feel for enhanced flight performance.

male

Last updated: 2014-07-07
Subject: General
Usage frequency: 1
Quality:
Reference: Anonymous

Mobile Phones in Bangkok | Buying A Mobile Phone in Bangkok

Where To Buy A Mobile Phone in Bangkok

Buying a mobile phone in Bangkok will not be a problem. What might be a problem is knowing where to buy it and deciding which mobile phone suits you best, because there are hundreds to choose from, with many models available that might not be available in your country, including the iPhone 3G, which can be cracked and unlocked ready for use. Before you buy, make sure it really can be used in your country. Everybody in Bangkok has a mobile phone and everybody wants the latest mobile phone. Bangkok must have more mobile phone shops than any other city. Either the manufacturers themselves have outlets, or the service providers have mobile phone outlets, or anyone who doesn't have any other job has a mobile phone stall, and these stalls are in almost every shopping center in Thailand, meaning there must be literally millions of phones, either new or second hand, waiting to be sold and bought.

Where's The Best Place To Go To Buy A Mobile Phone?

Without a doubt, the most well known of these mobile phone "malls" in Bangkok is MBK shopping mall, which has almost an entire floor dedicated to new and second-hand mobile phones, including phones that are not yet officially for sale in Thailand, such as Apple's iPhone. These are brought over from countries that legally sell them and then cracked or unlocked to allow them to work with any network, but we doubt very much whether you'll get a warranty with one of these phones. If you know the phone you want, there is a very good chance you'll find it at MBK. This goes for mobile phone accessories and repairs too, with small stalls housing a man hunched over a soldering iron and some broken mobile phones.

Price Comparisons For Some Phones

Samsung Wave II: 11,900 Baht
Blackberry Bold 9780: 17,900 Baht
Blackberry Torch: 21,490 Baht
HTC Touch Pro: 22,900 Baht
HTC Desire HD: 22,900 Baht
Sony Ericsson W595: 13,790 Baht
Nokia e72 Navigation: 16,400 Baht
Nokia C7
LG GT505

The latest mobile phones will cost you quite a lot of money, but prices generally fall quite quickly as soon as another model comes out. It's possible to get the latest mobile phones second hand at some of the many hundreds of mobile phone stalls, but we can't vouch for the quality of these phones or how long they'll last once you've bought them. As mentioned earlier, Thais love the latest gadgets and the latest mobile phones are high on their list, but many people buy them on the never-never, or on interest-free credit purchase agreements which they sometimes can't afford, which is why new mobiles can be found in the second-hand stalls.

The mobile phone system in Thailand is different from, let's say, the UK. In the UK, it's more common to get a mobile phone either free, or very cheaply, with a contract you sign up to. This contract can be between 12 and 18 months and usually covers the cost of the phone, calls, text messages etc., depending on the package you buy. In Thailand, however, you buy your phone and then sign up to a service, or you opt for the "pay as you go" system, which is probably why there are so many second-hand phones on the market. It's too easy to get rid of the one you already have. Also, Thais want you to believe they can afford new items, and so a mobile phone shows they can afford it, whereas in the West we're not as concerned about this kind of thing.

is a piece of metal of predetermined shape, size and surface area.

Last updated: 2014-07-05
Subject: General
Usage frequency: 1
Quality:
Reference: Anonymous
Warning: Contains invisible HTML formatting

Announcement: Follow-up with UIHG's Second Listing (IV)
2014-05-07 11:47:31

UIHG to launch IPO roadshow next week

Please be informed that Unwall International Holding Group (UIHG) plans to launch its IPO roadshow for all institutional investors as soon as next week. The roadshow will cover major cities in the US and Europe. As you are all well aware, IPO roadshows are usually carried out one to two months prior to a company's expected date of listing. After completing the roadshow briefing and receiving the final notice of approval from the relevant securities commission, the company may then begin to fix the final price of its shares. This means that UIHG will be able to list its shares publicly very soon. UIHG's valuation for the listing still remains confidential, so do pay close attention to the latest developments!

Important information about IPO roadshows:

1. What is an IPO roadshow? Roadshows are a method widely used internationally for the promotion of share issuance. They are promotional activities carried out by securities issuing agents, targeted at investors (institutional and retail) before the securities are issued. They are an important promotional and publicity means to facilitate a successful issuance of shares on the basis of a full exchange between the investors and investees.

2. The objective of IPO roadshows. Roadshows are research and survey activities arranged by share underwriters for the issuers prior to the issuance of shares, to encourage communication and exchange between the investors and the share issuer, so as to ensure a smooth issuance of the shares.

3. The benefits of IPO roadshows. They display the company's image, increase the transparency of the share issuance, foster closer engagement with investors, and maintain long-term public relations, helping to ensure a reasonable issue price.

Translation

Last updated: 2014-05-24
Usage frequency: 1
Quality:
Reference: Wikipedia

This statement is one I do not agree with. Furthermore, I cannot give it any credibility. For one thing, it is gathering all peoples into one group. To say that theists and atheists are both victims is, in my mind, comparing apples to oranges. There is a distinct and comprehensive difference between the two. A theist has faith in a Higher Power. The atheist has no faith in anything. As it relates to the theist, there is faith in a Higher Power, and it is the Bible, Quran, or the Bhagavad Gita that teaches and guides the person in faith as it relates to the person's Higher Power. The atheist has no such faith, teachings, or Holy Book that does what the theist believes and serves as the foundation the theist draws from. As Christians, we believe in God the Father, His Son Christ, and the Holy Spirit. They are the One God from whom all of us are created, exist, and draw strength. We believe our total existence comes from the Bible, which is the foundation of the One God's Holy, Final, and Written Word. I do not understand how or why His Holiness comes to such conclusions. Even Wiccans and Pagans have faith and belief in something. To say that "He can say yes to whatsoever life brings to him; he is a yea-sayer" is not in accordance with Christian belief. Actually, if one thinks about it, this is a statement that no believer, whatever his religion, can take as having any merit or credence whatsoever. There is such a thing as provocative, critical, and reasoned thinking. This statement from His Holiness, in my estimation, provides none of these. This is just off the top of my head; it is early, and I have not had my coffee yet. I look at this, and find it ludicrous.

Indonesian cuisine

Last updated: 2014-05-09
Usage frequency: 1
Quality:
Reference: Wikipedia

9.5 Determination of Electrostatic Forces from Energy

We saw that we can find electric forces between charged bodies only if we know the charge distribution on them, which is rarely the case. Moreover, the previously discussed method cannot be used to determine forces on polarized bodies except in a few simple cases. For example, suppose that a parallel-plate capacitor is partially dipped in a liquid dielectric, as in Fig. 9.3a. If the capacitor is charged, polarization charges exist only on the two vertical sides of the dielectric inside the capacitor. The electric force acting on them has only a horizontal component, if any. Yet experiment tells us that when we charge the capacitor, there is a small but noticeable rise in the dielectric level between the plates. How can we explain this phenomenon?

The answer lies in what happens not at the top of the dielectric but near the bottom edge of the capacitor. In that region, the dipoles in the dielectric orient themselves as shown in Fig. 9.3b. The net force on the dipoles points essentially upward and pushes the dielectric up between the plates. Although we can explain the nature of this force, based on what we have learned so far we have no idea how to calculate it. The method described next enables us to determine the electric forces in this and many other cases where the direct method fails. In addition, conceptually the same method is used for the more important determination of magnetic forces in practical applications.

Figure 9.3 (a) When a parallel-plate capacitor dipped in a liquid dielectric is charged, the level between the plates rises due to electric forces acting on dipoles in the dielectric in the region around the edge of the capacitor, where the field is not uniform. (b) Enlarged domain of the capacitor fringing field in the dielectric, indicating the force on a dipole in a nonuniform field.

Figure 9.4 A body in an electrostatic system moved a small distance dx by the electric force.

Consider an arbitrary electrostatic system consisting of a number of charged conducting and polarized dielectric bodies. We know that there are forces acting on all these bodies. Let us concentrate on one of the bodies, for example the one in Fig. 9.4, which may be either a conductor or a dielectric. Let the unknown electric force on the body be F, as indicated in the figure. Suppose we let the electric force move the body by a small distance dx in the direction of the x axis indicated in the figure. The electric force would in this case do work equal to dA_el.force = F_x dx (9.10), where F_x is the projection of the force F on the x axis. At first glance we seem to have gained nothing by this discussion: we do not know the force F, so we do not know the work dA_el.force either. However, we will now show that if we know how the electric energy of the system depends on the coordinate x, we can determine the work dA_el.force, and then from Eq. (9.10), the component F_x of the force F. In this process, either (1) the charges on all the bodies of the system can remain unchanged or (2) the potentials of all the conducting bodies can remain unchanged.

Let us consider case (1) first. The charges can remain unchanged in spite of the change in the system geometry only if none of the conducting bodies is connected to a source that could change its charge (for example, a battery). Therefore, by conservation of energy, the work in moving the body can be done only at the expense of the electric energy contained in the system. Let the system energy as a function of the coordinate x of the body, W_e(x), be known. The increment in energy after the displacement, dW_e(x), is negative because some of the energy has been used for doing the work. Since work has to be a positive number, we have in this case dA_el.force = -dW_e(x). Combining this expression with Eq. (9.10), the component F_x of the electric force on the body is F_x = -dW_e(x)/dx (charges kept constant) (9.11).

Figure 9.5 Determination of the force on the electrodes of a parallel-plate capacitor using Eq. (9.11).

Example 9.4 - Force acting on one plate of a parallel-plate capacitor. In this example, we will find the electric force acting on one plate of a parallel-plate capacitor. The dielectric is homogeneous, of permittivity ε, the area of the plates is S, and the distance between them is x. One plate is charged with Q and the other with -Q (Fig. 9.5). Let the electric force move the right plate by a small distance dx. The energy in the capacitor is given by W_e(x) = Q²/2C(x) = Q²x/(2εS), so the force that tends to increase the distance between the plates is F_x = -Q²/(2εS). This is the same result as in Example 9.2, except for the sign. The minus sign tells us that the force tends to decrease the coordinate x, i.e., that it is attractive.

Example 9.5 - Force per unit length acting on a conductor of a two-wire line. The wires of a two-wire line of radii a are x apart, and are charged with charges Q' and -Q'. The energy per unit length of the line is W'_e(x) = Q'²/2C'(x), using C' as calculated in problem P8.13. From Eq. (9.11) we obtain the force per unit length on the right conductor, tending to increase the distance between them. This is the same as in Example 9.3, except for the minus sign. We know that this means only that the force tends to decrease the distance x between the wires, i.e., that it is attractive.

Example 9.6 - Force acting on a dielectric partly inserted into a parallel-plate capacitor. Let us find the electric force acting on the dielectric in Fig. 9.6. Equation (9.11) allows us to do this in a simple way. The capacitance of a capacitor such as this one is given by C(x) = C1 + C2 (see problem P8.8).

Figure 9.6 Determination of the force on the dielectric partly inserted between the electrodes of a parallel-plate capacitor using Eq. (9.11).

The energy in the capacitor is W_e(x) = Q²/2C = Q²/[2(C1 + C2)] = Q²d/(2b[εx + ε₀(a - x)]). The derivative dW_e(x)/dx in this case is a bit more complicated to calculate, and it is left as an exercise. The force is found to be always positive, because ε > ε₀. This means that the forces tend to pull the dielectric further in between the plates.

Example 9.7 - Rise of level of liquid dielectric partly filling a parallel-plate capacitor. As a final example of the application of Eq. (9.11), let us determine the force that raises the level of the liquid dielectric between the plates of the capacitor in Fig. 9.3. Assume the dielectric is distilled water with ε_r = 81, the width of the plates is b, their distance is d = 1 cm, and the capacitor was charged by being connected to V = 1000 V. The electric forces will raise the level of the water between the plates until the weight of the water between the plates becomes equal to this force. The weight is proportional to ρ_w, the mass density of water, and g = 9.81 m/s². By equating this force to the force that we found in Example 9.6, we obtain the rise in the water level.

So far, we have discussed examples of case (1), where the charges in a system were kept constant. Case (2) is finding forces from energy when the voltage, not the charge, of the n conducting bodies of the system is kept constant (for example, we connect the system to a battery). When a body is moved by electric forces again by dx, some changes must occur in the charges on the conducting bodies, due to electrostatic induction. These changes are made at the expense of the energy in the sources (battery). So we would expect the energy contained in the electric field to increase in this case. It can be shown in a relatively straightforward way that the expression for the component F_x of the electric force on the body in this case is F_x = +dW_e(x)/dx (potentials kept constant) (9.12). Of course, this formula in all cases leads to the same result for the force as Eq. (9.11), but in some cases it is easier to calculate dW_e/dx for constant potentials than for constant charges, and conversely.

Example 9.8 - Example 9.6 revisited. Let us compute the force from Example 9.6 using Eq. (9.12) instead of Eq. (9.11), which we used in Example 9.6. Now we assume the potential of the two plates to be constant, and therefore express the system energy in the form W_e(x) = C(x)V²/2, so that F_x = (V²/2) dC(x)/dx. The result is easier to obtain than in Example 9.6.

Questions and problems: P9.17 to P9.20
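The scanned copy above loses most of the displayed equations; as a hedged reconstruction consistent with the surrounding text, the key formulas of Section 9.5 and the parallel-plate example can be written in LaTeX as follows (the constant-voltage algebra is my own consistency check, not part of the original):

\begin{align*}
  dA_{\text{el.force}} &= F_x\,dx, \tag{9.10}\\
  F_x &= -\frac{dW_e(x)}{dx} \quad \text{(charges kept constant)}, \tag{9.11}\\
  F_x &= +\frac{dW_e(x)}{dx} \quad \text{(potentials kept constant)}. \tag{9.12}
\end{align*}

% Parallel-plate capacitor with plate area S, separation x, permittivity \varepsilon
% (Example 9.4 at constant charge, Example 9.8 at constant potential):
\begin{align*}
  W_e(x) &= \frac{Q^2}{2C(x)} = \frac{Q^2 x}{2\varepsilon S}
    \;\Rightarrow\; F_x = -\frac{Q^2}{2\varepsilon S},\\
  W_e(x) &= \frac{C(x)V^2}{2} = \frac{\varepsilon S V^2}{2x}
    \;\Rightarrow\; F_x = +\frac{dW_e}{dx} = -\frac{\varepsilon S V^2}{2x^2}.
\end{align*}

With Q = C(x)V = εSV/x the two expressions have the same magnitude, and the negative sign in both cases indicates an attractive force between the plates, as the text states.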

The buttons on my phone are worn thin. I don't think that I knew the chaos I was getting in. But I've broken all my promises to you. I've broken all my promises to you.

Last updated: 2014-04-12
Subject: General
Usage frequency: 1
Quality:
Reference: Anonymous
Warning: Contains invisible HTML formatting


HAPPINESS ONLY COMES TO FEW, BUT IN YOU MAY FIND IT IS LOVE SOMEONE NEW

Last updated: 2014-04-08
Subject: General
Usage frequency: 1
Quality:
Reference: Anonymous
Warning: Contains invisible HTML formatting

Venice is a city in northern Italy. It is the capital of the Veneto region. Together with Padua, the city is included in the Padua-Venice Metropolitan Area. Venice has been known as the "Queen of the Adriatic", "City of Water", "City of Bridges", and "The City of Light". The city stretches across 117 small islands in the marshy Venetian Lagoon along the Adriatic Sea in northeast Italy. Venice is world-famous for its canals. It is built on an archipelago of 117 islands formed by about 150 canals in a shallow lagoon. The islands on which the city is built are connected by about 400 bridges. In the old center, the canals serve the function of roads, and every form of transport is on water or on foot. You can ride a gondola there. It is the classical Venetian boat which nowadays is mostly used for tourists, or for weddings, funerals, or other ceremonies. Now, most Venetians travel by motorised waterbuses ("vaporetti"), which ply regular routes along the major canals and between the city's islands. The city also has many private boats. The only gondolas still in common use by Venetians are the traghetti, foot-passenger ferries crossing the Grand Canal at certain points without bridges. You can see the city's landmarks such as Piazza San Marco, Palazzo Contarini del Bovolo, Saint Mark's Cathedral or the villas of the Veneto. The villas of the Veneto, rural residences for nobles during the Republic, are one of the most interesting aspects of the Venetian countryside. They are surrounded by elegant gardens, suitable for fashionable parties of high society. The city is also well known for its beautiful and romantic views, especially at night.

Indonesian english translation google

Last updated: 2014-03-11
Subject: General
Usage frequency: 1
Quality:
Reference: Anonymous

Predicting Australian Takeover Targets: A Logit Analysis

Maurice Peat* and Maxwell Stevenson*
* Discipline of Finance, School of Finance, The University of Sydney

Abstract

Positive announcement-day adjusted returns to target shareholders in the event of a takeover are well documented. Investors who are able to accurately predict firms that will be the subject of a takeover attempt should be able to earn these excess returns. In this paper a series of probabilistic regression models were developed that use financial statement variables suggested by prior research as explanatory variables. The models, applied to in-sample and out-of-sample data, led to predictions of takeover targets that were better than chance in all cases. The economic outcomes resulting from holding a portfolio of the predicted targets over the prediction period are also analysed.

Keywords: takeovers, targets, prediction, classification, logit analysis
JEL Codes: G11, G17, G23, G34

This is a draft copy and not to be quoted.

1. Introduction

In this paper our aim is to accurately predict companies that will become takeover targets. Theoretically, if it is possible to predict takeovers with accuracy greater than chance, it should be possible to generate abnormal returns from holding a portfolio of the predicted targets. Evidence of abnormal returns of 20% to 30% made by shareholders of firms on announcement of a takeover bid is why prediction of these events is of interest to academics and practitioners alike.

The modelling approach adopted in this study was based on the discrete choice approach used by Palepu (1986) and Barnes (1999). The models were based on financial statement information, using variables suggested by the numerous theories that have been put forward to explain takeover activity. The performance of the models was evaluated using statistical criteria. Further, the predictions from the models were rated against chance and economic criteria through the formation and tracking of a portfolio of predicted targets. Positive results were found under both evaluation criteria.

Takeover prediction studies are a logical extension of the work of Altman (1968), who used financial statement information to explain corporate events. Early studies by Simkowitz and Monroe (1971) and Stevens (1973) were based on the Multiple Discriminant Analysis (MDA) technique. Stevens (1973) coupled MDA with factor analysis to eliminate potential multicollinearity problems and reported a predictive accuracy of 67.5%, suggesting that takeover prediction was viable. Belkaoui (1978) and Rege (1984) conducted similar analyses in Canada, with Belkaoui (1978) confirming the results of these earlier researchers and reporting a predictive accuracy of 85%. Concerns were raised by Rege (1984), who was unable to predict with similar accuracy. These concerns were also raised in research by others such as Singh (1971) and Fogelberg, Laurent, and McCorkindale (1975).

Reacting to the wide criticism of the MDA method, researchers began to use discrete choice models as the basis of their research. Harris et al. (1984) used probit analysis to develop a model and found that it had extremely high explanatory power, but were unable to discriminate between target and non-target firms with any degree of accuracy. Dietrich and Sorensen (1984) continued this work using a logit model and achieved a classification accuracy rate of 90%. Palepu (1986) addressed a number of methodological problems in takeover prediction.
He suggested the use of state-based prediction samples, where a number of targets were matched with non-targets for the same sample period. While this approach was appropriate for the estimation sample, it exaggerated accuracies within the predictive samples because the estimated error rates in these samples were not indicative of error rates within the population of firms. He also proposed the use of an optimal cut-off point derivation which considered the decision problem at hand. On the basis of this rectified methodology, along with the application of a logit model to a large sample of US firms, Palepu (1986) provided evidence that the ability of the model was no better than a chance selection of target and non-target firms. Barnes (1999) also used the logit model and a modified version of the optimal cut-off rule on UK data. His results indicated that a portfolio of predicted targets may have been consistent with Palepu's finding, but he was unable to document this in the UK context due to model inaccuracy.

In the following section the economic explanations underlying takeover activity are discussed. Section 3 outlines our takeover hypotheses and describes the explanatory variables that are used in the modelling procedure. The modelling framework and data used in the study are contained in Section 4, while the results of our model estimation, predictions, classification accuracy and portfolio economic outcomes are found in Section 5. We conclude in Section 6.

2. Economic explanations of takeover activity

Economic explanations of takeover activity have suggested the explanatory variables that were included in this discrete choice model development study. Jensen and Meckling (1976) posited that agency problems occurred when decision making and risk bearing were separated between management and stakeholders (stakeholders are generally considered to be both stock and bond holders of a corporation), leading to management inefficiencies. Manne (1965) and Fama (1980) theorised that a mechanism existed that ensured management acted in the interests of the vast number of small non-controlling shareholders (we take the interests of shareholders to be in the maximization of the present value of the firm). They suggested that a market for corporate control existed in which alternative management teams competed for the rights to control corporate assets. The threat of acquisition aligned management objectives with those of stakeholders, as managers are terminated in the event of an acquisition in order to rectify inefficient management of the firm's assets. Jensen and Ruback (1983) suggested that both capital gains and increased dividends are available to an acquirer who could eliminate the inefficiencies created by target management, with the attractiveness of the firm for takeover increasing with the level of inefficiency.

Jensen (1986) looked at the agency costs of free cash flow, another form of management inefficiency. In this case, free cash flow referred to cash flows in excess of positive net present value (NPV) investment opportunities and normal levels of financial slack (retained earnings). The agency cost of free cash flow is the negative NPV value that arises from investing in negative NPV projects rather than returning funds to investors. Jensen (1986) suggested that the market value of the firm should be discounted by the expected agency costs of free cash flow.
These, he argued, were the costs that could be eliminated either by issuing debt to fund an acquisition of stock, or through merger with, or acquisition of, a growing firm that had positive NPV investments and required the use of these excess funds. Smith and Kim (1994) combined the financial pecking order argument of Myers and Majluf (1984) with the free cash flow argument of Jensen (1986) to create another motivational hypothesis, which postulated that inefficient firms forgo profitable investment opportunities because of informational asymmetries. Further, Jensen (1986) argued that, due to information asymmetries that left shareholders less informed, management was more likely to undertake negative NPV projects rather than returning funds to investors. Smith and Kim (1994) suggested that some combination of these firms, like an inefficient firm and an efficient acquirer, would be the optimal solution to the two respective resource allocation problems. This, they hypothesised, would result in a market value for the combined entity that exceeded the sum of the individual values of the firms. This is one form of financial synergy that can arise in merger situations.

Another form of financial synergy is that which results from a combination of characteristics of the target and bidding firms. Jensen (1986) suggested that an optimal capital structure exists, whereby the marginal benefits and marginal costs of debt are equal. At this point, the cost of capital for a firm is minimised. This suggested that increases in leverage will only be viable for those firms who have free cash flow excesses, and not for those which have an already high level of debt. Lewellen (1971) proposed that in certain situations, financial efficiencies may be realized without the realization of operational efficiencies. These efficiencies relied on a simple Miller and Modigliani (1964) model. It proposed that, in the absence of corporate taxes, an increase in a firm's leverage to reasonable levels would increase the value of the equity share of the company due to a lower cost of capital. A merger of two firms, where either one or both had not utilised their borrowing capacity, would result in a financial gain. This financial gain would represent a valuation gain above that of the sum of the equity values of the individual firms. However, this result is predicated on the assumption that the firms need to either merge or be acquired in order to achieve this result.

Merger waves are well documented in the literature. Gort (1969) suggested that industry disturbances are the source of these merger waves, his argument being that they occurred in response to discrepancies between the valuation of a firm by shareholders and potential acquirers. As a consequence of economic shocks (such as deregulation, changes in input or output prices, etc.), expectations concerning future cash flow became more variable. This results in an increased probability that the value the acquirer places on a potential target is greater than its current owner's valuation. The result is a possible offer and subsequent takeover. Mitchell and Mulherin (1996), in their analysis of mergers and acquisitions in the US during the 1980s, provided evidence that mergers and acquisitions cluster by industries and time.
Their analysis confirmed the theoretical and empirical evidence provided by Gort (1969) and provided a different view, suggesting that mergers, acquisitions, and leveraged buyouts were the least-cost method of adjusting to the economic shocks borne by an industry. These theories suggested a clear theoretical base on which to build takeover prediction models. As a result, eight main hypotheses for the motivation of a merger or acquisition have been formulated, along with twenty-three possible explanatory variables to be incorporated into predictive models.

3. Takeover hypotheses and explanatory variables

The most commonly accepted motivation for takeovers is the inefficient management hypothesis (also known as the disciplinary motivation for takeovers). The hypothesis states that inefficiently managed firms will be acquired by more efficiently managed firms. Accordingly,

H1: Inefficient management will lead to an increased likelihood of acquisition.

Explanatory variables suggested by this hypothesis as candidates to be included in the specifications of predictive models included:
1. ROA (EBIT/Total Assets – Outside Equity Interests)
2. ROE (Net Profit After Tax/Shareholders Equity – Outside Equity Interests)
3. Earnings Before Interest and Tax Margin (EBIT/Operating Revenue)
4. EBIT/Shareholders Equity
5. Free Cash Flow (FCF)/Total Assets
6. Dividend/Shareholders Equity
7. Growth in EBIT over the past year, along with an activity ratio,
8. Asset Turnover (Net Sales/Total Assets)

While there are competing explanations for the effect that a firm's undervaluation has on the likelihood of its acquisition by a bidder, there is consistent agreement across all explanations that the greater the level of undervaluation, the greater the likelihood a firm will be acquired. The hypothesis that embodies the impact of these competing explanations is as follows:

H2: Undervaluation of a firm will lead to an increased likelihood of acquisition.

The explanatory variable suggested by this hypothesis is:
9. Market to book ratio (Market Value of Securities/Net Assets)

The Price Earnings (P/E) ratio is closely linked to the undervaluation and inefficient management hypotheses. The impact of the P/E ratio on the likelihood of acquisition is referred to as the P/E hypothesis:

H3: A high Price to Earnings Ratio will lead to a decreased likelihood of acquisition.

It follows from this hypothesis that the P/E ratio is a likely candidate as an explanatory variable for inclusion in models for the prediction of potential takeover targets:
10. Price/Earnings Ratio

The growth resource mismatch hypothesis is the fourth hypothesis. However, the explanatory variables used in models specified to examine this hypothesis capture growth and resource availability separately. This gives rise to the following:

H4: Firms which possess low growth/high resource combinations or, alternatively, high growth/low resource combinations will have an increased likelihood of acquisition.

The explanatory variables suggested by this hypothesis are:
11. Growth in Sales (Operating Revenue) over the past year
12. Capital Expenditure/Total Assets
13. Current Ratio (Current Assets/Current Liabilities)
14. (Current Assets – Current Liabilities)/Total Assets
15. Quick Assets (Current Assets – Inventory)/Current Liabilities

The behaviour of some firms of paying out less of their earnings in order to maintain enough financial slack (retained earnings) to exploit future growth opportunities as they arise has led to the dividend payout hypothesis:

H5: High payout ratios will lead to a decreased likelihood of acquisition.

The obvious explanatory variable suggested by this hypothesis is:
16. Dividend Payout Ratio

Rectification of capital structure problems is an obvious motivation for takeovers. However, there has been some argument as to the impact of low or high leverage on acquisition likelihood. This paper proposes a hypothesis known as the inefficient financial structure hypothesis, from which the following hypothesis is derived:

H6: High leverage will lead to a decreased likelihood of acquisition.

The explanatory variables suggested by this hypothesis include:
17. Net Gearing (Short Term Debt + Long Term Debt)/Shareholders Equity
18. Net Interest Cover (EBIT/Interest Expense)
19. Total Liabilities/Total Assets
20. Long Term Debt/Total Assets

The existence of Merger and Acquisition (M&A) activity waves, where takeovers are clustered in wave-like profiles, has been proposed as an indicator of changing levels of M&A activity over time. It has been argued that the identification of M&A waves, with the corresponding improved likelihood of acquisition when the wave is surging, captures the effect of the rate of takeover activity at specific points in time, and serves as valuable input into takeover prediction models. Consistent with M&A activity waves and their explanation as a motivation for takeovers is the industry disturbance hypothesis:

H7: Industry merger and acquisition activity will lead to an increased likelihood of acquisition.

An industry relative ratio of takeover activity is suggested by this hypothesis:
21. Industry relative takeover activity, where the numerator is the total bids launched in a given year and the denominator is the average number of bids launched across all the industries in the ASX.

Size will have an impact on the likelihood of acquisition. It seems plausible that smaller firms will have a greater likelihood of acquisition due to larger firms generally having fewer bidding firms with the resources to acquire them. This gives rise to the following hypothesis:

H8: The size of a firm will be negatively related to the likelihood of acquisition.

Explanatory variables that can be employed to control for size include:
22. Log (Total Assets)
23. Net Assets

4. Data and Method

The data requirements for the variables defined above are derived from the financial statements and balance sheet date price information for Australian listed companies. The financial statement information was sourced from the AspectHuntley database, which includes annual financial statement data for all ASX listed companies between 1995 and 2006. The database includes industry classifications for all firms included in the construction of industry relative ratios. Lists of takeover bids and their respective success were obtained from the Connect4 database. This information enabled the construction of variables for relative merger activity between industries. Additionally, stock prices from the relevant balance dates of all companies were sourced from the AspectHuntley online database, the SIRCA Core Price Data Set and Yahoo! Finance.
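To make the ratio definitions in Section 3 concrete against the financial statement data described above, here is a minimal Python sketch computing a few of them; the dictionary field names are hypothetical placeholders, not the AspectHuntley column names, and the ROA definition follows the reading of item 1 as EBIT over total assets net of outside equity interests.

import typing

def explanatory_ratios(fs: typing.Dict[str, float]) -> typing.Dict[str, float]:
    """Compute a handful of the Section 3 explanatory variables from one firm's
    financial statement items (field names are illustrative placeholders)."""
    return {
        # 1. ROA: EBIT / (Total Assets - Outside Equity Interests), as read from item 1
        "roa": fs["ebit"] / (fs["total_assets"] - fs["outside_equity_interests"]),
        # 3. EBIT margin: EBIT / Operating Revenue
        "ebit_margin": fs["ebit"] / fs["operating_revenue"],
        # 9. Market to book: Market Value of Securities / Net Assets
        "market_to_book": fs["market_value_securities"] / fs["net_assets"],
        # 17. Net gearing: (Short Term Debt + Long Term Debt) / Shareholders Equity
        "net_gearing": (fs["short_term_debt"] + fs["long_term_debt"]) / fs["shareholders_equity"],
    }

A usage example would pass one firm-year of statement data as a dict and collect the returned ratios into the estimation data set alongside the remaining variables.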
4.1 The Discrete Choice Modelling Framework

The modelling procedure used is the nominal logit model, made popular in the bankruptcy prediction literature by Ohlson (1980) and, subsequently, in the takeover prediction literature by Palepu (1986). Logit models are commonly utilised for dichotomous state problems. The model is given by equations [1] to [3] below. The logit model was developed to overcome the rigidities of the Linear Probability Model in the presence of a binary dependent variable. Equations [1] and [2] show the existence of a linear relationship between the log-odds ratio (otherwise known as the logit, Li) and the explanatory variables. However, the relationship between the probability of the event and the explanatory variables is non-linear. This non-linear relationship has a major advantage that is demonstrated in equation [3]. Equation [3] measures the change in the probability of the event as a result of a small increment in the explanatory variables. When the probability of the event is high or low, the incremental impact of a change in an explanatory variable on the likelihood of the event will be compressed, requiring a large change in the explanatory variables to change the classification of the observation. If a firm is clearly classified as a target or non-target, a large change in the explanatory variables is required to change its classification.

4.2 Sampling Schema

Two samples were used in the model building and evaluation procedure. They were selected to mimic the problem faced by a practitioner attempting to predict takeover targets into the future. The first sample was used to estimate the model and to conduct in-sample classification. It was referred to as the Estimation Sample. This sample was based on financial data for the 2001 and 2002 financial years for firms that became takeover targets, as well as selected non-targets, between January 2003 and December 2004. The lag in the dates allows for the release of financial information as well as allowing for the release of financial statements for firms whose balance dates fall after the 30th of June. Following model estimation, the probability of a takeover offer was estimated for each firm in the entire sample of firms between January 2003 and December 2004 using the estimated model and each firm's 2001 and 2002 financial data. Ex-post predictive ability for each firm was then assessed.

A second sample was then used to assess the predictive accuracy of the model estimated with the estimation sample data. It is referred to as the Prediction Sample. This sample includes the financial data for the 2003 and 2004 financial years, which will be used in conjunction with target and non-target firms for the period January 2005 to December 2006. Using the model estimated from the 2001 and 2002 financial data, the sample of firms from 2005 and 2006 were fitted to the model using their 2003 and 2004 financial data. They were then classified as targets or non-targets using the 2005 and 2006 data. This sampling methodology allows for the evaluation of ex-ante predictive ability rather than ex-post classification accuracy. A diagrammatic explanation of the sample data used for both model estimation and prediction can be found below in Figure 1, and in tabular form in Table 1.
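Equations [1] to [3] referenced in Section 4.1 are missing from this copy. A hedged LaTeX reconstruction of the standard nominal logit formulation they describe, with x_i the vector of explanatory variables for firm i and beta the coefficient vector (notation assumed, not taken from the paper), is:

\begin{align*}
  L_i &= \ln\!\left(\frac{P_i}{1-P_i}\right) = \beta_0 + \boldsymbol{\beta}'\mathbf{x}_i, \tag{1}\\
  P_i &= \frac{1}{1 + e^{-(\beta_0 + \boldsymbol{\beta}'\mathbf{x}_i)}}, \tag{2}\\
  \frac{\partial P_i}{\partial x_{ik}} &= \beta_k\,P_i\,(1-P_i). \tag{3}
\end{align*}

Equation [3] shows the compression effect noted in Section 4.1: when P_i is close to 0 or 1, the factor P_i(1-P_i) is small, so a large change in the explanatory variables is required to move a firm across the classification cut-off.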
Figure 1: Timeline of sample data used in model estimation and prediction.

Table 1: Sample data used in model estimation and prediction
Sample            | Financial Data | Classification Period
Estimation Sample | 2001 and 2002  | 2003 and 2004
Prediction Sample | 2003 and 2004  | 2005 and 2006

For model estimation, a technique known as state-based sampling was used. Allison (2006) suggested the use of this sampling approach in order to minimise the standard error of the estimated parameters when the dependent variable states were unequally distributed in the population. All the target firms were included in the estimation sample, along with an equal number of randomly selected non-target firms for the same period. Targets in the estimation sample were randomly paired with the sample of non-target firms for the same period over which financial data was measured. (This approach differs from matched-pair samples, where targets are matched to non-targets on the basis of variables such as industry and/or size.)

4.3 Assessing the Estimated Model and its Predictive Accuracy

Walter (1994), Zanakis and Zopounidis (1997), and Barnes (1999) utilised the Proportional Chance Criterion and the Maximum Chance Criterion to assess the predictions of discriminant models relative to chance. These criteria are also applicable to the discrete choice modelling exercise that is the focus of this study and, accordingly, are discussed more fully below.

4.3.1 Proportional Chance Criterion

To assess the classification accuracy of the estimated models in this study, the Proportional Chance Criterion was utilized to assess whether the overall classifications from the models were better than that expected by chance. This criterion tests whether the models jointly classify target and non-target firms better than would be expected by chance. Although the criterion does not indicate the source of the classification accuracy of the model (that is, whether the model accurately predicts targets or non-targets), it does allow for comparison with alternative models. A simple Z-score calculation formed the basis of a joint test of the null hypothesis that the model was unable to jointly classify targets and non-targets better than chance. Under a chance selection, we would expect the proportions of targets and non-targets to be jointly equal to their frequencies in the population under consideration. The null and alternative hypotheses, along with the test statistic, are given below.

H0: The model is unable to classify targets and non-targets jointly better than chance.
H1: The model is able to classify targets and non-targets jointly better than chance.

If the statistic is significant, we reject the null hypothesis and conclude that the model can classify target and non-target firms jointly better than chance.

4.3.2 Maximum Chance Criterion

While the Proportional Chance Criterion indicated whether a model jointly classified target and non-target firms better than chance, it did not indicate the source of the predictive ability. However, under the Maximum Chance Criterion, a similar test of hypotheses does indicate whether a model has probability greater than chance in classifying either a target or a non-target firm. The Z-score statistic to test the null hypothesis that a model is unable to classify targets better than chance is given below.
It is based on the Concentration Ratio defined by Powell (2001), which measures the maximum potential chance of correct classification of a target, or the proportion of correctly classified targets from those firms predicted to be targets.

H0: The model is unable to classify targets better than chance.
H1: The model is able to classify targets better than chance.

In order to assess the classification accuracy of the models in the Estimation and Prediction Samples, these two criteria were used. The focus of this study was on the use of the Maximum Chance Criterion for targets, as it assessed whether the number of correctly predicted targets exceeded the population of predicted targets (that is, the ratio A11/TP1 in Table 2). The Concentration Ratio was the ratio advocated by Barnes (1999) for maximising returns.

4.3.3 Industry Relative Ratios

Platt and Platt (1990) advocated the use of industry relative variables to increase the predictive accuracy of bankruptcy prediction models on the premise that these variables enabled more accurate predictions across industries and through time. This argument was based on two main contentions. Firstly, average financial ratios are inconsistent across industries and reflect the relative efficiencies of production commonly employed in those industries. The second is that average financial ratios are inconsistent throughout time as a result of variable industry performance due to economic conditions and other factors. Platt and Platt (1990) argued that firms from different industries or different time periods could not be analysed without some form of industry adjustment. In this study both raw and industry adjusted financial ratios were used to determine the benefits of industry adjustment.

There are four different model specifications. One was based on raw financial ratios for the single year prior to the sample period (the Single Raw Model). Another was based on averaged raw financial ratios for the two years prior to the acquisition period (the Combined Raw Model). A third specification was based on industry adjusted financial ratios for the single year prior to the sample period (the Single Adjusted Model), while the fourth was based on averaged industry adjusted financial ratios for the two years prior to the sample period (the Combined Adjusted Model). The purpose of using averages was to reduce random fluctuations in the financial ratios of the firms under analysis, and to capture permanent rather than transitory values. This approach was proposed by Walter (1994).

Most researchers used industry relative ratios calculated by scaling firms' financial ratios using the industry average, as defined by equation [4] below. Under this procedure all ratios were standardised to unity. Industry relative ratios such as ROA or ROE that were greater than unity indicated industry over-performance, while those less than unity were consistent with underperformance. Problems were encountered when the industry average value was negative. In this case, those firms that underperformed the industry average also had industry relative ratios greater than one. This was the result of a large negative number being divided by a smaller negative number. Additionally, those firms that over-performed the negative industry average ratio, but still retained a negative financial ratio, had a ratio less than one.
This ambiguity in the calculation of industry relative ratios had implications for those models in this study that included variables with negative industry averages for some ratios. This problem may explain the inability of researchers in the recent literature who utilised industry adjustments to accurately predict target and non-target firms, and may have caused the Barnes (1999) model to predict no takeover targets at all. An alternative methodology was therefore implemented to account for negative industry averages. Equation [5] below uses the difference between the individual firm's ratio and the industry average ratio, divided by the absolute value of the industry average ratio. As a result, all ratios are standardised to zero rather than one, and problems relating to the sign of the industry relative ratio are corrected: under-performance of the industry results in an industry relative ratio less than zero, while over-performance returns a ratio greater than zero. This approach is similar to the variable scaling methods widely documented in the Neural Network prediction literature. It was used for the two models based on industry relative variables, with industry adjustment based on the 24-industry classification of the old ASX.
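Equation [5] is likewise not reproduced in the source text. From the description above, the alternative scaling presumably takes the form

\[ R^{rel}_{ij} = \frac{R_{ij} - \bar{R}_{j}}{\lvert \bar{R}_{j} \rvert}. \]

Re-using the hypothetical numbers from the earlier illustration (industry average ROA of -0.10), the under-performing firm with an ROA of -0.30 now receives (-0.30 - (-0.10)) / 0.10 = -2.0, and the over-performing firm with an ROA of -0.05 receives (-0.05 - (-0.10)) / 0.10 = 0.5, so the signs correctly reflect relative performance.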
4.4 Calculation of Optimal Cut-off Probabilities for Classification

In the case of a logit model, the predictive output for an input sample of the explanatory variables is a probability with a value between 0 and 1. This is the predicted probability of an acquisition offer being made for a specific firm within the prediction period. What is needed is a method to convert these predicted probabilities of an acquisition offer into a binary prediction of becoming a target or not. These methods are known as optimal cut-off probability calculations, and two main methodologies were implemented in this study.

4.4.1 Minimisation of Error Probabilities (Palepu, 1986)

Understanding the calculation of the optimal cut-off probability requires an understanding of Type 1 and Type 2 errors. A Type 1 error occurs when a firm is predicted to become a takeover target but does not (outcome A01 in Table 2 below), while a Type 2 error occurs when a firm is predicted not to become a target but actually becomes one (outcome A10). Palepu (1986) assumed that the costs of these two types of error were identical. To calculate the optimal cut-off probability, he used histograms to plot the predicted probabilities of acquisition offers for targets and non-targets separately on the same graph. The optimal cut-off probability, which minimises the total error rate, occurs at the intersection of the two conditional distributions. Firms with predicted probabilities of acquisition offers above this cut-off were classified as targets, and those with probabilities below the cut-off were classified as non-targets.

Table 2 An outcome matrix for a standard classification problem

Actual Outcome    Predicted Non-Target (0)    Predicted Target (1)    Total
Non-Target (0)    A00                         A01                     TA0
Target (1)        A10                         A11                     TA1
Total             TP0                         TP1                     T

4.4.2 Minimisation of Error Costs (Barnes, 1999)

Palepu (1986) assumed equal costs for Type 1 and Type 2 errors. However, it has been suggested that, because investment is less likely in predicted non-targets, the cost of investing in the equity of a firm which did not become a takeover target (Type 1 error) is greater than the cost of not investing in the equity of a firm that became a takeover target (Type 2 error). Accordingly, Barnes (1999) proposed minimisation of the Type 1 error in order to maximise returns from an investment in predicted targets. From Table 2, it can be seen that minimisation of the Type 1 error is equivalent to minimisation of the number of incorrectly predicted targets, A01, or alternatively, maximisation of the number of correctly predicted targets, A11. It follows that a cut-off probability is needed that maximises the proportion of predicted targets in the portfolio that became actual targets, that is, one that maximises the ratio of A11 to TP1 in Table 2.

Figure 2 below is an idealised representation of the Type 1 and Type 2 errors associated with the Palepu and Barnes cut-off probability methodologies. As the purpose of this paper was to replicate the problem faced by a practitioner, unawareness of the actual outcomes of the prediction process was assumed. Further, the probabilities that companies will become targets were derived from a prediction model estimated using estimation data on known targets and non-targets. The companies for which these probabilities were calculated comprised the Prediction Sample (recall Table 1).

Figure 2 Idealised Palepu and Barnes cut-off probabilities (relative frequency of non-targets and targets plotted against estimated acquisition offer probability, showing the Palepu and Barnes cut-offs and the associated Type 1 and Type 2 error regions)

For the calculation of the optimal cut-off probability according to Palepu, a histogram of predicted acquisition offer probabilities for targets and non-targets was created from the Estimation Sample, and the error minimisation procedure detailed in Section 4.4.1 was followed. To calculate the optimal cut-off under the Barnes methodology outlined in Section 4.4.2, the ratio A11/TP1 was calculated for cut-off probabilities between 0 and 1 to determine the maximum point; a simple grid search from 0 to 1 in increments of 0.05 was used. The classification and prediction accuracies under these two methods of calculating cut-off probabilities were compared for all four models considered in this study.
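As an illustration of the mechanics described in Sections 4.4.1 and 4.4.2, the sketch below shows how predicted probabilities could be converted into the Table 2 outcome matrix and how a Barnes-style cut-off could be chosen by grid search. This is a minimal reconstruction for exposition only; the variable names (probs, actual) are hypothetical and the code is not the authors' own.

```python
# Minimal sketch (not the authors' code): Table 2 outcome matrix and a
# Barnes-style cut-off chosen by grid search over A11/TP1.
import numpy as np

def outcome_matrix(probs, actual, cutoff):
    """Return (A00, A01, A10, A11) for a given cut-off probability.

    probs  : predicted acquisition-offer probabilities (floats in [0, 1])
    actual : realised outcomes (1 = became a target, 0 = did not)
    """
    predicted = (probs >= cutoff).astype(int)
    a00 = int(np.sum((actual == 0) & (predicted == 0)))
    a01 = int(np.sum((actual == 0) & (predicted == 1)))   # Type 1 errors
    a10 = int(np.sum((actual == 1) & (predicted == 0)))   # Type 2 errors
    a11 = int(np.sum((actual == 1) & (predicted == 1)))   # correctly predicted targets
    return a00, a01, a10, a11

def barnes_cutoff(probs, actual, step=0.05):
    """Grid search from 0 to 1 in `step` increments, maximising A11/TP1."""
    best_cutoff, best_ratio = 0.0, -1.0
    for cutoff in np.arange(0.0, 1.0 + step / 2, step):
        _, a01, _, a11 = outcome_matrix(probs, actual, cutoff)
        tp1 = a01 + a11                      # number of firms predicted to be targets
        ratio = a11 / tp1 if tp1 > 0 else 0.0
        if ratio > best_ratio:
            best_cutoff, best_ratio = cutoff, ratio
    return best_cutoff, best_ratio
```

Firms with predicted probabilities at or above the chosen cut-off would be classified as targets, mirroring the classification rule described above.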
5. Results

5.1 Multicollinearity Issues

An examination of the correlation matrix and Variance Inflation Factors (VIFs) of the Estimation Sample indicated that five variables needed to be eliminated; they are listed in Table 3. That these variables should contribute to the multicollinearity problem was not a surprise, given the large number of potential explanatory variables measuring similar attributes suggested by the hypothesised motivations for takeover. These variables had correlation coefficients that exceeded 0.8 or VIFs that exceeded 10. Exclusion of these five variables eliminated the significant correlations in the variance/covariance matrix and reduced the VIF values of all the remaining variables to below 10. The resultant reduced variable set was used in the backward stepwise logit models estimated and reported in the following sub-section.

Table 3 Variables Removed Due to Multicollinearity

ROE (NPAT/Shareholders Equity – Outside Equity Interests)
FCF/Total Assets
Current Ratio (Current Assets/Current Liabilities)
(Current Assets – Current Liabilities)/Total Assets
Total Liabilities/Total Assets

5.2 Backward Stepwise Regression Results

Using the remaining variables after controlling for multicollinearity, backward stepwise logistic regressions were performed for each of the four model specifications. Consistent with the methodology of Walter (1994), the significance level for retention of variables in the analysis was set at 0.15. The results for these models, estimated using a common set of target and non-target firms, are presented in Tables 4 to 7, with the results for the combined adjusted model in Table 7 described in more detail in the following sub-section.6 The backward stepwise analysis for this model required seven steps, eliminating six of the fifteen starting variables and retaining nine significant variables.

6 Detailed results for each of the models represented in Tables 4 to 7 are available from the authors on request.

Tables 4 to 7 Backward stepwise results for the Single Raw, Single Adjusted, Combined Raw and Combined Adjusted Models (parameter estimates with Prob > Chi Sq in parentheses). Only fragments of Tables 4 to 6 survive in the source text, including an intercept of -13.14 for the Single Raw Model (Table 4), an intercept of -12.36 for one of the other models, and a block of estimates reading: Intercept -0.58 (0.02); Asset Turnover (Net Sales/Total Assets) -0.59 (0.03); Capital Expenditure/Total Assets 0.34 (0.07); Dividend Payout Ratio -0.22 (0.07); Long Term Debt/Total Assets -0.21 (0.11); Ln (Total Assets) 12.07. The rows recoverable for the Combined Adjusted Model (Table 7) are:

Variable                                                          Parameter Estimate (Prob > Chi Sq)
Intercept                                                         -0.04 (0.92)
ROA (EBIT/Total Assets – Outside Equity Interests)                0.28 (0.09)
Asset Turnover (Net Sales/Total Assets)                           -0.54 (0.05)
Capital Expenditure/Total Assets                                  0.69 (<0.01)
Quick Assets ((Current Assets – Inventory)/Current Liabilities)   0.93 (0.02)
Dividend Payout Ratio                                             -0.34 (0.02)
Long Term Debt/Total Assets                                       -0.32 (0.07)
Merger Wave Dummy                                                 -0.59 (0.06)
Ln (Total Assets)                                                 13.34 (<0.01)
Net Assets                                                        -0.21 (0.07)

These results provided evidence concerning six of the eight hypothesised motivations for takeover discussed previously in Section 3. The growth resource mismatch hypothesis was only significant in the two adjusted models, suggesting that growth should be measured relative to an industry benchmark when attempting to discriminate between target and non-target firms.

5.3 Classification Analysis

While the analysis of the final models was of theoretical interest, the primary aim of this paper was to evaluate their classification accuracy. For the purposes of classification, the models were re-estimated using the Estimation Sample with all variables included; the complex relationships between all the variables were assumed to provide the ability to discriminate between target and non-target firms. Using financial data from 2001 and 2002, the models were estimated on the basis of 62 targets matched with 62 non-targets, where the targets were identified between January 2003 and December 2004. Following estimation of the model, an in-sample fit was sought for the entire sample of the 1060 firms reporting 2001 and 2002 financial data. To proceed with classification, we derived a cut-off probability using the methods of Palepu (1986) and Barnes (1999). The graph presented in Figure 3 focuses on the combined adjusted model and the Palepu cut-off point. Using a bin range of 0.05, it shows the histograms required for the calculation of the cut-off probability; under the Palepu methodology the cut-off was approximately 0.675, the probability corresponding to the highest point of intersection of the plots of the estimated acquisition probabilities for target and non-target companies.

Figure 3 Cut-off calculations using the Palepu methodology and 0.05 histogram bin increments
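A rough sketch of the Figure 3 calculation, for exposition only: build 0.05-wide histograms of the predicted probabilities for targets and non-targets and take the point where the target frequency first reaches the non-target frequency as an approximation to the intersection of the two conditional distributions. The names and the simple crossing rule below are assumptions, not the authors' code.

```python
# Minimal sketch (assumed, not the authors' code): approximate Palepu cut-off as
# the first 0.05-wide bin in which the relative frequency of targets reaches or
# exceeds that of non-targets. `probs` and `actual` are numpy arrays as before.
import numpy as np

def palepu_cutoff(probs, actual, bin_width=0.05):
    edges = np.arange(0.0, 1.0 + bin_width, bin_width)
    # density=True gives relative frequencies, so unequal group sizes do not distort the crossing point
    target_hist, _ = np.histogram(probs[actual == 1], bins=edges, density=True)
    nontarget_hist, _ = np.histogram(probs[actual == 0], bins=edges, density=True)
    for i, (t, n) in enumerate(zip(target_hist, nontarget_hist)):
        if t >= n:
            return edges[i]        # left edge of the first bin where the curves cross
    return edges[-2]               # fallback: highest bin if the curves never cross
```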
Table 8 Summary of optimal cut-off probabilities for all models under both methodologies

Model                      Palepu cut-off    Barnes cut-off
Single Raw Model           0.725             0.85
Single Adjusted Model      0.725             0.90
Combined Raw Model         0.850             0.95
Combined Adjusted Model    0.675             0.95

The optimal cut-off probabilities derived using both the Barnes and Palepu methodologies for all four models are reported in Table 8. The optimal cut-off probabilities calculated using the Barnes methodology were significantly larger than the cut-offs calculated under the Palepu methodology for all models.7

7 As is noted in the following tables, this explains the smaller number of predicted targets under the Barnes methodology.

Table 9 below shows the outcome of applying all four models to the entire Estimation Sample using cut-offs derived under the Barnes approach. Included in this table are the outcome matrices for each of the models. An outcome of 0 indicates that the firm was not a target, or was not predicted to be a target, in the sample period; a value of 1 indicates that a firm became, or was predicted to become, a target in the sample period. On the basis of these outcome matrices, a number of performance measures were generated. The first measure is the Concentration Ratio, the Predictive Accuracy measure of the model corresponding to the Maximum Chance Criterion: the proportion of actual targets within the portfolio of predicted target firms for each model, represented by the ratio A11/TP1 from the outcome matrix depicted previously in Table 2. The next measure is the expected accuracy under a chance selection of takeover targets within the sample period (TA1/T). The last measure is the accuracy of the model relative to chance, calculated by dividing the first ratio by the second and then subtracting unity; it measures the extent to which the model exceeds the accuracy expected under a chance selection and quantifies the Proportional Chance Criterion. All three measures are expressed as percentages.

An examination of the statistics corresponding to these measures for all four models in Table 9 indicates that, for the estimation sample with a Barnes cut-off, the combined raw model was the most accurate. Of the 80 firms that this model predicted to become takeover targets in the estimation period, 19 actually became targets. This represented a prediction accuracy of 23.75%; taken relative to chance, this accuracy exceeded the benchmark by 305%.

For the purpose of comparison, the classification results for the cut-off probabilities calculated using the Palepu cut-off points are presented in Table 10. The Palepu approach produced results similar to those obtained when the Barnes methodology was used to determine the cut-off values for classification: the combined raw model was again the most accurate model for prediction, with a predictive accuracy of 19.59% and a relative-to-chance figure of 234.3%. However, as was the case for all four models, the use of this cut-off probability approach significantly reduced the Concentration Ratio and, therefore, the classification accuracy of the models under the Maximum Chance Criterion.
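As a worked check of these measures using the combined raw model figures quoted above: the Concentration Ratio is A11/TP1 = 19/80 = 23.75%; the chance accuracy implied by the share of targets in the classified sample is roughly 5.86% (62 targets among approximately 1,060 firms); and the accuracy relative to chance is 23.75%/5.86% - 1 ≈ 3.05, or about 305%, matching the figure reported in Table 9.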
Table 9 Outcome matrices for all models for classification of the Estimation Sample (Barnes cut-off probabilities)

Single Raw Model (cut-off probability = 0.85) ††
Actual Outcome   Predicted 0   Predicted 1   Total    Predictive Accuracy   Chance Accuracy   Relative to Chance
0                874           124           998      97.00%                94.15%            3.03%**
1                27            35            62       22.01%                5.85%             276.24%**
Total            901           159           1060

Single Adjusted Model (cut-off probability = 0.90) ††
Actual Outcome   Predicted 0   Predicted 1   Total    Predictive Accuracy   Chance Accuracy   Relative to Chance
0                906           88            994      96.18%                94.13%            2.18%**
1                36            26            62       22.81%                5.87%             288.59%**
Total            942           114           1056

Combined Raw Model (cut-off probability = 0.95) ††
Actual Outcome   Predicted 0   Predicted 1   Total    Predictive Accuracy   Chance Accuracy   Relative to Chance
0                935           61            996      95.60%                94.14%            1.55%**
1                43            19            62       23.75%                5.86%             305.29%**
Total            978           80            1058

Combined Adjusted Model (cut-off probability = 0.95) ††
Actual Outcome   Predicted 0   Predicted 1   Total    Predictive Accuracy   Chance Accuracy   Relative to Chance
0                938           56            994      95.33%                94.13%            1.27%*
1                46            16            62       22.22%                5.87%             278.54%**
Total            984           72            1056

†† Indicates that the overall predictions of the model are significantly better than chance at the 1% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.
* Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 5% level of significance according to the Maximum Chance Criterion.

Table 10 Outcome matrices for all models for classification of the Estimation Sample (Palepu cut-off probabilities)

Single Raw Model (cut-off probability = 0.725) ††
Actual Outcome   Predicted 0   Predicted 1   Total    Predictive Accuracy   Chance Accuracy   Relative to Chance
0                812           186           998      97.83%                94.15%            3.91%**
1                18            44            62       19.13%                5.85%             227.01%**
Total            830           230           1060

Single Adjusted Model (cut-off probability = 0.725) ††
Actual Outcome   Predicted 0   Predicted 1   Total    Predictive Accuracy   Chance Accuracy   Relative to Chance
0                787           207           994      97.52%                94.13%            3.60%**
1                20            42            62       16.87%                5.87%             187.39%**
Total            807           249           1056

Combined Raw Model (cut-off probability = 0.85) ††
Actual Outcome   Predicted 0   Predicted 1   Total    Predictive Accuracy   Chance Accuracy   Relative to Chance
0                840           156           996      97.22%                94.14%            3.27%**
1                24            38            62       19.59%                5.86%             234.30%**
Total            864           194           1058

Combined Adjusted Model (cut-off probability = 0.675) ††
Actual Outcome   Predicted 0   Predicted 1   Total    Predictive Accuracy   Chance Accuracy   Relative to Chance
0                749           245           994      97.53%                94.13%            3.61%**
1                19            43            62       14.93%                5.87%             154.34%**
Total            768           288           1056

†† Indicates that the overall predictions of the model are significantly better than chance at the 1% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.

Interestingly, while the Palepu methodology did improve the number of targets correctly predicted (A11), in doing so it also predicted a large number of non-target firms to become targets (A01). The Barnes methodology focused instead on maximising the returns from an investment in predicted targets: rather than predicting a large number of targets accurately, it aimed to improve the proportion of actual targets in the portfolio of predicted targets. Accordingly, a smaller number of targets is predicted under the Barnes methodology.
As previously noted in Section 4.3.2, the Barnes methodology coincides more with the spirit of the Maximum Chance Criterion than with the Proportional Chance Criterion. According to the Proportional Chance Criterion, all four models were able to jointly classify targets and non-targets within the estimation period significantly better than chance. Further, as revealed by the Maximum Chance Criterion, all models also classified targets individually significantly better than chance. Overall, these results indicated high model classification ability. This was expected, given that all targets in the estimation sample were used in the estimation of the model parameters.

5.4 Classification in the Prediction Period

The next step of the analysis was to assess the predictive abilities of our models using the Prediction Sample. Of the 1054 firms in this sample, 108 became targets during the prediction period. Panel A and Panel B of Table 11 report the predictions from the four estimated models using the Barnes and Palepu cut-off probability approaches respectively. Under the Barnes cut-off methodology, the Concentration Ratio indicated that the combined raw and combined adjusted models performed best of all the models, confirming the results from the estimation period. The combined adjusted model predicted 125 firms to become targets during the prediction period, of which 25 actually became targets, a prediction accuracy of 20%. Under a chance selection, we would have expected only 10.30% of the companies predicted to become targets to actually become targets; the model therefore exceeded a chance prediction by 94.18%. While Walter (1994) was able to predict 102% better than chance, other studies, including those of Palepu (1986) and Barnes (1999), were unable to achieve this level of accuracy. Industry adjustment increased predictive ability for both the single and combined models, suggesting that stability may be achieved through these adjustments. Furthermore, the combination of two years of financial data also appeared to improve predictive accuracy, suggesting that this adjustment reduces random fluctuations in the financial ratios used as input to the prediction models.

Table 11 Prediction results for all four models using the Prediction Sample and both Barnes and Palepu cut-off probabilities

Panel A: Prediction Sample (Barnes cut-off probabilities)
Model (cut-off probability)          Predictive Accuracy   Chance Accuracy   Relative to Chance
Single Raw Model (0.90)              15.09%                10.25%            47.22%*
Single Adjusted Model (0.95)         15.79%                10.27%            53.75%*
Combined Raw Model (0.85)            17.65%                10.25%            72.29%**
Combined Adjusted Model † (0.95)     20.00%                10.30%            94.18%**

Panel B: Prediction Sample (Palepu cut-off probabilities)
Model (cut-off probability)          Predictive Accuracy   Chance Accuracy   Relative to Chance
Single Raw Model (0.725)             16.83%                10.25%            64.20%*
Single Adjusted Model (0.725)        17.79%                10.27%            73.22%*
Combined Raw Model (0.85)            17.51%                10.25%            70.83%**
Combined Adjusted Model (0.675)      16.77%                10.30%            62.82%**

† Indicates that the overall predictions of the model are significantly better than chance at the 5% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.
* Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 5% level of significance according to the Maximum Chance Criterion.

The prediction results for the Palepu-derived cut-off probabilities are presented in Panel B of Table 11. A comparison of Panel A with Panel B shows that, for the single models, the Barnes cut-off probability methodology produced a lower Concentration Ratio than the Palepu methodology, whereas for the combined models the Barnes cut-offs improved the ratio; the opposite held under the Palepu cut-off probability approach. Given the better performance of the combined models in the estimation sample, this provided the rationale for using the combined models together with the Barnes methodology to calculate the optimal cut-off probabilities.

A different variable selection approach was then implemented in an attempt to improve the accuracy of the two best predictive models, namely the combined raw model and the combined adjusted model. A number of variables that had been insignificant in all estimated models were removed, and the estimation and classification procedures were repeated on the remaining variable data set.8 The classification results for the application of these models to both the estimation and prediction periods are given in Table 12.

8 The variables removed were: Growth in EBIT over the past year, Market to book ratio (Market Value of Securities/Net Assets), and the Price/Earnings Ratio.

Table 12 Application of the improved models to both the Estimation Sample and the Prediction Sample (Barnes cut-off probabilities)

Estimation Sample
Model                                                      Predictive Accuracy   Chance Accuracy   Relative to Chance
Combined Raw Model †† (less variables 7, 9 and 10)         24.66%                5.86%             320.77%**
Combined Adjusted Model †† (less variables 7, 9 and 10)    24.56%                5.87%             318.34%**

Prediction Sample
Model                                                      Predictive Accuracy   Chance Accuracy   Relative to Chance
Combined Raw Model (less variables 7, 9 and 10)            17.54%                10.25%            71.22%**
Combined Adjusted Model (less variables 7, 9 and 10)       22.45%                10.30%            118.05%**

†† Indicates that the overall predictions of the model are significantly better than chance at the 1% level of significance according to the Proportional Chance Criterion.
** Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 1% level of significance according to the Maximum Chance Criterion.
* Indicates that the prediction of targets or non-targets individually is significantly greater than chance at the 5% level of significance according to the Maximum Chance Criterion.

The elimination of these variables resulted in significant improvements in the in-sample classification accuracy using the estimation sample, with accuracies exceeding chance by well over 300%. This improvement in classification accuracy was maintained into the prediction period: the accuracy of the combined adjusted model was 118% greater than chance, a level of statistical accuracy above that reported by any similar published study in the area of takeover prediction. These results can be used to refute the claims of Barnes (1999) and Palepu (1986) that models which achieve predictive accuracies greater than chance cannot be implemented.
They further confirm the results of Walter (1994), while using a wider sample of firms. The combined adjusted model significantly outperformed the other models for predictive purposes, suggesting that it is the most appropriate model for the application of logit analysis to the prediction of takeover targets in the Australian context.

5.5 Economic Outcomes

Although the above methodology provided a statistical assessment of model performance, it says nothing about the economic usefulness of the models. Palepu (1986), Walter (1994), and Wansley et al. (1983) all implemented an equally weighted portfolio technique to assess whether their predictions of takeover targets were able to earn abnormal risk-adjusted returns. The conclusion we drew from the results of those studies was that a positive abnormal return is not guaranteed from an investment in the targets predicted by such models. The portfolios of predicted targets in two of these studies were also unrealistically large: 91 stocks in the case of Walter, and 625 in the case of Palepu. Due to the effect of transaction costs on returns, practitioners would be likely to limit themselves to smaller portfolios in the order of 10 to 15 stocks.

To assess the economic usefulness of our modelling approach, we replicated a modified version of the Palepu (1986) and Walter (1994) portfolio technique using our predicted targets. Only targets commonly predicted across all models were included in the portfolio analysis, for two reasons: first, to reduce the number of stocks to a manageable level, and second, to improve the proportion of actual targets in the portfolio. Further, we rejected the equally weighted portfolio approach on the grounds that it would be an inefficient strategy for an informed investor who possessed the results of our modelling; we reasoned that such an investor could most likely take a leveraged position through derivatives. The portfolio analysed in this study comprised 13 predicted target firms, of which 5 actually became targets. While this is a good result in itself, we sought to quantify the economic benefit of an investment in these stocks. The portfolio of predicted targets was held for the entire prediction period of 2005 and 2006, which constituted 503 trading days. Table 13 below presents the Cumulative Average Abnormal Return (CAAR, %) at 20-day intervals during the prediction period, for the full portfolio and for the sub-portfolio of actual targets.

Table 13 Cumulative Average Abnormal Returns (CAARs) for the portfolio of commonly predicted takeover targets for the Prediction Period of 2005 and 2006

Day    Portfolio (13 stocks) CAAR (%)   Actual Targets (5 stocks) CAAR (%)
20     1.38                             5.36
40     2.84                             10.50
60     -1.98                            5.58
80     -2.53                            6.11
100    -5.52                            -1.15
120    4.40                             25.16
140    3.06                             17.83
160    4.38                             20.70
180    5.51                             24.79
200    9.90                             34.82
220    7.51                             34.87
240    6.40                             29.31
260    5.04                             27.71
280    4.77                             29.64
300    4.67                             32.47
320    3.08                             33.53
340    0.73                             31.96
360    2.89                             26.62
380    5.28                             33.72
400    6.99                             32.02
420    9.78                             37.43
440    11.33                            40.22
460    57.44                            46.00
480    58.38                            47.27
500    68.90                            52.12
503    68.67*                           50.86^

The full prediction period CAAR of 68.67% was significantly greater than zero at the 1% level of significance under the Standard Abnormal Return (SAR) methodology of Brown and Warner (1985).
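For readers unfamiliar with the event-study arithmetic behind Table 13, the sketch below shows one conventional way to compute a CAAR series: abnormal returns are realised returns minus benchmark expected returns, averaged across the portfolio each day and then cumulated. The array names are hypothetical and the benchmark model is left abstract; the authors' exact implementation of the Brown and Warner (1985) methodology may differ.

```python
# Minimal sketch (assumed, not the authors' code) of a CAAR calculation.
import numpy as np

def caar_series(realised, expected):
    """Cumulative average abnormal returns.

    realised, expected : arrays of shape (days, stocks) holding realised returns
    and benchmark expected returns for each stock in the portfolio.
    """
    abnormal = realised - expected          # AR_it for each stock and day
    aar = abnormal.mean(axis=1)             # AAR_t: cross-sectional average each day
    return np.cumsum(aar)                   # CAAR_t cumulated over the holding period

# Example: report the CAAR (%) at 20-day intervals over a 503-day holding period.
# caar = caar_series(realised, expected)
# print(np.round(caar[19::20] * 100, 2))
```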
We recognised that these results could potentially have been driven by actual non-target firms within the portfolio of predicted targets. This would suggest that the abnormal return was the result of a chance selection of over-performing non-target firms, rather than an accurate selection of target firms. To answer this question, the same CAAR calculation was applied to the sub-portfolio of firms that actually became targets. The full period CAAR of 50.86% was also significantly greater than zero at the 1% level. This supported the proposition that the CAAR of the portfolio was driven by the performance of the actual targets within the portfolio.

Table 13 also shows that the CAAR for the portfolio increased sharply between days 440 and 460. This result was driven by the extremely positive returns on the stock ATM, a non-target firm predicted by the models to be a target. After repeating the portfolio analysis with this stock eliminated from the portfolio of predicted targets, a significant positive abnormal return of 25% was still realised for the entire prediction period.9 Another observation from Table 13 is that the CAAR was not positive (nor significant) for either portfolio early in the prediction period: the portfolio CAAR after 100 days was negative and, even after 340 days, was indistinguishable from zero. The real gains to the portfolio were made as mergers and acquisitions were announced and completed in the latter stages of 2006, highlighting the fact that the portfolio had to be held for the entire prediction period in order to realise the potential available returns.

9 t = 9.63

6. Conclusion

The main finding of this paper was that the combined adjusted model, which was based on averaged, industry adjusted financial ratios across the sample period, emerged as a clear standout with regard to predictive accuracy. Further, the implementation of industry adjusted data, as described in Section 4.3.3 of this paper, significantly improved the classification accuracy of all but one of the models analysed in both the estimation and prediction periods. Additionally, this paper provided evidence that the use of the Barnes methodology for calculating the optimal cut-off point significantly improved classification accuracy and enabled the successful use of logit models to predict takeover targets within the Australian context. The accuracy of the single best model in this paper exceeded a chance selection by 118% and represented the highest accuracy reported for a logit model in the takeover prediction literature.

Another important finding of this paper resulted from the examination of a portfolio of predicted targets. We demonstrated that an investment in the predicted targets that were common across the logit models resulted in significant Cumulative Average Abnormal Returns (CAARs) for an investor. Several steps were undertaken to ensure that this result was robust to the returns on predicted non-target stocks, suggesting that the abnormal returns are based on the accuracy of the predictions common to the logit models analysed in this study rather than on any chance selection. We believe our results provide evidence in favour of the proposition that an abnormal return can be made from an investment in the takeover targets commonly predicted by the four logit-based models analysed in this paper. A wealth of evidence suggests that combining forecasts from different models improves forecasting ability.
This is an obvious direction for future research and may well be achieved either by a combination of logit and MDA, or through the inclusion of a neural network approach to predict targets.

References

Allison, Paul D, 2006, Logistic Regression Using the SAS System. Cary, NC: SAS Institute.
Barnes, Paul, 1990, The Prediction of Takeover Targets in the UK by means of Multiple Discriminant Analysis, Journal of Business Finance and Accounting 17, 73-84.
Barnes, Paul, 1999, Predicting UK Takeover Targets: Some Methodological Issues and Empirical Study, Review of Quantitative Finance and Accounting 12, 283-301.
Belkaoui, Ahmed, 1978, Financial Ratios as Predictors of Canadian Takeovers, Journal of Business Finance and Accounting 5, 93-108.
Brown, Stephen J, and Jerold B Warner, 1985, Using Daily Stock Returns, Journal of Financial Economics 14, 3-31.
Dietrich, Kimball J, and Eric Sorensen, 1984, An Application of Logit Analysis to Prediction of Merger Targets, Journal of Business Research 12, 393-402.
Fama, Eugene F, 1980, Agency Problems and the Theory of the Firm, Journal of Political Economy 88, 288-307.
Fogelberg, G, CR Laurent, and D McCorkindale, 1975, The Usefulness of Published Financial Data for Predicting Takeover Vulnerability, University of Western Ontario, School of Business Administration (Working Paper 150).
Gort, Michael C, 1969, An Economic Disturbance Theory of Mergers, Quarterly Journal of Economics 83, 624-642.
Harris, Robert S, John F Stewart, David K Guilkey, and Willard T Carleton, 1984, Characteristics of Acquired Firms: Fixed and Random Coefficient Probit Analyses, Southern Economic Journal 49, 164-184.
Jennings, DE, 1986, Judging Inference Adequacy in Logistic Regression, Journal of the American Statistical Association 81, 471-476.
Jensen, Michael C, 1986, Agency Costs of Free Cash Flow, Corporate Finance, and Takeovers, American Economic Review 76, 323-329.
Jensen, Michael C, and William H Meckling, 1976, Theory of the Firm: Managerial behaviour, agency costs, and ownership structure, Journal of Financial Economics 3, 305-360.
Jensen, Michael C, and Richard S Ruback, 1983, The Market for Corporate Control: The Scientific Evidence, Journal of Financial Economics 11, 5-50.
Lewellen, Wilbur G, 1971, A Pure Financial Rationale for the Conglomerate Merger, Journal of Finance 26, 521-537.
Manne, Henry G, 1965, Mergers and the Market for Corporate Control, Journal of Political Economy 73, 110-120.
Miller, Merton H, and Franco Modigliani, 1964, Dividend Policy, Growth, and the Valuation of Shares, Journal of Business 34, 411-433.
Mitchell, Mark L, and J Harold Mulherin, 1996, The impact of industry shocks on takeover and restructuring activity, Journal of Financial Economics 41, 193-229.
Myers, Stewart C, and Nicholas S Majluf, 1984, Corporate Financing and Investment Decisions when Firms have Information that Investors do not, Journal of Financial Economics 13, 187-221.
Ohlson, J, 1980, Financial Ratios and the Probabilistic Prediction of Bankruptcy, Journal of Accounting Research 18, 109-131.
Palepu, Krishna G, 1986, Predicting Takeover Targets: A Methodological and Empirical Analysis, Journal of Accounting and Economics 8, 3-35.
Platt, Harlan D, and Marjorie D Platt, 1990, Development of a Class of Stable Predictive Variables: The Case of Bankruptcy Prediction, Journal of Business Finance and Accounting 17, 31-51.
Powell, Ronan G, 2001, Takeover prediction and portfolio performance: A note, Journal of Business Finance and Accounting 28, 993-1011.
Rege, Udayan P, 1984, Accounting Ratios to Locate Takeover Targets, Journal of Business Finance and Accounting 11, 301-311.
Simkowitz, Michael A, and Robert J Monroe, 1971, A Discriminant Function for Conglomerate Targets, Southern Journal of Business 38, 1-16.
Singh, A, 1971, Takeovers: Their Relevance to the Stock Market and the Theory of the Firm. Cambridge University Press.
Smith, Richard L, and Joo-Hyun Kim, 1994, The Combined Effect of Free Cash Flow and Financial Slack on Bidder and Target Stock Returns, Journal of Business 67, 281-310.
Stevens, David L, 1973, Financial Characteristics of Merged Firms: A Multivariate Analysis, Journal of Financial and Quantitative Analysis 8, 149-158.
Walter, Richard M, 1994, The Usefulness of Current Cost Information for Identifying Takeover Targets and Earning Above-Average Stock Returns, Journal of Accounting, Auditing, and Finance 9, 349-377.
Zanakis, SH, and C Zopounidis, 1997, Prediction of Greek Company Takeovers via Multivariate Analysis of Financial Ratios, Journal of the Operational Research Society 48, 678-687.

I went to Kediri.


The next morning Dorothy kissed the pretty green girl good-bye, and they all shook hands with the soldier with the green whiskers, who had walked with them as far as the gate. When the Guardian of the Gate saw them again he wondered greatly that they could leave the beautiful City to get into new trouble. But he at once unlocked their spectacles, which he put back into the green box, and gave them many good wishes to carry with them. "You are now our ruler," he said to the Scarecrow; "so you must come back to us as soon as possible." "I certainly shall if I am able," the Scarecrow replied; "but I must help Dorothy to get home, first." As Dorothy bade the good-natured Guardian a last farewell she said: "I have been very kindly treated in your lovely City, and everyone has been good to me. I cannot tell you how grateful I am." "Don't try, my dear," he answered. "We should like to keep you with us, but if it is your wish to return to Kansas, I hope you will find a way." He then opened the gate of the outer wall, and they walked forth and started upon their journey. The sun shone brightly as our friends turned their faces toward the Land of the South. They were all in the best of spirits, and laughed and chatted together. Dorothy was once more filled with the hope of getting home, and the Scarecrow and the Tin Woodman were glad to be of use to her. As for the Lion, he sniffed the fresh air with delight and whisked his tail from side to side in pure joy at being in the country again, while Toto ran around them and chased the moths and butterflies, barking merrily all the time. "City life does not agree with me at all," remarked the Lion, as they walked along at a brisk pace. "I have lost much flesh since I lived there, and now I am anxious for a chance to show the other beasts how courageous I have grown." They now turned and took a last look at the Emerald City. All they could see was a mass of towers and steeples behind the green walls, and high up above everything the spires and dome of the Palace of Oz. "Oz was not such a bad Wizard, after all," said the Tin Woodman, as he felt his heart rattling around in his breast. "He knew how to give me brains, and very good brains, too," said the Scarecrow. "If Oz had taken a dose of the same courage he gave me," added the Lion, "he would have been a brave man." Dorothy said nothing. Oz had not kept the promise he made her, but he had done his best, so she forgave him. As he said, he was a good man, even if he was a bad Wizard. The first day's journey was through the green fields and bright flowers that stretched about the Emerald City on every side. They slept that night on the grass, with nothing but the stars over them; and they rested very well indeed. In the morning they traveled on until they came to a thick wood. There was no way of going around it, for it seemed to extend to the right and left as far as they could see; and, besides, they did not dare change the direction of their journey for fear of getting lost. So they looked for the place where it would be easiest to get into the forest. The Scarecrow, who was in the lead, finally discovered a big tree with such wide-spreading branches that there was room for the party to pass underneath. So he walked forward to the tree, but just as he came under the first branches they bent down and twined around him, and the next minute he was raised from the ground and flung headlong among his fellow travelers. 
This did not hurt the Scarecrow, but it surprised him, and he looked rather dizzy when Dorothy picked him up. "Here is another space between the trees," called the Lion. "Let me try it first," said the Scarecrow, "for it doesn't hurt me to get thrown about." He walked up to another tree, as he spoke, but its branches immediately seized him and tossed him back again. "This is strange," exclaimed Dorothy. "What shall we do?" "The trees seem to have made up their minds to fight us, and stop our journey," remarked the Lion. "I believe I will try it myself," said the Woodman, and shouldering his axe, he marched up to the first tree that had handled the Scarecrow so roughly. When a big branch bent down to seize him the Woodman chopped at it so fiercely that he cut it in two. At once the tree began shaking all its branches as if in pain, and the Tin Woodman passed safely under it. "Come on!" he shouted to the others. "Be quick!" They all ran forward and passed under the tree without injury, except Toto, who was caught by a small branch and shaken until he howled. But the Woodman promptly chopped off the branch and set the little dog free. The other trees of the forest did nothing to keep them back, so they made up their minds that only the first row of trees could bend down their branches, and that probably these were the policemen of the forest, and given this wonderful power in order to keep strangers out of it. The four travelers walked with ease through the trees until they came to the farther edge of the wood. Then, to their surprise, they found before them a high wall which seemed to be made of white china. It was smooth, like the surface of a dish, and higher than their heads. "What shall we do now?" asked Dorothy. "I will make a ladder," said the Tin Woodman, "for we certainly must climb over the wall."



Sweet, kind and understanding, you tend to be a peacekeeper at heart. You rarely ever cause trouble, and may generally try to avoid confrontation altogether. You may find yourself putting others before yourself and having trouble saying no when someone needs help, although this very kindness is what causes most people to respect you. You may be drawn to people that hold similar kindness and appreciation, as you might feel as though you can relate to them. All the same, there will be the haters that insist that some people can be 'too nice', and therefore 'fake', and that they don't stand up for themselves and avoid issues that need to be confronted. Patient and good-hearted, you may give nearly everyone the benefit of the doubt. You're truly a good friend, and a good person :) Dislikes/Haters: None; you don't judge others, and they don't judge you :) Best Friend: Kyoko Friends: Ryohei, Tsuna, Basil, Yamamoto, Lambo Possible Love Interests: Tsuna, Gokudera, Basil, Enma Possible Role: Being a good friend of Tsuna, and Kyoko's best friend, it's safe to say that you might get tied up in some sticky situations involving the mafia...! Being as kind as you are, you won't have to worry about getting hurt, because there'll be plenty of people there to back you up if you need it...! Whether you'd actually want to be involved in the mafia or not is something for you to decide~ Tsuna: Being friendly and accepting, chances are, you weren't one of the people bullying Tsuna beforehand :) You might have even been a friend of his. He probably respects you a lot because of this, and your infallible caring nature might often remind him of how grateful he is to have you as a friend. We all know Tsuna has an eye for Kyoko, but that's not to say that he doesn't have a little crush on you, too. Gokudera: Anyone who's a friend of the Tenth's is on the radar; fortunately for you, your compassionate nature makes it kind of difficult to dislike you! Gokudera can pretend to be irritated and jealous when you're around, but c'mon, he's not fooling anybody. If he'd stop fangirling over the Tenth for a minute or two, he might realize that he likes you a little more than he thinks. Yamamoto: Considering that you're both friendly and generally don't cause problems with others, you'd probably feel pretty content around each other; there's no reason to believe that you two wouldn't be friends. Hibari: Alright... you might be a little scared of him... or not? His tactics are rather unorthodox, and you might not accept his merciless beatings unto other classmates, but you don't seem particularly like a trouble-maker; sure, he might call you an herbivore, but you probably haven't gotten on his bad side. Yet. Basil: That way he talks is a little weird... but then again, you might like it. Either way, you probably accept him for who he is, which is a good person. He's sweet and compassionate, but surprisingly strong and determined in retrospect. Basil probably admires you for your sense of compassion and your loyalty to your friends, and would strive to protect you if anything went wrong. Enma: (This is a manga-only character, but he's awful cute, haha) Tormented and often bullied, you might serve as a savior to this boy by not judging him based on what others do and say. 
Enma would probably be pretty shy when it came to anything romantic with you, and he might be a bit hesitant in getting you involved with the mafia, but chances are he'd cherish your friendship, and while he may remain quiet most of the time, he'd probably really care for you. Lambo: Whether you like kids or not, you may be able to remain patient with Lambo, while others (ahem Gokudera coughcough) may get impatient and irritated and yell. As naive and thick-headed as he comes off, deep down, Lambo probably realizes this (somewhere in there) and may be even more inclined to hang around you. Kyoko: Whether you prefer to hang around friends that are boys or friends that are girls, deep down, you two have similar morals; be friendly, and don't intentionally hurt anybody. You both remain loyal to your friends and don't approve of tormenting others, and whether it's on purpose or not, you're often finding yourselves together. Not to mention that Kyoko likes to hang out with her friends, and would probably invite you out to get cake or go shopping. Ryohei: Chances are that hanging around Kyoko, you've been hanging around Ryohei as well. Being the good guy that he is, he would never scare off his little sister's friend, and may even be a little protective over you as well, as he may see some similarities between you and his little sister. He might take on a sort of 'big brother' role.



The surface is not a plate. Please use the PV structure drawings and submit the calculation. R 45 meter: please clarify. The accessories for the connection between the tape and the BCC are not clear. Please clarify: is the BCC 50 sqmm? Should 95 sqmm be used for the loop system on the roof above, or copper tape 3 mm wide?

England


The Place of Logic in Philosophy. The sciences fall into two broad divisions, viz.: the speculative and the regulative (or normative) sciences. In the speculative sciences, philosophic thought deals with those things which we find proposed to our intelligence in the universe: such sciences have no other immediate end than the contemplation of the truth. Thus we study Mathematics, not primarily with a view to commercial success, but that we may know. In the normative sciences, on the other hand, the philosopher pursues knowledge with a view to the realization of some practical end. "The object of philosophy," says St. Thomas of Aquin, "is order. This order may be such as we find already existing; but it may be such as we seek to bring into being ourselves."¹ Thus sciences exist, which have as their object the realization of order in the acts both of our will and of our intellect. The science which deals with the due ordering of the acts of the will is Ethics; that which deals with order in the acts of the intellect is Logic. ¹St. Thomas in Ethic. I. lect. 1. Sapientis est ordinare. . . . Ordo autem quadrupliciter ad rationem comparatur. Est enim quidam ordo quem ratio non facit sed solum considerat, sicut est ordo rerum naturalium. Alius autem est ordo quem ratio considerando facit in proprio actu, puta cum ordinat conceptus suos ad invicem et signa conceptuum quae sunt voces significativae. Tertius autem est ordo quem ratio considerando facit in operationibus voluntatis. Quartus autem est ordo quem ratio considerando facit in exterioribus rebus, quarum ipsa est causa, sicut in arca et domo. The question has often been raised, whether Logic is a science or an art. The answer to this will depend entirely on the precise meaning which we give to the word 'art.' The medieval philosophers regarded the notion of an art as signifying a body of rules by which man directs his actions to the performance of some work.² Hence they held Logic to be the art of reasoning, as well as the science of the reasoning process. Perhaps a more satisfactory terminology is that at present in vogue, according to which the term 'art' is reserved to mean a body of precepts for the production of some external result, and hence is not applicable to the normative sciences. Aesthetics, the science which deals with beauty and proportion in the objects of the external senses, is now reckoned with Ethics and Logic as a normative science. By the medieval writers it was treated theoretically rather than practically, and was reckoned part of Metaphysics. It may be well to indicate briefly the distinction between Logic and two other sciences, to which it bears some affinity. Logic and Metaphysics. The term Metaphysics sometimes stands for philosophy in general; sometimes, with a more restricted meaning, it stands for that part of philosophy known as Ontology. In this latter sense Metaphysics deals not with thoughts, as does Logic, but with things, not with the conceptual order but with the real order. It investigates the meaning of certain notions which all the special sciences presuppose, such as Substance, Accident, Cause, Effect, Action. It deals with principles which the special sciences do not prove, but on which they rest, such as, e.g., Every event must have a cause. Hence it is called the science of Being, since its object is not limited to some special sphere, but embraces all that is, whether material or spiritual. Logic on the other hand deals with the conceptual order, with thoughts.
Its conclusions do not relate to things, but to the way in which the mind represents things. ²St. Thomas in An. Post. I., lect. x. "Nihil enim aliud ars esse videtur, quam certa ordinatio rationis qua per determinata media ad debitum finem actus humani perveniunt." Logic and Psychology. The object of Psychology is the human soul and all its activities. It investigates the nature and operations of intellect, will, imagination, sense. Thus its object is far wider than that of Logic, which is concerned with the intellect alone. And even in regard to the intellect, the two sciences consider it under different aspects. Psychology considers thought merely as an act of the soul. Thus if we take a judgment, such as e.g., "The three angles of a triangle are together equal to two right angles," Psychology considers it merely in so far as it is a form of mental activity. Logic on the other hand examines the way in which this mental act expresses the objective truth with which it deals; and if necessary, asks whether it follows legitimately from the grounds on which it is based. Moreover, Logic, as a regulative science, seeks to prescribe rules as to how we ought to think. With this Psychology has nothing to do: it only asks, "What as a matter of fact is the nature of the mind's activity?" The Scope of Logic. Logicians are frequently divided into three classes, according as they hold that the science is concerned (1) with names only, (2) with the form of thought alone, (3) with thought as representative of reality. The first of these views — that Logic is concerned with names only — has found but few defenders. It is however taught by the French philosopher Condillac (1715-1780), who held that the process of reasoning consists solely in verbal transformations. The meaning of the conclusion is, he thought, ever identical with that of the original proposition. The theory that Logic deals only with the forms of thought, irrespective of their relation to reality, was taught among others by Hamilton (1788-1856) and Mansel (1820-1871). Both of these held that Logic is in no way concerned with the truth of our thoughts, but only with their consistency. In this sense Hamilton says: "Logic is conversant with the form of thought, to the exclusion of the matter" (Lectures, I. p. xi). By these logicians a distinction is drawn between 'formal truth,' i.e., self-consistency, and 'material truth,' i.e., conformity with the object, and it is said that Logic deals with formal truth alone. On this view Mill well observes: "the notion of the true and false will force its way even into Formal Logic. We may abstract from actual truth, but the validity of reasoning is always a question of conditional truth — whether one proposition must be true if the others are true, or whether one proposition can be true if others are true" (Exam. of Hamilton, p. 399). According to the third theory, Logic deals with thought as the means by which we attain truth. Mill, whom we have just quoted, may stand as a representative of this view. "Logic," he says, "is the theory of valid thought, not of thinking, but of correct thinking" (Exam. of Hamilton, p. 388). To which class of logicians should Aristotle and his Scholastic followers be assigned? Many modern writers rank them in the second of these groups, and term them Formal Logicians. It will soon appear on what a misconception this opinion rests, and how completely the view taken of Logic by the Scholastics differs from that of the Formal Logicians.
In their eyes, the aim of the science was most assuredly not to secure self-consistency, but theoretically to know how the mind represents its object, and practically to arrive at truth.

The terms Nominalist, Conceptualist, and Realist Logicians are now frequently employed to denote these three classes. This terminology is singularly unfortunate: for the names Nominalist, Conceptualist, and Realist have for centuries been employed to distinguish three famous schools of philosophy, divided from each other on a question which has nothing to do with the scope of Logic. In this work we shall as far as possible avoid using the terms in their novel meaning.

Wow, the photo is far too sexy

Last update: 2013-11-29
Subject: General
Usage frequency: 1
Quality:
Reference: Anonymous
Warning: Contains invisible HTML formatting

Current transformer

[Figure: A CT for operation on a 110 kV grid]

A current transformer (CT) is used for measurement of alternating electric currents. Current transformers, together with voltage transformers (VT) (potential transformers (PT)), are known as instrument transformers. When current in a circuit is too high to apply directly to measuring instruments, a current transformer produces a reduced current accurately proportional to the current in the circuit, which can be conveniently connected to measuring and recording instruments. A current transformer also isolates the measuring instruments from what may be very high voltage in the monitored circuit. Current transformers are commonly used in metering and protective relays in the electrical power industry.

Design

[Figures: Basic operation of a current transformer; SF6 110 kV current transformer, TGFM series, Russia; current transformers used in metering equipment for a three-phase 400 ampere electricity supply]

Like any other transformer, a current transformer has a primary winding, a magnetic core and a secondary winding. The alternating current flowing in the primary produces an alternating magnetic field in the core, which then induces an alternating current in the secondary winding circuit. An essential objective of current transformer design is to ensure that the primary and secondary circuits are efficiently coupled, so that the secondary current bears an accurate relationship to the primary current.

The most common design of CT consists of a length of wire wrapped many times around a silicon steel ring passed around the circuit being measured. The CT's primary circuit therefore consists of a single 'turn' of conductor, with a secondary of many tens or hundreds of turns. The primary winding may be a permanent part of the current transformer, with a heavy copper bar to carry current through the magnetic core. Window-type current transformers (also known as zero sequence current transformers, or ZSCT) are also common; these can have circuit cables run through the middle of an opening in the core to provide a single-turn primary winding. When conductors passing through a CT are not centered in the circular (or oval) opening, slight inaccuracies may occur.

Shapes and sizes can vary depending on the end user or switchgear manufacturer. Typical examples of low-voltage single-ratio metering current transformers are either ring type or plastic molded case. High-voltage current transformers are mounted on porcelain bushings to insulate them from ground. Some CT configurations slip around the bushing of a high-voltage transformer or circuit breaker, which automatically centers the conductor inside the CT window. The primary circuit is largely unaffected by the insertion of the CT.

The rated secondary current is commonly standardized at 1 or 5 amperes. For example, a 4000:5 CT would provide an output current of 5 amperes when the primary was passing 4000 amperes. The secondary winding can be single ratio or multi-ratio, with five taps being common for multi-ratio CTs.
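The 4000:5 example above is just the turns-ratio arithmetic. The following is a minimal sketch, assuming an ideal CT (no magnetizing current or core losses); the helper name and ratio format are illustrative, not part of any standard API.

```python
# Minimal sketch: secondary current of an ideal current transformer.
# Assumes ideal coupling, so I_secondary = I_primary * (secondary / primary rating).

def ct_secondary_current(i_primary_a: float, ratio: str) -> float:
    """Return the secondary current for a CT ratio given as 'primary:secondary'."""
    primary_rating, secondary_rating = (float(x) for x in ratio.split(":"))
    return i_primary_a * secondary_rating / primary_rating

if __name__ == "__main__":
    # The 4000:5 example from the text: 4000 A in the primary gives 5 A out.
    print(ct_secondary_current(4000, "4000:5"))   # -> 5.0
    print(ct_secondary_current(2000, "4000:5"))   # -> 2.5
```

In practice the ratio and phase errors discussed below mean the real secondary current deviates slightly from this ideal value.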
The load, or burden, of the CT should be of low resistance. If the voltage-time integral area is higher than the core's design rating, the core goes into saturation towards the end of each cycle, distorting the waveform and affecting accuracy.

Usage

[Figure: Many digital clamp meters utilize a current transformer for measuring AC current]

Current transformers are used extensively for measuring current and monitoring the operation of the power grid. Along with voltage leads, revenue-grade CTs drive the electrical utility's watt-hour meter on virtually every building with three-phase service and single-phase services greater than 200 amps. The CT is typically described by its current ratio from primary to secondary. Often, multiple CTs are installed as a "stack" for various uses. For example, protection devices and revenue metering may use separate CTs to provide isolation between metering and protection circuits, and to allow current transformers with different characteristics (accuracy, overload performance) to be used for the devices.

Safety precautions

Care must be taken that the secondary of a current transformer is not disconnected from its load while current is flowing in the primary, as the transformer secondary will attempt to continue driving current across the effectively infinite impedance up to its core saturation voltage. This may produce a high voltage across the open secondary, into the range of several kilovolts, causing arcing, compromising operator and equipment safety, or permanently affecting the accuracy of the transformer.

Accuracy

The accuracy of a CT is directly related to a number of factors, including: burden, burden class/saturation class, rating factor, load, external electromagnetic fields, temperature, physical configuration, and the selected tap (for multi-ratio CTs).

For the IEC standard, accuracy classes for various types of measurement are set out in IEC 60044-1: Classes 0.1, 0.2s, 0.2, 0.5, 0.5s, 1 and 3. The class designation is an approximate measure of the CT's accuracy. The ratio (primary to secondary current) error of a Class 1 CT is 1% at rated current; the ratio error of a Class 0.5 CT is 0.5% or less. Errors in phase are also important, especially in power-measuring circuits, and each class has an allowable maximum phase error for a specified load impedance. Current transformers used for protective relaying also have accuracy requirements at overload currents in excess of the normal rating to ensure accurate performance of relays during system faults. A CT with a rating of 2.5L400 specifies that, with an output from its secondary winding of 20 times its rated secondary current (usually 5 A x 20 = 100 A) and a 400 V (IZ) drop, its output accuracy will be within 2.5 percent.

Burden

The secondary load of a current transformer is usually called the "burden" to distinguish it from the load of the circuit whose current is being measured. The current transformer is mounted on one of the power transformer leads; it can be associated with an LV or HV lead, depending on voltage and current considerations. A section of the lead is demountable locally to enable the current transformer to be removed, should the necessity arise, without disturbing the main connection. The secondary of the CT is connected to the heating coil located directly under the main cover, in the oil. On larger units the various connections may be brought up to terminals in the main cover for external linkage.
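The protection rating quoted above (2.5L400) packs an accuracy limit, a class letter and a maximum secondary voltage at 20 times rated current into one string. Below is an illustrative sketch of unpacking such a string; the parser, the named-tuple structure and the 5 A rated secondary are assumptions for illustration, not a standard library facility.

```python
# Illustrative sketch: unpacking an ANSI-style protection rating such as
# "2.5L400" (2.5 % accuracy limit at up to 20x rated secondary current with a
# 400 V burden voltage drop, as described in the text above).
from typing import NamedTuple

class ProtectionRating(NamedTuple):
    accuracy_limit_pct: float   # e.g. 2.5 -> ratio error stays within 2.5 %
    max_burden_voltage: float   # e.g. 400 -> up to 400 V across the burden

def parse_protection_rating(rating: str) -> ProtectionRating:
    accuracy, voltage = rating.upper().split("L")
    return ProtectionRating(float(accuracy), float(voltage))

rating = parse_protection_rating("2.5L400")
rated_secondary_a = 5.0                      # assumed common rated secondary current
fault_secondary_a = 20 * rated_secondary_a   # 100 A, as in the example above
print(rating, fault_secondary_a)
```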
The burden, in a CT metering circuit, is the (largely resistive) impedance presented to its secondary winding. Typical burden ratings for IEC CTs are 1.5 VA, 3 VA, 5 VA, 10 VA, 15 VA, 20 VA, 30 VA, 45 VA and 60 VA. ANSI/IEEE burden ratings are B-0.1, B-0.2, B-0.5, B-1.0, B-2.0 and B-4.0. This means a CT with a burden rating of B-0.2 can tolerate up to 0.2 Ω of impedance in the metering circuit before its secondary accuracy falls outside of the accuracy specification. These specification diagrams show accuracy parallelograms on a grid incorporating magnitude and phase-angle error scales at the CT's rated burden. Items that contribute to the burden of a current-measurement circuit are switch blocks, meters and intermediate conductors. The most common source of excess burden is the conductor between the meter and the CT. When substation meters are located far from the meter cabinets, the excessive length of wire creates a large resistance. This problem can be reduced by using CTs with 1 ampere secondaries, which will produce less voltage drop between a CT and its metering devices.

Knee-point core-saturation voltage

The knee-point voltage of a current transformer is the magnitude of the secondary voltage above which the output current ceases to linearly follow the input current within declared accuracy. In testing, if a voltage is applied across the secondary terminals, the magnetizing current will increase in proportion to the applied voltage until the knee point is reached. The knee point is defined as the voltage at which a 10% increase in applied voltage increases the magnetizing current by 50%. For voltages greater than the knee point, the magnetizing current increases considerably even for small increments in voltage across the secondary terminals. The knee-point voltage is less applicable for metering current transformers, as their accuracy is generally much tighter but constrained within a very small bandwidth of the current transformer rating, typically 1.2 to 1.5 times rated current. However, the concept of knee-point voltage is very pertinent to protection current transformers, since they are necessarily exposed to currents of 20 or 30 times rated current during faults.[1]

Rating factor

Rating factor is a factor by which the nominal full-load current of a CT can be multiplied to determine its absolute maximum measurable primary current. Conversely, the minimum primary current a CT can accurately measure is "light load," or 10% of the nominal current (there are, however, special CTs designed to measure accurately currents as small as 2% of the nominal current). The rating factor of a CT is largely dependent upon ambient temperature. Most CTs have rating factors for 35 degrees Celsius and 55 degrees Celsius. It is important to be mindful of ambient temperatures and resultant rating factors when CTs are installed inside padmount transformers or poorly ventilated mechanical rooms. Recently, manufacturers have been moving towards lower nominal primary currents with greater rating factors. This is made possible by the development of more efficient ferrites and their corresponding hysteresis curves.

Special designs

Specially constructed wideband current transformers are also used (usually with an oscilloscope) to measure waveforms of high-frequency or pulsed currents within pulsed power systems. One type of specially constructed wideband transformer provides a voltage output that is proportional to the measured current.
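The burden figures earlier in this section are easy to sanity-check numerically. The sketch below adds up lead resistance and an assumed meter impedance, compares the total against an ANSI B-class limit, and shows why a 1 A secondary lowers the volt-ampere burden for the same leads. The lead length, wire cross-section and meter impedance are made-up illustrative values, not figures from the text.

```python
# A small sketch, assuming copper secondary leads and the ANSI burden classes
# quoted above (e.g. B-0.2 = 0.2 ohm allowed in the secondary circuit).

COPPER_RESISTIVITY = 1.72e-8  # ohm*m, standard value for copper

def lead_resistance(one_way_length_m: float, cross_section_mm2: float) -> float:
    """Round-trip resistance of the secondary leads."""
    area_m2 = cross_section_mm2 * 1e-6
    return COPPER_RESISTIVITY * (2 * one_way_length_m) / area_m2

def within_burden_class(total_burden_ohm: float, class_limit_ohm: float) -> bool:
    return total_burden_ohm <= class_limit_ohm

leads = lead_resistance(50.0, 2.5)   # assumed 50 m from CT to meter cabinet, 2.5 mm2 wire
meter = 0.05                         # assumed meter input impedance, ohm
total = leads + meter
print(round(total, 3), within_burden_class(total, 0.2))   # ~0.738 ohm -> exceeds B-0.2

# Why 1 A secondaries help: burden VA scales with I^2 for a fixed lead resistance.
for i_sec in (5.0, 1.0):
    print(i_sec, "A secondary ->", round(i_sec**2 * total, 2), "VA")
```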
Another type (called a Rogowski coil) requires an external integrator in order to provide a voltage output that is proportional to the measured current. Unlike CTs used for power circuitry, wideband CTs are rated in output volts per ampere of primary current.

Standards

Depending on the ultimate client's requirements, there are two main standards to which current transformers are designed: IEC 60044-1 (BS EN 60044-1) and IEEE C57.13 (ANSI), although the Canadian and Australian standards are also recognised.

High voltage types

Current transformers are used for protection, measurement and control in high-voltage electrical substations and the electrical grid. Current transformers may be installed inside switchgear or in apparatus bushings, but very often free-standing outdoor current transformers are used. In a switchyard, live-tank current transformers have a substantial part of their enclosure energized at the line voltage and must be mounted on insulators. Dead-tank current transformers isolate the measured circuit from the enclosure. Live-tank CTs are useful because the primary conductor is short, which gives better stability and a higher short-circuit current withstand rating. The primary winding can be evenly distributed around the magnetic core, which gives better performance for overloads and transients. Since the major insulation of a live-tank current transformer is not exposed to the heat of the primary conductors, insulation life and thermal stability are improved. A high-voltage current transformer may contain several cores, each with a secondary winding, for different purposes (such as metering circuits, control, or protection).[2] A neutral current transformer is used as earth-fault protection, to measure any fault current flowing through the neutral line from the wye neutral point of a transformer.[3]
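Returning to the Rogowski coil mentioned at the start of this section: its raw output tracks the rate of change of current, so an external integrator is needed to recover a current-proportional signal. Here is a small numerical sketch of that integration step; the 50 Hz test waveform, coil sensitivity constant and sample rate are arbitrary assumptions for illustration.

```python
# Sketch of the integration a Rogowski coil output needs before it is
# proportional to the measured current (coil output ~ di/dt).
import numpy as np

fs = 100_000                       # samples per second (assumed)
t = np.arange(0, 0.04, 1 / fs)     # two cycles at 50 Hz
i_primary = 100 * np.sin(2 * np.pi * 50 * t)      # amperes, 100 A peak

sensitivity = 1e-4                                 # volts per (ampere/second), assumed
v_coil = sensitivity * np.gradient(i_primary, t)   # simulated coil output ~ di/dt

# Numerical integrator (trapezoidal): recovers a signal proportional to the current.
i_recovered = np.concatenate(
    ([0.0], np.cumsum((v_coil[1:] + v_coil[:-1]) / 2) / fs)
) / sensitivity

print(round(float(np.max(i_recovered)), 1))   # close to the 100 A peak
```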

Translation

Last update: 2013-11-24
Usage frequency: 1
Quality:
Reference: Wikipedia
Warning: Contains invisible HTML formatting

Seymour summons a seemingly robotic being, called Mortiorchis, upon which he sits. As with previous Seymour battles, certain party members may use the Trigger Command to talk to him. For this battle, Kimahri can talk to Seymour to raise his Strength, and Yuna can talk to him to raise her Magic Defense. Seymour Flux can use the attack Lance of Atrophy to put an ally into Zombie status, and the Mortiorchis will combo with Full-Life, effectively KO'ing a character (unless the characters have high Agility, as well as many Holy Waters). Like Seymour Natus, he can also Banish Aeons, giving it approximately one turn to attack if summoned. Seymour Flux occasionally casts Protect and Reflect upon himself. Three turns in, Mortiorchis will start to use Cross Cleave, which deals around 2,000 damage to the whole party. Seymour will then cast Reflect on himself and rebound Flare at a party member, but a character with high Agility or in Haste may get a turn in between and can Dispel Seymour, causing him to cast the spell on himself. Total Annihilation is Seymour Flux's signature attack, as well as his deadliest, which requires three rounds of charging, and will kill anything but an extremely good party. If the player summons an aeon then Mortiorchis will postpone its use of Total Annihilation until Seymour Flux banishes it; therefore an aeon cannot be a shield for Total Annihilation. Strategy Seymour is vulnerable to the poison status (the Mortiorchis is immune), and if Lulu casts Bio on him, he will take 1,400 damage at the end of each of his turns (sometimes, he will get two turns in a row not long before Mortiorchis, so he'll use Lance of Atrophy on his first turn then skip the second, thus he will react to damage twice just a second apart); this will whittle down his health without, potentially, any other actions.

Last update: 2013-10-15
Subject: General
Usage frequency: 1
Quality:
Reference: Anonymous

Gareth Bale has spoken of his excitement at becoming a ‘Galactico’ following his unveiling as a Real Madrid player. A joyous crowd greeted the Welshman’s arrival at the Bernabeu on Monday, following his protracted, expensive move from Tottenham Hotspur. The unveiling ceremony was met with raucous cheers, and there was even a moment when an image of Bale as a boy, wearing a Real replica jersey, was shown on the big screen. That image perhaps gave credence to Bale’s typical footballer’s claim that he has always “dreamed” of playing for Los Blancos. There has been vigorous debate as to whether or not Real even need Bale, and he is under no illusion that he faces a fight to make the first-team. “I have a job to get into the XI. Every player here is world class. Madrid sign the best players in the world,” he said. “I know I cannot walk straight into the team, I have to work hard. I am always looking to improve myself as a footballer, and that should not stop now. “There is a lot more to come from me, I am here with the best players, the best coaches and with the best chance to keep improving. I have played in a lot of different positions. “Wherever the coach thinks I can play my best football I will give 100%.” There were also rumours that Real superstar, Cristiano Ronaldo, was nonplussed to find that Bale was stealing his thunder. But Bale has moved to smooth over any ill feeling between himself and CR7, by doffing his hat to the Portuguese goal machine as “the best player in the world.” “Cristiano Ronaldo is for me the best player in the world,” Bale said. “He is a massive factor why I wanted to come here. The team is full of world-class players, no better than him. It will be an honour to play with him, and hopefully learn from him. Hopefully we can win a lot of trophies together.” “He’s the boss here I think, the main player, the best player in the world,” Bale continued, fawning over CR7. “I want to obviously help the team and try to win trophies. I will have to wait and see what he says.”

WEALTHY is a brand associated with the automotive aftermarket industry. As a major supplier in the automotive aftermarket, WEALTHY has quality and service standards that must be met by every product and every employee before the products reach the customers. WEALTHY was founded in 2006 to meet the demand for quality, reliable products for automotive aftermarket maintenance in Asia. Although the WEALTHY brand is relatively young in the aftermarket industry, WEALTHY has a motto: "To Serve and Educate the Workshop to have Safety & Standard Quality of Services". By combining the know-how of the founder with the demands of the Asian automotive aftermarket, WEALTHY has produced reliable, high-quality products such as Wiper Blades, Wiper Blade Refills, Bulbs, Automatic Transmission Conditioner, Power Steering Conditioner, and many other products that car owners can rely on for their car's maintenance. WEALTHY is also going green to protect the environment; we have a commitment towards the environment, reflected in our water-based products, which can be found in our range, e.g. polishing products. As automotive technology continues to advance, WEALTHY will also continue to grow and develop through its R&D Department to meet the demands of the automotive industry. If one day you see new developments in aftermarket products for the automotive industry, they should come from WEALTHY. To date we have over 4,000 customers spread across Indonesia, from Jakarta, West Java, Central Java, East Java, and Sumatra, using our maintenance products and accessories for cars and motorcycles.

Last update: 2013-09-08
Subject: General
Usage frequency: 1
Quality:
Reference: Anonymous
Warning: Contains invisible HTML formatting

I will try my best Translation

Last update: 2013-09-07
Usage frequency: 1
Quality:
Reference: Wikipedia

A head gasket is a gasket that sits between the engine block and cylinder head(s) in an internal combustion engine. Its purpose is to seal the cylinders to ensure maximum compression and to avoid leakage of coolant or engine oil into the cylinders; as such, it is the most critical sealing application in any engine, and, as part of the combustion chamber, it shares the same strength requirements as other combustion chamber components.

Last update: 2013-07-03
Subject: Finance
Usage frequency: 1
Quality:
Reference: Anonymous

Assessment of menopausal status, including premature ovarian failure; assessing ovarian status, including follicle development, ovarian reserve, and ovarian responsiveness, as part of an evaluation for infertility and assisted reproduction protocols such as in vitro fertilization; assessing ovarian function in patients with polycystic ovarian syndrome; evaluation of infants with ambiguous genitalia and other intersex conditions; evaluating testicular function in infants and children; diagnosing and monitoring patients with antimullerian hormone-secreting ovarian granulosa cell tumors.

Method Name: Immunometric Assay
Reporting Name: Antimullerian Hormone, S
Aliases: Mullerian inhibiting factor (MIF); Mullerian-inhibiting hormone (MIH); Mullerian-inhibiting substance (MIS)
Specimen Type: Serum
Specimen Required: Container/Tube: Preferred: Red top; Acceptable: Serum gel. Specimen Volume: 0.2 mL
Specimen Minimum Volume: 0.1 mL
Reject Due To: Hemolysis: Mild OK; Gross reject (acceptable to 1,000 mg/dL). Lipemia: Mild OK; Gross needs to be spun. Icterus: Mild OK, interpret with caution; Gross reject. Other: NA
Specimen Stability Information: Serum: Refrigerated (preferred), 7 days; Frozen, 90 days

Clinical Information

Antimullerian hormone (AMH), also known as mullerian-inhibiting substance, is a dimeric glycoprotein hormone belonging to the transforming growth factor-beta family. It is produced by Sertoli cells of the testis in males and by ovarian granulosa cells in females. Expression during male fetal development prevents the mullerian ducts from developing into the uterus and other mullerian structures, resulting in normal development of the male reproductive tract. In the absence of AMH, the mullerian ducts and structures develop into the female reproductive tract. AMH is also expressed in the follicles of females of reproductive age and inhibits the transition of follicles from primordial to primary stages. Follicular AMH production begins during the primary stage, peaks in the preantral and small antral stages, and then decreases to undetectable concentrations as follicles grow larger. AMH serum concentrations are elevated in males under 2 years old and then progressively decrease until puberty, when there is a sharp decline. By contrast, AMH concentrations are low in female children until puberty. Concentrations then decline slowly over the reproductive lifespan as the size of the pool of remaining microscopic follicles decreases. AMH concentrations are frequently below the detection limit of current assays after natural or premature menopause. Because of the gender differences in AMH concentrations, its changes in circulating concentrations with sexual development, and its specificity for Sertoli and granulosa cells, measurement of AMH has utility in the assessment of gender, gonadal function, fertility, and as a gonadal tumor marker. Since AMH is produced continuously in the granulosa cells of small follicles during the menstrual cycle, it is superior to the episodically released gonadotropins and ovarian steroids as a marker of ovarian reserve. Furthermore, AMH concentrations are unaffected by pregnancy or use of oral or vaginal estrogen- or progestin-based contraceptives. Studies in fertility clinics have shown that females with higher concentrations of AMH have a better response to ovarian stimulation and tend to produce more retrievable oocytes than females with low or undetectable AMH.
Females at risk of ovarian hyperstimulation syndrome after gonadotropin administration can have significantly elevated AMH concentrations. Polycystic ovarian syndrome can elevate serum AMH concentrations because it is associated with the presence of large numbers of small follicles. AMH measurements are commonly used to evaluate testicular presence and function in infants with intersex conditions or ambiguous genitalia, and to distinguish between cryptorchidism (testicles present but not palpable) and anorchia (testicles absent) in males. In minimally virilized phenotypic females, AMH helps differentiate between gonadal and nongonadal causes of virilization. Serum AMH concentrations are increased in some patients with ovarian granulosa cell tumors, which comprise approximately 10% of ovarian tumors. AMH, along with related tests including inhibin A and B (#81049 Inhibin A, Tumor Marker, Serum; #88722 Inhibin B, Serum; #86336 Inhibin A and B, Tumor Marker, Serum), estradiol (#81816 Estradiol, Serum), and CA-125 (#9289 Cancer Antigen 125 (CA 125), Serum), can be useful for diagnosing and monitoring these patients.

Reference Values
Males 12 years: 0.7-19 ng/mL
Females 45 years: <1.0 ng/mL

Interpretation

Menopausal women or women with premature ovarian failure of any cause, including after cancer chemotherapy, have very low antimullerian hormone (AMH) levels, often below the current assay detection limit of 0.25 ng/mL. While the optimal AMH concentrations for predicting response to in vitro fertilization are still being established, it is accepted that AMH concentrations in the perimenopausal to menopausal range (0-0.6 ng/mL) indicate minimal to absent ovarian reserve. Depending on patient age, ovarian stimulation is likely to fail in such patients, and most fertility specialists would recommend going the donor-oocyte route. By contrast, if serum AMH concentrations exceed 3 ng/mL, hyper-response to ovarian stimulation may result. For these patients, a minimal stimulation would be recommended. In patients with polycystic ovarian syndrome, AMH concentrations may be 2- to 5-fold higher than age-appropriate reference range values. Such high levels predict anovulatory and irregular cycles. In children with intersex conditions, an AMH result above the normal female range is predictive of the presence of testicular tissue, while an undetectable value suggests its absence. In boys with cryptorchidism, a measurable AMH concentration is predictive of undescended testes, while an undetectable value is highly suggestive of anorchia or functional failure of the abnormally sited gonad. Granulosa cell tumors of the ovary may secrete AMH, inhibin A, and inhibin B. Elevated levels of any of these markers can indicate the presence of such a neoplasm in a woman with an ovarian mass. Levels should fall with successful treatment. Rising levels indicate tumor recurrence/progression.

Cautions

Like all laboratory tests, antimullerian hormone (AMH) measurement alone is seldom sufficient for diagnosis, and results should be interpreted in the light of clinical findings and other relevant test results, such as ovarian ultrasonography (in fertility applications, this would include an antral follicle count), abdominal or testicular ultrasound (intersex/testicular function applications), and measurements of sex steroids (estradiol, testosterone, progesterone), follicle-stimulating hormone (FSH), inhibin B (for fertility), and inhibin A and B (for tumor workup).
Elevated AMH is not specific for malignancy, and the assay should not be used exclusively to diagnose or exclude an AMH-secreting ovarian tumor. This assay demonstrates no cross-reactivity with transforming growth factor beta-1, activin A, inhibin A or B, luteinizing hormone alpha or beta, FSH, thyroid-stimulating hormone, or insulin-like growth factor-1. However, although unlikely, there might be cytokines that have not been evaluated for cross-reactivity that do cross-react, resulting in false elevations. As with other immunoassays, the AMH assay can be susceptible to false-low results at extremely high analyte concentrations (hooking effect) or in the hypothetical scenario of the presence of anti-AMH autoantibodies in a patient serum specimen. Heterophilic antibody interferences that are not blocked by the assay's blocking reagents may also rarely occur, typically causing false-high results. If test results are incongruent with the clinical picture, the laboratory should be contacted.
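The interpretation bands quoted above (assay detection limit 0.25 ng/mL, a 0-0.6 ng/mL perimenopausal/menopausal range, and possible hyper-response above 3 ng/mL) can be expressed as a tiny decision helper. This is purely an illustrative sketch of the thresholds in the text, with an invented function name and boundary handling; it is not clinical logic from the source laboratory.

```python
# Illustrative sketch of the ovarian-reserve bands described in the
# Interpretation section above. Not clinical guidance.

def classify_amh(amh_ng_ml: float) -> str:
    if amh_ng_ml < 0.25:
        return "below assay detection limit"
    if amh_ng_ml <= 0.6:
        return "minimal to absent ovarian reserve (perimenopausal/menopausal range)"
    if amh_ng_ml > 3.0:
        return "possible hyper-response to ovarian stimulation"
    return "intermediate range; interpret with clinical findings"

for value in (0.1, 0.5, 1.8, 4.2):
    print(value, "->", classify_amh(value))
```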

Indonesian translation google english

Last update: 2013-04-16
Subject: General
Usage frequency: 1
Quality:
Reference: Anonymous
Warning: Contains invisible HTML formatting

Translation: ABSTRACT. Consumer satisfaction is achieved when the need and desire for such services can be met. To meet customer satisfaction, companies must create and maintain a system to serve growing consumer demand. The background to the problem is the low level of customer satisfaction with service delivery to customers of the Semarang Container Terminal, indicated by the static amount of container traffic and the high number of complaints from customers about the services rendered. This is also supported by apparent contradictions between previous studies. The aim of this study was to analyze the effect of service performance, barriers to switching, and corporate image on customer satisfaction with the services provided at the Semarang Container Terminal. The population is all Semarang Container Terminal customers who were still active as of October 2012, numbering 519 customers, while the sample is 84 respondents drawn from these active customers. The sampling technique used in this study was purposive accidental sampling. The data were analyzed using multiple linear regression; prior to the regression test, validity and reliability tests and the classic assumption tests were performed. The results showed that service performance has a positive effect on customer satisfaction, meaning that if the performance of the services provided by the Semarang Container Terminal is further enhanced, customer satisfaction will further improve, as evidenced by the calculated t value of 11.325 > the t table value of 1.9901. Barriers to switching have a negative effect on customer satisfaction, meaning that higher consumer resistance to switching reflects consumer dissatisfaction with the services provided, as evidenced by the t value of -2.120 (t table 1.9901). Keywords: service performance, barriers to switching, corporate image, customer satisfaction
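The abstract's significance decisions rest on comparing each coefficient's t statistic with the critical t value of 1.9901. Below is a minimal sketch of that comparison, reusing the two t values reported above; the helper name and the dictionary of predictors are illustrative, not part of the study's code.

```python
# Minimal sketch of the |t statistic| > critical t check quoted in the abstract.

T_TABLE = 1.9901   # two-tailed critical value reported in the abstract

def is_significant(t_statistic: float, t_critical: float = T_TABLE) -> bool:
    """A coefficient is taken as significant when |t| exceeds the critical value."""
    return abs(t_statistic) > t_critical

results = {
    "service performance": 11.325,    # positive effect reported above
    "barriers to switching": -2.120,  # negative effect reported above
}
for predictor, t_value in results.items():
    print(predictor, "significant:", is_significant(t_value))
```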

translation

Last update: 2013-02-02
Subject: General
Usage frequency: 1
Quality:
Warning: Contains invisible HTML formatting
