
Evaluation of the performance of the European bond market


This study addresses contemporary issues that have affected the performance of European bonds over the recent 18-month period. In order to identify the major events that have had an impact on European bonds, an analysis of the yield curve spread between 10-year and 2-year maturities was conducted. The results indicate that since the global crisis of 2008 the performance of Eurobonds has been highly volatile, and the weak performance of some European economies has created a problem of trust among global investors.

In light of this evidence, it can be concluded that, despite all the financial support provided to stimulate the European bond market ("EBM"), it will take some time before these measures fully resolve the current crisis.

1. Introduction

The increasing importance of mutual funds to individual investors and portfolio managers explains the large number of studies conducted and published in the financial and academic press. The bond market, however, has been studied far less than the stock market (Otten and Schweitzer, 2002). With the exception of the US bond market, bond markets have remained a relatively neglected area in which more research needs to be conducted. Even though many studies have investigated the bond market in Europe, they were typically conducted on one country at a time[1]. According to Otten and Schweitzer (2002, p. 1), "An important explanation for the lack of studies is the institutional setting of the industry in different European countries". Hence, this study is motivated by the need to explore this topic further and to give a comprehensive analysis of the performance of the EBM over a relatively short period of 18 months. There were several reasons for selecting the EBM; one of the most important is that the creation of the European currency, the euro, has stimulated strong interest in the EBM. Another is the increasing demand for mutual fund services in Europe, which has fuelled interest in the European mutual fund industry.

2. Definition of the European Bond Market

A simple definition of European bonds was provided by Gros and Lannoo (2000), who stated that Eurobonds are international bonds issued by European governments and companies in any international currency, often denominated in non-European currencies such as the dollar and the yen (Flowers and Lees, 2002). The EBM consists of the investors, banks, borrowers, and trading agents that buy, sell, and transfer Eurobonds. International bodies such as the World Bank may also issue Eurobonds. The EBM was originally created in 1963, but it was not until the early 1980s that it became a large and active part of the international market.

The fact that Eurobonds offer certain tax shelters and anonymity to their buyers, as well as providing borrowers with advantageous interest and international exchange rates, has made them very popular with issuers and investors. The EBM has advantages over other bond markets for several reasons. Firstly, it is one of the most developed and sophisticated markets in the world. The adoption of a series of innovations has extended this market further, and its special character of offering certain types of government and corporate finance that are not provided elsewhere makes investors more comfortable trading in the EBM (Choudhry, 2001). Secondly, Eurobonds have been designed with a range of instruments that are not available to certain investors in their domestic markets. Finally, the tax advantages and the fact that the bonds are issued in different currencies and countries and traded in many financial centres have all contributed to placing the EBM at the top of the bond markets.

3. Evolution of the performance of the European bond market

In order to understand the performance of a bond market, the factors that affect this performance have to be clarified. At first glance, a bond appears to be an easy-to-understand security: it generates interest for a limited period at a specified rate and can then be sold at a given price. However, several external factors can affect the price of a bond and, through the change in price, its yield.

Even though a bond generates an unchangeable, fixed interest rate, the market interest rate plays a very important role in determining the price of an individual bond. Generally, the relationship is inverse: when market interest rates rise, bond prices fall, and vice versa.
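
This inverse price-rate relationship can be illustrated with a minimal sketch; the bond and the rates below are hypothetical, and annual coupon payments are assumed:

```python
def bond_price(face, coupon_rate, market_rate, years):
    """Price a bond as the present value of its coupons plus its face value."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A 10-year, 5%-coupon bond priced at different market interest rates:
price_premium = bond_price(1000, 0.05, 0.04, 10)   # market rate below coupon: price above par
price_par = bond_price(1000, 0.05, 0.05, 10)       # market rate equals coupon: price at par
price_discount = bond_price(1000, 0.05, 0.06, 10)  # market rate above coupon: price below par
```

When the market rate rises from 4% to 6%, the computed price falls from above par to below par, which is exactly the inverse relationship described above.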

All bonds are subject to inflation risk. Unpredictable levels of inflation therefore move the price of a bond, and consequently its yield, in unpredictable ways.
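
One common way to quantify inflation's effect on a bondholder's realised return is the Fisher relation; the sketch below uses illustrative numbers, not data from this study:

```python
def real_return(nominal, inflation):
    """Fisher relation: real return = (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

# A 5% nominal bond yield with 3% inflation leaves roughly a 1.94% real return,
# so unexpected inflation directly erodes what the bondholder actually earns.
r = real_return(0.05, 0.03)
```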

Any change in the credit rating of a bond directly reflects the level of risk that investors bear by holding it.

Finally, bond prices are greatly influenced by the reputation and financial situation of the issuing institution. Any problems at the issuing institution are quickly reflected in its bond prices.

Since the creation of the EBM, the yields on bonds issued by different European governments have moved in a synchronised way, especially after the adoption of the euro and the benefit that every member state suddenly gained from "free-riding" in the Euro-zone bond market, i.e. borrowing at approximately the same interest rates as Germany regardless of whether the country's economic fundamentals justified the lower rate. However, the impact of the 2008 global financial crisis on the EBM was clearly visible in the underperformance of the bonds of the 16 Euro-zone members. This underperformance was long-lasting, which explains the volatility of the yield curve until the beginning of 2010, when the curve recovered. For many years, Greece, Spain, and Ireland took advantage of the easy money that came their way, borrowing beyond their treaty-set limits while their trade deficits remained wide. Greece in particular ran a deficit of 14% of GDP last year, which left the Greek government unable to honour its liabilities. In addition to Greece's deficit, Spain, Portugal and Ireland have all caused serious problems for the European economy in general, and the EBM in particular. Many efforts have been made by the advanced European countries to rescue the struggling economies. This support programme started with Greece, with the provision of up to €30 billion of three-year fixed- and variable-rate loans in the first 12 months. The recent agreement reached in principle at the summit of 11 March increases the effective lending capacity of the European Financial Stability Facility to €440 billion, gives it greater flexibility and lower lending costs and, specifically, lowers the cost of loans to Greece by 100 basis points and extends their maturity to 7.5 years.

4. Conclusion

The development of the mutual fund industry has made performance evaluation one of the most highly debated issues in the finance literature. The performance of the EBM appears to have been affected by both global and domestic crises. In order to support the EBM, many financial efforts have been made to save Eurobonds from further depreciation.

Despite all these efforts to stimulate the European economy, many economists believe that the financial support is worthwhile only as long as it allows the struggling countries to reorganise and grow their economies. There is a risk that the rescue will be treated as an opportunity to relax, which would hurt the performance of the EBM. In light of the uncertainty surrounding the recovery of the damaged economies, global investors will be more selective and will demand higher interest rates, which puts massive pressure on European leaders to solve the problem of high interest rates and to restore the reputation and financial standing of some European countries.

5. References

Basarrate, B. and Rubio, G. (1999), "Nonsimultaneous Prices and the Evaluation of Managed Portfolios in Spain", Applied Financial Economics, 9(3), pp. 273-281.
Cesari, R. and Panetta, F. (2002), "The Performance of Italian Equity Funds", Journal of Banking and Finance, 26(1), pp. 99-126.
Choudhry, M. (2001), The Bond and Money Markets: Strategy, Trading, Analysis, Butterworth-Heinemann.
Flowers, E. B. and Lees, F. A. (2002), The Euro, Capital Markets, and Dollarization, Rowman & Littlefield Publishers.
Gros, D. and Lannoo, K. (2000), The Euro Capital Market, Wiley.
Maag, F. and Zimmermann, H. (2000), "On Benchmarks and the Performance of DEM Bond Mutual Funds", The Journal of Fixed Income, 10(3), pp. 31-45.
Otten, R. and Schweitzer, M. (2002), "A Comparison between the European and the US Mutual Fund Industry", Managerial Finance, 28(1), p. 14.
Stehle, R. and Grewe, O. (2001), "The Long-Run Performance of German Stock Mutual Funds", Working Paper, Humboldt-Universität zu Berlin.


Report on the European Bond Market – March 2010 to August 2011


This report evaluates the European bond market’s performance over the last 18 months, and explains some of the underlying causes and events that have affected it, including the perception of risk related to sovereign debt levels in the Eurozone. The report goes on to discuss the outlook for the bond market over the next 12 months, and possible mechanisms that may be used to bring debt to more sustainable levels not only for the benefit of struggling economies, but also for the future of the Eurozone and global economy as a whole.


A bond is a fixed income security, issued by Governments and corporations to raise long term capital. Governments sell bonds to finance the shortfall between their spending and revenue. Investors are interested in bond returns, which are determined by the initial bond price, coupon value and the maturity date (Buckle and Thomas, 2009).

Although there is no formal single European bond market, the bonds issued by countries in the Eurozone are viewed as a single market, as these economies share the single European currency (the UK excluded) and are heavily exposed to each other's economies. As the majority of bonds are held by Governments, global banks and institutional investors, even the perceived risk of one Government defaulting on its bonds can have severe consequences for the European and global economy.

European Bond Market Performance

Bond markets increased in importance post 2008 as investors shifted exposure from equity to debt instruments, and from private to public sector securities (Forster et al. 2011), in a flight to safety.

The European bond market has experienced volatility over the last 18 months due to a number of concerns regarding sovereign debt and the growth prospects of European and global economies, leading to a decline in investor confidence. Greece's default risk (see Appendix 1.3) resulted in sharp increases in yields (Appendix 1.4) on its Government bonds. This also caused the bond yields of other European countries, including Ireland, Portugal and Spain, to rise due to fears of contagion (Bank of England, 2010a). In Greece's case, already high sovereign debt levels meant that higher yields made servicing its debts more expensive.

Due to this, and to stop the contagion spreading, in May 2010 the European Union (EU) and International Monetary Fund (IMF) agreed a bailout package for Greece of 110bn Euros, to provide certainty to the market and prevent Greece defaulting on its debt.

Chart A plots the spread of ten-year Government bond yields for European countries (benchmark: German Bonds/Bunds). Greece's bond yields began to rise sharply from late 2009, peaking in May 2010 as the bailout package was announced. For risk-neutral investors, higher yields meant increased returns; however, this had to be balanced against the risk of default by the issuer.

Irish, Portuguese and Spanish bond yields also increased with their spreads diverging away from other European countries. Concerns now started to build for these countries as the Euro continued to fall, decreasing Europe’s buying power in the global economy.

From June 2010, bond yields of safer economies began to fall as capital was redistributed into safe assets, leaving weaker economies struggling to attract investors to finance their spending. In November, the EU and IMF were forced to agree a bailout package of 85bn Euros for the Republic of Ireland, due to the re-emergence of sovereign and banking system concerns.

Bond yields hit historically low levels (Bank of England, 2010b), as sovereign debt crises triggered a search for safe assets (Chart B). However, it was also deemed that such extended periods of low bond yields could trigger a search for yield in riskier assets, resulting in overheating in emerging markets.

Then, between March and May 2011, Greece, Ireland and Portugal experienced sharp rises in yield spreads due to uncertainty about how they would resolve their economic challenges (Chart C). In May 2011, the EU and IMF were forced to bail out Portugal as it struggled to finance its sovereign debt. In August 2011, the European Central Bank indicated that it would buy Spanish and Italian Government bonds in a bid to bring down those countries' borrowing costs and prevent concerns growing of a Europe-wide sovereign debt crisis.

Future of the European Bond Market

Over the next 12 months, movements in the global economy, including the downgrade of the US bond market from its AAA rating, may slow growth and return the country, and the global economy, to recession. This would have a significant impact as investor confidence falls and global growth expectations are downgraded.

In terms of debt sustainability, there will be further efforts by EU Governments to implement more severe austerity measures in a bid to bring sovereign debt to manageable levels. However, as we have seen over the past 18 months, this has come at a cost to growth. Without growth, countries evidently cannot afford to service their current debt levels, let alone reduce them. This leaves Governments in the EU with few fiscal policy levers to manage the economy. Unlike the UK, which has not joined the Euro, Eurozone countries are unable to use mechanisms such as currency devaluation in an attempt to control economic fluctuations.

In addition, as the budgets of larger European countries come under scrutiny, if fears of default or a downgrade of their debt arise, the fallout will be massive. For these economies a bailout may prove impossible, risking the disintegration of the Euro and the Eurozone.


The European bond market has experienced volatility over the last 18 months due to fears of default by European economies such as Greece, Ireland and Portugal, and deteriorating economic conditions globally. The pursuit of austerity plans has meant that growth has stalled, and with the recent downgrade of US debt, fears of a double-dip recession have returned.

The financial crisis that began in 2008 has now evolved into a sovereign debt crisis in 2011, making it difficult for countries to service their debt, and difficult for institutions such as the European Central Bank and the IMF to prevent the contagion spreading. Although the new Basel regulations have supported banking systems by ensuring banks retain profits to improve their capital-to-lending ratios, their level of exposure to sovereign debt means that a default by any advanced economy may trigger another, deeper financial crisis, as Governments will not have the funds to bail the banks out.

However, some mechanisms are being explored to return normality to the bond markets. Short selling is currently banned, reducing volatility in the equity and bond markets. Other solutions are also being explored, such as the automatic extension of bond maturities, allowing Governments more time to pay back lenders, and the potential for a common Euro area bond. The latter would potentially bring down the cost of borrowing for Greece and other troubled countries, but may increase costs for countries with healthy balance sheets such as Germany, which is why the proposal has faced substantial opposition.

Finally, there is renewed debate over whether credit rating agencies such as Moody's, Fitch and Standard & Poor's cause volatility in markets by prematurely downgrading Government and corporate debt, thereby weakening investor confidence and causing market jitters that affect all economies. Their role in the financial crisis of 2008 and the current sovereign crisis is coming under intense scrutiny, and we may see Governments coming together to rein in their power, reducing volatility in both the bond and equity markets.


1.1 Bond Definition

A bond is a fixed income security, or debt instrument, issued globally by Governments and corporations to raise long-term capital. Governments sell bonds to finance the shortfall between government spending and government revenue; in the UK, this is referred to as the Public Sector Net Cash Requirement. Bonds represent a promise by the issuer (borrower) to pay the holder (lender) a fixed single payment or a stream of payments, called coupons, at specified dates over the term of the bond. Once the bond matures, the issuer must return to the holder the par value of the security plus any outstanding payments. Importantly, bond returns are determined by the initial bond price, the coupon value (dependent on the yield) and the maturity date (CFA UK, 2009).

1.2 Bond Calculation

The present value of a bond can be calculated using the following basic formula (the discounted coupon stream plus the discounted par value):

P = C x [1 - (1 + i)^(-n)] / i + M / (1 + i)^n

where:

C = coupon payment
n = number of payments
i = interest rate, or required yield
M = value at maturity, or par value
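
A minimal implementation of this present-value calculation, with names mirroring the variable list above; the example inputs are illustrative:

```python
def present_value(C, n, i, M):
    """Present value of a bond: discounted coupon stream plus discounted par value."""
    pv_coupons = C * (1 - (1 + i) ** -n) / i  # annuity formula for the coupon stream
    pv_par = M / (1 + i) ** n                 # par value discounted back from maturity
    return pv_coupons + pv_par

# e.g. a bond paying a 40 coupon for 20 periods, 3% required yield, 1000 par value:
price = present_value(C=40, n=20, i=0.03, M=1000)
```

As a sanity check, when the required yield equals the coupon rate the formula returns exactly the par value.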

1.3 The Beginnings of the European Sovereign Debt Crisis

Bond markets have increased in importance following the global financial crisis in 2008. Investors shifted exposure from equity to debt instruments, and from private to public sector securities (Forster et al. 2011), in a flight to safety. However at this time, government borrowing also rose substantially, partly due to the banking crisis, but also to provide a stimulus to ailing economies.

In 2009, the EU ordered France, Spain, Ireland and Greece to implement austerity measures to reduce their burgeoning budget deficits. Following this, in December 2009, Greece admitted that its debt had reached 300bn Euros, equivalent to 113% of its Gross Domestic Product (GDP). Ratings agencies swiftly downgraded Greece’s credit rating due to fears that it may not be able to repay its lenders on the bond market. This led to concerns about the debt sustainability of other European countries such as Portugal, Ireland and Spain, and fears of contagion, where other countries’ national banks were exposed to Greece’s default risk. If Greece did default on its debt, this would have severe consequences for other European economies and the European single currency, and this led to bailouts for Greece and other economies.

1.4 Explanation of Yield

The yield determines the value of the coupon payment that the issuer must pay to the lender on the bond. An increased yield can indicate that there is a greater risk associated with holding that bond for the lender, and therefore the lender requires a greater return.
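
The link between price, coupon and yield can be sketched with the current-yield measure (annual coupon income divided by market price); the figures below are hypothetical:

```python
def current_yield(annual_coupon, price):
    """Current yield: annual coupon income as a fraction of the bond's market price."""
    return annual_coupon / price

# The same 50-per-year coupon implies a higher yield when the bond trades below par,
# which is how a riskier bond delivers the greater return its holder requires:
y_at_par = current_yield(50, 1000)      # 0.05 (5%)
y_at_discount = current_yield(50, 800)  # 0.0625 (6.25%)
```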

1.5 Growing Sovereign Debt Levels as a Proportion of Gross Domestic Product (GDP)

As bond markets finance Government debt, and this debt can only be serviced by economic growth in the domestic economy, it is important to consider the growing levels of Eurozone debt as a backdrop to the volatility in the bond markets. Table A shows the consolidated gross debt of European countries since 2004 as a percentage of GDP (Source: Eurostat).

United Kingdom | 80.0 | 69.6 | 54.4 | 44.5 | 43.4 | 42.5 | 40.9


BANK OF ENGLAND (2010a) Financial Stability Report, Issue no 27 (June). London.

BANK OF ENGLAND (2010b) Financial Stability Report, Issue no 28 (December). London.

BANK OF ENGLAND (2011) Financial Stability Report, Issue no 29 (June). London.

BUCKLE, M. and THOMAS, S. (2009) Official Training Manual, Volume 2: Investment Practice, 7th ed. London: CFA Society of the UK.

FORSTER, K. et al (2011) European Cross-Border Financial Flows and the Global Financial Crisis. Occasional Paper Series, European Central Bank, No 126 (July).



BBC NEWS (2010) Greece Crisis: Fears Grow that it could Spread., 28th April.

BUCKLE, M. and THOMAS, S. (2009) Official Training Manual, Volume 1: UK Regulations & Markets, 7th ed. London: CFA Society of the UK.


DE GRAUWE, P. and MOESEN, W. (2009) Gains for All: A Proposal for a Common Euro Bond. Intereconomics (May / June), pp 132-135.

EUROPEAN CENTRAL BANK (2011) Financial Integration in Europe (May) Germany.

EUROPEAN CENTRAL BANK (2011) Financial Stability Review (June) Germany.

EWING, J and HEALY, J (2010) Cuts to Debt Rating Stir Anxiety in Europe. The New York Times, 27th April.


NASH, M. (2011) Debt Report: Sovereign Issuers Dominate the Debt Agenda. FTSE Global Markets, Issue 53, pp 32-34 (July / August).

ZANDSTRA, D. (2011) The European Sovereign Debt Crisis and its Evolving Resolution, Capital Markets Law Journal, Volume 6, No. 3, pp 285-316 (May).


Performance Evaluation of the European Bond Market: Analysis of the Yield Spreads on 10-Year Government Bonds


This paper analyses the performance of the European bond market, with a particular focus on yields and yield spreads in the Euro Zone. The study observes poor performance of the bond market over the 18-month period April 2010 to September 2011. Countries with high credit and liquidity risks, such as Greece, the Republic of Ireland, Spain, Italy and Portugal, exhibit significantly high yields and yield spreads over the period under analysis.

1. Introduction

Significant developments have occurred in the European bond market since the onset of the global financial crisis in 2007. In the light of these developments, this paper evaluates the performance of the European bond market by analysing the yield spreads of the Euro Zone over the 18-month period April 2010 to September 2011. The rest of the paper is organised as follows: section 2 provides a literature review, which focuses on understanding bonds, the factors that determine their prices, and the empirical evidence; section 3 provides the methodology for evaluating the performance of the European bond market; section 4 presents empirical results and analysis; and section 5 presents conclusions and recommendations.

2. Literature Review

One of the main variables used in measuring the performance of a bond is the yield spread. The yield serves as a measure of the risk associated with investing in the bond. Consequently, the higher the risk of a bond, the higher will be its yield and, thus, the higher will be its yield spread relative to a particular benchmark bond.

Yield spreads on national bond markets depend on a number of factors, including credit risk, liquidity risk and risk aversion (Barrios et al., 2009). With regard to credit risk, bond yield spreads are affected by three types: default risk, credit spread risk, and downgrade risk[1]. In general, the yield spread increases with credit risk and vice versa.

Empirical evidence suggests that yield spreads depend on both national and international factors. Codogno et al. (2003) and Longstaff et al. (2007) argue in favour of international factors. With respect to national factors, credit risk has been found to be an important determinant of yield spreads (e.g., Schuknecht et al., 2008; ECB, 2009). Despite the importance of credit risk, Beber et al. (2006) argue that liquidity risk is more relevant during downturns and that the impact of credit risk is only relevant during stable economic conditions. Haugh et al. (2009) argue in favour of the general degree of risk aversion. In addition, government and central bank intervention in the economy also has an impact on bond yield spreads. For example, a European Central Bank study (ECB, 2008) suggests that the stimulus packages aimed at rescuing banks from collapse during the global financial crisis resulted in an immediate transfer of risk from the private to the public sector, thus leading to an increase in government bond yield spreads.

3. Research Methods

To understand how the European bond market has performed over the last 18 months, this paper looks at how the 10-year government bond yield spreads of the Euro Area moved over the period April 2010 to September 2011. The Euro Area here consists of 15 countries, which are listed in Appendix 1. The yield spread is computed by taking the yield on a country's 10-year government bond and subtracting the benchmark yield, in this case the yield on the German 10-year government bond. Germany has the lowest level of inflation and thus the most stable economy in the Euro Zone (Fabozzi, 2011), which explains why its 10-year government bond yield is used as the benchmark. Data on yields are obtained from the website of the European Central Bank, and observations are at a monthly frequency.
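
The spread computation described here can be sketched as follows; the yield figures below are placeholders for one month, not data from the paper:

```python
# 10-year government bond yields (%) for a single month, hypothetical values
yields = {"Germany": 2.7, "Greece": 12.4, "Ireland": 8.1, "Portugal": 7.7}

# German Bund yield serves as the benchmark
benchmark = yields["Germany"]

# Yield spread = national 10-year yield minus the German benchmark yield
spreads = {country: round(y - benchmark, 2)
           for country, y in yields.items() if country != "Germany"}
```

Repeating this for each monthly observation produces the spread series analysed in the next section.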

4. Analysis and Discussion

Appendix 1 illustrates the 10-year government bond yields for Euro Zone member countries over the period April 2010 to September 2011. It can be observed that Greece had the highest average 10-year government bond yield, at 12.35%, while Germany had the lowest average yield, at 2.72%. Other countries with significantly high yields on their 10-year government bonds include the Republic of Ireland, with an average yield of 8.05%, and Portugal, with an average yield of 7.66%. Cyprus, Italy, Spain, Slovenia and Slovakia also have significantly high yields, with yields on their 10-year government bonds ranging from 4.08% to 5.01%. It can also be observed that yields have been rising for all countries in the Euro Zone, as illustrated in Appendix 3. Appendix 2 illustrates the 10-year government bond yield spreads for each of the Euro Zone countries relative to the German 10-year government bond, and Appendix 4 illustrates the corresponding movement in the yield spreads. Like the yields, the yield spreads for Greece, the Republic of Ireland, Italy, Spain, Portugal, Slovakia and Slovenia have been significantly higher than those of the other countries. Moreover, the yield spreads have increased significantly over the period under investigation.

The countries with alarming yields and yield spreads have made the news headlines of various financial publications over the last 18 months. Greece, for example, has been unable to meet its sovereign debt obligations and has been forced to seek help from the ECB. Portugal, the Republic of Ireland, Italy and Spain have also been struggling to meet their sovereign debt commitments (The Economist, 2011; New York Times, 2011).

In general, the rising yields and yield spreads in Greece, Portugal, Spain, Italy and the Republic of Ireland can be attributed to an increase in credit risk, risk aversion and liquidity risk.

The rise in government bond yield spreads has also been attributed to auctions that were due to take place in Portugal and Spain (Reuters, 2011).

5. Conclusions and Recommendations

Based on the analysis above, a number of conclusions can be drawn. Firstly, the yields of almost all the countries under analysis have been rising. Significant increases in yields can be observed for countries with high credit and liquidity risks. These risks have increased investors' degree of risk aversion and thus the risk premium they require for investing in these countries' bonds. Yield spreads have also been increasing for all countries, and have been significantly high for countries with high credit and liquidity risk, as well as a high degree of risk aversion. Governments in Europe and the European Central Bank are committed to supporting troubled countries to prevent them from defaulting on their debt, so as to avoid a Euro Zone debt crisis. Assuming these efforts are fruitful, there will be reductions in the cost of borrowing; consequently, one should expect to observe a decline in yields and yield spreads over the next 12 months (The Economist, 2011). However, given that Germany remains the most stable of the Euro Zone economies, it is unlikely that yield spreads over German government bonds will ever narrow to pre-crisis levels. The Economist (2011) suggests that it is unlikely for the smaller countries to sit at the top of the table, but it will be possible for them to maintain their Euro Zone membership. A major shortcoming of the study is that, due to time constraints, it is limited to Euro Zone countries. A further study including the UK and other non-Euro Zone economies such as Denmark and Sweden would be desirable; this can be treated as an important area for further research.


Barrios, S., Iverson, P., Lewandowska, M. and Setzer, R. (2009), "Determinants of intra-euro area government bond spreads during the financial crisis", Economic Papers No. 338, European Central Bank (ECB), available online at: [accessed: 3rd November 2011].

Beber, A., Brandt, M. and Kavajecz, K. (2006), "Flight-to-quality or flight-to-liquidity? Evidence from the euro area bond market", National Centre of Competence in Research Financial Valuation and Risk Management, WP No. 309.

Bernoth, K., von Hagen, J. and Schuknecht, L. (2006), "Sovereign Risk Premiums in the European Government Bond Market", SFB/TR 15 Discussion Paper, No. 150.

Bodie, Z., Kane, A. and Marcus, A. J. (2007), Investments, Seventh Edition, McGraw-Hill.

Codogno, L., Favero, C. and Missale, A. (2003), "Yield spreads on EMU government bonds", Economic Policy, October, pp. 503-532.

ECB (2008), "Recent widening in euro area sovereign bond yield spreads", ECB Monthly Bulletin, November 2008.

ECB (2009), Financial Integration in Europe, European Central Bank, April, pp. 33-39.

Fabozzi F. J. (2011) “General Principles of Credit Analysis” in Alternative Asset Valuation and Fixed Income, CFA Program Curriculum, vol. 5 Pearson Learning Solutions, p. 155.

Fabozzi, F. J. (2005) Fixed Income Analysis for the Chartered Financial Analyst, Second Edition, CFA Institute.

Haugh, D., Ollivaud, P. and Turner, D. (2009), "What drives sovereign risk premiums? An analysis of recent evidence from the euro area", OECD Economics Department Working Paper No. 718.

Longstaff, F., Pan, J., Pedersen, L. and Singleton, K. (2007), "How sovereign is sovereign credit risk?", NBER Working Paper 13658, December.

New York Times (2011), “Economic Crisis and Market Upheavals”, available online at: [accessed: 1st November, 2011].

Reuters (2011) “Euro zone bond yield spreads widen ahead of auctions”, available online at: [accessed: 3rd November 2011].

Schuknecht, L., J. von Hagen, and G. Wolswijk (2008), Government risk premiums in the bond market, ECB Working paper, No. 879.

The Economist (2011), “European bond markets: Last among equals”, available online at: [accessed: 2nd November, 2011].


Report on the European Bond Market – November 2010 to April 2012

This report describes the performance of the European bond market over the last 18 months and evaluates the impact of the major events, causes and factors that affected the sovereign and corporate bond markets during this period. The report also considers the future of the European bond market in light of the political situation in the region, along with a discussion of methods to ease the financial crisis in the Euro zone.


The European bond market experienced severe tension in 2011, and sovereign yields increased further. Wealth holders are aware of the risks and price bonds accordingly. In some cases risk is overestimated, which leads to an increase in the respective yield (see Appendix 1.1) (European Central Bank, 2012).

European Bond Market Performance

Performance of Sovereign Bond

The risk involved in the European bond market intensified considerably over the past 18 months due to slowing global growth prospects and broadening concerns about the government debt positions of large European economies. Chart 1 shows the yield spreads (Appendix 1.2) of selected European countries against the German bund. The yield spreads had increased by the end of 2011 for all countries except Ireland.
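Each spread in Chart 1 is simply the country's ten-year yield minus the German bund yield, quoted in basis points (Appendix 1.2). As a minimal sketch of that arithmetic, with purely illustrative figures rather than the chart's data:

```python
# Hypothetical 10-year yields in decimal form; the German bund is the benchmark.
yields = {"Germany": 0.019, "France": 0.036, "Italy": 0.067, "Spain": 0.064}

bund = yields["Germany"]

# Spread over bunds in basis points (1 percentage point = 100 basis points).
spreads_bp = {country: round((y - bund) * 10_000)
              for country, y in yields.items() if country != "Germany"}
# Italy here: (0.067 - 0.019) * 10_000 = 480 basis points over bunds.
```

A widening of these numbers over time is what the chart's upward movement through 2011 represents.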

Chart 1 – Yield spreads of European bonds over German bunds (a)

Source: Thomson Reuters Datastream and Bank calculations cited in Bank of England, 2011b, p.7.

(a) Ten-year government and EFSF bond spreads over German bunds. (b) Spreads as on 15 June 2011, except for the EFSF, which is as on 17 June 2011. (c) Spreads as on 22 November 2011.

In May 2011, Portugal became the third European country to seek financial assistance from the European authorities and the International Monetary Fund (IMF) (Bank of England, 2011a). Even though Greece and Portugal obtained financial support, markets were concerned about the sustainability of their fiscal positions. Chart 2 plots the yield on the 10-year government bond of Greece, which came down in March 2012 and thereafter shows a rising trend due to uncertainty about the financial stability of Greece.

Chart 2 – Yield of 10 year Government bond of Greece

Source: Public Debt Management Agency, 2012

During the same period Ireland showed a decline in bond yields (Chart 3). This was achieved by building confidence in the economy through the implementation of adequate fiscal adjustment measures.

Chart 3 – Yield of 10 year Government bond of Ireland

Source: Ireland Department of Treasury, 2012

Similarly, the ten-year government bond yield spread for Greece (Chart 4) shows an increasing trend, which confirms that the bailout package provided by the European Union (EU) and the IMF has not delivered the expected results.

Chart 4 – Yield spread of 10 year Government bond of Greece

(Benchmark – German Bunds)

Source: Bloomberg, 2012.

Another development in 2011 was the increase in the number of factors affecting sovereign yields. After the introduction of the euro in 1999, bond yields among the European nations converged and remained broadly the same (Chart 5), and during the period 2003-2007 the yield spreads were minimal due to abundant global liquidity. Chart 5 clearly shows that from 2007 onwards bond yields diverged. The increase in European bond yield spreads in 2011 can be attributed to fiscal sustainability concerns and risk aversion.

Chart 5 – Yield spread of 10 year Government bond of European countries to 10 year German Bunds

Source: Bloomberg Global Financial Data, 2012.

Apart from fiscal concerns, yields on European sovereign bonds are influenced by strong demand for safe assets and changes in investor demand. Chart 6 shows the yield spread between government-guaranteed agency bonds and sovereign bonds of Germany. It can be seen that during the tense times in 2011 the agency-sovereign spread was around 60 basis points, attributed to the better liquidity of the sovereign bond compared to the agency bond. In 2012, with the improvement in financial markets, the agency-sovereign spread shows a declining trend.

Chart 6 – Yield spread between the government guaranteed agency bonds and sovereign bonds of Germany

Source: Thomson Reuters and European Central Bank cited in European Central Bank, 2012, p.22.

The performance of European sovereign bonds over the past 18 months shows that country-wide effects such as the fiscal situation, the economic outlook, risk aversion among investors and portfolio shifts to safe assets have become more influential in driving yield developments.

Performance of Corporate Bond

Yields of corporate bonds also showed divergence across European countries, similar to sovereign bonds. Investors now apply more rigorous risk pricing methods to individual company-specific risks within the same country. Charts 7 and 8 show the yield curves for the covered bond markets of Germany and France, which are estimated for various issuers in those markets. The dispersion of yields for individual bonds in both countries was higher over the past 18 months than that observed in 2008. This shows that corporate bond yields vary not only with the country of origin, but also with the individual issuer.

Chart 7 – German covered bond yield curve in 2008 and 2011

Source: Bloomberg and ECB calculations cited in European Central Bank, 2012, p.24

Notes: For both years, the first Monday of the second half of the year (in July) is chosen. Estimated par yield curves (solid lines) and observed yields to maturity (points) are presented.

Chart 8 – French covered bond yield curve in 2008 and 2011

Source: Bloomberg and ECB calculations cited in European Central Bank, 2012, p.24

Notes: For both years, the first Monday of the second half of the year (in July) is chosen. Estimated par yield curves (solid lines) and observed yields to maturity (points) are presented.

Future of the European Bond Market

For the next 12 months the outcome of elections in European countries will have a major impact on investor confidence and global growth expectations. The citizens of Ireland, Spain and Portugal have already voted to change their governments, and Greece is moving towards a coalition government. Among the four, Ireland showed a decline in bond yields, but it has the worst public finances in Europe. For the other three, a turnaround is less likely in the next year since they have large fiscal and trade deficits, look uncompetitive and will need support for years. One option to ease the European crisis is large-scale buying of the affected countries' bonds by the European Central Bank (ECB). But such purchases are against the ECB's rules and can slow down the fiscal and structural reforms adopted by the members of the EU, and could bring future losses to the ECB and its member banks. The ECB and Germany are of the opinion that euro-zone governments have to support their peers using funds raised through the European Financial Stability Facility (EFSF) or obtained from the IMF (Campbell, 2012). But this route can burden the core economies, which are at risk themselves, and further worsen the situation.


Predictability attracts long-term investors to bet on a currency or trade in a bond, but the euro area faces political and financial uncertainty. The future of Greece, lately a byword for ‘Euro-geddon’, will be decided by voters: those fighting against the austerity measures imposed by the EU and the IMF, and those who want a stable future for the euro. Crises in some euro-area sovereign debt markets and their impact on credit conditions, along with high unemployment, are expected to dampen the underlying growth momentum. The US market also faces uncertainty because of the presidential election planned for November 2012. Presently, investors in bond markets are looking for safe havens with reasonable returns. South East Asian and Australian markets offer a good opportunity for investment: even though growth there is slowing along with the global economy, these markets can still outpace the European and US markets.


Appendix

1.1 Definition of Yield

The yield of a bond is defined as the single discount rate which, when applied to all future interest and principal payments, produces a present value equal to the purchase price of the bond. The yield depends on the risk involved in holding the bond: greater risk fetches a higher yield, and lower risk results in a lower yield.

1.2 Definition of Yield Spread

Yield spread is defined as the difference between the yields of two bonds. The usual practice is to fix the yield of one bond as a benchmark and to calculate the yield spreads of other bonds against it. The yield spread is generally specified in basis points; a difference in yield of one percentage point is equal to 100 basis points. The yield spread gives the bond trader a clear picture of the relative movement of bonds, and is used as a tool in deciding whether to buy or sell a bond.
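These two definitions can be made concrete with a short numerical sketch. The figures below are hypothetical, and the bisection solver simply stands in for the iterative search any yield calculation requires, since the price-yield equation has no closed-form solution:

```python
# Appendix 1.1: the yield is the single discount rate that makes the present
# value of all coupon and principal payments equal to the bond's price.

def bond_price(face, coupon_rate, years, y):
    """Present value of annual coupons plus principal, discounted at yield y."""
    coupon = face * coupon_rate
    pv = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
    return pv + face / (1 + y) ** years

def yield_to_maturity(price, face, coupon_rate, years, lo=0.0, hi=1.0):
    """Solve price = bond_price(...) for y by bisection (price falls as y rises)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, years, mid) > price:
            lo = mid  # computed price too high, so the trial yield is too low
        else:
            hi = mid
    return (lo + hi) / 2

def spread_bp(y_bond, y_benchmark):
    """Appendix 1.2: yield spread, where 1 percentage point = 100 basis points."""
    return (y_bond - y_benchmark) * 10_000

# A 10-year bond with a 5% annual coupon priced at par (100) yields 5%;
# the same bond priced at 90 yields more, reflecting its higher perceived risk.
y_par = yield_to_maturity(100.0, 100.0, 0.05, 10)
y_discount = yield_to_maturity(90.0, 100.0, 0.05, 10)
```

Priced at par, the bond's yield equals its coupon rate; the lower price of 90 forces the yield above 5%, illustrating the point that greater risk (a cheaper bond) fetches a higher yield.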

Reference List

Bank of England, 2011a. Financial Stability Report June 2011, Issue no 29. [pdf] London: Bank of England. Available at: [Accessed 23 May 2011].

Bank of England, 2011b. Financial Stability Report December 2011, Issue no 30. [pdf] London: Bank of England. Available at: [Accessed 23 May 2011].

Bloomberg, 2012. Snapshot of Greek-German spread (.GRGER10). [online] Available at: [Accessed 23 May 2011].

Campbell, I., 2012. Another Year of Living Euro-Dangerously. Reuters Breakingviews, p.37.

European Central Bank, 2012. Financial Integration in Europe April 2012. [pdf] Frankfurt: European Central Bank. Available at: [Accessed 23 May 2011].

Gartside, N., 2011. Global Bond Outlook. [pdf] New York: J.P. Morgan Asset Management. Available at: [Accessed 23 May 2011].

Reserve Bank of Australia (RBA), 2012. Graphs. [online] Available at: [Accessed 23 May 2011].

Tradingeconomics, 2012. Greece Government Bond 10Y. [online] Available at: [Accessed 23 May 2011].

Tradingeconomics, 2012. Ireland Government Bond 10Y. [online] Available at: [Accessed 23 May 2011].




Vodafone Group Market Analysis

This report examines the current market position of Vodafone, which is ranked second on the FTSE 100 with a market capitalization of 84,991 million GBP. The report undertakes a SWOT analysis to examine Vodafone's main strengths and weaknesses, followed by a Porter's five forces analysis of the industry structure. The main conclusion is that Vodafone needs to change rapidly to meet the needs of its customers and the changing demands of the industry, including the changing nature of mobile communication and the dynamics of the digital economy, and must adapt its strategy accordingly.


Vodafone is one of the largest telecommunications operators in Europe and around the world, providing mobile voice and data communications to consumers and businesses (Daruwala, 2011; Mc, 2012). Vodafone is currently ranked second on the FTSE 100 and has a market capitalization of 84,991 million GBP (FTSE, 2012). The Vodafone Group recorded revenue of 45,884 million GBP during the fiscal year ended March 2011, an increase of around 3.2 percent over fiscal year 2010. This report examines the strengths and weaknesses of Vodafone to discuss the ways in which the company can improve its competitive position in the industry. One of the key recommendations of this report is that Vodafone needs to work more actively on developing data communications to counter the business risk in a volatile European market, which will help the organization grow in the short and medium term. Vodafone also needs to develop mobile applications and new digital media to remain competitive in this market.

2 Market Analysis

A detailed analysis of the telecommunications sector shows that it is highly competitive today, as the different companies and their brands continue to introduce new and robust products to their customers (FEER, 1999). A SWOT analysis has been carried out for Vodafone, which has highlighted a number of strengths and weaknesses of the organization. One of the key strengths of Vodafone is its robust brand image, which has been developed over time and has a highly extensive market reach (Browning, 2011). The organization has also been able to develop a deeper understanding of the needs of its customers, which has allowed it to grow phenomenally and has enabled it to build a customer loyalty that is highly important to it (Datamonitor, 2012). For example, the annual BrandZ Top 100 ranking of valuable global brands has ranked Vodafone as the second-highest brand, and it is a top brand in the UK. Similarly, Brand Finance has ranked Vodafone as the fifth most valuable brand in the world (Datamonitor, 2012). However, the weakness of the organization has been its inability to capture brand loyalty and market share in terms of new customers. Vodafone has not been able to capture customers in the new digital environment, and therefore its market growth in the UK has not been as phenomenal (Anwar, 2003) as that of other brands such as ‘Three’. Another weakness of Vodafone is its inability to capture the third-generation (3G) market, which could have been a significant catapult for the organization.

However, this weakness of Vodafone is also one of its key opportunities: the mobile data market is exploding, with the use of smartphones and other wireless-enabled devices increasing phenomenally (Savitz, 2012; Uzama, 2009; Mishra et al., 2010; Nayak and Pai, 2012). The ability of Vodafone to provide services to these devices is one of its best opportunities in Europe. Outside Europe, many countries have not yet introduced high-speed mobile data services, and these too are expected to be a significant opportunity. In such a case, Vodafone can potentially have significant growth avenues, which can lead to significant profits for the organization (Mishra et al., 2010; Nayak and Pai, 2012). These opportunities also present a number of threats to Vodafone. The most significant threat is the increased use of VoIP services, which challenges the traditional model of mobile phone companies through lost revenue and customers. However, mobile companies can continue to provide wireless services to customers, which they can then use to generate other forms of revenue. Another threat to Vodafone is the mature European market, which has become highly competitive with lower profit margins and could potentially be an issue for the organization (Grocott, 2010; Jankovic, 2010).

A Porter's five forces analysis of the mobile industry also shows a number of issues for Vodafone which need consideration. One of the key aspects of the five forces model is that it enables the examination of the various forces exerted on an organization within its industry. In this regard, the bargaining power of buyers has certainly increased (Glajchen, 2006), as they are no longer constrained by the mobile service providers: buyers can choose other providers and VoIP-based services (Hass, 2006), which is a key concern for Vodafone. Another concern is the competition between companies, which is increasing due to the maturing of the mobile phone market. A third issue for Vodafone is the power of suppliers such as Apple, who are able to dictate their terms on the use of services (Glajchen, 2006) and therefore pose a significant threat to Vodafone's business. The threat of new entrants and substitute products is also ever increasing in the mobile communications market (Hass, 2006), as new digital products and services are continuously evolving which limit the use of mobile phone services by the consumer (Jung and Ibanez, 2010; Te-Yuan et al., 2010). These include VoIP-based services such as Skype and FaceTime, which have meant that some services, such as video calls from mobile operators, have been made completely redundant (Chang et al., 2010; Bodhani, 2011; Shin, 2012).

3 Conclusions & Recommendations

A number of conclusions can be drawn from the SWOT and Porter analyses of Vodafone conducted in this essay. The future of Vodafone depends to a large extent on its ability to harness data communication. Increasingly, innovative applications and products are being used by customers to communicate at a lower cost, and the role of the traditional mobile phone call is increasingly being marginalized. Vodafone needs to realize the potential of data communication and use new and innovative strategies to ensure that it can stay ahead of the competition and deliver groundbreaking new services to its customers. The future of mobile telephony may depend on 4G connections, and Vodafone needs to ensure that it is fully ready to deal with the new challenges it will face in the changing landscape of mobile communications. New services such as mobile payment and online communities are also significant avenues for future growth; however, proper planning is needed to meet these customer needs.


Anwar, S. T. (2003) Vodafone and the wireless industry: A case in market expansion and global strategy. Journal of Business & Industrial Marketing, 18(3), 270.

Bodhani, A. (2011) Voip – voicing concerns. Engineering & Technology (17509637), 6(7), 76-79.

Browning, J. (2011) Vodafone tears down its walled garden. Bloomberg Businessweek, (4257), 30-33.

Chang, L.-h., Sung, C.-h., Chiu, S.-y. & Lin, Y.-w. (2010) Design and realization of ad-hoc voip with embedded p-sip server. Journal of Systems & Software, 83(12), 2536-2555.

Daruwala, F. (2011) Vodafone revisited. International Financial Law Review, 30(5), 52-52.

Datamonitor 2012. Datamonitor: Vodafone group public limited company. Datamonitor Plc.

FEER (1999) Britain’s vodafone. Far Eastern Economic Review, 162(42), 66.

FTSE. (2012) Ftse factsheet [Online]. London: FTSE. Available: [Accessed 16 July 2012].

Glajchen, D. (2006) A comparative analysis of mobile phone-based payment services in the united states and south africa, London, Proquest.

Grocott, J. (2010) Tax authorities’ claim vodafone warned of $2 billion tax bill. International Tax Review, 21(6), 6-8.

Hass, M. (2006) Management of innovation in network industries: The mobile internet in japan and europe, Weisbaden, Deutscher Universitatsverlag.

Jankovic, M. (2010) Global communications newsletter. IEEE Communications Magazine, 48(11), 1-4.

Jung, Y. & Ibanez, A. A. (2010) Improving wireless voip quality by using adaptive packet coding. Electronics Letters, 46(6), 459-460.

Mc (2012) Vodafone becomes vodafone. Marketing (00253650), 11-11.

Mishra, G., Makkar, T., Gupta, A., Vaidyanathan, M., Sarin, S. & Bajaj, G. (2010) New media experiences: Dealing with the game changer. Vikalpa: The Journal for Decision Makers, 35(3), 91-97.

Nayak, R. & Pai, G. (2012) India: Sc rules in favour of vodafone. International Tax Review, 23(2), 51-51.

Savitz, E. (2012) Will verizon bid for vodafone? No, almost certainly not. 49-49.

Shin, D.-H. (2012) What makes consumers use voip over mobile phones? Free riding or consumerization of new service. Telecommunications Policy, 36(4), 311-323.

Te-Yuan, H., Huang, P., Kuan-Ta, C. & Po-Jung, W. (2010) Could skype be more satisfying? A qoe-centric study of the fec mechanism in an internet-scale voip system. (cover story). IEEE Network, 24(2), 42-48.

Uzama, A. (2009) A critical review of market entry selection and expansion into japan’s market. Journal of Global Marketing, 22(4), 279-298.


What is superior: Linux, the Windows XP lineage, or Mac X? Your discussions should focus on desktop, scientific usage, and the server market.


For a computer to operate at the high level users demand, a large amount of software is required. The crucial software aspect of any computer system is the Operating System. There are many options for a user when it comes to selecting which operating system to implement; at present the three key market leaders are Linux, the Windows XP lineage and Mac X.

This essay will focus on detailing and comparing these three platforms. The main areas researched will be: desktop comparison; how the operating systems are utilised in the scientific sector; and where each operating system is placed in the current server market.


To give a detailed view of each of these platforms, we first need to define ‘Operating System’, or ‘OS’, which can be described as:

‘‘The OS is the most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs. Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.

For large systems, the operating system has even greater responsibilities and powers. It is like a traffic cop — it makes sure those different programs and users running at the same time do not interfere with each other. The operating system is also responsible for security, ensuring that unauthorized users do not access the system.’’

From this it is apparent that to give an accurate comparison of the different platforms, and to truly evaluate each, the investigation will need to centre on key areas such as performance, usability, application interface and core design.

Platform Descriptions

Windows XP Lineage

Windows XP was released in 2001 and is produced by Microsoft. It follows on from Windows ME and Windows 2000, and at the present time it is still the most popular version of Windows (based on user base). The most widely used versions were Windows XP Home, Professional, Media Centre Edition and XP Tablet PC Edition.

In 2011 W3Schools carried out a study showing that Windows XP was the most commonly used OS on the planet, with a 39.7% share of the market. At its highest, in 2007, Windows XP had a staggering 76.1% share of the market.


Linux

Linux is in essence the first truly free Unix-like operating system. Its history goes back a long way, but essentially the underlying GNU project was launched in 1983 by Richard Stallman, whose aim was to develop a UNIX-compatible operating system intended to be free to own. Linux itself was created in 1991 by Linus Torvalds. The kernel Linus created (Linux) was combined with the GNU system, producing a completely free operating system. That kernel is currently the base around which Linux operating systems are developed, and in the current market there is an abundance of companies and organizations around the world which have released operating systems based on the Linux kernel created by Linus Torvalds.

Mac X

Mac OS X is an operating system series developed by Apple for use on Macintosh computers. The first desktop release was in March 2001, and there have been a total of eight releases to date.

It is built on a Unix-based kernel. Mac X was preceded by Mac OS 9, which had been released in 1999. Mac OS 9 was the last release of what was billed the “classic” Mac OS range, Apple's main operating system for over 15 years.


Desktop Market

For the purpose of this essay I will now carry out research on the desktop market. I will define the desktop market and its key deliverables and give a critique of each operating system's version, concluding with reasoning on market share.

According to Wikipedia, the desktop environment can be described as:

‘‘In graphical computing, a desktop environment (DE) commonly refers to a style of graphical user interface (GUI) derived from the desktop metaphor that is seen on most modern personal computers. These GUIs help the user in easily accessing, configuring, and modifying many important and frequently accessed specific operating system (OS) features. The GUI usually does not afford access to all the many features found in an OS. Instead, the traditional command-line interface (CLI) is still used when full control over the OS is required in such cases.

A desktop environment typically consists of icons, windows, toolbars, folders, wallpapers and desktop widgets (see Elements of graphical user interfaces and WIMP).’’

A key area of any operating system is the GUI (Graphical User Interface). This can impact considerably on a number of things, including whether a potential buyer purchases it. Mac X, Windows and Linux all offer different ‘brands’ of GUI.

To define the desktop environment, essentially one would say it provides a platform for the user to manipulate the system to a certain degree. A desktop environment allows the user to manage files and provides a platform for storing and easily viewing them.

When it comes to evaluating market share, Windows still leads the way in the desktop market, as the graph below (Desktop OS share in US, 2011) shows. Why is this so? Essentially, Windows XP has been an exceptionally stable operating system since its release. It makes efficient use of low-end hardware and in general is not susceptible to unexplained system crashes or random slowdowns. Most users feel that a change or upgrade is unnecessary; in layman's terms, ‘if the current one is working completely fine, why change it?’

Although this shows Windows 7 to be shading it in the US, worldwide Windows XP is still the desktop market leader. The key thing to notice from this graph is how far behind both Mac X and Linux are when it comes to the desktop market. Why is this so? Is each platform's release really so different?

There are many key areas that users look at which can help make up their minds when buying an operating system. The look and feel of the desktop can be a key one: a user is usually more likely to pick an OS which features a certain look and performs in a certain way, and will lean towards something they are used to rather than something they have no experience of using. Many factors come into consideration; the basic look of the desktop, for example, can be a key factor in what a user picks. Ease of use can also play a part when a consumer comes to buy a computer, as can how flexible the desktop is. A more advanced user may lean towards a product that is flexible and that they can configure to their own wants and needs, while a basic user in general just wants a simple, easy-to-use system.

It would be impossible to give a 100% in-depth investigation into each OS platform's desktop environment, as this would be a never-ending task due to the complex infrastructure of each. I will focus on evaluating the look, usability, consistency, flexibility, integration, stability, speed, technology, bugs, framework and programming: basically, the overall user experience generated by these desktop environments.

Windows XP Desktop Environment

Visually, Windows XP is an improvement on its predecessors. The default theme is Luna. It is generally not felt to be the most visually appealing theme, but it is definitely the most logically designed. It has an abundance of widgets which are all in all well defined, while great measures have also been taken to make sure it still works in the way previous Windows users are accustomed to.

With XP, the Windows graphics interface is more changeable than ever, although the majority of Windows users stay with the defaults. A user is still able to alter the wallpaper and the colour of the window decorations (the default is blue), but even after all these changes the user will still essentially end up with the usual Windows interface. This is a major visual drawback in comparison to, say, Linux.

The interface itself has definitely matured and improved from previous releases (e.g. 1995), which all in all is a major plus point, even if it is still not as visually appealing as other platforms.

In terms of usability, the key phrase for me would be: Windows XP always does what I expect it to do. It runs reasonably similarly to most other Windows releases, although with great improvements, which means I am rarely surprised by any action carried out. A key feature is the tooltips that are active throughout; these are extremely useful in getting users who have transferred from, say, Mac X or Linux familiar with the new platform. Windows XP also features excellent keyboard navigation, which can be extremely handy in navigating around the screen if there is a mouse fault.

Even with all these plus points, the Windows XP desktop environment still has its faults. A key fault is the new ‘Start’ menu, which seems to be overloaded with useless items.

On the plus side, it is handy to be able to run previous Windows software in XP; however, this doesn't necessarily mean that you will get the same look or that it will perform the same function to the same level, which can basically be considered a negative. Another negative is the fact that Windows is a proprietary OS, which means that technically you cannot make changes.

Linux Desktop Environment

In recent years Linux as a desktop option has become ever more attractive to potential buyers. Most Linux ‘flavours’ include a GUI (Graphical User Interface). As of 2011 the two most popular of these environments are GNOME and the KDE Plasma Desktop.

When it comes to performance, there have been major differences between the server world and the desktop world. It is widely believed that the Linux community focuses on the server market, although in recent years projects such as Upstart have been key to the Linux approach to improving its placing in the desktop market.

One major plus point of the Linux desktop is its flexibility. It is highly visually appealing, and the 3D desktop and abundance of other features are extremely popular among users. The Linux desktop feels almost unlimited: a user can manipulate its look and feel to however they want it to look and act, from an absolute minimum look to the splendour of a full-blown 3D version. Another positive for Linux is that it has been designed from the beginning for multitasking, which can be highly appealing to users.

As previously stated, there is a multitude of changes a user can make to their desktop, from how it performs to how it looks. This flexibility, however, can also be a negative: advanced users aside, new users and those with basic Linux knowledge may be intimidated by the fact that every desktop they encounter could be different from the next.

Mac X Desktop Environment

A distinctive aspect of the Mac X desktop range is the amazing visuals on offer. The interface is definitely visually appealing but can it match up in other areas?

Key features include easy and quick navigation provided by Finder, as well as an extremely useful drag-and-drop feature. There are also some negative aspects to this desktop; in my opinion a main one is that it is a lot more keyboard-led, due to the fact that there is no right click.

Another negative in my opinion, and an aspect that comes up with a lot of Apple products, is that it seems to be a matter of style over substance. It has fancy icons and hover-over buttons, yes, but looked at more closely these are iTunes, iPhoto and so on, so in my opinion they are clearly in place to boost the Apple product range.

The Mac X range of desktops may not be aimed specifically at the high-end user, but in my opinion, mainly due to price, that is the market it currently holds. This is one area where a huge difference is apparent between the other two platforms: as I've mentioned previously, Linux can be acquired freely and in many forms, while the Windows XP range is very reasonably priced. For a user to become a Mac X customer, it will come at a much higher price.

In conclusion, and after extensive research, it is clear that each desktop platform works in the way you would imagine a desktop environment to work, although in differing and unique ways. Windows XP still leads the way over Linux and Mac OS X in the desktop market, and this can be attributed to many factors. In my opinion one of the main reasons is simply that the average user is used to the Windows brand of desktops. Mac OS X and Linux are definitely making headway in this area, and in many ways they are a lot better than Windows. It may still be some time, though, before they hold the same kind of share of the desktop market as Windows does.

Server World and Scientific Usage

A server operating system can be defined as follows:

"Server operating systems are designed from the ground up to provide platforms for multi-user, frequently business-critical, networked applications. As such, the focus of such operating systems tends to be security, stability and collaboration, rather than user interface."

When it comes to choosing a server operating system the choice can be crucial. Many factors are taken into consideration, such as cost, what it is actually going to be running on, and, importantly, the technical support available once it is installed.

The main features of server operating systems include the ability to reconfigure and update both hardware and software; advanced backup facilities that allow online backups of critical data whenever necessary; flexible and advanced networking facilities; and a strong focus on system security, with advanced user, resource, data and memory protection.

The main uses of server software include file and printer sharing, terminal services, e-mail and messaging, and web site services.

Most offices, institutions and businesses worldwide that have an IT department use server software in some form.

There are many arguments about which server operating system is better. It essentially comes down to Windows or Linux.

Essentially Linux has a huge advantage when it comes to aspects such as price, whereas Windows in general has a wider reach.

A plus point of Windows is that it supports .NET and ASP. Server hosting on Windows also allows you to use the development tools Microsoft provides, such as FrontPage and Visual InterDev. Windows also provides support for Microsoft SQL Server and Access, as well as Visual Basic. A bonus is that you also get the chance to use all the Microsoft Office services.

Linux, for its part, offers support for PHP, MySQL and Perl, and it is also considered more secure than its Windows counterpart.

The TOP500 project, which ranks and details the 500 most powerful known (non-distributed) computer systems in the world, states that currently 91.6% of the world's supercomputers run Linux server operating systems, a quite staggering amount.

In reality, Linux distributions have been used as server operating systems for a long time and have long been prominent in that sector. Another key fact is that 8 of the 10 most reliable and trustworthy internet hosting companies worldwide run Linux on their web servers.

Another key point is the number of servers that are technically counted as Windows servers because that is how they were sold, but which in reality now run Linux.

Mac OS X Server is essentially a UNIX server operating system developed by Apple, and many versions have been released over the years. In terms of system architecture, the server release is identical to its desktop counterpart. In my opinion it is not a rival to either of the other two platforms in the server market; it seems only companies with an all-Mac setup would consider running Mac servers. Again cost is a major issue and, as stated before, Linux will always come out on top in that category.

When it comes to the world of science, Linux has been the preferred operating system for some time. A key factor is that Linux is supported by almost all systems.

Windows XP's emergence made it a competitor, but its server versions never felt as well constructed as Linux. Another key factor is that Linux is much easier to run and operate. It is also much easier to find qualified people in the Linux sector than in the Windows sector.

Essentially the scientific community favours Linux because it is so close to UNIX. That is not the only reason: Linux is also very stable, which in the long term can only help productivity. For these reasons Linux is an ideal tool for numerical calculations.

In this modern age it is vital to keep up with changes in the scientific world; unfortunately Mac OS X and Windows XP seem to have lagged behind in this area, whereas Linux is streets ahead.

Final Conclusion

It is very difficult to come to a conclusion on which operating system platform is the most superior. Each one has plus points and negatives, whether it be Linux's free distribution, the sleek design of Mac OS X, or the fact that Windows XP provides the user interface people are familiar with. It is easier to break the comparison down into sectors. Windows XP is clearly the market leader in the desktop market, a fact proven by the considerable share it holds in that sector compared to the others. In the server market there is also a clear front runner: Linux has a huge share of this market and a considerable presence in supercomputer software. The key area in which Mac OS X flourishes is the high-end user category and users who are more design orientated. Finally, in the scientific sector it seems that at present Linux is the desired choice of OS.

In conclusion, each platform has a huge number of plus points in several areas as well as negatives in others. They each differ greatly and offer the user a completely different experience. So in my opinion each OS is great in its own unique way, and the decision will essentially come down to what a user needs it for.



What is Market Segmentation?

1. What do you understand by the term Marketing?

Marketing encompasses a range of processes that contribute to commercial success, including marketing research, selling, sales promotion, and the development of products and services, with consumer satisfaction and the identification of needs and wants at its core. Modern marketing evolved to cope with the maturing of markets and the overcapacity that developed over the last two to three centuries. Marketing strategy therefore requires a shift of focus from production to the needs and wants of the customer, since satisfying the customer is the means of staying profitable.

Marketing is a social and managerial process by which individuals and groups obtain what they need and want through creating, offering and exchanging products of value with others (Kotler 1997, 9th edition).

Marketing is also an organisational function: a set of processes for creating, communicating and delivering value to customers and for managing customer relationships in ways that benefit the organisation and its stakeholders.

2. Market Segmentation:-

Market segmentation divides a heterogeneous market into relatively homogeneous groups. The aim of segmenting a market is to allow marketing and sales programmes to focus on the subset of prospects most likely to purchase the offering.


Tata launched Mercedes-Benz cars in India. Many people need a car such as a Maruti or a Santro, but they may not need a Mercedes; they are quite happy with the Santro or Maruti. Some people are interested in a luxury car, but may not be interested in a Mercedes. Some people may be interested in a Mercedes but cannot afford Rs 32 lakh. Thus, the market for Mercedes is a very small part of the luxury car market (Ananksha Goswami).

A market segment consists of a group of customers within a market who share a similar level of interest in the same, or a comparable, set of needs.

Levels Of Market Segmentation:-

As discussed under the importance of segmentation, consumers have different needs, and a good seller satisfies those needs and wants. That is why market segmentation can be carried out at different levels.

1) Mass Marketing:-

In simple words, the seller offers the same product to all buyers despite their different needs, engaging in mass production, mass distribution and mass promotion of one product for all buyers.

2) Niche Marketing:-

A niche is a more narrowly defined group, created by dividing a segment into sub-segments. It involves identifying distinct groups of consumers who may have special needs; these sub-segments are designed for consumers whose needs and wants are not otherwise satisfied.

3) Local Marketing:-

Local marketing tailors brands and promotions to the needs and wants of local customer groups, with marketing programmes designed around nearby consumers, neighbourhoods and even specific stores.

4) Individual Marketing:-

Individual marketing means satisfying the needs and wants of individual customers. Also known as one-to-one or customized marketing, it is the segmentation level at which the seller offers a customized product to each consumer.

How, then, can markets be differentiated from one another?

3. Why Segment The Market / Reasons Of Market Segmetation:-

Market segmentation benefits the firm in several ways. It helps the marketer pick its market targets and hence research how markets differ from one another. It does this by distinguishing one customer group from another within the market and by showing which segment of the market matters to the firm's situation and should, hence, form its target market. The following points explain how segmentation differentiates one market from another.

1) Facilitates effective tapping of the chosen market:-

Segmentation rests on understanding the needs of each chosen segment; once buyers are handled after segmentation, the response from each segment will be more homogeneous. This helps in developing marketing offers, product distribution and pricing matched to the particular customer group.

2) Makes the marketing effort more efficient and economic:-

Segmentation makes marketing more efficient and economic because the marketing effort is concentrated on well-defined segments. For most firms, efforts are best concentrated on the selected segments that match the firm's resources and are most productive and profitable.

3) Helps Identify Less Satisfied Segments and Concentrate on Them:-

Segmentation also helps the marketer assess to what extent existing offers in the market match the needs of different customer segments, and which segments are less satisfied.

4. Market Can Be Segmented Using Several Bases:-

A market/consumer population for a product can be segmented using several bases


1) Geographic Segment:-

Geographic segmentation divides the market into different geographical units; these units may be based on political boundaries, climate regions or population characteristics.

a) Countries: perhaps categorized by size, development or membership of a geographic region.

b) City / town size: e.g. population within ranges or above a certain level.

c) Population density: e.g. urban, suburban, rural, semi-rural.

d) Climate: according to weather patterns common to certain geographic regions.

For Example:-

Geographic segmentation reflects the different consumption patterns of people in different areas; people in India and in Kenya, for instance, consume differently. In coastal India (Konkan and Goa) the staple diet is grain and fish from the sea, while coffee is preferred in the south and tea in the north. International products are likewise adapted to local needs: soft-drink concentrate is served with a glass of water and plenty of sugar, and the complexion cream Fair and Lovely is sold mainly in India.

2) Demographic Segmentation:-

Demographic segmentation consists of dividing the market into groups based on variables such as age, gender, family size, income, occupation, education, religion, race and nationality. As you might expect, demographic variables are amongst the most popular bases for segmenting customer groups. The main demographic segmentation variables are summarized below:


Age:-

Consumer needs and wants change with age, even where consumers still wish to buy the same type of product. Marketers therefore define different products and packaging, and promote them differently, to meet the wants of different age groups; for example, branded soaps and body washes such as Dove offer variants for adults and for children.

Life-cycle stage:-

A consumer's stage in the life-cycle is an important variable, principally in markets such as leisure and tourism. For example, contrast the product and promotional approach of Club 18-30 holidays with the slightly more refined and dignified look adopted by Saga Holidays.


Gender:-

Gender segmentation is widely used in consumer marketing. The best examples include clothing, hairdressing, magazines, toiletries and cosmetics.


Income:-

Many companies target affluent consumers with luxury goods and convenience services. Good examples include Coutts bank, Moet & Chandon champagne and Elegant Resorts, an up-market travel company. By contrast, many companies focus on marketing products that appeal directly to consumers with relatively low incomes. Examples include Aldi (a discount food retailer), Airtours holidays, and discount clothing retailers such as TK Maxx.

Social class:-

Many Marketers believe that a consumers “perceived” social class influences their preferences for cars, clothes, home furnishings, leisure activities and other products & services. There is a clear link here with income-based segmentation.


Lifestyle:-

Marketers are increasingly interested in the effect of consumer "lifestyles" on demand. Unfortunately, there are many different lifestyle categorization systems, many of them designed by advertising and marketing agencies as a way of winning new marketing clients and campaigns.


For Example:-

Zee Television offers a variety of channels: regional channels, sports channels and movie channels. Peri-peri has both veg and non-veg burgers, and McDonald's likewise has a veg burger for vegetarians and a non-veg burger for non-vegetarians.

3) Psychographic Segmentation :-

Psychographic segmentation groups customers according to their lifestyle. Many manufacturers base their product offers on the attitudes, beliefs and emotions of their target market. Desires for status, enhanced appearance and more money are examples of psychographic variables.

Psychographic segmentation helps in repositioning, the launch of new products and brand extension. Segmenting consumers on psychographic lines rests on identifying their state of mind and hence helps in piecing together a more inclusive profile of the target consumer.

Buying Behavior Segmentation:-

Markets can be segmented on the basis of buyer behavior as well. Since all segmentation is in a way related to buyer behavior, one might be tempted to ask why buyer-behavior-based segmentation should be a separate method. It is because there is some distinction between buyers' characteristics as reflected by their geographic, demographic and psychographic profiles, and their buying behavior. Marketers often find practical benefit in using buying behavior as a separate segmentation base in addition to geographics, demographics and psychographics.

The primary idea in buyer behavior segmentation is that different customer groups expect different benefits from the same product and, accordingly, they will differ in their motives for owning it and their behavior in buying it. Variables of buyer behavior are:

Benefit sought: quality / economy / service / looks etc. of the product. For example, Nestle found a separate segment for atta noodles, as distinct from maida noodles.
Usage rate: heavy user / moderate user / light user of a product.
User status: regular / potential / first-time / irregular / occasional user.
Loyalty to brand: hard-core loyal / split loyal / shifting / switcher.
Occasion: holidays and occasions stimulate customers to purchase products.
Attitude toward offering: enthusiastic / positive / negative / indifferent / hostile.


Buying behavior segmentation is used for shampoos, soaps and other FMCG products.

5. Market Criteria For Effective Segmentation in Sales/Marketing Management

A decision to use a market segmentation strategy should rest on consideration of four important criteria that affect its profitability. (www.citemannetwork)

1) Accessible:

The firm has to now consider whether the segments are accessible to it. This may need further analysis. The market realities will have to be taken into consideration. The popular segment will be accessible only to the firm with a cost advantage, since price is a major determinant in this segment. Liril of Hindustan Lever has a commanding position in this segment. At the upper end of the segment, HLL’s Pears and Dove are well entrenched. Several other brands of different companies are competing in the segment.

2) Identifiable and measurable: –

Segments must be identifiable so that the market can determine which consumers belong to a segment and which do not. However, there may be a problem with the segment’s measurability (that is, the amount of information available on specific buyer characteristics) because numerous variables (e.g. psychological factors) are difficult, if not impossible, to measure at the present time. For example, if the marketer discovered that consumers who perspire profusely favored a particular brand, very little could be done with this information since such a group would be difficult to measure and identify for segmentation purposes.

3) Substantial:-

This criterion refers to the degree to which a chosen segment is large enough to profitably support a separate marketing program. As was noted previously, a strategy of market segmentation is costly. Thus, one must carefully consider not only the number of customers available in a segment but also the amount of their purchasing power.

4) Responsive:-

There is little to justify the development of a separate and unique marketing program for a target segment unless it responds uniquely to these efforts. Therefore, the problem is to identify market segments that will respond favorably to marketing programs designed specifically for them.

6. Identifying Market Segments And Market Targets:-

There are five different patterns of target market selection:

1. Single Segment Concentration:-

Also known as the "be a big fish in a small pond" strategy, this is based on the generic competitive strategy of focus. Through concentrated marketing, a firm achieves a strong understanding of the segment's needs and develops competitive advantages to accomplish a strong presence in the given segment. Through segment leadership, it captures an elevated return on investment.

2. Selective Specialization:-

The company may decide to target several segments, but may be selective about which segments to target. This multi-segment strategy has the benefit of spreading a firm's risk and gives it a larger presence in an industry.

3. Product Specialization:-

A firm specializes in a specific product and vends it to numerous segments. A firm may specialize in water purifiers and decide to target homes, institutions, commercial research labs, and government establishments. For each market segment it targets, the company would have to appropriately modify several elements of the marketing mix, but what it achieves is a strong repute in a particular product area. The disadvantage of this strategy is that a technological revolution may threaten the business of the company.

4. Market Specialization:-

A firm concentrates on serving the various requirements of a specific customer group.

For example, an online seller of books who, having built up a large customer base, diversifies into selling other products such as music, cameras and similar assortments to the same customer group. The firm achieves a well-built status in serving this customer group and becomes a channel for extra products that the customer segment can utilize. The disadvantage of this strategy is that a major shift in the attractiveness of the given market segment would threaten the entire business of the company.

5. Full Market Coverage:-

The company tries to cover all market segments through their products. Huge firms can take on a full market coverage approach. Firms can cover the market in two ways: through differentiated marketing or undifferentiated marketing. In undifferentiated marketing, the firm practices mass marketing by going after the entire market with one marketing mix. In differentiated marketing, the firm works in numerous markets, but designs an explicit marketing mix for each market segment, that it selects to target.

Example of General Motors:-

GM has identified about 40 different ‘customer needs’ and correspondingly, 40 different market segments in which it would present with its vehicles. For example, it has targeted the Pontiac at active, sports-oriented, young couples, the Chevrolet at price-conscious young families, the Oldsmobile at affluent families, and the Buick at older, more conservative couples.

7. Segment-Target-Position:-

The strategic marketing planning process flows from a mission and vision statement to the selection of target markets, and the formulation of specific marketing mix and positioning objective for each product or service the organization will offer. Leading authors like Kotler present the organization as a value creation and delivery sequence. In its first phase, choosing the value, the strategist “proceeds to segment the market, select the appropriate market target, and develop the offer’s value positioning. The formula – segmentation, targeting, positioning (STP) – is the essence of strategic marketing.” (Kotler, 1994, p. 93).

The marketer would draw out the map and decide upon a label for each axis. They could be price (variable one) and quality (variable two), or comfort (variable one) and price (variable two). The individual products are then mapped out next to each other. Any gaps could be regarded as possible areas for new products.


Whether market segmentation has been successful or not can be evaluated by the following questions (ankasha91):

1. Is it sizeable: –

Size-wise, the popular segment is bigger than the premium segment. In terms of tonnage, of the total market of around 600,000 tonnes, the popular segment accounts for 80 percent and the premium segment for the remaining 20 percent. If the firm wants very large volume, it has to think of the popular segment. At the same time, the premium segment too is sizeable, as it accounts for over 120,000 tonnes. In terms of value, the premium segment is even more sizeable, forming nearly 30 percent of the total market. Clearly, this segment cannot be ruled out as lacking in size.

2. Is it growing: –

Growth rate and the likely future position of the segment are the next consideration in the evaluation process. Usually, business firms seek out high-growth segments. Analysis will readily indicate to the firm that in bath soaps the premium segment happens to be the high-growth segment: whereas the popular segment has been growing at 10 percent per annum, the premium segment has been growing at over 20 percent per annum. When this fact is taken into consideration, the firm's choice may tilt toward the premium segment.

3. Is it profitable: –

The next consideration is the extent of profitability. In the present example, the firm quickly senses that the premium segment is the more profitable one. Even a relatively low volume in that segment may bring in good returns. By contrast, in the popular segment a much larger volume is necessary for the business to be viable, since prices and margins in that segment are low. Another point is that the costs of marketing, distribution and promotion in the business are quite high and constantly rising, and the costs of launching a new brand are particularly high. Since the market is very competitive, aggressive promotional support through expensive media such as TV becomes essential.

4. Is it accessible: –

The firm has to now consider whether the segments are accessible to it. This may need further analysis. The market realities will have to be taken into consideration. The popular segment will be accessible only to the firm with a cost advantage, since price is a major determinant in this segment. Premium segment will be accessible only to firms, which enjoy a differentiation advantage, and which are also marketing savvy. Liril of Hindustan Lever has a commanding position in this segment. At the upper end of the segment, HLL’s Pears and Dove are well entrenched. Several other brands of different companies are competing in the segment.
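The tonnage and growth figures quoted in these answers can be cross-checked with a short calculation. This is only a sketch using the essay's illustrative bath-soap numbers; the five-year projection horizon is an assumption added here for illustration.

```python
# Rough check of the bath-soap segment figures used in the text.
total_tonnes = 600_000                   # total market size in tonnes
popular_share, premium_share = 0.80, 0.20

popular = total_tonnes * popular_share   # popular-segment volume: 480,000 t
premium = total_tonnes * premium_share   # premium-segment volume: 120,000 t

# Project volumes five years ahead at the quoted growth rates
# (10% p.a. for popular, 20% p.a. for premium).
popular_5y = popular * 1.10 ** 5
premium_5y = premium * 1.20 ** 5

print(f"popular: {popular:,.0f} t now, {popular_5y:,.0f} t in 5 years")
print(f"premium: {premium:,.0f} t now, {premium_5y:,.0f} t in 5 years")
```

Even over five years, the popular segment stays much larger in volume, which is why the choice between segments turns on profitability and accessibility rather than size alone.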


8. Conclusion:-

The principle of market segmentation is that the product and service needs of individual customers differ. Market segmentation involves grouping customers together with the aim of better satisfying their needs while maintaining economies of scale. It consists of three stages and, if properly executed, should deliver more satisfied customers, fewer direct confrontations with competitors, and better designed marketing programmes. Segmentation underpins the differentiation of marketing strategy: the bases of segmentation help the marketer pick its targets and address the needs and wants of customers at every level. Segments must be identifiable, so that the marketer can determine which consumers belong to a segment and which do not, and substantial, that is, large enough to profitably support a separate marketing programme. As noted previously, a strategy of market segmentation is costly; thus one must carefully consider not only the number of customers available in a segment but also the amount of their purchasing power.


References:-

1) Kotler, Philip (1997), Marketing Management, 9th edition, Prentice Hall, New Jersey, pp. 249-268.

2) Goswami, Akanksha ("ankasha91") (2010), "Market Segmentation", www.slideshare.net (student of Bhilai Business School).

3) Shewe, Charles (2009-2010), "Market Can Be Segmented Using Several Bases".

4) "Market Criteria for Effective Segmentation in Sales/Marketing Management" (2009), www.citemannetwork.

5) Restrepo, Jorge A. (1994), "Segment-Target-Position".


Pricing-to-Market and Its Implications for PPP


At present, with the deepening of globalization, the world is integrated into one huge market, and international trade between countries is increasingly important. Exchange rates are therefore of great concern: people care about their ups and downs and their implications, especially the extent to which an exchange rate change influences the price of imported or exported goods. The idea of pricing to market (PTM) is important for understanding this question. This article will first introduce the concept of pricing to market, then discuss its implications for purchasing power parity (PPP), and finally give a conclusion.

Review of PTM

PTM is a phenomenon that arises in international trade between countries. When markets are segmented and there is no "hot money", exporters can set different prices according to the destination market; they can choose either producer currency pricing or local currency pricing. When producer currency pricing is used, a devaluation reduces the export price of local commodities; the change in the exchange rate passes through to prices, which supports the law of one price and purchasing power parity. When local currency pricing is chosen, however, a devaluation of the producer's currency does not affect the export price of commodities, since they are priced in the local currency of the destination market. International trade costs are essential to pricing to market.
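A minimal numeric sketch of the two pricing rules may help (the prices and the 10% devaluation below are hypothetical numbers, not from any dataset): under producer currency pricing the devaluation passes through one-for-one to the import price, while under local currency pricing the import price does not move at all.

```python
# Pass-through of a producer-currency devaluation under PCP vs LCP.
# e = units of the importer's (local) currency per unit of the
# producer's currency, so a devaluation of the producer currency lowers e.
home_price = 100.0               # price set in the producer's currency
e_before, e_after = 1.00, 0.90   # 10% devaluation of the producer currency

# Producer currency pricing (PCP): local price = home price * e,
# so the exchange rate change passes through fully.
pcp_before = home_price * e_before   # 100.0
pcp_after = home_price * e_after     # 90.0

# Local currency pricing (LCP): the exporter fixes the local price,
# so the devaluation never reaches the importing consumer.
lcp_before = lcp_after = 100.0

pass_through_pcp = (pcp_after - pcp_before) / pcp_before   # -10%
pass_through_lcp = (lcp_after - lcp_before) / lcp_before   # 0%

print(f"PCP import price change: {pass_through_pcp:.0%}")
print(f"LCP import price change: {pass_through_lcp:.0%}")
```

Under LCP the exporter's margin, not the import price, absorbs the exchange rate movement, which is exactly the channel through which PTM weakens the law of one price.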

Atkeson and Burstein (2008) stated that without international trade costs, even in the presence of variable markups that lead to incomplete pass-through, there is no pricing-to-market. Hence, imperfect competition with variable markups is necessary, but not sufficient, for pricing-to-market.

It is widely believed that PTM does not apply to all categories of goods; that is, the extent of pricing to market varies across goods. Krugman (1987) stated that "PTM is not universal, pricing to market seems to be limited to the transportation equipment and machinery industries", based on his research on the US and Germany.

In short, PTM refers to the action of a firm to set different price of the same product in different markets.

Implications of PTM for PPP and empirical evidence

1. PPP was first formally introduced by Gustav Cassel in 1920; it was intended to provide a standard for currency values in the settlements after the First World War. It indicates that when consumers purchase identical products in any market worldwide, the quantity of money required should be the same when measured in one currency (Hallwood and MacDonald, 2000). At present, PPP has two functions in economics: first, to judge whether a currency has been over- or under-valued; second, as a conversion tool, to convert the GDP or GNP of one country from its own currency into another and thereby compare economic strength between countries.

PPP has been developed into two forms, absolute PPP and relative PPP. Absolute PPP is based on the law of one price: the theory of absolute purchasing power parity states that the same basket of goods should sell for the same price everywhere (Alessandria and Kaboski, 2009), while relative PPP states that exchange rates adjust according to the inflation differentials between two markets (Pilbeam, 2006).
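A small worked example may clarify the two forms (all price levels and inflation rates below are made up for illustration): absolute PPP pins the exchange rate to the ratio of national price levels, while relative PPP ties the change in the exchange rate to the inflation differential.

```python
# Absolute PPP: S = P_home / P_foreign, where S is the home-currency
# price of one unit of foreign currency.
p_home, p_foreign = 120.0, 100.0    # price of the same basket at home / abroad
s_ppp = p_home / p_foreign          # implied exchange rate: 1.20

# Relative PPP: the exchange rate moves with the inflation differential,
# S_next = S * (1 + pi_home) / (1 + pi_foreign).
pi_home, pi_foreign = 0.05, 0.02    # home and foreign inflation rates
s_next = s_ppp * (1 + pi_home) / (1 + pi_foreign)
depreciation = s_next / s_ppp - 1   # ~2.9% depreciation of the home currency

print(f"absolute PPP rate: {s_ppp:.2f}")
print(f"relative PPP depreciation: {depreciation:.2%}")
```

With home inflation 3 percentage points above foreign inflation, relative PPP implies the home currency should depreciate by roughly that differential, which is the benchmark that PTM behaviour is found to violate.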

Unquestionably, according to many studies, the implication of PTM for purchasing power parity is influential: PTM weakens the force of PPP. The theory of PPP implies that a significant change in the exchange rate should feed through to national prices, with depreciation raising inflation and appreciation lowering it. In practice, however, neither the ups nor the downs of the exchange rate have significantly affected inflation. One reason the exchange rate matters so little is that it does not affect trade volumes and total prices as much as people expect, and the reason lies in PTM: producers exporting commodities to other countries do not change export prices as expected. For instance, Japanese producers exporting automobiles to the American market set their products' dollar prices on the basis of conditions in the American market. If the yen appreciates, they will by and large reduce the yen price rather than raise the dollar price, in order to maintain their current business scale in the American market; they would rather absorb the effects of the exchange rate change through adjustments within the enterprise itself. In such conditions the exchange rate cannot play the important role it should have played.

Betts and Devereux (1998) argued that "PTM plays a central role in exchange rate determination and in international macroeconomic fluctuations." The pass-through from exchange rate changes to prices is strongly restricted by PTM. They also stated that "PTM generates departures from purchasing power parity; it tends to reduce the comovement in consumption across countries, while increasing the comovement of output" (Betts and Devereux, 1998). Generally, according to the law of one price and purchasing power parity, changes in the exchange rate pass through efficiently to prices: prices adjust until they fit the change in the exchange rate, and equilibrium is finally restored. But PTM, as the private action of enterprises, to some extent obstructs this pass-through channel. Where the extent of PTM is high, a devaluation has a very limited impact on the price determination of imported commodities; as Betts and Devereux (1998) stressed, "the allocative effects of exchange rate changes are therefore weakened."

PTM also has important welfare implications for the transmission of monetary policy shocks (Betts and Devereux, 1998). Under PPP, an unexpected monetary expansion raises welfare both at home and abroad. Betts and Devereux (1998) concluded, however, that "monetary policy is a 'beggar-thy-neighbor' instrument in the presence of PTM".

The phenomenon of pricing to market is everywhere in daily life. It significantly weakens the law of one price and the theory of purchasing power parity, and it shows that the same amount of currency cannot always buy the same basket of goods in different countries' markets.

Since the same product can be priced differently in two countries, there is scope for arbitrage. Consider exactly the same model of Dell laptop sold in both the US and China: on eBay, the Dell Alienware M15x is priced at $1,449.99, while on Dell China's official site it is priced at 16,999 RMB. At the prevailing exchange rate of 6.573 RMB per US dollar, 16,999 RMB is about $2,586, roughly $1,136 more than the US price. The cost of transporting one laptop from the US to China is clearly far less than that, so it is easy to see why people might try smuggling commodities like this laptop to earn a profit.
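The arbitrage spread quoted above can be checked with a few lines of arithmetic, using the 2011 prices and exchange rate reported in the text:

```python
# Back-of-the-envelope check of the Dell Alienware M15x arbitrage spread,
# using the figures quoted in the text.
us_price_usd = 1449.99    # eBay listing price in the US
china_price_rmb = 16999   # Dell China official site price
rmb_per_usd = 6.573       # RMB/USD rate quoted in the text

china_price_usd = china_price_rmb / rmb_per_usd
spread = china_price_usd - us_price_usd

print(round(china_price_usd))  # about 2586
print(round(spread))           # about 1136
```

As long as the spread exceeds shipping and transaction costs, the price gap invites arbitrage.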


As the above discussion shows, PTM is important in the determination of exchange rates: the greater the extent of PTM, the less influence exchange-rate changes have on prices. PTM also has significant welfare implications for the transmission of monetary policy shocks. PTM and its implications for PPP deserve further research.

Alessandria, G. and Kaboski, J. P. (2009), "Pricing-to-Market and the Failure of Absolute PPP".
Atkeson, A. and Burstein, A. (2008), "Pricing-to-Market, Trade Costs, and International Relative Prices", University of California, Los Angeles, and Federal Reserve Bank of Minneapolis.
Betts, C. and Devereux, M. B. (1998), "Exchange rate dynamics in a model of pricing-to-market", Journal of International Economics, 50 (2000), 215–244.
Hallwood, P. and MacDonald, R. (2000), International Money and Finance, 3rd ed., Blackwell.
Krugman, P. (1986), "Pricing to Market when the Exchange Rate Changes", in Arndt, S. W. and Richardson, J. D. (eds), Real-Financial Linkages among Open Economies, MIT Press, Cambridge.
Mark, N. C. (2001), International Macroeconomics and Finance: Theory and Econometric Methods, Wiley-Blackwell.
Pilbeam, K. (2006), International Finance, 3rd ed., Macmillan.
Sarno, L. and Taylor, M. P. (2002), "The new open-economy macroeconomics", in The Economics of Exchange Rates, Cambridge University Press, Cambridge.
"Alienware m15x items" listing, eBay (2011). Retrieved March 9, 2011.
Dell Alienware M15x product page, Dell China (2011). Retrieved March 9, 2011.


The role of IS/IT in business process re-engineering (BPR)


In today's era, organisations face increased competition throughout the world, and in order to remain part of this competition and to maximise their market share or increase their profits, they must always keep an eye on the changing business environment and keep themselves up to date in order to survive. Many changes appear in the business world every day, and one of them is business process re-engineering, or "BPR" (Olalla, 2000).

BPR can be defined as the "re-engineering of an organization by examining existing processes and then restoring and changing them for incremental improvement" (DeMarco, 2006), giving organisations an edge over their competitors in the market; in other words, it is the re-invention of the way of doing business in order to increase organisational performance (Radhakrishnan and Balasubramanian, 2008).

Hammer and Champy have defined re-engineering as a “fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance, such as cost, quality, service and speed” (Hammer and Champy, 1993).

BPR is widely discussed within organisations. This report examines the important role of information technology (IT) and information systems (IS) in BPR and analyses how IT and IS are implemented in organisations' re-engineering processes.

Role and importance of IS/IT in the implementation of BPR

IT has been central to business success all over the world for years: "information technology is the technology of sensing, coding, transmitting, translating and transforming information" (Whisler, 1970). Relating IT to BPR, "information technology is an integral part of re-engineering as an enabler" (Hammer and Champy, 1993), because it allows organisations to re-engineer their business processes. The relationship between BPR and information technology is also said to be recursive (Davenport and Short, 1990): IT capabilities should support business processes, and business processes should be designed in terms of the capabilities IT can provide. The major role IT can play is to make it possible for organisations to design entirely new processes. If there is no change in the way work is done and IT is simply used to automate an existing process, the benefits of that automation will be minimal or non-existent. Organisations thinking about re-engineering should therefore not use IT merely to automate processes. Instead, they should think inductively when applying IT to re-engineering: first recognise a powerful solution, and then seek the problems it might solve. As Hammer and Champy (1993) argue, a company that first looks at its problems and only then considers the solutions provided by technology can never truly re-engineer.

Most companies make the error of asking how new technology can help them solve the problems of their current business processes. Instead, they should ask how technology can improve things that are not being done at all in their existing ways of working. Organisations see re-engineering as an innovation; it helps them exploit IT's capabilities to achieve entirely new objectives (Hammer and Champy, 1993). BPR depends heavily on the power of information technology and systems to improve the way work is done and to serve those responsible for using the information. Alcoa (an American firm) provides a good example of using IT capabilities to achieve new business objectives. The company announced the largest IT project in its history: it spent $150 million on the effort, which covered about 26 countries and took many years to complete. It was a difficult undertaking, but the idea was simple: let employees around the globe communicate quickly and easily. This initiative illustrated a basic fact about using IT strategically, and today organisations all over the world are focusing on connecting their employees and customers, across functions and between the people who make important decisions. In this example, the company did not ask how IT would solve its problems; it asked what could be achieved through the use of new technology. One of the major advantages of IT in re-engineering is its disruptive power: IT can break the rules, make users think inductively and give their organisations a competitive advantage.

One of the best examples of using IT in BPR for competitive advantage is Amazon, which started selling books online without any physical presence for its customers. In doing so, it transformed the bookselling business and broke all the rules by changing its processes through the use of IT. Amazon also offers a function that automatically alerts customers by e-mail whenever a new book matching their searches is published, which was only possible because of internet technology. The effective use of IT does not simply mean moving information faster; it means doing the right things. IT helps organisations to be proactive in their decision-making and to improve business performance, rather than merely reporting on it after the fact (Radhakrishnan and Balasubramanian, 2008).

A further example of re-engineering a process through IT is IBM's reorganisation of its credit function. Previously, it took between six days and two weeks to move an application from the credit department to the pricing department and then to the administrators who formally wrote up the quotation. IBM Credit then realised that actually processing an application took only about 90 minutes, with the rest of the time wasted while the file sat on specialists' desks, and the idea of re-engineering the entire process was born. IBM replaced the four specialists who had processed each application with a single generalist, called a "deal structurer", who handled the whole application from start to finish using templates on a new computer system that provided all the data and tools each specialist had used before. The re-engineering reduced turnaround time from seven days to four hours, and without any increase in staff IBM achieved a dramatic improvement in productivity, handling 100 times as many credit applications as before (Hammer and Champy, 1993).

The importance of IT in re-engineering is also illustrated by the Ford Motor Company. In the 1980s, Ford looked at the 500 employees in its accounts payable department and realised that they were spending most of their time tracking down discrepancies between purchase orders, invoices and receipts. Ford decided to re-engineer the entire parts-procurement process and created an online database for purchase orders. Whenever a purchase order was issued to a buyer, it was entered directly into the database, and when goods arrived at the dock, someone from the department checked the database. If the shipment matched the order it was accepted; if not, it was rejected, reducing discrepancies between what was ordered and what was received. As soon as a shipment was received, the database was updated and a cheque to the vendor was generated automatically. As a result of this re-engineering, the number of employees at Ford was reduced from 500 to 125, with a dramatic improvement in efficiency (Hammer and Champy, 1993).

IS/IT have always helped organisations re-engineer their business processes, and now that the whole globe is networked, organisations need to re-engineer their processes to keep up with new trends, since proper BPR can substantially reduce the cost of production. This is evident in Cisco's 1999 launch of a product called MCO (Manufacturing Connection Online), which was used to channel orders directly and immediately from customers to manufacturers and suppliers. The new system drastically reduced Cisco's operating costs and the number of days taken to pay suppliers, and eliminated the cost of paper-based processes, bringing savings of $24 million in material costs and $51 million in labour costs, along with a 45% reduction in inventory (Iroegbu, 2011).

Information systems are widely used in business; they are systems embedded in an organisation to support data processing and decision-making. Information systems can be regarded as structures reflecting all the activities of an organisation, of which a software system is one part (Yih-Chang Chen, 2001).

Information systems, supported by a wealth of information and communication technologies, sustain the core business processes of most organisations today. IT has moved beyond the efficiency and effectiveness gains of the 1960s and 1970s and now offers strategic advantages that will transform the organisations of the future. Changing a business process therefore involves redesigning the information systems and information technology that support the new process (Broadbent, 1999). We live in a period of huge growth in organisations' use of IS/IT. Information systems have no independent existence of their own outside the context of an organisation and its business processes, and IS/IT is considered a main driver of organisational change; it also plays a major role in the BPR process. BPR has changed the way organisations view IS/IT, and it is said that re-engineering processes with IS/IT support may yield the largest short-term payoffs from BPR. The move from mainframe systems to modern PC-based networking systems is considered one of the most useful aspects of IS/IT in BPR, although it is sometimes difficult to change the application software as well as the platform. The internet has further enhanced the capabilities of IS/IT in improving business processes. IS/IT has helped organisations to implement business processes innovatively, to improve the quality of their operations in terms of timescales and accuracy, and to provide new ways of working that help organisations succeed (Parker et al., 1988). On the other hand, implementing BPR through IS/IT can create a negative image among staff, who may feel it is reducing the role and number of middle managers (Dopson and Stewart, 1993).

Many surveys confirm that IS/IT is the enabler behind the majority of BPR initiatives. Most managers look to BPR as a way of applying IS/IT to their business processes in order to achieve their goals and to deliver better-quality products and services to customers.

A survey by Willcocks highlighted the importance of IS/IT: its management is considered among the top 10 critical success factors for BPR programmes, and about 75% of the top 30% of BPR programmes saw IS/IT as the key enabler of radical process redesign and of supporting the re-engineered processes (Willcocks, 1995). There are also new views on the use of IS/IT in BPR. As the use of IS/IT increases, it has been suggested that the term BPR should be altered to "BP&ISR", and that the definition of BPR by Hammer and Champy (1993), "fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance, such as cost, quality, service and speed", can be re-defined as follows: "BP&ISR is the fundamental rethinking and radical redesign of an organisation's business processes and information systems with an aim to achieve significant improvements in quality and service, and optimise costs and productivity" (Currie, 2003). If IS efforts are focused on the requirements of the business rather than on improving internal efficiency, they will have a great impact on the business. IS specialists can readily contribute to process redesign through their expertise, technical knowledge and style of thinking, by identifying the limitations in process designs that BPR tools and techniques may have and establishing different ways of improving them (Richards, 1993).


This report has presented the importance of IS/IT in implementing BPR. The relationship between BPR and IS/IT remains difficult to pin down, but we can say that in most cases the two go together. Organisations must consider how IS/IT supports their processes while also ensuring those processes are properly linked with IS/IT in order to achieve the best outcome. The examples have shown how different organisations re-engineered their processes using information technology and information systems and succeeded in achieving their objectives. The proper use of IT and IS in process re-engineering will play a vital role in the future, as the internet has changed the way business is done and most customers are attracted to internet shopping rather than physically visiting shops. Tesco, Asda and Sainsbury's are good examples of organisations using IS/IT in their business processes to achieve their goals, and to a large extent they are succeeding. BPR in terms of information technology and information systems can only be successful if business needs and objectives are directly related to the activities of the process, and an organisation must consider the critical aspects of IT in its business processes before implementing it; otherwise IT will merely automate its existing processes.


The concept of capital market efficiency

1. Introduction

The concept of capital market efficiency is a research topic that has generated a great deal of argument and debate in the field of finance. As such, it is only to be expected that the wider implications of market efficiency are just as far-reaching.

The justification for this research undertaking is therefore considerable: capital market efficiency, as an area of finance, demands rigorous work from scholars and from the managers of organisations alike.

The term "efficient market" first appeared in a 1965 paper by E. F. Fama, who argued that in an efficient market, on average, competition will cause the full effects of new information on intrinsic values to be reflected "instantaneously" in actual prices. The efficient market hypothesis asserts that none of the techniques used by fund managers (i.e. forecasting, valuation techniques) can outperform the market.

Arguably, no other theory in economics or finance generates more passionate discussion between its challengers and proponents. For example, the Harvard financial economist Michael Jensen writes that "there is no other proposition in economics which has more solid empirical evidence supporting it than the efficient market hypothesis", while the investment maven Peter Lynch claims: "Efficient markets? That's a bunch of junk, crazy stuff" (Fortune, April 1995).

The efficient market hypothesis (EMH) suggests that profiting from predicting price movements is very difficult and unlikely. The main engine behind price changes is the arrival of new information. Consequently, there is no reason to believe that prices are too high or too low: security prices adjust before an investor has time to trade on, and profit from, a new piece of information.

The key reason for the existence of an efficient market is the intense competition among investors to profit from any new information. The ability to identify over- and under-priced stocks is very valuable: it would allow investors to buy some stocks for less than their "true" value and sell others for more than they are worth. Consequently, many people spend a significant amount of time and resources trying to detect mis-priced stocks. Naturally, as more and more analysts compete to take advantage of over- and under-valued securities, the likelihood of finding and exploiting such mis-priced securities becomes smaller and smaller. In equilibrium, only a relatively small number of analysts will profit from the detection of mis-priced securities, mostly by chance. For the vast majority of investors, the payoff from information analysis would likely not outweigh the transaction costs.

Over this period, the Efficient Markets Hypothesis (EMH) has achieved widespread recognition and found its place among scholars and researchers of finance and economics.

The empirical observations of Kendall (1953) thus came to be known as the 'random walk model' or, in some instances, the 'random walk theory' (ibid). This theory suggests that the share price at any one time reflects all available information and will change only if new information arises. In this way, unpredictable news hitting the market causes share prices to fluctuate in a random and unpredictable manner.

Kendall's observation was initially read as implying an irrational market in which stock prices wander at random. However, as Bodie et al. (2001: 268) point out, in a reversal of opinion, economists concluded that the unpredictable behaviour of stock prices actually indicated an 'efficient market', by the very virtue of such unpredictability.

Although Kendall (1953) is referred to as the father of the random walk theory, the concept of market efficiency, and indeed of a 'random walk', had already been tested as early as 1900 (Dobbins et al., 1994: 70), in a doctoral thesis in mathematics by the French economist Louis Bachelier (1900). Bachelier argued that the price of a commodity today is the best estimate of its price in the future, and that commodity speculation is therefore a 'fair game' in which there are no winners and prices tend to follow a random walk (Pilbeam, 2005: 250-51).

Subsequently, as Dimson and Mussavian (1998: 93) suggest, the 1960s marked a transition in research on the unpredictable nature of stock market prices. Building on the findings of Samuelson (1965) and Roberts (1967), Fama (1970) provided a comprehensive review of the random walk literature and proposed the idea of an informationally efficient market in which "prices always fully reflect available information" (Fama, 1970: 383). Thus the Efficient Markets Hypothesis (EMH) was introduced; it supported the random walk theory and created a theoretical framework within which economists could understand the unpredictable nature of stock market prices.

More recently, Yen and Lee (2008) provide a chronological review of empirical evidence on the Efficient Market Hypothesis over the last five decades. Their survey clearly demonstrates that the Efficient Market Hypothesis no longer enjoys the level of strong support it received during the golden era of the 1960s, but instead has come under relentless attack from the school of behavioural finance in the 1990s. Besides the above broad review, there are other survey papers with a specific theme, for instance, Fama (1998) surveys the empirical work on event studies, with a focus on those papers reporting long-term return anomalies of under- and over-reactions to information; Malkiel (2003) and Schwert (2003) scrutinize those studies reporting evidence of statistically significant predictable patterns in stock returns; Park and Irwin (2007) review the evidence on the profitability of technical trading rules in a variety of speculative markets, including 66 stock market papers published over the period from 1960 to 2004.

The Efficient Market Hypothesis has been accepted by various scholars since its inception nearly forty years ago; in more recent times, however, it has come under criticism and has apparently been undermined by the emergence of a new school of financial thought known as behavioural finance. The discussion published in Malkiel et al. (2005) clearly indicates that there is no sign of compromise between proponents of the Efficient Market Hypothesis and advocates of behavioural finance.

Lo (2004) notes that useful insights can be gained from the biological perspective and calls for an evolutionary alternative to market efficiency. In particular, he proposes the new paradigm of the Adaptive Markets Hypothesis (AMH), in which the Efficient Market Hypothesis can co-exist alongside behavioural finance in an intellectually consistent manner. In this new hypothesis, market efficiency is not an all-or-none condition but a characteristic that varies continuously over time and across markets. The main contribution of this strand of literature is a systematic review of the empirical work on evolving weak-form stock market efficiency, which is consistent with the prediction of the Adaptive Markets Hypothesis.


The history of stock trading and trading associations can be traced back as far as the 11th century, when Jewish and Muslim merchants set up trade associations. After centuries of evolution, stock markets have become the symbol of commerce in the modern world; they operate in many countries and trade a range of securities. World stock market capitalisation was estimated at about $47.7 trillion as of January 2010. The stock market serves various functions, such as capital mobilisation, investment opportunities and risk distribution.

The major stock exchanges in the world today include New York Stock Exchange, London Stock Exchange, Frankfurt Stock Exchange, Italian Stock Exchange, Hong Kong Stock Exchange and Tokyo Stock Exchange.

Stock can be defined as a collection of shares in a company. A stock market, therefore, is a place where buyers and sellers of stock meet to transact business: a place where the exchange of shares takes place.

Basically, there are two types of stock markets:

Primary Market
Secondary Market

The primary market is the market of first sale, where companies first sell their shares to the public. It is used by companies coming to the exchange for the first time to raise finance.

The secondary market, on the other hand, is a market where existing or already-issued securities are traded. Activities in the secondary market are usually carried out on the floor of the stock exchange, where buyers and sellers of securities meet to consummate deals through stockbrokers, the dealing members of the exchange.

The stock market is often influenced by the availability of information on the various securities traded on the exchange. Stock prices move up and down, reflecting the mood of the market, but they move only when there is new information in the market.

The concept of an efficient market is a special application where the market price is an unbiased estimate of the true value of the investment.

1.1 A Historical Overview of Capital Market Efficiency

During the 1950s, advances in information technology made it possible to analyse economic time series using computer applications, much to the delight of economists and business theorists alike. It was widely believed among such scholars that "tracing the evolution of several economic variables over time would clarify and predict the progress of the economy through boom and bust periods" (Bodie et al., 2001: 268).

A viable candidate for such experimental analyses at the time was the behavioural pattern of stock market prices over a certain time period, which, it was hoped, could be used as an indicator of economic performance (ibid).

Such an analytical study was subsequently conducted by Maurice Kendall (1953), who attempted to find recurring patterns in 22 UK security and commodity price indices. His results opened up a whole new dimension in the theory of stock market price behaviour: contrary to the then prevalent views among economists, he found no identifiable pattern in stock price fluctuations, which seemed to move in an erratic fashion. He therefore concluded that stock prices appeared to follow a random walk (Dobbins et al., 1994: 71).

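The random-walk model described above can be illustrated with a short simulation: a simple multiplicative walk with Gaussian "news" shocks, where the volatility figure is an arbitrary assumption chosen only for illustration.

```python
# Minimal random-walk price simulation: each day's return is independent
# random "news", so past prices carry no information about future moves.
import random

def random_walk(start_price, days, daily_vol=0.01, seed=42):
    """Simulate a share-price path whose daily returns are unpredictable."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    prices = [start_price]
    for _ in range(days):
        shock = rng.gauss(0, daily_vol)        # arrival of new information
        prices.append(prices[-1] * (1 + shock))
    return prices

path = random_walk(100.0, 250)  # roughly one trading year
# Under the random-walk model, the best forecast of tomorrow's price is
# simply today's price; no pattern in the path can be exploited.
```

Plotting such a path often produces what look like "trends" and "cycles" even though, by construction, none exist, which is precisely why apparent patterns in historical prices can be so misleading.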


The aims and objectives of the study are to discuss how efficient the stock market is, with particular reference to the London Stock Exchange:
To establish that there exists a concept of stock market efficiency, which implies that at any point in time the prices of securities react to all market information, positively or negatively, depending on the nature of the information.
To review all available literature on the efficient market hypothesis and compile a report.
To report on the stock market anomalies that seem to contradict the efficient market hypothesis.
To determine the effect of insider abuse on the efficiency of the stock market.
To present a comprehensive and critical analysis of the efficiency of the stock market.
To arrive at a conclusion based on the evidence gathered on the efficient market hypothesis, insider abuse, stock market anomalies and other factors affecting stock market efficiency.


To achieve the objectives outlined, the scope of the study and the report undertaken in this dissertation extend to the following areas for extensive research and analysis:

The study will focus on the types, needs and levels of stock market efficiency.
The study extends to an analysis of stock market operations and the effect of market information on the buying and selling of securities.
The study will also examine the issue of insider trading and its effect on stock market efficiency.
In detailing the anomalies of the stock market, the report aims to bring out the effect of those anomalies on the efficiency of the stock market.


To present a comprehensive and coherent report, this dissertation adopts the following structure:

Chapter 1: Introduction provides a brief description of stock market efficiency.

Chapter 2: Literature Review represents the body of the text and covers a detailed review of the available literature on stock market efficiency. This section attempts to review the academic works relating to the research topic.

Chapter 3: This chapter develops the research methods to be used and demonstrates the research methodology employed to carry out this study.

Chapter Four: This section provides the main case study of the research, a general overview of the London Stock Exchange.

Chapter Five: This chapter discusses the data collection and findings from the interviews and questionnaires conducted with stakeholders (investment analysts, pension fund managers, brokers and investors in the stock market) and their views and opinions about stock market efficiency.


Market and Value Chain Analysis of Starbucks

1.0 Introduction:

Market research is often conducted by companies in order to determine their niche market position, as well as the direction they must take in order to remain competitive and succeed. A variety of methods is utilised in collecting data. Quantitative market research has historically been the territory of professional researchers with backgrounds in statistics, economics or mathematics. For any company producing various products, there may be an almost infinite number of combinations of price, packaging, convenience and perceived value additions for its customers; marketing research offers a set of well-defined and generally accepted methods for identifying which combination has the greatest likelihood of success. This report describes such a market research endeavour, undertaken to gauge the performance of the coffee giant Starbucks. The research collects primary data from consumers using an interview questionnaire and obtains secondary data from various published research and reports.

2.0 Research methodology:

2.1 Primary Research:

Large, established companies typically expend considerable resources conducting marketing research, either through their own internal research departments or by contracting with outside research firms. There is always a need to develop ways to monitor customers and identify their needs and demands, and this can be done by asking questions such as why a consumer chooses to purchase, or not to purchase, a particular product. While qualitative data may be useful in assessing customers’ feelings about a product, it offers little insight into how many customers in a given marketing area might actually purchase it, unless of course every potential customer is questioned about his or her intentions.

2.1.1 Questionnaire:

A questionnaire could be developed by the company to identify these issues. Various questions that could be asked are as under:

Consumer’s preference: The question can be designed to assess why consumers use this particular brand and what their preferences are. For example, why would people prefer Starbucks coffee over other brands, and if they choose to come to Starbucks, what would be their most preferred choice?
The consumer may be asked about the variety of coffee at the outlets.
The consumers may be asked about the price.
The consumers may be asked about the availability of various sizes.
If the consumers prefer some other addition to the variety.

It is important to know not only which attributes customers desire, or are repulsed by, but also to be able to estimate the cost of adding these attributes to the product. It is ultimately the difference between the cost of adding attributes, otherwise known as value, and what the customer is willing to pay for these attributes that determines whether to bring a product to market.

2.2 Secondary Research:

The most obvious benefit of secondary data is that it already exists and does not require additional time and expense to collect. The disadvantage, of course, is that it is not likely to be tailored specifically to the questions that the producer wishes to answer. Nonetheless, it may be used to glean much useful information at very little cost. External secondary data may be available from a number of sources including government publications and industry trade groups. A third source, syndicated data, is available for purchase from private data collection agencies such as Morningstar, Hoover or Yahoo finance.

3.0 Starbucks an Overview:

Starbucks was founded by Jerry Baldwin, Zev Siegel, and Gordon Bowker as a small store called Starbucks Coffee, Tea, and Spice in Pike Place Market in Seattle in 1971. The mission of the corporation is perhaps one of the reasons for its success. Starbucks seeks to maintain a balance between fiscal responsibility and social responsibility by growing its business not only in coffee, but also in “third-place environments” (places people can gather that aren’t work or home). Its mission statement itself is relatively simple: “to establish Starbucks as the most recognized and respected brand in the world”.

3.1 The Performance:

Starbucks is a public limited company that operates from Seattle, Washington and is traded on NASDAQ as SBUX. Ever since its inception the company has shown a strong sales and growth record. According to Hoover (2010), there are more than 16,600 Starbucks coffee shops in 40 countries across the globe. As of September 2009, Starbucks had notched up impressive annual revenues of $9,774.6 million with a net income of $390.8 million (Yahoo Finance 2010). Starbucks had some 142,000 employees globally in 2009 (Hoover, 2010).

The Company has a wide variety of products to offer. It provides tips on how to make good coffee at home. It offers biscotti, some salads and pastries, as well as sandwiches to go with the coffee. The majority of stores are modeled on Italian themes, providing the customers with an unmatched Italian experience and a little luxury. It also offers a variety of Italian products that include lattes, cappuccinos and mochas. (Fletcher & Brown 2005)

Customer care is most important for Starbucks, and therefore employees are specially trained towards customer orientation and customer satisfaction. It is not unusual to see employees asking customers about their coffee preferences and taking feedback on the customer’s experience at Starbucks. Starbucks has been successful owing to changing lifestyles and a high availability of Disposable Personal Income (DPI). The baby boomers have moved towards a healthy lifestyle. This phenomenon is further coupled with the fact that more and more people the world over are shifting towards non-alcoholic drinks, implying further demand for Starbucks’ products. (Borden 1978)

3.2 Competition and Market strategy:

Success does not come easy: along with success come many competitors, and that is what has happened to Starbucks. Many companies imitate Starbucks in terms of store layouts and light furniture (Hoovers 2010). In addition, some competitors have copied Starbucks’ rapid expansion plan. An example is Seattle’s Best Coffee Company, which waits for Starbucks to engage in aggressive consumer education about the importance of coffee, and then goes to those same locations and opens up stores there. This has caused a lot of competition for Starbucks, as they have to keep watching their backs. Another major competitor is Second Cup, which has the distinction of expanding rapidly in the US retail market, growing in numbers and eroding the market share of Starbucks.

The company realizes that it has reached saturation in the US coffee market, with competition breathing down its neck, and is thus gearing itself for hitherto unexplored and uninitiated international markets. Starbucks enjoys a well established brand name with a niche market; however, of late it has been challenged by dwindling profit margins due to increased coffee prices. All these factors prompt Starbucks to expand internationally. (Benter and Booms, 1981)

3.3 Value Chain at Starbucks:

3.3.1 Inbound Logistics

In order to deliver on its promise of offering products at everyday low prices in its stores, Starbucks utilizes economies of scale in its inbound logistics. Its supply chain methodology involves managers negotiating globally and developing strategic alliances with vendor partners for products.

3.3.2 Operations

Starbucks has global operations spread over 40 countries. These stores are set up in a similar fashion and most offer a large variety of products. Operational efficiency is critical to the overall success of Starbucks and also augurs well for superior customer service. By setting up outlets in a similar fashion, Starbucks gains greater control that can be sustained at the corporate level; however, individual stores are allowed and encouraged to make modifications with justification.

3.3.3 Marketing and Sales

Starbucks employs economies of scale in its overall national-level strategic marketing and advertising campaigns, at the same time providing some degree of autonomy and financial independence to execute seasonal or tactical sales promotional campaigns. Starbucks is able to generate cost savings in advertising because of economies of scale and at the same time, create customer value through local adaptation of promotional campaigns.

3.3.4 Outbound Logistics

The nature of the business ensures that Starbucks has minimal shipping and distribution traffic. However, as an option for customers who prefer to have their goods delivered, Starbucks has carefully built up a distribution network with the capability to deliver products to customers’ homes.

3.3.5 Service

Starbucks places extra emphasis on its primary business goal, i.e. to serve its customers’ needs efficiently and with a personal touch. Its managers are expected to spend more time on the shop floors, listening to their customers and employees, thereby enabling them to make decisions that respond quickly to the unique needs of target customers. By taking a decentralized approach to its operations, Starbucks is able to deliver on what it sees as its core value proposition to its customers: superior service.

3.3.6 Human Resources

Starbucks has a strong commitment to investing in its employees, which it feels is its greatest competitive advantage. The Company values its employees and considers them important stakeholders in the business. Starbucks’ management believes that when all the needs of employees have been dealt with adequately, they will do their part in providing quality services. On top of that, the Company’s leaders, like its CEO Schultz, believe that all employees should feel appreciated. Compensation plans such as performance bonuses and employee stock ownership plans, along with recognition programs and an emphasis on an open-door policy with management, help in the retention of employees.

3.3.7 Technological

From a technological standpoint, Starbucks has both internal and external issues to deal with. External issues pertaining to product development and improvement, patenting and R&D can be viewed as mainly a supplier-based concern. Even though the majority of that burden is on the suppliers, there are many internal technological issues, especially in information technology. Expansion in this area is definitely an area of growth opportunity and positioning within the overall industry.

4 Conclusions

Marketing research almost invariably centers on collecting and analyzing the information necessary to make decisions about how to most effectively market a product. Quantitative data, unlike attitudes, perceptions or ideas, refers to information that can be measured, such as the quantity of a product sold during a specified time period, the sales price or the population of potential customers residing in a particular marketing area. Various aspects of Starbucks’ operations, in the light of its market position, current capabilities and various critical success factors, make Starbucks Corporation an excellent model for success.

Benter, J. and Booms, B. (1981): Business development strategies and organizational structures for service firms, in Donnelly, J. and George, W. (eds), Marketing, American Marketing Association, Chicago.
Borden, N. (1978): The concept of business development, Journal of Advertising Research, June, Vol. 2 (available in Schwartz, G., Science in Management, John Wiley & Sons).
Crynes, Bryan: “Starbucks Overview.”
Fletcher, R. & Brown, L. (2005): International Business Development Skills, 3rd edition, Pearson Prentice Hall, French’s Forest.
Starbucks Corporation, Company Description, Hoovers (2010).
Starbucks (SBUX) Income Statement, Yahoo Finance, retrieved 23rd April 2010.

Free Essays

An Islamic credit card model for the British money market, specifically for Britain’s Islamic bank


In my dissertation I aim to establish an Islamic credit card model for the British money market, specifically for Britain’s Islamic bank. No Islamic credit card is available in Britain to date, and the only Islamic financial institution is the Islamic Bank of Britain. My research draws on the Islamic credit cards already established in the Middle East and in Malaysia’s money market. Most of the banks providing such cards in Asia operate according to the deferred-payment principle known as Bai al-Inah, which was later found to be disputed, being regarded by some as a disguise that is completely opposed to the concepts of Islamic law (Shariah). On Shariah matters, most banks in Asia have their own Shariah compliance boards to formulate policy, but the problem is that different Shariah committees often hold conflicting views on the same question. According to the “Middle East Banker”, the products provided by these Asian banks are, under some interpretations of the Holy Koran, not permissible.

Banks in the Persian Gulf states have devised new plans for Muslim believers and provide them with credit cards. Their method, based on a principal-and-guarantor (or guarantee) system, is very attractive. My financial model will fulfil both religious and financial expectations at the same time. The model may also have practical limitations to its use by Britain’s Islamic bank, as I know it requires complex procedures.


I plan to construct a financial model for an Islamic credit card, specifically for the British money market. The following are the core objectives of the dissertation:

1. To examine the credit card’s function as a method of payment according to Islamic law (Shariah).

2. To appraise the credit card market in Britain and whether Muslims may use these cards.

3. To assess the reliability of credit cards and their role in the Islamic market.

4. To establish whether a suitable model exists for the use of an Islamic credit card, and to discover the potential function of Islamic principles in credit card use.

What Is a Credit Card?

A credit card may be defined as a payment card that offers the cardholder a specified amount of credit with which to make purchases and to pay for the amount spent.

The outstanding balance may be repaid in full within an assigned period, or interest is charged on the remaining balance. (Paxon and Wood, 1998)
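The definition above can be sketched in a short example; the amounts and the 2% monthly rate are illustrative assumptions, not figures from Paxon and Wood:

```python
# Illustrative sketch of the definition above: interest is charged only
# on the balance left unpaid after the assigned period. The amounts and
# the 2% monthly rate are assumptions, not figures from the source.

def month_end_balance(spent: float, repaid: float,
                      monthly_rate: float = 0.02) -> float:
    """Balance carried forward: the unpaid amount plus interest on it."""
    unpaid = spent - repaid
    return unpaid * (1 + monthly_rate)

print(round(month_end_balance(500.0, 200.0), 2))  # → 306.0
```

Paying the balance in full within the assigned period leaves nothing to carry forward, so no interest arises.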

History of the Credit Card.

The first type of credit card system was developed in the US. In the early 20th century, metal plates were used by Western Union and other financial institutions to record customer details and accounts. The Flatbush National Bank introduced its monthly charge account for bank accountholders in 1947. In 1951, the Franklin National Bank became the first financial institution to issue a credit card to other bank customers (Lindsey 1980). The Diners Club issued the first modern credit card in 1950, called the travel and entertainment (T&E) card. American Express followed in 1958, offering a credit card which, like the Diners Club card, featured a credit period between spending and payment, but had no instalment facility (Wonglimpiyarat 2005). The first national credit card was developed in 1966 by Bank of America, which allowed it to stretch across other banks in the country. As a result, rival banks later joined together to provide a competing card under the Master Charge name (Frazer 1985).

The Credit Card in Britain.

Barclays issued the first credit card in Britain in 1966, under an agreement with Bank of America. Barclays imported all the facilities and structures of the Bank of America operation, revising them for Britain. (Wonglimpiyarat 2005)

Credit Card System.

Visa, Switch and MasterCard operate under a four-party scheme. All transactions through the scheme involve four main parties:

The cardholder, who pays for purchases by using the card;
The card issuer, who provides the card to the user and runs the transaction account;
The retail merchant, who sells goods or services in return for a promise of payment;
The merchant acquirer, who recruits retail merchants, obtains payment from the card issuer and passes the repayment to the merchant. Acquirers frequently have a relationship with the merchant, but this is not obligatory.
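The four parties above can be traced through a single settlement; the party roles follow the list, while the 1% interchange fee and 2% merchant service fee are purely illustrative assumptions, not figures from the dissertation:

```python
# Minimal sketch of one purchase settling through the four-party scheme
# described above. The 1% interchange fee and 2% merchant service fee
# are illustrative assumptions.

def settle(purchase: float, interchange_rate: float = 0.01,
           merchant_fee_rate: float = 0.02) -> dict:
    """Trace how one purchase amount moves between the four parties."""
    interchange = purchase * interchange_rate    # retained by the issuer
    merchant_fee = purchase * merchant_fee_rate  # retained by the acquirer
    return {
        "cardholder_owes_issuer": purchase,            # repaid later
        "issuer_pays_acquirer": purchase - interchange,
        "acquirer_pays_merchant": purchase - merchant_fee,
        "issuer_revenue": interchange,
        "acquirer_revenue": merchant_fee - interchange,
    }

for party, amount in settle(100.0).items():
    print(party, round(amount, 2))
```

The merchant receives the purchase amount less the service fee, and the fee income is split between issuer and acquirer.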

Islamic Bank Of Britain.

Islamic Bank of Britain is the only Shariah-compliant financial institution in Britain authorised by the Financial Services Authority (FSA). It started its operations in 2004, with its main office in Birmingham, an office in London and branches in three other British cities. Like other Islamic banks, the bank I have chosen aims to provide an alternative to conventional banking by avoiding interest (Riba) and by ensuring that money is spent only on ethical enterprises. At present the bank is rapidly expanding its products and services in response to the current finance and banking crisis, the Islamic community in Britain, the unavailability of a non-interest credit card, the large growth of Islamic banking, the demand for an alternative to modern banking products and services, the integration of Islamic financial concepts into modern banks such as Lloyds TSB and HSBC, and the reluctance of customers to accept the high interest rates of conventional credit cards.

Literature Review.

In today’s society the credit card is used as a basic means of payment. Credit cards have various uses, for example as a means of payment, a credit facility, an easy way of obtaining cash advances, and a status symbol. Compared with the conventional credit card, the Islamic credit card offers better money value as a means of payment: lower penalty charges, free yearly bonuses, a fancier look and fee waivers (Ma’sum Billah 2001). Islam permits the use of the credit card because it does not incur interest and at the same time does not violate any rule of Shariah (Ahamad and Huron 2002). Where the credit card user pays only the principal amount plus operating and overhead charges, the financial transaction is permitted, because it does not involve any element of interest, which is forbidden in Islam (el Azura 2006). The advantages of using a credit card over other means of payment are ease of use for purchases, cost effectiveness, security and worldwide acceptability (Mohammad 2003).


The research will discuss the methods used in the Middle East and Malaysia, the proposed financial model, payment methods and religious beliefs. The research may only arrive at a description, but that description could serve as important groundwork for banks to combine with further research in this field.


There is limited research on Islamic credit cards in the British region. Research conducted in Shah Alam (2007) found that people have limited awareness of Islamic credit cards. Mohd (2008) identified several factors influencing the usage of Islamic credit cards, and developed the following three hypotheses:

H1: Technical and functional service quality has a direct influence on Islamic credit card users.

H2: Religion has a positive influence on the usage of Islamic credit cards.

H3: Culture directly affects the choice, usage and satisfaction of Islamic credit cards.

Research Methodology

The research methodology is based heavily on secondary data. My research will use the websites of banks in the Middle East and Southeast Asia together with various sources such as articles, research papers, journals and books. Other sources include the online Institute of Islamic Banking and Insurance, London, the Middle East Banker magazine, library resources on Islamic finance, journals and other online resources. The research uses a mixed-methods approach, combining qualitative and quantitative methods. In addition, further studies may build on the current research findings.


The core goal of the research is to develop a financial model for an Islamic credit card for Britain’s Islamic bank. The bank could apply this research when it adds a credit card to its product list, although that may be a long wait. The report will also highlight key questions such as the general attitudes, beliefs and perceptions of the majority about non-interest Islamic products and service options and their usage. The report will also show the integration of Islamic financial concepts as an alternative to the conventional banking system in modern-day banking, and the resulting trend towards Islamic banking.


Despite all efforts, the research has limitations. First, it is limited to the Islamic credit card and to Britain’s Islamic bank. Second, the sample will be small and will not provide an overall picture of the population. It will concentrate mainly on individuals found to use Islamic financial institutions rather than on a broader picture among merchants. It may also neglect less religious respondents, who form a great proportion of the users of Islamic financial services.


In conclusion, the Islamic Bank of Britain, like other Islamic banks, avoids interest (Riba), provides an alternative to conventional banking and ensures that money is spent only on ethical enterprises. Given the current finance and banking crisis, the Islamic community in Britain, the absence of a non-interest credit card, the large-scale voluntary move towards modern banking, the integration of Islamic financial concepts into modern banks such as HSBC, and the high interest rates on conventional credit cards, there is huge hidden growth potential in the non-interest credit card.

Free Essays

Explore how the volume of maritime transport remains high due to the dynamic growth in developing countries with emerging market economies.


Sea transport is the backbone of international trade and globalization, carrying more than 80% of the volume of world merchandise trade. In 2007 the volume of international maritime transport increased by 4.8% compared with 2006 and reached 8.02 billion tons. For comparison, over the past three decades the average annual growth rate of world sea transport was an estimated 3.1%.

Strong demand for maritime transportation services was spurred by growth in the global economy and international commodity trade. In 2007, the gross domestic product (GDP) of all countries in the world increased by 3.8%, while world merchandise exports increased by 5.5% compared with the previous year.

Thus, despite rising energy prices and their potential impact on transport costs and trade, and despite increasing global risks and uncertainties such as those associated with a significant rise in prices for petroleum commodities, the global credit crisis and the depreciation of the United States dollar, growth in the global economy and trade remained sustainable.

In 2007, more than 8 billion tons of cargo was carried, a historical record. More than 80% of all raw materials and goods in the world were transported by sea.


By 2008 the world fleet had grown by 7.2%, reaching a total deadweight of 1.12 billion tons. However, the favourable first half of 2008 was followed by a worldwide crisis and a sharp drop in shipments.

In the second half of 2008, strong demand for bulk carriers from grain traders led to an increase in freight rates by September 2009. The situation was fuelled by a slight shortage of ships. The dynamics of the Baltic Freight Index (BFI) for “Panamax” vessels in September 2009 were ambiguous: in the first half of the month the index grew, and then it gradually began to lose ground.

The last month of 2010 did not justify the hopes of ship-owners for a recovery of the freight market for bulk tonnage. The Baltic Freight Index for almost all types of bulk carriers (except “Handysize”) showed a decrease, above all for large vessels. Finally, in anticipation of the Christmas holidays, the BFI for “Panamax” bulk carriers fell by 523 points, or 22%.

Analysing the years 2006, 2007, 2008, 2009 and 2010, freight brokers draw attention to the return of the ratio of time-charter rates for “Capesize” and “Panamax” dry bulk carriers to its crisis level of December 2008. Currently it is 50%, which means that a deadweight ton of “Panamax” tonnage costs charterers about 4 times more than a deadweight ton of “Capesize” tonnage (twice the rate and approximately half the deadweight). This indicates strong demand for “Panamax” bulk carriers, which in turn is caused by their use for carrying a wider range of commodities (coal, grain, sugar, fertilizer, bauxite, etc.) and by the smaller surplus of this vessel type compared with “Capesize” bulk carriers. (“Executive summary”, Review of Maritime Transport, Annual 2008 Issue)
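The “4 times more per deadweight ton” arithmetic can be checked with round numbers; the daily time-charter rates below are assumptions for illustration, and only the 50% rate ratio and the roughly 2:1 deadweight ratio come from the analysis above:

```python
# Round-number check of the "4 times more per deadweight ton" claim.
# The daily rates are assumed; only the 50% rate ratio and the roughly
# 2:1 deadweight ratio come from the text.

panamax_rate, panamax_dwt = 20_000, 75_000      # USD/day, tons (assumed)
capesize_rate, capesize_dwt = 10_000, 150_000   # half the rate, twice the dwt

ratio = (panamax_rate / panamax_dwt) / (capesize_rate / capesize_dwt)
print(round(ratio, 1))  # → 4.0
```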

“Panamax” dry bulk market – a perfect market

Marketing research means the collection and analysis of the range of data needed to solve the marketing problems facing a firm, and the preparation of a report on the findings. Many instances are known of large companies being destroyed because due attention was not paid to marketing research. (Paul Krugman, Robin Wells)

The “Panamax” dry bulk market is a perfect market, firstly, because many independent firms are involved in it, each taking its own decisions about what to produce and in what amount, whereas in a monopoly the exclusive right to produce is owned by one person, a group of persons or the state. Monopolistic firms also create entry barriers for new firms by limiting access to sources of raw materials and energy, using high technology and incurring very large expenses, while the “Panamax” (perfect) market is not restricted: free access is possible for anyone who wishes to become an entrepreneur. An oligopoly is the existence of several companies, usually large, on whose shoulders falls the brunt of industry sales; the “Panamax” dry bulk market is still perfect in this respect, because in a perfect market the sellers are independent of each other, in contrast to an oligopolistic market. In a perfect market none of the concrete forms of artificial monopoly, such as a cartel, syndicate, trust or concern, will be found. Oligopolistic competition divides into two parts. The first is price competition: the artificial churning of commodity prices, with wide use of price discrimination under certain conditions (a monopoly seller, a strong marketing policy of the firm, and the inability of the original purchaser to resell the goods); this kind of competition is especially common in services. This tendency cannot arise in a perfect market, since the buyers there are well informed about prices. The second part is non-price competition, carried out by improving product quality and conditions of sale; here too the “Panamax” dry bulk market remains perfect, because goods in a perfect market are homogeneous, so there is no real way of improving the quality of bulk cargo or the selling conditions. Since none of the previously mentioned market types describes the “Panamax” dry bulk market, it is clearly identified as a perfect market.
(Krugman and Wells, Microeconomics)

Bulk carrier “Panamax”

The dynamics of the Baltic Freight Index (BFI) in December 2009 were stable for all sizes of dry bulk tonnage. Compared with the beginning of the month, the BFI for “Panamax” decreased by 1.8%. However, if we turn back to the beginning of January 2009, the dramatic changes in freight market conditions that occurred over the past 12 months become evident. The BFI for “Panamax” grew 6.6 times over the period. Compared with the outright failure of late 2008 – early 2009, the dry bulk freight market was effectively “reborn from the ashes” and by the end of 2009 had stabilized at a relatively high level.

Since, as mentioned before, the last month of 2010 did not justify the hopes of ship-owners for a recovery of the freight market for bulk tonnage, it can be concluded that late 2009 was much more favourable for ship-owners than late 2010.

Dynamics of the Baltic freight index in December 2010:

Type of tonnage             01.12.10   15.12.10   23.12.10
Bulker “Capesize”               2869       2687       2379
Bulker “Panamax”                2380       2234       1857
Bulker “Superhandymax”          1550       1659       1517
Bulker “Handysize”               802        835        834

Source: Clarkson

Dynamics of the Baltic freight index in December 2009:

Type of tonnage             01.12.09   14.12.09   30.12.09
Bulker “Panamax”                3635       3574       3567
Bulker “Superhandymax”          2376       2420       2224
Bulker “Handysize”              1143       1239       1159

Source: Clarkson

Dynamics of the Baltic Freight Index in April 2008:

Type of tonnage             01.04.08
Bulker “Panamax”                7767
Bulker “Superhandymax”          4792
Bulker “Handysize”              2392

Source: Clarkson

One does not need to be a doctor of mathematical sciences to analyse the above tables: it is clear from them that the Baltic Exchange indices decrease every year. Comparing the data from the first half of 2008 with late 2010, a fall of almost 3 times can easily be identified, while 2009, although it recovered, did not return to the level of 2008.
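The two falls quoted in this section can be recomputed directly from the “Panamax” values given in the tables above:

```python
# "Panamax" BFI values taken from the tables above.
bfi_panamax = {
    "01.04.08": 7767,
    "01.12.09": 3635,
    "01.12.10": 2380,
    "23.12.10": 1857,
}

# Fall from the first half of 2008 to December 2010: almost 3 times.
fall_2008_2010 = bfi_panamax["01.04.08"] / bfi_panamax["01.12.10"]
print(round(fall_2008_2010, 2))  # → 3.26

# December 2010 drop: 523 points, about 22%.
drop = bfi_panamax["01.12.10"] - bfi_panamax["23.12.10"]
print(drop, round(100 * drop / bfi_panamax["01.12.10"]))  # → 523 22
```

Both results match the figures stated in the text.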

Newbuilding Market

According to Clarkson Research Services, in 2009 compared with 2008 the number of ordered bulk carriers decreased 6.5 times, container ships 30 times, universal dry cargo ships 15 times and Ro-Ro vessels 44 times, while reefer vessels were not ordered at all. And this despite the fact that contract prices for bulk carriers, according to the same source, decreased by 23-36% compared with 2008, and for container ships by 24-40%. The basis for this decline, analysts note, was a 30-40% fall in the prices of ancillary equipment, as well as the reduced cost of shipbuilding steel.

Number of dry cargo ships ordered by the type of tonnage in 2006-2009.:

Type of vessel     Year

* – till middle of December
Source: CRS

According to Chinese sources, 65% of China’s shipbuilders received no new orders in 2009; worst off were the players who had newly emerged in the wake of the shipbuilding boom of 2007-2008. It is estimated that in 2009 China built vessels with a total deadweight of about 40 million tons, 41% more than in 2008. At the same time, Chinese shipyards received orders in 2009 for vessels with a total deadweight of 22.94 million tons, down 61% from a year earlier. Fortunately, Chinese shipyards were greatly helped by the direct and indirect measures of that country’s government in support of domestic shipbuilding.

During the first 11 months of 2010, orders for dry bulk tonnage amounted to 68.4 million dwt, the highest figure since 1996 apart from the shipbuilding boom of 2007-2008. Deliveries of new tonnage over the same 11 months of 2010 amounted to almost as much, 68.9 million dwt. As a result, with 22 million dwt (8%) of shipbuilding contracts worldwide cancelled, the portfolio of orders for bulk carriers, according to Clarkson’s, totalled 279 million dwt, or 52.9% of the existing fleet.

“Second hand” vessel market

There is high activity among Chinese buyers of used dry bulk carriers, who dominate the “Panamax” segment of this market. Brokers noted the purchase for China of a dry bulk carrier with the symbolic name INDIA, deadweight 76,620 tons, built in 2005 by the Japanese shipyard Sasebo. The transaction price was $36.3 million.
In general, during December the prices of modern “Panamax” bulk carriers less than 5 years of age strengthened further, while those of modern “Superhandymax” carriers remained at the same level.

The price level for modern “second hand” bulk carriers in December 2009 (prices in thousand USD):

Type of tonnage    Age        DWT      Start of December 2009   End of December 2009   Change (+/–)
“Panamax”          max 5 y.   74,000   32,935                   33,424                 +389
“Superhandymax”    max 5 y.   52,000   27,281                   27,308                 +27

Source: Baltic Sale & Purchase Assessment

Considering the December price dynamics for used bulk carriers, we note that the decline in freight market conditions affected the price level and led to a downward trend. Most analysts believed that the surplus of dry bulk tonnage that had formed would lead to a further fall in prices. But despite the decrease in prices for used bulk carriers, according to analysts at Arctic Securities, in 2010 they still remained 20% higher than in early 2009.
The price level for modern “second hand” bulk carriers in December 2010 (prices in thousand USD):

Type of tonnage    Age        DWT      Start of December 2010   End of December 2010   Change (+/–)
“Panamax”          max 5 y.   74,000   38,494                   37,830                 –664
“Superhandymax”    max 5 y.   52,000   32,315                   31,267                 –1,048

Source: Baltic Sale & Purchase Assessment


Forecast of the freight market

According to analysts at First Securities, the total deadweight of bulk carriers going to scrap in 2011 could reach 12 million tons, against deliveries of new tonnage of 20 million tons. In other words, the pressure of excess tonnage on the freight market persists.
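The fleet arithmetic implied by this forecast can be sketched in a few lines (the figures are the First Securities forecasts quoted above, not actuals):

```python
# Net fleet growth implied by the forecast quoted above.
scrapped_mdwt = 12.0    # bulk carrier tonnage expected to go to scrap in 2011
delivered_mdwt = 20.0   # expected deliveries of new tonnage in 2011

net_growth_mdwt = delivered_mdwt - scrapped_mdwt
print(net_growth_mdwt)  # a net 8.0 million dwt addition keeps pressure on rates
```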

According to experts at the international rating agency Moody’s Investors Service, the excess of supply over demand in dry bulk traffic will negatively affect the marine industry in the short term, although the overall long-term prospects for the industry over the next decade are more favourable, owing to expected further growth in demand for raw materials from developing countries. Moody’s therefore recommends that shipping companies involved in the transportation of bulk cargo hold to a more prudent investment policy on orders for new fleet. Now, when ever more new bulk carriers are being launched, freight rates remain highly volatile because of large swings in supply and demand.






A Detailed Study of the Role of Options, Futures and Forward Contracts in Market Risk Management (MRM)


Market risk management includes managing different types of risk, such as commodity price risk, interest rate risk and currency risk. The research aims at a thorough study of market risk management through the identification of the factors behind these risks and a critical study of Value at Risk (VaR) and the other models used to measure them. The risk control methods examined are confined to the use of futures, options and forward contracts. It also aims at critically evaluating the roles played by these instruments and their effective management.


The primary objective of the research is to study the role played by financial derivatives, namely forward contracts, futures and options, in managing market risks. It also aims at a clear understanding of the methods of risk measurement, analysis and management, and thereby at understanding the intricacies of derivative markets.


The Basel Committee, formed in 1974, laid the regulatory framework for financial risk management (McNeil, Frey and Embrechts, 2005). Basel II (2001) defines financial risk management as consisting of four steps: “identification of risks into market, credit, operational and other risks, assessment of risks using data and risk model, monitoring and reporting of risk assessments on a timely basis and controlling these identified risks by senior management” (Alexander, 2005). It determines the probability of a negative event taking place and its effects on the entity. Once identified, risks can be treated in the following manners:

Eliminated altogether by simple business practices. These are the risks that are detrimental to the business entity.
Transferred to other participants.
Actively managed at firm level.

Market risk consists of commodity risk, interest rate risk and currency risk. Commodity price risk is the potential change in the price of a commodity. Rising or falling commodity prices affect the producers, traders and end-users of the various commodities. Moreover, if commodities are traded in a foreign currency, there arises the additional risk of exchange rate movements. These risks are normally hedged by forward or futures contracts at fixed rates. This is especially important for commodities such as oil, natural gas, gold and electricity, whose prices are highly volatile. However, hedging does not always ensure profits (Berk and Demarzo, 2010).

Interest rate risk relates to changes in the interest rates affecting bonds, stocks or loans. A rising interest rate effectively reduces the price of a bond. Increased interest rates also raise the borrowing costs of the firm and thereby reduce its profitability. This risk is hedged by swaps or by investing in short-term securities.

Currency risks arise from the exceedingly volatile exchange rates between the currencies of different countries. For example, Airbus, an aircraft manufacturer based in France, requires oil for its production. With oil traded in US dollars and the company trading in euros, it faces a foreign exchange risk. It would therefore be beneficial for Airbus to enter a forward contract with its oil suppliers. Options are another way of hedging against currency risks: they entitle the holder to exchange currency at a fixed, pre-determined rate. If the market exchange rate is more favourable than the option’s strike rate, the company will simply not exercise the option; if the rate moves against the company, it benefits by exercising it (Berk and Demarzo, 2010).
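The difference between the two hedges can be sketched as follows. The company name comes from the text, but every number here (invoice size, rates, premium) is an invented illustration:

```python
# Hypothetical sketch: locking in a USD cost with a forward, versus hedging
# it with a call option on USD. All figures are illustrative assumptions.

def forward_cost_eur(usd_amount: float, forward_rate: float) -> float:
    """EUR cost when a USD payable is locked in with a forward at forward_rate
    (EUR per USD): the outcome is fixed whatever the spot rate does."""
    return round(usd_amount * forward_rate, 2)

def option_cost_eur(usd_amount: float, strike: float, premium: float,
                    spot_at_maturity: float) -> float:
    """EUR cost when hedged with a call option on USD: exercise only if the
    spot rate has risen above the strike; the premium is paid either way."""
    rate_paid = min(strike, spot_at_maturity)
    return round(usd_amount * rate_paid + premium, 2)

# A 1m USD oil invoice hedged forward at 0.75 EUR/USD:
print(forward_cost_eur(1_000_000, 0.75))               # 750000.0, spot-independent

# The same invoice hedged with an option (strike 0.75, premium 10,000 EUR):
print(option_cost_eur(1_000_000, 0.75, 10_000, 0.80))  # 760000.0 -- cost is capped
print(option_cost_eur(1_000_000, 0.75, 10_000, 0.70))  # 710000.0 -- gains from the fall
```

The option costs more than the forward when the rate rises (the premium is lost), but unlike the forward it lets the company benefit when the rate falls.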

The above risks basically depend on the time value of assets. Moreover, with the increasingly multinational functioning of business entities and the highly volatile nature of markets, risk management has become a mandatory part of running a business. It is therefore important to analyse the various methods of assessing and measuring risks and the preventive measures implemented against them. As noted, the hedging techniques stated above do not always ensure profits. The research will therefore include a detailed study of the effectiveness of the methods implemented, as well as the hazards of their failure.

Market risks are commonly measured by the Value at Risk (VaR) model, which aggregates a portfolio’s market risks into a single number. However, McNeil, Frey and Embrechts (2005) have criticised the model, noting that it does not take into account the costs of liquidation. It relies on historical data and a series of assumptions, so its ability to measure future risks is highly disputable. The research will therefore include a study of VaR and other related models used to measure market risks.

Literature Review:

Forward contracts, Futures and Options are called the Financial Derivatives and are used largely to reduce market risks.

Walsh (1995) explains that if two securities have the same payoffs in the future, they must have the same price today. Thus the value of a derivative moves in the same way as that of the underlying asset. This pricing-by-replication argument rests on the absence of arbitrage.

Hedging means that the holder of an asset takes two positions in opposite directions: one in the derivative and one in the underlying asset. If the value of the asset decreases, the value of the derivative position changes in the opposite direction, so the change is offset and risk is reduced. A long hedge is when an investor anticipates an increase in the market price and therefore buys futures contracts. A short hedge is when an investor who already holds the asset expects its value to fall and therefore sells futures beforehand (Dubofsky and Miller, 2003).

Forward Contracts – These involve buying or selling a specific asset at a specific price at a specified time. They are over-the-counter (OTC) derivatives. They are used for locking in a price and require no cash transfer at the outset, and therefore involve credit risk. They are typically used to hedge exchange rate risks (Claessens, 1993).

Futures – These are more standardised than forward contracts and are traded on organised exchanges. A standardised contract specifying the asset, price and delivery time is bought or sold through a broker. The delivery price depends on the market and is determined by the exchange. An initial margin is required and profit-and-loss calculations are performed daily, hence futures involve margin calls. The credit risk involved is minimal, but the contracts cannot be tailored to individual demands (Claessens, 1993). They exist typically for commodities, interest rates, currencies, etc. (Walsh, 1995).

Fig.2: Hedging through Futures. (Walsh, D. 1995).
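The daily mark-to-market and margin-call mechanics described above can be sketched as follows; the entry price, price path and margin levels are all invented for illustration:

```python
# Daily settlement of a long futures position: each day's price move is
# credited or debited to the margin account, and a margin call is triggered
# when the account falls below the maintenance level.
def daily_settlement(entry_price, daily_prices, contracts, initial_margin,
                     maintenance_margin):
    balance = initial_margin
    prev = entry_price
    calls = []                                     # days on which calls occur
    for day, price in enumerate(daily_prices, start=1):
        balance += (price - prev) * contracts      # daily gain or loss
        prev = price
        if balance < maintenance_margin:           # broker issues a margin call
            calls.append(day)
            balance = initial_margin               # top back up to initial margin
    return balance, calls

balance, calls = daily_settlement(
    entry_price=100.0,
    daily_prices=[99.0, 97.0, 98.5],
    contracts=10,
    initial_margin=50.0,
    maintenance_margin=40.0,
)
print(balance, calls)   # 65.0 [2] -- one call on day 2, after the drop to 97
```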

Options – The holder can buy the asset from, or sell it to, the writer at a strike price at a future maturity date; however, the holder of the option is under no obligation to do so. Buying the option involves a premium which is paid up front. An option that enables the holder to buy an asset is called a call option, while a put option enables the holder to sell the asset (Claessens, 1993). Options can be bought over the counter (OTC) at a bank or can be exchange-traded.

Walsh (1995) further explains that options have a non-linear relation with their payoff: the payoff increases with the price of the asset when the option is in-the-money, and is constant, equal to the lost option premium, when it is out-of-the-money. Futures and forward contracts, by contrast, have a linear relation with the payoff in both profit and loss. Options might therefore be preferred over futures and forwards for hedging. The research will cover the detailed characteristics, similarities and differences of futures, forward contracts and options, along with the concept of delta hedging, in which a perfect hedge is created by the use of options.
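Walsh’s linear-versus-kinked distinction can be put directly in code; the prices, strike and premium are illustrative:

```python
# Long futures payoff is linear in the asset price; a bought call's payoff
# is kinked: unlimited upside, but downside floored at the premium paid.
def futures_payoff(price, delivery_price):
    return price - delivery_price                  # linear in both directions

def long_call_payoff(price, strike, premium):
    return max(price - strike, 0.0) - premium      # non-linear: floor at -premium

for s in (80, 100, 120):
    print(s, futures_payoff(s, 100), long_call_payoff(s, 100, 5))
# 80  -20  -5.0   <- futures loses fully; call loses only the premium
# 100   0  -5.0
# 120  20  15.0   <- both gain on the upside, call net of premium
```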

Data and Methodology:

The research will be qualitative in nature, based on data available through online journals and books. The popularity of derivatives and their exponential growth have favoured the availability of many articles on this topic, and these will form the basis of the research. It might also include interviews with professionals who have extensive research experience or expertise in this area.


Alexander, C. (2005). The Present and Future of Financial Risk Management, Journal of Financial Econometrics, 3 (1), pp. 3-25. JSTOR (Online). Available at (accessed: 8 March, 2011).

Berk, J. and Demarzo, P. (2010) Corporate Finance. 2nd edition, global edition. Pearson.

Claessens, S. (1993) World Bank Technical Paper No. 235. Washington, DC: The World Bank.

Dubofsky, D and Miller, T. Jr. (2003) Derivatives: Valuation and Risk Management. Oxford: Oxford University Press.

McNeil, A.J., Frey, R., Embrechts, P. (2005) Quantitative Risk Management. Princeton and Oxford: Princeton University Press.

Walsh, David. (1995). Risk management using derivative securities. Managerial Finance, 21(1), pp. 43. ABI/INFORM Global (Online). Available at (accessed: March 27, 2011).


Hinkelmann, C. & Swidler, S. (2004). Using futures contracts to hedge macroeconomic risk in the public sector, Derivatives Use, Trading & Regulation, 10(1), pp. 54-69. ABI/INFORM Global (Online). Available at (accessed: 21 March, 2011).


In organisations, pay rates are influenced by several factors, such as the intrinsic value of the job, internal and external relativities, and market inflation.


Every job should be valued in order to be given a rate of pay. The intrinsic value of a job is based on the amount of responsibility, the level of competences and the degree of skill required to perform it (Armstrong and Murlis, 2007). Nevertheless, the intrinsic value also depends on other characteristics, such as the number of resources managed, the amount of power job holders possess, the degree of flexibility in making decisions and the extent to which they receive guidance on how to perform their duties (Armstrong and Murlis, 2007). It has been argued that it is impossible to find a definite value for a job, since the value of anything is always relative to something else (Armstrong and Murlis, 2007). So jobs are compared either to other jobs in the organisation (internal relativities) or to similar jobs outside the organisation (external relativities) (Milkovich and Newman, 2002).

Internal relativities are based on determining the value of a job compared to other jobs in the organisation (Milkovich and Newman, 2002). The internal differentials between jobs are based on information about input requirements and the use of different levels of knowledge and skill; differentials can also be connected to the added value jobs create (Milkovich and Newman, 2002). External relativities, on the other hand, are based on determining the value of the job according to the market rate (Milkovich and Newman, 2002). However, market rates are not very accurate, since market surveys reveal salaries of organisations that are usually linked to their unique circumstances in terms of organisational structure, the number of people employed and their own pay policies (IRS Employment Review, 2006). Nevertheless, organisations that compete to recruit high-calibre employees need to adjust their salaries to market rates (IRS Employment Review, 2006). In some cases, specific individuals whose talents are unique in the market place have their own individual market value, based on what the market is willing to pay for their services; head hunters are usually interested in these people (Armstrong and Murlis, 2007).

Organisations strive to achieve a balance between the internal relativities ensuring equity and the external relativities ensuring a competitive position in the market. However, organisations seeking this balance will always face a degree of tension when creating it (Armstrong and Murlis, 2007).

Inflation and market movement will certainly influence the pay rates of any organisation (People Management, 2007). Other factors that can influence pay rates include the budget, the pay strategy and pressure from trade unions (Perkins and White, 2007).

Job evaluation is a systematic approach that establishes internal relativities among jobs by assessing their intrinsic values, to be later placed in grade and pay structures (CIPD, 2002d). Job evaluation is important to ensure consistency, equity and transparency (Milkovich and Newman, 2002). Nevertheless, many researchers argue that job evaluation is very time consuming, inflexible and has no relation to salaries since salaries are set in reference to market rates (Armstrong and Brown, 2009).

Job evaluation can be either analytical or non-analytical. The former makes decisions based on an analysis of the extent to which a set of predetermined factors is present in a job (Burn, 2002). The factors should be selected carefully and agreed by management, and the same factors are used in the evaluation of all other jobs (Armstrong and Cummins, 2011).

In point-factor rating, jobs are broken down into several factors and levels are allocated to these factors reflecting the extent to which each factor is present in the job. The levels should be carefully described to ensure fair judgements are made. All points are then summed, giving a total score for the job (Burn, 2002). Jobs are ranked sequentially by total score and the rank order is later divided into grades. Pay ranges are attached to these grades, taking into account the external relativities (Burn, 2002). This evaluation can be very time consuming; it is therefore possible to evaluate several general roles fully and then match the rest to the resulting grades. The other analytical approach is factor comparison, which compares jobs factor by factor using a scale of values.
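The point-factor mechanics can be sketched in a few lines. The factor names, weights, levels and grade boundaries below are invented purely for illustration; a real scheme would define and agree them with management as described above:

```python
# Point-factor job evaluation sketch: score = sum of (factor weight x level),
# then the total score is mapped to a grade band.
FACTOR_WEIGHTS = {"knowledge": 40, "responsibility": 35, "working_conditions": 25}
GRADE_BOUNDARIES = [(0, "C"), (200, "B"), (320, "A")]   # minimum points per grade

def job_score(levels):
    """levels maps each factor to a level 1..5 present in the job."""
    return sum(FACTOR_WEIGHTS[f] * lvl for f, lvl in levels.items())

def grade_for(score):
    grade = GRADE_BOUNDARIES[0][1]
    for minimum, g in GRADE_BOUNDARIES:      # boundaries listed in ascending order
        if score >= minimum:
            grade = g
    return grade

score = job_score({"knowledge": 4, "responsibility": 3, "working_conditions": 2})
print(score, grade_for(score))   # 315 B
```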

In comparison, non-analytical job evaluation compares jobs as a whole, not factor by factor, and then places them in a grade or rank order. It is therefore a simple and easy process, yet a very subjective one, since it is not based on any fixed standards (Armstrong and Cummins, 2011). There are different non-analytical methods, as follows: job classification, where jobs are placed in a grade by matching the job description with the grade description without any numerical values (Burn, 2002); job ranking, which compares whole jobs to one another and arranges them according to their perceived size; and paired comparison ranking, which compares each job as a whole separately with every other job in the organisation (Burn, 2002).

A further group of methods falls under neither the analytical nor the non-analytical scheme; it encloses the following two tools: market pricing and job/role matching (internal benchmarking) (Burn, 2002).

Market pricing is the process of determining the rate of pay for a certain job according to supply and demand in the labour market (Corby, 2008). However, it neglects internal relativities, which may result in dissatisfaction among employees who put the same effort into different jobs but are paid unequally owing to different market rates (Corby, 2008). Nevertheless, internal relativities might still be achieved if all jobs in the organisation are market driven (Armstrong and Murlis, 2007). Market rates should be accurate and up to date, and market information sources should compare jobs that are similar in region, sector, organisation size, industry and job value (Milkovich and Newman, 2002). There is no absolutely correct market rate for a certain job, since it is not possible to track similar jobs everywhere (IRS Management Review, 2006); market pricing therefore provides only approximate values. An organisation must decide its market posture and stance, in other words the extent to which its pay levels should relate to market rates (Milkovich and Newman, 2002).

Spot rates are rates given to a certain job taking into consideration the market relativities and the market value of an individual (Fisher, 1995). They are usually applied when paying professionals, senior managers and manual workers. This type of pay does not consist of grades and pay ranges, so there is no scope for pay progression (CIPD, 2010); any increase produces a new spot rate for the person (Berger and Berger, 2000). Spot rates are often used by small organisations that have no formal graded structure, and by organisations that seek maximum flexibility in their pay (Armstrong and Murlis, 2007).

Individual job grades are similar to spot rates but with a defined pay range on either side. A scope for progression is thus available to reward employees without the need to upgrade their jobs (Corby, 2008).

4.1. A narrow graded structure (conventional graded structure) consists of 10 or more grades arranged sequentially; pay ranges are attached to each grade, with the maximum of a range typically 20%-50% above the minimum and differentials between grades of around 20% (Perkins and White, 2007). Grades are defined by points based on analytical job evaluation, by grade definitions, or by the types of jobs slotted into them (Fisher, 1995). Midpoints are set between the minimum and the maximum of each grade, based on market rates and the organisation’s market policy, which states how its pay levels relate to market rates; the midpoint represents the rate for a fully proficient person (Berger and Berger, 2000). This structure ensures equal pay for work of equal value, since all jobs are placed in the structure by reference to their job evaluation points (Milkovich and Newman, 2001). Nevertheless, it may result in an extended hierarchy that does not suit flat and flexible organisations, and because the grades and pay ranges are very narrow it does not provide enough scope for progression (Armstrong and Murlis, 2007). A common problem with this structure is grade drift, where jobs are upgraded only to reward people who have already reached the maximum of their grade and have no further scope for progression within it (Fisher, 1995).

This type of structure fits mostly in large and bureaucratic organisations with rigid structures, in which a great amount of control is required.
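The grade arithmetic described above can be sketched as follows. The 20% differential follows the text, the 30% range spread is one value chosen from the 20%-50% band mentioned, and the base salary is an invented figure:

```python
# Narrow-graded structure sketch: each grade's maximum sits a fixed spread
# above its minimum, the midpoint is halfway between them, and the next
# grade's minimum sits a fixed differential above the current one.
def build_grades(base_minimum, n_grades, differential=0.20, range_spread=0.30):
    grades = []
    minimum = base_minimum
    for g in range(1, n_grades + 1):
        maximum = minimum * (1 + range_spread)   # max is spread% above min
        midpoint = (minimum + maximum) / 2       # rate for a fully proficient person
        grades.append((g, round(minimum), round(midpoint), round(maximum)))
        minimum *= 1 + differential              # 20% differential to next grade
    return grades

for row in build_grades(20_000, 3):
    print(row)
# (1, 20000, 23000, 26000)
# (2, 24000, 27600, 31200)
# (3, 28800, 33120, 37440)
```

Note how neighbouring grades already overlap (grade 2 starts at 24,000, below grade 1’s 26,000 maximum), which is typical of graded structures.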

4.2. A broad-banded pay structure is one in which a number of different grades are put together into a small number of wider bands (Milkovich and Newman, 2001). There are usually 5 to 6 bands, each with a width typically between 50% and 80%; the number of bands is based on the levels of responsibility and accountability in the organisation (Martocchio, 2001). In this type of structure pay is based more on market relativities (Caruth and Handlogten, 2001). Jobs are slotted into bands either by reference to their market rate or by reference to both their job evaluation and the market rates (Armstrong and Murlis, 2007). Bands are described by the type of jobs slotted into them (e.g. senior management) and the similar roles they enclose (e.g. administration and support) (Armstrong and Murlis, 2007).

Broad-banded pay structures offer great flexibility in pay, which can result in increased labour costs and high employee expectations. Reference points aligned to market rates can therefore be added to guide pay progression rather than to control it, as happens in a narrow graded structure (Milkovich and Newman, 2002); ranges for progression exist as zones around these predefined reference points (CIPD, 2010). It is easy to update and modify this type of structure, because many jobs can be placed in one band (IRS Management Review, 2001c). Additionally, a broad-banded structure facilitates cross-team working, since it allows lateral movement between bands (Corby, 2008). On the other hand, this structure puts a great load on line managers and HR, especially in appraisals and pay progression, so equal pay problems may arise (Berger and Berger, 2000). It is difficult to explain to employees the basis upon which salaries and pay progression are set. It has also been argued that wider bands result in jobs of different value being placed together, since this structure depends more on market rates than on job evaluation (Martocchio, 2001).

This structure fits flexible and delayered organisations where a degree of flexibility in pay is required. Furthermore, it fits organisations that focus on lateral development rather than vertical career growth and promotions (Corby, 2008).

4.3. Career and job family structures group jobs into families. Career families group jobs by function or occupation, such as marketing, operations and engineering (People Management, 2000a). Jobs are grouped according to the activities they involve and the fundamental knowledge, skills and competencies needed to perform them, although the levels of responsibility, knowledge, skills and competences differ (Armstrong and Murlis, 2011, p.219). The number of levels within a family is often 6 to 8, and pay ranges are attached to levels; these numbers are not fixed, however, and may vary between career families. Career families provide clear and well defined career paths for progression through the levels (Zingheim and Schuster, 2000), so organisations must inform all employees of the criteria needed for progression. It is in effect a single graded structure, since the same grade might appear in several families; jobs at the same level have the same value and the same pay range across all families, which in turn ensures equity. It fits well in organisations that aim at career development and already have a comprehensive competency framework for all their jobs. Nevertheless, it has been argued that progression occurs in an occupational ‘silo’ (Armstrong and Murlis, 2011, p.221).

Job families are very similar to career families, but jobs are grouped according to their common purposes, processes and skills, such as business support (People Management, 2000a). This structure usually consists of 3-5 separate families, each containing levels that might vary between families (People Management, 2000a). Levels are defined by their responsibilities, knowledge and the levels of skill and competence required (Corby, 2008); in other words, role profiles are often set that can also be used for progression between levels (IRS Management Review, 2001b). This structure facilitates market differentials between families, since each family has its own market pay structure (IRS Management Review, 2001a). Moreover, jobs at the same level in different families may differ in size and pay range unless they are slotted into families on the basis of job evaluation; consequently, the structure might appear very divisive and unfair if it is not linked to job evaluation (Corby, 2008).

Job families can be designed either by the use of job evaluation or by the modelling process developed by the Hay Group. The job evaluation process relies on an analytical job evaluation that produces a rank order of jobs, which are then divided into families. Levels within families are then defined either by the knowledge, skills and competences needed or by job evaluation points (Armstrong and Murlis, 2007). After the groups and levels have been defined, organisations can match role profiles with the level definitions.

Modelling involves identifying job groups with similar work performed at different levels, followed by identification of the general levels. General job roles and profiles are produced and matched with the level definitions in order to place them at the appropriate levels; the matching process is then validated by job evaluation. Following that, pay ranges are determined on the basis of market rates (Armstrong and Cummins, 2011).

This structure fits organisations that consist of different groups of job families and a large number of professionals (People Management, 2000a). It also fits organisations that emphasise the planning of career paths and routes, and is highly recommended when a specific job family needs to be rewarded and paid distinctively and differentially (Personnel Today, 2009).

The mixed model, which combines broad-banded and job family pay structures, arranges job families within bands. This model provides a flexible scope for promotions and pay progression, since it encloses the benefits of both structures (IRS Management Review, 2001a).

4.4. A pay spine is a series of incremental pay points enclosing all jobs in an organisation. Progression is usually linked to length of service, and increments are fixed at 2.5%-3%. Public sector and voluntary organisations usually use this type of structure. It is also used by organisations that are unable to evaluate fairly the different performance levels of employees (CIPD, 2009).
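The spine arithmetic can be sketched as follows; the 2.5% increment is the lower bound quoted in the text, and the starting salary is an invented figure:

```python
# Pay spine sketch: a ladder of points, each a fixed percentage above the
# last; an employee normally moves up one point per year of service.
def pay_spine(start_salary, points, increment=0.025):
    spine = [float(start_salary)]
    for _ in range(points - 1):
        spine.append(round(spine[-1] * (1 + increment), 2))
    return spine

spine = pay_spine(18_000, 5)
print(spine)      # [18000.0, 18450.0, 18911.25, 19384.03, 19868.63]
print(spine[3])   # salary after three years' service: 19384.03
```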

In some recent structures, such as broad bands and job families, job evaluation is used only to validate the positioning of roles in bands and of jobs in levels. ‘Reports of the death of job evaluation have been overstated, but its lifestyle has changed’ (People Management, 2000a). The role of job evaluation is now minor and merely supportive, rather than being the solid basis of the pay structure, as it once was.

Every organisation has its own context and circumstances that make one specific structure more suitable than another. The type of structure depends on the organisational strategy, structure, culture and size (People Management, 2000a). It also depends on the type of industry, the type of people employed and the budget available. External factors can also influence the selection process, such as competitive pressure, demographic trends and employment legislation (Corby, 2008). For more statistics regarding pay structures, see appendix (4).

It is suggested that the current pay structure be replaced with a job family pay structure. Job families should be identified and the levels defined carefully, based either on job evaluation points or on the level of skills and competences required for each level; jobs are then placed at the different levels within each family. It is advisable to have wider pay ranges to allow scope for progression (CIPD, 2010). Differentials between pay ranges, in other words between the midpoint of one level and the midpoint of the neighbouring level, should provide enough scope to recognise the increases in job size between levels (CIPD, 2010). Large differential gaps will cause problems for people on the borderline, who may wait for long periods, while small gaps will result in many appeals and arguments about the fairness of the structure. It is also recommended to have an overlap between pay ranges, to allow flexibility and to recognise that a well experienced employee at the top of a level might be more valuable to the organisation than a new employee at the next higher level (CIPD, 2010). Finally, it is highly recommended to design the job family structure on the basis of the job evaluation already in place, to ensure equity among families, rather than by the modelling process, in which job evaluation exists only to validate the allocation of jobs within families.

Reward strategies should be aligned with other HR strategies so that they balance and support one another (Armstrong and Murlis, 2007). For instance, implementing job family structure will provide basis for planning career progression and career paths and setting the general framework for managing performance. Pay is one of the main factors that attracts candidates to join an organisation through offering them excellent packages. Pay policies are also linked to training and development, since many organisations reward their employees upon their possession of skills and the development of their competencies (Armstrong and Murlis, 2007).


Every organisation strives to achieve a balance between internal relativities, ensuring internal equity, and external relativities, ensuring competitive benefits. Many organisations try to replicate best practice in reward structures, neglecting the fact that each organisation operates in a distinct context and a unique culture. Organisations must look for the best fit rather than the best practice, as there is no one model structure that fits all organisations (Armstrong and Murlis, 2007). Furthermore, it is impossible to achieve 100% pay satisfaction in any organisation; as O’Neil, cited in Armstrong and Murlis (2007, p.9), stated: “it is not possible to create a set of rewards that is generally acceptable and attractive to all employees since there is no right single set of solution that can address all the business issues and concerns”.


Islamic credit in the British financial market, with particular reference to the Islamic Bank of Britain.


I am aiming in my dissertation to build a model of Islamic credit for the British financial market, and especially for the Islamic Bank of Britain. There is no Islamic credit card present in Britain at the moment, and the only Islamic financial institution is the Islamic Bank of Britain. My study is based on the Islamic credit cards prevailing in the financial markets of the Middle East and Malaysia. Most banks in Asia have been offering such cards based on the deferred payment (bay al-inah) concept, which was later found to be controversial, a mockery, and contrary to Islamic law (Shariah). As far as Shariah is concerned, most banks in Asia have their own Shariah compliance tribunals for decision-making, but the problem was that, at the same time, different Shariah boards held adverse opinions on the same subject matter. According to bankers from the Middle East, the solutions provided by these Asian banks were not compatible with the Holy Quran.

Banks in the Gulf countries, meanwhile, have new plans to address Muslim beliefs while offering credit cards. Their approach is very similar to a principal-and-guarantor, or guarantee-based, system. My financial model will address both the religious and the financial perspectives at the same time. This model may also prove impractical in many respects as far as adoption by the Islamic Bank of Britain is concerned, or much more complex procedures might be required.


I am aiming to construct a financial model for an Islamic credit card for the Islamic Bank of Britain, and especially for the British financial market.

The following objectives support this core aim:

To find out the effect of a credit card’s payment pattern under Islamic law (Shariah).
To appraise the British credit card market and whether Muslims may use these cards.
To assess the reliability of the Islamic credit card and its effect on the market.
To find out which Islamic principles underlie the credit card, and whether an appropriate model exists for applying these concepts to the functions of a credit card.

What Is a Credit Card?

A credit card may be defined as a payment card that offers the cardholder a specific amount of credit with which to make purchases and pay the amount spent.

The outstanding balance may be paid in full within a specified time, or interest will be charged on the remaining balance (Paxson and Wood, 1998).

History of Credit Card

The first type of credit card system was developed in the US. In the early 20th century, metal plates were used by Western Union and other financial institutions to record customer details and accounts. The Flatbush National Bank introduced its monthly charge account for bank accountholders in 1947. In 1951, the Franklin National Bank was the first financial institution to issue a credit card to the customers of other banks (Lindsey, 1980). Diners Club issued the first modern credit card in 1950, which was called the travel and entertainment (T&E) card. American Express followed Diners Club in 1958 and offered a credit card featuring a credit period between expenditure and payment, but without an instalment facility (Wonglimpiyarat, 2005). The first national credit card was developed in 1966 by Bank of America, allowing it to stretch across banks elsewhere in the country. As a result, rival banks later joined together to provide the card that became Master Charge (Frazer, 1985).

The Credit Card In Britain

Barclays issued the first credit card in Britain in 1966, under an agreement with Bank of America. Barclays imported all the facilities and structures of the Bank of America operation, revised for Britain (Wonglimpiyarat, 2005).

Credit Card System

Visa, Switch and MasterCard operate under a four-party scheme. All transactions through the scheme involve four main parties:

The cardholder, who makes payments through the use of the card;
The card issuer, who provides the card to the user and runs the card account;
The retailer (merchant), who sells goods or services in return for a promise of payment;
The merchant acquirer, who recruits retailers, obtains payment from the card issuer and passes it on to the merchant. Acquirers frequently also bank the merchants, but this relationship is not compulsory.
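The four-party flow above can be sketched as a toy settlement calculation. The fee rates and flow names below are illustrative assumptions, not details of any actual card scheme:

```python
# Hypothetical sketch of the four-party card scheme described above.
# The interchange and acquirer fee rates are illustrative assumptions.

def settle_transaction(amount, interchange_fee_rate=0.02):
    """Follow one purchase through the four parties.

    Returns the cash flows: the cardholder owes the issuer the full
    amount, the issuer pays the acquirer net of interchange, and the
    acquirer pays the merchant net of its own (assumed) fee.
    """
    issuer_to_acquirer = amount * (1 - interchange_fee_rate)
    acquirer_fee = amount * 0.01          # assumed acquirer margin
    acquirer_to_merchant = issuer_to_acquirer - acquirer_fee
    return {
        "cardholder_owes_issuer": amount,
        "issuer_pays_acquirer": issuer_to_acquirer,
        "acquirer_pays_merchant": acquirer_to_merchant,
    }

flows = settle_transaction(100.0)
print(flows["acquirer_pays_merchant"])  # 97.0
```

The point of the sketch is only that the merchant receives slightly less than the cardholder pays, with the difference split between issuer and acquirer.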

Islamic Bank Of Britain

The Islamic Bank of Britain is the only Shariah-compliant financial institution in Britain authorised by the Financial Services Authority (FSA). It started its operations in 2004 and is located in three British cities, with branches in London and its head office in Birmingham. Like other Islamic banks, it aims to provide an alternative to conventional banking by avoiding interest (riba) and ensuring that money is spent only on ethical enterprises. At present the bank is rapidly expanding its products and services in response to the current financial and banking crisis, the Muslim community in Britain, the lack of an interest-free credit card, the massive growth of Islamic banking, the integration of Islamic financial concepts into conventional banks such as Lloyds TSB and HSBC, and the persistently high interest rates on conventional credit cards.

Literature reviews

In today’s society the credit card is used as a basic means of payment. A credit card has various uses, such as payment, a credit facility, an easy way to obtain cash advances, and a status symbol. Compared with the conventional credit card, the Islamic credit card presents a money-value payment proposition with several advantages: lower penalty charges, free annual bonuses, a more attractive appearance, and fee waivers (Ma’sum Billah, 2001). Islam permits the use of a credit card because it does not incur interest and at the same time does not violate any rule of Shariah (Ahmad and Haron, 2002). A credit card service in which the user pays only the principal amount plus operating and overhead charges is a permitted financial arrangement, because it does not involve any kind of interest, which is forbidden in Islam (Azura, 2006). The advantages of the credit card over other means of payment are ease of use for purchases, cost effectiveness, security and worldwide acceptability (Mohammad, 2003).


The literature discusses the methods used in the Middle East and Malaysia, the proposed financial models, reward and payment methods, and the associated religious considerations. The existing research may be largely descriptive, but combining it with research in this domain could be an important exercise for the bank.


There is limited research on the Islamic credit card in Britain. Shah (2007) conducted research and found that people have limited awareness of Islamic credit cards. Mohd (2008) identified several factors that influence Islamic credit card usage and developed the following three hypotheses:

H1: The quality of technical and functional service has a direct influence on Islamic credit card users.

H2: Religion has a positive influence on the usage of the Islamic credit card.

H3: Culture directly affects the choice, usage and satisfaction of the Islamic credit card.

Research Methodology

The research methodology is based heavily on secondary data. My research will use the websites of banks in the Middle East and Southeast Asia, together with various sources such as articles, research papers, journals and books. Other sources include the Institute of Islamic Banking and Insurance in London, the Middle East Banker magazine, library resources on Islamic finance, journals and other online resources. A mixed-method approach will be used, combining qualitative and quantitative methods. In addition, further studies may become possible through comparison with the current research findings.


The core goal of the research is to develop a financial model for an Islamic credit card for the Islamic Bank of Britain. The bank may apply the research when it comes to add a credit card to its product list, although that may be a long wait. The report will also highlight key issues such as general attitudes, beliefs and perceptions about interest-free Islamic products and services, their availability, and their usage. The report will also show the integration of Islamic financial concepts as an alternative to the conventional system in modern-day banking, and the resulting trend towards Islamic banking.


Despite all efforts to limit them, the research has certain limitations. First, it is limited to the Islamic credit card and the Islamic Bank of Britain. Second, the sample will be small and will not provide an overall picture of the population. It will concentrate mainly on working individuals rather than the broader picture including merchants, because it is individuals who are found to use Islamic financial institutions. The study also may not capture less religious respondents, and so neglects a large proportion of Muslim users of Islamic financial services.


In conclusion, the Islamic Bank of Britain, like other Islamic banks, avoids interest (riba), ensures that money is spent only on ethical enterprises, and provides an alternative to conventional banking. Given the current financial and banking crisis, the Muslim community in Britain, the absence of a strong interest-free credit card offering, the large numbers who have voluntarily chosen modern banking, the integration of Islamic financial concepts into conventional banks such as HSBC, and the higher interest rates on conventional credit cards, there is huge hidden growth potential in the interest-free segment.

Free Essays

Testing the hypothesis that monetary policy responds to stock market movements


This paper highlights a problem in using an OLS model of the Taylor rule, including the lagged value of the fed funds rate and the stock index’s return, to address whether monetary policy should respond to asset price movements. However, to establish more exactly whether the relationship between stock prices and the monetary rule is direct or indirect, the more efficient Generalized Method of Moments (GMM) needs to be carried out, based on an instrument list of the output gap, the growth rate of real GDP and inflation. The combination of the two methods gives reasonable results in our empirical framework. In order to derive accurate empirical results, data from two countries (US 1990-2009, UK 1990-2009) are used in the paper.


In modern macroeconomics, there is considerable interest in the question of whether central banks should respond to or ignore asset price volatility. This question has been controversial since Federal Reserve Chairman Greenspan sparked it with his remark on “irrational exuberance” as early as 1996; he was followed by Bernanke and Gertler (1999), Cecchetti (2000), Adam Posen (2006) and many other authors. No agreement has been reached, but what all researchers agree on is the necessity of further research on the nature of asset price movements, how they affect the economy, and the effect of monetary policy on volatility. This paper attempts to estimate this multilateral relationship, building on ongoing research in this field.

There are several reasons why the correlation between the monetary-policy framework and asset bubbles is an important issue. From the perspective of central banks, dependable estimates of the reaction of asset prices to the policy instrument play an essential role in formulating effective policy decisions. In practice, the effect of short-term interest rates on asset prices accounts for much of the transmission mechanism of inflation targeting. Moreover, the financial press’s attention to the Federal Reserve’s performance indicates the significant influence of monetary policy on financial markets. Hence, a well-performing economy does need accurate estimates of the reaction of asset prices to monetary policy.

In order to approach these issues, we develop an OLS estimation using simple regression Taylor Rule (1993) and apply Generalized Method of Moments (GMM) to deal with the relationship between monetary policy and asset price movements.

The paper proceeds as follows. Section 2 presents the theoretical framework, in which empirical debates among recent authors, both supporting and rejecting the hypothesis, are discussed in detail. The data and testing method are described in section 3. Section 4, the empirical framework, presents the descriptive statistics and the testing procedures for the hypotheses. Results on the responsiveness of stock prices and interest rates to monetary policy, coming through the actual testing process from data collection onwards, are presented in sections 5 and 6.

Theoretical Framework

How a central bank should set short-term interest rates

Taylor (1993)[1] estimated policy reaction functions and found that monetary policy can often be well approximated empirically by a simple instrument rule for interest rate setting. The following is one variant of the Taylor rule:

it = r* + π* + α(πt – π*) + β(yt – yN)

where α, β > 0; r* is the average (long-run) real interest rate.

Taylor (1993) found that α = 1.5 and β = 0.5:

it = r* + π* + 1.5(πt – π*) + 0.5(yt – yN)

The Taylor rule is acknowledged by all to be a simple approximation to actual policy behavior. It represents a complex process with a small number of parameters.
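The rule above can be made concrete with a small numeric sketch using the coefficients α = 1.5 and β = 0.5 quoted from Taylor (1993). The input values (r*, π*, current inflation, output gap) are illustrative numbers, not estimates from this paper:

```python
# Minimal numeric sketch of the Taylor (1993) rule quoted above,
# with the document's coefficients alpha = 1.5 and beta = 0.5.

def taylor_rate(r_star, pi_star, inflation, output_gap,
                alpha=1.5, beta=0.5):
    """i_t = r* + pi* + alpha*(pi_t - pi*) + beta*(y_t - y_N)."""
    return r_star + pi_star + alpha * (inflation - pi_star) + beta * output_gap

# Illustrative inputs: r* = 2%, target inflation 2%,
# actual inflation 3%, output gap +1%.
i = taylor_rate(r_star=2.0, pi_star=2.0, inflation=3.0, output_gap=1.0)
print(i)  # 6.0
```

With inflation one point above target and a one-point positive gap, the rule raises the nominal rate by 1.5 + 0.5 = 2 points above the neutral 4%.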

Monetary policy should or should not responds to stock market movements

In recent decades, in line with the development of the stock market, which provides information about the current and future course of the monetary policy goal variables (inflation, employment, and output), it has been argued that the monetary authority should respond to equity prices. The more controversial role of equity prices in the monetary policy process concerns whether monetary policy should take a direct interest in the stock market.

It has attracted many researchers to discuss about the usefulness and test the validity of the hypothesis that monetary policy responds to stock market movements. Those researches have created many different results of both supporting and rejecting the hypothesis.

Supporting the hypothesis

Some of the supporters of the hypothesis that the reaction to misalignments in stock prices would have an impact on monetary policy rules are Cecchetti et al. (2000) and Jeff Fuhrer and Geoff Tootell (2004). Cecchetti used a linear rational expectations model, focusing on the output gap and inflation expectations (some of the primary elements in the policy rules) to support the hypothesis. The result showed that if the estimation errors are positively correlated, both inflation and misalignments enter with a positive coefficient and increase the impact on monetary policy. By contrast, Jeff Fuhrer and Geoff Tootell took a regression in the form of a Taylor policy rule, using correlations among the funds rate, the gap measure, a four-quarter moving average of inflation, real GDP, and the lags of stock prices. They then used GMM to estimate the regression. Because the data used are ex post, they conclude that policy actions do respond to stock price movements.

However, there is still some contention that this argument is not entirely valid; such objections come from several antagonists, most prominently Bernanke and Gertler with their predominant abstractions (1999 and 2001).

Rejecting the hypothesis

There have been a significant number of advocates against the responsiveness of central banks to asset price movements in their policy formulation. Two of the most important contributors in this field are Bernanke and Gertler, with their two seminal studies (1999 and 2001). On the basis of non-optimizing models of monetary policy, where the coefficients of interest rates on GDP and inflation are selected specifically, Bernanke and Gertler concluded that inflation-targeting policymakers should not take asset prices into account except insofar as they foreshadow changes in expected inflation. Adam Posen (2006), likewise, has argued that central banks need not target asset prices but would be well advised to monitor them when bubbles get very far out of line. By studying the correlation between periods of monetary easing and property bubbles, he found the hypothesis that quantitative easing would result in asset volatility unsupported.

These results are challenged by other researchers, including Nouriel Roubini (2006) and Andrew Filardo (2004), and especially Stephen Cecchetti (2000). Cecchetti arrived at different results, even though one portion of the testing method he employed was the same as that of Bernanke and Gertler (1999).

Data and testing method

U.S. Data selection

The data selected cover the period from Q1/1990 to Q4/2009 in the U.S. and include the following time series:

Inflation: CPI, all items, city average; index number, base year 2005; averages; United States; source: IMF, Washington.

Fed funds rate: discount rate (end of period); percent per annum; stocks; United States; source: IMF, Washington.

Real GDP: GDP volume, 2005 reference, chained; U.S. dollars, billions; averages; constant prices (seasonally adjusted); United States; source: IMF, Washington.

S&P 500 stock index: finance.yahoo.com, quarterly close price index.

The S&P 500 index is chosen because it is a weighted average of the prices of its component stocks. Very often dividends are excluded from the return calculation of the index. The composition of most indices changes occasionally, so a long time series will not be made up of returns from a homogeneous asset.

In this paper, we prefer the continuously compounded definition using logarithms, because multi-period returns are then sums of single-period returns. This makes the time series smoother and increases the precision of the estimation[2].
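The additivity property of continuously compounded returns can be checked in a few lines. The prices below are illustrative, not index values from the paper:

```python
import math

# Sketch of the continuously compounded (log) return definition:
# a multi-period log return equals the sum of the single-period
# log returns over the same window. Prices are illustrative.

prices = [100.0, 102.0, 99.0, 105.0]

# Single-period log returns r_t = log(P_t / P_{t-1}).
single = [math.log(prices[t] / prices[t - 1]) for t in range(1, len(prices))]

# Multi-period log return over the whole window.
multi = math.log(prices[-1] / prices[0])

print(abs(sum(single) - multi) < 1e-12)  # True: log returns are additive
```

Simple (arithmetic) returns do not have this property, which is why the log definition is convenient when aggregating quarterly observations.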

The regressions take the form of a Taylor rule, augmented to allow partial adjustment or “interest rate smoothing”, or more simply the inclusion of a lagged fed funds rate, as discussed in Clarida, Gali, and Gertler (1998). We begin with a regression in which the fed funds rate responds to contemporaneous observations on a “gap” measure (either the unemployment rate or a Hodrick-Prescott detrended real GDP gap[3]), a four-quarter moving average of an inflation measure, the growth rate of real GDP, and lags of a variety of stock price measures.

Unit root test for f – log(fed funds rate)

Possibly, there are some omitted variables causing autocorrelation. We need to apply unit root tests to the random variables.

Null Hypothesis: F has a unit root
Exogenous: Constant
Lag Length: 4 (Automatic – based on SIC, maxlag=11)



Augmented Dickey-Fuller test statistic: -3.380296

*MacKinnon (1996) one-sided p-values.

As can be seen from the graph, there is a downward trend in fed funds interest rates. The unit root test suggests that a lagged dependent variable should be included.
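The mechanics of the test above can be illustrated with a bare-bones (non-augmented) Dickey-Fuller sketch: regress the first difference on a constant and the lagged level, and inspect the t-statistic on the lagged level. The series below is synthetic; the paper's own test is the augmented (ADF) version with 4 lags:

```python
import math, random

# Bare-bones Dickey-Fuller sketch: regress dy_t on a constant and
# y_{t-1}; a strongly negative t-statistic on y_{t-1} rejects the
# unit root. The AR(1) series below is synthetic, for illustration.

def df_tstat(y):
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    n = len(dy)
    mx, my = sum(ylag) / n, sum(dy) / n
    sxx = sum((x - mx) ** 2 for x in ylag)
    sxy = sum((x - mx) * (d - my) for x, d in zip(ylag, dy))
    rho = sxy / sxx                              # slope on y_{t-1}
    resid = [d - (my - rho * mx) - rho * x for x, d in zip(ylag, dy)]
    s2 = sum(e * e for e in resid) / (n - 2)     # residual variance
    return rho / math.sqrt(s2 / sxx)             # t-statistic on y_{t-1}

random.seed(0)
stationary = [0.0]
for _ in range(300):                             # AR(1), coefficient 0.5
    stationary.append(0.5 * stationary[-1] + random.gauss(0, 1))

# A clearly stationary series gives a t-statistic far below the
# approximate 5% Dickey-Fuller critical value of about -2.86.
print(df_tstat(stationary) < -2.86)  # True
```

The MacKinnon p-values reported by EViews come from the non-standard Dickey-Fuller distribution, which is why the critical value differs from the usual ±1.96.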

Inflation – 100*(log(CPI)t-log(CPI)t-4)

Delta_Y – 100*(log(real_gdp)t-log(real_gdp)t-1)

GAP – a Hodrick-Prescott detrended real GDP gap
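The Hodrick-Prescott detrended gap used for GAP above can be sketched compactly: the trend solves (I + λK′K)τ = y, where K is the second-difference operator and λ = 1600 is the conventional quarterly smoothing parameter. The series below is synthetic, not the paper's real GDP data:

```python
import numpy as np

# Compact Hodrick-Prescott filter sketch. The HP trend tau minimizes
# sum (y - tau)^2 + lam * sum (second differences of tau)^2, whose
# first-order condition is (I + lam * K'K) tau = y.

def hp_gap(y, lam=1600.0):
    y = np.asarray(y, dtype=float)
    n = len(y)
    K = np.zeros((n - 2, n))                 # second-difference operator
    for i in range(n - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * K.T @ K, y)
    return y - trend                         # the detrended "gap"

# On an exactly linear series the second differences vanish, so the
# trend equals the series and the gap is numerically zero.
gap = hp_gap(np.linspace(100, 120, 40))
print(np.max(np.abs(gap)) < 1e-6)  # True
```

For real quarterly GDP one would apply this to log(real GDP) and multiply by 100 to express the gap in percent.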

U.K. Data selection

The following covers the period from Q1/1990 to Q4/2009 in the U.K. and includes the following time series:

Inflation: CPI, all items, city average; index number, base year 2005; averages; United States; source: IMF, Washington.

Bank of England interest rate: discount rate (end of period); percent per annum; stocks; United States; source: IMF, Washington.

Real GDP: GDP volume, 2005 reference, chained; U.S. dollars, billions; averages; constant prices (seasonally adjusted); United States; source: IMF, Washington.

FTSE 100 stock index: finance.yahoo.com, quarterly close price index.

The FTSE 100 index is chosen because it is a weighted average of the prices of its component stocks. Very often dividends are excluded from the return calculation of the index. The composition of most indices changes occasionally, so a long time series will not be made up of returns from a homogeneous asset.

In this paper, we prefer the continuously compounded definition using logarithms, because multi-period returns are then sums of single-period returns. This makes the time series smoother and increases the precision of the estimation[4].

The regressions take the form of a Taylor rule, augmented to allow partial adjustment or “interest rate smoothing”, or more simply the inclusion of a lagged policy rate, as discussed in Clarida, Gali, and Gertler (1998). We begin with a regression in which the policy rate responds to contemporaneous observations on a “gap” measure (either the unemployment rate or a Hodrick-Prescott detrended real GDP gap[5]), a four-quarter moving average of an inflation measure, the growth rate of real GDP, and lags of a variety of stock price measures.

Unit root test for f – log(fed funds rate)

Possibly, there are some omitted variables causing autocorrelation. We need to apply unit root tests to the random variables.

Null Hypothesis: I has a unit root
Exogenous: Constant
Lag Length: 1 (Automatic – based on SIC, maxlag=11)


Augmented Dickey-Fuller test statistic: -1.281177 (one-sided p-value: 0.6347)

As can be seen from the graph, there is a downward trend in interest rates. The unit root test suggests that a lagged dependent variable should be included.

Inflation – 100*(log(CPI)t-log(CPI)t-4)

OUTPUT_GAP – 100*(log(real_gdp)t-log(real_gdp)t-1)

Empirical Framework

U.S. Data test section


Following Taylor’s rule, a regression of the model below is estimated:

It = a + bIt-1 + cGapt + dΔyt + zπt + Σk ek st-k (1)

In which:

It: the quarterly average of the daily observations on the federal funds rate
Gapt: the Hodrick-Prescott detrended real GDP gap
πt: the four-quarter moving average of the rate of inflation in the consumer price index
Δyt: the quarterly percentage change in real GDP
st-k: the quarterly percentage change in a stock price index, lagged k intervals

We choose this specification to begin with because it represents a simple augmentation of the canonical Taylor rule, without worrying about the potential simultaneity between the current funds rate and current stock prices[6]. Importantly, (1) differs from (0) in that we consider more lagged values of the fed funds rate (It-1) and of the asset price movements (Σ ek st-k).
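Estimating a specification like (1) by OLS amounts to a least-squares fit of the rate on its own lag and the macro regressors. The sketch below does this on synthetic data; the variable names mirror the paper's (gap, delta_y, inflation, lagged stock return s), but all data-generating numbers are assumptions for illustration:

```python
import numpy as np

# OLS sketch of the augmented Taylor rule (1) on synthetic data.
# True coefficients (a, b, c, d, z, e1) below are assumed values.

rng = np.random.default_rng(0)
T = 200
gap = rng.normal(0.0, 1.0, T)
delta_y = rng.normal(0.7, 0.5, T)
inflation = rng.normal(2.0, 0.8, T)
s_lag = rng.normal(1.5, 4.0, T)          # lagged stock index return

i = np.zeros(T)
for t in range(1, T):                    # generate the policy rate
    i[t] = (0.2 + 0.8 * i[t - 1] + 0.1 * gap[t] + 0.2 * delta_y[t]
            + 0.3 * inflation[t] + 0.01 * s_lag[t]
            + rng.normal(0.0, 0.05))

# Regressor matrix: constant, lagged rate, gap, delta_y, inflation, s(-1).
X = np.column_stack([np.ones(T - 1), i[:-1], gap[1:],
                     delta_y[1:], inflation[1:], s_lag[1:]])
beta, *_ = np.linalg.lstsq(X, i[1:], rcond=None)
print(np.round(beta, 2))  # roughly [0.2, 0.8, 0.1, 0.2, 0.3, 0.01]
```

The point of the sketch is the partial-adjustment structure: the coefficient on the lagged rate captures interest-rate smoothing, while the small coefficient on s(-1) plays the role of e1 in (1).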

When we estimate equation (1), we set the following hypotheses:

H0: ek = 0
H1: ek ≠ 0

If we accept the null hypothesis, we may safely claim that monetary policy is not affected by asset price movements. If we reject it, we may safely claim that asset price movements play a role in determining monetary policy.

We start estimating equation (1) using only one lag of st and then proceed from specific to general to decide the correct number of lags (in other words, we re-estimate the model with more lags until we meet a lagged regressor that is not significant).

EViews results of estimating equation (1) using one lagged value of st:

Dependent Variable: F
Method: Least Squares
Date: 02/07/11  Time: 23:57
Sample (adjusted): 1991Q1 2009Q4
Included observations: 76 after adjustments

(coefficient table, including the standard errors, not recovered)

Mean dependent var: 1.194237    Adjusted R-squared: 0.913101
S.D. dependent var: 0.658791    S.E. of regression: 0.194203
Akaike info criterion: -0.364172    Sum squared resid: 2.640028
Schwarz criterion: -0.180167    Log likelihood: 19.83853
Hannan-Quinn criter.: -0.290634    Durbin-Watson stat: 1.742329


It might be noted that the coefficient on s(-1), 0.010320, is positive and statistically significant (p-value = 0.0012, much lower than α = 0.05).

EViews results of re-estimating equation (1) using two lagged values of st:

Dependent Variable: F
Method: Least Squares
Date: 02/07/11  Time: 23:59
Sample (adjusted): 1991Q1 2009Q4
Included observations: 76 after adjustments

(coefficient table not recovered)

Mean dependent var: 1.194237    Adjusted R-squared: 0.913145
S.D. dependent var: 0.658791    S.E. of regression: 0.194153
Akaike info criterion: -0.352758    Sum squared resid: 2.600980
Schwarz criterion: -0.138085    Log likelihood: 20.40479
Hannan-Quinn criter.: -0.266964    Durbin-Watson stat: 1.789665


We may note that the coefficient on the second lag of s (s(-2) = -0.003290) is not significant (p-value of 0.3123, much higher than α = 0.05), whereas the first lag s(-1) remains positive and significant (p-value = 0.0019, lower than α = 0.05). This implies that the correct number of lags is one: we need to include only the first lagged value of the stock index’s return.

Testing significant coefficients

The statistically significant positive coefficient on s(-1) implies that when asset prices go up, the Fed should increase the interest rate. Inflation and the real GDP growth rate are significant (p-values of 0.0023 and 0.0160, less than α = 0.05), whereas the gap is insignificant (p-value of 0.7665, much higher than α = 0.05). However, removing the gap from the specification would conflict with the theory. Indeed, R-squared and adjusted R-squared show that the model explains more than 90% of the variation in the dependent variable. It is necessary to examine the model by applying GMM (the generalized method of moments).

Testing a good fit

As we can see, the R2 of 0.918894 and adjusted R2 of 0.913101 show that the model explains more than 90% of the changes in fed funds rates. The F-statistic also shows that all regressors are jointly significant. This seems to support the view of Cecchetti et al. (2000).

Testing autocorrelation

For Durbin’s h test, with n = 76, we have:

d = 1.742329

h = (1 – d/2)·√(n / (1 – n·Var(b̂))) = 1.2559 < z-critical = 1.96

where Var(b̂) is the estimated variance of the coefficient on the lagged dependent variable.

We cannot reject H0 of Durbin’s h test and conclude that this model does not suffer from serial correlation.
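The h calculation above is a one-liner once the three inputs are known. The paper reports d = 1.742329 and n = 76; the value of Var(b̂) is not reported, so the one used below is an assumed value chosen only to reproduce an h near the paper's 1.2559:

```python
import math

# Durbin's h statistic: h = (1 - d/2) * sqrt(n / (1 - n * Var(b))),
# valid only when 1 - n*Var(b) > 0. d is the Durbin-Watson statistic,
# n the sample size, Var(b) the variance of the coefficient on the
# lagged dependent variable.

def durbin_h(d, n, var_b):
    return (1 - d / 2) * math.sqrt(n / (1 - n * var_b))

# d and n are from the paper; var_b is a hypothetical value.
h = durbin_h(d=1.742329, n=76, var_b=0.00264)
print(h < 1.96)  # True: cannot reject "no serial correlation" at 5%
```

Note that h is compared against the standard normal critical value (1.96 at 5%), unlike the Durbin-Watson d itself.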

Breusch-Godfrey Serial Correlation LM Test:

Prob. F(4,66): 0.4134
Prob. Chi-Square(4): 0.3610

Both the LM statistic and the F statistic are quite low, suggesting that we cannot reject the null of no serial correlation (the p-value of 0.3610 is much higher than α = 0.05). Therefore, both Durbin’s h and the Breusch-Godfrey test conclude that this model does not suffer from autocorrelation.

The analysis above lets us give a positive answer to the first question we posed. The significance of the lagged value of s implies that asset price movements play a role in refining monetary policy.

Testing heteroskedasticity

Both the Breusch-Pagan-Godfrey statistic (16.19839) and the F statistic (3.792163) are quite high, suggesting rejection of the null hypothesis of homoskedasticity (the p-values of 0.0063 and 0.0043 are much less than α = 0.05). As can be seen from the test results, the coefficients on gap, delta_y and s(-1) are insignificant; by contrast, inflation may have a certain impact on the variance of the error terms.

Heteroskedasticity Test: Breusch-Pagan-Godfrey

Prob. F(5,70): 0.0043
Prob. Chi-Square(5): 0.0063
Scaled explained SS: 55.18278, Prob. Chi-Square(5): 0.0000

With White’s test without cross products, the p-values of both White’s statistic (0.0895) and the F statistic (0.0879) are higher than α = 0.05, so these tests do not reject the null of no heteroskedasticity.

Heteroskedasticity Test: White (no cross terms)

Prob. F(5,70): 0.0879
Prob. Chi-Square(5): 0.0895
Scaled explained SS: 32.49105, Prob. Chi-Square(5): 0.0000

In White’s test with cross products, both White’s statistic and the F statistic have p-values near zero, indicating rejection of the null of no heteroskedasticity. Therefore, there is evidence of heteroskedasticity. Intuitively, delta_y*s(-1) and inflation*s(-1) do not have a significant impact on the variance of the error terms (p-values of 0.3398 and 0.2187 are much higher than α = 0.05, respectively). This might imply that the relationship of delta_y and inflation with s(-1) should be examined in the GMM application.

Heteroskedasticity Test: White (with cross terms): Prob. F(20,55) and Prob. Chi-Square(20) near zero (full table not recovered).


The graph of residuals also shows that the model does not fit the data precisely (there are more and more underestimates and overestimates, especially from 2000 to 2009).

Testing for heteroskedasticity suggests that the traditional view, as in equation (1), might be violated by heteroskedasticity in the cross products. The coefficients are still unbiased and consistent, but heteroskedasticity has a wide impact on hypothesis testing: neither the t statistics nor the F statistics are reliable any more, because they will lead us to reject the null hypothesis too often. This is a possible reason to examine whether the relationship between the stock index’s return and the fed funds rate is direct or indirect.

Testing misspecification

In the Ramsey RESET test, since the p-values of 0.1056 (likelihood ratio) and 0.1245 (F-statistic) are both higher than α = 0.05, we cannot reject the null hypothesis of correct specification. Notice, as well, that the coefficient on the squared fitted term is not significant (t-stat = 1.555220).

Ramsey RESET Test
Equation: LAG1_V
Specification: F C F(-1) GAP DELTA_Y INFLATION S(-1)
Omitted Variables: Squares of fitted values

t-statistic: 1.555220
F-statistic: 2.418708, df (1, 69)
Likelihood ratio: 2.618454



Generalized method of moments (GMM)

We will evaluate whether the asset price movement plays a role directly or indirectly through its effects on inflation and output growth. We can do this by using a GMM estimator, which is an instrumental variable approach.

By applying the forward-looking extension for Taylor rule, the instrument list of gap (a Hodrick-Prescott detrended real GDP gap), delta_y (the growth rate of real GDP) and inflation (inflation rate) will be taken to perform GMM.
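The core GMM idea can be illustrated with a minimal instrumental-variables sketch: a regressor correlated with the error is replaced, through the moment conditions, by instruments uncorrelated with it. In the just-identified case the GMM estimator reduces to β = (Z′X)⁻¹Z′y. The data below are synthetic and the variable names are illustrative, not the paper's series:

```python
import numpy as np

# Minimal just-identified IV/GMM sketch on synthetic data.
# x is endogenous (correlated with the structural error u);
# z is a valid instrument (correlated with x, not with u).

rng = np.random.default_rng(1)
n = 500
z = rng.normal(0, 1, n)                          # instrument
u = rng.normal(0, 1, n)                          # structural error
x = 0.9 * z + 0.5 * u + rng.normal(0, 0.3, n)    # endogenous regressor
y = 1.0 + 2.0 * x + u                            # true slope = 2

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)      # (Z'X)^{-1} Z'y
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

print(round(beta_iv[1], 1))      # near 2.0: IV removes the bias
print(beta_ols[1] > beta_iv[1])  # OLS slope is biased upward here
```

With more instruments than regressors, as in the paper's instrument list, GMM additionally weights the moment conditions (here with a HAC weighting matrix), but the logic is the same.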

Dependent Variable: F
Method: Generalized Method of Moments
Date: 02/13/11  Time: 14:11
Sample (adjusted): 1991Q1 2009Q1
Included observations: 73 after adjustments
Linear estimation with 1 weight update
Estimation weighting matrix: HAC (Bartlett kernel, Newey-West fixed bandwidth = 4.0000)
Standard errors & covariance computed using estimation weighting matrix
Instrument specification: GAP(+1) GAP(+2) GAP(+3) DELTA_Y(+1)
Constant added to instrument list

(coefficient table not recovered)

Interpreting results

The relationship of inflation and real GDP with the fed funds rate

According to the OLS results, the growth rate of real GDP and inflation affect the dependent variable positively and significantly, while the gap does not. However, the result might be spurious if the gap is removed, because the heteroskedasticity tests show that the OLS model (1) is no longer the most efficient estimator[1].

The relationship between FTSE100 and Bank of England interest rate

According to the GMM results, it is important to pay attention to the coefficient on s(-1). In the OLS results it was positive and significant, but in the GMM results it is not statistically significant. This implies that asset price movements affect the way in which monetary policy is set only indirectly, not directly.

Consequently, asset price movements affect fed funds rates only indirectly.


In conclusion, the results of this project show that the volatility of stock prices affects monetary policymaking, but only indirectly.

These results might be shaped by the limitations of this project. Cecchetti et al. (2000) focused on estimates of the primary elements in the policy rules, including the output gap and inflation expectations, to support the hypothesis; he supposed that these elements would have to depend on estimates of the evolution of stock prices, provided that asset prices affect the economy’s path. We, however, built two models, OLS and GMM, to test the hypothesis. With GMM, we observed that the coefficient is insignificant; with the OLS model, we surprisingly found that the coefficient on GAP is insignificant. Moreover, the strength of the endogeneity test for the GMM estimation remains open to question, because we have no other test to compare it with. This is why the instrument list we chose is still controversial.

However, the data we use are adequate and the estimates are broadly in line with the Taylor rule, with satisfactory R-squared, Durbin-Watson and F-test results under both the OLS and the GMM models. A large number of surveys indicate that recent research tends to reach similar results. For example, Bernanke and Gertler (1999 and 2001), on the basis of non-optimizing models of monetary policy, concluded that inflation-targeting policymakers should not take asset prices into account except insofar as they foreshadow changes in expected inflation; Adam Posen (2006), likewise, argued that central banks need not target asset prices but would be well advised to monitor them when bubbles get very far out of line. Therefore, the results of this project may reflect the nature of the relationship.

It is likely that stock-price volatility affects monetary policy. To meet the needs of inflation targeting, and given the classical argument that mild inflation can stimulate the real economy, central bank authorities do weigh both factors when making monetary policy decisions, aiming to keep asset prices neither too high nor too low, and the inflation rate likewise.
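To make the estimation approach concrete, the sketch below fits a Taylor-rule-style regression by OLS with NumPy. All data are synthetic and the variable names (`inflation`, `gap`, `rate`) are illustrative assumptions, not the project's actual series; the point is only to show how the coefficient and t-statistic on the output gap would be obtained and judged for significance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic series for illustration only (not the project's data):
# a policy rate generated from a Taylor-type rule with noise.
inflation = 2.0 + rng.normal(0, 0.5, n)
gap = rng.normal(0, 1.0, n)
rate = 1.0 + 1.5 * inflation + 0.5 * gap + rng.normal(0, 0.2, n)

# OLS: regress the policy rate on a constant, inflation, and the output gap.
X = np.column_stack([np.ones(n), inflation, gap])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)

# Conventional OLS standard errors and t-statistics.
resid = rate - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta / se
print(beta.round(3), t_stats.round(1))
```

With real data, an insignificant t-statistic on `gap` (roughly |t| < 2) is what the project reports; a GMM estimate would replace the OLS step with moment conditions built from an instrument list.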

Fuhrer, Jeff and Tootell, Geoff. "Eyes on the Prize: How Did the Fed Respond to the Stock Market?" Federal Reserve Bank of Boston, Discussion Paper No. 04-2, April 2004.

Bernanke, Ben and Gertler, Mark. "Monetary Policy and Asset Price Volatility." Federal Reserve Bank of Kansas City Economic Review, Fourth Quarter 1999, 84(4), pp. 17-52.

Bernanke, Ben and Gertler, Mark. "Should Central Banks Respond to Movements in Asset Prices?" American Economic Review, May 2001.

Posen, Adam S. "Why Central Banks Should Not Burst Bubbles." Institute for International Economics, Washington, DC, Working Paper 01/06, 2006.

Filardo, Andrew. "Monetary Policy and Asset Price Bubbles: Calibrating the Monetary Policy Trade-Offs." BIS Working Paper No. 155, Basel, 2004.

Roubini, Nouriel. "Why Central Banks Should Burst Bubbles." International Finance 9, no. 1, 2006.

Cecchetti, Stephen; Genberg, Hans; Lipsky, John and Wadhwani, Sushil. "Asset Prices and Central Bank Policy." London: International Center for Monetary and Banking Studies, 2000.

Taylor, Stephen J. "Asset Price Dynamics, Volatility, and Prediction." Princeton University Press, 2005.

Asteriou, Dimitrios and Hall, Stephen G. "Applied Econometrics: A Modern Approach." Palgrave Macmillan, revised edition, 2007.

Studenmund, A. H. "Using Econometrics: A Practical Guide." Pearson, International Edition, 5th edition.



Test the Efficiency of Chinese Stock Market


1. Background

Osborne (1959) proposed that the movements of stock prices resemble "Brownian motion" in chemistry: the never-ending, disordered movement of particles suspended in a liquid or gas. Because of this "random walk" feature, the path of a share price is unpredictable. Fama likewise recognized that stock returns have no "memory": according to Fama (1965, 1970), share prices in the stock market follow a random walk, which means that in an efficient stock market the current share prices reflect all available and relevant information immediately and rationally. In that case, the current share prices in a financial market will equal the optimal forecasts using all available information (Fama, 1970).
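A minimal simulation illustrates the random-walk idea: if price changes are independent, the lag-1 autocorrelation of returns is close to zero, so yesterday's return carries no information about today's. The series below is purely synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a random-walk (log-)price series: each return is independent noise.
returns = rng.normal(0, 0.01, 10_000)
prices = 100 * np.exp(np.cumsum(returns))

# Under the random-walk hypothesis successive returns are uncorrelated,
# so the lag-1 autocorrelation of returns should be near zero ("no memory").
lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(round(lag1, 4))
```

A statistically significant lag-1 correlation in real data would be the kind of "memory" that Fama's framework rules out for an efficient market.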

Building on the random walk theory, Fama proposed the efficient market hypothesis, which implies that in an efficient stock market investors cannot predict the future course of share prices from historical data; no abnormal returns exist in an efficient market, since prices reflect all relevant information immediately (Fama, 1970). According to Fama's efficient market hypothesis there are three levels of market efficiency: weak form, semi-strong form and strong form (Fama, 1970).

Building on Fama's theory, recent years have produced many studies of stock returns in financial economics, along with a vast number of empirical investigations of stock prices in both developed and developing countries. The results are mixed. Most early research supports the weak and semi-strong forms of the efficient market hypothesis in developed capital markets (see, e.g., Osborne 1962; Granger and Morgenstern 1963; Fama 1965; Ball and Brown 1968), whereas more recent research has reported that stock market returns are predictable (Poterba and Summers 1986; Fama and French 1988; Lo and MacKinlay 1988). The empirical evidence is also mixed for developing countries. Studies of emerging stock markets can be divided into two groups by their findings: those that find evidence supporting weak-form efficiency (e.g., Urrutia 1995; Ojah and Karemera 1999; Abrosimova et al. 2005; Moustafa 2004), and those that show evidence of predictability or rejection of the random walk hypothesis in stock returns (e.g., Huang 1995; Poshakwale 1996; Mobarek and Keasey 2002; Khaled and Islam 2005).

1.1.1 Chinese stock market

Given the above results, and considering China's rapid economic growth and the fast development of the Chinese stock market in recent years, there is a clear motivation to test this emerging market.

China has two stock markets, the Shanghai stock market and the Shenzhen stock market, and both classify stocks into A-shares, which are available to Chinese residents, and B-shares (Ma, 2004). The Shanghai Stock Exchange was founded on 26 November 1990 and the Shenzhen Stock Exchange started on 3 July 1991 (Liu, 2010). Since 2000 the Chinese government has pursued market-oriented policies, and under effective organization both stock markets have expanded rapidly and operated in an increasingly well-regulated environment. Indeed, as the second-largest capital market in Asia, the Chinese stock market is one of the most rapidly growing emerging markets (Chung, 2006).

Previous research has tested the efficiency of the Chinese stock market and other emerging markets (e.g., Harvey 1993; Laurence et al. 1997; Mookerjee and Yu 1999; Lima and Tabak 2004; Ma 2004; Chung 2006; Liu 2010). Although some studies have shown that emerging markets are not as nonrandom as once thought, most indicate that emerging markets are less efficient than developed markets, and most conclude that price changes in the Chinese stock markets do not follow a random walk. However, most of these studies concentrated on the initial years after the Chinese stock markets were established. Given the market's rapid development in recent years, amid increasing privatization and liberalization, the Chinese stock market today may well be more than weak-form efficient. This study therefore tests both the weak-form and the semi-strong-form efficiency of the Chinese stock market, using a large number of indexes and data from both the Shanghai and Shenzhen stock markets.

1.2 Objectives of the study

1.3 Scope of the study

1.4 Limitations of the study

2. Theoretical and empirical review

2.1 Concept of the efficient market hypothesis (EMH)

According to Fama (1965, 1970), in an efficient market the stock price accurately reflects all available and relevant information instantaneously; in that case investors can earn only normal returns, since the price adjusts immediately as information arrives.

Fama's 1970 paper, "Efficient Capital Markets: A Review of Theory and Empirical Work", is viewed as the earliest formal foundation of the efficient market hypothesis. In it he set out three sufficient conditions for an efficient market: no transaction costs, freely available information, and participants who share the same preferences regarding profit maximization and risk aversion and have sufficient knowledge of the market. Under these conditions the current price "fully reflects" all available information correctly and immediately (Fama, 1970).

Fama's efficient market hypothesis concentrates on how the market uses information, while other approaches focus on the relationship between investors and share prices (Chung, 2006). There are also critics of the hypothesis: Leroy (1989), for instance, regards it as empty and tautological, since it does not specify how all the information is used to determine share prices. Despite this, Fama's theory is widely accepted in research on market efficiency.

On the assumption that prices "fully reflect" all available information and that all investors are rational, the efficient market model can be written as

P_t = P_t^of  (Mishkin and Eakins, 2009)

where P_t is the current price and P_t^of is the optimal forecast using all available information.

Fama (1970) identified three forms of market efficiency: weak form, semi-strong form and strong form. The three are nested: weak-form efficiency is contained within semi-strong-form efficiency, while strong-form efficiency encompasses both of the others (Fama, 1970).

2.1.1 Weak form efficient hypothesis

Weak-form efficiency is the lowest level of market efficiency: in a weak-form efficient market, share prices fully reflect all information contained in the historical movement of prices (Fama, 1970). In other words, under weak-form efficiency the future price of a stock cannot be predicted by tracing the past path of its price, and investors cannot obtain abnormal profits by analyzing historical price information (Samuelson, 1965).

Based on this definition, testing whether technical analysis of past prices can generate abnormal returns amounts to testing whether the market is weak-form efficient. Several specific methods can be used, such as the filter approach and tests of the random walk hypothesis (Ma, 2004). The historical evidence on weak-form efficiency is as follows.

Evidence from tests of the random walk hypothesis

To test weak-form market efficiency, investigations examine the serial correlation between a security's current return and its historical returns (Fama, 1965).

Bachelier (1900) studied French government bonds and found that they follow a random walk, in what appears to be the first test of the hypothesis. Kendall (1953) examined British securities by testing the correlation of weekly prices and found the price series to be independent over time. Roberts (1959) used weekly changes in the Dow Jones Industrial Average (DJIA) to examine whether share-price paths could be replicated on the assumption that prices follow a random walk, and his results support the theory. Osborne (1959) offered an economic rationale for the random walk: individuals form their views independently when facing new information, which is why security prices change independently. Cootner (1962) found that since information arrives independently, share prices change with it in random-walk fashion. Granger and Morgenstern (1963) studied the independence of successive share-price movements and reached a similar conclusion. In 1965 Fama tested the daily returns of 30 DJIA securities between 1957 and 1962 using serial correlation tests, runs tests and Alexander's filter rules; he found only very small positive correlation coefficients, again supporting the random walk (Fama, 1965). Poterba and Summers (1988) went further, showing from returns on the New York Stock Exchange and 17 other stock markets that share returns are positively correlated over short horizons and negatively correlated over long horizons.
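One of the classic tools mentioned above, the runs test used by Fama (1965), can be sketched briefly. The code below applies a Wald-Wolfowitz-style runs test to simulated (synthetic, not historical) returns: it counts runs of same-signed returns and compares the count with its expectation under independence.

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.normal(0, 1, 5000)  # synthetic stand-in for daily returns

# Runs test (Wald-Wolfowitz form): count runs of same-signed returns and
# standardize against the expectation and variance under independence.
signs = returns > 0
runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
n_pos, n_neg = int(signs.sum()), int((~signs).sum())
n = n_pos + n_neg
expected = 2 * n_pos * n_neg / n + 1
variance = (2 * n_pos * n_neg * (2 * n_pos * n_neg - n)) / (n**2 * (n - 1))
z = (runs - expected) / np.sqrt(variance)
print(round(z, 2))  # |z| < 1.96: cannot reject independence at the 5% level
```

On independent returns the z-statistic stays small; a market with momentum would show too few runs (negative z), one with reversal too many (positive z).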

On the other hand, some evidence contradicts the weak-form efficient hypothesis. Timothy (1993) reported findings against it, arguing that transaction costs delay the timing of price adjustment and so induce correlation in share returns. Sias and Starks (1997) argued that the correlated trading patterns of professional investors explain the autocorrelation of share returns.

Evidence from tests of trading strategies

To test weak-form efficiency, various trading strategies have been used in the research. The filter approach, one of the essential trading strategies, is designed to track the underlying movement of share prices using a chosen percentage band (Kisor and Messner, 1969): for instance, selling a share once it falls 5 percent from its previous high, and buying a share once it rises 5 percent from its previous low. If investors could earn excess profits with such a strategy, it would be suggestive of weak-form inefficiency (Kisor and Messner, 1969).
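The filter rule just described can be sketched in a few lines. This is a simplified illustration in the spirit of Alexander (1961), run on synthetic random-walk prices rather than any historical index; the function name and parameters are hypothetical, and transaction costs are ignored.

```python
import numpy as np

def filter_rule(prices, band=0.05):
    """Simplified x% filter strategy: go long after the price rises `band`
    above its most recent trough, go flat after it falls `band` below its
    most recent peak. Returns the strategy's gross wealth multiple."""
    position = 0                      # 0 = flat, 1 = long
    peak = trough = prices[0]
    wealth, entry = 1.0, None
    for p in prices[1:]:
        peak, trough = max(peak, p), min(trough, p)
        if position == 0 and p >= trough * (1 + band):
            position, entry, peak = 1, p, p       # buy signal
        elif position == 1 and p <= peak * (1 - band):
            wealth *= p / entry                   # sell signal: realise trade
            position, trough = 0, p
    if position == 1:
        wealth *= prices[-1] / entry              # close any open position
    return wealth

# On a simulated random walk the filter rule should not systematically beat
# buy-and-hold, and after transaction costs it falls below it, which is the
# pattern Alexander (1961) and Fama and Blume (1966) reported.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))
print(filter_rule(prices), prices[-1] / prices[0])
```

Comparing the strategy's wealth multiple with buy-and-hold over many simulated paths, and subtracting a fee per trade, reproduces the logic of the weak-form tests described above.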

Alexander (1961) applied the filter-rule methodology to test the random walk: analyzing filters ranging from 5 percent to 50 percent, he tested the Dow Jones Industrial Average from 1897 to 1929 and the Standard and Poor's index from 1929 to 1959. He found that such strategies cannot generate higher returns than a simple buy-and-hold strategy once transaction fees are taken into account. Fama and Blume (1966) tested 30 DJIA securities using 24 different filters ranging from 5 to 50 percent; they found that the 5 percent filter could earn considerably more than buy-and-hold, but that after transaction costs the profit fell below buy-and-hold. Consequently, whatever filter size is used, no extra returns can be generated by the filter strategy. Ball (1978) added risk into consideration when testing the Melbourne Stock Market, and his conclusion was also consistent with weak-form efficiency.

However, when Bertoneche (1981) applied two approaches (spectral analysis and filter rules) to the same stock markets (the New York Stock Exchange and six European stock markets) over the same period (1969 to 1976), the outcomes were opposite. Bertoneche (1981) concluded that spectral analysis is not powerful enough and that the stock markets are not weak-form efficient.

Anomalous evidence on the efficient market hypothesis

Some anomalies (such as the day-of-week effect, the month-of-year effect and the holiday effect) also exist in stock markets. De Bondt and Thaler (1985) analyzed portfolios of past winners and losers over three-year intervals; they found that the previously superior portfolios underperformed over the following three years, suggesting that the past path of stock prices does carry some information.

The seasonal effect is an important anomaly in which stock returns change regularly at particular times, as with the day-of-week effect and the January effect (Ma, 2004). Fields (1931) tested the Dow Jones Industrial Average from 1915 to 1930 and first documented a day-of-week effect. Cross (1973), French (1980), Jaffe and Westerfield (1985), Ball and Bowers (1988) and other scholars also verified the existence of the Monday effect in many stock markets. Wang (1997), however, found that the Monday effect had disappeared from the U.S. stock market, based on data from 1962 to 1993. Wachtel (1942) first identified the January effect, whereby stock returns in January are higher than in any other month. Subsequently, Rozeff and Kinney (1976), Brown (1983), Tinic (1987), Reinganum and Shapiro (1987), Agrawal and Tandon (1994) and Mookerjee and Yu (1999) found that January returns are normally higher than those of other months in most stock markets.
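Detecting a day-of-week effect amounts to comparing mean returns across weekdays. The sketch below does this on synthetic returns into which a "Monday effect" is deliberately injected; the weekday coding and magnitudes are illustrative assumptions, not estimates from any market.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Hypothetical daily returns tagged with weekday codes 0=Mon .. 4=Fri.
weekday = np.arange(n) % 5
returns = rng.normal(0.0004, 0.01, n)
returns[weekday == 0] -= 0.003  # inject a "Monday effect" for the demo

# Mean return per weekday: a pronounced gap between Monday and the rest
# is the pattern Fields (1931) and later authors documented.
means = np.array([returns[weekday == d].mean() for d in range(5)])
print(means.round(5))
```

In practice one would follow this with a test of equality of means (e.g. an F-test across weekday groups) before claiming an anomaly.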

Despite these anomalies, it is hard for investors to earn abnormal returns purely by analyzing historical prices, and most of the evidence supports the view that stock markets follow a random walk and are weak-form efficient.

2.1.2 Semi-strong form efficiency and tests

In a semi-strong-form efficient market, security prices fully reflect all publicly available information, including not only past prices but also technological breakthroughs, earnings data, stock splits and dividend announcements (Fama, 1970). In such a market no participant has time to make superior returns by analyzing announcements, since share prices adjust to the new information instantaneously (Fama, 1970).

Historical evidence on semi-strong form of market efficiency

Tests of semi-strong-form efficiency focus on whether security prices adjust to new public information fully, correctly and immediately; in other words, whether investors can benefit by analyzing information after it is announced (Fama, 1965). The pioneers in this field were Fama, Fisher, Jensen and Roll (1969), who examined 940 stock splits on the New York Stock Exchange between 1927 and 1959 and found that purchasing split securities after the split announcement could not increase expected returns. Their results support semi-strong-form efficiency. Brown (1970), Brown (1972) and Brown, Finn and Hancock (1977) studied half-year and annual profit announcements, and the evidence shows that share prices incorporated all the published information immediately, again supporting the theory of semi-strong-form efficiency.

On the other hand, there is also evidence that in some conditions markets are not semi-strong-form efficient. Scholes (1972) tested shares on the New York Stock Exchange between July 1961 and December 1965 and found that superior returns existed in response to insiders' block transactions. Ibbotson (1975) showed that initial purchasers of new issues could obtain superior returns in the first month, with returns falling to the average level by the third month. Kahneman and Tversky (1982) explained that, because of investor overreaction, a strategy of buying undervalued stocks and selling overvalued ones will earn above-average profits. Rendleman, Jones and Latane (1987) tested share performance around announcement periods and found that prices adjusted before announcements were published and commonly overreacted to them, which is inconsistent with semi-strong-form efficiency. Lewellen and Shanken (2002) suggest that variables such as interest rates and financial ratios can be used to predict security prices in stock markets.

2.1.3 Strong form efficiency and tests

In a strong-form efficient market, security prices correctly and fully reflect all available information: historical, private and public (Fama, 1970). In this case no market participant, including those with monopolistic access to inside information, can earn abnormal returns, since all information has already been incorporated into share prices (Fama, 1970).

Historical evidence on strong form of market efficiency

Under the assumption of strong-form efficiency, the average prices of securities must be independent of the distribution of information, and investors must place the same value on that information (Ma, 2004).

If insiders can access privileged information, buy shares before it is published and sell them after the price rises on the news, this is viewed as a contradiction of strong-form efficiency (Fama, 1970).

Empirical tests of strong-form efficiency focus on whether managers, analysts and professional investors have access to inside information and whether they can profit from it (Fama, 1970).

Jensen (1969) evaluated fund managers' performance by studying 115 U.S. mutual funds from 1955 to 1964; he concluded that after the deduction of management and other fees, the funds did not outperform the market average, supporting strong-form efficiency of the U.S. stock market. Diefenbach (1972) reported that professional investors' above-average performance and superior profits were due to chance.

On the contrary, Niederhoffer and Osborne (1966) noted that on the New York Stock Exchange, specialists could obtain superior returns by using their monopolistic access to inside information. Jaffe (1974) found that insiders earned above-average returns, based on an investigation of insider trading at 200 U.S. companies. Ippolito (1989) analyzed 143 U.S. mutual funds between 1965 and 1984, found that their returns were 0.83 percent above the Sharpe-Lintner market line, and concluded that insider trading was present.

2.2 Empirical work on efficiency

Stock markets around the world have different backgrounds and have grown up under different governments. Some, such as the United States, United Kingdom and Japanese stock markets, are developed markets and relatively liberal; others, such as the Latin American markets and the Indonesian stock market, are emerging markets subject to more restrictions (Risso, 2009). In particular, their efficiency differs: developed markets normally perform more efficiently than emerging markets under the same circumstances (Risso, 2009). It is therefore necessary to discuss developed and emerging markets separately.

2.2.1 Evidence from developed stock market

Empirical test of the United States stock market

As one of the most important stock markets, the United States market has attracted many scholars' attention. In terms of weak-form efficiency, Fama (1965) examined the Dow Jones Industrial Average in the period from 1957 to 1962, with results supporting the weak-form efficient hypothesis. Alexander (1961) developed the filter-rule trading strategy to test the DJIA (1897–1929) and the Standard and Poor's index of 500 stocks (S&P 500) (1929–1959), and his results also support the random walk theory. Fama and Blume's (1966) filter-rule study of the DJIA likewise sustains weak-form efficiency. On the contrary, Fields (1931), examining the DJIA from 1915 to 1930, found day-of-week effects; similarly, Cross (1973) and French (1980) documented day-of-week effects in the S&P 500. Lo and MacKinlay (1988) found significant positive serial dependence in weekly stock price indexes and individual securities. Rozeff and Kinney (1976), Keim (1982), Sias and Starks (1997) and others presented evidence of month-of-year effects in the U.S. markets; Keim (1982) linked the January effect to company size, while Schultz (1985) attributed it to the tax regime. Fields (1931) and Smidt (1988) found pre-holiday effects in the U.S., with stock prices rising significantly on the days before public holidays.

Regarding semi-strong-form efficiency, Fama, Fisher, Jensen and Roll (1969) investigated stock splits on the New York Stock Exchange from 1927 to 1959, with results consistent with semi-strong-form efficiency. However, the results of Scholes (1972) and Chan (1988) show that the New York Stock Exchange failed to support semi-strong-form efficiency, and Busse and Green (2002) found that in the U.S. market share prices react much more quickly to positive announcements than to negative ones.

As for strong-form efficiency, the studies of Jensen (1969) and Diefenbach (1972) are consistent with the strong-form hypothesis for the U.S. stock market. In contrast, more evidence points to insider trading or abnormal returns in the U.S. market, such as the results of Jaffe (1974) and Ippolito (1989).

Empirical test of the European developed stock markets

As typical developed markets, the European stock markets have been tested by many scholars. With regard to weak-form efficiency, Kendall (1953) investigated the British stock market and concluded that it was weak-form efficient. Bird (1985) tested the London Metal Exchange and judged it weak-form efficient. Hudson, Dempsey and Keasey (1996) showed that trading strategies cannot earn abnormal returns in the United Kingdom stock market, consistent with weak-form efficiency. Other scholars disagree: Reinganum and Shapiro (1987) identified January effects in the United Kingdom; the results of Jennergren (1975) and Hunter (1998) indicate that trading strategies are useful in the Swedish market and on the Jamaican Stock Exchange, respectively; and Al-Loughani and Chappell (1997) applied Dickey-Fuller unit root, Lagrange Multiplier (LM) serial correlation and Brock, Dechert and Scheinkman non-linear tests to the United Kingdom stock market and rejected the weak-form efficient hypothesis.

Goss (1983) reported that the London Metal Exchange is semi-strong-form efficient.

Empirical tests of the other developed markets

There are various empirical tests of the efficient market hypothesis in other developed markets, with the main evidence as follows. Ball (1978) used filter rules to test the Australian stock market, and his result was consistent with weak-form efficiency. Laurence (1986) examined the Kuala Lumpur and Singapore stock markets and concluded that both follow a random walk. Lee (1992) tested the U.S. market and ten industrialized countries' markets using the variance ratio test, with results consistent with a random walk. Choudhry (1994) studied seven OECD countries' markets with the Augmented Dickey-Fuller and KPSS unit root tests, and the results support the random walk in those markets. Cheung and Coutts (2001) confirmed that the Hang Seng Index of Hong Kong follows a random walk. Huang (1995) employed variance ratio statistics to test nine Asian stock markets (Hong Kong, Japan, Korea, Singapore, Malaysia, Indonesia, Thailand, the Philippines and Taiwan); only the Japanese, Indonesian and Taiwanese markets support the random walk. Brown, Keim, Kleidon and Marsh (1983) found evidence of a month-of-year effect in Australia; Gultekin and Gultekin (1983) and Agrawal and Tandon (1994) also verified such effects in global markets. Ball and Bowers (1988) showed that, despite the existence of month-of-year effects in the Sydney Stock Exchange index, their significance is negligible; however, they did verify holiday effects in the Australian stock market.
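The variance ratio statistic used by Lee (1992) and Huang (1995) can be illustrated compactly. The sketch below is a simplified version (non-overlapping sums, no heteroskedasticity correction, synthetic data): under a random walk the variance of q-period returns is q times the one-period variance, so the ratio is near 1, while positive serial correlation pushes it above 1.

```python
import numpy as np

def variance_ratio(returns, q):
    """Simplified variance ratio: variance of non-overlapping q-period
    returns divided by q times the variance of one-period returns.
    Near 1 under a random walk; above 1 with positive autocorrelation."""
    r = np.asarray(returns, dtype=float)
    r = r[: len(r) // q * q]
    rq = r.reshape(-1, q).sum(axis=1)   # non-overlapping q-period returns
    return rq.var(ddof=1) / (q * r.var(ddof=1))

rng = np.random.default_rng(5)
n = 20_000

# i.i.d. returns: the ratio should hover near 1.
iid = rng.normal(0, 1, n)

# AR(1) returns with coefficient 0.5: positive serial dependence pushes
# the ratio well above 1, which is how the test flags departures from
# the random walk.
eps = rng.normal(0, 1, n)
ar = np.empty(n)
ar[0] = eps[0]
for t in range(1, n):
    ar[t] = 0.5 * ar[t - 1] + eps[t]

print(round(variance_ratio(iid, 5), 3), round(variance_ratio(ar, 5), 3))
```

The published tests (e.g. Lo and MacKinlay 1988) use overlapping sums and asymptotic standard errors to form a z-statistic; the ratio itself carries the intuition.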

Brown (1970, 1972) and Brown, Finn and Hancock (1977) investigated the impact of announcements in Australia, concluding that the Australian stock market was consistent with semi-strong-form efficiency. Groenewold (1997) used cointegration and Granger causality tests to examine the semi-strong-form efficiency of the Australian and New Zealand stock markets, showing that both support semi-strong-form efficiency.

Robson (1986) researched the performance of Australian mutual funds over the ten years from 1969 to 1978, and the evidence accords with strong-form efficiency.

2.2.2 Evidence from emerging stock markets

With the rapid development of the developing countries and emerging markets, an increasing number of studies focus on the level of efficiency in emerging markets (Hasson, 2006). Sharma and Kennedy (1977) tested the Bombay Stock Exchange and supported weak-form efficiency. Dickinson and Muragu (1994) supported weak-form efficiency for the Nairobi Stock Exchange. Urrutia (1995) tested four Latin American emerging markets, with results supporting weak-form efficiency. Filer and Hanousek (1996) confirmed that the stock markets of the Czech Republic, Hungary, Poland and the Slovak Republic are consistent with the weak-form efficient market hypothesis. Ojah and Karemera (1999) examined four Latin American markets and found that all but Chile follow a random walk, similar to Urrutia's (1995) result. Rockinger and Urga (2000) reported that the Hungarian stock market index is unpredictable and follows a random walk. Abeysekera (2001) confirmed the weak-form efficiency of the Colombo Stock Exchange in Sri Lanka and reported no seasonal effects in the Sri Lankan market. Gilmore and McManus (2003) confirmed that the central European stock markets are weak-form efficient. Abrosimova et al. (2005) tested the Russian stock market, with conclusions supporting the weak-form efficient hypothesis. Recently, Tsukuda et al. (2006) also found that the stock markets in the Czech Republic, Hungary and Poland are weak-form efficient. Chander, Mehta and Sharma (2008) tested the Indian stock market, and their results indicate that price changes there follow a random walk and that trading strategies based on past prices cannot earn extra returns.

On the other hand, Solnik (1973) and Jennergren and Korsvold (1975), they tested some European markets include less-developed markets, and the consequences were contrast to the random walk. Errunza and Losp (1985) investigated 10 emerging markets and found the emerging markets are less efficient than the developed markets. Similarly, Roux and Gilbertson (1978) found Johannesburg Stock Market was weak form inefficiency. Ghandi, Saunders and Woodward (1980) also viewed Kuwaiti Stock Market not weak form efficiency. Paekinson’s (1987) research rejected the weak form efficiency hypothesis in the Nairobi Stock Exchange. Wong, Neho, Lee and Thong (1993) confirmed the January effects in Malaysian stock markets. Harvey’s (1995) tested six Latin American, eight Asian, three European and two African emerging markets and the investigation consisted with the finding that most of emerging markets are failed to support weak form efficient market hypothesis. Claessens, Dasgupta and Glen (1995) investigated nineteen emerging markets and he voiced the emerging markets inconsistent with weak form efficient market hypothesis. Poshakwale (1996, 1997) reported the India Stock Market is not weak form efficiency. Coutts (2000) certificated the existence of day-of-week effect in Athens stock market. The research of the three Gulf countries—Saudi Arabia, Kuwait and Bahrain made by Abraham, Seyyed and Alsakran (2002) rejected weak form efficient hypothesis. Mobarek and Keasey (2002) certificated the weak form inefficiency of the Dhaka stock market by using runs and autocorrelation tests. Appiah- Kusi and Menyah (2003) tested eleven African stock markets and only five of them had the evidence to consistent with the efficient market hypothesis. Hassan and Maroney (2004) tested the Dhaka Stock Exchange and found that the current stock prices can be predict by analyzing the history price path, which rejected the weak form efficient market hypothesis. 
Consequently, most of the research accepts the view that most emerging stock markets are not as efficient as the developed stock markets. Moustafa (2004) confirmed the inefficiency of the United Arab Emirates stock exchange with evidence that individual stocks did not follow a normal distribution. Tas and Dursonoglu (2005) confirmed that the Turkish stock market was inefficient during their research period. Goldman and Sosin (1979) explained the inefficiency in emerging markets as the result of barriers to information dissemination. Butler and Malaikah (1992) attributed the inefficiency to institutional factors such as illiquidity, reporting delays and market fragmentation.

However, Akinkugbe (2005) found evidence that the Botswana Stock Market is consistent with the semi-strong form efficient market hypothesis, using autocorrelation, Augmented Dickey-Fuller and Phillips-Perron unit root tests.

2.3 Previous research on the Chinese stock market

Since the 1990s, with the rapid development of the Chinese economy, the Shenzhen and Shanghai stock markets have emerged. Over nearly 20 years of development, a growing number of investors and scholars have become interested in investigating whether China's stock market is efficient and, if so, what level of efficiency it has attained.

2.3.1 Weak form efficiency of the Chinese stock market

As far back as 1994, Bailey first examined nine stocks from both the Shanghai and Shenzhen stock markets and concluded that the B-shares of China's stock market were consistent with the weak form efficient hypothesis. Groenewold et al. (2001) confirmed the weak form efficiency of B-shares in the Chinese stock market.

As to the A-shares in the Shanghai and Shenzhen stock markets, the conclusions conflict across studies. Song (1995) tested 29 securities in the Shanghai stock market and found it weak form efficient. Wu (1996) applied a serial correlation test to both the Shanghai and Shenzhen stock markets over the period from June 1992 to December 1993 and concluded that both markets are weak form efficient (Seddighi and Nian, 2004). Laurence, Cai and Qian (1997) tested Shanghai A-shares and B-shares and Shenzhen A-shares and B-shares with serial correlation tests; they found that both A-share series showed negative correlation while both B-share series showed positive correlation, and that the magnitude of the correlation had decreased since 1994, which indicated that China's stock markets were becoming increasingly efficient. Liu, Song and Romilly (1997) applied the ADF unit root test to the Shanghai and Shenzhen stock markets and their results supported the view that China's stock markets follow a random walk. Long, Payne and Feng (1999) tested A-shares and B-shares in the Shanghai stock market and supported the random walk. Ma and Barnes (2001) applied serial correlation, runs and variance ratio tests to both A-shares and B-shares on the Shanghai and Shenzhen stock markets; their results showed the Chinese stock market to be inefficient, although by Fama's (1965) standard it was consistent with the weak form efficient hypothesis. Lima and Tabak's (2004) variance ratio tests of the Shanghai and Shenzhen stock markets between June 1992 and December 2000 indicated that the Chinese stock markets were weak form efficient.

Yu (1994) tested data from the Shanghai stock market before 1994 and found it weak form inefficient. Hsiao (1996) tested the Shanghai stock market and the result rejected the weak form efficient hypothesis. Mookerjee and Yu (1999) investigated the Shanghai stock market (from 19 December 1990 to 17 December 1993) and the Shenzhen stock market (from 3 April 1991 to 17 December 1993) using the serial correlation test and the runs test, and found seasonal anomalies in both markets, which contradicts weak form efficiency. Darrat and Zhong (2000) employed the variance ratio test and a model-comparison method to test the A-shares of the Shanghai and Shenzhen stock markets; their results showed that share prices in both markets did not follow a random walk. Groenewold et al. (2001) and Chen and Hong (2003) showed evidence of inefficiency in both the Shanghai and Shenzhen stock markets. Seddighi and Nian (2004) tested the Shanghai Security Index and eight shares on the Shanghai Stock Exchange from January 2000 to December 2000 using the Lagrange Multiplier test, the Dickey-Fuller test and the ARCH test, and their study also concluded that the Chinese stock market was weak form inefficient. Gao and Kling (2005) confirmed the existence of seasonal effects in the Chinese stock markets. Chung (2006) used four methods to test the weak form efficiency of the Chinese stock markets and found that, despite becoming increasingly efficient, they were not consistent with weak form efficiency. More recently, Lim and Brooks (2009) employed a battery of nonlinearity tests to investigate the Chinese stock markets and their results indicate that China's stock market is weak form inefficient. Liu (2010) employed serial correlation, unit root, ARIMA, GARCH, artificial neural network and bootstrap tests and also concluded that the Chinese stock market is weak form inefficient.

According to Mookerjee and Yu (1999), Darrat and Zhong (2000) and Liu (2010), the likely reasons for the inefficiency of China's stock markets are the restricted supply of shares, thin trading, asymmetric information, and the imperfect and only partially implemented policies.

2.3.2 Semi-strong form efficiency of the Chinese stock market

Given the conflicting evidence even on weak form efficiency, only a few scholars believe that the Chinese stock market could reach the level of semi-strong form efficiency.

Haw, Qi and Wu (2000), investigating the A-shares of the Shanghai and Shenzhen stock markets, found that good news was generally reported earlier than bad news. Ma (2000) tested price movements after announcements of information such as dividends and bonuses, and found that the Chinese market departed from semi-strong form efficiency, since the evidence showed the existence of both under-reaction and over-reaction. Su (2003) argued that A-share prices could not reflect all available information immediately and correctly, while B-shares performed better than A-shares. Ma (2004) tested the random walk, seasonal effects and the reaction of share prices to announcements of dividends, bonuses and rights issues, and concluded that the Chinese stock market was neither semi-strong form nor weak form efficient.

3. Methodology

As mentioned above, the efficiency of the Chinese stock markets is still controversial. In order to test their level of efficiency, the following methods will be used in this paper.

3.1 Methods to test whether the Chinese stock market is weak form efficient

Based on previous scholarly research on the efficient market hypothesis, four main statistical methods are applied to test weak form efficiency in this essay: the Augmented Dickey-Fuller unit root test, serial correlation tests, runs tests and variance ratio tests are used to test for a random walk.

According to Campbell and Hamao (1989), random walk processes can be classified into three sub-hypotheses: random walk 1 (RW1), random walk 2 (RW2) and random walk 3 (RW3). RW1 is the most restrictive, requiring the movements of share prices to be independent and identically distributed; RW2 requires only independence; and RW3 further allows for serial correlation in the squared increments, which gives rise to conditional heteroscedasticity (Campbell and Hamao, 1989).

Unit root test

The unit root test is a common approach to testing the random walk nature of price changes; it was developed by Dickey and Fuller (1981) to test the stationarity of a time series. The only requirement of the unit root test is that the first and second moments tend to fixed stationary values, while the covariance between y_t and y_s tends to a stationary value that depends only on |t−s| (Davidson and MacKinnon, 2004).

The theory of the unit root test is based on testing the stationarity of a time series: a series can be viewed as stationary if its mean and autocovariances are independent of time, and a very common non-stationary series is the random walk (Campbell and Ludvigson, 1997). The non-stationary (random walk) process is as follows:

y_t = y_{t−1} + u_t   (Davidson and MacKinnon, 2004) (1.1)

y: the series;

u_t: a stationary random disturbance term.

The series y has a constant forecast value, conditional on t, and its variance increases over time; the random walk is a difference stationary series, since the first difference of y is stationary (Davidson and MacKinnon, 2004):

Δy_t = y_t − y_{t−1} = u_t   (Davidson and MacKinnon, 2004) (1.2)

The first difference of the series is I(0). A series which is integrated of order d is denoted I(d): such a model has d unit roots and must be differenced d times before an I(0) series results (Davidson and MacKinnon, 2004). Generally, a process with d unit roots becomes stationary after d differences (Enders, 2003).
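To make the I(1) idea concrete, the following sketch (plain Python with a simulated walk; the seed, series lengths and thresholds are illustrative assumptions, not anything from this study) checks that the level of a random walk has a far larger sample variance than its first difference, which simply recovers the stationary disturbance term:

```python
import random

def variance(xs):
    """Plain sample variance (divisor n), enough for this illustration."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(42)

# Simulate y_t = y_{t-1} + u_t with Gaussian white-noise increments u_t.
walk = [0.0]
for _ in range(5000):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))

# The first difference recovers the increments u_t, which are I(0).
diffs = [b - a for a, b in zip(walk, walk[1:])]

# The level is I(1): its variance grows with the sample, dwarfing that of u_t,
# while the differenced series has variance close to the disturbance variance 1.
print(variance(walk) > 10 * variance(diffs))
print(abs(variance(diffs) - 1.0) < 0.2)
```

Differencing a second time would add nothing: the walk is integrated of order one, so a single difference already yields a stationary series.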

Whether price changes are efficient can be assessed by testing whether the time series has a unit root. The most popular approach is the Augmented Dickey-Fuller (ADF) test, first designed by Dickey and Fuller (1981), which is based on the assumption that the error term follows an autoregressive (AR) process of known order (Davidson and MacKinnon, 2004).

To test the efficiency of the stock market, the Augmented Dickey-Fuller unit root test applies the tau-statistic to every estimated coefficient to examine its significance (Davidson and MacKinnon, 2004). The tau-statistics are calculated in the same way as Student's t-statistics but do not follow the same distribution (Davidson and MacKinnon, 2004). To test for a random walk, the estimated tau-statistic is compared with the critical values: if the estimated tau-statistic is greater than the critical value, the null hypothesis is not rejected at conventional test sizes (Davidson and MacKinnon, 2004).

The Augmented Dickey-Fuller unit root test is then based on the following regressions and hypotheses:

Δy_t = γy_{t−1} + Σ_{i=2}^{q} β_i Δy_{t−i+1} + ε_t   (Enders, 2003) (1.3)

The formula above is the pure random walk model, which has no constant and no time trend.

Δy_t = a_0 + γy_{t−1} + Σ_{i=2}^{q} β_i Δy_{t−i+1} + ε_t   (Enders, 2003) (1.4)

This model adds a constant to the first equation (Chung, 2006).

Δy_t = a_0 + γy_{t−1} + a_2·t + Σ_{i=2}^{q} β_i Δy_{t−i+1} + ε_t   (Enders, 2003) (1.5)

The last model includes both a constant and a time trend.

Δ: the first difference operator;

y_t: the log of the price index;

a_0: the constant;

ε_t: the error term, which is assumed to be white noise;

γ: the coefficient to be estimated on y_{t−1};

β_i: the coefficients to be estimated on the lagged differences;

t: the trend;

a_2: the estimated coefficient for the trend;

q: the number of lagged terms. (Enders, 2003)

According to the equations above, the hypotheses can be stated as:

H0: γ = 0, the time series is non-stationary (has a unit root); if this hypothesis is accepted, the market can be confirmed as following a random walk;

H1: γ < 0, the time series is stationary (has no unit root); if the null hypothesis is rejected, the market is judged weak form inefficient.

Equation 1.4 examines the null hypothesis of a random walk against a stationary alternative, and equation 1.5 tests the null hypothesis against a trend-stationary alternative (Enders, 2003).

The lag length should be selected appropriately, since the critical values depend on the sample size: with too few lags the regression residuals do not behave like white noise processes and the Augmented Dickey-Fuller unit root test may over-reject the null hypothesis; on the other hand, including too many lags costs degrees of freedom and reduces the power of the test to reject the null hypothesis of a unit root (Enders, 2003).

A useful rule to determine the maximum lag length (l_max) is as follows:

l_max = int[12 · (n/100)^{1/4}]   (Schwert, 1989) (1.6)

With a finite lag length chosen this way, the result of the Augmented Dickey-Fuller unit root test will be accurate and functional (Enders, 2003).
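As a hedged sketch of these mechanics (a simplified Dickey-Fuller regression with no augmentation lags, run on simulated series; the seed, series and helper names are illustrative assumptions, not this study's procedure), the tau-statistic and the lag rule can be computed as follows:

```python
import math
import random

def schwert_max_lag(n):
    """Schwert's (1989) rule of thumb: l_max = int(12 * (n / 100) ** 0.25)."""
    return int(12 * (n / 100.0) ** 0.25)

def dickey_fuller_tau(y):
    """tau-statistic for the no-constant regression dy_t = gamma * y_{t-1} + e_t,
    i.e. the unaugmented special case of equation 1.3 (q = 1)."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    sxx = sum(x * x for x in ylag)
    gamma = sum(x * d for x, d in zip(ylag, dy)) / sxx  # OLS slope, no intercept
    resid = [d - gamma * x for x, d in zip(ylag, dy)]
    s2 = sum(e * e for e in resid) / (len(dy) - 1)      # residual variance
    return gamma / math.sqrt(s2 / sxx)                  # gamma / s.e.(gamma)

random.seed(0)
walk, ar1 = [0.0], [0.0]
for _ in range(2000):
    walk.append(walk[-1] + random.gauss(0, 1))      # unit root: gamma = 0
    ar1.append(0.5 * ar1[-1] + random.gauss(0, 1))  # stationary: gamma = -0.5

print(schwert_max_lag(4674))              # 31, the maximum lag used in this study
print(round(dickey_fuller_tau(walk), 2))  # typically above -2.58: unit root not rejected
print(round(dickey_fuller_tau(ar1), 2))   # far below -2.58: stationary, null rejected
```

Note that `schwert_max_lag(4674)` reproduces the maximum lag of 31 quoted later for this study's sample size; the tau values themselves must be compared with Dickey-Fuller, not Student's t, critical values.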

Serial correlation coefficients

The serial correlation test is one of the most common and intuitive tests for a random walk (Chung, 2006). Under the weakest version of the random walk (RW3), there is no correlation at any leads or lags, so the correlation coefficient (ρ_k) should be near zero (Fama, 1965). The serial correlation test measures the relationship between the value of a variable at time t and its value in previous periods; ρ = 0 denotes complete linear independence, while ρ = ±1 denotes complete linear dependence (Fama, 1965).

The model of the serial correlation coefficient is as follows:

ρ_k = Cov(r_t, r_{t−k}) / Var(r_t)   (Ma, 2004) (2.1)

r_t: the return of a single share or a portfolio at time t;

ρ_k: the autocorrelation coefficient of the time series r_t;

k: the lag of the period;

Cov(r_t, r_{t−k}): the covariance between the return of time period (t−1, t) and the lagged return k periods before;

Var(r_t): the variance of a security's return during the time period (t−1, t);

In a large sample, the autocorrelation coefficient can be estimated as:

ρ̂_k = Σ_{t=k+1}^{n} (y_t − ȳ)(y_{t−k} − ȳ) / Σ_{t=1}^{n} (y_t − ȳ)²   (Chung, 2006) (2.2)

ȳ: the sample mean of the series y;

k: the time lag, k = 1, 2, …;

To test the random walk, the following hypotheses are used:

H0: ρ_k = 0, the serial correlation coefficient is approximately zero; in this case the hypothesis is accepted and the market is confirmed as following a random walk;

H1: ρ_k ≠ 0; in this case serial correlation exists in the observations and the market is weak form inefficient.

In a large sample, if the true autocorrelation equals zero, the mean and variance of the sample autocorrelation are zero and 1/n, respectively. The null hypothesis is tested at the 1%, 5% and 10% significance levels, whose critical values are 2.576, 1.96 and 1.64, respectively (Chung, 2006). A 99% confidence interval is ±2.576/√n, a 95% confidence interval is ±1.96/√n, and a 90% confidence interval is ±1.64/√n; if all the autocorrelation coefficients (ρ̂_k) fall inside the intervals, the hypothesis is accepted and the market is weak form efficient; otherwise, the hypothesis is rejected (Ma, 2004).

Box and Pierce (1970) developed the Q-statistic to test whether all of the autocorrelations equal zero. The formula of the Q-statistic is as follows:

Q = n Σ_{k=1}^{m} ρ̂_k²   (Box and Pierce, 1970) (2.3)

Q: asymptotically distributed as chi-square with m degrees of freedom;

m: the maximum lag length;

n: the number of observations.

For small samples, the LB-statistic was formed by Ljung and Box (1978), and the model is:

Q_LB = n(n + 2) Σ_{k=1}^{m} ρ̂_k² / (n − k)   (Ljung and Box, 1978) (2.4)

Both statistics are designed to test whether the autocorrelations depart from zero.

For the purpose of testing the random walk in the Chinese stock market, it is therefore necessary to test whether the daily returns are independent, using the autocorrelation coefficient and the Ljung-Box statistic.
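A minimal sketch of these two statistics (plain Python on simulated return series; the seed, series and cut-off values are illustrative assumptions, not this study's data) could look like this:

```python
import random

def autocorr(x, k):
    """Sample autocorrelation at lag k, in the spirit of equation (2.2)."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - k] - m) for t in range(k, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def ljung_box(x, max_lag):
    """Ljung-Box (1978) statistic: Q = n(n+2) * sum_k rho_k^2 / (n - k),
    asymptotically chi-square with max_lag degrees of freedom."""
    n = len(x)
    return n * (n + 2) * sum(autocorr(x, k) ** 2 / (n - k)
                             for k in range(1, max_lag + 1))

random.seed(1)
n = 2000
noise = [random.gauss(0, 1) for _ in range(n)]    # uncorrelated "returns"
ar = [0.0]
for _ in range(n):
    ar.append(0.8 * ar[-1] + random.gauss(0, 1))  # strongly autocorrelated series

band = 1.96 / n ** 0.5  # 95% confidence band for a single autocorrelation
print(abs(autocorr(noise, 1)) < 3 * band)  # white noise: rho_1 close to zero
print(autocorr(ar, 1) > 0.5)               # AR series: rho_1 far from zero
print(ljung_box(ar, 10) > 18.307)          # exceeds the 5% chi-square(10) value
```

With roughly 4,674 daily observations, as in this study's sample, the corresponding 95% band would be about ±0.029, which is why even small autocorrelations can be statistically significant.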

Runs tests

The runs test is a method used to test statistical independence (RW1). Unlike the serial correlation test, the runs test does not require the time series to be normally distributed (Chung, 2006). It is a non-parametric statistic applied to detect whether the changes in share prices are independent (Ma, 2004). A run is a sequence of price changes with the same sign, such as ++, −− or 00, which indicate price increases, decreases and no change, respectively (Campbell, Lo and Mackinlay, 1997). No change can be viewed as the mean of the distribution; an increase stands for a positive price change, with the return greater than the average level, while a decrease stands for a below-average return (Worthington and Higgs, 2004). The runs test assumes that the expected number of runs and the observed number of runs should be close; if they depart from each other significantly, the test rejects the hypothesis of a random walk (Campbell, Lo and Mackinlay, 1997).

According to Wallis and Roberts (1956), the formula for the expected number of runs is as follows:

M = [N(N + 1) − Σ_{i=1}^{3} n_i²] / N   (Wallis and Roberts, 1956) (3.1)

M: the expected number of runs;

N: the number of observations (n1 + n2 + n3);

n1: the number of positive price changes;

n2: the number of negative price changes;

n3: the number of prices that do not change.

If the observed number of runs R falls within M ± 1.64 S.E. (90% confidence interval), M ± 1.96 S.E. (95% confidence interval) or M ± 2.576 S.E. (99% confidence interval), the market can be confirmed as following a random walk at the 10%, 5% or 1% significance level, respectively (Chander, Mehta and Sharma, 2008).

According to Wallis and Roberts (1956), the formula for the standard error is:

S.E. = √{ [Σ n_i²(Σ n_i² + N(N + 1)) − 2N Σ n_i³ − N³] / [N²(N − 1)] }   (Wallis and Roberts, 1956) (3.2)

As Fama (1965) mentioned, for a large sample the number of runs is approximately normally distributed. The standard normal Z-statistic expresses the difference between the actual and expected number of runs:

Z = (R ± 0.5 − M) / S.E.   (Wallis and Roberts, 1956) (3.3)

R: the actual number of runs;

±0.5: the correction factor for the continuity adjustment. (Wallis and Roberts, 1956)

If R > M the continuity-adjusted Z is positive, and if R < M it is negative (Wallis and Roberts, 1956). A positive Z indicates negative serial correlation, while a negative Z indicates positive serial correlation; both results violate the random walk theory (Chander, Mehta and Sharma, 2008).

The hypotheses can be set up as follows:

H0: R = M; in this case, the market is viewed as weak form efficient;

H1: R ≠ M; if the result is consistent with this hypothesis, the market departs from a random walk.

If the absolute value of Z is less than the critical value, the null hypothesis is accepted. For significance levels of 10%, 5% and 1%, the critical values are 1.64, 1.96 and 2.576, respectively (Ma, 2004).

If the expected number of runs equals the actual number of runs, the hypothesis is accepted and the market is considered weak form efficient; otherwise, the hypothesis is rejected (Ma, 2004). Accordingly, the runs test will be applied in this dissertation to test the random walk of share returns and to examine whether the movements of share prices in the Chinese stock markets are predictable.
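The runs-test arithmetic can be sketched as follows (plain Python; the sequences and seed are illustrative assumptions, and the mean, standard-error and Z formulas follow equations 3.1 to 3.3 for categorised price changes):

```python
import math
import random

def runs_test_z(signs):
    """Z-statistic of the runs test for a sequence of '+', '-' and '0' marks."""
    N = len(signs)
    R = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)  # actual runs
    counts = [signs.count(c) for c in set(signs)]
    s2 = sum(c * c for c in counts)
    s3 = sum(c ** 3 for c in counts)
    M = (N * (N + 1) - s2) / N                                   # eq. 3.1
    var = (s2 * (s2 + N * (N + 1)) - 2 * N * s3 - N ** 3) / (N ** 2 * (N - 1))
    se = math.sqrt(var)                                          # eq. 3.2
    adj = -0.5 if R > M else 0.5   # continuity correction, eq. 3.3
    return (R + adj - M) / se

random.seed(3)
random_signs = [random.choice('+-0') for _ in range(600)]
alternating = list('+-' * 300)  # the maximum possible number of runs

print(round(runs_test_z(random_signs), 2))  # small in absolute value: random
print(round(runs_test_z(alternating), 2))   # large and positive: too many runs
```

The alternating sequence produces far more runs than expected, giving a large positive Z, i.e. the negative serial correlation case described above.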

Variance ratio tests

The variance ratio test was developed by Lo and MacKinlay (1988). It is based on the property that the increments of a random walk series are linear in the sampling interval; specifically, the variance of the qth difference of a random walk increases linearly with q. In other words, the variance of the qth difference would be q times the variance of the first difference. This can be expressed by the following equation:

x_t = μ + x_{t−1} + ε_t   (Ma, 2004) (4.1)

q: the number of one-period intervals;

μ: the mean one-period return in logarithms;

ε_t: the stochastic error in logarithms.

In the q-period version, the equation is:

x_t = qμ + x_{t−q} + Σ_{i=0}^{q−1} ε_{t−i}   (Ma, 2004) (4.2)

x_t − x_{t−q}: the return over q periods.

The variance ratio is designed to test for a random walk in returns; it is not only sensitive to correlated price changes but is also robust to non-normality and heteroscedasticity of the stochastic disturbance term (Cheung and Coutts, 2000). According to Campbell, Lo and MacKinlay (1997), although Random Walk 1 can be tested by comparing the variance of r_t + r_{t−1} to twice the variance of r_t, it is very difficult to establish linearity under Random Walk 2 and Random Walk 3, since the variances of the increments change over time. Lo and MacKinlay's (1988) variance ratio test provides a method to test for uncorrelated residuals in a series under both homoscedastic and heteroscedastic random walk assumptions. The variance ratio test relies on both the rank of the series and the magnitude of the observations, and is considered a more functional technique for conducting the random walk test when facing uncorrelated increments (Hassan and Chowdhury, 2008).

Generally, under a random walk, the variance of q-period returns should be q times the variance of one-period returns, and the general expression of the variance ratio is as follows:

VR(q) = σ²(q) / σ²(1)   (Chung, 2006) (4.3)

σ²(q): the unbiased estimator of 1/q of the variance of the qth difference of the logged security return;

σ²(1): the unbiased estimator of the variance of the logged return.

According to Lo and MacKinlay (1988), the estimators σ²(1) and σ²(q) can be computed as:

σ²(1) = Σ_{t=1}^{nq} (p_t − p_{t−1} − μ̂)² / (nq − 1)   (4.4)

σ²(q) = Σ_{t=q}^{nq} (p_t − p_{t−q} − qμ̂)² / [q(nq − q + 1)(1 − q/nq)]   (4.5)

where μ̂ = (p_{nq} − p_0) / nq   (4.6)

p_0: the first observation of the time series;

p_{nq}: the last observation of the time series;

p_t: the (logged) price of the security at time t;

n: the number of observations.

And the equation of the variance ratio can be reduced to:

VR(q) = 1 + 2 Σ_{k=1}^{q−1} (1 − k/q) ρ_k   (Ma, 2004) (4.7)

ρ_k: the kth-order autocorrelation coefficient of the returns;
In the case of RW1, when stock returns are uncorrelated, the autocorrelation coefficients at lags one to q − 1 equal zero [ρ_k = 0 for all k], and the variance ratio equals one [VR(q) = 1]. Similarly, under RW2 and RW3 the variance ratio must still equal one (Worthington and Higgs, 2004).

Accordingly, the hypotheses of the variance ratio test can be drawn as follows:

H0: VR(q) = 1, the time series is uncorrelated and the hypothesis is accepted; the stock returns follow a random walk and the stock market is weak form efficient;

H1: VR(q) ≠ 1, the time series is correlated and the hypothesis is rejected; the stock returns are serially correlated, depart from a random walk, and the stock market is weak form inefficient.

The test is conducted at the 1%, 5% and 10% significance levels, with critical values of 2.576, 1.96 and 1.64, respectively. If the absolute value of the test statistic is less than the corresponding critical value, the null hypothesis is accepted at that significance level, the series of stock returns is uncorrelated and the stock market is confirmed as weak form efficient; if not, the null hypothesis is rejected and the stock market is weak form inefficient (Lo and MacKinlay, 1988).

The first test statistic for the variance ratio test is designed under the assumption of homoscedasticity; the asymptotic variance of the variance ratio, φ(q), is defined as follows:

φ(q) = 2(2q − 1)(q − 1) / (3q · nq)   (Lo and MacKinlay, 1988) (4.8)

nq: the number of observations.

The standard normal test statistic Z(q) for the variance ratio under the assumption of homoscedasticity is:

Z(q) = (VR(q) − 1) / √φ(q) ~ N(0, 1)   (Lo and MacKinlay, 1988) (4.9)

A rejection of the random walk under the assumption of homoscedasticity may be due either to heteroscedasticity or to the existence of autocorrelation (Worthington and Higgs, 2004). Therefore, in order to obtain an exact variance ratio result, the test should also be conducted under the assumption of heteroscedasticity.

Since volatilities change continuously, the random walk may be rejected because of heteroscedasticity. In order to allow for general forms of heteroscedasticity, Lo and MacKinlay (1988) recommended a heteroscedasticity-consistent method and derived a test statistic Z*(q) for the variance ratio test under the assumption of heteroscedasticity. The asymptotic variance estimator under this assumption, θ(q), can be defined as:

θ(q) = Σ_{k=1}^{q−1} [2(q − k)/q]² δ(k)   (Lo and MacKinlay, 1988) (4.10)

δ(k) = [Σ_{t=k+1}^{nq} (p_t − p_{t−1} − μ̂)² (p_{t−k} − p_{t−k−1} − μ̂)²] / [Σ_{t=1}^{nq} (p_t − p_{t−1} − μ̂)²]²   (Lo and MacKinlay, 1988) (4.11)

nq: the number of observations;

δ(k): the heteroscedasticity-consistent estimator;

p_t: the (logged) stock price at time t;

μ̂: the average return.

The standard normal test statistic Z*(q) for the variance ratio under the assumption of heteroscedasticity is:

Z*(q) = (VR(q) − 1) / √θ(q) ~ N(0, 1)   (Lo and MacKinlay, 1988) (4.12)

Under the null hypothesis, VR(q) should equal one. If the heteroscedastic random walk is rejected, this indicates serial correlation (Worthington and Higgs, 2004). If the value of VR(q) is greater than one, it implies positive serial correlation and predictable stock returns; if VR(q) is less than one, it indicates negative serial correlation (Darrat and Zhong, 2000).
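The core computation, the ratio VR(q) of equation 4.3 and the homoscedastic Z(q) of equations 4.8 and 4.9, can be sketched as follows (plain Python with overlapping q-period sums on simulated returns; the seed, series and the simplified estimator details are illustrative assumptions rather than a full Lo and MacKinlay implementation):

```python
import math
import random

def variance_ratio(returns, q):
    """VR(q) = Var(q-period return) / (q * Var(1-period return)),
    using overlapping q-period sums (a simplified version of eq. 4.3-4.5)."""
    n = len(returns)
    mu = sum(returns) / n
    var1 = sum((r - mu) ** 2 for r in returns) / (n - 1)
    qsums = [sum(returns[t:t + q]) for t in range(n - q + 1)]
    varq = sum((s - q * mu) ** 2 for s in qsums) / (len(qsums) - 1)
    return varq / (q * var1)

def vr_z_homoskedastic(returns, q):
    """Z(q) = (VR(q) - 1) / sqrt(phi(q)), phi(q) = 2(2q-1)(q-1) / (3q*n)."""
    n = len(returns)
    phi = 2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * n)
    return (variance_ratio(returns, q) - 1.0) / math.sqrt(phi)

random.seed(7)
iid = [random.gauss(0, 1) for _ in range(4000)]   # increments of a random walk
ar = [0.0]
for _ in range(4000):
    ar.append(0.5 * ar[-1] + random.gauss(0, 1))  # positively autocorrelated

print(0.8 < variance_ratio(iid, 5) < 1.2)          # close to 1 under the null
print(variance_ratio(ar[1:], 5) > 1.5)             # well above 1: predictable
print(abs(vr_z_homoskedastic(ar[1:], 5)) > 2.576)  # rejected at the 1% level
```

The heteroscedasticity-consistent Z*(q) of equations 4.10 to 4.12 replaces phi(q) with the estimator theta(q) built from delta(k); the VR(q) numerator itself is unchanged.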

4. Data and analysis

The primary data of this dissertation are drawn from a database covering the two stock markets and four official exchanges in China, comprising six price indices over the period from 1st July 1993 to 31st May 2011, with 4,674 observations. The tests performed in this study are conducted on daily price-weighted index series of all the shares on the two stock markets, specifically the Shanghai A Share Price Index, Shanghai B Share Price Index, Shanghai SE Composite Price Index, Shenzhen A Share Price Index, Shenzhen B Share Price Index and Shenzhen SE Composite Price Index.

4.1 Data and analysis for the unit root test

Given the importance of the unit root test for the random walk, it is necessary to use it to test the random walk of the Chinese stock market. As the most popular unit root method, the Augmented Dickey-Fuller unit root test is used here to examine the null hypothesis of a unit root (Chung, 2006). To obtain more accurate results, the lag length should be of finite order; according to Schwert's (1989) rule (equation 1.6), the maximum lag length is 31 for all of the indices (the Shanghai A and B Share Price Indices, the Shanghai Composite Index, the Shenzhen A and B Share Price Indices and the Shenzhen Composite Index). Tables 1 and 2 present the results of the Augmented Dickey-Fuller unit root test in level and in first difference, each under three conditions, no constant or time trend, a constant but no time trend, and both a constant and a time trend, corresponding to equations 1.3, 1.4 and 1.5, respectively.

Table 1 Result of the ADF test in level

The table reports the unit root results for the Shanghai and Shenzhen price indices using the Augmented Dickey-Fuller unit root test in level; the sample period runs from 1st January 1993 to 31st May 2011, with 4,673 observations for each index. The optimal lag length for the Augmented Dickey-Fuller test is selected with Schwert's rule (equation 1.6); the maximum lag is 31. The three conditions are no constant or time trend, a constant but no time trend, and both a constant and a time trend, based on equations 1.3, 1.4 and 1.5.

ADF test statistics (level): none | constant only | constant and time trend

[Rows: Shanghai A Share Price Index; Shanghai B Share Price Index; Shanghai Composite Index; Shenzhen A Share Price Index; Shenzhen B Share Price Index; Shenzhen Composite Index; followed by the 1%, 5% and 10% critical values. The individual test statistics were not preserved; one surviving entry is 0.206099.]
***, ** and * indicate the statistical significance at 1%, 5% and 10% level, respectively.

According to the table above, under the three conditions (no constant or time trend, a constant but no time trend, and both a constant and a time trend), the test critical values at the 1% significance level are -2.545657, -3.431568 and -3.960003, at the 5% level -1.940892, -2.861963 and -3.410767, and at the 10% level -1.616654, -2.567038 and -3.127175, respectively. The Augmented Dickey-Fuller test statistics show that, in the level test, all of the statistics are greater than the corresponding critical values at all three significance levels, which indicates that all of the indices have unit roots and that the null hypothesis should not be rejected at conventional test sizes. Accordingly, the Chinese stock indices are confirmed as having unit roots and following a random walk under the Augmented Dickey-Fuller unit root test in level.

Following the results of the ADF test in level, the Augmented Dickey-Fuller unit root test in first difference is applied to continue testing for a unit root in the Chinese stock markets. The results are shown in table 2.

Table 2 Result of the ADF test in first difference

The table reports the unit root results for the Shanghai and Shenzhen price indices using the Augmented Dickey-Fuller unit root test in first difference; the sample period runs from 1st January 1993 to 31st May 2011, with 4,673 observations for each index. The optimal lag length for the Augmented Dickey-Fuller test is selected with Schwert's rule (equation 1.6); the maximum lag is 31. The three conditions are no constant or time trend, a constant but no time trend, and both a constant and a time trend, based on equations 1.3, 1.4 and 1.5.

ADF test statistics (first difference): none | constant only | constant and time trend

[Rows: Shanghai A Share Price Index; Shanghai B Share Price Index; Shanghai Composite Index; Shenzhen A Share Price Index; Shenzhen B Share Price Index; Shenzhen Composite Index; followed by the 1%, 5% and 10% critical values. Most individual statistics were not preserved; surviving entries include -30.85919*** and -30.85595*** for the Shanghai A Share Price Index, and -36.24753***.]

***, ** and * indicate the statistical significance at 1%, 5% and 10% level, respectively.

Based on the outcomes of the Augmented Dickey-Fuller unit root test in first difference, the test critical values are the same as in the level test under the same conditions: -2.545657, -3.431568 and -3.960003 at the 1% significance level, -1.940892, -2.861963 and -3.410767 at the 5% level, and -1.616654, -2.567038 and -3.127175 at the 10% level. All of the Augmented Dickey-Fuller test statistics in first difference are significantly less than the critical values at all three significance levels; therefore, the differenced time series are stationary and none of the Chinese stock indices has a unit root in first differences, which indicates that under these conditions the null hypothesis should be rejected at conventional test sizes. As a result, the Augmented Dickey-Fuller test in first difference is taken to indicate that the Chinese stock indices depart from a random walk, rejecting the hypothesis that the Chinese stock market is weak form efficient.

Consequently, under the three conditions of the Augmented Dickey-Fuller test, the level test shows that the Chinese stock markets have unit roots and follow a random walk; with regard to the first difference test, however, all of the results show that the differenced Chinese stock indices are stationary and have no unit roots. Synthesising these results, despite evidence of a random walk in the six indices of the Chinese stock markets, there is also evidence of stationarity. In other words, using around 18 years of daily data and 4,674 observations, the Augmented Dickey-Fuller unit root test does not establish that stock returns in the Chinese stock markets are unpredictable; there is not enough evidence to confirm the weak form efficiency of the Chinese stock market, and further tests are required.

4.2 Data and analysis of the serial correlation

The serial correlation coefficient is normally used to test whether a time series is autocorrelated; in order to test the random walk of the Chinese stock market, the serial correlation test is applied here to examine whether the daily returns of the six main indices are correlated. Based on the methodology above, the null hypothesis is that ρ_k does not differ significantly from zero, in which case the stock markets are viewed as following a random walk; the alternative hypothesis is ρ_k ≠ 0, which indicates the existence of autocorrelation in the time series and a departure from the random walk (Fama, 1965). The test is considered at three significance levels, 1%, 5% and 10%, whose critical values are 2.576, 1.96 and 1.64, respectively. If the absolute values of the test statistics are greater than the critical values, or the p-values of the Q-statistic are smaller than 0.01, 0.05 or 0.10, the random walk hypothesis will be rejected at the 1%, 5% or 10% significance level. The results of the serial correlation test for the random walk in the Shanghai stock market are presented in table 3, and the results for the Shenzhen stock market are shown in table 4.

Table 3 Results of the serial correlation test for Shanghai Stock Market

The table expresses the results of the autocorrelation coefficient and the Ljung-Box statistics for the daily returns of the three indices in the Shanghai stock market during the period from 1st July 1993 to 31st May 2011. The ρ value is the autocorrelation coefficient calculated with equation (2.2); the Q-value stands for the Ljung-Box statistic obtained from equation (2.4). The total number of lags used here is 20.

[Table layout lost in extraction. Columns: Shanghai A Share Index, Shanghai B Share Index and Shanghai Composite Index; for each index the table listed the ρ value and the Ljung-Box Q-value (with p-value) at lags 1 to 20. Of the surviving entries, almost every Q-statistic beyond lag 2 is significant at the 1% level (p = 0.000).]

***, ** and * indicate significance at the 1%, 5% and 10% levels, respectively.

Considering the autocorrelation coefficients in table 3, it can be seen that in the first two lags of the Shanghai A Index and Shanghai Composite Index, and at some other lags in all three indices, the autocorrelation coefficients are nearly zero. The highest value occurs at the first lag of the Shanghai B Index, and many other values depart from zero: lags 3, 4, 12 and 15 of the Shanghai A Index, lags 6, 7, 12 and 14 of the Shanghai B Index and lags 3, 4, 12, 15 and 16 of the Shanghai Composite Index are all non-zero at the 1% significance level. Others are non-zero at the 5% or 10% significance level, for instance lags 5, 10, 11, 13, 14 and 20 of the Shanghai A Index, lags 3, 4, 8, 9, 13 and 18 of the Shanghai B Index and lags 5, 8, 10, 11, 13, 14 and 20 of the Shanghai Composite Index.

With regard to the Ljung-Box statistics, the near-zero autocorrelation coefficients in the first two lags of the Shanghai A Index and Shanghai Composite Index are supported by the Ljung-Box statistics; however, all of the other near-zero autocorrelation coefficients are rejected by the Ljung-Box test at the 1% significance level.

Consequently, independence is confirmed only at the first two lags of the Shanghai A Index and Shanghai Composite Index; all the other results reject the null hypothesis of independence.

Table 4 Results of the serial correlation test for Shenzhen Stock Market

The table expresses the results of the autocorrelation coefficient and the Ljung-Box statistics for the daily returns of the three indices in the Shenzhen stock market during the period from 1st July 1993 to 31st May 2011. The ρ value is the autocorrelation coefficient calculated with equation (2.2); the Q-value stands for the Ljung-Box statistic obtained from equation (2.4). The total number of lags used here is 20.

[Table layout lost in extraction. Columns: Shenzhen A Share Index, Shenzhen B Share Index and Shenzhen Composite Index; for each index the table listed the ρ value and the Ljung-Box Q-value (with p-value) at lags 1 to 20. All of the surviving Q-statistics are significant at the 1% level (p = 0.000).]

***, ** and * indicate significance at the 1%, 5% and 10% levels, respectively.

According to the autocorrelation coefficients and the Ljung-Box statistics shown in table 4, it can be seen clearly that although some autocorrelation coefficients are near zero, all of the Ljung-Box statistics reject the independence hypothesis at the 1% level for all three indices in the Shenzhen stock market. As a consequence, the daily returns of the Shenzhen A Index, Shenzhen B Index and Shenzhen Composite Index do not follow a random walk, and the Shenzhen stock market is not weak form efficient.

4.3 Data and analysis for run test

The nonparametric run test is an approach suited to data that are not normally distributed. Since stock returns are not normally distributed, it is appropriate to apply the nonparametric run test in detecting the random walk of the Chinese stock market (Ma, 2004). As the theory of the run test shows, a run is a successive sequence of the same sign: positive, negative or zero (Campbell, Lo and MacKinlay, 1997). A random walk time series should have the property that the expected number of runs is similar to the actual number of runs; if the difference between the two is too big, the time series is inconsistent with randomness (Campbell, Lo and MacKinlay, 1997). The estimated Z-value follows the standard normal distribution N(0, 1); at the 1%, 5% and 10% significance levels the critical values of Z are 2.576, 1.96 and 1.645. The null hypothesis is that the Chinese stock markets follow a random walk: when the absolute value of Z is less than the critical value, the null hypothesis should be accepted at the corresponding significance level (Campbell, Lo and MacKinlay, 1997); otherwise the random walk hypothesis is rejected and the market is classed as inefficient. The results of the run test for the six indices of the two exchanges are presented in table 5. In order to trace the trend of efficiency of the Chinese stock markets, this study divides the whole period into three sub-periods: from 1st July 1993 to 9th June 1999, from 10th June 1999 to 22nd February 2006, and from 23rd February 2006 to 31st May 2011.
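The run-test computation described above can be sketched as follows (a simplified illustration with a hypothetical function name, dichotomising the series around its mean):

```python
import numpy as np

def runs_z(x):
    """Z statistic of the runs test: dichotomise the series around its
    mean, count the runs R, and standardise R against its expected value
    under randomness."""
    x = np.asarray(x, dtype=float)
    signs = x >= x.mean()                            # True for cases >= mean
    n1 = int(signs.sum())                            # cases >= mean
    n2 = len(x) - n1                                 # cases < mean
    n = n1 + n2
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))  # sign changes + 1
    exp_runs = 2.0 * n1 * n2 / n + 1.0               # E[R] under randomness
    var_runs = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - exp_runs) / np.sqrt(var_runs)
```

If |Z| exceeds the critical value (1.645, 1.96 or 2.576), the series has too many or too few runs to be consistent with a random walk at that significance level.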

Table 5: Results of the run test

This table expresses the results of the run test of the random walk of the Chinese stock market over the full period and the three sub-periods. The test is conducted on the daily returns of the six main indices in China: Shanghai A Share Index, Shanghai B Share Index, Shanghai Composite Index, Shenzhen A Share Index, Shenzhen B Share Index and Shenzhen Composite Index. The full period includes 4674 observations, and the three sub-periods include 1550, 1750 and 1374 observations respectively. 'Total cases' denotes the number of observations, 'cases ≥ mean' the number of observations at or above the mean, and 'cases < mean' the number below it.

[Table layout lost in extraction. For the full period (1st July 1993 to 31st May 2011) and each of the three sub-periods, the table reported the total cases, the number of runs and the estimated Z statistic for the six indices (Shanghai A Share, Shanghai B Share, Shanghai Composite, Shenzhen A Share, Shenzhen B Share, Shenzhen Composite); the numerical values did not survive extraction and are summarised in the discussion below.]

***, ** and * indicate significance at the 1%, 5% and 10% levels, respectively.

As the table shows, for the full period (from 1st July 1993 to 31st May 2011) the Z values for all six indices stand against the random walk hypothesis. The estimated Z values reject the null hypothesis at the 1% significance level for the Shanghai B Share Index, the Shenzhen A and B Share Indices and the Shenzhen Composite Index, at the 5% significance level for the Shanghai Composite Index, and at the 10% significance level for the Shanghai A Share Index. The evidence indicates that over the nearly 18-year period the Chinese stock markets are not weak form efficient, although the Shanghai A Share Index performed better than the others.

For the first sub-period (from 1st July 1993 to 9th June 1999), there are 1550 observations; the results for the Shanghai B Index, Shenzhen B Index and Shenzhen Composite Index reject the null hypothesis at the 5% significance level, as does the Shenzhen A Index. In contrast with the full-period results, the Shanghai A Index and Shanghai Composite Index appear to follow a random walk, which suggests that in this period the daily returns of the Shanghai A Share and Shanghai Composite indices are weak form efficient.

Regarding the second sub-period (between 10th June 1999 and 22nd February 2006), 1750 observations are included. The outcomes show that, except for the Shanghai B Index, all the indices reject the random walk hypothesis, at the 1% significance level for the Shenzhen A Index and Shenzhen Composite Index and at the 5% level for the Shanghai A Index, Shanghai Composite Index and Shenzhen B Index. The evidence confirms that in this period only the daily returns of the Shanghai B Index follow a random walk.

As to the last interval (from 23rd February 2006 to 31st May 2011), the number of observations is 1374. The estimated Z values for the Shanghai A Index and Shanghai Composite Index are insignificant, while that for the Shanghai B Share Index is significant at the 5% level. All the indices on the Shenzhen stock exchange reject the random walk hypothesis strongly (at the 1% significance level). These results show that in recent years the daily returns of the Shanghai A Share Index and Shanghai Composite Index follow a random walk, and these two series are weak form efficient.

Consequently, the Shanghai A Share Index and Shanghai Composite Index are tending towards weak form efficiency, while the Shenzhen stock exchange remains far from a random walk.

4.4 Data and result for variance ratio test

As Lo and MacKinlay (1988) mentioned, the variance ratio test is a powerful and reliable method for testing a random walk, so it is appropriate to apply it in the efficiency test of the Chinese stock market. The variance ratio test rests on the property that in a random walk series the variance of increments is linear in the sampling interval: under the random walk hypothesis, the variance of the q-day return should equal q times the variance of the one-day return (Lo and MacKinlay, 1988). According to the theory of the variance ratio test, VR(q) represents the variance ratio; if it equals one, the null hypothesis of a random walk should be accepted (Worthington and Higgs, 2004). If the absolute value of the test statistic is greater than 1.645, 1.96 or 2.576, the null hypothesis is rejected at the corresponding significance level (Lo and MacKinlay, 1988). The results [VR(q), Z(q) and Z*(q)] of the variance ratio test for the six indices (Shanghai Composite Index, Shanghai A Share Index, Shanghai B Share Index, Shenzhen Composite Index, Shenzhen A Share Index, Shenzhen B Share Index) are presented in table 6; the variance lags q are selected as 2, 4, 6, 8, 12 and 16 days.
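The core of the estimator can be sketched as follows. This is a simplified version of the Lo-MacKinlay statistic (hypothetical function names, overlapping q-period sums, no small-sample bias corrections), intended only to illustrate the logic.

```python
import numpy as np

def variance_ratio(r, q):
    """VR(q): variance of overlapping q-period returns divided by
    q times the one-period variance. Near 1 under a random walk."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / n                  # one-period variance
    rq = np.array([r[i:i + q].sum() for i in range(n - q + 1)])
    varq = np.sum((rq - q * mu) ** 2) / (n * q)       # scaled q-period variance
    return float(varq / var1)

def z_homoscedastic(vr, n, q):
    """Asymptotic standard normal statistic Z(q) for q > 1 under the
    homoscedastic random walk null."""
    phi = 2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * n)
    return (vr - 1.0) / np.sqrt(phi)
```

A heteroscedasticity-robust Z*(q) replaces phi with a variance estimate built from the data, which is why Z(q) and Z*(q) can disagree, as in the results below.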

Table 6: Results of the Variance Ratio Test for Chinese Stock Market

The table represents the results of the variance ratio test of the Chinese stock market, including the statistics of the six indices; the data are daily returns in the two Chinese stock markets. VR(q) is the estimated variance ratio obtained from equation (4.7); Z(q) and Z*(q) are the asymptotic normal test statistics under the assumptions of homoscedasticity and heteroscedasticity, calculated with equations (4.9) and (4.12) respectively. The number of observations in each index is 4673, and the variance lags are 2, 4, 6, 8, 12 and 16 days.

[Table layout lost in extraction. For each of the six indices (Shanghai Composite, Shanghai A Share, Shanghai B Share, Shenzhen Composite, Shenzhen A Share, Shenzhen B Share) the table reported VR(q), Z(q) and Z*(q) at variance lags q = 2, 4, 6, 8, 12 and 16; the numerical values did not survive extraction and are discussed in the text below.]

***, ** and * indicate significance at the 1%, 5% and 10% levels, respectively.

According to the statistics above, the results of the variance ratio test of the random walk for the Chinese stock markets can now be interpreted. For the Shanghai Composite Index, the values of the estimated variance ratio [VR(q)] are close to one at all of the variance lags, which by itself would support the null hypothesis. The standard normal test statistic under the assumption of homoscedasticity [Z(q)] at the 16-day lag, however, is 2.113943, which is greater than the critical value at the 5% significance level, so the null hypothesis is rejected at this level; all of the values of the statistic under the assumption of heteroscedasticity [Z*(q)] support accepting the null hypothesis. The random walk of the daily returns of the Shanghai Composite Index is thus rejected at the 16-day variance lag under the assumption of homoscedasticity; consequently, the evidence shows that the daily returns of the Shanghai Composite Index depart from a random walk.

Regarding the daily returns of the Shanghai A Share Index, the results show that the heteroscedasticity-consistent statistic [Z*(q)] and the estimated variance ratio [VR(q)] do not reject the null hypothesis; however, under the assumption of homoscedasticity, the null hypothesis is rejected at the 5% significance level at interval 16. The random walk hypothesis for the Shanghai A Share Index is rejected correspondingly.

With regard to the daily returns of the Shanghai B Share Index, the estimated variance ratio [VR(q)] does not reject the null hypothesis in any interval. The values of the standard normal test statistic under the assumption of homoscedasticity, Z(q), reject the null hypothesis at the 1% significance level in all intervals, and the values of Z*(q) reject the null hypothesis of a heteroscedastic random walk at the 5% significance level in intervals 2, 8 and 12. As a result, the Shanghai B Share Index departs from the random walk hypothesis.

The results for the Shenzhen Composite Index show that the values of the estimated variance ratio [VR(q)] and the values of Z*(q) support the null hypotheses of a random walk and a heteroscedastic random walk; however, under the assumption of homoscedasticity, the Z(q) values reject the random walk hypothesis at the 1% significance level in all intervals. Therefore, the Shenzhen Composite Index does not follow a random walk either.

The findings for the Shenzhen A Share Index and Shenzhen B Share Index are similar to those for the Shenzhen Composite Index: although the estimated variance ratio [VR(q)] and the heteroscedasticity-consistent values [Z*(q)] support the random walk hypothesis in each interval, the homoscedastic random walk hypothesis is rejected at the 1% significance level in all of the intervals examined. The daily returns of the Shenzhen A Share Index and Shenzhen B Share Index are therefore confirmed to depart from a random walk.

All in all, the estimated values of Z(q) and Z*(q) provide strong evidence against the random walk hypothesis for the Shanghai B Share market, which indicates that the rejection there may be due to autocorrelation. The estimated values of Z(q) strongly reject the null hypothesis of a homoscedastic random walk for the Shenzhen Composite, Shenzhen A Share and Shenzhen B Share Indices, and also reject it at interval 16 for the Shanghai A Share and Shanghai Composite Indices; since the heteroscedasticity-robust statistics do not reject, these rejections may be due to heteroscedasticity. Overall, the evidence suggests that the Shanghai A Share Index and Shanghai Composite Index appear more efficient than the others, while the Shenzhen A and B Share Indices and the Shenzhen Composite Index are far from a random walk.

On 29 March 1900 a Ph.D. thesis by Louis Bachelier entitled 'Theory of Speculation' was accepted by the Faculty of Sciences of the Academy of Paris, which eventually laid the foundation for the random walk hypothesis of market efficiency (Dimson and Mussavian).

References

Campbell, J. Y. and Hamao, Y. (1989) 'Predictable stock returns in the United States and Japan: a study of long-term capital market integration', NBER Working Paper 3191, National Bureau of Economic Research.

Cowles, A. (1933) 'Can stock market forecasters forecast?', Econometrica, 1(3), pp. 309-324.

Niederhoffer, V. and Osborne, M. F. M. (1966) 'Market making and reversal on the stock exchange', Journal of the American Statistical Association, 61(316), pp. 897-916.

Osborne, M. F. M. (1959) 'Brownian motion in the stock market', Operations Research, 7, pp. 145-173.

Osborne, M. F. M. (1960) 'Reply to "Comments on Brownian motion in the stock market"', Operations Research, 8, pp. 806-810.


A review of efficient market hypothesis—from the point of view of current financial crisis

1 Introduction

Since Fama (1970) published his paper 'Efficient capital markets: a review of theory and empirical work', which summarised the basic content of the Efficient Market Hypothesis (henceforth EMH) and the tests based on it, economists have never stopped debating it. According to Fama (1969), the EMH is an interpretation of how stock prices relate to market information: it states that security prices already incorporate and reflect all relevant information. Currently the whole world faces a massive financial crisis, and the EMH and the theories built on it have faced opprobrium and questioning.

This paper provides an overview of the EMH and discusses its strengths and limitations from the point of view of the current financial crisis. There are three parts. In the first part, I summarise the EMH, including the definition and the three forms of efficient markets. In the second part, I evaluate the strengths and limitations of the EMH from the point of view of the current financial crisis. In the third part, I give my own conclusion about the EMH.

2 Overview of EMH

2.1 Definition

According to Fama (1969) and Jensen (1978), the EMH can be described both verbally and with a mathematical formulation, as follows.

2.1.1 Descriptive Definition

As Fama (1969) has stated, the Efficient Market Hypothesis is an interpretation of how stock prices relate to market information. The EMH means that security prices already incorporate and reflect all relevant information, so it is impossible to beat the market to obtain extra profit. As Malkiel (2003) described, 'Markets do not allow investors to earn above-average returns without accepting above-average risks'.

2.1.2 Formulated Definition

Jensen (1978) has stated the formalisation and model concepts of market efficiency. The joint distribution of future prices assessed by the market on the basis of the available information should be consistent with the true joint distribution of future prices; the specific formulation is as in (1.1):

f_m(p_{1,t+1}, ..., p_{n,t+1} | Φ_t) = f(p_{1,t+1}, ..., p_{n,t+1} | Φ_t)   (1.1)

Here f(· | Φ_t) indicates the joint density function of the correct future prices, while f_m(· | Φ_t) indicates the joint density function of future security prices assessed by the market on the basis of all the information Φ_t available at time t. The formula can then be rewritten in terms of returns, as in (1.2):

E_m(r_{j,t+1} | Φ_t) = E(r_{j,t+1} | Φ_t)   (1.2)

In this formula E(r_{j,t+1} | Φ_t) indicates the expectation of the true yield of stock j at time t, and E_m(r_{j,t+1} | Φ_t) indicates the expectation estimated by the market, which is the equilibrium yield. This means that the expected return obtained from an economic activity is equal to its marginal cost; in particular, when the cost of information collection is zero, the expected excess return should be 0.

2.2 Main points of EMH

2.2.1 Main points from microeconomic perspective

From the microeconomic perspective, the EMH rests on Adam Smith's assumption of the 'economic man': people are rational and self-interested. Similarly, the people who trade stocks are this kind of 'economic man'. In the financial market, every stock represents a company which is under the strict surveillance of rational and self-interested people. They conduct fundamental analysis, estimate the company's future profitability to evaluate its stock price, discount future values to present value, and cautiously choose among risk and return trade-offs.

The EMH reflects the balance between demand and supply in markets. For every stock, the number of people who want to sell equals the number who want to buy, that is, the number who think the stock is overvalued equals the number who think it is undervalued. If somebody finds an imbalance between these two groups, in other words a possibility of arbitrage, rational traders will immediately buy or sell the stock to restore the balance. This is the basic theory of supply and demand in economics: on the one hand, any fluctuation in the price of a commodity is a result of changes in supply and demand; on the other hand, prices affect the relationship between supply and demand.

2.2.2 The preconditions of EMH

As Fama (1970) has stated, the efficient market is based on three preconditions: firstly, the cost of information is zero; secondly, the market is a perfectly competitive market; thirdly, all investors are rational.

Firstly, by this definition the real market is inefficient. Grossman and Stiglitz (1980) proved that costless information is a necessary condition for an efficient market. This condition raises questions about the market structure: it is unrealistic for transaction costs and taxes to be zero. On the contrary, large transaction costs may hinder the possibility of arbitrage in the real world, which may cause stock prices not to rise on good information, so that the information is not reflected in the price. The second precondition of the EMH is a perfectly competitive market, which implies that each investor accepts the price as given. However, where information costs exist there is bargaining behaviour in the market, and the participants are not price-takers. For the third precondition, investors are assumed to be rational and able to evaluate securities rationally. Shleifer (2000) describes three levels of rationality among market participants: at the first level, investors are perfectly rational; at the second level, even if some investors are irrational, their trades are generated randomly and cancel out; at the third level, if irrational investors' behaviour is not random, arbitrageurs can eliminate the influence of noise traders on prices. Shleifer (2000) has argued that 'With a finite risk-bearing capacity of arbitrageurs as a group, their aggregate ability to bring prices of broad groups of securities into line is limited'. That suggests risk-free arbitrage opportunities may exist, but they cannot be taken as direct evidence of market inefficiency.

2.3 Three Forms of Efficient Market and Their Tests

Based on the different types of investment approaches, Fama (1970) divided the efficient market into three forms: weak form efficiency, semi-strong form efficiency and strong form efficiency.

2.3.1 Weak form efficient market and its test

As described in Fama's (1970) paper, a weak form efficient market is one in which share prices fully reflect all historical information. In such a market investors cannot devise a strategy to obtain extra profits through technical analysis: it is useless to analyse historical information to predict future prices, because the current market price already contains all the information that technical analysis could acquire.

Tests of the weak form include two methods. The first is the random walk model, which focuses on whether the fluctuation of the stock price is random and was first published by Osborne (1959). The second is the filter approach, which holds that in an efficient market, if no new information is released, the price should fluctuate randomly between the resistance lines.

2.3.2 Semi-strong form efficient market and its test

As Fama (1970) has stated, the semi-strong form efficient market refers to a market in which current stock prices reflect not only historical price information but all available public information related to the listed companies. If the market is efficient in this sense, it is not possible to acquire abnormal profit through analysis of a company's balance sheet, income statement, changes in dividends, stock split announcements or any other public information.

Tests of the semi-strong form mainly focus on determining the speed with which share prices adjust to new information. Scholars have conducted a variety of tests; the most famous is the 'event study', first published by Ball and Brown (1968). An event study measures the cumulative performance of a stock over a specific window before and after the information is released.
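The cumulative-abnormal-return calculation at the heart of an event study can be sketched as follows. This is a toy market-adjusted version with a hypothetical function name; Ball and Brown's actual research design is more elaborate.

```python
import numpy as np

def car(stock_r, market_r, event_idx, window=3):
    """Cumulative abnormal return over [event - window, event + window],
    using market-adjusted abnormal returns AR_t = r_t - r_{m,t}."""
    s = np.asarray(stock_r, dtype=float)
    m = np.asarray(market_r, dtype=float)
    lo, hi = event_idx - window, event_idx + window + 1
    ar = s[lo:hi] - m[lo:hi]            # abnormal return on each day in the window
    return float(ar.sum())              # cumulative abnormal return (CAR)
```

In an efficient market the CAR should jump at the announcement and show no drift afterwards; persistent post-announcement drift is evidence against semi-strong efficiency.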

The semi-strong form has attracted many empirical studies, some of which indicate that the US stock market is semi-strong efficient: Fama (1969) investigated the stocks of 115 companies and concluded that the US stock market is semi-strong efficient.

2.3.3 Strong form efficient market and its tests

As Fama (1970) has stated, the strong form efficient market is one in which share prices reflect all information, including inside information. In a strong form efficient market nobody can obtain abnormal profit, not even insiders.

Tests of the strong form focus on company insiders, stock exchange brokers, securities analysts and mutual fund performance, in order to verify whether these groups can earn extra returns. Some studies have suggested that several markets are close to the strong form: studies of professional investment managers showed that, after deducting trading expenses, randomly selected securities and unmanaged indices achieved nearly the same returns as carefully analysed portfolios. Maloney and Mulherin (2003) analysed the Challenger crash and declared that it supports strong form efficiency, while other scholars argue that a strong form efficient market will never exist in reality.

3 Discussions of EMH from the current Financial Crisis

The following chapter analyses the EMH from the point of view of the current financial crisis. The first section reviews the evolution of the current financial crisis; the second critically analyses the challenges the EMH faces, particularly regarding information dissemination, information quality and the role of self-regulation in the stock market; the third provides suggestions for avoiding financial crises.

3.1 The Evolution of Current Financial Crisis

The current financial crisis has its roots in the credit crisis, a financial storm accompanied by the bankruptcy of subprime mortgage lenders, the closure of investment funds and a turbulent stock market in the United States. As Tylor (2009) has described, the evolution of the financial crisis was as follows. First, US commercial banks issued a large number of high-risk real estate mortgage loans (i.e. subprime mortgages), then sold these mortgages to Fannie Mae and Freddie Mac in order to transfer the potential risk and recover the funds as soon as possible. Fannie Mae and Freddie Mac created subordinated bonds through securitisation and sold them to investment banks such as Merrill Lynch. Investment banks seeking high returns produced financial innovations by complex means, turning subprime lending rated below investment grade (BBB/Baa) into so-called 'structured products' to attract risk-seeking investors. These derivatives were finally sold to financial institutions and investors all over the world through the banks' marketing networks. When the original debtors could not repay their mortgages on time, the financial crisis broke out and spread rapidly around the world along this chain, which is also the risk transfer line. This process is illustrated in graph 1.

Graph 1 Evolution of Financial Crisis

3.2 What does EMH face in the financial crisis?

The financial crisis has proved that the preconditions of the EMH are too far from reality. In graph 1 there is a stream of risk, information and cash transferred between people and the market. If the original debtors are also investors, an information circulation mechanism is established. Ding (2005) interprets this process as follows: investors not only analyse the information in the market but also think about other investors' likely responses to changes, and these changes in the market then become the basis for new thinking. A self-feedback loop is established between investors and the market: investors are both participants and observers. That is, investors affect market changes as well as being affected by them, so the information that market participants obtain includes information influenced by their own behaviour. It is therefore impossible to understand the market completely and objectively. As Baker (2006) has suggested, investors' behaviour needs to be considered as an important factor missing from the perfect preconditions of theories like the EMH.

From the point of view of information dissemination, false and incomplete information commonly exists in the market and cannot always be detected, which may be another source of irrational behaviour. For instance, as Duncan (2009) reported, on 15th September 2008 Lehman Brothers collapsed with about $60 billion in toxic bad debts, and assets of $639 billion against debts of $613 billion, making it the largest investment bank collapse since the 1990s. Yet just five months earlier Lehman Brothers had held its annual shareholders' meeting with the stock price at about 86 dollars per share. According to Fama (1970), investors trade stocks according to information: when news spreads in the stock market, share prices begin to fluctuate, and with the rapid dissemination of information more and more people take part in the trade, so that the share price settles at the right level once everyone knows the information. The Lehman Brothers example, however, shows that because of poor information quality the price cannot reflect the right value. As Barry and Harvard (1979) have stated, frequent transacting on sufficiently uncertain information may be deleterious to the market.

Another precondition of the EMH is that the market is perfectly competitive. A perfectly competitive market is one without government intervention in which everyone is a price-taker (Nicholson, 2005). In reality, a perfectly competitive market cannot exist, even though governments advocate market liberalisation to attract people into trading. Liberal economists such as Levine (2001) have argued that “financial liberalization leads to more efficient investments and that financial liberalization boosts productivity growth”, but the huge rescue packages are strong evidence of the failure of market liberalisation. The collapse of the business models of investment banks, government-managed commercial banks and mortgage institutions provides the most striking large-scale evidence: large investment institutions cannot effectively regulate themselves. There is thus no perfectly competitive market, and theories based on this assumption appear to fail.

3.3 What do we learn from the financial crisis?

The financial crisis reveals that the preconditions of the EMH cannot be realised in the present world. Information uncertainty and the feedback loop make people irrational, and the huge rescue policies prove that the market is never perfectly competitive, so prices cannot reflect information at the right level. Lack of regulation of information and of financial innovation may be the main causes of this financial crisis.

The collapse of Lehman Brothers indicates that the potential for financial-market failure is real, and that blind faith in the market can lead to its systemic collapse. Relying on market self-regulation alone is therefore insufficient; government regulation and macroeconomic control are needed to solve these problems. Because the modern financial system is characterised by high leverage, high interconnectedness and high information asymmetry, systemic risk and complexity have increased. In this situation, government must play a leading role in financial supervision and take effective measures to curb excessive market speculation and vicious competition among financial institutions. In particular, government should strengthen the regulation of investment banking and of financial derivatives to prevent institutions from relying on excessive leverage for blind investment.

4 Conclusions

The efficient market hypothesis describes an idealised situation in which stock prices reflect all relevant information in a perfectly competitive market populated by rational people. Valuable studies based on the concept of the efficient market have been recognised.

However, the extremely idealised preconditions of the EMH lead people to rethink its scope of application and practical value. In the current financial crisis, the EMH has faced serious challenges to its preconditions: a perfectly rational investor and a perfectly competitive market cannot be realised. These challenges come mainly from two aspects, information and the role of self-regulation in the market. Firstly, with rapid information dissemination, an information circulation mechanism is established between investors and the market. Investors not only absorb information from the market but also feed their own views back into it, so the information they receive already includes their own views, which is one reason investors become irrational. A further problem with information is its uncertainty and inaccuracy: investment banks may use accounting methods to mislead investors, leading them to trade stocks irrationally. Secondly, the EMH overemphasises the role of self-regulation in the market, yet large investment institutions cannot regulate themselves effectively; the U.S. government’s rescue policy is the clearest evidence of the failure of market liberalisation.

This departure from reality does not mean the complete failure of the EMH. In future studies, the EMH may be combined with other disciplines in order to achieve a wider scope of application.


Baker, M. & J. Wurgler (2007) ‘Investor Sentiment in the Stock Market’ Journal of Economic Perspectives 21(2):129-151

Ball, R. & P. Brown (1968) ‘An Empirical Evaluation of Accounting Income Numbers’ Journal of Accounting Research 6(2):159-178

Barber, B. & T. Odean (2001) ‘The Internet and the Investor’ Journal of Economic Perspectives 15(1):41-54

Barry, M. & B. Harvard (1979) ‘Information dissemination, market efficiency and the frequency of transactions’ Journal of Financial Economics 7(1):29-61

Duncan, G. (2008) ‘Lehman Brothers collapse sends shockwave round world’ The Times, 16 September 2008

Fama, E.F., L. Fisher, M.C. Jensen & R. Roll (1969) ‘The Adjustment of Stock Prices to New Information’ International Economic Review 10(1):1-22

Fama, E.F. (1970) ‘Efficient capital markets: A review of theory and empirical work’ The Journal of Finance 25(2):383-417

Fama, E.F. (1976) Foundations of Finance New York: Basic Books

Grossman, S. & J. Stiglitz (1980) ‘On the Impossibility of Informationally Efficient Markets’ American Economic Review 70(3):393-408

Jensen, M.C. (1969) ‘Risk, The Pricing of Capital Assets, and The Evaluation of Investment Portfolios’ Journal of Business 42(2):167-247

Malkiel, B.G. (2003) ‘The Efficient Market Hypothesis and Its Critics’ Journal of Economic Perspectives 17(1):59-82

Maloney, M.T. & J.H. Mulherin (2003) ‘The Complexity of Price Discovery in an Efficient Market: the Stock Market Reaction to the Challenger Crash’ Journal of Corporate Finance 9(4):453-479

Nicholson, W. (2005) Microeconomic Theory 9th ed. South-Western College Pub

Osborne, M. (1959) ‘Brownian Motion in the Stock Market’ Operations Research 7:145-173

Shleifer, A. (2000) Inefficient Markets: An Introduction to Behavioral Finance Oxford: Oxford University Press

Taylor, J.B. (2009) ‘The Financial Crisis and the Policy Responses: An Empirical Analysis of What Went Wrong’ paper presented to Proceedings of FIKUSZ ’09 Symposium for Young Researchers, Budapest, Hungary


A basic study of pricing to market


Exchange rates are always changing, which can alter the relative prices of traded goods among countries. However, when firms adopt PTM (pricing to market), the prices of traded goods do not change with the exchange rate, which may result in deviations from purchasing power parity theory. Many aspects of the economy, such as consumption and welfare distribution, change in response to this behaviour. This paper tries to illustrate the meaning and the effects of PTM with theory and empirical evidence.

Definition and theories about PTM

Pricing to market (PTM) is a concept that emerged in the mid-1980s. During that period, the US dollar experienced a strong appreciation, yet it was noticed that the prices of imported commodities in the US did not decrease in line with the exchange rate changes. PTM denotes the phenomenon of foreign firms maintaining or even increasing their export prices when the currency of the importing country rises (Krugman, 1987). PTM can also be understood as exporting firms setting the prices of traded goods in the local currency instead of adjusting them according to the exchange rate. International evidence shows that pricing-to-market behaviour and limited exchange-rate pass-through are often interpreted as consistent with “local currency price stability”. This kind of price discrimination not only affects the prices of traded goods but also influences various kinds of price rigidity (Alexius and Vredin, 1999).
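The pass-through idea behind PTM can be made concrete with a small numeric sketch. The figures and the function name below are illustrative assumptions, not drawn from the cited studies: under full pass-through the importer’s local price moves one-for-one with the exchange rate, while under full pricing to market it stays fixed.

```python
def import_price(export_price_fc, exchange_rate, pass_through=1.0):
    """Local-currency import price under partial exchange-rate pass-through.

    export_price_fc : price set in the exporter's currency
    exchange_rate   : units of importer currency per unit of exporter currency
    pass_through    : 1.0 = full pass-through, 0.0 = full pricing to market
    """
    baseline_rate = 1.0  # reference rate at which the price was originally set
    baseline_price = export_price_fc * baseline_rate
    full_pass_price = export_price_fc * exchange_rate
    # Interpolate between the sticky local price and the fully adjusted price.
    return baseline_price + pass_through * (full_pass_price - baseline_price)

# The exporter's currency depreciates 20% against the importer's (rate 1.0 -> 0.8):
print(import_price(100, 0.8, pass_through=1.0))  # full pass-through: 80.0
print(import_price(100, 0.8, pass_through=0.0))  # full PTM: price stays at 100.0
print(import_price(100, 0.8, pass_through=0.5))  # partial pass-through: 90.0
```

The empirical debate in this literature is essentially about where real-world pricing sits between the two polar values of `pass_through`.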

Pick and Carter (1994) explain the reasons for PTM behaviour. Firms that prefer to keep prices stable in foreign markets with fluctuating exchange rates may exercise this preference by exerting market power. PTM may also result from demand elasticity: as the importing country’s currency appreciates, the import price falls and demand increases; but when the exporter cannot meet the increased demand for its goods, the extent of the currency appreciation will not be fully reflected in the prices of traded commodities. In addition, shocks to national market conditions, such as exogenous changes in the exchange rate, can generate deviations between the prices that firms charge in each market (Bergin, 2003). Finally, PTM may arise simply because exporting firms want to remain competitive: many firms are said to have followed pricing policies designed to keep export prices competitive despite changes in exchange rates (Marston, 1990).

The literature on PTM is generally based on models. In the pricing-to-market models of Betts and Devereux (1996), firms produce differentiated products for export to different countries and can set different export prices for different destinations. A simple quantitative exercise based on the estimated degree of PTM in international trade shows that the increase in exchange-rate fluctuation arising from PTM may be very large. That is to say, when firms engage in PTM and a country faces a money shock, the effects of that shock are quite different from those in traditional exchange-rate models in which prices are set in the currency of the exporter. Moreover, PTM plays a central role in exchange-rate determination and in international macroeconomic fluctuations: it limits the pass-through from exchange-rate changes to prices and reduces the traditional “expenditure switching” role of exchange-rate changes (Betts and Devereux, 2000).

Implications of PTM for PPP

A direct implication of the PTM hypothesis is low pass-through from the exchange rate to prices, and the resultant failure of relative PPP to hold in the short and intermediate runs (Aizenman, 2004). PPP (purchasing power parity) is a concept widely used in international economics. The basic idea of PPP is that when consumers purchase identical products in any market worldwide, the quantity of money required should be the same when measured in one currency (Hallwood and MacDonald, 2000). Applied to aggregate price data, purchasing power parity is the hypothesis that the prices one country pays for another country’s goods should move one-for-one with the producer prices for those goods in their source countries, when all of these prices are expressed in a common currency (Atkeson and Burstein, 2008).
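The law-of-one-price arithmetic underlying PPP can be checked in a few lines. This is a textbook sketch with made-up numbers, not data from the cited papers: if the home price equals the exchange rate times the foreign price, the deviation is zero; under PTM the home price stays put when the rate moves, opening a deviation.

```python
def loop_deviation(home_price, exchange_rate, foreign_price):
    """Percentage deviation from the law of one price: compares the home-currency
    price with exchange_rate * foreign_price (the common-currency parity price)."""
    parity_price = exchange_rate * foreign_price
    return (home_price - parity_price) / parity_price * 100

# Parity holds: home price 150 = 1.5 * foreign price 100, so deviation is zero.
print(loop_deviation(150, 1.5, 100))  # 0.0
# Under PTM the home price stays at 150 even after the rate moves to 2.0:
print(loop_deviation(150, 2.0, 100))  # -25.0
```

Empirical PTM studies such as those discussed below essentially document persistent non-zero values of this deviation across markets.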

When firms and producers apply PTM, many aspects of the economy differ from those of a world in which PPP holds. From the analysis of the model in Betts and Devereux (2000), the implications of PTM for PPP can be summarised as follows:

The most obvious implication of PTM concerns price behaviour. If there were no price rigidity, the law of one price would hold for all products, and PPP would hold generally, even in the presence of some degree of international market segmentation. However, if local-currency prices are sticky, changes in the exchange rate will result in deviations from the law of one price. When complete PTM applies, the exchange rate plays a different role than in a law-of-one-price environment: the relative prices faced by importers and exporters are not affected by exchange-rate fluctuations, but relative incomes are. If export prices are set in the buyer’s currency, a depreciation of the home currency increases the home-currency earnings of home firms, while foreign firms’ earnings in their own currency decrease at given production levels. A depreciation thus generates a world redistribution of income towards the home country, which raises home consumption relative to foreign consumption, and this occurs without any relative price changes (Betts and Devereux, 2000).

PTM also dampens the real effects of exchange-rate movements: the larger the PTM sector, the smaller the effect of a money shock on the country. Suppose a country faces a money shock that causes a depreciation of its currency; if PTM holds in a large sector, the reallocation of spending away from foreign goods towards domestic goods will be greatly reduced, because although the exchange rate responds to the shock, the currency movement does not affect prices in the domestic market. PTM thus limits the pass-through from exchange-rate changes to prices and reduces the traditional “expenditure switching” role of exchange-rate changes, while the nominal price stickiness associated with PTM magnifies the response of the exchange rate to shocks to fundamentals (Betts and Devereux, 1996). The effect of monetary policy also differs between cooperative and non-cooperative settings: there is always a gain from cooperation, and the gain reaches a maximum in the polar cases of no and full pricing to market, since in these cases the movement in the terms of trade, and thus the welfare spill-over, is at a maximum in the non-cooperative setting (Michaelis, 2006).

Empirical evidence of PTM

As a common strategy of international firms, PTM behaviour is widely observed all over the world, and by investigating firm data we can measure its empirical effects. Gil-Pareja (2003) investigated PTM behaviour in European car markets between 1993 and 1998 and found that local-currency price stability is a strong and pervasive phenomenon across products, independent of the invoicing currency. In fact, there have been large gaps among automobile retail prices across EU Member States since the early 1980s, in deviation from the law of one price (Gil-Pareja, 2003). After analysing and comparing the data for different EU countries, it can be concluded that the strategy of shielding prices from exchange-rate changes aims to maximise profits across segmented markets: an exporting firm obtains the highest expected profits under exchange-rate uncertainty by setting prices in the importer’s currency.

Since the early 1980s, the Japanese yen had experienced a depreciation. Marston (1990) investigated pricing to market by Japanese firms from 1980 to 1987, exploring how they responded to shifts in the real exchange rate by varying the prices of their exports relative to the prices of products destined for the domestic market. His estimation distinguishes between inadvertent, temporary changes in these margins due to exchange-rate surprises and planned changes associated with PTM behaviour, and he found overwhelming evidence that export-domestic price margins are systematically varied to help Japanese firms protect their competitive position (Marston, 1990). For exporters in the UK, export pricing is also affected by firm-specific and contextual variables such as the export experience of the firm, the degree of export development, and the type and intensity of market competition, among others; the extent of PTM depends on elements such as the industry, the available information and the like (Tzokas et al., 2000).


Pricing to market frees the prices of internationally traded goods from exchange-rate fluctuations; as a result, PPP no longer holds in countries where PTM is applied. Theory as well as empirical evidence suggests that PTM has strong implications for consumption and welfare distribution. It is an effective way for international corporations to avoid the negative influence of exchange-rate fluctuations, but the extent of PTM varies among countries and industries.

Aizenman, J. (2004) ‘Endogenous pricing to market and financing costs’ Journal of Monetary Economics 51(4):691-712

Alexius, A. & A. Vredin (1999) ‘Pricing-to-Market in Swedish Exports’ The Scandinavian Journal of Economics 101(2):223-239

Atkeson, A. & A. Burstein (2008) ‘Pricing-to-Market, Trade Costs, and International Relative Prices’ The American Economic Review 98(5):1998-2031

Bergin, P.R. & R.C. Feenstra (2001) ‘Pricing-to-market, staggered contracts, and real exchange rate persistence’ Journal of International Economics 54(2):333-359

Bergin, P.R. (2003) ‘A model of relative national price levels under pricing to market’ European Economic Review 47(3):569-586

Betts, C. & M.B. Devereux (1996) ‘The exchange rate in a model of pricing-to-market’ European Economic Review 40(3-5):1007-1021

Betts, C. & M.B. Devereux (2000) ‘Exchange rate dynamics in a model of pricing-to-market’ Journal of International Economics 50(1):215-244

Gil-Pareja, S. (2003) ‘Pricing to market behaviour in European car markets’ European Economic Review 47(6):945-962

Hallwood, P. & R. MacDonald (2000) International Money and Finance 3rd ed. Blackwell

Krugman, P. (1987) ‘Pricing to Market when the Exchange Rate Changes’. In: Arndt, S.W. & J.D. Richardson (eds.) Real-Financial Linkages among Open Economies. Cambridge: MIT Press

Mark, N. (2001) International Macroeconomics and Finance Blackwell

Marston, R.C. (1990) ‘Pricing to Market in Japanese Manufacturing’ Journal of International Economics 29(3):217-236

Michaelis, J. (2006) ‘Optimal monetary policy in the presence of pricing-to-market’ Journal of Macroeconomics 28(3):564-584

Patureau, L. (2007) ‘Pricing-to-market, limited participation and exchange rate dynamics’ Journal of Economic Dynamics and Control 31(10):3281-3320

Pick, D.H. & C.A. Carter (1994) ‘Pricing to Market with Transactions Denominated in a Common Currency’ American Journal of Agricultural Economics 76(1):55-60

Sarno, L. & M.P. Taylor (2002) ‘New open-economy macroeconomics’. In The Economics of Exchange Rates. Cambridge: Cambridge University Press

Tzokas, N., S. Hart, P. Argouslidis & M. Saren (2000) ‘Strategic pricing in export markets: empirical evidence from the UK’ International Business Review 9(1):95-117


Management Principles: Market Entry Strategies

Executive Summary

This report has been written to explore the management principles which may be applied by businesses when they seek to enter new international markets. The report shall be split into three parts, each focusing on a different element of this. Firstly, the various market entry strategies available to firms intending to become international businesses shall be briefly described. Secondly, three market entry strategies shall be discussed, including their definitions, the basic decisions required to implement them, and the positive and negative aspects of each. Finally, the market entry strategy used by IBM to move its business from the United States (U.S.) into China shall be analysed and discussed, and a number of conclusions shall be drawn in relation to market entry strategies and how they have been utilised by firms to date.


As outlined in the executive summary, this report explores the management principles which businesses may apply when entering new international markets. It proceeds in three parts: a brief description of the market entry strategies available to firms intending to become international; a discussion of three such strategies, covering their definitions, the basic decisions required to implement them, and their positive and negative aspects; and an analysis of the market entry strategy used by IBM to move its business from the United States into China. The first of these topics shall now be briefly discussed.

Market entry strategies available for a firm intending to become international.

When an organisation has made a decision to enter an overseas market, a variety of options is open to it (Meyer et al., 2009). These options vary in their cost, their risk and the degree of control which can be exercised over them (McDonald, Burton and Dowling, 2002). The easiest way for a firm to enter a new market is by using a form of entry strategy derived from exporting (Morsink, 1998), implemented by either a direct or an indirect method such as an agent or countertrade (Morschett, Schramm-Klein and Swoboda, 2010). However, there are more complex ways through which firms may seek to enter new international markets (Porter, 1980); these may be derived from the undertaking of joint ventures or export processing zones (Roberts and Hite, 2007). Cunningham (1986) identified five strategies which have been used by firms seeking to enter foreign markets:

Technical innovation strategy – This is when a firm seeks to create an image of having superior products.
Product adaptation strategy – This is when a business modifies an existing product.
Availability and security strategy – This is when a firm seeks to overcome transport risks by countering perceived risks.
Low price strategy – This is when a firm uses a low price to penetrate the new market.
Total adaptation and conformity strategy – This is when a firm uses a foreign producer to manufacture its products (Cunningham, 1986: 9).

Therefore, from the above, we can ascertain that there are a number of market entry strategies, which may be used by firms to seek to enter new international markets. In the next section of this report, three of these shall now be discussed in more detail.

Three market entry strategies, which firms may use to become international businesses

There are three main entry strategies which may be used by firms to enter international markets: direct, indirect and foreign-based (Dunning, 1985). Each has a number of advantages and disadvantages. For example, a direct strategy involves the sharing of risk and know-how, may be the only means of entry into an international market, or may be a source of supply for a third country (Dunning, 1985). Each of these is advantageous and may be implemented through an agent, a distributor, a government or an overseas subsidiary. However, the disadvantages of this approach are that partners may not have full control or management within a company, it may be impossible to recover capital, there could be disagreements between purchasers or third parties, or partners may have different views on the expected benefits of the goods or services in question (Ferrell and Hartline, 2008). In comparison, indirect approaches involve trading companies, export management companies, piggybacking or countertrading (Glowik and Smyczek, 2011). Furthermore, foreign-based market entry strategies enable companies to set up their operations in other countries. There is thus a variety of ways in which organisations can enter foreign markets, three of which shall now be outlined in more detail.

The first of these is the use of export processing zones. An export processing zone is often defined as a zone within a country, exempt from tax and duties, for the processing or reprocessing of goods for export (Croft, 1994). This is a foreign market entry strategy derived from the use of licensing, joint ventures, contract manufacture or ownership (Griffin, 2008). To determine whether this is the best approach, a firm will need to ascertain whether there is demand for its product, identify potential partners, and establish whether its earnings will benefit from adopting this market entry strategy. The advantages of this approach are that the host country obtains know-how; capital, technology and employment opportunities are created within the country in question; there may be foreign-exchange earnings; and internationalisation is enabled more easily (Gwartney, Stroup, Sobel and MacPherson, 2009). However, the disadvantages are that partners do not have full control or management of their business, it may be impossible to recover capital, and there could be disagreements between parties, as they may have different views on the expected benefits or other business matters (Lane, 2006).

The second approach which may be used to enter a foreign market is often based on bartering. Bartering is defined as the direct exchange of one good for another (Kotler and Armstrong, 2008). To determine whether this is the best approach, a firm will need to ascertain whether there is demand for its product, identify potential partners with whom it may barter goods, and establish whether its earnings will benefit from adopting this market entry strategy (Schultz, Robinson and Petrison, 1998). The disadvantages of this approach are that it may involve short-term investments, capital or employment movements, and transaction costs, and that the business is not part of the local economy and so may be alienated, while laws may differ or create more bureaucracy (Smith, 2011). However, barter arrangements are simple to administer, no currency is involved, and valuation is commodity-based or currency-based, so there are also a number of advantages to adopting this approach.

The third method which may be used by firms to enter a foreign market is referred to as countertrade (Williamson, 1975). Countertrade is when a customer agrees to buy goods on condition that the seller buys some of the customer’s own products in return (Kotler and Armstrong, 2008). To determine whether this is the best approach, a firm will need to ascertain whether there is demand for its product and for its partner’s products, identify potential partners from which it may make purchases, and establish whether its earnings will benefit from adopting this market entry strategy (Williamson, 1985). The advantages are that it is a method of obtaining sales by retaining a seller and an effective means of breaking into a closed market. However, the disadvantages are that there may be usage or variety differences between products and locations, it is difficult to set a market price, and there may be inconsistencies in the delivery and specification of the product or service quality (Glowik and Smyczek, 2011).

Each of these three market entry strategies may be employed by companies wishing to enter foreign markets. What has been interesting, however, is the recent shift of companies moving operations to China (Hira and Hira, 2008). This shall be discussed using IBM as an example.

Analysis of the market entry strategy of IBM to move their business into China from the United States.

In recent years, according to Hira and Hira (2008), a number of multinational companies based in the United States have started to move their operations to China. This is sometimes referred to as offshoring: companies move parts of their operations to other countries. One example is the U.S. company IBM, which is moving parts of its business into China. In this scenario, the market entry strategy adopted by IBM is based on knowledge transfer and a foreign-based market entry strategy (Glowik and Smyczek, 2011). Additionally, IBM is adopting a total adaptation and conformity strategy, as it is using a foreign producer to manufacture its products (Cunningham, 1986). This is a big move for IBM, a multinational technology and consulting corporation headquartered in Armonk, New York, in the U.S. IBM produces and markets computers to both businesses and domestic consumers; it manufactures and develops both hardware and software, and provides infrastructure, hosting and consulting services in areas spanning many divisions, from mainframe computers to nanotechnology (Shelly, 2010). Originally IBM had twenty-five research and development (R&D) centres in the US, Europe and Asia; recently, however, it decided to establish two major R&D centres in Beijing and Shanghai in China. This move has been undertaken to take advantage of the emerging markets and economies in China and to encourage technology development around open standards and open-source code (, 2005). It has brought a number of advantages to IBM owing to the low labour costs, the high-technology centres which already exist in China, and a general reduction in overheads due to the lower operating costs (Thomson and Sigurdson, 2007). Therefore, this move has been advantageous to IBM in a number of ways.
IBM has also been able to take advantage of entering this new market, as its products and services are now being utilised in China too (Engardio, Roberts and Bremner, 2004). To this end, the move has been good for IBM. However, one must also consider that there may be a number of disadvantages for the company in the future: for example, it may not attain full control or management of its business, or it may be impossible to recover capital should these new investments in China fail (Lane, 2006). To date, however, IBM seems to be reaping the benefits of this move, and only time will tell whether it has been beneficial overall.


From the above, we can ascertain that there are a number of market entry strategies which may be used by firms seeking to enter new international markets. Each has a number of positive and negative aspects which must be considered carefully by businesses before they enter new markets, as is shown by IBM, which has moved its business from the United States into China.


Croft, M. J. (1994), Market segmentation: a step-by-step guide to profitable new business, New York: Routledge.

Cunningham, M.T. (1986) “Strategies for International Industrial Marketing”. In D.W. Turnbull and J.P. Valla (eds.), Croom Helm.

Davidson, W. (1982). Global strategic management. New York. Wiley.

Dunning, J. H. (1985). Multinational enterprises, economic structure, and international competitiveness. New York. Wiley

Engardio, P., Roberts, D. and Bremner, B. (2004) ‘The China Price’, Business Week, 3911, pp. 102.

Ferrell, O. C. & Hartline, M. D. (2008), Marketing Strategy, Mason: Thomson Higher Education.

Glowik, M. & Smyczek, S. (2011), International Marketing Management: Strategies, Concepts and Cases in Europe. Germany: Oldenbourg Wissenschafts Verlag GmbH.

Griffin, R. W. (2008), Management. 9th edition, Boston, MA: Houghton Mifflin Company.

Gwartney, J. D., Stroup, R. L., Sobel, R. S. & MacPherson, D. (2009), Economics: Private and Public Choice. Mason, OH: South-Western Cengage Learning.

(2005). IBM to Set Up 4 R&D Centers in China. [online]. Accessed on 08/08/2013.

Hira, R. & Hira, A. (2008), Outsourcing America: the true cost of shipping jobs overseas and what can be done about it. New York: AMACOM books.

Kotler, P., & Armstrong, G. (2008), Principles of marketing, New Jersey: Prentice Hall.

Lane, H. W. (2006), International Management Behavior, Fifth Edition, Malden, MA: Blackwell Publishing.

McDonald, F., Burton, F. & Dowling, P. (2002). International business. Cincinnati: South Western College Publishers.

Meyer, K. E., Estrin, S., Bhaumik, S. K., & Peng, M. W. (2009). Institutions, resources, and entry strategies in emerging economies. Strategic management journal, 30(1), 61-80.

Morschett, D., Schramm-Klein, H., & Swoboda, B. (2010). Decades of research on market entry modes: what do we really know about external antecedents of entry mode choice?. Journal of International Management, 16(1), 60-77.

Morsink, R. L. A. (1998), Foreign direct investment and corporate networking: a framework for spatial analysis of investment conditions. Cheltenham: Edward Elgar Publishing Limited.

Porter, M. E. (1980), Competitive Strategy: Techniques for Analysing Industries and Competitor, New York: Free Press

Roberts, J. T. & Hite, A. (2007), The globalization and development reader: perspectives on development and social change. London: Blackwell Publishing Ltd.

Schultz, D. E., Robinson, W. A. & Petrison, L. (1998) Sales promotion essentials: the 10 basic sales promotion techniques — and how to use them. New York: McGraw-Hill, Inc.

Shelly, G. B. (2010). Discovering Computers 2010: Living in a Digital World, Complete. Boston, MA: Course Technology Cengage Learning.

Smith, T. J. (2011), Pricing Strategy: Setting Price Levels, Managing Price Discounts Establishing Price Structure. Natorp boulevard: South – Western Cengage learning.

Thomson, E. & Sigurdson, J. (2007), China’s science and technology sector and the forces of globalization. London: World Scientific Publishing Co. Pte. Ltd.

Williamson, O. E. (1975), Markets and Hierarchies: Analysis and Antitrust Implications. New York: The Free Press.

Williamson, O. E. (1985), The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting. New York: The Free Press.


The Stock Market Crash of 1929


The paper investigates the causative factors of the 1929 stock market crash. With the help of economic data and previous research, it highlights that tight monetary policy led to the crash and the subsequent economic collapse of 1932. Among other factors, the prime lesson from the crash for investors in the 21st century is to adopt a diversification strategy, and that for government officials is to employ their power wisely and strategically.

List of Figures and Table

Table 1: Economic Indicators of the US: 1929 to 1932

Figure 1: The Fall of the Dow Jones: 1928 to 1934

Figure 2: Unemployment rate in the US: 1929 to 1940


DJIA= Dow Jones Industrial Average

MS= Money Supply

NYSE= New York Stock Exchange

ROI= Return on Investment


America’s economy suffered a massive stock market crash in 1929 when the “roaring twenties” bubble burst, wiping $5 billion off the NYSE in a matter of three days (Lange, 2007). The objective of this brief is to investigate the elements which played a pivotal role in the crash and its impact on the American economy. The discussion would not be comprehensive, however, without highlighting the lessons which today’s economists and investors could learn from the 1929 crash.

Pre-crash Dynamics

In the early 20th century, the US economy experienced exponential growth as a ramification of industrialization, advancements in technology, and an expanding middle class; hence economists referred to that era as the roaring twenties. The DJIA soared during 1921-1929, driven primarily by aggressive share purchasing. The bull-market sentiment was rooted in the belief that stock market investment was the safest option because of the country’s strong economic boom.

Source: Gitlin, 2008, p.64

The index skyrocketed, climbing from 60 points to 400 points within a short span, as shown in Figure 1 (Gitlin, 2008). However, the short-sightedness, or sheer foolishness, of investors reversed the situation.

How Did the Crash Occur?

The boom in the stock market provided an opportunity for margin purchasing, which simply means borrowing to purchase. The rationale behind this strategy is the effect of financial leverage: it magnifies ROI manyfold (McLaney, 2003); for instance, a 1% increase in market return could produce a 10% increase in ROI. However, the leverage effect works in the opposite direction as well, which was completely ignored due to the over-optimism of investors. Experts are of the view that during 1929, speculation about rising prices, driven by strong economic growth, gave birth to an abnormally optimistic investor attitude (Galbraith, 2009).
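The leverage arithmetic described above can be sketched in a few lines of Python; the function and figures below are illustrative assumptions rather than historical data:

```python
# Illustrative sketch (not historical data): how financial leverage
# magnifies the return on a margin buyer's own equity.

def levered_roi(market_return, leverage, borrow_rate=0.0):
    """Return on the investor's own equity for a position financed
    with `leverage` times equity (e.g. leverage=10 means 10% cash,
    90% borrowed)."""
    return leverage * market_return - (leverage - 1) * borrow_rate

# Ignoring borrowing costs, a 1% market gain at 10x leverage
# yields a 10% gain on equity...
up = levered_roi(0.01, 10)
# ...but a 1% market fall is equally magnified into a 10% loss.
down = levered_roi(-0.01, 10)
```

The symmetry of the two results is the point the paragraph makes: leverage magnifies losses exactly as it magnifies gains.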

Nevertheless, when the Federal Reserve (Fed) increased the interest rate to cool the overheated stock market, a powerful bear market emerged. Investors, in a state of panic, started selling their shares, which made the DJIA drop from 400 to 145 points within three days (Lange, 2007). Investors who had purchased on debt, paying 10% in cash and 90% from borrowed funds, were unable to repay their debt when share prices fell contrary to their expectations; consequently they reached the point of bankruptcy. Experts in behavioural finance believe that acting in a state of panic was also one of the significant drivers of the crash: observation of stock market behaviour asserts that investor sentiments are usually contagious and can cause a snowball effect which, in extreme situations, can lead to a market crash (Reilly & Brown, 2007). The effect of the crash was magnified because many banks had also invested their customers’ deposits in the stock market; when prices fell, banks lost those deposits. Subsequently, bank runs occurred as depositors tried to withdraw their funds all at once. As a ramification, 10,000 banks went bankrupt, $140 billion of savings was lost in the crash, and the financial system of the country was decimated (Galbraith, 2009).
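The margin wipe-out described above can be worked through numerically. The index levels (400 to 145) and the 10%/90% split come from the text; the position size is an assumed example:

```python
# Illustrative sketch of a margin wipe-out: a buyer puts up 10% cash
# and borrows 90%, then the index falls from 400 to 145 points.
# The $10,000 position size is an assumption for illustration.

position = 10_000.0          # total value bought at the peak
cash = 0.10 * position       # investor's own equity
debt = 0.90 * position       # borrowed funds

value_after = position * 145 / 400   # holding's value after the fall
equity_after = value_after - debt    # what the investor is left with

# equity_after is about -5,375: the entire stake is gone and the
# investor still owes more than the holding is worth -> bankruptcy.
```

On these numbers a 64% fall in the index destroys not just the 10% cash stake but leaves the borrower owing more than half the original position.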

Impact of the Crash

Economists believe that Fed’s tight monetary policy was the main driver for the crash that subsequently resulted in the Great Depression of 1932.

Source: EIU, 2013

Source: EIU, 2013

The degree of decimation brought by the stock market crash can be assessed by the fact that the economy had to suffer for four years before it reached its trough in 1932. The American economy was collapsing at an unbelievable pace: its unemployment rate increased from 3.2% in 1929 to 25% in 1932 (EIU, 2013) (see Figure 2). Its GDP fell by 26% within a span of four years, driven primarily by average decreases of 48.7% in investment and 5.8% in consumption (EIU, 2013) (see Table 1). Moreover, the economy suffered deflation, that is, a negative rate of inflation, which implies that borrowers had to pay back more valuable USD than the ones they had borrowed (Mishkin, 2007). The deflation occurred because the money multiplier decreased to such an extent that the MS fell despite the increase in the monetary base; this was the repercussion of the Fed’s short-sighted monetary policy.

This deflation, along with a 16% real interest rate in 1932 (McGrattan & Prescott, 2004), actually drove the collapse in investment. In fact, it was not only the economic loss: the number of suicides, the decrease in the quality of life, the rise in street crime, the lawlessness, and the despair in society have also been highlighted as gruesome impacts of the depressed economy.
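The link between deflation and a punishing real borrowing cost follows from the Fisher relation. The sketch below is illustrative: the 4% nominal rate and 10% deflation are assumed figures, chosen only to show how deflation can push the real rate towards the 16% cited above:

```python
# Fisher-equation sketch: under deflation the real interest rate
# exceeds the nominal rate, so borrowers repay in "more valuable"
# dollars. The 4% nominal rate and -10% inflation are assumptions
# for illustration, not historical series.

def real_rate(nominal, inflation):
    """Exact Fisher relation: (1 + r) = (1 + i) / (1 + pi)."""
    return (1 + nominal) / (1 + inflation) - 1

r = real_rate(0.04, -0.10)   # 4% nominal, 10% deflation
# r is roughly 0.156, i.e. about a 16% real cost of borrowing
```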

What Does the Crash Teach?

There are many lessons which economists and investors can learn from the historic crash; however, a few of the most applicable in today’s market are:

Government actions can help strike a balance between optimism and pessimism pertaining to stock market investment. Therefore, if government were to constantly pronounce markets as over-valued, investors would come to believe it and strategise accordingly (White, 1990).
The fact that an investment could deteriorate by 40% in one month and by over 90% in three years highlights an immense need for diversification in a portfolio. Adding bonds, cash, and precious metals alongside equity could help mitigate the risk (Reilly & Brown, 2007).
A leveraged portfolio can be a double-edged sword for uninformed investors. Positive leverage can increase profits provided that sensitivity to cues of unfavourable conditions remains high at all times. There were people who were able to cap their losses in 1929 because they were more sensitive than others to indicators cueing problems (White, 1990).
Last but not least: banks, financial markets, and the economy are all interlinked. An issue in one can spread to all, within and outside the domestic economy. Therefore, government policy makers must see the big picture before devising changes in monetary and fiscal policy.
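The diversification lesson in the list above can be quantified with a standard result: for equally weighted, uncorrelated holdings of equal volatility, portfolio volatility falls with the square root of the number of holdings. The 40% volatility figure below is an illustrative assumption:

```python
import math

# Sketch of the diversification argument: for n equally weighted,
# uncorrelated assets each with the same volatility, portfolio
# volatility falls as 1/sqrt(n). Figures are illustrative assumptions.

def portfolio_vol(asset_vol, n):
    """Std dev of an equally weighted portfolio of n uncorrelated
    assets, each with std dev `asset_vol`."""
    return asset_vol / math.sqrt(n)

single = portfolio_vol(0.40, 1)   # one stock: 40% volatility
spread = portfolio_vol(0.40, 16)  # 16 uncorrelated holdings: 10%
```

Real assets are of course positively correlated, so the reduction in practice is smaller, but the direction of the effect is what the lesson relies on.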


The stock market crash of 1929 was one of the most significant events in American history. It was the short-sighted tight monetary policy of the Federal Reserve which fuelled the crash: tight policy created a fear of loss among margin-buying investors, which subsequently resulted in a panic-driven bear market and an eventual crash. The economy suffered the long-term effects of the crash in the shape of the Great Depression, with the country at its deepest trough in 1932. The crash of 1929 teaches, first, that investment is largely sentiment-driven; therefore, if authorities constantly pronounce markets over-priced, investors will eventually believe it. Secondly, diversification should be a significant element of one’s investment strategy. Lastly, magnification of returns through financial leverage should be attempted only when the borrowing cost is less than ROI.


EIU (2013) Economic data: The United States of America [Online] Available at: [Accessed 8 October, 2013]

Galbraith, J.K. (2009) The Great Crash of 1929. London, Houghton Mifflin

Gitlin, M. (2008) The 1929 Stock Market Crash. Minnesota, ABDO

Lange, B. (2007) The Stock Market Crash of 1929: The End of Prosperity. New York, Infobase

McGrattan, E.R., & Prescott, E. (2004) The 1929 stock market: Irving fisher was right, International Economic Review, Vol. 45, pp. 991–1009.

McLaney, E. (2003) Business Finance: Theory and Practice, 6th ed. Essex, Financial Times Prentice Hall

Mishkin, F.S. (2007) Monetary Policy Strategy. Massachusetts, MIT

Reilly, F., & Brown, K. (2007) Investment Analysis Portfolio Management, 11th ed. Mason, South-Western

White, E. N. (1990) The stock market boom and crash of 1929 revisited, Journal of Economic Perspectives, Vol. 4, No. 2, pp. 67-83 [Online] Available at: [Accessed 8 October, 2013]


Market inefficiencies: a case study of the 2007 financial recession


2007 was dubbed the year the current financial recession started. Signs initially developed with the bailout of Northern Rock in August 2007, later unfolding into a world financial crisis. An alarming feature of this crisis was the collapse of big investment banks such as Lehman Brothers, Merrill Lynch and Bear Stearns, some of which had to receive bailout plans from the US government in a bid to quickly calm the crisis. Discussion of moral hazard was introduced, carrying negative connotations for bailing out the banks (The Economist, 2008). Banks were accused of rashly giving out loans to would-be home owners, creating conditions for easy credit on demand. This pattern of easy credit is a consumption-led approach to development and market stimulation. Its characteristics were recognised when greater demand for housing in the USA led to sharp rises in property prices along with high rates of interest. Acharya and Richardson (2009) explained that the behaviours displayed by lenders served to transfer credit risk onto the person receiving the loan. The example given was holding an AAA tranche, with AAA referring to the classification of the loan while the tranche notes the risks and liabilities carried with the loan.

The crisis also developed within the housing market, which was not a surprise, as most of the firms involved were mortgage lenders in the UK, such as Bradford & Bingley, and property investment firms in the US, turning into what is now known as the bursting of the housing bubble (Sowell, 2010). The American government-sponsored enterprises Freddie Mac and Fannie Mae had to be bailed out in a bid to secure nearly $12 trillion of property mortgages (BBC News, 2008). This was in response to falling house prices and loans being defaulted on or not being paid. Bailing out Freddie Mac and Fannie Mae played an important role in restoring confidence in the American housing market. The two large mortgage firms are described as behemoths, owning around $5.3 trillion of properties within the American market: 25 times as big as Northern Rock’s liabilities and twice the size of the UK economy (BBC News, 2011). Ivashina and Scharfstein (2010) reported a large drop, of 47%, in the number of new loans during the peak period of the financial recession.

Critics blame the severe downturn leading to the financial crisis on intense speculative behaviour, which resulted in inflationary price increases and often leads to questions of market efficiency. Efficiency in this context relates heavily to asset pricing in the stock and housing markets, as explained above. This essay aims to assess how, and to what extent, the events of the financial crisis beginning in 2007 reflect asset-pricing inefficiencies in stock markets and housing markets.

Expressing efficiency

Efficiency of the financial market is concerned with the rate and degree to which market information is assimilated into asset prices (Koch, 2007). There are many sources providing market information, such as newspaper articles that publish daily stock market information and the annual financial reviews published by companies themselves. Financial economists regard efficiency as being able to make the best possible investment decision on the information provided (Malkiel, 2003). Information deemed relevant needs to be cheap and easily accessible to the parties enquiring for it; these are the conditions for informational efficiency. Readily available information creates conditions whereby resources can be allocated to their best possible use, allowing allocational efficiency.

Based on the conditions of informational efficiency it is possible to gauge whether markets operate efficiently. Moreover, if the results state otherwise, they help provide clues as to which features of the market are not operating efficiently. Mama (2010) explained that, in a study of capital allocation in early 2008 amid the US housing-market financial crisis, Germany exhibited negatively correlated stock returns across domestically traded firms. Going into more detail, Mama (2010) explained that it was still possible to have an increase in the movement of stock returns. Under the conditions explained by Mama it can be concluded that Germany failed to meet allocational efficiency.

The aim of an informed decision is to increase returns on investment by allowing investors to recognise undervalued stocks. Intentions to achieve higher rates of return are likely to come with a higher rate of risk attached to the investment (Acharya and Richardson, 2009). The problem with this approach is that it assumes stock market behaviour is predictable, leading to speculative actions on sets of assets and stocks.

The ‘random walk’ (Malkiel, 2003) suggests another means of describing share and stock prices, operating under the assumption that the flow of information is continuous and therefore continuously affects prices. Each day’s price is thus independent of the previous day’s, in line with news and information. The random-walk condition was well suited to describing pricing determinants during the crisis, as seen in the need to boost market confidence by bringing in positive news, a method attempted by Portugal in November 2010 (Wise, 2010).
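The random-walk idea can be sketched directly: each day's price move is drawn independently of the previous day's, standing in for randomly arriving news. The starting price, horizon and daily volatility below are illustrative assumptions:

```python
import random

# Minimal random-walk sketch: today's shock (the "news") is drawn
# independently of yesterday's, so past prices carry no information
# about future ones. Parameters are illustrative assumptions.

random.seed(42)

def random_walk(start_price, days, daily_vol=0.01):
    prices = [start_price]
    for _ in range(days):
        shock = random.gauss(0.0, daily_vol)  # today's independent news
        prices.append(prices[-1] * (1 + shock))
    return prices

path = random_walk(100.0, 250)   # one simulated trading year
```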

Watson and Head (2007) identified the conditions necessary for a perfect and efficient market. These conditions are base assumptions for creating an environment in which stocks can be traded effectively with minimum limitations:

Promotion of free mobility of stocks when buying and selling, which means barriers such as transaction cost and taxes are removed from market trading procedures.
All market stakeholders have the same expectations regarding asset prices, interest rates and other influencing economic factors.
There is free mobility to all who wish to enter and leave the market at will.
Information must be free of financial costs to attain and also free to all stakeholders within the market.
There is expected to be numerous sellers and buyers, under conditions where no individual interest is able to dominate the market.

The conditions leading to the financial crisis of 2007 exhibited few, if any, of these features under the market as it functioned. A feature that stands out as definitely unresolved is the free mobility of stock, which is often hindered by liquidity issues. Another is the condition that market information be available at no cost and free to all. Despite the several different sources of information available to the public, participants incur large volumes of financial and time cost in an attempt to have perfect knowledge of market behaviour. Furthermore, privately acquired information is not dispersed into the public domain for free usage.

It is possible for investors to operate in an imperfect market environment. There is a greater need for markets to be efficient rather than perfect, because efficiency creates conditions in which investors stand to make profits by choosing to invest in stocks at the prices they are given. An efficient capital market is also expected to be operationally efficient. The condition for operational efficiency dictates that transaction costs incurred by investors when moving resources for the purpose of market operations must be kept as low as possible (Dimson and Mussavian, 1998).

Within some trading regions of the world this condition was met. The division of most of the world’s trading economies into trading blocs (unions) ensured some features of minimal operational cost when trading. For example, countries within the EU, and those using the euro, were able to trade under a no-barrier policy. For those within the EU it was possible to move resources about without incurring significant costs through taxes or limits on expenditure, for example UK residents wanting to buy property in Portugal. Furthermore, a French resident wanting to buy property in Portugal is likely to face even lower operational costs, as no cost is incurred in exchanging currency when both countries trade in the same currency.

Pricing efficiency requires a fair pricing method for stocks and shares: the pricing of the “capital market to fully and fairly reflect information concerning the past and all events that the market expects to occur in the future” (Watson and Head, 2007). Wickens (2006) defined two types of asset pricing. Relative asset pricing involves comparing the price of one asset to another; derivatives and bonds are examples of this. Absolute asset pricing operates by relating the price of a stock to fundamentals. Arbitrage opportunities, which let investors gain returns from nothing, are expected to be removed when they appear in the market, because their existence leads to an increase in competition which wipes out the opportunity through a rise in price. Arbitrage was a dominant feature of the conditions leading to the 2007 recession, whereby investors were able to exploit differential prices of comparatively similar stocks. There was an intense level of price speculation based around the property-market boom, which imposed strains on the economy globally; US home sales fell across 40 states in 2006 (Whitney, 2007), after a 14.3% fall in house prices. Whitney (2007) observed that similar conditions were present during the 1920s at the beginning of the last global recession.
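The arbitrage mechanism described above can be reduced to its simplest arithmetic: two comparable assets trade at different prices, and buying the cheap one while selling the dear one locks in a gain until competition closes the gap. The prices and quantity below are illustrative assumptions:

```python
# Sketch of a pure price arbitrage: simultaneously buy at the low
# price and sell at the high price of two comparable assets.
# Prices and quantity are illustrative assumptions.

def arbitrage_profit(price_cheap, price_dear, quantity):
    """Profit from buying `quantity` units at the low price and
    simultaneously selling at the high price."""
    return (price_dear - price_cheap) * quantity

profit = arbitrage_profit(98.0, 100.0, 500)   # 2.0 per unit on 500 units
```

In an efficient market this trade itself bids the cheap price up and the dear price down, which is why the opportunity should not persist.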

Allocational efficiency within the market requires strong pricing efficiency, for it is through fair prices that resources can be allocated to their best use. The fair price of an investment needs to be known by investors, so that they can better understand the cost conditions surrounding the decisions made when selecting their investment portfolio. A high level of emphasis is placed on the information available and on the conditions for reaching fair and fully secured prices. From this it is possible to explain that market efficiency is heavily dependent on the speed and quality of information and the rate at which prices adjust.

Forms of Efficiency

It is possible to find out how share prices are affected by the availability of relevant information within the capital market using empirical tests. In many cases this approach (correlating with pricing efficiency) is used when there is an information deficit, due to insufficient data to carry out tests relating to allocational and operational efficiency. Many of the tests carried out focus on the presence of conditions suitable for investors to make abnormal returns on their investments, and on the extent to which those conditions are feasible across the market.

Weak form efficiency

Weak-form efficiency within the capital market occurs when current share prices reflect all historical information (Koch, 2007). This means that the behaviour of current investment opportunities follows historical movements and price trends. Under this situation, conditions for making abnormal returns on investments through technical analysis are forgone. Empirical testing expresses similar results for weak-form efficiency: share prices are expected to behave in a random pattern, because the information being used arrives at random times and under random circumstances.
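A common empirical check of weak-form efficiency is to estimate the lag-1 autocorrelation of daily returns: if past returns carry exploitable information, the estimate departs from zero. The simulated return series below is an illustrative assumption standing in for real data:

```python
import random

# Sketch of a weak-form efficiency check: estimate the lag-1
# autocorrelation of daily returns. For independent (random-walk)
# returns it should sit near zero, i.e. yesterday's return tells
# you nothing about today's. Simulated data is an assumption.

def lag1_autocorr(returns):
    n = len(returns)
    mean = sum(returns) / n
    cov = sum((returns[i] - mean) * (returns[i + 1] - mean)
              for i in range(n - 1))
    var = sum((r - mean) ** 2 for r in returns)
    return cov / var

random.seed(7)
rets = [random.gauss(0.0, 0.01) for _ in range(5000)]
rho = lag1_autocorr(rets)   # close to zero for independent returns
```

Run on real price data, a persistently non-zero estimate would be evidence against weak-form efficiency; the anomalies discussed later in the essay are departures of exactly this kind.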

Semi-strong form efficiency

This expresses conditions where all share prices reflect all historical information and all publicly available information (Watson and Head, 2007). Furthermore, share prices are expected to react quickly and accurately in incorporating new information as soon as it becomes available. Under semi-strong efficiency, attaining supernormal returns on investments through publicly available information is still not possible, as supported by empirical studies of market efficiency.

Strong form efficiency

Markets with strong-form efficiency are characterised by share prices reflecting all information available in the public and private domains. This condition describes a situation in which information is freely mobile and price is very responsive to the arrival of new information. Those with full informational efficiency can make the best possible decisions in allocating their resources to produce the best returns. Efficient information mobility means that when situations for achieving supernormal returns arise, all participants in the market are aware of them, thereby removing the possibility of supernormal profit. It is generally assumed that under conditions of strong mobility of information there will not be opportunities for investors to achieve supernormal profit, due to the high level of efficiency. Investors who achieve supernormal profits through inside information are often prosecuted for inappropriate behaviour; such behaviour is generally rare compared with overall practice within the market (Wickens, 2006).

Empirical evidence on real financial markets suggests there are some levels of market efficiency (Koch, 2007); it is possible to argue about the range of these efficient characteristics. In case studies of the 2007 financial market crisis, most studies find no evidence of superior returns for technical or fundamental analysis (Mama, 2010). However, anomalies do exist and pose challenges for pricing efficiency. For example, trading at certain times of the year can lead to negative or positive returns on investment. This is known as the calendar effect, arising from changes within the market that are often the result of anomalies. Situations like this raise questions about the possibility of predicting future share prices, although vast research and empirical study have shown that it is not possible to predict market trends based on past data.

Emphasis is placed on the analytical tools used for proving efficiency. The use of graphs and charts expressing correlation is regarded as technical analysis; using this method it is possible to imply a relationship between past and future prices which would allow investors to make supernormal profits. The other technique is fundamental analysis, which uses public information to calculate current market values. It is important to note that, because both forms of analysis are interested in finding supernormal returns on investment, they help increase the speed at which share prices assimilate new market information, thereby unintentionally creating conditions whereby supernormal returns are prevented from being achieved (Watson and Head, 2007). These analyses create new information during the investigation period which is then added to the overall information available within the market.


Relating the discussion points expressed in this essay to the inefficient market behaviours leading to the 2007 financial crisis, the aspect that stands out most as a cause of price inefficiency is information inefficiency combined with semi-strong form efficiency, which resulted in allocational inefficiency. Information inefficiency was present within the stock market in terms of asset pricing, and also within the property market in terms of the risk associated with loans being given out to households. There are characteristics explained in The Economist (2008) and Ivashina and Scharfstein (2010) which lead one to believe that future projections of market behaviour were not being made. This, along with the increase in demand for mortgages (provision of public information), demonstrates that large portions of the finance market (excluding those investing in private information) displayed features more suited to semi-strong efficiency. This assumption correlates with large numbers of households applying for mortgage loans without the relevant information needed to project the rates of return expected for an individual household. Information deficits for households are often generated by the costs incurred when trying to access or compile large volumes of information. There is also a risk that the investment of resources for the purpose of attaining market information might not produce favourable returns, a characteristic shared by both households and corporate investors.

Information inefficiency means prices are not securely and fairly set. Speculators often take advantage of this situation, identifying differentiated prices through inside or private information. Financial market behaviour becomes more unpredictable (Acharya and Richardson, 2009), and risks affecting decision-making are often ignored because of the possibility of making supernormal profits. The financial market then shows trends more closely matching allocational inefficiency, as decisions are not being made on the basis of fair prices. The investments carry high risks along with a high level of opportunity cost, where opportunity cost demonstrates the profit that could have been made assuming market prices were secured and investors were well informed.

This conclusion was reached using fundamental-analysis assumptions along with the efficiency conditions outlined as aspects of a perfectly competitive finance market.


Acharya, V. V. and Richardson, M. (2009) Restoring Financial Stability: How to Repair a Failed System [online] (Last updated 2010) Available at: [Accessed on 14 January 2011].

BBC News (2011) Bank of America pays Fannie Mae and Freddie Mac $2.6bn. BBC News [online] (Last updated January 2011) Available at: [Accessed on 14 January 2011].

Dimson, E. and Mussavian, M. (1998) A brief history of market efficiency. European Financial Management, Wiley Online, Vol. 4(1), pp. 91-193.

Ivashina, V. and Scharfstein, D. (2010) Bank lending during the financial crisis of 2008. Journal of Financial Economics, Elsevier, Vol. 97(3), pp. 319-338.

Koch, A. K. (2007) Empirical Evidence on Asset Pricing Theories and Market Efficiency. Department of Economics, Royal Holloway, University of London [online] (Last updated December 2007) Available at: [Accessed on 14 January 2011].

Mama, H. B. (2010) Information dissemination, market efficiency and the joint test issue. Norderstedt: Books on Demand GmbH.

Malkiel, B. G. (2003) The efficient market hypothesis and its critics. Journal of Economic Perspectives, Vol. 17(1), pp. 59-82.

Sowell, T. (2010) The Housing Boom and Bust: Revised Edition. New York: Basic Books.

The Economist (2008) Call it off. The Economist [online] (Last updated November 2008) Available at: [Accessed on 14 January 2011].

Watson, D. and Head, A. (2007) Corporate Finance: Principles and Practice, 4th edition. Harlow: Financial Times Prentice Hall.

Whitney, M. (2007) US Housing Market Crash to result in the Second Great Depression. Market Oracle [online] (Last updated February 2007) Available at: [Accessed on 14 January 2011].

Wise, P. (2010) Euro in crisis. Financial Times [online] (Last updated 2010) Available at: [Accessed on 14 January 2011].

Wickens, M. (2007) Asset pricing and market efficiency. University of Athens, Department of Economics [online] (Last updated 2007) Available at: [Accessed on 14 January 2011].




The labour market includes all markets in which people sell their mental and physical services in employment, from which they earn their living (Kersley 2006). It is the part of the economy in which various kinds of industrial and commercial services intermediated by people are bought, sold and priced. Employment, therefore, refers to participation in this labour market on its supply and/or demand side: the buying and selling of labour as a distinct factor among the various factors of production, such as capital (Gintis 1987). The individuals selling their services in the labour market are referred to as workers or labourers, and the pay that they receive in return for such contribution and effort is referred to as wages, which can take the form of a weekly wage, a monthly salary, bonuses and other forms of remuneration (CIPD 2008).

Payment, or reward, is a core element of the employment relationship and is defined, in economics, as the portion of the national product paid in aggregate to individuals contributing labour and services, as distinguished from the portion retained by management or reinvested in capital (CIPD 2008). Labour wages are a significant source of income for a major segment of the population and a primary contributor to their sustenance, often referred to as a source of living. They also feature prominently in the economics of governments through their effect on national policies such as taxation and the supply of and demand for goods and services, with the resultant government revenues and resources allocated to a variety of alternative uses (IDS 2006).

Several theories and models have attempted to explain the various phenomena surrounding the labour market, among other markets. This paper delves into the neoclassical theory of pay, its faults, limitations and criticisms, the alternative labour theories challenging it, and strategies and practices for pay determination at the level of the state and the firm.

Neo-classical theory of pay

Analysis of capitalism through the neoclassical theory essentially entails the examination of market relations between determinate actors, featuring technology and psychology, and views the organisation of the capitalist enterprise as the solution to the central challenge of finding the least-cost techniques of production given an array of prices for the various factors (Gintis 1987). This is in line with capitalism’s reduction of essential economic relations to independent exchanges between freely acting, mutually benefiting firms and households. Neoclassical labour economics basically consists of a theory of demand based on marginal productivity and the endeavour of employers to maximise profit, and a theory of supply based on the maximisation of utility by workers (Glen 1976).

This Neoclassical theory encompasses and is fundamentally guided by assumptions including the view that individuals through their rationalization of preferences among various outcomes maximize utility while firms endeavour to maximize their profits; and that people’s actions are based on their consideration of relevant information, all occurring in a circular flow in a closed market (Gintis 1987). Workers in this theory balance gains from the offer of the marginal unit of their contribution (their wage), with their loss of leisure, a disutility, while firms in their hiring of employees balance the resultant cost with the value output of the additional employee (Kessler 2007).
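The two balancing decisions just described are often written as textbook first-order conditions; a minimal sketch (the symbols p, f, u and T are illustrative conventions, not drawn from the cited sources):

```latex
% Firm: choose labour L to maximize profit, given output price p,
% production function f(L) and wage w:
\max_L \; \pi = p\,f(L) - wL
\quad\Rightarrow\quad p\,f'(L^{*}) = w
% the firm hires until the value of the marginal product equals the wage.
%
% Worker: choose hours h to balance the wage against forgone leisure,
% with utility u(c,\ell), consumption c = wh and leisure \ell = T - h:
\max_h \; u(wh,\; T - h)
\quad\Rightarrow\quad w\,u_c = u_\ell
% the marginal gain from the wage equals the marginal disutility of lost leisure.
```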

This theory assumes that each worker has ordered preferences over the jobs in the economy, with a capacity for performing at higher or lower productivity levels in each of them. It thereby treats labour simply as a factor of production (a commodity) and blurs the distinction between labour and labour power. Labour is the entity that enters the production process, the concrete, active process embodied in the worker's contribution and its openness to capitalist exploitation; labour power is the commodity, comprising the capacity to perform productive activity of varying types and intensities. It is labour power that is exchanged in the market, valued and tagged with a price (the wage) (Gintis 1987).

This theory is generally useful in simplifying consideration of the labour market, and consequently of payment, in view of the total capitalist enterprise and the many factors of production employed (Glen 1976). Its strengths lie in traditional pay systems, examined against the objectives of recruitment, retention and employee motivation, with the assumption that equilibrium pay is attained through the interaction of market forces, which are also useful in determining pay. This is of significant use in removing the complexities of attempting to understand social relations in the workplace, with their required focus on psychology (IDS 2006).

Its perspective places value on the relationships between factors or objects and the persons seeking or obtaining them, between the costs of production and the elements subject to the forces of supply and demand (Gintis 1987). In a capital market, each factor's demand and supply are derived, analogously, from those of the final output in order to determine equilibrium in income and its distribution. This basic Neoclassical model of the labour market rests on assumptions that include perfect competition, profit maximization and homogeneity within the workforce, suggesting that wages should be equal. Its vision involves households and firms, described as economic agents, optimizing subject to constraints or scarcity, with the resulting tensions and decision problems worked out in the market (Kessler 2007).

This theory is, however, faulted for being rigid and highly mechanistic, with strict assumptions, and for its treatment of labour as a commodity, blurring the distinction between the active contribution of workers (labour) and the labour power exchanged on the market for pay as a factor of production. Labour cannot be categorized as technological data, as it depends on the worker's biology, consciousness and skill, the conditions of the labour market, solidarity with others, and the social organization of the work process: it is in essence a social relationship (Glen 1976). In this theory the worker is valued as capital, whose value rises and falls with demand and supply and is regarded like any other commodity or factor of production, thereby reducing all social relationships to exchange relations, a feature of the capitalist production process (Kessler 2007).

It is also faulted for its handling of the labour exchange, which rests on the crucial assumption that the exchange of labour for a wage can be treated as an exchange of commodities, a derivation from mathematical formulations of general equilibrium theory (Gintis 1987). This is, however, incorrect, as the hallmark of the capitalist firm is its reliance on the authoritative allocation of activity rather than on market forces. This is evident when internal movements within departments occur not because of relative changes in prices, as the Neoclassical theory would imply, but through the authority of management giving orders (Traxler 2003).

The tangible substance of labour entering the production process is conceptually distinct and must be analyzed in different terms. Labour is not exchanged for the wage according to the principles of the market, as capitalist-worker power relations are the result of the economic organization, which cannot be assumed as given before an economic analysis is conducted (Traxler 2003).

The limitations of the Neoclassical theory of pay include its inability to explain the existence of pay differentials, as wage scales generally deviate from its demand-and-supply analysis (Glen 1976). Wage scales and their manipulation are often instruments used in an enterprise's labour exchange to ensure integrity: an internal labour market that develops alongside the labour market as traditionally perceived but differs fundamentally and qualitatively in the nature of the exchange, and that is also an instrument through which the state controls and manages its microeconomic environment (CIPD 2008).

Other limitations include its attempts to attribute discrepancies to market failures or to events occurring outside the closed market system and interfering with its operation, as well as its assertion of the efficiency of capitalist production internal to the organization (Glen 1976). If profit is not deemed to entail efficiency, the wage may represent the marginal contribution of the worker without representing their productivity. Evidence bearing on the adequacy of the marginal-productivity assertion, such as indices of skill and ability, behaviour records and supervisor ratings, is indirect and can hardly be measured precisely. Individuals with widely differing productivities will therefore often occupy the same positions covered by the same wage (Kersley 2006).

Its major criticism is premised on the equalization of pay through demand and supply: the assumption that forces similar to those of other markets are also at play in the labour market and that rates of pay should therefore move with those forces (Glen 1976). This is, however, a false premise, as there exist pay differentials within the labour market that are unrelated to demand and supply: differentials resulting from variations in the quality of human capital; barriers to movement between sectors arising from the high costs (including time) of training, which create non-competing groups within the labour market; the creation of partially closed markets through balkanization and unionization; and a myriad of social factors that often influence pay (IDS 2006). Alternative labour theories have thus been developed in attempts to explain the existence of pay differentials within the labour market and to cover the shortcomings of the Neoclassical theory of pay.

Alternative labour market theories

Human capital theory

Human capital theory relaxes the homogeneity assumption of the basic Neoclassical model by addressing the heterogeneous nature of the labour market. It explains wage differentials as a consequence of differences in human capital stocks: the knowledge, skills, education, training and aptitudes that individuals or groups possess (Forth 2000). In the basic Neoclassical model, wages are paid on the basis of the marginal product of labour, while human capital theory links wage differentials to productivity differentials, the former a by-product of the latter (Gintis 1987).

Disparities in human capital translate into variable productivity and therefore different wages earned. For instance, workers with more education or unique learned skills earn, on average, higher wages, which human capital analysis explains as resulting from the increased productivity of those with enhanced training. It follows that individuals who invest time (an opportunity cost that can be evaluated in monetary terms) and money to gain more skill enhance both their human capital stock and ultimately their productivity (Forth 2000). This results in wage differentials between them and those within the workforce who made no such investment.
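This investment logic is commonly formalized in the labour-economics literature as a Mincer-type earnings function; a sketch (the symbols are illustrative conventions, not drawn from the sources cited above):

```latex
% Mincer-type earnings function: log wage of individual i as a function
% of years of schooling s_i and labour-market experience x_i
\ln w_i = \beta_0 + \beta_1 s_i + \beta_2 x_i + \beta_3 x_i^2 + \varepsilon_i
% \beta_1 is read as the average return to an extra year of schooling;
% the quadratic in experience captures human capital accumulating early
% in a career and depreciating later.
```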

Non competing groups

These consist of several defined classes based mainly on gaps in skills or in location, creating barriers to free movement in the labour market. There is therefore not a single labour market but a number of separate markets divided mainly by skills gaps. Workers in one of these markets do not compete with workers in the others, and the labour market is therefore not as homogeneous as the Neoclassical theory assumes (Forth 2000).

Sportspeople such as footballers, with their special talent, like other occupations requiring unique talents such as art and music, form non-competing groups with their own closed labour markets. Despite pay levels that under the Neoclassical theory would trigger movement through the forces of demand and supply, there is little mobility between these markets and others (Kersley 2006). A footballer is therefore not in competition with a doctor: they are members of non-competing groups with little chance of transfer between their markets, hindered by skills gaps and rare abilities.

Balkanization

Balkanization results when workers organize themselves into unions seeking to establish sovereignty over job territory, a form of private government of a section of the labour market (Kersley 2006). Within this territory, the demands of union members are met before the petitions of outsiders are considered. This effectively changes market considerations from individual preferences and involvement to a more plural and partially closed society (Traxler 2003).

Balkanization creates boundaries within the labour market that are specific and difficult to cross, with the association or union defining the points of competition, the groups that may compete, and the grounds for such competition (CIPD 2008). Examples of balkanized groups include doctors and accountants, who have strong and wide-reaching associations that take up the mandate of overseeing and controlling what happens in their specific labour markets within the boundaries created. Under such balkanization, with its characteristic barriers to free movement and individual preference, the labour market is not homogeneous as described in the Neoclassical theory (Gintis 1987).

Social factors

Wage differentials can also result from the direct influence of a number of social factors, including gender, geographic location, demographics and race. Of particular significance is the gender factor, which produces a pay gap between men and women. The weekly earnings of women working full-time rose 4.2% in 2006 while those of men rose by 3.5%; even so, women's earnings remained significantly below those of their male counterparts (CIPD 2008).

The gender pay gap measured by the Annual Survey of Hours and Earnings (ASHE), published by the Office for National Statistics (ONS), using median hourly earnings excluding overtime (the government's preferred measure), narrowed from 12.1% in 2009 (ASHE 2009) to 10.2% in 2010 (ASHE 2010). This resulted not from a rise in women's wages but from slower growth in men's pay, at 0.3% in 2010 against 2.6% for women, so the variation persists. The private sector shows a wider gender pay gap, at 19.8%, nearly double that in the public sector at 10% (ASHE 2010). Alternatively, based on average hourly earnings (which give more weight to the pay extremes at the top and bottom), ASHE found that the gender pay gap fell almost a percentage point, to 15.5% in 2010 from 16.4% in 2009 (ASHE 2010). The Equal Opportunities Commission (EOC) argues that these extreme values are vital to explaining the pay gap, as the highest-earning jobs remain heavily skewed towards male workers, and the lowest-earning towards women (Forth 2000).
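The ASHE percentages above follow from a simple convention: the gap is the difference in earnings expressed as a share of men's earnings. A minimal sketch of the calculation (the hourly figures below are hypothetical, not ASHE data):

```python
def pay_gap_pct(men_earnings: float, women_earnings: float) -> float:
    """Gender pay gap as a percentage of men's earnings,
    the convention used in the ASHE figures quoted above."""
    return (men_earnings - women_earnings) / men_earnings * 100

# Illustrative (hypothetical) median hourly earnings, excluding overtime:
men, women = 13.00, 11.67
print(f"gender pay gap: {pay_gap_pct(men, women):.1f}%")  # ~10.2% with these figures
```

Note that because the gap is expressed relative to men's earnings, it can narrow either because women's pay rises or, as the ASHE figures above show for 2010, because growth in men's pay slows.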

Structurally, this gender pay gap could result from differences between men and women in the jobs they engage in; occupational segregation within the workforce and the general undervaluation of the jobs women dominate; the length of work experience, including spells of part-time employment; qualifications and skills; issues regarding travel to work; and other unobserved factors, including discrimination at work (Kersley 2006).

Role of the state

The state takes up an oversight role in pay determination as it seeks to manage its economic environment. The labour market substantially affects the performance of the economic system through its effect on opportunities for workers and its contribution to the national product, and the state must also mind its expenditure limits and inflation targets (Traxler 2003). There are several mechanisms through which the state engages in pay determination and influences pay levels. These include direct approaches, such as policy formulation to define pay levels and the use of collective bargaining machinery, and indirect mechanisms, such as the emulation of private-sector strategies and the development of sector schemes covering labour markets.

Direct mechanisms include legislation such as the Equal Pay Act, seeking consistency, equality and fairness in pay and ensuring that all receive the same pay for the same work, across gender. There is also specific legislation, such as the provisions covering remuneration recommendations for school teachers under the 1991 Act, which must be approved by parliament (IDS 2006). These mechanisms also include the involvement of pay and labour boards in the local determination of grading structures and rates, and pay review bodies (PRB systems) that determine pay for public servants, acting midway between collective bargaining and government imposition, with features in common with the statutory wage boards in which various parties are engaged in deciding outcomes (Forth 2000).

Collective bargaining entails either negotiation of industry-wide agreements on minimum rates that would apply to an organization, or bargaining at the enterprise level over pay among other terms. This often involves unions and associations engaging in the negotiations, and it frequently contributes to the setting of minimum wage levels (Forth 2000).

The indirect mechanisms entail the use of job evaluation and analytical schemes. These seek to enhance equity in pay and grading systems by determining the relative importance of jobs, thus informing decisions about banding and remuneration. This is done through systematic processes that bring a degree of objectivity to such decisions (CIPD 2008). The schemes include proprietary schemes used by multiple employers and supplied by consultancies; overarching schemes that cover an entire part of the public sector and meet the needs of their particular sectors; home-grown schemes of individual organizations; and hybrid schemes in between (Kersley 2006).

An example of an overarching scheme is the National Health Service (NHS) scheme, which can also be used, for a fee, by private organizations. It covers a wide range of jobs within the health sector, from the obvious medical roles to management, administration and the jobs in between, with accompanying pay-system strategies (Traxler 2003).

Firm level

Within the firm, pay determination and strategies are influenced by external factors, such as the involvement of the state in setting the minimum wage and comparator data from industry surveys of competitive rates, as well as internal considerations, such as the performance and productivity of individual employees. These influence company profitability and therefore the resources available (its ability to pay); the focus is consequently on the effective organizational structures necessary in a capitalist system to enhance efficiency and productivity (Kersley 2006).

A grading or pay structure identifies the expectations from the effort-bargain at different levels and provides a basis for differentials (Kersley 2006). The hierarchical structure in capitalist enterprises, with its matching pay structure, is indicative of pay differentials within the organization and industry, reinforcing the structure and desired goals of the organization, and it features discretionary and variable pay systems that can be flexibly adjusted to suit patterns (Forth 2000).

The worth of each job or employee is also subject to external influences, such as relative market value and the social value of particular skills and duties, which vary over time (IDS 2006). The composition of pay systems within organizations has shifted continually towards performance orientation, intended to enhance productivity and profits. This has led to the development of performance-related pay, awarded in addition to basic pay on criteria such as individual performance or profitability within the organization (Kessler 2007).

Conclusion

The Neoclassical theory, though successful in explaining various phenomena surrounding the labour exchange and in simplifying its consideration among the several factors of production in the capitalist enterprise, has shortcomings and limitations: its rigidity, its handling of labour as a commodity and mere factor of production, and its focus on the market forces of supply and demand, which labour is thereby deemed to obey.

However, there clearly exist deviations from its assumptions that result in pay differentials, which are then explained by alternative theories: human capital theory, which explains pay variations through differences in workers' skill sets and abilities; balkanization and non-competing groups, which shatter the homogeneity assumption; and a variety of social factors. The state and the firm have also reinforced such differentials through the pay strategies, practices and schemes available to them and suited to their particular goals: economic management for the state and profitability for the firm.


Annual Survey of Hours and Earnings (ASHE), 2009. Office for National Statistics. Viewed 12th May 2012.

Annual Survey of Hours and Earnings (ASHE), 2010. Office for National Statistics. Viewed 12th May 2012.

Brown, W., Marginson, P. and Walsh, J., 2003. “The Management of Pay as the Influence of Collective Bargaining Diminishes.” In: P. Edwards (ed.) Industrial Relations: Theory and Practice, 2nd ed., Oxford, Blackwell.

CIPD, 2008. Reward Management. Survey Report, London, Chartered Institute of Personnel Development.

Forth, J. and Millward, N., 2000a. The Determinants of Pay Levels and Fringe Benefit Provision in Britain. Discussion Paper No. 171, London, National Institute of Economic and Social Research.

Gintis, H., 1987. “The Nature of Labour Exchange and the Theory of Capitalist Production.” In: Albelda, R., Gunn, C. and Waller, W. (eds.), Alternatives to Economic Orthodoxy: A Reader in Political Economy, pp. 68-88, Armonk, NY, Sharpe.

Glen, C., 1976. “The Challenge of Segmented Labor Market Theories to Orthodox Theory: A Survey.” In: Journal of Economic Literature. Vol. 14, No. 4, pp. 1215-1257. Wisconsin, USA, American Economic Association

IDS, 2006. Developments in occupational pay differentiation, research report for the Office for Manpower economics by Incomes Data Services, October.

Kersley, B. et al., 2006. “The Determination of Pay and Other Terms and Conditions.” In: Inside the workplace: findings from the 2004 workplace employment relations survey. pp.178-206, 347-350, London, Routledge.

Kessler, I., 2007. “Reward choices: strategy and equity.” In: J. Storey (ed.) Human Resource Management: A Critical text, 3rd ed., London, Thomson.

Traxler, F., 2003. “Bargaining (De) centralization, macroeconomic performance and control over employment relationship.” In: British Journal of Industrial Relations, March, Vol. 41(1): 1-27


The Impact of Trading Volumes on Stock Market Volatility and Returns: A Case of the Swiss Stock Market


1.1. Background to the Study

There is a general consensus among financial economists that the distribution of stock returns can be described by two moments: (i) the conditional mean; and (ii) the conditional variance (volatility). The conditional mean is the first moment and is difficult to predict. In contrast, the conditional variance, a higher moment, can be predicted with accurate forecasting techniques (Mandelbrot, 1963; Fama, 1965). Considerable research has been devoted to modelling the conditional mean, but far less to modelling the conditional variance, or volatility. It was not until the October 1987 stock market crash that researchers and financial market regulators began devoting attention to the modelling of stock market volatility (McMillan et al., 2000).

One may ask why there is such interest in understanding the volatility of stock markets. It is important to note that while stock returns themselves cannot be predicted, their volatility can be. Volatility is a measure of financial risk (Yu, 2002). Tests of market efficiency that rely on stock returns must account for heteroskedasticity in order to arrive at appropriate asymptotic distributions of the test statistics. Furthermore, empirically relevant pricing models typically relate the risk premium to the second moments of returns and other processes. Being able to determine the future volatility of the returns on a particular stock therefore enables investors and regulators to understand the risk inherent in investing in it, so that better investment decisions can be made. Nor is volatility limited to the equity market: in recent years, bond and foreign exchange markets have become increasingly volatile, raising important public-policy issues regarding financial and economic stability (Yu, 2002).
Developing accurate models of volatility is a critical input to the pricing of financial securities and their derivatives (Bollerslev, 1997). The Black-Scholes option pricing model, for example, relies heavily on volatility as a key input to the pricing of stock options (Bodie et al., 2007). In addition to its importance in pricing financial assets, volatility is also important for the management of financial risk (Andersen et al., 1999).
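Volatility's role as the key unobservable input can be made concrete with the standard Black-Scholes call formula; a minimal sketch in Python (the parameter values are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call. sigma (volatility) is the
    one input that is not directly observable and must be forecast."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# The same option under two volatility forecasts: the price is highly
# sensitive to sigma, which is why volatility forecasting matters.
print(round(bs_call(100, 100, 1.0, 0.05, 0.20), 2))  # ≈ 10.45
print(round(bs_call(100, 100, 1.0, 0.05, 0.40), 2))  # higher sigma → higher price
```

Since every other input (spot, strike, rate, maturity) is observable, an error in the volatility forecast feeds directly into a mispriced option.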

Most work on modelling volatility has been on forecasting it, and most forecasting models focus on how past stock return volatility affects future volatility. This has been the case despite the potential impact that trading volume can have on the volatility of stock prices. As with any asset, demand and supply affect the price of a stock, and the level of demand and supply is reflected in trading volumes: the higher the trading volume, the higher the demand and supply (Yu, 2002). One should therefore expect the price of a stock with a high trading volume to change much more often than that of one with a low trading volume. Brailsford (1994) suggests that trading volume should have a positive impact on the volatility of stock returns. Despite this potential impact, substantial effort has not been devoted to understanding the relationship and its implications for stock market investments. The objective of this paper is to investigate the relationship between trading volume and stock return volatility in the Swiss stock market.

1.2. Objective of the study

This study seeks to analyse the impact of trading volume on stock return volatility in the Swiss Stock market. The study aims at achieving this objective by using a regression model that models changes in volatility as a function of trading volume.

1.3. Research Questions

Based on the research objective, the following research questions will be answered in the course of this study:

What is the impact of trading volume on the volatility of the Swiss stock market?
What is the impact of trading volume on the returns of the Swiss stock market?
How does the relationship differ across daily, weekly, and monthly stock returns?
What are the implications of these findings for investors and stock market regulators?

1.4. Limitations of the Study

The study is limited to the Swiss stock market, which means that results cannot easily be generalised to other stock markets. Moreover, the study employs regression analysis, which depends on a number of underlying assumptions, such as normality of stock returns, no autocorrelation in stock returns and constant variance across returns. In reality these assumptions may be violated, in which case results obtained using regression analysis may be misleading. Finally, the study focuses only on trading volume as a determinant of stock return volatility; it therefore ignores the possibility that other variables may be at work.

2. Literature Review

Volatility is a topic that has gained tremendous attention from both financial and economic researchers. Most researchers focus on developing models for predicting volatility (e.g., Engle, 1982; Engle and Bollerslev, 1986; Bollerslev, 1990). Some studies have focused on the forecast accuracy of volatility prediction models (e.g., McMillan et al., 2000; Yu, 2002; Andersen et al., 2003; Andersen et al., 2007). A few studies have examined the term structure of volatility. Among these, Peters (1994) suggests that volatility follows a random walk (Brownian motion) and scales with the square root of time. In contrast, Balaban (1995) provides evidence from the Turkish stock market that volatility moves at a faster rate than the square root of time, indicating that volatility does not follow a random walk. Yilmaz (1997) provides results somewhat consistent with Peters (1994): although the term structure of volatility is not totally consistent with Brownian motion, it exhibits random-walk scaling with the square root of time.

Recent studies have focused on the link between trading volume and stock returns, as well as between trading volume and stock return volatility. Two main hypotheses have been developed and tested in the literature: (i) the Mixture of Distributions Hypothesis (MDH); and (ii) the Sequential Information Arrival Hypothesis. Both hypotheses support a positive, contemporaneous link between trading volume and absolute returns, and assume asymmetric effects of price increases and decreases for futures contracts (Karpoff, 1987). According to the MDH, price changes and trading volume are jointly driven by a common underlying information-flow variable (Clark, 1973). This hypothesis has been tested across a number of stock markets. For example, Lee and Nam (2000) provide evidence in support of the hypothesis using data from the Korean stock market, and positive results have likewise been obtained for the Polish stock market (Bohl and Henke, 2003). In contrast, Luckey (2005) provides mixed evidence using data from the Irish stock market.

Ragunathan and Peker (1997) investigated the relationship between trading volume and price changes in the Australian stock market. Their evidence suggests that trading volume has a positive relationship with stock return volatility, evidenced by an asymmetric volatility response to unanticipated changes in trading volume: positive unanticipated changes in volume produced an average increase in volatility of 76 percent, whereas negative unanticipated changes produced a smaller volatility response. This suggests that investors are more responsive to upward movements in stock prices than to downward movements. When the price of a stock increases unexpectedly, most investors tend to believe that the price will continue to rise; trading volume consequently increases, which results in an increase in the volatility of stock returns.

There is also a strand of research that focuses entirely on the relationship between trading volume and stock returns, mostly employing Granger causality tests. For example, Campbell et al. (1993) observe that price changes influenced by high volume tend to be reversed, and that the reversal is less severe on days when trading volume is low than on days when it is high. Blume et al. (1994) argue that past prices and trading volume can be used to forecast future prices. Chordia and Swaminathan (2000) analyse the relationship between volume and short-term returns and conclude that trading volume plays a significant role in propagating a wide range of market information. Some studies have employed stochastic time-series models of conditional heteroskedasticity to establish whether trading volume contains information about stock returns.

Lamoureux and Lastrapes (1990) employ this model to determine whether residual GARCH effects remain after the conditional volatility specification is expanded to account for contemporaneous trading volume. Chen et al. (2001) argue that incorporating contemporaneous trading volume into the GARCH specification does not eliminate the persistence in volatility. Brailsford (1996) observes that the volatility response was significant across three different measures of trading volume, regardless of direction, showing a strong link between return volatility and trading volume.

3. Methodology and Data Description

3.1. Research Methods

3.1.1. Linear and Non-Linear Trend in Trading Volume

Chen et al. (2001) observe linear and non-linear trends in trading-volume time series. This means that, to achieve accurate results, the series must be detrended using a model of the form

V_t = α + β1·t + β2·t² + ε_t

where:

V_t = raw trading volume at time t;

t and t² are the linear and quadratic time trends, respectively; the residual ε_t gives the detrended trading volume.
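The detrending step can be sketched as a quadratic least-squares fit whose residuals form the detrended series; a minimal numpy illustration on synthetic data (not the authors' exact specification):

```python
import numpy as np

def detrend_volume(v: np.ndarray) -> np.ndarray:
    """Remove linear and quadratic time trends from a volume series;
    the residuals are the detrended volume used in later regressions."""
    t = np.arange(len(v), dtype=float)
    coeffs = np.polyfit(t, v, deg=2)      # fit v_t = b2*t^2 + b1*t + b0
    return v - np.polyval(coeffs, t)      # residuals = detrended series

# Synthetic volume with a deliberate quadratic trend plus noise:
rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
v = 50 + 0.3 * t + 0.01 * t**2 + rng.normal(0, 2, size=200)
v_detrended = detrend_volume(v)
print(round(float(v_detrended.mean()), 6))  # ≈ 0: the trend has been removed
```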

3.1.2. Unit Root Testing

Most time-series models rely on the assumption that the data are stationary. Where this assumption is violated, conducting analysis on non-stationary series can produce misleading findings. In order to ensure that the data in this study are stationary, the Augmented Dickey-Fuller (ADF) unit root test will be employed. The test regression can be stated (in its trend-augmented form) as

ΔY_t = α + βt + γY_{t−1} + δ1·ΔY_{t−1} + … + δp·ΔY_{t−p} + ε_t

where the null hypothesis of a unit root is γ = 0.
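In practice the ADF test would be run with a library routine; the core Dickey-Fuller regression behind it can nonetheless be sketched directly. A simplified version (no augmentation lags, synthetic data) illustrating how the coefficient on the lagged level separates a unit-root series from a stationary one:

```python
import numpy as np

def df_gamma(y: np.ndarray) -> float:
    """OLS estimate of gamma in the Dickey-Fuller regression
    dy_t = alpha + gamma * y_{t-1} + e_t.
    gamma near 0 suggests a unit root; clearly negative gamma suggests
    stationarity. (The full ADF test adds lagged dy terms and compares
    the t-statistic to Dickey-Fuller critical values.)"""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    return float(beta[1])

rng = np.random.default_rng(1)
e = rng.normal(size=500)
random_walk = np.cumsum(e)             # unit-root process
ar1 = np.zeros(500)                    # stationary AR(1) with phi = 0.5
for i in range(1, 500):
    ar1[i] = 0.5 * ar1[i - 1] + e[i]

print(round(df_gamma(random_walk), 3))  # close to 0
print(round(df_gamma(ar1), 3))          # close to phi - 1 = -0.5
```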

3.1.3. Trading Volume and Stock Returns

In order to determine whether the stylised facts about stock returns and trading volume fit the data for the Swiss stock market, the contemporaneous correlation will be tested using regressions of the general form

R_t = α + β·V_t + ε_t

where:

V_t = detrended trading volume;

R_t = stock return.
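A contemporaneous regression of this kind can be sketched with ordinary least squares on synthetic data (the true slope of 0.4 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
volume = rng.normal(size=n)             # stand-in for detrended volume V_t
# Returns generated with a true contemporaneous slope of 0.4:
returns = 0.4 * volume + rng.normal(scale=0.5, size=n)

# OLS fit R_t = a + b * V_t; b measures the contemporaneous association
b, a = np.polyfit(volume, returns, deg=1)
print(round(float(b), 2))  # recovers a slope near the true value of 0.4
```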

3.1.4. Trading Volume and Stock Returns Fluctuations

In order to test the relationship between trading volume and stock returns, this paper employs the analysis used in Chen et al. (2001). Bivariate autoregressive models are specified, which enable one to determine Granger causality between trading volume and returns. The models take the form

R_t = α0 + Σ(i=1..p) αi·R_{t−i} + Σ(i=1..p) βi·V_{t−i} + ε_t

V_t = γ0 + Σ(i=1..p) γi·V_{t−i} + Σ(i=1..p) δi·R_{t−i} + u_t

where:

V_t = detrended trading volume at time t;

R_t = stock return at time t.

3.1.5. Trading Volume and Conditional Volatility

To measure the link between trading volume and volatility, the EGARCH model will be used. Specifically, the following EGARCH(1,1) specification, augmented with trading volume, will be used:

ln(σ²_t) = ω + β ln(σ²_{t−1}) + α(|ε_{t−1}|/σ_{t−1} − √(2/π)) + γ(ε_{t−1}/σ_{t−1}) + δV_t

Given that the flow of information to the market cannot be directly observed, trading volume is used as a proxy for information flow: systematic changes in trading volume are taken to reflect the arrival of new information in the market, so a significant δ indicates that volume helps explain conditional volatility.
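To illustrate the shape of this specification (with illustrative parameter values, not estimates, and without the volume term), the EGARCH(1,1) recursion can be simulated directly:

```python
import numpy as np

def simulate_egarch(n, omega=-0.1, beta=0.95, alpha=0.3, gamma=-0.1, seed=4):
    """Simulate returns from an EGARCH(1,1) process:

        ln(s2_t) = omega + beta*ln(s2_{t-1})
                   + alpha*(|z_{t-1}| - sqrt(2/pi)) + gamma*z_{t-1}

    where z_t are i.i.d. standard normal shocks and s2_t is the
    conditional variance.  Parameter values are illustrative only.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n)
    log_var = np.empty(n)
    log_var[0] = omega / (1.0 - beta)  # start at the unconditional level
    for t in range(1, n):
        log_var[t] = (omega + beta * log_var[t - 1]
                      + alpha * (abs(z[t - 1]) - np.sqrt(2.0 / np.pi))
                      + gamma * z[t - 1])
    sigma = np.exp(0.5 * log_var)
    return sigma * z, sigma

returns, sigma = simulate_egarch(2000)
```

The negative γ builds in the leverage effect (negative shocks raise next-period volatility more than positive ones), and because the log of the variance is modelled, σ²_t is positive without any parameter restrictions; the simulated returns exhibit the fat tails and volatility clustering the model is designed to capture.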

3.2. Data Description

The data used for this study will be collected from the Thomson Financial DataStream database. They will include trading volume and stock price data over the five-year period from January 2007 to December 2011, observed at three frequencies: (i) daily, (ii) weekly, and (iii) monthly. The data will cover all listed companies on the Swiss stock market.


In conclusion, this research proposal lays the groundwork for a project that would investigate the impact of stock market volatility within the Swiss stock market. A review of existing literature has shown varying findings for stock markets around the world, and the researcher hopes that by conducting a quantitative analysis based on existing findings and pre-existing research methods, such as those of Chen et al. (2001), the results will help stock market investors and regulators better understand how to deal with stock market volatility and better predict earnings potential. This study is not, however, without its limitations, one of which is the difficulty inherent in generalising the results to other stock markets.

References

Andersen, T., Bollerslev, T., Diebold, F.X. and Labys, P. (2003) Modeling and Forecasting Realized Volatility, Econometrica, Volume 71, Issue 2

Andersen, T. G., Bollerslev, T., Diebold, F. X. (2007) Roughing It Up: Including Jump Components In The Measurement, Modeling And Forecasting Of Return Volatility, National Bureau Of Economic Research, Working Paper 11775

Balaban, E. (2004) Comparative forecasting performance of symmetric and asymmetric conditional volatility models of an exchange rate Economics Letters, Volume 83, Issue 1, Pages 99-105

Blume, L., D. Easley, and M. O’Hara, (1994), “Market statistics and technical analysis: The role of volume”, Journal of Finance, Vol: 49, No: 1, pp. 153-182

Bohl, M.T and H. Henke, (2003), “Trading volume and stock market activity: the Polish Case”, International Review of Financial Analysis, Vol.12, pp. 513-525.

Bollerslev, T. (1986) Generalized autoregressive conditional heteroscedasticity, Journal of Econometrics, 31, 307- 27.

Brailsford, T. J. (1996), “The Empirical Relationship Between Trading Volume, Returns and Volatility”, Accounting and Finance, Vol. 36, pp. 89- 111

Chen, G., M. Firth, and O.M. Rui, (2001), “The dynamic relation between stock returns, trading volume, and volatility”, The Financial Review, Vol. 38, pp. 153–174.

Chordia, T. and B. Swaminathan, (2000), “Trading volume and cross-autocorrelations in stock returns”, Journal of Finance, Vol. 55, pp. 913-935.

Clark, P. K. (1973), “A subordinated stochastic process model with finite variance for speculative prices”, Econometrica, Vol. 4, pp. 135-155.

Engle, R. F. (1982) Autoregressive conditional heteroscedasticity with estimates of the variance of UK inflation, Econometrica, 50, 987- 1007.

Fama, E. F. (1965) The behaviour of stock market prices, Journal of Business, 38, 34-105.

Karpoff, J. M. (1987), “A Relation between Price Changes and Trading Volume: A Survey”, Journal of Financial and Quantitative Analysis, Vol. 22, pp. 109-126.

Lamoureux, C. G. and Lastrapes, W. D. (1990), “Heteroskedasticity in Stock Return Data: Volume versus GARCH Effects”, Journal of Finance, Vol. 45, pp. 221-230.

McMillan D., Speight A., and Apgwilym O. (2000) Forecasting UK stock market volatility, Applied Financial Economics, 10, 435-448.

Wang, J., (1994), “A model of competitive trading volume”, Journal of Political Economy, Vol.102, pp. 127–168.


Global Marketing: An Analysis of IKEA and Ashley Furniture Industries’ Marketing Activities in the Furniture Market


For most ambitious companies in today’s complex business environment, gaining competitive advantage and achieving expansion in capacity often requires internationalizing operations and entering new markets with the goal of building a broader and more diversified customer base. However, internationalization typically presents the problem of how to establish the company’s business or brand in a foreign market, considering the cultural and contextual differences in global markets (De Mooij, 1998). Due to the spread of globalization and the convergence of markets and economies, it has been increasingly acknowledged that a broad range of products and services can potentially have a global appeal and generate considerable revenues across the world. As long as the marketing activities designed to promote products and services are tailored to suit respective markets in line with the prevailing cultural and environmental realities, there is every possibility of achieving commercial success (Kandampully and Duddy, 1999).

Accordingly, global marketing requires a flexible framework or structure that enables companies to respond dynamically to observed differences in the respective markets in which they do business (Philip et al., 1994). This makes it possible to organize, plan, and control global marketing activities effectively and efficiently (Keegan, 1989). This report therefore focuses on the marketing activities of the furniture market, specifically on two leading international furniture manufacturing and retailing companies: IKEA and Ashley Furniture Industries, with a view to establishing the kinds of marketing activities they have adopted to establish their presence in specific countries.

IKEA in Russia

As the largest furniture retailer in the world, with extensive operations in several markets (Armitstead, 2010), IKEA entered the Russian furniture market in line with its sustained global expansion, driven by strong product development and differentiation, attention to operational detail, and emphasis on consistent cost control (Thomas White, 2011). In spite of the relative strengths that IKEA has accumulated over its long history of international retail business, it is widely argued that the internationalization of retailers is a decidedly challenging and complex proposition. This is because, in contrast with manufacturing, for instance, international retailers often do not have the benefit of a traditional export strategy, since they need to develop and manage several stores located in their new markets (Dawson, 1994). For IKEA, therefore, it was imperative to devise informed strategies that would not only ensure efficient development and management of its stores in Russia, but also deliver effective marketing activities that would give it competitive advantage in the tough Russian market. Consequently, the company found it necessary to adapt its promotion and communication strategies for Russia in order to present its product range to the Russian public in a manner that paid due attention to the local characteristics and cultural realities of the market. This requires a strategic balance: in line with the interactive approach to global marketing suggested by Gillespie (2004), companies operating in foreign markets need to determine how best to maintain their traditional values and brand ideals while also adapting their brand image to suit the requirements of those markets. As such, IKEA’s marketing activities in Russia attempt to preserve the company’s Swedish heritage whilst adapting its strategies to suit local circumstances.

Product Formulation and Packaging

Through its popular flat packaging and product design, product formulation and packaging is a central aspect of IKEA’s marketing activities in Russia. The company packages its products with the strategy of aligning its furniture range with the local needs of Russian consumers, presenting its offer in a manner compatible with Russians’ traditional home decoration styles. IKEA has nonetheless been criticised by Russian consumers on the grounds that its product designs are sometimes not reflective of the Russian market, tending to be generic and often better suited to other climes (Bloomberg, July 2, 2009). IKEA rejects such criticism, stating: “It is essential to note that we are a global company which strives to deliver our best in markets where we operate by adapting our designs and processes, it should be noted however that this takes a gradual and learning process which we are constantly striving towards, it should also be noted that several of our products formation have Russian orientation therefore it is not completely true that we don’t think like Russians, of course we do” (Bloomberg, July 2, 2009). Strategies used by the company to achieve this objective include extensive use of surveys and home visits, reliance on customer and shop-floor feedback, friendly and interactive store designs, and constant explanation of the philosophy behind its low-price/high-quality products through multiple media, catalogues, and shop-floor staff. Collectively, these strategies help IKEA retain its traditional product design and corporate brand image, albeit with less adaptation to the Russian market than many consumers would like.
This could be because IKEA, as a global brand, understands the Russian market less well, or because it deliberately chooses not to adapt its products to the Russian market, preferring instead to remain a brand offering standard furniture that should fit into any market (see: NYTimes, June 22, 2009).

Market Driving/Promotion

It is worth pointing out that IKEA adopts a market driving approach in Russia, possibly because of the fast-developing and rapidly changing demand that characterizes the market (Tarnovskaya et al., 2005). Central to its market driving approach is the use of corporate branding, which entails aligning its internal organizational systems and external networks with its core attitudes and values in order to present the image of a strong corporate brand. The company’s unique collection of values and attitudes, known internally as the “IKEA Way”, expresses a wide array of operational details including its product range, distribution system, history, and management style – all of which coincide functionally and permeate the company’s strategy formulation, product development, and organizational processes. One major challenge to IKEA’s market driving force in Russia, however, is the local business ethics environment, which is said to be riddled with institutional bribery and many forms of corruption (The Age, March 2, 2010). In terms of promotion, IKEA relies heavily on television and billboard advertising in its local marketing in Russia. Apart from highlighting the high quality and relatively low cost of its products in these advertisements, the company also seeks to attract potential customers through humour and attention-catching devices. One of the most important elements in IKEA’s corporate branding underpinning its marketing activities in Russia is its “cost consciousness” value, a well-advertised testimony to the affordability of the company’s product range. Although IKEA has consistently positioned itself as an affordable brand, many Russians believe that its prices are above the market average (The Age, March 2, 2010).

Another of the strong market driving approaches of IKEA in Russia is that it has found it important to develop a network of local suppliers in the Russian wood industry in order to help sustain its low product price strategy and preserve high standards that are consistent with its corporate brand values. IKEA strives to achieve this difficult task by increasing the competitiveness and competencies of Russian suppliers in order to create the conditions that could lead to their integration into the company’s global supply network. Supply chain and distribution channel issues are at the heart of IKEA’s marketing considerations in Russia, especially because the company’s competitiveness and unique selling proposition relies substantially on comparatively low product prices (Sanders, 2010).

Distribution Network & Pricing

One of the key marketing activities that IKEA considered crucial was the selection and development of an effective distribution system as a way of bypassing the high import duties in Russia and using alternative channels to get its goods into the country (Jonsson, 2005). In order to achieve this objective, the company has explored possibilities of producing more of its products locally; this not only helps resolve the problem of distribution and high import duties, it also adds to the appeal of the products and makes them more marketable to the local population. As a result of its local production in Russia, IKEA is able to offer relatively low prices to Russian furniture consumers. However, given that there is a limit to how far the company can reduce product prices without compromising quality, it also relies on market knowledge to find good local suppliers that not only understand Russian preferences, but can also offer the lowest prices (Jonsson, 2005). Indeed, the price element is important for IKEA because furniture was traditionally seen as expensive in Russia, and quality furniture was thus out of reach for the majority of ordinary Russians. It must be pointed out, nonetheless, that many Russian consumers still see IKEA’s products as relatively expensive compared to the average price of furniture in the market.

IKEA in China and Taiwan

IKEA’s activities in each country in which it operates are quite distinct, as is evident in how it operates in China and Taiwan in contrast to Russia and elsewhere. In Russia, for example, customers are more aesthetically inclined and look for quality furniture that is contemporary yet has a Russian character. In China, customers want quality furniture with western influence: since there are several local furniture companies in China, Chinese consumers prefer a western furniture company from which it is convenient to buy furniture locally, yet with western style, design, and aesthetics. In attempting to market its furniture products in China and Taiwan, IKEA maintained its characteristic international marketing strategy of thinking globally and acting locally in order to build long-term relationships with customers and capture customer value. In this regard, IKEA uses three principal marketing activities in the Chinese and Taiwanese furniture markets: product design/packaging, price, and promotion/advertising. Its marketing activities in both countries take advantage of their cultural similarities and geographical proximity.

Product Design/Packaging

As in its Russian operation, product design and packaging represent the main element in IKEA’s market offering in China and Taiwan, because the company’s localisation there aims at appealing to local and cultural tastes in home furniture. Accordingly, not only does the range of furniture products that IKEA offers strive to match customers’ preferences, the product showrooms are also designed and arranged to suit Chinese cultural styles (Armstrong and Kotler, 2006). In order to meet the tastes and preferences of its diverse Chinese and Taiwanese customers, IKEA offers almost 8,000 products roughly categorized into about 21 product ranges – more than it offers in Russia – showing that IKEA acts according to the needs of the specific markets in which it operates. In designing these products, IKEA manages to retain its Swedish heritage while also considering traditional Chinese furniture preferences. In contrast with the usual style of Chinese furniture – typically darker and characterised by fancy carvings – IKEA’s furniture designs adopt a much lighter and simpler style that nonetheless incorporates Chinese themes (Miller, 2004). Furthermore, rather than undertake completely new designs for the local market (which may be difficult to apply to a wide product range due to cost overruns), IKEA prefers to implement slight alterations of key product features in line with the dictates of the local market. This has proved an easier approach than making brand new designs; it is also less costly and has proved just as effective. In terms of product packaging, IKEA prefers to use simple-looking and recyclable materials for wrapping its products.
The idea behind this packaging strategy is two-fold: it saves costs otherwise associated with fancy product wrappings (thereby making the products cheaper for customers), and it emphasises to local furniture consumers that IKEA is concerned about environmental friendliness in the production, transportation and protection of its products (Kling and Goteman, 2003).

Price

Price is an important aspect of communication and represents a decisive element in the interaction between buyers and sellers (Usunier, 2000). Accordingly, IKEA’s marketing activities in China and Taiwan strongly involve the price element, which the company has used to attract and retain customers for its furniture products. Although IKEA’s initial target market when it first entered the region was individuals with top-tier urban incomes, it has increasingly adjusted its strategy by reducing prices to accommodate a wider customer base, as is its practice in Russia. In order to remain profitable in spite of these price reductions, IKEA has had to implement cost-saving methods such as sourcing materials for its products locally instead of relying on imports, as it did at the beginning of its operations in China (Song, 2005). Overall, IKEA’s pricing policy serves as a strategic marketing tool for creating customer value: the price-based marketing activity attempts to capture the highest possible customer value at the lowest possible cost – principally by sourcing the furniture production materials locally.

Promotion/Advertising

IKEA’s promotion activities in China and Taiwan are based on the idea that promotion is arguably the most culture-bound aspect of the marketing mix – especially because promotion often needs to be adapted to the local market for cultural and language reasons (see, for instance, Usunier, 2000). Accordingly, IKEA’s promotion activities, as in Russia, involve the use of advertising in local newspapers and on television to promote its new product ranges and upcoming sales. In using such promotional devices, IKEA ensures that its messages are transmitted exclusively in Mandarin (the most widely spoken language across China and Taiwan), and that they retain cultural relevance in order to target local customers effectively. This is in line with the idea that it is important to put a ‘local face’ on advertisements and promotions in order to gain customers’ attention (see Mummert, 2007). Accordingly, the main form of promotion used by IKEA in China and Taiwan is advertising, and such advertisements often communicate messages in the local language, coupled with culturally relevant themes.

Ashley Furniture Industries in the United States & Canada

Ashley Furniture Industries is one of the largest furniture manufacturing and retailing companies in North America, selling home furniture products and accessories through its own retail furniture stores as well as through independent furniture dealers (see Ashley Furniture, 2011). Its marketing activities in the United States and Canada mainly involve product, price, and distribution network. It is also important to note that the activities and strategies of Ashley and IKEA differ because the companies operate in different markets, where the consumers, macroeconomic environment, market conditions, and consumer preferences and choices all differ.

Product

Like IKEA, Ashley Furniture maintains a product strategy of providing a fully integrated product line for all aspects of home furnishing, including bedrooms, living rooms, and dining rooms. The company’s North American operation is a vertically integrated enterprise that oversees every aspect of product development from design to manufacturing. Using an efficient production system, the company is able to maximize productivity while minimizing waste generation (Data Monitor, 2007).

Bearing in mind the furniture tastes of the American and Canadian customers, Ashley Furniture Industries’ products are mainly categorized into three divisions: case goods, upholstery, and millennium. Products in the case goods division include bedroom sets, dining room sets, wall units, and table sets. Its upholstery division consists of products such as sofas, sectionals, and loveseats. Its millennium collection comprises products with the highest quality offered by the company, mainly premium furniture products for top-tier customers (Ashley Furniture, 2011). The idea behind Ashley Furniture’s product design and development process is to ensure that there is sufficient variety in product range to suit the needs and interest of different consumer segments in the US and Canadian markets.

Price

Product pricing is a central aspect of Ashley Furniture’s marketing activities, particularly because the company’s products are targeted at budget-minded consumers looking for the best quality furniture at the best prices (Abbas, 2010). A sharp contrast can be drawn between IKEA and Ashley in terms of price, given that the two companies have different pricing strategies. Ashley Furniture uses an innovative strategy of offering a vast array of product combinations grouped together not only on the basis of matching materials, but also on the basis of pricing; this helps customers easily choose product combinations that suit both their needs and their respective budgets. Ashley Furniture is able to offer products at relatively affordable prices because of a number of cost-saving systems implemented across the production process. For instance, an in-house design team is responsible for creating product designs – thereby saving the company considerable costs it would otherwise have incurred in design fees. These efficient, cost-cutting systems enable the company to pass some of the savings on to customers in the form of lower prices, and this is a significant aspect of the company’s marketing to its United States and Canada target markets (Data Monitor, 2007). The company’s vertically integrated structure helps it save costs at every stage of the production process and ensures the delivery of high quality furniture to customers at relatively low prices.

Distribution Network

The use of a strong retail distribution network is one of the most effective means by which Ashley Furniture markets its products across North America. In order to ensure availability, affordability, and prompt delivery of its products, the company maintains a network of over 300 retail stores (known as ‘Home Stores’) in strategic locations across the United States and Canada (Data Monitor, 2007). It also supplies its products to independent furniture retailers in these two countries, and delivers orders from strategically located warehouses serviced by a sizable fleet of delivery trucks. Ultimately, Ashley Furniture is able to reach a large proportion of customers in the USA and Canada owing to the effectiveness of this extensive retail and distribution network, which takes advantage of the geographical proximity of Canada and the United States to ensure significant synergy in the production and distribution of its furniture products in both markets.

Ashley Furniture Industries in Japan

There are no considerable differences between the company’s operations in Japan and those in the US and Canada, apart from the degree of standardisation and adaptation to the Japanese market. Ashley Furniture Industries maintains a number of store licensees in Japan for the distribution and sale of the diverse products in its furniture collections. The company’s marketing activities in Japan mainly involve product design/packaging, social media and website-based sales and promotion, and its distribution network.

Product Design/Packaging

Using an in-house, full-time design group, Ashley Furniture is able to create furniture styles that complement the decorating styles preferred by Japanese customers. In adapting its furniture designs and packaging to suit local Japanese cultural tastes, the company nonetheless ensures that it uses its central, international workmanship and construction systems in order to retain the same level of quality (Ashley Furniture Homestore, 2011). However, a significant aspect of the product design process is centrally coordinated to reflect the company’s furniture design collections, allowing only for slight packaging elements to correspond with prevailing cultural themes that appeal to local Japanese customers.

Social Media and Website Sales & Promotion

Ashley Furniture also relies heavily on social media platforms to promote its products and increase sales in the Japanese market (Media Sauce, 2010). The company takes advantage of the almost universal internet usage in Japan, as well as the high use of social media among different age groups in the country to advertise its products by communicating strong messages through social media. Similarly, a significant volume of the company’s sales in Japan is done through its home store website, which offers consumers the convenience to choose their preferred products/product combinations, and price ranges online without having to visit the home store (Ashley Furniture Homestore, 2011). Although Ashley furniture also sells its products through its website in other locations in which its home stores are present (see Data Monitor, 2007), its approach to website and internet-based sales and promotion in Japan is more elaborate in view of Japanese consumers’ preference for internet shopping, and the high rate of social media use in the country.

Distribution Network

Ashley Furniture enjoys the benefit of a strong distribution network, boosted by the fact that the company sources a significant volume of its manufacturing inventory from nearby China (Data Monitor, 2007). Sourcing from China not only helps Ashley Furniture keep costs low, it also enhances the distribution system for the sale of finished furniture products in strategic parts of Japan, particularly its major cities. The considerably large number of retail stores and warehouses that Ashley Furniture operates also facilitates easy accessibility and prompt delivery of products to customers, and this helps the company counteract the threat posed by local Japanese furniture makers.

References

Abbas, A. (2010) Ashley Furniture Review [online], Available at: [Accessed 21 December 2011]

Armitstead, L. (2010) “Ikea”, The Daily Telegraph, Available online at: [Accessed 10 November 2011]

Armstrong, G. and Kotler, P. (2006) Marketing: an introduction, 8th edition, New Jersey: Pearson Education, Inc

Ashley Furniture (2011) Ashley Furniture Industries Home Page [online], Available at: [Accessed 20 December 2011]

Ashley Furniture Homestore (2011) Ashley Furniture Homestores [online], Available at: [Accessed 20 December 2011]

Bradley F. (1991) International Marketing Strategy, New York: Prentice Hall

Bloomberg (July 2, 2009). Why IKEA Is Fed Up with Russia.

Cain, W. (1970) ‘International Planning: Mission Impossible?’, Columbian Journal of World Business, 58, July-August, 61-72.

Data Monitor (2007) Ashley Furniture Industries Inc: Company Profile [online], Available at: [Accessed 20 December 2011]

Dawson, J. (1994) ‘The internationalization of retailing operations’, Journal of marketing management, 10, 267-268

De Mooij, M. (1998) Global Marketing and Advertising: Understanding Cultural Paradoxes, Thousand Oaks, CA: Sage.

Gillespie, K. (2004) Global Marketing – an interactive approach, New York: Houghton Mifflin Company

Gupta, S. and Lehmann, D. R. (2005) Managing Customers as Investments: The Strategic Value of Customers in the Long Run, Upper Saddle River, NJ: Pearson Education/Wharton School Publishing.

Jonsson, A. (2005) “Retail Internationalization and the Role of Knowledge Sharing – The Case of IKEA’s Expansion into the Russian Market. Lund, Sweden: School of Economics and Management, Lund University, Available online at: [Accessed 9 November 2011]

Kandampully, J. and Duddy, R. (1999), ‘Competitive advantage through anticipation, innovation and relationships’, Management Decision, 37(1), 51–6

Keegan, W. J. (1989) Global Marketing Management, 4th edition, New York: Prentice Hall

Kling, K. and Goteman, I. (2003) “IKEA CEO Anders Dahlvig on international growth and IKEA’s unique corporate culture and brand identity”, Academy of Management Executive, 17(1), 31-37

Media Source (2010) Ashley Furniture Marketing Show Handout [online], Available at: [Accessed 21 December 2011]

Miller, P. M. (2004) “IKEA with Chinese characteristics”, The China Business Review, 31(4), 36-38.

Mummert, H. (2007) “Culture: More Than a Language”, Target Marketing, 30(5), 54-55.

Philip, C., Lowe, R. and Doole, I. (1994), International Marketing Strategy and Analysis, London: Butterworth Heinemann.

Sanders, P.W. (2010) Foreign Retail Groups in Russia – The limits of development, Dijon, France: Burgundy School of Business

Song, L. (2005) “Changing IKEA—an interview with Ian Duffy, IKEA’s CEO of Asia-Pacific region”, National Business Daily [online], Available: [Accessed 8 November 2011]

Tarnovskaya, V., Elg, U. and Burt, S. (2005) The Role of Corporate Branding in a Market Driving Strategy, Working Paper Series, Lund, Sweden: Lund Institute of Economic Research.

The Age (March 7, 2009). Corruption halts IKEA in Russia.

Thomas White (2011), “Global Players: Ingvar Kamprad, Founder and Senior Advisor, IKEA”, Chicago, Illinois: Thomas White International Ltd.

Usunier, J.C. (2000). Marketing Across Cultures, 3rd edition, New York: Prentice Hall.


The credit and debit card market environment has been affected significantly by the economic situation, in which a variety of macro environmental, industrial and internal company factors have had an impact on participating companies.


Barclaycard is a market-leading provider of credit and debit cards within the UK. With 8.4 million customers, we have built a considerable reputation for growth and innovativeness over the years. However, recent external and internal issues in the credit card market have had significant effects on the opportunities and threats facing Barclaycard.

As the Marketing Manager at Barclaycard, I have conducted a situational analysis of the UK macro economy, the plastic card industry, and Barclaycard as a company, in order to ascertain the major issues affecting the company and highlight future strategic options that may be relevant for growth.

The following report therefore presents a situation analysis of the UK macro environment, an industrial analysis of the credit and debit card industry, and a SWOT analysis of Barclaycard. The major issues affecting the organization, arising from these analyses, are discussed subsequently, followed by four major strategic options that could drive the organization’s growth. An update to the case study is provided in the last chapter, highlighting relevant developments since the case study was written.


The following situation analysis contains key factors that have been identified using the case study, as being relevant and having a direct or indirect effect on the macro economy, payment processing industry and Barclaycard as a company.



The credit crisis that emanated from the US in 2007 hugely affected financial institutions the world over, and led the UK government to nationalise Northern Rock, one of the nation’s biggest mortgage lenders, and part-nationalise RBS and Lloyds TSB, two leading UK banking groups.

The financial crisis resulted in restrictions on capital movement and tightened lending criteria amongst financial institutions, effectively stunting economic growth. Though the government may have hastened the availability of credit by bailing out some major banks, it is unlikely that the UK economy will recover soon.

UK household debt rose to £1.4 trillion in April 2008. Though this is mostly the result of mortgages and secured lending, it reflects the relevance of debt in the everyday life of the average UK resident. Being in debt has become an inevitable part of living, especially amongst younger individuals who have embraced the ideology of buying items on credit.


The lacklustre potential for future recovery or growth within the UK economy has weighed on most indebted citizens, as 25% of adults plan to repay their debt in the coming months. These decisions to repay debt and cut back on spending are also a result of the rising cost of living within the UK and the increasing cost of oil.

43% of consumers surveyed about pressing social issues cited money as their greatest concern. This reflects the impact that the current state of the economy has on individuals, and their changing attitude towards personal finance and cash management.



The number of credit cards in issue has also been in steady decline, compared to the growth in debit cards. This illustrates that consumers are increasingly turning to their savings to make payments, as opposed to using credit cards for purchases; hence the growth in debit cards.

The credit card market is also becoming increasingly settled with a reduction in switching intentions amongst customers, thereby reflecting negatively on the current strategy amongst lenders to attract customers by offering attractive balance transfer incentives.

The average purchase made through debit and credit cards is currently high, thereby establishing an opportunity to facilitate spending on lower value items. This could increase customer reliance on plastic cards, and subsequently help increase the total amount spent on each card.


The economy may take longer than expected to recover, which adversely affects consumer income. Though this may potentially increase applications for credit, it also increases the risk of customers defaulting on loans.

The average loss on credit cards issued has grown recently, with banks being hit with larger impairment charges on consumer credit cards and other related personal loans.

The economic situation has resulted in mistrust between lenders, and a subsequent unwillingness to lend to each other. This has resulted in a smaller amount of available credit for these banks to loan to customers.


Recent legislation has outlawed the sale of payment protection insurance, resulting in fines for several credit card providers and refunds to all customers who have paid premiums for the insurance.

The FSA-administered Insurance Conduct of Business (ICOB) regime has also released new regulations governing how financial service companies disclose information during sales of insurance products, and has increased the length of the cooling-off period, thereby impacting on credit card companies’ ability to sell insurance products to consumers.


There is an increasing divide in the market amongst credit card users. A large number of customers are shifting their focus from obtaining credit to paying off credit card debts, whilst another portion of consumers are using credit cards to pay house rents and mortgages amidst rising unemployment. This could result in a situation whereby creditworthy individuals pay off their credit card debts, while those without a source of income risk becoming bad debts due to a lack of available income to service their credit card bills.

Consumer trust in financial service providers has been seriously affected by the events surrounding the credit crisis. Consumers are increasingly wary of financial service products due to the effect of the credit crisis on the leading banks within the UK.

Due to the increasing competition amongst lenders, the bargaining power has shifted towards the consumers. They are increasingly consulting price comparison websites to compare credit card packages for the best deal.

Consumers are also drawn to transfer their balances to new credit cards, wherein they would not have to pay interest on the amount they owe for longer periods of time.

Debit card users have become active in switching, and this has adversely affected the major players within the cards industry, as the five major banks have jointly lost 5ppt of their market share over the past 12 months.


The competitive rivalry amongst industry participants is high. Presently, five players dominate the credit card market: Barclays/Barclaycard (22%), Lloyds TSB (17%), HSBC (12%), NatWest (12%) and Halifax (11%). Lloyds TSB, however, has the largest market share in the debit card market, followed by NatWest/RBS, Barclays and HSBC.

However the two largest debit card companies (Lloyds TSB and RBS/NatWest) are partly owned by the UK government, thereby posing a threat to their competitive sustainability if the government continues to exert substantial influence over their operations. Banks such as Barclays and HSBC are therefore at a competitive advantage as they can leverage their reputation and financial independence.

The economic condition has resulted in various lenders tightening criteria on lending conditions, therefore higher risk individuals are unable to obtain credit cards, while those with good credit rating are being fought for by several lenders, thereby resulting in a wave of competitive and low priced offers.

The fierce competition amongst providers has led to various competitive strategies based on very attractive offers and loyalty schemes for customers. Offers such as 0% on purchases and balance transfers are being promoted strongly, while other providers such as Barclays are offering lifetime low interest rates on consumer credit. Providers such as Lloyds are promoting an Air Miles loyalty scheme, while Capital One and Virgin Money offer long-term zero interest rates to attract new customers. Customers who are feeling the effect of the credit crisis are likely to shop for such deals and switch providers.


The major suppliers are Visa and MasterCard; however, they only provide the infrastructure through which payments are processed, charging banks a fee for utilising their services. Their supplier power is therefore considerable, as they are the two dominant card networks.



Barclaycard is currently the biggest provider of credit cards within the UK, with a market share of 22%, whilst its debit card share is third only to Lloyds TSB and NatWest. Barclaycard claims to have 8.4 million credit card customers, and posted impressive profits for 2009.

The company has invested significantly in product innovation, such as its contactless technology, which allows cardholders to make purchases below £10 by tapping the card against a contactless reader. It has also launched the OnePulse card, which incorporates the Oyster card for transport purposes.


Barclays’ inability to gain substantial market share in the debit card market may result in a loss of potential income if the debit card growth trend persists.

Barclays has been severely affected by the credit crisis and consumer defaults, as it had to write £5.4 billion off its assets when consumers failed to make payments on credit card and other personal debts. Rising unemployment and the current economic situation have led many consumers to declare bankruptcy or profess their inability to make payments.


The current state of the economy also poses a considerable threat to Barclaycard, as it is increasingly difficult to gain new creditworthy customers who would be profitable and repay their loans.

The competition amongst existing players is only slated to increase as the number of credit cards in circulation decreases while that of debit cards increases. Outstanding balances may fall, while the profit made from charges and interest could also reduce significantly.

The drop in advertising amongst the leading credit card providers could result in a situation whereby smaller brands take advantage of their lack of media presence and gain considerable market share in the process.


There are opportunities in contactless technology, which is new and revolutionary, and may induce more people to switch to an innovative provider or to pay for lower-priced goods with their credit and debit cards.

Barclaycard’s partnerships with Oyster and O2 could result in newer products that have the possibility of ensuring that its market share continues to increase on the back of innovativeness.

The economic condition presents the right opportunity for lenders that have been less affected, such as Barclays, to gain market share by leveraging their reputation. Barclays is one of the two major banks that did not seek government protection during the credit crisis, and that has helped improve its reputation in the financial markets and amongst the population. Consumers would therefore trust the brand more, as opposed to those that have previously been accused of mismanagement and risky practices.


Based on the situational analysis conducted above, these are the four major issues currently affecting Barclaycard:

There is a general decline in the use of credit cards for transactions. The number of credit cards in issue has declined in recent years, while that for debit cards has increased. This represents a significant issue because, as the outstanding balances in the lender’s portfolio reduce, so does the interest chargeable, from which their profit is derived.

Consumers have cut back on spending amidst rising unemployment and low confidence in the UK economy, which reflects negatively on the credit card industry. Consumers will be less willing to use their credit cards for purchases or to incur more debt, and this affects the interest attributable to card companies and the number of new cards issued.
The current economic situation has led lenders to tighten lending conditions in order to avoid increasing impairment charges. Impairment charges for Barclays were £5.4 billion in 2008, illustrating the growing threat the economy poses to Barclaycard’s business. This increases the emphasis on managing bad debts and obtaining creditworthy customers who are able to service their credit card debts.

Competition amongst existing players in the credit and debit card market is on the rise. Smaller companies are gaining market share, leading companies are offering lucrative deals, and consumers are looking for more attractive offers. In order to protect its market-leading position in the credit card market, and gain a substantial share of debit cards, Barclaycard needs to devise strategies that attract and retain consumers.


Based on the situation analysis already conducted and the four major issues affecting Barclaycard, the following strategic options have been developed based on Porter’s (1980) Generic Strategies and Ansoff’s (1984) Matrix.


i. Strategic Option

Introduction of new and innovative debit and credit card products into the market that follow suit on the success of contactless technology and encourage users to adopt the card for multiple everyday processes.

This could increase Barclays’ market share by increasing the number of consumers who switch to its innovative service, and improve customer usage of plastic cards for smaller purchases.


This strategy is based on Ansoff’s Matrix: New Products + Existing market = Product Development


The number of credit cards in issue is declining, so an innovative product could help drive applications for credit or debit cards within a particular brand.

Consumers rarely use their credit cards for lower-value purchases.

Contactless technology and multiple applications in credit cards (Oyster and Mobile Phone recharge) could change the industry and attract more consumers to a particular brand.

iv. Marketing Strategy

The Product would be a new range of innovative cards capable of handling multiple applications, such as the already existing contactless technology and the Oyster card. It could also be integrated with other useful cards, like school ID and gym membership cards. APR pricing would be slightly higher than for a normal card due to the innovativeness of the product. Promotion of the card could be targeted at young professionals and students who would like to use their cards as a one-stop shop for transport, credit, gym membership and even school ID. The product could be marketed at universities, through affiliates, and through other relevant channels.


i. Strategic Option

Barclaycard could strengthen its position in the debit card market by offering attractive packages that lure customers into opening current accounts within its bank.

Since debit card issuance is tied to the number of current accounts within a bank, Barclaycard could use this to grow its market share from third position into first or second, beating rivals Lloyds TSB/HBOS and RBS/NatWest.


This strategy is based on the Ansoff Matrix: Existing Markets + Existing Products = Market Penetration.


The current market for credit cards is shrinking, with a corresponding increase in debit card issuance; it seems consumers are reducing their reliance on credit and focusing on using their savings and debit cards for purchases.

Consumer confidence has fallen in recent years; therefore Barclays could leverage its untarnished reputation in gaining market share against competitors.

Its innovative credit cards could also be utilised as an effective draw.

iv. Marketing Strategy

The Product here would be attractive current accounts, marketed with lucrative benefits and at reduced prices, to encourage customers to switch from their current providers and open an account with Barclays. Attractive introductory offers could be made on current accounts, while impressive benefits such as insurance and assistance packages could also be included. Important selling points would be the issuance of innovative debit cards and a reputable organisation with excellent service. Promotion could be handled through all available media channels, while the target audience would be anyone with a current account, or with the intention of opening one.


i. Strategic Option

Barclaycard could reposition itself within the industry as a provider of unique, differentiated debit and credit cards. These cards would contain its full range of innovative technologies and premium loyalty schemes, carry a slightly higher APR, and be targeted at creditworthy ABC1 individuals.


This strategic option is based on Porter’s generic strategies: Broad, industry-wide scope + Product uniqueness = Differentiation Strategy


The current economic situation increases the risk of bad debt amongst the mass population; therefore targeting a premium service at creditworthy individuals would go a long way in securing Barclaycard against bad debt and would help attract sought-after customers.

Individuals who already have credit cards would be attracted to switch to Barclaycard due to the range of premium services and benefits associated with using the card, thereby sidestepping the intense competition amongst existing players all fighting to introduce the latest 0% offer to attract consumers.

iv. Marketing Strategy

The Product would be marketed as a premium service, encompassing the latest technologies, premium packages, and attractive features such as Air Miles, supermarket and store discounts, and a range of VIP benefits. Promotion would be aimed at creditworthy ABC1 individuals through appropriate channels, and the products would be communicated as differentiated offerings aimed at the affluent. APR pricing would be slightly higher than for normal cards due to the benefits, while the products could be sold through all relevant channels across the bank’s network.


i. Strategic Option

The last option would be to adopt a cost leadership strategy, aimed at attracting as many customers as possible either through new customer acquisition or balance transfers.


This model is based on Porter’s generic strategies: Broad, industry-wide focus + Low cost strategy = Cost Leadership Strategy.


Consumer buying power has increased, and consumers are increasingly looking to online channels to compare and find the cheapest deals. If Barclays provides cheap deals, it has a fair chance of increasing its customer base.

Attractive packages such as balance transfer and payment offers, coupled with low interest rates, could go a long way in ensuring that a large number of customers switch to and start using Barclaycard.

Since there is already a general decline in credit card usage, customer acquisition from competitors by offering lower prices and attractive offers, could go a long way in promoting sustainable competitive advantage in the industry, and securing already existing customers.

iv. Marketing Strategy

The Product would be very cheap credit card deals for creditworthy customers. Pricing would be based on competitive offers against current competitors. Lower APRs would be offered, while balance transfers and payments would be offered at competitive rates to prospective customers. Promotion would be targeted at the general public, while the products would be offered through all available channels.


Though the market condition may highlight several relevant issues that currently pose opportunities and threats to Barclaycard, there is no right or wrong strategic option. I have therefore prepared four strategic options that could be interpreted based on our current strengths and competitive capacity, and utilised with the aim of ensuring organizational growth and improving shareholder wealth.



The UK economy emerged from recession in the fourth quarter of 2009, as GDP increased slightly by 0.3%, fuelling forecasts that the economy would experience further growth in subsequent months (Office for National Statistics, 2010).

Unemployment rates within the UK fell by 0.3% in January 2010, while the number of people in full-time employment within the UK fell by 54,000 (Office for National Statistics, 2010).

The Nationwide survey on UK consumer confidence reportedly hit a two year high in March 2010, as the UK reported a GDP growth of 0.3%. Increasing consumer confidence reflects on the general social attitude of consumers, and their views on the economic environment within UK (BBC News, 2010)


The UK Department for Business Innovation and Skills has launched a public consultation on the card industry, which aims to examine the current allocation of payments, repayment policies, limit increases and debt re-pricing, amongst credit card companies. These policies are likely to be widespread and affect the industry and its customers, and it may lead to policies that change operations within the industry (Datamonitor, 2009).

Identity theft fraud has been increasing consistently with the growth of ecommerce, as plastic cards do not have to be physically present for such purchases, thereby increasing the relative cost of the IT security needed to prevent such occurrences (Mintel, 2009).

Poulter (2010) predicts that credit card providers that had previously sold Payment Protection Insurance will have to issue refunds of up to £4 billion to consumers who bought it.

Mintel (2009) reports that the number of credit cards in issue has fallen by 5.4% from its high of 70 million in 2004 to 66.2 million in 2009, while the number of debit cards in issue has increased by 19.8% over the same period.

Mintel also reports that the number of consumers who repay their balances in full each month has increased by 3.9% since 2004, reflecting the general trend in the industry and consumers’ increasing focus on repaying debt.

Payment methods such as PayPal offer a more convenient and safer method for consumers willing to buy products on particular websites, and could act as a convenient substitute to debit and credit card transactions.


Barclaycard income grew by 26% in 2009, while average customer assets also increased by 19% to £28.1 billion, reflecting strong growth across the company (Barclays, 2010).

Profits for Barclaycard decreased by 4% in 2009 as a result of impairment charges, which increased by 64%. These impairment charges are due to customers within the UK that have defaulted on their credit payments or declared bankruptcy (Barclays, 2010).

Wallop (2010) also reports that the government could scrap the current 0% wave of offers on balance transfers and purchases, and could also force lenders to stop previous practices, which entailed charging customers for the cheaper credit first.

Word Count: 3,660


Ansoff, H. I. (1984) Implanting Strategic Management, Prentice-Hall: NJ, 455pp.

Barclays (2010) 2009 Annual Report, accessed: 20/03/10.

BBC News (2010) UK consumer confidence ‘hits two-year high’, accessed: 25/03/10.

Datamonitor (2009) UK crackdown is likely to lead to fees and higher interest rates, MarketWatch: Financial Services – Industry Comment, accessed: 21/03/10.

Mintel (2009) Credit and Debit Cards – UK – July 2009, accessed: 20/03/10.

Office for National Statistics (2010) Employment: Employment rate falls to 72.2%, accessed: 25/03/10.

Porter, M. E. (1980) Competitive Strategy, The Free Press, New York.

Poulter, S. (2010) Banks may be forced to pay back £4bn in mis-sold insurance scam, 9 March 2010, accessed: 20/03/10.

Wallop, H. (2010) Credit card reforms: 0pc credit cards could be scrapped, accessed: 22/03/10.

Free Essays

Marketing Activities in the Global Alcoholic Spirits Market: A Look at the Marketing Mix Strategies of the Top Spirits Producers


The global alcoholic beverages industry was found to have produced around 183bn litres at the end of 2004, increasing steadily at approximately 2% annually since the end of the 20th century. The industry is segmented into the different types of alcoholic drinks, and also the several different brands offered by various industry players. Over the last couple of decades, a number of mergers and acquisitions across the industry, alongside strong marketing campaigns and powerful brand recognition, have seen the emergence of a handful of large firms who have dominated the core beverage markets of wine, beer and spirits. Companies such as Diageo and Constellation Brands have established themselves as global market leaders, and have developed key marketing strategies to achieve and retain market share. This paper focuses on the highly competitive spirits market, where the top 3 major competitors hold approximately 60% of market share (Diageo Investor Conference, 2010). We look specifically at Diageo, Pernod Ricard and Bacardi, and compare the similarities and differences in the firms’ approaches to product design, packaging and promotion which enable these producers to sustain competitiveness.

1. Industry and Market Trends

Before attempting to tackle the individual marketing approaches of the top spirits producers, it is important to comprehend the recent trends in the industry which reflect the need for, and the shape of, the marketing strategies adopted by these firms.

Competition has always been high amongst the key alcoholic beverage producers, but the last thirty years or so have seen wave upon wave of mergers, acquisitions and strategic alliances formed amongst international and local players, which have intensified competition significantly (Zorova et al, 2010). Whilst previously this may have been driven by a need to diversify portfolios and widen geographical presence, the last decade or so has seen an increasing trend amongst these firms to shrink their portfolios to a limited number of global brands and apply similar marketing strategies throughout[1]. Whatever the reasons behind it, the spirits industry today is characterised by high firm concentration:

Spirits Industry Analysis (2008)

Adapted from Impact Databank 2009 report

Recent years have also witnessed the so-called “premiumisation” of alcoholic beverages, whereby the growing number and greater purchasing power of middle-class consumers, in established and particularly in emerging nations, make them willing to spend more money on luxury spirits in order to display their new status and wealth. Strong brands and reputations have built up customer perceptions of specific brands or variants as the top premium spirits of their category (vodka, gin), and buying Smirnoff vodka or Bombay Sapphire gin, for example, is seen as a reflection of the consumer’s prosperity (Novak, 2004).

Finally, the global recession has undoubtedly had a significant impact on on-trade spirit sales in pubs, bars and clubs in recent times. With fewer people venturing out for drinks and a greater selection of beverage options on the shop shelves, consumers have been encouraged to challenge traditional drinking habits and mannerisms, creating a kind of blurring of national drinking cultures: vodka has come under increasing pressure in its cultural base of Russia, but is thriving in the US, India and Germany, where it has mushroomed in popularity amongst the new generation of spirit consumers (Gauder, 2008).[2]

2. Marketing strategy

The marketing mix is an important concept in modern marketing; according to Kotler & Armstrong (2006), it is the set of controllable variables or tools that a firm blends together in order to achieve the response it desires in the target market. The marketing mix strategy thus consists of all possible activities the firm can undertake to influence demand for its product and to establish strategic communication between the organisation and its customers (Proctor, 2000).

The key variables which comprise the marketing mix are known as the four “P”s of marketing (Kotler & Armstrong, 2004):

Product – The physical product or service offered to the customer that may also involve additional services or conveniences, includes marketing choices on how to adjust product characteristics such as packaging, brand, service and quality in order to suit the organisations objectives and cater for demand.
Pricing – The process of effectively setting a price for a specific product or service. It is often divided into two parts: price determination (based on relative prices of similar offerings) and price administration (fitting prices to sales situations such as geographic location or sales position of local distributors).
Placement – Decisions associated with distribution channels that make the product or service available to target consumers, including choosing between direct customer access or using middlemen and choosing the number and length of distribution channels.
Promotion – Strategies to communicate the values and benefits of a firm’s offerings and sell the product or service to targeted consumers. Decisions involved revolve around advertising, personal selling, media, customer relations and the budget.

The leading producers in this market have to experiment with and adapt their marketing strategies often to maintain competitiveness. For the purposes of this paper, we will look at the two aspects of product and promotion in more detail as they are imperative for this market.


Product policy and strategy are of utmost importance to spirits producers, firstly because they determine scope and direction of company activity (and key market indicators such as profits, market share and reputation are dependent on them), and secondly because they have a profound influence on other marketing mix elements such as advertising, selling, distribution and pricing, making product both a component and a determinant of the firm’s marketing strategy (Lazer, 1971).

2.1.1 Core brands and design

Diageo has taken advantage of the brand-dominated nature of the global spirits market and invested heavily in mergers, acquisitions and alliances to develop a strong brand portfolio, particularly its so-called “Global Priority Brands”. Premium brands such as Smirnoff Vodka, Johnnie Walker, Captain Morgan’s Rum and Baileys Irish Cream Liqueur are marketed and positioned separately, and in 2009 contributed 58% of total volume produced, bringing over £5,000m to the company over the year. In fact, Diageo’s top brands account for 8 of the top 20 premium spirits brands in the world. These leading brands are generally offered as standardised, premium products in all the company’s target geographical locations because their packaging and appearance are well recognised in many countries as representing high quality, reducing the need for adaptation and increasing economies of scale (Diageo website, 2011).[3]

Bacardi began as a single brand company, focused on positioning and selling its own brand of premium rum through independent companies servicing different regions across the globe. However, changing economic conditions meant reorganisation into a single independent entity, followed by a merger with Martini & Rossi and the acquisitions of Dewar, Bombay and most recently, Grey Goose. Bacardi today positions its products in the high-end segment of the market, and like Diageo, they depend on the strong brand recognition and perceived quality of their products to ensure sales (Bacardi website).

According to Bacardi, its portfolio is built around six core brands, including Bacardi Rum, House of Dewar’s, Bombay & Bombay Sapphire Gin, Martini & Rossi and Cazadores tequila. These products are sold worldwide and are standardised both in terms of the beverage and in terms of the packaging[4]. Bacardi’s success with these brands is attributed to their uniform approach to marketing regardless of where their target customers are. By standardising the packaging of their products across all regions, they have managed to imprint the product image, particularly the “bat” logo (Figure 1, Appendix), on consumer minds, as well as achieving economies of scale alongside other efficiencies (Cisnal et al, 2006).

The trend of premiumisation is also consistent in the strategy of Pernod Ricard, who see it as a value strategy based on raising brand status and creating more upmarket categories. Today, the company sees its premium brands as a response to the consumer’s desire to “drink less, but drink better”. In essence, they understand that the rising middle and upper classes, particularly in emerging economies, are searching for brands that convey class and status. Their global icons Absolut and Chivas Regal are amongst the top ten spirits in the world, whilst their strategic spirits brands like Jameson, Beefeater and Malibu are all major players in their respective categories (PR Press Kit, 2011).

However, Pernod Ricard has invested considerably, perhaps more than Diageo and Bacardi, in developing and marketing both its high-end Prestige range of products and a range of strategic local brands which often lead the markets they serve. The former includes internationally recognised fine whiskies like Glenlivet and Royal Salute, and also well-known champagne brands like Mumm and Perrier-Jouet. The latter comprises brands like Clan Campbell (France) and Montilla (Brazil), which are the leading brands of their category in their respective countries (Pernod Ricard website). Thus, to a further extent than the other two producers discussed, Pernod Ricard, although standardising its strategic premium and global brands, has invested in diversifying its portfolio to offer a wider range of quality products and in adapting its offering to provide specific brands for certain countries or regions, many of which are market leaders.


Diageo pride themselves on being innovators, using their strong brand reputation as a base from which to develop their marketing, sales and supply functions to best suit consumer trends and drinking habits. For example, Smirnoff vodka has been transformed into variants such as Smirnoff Black and Smirnoff Norsk, whilst Captain Morgan Rum has seen varieties such as Captain Morgan Parrot Bay and Captain Morgan Tattoo. Most of these variants were created to offer something new to existing customers and to attract the new generation of spirit drinkers, many of whom prefer flavoured or sweet varieties of their spirits (Diageo website). Furthermore, in response to the rising number of consumers drinking at home in recent times, Diageo has released a range of ready-to-serve cocktails (e.g. Smirnoff Cosmopolitan) and pre-mixed spirit and mixer beverages (e.g. Gordon’s Gin & Tonic), predominantly in North American and European markets. These innovations are increasing in popularity and, in the first half of 2010, accounted for 10% of net sales (Diageo Investor Conference, 2010).

By far the best example of Diageo’s successful innovation, however, was the launch of Smirnoff’s best-selling flavoured alcoholic beverage (FAB), Smirnoff Ice, in 1999. As a ready-to-drink (RTD) variant, Smirnoff Ice created an entirely new segment in the alcoholic beverage market. By 2003, Smirnoff Ice had gone on to attain over a third of the UK FAB market, over 40% of the US market and, along with its closest rival Bacardi Breezer, a dominating 88.5% of the European market, with Belgium, Greece and Scandinavia among their strongest markets (Band, 2007).

Like Diageo, Bacardi too has used their powerful reputation base to create variants on their core spirits, such as flavoured Bacardi rums, which have been increasingly popular amongst young consumers. Here again, however, one of the company’s most successful innovations has been the FAB Bacardi Breezer, introduced during the flavour fad of the 1990s, which has grown to become an established leader in the RTD category alongside Smirnoff Ice, thanks to attracting young drinkers with its sugary taste, catchy name and colourful packaging (Cisnal et al, 2006).

Pernod Ricard’s commitment to innovation has been a key growth driver and has helped to build brand equity. Besides building a portfolio of leading local brands, this commitment is reflected in recent examples such as the introduction of Chivas Regal 25 Year Old, an exclusive version positioned at the top end of the market, fetching approximately £160-180 per bottle. This demonstrates Pernod’s focus on “premiumising” their products and associating their brand names with wealth and prestige, as this is a powerful lever for accelerating sales and profit growth in the long run (Pernod website, 2011).


Promotion is essentially the company’s strategy for the market communication process, whereby the sender (firm) uses the media (TV, magazines, radio) to communicate a message (advertisement, sales presentation) to the receiver (potential customer), in the hope of landing sales (Lazer, 1971). Alcohol advertising is subject to stringent WHO regulations in the majority of countries worldwide, due to well-established links between consumption and related injuries and fatalities, as well as more recent allegations that the industry targets young, underage consumers by advertising sweet-tasting, carbonated “alco-pops”. Marketers therefore have had to come up with inventive means by which to advertise their products without any hint of promoting irresponsible, underage or over-consumption (IAS, 2010).

The firms under consideration in this paper are established global spirits producers and employ many methods to promote their products. We therefore look at each firm’s approach relative to two principal marketing tools in the global spirits market: promotion via sports and entertainment sponsorship, and direct TV advertising.

2.2.1. Sports & Music Entertainment Sponsorship

Alcohol producers know that officially sponsoring an important sporting or entertainment event offers opportunities for significant brand exposure in the media, especially when the event is widely televised. During the summer months when beverage demand is high, producers ramp up marketing efforts and seek high-budget sponsorship deals with internationally recognised events in order to raise international exposure and help their brands stand out from the competition (Novak, 2004).

Sponsorship deals for major sporting events were dominated by leading beer brands in the past (such as Coors, Budweiser and Heineken), but recent times have seen the emergence of Diageo spirit brands sponsoring events such as the golf Championship at Gleneagles (Johnnie Walker), motor-racing events such as NASCAR (Crown Royal) and the ICC Cricket World Cup 2007, while Diageo has also been a sponsor of the McLaren F1 team since 2006.

Diageo also maintain sponsorship in the music entertainment industry, mainly through Smirnoff and non-spirit brands such as their Jamaican beer, Red Stripe. Investments in the Smirnoff Music Center[5], an outdoor performing arts centre in Dallas, saw several high-profile concerts take place at the venue and create a strong association with Smirnoff and popular culture. In 2010, Smirnoff launched its Nightlife Exchange Project which attempts to turn consumer-generated ideas on nightlife experiences in different countries into actual events, subtly exploiting the growing integration of national party cultures.

Bacardi capitalised on the easing of regulation over sports sponsorships as well, and has been a sponsor of the F1 Ferrari Team since 2006. In 2007, Bacardi also signed up former champion F1 driver Michael Schumacher as a brand ambassador for the spirits company’s social responsibility programme around the sport (Sport Industry, 2007).

More significantly in April of last year, Bacardi signed a 3-year, highly lucrative sponsorship deal with the National Basketball Association (NBA). The groundbreaking agreement between the two was the first time one of the four major league sports in the US had signed a spirits brand as a corporate patron, and according to experts, may act as a catalyst for other sports at a time when spirits brands have been steadily increasing their presence in “pro” teams and venues (Sports City, 2010).

Bacardi’s ventures into music sponsorship are no less impressive: the brand sponsored the 2010 world tour of the Black Eyed Peas, one of the best-selling pop bands of all time. According to the tour promoter A.E.G, Bacardi was chosen due to the brand’s “long standing connection with music”. This stems from Bacardi’s involvement in music, such as sponsoring various festivals and music events, as well as pioneering projects such as the first brand-and-band partnership with well-known dance music duo Groove Armada. This project enabled Bacardi to use the band, and indeed music, as a vehicle to deliver brand messages in a cultured environment (UTalk-Marketing, 2009).

Pernod Ricard contrastingly has only very recently latched on to this trend of rising spirit brand sponsorship in major sporting events. The company predominantly views its sponsorship arm as a means to fulfil and promote its commitment to corporate social responsibility[6]. It prides itself on sponsoring the arts, humanitarian action and science, thus encouraging creativity and entrepreneurship, respecting tradition and supporting major long-term projects (Pernod website, 2011). Last year however, the wine and spirits producer signed a deal to sponsor the 2011 Rugby World Cup, one of the largest sporting events reaching an estimated audience of 4bn worldwide.

Pernod’s promotion in the entertainment industry is relatively recent and for the most part done through its Malibu Rum brand. The brand sponsored the reunification tour of the global pop band Take That, was behind the Malibu Music Awards 2010[7] and most recently, launched Radio Maliboom Boom, a digital radio station transmitting Caribbean music and consumer videos, as well as performances from featured music guests, worldwide (BBC Business News, 2010).

2.2.2. Direct TV Advertising

Over the past couple of decades, TV has proved to be the most influential direct advertising medium, with firms investing heavily in designing and implementing advertising campaigns, as well as in securing valuable airtime for them to run (Gauder, 2008). Strict WHO and EU legislation means stringent restrictions on how alcohol can be advertised, so marketers must try to combine eye-catching visuals with a clever underlying theme or message that will stick with the customer (IAS, 2010).

Diageo’s Smirnoff has embarked on a variety of advertising campaigns, the most recent being the “Be There” campaign, which promotes Smirnoff-sponsored party events across the globe where consumers can win tickets through promotions, climaxing in the Nightlife Exchange Project described previously. However, Smirnoff’s prior campaign was its largest ever (approx. £5m in the UK) and involved heavy investment in creating eye-catching and memorable TV advertisements. The campaign was known as “Sea”, and the advert features the sea dramatically ridding itself of all the debris deposited in it over the years, bringing to life the purification lengths to which Smirnoff goes to produce its high quality vodka. It was a resounding success and increased retail sales value by around £8m within the year (Talking Retail, 2007).

The recent Bacardi campaign, known as the “Spirit of Bacardi”, attempted to celebrate the warmth, optimism and pioneering attitude of Bacardi rum. The advert features a group of friends who build their own island to create a unique adventure and a celebration of life that can be achieved through this “island lifestyle”. The imaginative nature of these friends reflects the inspiration and pioneering spirit of the rum, which has contributed to the creation of many famous cocktails (Mojito, Pina Colada, Cuba Libre). In essence, rather like the Smirnoff example, both firms are trying to convey the intrinsic benefits and utility that can only be gained from consuming their product: in the case of Smirnoff it is the vodka itself, pure and untainted, whereas Bacardi emphasise the environment in which the product is meant to be consumed, and how the rum makes you feel.

February 2011 saw the release of the latest commercial from Chivas Regal’s “Live with Chivalry” campaign, which according to Patrick Venning[8] draws inspiration from the fundamental values of modern men and communicates a point of view that aims to encapsulate the Chivas Regal heritage. The campaign emphasises the luxurious and exuberant essence of the brand and reinforces consumer perceptions of fine Scotch whisky, one of the firm’s core objectives. Interestingly, however, the company’s largest advertising spend was on their Emmy award-winning Absolut Anthem advertisement. The underlying theme here was one of “doing something differently leads to something exceptional”. This was an interesting take on the fact that some consumers were initially averse to drinking a vodka made in Sweden, and the brand even received criticism from bars and clubs worldwide due to bartenders struggling with the bottle shape (Dodd, 2010). Despite this, the marketers involved managed to turn the product’s issues into individualities that defined its uniqueness and made it stand out from the rest. It was the idea that being different is not necessarily a bad thing; marketers aimed to relate that to the vodka on a superficial level, but more profoundly to other issues such as anti-war protests and equal gay rights.

3. Conclusion

The global spirits market is certainly mature, saturated and competitive, making opportunities for growth limited. In order to achieve increased sales volume, producers have to either snatch market share away from competitors or expand the overall size of the market. In fact, the top players have had to raise marketing spend in a bid to differentiate their products from those of competitors, sustain competitive advantage and attract consumer attention via media that are already highly cluttered (Novak, 2004).

The rising middle class in developed and emerging nations, particularly on the back of the recent global recession, are keen to accumulate wealth and do better, as this is an inherent human characteristic. The leading firms in the global market rely heavily on brand recognition and perceived product quality as a major selling point, particularly for their core brands. This is certainly the case for Diageo and Bacardi who tend to offer a standardised premium product in the majority of locations albeit with some variants and adaptation present. Pernod Ricard also relies on its global icons but has demonstrated a greater involvement in producing a range of adapted brands to successfully cater for rising emerging markets that represent the future of the industry (Pernod Website, 2011).

Moreover in promotion, Diageo and Bacardi seem to follow similar marketing patterns that involve strong affiliation with popular sports and entertainment culture through sponsorship, and high-budget advertising campaigns to promote the core characteristics of their brands’ images which they have built up over many years. Pernod’s interaction with popular culture is a relatively more recent phenomenon but they have raised brand equity over the years (and maintained competitiveness) by building a diverse spirits portfolio and by marketing their products as sophisticated goods, synonymous with high quality and status.

4. References

Absolut Vodka Official Website:
Anonymous, 2007, “Diageo GB Launches Largest Ever Smirnoff Advertising Campaign”, Talking Retail article,
Anonymous, 2007, “Schumacher to become Bacardi Ambassador”, Sports Industry Group,
Anonymous, 2009, “How Bacardi’s Groove Turned the Music Industry on Its Head”, UTalk Marketing article 13231,
Anonymous, 2010, “NBA signs ground-breaking Bacardi Deal”, Sports City,
Anonymous, 2010, “Pernod Ricard in Marketing Push for Higher Drink Sales”, BBC Business News,
Bacardi Official Website:
Band, J., 2007, “The European Beer, Cider and FABs Market to 2007”,
Cisnal, L., Becerik, G., Moser, K. & Skopal, J., 2006, “Global Strategy Report: Bacardi & Co. Ltd”, KFK International Marketing,
Diageo Official Website:
Diageo Investor Conference, 2010,
Dodd, C., 2010, “Pernod Ricard Announces Biggest Ad Spend Yet for Absolut”, The Publican Article 67927,
Gauder, B., 2008, “Diageo: Business Summary”, Company Analysis Files, The Patient Investor,
Impact Databank, 2009, “Global Drinks Market Review and Forecast: 2009 Edition”.
Institute of Alcohol Studies, 2010, “Alcohol and Advertising Factsheet”,
International Center for Alcohol Policies, 2006, “The Structure of the Beverage Alcohol Industry”, ICAP report.
Kotler, P. & Armstrong, G., 2004, “Principles of Marketing: 10th Edition”, Upper Saddle River, NJ, Pearson Education.
Kotler, P. & Armstrong, G., 2006, “Marketing: An Introduction”, Prentice Hall.
Lazer, W., 1971, “Marketing Management: A System’s Perspective”, New York, John Wiley & Sons.
Novak, J., 2004, “Alcohol Promotion & The Marketing Industry: Trends, Tactics and Public Health”, Association to Reduce Alcohol Promotion in Ontario (ARAPO).
Pernod Ricard Official Website:
Pernod Ricard Press Kit, February 2011,
Popp, J., 2005, “Bacardi’s Breakout Year”, Beverage Industry,
Proctor, T., 2000, “Strategic Marketing: An Introduction”, Routledge Publishers.
YouTube videos: Smirnoff “Sea”; Bacardi “Spirit of Bacardi”; Absolut “Absolut Manifesto/Anthem”.
Zorova, N., Tantilova, M., Tokcan, E. & Ladino, F., 2010, “Diageo: Analysis of Sustainable Competitive Advantage”,

5. Appendix

Figure 1: Bacardi Bat logo


Select a consumer product/brand that has a large market in the UK and research its associated marketing programme and environment


This document covers marketing and several of its core concepts: the external environment in which a product or brand operates, the marketing mix, and overall marketing applications.

In order to illustrate these applications and how they work in practice, a real-world brand has been selected, and all of these concepts are applied and analysed with respect to that brand. The assignment illustrates how marketing principles, tools and methods are employed within an organisation and how effective these strategies and actions prove to be. The brand selected for the purpose of analysis is Coke, of which around 260 million products are sold every year in the UK.

Assessment Task

You are to select a consumer product/brand that has a large market in the UK and research its associated marketing programme and environment.

The Brand-Coca Cola


Coca-Cola is a global brand that exists in more than 200 countries. It is known for its carbonated soft drink, generally known as Coke. Coke has consistently been the top consumer brand in the UK and was the leader in 2009 and 2010 as well (Source: The Nielsen Company). Coke is produced by the Coca-Cola Company, which also makes sports drinks, juices etc. Coke is made from the extracts of coca leaves. The brand originated in the United States. Coke is the industry leader and has the largest market share in the United Kingdom and worldwide (Daily Mail, 2010). The company is involved in both related and unrelated diversification. Coke is a range brand with a product line under it in the form of Diet Coke, Coca Cola Vanilla, Coca Cola Cherry and Coca Cola Zero, and it also comes in coffee, lemon and lime flavours.

The brand’s greatest focus is on the health of its customers, which is why it has decided to move towards products made from natural raw materials instead of synthetic ones (Daily Mail, 2010). The motto of the brand is to provide its consumers with a life full of happiness and refreshing moments. It aims to provide the best value for people’s money in the form of a quality, tasteful product at an affordable price. The brand provides employment opportunities to many people around the world, thus adding value to their lives and improving their lifestyles and living standards. The brand has a proactive, learning approach.

Part ‘1’- The product/brand’s macro and competitive environments

Macro Environment

The macro environment consists of the overall external environment in which a product, brand or company exists. It comprises all the external forces which affect the performance of a brand or product but are not under its control, such as political factors, cultural conditions and environmental factors. To study, understand and analyse these forces and their overall effect, PESTEL analysis is used. PESTEL stands for the political, economic, socio-cultural, technological, environmental and legal forces that exist in the external environment, are uncontrollable, and affect the firm externally. Research has claimed that environmental scanning is linked with improved organisational performance (Newgren et al., 1984; Dollinger, 1984; West, 1988; and Murphy, 1987).

The first component of environmental change, complexity, is defined as a measure of heterogeneity or diversity in many environmental sub-factors such as customers, suppliers, socio-politics or technology (Lane & Maxfield, 1996:217; Chae & Hill, 1997:8; and Chakravarthy, 1997:69). Since complex and turbulent environments can be desirable, yet many businesses are uncertain about how to cope with such situations, it makes sense to identify ways to handle such environments. Many believe that identifying a causative link between environmental variables and management action is not possible because of the complexity of variables and the chaotic nature of environments (Winsor, 1995:181). However, recent research has stressed the inter-relationship between an organisation and its environment (Polonsky, Suchard & Scott, 1999:52). Organisations co-exist and co-evolve with their environments and are therefore able to influence the environment to a greater extent than previously thought (Brooks & Weatherston, 1997:13).
Organisations shape their environments by influencing their industries or collaborating with each other, thereby gaining some control over some part of their environments. The environment is thus not completely determined by external forces, but can also be influenced by the organisation (Anderson, Hakansson & Johanson, 1994, in Ford, 1997:229).

Competitive Environment

The competitive environment consists of the factors affecting the competitive position of the firm. It comprises the competition a product or brand faces in terms of both direct and indirect competitors, as well as suppliers, buyers etc. Porter’s five forces model is best suited to analysing the effect and present state of the firm’s competitive position. There is continuing interest in the study of the forces that impact on an organisation, particularly those that can be harnessed to provide competitive advantage. The ideas and models which emerged during the period from 1979 to the mid-1980s (Porter, 1998) were based on the idea that competitive advantage came from the ability to earn a return on investment that was better than the average for the industry sector (Thurlby, 1998). Porter (1980a) defined the forces which drive competition, contending that the competitive environment is created by the interaction of five different forces acting on a business.

You are required to write a report on the product/brand’s macro and competitive environments, so what you have presented is not required. You have written a literature review/essay on PESTEL and Porter, together covering about 500 words. Not needed. You should go straight into the product’s macro and competitive environments. Even though it is recommended to include a little theory, it does not need its own chapter, or around 500 words of a 3,000-word report.

PESTEL analysis for Coke

Figure 1 below shows the forces affecting a firm externally.

Figure 1


Political

Coke comes in the food category of consumer products. Favourable state policies and a stable government, free of war and unrest, are necessary for the production of Coke. A sound governmental policy is necessary so that the required infrastructure, investment opportunities, skilled labour and manpower, technological advancement, ease of distribution, resources and access to raw materials are available, and the necessary plant and production set-up can be installed and run properly. Changes in governmental regulations regarding production, policies regarding the transfer of money across countries etc. affect the company, and all of these are determined by the political state. The ability to enter into strategic alliances also depends on the political conditions set by the state.


Economic

A strong economic environment with low interest rates and a high GDP growth rate gives a boost to any business and is helpful in generating sales and profits. The same is the case with Coke. If the overall economic environment in which the brand exists is healthy, growing and developing, consumers have higher demand and more disposable income, and thus spend more on consumer products, which means more business for Coke. This also involves less spending on R&D by Coke, and more innovative products can be introduced. It also helps the brand achieve economies of scale and scope, and improves the chances of business and penetration in emerging markets.

Socio-Cultural

As Coke is a family brand meant for people of all ages, genders, occupations and lifestyles, it is bought equally by individuals and families. It is consumed more in countries with a broad family structure, and also by youngsters and teenagers, particularly students. As the world sees more and more societal changes, for example more women joining the workforce and students living away from home, Coke has seen tremendous growth as it is a convenient, quality and tasty beverage. People’s views regarding health are also changing dramatically, and they are becoming more health conscious day by day. That is why Coke introduced Diet Coke, meant for people who are diet conscious, have health-related issues such as diabetes, or are generally more aware of healthy food products. Further evidence of changing societal and cultural attitudes is that Coke has decided to find alternatives to the synthetic ingredients used in Coke. One such ingredient is sodium benzoate, an additive which is a major constituent of Diet Coke. Research suggested that this ingredient damages DNA in yeast cells and can cause children to become hyperactive. In light of increasing pressure, Coke has decided to move to natural ingredients and stop using this particular material in all its products, including Diet Coke and Sprite.


Technological

Advancements in technology help a business improve its functions, management, processes, distribution network, marketing techniques etc. They help save both cost and time and reduce effort. Technology is also a vital part of Coke and has led to advancements in its marketing practices and techniques, as Coke now makes use of the internet and engages in e-commerce, m-commerce etc., which have become media for advertising and selling. Technology also enables effective supply chain management for Coke and integrates all the functions and processes of the brand. It helps the brand innovate and make products more appealing and attractive to customers, and advances in machinery have enabled Coke to produce greater volumes and thus meet increasing demand. Coke CCE UK employs state-of-the-art technology which produces quality products in no time; it produces Coke cans faster than bullets come out of a machine gun.


Environmental

Every brand has to look to the safety of the society and environment it operates in, and has to adhere to universal environmental laws and regulations. Coke is no exception in this regard and works actively towards the betterment and protection of society. The production of biodegradable plastic bottles and cans, which are easy and safe to dispose of, is an example of this. Coke also engages in corporate social responsibility and conducts various activities focused on creating awareness, education, improving social welfare, providing facilities and benefits to people and society, and protecting the climate. That is why it pays special attention to the need for and importance of recycling and disposal of its products. In order to preserve natural resources and the climate, Coke has constantly been reducing the amount of water used in its products.


Legal

Like all other firms, Coke has to adhere to the legal procedures and regulations set by the government; without abiding by these laws and regulations, Coke cannot carry out its business. These include accounting principles, transparency and reporting standards, taxation laws and foreign firm restrictions, as all of these form part of the legal environment which every firm operating in the industry must abide by. Changes in any of these things therefore also affect Coke and the way it does business.

There is no actual evidence or reference that supports any of your analysis. It reads like something a non-academic would write from having known Coke all their life. In order to present good quality work, you must present “a good range of issues identified; evidence of clear thought and subject perception”. When writing such an analysis, I would recommend looking for actual evidence of PESTEL factors in the company’s annual reports. There are always discussions of their political issues (you should have spoken about Saudi there), and when doing so, you should reference appropriately, so the reader knows your arguments are well justified and have been obtained from somewhere.

Porter five forces model for Coke

This model is used to assess the competitive position of the brand and see where it stands relative to its competitors.

Existing rivalry among firms

Rivalry among existing firms is high, as Coke has both direct and indirect competitors. The industry is generally regarded as a duopoly in which the small firms are not significant. The biggest competitor of Coke is Pepsi, which enjoys the same kind of image, brand loyalty, customer base and geographic reach as Coke. The two brands compete head to head, and rivalry between them is intense. They mostly compete on the basis of differentiation and advertising rather than pricing strategies, and competition between them is at a global level as both are multinationals.

Figure 2 below shows the five forces model.

Figure 2

Bargaining power of buyers

The major buyers of Coke include not only end customers, who buy from retail stores, vending machines etc., but also restaurants, fast food chains, and cafes in schools, colleges and universities. The bargaining power of buyers is therefore high, as large-volume buyers such as restaurant and fast food chains purchase in bulk and can negotiate terms.

Bargaining power of suppliers

The bargaining power of suppliers is low because the raw materials that Coke requires are general and easily available such as sugar, caffeine etc. Switching costs from one supplier to another are also very low. Threat of forward integration is low as well.

Threat of substitute products

Threat of substitute products is high, both from direct substitutes in the form of other non-alcoholic drinks like Pepsi and from indirect substitutes such as water, tea, juice, energy drinks and coffee. Switching costs are low, and the products are almost the same in terms of value and pricing. Thus, product-for-product substitution, generic substitution and related substitution are all present in this industry.

Threat of new entrants

Threat of new entrants is low due to several barriers to entry. Firstly, entry involves huge capital investment in setting up bottling plants, machinery etc. The market is already mature and saturated with market leaders. Coke has achieved economies of scale and scope, and its ingredients are rare, valuable and difficult to imitate. Customers are very brand loyal, and it is difficult to match the brand image and equity of Coke. Coke has agreements with distributors and suppliers in every region, and no other company can enter into agreements with them. Also, it is not easy for a new entrant to spend as much on advertising campaigns as Coke does, given Coke’s strong financial muscle.

Just as I wrote in relation to PESTEL, you show that you do have an understanding of the relevant theories, which have been incorporated accurately. However, the main issues you are analysing are not necessarily backed up with evidence, which may then affect the student. How do you know they have substitute products, beyond what you can readily think of? How do you know they are actually substitute products? Who told you so? What new substitute products have come up recently? What recent issues have they had with suppliers? What are the current strategies? Where are the references? What are the most recent threats to Coke?

Part ‘2’ – The marketing programme elements (marketing mix elements) currently employed

Marketing Mix

The marketing mix, or the 4 Ps or 7 Ps of marketing, refers to the tactics and strategies of product, price, placement, people, processes, physical evidence and promotion of a brand. These are all part of the marketing plan and help create brand image, equity and loyalty among the target market segment, and they define how a product or brand is positioned. Traditional market research and traditional marketing mix models are too simplistic to understand complex marketing situations, as such models assume linear relationships between mix variables and the resultant outcomes (McGlone & Ramsey, 1998:248 and Tedesco, 1998:5). Since the simplistic approaches recommended by traditional theories can be dangerous, marketers should consider the overall environmental position when designing their strategies and adopt non-traditional marketing methodologies (Wollin & Perry, 2004:568). The classic 4Ps of marketing have been questioned as inadequate (Van Waterschoot & Van den Bulte, 1992:91), and developed further into the 7Ps of Booms and Bitner (in Zeithaml & Bitner, 2003:24) and of Christopher et al. (in Palmer, 1994:32). However, the 4Ps remains the most common model of the marketing mix (e.g. Kotler & Keller, 2006:19).

I have a feeling that if I did have the time to count, the number of actual theories (that were not requested for) would exceed the actual company analysis (that was requested for).

Marketing Mix of Coke


Coke is a non-alcoholic, carbonated soft drink originally flavoured with extracts of coca leaves. It has a distinctive taste and is of great quality. Apart from the regular Coke, it also has other variants such as Diet Coke, Coca-Cola Vanilla, Coca-Cola Zero and Coca-Cola Cherry, along with special types of Coke that come in lime, coffee and lemon flavours. The concentrate, which is the core of the product, is manufactured by the company itself and then sold to licensed bottlers, who produce the finished product by mixing the concentrate with sweeteners and water. It is then poured into cans and bottles and sold. The brand name is well known, recognised and recalled by virtually everyone in the UK and around the globe.

Figure 3: The elements of the marketing mix. Source:

The product’s logo, design and colour are all well known, and people associate themselves with this brand; it is a symbol of lifestyle. The brand is among the highest-equity brands. The product keeps on innovating to maintain its attractiveness and appeal. It comes in glass and plastic bottles as well as cans of different sizes and capacities in order to suit the needs of everyone, be it individuals, big families or parties. Coke even comes as Coke Mini, a 7.5-ounce can. The brand is known for its great quality and taste. It gives special attention to the health and safety of its consumers, which is why it is moving towards the use of all-natural elements in production. The introduction of Diet Coke and the focus on recycling and safe disposal of the product are also given great importance. The design of the bottle, known as the Contour bottle, has great appeal; it is distinctive in nature and looks smart, hip and trendy. It serves to fulfil the physiological or basic needs of its consumers. Figure 3 above shows the elements of a marketing mix.


Coke engages in competitive pricing in order to better compete with its competitor Pepsi, but it does not compromise on quality. The price of Coke is fixed, but it occasionally offers seasonal discounts and pricing, bundling, volume discounts and the like during holiday seasons or other such events. Coke does not indulge in discriminatory pricing; however, it offers wholesale pricing when selling to fast food chains such as McDonald's, restaurants, etc.


A distribution channel is a non-linear system that can be stable, periodically oscillating, or chaotic (Priesmeyer, 1992:79). Coke's distribution network is very extensive, which is why the product is available everywhere. The company makes sure that Coke is easily available to everyone by making use of supermarkets, retail stores, vending machines, restaurants, etc. It has contracts with distributors and suppliers and occupies the best shelf space in every store and market, and it operates all over the globe by making use of franchising.

Apart from the Promotion subchapter below, no other marketing mix element has been reported with actual proof. Even the Promotion subchapter below does not contain any references, so where did you get your information from? They all seem to be based on the understanding of the writer and nothing else. This is an academic report, and as such needs market reports. For instance, Datamonitor, Mintel or KeyNote would have been very crucial in understanding the soft drinks market in which Coke operates, and would thus assist you in conducting a better analysis.


Promotion has been the biggest success factor behind Coke. Coke spends heavily on advertising and is involved mainly in above-the-line promotion. Initially, its modes of advertisement were newspapers, radio, billboards and providing customers with free samples and coupons of Coke. Today, Coke relies mostly on television advertising, as it has both high frequency and reach and involves both audio and visuals. Coke always comes up with new and innovative adverts and makes special advertisements for every region it operates in, suited to the local culture and needs. Coke also has brand ambassadors and spends heavily on promotion. Coke makes use of glittering generalities, which are very persuasive in nature. It is involved in the sponsorship of many events, particularly those related to youth and sports; the major events sponsored by Coke include the Olympics, the FIFA World Cup, the Cricket World Cup and The Football League. Many movie and television serial producers also promote Coke in their productions.

Part ‘3’ – Your fully justified recommendations in respect of the revised marketing programme elements needed to improve market size and/or profitability

In light of the above analysis, there is no doubt that Coke is one of the biggest, strongest and most successful consumer brands, present not only in the UK but everywhere on this planet. It has a huge and loyal customer base and is the market leader.


But there are certain things that Coke needs to focus on and improve in order to be more productive and efficient and to earn higher profit margins.

Firstly, it needs to work on its brand image in certain countries such as Saudi Arabia and parts of the Middle East. In these parts of the world, Coke is banned and boycotted because of its continued investments in Israel. These countries only allow Pepsi to be sold within their borders, and thus Coke is losing a very big chunk of the market while Pepsi is building its stronghold there.
Secondly, it should focus more on its distribution channels and make them wider and more effective, as Coke is still not as widely and easily available as Pepsi.
Thirdly, it needs to work more on its advertising campaigns, as Pepsi has an edge over Coke in this aspect. Coke should also advertise and sell more through online channels and advertise on social networking websites such as Twitter, Facebook, etc.
The company should make sure that consumers are well aware of all its latest promotions and activities, such as bundling offers, discounts, special offers, price cuts, timely offers, etc. The company should also revise its segmentation strategy and form even smaller target market segments so it can better and more effectively understand the drastically changing needs and wants of its customers and be able to fulfil and satisfy them.
Coke needs to diversify into other conventional and non-conventional products, engaging in both related and unrelated diversification (e.g. non-carbonated products), so that it has a broad and diverse portfolio of products; it can do this easily by making use of its brand name and equity.
Establishing a feedback mechanism is also very necessary.
Coke needs to pay special attention to the markets in which its performance is declining, such as Thailand and Indonesia, and should try to penetrate further into the emerging markets, which are the biggest markets for consumer goods today.

The requirements were for “your fully justified recommendations in respect of the revised marketing programme elements needed to improve market size and/or profitability”, and as such, each recommendation you have provided should have been backed up with actual problems, and references to these problems. These problems should have been highlighted as the major threats in your market research, not just brought up in the recommendations chapter. One of your recommendations is “establishing a feedback mechanism is also very necessary”. How is that justified in any way? Why is it essential to establish a feedback mechanism? Who said so? And what makes you feel it is important? If it is you, then why should it be an actual recommendation for an organisation? There is a reason why we only have 14 writers out of up to 200 applications, and it is because we need to trust everyone to provide the best quality written work. I cannot risk damaging the reputation of our site Essays by sending this work to the client. I would hate to think of the consequences.


There is no doubt that Coke is one of the largest and most successful consumer brands in the UK and globally. It occupies the largest market share and deals with external and internal pressures effectively. Its marketing plan and mix are very good and have helped Coke stay competitive and compete with Pepsi in the best possible and most efficient manner.

As everything has its pros and cons, a bright and a dark side, some advantages and disadvantages, the same is the case with Coke. There are some aspects on which Coke lags behind, and they require its immediate attention if Coke wants to stay on top and further increase its market value, market share and size, and its profitability.


Aaker,D. (1991), Managing Brand Equity: Capitalizing on the Value of a Brand Name, Free Press, New York, NY.
Aaker, Jennifer (1997), “Dimensions of Brand Personality”, Journal of Marketing Research, 34 (3), 347-356
Amirkhan, J.H. (1990), “A factor analytically derived measure of coping: the coping strategy indicator”, Journal of Personality and Social Psychology, Vol. 59 No. 5, pp. 1066-74.
Batra, Rajeev, Donald R. Lehmann and Dipinder Singh (1993), “The Brand Personality Component of Brand Goodwill: Some Antecedents and Consequences” in Brand Equity and Advertising, ed. David A. Aaker, Alexander Biel, Hillsdale, NJ: Lawrence Erlbaum Associates, 83-96.
Ben Delaney, (2007), The Marketing Mix.
Chandy, R., Tellis, G. J., MacInnis, D., & Thaivanich, P. (2001), What to say when: Advertising appeals in evolving markets. Journal of Marketing Research, 38, 399–41
Dacin, P.A. and Smith, D.C. (1994), “The effect of brand portfolio characteristics on consumer evaluations of brand extensions”, Journal of Marketing Research, Vol. 31, May, pp. 229-42.
Davis, F.D. (1989), “Perceived usefulness, perceived ease of use, and user acceptance of information technology”, MIS Quarterly, Vol. 13 No. 3, pp. 319-39
Doyle, Peter (2001), Marketing Management and Strategy, 3rd edition, New York.
Fernandez, Colin (2008), “DNA Damage Fear”. London: The Daily Mail.
Gerard J. Tellis. (2006), Modelling Marketing Mix, University of Southern California.
Hanssens, Dominique M., Leonard J. Parsons and Randal L. Schultz (2001), Market Response Models: Econometric and Time Series Analysis, 2nd ed. Boston, MA: Kluwer Academic Press.
Michael Porter, (1990), The Competitive Advantage of Nations.
Porter, Michael E.(1980) Competitive Strategy. New York: Free Press
Tellis, G. J. (2004). Effective advertising: How, when, and why advertising works. Thousand Oaks, CA: Sage
Terwiesch, C, 2005, Product Development as a Problem-solving Process. in S. Shane, Blackwell Handbook on Technology and Innovation Management, forthcoming
Simon, Kucher & Partners (2004), Power Pricing, WHU Koblenz.
Singh V., K. Hansen, R. Blattberg. 2006. Market entry and consumer behaviour: An investigation of a wal-mart supercenter. Marketing Sci. 25(5) 457–476.
Shankar, V., R. Bolton. (2004), An empirical analysis of determinants of retailer pricing strategy. Marketing Sci. 23(1) 28–49
Rajendran, K. N., & Tellis, G. J. (1994). Is reference price based on context or experience? An analysis of consumers’ brand choices. Journal of Marketing, 58, 10–22
Wen-fei L. Uva, (2001), Smart Pricing Strategies, Department of Applied Economics and Management, Cornell University


The Value of Accounting Disclosure in UK Capital Market


This study is focused on identifying the value of accounting disclosure in the UK capital market, citing the relationship between the market capitalisation and money raised of 50 selected LSE-listed companies. The data for these companies are obtained from the database of the London Stock Exchange website. Non-probability sampling is adopted in selecting these companies from a list of 350, and linear regression is used to present the results. A significant relationship is observed between market capitalisation and money raised. The values of accounting disclosure in the UK capital market are reduction of the cost of capital and of information asymmetry; consistency of firms’ accounting policies; enhancement of management reputation; the capability to address the information needs of stakeholders; and mitigation of inefficiencies within the market.

The recommendations include pursuing a study on the impact of financial globalisation on financial reporting and on the role of audit committee in corporate governance.

Key words: Accounting disclosure, financial globalisation, market capitalisation, money raised, UK capital market

Chapter 1: Introduction

1.1 Context

Capital markets operate in a highly regulated environment; they are well developed and well functioning compared to other markets (Kieff and Paredes, 2010). Pohl et al. (1995) defined capital markets as markets for the shares (equities) of joint-stock firms, excluding markets for other securities such as bonds. More regulations exist in this market than in product and other markets, a fact attributed to the historical origin of securities regulation, whereby public investors must be protected against deceptive, misleading and manipulative activities by brokers and other personages. Mandatory disclosure and anti-fraud laws are two fundamental ways in which investors are protected in capital markets (OECD, 2001; Kieff and Paredes, 2010; Moore, 2013). In the UK, the London Stock Exchange, which serves as its capital market, is valued at £2,307 billion (House of Commons Treasury Committee, 2007).

OECD (2001) described the well-recognised value of disclosure and the well-analysed relationship between the efficiency of the stock market and the information available in the market place. According to Shehata (2014), disclosure is defined in the accounting literature as a process by which the firm informs the public of its financial statements. It is also referred to as the financial or non-financial communication of economic information, quantitative or qualitative, about a firm’s financial position and performance. It is, however, not entirely clear why the law needs to require disclosure of certain information by a firm (Kieff and Paredes, 2010). There are in fact countries which hold a register system of merchants that presents particular information about the amount of capital, the names of directors and officers, etc. of business entities. In many jurisdictions, a detailed disclosure system has been provided by securities law for large public firms (Kieff and Paredes, 2010). Notwithstanding, the value of the information being disclosed is the basis of the value of disclosure (Flood, 2014). Accounting and non-accounting information are the two kinds of information being disclosed, with this research investigation being centred on the first. The basis of useful accounting information is appropriate accounting treatment (Nikolai et al., 2010; Kieff and Paredes, 2010). Both benefits and costs are involved in mandatory disclosure, with its value being reliant on other elements of corporate governance. Additionally, more available information in the market place enables capital markets to function better, which in turn makes disclosure more valuable in a capital-market-centred environment (OECD, 2001).

The UK capital market, through the London stock market or its bond market, demands that firms disclose information under the guidance of accounting and auditing standards. In the US, the authority to set accounting standards was delegated to the Financial Accounting Standards Board (FASB) in 1974 (Benston et al., 2006). A legal framework facilitates general accounting disclosure and audit requirements in the country. Before the conception of formal standard setting, and upon the corresponding increase in public interest, financial reporting standards had been adopted by stock markets (which companies accepted for listing must follow), serving as evidence that markets can and will demand disclosure even in the absence of government directives to do so (Benston et al., 2006). It has been noted that publicly owned companies present annual reports that include the items described by Flesher (2010). The domain of financial reporting disclosure is constantly changing.

An important point is that transparency in information on asset securitisation and derivatives is insufficient, necessitating increased disaggregated information and disclosure of the sensitivity of the fair values of derivatives to changes in risk variables within a specific market (Eberhartinger and Lee, 2014). Eberhartinger and Lee claimed that enhanced disclosure on fair value accounting would better assist investors in assessing the values and risk levels of banks through a clear understanding of the leverage of derivatives.

On the other hand, there are inevitable weaknesses that a firm encounters in mandated disclosure, the most typical being the requirement for public companies to undertake periodic public disclosures. Most jurisdictions require quarterly disclosure, with the law delineating the scope of the information being disclosed, which comprises accounting and financial data (Kieff and Paredes, 2010).

Leuz and Wysocki (2006) noted that, according to research, strict regulations are costly for firms and result in avoidance strategies, i.e. going private, listing in other global markets, or delisting into unregulated markets. Additionally, the globalisation of capital markets has posed limitations on what a regulator can do. It has been noted that accounting disclosure and auditing rules determine the degree of information asymmetry between corporate insiders and outsiders (Dietl, 1998). These rules discourage ownership concentration, facilitate corporate governance, boost risk diversification, and encourage a huge number of investors to participate in capital market operations. On the contrary, weak accounting disclosure regulations increase the costs of outside corporate governance, promote ownership concentration, and discourage investors from participating in capital market operations (Dietl, 1998).

Capital markets are designed for long-term securities trading (e.g. shares and bonds), which draw together long-term capital suppliers and demanders. In the UK, the London Stock Exchange (LSE) – which serves as the principal capital market – and the Alternative Investment Market (AIM) comprise the capital market. The LSE offers UK and foreign companies the mechanism for raising capital, government securities, Eurobonds, and so on (McMenamin, 2005).

1.2 Rationale of the Study

The rationale of the study is found in the fact that communicating firm performance and governance to outside investors necessitates the potential importance of financial disclosure and reporting (Healy and Palepu, 2001). Efficient resource allocation in a capital market economy is hampered by information and incentive quandaries. These quandaries are mitigated through the important role played by disclosure and by institutions tasked to facilitate reliable disclosure between managers and investors. The functioning of the capital market can break down if there are information problems arising from information differences and conflicting incentives (Healy and Palepu, 2001).

1.3 Research Gaps

There is a lack of research explaining why the law requires the disclosure of certain information by a firm (OECD, 2001; Kieff and Paredes, 2010). There are also important gaps in knowledge on auditors’ incentives and intermediaries, and their implications for their credibility. Moreover, whilst theory suggests that auditors increase the credibility of financial reports, empirical research provides little evidence to substantiate this claim (Healy and Palepu, 2001).

1.4 Scope of the Study

The scope of this study is limited to the London Stock Exchange as a capital market in the UK and excludes other stock exchanges. It is also solely focused on the discussion of capital markets for accounting disclosure and does not include other markets, such as the product market.

1.5 Research Question

The research question that this study purports to answer is:

What is the value of accounting disclosure in the UK capital market, citing the relationship between the market capitalisation and money raised of companies trading in this market?

1.6 Aims and Objectives

The aims of this study are to discuss the value of accounting disclosure in the UK capital market and to explore the impact of globalisation on the ways in which accounting disclosure in this market is carried out.

The objectives are as follows:

To survey the extant literature on accounting disclosure in UK capital market;
To conduct an enquiry on the role of financial globalisation in accounting disclosure in the changing capital market in the UK; and
To examine how accounting disclosure could influence the pattern of market capitalisation and money raised in the UK capital market.

1.7 Significance of the Study

This study is significant to the academic community as it explores and explains why accounting disclosure is important to capital markets and corporate governance as a whole, serving as a contribution to the existing research literature on the subject. It is relevant to the business community in its discussion of the role of globalisation in capital market transactions and accounting disclosure in this market. In addition, it is significant to future researchers as it serves as research evidence and a reference for similar research endeavours being carried out.

1.8 Ways to Investigate the Research Questions

The ways through which the research questions are investigated are delineated in the items below:

Research Method

Linear regression is carried out to process the data statistically, indicating the use of a quantitative method to analyse the data. Olagbemi (2011) stated that quantitative research employs statistical techniques in explaining a certain phenomenon, and likewise presents results statistically. The relevance of adopting the quantitative method lies in the study’s intention to find out the relationship between market capitalisation and money raised in its attempt to identify the value of accounting disclosure in the UK capital market.
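The regression described above can be illustrated with a small sketch. This is not the study's actual data or code: the figures below are invented, and the ordinary-least-squares fit simply shows the kind of regression of market capitalisation on money raised that the method describes.

```python
# Hypothetical sketch of the study's method: an ordinary least squares
# regression of market capitalisation on money raised. All figures are
# invented for illustration; they are NOT the study's LSE sample.
import numpy as np

money_raised = np.array([10.0, 25.0, 40.0, 55.0, 80.0, 120.0])      # GBP m
market_cap = np.array([150.0, 340.0, 510.0, 700.0, 980.0, 1450.0])  # GBP m

# Design matrix [1, x] gives an intercept and a slope
X = np.column_stack([np.ones_like(money_raised), money_raised])
(intercept, slope), *_ = np.linalg.lstsq(X, market_cap, rcond=None)

# R-squared measures how much of the variation the fitted line explains
fitted = intercept + slope * money_raised
ss_res = np.sum((market_cap - fitted) ** 2)
ss_tot = np.sum((market_cap - market_cap.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r_squared:.3f}")
```

A significant positive slope, as in the study's reported result, would indicate that firms raising more money also tend to show higher market capitalisation; in practice the study's 50-company LSE sample would replace the invented arrays.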

Data Collection and Analysis Techniques

Secondary data collection is carried out in this research, in which the researcher utilises non-original or second-hand data collected by another party for a specific purpose, and reuses them for his or her own (Churchill and Iacobucci, 2010). This study does not make use of primary data, such as interviews, surveys, focus groups, etc., and as such relies solely on the existing literature on accounting disclosure in capital markets. Since only secondary data are collected, this study adopts a desk-based approach.

The analysis technique is based on using high-quality secondary data from accounting disclosure enquiry, using the literature review as basis of evidence.

1.9 Summary

This chapter provides a discussion of the context of accounting disclosure in the UK capital market; the rationale of the study; research gap; scope; research question; aims and objectives; and ways to investigate the research questions covering the research method, data collection, and analysis technique to be conducted. It described that capital markets function in a highly regulated environment and necessitate the value of disclosure system. The potential value of financial disclosure and reporting is necessary in communicating firm performance and governance to outside investors.

Chapter 2: Literature Review

2.1 Introduction

This chapter aims to provide a discussion of a range of works and studies from the extant literature on the topic being examined. The purpose is to offer evidence that could help in addressing the research question and objectives. According to Oliver (2012), a literature review is not simply a descriptive inventory or a collection of synopses of related information. Rather, its main objective is to determine what has been known and unknown about a specific aspect of practice that has not been entirely resolved and to identify how this might be resolved and managed according to the evidence presented by research.

2.2 Accounting Disclosure in Capital Market

Numerous studies have emphasised the relationship between accounting information and capital markets. In their article, Dumontier and Raffournier (2002) examined the corresponding evidence of this relationship in Europe and conducted a review classifying the European literature into three groups: studies tackling market reaction to recently released accounting information; studies focusing on investors’ use of accounting data; and studies that delve into the impact of market pressure on accounting decisions. This discussion is important in looking into the relationship between accounting information and capital markets.

According to Healy and Palepu (2001), corporate disclosure is an important element in the functioning of an efficient capital market. Regulated financial reports serve as firms’ means of providing disclosure, and such reports include financial statements, management discussions and analysis, and the like. The authors conducted a review of financial reporting and management’s voluntary disclosure of information, and argued that demand for financial disclosure and reporting emerges out of information asymmetry and agency conflicts between the management and outside investors. Regulators, auditors, and other capital market intermediaries enhance the credibility of management disclosures. They cited that new information is generated through financial analysts’ earnings forecasts and stock recommendations. They also stated that contracting-perspective research shows compensation, political cost considerations, and lending contracts influencing accounting decisions. On the other hand, capital market research shows the relatedness of voluntary disclosure decisions and capital market transactions. Evidence also suggests that investors perceive voluntary disclosures (e.g. management forecasts) as credible information.

Further, Healy and Palepu (2001) used the positive accounting theory as an area of focus of research on managers’ reporting decisions, on which managements’ financial reporting choices are focused. The findings indicated that the disclosure strategies of firms have implications on the speed with which information gets into prices. Findings also suggested that firms utilising accounting methods to speed up earnings are small ones and likewise have relatively high leverage. The unique contribution of the study is its development of a framework for analysing managers’ reporting and disclosure decisions in a capital market context. Its strengths include its clear explanation of the importance of financial reporting and disclosure as a valuable means to communicate firm performance and governance. No weakness was cited for the study.

On the other hand, in Botosan's (1997) study, the author mentioned that the financial reporting community gives importance to the impact of disclosure level on the cost of equity capital. However, there has been a lack of established tenets regarding the association between disclosure level and cost of equity capital, making the association difficult to quantify. The author examined it by regressing firm-specific estimates of the cost of equity capital on firm size, market beta, and a self-designed measure of the level of disclosure, using a sample of 122 manufacturing firms whose 1990 annual reports bore the amount of voluntary disclosure. The results demonstrated an association between greater disclosure and a lower cost of equity capital. Firms with a high analyst following revealed no evidence of an association between the disclosure level measure and the cost of equity capital, and this lack of evidence is assumed to have been caused by the limitation of the disclosure measure to the annual report (Botosan, 1997).

Botosan’s study used historical summaries of quarterly and annual financial results as a method to conduct trend analysis. Descriptive statistics and Spearman correlation coefficients were adopted to present the results. The unique contributions of the study are its discussion of the effect of disclosure level on equity capital cost and its focus on the association between greater disclosure and lower equity capital cost.

Botosan’s (1997) study is similar to that of Kothari (2001) in that both investigated the relation between financial statements and capital markets. Kothari conducted a review of empirical research to undertake this pursuit. He claimed that the primary sources of demand for accounting research on capital markets are market efficiency tests and fundamental analysis, amongst others. Evidence from research on market efficiency tests in relation to accounting information, fundamental analysis, and the relevant value of financial reporting tends to be helpful in decisions on capital market investment and corporate financial disclosure. As part of the research method, Kothari (2001) summarised the extant studies on the properties of management forecasts. He also reviewed a huge body of methodological capital market research focusing on positive accounting theory. The unique contribution of the research is its inclusion of advanced research in finance, economics, and econometrics. The strengths of the study are its critique of existing research, discussion of unresolved issues, historical perspectives on the accounting literature, and use of various statistical methods to measure variables (e.g. cross-sectional regressions; the autocorrelation coefficient of earnings). Its weakness, on the other hand, is its methodological complication due to skewed distributions of financial variables and difficulties in estimating the expected rate of return on securities (Kothari, 2001).

Similar to the work of Kothari (2001), that of Iatridis (2006) focused on the disclosure of accounting information in UK firms’ financial statements. The aim of the study was to analyse the financial characteristics of firms that provide broad disclosures, as well as to assess the financial impact of their motives, i.e. raising equity finance. The study examined the firms’ financial attributes as they disclose information on key accounting matters, such as accounting policy changes, risk exposure, and adoption of international financial reporting standards, to name some. Iatridis (2006) inferred that firms are likely to pursue disclosure of accounting information in their attempt to assure market participants of the consistency of their accounting policies with the accounting regulation and thereby address their stakeholders’ information needs. The author posited that firms fostering informative accounting disclosures tend to demonstrate leverage measures. The difference between this study and the earlier ones already discussed is seen in its focus on stakeholders and other market participants vis-à-vis the consistency of firms’ accounting policies with the accounting regulation.

Conversely, the study of Murray and colleagues (2006) took a different direction from those of Botosan (1997); Healy and Palepu (2001); and Kothari (2001). Murray and colleagues posited that capital markets are taking an increasingly powerful stride at the global level, and this power is manifested in the influence of financial markets on environmental and social conditions. The authors assumed that a possible way for markets to be re-educated towards less unsustainable modes of behaviour is to pursue social and environmental disclosures. The study was in keeping with much research into financial reporting theory (Murray et al., 2006).

The methods utilised included five statistical tests to determine whether a link exists between corporate disclosures and share returns. These statistical tests included the Pearson correlation coefficient, the chi-square test, and a general linear model. The findings suggested that firms’ profitability has not been adversely affected by disclosure of sensitive accounting information. The unique contribution of Murray and colleagues’ (2006) study is its emphasis on the fact that the implementation of international financial reporting standards fosters consistency and reliability in financial reporting due to the enhanced quality of the comparability of financial statements. The strengths of the study include its utilisation of a range of methods to examine the actions and attitudes of individual investors.
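As a minimal illustration of one of the tests named above, the Pearson correlation coefficient can be computed from first principles. The disclosure scores and share returns below are hypothetical placeholders, not the data of Murray and colleagues:

```python
# Sketch (hypothetical data): Pearson correlation between disclosure
# scores and share returns for ten imaginary firms.
from math import sqrt

disclosure = [3, 5, 2, 8, 7, 4, 6, 9, 1, 5]
returns = [0.02, 0.05, 0.01, 0.09, 0.06, 0.03, 0.05, 0.08, 0.00, 0.04]

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(disclosure, returns)
print(f"Pearson r = {r:.3f}")  # close to +1: the two series move together
```

A value of r near zero would instead support the authors’ finding that disclosure of sensitive information leaves profitability (here proxied by returns) unaffected.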

Taking a similar direction in their study, Watson and colleagues (2002) explored the notion of agency and signaling theories as bearing an explanation of voluntary disclosure of ratios in firms’ annual reports. The authors adopted and tested hypotheses using data gathered over five years for 313 firms in the UK. In particular, their study considered associations between ratio disclosure and firm profitability, firm efficiency, liquidity, return on investment, firm size, and industry type. The results revealed some evidence of a relationship between ratio disclosure on one hand, and firm’s performance, firm size, and industry type, on the other. The unique contribution of this study is its emphasis on the evidence of voluntary disclosure of ratios in firms’ annual reports.

In their discussion, Gray and Roberts (1997) furthered that financial reporting involves direct and indirect costs, and that explaining the choices of foreign listing involves the significant role of disclosure. Regulatory differences are narrowed through disincentives to list in some EU countries. A study involving 459 multinational corporations with foreign listings revealed that companies tend to list more on foreign stock exchanges with lower levels of financial disclosure relative to their own. Incidentally, experts have evaluated the level of disclosure in the UK as being ranked behind the US and Canada, but ahead of France, Japan, and Germany (Gray and Roberts, 1997). As far as disclosure regulations are concerned, the requirements of LSE are relatively modest for foreign companies. All EU members adopt mutual recognition, whereby companies that comply with their local requirements are also considered to comply with the requirements imposed by LSE (Gray and Roberts, 1997).

2.3 Corporate Governance in Capital Market

Haque and colleagues (2008) designed a conceptual framework for the relationship between corporate governance on one hand, and a firm’s financial performance and access to finance – both of which determine the development of capital markets – on the other. The framework assumes that the corporate governance of a firm is concurrently determined by a set of associated governance components and other firm characteristics. The authors noted that whilst a crucial role is played by the capital markets in improving the standards of corporate governance, poor firm-level corporate governance might hamper the effectiveness of this pursuit. In addition, the firm’s ability to foster financial performance and obtain access to finance can be enhanced by the quality of firm-level corporate governance, thereby leading to the development of the capital market. The basis of the framework is the use of economic approaches to corporate governance, as well as some aspects of the assumptions of stakeholder theory. The authors furthered that the capital market of a developing economy involves a kind of governance role that is constrained by poor protection of minority shareholders and weak incentives to improve disclosure, to name two. Disclosure and auditing are in fact embodied in firm-level corporate governance, along with ownership and control structure, diversity of the board and management, and responsibility towards stakeholders (Haque et al., 2008). This has been supported by Aguilera and Jackson (2003), who cited the United States as an example in terms of its facilitation of a market-oriented control mechanism, in which relatively high disclosure requirements enable liberal property rights to protect minority shareholders.
Accordingly, weak information disclosure requirements and inadequate protection for minority shareholders limit the manner in which collective action challenges in corporate control could be addressed (Aguilera and Jackson, 2003).

The method employed in Haque and colleagues’ (2008) article is a survey of literature, focusing on a discussion of institutional issues of corporate governance, transparency and accountability, access to finance, firm financing, and financial performance. The theoretical approach includes the use of stakeholder theory in analysing the relationship between corporate governance and the development of capital markets. Its unique contribution is the development of a framework that is based on an assumed determination of a firm’s corporate governance by associated governance components and other firm characteristics. In the light of these, the strength of the article is its thorough discussion of corporate governance and its link to capital market development, as well as a clear incorporation of theories into the analysis context. No notable weakness is perceived in the paper.

2.4 Accounting Disclosure and Market Capitalisation

In his study about the effect of the intellectual capital disclosure of Fortune 500 companies on market capitalisation, Abdolmohammadi (2005) pointed to the significant differences between the new and old economy industries in relation to partnerships and brands’ categories of intellectual capital in their adoption of disclosure. The author showed in the results that market capitalisation is significantly impacted by intellectual capital disclosure, thereby affecting how disclosure standards in annual reports are established. The unique contribution of this study is its emphasis on the association between market capitalisation and disclosure, which then assists the research topic on addressing the research questions.

According to Grüning (2011), reducing the cost of equity is a means to an end, that end being the maximisation of market value, which is said to be the ultimate objective of corporate disclosure. Linking market capitalisation and accounting information could be undertaken to analyse this effect. The study was designed to examine the relationship between disclosure in annual reports and market liquidity. Results demonstrated that disclosure in annual reports tends to boost market liquidity through changes in investors’ expectations. The uniqueness of the study is its provision of evidence for disclosure’s ‘reduction effect’ on the cost of capital. Its strength is seen in its inclusion of return requirements of investors, direct measures for capital cost, and market value in hypothesis development, and the likewise adoption of regression models to test the hypotheses. No weakness was identified in the research.

2.5 Financial Globalisation and Accounting Disclosure in Capital Market

With financial globalisation removing the barriers to international investment and improving technology, the cost advantage a firm enjoys from having its securities trade publicly in its country of location will increasingly disappear. Securities laws continue to function as a strong determinant of issuance of securities, as well as of decisions on whether they should be issued, how these securities are valued, and where they are traded. Stulz (2009) showed a demand for mechanisms that enable firms to commit to reliable disclosure, since disclosure helps in reducing agency costs. The presence of financial globalisation allows national disclosure laws to have extensive effects on the welfare of a specific country, its firms, and investor portfolios. The author mentioned that much of the mandatory disclosure literature has been focused on evaluating whether firms have undertaken sub-optimal disclosure due to the lower benefits from disclosure at the firm level, compared to the level of society in general. The article did not cite a specific theoretical framework in its discussion. The findings revealed that with financial globalisation, geographical location is no longer an important factor in the trading of securities on organised capital markets. Further, the diffusion of information has likewise been made sharply easier by technological progress. This is parallel to the discussion of Webb and colleagues (2008), in which they pointed to the relevance of financial globalisation as an avenue through which firms from weak legal environments (civil law countries) can benefit from good disclosure as a result of such globalisation. The unique contribution of Stulz’s (2009) study is its focus on financial globalisation, which is likewise considered herein as its strength, along with the article’s discussion of securities laws that allow firms to commit to disclosure policies that maximise shareholder wealth (Stulz, 2009).
One weakness cited for the article is the lack of theoretical framework or lack of theories that could serve as its theoretical underpinning.

Similarly, the work of Gray and colleagues (1995) emphasised the impact of pressures of international capital markets on corporate voluntary disclosures in the annual reports of multinational enterprises in the US and UK. In particular, it aimed to provide evidence on whether disclosure of more harmonised information is demonstrated by internationally listed US and UK multinational enterprises than by those which are listed only on their corresponding domestic stock markets. The authors cited a range of reasons why firms may undertake voluntary information disclosure, such as the provision of information beyond what the regulation requires. Notwithstanding, in the framework of capital market pressures, the major impetus tends to be an intention to reduce the firm’s cost of capital. Through reduction of information risk, the firm may enable investors to accept a lower rate of return, which in turn reduces the cost of capital for the firm. The unique contribution of Gray and colleagues’ (1995) article is its focus on the growing globalisation of financial markets and the current speedy increase in the number of firms acquiring listings on foreign stock exchanges.

Some companies undertake disclosure of their financial statements for parent companies and for major subsidiaries separately. There are those that adopt two or three different sets of accounting standards to produce different sets of published financial reports in a simultaneous manner. In the international sphere, disclosure is pursued in cash flows, current cost, value added, evidence of corporate social responsibility, national and international accounting standards being brought together, and the like. Related studies have tackled such questions as disclosure patterns in individual countries related to some baseline requirements, i.e. those listed under LSE (Flesher, 2010). LSE serves as the first stock market in Europe in terms of requiring disclosure of individual executive remuneration (Krivogorsky and Dick, 2011). Assessments are pursued on what active financial analysts would like to have, as opposed to what is available to them. In this chapter, Healy and Palepu (2001) reiterated that new information is generated through financial analysts’ earnings forecasts and stock recommendations. The biggest recent advance is the creation and increasing accessibility of the growing number of databases for financial disclosure (Flesher, 2010), which could be largely attributed to the disappearance of barriers to international investment and the likewise improvement in technology, as pointed out by Stulz (2009).

2.6 Summary

This chapter has discussed the concept of accounting disclosure in capital market; corporate governance; market capitalisation; and globalisation and accounting disclosure in this market. Accounting information and capital markets have a distinct relationship in that corporate disclosure is important in ensuring efficient functioning of capital market. Firms provide disclosure through regulated financial reports (e.g. Healy and Palepu, 2001).

In addition, firms’ disclosure strategies influence the speed with which information gets into prices. Despite the significant importance appropriated by the financial reporting community on the impact of disclosure level, there remains a lack of established tenets in the relationship between disclosure level and cost of equity capital (e.g. Botosan, 1997). Fundamental analysis and market efficiency tests are the major sources of demand for accounting research. It has been noted that firms that promote informative accounting disclosures are likely to exhibit leverage measures (e.g. Iatridis, 2006). Furthermore, financial globalisation caused the disappearance of the barriers to international investment – along with improvement in technology – and it likewise enabled the speedy increase in the number of firms obtaining listings on foreign stock exchanges (e.g. Gray et al., 1995; Stulz, 2009).

Chapter 3: Research Methodology

3.1 Introduction

This chapter provides a systematic approach aimed at answering the research question identified in Chapter 1. This approach involves the adoption of a specific research design (quantitative), data collection techniques (primary and secondary data), and an analysis technique, as well as the identification of ethical considerations governing the overall pursuit of this research.

3.2 Research Design

The research designs commonly used in research are qualitative and quantitative, or a combination of both. Qualitative design is that which is pursued not to quantify or measure variables but to focus on the language and meaning of people’s own construction of their experiences and beliefs (Lapan et al., 2012). It is characterised by multiple methods, as well as a naturalistic or interpretive approach. Its emphasis is the daily life of a certain person or event. As this research design is in effect naturalistic and interpretive, the qualitative researcher examines a phenomenon in its natural setting, and likewise tries to decipher such phenomenon in relation to the meanings attached to it by people (Wigren, 2007). Statistical methods have no room in qualitative research since the data involved are not numerical in form and hence cannot be quantified or measured. Nor does qualitative research intend to test theory or hypotheses, as that is a feature of quantitative research (Flick, 2008; Bryman, 2013). In addition, the qualitative research method is aimed at describing certain aspects of a phenomenon (i.e. accounting disclosure in capital market), with a perspective of explaining the subject of study (Trochim et al., 2010).

Quantitative research, on the other hand, adopts statistical techniques in explaining a certain phenomenon, and likewise presents results statistically (Olagbemi, 2011). It tends to determine whether variance in x (market capitalisation) causes a corresponding variance in y (money raised), and to what extent this takes place. In the accounting field, quantitative studies exclude the potential of investigating in-depth issues that usually ask the ‘why’ and ‘how’ of accounting practices. Nevertheless, the quantitative research tradition is adopted in this domain, enabling the explanation of phenomenon variance through systematic comparisons, connoting the central role of analysis of causal relationships between variables (Green, 2005; Hoque, 2006); that is, market capitalisation and money raised. Since this study aims to quantify the relationship between market capitalisation and raising capital amongst selected LSE-listed companies, the quantitative research design is adopted.

3.3. Data Collection Technique

The common data collection techniques utilised in research are primary and secondary. Primary techniques are those that involve the collection of original or first-hand data for a specific purpose, such as interviews, questionnaires, focus groups, etc. (Mukul and Deepa, 2011). Secondary techniques, on the other hand, are those that involve non-original or second-hand data gathered by another for a specific purpose, which are again used by the researcher for his own study (Beri, 2008). Examples of secondary data are those from peer-reviewed journals, books, corporate reports, and online resources, to name a few.

This study uses primary data in the form of raw information on selected LSE-listed companies, generated through the London Stock Exchange website. The companies are from various sectors and are trading in the UK capital market through LSE. In particular, 50 companies from an original list of 350 are involved in the study.

Secondary data, on the other hand, are applied through their reference to the value of accounting disclosure in the UK capital market, which is drawn from the extant literature. A desk-based approach is adopted as the study pursues collection of primary and secondary data from available sources.

3.4 Sampling Technique

Sampling is the process that allows the researcher to select the subjects or research participants in the study. Selecting a sample necessitates defining the group of cases, also known as the population that meets specific criteria, to which the study’s results are intended to form a generalisation (Dutta, 2013).

The non-probability sampling technique is used to select which companies are to be included in the sample from a range of data in the data set. No specific criterion was adopted for the selection. According to Dutta (2013), judgment sampling is applied when utilising the non-probability sampling technique, whereby the researcher has direct or indirect control over such selection.
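The selection step described above can be sketched as follows. The company names are hypothetical placeholders, and taking the first 50 entries of the dataset merely stands in for the researcher’s judgment in this non-probability approach, not the study’s actual procedure:

```python
# Sketch of judgment (non-probability) sampling: from a hypothetical
# population of 350 LSE-listed companies, 50 are taken into the sample.
# The researcher controls the selection directly, so no randomisation
# is applied; here the "judgment" is simply the first 50 dataset entries.
population = [f"Company_{i:03d}" for i in range(1, 351)]  # 350 placeholder firms

sample = population[:50]  # researcher-controlled, non-random selection

print(len(sample), sample[0], sample[-1])
```

A probability design would instead draw the 50 firms at random, which would allow the results to generalise to the full population; with judgment sampling that generalisation is weaker, as Dutta (2013) implies.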

3.5 Analysis Technique

Regression analysis is used to analyse the relationship between market capitalisation and money raised amongst selected LSE-listed companies. It is important to note that these companies are assumed to undertake accounting disclosure in their operations, as required by the UK accounting policies for corporations and incorporations. Finding out the relationship between market capitalisation and money raised would suggest whether a correlation exists between these two variables. The independent variable (x) is market capitalisation, whilst the dependent variable (y) is money raised.

Further, the analysis of findings involves the use of linear regression to find out the relationship between market capitalisation and money raised, and is likewise largely based on the extant literature, through which research inferences and enquiries are to be conducted. Themes are grouped according to their manner of discussion, citing their relevance in relation to the theories being adopted – agency theory, positive accounting theory, and game theory. The idea is to examine the extant literature on the value of accounting disclosure in the UK capital market and to analyse it using various works and theories as evidence.
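The linear regression described above – money raised (y) regressed on market capitalisation (x) – can be sketched with a minimal ordinary least squares fit. The figures below are hypothetical, not the study’s dataset:

```python
# Sketch of simple OLS regression: money raised (y, £m) on
# market capitalisation (x, £m), with hypothetical figures.
def ols(x, y):
    """Return (slope, intercept, r_squared) of the least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    # R-squared: share of the variance in y explained by x
    ss_tot = sum((b - my) ** 2 for b in y)
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2

market_cap = [120, 340, 560, 800, 1500, 2100]   # hypothetical £m
money_raised = [40, 55, 80, 95, 170, 230]       # hypothetical £m

slope, intercept, r2 = ols(market_cap, money_raised)
print(f"money raised = {intercept:.1f} + {slope:.3f} * market cap, R^2 = {r2:.1%}")
```

A positive slope would indicate that firms with larger market capitalisation tend to raise more money, which is the hypothesis the study’s regression is designed to test.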

The 50 LSE-listed companies are assumed to adopt accounting disclosure, as required by FASB, GAAP, GAAS, and by LSE itself. Whether or not they adopt accounting disclosure vis-a-vis their market capitalisation and money raised is not a point of discussion, however, considering that the dataset does not indicate whether the involved companies indeed observed accounting disclosure. Therefore, it is assumed that these publicly listed companies employ accounting disclosure as this study examines the relationship between their market capitalisation and money raised.

3.6 Ethical Considerations

Ethical considerations are important aspects in any research endeavour to ensure the integrity of research. In this study, the first of these considerations is the adoption of a specific referencing style to cite the secondary data sources, identified in this research as the Harvard referencing style. Furthermore, it is important to acknowledge the sources of data being used in the study since they are not owned by the researcher but are only borrowed from another. Therefore, using certain ideas in the research without citing them in the references and within the text would constitute plagiarism, as the researcher would be presenting the ideas of another as his/her own, which they are in fact not (Johnson and Christensen, 2014). The value of paraphrasing, or using one’s own words in citing secondary sources, is also noted here as an aspect of ethical considerations.

3.7 Summary

This chapter covered a discussion of specific research design to be employed (quantitative), a data collection technique (primary and secondary), analysis technique (regression and survey of the literature), and ethical considerations. The suitability of the quantitative research design for this study is found in its adoption of numerical data and the presence of quantification or measurement of variables. The variables being examined are market capitalisation and money raised of selected companies listed in the London Stock Exchange.

Non-probability sampling is used to select the companies from a range of companies in the LSE dataset. In determining the relationship between money raised and market capitalisation, it is assumed that they likewise undertake accounting disclosure.

Chapter 4: Analysis

4.1 Introduction

This chapter is designed to provide analysis of the research question and address the aims and objectives identified in the first chapter. Specifically, it outlines the value of accounting disclosure in UK capital market, focusing on London Stock Exchange, and discusses theories – agency theory and signalling theory – as theoretical underpinnings of the research.

4.2 The Value of Accounting Disclosure in the UK Capital Market

Data from the dataset of London Stock Exchange (2014) are used to find out if there was a significant relationship between market capitalisation and money raised amongst involved corporations listed in this financial market. The total number of companies in the dataset is limited only to 50, from a list of 350. It must be noted that all of these companies are assumed to comply with the needed accounting disclosures of FASB, GAAP, and GAAS, which is the rule for listed companies in LSE (Benston et al., 2006; Krivogorsky and Dick, 2011). Appendix-1 provides the list of the 50 companies for which an analysis of a relationship between market capitalisation and money raised is performed using a linear regression.

With the linear regression, the independent variable (market capitalisation) and the dependent variable (money raised) are analysed. The purpose is to determine any relationship between these variables for companies that are assumed to observe accounting disclosures, as they are listed in the London Stock Exchange, where such disclosure is highly required, compared to unincorporated organisations (Benston et al., 2006; Haque et al., 2008).

In Figure 1, the P-value for the slope coefficient – 0.00 – is significant, which means that there is a significant relationship between market capitalisation and money raised. Figure 1 also suggests that, with the slope coefficient of 0.09, every £1m increase in market capitalisation corresponds to an increase in money raised of £0.09m. The positive coefficient (slope = 0.09) indicates a positive relationship between the two variables (market capitalisation and money raised). This is congruent with the literature, in which it was posited that financial globalisation, which is promoted in capital markets, tends to maximise shareholder wealth. Stulz (2009) and Haque et al. (2008) likewise stressed access to finance, firm financing, and financial performance as outcomes of corporate disclosure and governance. The regression graph in Figure 2, however, suggests non-typical fluctuations around the regression line, given the standard error of 82.30 shown in Figure 3.

Coefficients | St Error | t Stat | P-value | Lower 95% | Upper 95% | Lower 90% | Upper 90%
Market Cap (£m) | [coefficient values not reproduced in the source]

Figure 1: Regression details

When graphed, the regression is shown as:

Figure 2: Graph of the linear regression

Given the above graph, the regression equation being derived is:

Money raised = 27.6 + 0.09x, R² = 18.5%

The regression statistics in Figure 3 show that 18.5 per cent of the variability in money raised is explained by market capitalisation. The standard error describes the typical deviation between the actual values of money raised and the values predicted by the regression.
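Using the coefficients reported in the regression equation above (intercept 27.6, slope 0.09), a prediction can be sketched for a hypothetical company; the £500m market capitalisation below is an illustrative figure, not one drawn from the dataset:

```python
# Sketch: predicting money raised (£m) from the reported regression
# coefficients, for a hypothetical firm with a £500m market capitalisation.
def predict_money_raised(market_cap_m: float) -> float:
    """Money raised (£m) predicted from market capitalisation (£m)."""
    return 27.6 + 0.09 * market_cap_m

print(round(predict_money_raised(500.0), 2))  # 27.6 + 0.09 * 500 = 72.6 (£m)
```

Given the R² of 18.5 per cent and the standard error of 82.30, any such point prediction carries wide uncertainty; the equation describes a tendency, not a reliable firm-level forecast.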

Regression Statistics
Multiple R: 0.430
R Square: 0.185
Adjusted R Square: 0.168
Standard Error: 82.301

Figure 3: Regression statistics
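As a consistency check on Figure 3, assuming n = 50 companies and a single predictor, the square of Multiple R should reproduce R Square, and the standard adjusted-R² formula should reproduce the reported 0.168:

```python
# Consistency check on Figure 3's regression statistics,
# assuming n = 50 observations and k = 1 predictor.
n, k = 50, 1
multiple_r = 0.430
r2 = 0.185

# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(round(multiple_r ** 2, 3))  # 0.185 -- matches R Square
print(round(adj_r2, 3))           # 0.168 -- matches Adjusted R Square
```

That both derived values match the table supports the reading that the regression was fitted on the full sample of 50 companies with market capitalisation as the sole predictor.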

As mentioned, the P-value of 0.00 for the slope coefficient indicates a significant relationship between market capitalisation and money raised. What can be inferred here is that the selected LSE-listed companies have positive market capitalisation and money raised, which is consistent with the claim of Kamaruddin et al. (2004) in relation to raising capital as an outcome of trading publicly (which in turn means adopting corporate disclosure). The literature also abounds with information on raising equity finance in relation to voluntary disclosure (Kothari, 2001; Iatridis, 2006), in which it was asserted that firms tend to pursue disclosure of accounting information in their attempt to ensure consistency of their accounting policies with the accounting regulation. In the same manner, the results suggest that the firms involved tended to show a general pattern of rising market capitalisation and money raised, given the assumption of their observance of accounting disclosure. It is interesting to note that how disclosure standards in annual reports are set is influenced by market capitalisation, as forwarded by Abdolmohammadi (2005).

Further, as has been previously acknowledged, historical perspectives are present in the accounting literature (e.g. Kothari, 2001), which this analysis takes on in identifying the value of accounting disclosure in capital markets. The Companies Acts and other legislation in the UK have for many years required companies to adopt financial disclosure on at least a minimum level. The Acts also require audits and spell out that an auditor, aside from being a member of a recognised body of accountants, must be independent. The UK Generally Accepted Auditing Standards (GAAS), established by an independent body outside the profession, lay down audit guidelines, which, along with Generally Accepted Accounting Principles (GAAP), are used as the basis for audit and disclosure requirements imposed in the London Stock Exchange (LSE) (Benston et al., 2006). The filing of the annual financial reports of publicly owned companies in the UK is done with LSE, the agency tasked to make and enforce the securities trading rules and the financial reporting requirements of these companies (Tracy and Barrow, 2012). This is the reason why this study assumes that the 50 selected LSE-listed companies adopt accounting disclosure, to which the relationship of their market capitalisation and money raised is being analysed. Murray and colleagues (2006) noted in the literature review that the implementation of international financial reporting standards fosters consistency and reliability in financial reporting due to the enhanced quality of the comparability of financial statements. With the relationship between market capitalisation and money raised in selected LSE-listed firms in the study, Grüning’s (2011) assertion would be interesting to consider – that linking market capitalisation and accounting information could be carried out to analyse corporate disclosure.

The value of accounting disclosure in the UK capital market is seen in the fact that inadequate disclosure in an annual report is tantamount to utilising erroneous accounting methods for profit measurement, identification of owners’ equity, and values for assets and liabilities. Improper accounting methods or misleading accounting disclosure will lead to a misleading financial report. Unpleasant lawsuits against the business and its managers are the possible outcome of these deficiencies (Tracy and Barrow, 2012). In the survey of literature, Haque and colleagues (2008) also noted that poor firm-level corporate governance might hamper the effectiveness of capital markets in improving the standards of corporate governance, parallel to what Tracy and Barrow (2012) discussed. Haque and colleagues further mentioned the importance of transparency and accountability in corporate governance, which could be used to analyse the value of disclosure. Similarly, Kothari (2001) posited that evidence from research on market efficiency tests in relation to accounting information, fundamental analysis, and the value relevance of financial reporting appears to offer helpful stances in decisions for capital market investment and corporate financial disclosure. The literature also cited the impact of pressures from international capital markets on corporate voluntary disclosures in the annual reports of UK multinational enterprises (Gray et al., 1995; Tracy and Barrow, 2012).

Further, disclosure is crucial in the efficient functioning of a capital market (i.e. LSE) because of the fundamental roles it plays in decreasing the information asymmetry between management and shareholders, between investors and management, and between various kinds of investors. In the literature review, the same was also asserted by Healy and Palepu (2001), who observed that corporate disclosure is an important element in the functioning of an efficient capital market. There is a link between the issue of voluntary disclosure on one hand, and information asymmetry theory, on the other (Neri, 2011). Information asymmetry refers to a condition in which some parties hold relevant information that others involved do not. The market becomes inefficient because market participants lack access to the information needed for their decision making. However, this is not the case when disclosure is in place. Thus, not only does better disclosure regulate information asymmetry, but it also mitigates inefficiencies within the market (Neri, 2011). Healy and Palepu (2001) also made the same claim as they posited that demand for financial disclosure and reporting emerges out of information asymmetry and agency conflicts between the management and outside investors.

4.3 Financial Globalisation of Capital Markets

The accounting directives of the European Union (EU) serve as the major recent international influence on UK accounting, as the former’s attempt to harmonise accounting rules across member countries. An alternative strategy was employed by the EU upon the Commission’s proposal that, beginning in 2005, the preparation of consolidated accounts by all listed companies must be on the basis of International Financial Reporting Standards (IFRS). In addition, IFRS could be used by companies for their individual accounts, for which they must seek the permission of their national government. The UK has taken an affirmative response on this since 2005 (Benston et al., 2006). This new rule by the EU did not draw much debate in the UK. The Accounting Standards Board (ASB) has been trying to converge with IFRS, as considerable differences are found between ASB standards and those of the International Accounting Standards Board (IASB). At the very least, the role of the ASB is to supervise the implementation of IFRS in the country, address any potential problems as they arise, and set standards for individual entity reports that continue to adhere to the UK accounting standards (Benston et al., 2006). It may be noted that the presence of financial globalisation has enabled national disclosure laws to have extensive effects on the welfare of a specific country, its firms, and investor portfolios (Stulz, 2009). Moreover, the size of the capital market in the UK appears to be an important factor that attracts foreign companies to LSE, considering its dominant position in Europe.

In the dataset, the reality of financial globalisation is exemplified by the inclusion in the LSE of companies incorporated outside the UK, in Gibraltar, the Cayman Islands, the Isle of Man, the British Virgin Islands, Israel, and Canada, indicating that the UK capital market is not limited to UK companies alone (see Appendix-1). This is what Stulz (2009) claimed in the literature review: financial globalisation has led to the disappearance of the barriers to international investment and the increasing accessibility of a growing number of databases for financial disclosure (Flesher, 2010), in which information technology has played a great role (Stulz, 2009).

Strong considerations are also placed on the market motive, in that a London listing confers a high level of visibility, both in the UK and in the rest of the EU (Gray and Roberts, 1997). In relation to this, a noteworthy point is made by Stulz (2009), who stated that financial globalisation, together with improvements in technology, initiated the disappearance of barriers to international investment and, with them, of the cost advantage a firm once enjoyed by having its securities trade publicly in its country of location. Moreover, with financial globalisation, geographical location is no longer an important factor in the trading of securities on organised capital markets, as the London Stock Exchange exemplifies (Stulz, 2009).

4.5 Theories in Accounting Disclosure

4.5.1 Agency Theory

Agency theory provides the theoretical underpinning for the problem of information asymmetry between management and shareholders as well as between management and investors (Neri, 2011; Desoky, 2009). The theory is used to describe these relationships, which may explain firms' decisions to disclose voluntary information in their annual reports. As an aspect of the agency relationship between them, decision-making authority is delegated to the firm's agents by its principals. This delegation leads to agency problems because the agent has incentives to focus on increasing his own wealth rather than that of the principal. In this regard, the market capitalisation and money raised in the selected companies might indicate either of the two, but a thorough investigation would be required to establish this. In addition, the firm's separation of ownership and control leads to information asymmetry, in that the agent is assumed to have superior access to information and to utilise such information towards maximising his own interest at the expense of the principal. This scenario is regarded as the moral hazard problem, which involves not only fraud but also actions involving trade-offs between risk and reward (Neri, 2011). It could be inferred that what makes this possible for agents is the notion that firms' profitability is not adversely affected by disclosure of sensitive accounting information, as claimed by Murray and colleagues (2006) in the literature review. In general, agency cost is a function of the potential conflicts amongst the firm's managers, shareholders, and creditors. As these conflicts (and with them agency costs) increase, a corresponding increase in the demand for monitoring takes place as well. Credible disclosure, including accounting information, is part of the monitoring demanded of the firm because shareholders and creditors lack direct access to its performance information.
Agency theory describes the choices that agents have concerning policies and information disclosure, which must be included in the annual reports. Through an independent outside auditor, principals possess the right to verify the accuracy of the annual reports prepared by the agents (Neri, 2011). In the same manner, Healy and Palepu (2001) inferred that regulators, auditors, and other intermediaries of capital market enhance the credibility of management disclosures. The information entered into the annual reports is regarded as an important tool to reduce information asymmetry and agency costs. Agency theory assumes that the costs of the agency relationship (including the monitoring costs involved to monitor managers and the result of their self-serving behaviour) permanently depress the value of the firm (Neri, 2011). In the literature review, Watson and colleagues (2002) also used agency theory in their study in an attempt to explain voluntary disclosure of ratios in the annual reports of firms where they revealed some evidence of a relationship between ratio disclosure and firm’s performance, size, and industry type.

Thus, agency theory explains the extent of a firm's disclosure: it postulates that the greater the agent's independence, the less likely the agent is to disclose information to the principal, while the greater the agent's dependence on the principal for financing, the more information is disclosed to the principal (Kamaruddin et al., 2004). Disclosure in capital markets thereby increases the fairness of these markets and makes price manipulation more difficult.

4.5.2 Signalling Theory

The signalling theory explains that financial reporting originates from the desire of management to disclose its superior performance, in which good performance leads to enhancement of the reputation and position of the management; good reporting is regarded as one aspect of good performance (Kamaruddin et al., 2004). Information asymmetry arises because management is perceived to be better informed than outside investors. Voluntary disclosure is a way to provide more information and thereby reduce the level of information asymmetry, which is one of the apparent incentives that motivate managers towards disclosure. The firm intends to benefit from favourable voluntary disclosure, since such disclosure will reduce information asymmetry and will in turn reduce its cost of capital (Kamaruddin et al., 2004). This is exemplified in Iatridis' (2006) assertion that firms are likely to pursue disclosure of accounting information in their attempt to assure market participants of the consistency of their accounting policies with the accounting regulation and thereby address their stakeholders' information needs. In a similar fashion, Gray and colleagues (1995) inferred that firms choose to disclose information in the context of capital market pressures because of an intention to reduce their cost of capital, which is also supportive of the signalling theory. Moreover, Walker and Vasconcellos (1997) asserted that information asymmetry is an imperfection of a capital market, affecting both debt and equity. An argument comparable to the one about the information asymmetry on which equity pricing depends can also be made for the market's valuation of the firm's debt.

The above discussions provide evidence of the incentives available to firms as they decide to pursue the voluntary disclosure policy. Apart from enhancing the public image of the firm, the policy could also allow the firm to enhance its ability to raise capital.

4.6 Summary

The value of accounting disclosure in the UK capital market is seen in the fact that inadequate disclosure in an annual report has corresponding legal implications, as it is equivalent to the use of wrong accounting methods in the measurement of profit and the determination of values for assets, liabilities, and owner's equity. In addition, misleading accounting disclosure results in a likewise misleading financial report. Improper disclosure also suggests poor corporate governance, which might obstruct the effectiveness of the UK capital market in improving the standards of its corporate governance. Apart from the fact that better disclosure regulates information asymmetry, it also promotes the mitigation of inefficiencies within the market. The regression analysis indicates a significant relationship between market capitalisation and money raised amongst selected companies listed on the London Stock Exchange. It is suggested that these companies increase in market capitalisation as they likewise increase in money raised.

Agency theory infers that the costs of the agency relationship (i.e. the monitoring costs to monitor managers and the result of their opportunistic behaviour) permanently depress the value of the firm. The information entered into the annual reports allows reduction of information asymmetry and agency costs. In signalling theory, the value of accounting disclosure in UK capital market is demonstrated in an attempt to assure market participants of the consistency of their accounting policies with the accounting regulation, and hence respond to information needs of their stakeholders. Additionally, firms pursue disclosure of information because of the intention to reduce their cost of capital.

Chapter 5: Conclusion and Recommendations

5.1 Conclusion

This study has been centred on determining the value of accounting disclosure in the UK capital market, citing the relationship between market capitalisation and money raised of companies trading in this market. The quantitative research tradition is adopted as the specific research design, using linear regression as the statistical process in analysing the data. This research direction allows for identifying whether market capitalisation leads to increased money raised amongst 50 selected LSE-listed firms. The adoption of the quantitative research design has indicated the central role of the analysis of causal relationships between market capitalisation and money raised. It is assumed that the selected companies adopt accounting disclosure, which is required of them by IFRS, GAAP, GAAS, and even by LSE itself. The dataset is taken from the LSE website, from which only 50 companies are selected through the non-probability random sampling technique.

The regression analysis demonstrates a P-value of 0.00 for the slope coefficient, which is considered significant and thus connotes a significant relationship between market capitalisation and money raised. The regression result also reveals that market capitalisation increases by £90m with every increase in money raised. The positive coefficient likewise suggests a positive relationship between the two variables. The result coincides with the assertion in the literature that access to finance, firm financing, and financial performance are outcomes of corporate disclosure. This is also indicative of raising capital as an outcome of trading publicly, which in turn implies adoption of corporate disclosure. It has been noted that whether the selected firms indeed adopt accounting disclosure or not is no longer considered in the study, as this information is not stated in the data source. Thus, such adoption is assumed as part of the requirement for LSE-listed companies.
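The slope reported above comes from a simple linear regression of market capitalisation on money raised. As an illustrative sketch only, the ordinary-least-squares fit below uses eight of the Appendix-1 companies (figures in £m), not the study's full 50-company dataset, so its coefficients will naturally differ from those reported in the analysis.

```python
def ols_fit(x, y):
    """Return (slope, intercept) of the simple linear regression y = a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Money raised (x) and market capitalisation (y), in £m, for eight of the
# sampled companies taken from Appendix-1.
money_raised = [0.65, 30.00, 93.33, 35.00, 95.00, 141.29, 315.00, 467.10]
market_cap = [10.39, 32.30, 190.15, 76.68, 96.90, 603.78, 854.30, 1061.62]

slope, intercept = ols_fit(money_raised, market_cap)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```

Even on this small subset, the fitted slope is positive, consistent with the positive relationship between the two variables reported in the regression result.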

The selected companies show a tendency towards an accelerating pattern of market capitalisation and money raised, as shown by the linear regression result, on the assumption that accounting disclosure is adopted.

As mentioned, the necessity of accounting disclosure for the capital market has been established earlier by IFRS, GAAS, and GAAP, which serve as the basis for the audit and disclosure requirements imposed by the LSE. Moreover, the enhanced comparability of financial statements allows consistency and reliability in financial reporting because of the presence of international financial reporting standards.

The study points to advantageous and beneficial outcomes for the firm as a value of accounting disclosure in the UK capital market, as inadequate disclosure is tantamount to the espousal of erroneous accounting methods for profit measurement and for the determination of owner's equity and of values for assets and liabilities. Another value of accounting disclosure in the capital market is the unobstructed effectiveness of such a market in enhancing the standards of its corporate governance. Other valuable impacts are the regulation of information asymmetry and the mitigation of inefficiencies within this market.

Furthermore, the financial globalisation of capital markets shows the emergence of the harmonisation of accounting rules across countries within the EU. The existence of financial globalisation has led national disclosure laws to have extensive impacts on the well-being of specific countries, firms, and investor portfolios. Foreign companies are attracted to trade on the LSE because of the size of this capital market. It has been noted that the inclusion of firms from foreign countries trading on the LSE exemplifies the occurrence of financial globalisation in the capital market, whose operation is not limited to the UK alone. This fact is congruent with the notion that financial globalisation removes certain barriers that otherwise prevent international investment and the accessibility of foreign capital from reaching the UK capital market.

Agency theory is cited to provide theoretical underpinning to the study. It is inferred that firms' agents are delegated decision-making authority, which in turn leads to agency problems because of the agents' incentives to emphasise increasing their own wealth rather than that of the principal. This results in a corresponding increase in disclosure monitoring. On the other hand, the signalling theory presents the advantages for a firm in relation to financial reporting (including accounting disclosure), in that financial reporting enables enhancement of the reputation and position of the management. As emphasised earlier, reduction of the cost of capital and of information asymmetry are two of the reasons for the value of accounting disclosure, which the signalling theory likewise stresses. Moreover, the value of accounting disclosure is seen in firms' desire to assure market participants of the consistency of their accounting policies, and thus to address their information needs and mitigate inefficiencies within the market. As information asymmetry serves as an imperfection of a capital market, the value of accounting disclosure is found in the apparent pursuit to address this imperfection within this market.

The implications for research include the furtherance of the significance of the efficiency and adequacy of accounting disclosure as a requirement in the capital market, and the contribution of the study to the large body of literature that deals with the value of accounting disclosure.

5.2 Recommendations

Given the conclusion, the recommendations being presented are as follows:

(1) Pursue a study on the impact of financial globalisation on financial reporting.

This study will provide further evidence on the extent of the impact of financial globalisation on financial reporting. It will further shed light on the role of financial globalisation in accounting disclosure, as embodied in the current study, given that the concept serves as an important factor in the disappearance of borders in accessing the capital market.

(2) Undertake a study on the role of the audit committee in corporate governance.

This study aims to explain the role of the audit committee in the enhancement of corporate governance using agency theory. The proposed study will provide further knowledge and evidence on agency problems in corporate governance, using a theoretical direction.


References

Abdolmohammadi, M. J. (2005) Intellectual capital disclosure and market capitalization. Journal of Intellectual Capital, 6 (3), 397-416.

Aguilera, R. V. and Jackson, G. (2003) The cross-national diversity of corporate governance dimensions and determinants. Academy of Management Review, 28 (3), 447-465.

Benston, G. J., Broomwich, M., Litan, R. E., and Wagenhofer, A. (2006) Worldwide Financial Reporting: The Development and Future of Accounting Standards. Oxford: Oxford University Press.

Beri, G. C. (2008) Marketing Research. New Delhi: Tata McGraw-Hill Education.

Botosan, C. A. (1997) Disclosure level and the cost of equity capital. The Accounting Review, 72 (3), 323-349.

Bryman, A. (2013) Doing Research in Organizations. NY: Routledge.

Churchill, G. A. and Iacobucci, D. (2010) Marketing Research: Methodological Foundations. Tenth Edition. Mason, OH: South-Western Cengage Learning.

Desoky A. M. (2009) Company characteristics as determinants of internet financial reporting in emerging markets: The case of Egypt. In M. Tsamenyi and Uddin, S. (Eds.) Accounting in Emerging Economies (pp. 31-72). IWA: Emerald Group Publishing Limited.

Dietl, H. M. (1998) Capital Markets and Corporate Governance in Japan, Germany, and the United States: Organizational Response to Market Inefficiencies. London: Routledge.

Dutta, S. K. (2013) Statistical Techniques for Forensic Accounting: Understanding the Theory and Application of Data Analysis. NJ: FT Press.

Dumontier, P. and Raffournier, B. (2002) Accounting and capital markets: A survey of the European evidence. European Accounting Review, 11 (1), 119-151.

Eberhartinger, E. and Lee, S. (2014) ‘Transparency of fair value accounting and tax.’ In J. Forssbaeck and L. Oxelheim (Eds.) The Oxford Handbook of Economic and Institutional Transparency (pp. 477-492) Oxford: Oxford University Press.

Flesher, D. (2010) Gerhard G. Mueller: Father of International Accounting Education. IWA: Emerald Group Publishing Limited.

Flick, U. (2007) Designing Qualitative Research. First Edition. London: Sage Publications Ltd.

Flood, J. M. (2014) Wiley GAAP: Interpretation and Application of Generally Accepted Accounting Principles. NJ: John Wiley & Sons, Ltd.

Gray, S. J., Meek, G. K., and Roberts, C. B. (1995) International capital market pressures and voluntary annual report disclosures by US and UK multinationals. Journal of International Financial Management and Accounting, 6 (1).

Gray, S. J. and Roberts, C. B. (1997) ‘Foreign company listings on the London Stock Exchange: Listing patterns and influential factors.’ In T. E. Cooke and C. W. Nobes (Eds.) The Development of Accounting in an International Context (pp. 193-209). NY: Routledge.

Green, J. (2005) Principles of Social Research. England: Open University Press.

Grüning, M. (2011) Capital market implications of corporate disclosure: German evidence. Verband der Hochschullehrer für Betriebswirtschaft e.V., 4 (1), 48-72.

Haque, F., Arun, T., and Kirkpatrick, C. (2008) Corporate governance and capital markets: A conceptual framework. Corporate Ownership & Control, 5 (2), 264-276.

Healy, P. M. and Palepu, K. G. (2001) Information asymmetry, corporate disclosure, and the capital markets: A review of the empirical disclosure literature. Journal of Accounting and Economics, 31 (1), 405-440.

Hoque, Z. (2006) Methodological Issues in Accounting Research: Theories, Methods and Issues. London: Spiramus Press Ltd.

House of Commons Treasury Committee (2007) Globalisation: Prospects and Policy Responses: 14th Report of Session 2006-2007. London: The Stationery Office.

Iatridis, G. (2006) Accounting disclosure and firms’ financial attributes: Evidence from the UK stock market. International Review of Financial Analysis, 17 (1), 219-241.

Johnson, R. B. and Christensen, L. (2014) Educational Research: Quantitative, Qualitative, and Mixed Approaches. London: Sage Publications.

Kamaruddin, K. A., Ibrahim, M. K., and Zain, M. M. (2004) Financial Reporting in Malaysia: Some Empirical Evidence. Kuala Lumpur: Utusan Publications & Distributors, Sdn Bhd.

Kieff, F. S. and Paredes, T. A. (2010) Perspectives on Corporate Governance. NY: Cambridge University Press.

Kothari, S. P. (2001) Capital markets research in accounting. Journal of Accounting and Economics, 31 (1), 105-231.

Krivogorsky, V. and Dick, W. (2011) ‘New corporate governance rules and practices.’ In V. Krivogorsky (Ed.) Law, Corporate Governance, and Accounting: European Perspectives (pp. 33-61). NY: Routledge.

Lapan, S. D., Quartarolli, M., and Riemer, F. J. (2012) Qualitative Research: An Introduction to Methods and Designs. NJ: John Wiley & Sons, Inc.

Leuz, C. and Wysocki, P. (2006) Research Study: Capital-Market Effects of Corporate Disclosures and Disclosure Regulation. Accessed on 3 December 2014 from

London Stock Exchange (2014) New and Further Issues. Accessed on 11 December 2014 from

McMenamin, J. (2005) Financial Management: An Introduction. NY: Routledge.

Moore, M. T. (2013) Corporate Governance in the Shadow of the State. Oxford: Hart Publishing Ltd.

Mukul, G. and Deepa, G. (2011) Research Methodology. New Delhi: PHI Learning Limited.

Murray, A., Sinclair, D., Power, D., and Gray, R. (2006) Do financial markets care about social and environmental disclosure? Further evidence and exploration from the UK. Accounting, Auditing & Accountability Journal, 19 (2), 228-255.

Neri, L. (2011) Risk Reporting: Development, Regulation and Current Practice: An Investigation on Italian Stock Market. Milano: Franco Angeli s.r.l.

Nikolai, L., Bazley, J., and Jones, J. (2010) Intermediate Accounting Update. Mason, OH: South-Western Cengage Learning.

[OECD] Organisation for Economic Co-Operation and Development (2001) Corporate Governance in Asia: A Comparative Perspective. Paris: OECD.

Olagbemi, F. O. (2011) The Effectiveness of Federal Regulations and Corporate Reputation in Mitigating Corporate Accounting. IN: Xlibris Corporation.

Oliver, P. (2012) Succeeding With Your Literature Review: A Handbook for Students. England: Open University Press.

Pohl, G., Jedrzejczak, G. T., and Anderson, R. E. (1995) Creating Capital Markets in Central and Eastern Europe. Washington, DC: The World Bank.

Shehata, N. F. (2014) Theories and determinants of voluntary disclosure. Accounting and Finance Research, 3 (1), 18-26.

Stulz, R. M. (2009) Securities laws, disclosure, and national capital markets in the age of financial globalization. Journal of Accounting Research, 47 (2), 349-390.

Tracy, J. A. and Barrow, C. (2012) Understanding Business Accounting for Dummies. NJ: John Wiley & Sons, Ltd.

Trochim, W. M., Donnelly, J. P., and Arora, K. (2010) Research Methods: The Essential Knowledge Base. Mason, OH: South-Western Cengage Learning.

Walker, J. S. and Vasconcellos, G. M. (1997) A Financial-Agency Analysis of Privatization: Managerial Incentives and Financial Contracting. NJ: Associated University Presses, Inc.

Watson, A., Shrives, P., and Marston, C. (2002) Voluntary disclosure of accounting ratios in the UK. The British Accounting Review, 34 (4), 289-313.

Webb, K. A., Cahan, S., and Sun, J. (2008) The effect of globalization and legal environment on voluntary disclosure. University of Windsor. Odette School of Business Publications. Accessed on 3 December 2014 from

Wigren, C. (2007) ‘Assessing the quality of qualitative research in entrepreneurship’. In H. Neergaard and J. P. Ulhoi (Eds.) Handbook of Qualitative Research Methods in Entrepreneurship (pp. 383-405). Glos: Edward Elgar Publishing Limited.


Appendix-1: Selected LSE-listed Companies

Company | Country of Inc. | Sector | Market Cap (£m) | Money Raised (£m)
MICRO FOCUS INTERNATIONAL | UK | Software & Computer Services | 2421.47 | 0.00
STRAT AERO PLC | UK | Support Services | 10.39 | 0.65
HAVERSHAM HLDGS PLC | UK | General Financial | 32.30 | 30.00
FEVERTREE DRINKS PLC | UK | Beverages | 190.15 | 93.33
QUARTIX HLDGS PLC | UK | Software & Computer Services | 63.19 | 0.00
NEKTAN PLC | Gibraltar | Travel & Leisure | 50.10 | 3.63
ENTU (UK) LTD | UK | General Retailers | 67.90 | 32.80
FINSBURY FOOD GROUP | UK | Food Producers | 76.68 | 35.00
MXC CAPITAL PLC | UK | General Financial | 31.41 | 0.00
EDISTON PPTY INV CO PLC | UK | Equity Investment Instruments | 96.90 | 95.00
GREEN DRAGON GAS LTD | Cayman Islands | Oil & Gas Producers | 754.28 | 0.00
INDUSTRIAL MULTI PROPERTY TRUST PLC | IM | Real Estate Investment & Services | 4.37 | 0.00
C4X DISCOVERY HLDG PLC | UK | Pharmaceuticals & Biotechnology | 32.07 | 11.00
ATLAS DEV & SUPPORT SERVICES LTD | Cayman Islands | Equity Investment Instruments | 60.98 | 0.00
JIMMY CHOO PLC | UK | Personal Goods | 603.78 | 141.29
FULHAM SHORE PLC (THE) | UK | Travel & Leisure | 24.94 | 1.61
GAMMA COMMUNICATIONS LTD | UK | Mobile Telecommunications | 181.07 | 82.59
CROSSRIDER LTD | UK | Media | 154.40 | 45.90
CAMELLIA | UK | General Financial | 250.67 | 0.00
PEWT SECURITIES PLC | UK | Equity Investment Instruments | 44.90 | 0.00
GENERAL INDUSTRIES PLC | UK | Pharmaceuticals & Biotechnology | 1.57 | 0.93
MEDAPHOR GROUP PLC | UK | Health Care Equipment & Services | 11.07 | 4.68
ATLAS MARA CO NVEST LTD | British Virgin Islands | General Financial | 7.90 | 0.00
PLUTUS POWERGEN PLC | UK | General Financial | 3.78 | 0.68
AUCTUS GROWTH PLC | UK | Equity Investment Instruments | 1.45 | 0.97
ATTRAQT GROUP PLC | UK | Software & Computer Services | 11.03 | 1.53
FAIRFX GROUP PLC | UK | General Financial | 31.84 | 3.53
OPTIBIOTIX HEALTH PLC | UK | Forestry & Paper | 6.46 | 3.30
B.S.D CROWN LTD | Israel | Software & Computer Services | 52.72 | 0.00
PRESSFIT HLDGS PLC | UK | Industrial Metals | 5.09 | 10.18
SAVANNAH PETROLEUM PLC | UK | Oil & Gas Producers | 77.49 | 29.29
TRITAX BIG BOX REIT PLC | UK | Real Estate Investment Trusts | 383.89 | 150.00
BACANORA MINERALS LTD | Canada | Mining | 66.53 | 4.75
DJI HLDGS PLC | UK | Travel & Leisure | 150.76 | 9.00
EPWIN GROUP PLC | UK | Construction & Materials | 131.29 | 10.00
BLACKSTONE / GSO LOAN FINANCING LTD | UK | Equity Investment Instruments | 209.62 | 206.52
SPIRE HEALTHCARE GROUP PLC | UK | Health Care Equipment & Services | 854.30 | 315.00
ERGOMED PLC | UK | Pharmaceuticals & Biotechnology | 46.72 | 11.00
TENGRI RESOURCES | Cayman Islands | Mining | 18.26 | 0.00
SSP GROUP PLC | UK | Food & Drug Retailers | 1061.62 | 467.10
JIASEN INTL HLDGS LTD | British Virgin Islands | Household Goods | 102.80 | 2.33
SQN ASSET FINANCE INCOME FUND LTD | UK | Equity Investment Instruments | 154.50 | 150.00
CAPITAL & REGIONAL | UK | Real Estate Investment & Services | 327.60 | 165.00
CLEARSTAR INC | Cayman Islands | Software & Computer Services | 21.78 | 8.84

(Source: London Stock Exchange, 2014)