Tragedy of the anticommons: Jobless Graduates in Malaysia
Over the past thirty years, macroeconomists thinking about aggregate labor market dynamics have organized their thought around two relations, the Phillips curve and the Beveridge curve.
The Beveridge curve, the relation between unemployment and vacancies, has very much played second fiddle. We think that emphasis is wrong. The Beveridge relation comes conceptually first and contains essential information about the functioning of the labor market and the shocks that affect it.

Labor markets in most developed countries are characterized by huge gross flows. Globally, millions of workers move either into or out of employment every month. While that movement could be consistent with workers reallocating themselves across a given set of jobs, recent evidence suggests that these flows are associated with high rates of job creation and job destruction. Using a measure of job turnover, defined as the sum of employment increases in new or expanding establishments and employment decreases in shrinking or dying establishments, we find that during the late seventies and early eighties, a period of shrinking employment, job turnover in manufacturing averaged some 10 percent per quarter. From a macroeconomic viewpoint, the labor market is highly effective in matching workers and jobs, yet those flows are so large that they imply the coexistence of unfilled jobs and unemployed workers. Examination of the joint movement of unemployment and vacancies can therefore tell us a great deal about the effectiveness of the matching process, as well as about the nature of the shocks affecting the labor market.

In this article, we first develop a conceptual frame in which to think about gross flows, about the matching process, and about the effects of shocks on unemployment and vacancies. We then turn to the empirical evidence, using data on unemployed graduates in Malaysia. We focus first on the matching process, estimating the "matching function," the aggregate relation between unemployment, vacancies, and new hires. We then interpret the Beveridge relation: more precisely, we look at the joint behavior of unemployment, employment, and vacancies, and infer from it the sources and the dynamic effects of the shocks that have affected the labor market over the past few years.

In the case of Malaysia, many vacancies in the agriculture sector have yet to be filled because most unemployed graduates regard it as a "last resort" area for jobs, reported Mingguan Malaysia. The paper quoted Agriculture and Agro-based Industry Minister Tan Sri Muhyiddin Yassin as saying that unemployed graduates were not interested in agriculture. "The question of unemployment among graduates will not arise if they opt to do short-term courses or studies in this field," he said. Muhyiddin estimated that in five years, close to 500 experts would be needed in agricultural research institutes, adding that Malaysia could not continue to rely on foreign expertise. Further, Muhyiddin said the participation of local graduates was vital to boost the agriculture sector, which was the third biggest contributor to gross domestic product (GDP) after manufacturing and services. Muhyiddin said his ministry had allocated an initial RM9mil to revive the unemployed graduates training scheme, but it was not well received.

This is a classic example of the tragedy of the anticommons, which occurs when rational individuals, acting separately, collectively waste a given resource by under-utilizing it. This happens when too many individuals have rights of exclusion (such as property rights) in a scarce resource.
This situation (the "anticommons") is contrasted with a commons, where too many individuals have privileges of use (or the right not to be excluded) in a scarce resource. The tragedy of the commons is that rational individuals, acting separately, may collectively over-utilize a scarce resource. The term "tragedy of the anticommons" was originally coined by Harvard Law professor Frank Michelman and popularized in 1998 by Michael Heller, a law professor at Columbia University. I confess to having only skimmed this, but it looks pretty mind-bending when the concept is applied to the case of unemployed graduates in Malaysia. Cyberspace was once thought to be the modern equivalent of the Western Frontier: a place where land was free for the taking, where explorers could roam, and where communities could form with their own rules. It was an endless expanse of space: open, free, replete with possibility. This is true no longer. This article argues that we are enclosing cyberspace and imposing private property conceptions upon it. As a result, we are creating a digital anticommons where sub-optimal use of Internet resources becomes the norm, while we neglect other real-world frontiers, such as agriculture, that were once engines of economic growth for Malaysia.
The tragedy of the commons is a metaphor that illustrates what some observers may believe to be sub-optimal use or even destruction of public or other collectively shared resources (the "commons") by private interests when the best strategy for individuals conflicts with the common good. The metaphor is often used to argue in favour of private property and against theories such as libertarian socialism, which aim at communal ownership of resources. The term was popularized by Garrett Hardin in his 1968 Science article "The Tragedy of the Commons." The metaphor goes as follows: the Commons is a shared plot of grassland used by all livestock farmers in a village. Each farmer keeps adding more livestock to graze on the Commons, because he does not experience a direct cost for doing so. In a few years, the soil is depleted by overgrazing, the Commons becomes unusable and the village perishes.
The cause of any tragedy of the commons is that when individuals or companies use a public good, they do not bear the entire cost of their actions. If each seeks to maximize individual utility, he ignores the costs borne by others. This is an example of an externality. The best (non-cooperative) short-term strategy for an individual is to try to exploit more than his or her share of public resources. Assuming a majority of individuals or companies follow this strategy, the theory goes, the public resource gets overexploited.
The tragedy of the commons is a source of intense controversy, precisely because it is not clear whether individuals or companies will or will not follow the overexploitation strategy in any given situation. This strategy maximizes personal gain in the short run but destroys it in the long run. Therefore, it can be expected that a group of far-sighted individuals or companies will not follow it, and will instead make some sort of cooperative agreement with each other to avoid the tragedy of the commons.
Modern equivalents
Modern equivalents are pollution of waterways, illegal logging of forests, overfishing of the oceans, tossing of trash out of automobile windows, e-mail spamming and underutilized human resources such as the unemployed graduates. The contribution of each actor is minute, but summed over all actors, these actions degrade the resource.
Possible solutions to the 'tragedy'
The tragedy of the commons can be seen as a collective prisoner's dilemma. Individuals within a group have two options: cooperate with the group or defect from the group. Cooperation happens when individuals agree to protect a common resource to avoid the tragedy. By cooperating, every individual agrees not to seek more than their share. Defection happens when an individual decides to use more than their share of a public resource.
The prisoner's dilemma is a type of non-zero-sum game. In this game theory problem, as in many others, it is assumed that each individual player is trying to maximise his own advantage, without concern for the well-being of the other player. The Nash equilibrium of this game does not lead to a jointly optimal solution: in equilibrium, each prisoner chooses to defect even though the joint payoff of the players would be higher if both cooperated. Unfortunately (for the prisoners), each player has an individual incentive to cheat even after promising to cooperate. This is the heart of the dilemma.
In the iterated prisoner's dilemma cooperation may arise as an equilibrium outcome. Here the game is played repeatedly. Since the game is repeated, each player is afforded an opportunity to punish the other player for previous non-cooperative play. Thus, the incentive to cheat may be overcome by the threat of punishment, leading to a superior, cooperative outcome.
The classical prisoner's dilemma
The classical prisoner's dilemma (PD) is as follows:
Two suspects, you and another person, are arrested by the police. The police have insufficient evidence for a conviction, and having separated the both of you, visit each of you and offer the same deal: if you confess and your accomplice remains silent, he gets the full 10-year sentence and you go free. If he confesses and you remain silent, you get the full 10-year sentence and he goes free. If you both stay silent, all they can do is give you both 6 months for a minor charge. If you both confess, you each get 6 years.
It can be summarized thus:
                   You Deny                              You Confess
He Denies          Both serve six months                 He serves ten years; you go free
He Confesses       He goes free; you serve ten years     Both serve six years
Let's assume both prisoners are completely selfish and their only goal is to minimize their own jail terms. As a prisoner you have two options: to cooperate with your accomplice and stay quiet, or to betray your accomplice and confess. The outcome of each choice depends on the choice of your accomplice; unfortunately, however, you don't know the choice of your accomplice. Even if you were able to talk to him, you couldn't be sure whether to trust him.
If you expect your accomplice will choose to cooperate and stay quiet, the optimal choice for you would be to confess, as this means you go free immediately, while your accomplice lingers in jail for 10 years. If you expect your accomplice will choose to confess, your best choice is to confess as well, since you are then at least spared the full 10 years and only have to sit out 6 years, while your accomplice does the same. If, however, you both decide to cooperate and stay quiet, you would both be able to get out in 6 months.
Confessing is a dominant strategy for both players. No matter what the other player's choice is, you can always reduce your sentence by confessing. Unfortunately for the prisoners, this leads to a poor outcome where both confess and both get heavy jail sentences. This is the core of the dilemma.
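A quick check of this dominance claim can be written in a few lines of Python, using the jail terms from the table above (in years, where a shorter sentence is better); the dictionary layout is just one convenient way to encode the table.

    # Jail terms (years) from the table above, indexed by (your_move, his_move).
    jail = {
        ("deny", "deny"): 0.5,        # both serve six months
        ("deny", "confess"): 10.0,    # you serve ten years, he goes free
        ("confess", "deny"): 0.0,     # you go free, he serves ten years
        ("confess", "confess"): 6.0,  # both serve six years
    }

    # Confessing is strictly dominant if it yields a shorter sentence for you
    # no matter what your accomplice does.
    dominant = all(jail[("confess", his)] < jail[("deny", his)]
                   for his in ("deny", "confess"))
    print("Confessing is a strictly dominant strategy:", dominant)  # True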
If reasoned from the perspective of the optimal interest of the group (of two prisoners), the correct outcome would be for both prisoners to cooperate with each other, as this would reduce the total jail time served by the group to one year total. Any other decision would be worse for the two prisoners considered together. However by each following their selfish interests, the two prisoners each receive a lengthy sentence.
If you had an opportunity to punish the other player for confessing, then a cooperative outcome could be sustained. The iterated form of this game (discussed below) presents an opportunity for such punishment. In that game, if your accomplice cheats by confessing this time, you can punish him by cheating next time yourself. Thus, the iterated game builds in an opportunity for punishment absent in the classic one-period game.
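To see how the threat of punishment in the iterated game can sustain cooperation, here is a minimal simulation, assuming the same jail terms as above and an arbitrary horizon of ten rounds: a tit-for-tat player (stay silent first, then copy the opponent's previous move) is matched against an unconditional defector and against another tit-for-tat player.

    def jail_term(me, him):
        # One-shot payoffs as above; "C" = stay silent (cooperate), "D" = confess (defect).
        return {("C", "C"): 0.5, ("C", "D"): 10.0, ("D", "C"): 0.0, ("D", "D"): 6.0}[(me, him)]

    def tit_for_tat(opponent_history):
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        history_a, history_b = [], []
        total_a = total_b = 0.0
        for _ in range(rounds):
            a, b = strategy_a(history_b), strategy_b(history_a)
            total_a += jail_term(a, b)
            total_b += jail_term(b, a)
            history_a.append(a)
            history_b.append(b)
        return total_a, total_b

    print(play(tit_for_tat, always_defect))  # (64.0, 54.0): exploited once, then mutual punishment
    print(play(tit_for_tat, tit_for_tat))    # (5.0, 5.0): cooperation sustained every round

Sustained cooperation costs each player only five years over the ten rounds, against the sixty years each that ten rounds of mutual defection would produce, which is the sense in which the threat of punishment supports the superior outcome.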
Economics and non-zero-sum
Non-zero-sum situations are an important part of economic activity due to production, marginal utility and value-subjectivity. Most economic situations are non-zero-sum, since valuable goods and services can be created, destroyed, or badly allocated, and any of these will create a net gain or loss.
If a farmer succeeds in raising a bumper crop, he will benefit by being able to sell more food and make more money. The consumers he serves benefit as well, because there is more food to go around, so the price per unit of food will be lower. Other farmers who have not had such a good crop might suffer somewhat due to these lower prices, but this cost to other farmers may very well be less than the benefits enjoyed by everyone else, such that overall the bumper crop has created a net benefit. The same argument applies to other types of productive activity.
Trade is a non-zero-sum activity because all parties to a voluntary transaction believe that they will be better off after the trade than before, otherwise they would not participate. It is possible that they are mistaken in this belief, but experience suggests that people are more often than not able to judge correctly when a transaction would leave them better off, and thus persist in trading throughout their lives. It is not always the case that every participant will benefit equally. However, a trade is still a non-zero-sum situation whenever the result is a net gain, regardless of how evenly or unevenly that gain is distributed.
Complexity and non-zero-sum
It has been theorized by, among others, Robert Wright, that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent. As one supporter of this view states:
“The more complex societies get and the more complex the networks of interdependence within and beyond community and national borders get, the more people are forced in their own interests to find non-zero-sum solutions. That is, win-win solutions instead of win-lose solutions.... Because we find as our interdependence increases that, on the whole, we do better when other people do better as well - so we have to find ways that we can all win, we have to accommodate each other” - Bill Clinton, Wired interview, December 2000.
Relation to other fields
Game theory has unusual characteristics in that while the underlying subject often appears as a branch of applied mathematics, researchers in other fields carry out much of the fundamental work. At some universities, game theory gets taught and researched almost entirely outside the mathematics department.
Analysts of games commonly use other branches of mathematics, in particular probability, statistics and linear programming, in conjunction with game theory.
In game theory, the Nash equilibrium (named after John Nash) is a solution concept for games involving two or more players. If there is a set of strategies for a game with the property that no player can benefit by changing his strategy while the other players keep their strategies unchanged, then that set of strategies and the corresponding payoffs constitute a Nash equilibrium.
The concept of the Nash equilibrium was originated by Nash in his dissertation, Non-cooperative games (1950). Nash showed that the various solutions for games that had been given earlier all yield Nash equilibria.
A game may have many Nash equilibria, or none. Nash was able to prove that, if we allow mixed strategies (players choose strategies randomly according to preassigned probabilities), then every n-player game in which every player can choose from finitely many strategies admits at least one Nash equilibrium of mixed strategies.
If a game has a unique Nash equilibrium and is played among completely rational players, then the players will choose the strategies that form the equilibrium.
Examples
Competition game
Consider the following two-player game: both players simultaneously choose a whole number from 0 to 10. Both players then win the minimum of the two numbers in dollars. In addition, if one player chooses a larger number than the other, then he has to pay $2 to the other. This game has a unique Nash equilibrium: both players have to choose 0. Any other choice of strategies can be improved if one of the players lowers his number to one less than the other player's number. If the game is modified so that the two players win the named amount if they both choose the same number, and otherwise win nothing, then there are 11 Nash equilibria.
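Both claims in the preceding paragraph can be verified by brute force, enumerating every pair of choices and testing the Nash condition directly; the sketch below does this for the original game and for the modified one. The function and variable names are my own choices for illustration.

    from itertools import product

    def payoff(a, b):
        # Both win the minimum of the two numbers; whoever names the larger
        # number pays $2 to the other.
        pa = pb = min(a, b)
        if a > b:
            pa, pb = pa - 2, pb + 2
        elif b > a:
            pa, pb = pa + 2, pb - 2
        return pa, pb

    def payoff_modified(a, b):
        # Modified game: both win the named amount only if they pick the same number.
        return (a, b) if a == b else (0, 0)

    def nash_equilibria(payoff_fn, strategies=range(11)):
        equilibria = []
        for a, b in product(strategies, repeat=2):
            pa, pb = payoff_fn(a, b)
            a_is_best = all(payoff_fn(x, b)[0] <= pa for x in strategies)
            b_is_best = all(payoff_fn(a, y)[1] <= pb for y in strategies)
            if a_is_best and b_is_best:
                equilibria.append((a, b))
        return equilibria

    print(nash_equilibria(payoff))                # [(0, 0)]
    print(len(nash_equilibria(payoff_modified)))  # 11: every profile where both pick the same number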
Coordination game
The coordination game is a classic two-person game shown as a bi-matrix. Person A's choices are usually listed on the left (and correspond to the first number in each pair), while person B's choices are listed along the top (and correspond to the second number in each pair).
This game is a coordination game for driving. The choices are either to drive on the left or to drive on the right, with 100 meaning no crash and 0 meaning a crash.
                        Drive on the Left    Drive on the Right
Drive on the Left       100, 100             0, 0
Drive on the Right      0, 0                 100, 100
In this case there are two pure-strategy Nash equilibria: when both choose to drive on the left, and when both choose to drive on the right.
If we admit mixed strategies (where a pure strategy is chosen at random, subject to some fixed probability), then there are three Nash equilibria for the same case: the two pure-strategy equilibria above, in which each player's probabilities over (Left, Right) are (100%, 0%) and (0%, 100%) respectively, and a third in which each player's probabilities are (50%, 50%).
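A short expected-payoff calculation confirms the mixed equilibrium, using the payoffs from the driving table above (100 for matching, 0 for a crash); the probabilities are written as the chance of driving on the left, and the helper name is my own.

    def expected_payoff(my_p_left, opp_p_left):
        # Expected payoff in the driving game: 100 when the two choices match, 0 otherwise.
        p_match = my_p_left * opp_p_left + (1 - my_p_left) * (1 - opp_p_left)
        return 100 * p_match

    # Against an opponent who mixes 50/50, Left, Right, and any mixture of the two
    # all yield 50, so no player can gain by deviating: (50%, 50%) is an equilibrium.
    print(expected_payoff(1.0, 0.5), expected_payoff(0.0, 0.5), expected_payoff(0.5, 0.5))  # 50.0 50.0 50.0

    # Against an opponent who always drives on the left, driving on the left is strictly best,
    # which is one of the pure-strategy equilibria.
    print(expected_payoff(1.0, 1.0), expected_payoff(0.0, 1.0))  # 100.0 0.0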
Prisoner's dilemma
The Prisoner's dilemma has one Nash equilibrium: when both players defect. However, "both defect" is inferior to "both cooperate", in the sense that the total jail time served by the two prisoners is greater if both defect. The strategy "both cooperate" is unstable, as a player could do better by defecting while their opponent still cooperates. Thus, "both cooperate" is not an equilibrium. As Ian Stewart put it, ‘sometimes rational decisions aren't sensible!’
Stability
The concept of stability, useful in the analysis of many kinds of equilibrium, can also be applied to Nash equilibria.
A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold:
the player who did not change has no better strategy in the new circumstance
the player who did change is now playing with a strictly worse strategy
If these conditions are both met, then a player who makes the small change in their mixed strategy will return immediately to the Nash equilibrium. The equilibrium is said to be stable. If condition one does not hold, then the equilibrium is unstable. If only condition one holds, then there are likely to be an infinite number of optimal strategies for the player who changed. John Nash showed that the latter situation could not arise in a range of well-defined games.
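In the driving game above, the two pure equilibria are stable in this sense, while the 50/50 mixed equilibrium is not. The sketch below perturbs one player's mixture slightly and shows that the other player then has a strictly better reply, so the first condition fails; the perturbation size is an arbitrary choice.

    # Player two drives on the left with probability 0.51 instead of 0.50.
    opp_p_left = 0.51

    # Player one's expected payoffs (100 for a match, 0 for a crash):
    payoff_left = 100 * opp_p_left           # 51.0
    payoff_right = 100 * (1 - opp_p_left)    # 49.0

    # Player one, who did not change, now has a strictly better strategy (Left),
    # so condition one fails and the mixed equilibrium is unstable.
    print(payoff_left > payoff_right)  # True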
Privatization is frequently associated with industrial or service-oriented enterprises, such as mining, manufacturing or power generation, but it can also apply to any asset, such as land, roads, or even rights to water. In recent years, government services such as health, sanitation, and education have been particularly targeted for privatization in many countries.
Arguments for privatization
The basic argument given for privatization is that governments have few incentives to ensure that the enterprises they own are well run. Private owners, on the other hand, are said to have such an incentive: they will lose money if their businesses are poorly run. The theory holds that not only will the enterprise's clients see benefits, but as the privatized enterprise becomes more efficient, the whole economy will benefit. Ideally, privatization propels the establishment of the social, organizational and legal infrastructures and institutions that are essential for an effective market economy.
Advocates of privatization argue that governments run businesses poorly for the following reasons:
They may only be interested in improving a company in cases when the performance of the company becomes politically sensitive.
Conversely, the government may put off improvements due to political sensitivity — even in cases of companies that are run well.
The company may become prone to corruption; company employees may be selected for political reasons rather than business ones.
The government may seek to run a company for social goals rather than business ones (this is conversely seen as a negative effect by critics of privatization).
It is claimed by supporters of privatization, that privately-held companies can more easily raise capital in the financial markets than publicly-owned ones.
Governments may "bail out" poorly run businesses with money when, economically, it may be better to let the business fold.
Parts of a business which persistently lose money are more likely to be shut down in a private business.
Nationalized industries can be prone to interference from politicians for political or populist reasons, such as making an industry buy supplies from local producers when that may be more expensive than buying from abroad, forcing an industry to freeze its prices or fares to satisfy the electorate or control inflation, increasing its staffing to reduce unemployment, or moving its operations to marginal constituencies. It is argued that such measures can cause nationalized industries to become uneconomic and uncompetitive.
In particular, the first and last reasons are held to be the most important because money is a scarce resource: if government-run companies are losing money, or if they are not as profitable as possible, this money is unavailable to other, more efficient firms. Thus, the efficient firms will have a harder time finding capital, which makes it difficult for them to raise production and create more employment.
Another argument for privatization is that privatizing a company which was unprofitable (or which even generated severe losses) when state-owned takes the burden of financing it off the shoulders and pockets of taxpayers, and frees national budget resources which may subsequently be used for something else. In particular, proponents of laissez-faire capitalism argue that it is both unethical and inefficient for the state to force taxpayers to fund a business that cannot sustain itself. They also hold that even if the privatized company ends up worse off, that is due to the normal market process of penalizing businesses that fail to cope with market reality or that simply are not preferred by customers.
Furthermore, it is argued that an unprofitable public company may become profitable in private hands. An example cited by proponents is Deutsche Post, once part of the German postal service, which began generating profits after it became a part of the international corporation TNT Worldwide Express .
Many privatization plans are organized as auctions where bidders compete to offer the state the highest price, creating monetary income that can be used by the state.
The state can also allow foreigners to buy privatized enterprises, which allows an outside investor to supply the capital needed to upgrade and modernize the firm. Many Polish companies were sold to outside investors, among them the Kwidzyn paper factory, which became a successful firm after its purchase by International Paper.
One of the leading problems of political philosophy is to articulate a solution to the tragedy of the commons. Typically, a solution involves enforcement of conservation measures by an authority, which may be an outside agency or selected by the resource users themselves, who agree to cooperate to conserve the resource. Another commonly proposed solution is to convert each common into private property, giving the owner of each an incentive to enforce its sustainability. Effectively, this is what took place in the English "Enclosure of the Commons"; this case highlights the effects of hidden wealth transfer in privatization, if no or inadequate matching compensation occurs.
The tragedy of the anticommons occurs when many individuals have rights of exclusion in a scarce resource. The tragedy is that rational individuals, acting separately, may collectively "waste" the resource by under-utilizing it compared to what some observers may believe to be a social optimum. An anticommons is contrasted with a commons, where too many individuals have privileges of use (or the right not to be excluded) in a scarce resource. The tragedy of the commons is that rational individuals, acting separately, may collectively over-utilize a scarce resource. In both the anticommons and the commons, there is no hierarchy among owners: no single owner's decision can dominate those of the other owners and force them to use the resource in ways they would not choose on their own.
Worker-Company Matching and Unemployment in Transition to a Market Economy:
An exceptionally low unemployment rate in any country can be brought about principally by the following phenomena: (1) a rapid increase in vacancies along with unemployment, resulting in a relatively balanced unemployment-vacancy situation at the aggregate as well as the district level; (2) a major part played by vacancies and the newly unemployed in the outflow from unemployment; (3) a matching process with strongly increasing returns to scale throughout (rather than only in parts of) the transition period; and (4) the ability to keep the long-term unemployed at relatively low levels.
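To make the idea of a matching function and its returns to scale concrete, here is a minimal Python sketch: it generates hypothetical data from a Cobb-Douglas matching function, hires = A * U^alpha * V^beta, and then recovers the parameters by least squares on the logs. All numbers and parameter values are assumptions chosen for illustration, not estimates from Malaysian data.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 48  # hypothetical number of monthly observations

    # Hypothetical stocks of unemployed graduates (U) and vacancies (V).
    U = rng.uniform(60_000, 120_000, size=n)
    V = rng.uniform(15_000, 40_000, size=n)

    # Generate hires from a Cobb-Douglas matching function with assumed parameters, plus noise.
    A_true, alpha_true, beta_true = 0.5, 0.6, 0.4
    H = A_true * U**alpha_true * V**beta_true * np.exp(rng.normal(0.0, 0.05, size=n))

    # "Estimating the matching function" = regressing log hires on log U and log V.
    X = np.column_stack([np.ones(n), np.log(U), np.log(V)])
    (logA, alpha, beta), *_ = np.linalg.lstsq(X, np.log(H), rcond=None)
    print(f"A ~ {np.exp(logA):.2f}, alpha ~ {alpha:.2f}, beta ~ {beta:.2f}")
    # The estimates come back close to the assumed 0.5, 0.6 and 0.4; an estimated
    # alpha + beta above 1 would indicate increasing returns to scale in matching,
    # as in point (3) above.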
UNEMPLOYMENT - macroeconomics
In economics, a person who is able and willing to work yet is unable to find a paying job is considered unemployed. The unemployment rate is the number of unemployed workers divided by the total civilian labor force, which includes both the unemployed and those with jobs (all those willing and able to work for pay). In practice, measuring the number of unemployed workers actually seeking work is notoriously difficult. There are several different methods for measuring the number of unemployed workers. Each method has its own biases, and the different systems make comparing unemployment statistics between countries, especially those with different systems, difficult. Moreover, the unemployment rate means different things to those affected in different countries (depending on their institutions), so we should be careful in interpreting this statistic in the case of jobless graduates in Malaysia.
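As a minimal illustration of the definition above, the snippet below computes an unemployment rate from hypothetical headcounts; the figures are invented for the example and are not Malaysian statistics.

    # Hypothetical headcounts, in thousands of people.
    employed = 10_200
    unemployed = 380   # able, willing, and actively seeking paid work
    labour_force = employed + unemployed

    unemployment_rate = unemployed / labour_force
    print(f"Unemployment rate: {unemployment_rate:.1%}")  # about 3.6%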
The terms unemployment and unemployed are sometimes used to refer to other inputs to production that are not being fully used -- for example, unemployed capital goods.
Impact on society and the economy
Some of the likely costs of unemployment for society include increased poverty, crime, political instability, mental health problems, and diminished health standards. Understanding the forces that create unemployment, and then trying to mitigate their negative effects to the greatest extent possible, is a central issue in economics.
Costs
Joblessness can hit individual job-seekers hard. Lacking a job often means lacking social contact with fellow employees, a purpose for many hours of the day, and, of course, the ability to pay bills and to purchase both necessities and luxuries. This last is especially serious for those with family obligations, debts, and/or medical costs, especially in a country such as Malaysia, where the affordability of health insurance is often linked to holding a job. Dr. M. Harvey Brenner, among others, has shown that increasing unemployment raises the crime rate and the suicide rate and worsens health.
Because unemployment insurance in the U.S. typically does not even replace 50 percent of the income one received on the job (and one cannot receive it forever), the unemployed often end up tapping welfare programs such as Food Stamps — or accumulating debt, both formal debt to banks and informal debt to friends and relatives.
Some hold that many of the low-income jobs (such as McJobs) aren't really a better option than unemployment with a welfare state (with its unemployment insurance benefits). But since it is difficult or impossible to get unemployment insurance benefits without having worked in the past, these jobs and unemployment are more complementary than they are substitutes. (These jobs are often held short-term, either by students or by those trying to gain experience; turnover in most McJobs is high, in excess of 30%/year.) Unemployment insurance keeps an available supply of workers for the McJobs, while the employers' choice of management techniques (low wages and benefits, few chances for advancement) is made with the existence of unemployment insurance in mind. This combination promotes the existence of one kind of unemployment, frictional unemployment.
Another cost for the unemployed is that the combination of unemployment, lack of financial resources, and social responsibilities may push unemployed workers to take jobs that do not fit their skills or allow them to use their talents. That is, unemployment can cause underemployment (definition 1). This is one of the economic arguments in favor of having unemployment insurance.
In addition, unemployment among graduates makes the employed workers more insecure in their jobs, worrying about being replaced. This feared cost of job loss can spur psychological anxiety, weaken labor unions and their members' sense of solidarity, encourage greater work-effort and lower wage demands, and/or abet protectionism. This last means efforts to preserve existing jobs (of the "insiders") via barriers to entry against "outsiders" who want jobs, legal obstacles to immigration, and/or tariffs and similar trade barriers against foreign competitors. The impact of unemployment on the employed is related to the idea of Marxian unemployment. Finally, the existence of significant unemployment raises the monopsony power of one's employer: it raises the cost of quitting one's job and lowers the probability of finding a new source of livelihood.
Finally, high unemployment implies low real Gross Domestic Product (GDP): we are not using our resources as completely as possible and are thus wasting our opportunities to produce goods and services that allow people to survive and to enjoy life. Much unemployment — called deficient-demand or cyclical unemployment — thus represents a profound form of inefficiency, sometimes called "Keynesian inefficiency." (However, this loss of production might instead be caused by classical unemployment or Marxian unemployment, which reduce potential output by restricting supply.) Okun's Law tells us that for the U.S., the economy misses out on about two percent of its potential output for each one percentage point of unemployment above the "full employment" unemployment rate or NAIRU (see below). Alternatively, this "law" says that as unemployment rises by one percentage point, say from 5% to 6% of the civilian labor force, the percentage of potential output that could have been produced but was not rises by about two points.
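Using the rule of thumb just quoted, a back-of-the-envelope Okun's Law calculation looks like the snippet below; the NAIRU and the observed unemployment rate are hypothetical values inserted for illustration.

    # Okun's Law, as stated above for the U.S.: each percentage point of unemployment
    # above the "full employment" rate (NAIRU) costs roughly two percent of potential output.
    okun_coefficient = 2.0
    nairu = 5.0               # hypothetical "full employment" unemployment rate, in percent
    unemployment_rate = 6.5   # hypothetical observed unemployment rate, in percent

    output_gap = okun_coefficient * (unemployment_rate - nairu)
    print(f"Output lost relative to potential: about {output_gap:.1f} percent")  # about 3.0 percent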
Benefits
Claimed benefits of unemployment for the economy as a whole include keeping inflation from being high, following the Phillips curve, or from accelerating, following the NAIRU/natural rate of unemployment theory. As in the Marxian theory of unemployment, special interests may also benefit: employers often like having their employees in fear of losing their jobs, and thus working hard, keeping their wage demands low, and so on. As noted, unemployment may increase employers' monopsony power. Unemployment may thus promote labor productivity and profitability.
Some say that slow economic growth and the resulting unemployment are actually good, since the constantly needed growth of GDP cannot be sustained forever, given resource constraints and environmental impacts. But others ask whether it is fair to burden the unemployed (usually those at the bottom of the economic heap) with the costs of limiting the use of resources and the abuse of the environment. This suggests that we should seek ways to improve the efficiency of our resource management and environmental stewardship so as to attain growth and low unemployment while making sure that the burdens are distributed fairly.
Causes of Unemployment
Open unemployment of the sort defined above is associated with capitalist economies. Preliterate ("primitive") communities treat their members as parts of an extended family and thus do not allow them to be unemployed, in an effort to preserve the group. In precapitalist societies such as European feudalism, the serfs (though clearly dominated and exploited by the lords) were never "unemployed" because they had direct access to the land (and the needed tools) and could thus work to produce crops. Similarly, on the American frontier during the 19th century there were day laborers and subsistence farmers on poor land whose position in society was somewhat analogous to that of the unemployed of today; but they were not truly unemployed, since they could find work and support themselves on the land.
Under both ancient and modern systems of slave-labor, slave-owners never let their property be unemployed for long. (If anything, they would sell the unneeded laborer.) Planned economies (often called "communist countries") such as the old Soviet Union or today's Cuba typically provide occupation for everyone, using substantial overstaffing if necessary. (This is called "hidden unemployment," which is sometimes seen as a kind of underemployment, definition 3.) Workers' cooperatives — such as those producing plywood in the U.S. Pacific Northwest — do not let their members become unemployed unless the co-op itself goes bankrupt.
On the other hand, under capitalism individual profit-seeking employers do not have to bear the complete social costs of laying off or firing workers, so they are willing to live with (or even profit from) the existence of unemployment, unless employees are able to win good severance packages or protection from the government (such as restrictions on firing and lay-offs). (That is, there is a market failure due to the existence of external costs of firing or laying off people.) On the "supply side," workers' lack of significantly positive net worth (beyond equity in a home or a car) makes it very difficult for them to go into business for themselves to avoid unemployment. Economist Edward Wolff (http://www.econ.nyu.edu/user/wolffe/) estimates that in 1995 in the U.S., families with adults aged 25-45 in the middle income quintile could sustain their current consumption for only 1.2 months (or live at 125% of the poverty standard for 1.8 months) based on their financial reserves. Poorer quintiles of course had more difficulty.
Debate on Unemployment
There is considerable debate amongst economists as to what the main causes of unemployment are. Keynesian economics emphasizes unemployment resulting from insufficient effective demand for goods and service in the economy (cyclical unemployment). Others point to structural problems (inefficiencies) inherent in labor markets (structural unemployment). Classical or neoclassical economics tends to reject these explanations, and focuses more on rigidities imposed on the labour market from the outside, such as minimum wage laws, taxes, and other regulations that may discourage the hiring of workers (classical unemployment). Yet others see unemployment as largely due to voluntary choices by the unemployed (frictional unemployment). On the other extreme, Marxists see unemployment as a structural fact helping to preserve business profitability and capitalism (Marxian unemployment). The different perspectives may be right in different ways, contributing to our understanding of different types of unemployment.
Though there have been several definitions of voluntary (and involuntary) unemployment in the economics literature, a simple distinction is often applied. Voluntary unemployment is blamed on the individual unemployed workers (and their decisions), whereas involuntary unemployment exists because of the socio-economic environment (including the market structure and the level of aggregate demand) in which individuals operate. (As is usual in economics, the sociological or social-psychological factors that help determine individual choices are ignored here.) In these terms, much or most of frictional unemployment is voluntary, since it reflects individual search behavior. On the other hand, cyclical unemployment, structural unemployment, classical unemployment, and Marxian unemployment are largely involuntary in nature. However, the existence of structural unemployment may reflect choices made by the unemployed in the past, while classical unemployment may result from the legislative and economic choices made by labor unions and/or political parties aiming to help workers. So in practice, the distinction between voluntary and involuntary unemployment is hard to draw. The clearest cases of involuntary unemployment are those where there are fewer job vacancies than unemployed workers even when wages are allowed to adjust, so that even if all vacancies were to be filled, there would be unemployed workers. This is the case of cyclical unemployment and Marxian unemployment, for which macroeconomic forces lead to microeconomic unemployment.
Measuring unemployment
The U.S. Bureau of Labor Statistics (BLS) provides some definitions which are similar to, but not the same as, those of other countries.
BLS definitions
The BLS counts employment and unemployment (of those over 16 years of age) using a sample survey of households.[5] (http://www.bls.gov/cps/cps_faq.htm) In BLS definitions, people are considered employed if they did any work at all for pay or profit during the survey week. This includes not only regular full-time year-round employment but also all part-time and temporary work. Workers are also counted as "employed" if they have a job at which they did not work during the survey week because they were:
On vacation;
Ill;
Taking care of some other family or personal obligation (for example, due to child-care problems);
On maternity or paternity leave;
Involved in an industrial dispute (strike or lock-out); or
Prevented from working by bad weather.
Typically, employment and the labor force include only work done for economic gain. Hence, a homemaker is neither part of the labor force nor unemployed. Neither are full-time students nor prisoners considered to be part of the labor force or the unemployed. The latter can be important: in 1999, economists Lawrence F. Katz and Alan B. Krueger estimated that increased incarceration lowered measured unemployment in the United States by 0.17 percentage points between 1985 and the late 1990s. In particular, as of this writing (2004), 3 percent of the US population is incarcerated.
On the other hand, individuals are classified as "unemployed" if they do not have a job, have actively looked for work in the prior four weeks, and are currently available for work. The unemployed includes all individuals who were not working for pay but were waiting to be called back to a job from which they had been temporarily laid off.
Finally, it is possible to be neither employed nor unemployed by BLS definitions, i.e., to be outside of the "labor force." These are people who have no job and are not looking for one. Many of these are going to school or are retired. Family responsibilities keep others out of the labor force. Still others have a physical or mental disability which prevents them from participating in labor force activities.
Children, the elderly, and some individuals with disabilities are typically not counted as part of the labor force and are correspondingly not included in the unemployment statistics. However, some elderly and many disabled individuals are active in the labor market.
In the early stages of an economic boom, both employment and unemployment often rise. This is because people join the labor market (give up studying, start a job hunt, etc.) because of the improving job market, but until they have actually found a position they are counted as unemployed. Similarly, during a recession, the increase in the unemployment rate is moderated by people leaving the labor force.
The accuracy of unemployment statistics
The unemployment rate may not fully capture the impact of the economy on people. First, the unemployment figures indicate how many are not working for pay but seeking employment for pay. They are only indirectly connected with the number of people who are actually not working at all or working without pay. Second, in the United States those who work as little as one hour a week for payment are considered employed, even if they wish to work more. Therefore, critics believe that current methods of measuring unemployment are inaccurate in terms of the impact of unemployment on people, as these methods do not take into account:
Those who have lost their jobs and have become discouraged over time from actively looking for work.
Those who are self-employed or wish to become self-employed, such as tradesmen, building contractors, or IT consultants.
Those who have retired before the official retirement age but would still like to work.
Those on disability pensions who, while not possessing full health, still wish to work in occupations suitable for their medical conditions.
Those who work for payment for as little as one hour per week but would like to work full-time. These people are "involuntary part-time" workers.
Those who are underemployed, e.g., a computer programmer who is working in a retail store until he can find a permanent job.
On the other hand, the measures of unemployment may be "too high." In some countries, the availability of unemployment benefits can inflate statistics since they give an incentive to register as unemployed. Homemakers and other people who do not really seek work may choose to declare themselves unemployed so as to get benefits; people with undeclared paid occupations may try to get unemployment benefits in addition to the money they earn from their work. Conversely, the absence of any tangible benefit for registering as unemployed discourages people from registering.
However, in developed countries this is not a problem, since unemployment is measured using a sample survey (akin to a Gallup poll). This method is also used by many countries besides the U.S., including Canada, Mexico, Australia, Japan, and all of the countries in the European Economic Community. According to the BLS, a number of Eastern European nations have instituted labor force surveys as well.
The sample survey has its own problems, because the total number of workers in the economy is estimated from a sample rather than counted in a census. So many economists look to the survey of employers to get a better estimate of the number of jobs created or destroyed.
Due to these deficiencies, many labor market economists prefer to look at a range of economic statistics such as:
Labour market participation rate (the percentage of people aged between 15 and 64 who are currently employed or searching for employment)
The total number of full-time jobs in an economy
The number of people seeking work as a raw number and not a percentage
The total number of person-hours worked in a month compared to the total number of person-hours people would like to work
Situation in the United States
There are two permanent government projects, conducted by the United States Census Bureau and/or the Bureau of Labor Statistics for the United States Department of Labor, that gather employment statistics monthly. One is the Current Population Survey (CPS) [6] (http://www.bls.gov/cps), which surveys 60,000 households and is used in calculating the unemployment rate. The other is the Current Employment Statistics (CES) [7] (http://www.bls.gov/ces), which surveys 300,000 employers.
These two sources have different classification criteria, and usually produce differing results. As noted, most economists these days see the CES as a more accurate estimate of the state of the job market.
Though many people care about the number of unemployed (8.0 million in the U.S. in December 2004), economists typically focus on the unemployment rate (5.4 percent). This corrects for the normal increase in the number of people working for pay or seeking work due to population increases and increases in the paid labor force relative to the population — and thus the normal increase in the number of unemployed workers.
It is important to note that these statistics are for the U.S. economy as a whole, hiding variations among groups. For December 2004 in the U.S., the unemployment rates for the major worker groups were as follows:
adult men: 4.9 percent;
adult women: 4.7 percent;
whites: 4.6 percent;
Asians: 4.1 percent;
Hispanics or Latinos: 6.6 percent;
blacks: 10.8 percent;
teenagers: 17.6 percent.
These percentages represent the usual rough ranking of these different groups' unemployment rates, though the absolute numbers normally change over time with the business cycle. They come from the Bureau of Labor Statistics (http://www.bls.gov/news.release/pdf/empsit.pdf). (Clicking on this link will lead to a pdf file with up-to-date numbers.)
Aiding the Unemployed
Most developed countries have aids for the unemployed as part of the welfare state. These unemployment benefits include unemployment insurance, welfare, and subsidies to aid in retraining. To calculate the unemployment insurance benefits you might receive in the United States, see the useful page at the Economic Policy Institute (http://www.epinet.org/content.cfm/datazone_uicalc_index).
Of course, unemployment insurance and similar programs have replaced other systems (support from community and churches, home gardening and other production) which played a similar role in the past.
Analogical extensions of Gresham's law
The principle of Gresham's Law has been applied, by analogy, to many different fields. For example, in higher education, "diploma mills" have come into existence, producing low-cost qualifications which are often of little or no market value. According to Gresham's Law as it applies to money, these "bad" diplomas ought to drive out the "good" diplomas. However, unlike laws for money, there is no law requiring employers to accept all diplomas as being of equal value. Consequently, each employer is free to assess the value of qualifications as they see fit.
X-EFFICIENCY - microeconomics
In economics, x-efficiency is the effectiveness with which a given set of inputs is used to produce outputs. If a firm is producing the maximum output it can, given the resources it employs, such as men and machinery, and the best technology available, it is said to be x-efficient. X-inefficiency occurs when x-efficiency is not achieved.
In a market with perfect competition, there will in general be no x-inefficiency because if any firm is less efficient than the others it will not make sufficient profits to stay in business in the long term. However, with other market forms such as monopoly it may be possible for x-inefficiency to persist, because the lack of competition makes it possible to use inefficient production techniques and still stay in business.
X-inefficiency is not the only type of inefficiency in economics. X-inefficiency only looks at the outputs that are produced with given inputs. It doesn't take account of whether the inputs are the best ones to be using, or whether the outputs are the best ones to be producing; that is referred to as allocative efficiency. For example, a firm that employs brain surgeons to dig ditches might still be x-efficient, even though reallocating the brain surgeons to curing the sick would be more efficient for society overall.
SUPPLY AND DEMAND - microeconomics
The supply and demand model describes how prices vary as a result of a balance between product availability at each price (supply) and the desires of those with purchasing power at each price (demand). The graph depicts an increase in demand from D1 to D2 along with the consequent increase in price and quantity required to reach a new market-clearing equilibrium point on the supply curve (S).
In microeconomic theory, the partial equilibrium supply and demand economic model originally developed by Alfred Marshall attempts to describe, explain, and predict changes in the price and quantity of goods sold in competitive markets. The model represents a first approximation for describing a market that is not perfectly competitive. It formalizes theories used by some economists before Marshall and is one of the most fundamental models of some modern economic schools, widely used as a basic building block in a wide range of more detailed economic models and theories. The theory of supply and demand is important to some economic schools' understanding of a market economy in that it is an explanation of the mechanism by which many resource allocation decisions are made. However, unlike general equilibrium models, supply schedules in this partial equilibrium model are fixed by unexplained forces.
In general, the theory claims that when goods are traded in a market at a price where consumers demand more goods than firms are prepared to supply, this shortage (or excess demand) will tend to lead to increases in the price of the goods. Those consumers who are prepared to pay more will bid up the market price. Conversely, prices will tend to fall when the quantity supplied exceeds the quantity demanded (i.e., when there is a glut, market surplus, or excess supply). This price/quantity adjustment mechanism causes the market to approach an equilibrium point, at which the market clears and there is no longer any impetus to change. This theoretical point of stability is defined as the point where producers are prepared to sell exactly the same quantity of goods as consumers want to buy, so there is no endogenous force causing prices to change.
Assumptions and definitions
The theory of supply and demand usually assumes that markets are perfectly competitive. This means that there are many small buyers and sellers, each of which is unable to influence the price of the good on its own. This assumption is central to the simple form of supply and demand theory that is taught in introductory economics. In many actual economic transactions, the assumption fails because some individual buyers or sellers have enough market power to influence prices. In this situation, the simple microeconomic theory of supply and demand is incomplete and more sophisticated analysis is needed. However, the simple theory presented here does apply to, and accurately describes, many real-life market interactions. In many other cases it is a good first-order approximation to some of the major effects in the market.
Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where so-called market failures lead to resource allocation that is suboptimal by some standard. In such cases, economists may attempt to find policies that will avoid waste; directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating 'missing' markets to enable efficient trading where none had previously existed. This is studied in the field of collective action.
Demand
Demand is the quantity that consumers are willing and able to buy at a given price over a period of time. For example, a consumer may be willing to purchase 30 bags of potato chips in the next year if the price is $1 per bag, and may be willing to purchase only 10 bags if the price is $2 per bag. A demand schedule can be constructed that shows the quantity demanded at each given price. It can be represented on a graph as a line or curve by plotting the quantity demanded at each price. It can also be described mathematically by a demand equation. The main determinants of the quantity one is willing to purchase will typically be the price of the good, one's level of income, personal tastes, the price of substitute goods, and the price of complementary goods.
Supply
Supply is the quantity that producers are willing to sell at a given price. For example, the chip manufacturer may be willing to produce 1 million bags of chips if the price is $1 and substantially more if the market price is $2. The main determinants of the amount of chips a company is willing to produce will typically be the market price of the good and the cost of producing it. In fact, supply curves are constructed from the firm's long-run cost schedule.
Simple supply and demand curves
Mainstream economic theory centers on creating a series of supply and demand relationships, describing them as equations, and then adjusting for factors which produce "stickiness" between supply and demand. Analysis is then done to see what "trade-offs" are made in the "market," which is the negotiation between sellers and buyers. Analysis is also done to determine at what point the ability of sellers to sell becomes less useful than other opportunities. This is related to "marginal" costs: the cost of producing the last unit that can be sold profitably, versus the chance of using the same effort to engage in some other activity.
Graph of simple supply and demand curves
The slope of the demand curve (downward to the right) indicates that a greater quantity will be demanded when the price is lower. On the other hand, the slope of the supply curve (upward to the right) tells us that as the price goes up, producers are willing to produce more goods. The point where these curves intersect is the equilibrium point. At a price of P, producers will be willing to supply Q units per period of time and buyers will demand the same quantity. P, in this example, is the equilibrating price that equates supply with demand.
In the figures, straight lines are drawn instead of the more general curves. This is typical in analysis looking at the simplified relationships between supply and demand, because the shape of the curve does not change the general relationships and lessons of supply and demand theory. The shape of the curves far away from the equilibrium point is less likely to be important, because it does not affect the market-clearing price and will not affect it unless large shifts in supply or demand occur. So straight lines for supply and demand with the proper slope will convey most of the information the model can offer. In any case, the exact shape of the curve is not easy to determine for a given market. The general shape of the curve, especially its slope near the equilibrium point, does however have an impact on how a market will adjust to changes in demand or supply. See the section on elasticity below.
It should be noted that both the supply and the demand curves are drawn as functions of price. Neither is represented as a function of the other. Rather, the two functions interact in a manner that is representative of market outcomes. The curves also imply a somewhat neutral means of measuring price. In practice, any currency or commodity used to measure price is itself subject to supply and demand.
Effects of being away from the equilibrium point
Now consider how prices and quantities not at the equilibrium point tend to move towards the equilibrium. Assume that some organization (say government or industry cartel) has the ability to set prices. If the price is set too high, such as at P1 in the diagram to the right, then the quantity produced will be Qs. The quantity demanded will be Qd. Since the quantity demanded is less than the quantity supplied there will be an oversupply (also called surplus or excess supply). On the other hand, if the price is set too low, then too little will be produced to meet demand at that price. This will cause an undersupply problem (also called a shortage).
Now assume that individual firms have the ability to alter the quantities supplied and the price they are willing to accept, and consumers have the ability to alter the quantities that they demand and the amount they are willing to pay. Businesses and consumers will respond by adjusting their price (and quantity) levels and this will eventually restore the quantity and the price to the equilibrium.
In the case of too high a price and oversupply (seen in the diagram at the left), profit-maximizing businesses will soon accumulate excess inventory, so they will lower prices (from P1 to P) to reduce it. The quantity supplied will fall from Qs to Q and the oversupply will be eliminated. In the case of too low a price and undersupply, consumers will compete to obtain the good at the low price; since more consumers would like to buy the good at that price than can be served, profit-maximizing firms will raise the price as high as they can, which is the equilibrium price. In each case, the actions of independent market participants move the quantity and price towards the equilibrium point.
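The adjustment story can be sketched as a simple price-adjustment loop. This is only an illustration under assumed linear curves and an arbitrary adjustment speed, not a claim about how real markets actually search for the equilibrium:

```python
# Price adjusts in proportion to excess demand (hypothetical curves and speed).
def demand(p): return 100.0 - 20.0 * p
def supply(p): return 10.0 + 10.0 * p

p = 4.0                                # start above the equilibrium: a surplus
for _ in range(50):
    excess = demand(p) - supply(p)     # negative means oversupply, positive means shortage
    p += 0.01 * excess                 # cut price under a surplus, raise it under a shortage
print(round(p, 2))                     # approaches the equilibrium price of 3.0
```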
Demand curve shifts
When more people want something, the quantity demanded at all prices will tend to increase. This can be referred to as an increase in demand. The increase in demand could also come from changing tastes, where the same consumers desire more of the same good than they previously did. Increased demand is represented on the graph as the curve shifting right, because at each price point a greater quantity is demanded. An example would be more people suddenly wanting more coffee. This shifts the demand curve from the initial curve D0 to the new curve D1, raising the equilibrium price from P0 to the higher P1 and the equilibrium quantity from Q0 to the higher Q1. In this situation, we say that there has been an increase in demand which has caused an extension in supply, a movement along the supply curve.
Conversely, if demand decreases, the opposite happens. If demand starts at D1 and then decreases to D0, the price will decrease and the quantity supplied will decrease - a contraction in supply. Notice that this is purely an effect of demand changing: the supply curve, and hence the quantity supplied at each price, is the same as before the demand shift. The equilibrium quantity and price are different only because demand is different.
Supply curve shifts
When suppliers' costs change, the supply curve will shift. For example, assume that someone invents a better way of growing wheat, so that the amount of wheat that can be grown for a given cost increases. Producers will be willing to supply more wheat at every price, and this shifts the supply curve S0 to the right, to S1 - an increase in supply. This causes the equilibrium price to decrease from P0 to P1, while the equilibrium quantity increases from Q0 to Q1 as the quantity demanded rises at the new lower price. Notice that in the case of a supply curve shift, the price and the quantity move in opposite directions.
Conversely, if supply decreases, the opposite happens. If the supply curve starts at S1 and then shifts to S0, the equilibrium price will increase and the quantity will decrease. Notice that this is purely an effect of supply changing: the demand curve, and hence the quantity demanded at each price, is the same as before the supply shift. The equilibrium quantity and price are different only because supply is different.
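Both shift experiments can be checked numerically. The sketch below reuses the hypothetical linear curves from the earlier example and simply moves the demand intercept right and then the supply intercept right; the numbers are illustrative only:

```python
# Comparative statics with hypothetical linear curves Qd = a - b*P, Qs = c + d*P.
def equilibrium(a, b, c, d):
    p = (a - c) / (b + d)
    return p, a - b * p

print("base         ", equilibrium(100, 20, 10, 10))  # (3.0, 40.0)
print("demand shift ", equilibrium(130, 20, 10, 10))  # price and quantity both rise
print("supply shift ", equilibrium(100, 20, 40, 10))  # price falls, quantity rises
```

The demand shift moves price and quantity in the same direction, while the supply shift moves them in opposite directions, matching the two paragraphs above.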
Market 'clearance'
The market 'clears' at the point where all the supply and demand at a given price balance. That is, the amount of a commodity available at a given price equals the amount that buyers are willing to purchase at that price. It is assumed that there is a process that will result in the market reaching this point, but exactly what the process is in a real situation is an ongoing subject of research. Markets which do not clear will react in some way, either by a change in price, or in the amount produced, or in the amount demanded. Graphically the situation can be represented by two curves: one showing the price-quantity combinations buyers will pay for, or the demand curve; and one showing the combinations sellers will sell for, or the supply curve. The market clears where the two are in equilibrium, that is where the curves intersect. In a general equilibrium model, all markets in all goods clear simultaneously and the 'price' can be described entirely in terms of tradeoffs with other goods. For a century most economists believed in Say's Law, which states that markets, as a whole, would always clear and thus be in balance.
Elasticity
An important concept in understanding supply and demand theory is elasticity. In this context, it refers to how strongly supply and demand respond to various stimuli. One way of defining elasticity is the percentage change in one variable divided by the percentage change in another variable (known as arc elasticity because it calculates the elasticity over a range of values; this can be contrasted with point elasticity, which uses differential calculus to determine the elasticity at a specific point). Thus it is a measure of relative changes.
Often, it is useful to know how the quantity supplied or demanded will change when the price changes. This is known as the price elasticity of demand and the price elasticity of supply. If a monopolist decides to increase the price of its product, how will this affect its sales revenue? Will the increased unit price offset the likely decrease in sales volume? If a government imposes a tax on a good, thereby increasing the effective price, how will this affect the quantity demanded?
If you do not wish to calculate elasticity, a simpler technique is to look at the slope of the curve. Unfortunately, the slope has units of quantity per monetary unit (for example, liters per euro, or battleships per million yen), which is not a convenient measure for most purposes. So, for example, if you wanted to compare the effect of a price change of gasoline in Europe versus the United States, there would be a complicated conversion between gallons per dollar and liters per euro. This is one reason why economists often use relative changes in percentages, that is, elasticity. Another reason is that elasticity is more than just the slope of the function: because it is defined in percentage terms, a straight line with a constant slope will have a different elasticity at each point.
Let's do an example calculation. We have said that one way of calculating elasticity is the percentage change in quantity over the percentage change in price. So, if the price moves from $1.00 to $1.05 and the quantity supplied goes from 100 pens to 102 pens, the slope is 2/0.05, or 40 pens per dollar. Since the elasticity depends on the percentages, the quantity of pens increased by 2% and the price increased by 5%, so the elasticity is 2/5, or 0.4.
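The pen calculation can be written out directly. This snippet simply reproduces the arithmetic above using the basic percentage-change definition of elasticity (not the midpoint formula):

```python
def elasticity(q0, q1, p0, p1):
    pct_q = (q1 - q0) / q0          # percentage change in quantity
    pct_p = (p1 - p0) / p0          # percentage change in price
    return pct_q / pct_p

slope = (102 - 100) / (1.05 - 1.00)                         # about 40 pens per dollar, unit-dependent
print(round(slope, 2), round(elasticity(100, 102, 1.00, 1.05), 2))   # 40.0 and 0.4, the latter unit-free
```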
Since the changes are expressed in percentages, changing the unit of measurement or the currency will not affect the elasticity. If the quantity demanded or supplied changes a lot when the price changes a little, it is said to be elastic. If the quantity changes little when the price changes a lot, it is said to be inelastic. An example of perfectly inelastic supply, or zero elasticity, is a vertical supply curve (see the section below).
Elasticity in relation to variables other than price can also be considered. One of the most common is income. How would the demand for a good change if income increased or decreased? This is known as the income elasticity of demand. For example, how much would the demand for a luxury car increase if average income increased by 10%? If the elasticity is positive, the increase in demand would be represented on a graph by a rightward shift of the demand curve, because at all price levels a greater quantity of luxury cars would be demanded.
Another elasticity that is sometimes considered is the cross elasticity of demand which measures the responsiveness of the quantity demanded of a good to a change in the price of another good. This is often considered when looking at the relative changes in demand when studying complement and substitute goods. Complement goods are goods that are typically utilized together, where if one is consumed, usually the other is also. Substitute goods are those where one can be substituted for the other and if the price of one good rises, one may purchase less of it and instead purchase its substitute.
Cross elasticity of demand is measured as the percentage change in demand for the first good that occurs in response to a percentage change in the price of the second good. For an example with a complement good: if, in response to a 10% increase in the price of fuel, the quantity of new cars demanded decreased by 20%, the cross elasticity of demand would be -20%/10%, or -2.
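The same arithmetic as a tiny helper; the -2 result below simply restates the fuel-and-cars example from the text:

```python
def cross_elasticity(pct_change_in_quantity, pct_change_in_other_price):
    return pct_change_in_quantity / pct_change_in_other_price

print(cross_elasticity(-0.20, 0.10))   # -2.0: the negative sign marks complements
```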
Vertical supply curve
It is sometimes the case that the supply curve is vertical: the quantity supplied is fixed, no matter what the market price. For example, the amount of land in the world can be considered fixed. In this case, no matter how much someone would be willing to pay for a piece of land, more cannot be created. Also, even if no one wanted all the land, it would still exist. These conditions create a vertical supply curve with zero elasticity (i.e., no matter how large the change in price, the quantity supplied will not change).
In the short run, near-vertical supply curves are even more common. For example, if the Super Bowl is next week, increasing the number of seats in the stadium is almost impossible. The supply of tickets for the game can be considered vertical in this case. If the organizers of the event underestimated demand, then it may well be that the price they set is below the equilibrium price. In this case there will likely be people who paid the lower price but only value the ticket at that price, and people who could not get tickets even though they would be willing to pay more. If some of the people who value the tickets less sell them to people who are willing to pay more (i.e. scalp the tickets), then the effective price will rise to the equilibrium price.
The graph below illustrates a vertical supply curve. When demand 1 is in effect, the price will be p1; when demand 2 is in effect, the price will be p2. Notice that at both prices the quantity is Q. Since the supply is fixed, any shift in demand affects only the price.
Other market forms
In a situation in which there are many buyers but a single monopoly supplier that can adjust the supply or price of a good at will, the monopolist will adjust the price so that his profit is maximised given the amount that is demanded at that price. This price will be higher than in a competitive market. A similar analysis using supply and demand can be applied when a good has a single buyer, a monopsony, but many sellers.
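A minimal sketch of that claim, assuming a hypothetical linear demand curve P = a - b*Q and a constant marginal cost c (none of these numbers come from the text):

```python
a, b, c = 10.0, 1.0, 2.0             # illustrative demand intercept, slope and marginal cost

q_m = (a - c) / (2 * b)              # monopolist: marginal revenue a - 2*b*Q equals c
p_m = a - b * q_m                    # monopoly price

q_c = (a - c) / b                    # competitive benchmark: price equals marginal cost
p_c = c

print(p_m, q_m, p_c, q_c)            # 6.0 4.0 2.0 8.0: higher price, lower quantity under monopoly
```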
Where there are few buyers or few sellers (or both), the simple theory of supply and demand cannot be applied, because the decisions of buyers and sellers become interdependent - changes in supply can affect demand and vice versa. Game theory can be used to analyse this kind of situation. See also oligopoly.
The supply curve does not have to be linear. However, if the supply comes from a profit-maximizing firm, it can be proven that supply curves are not downward sloping (i.e. if the price increases, the quantity supplied will not decrease). Supply curves from profit-maximizing firms can be vertical, horizontal or upward sloping. While it is possible for industry supply curves to be downward sloping, supply curves for individual firms never are.
Standard microeconomic assumptions cannot be used to prove that the demand curve is downward sloping. However, despite years of searching, no generally agreed-upon example of a good with an upward-sloping demand curve (known as a Giffen good) has been found. Non-economists sometimes think that certain goods would have such a curve. For example, some people will buy a luxury car because it is expensive. In this case the good demanded is actually prestige, not a car, so when the price of the luxury car decreases, it is really the amount of prestige that changes; demand for that different good is not decreasing (see Veblen good). Even with downward-sloping demand curves, it is possible that an increase in income may lead to a decrease in demand for a particular good, probably because more attractive alternatives become affordable; a good with this property is known as an inferior good.
An example: Supply and demand in a 6-person economy
Supply and demand can be thought of in terms of individual people interacting at a market. Suppose the following six people participate in this simplified economy:
Alice is willing to pay $10 for a sack of potatoes.
Bob is willing to pay $20 for a sack of potatoes.
Cathy is willing to pay $30 for a sack of potatoes.
Dan is willing to sell a sack of potatoes for $5.
Emily is willing to sell a sack of potatoes for $15.
Fred is willing to sell a sack of potatoes for $25.
There are many possible trades that would be mutually agreeable to both people, but not all of them will happen. For example, Cathy and Fred would be willing to trade with each other at any price between $25 and $30. If the price is above $30, Cathy is not interested, since the price is too high. If the price is below $25, Fred is not interested, since the price is too low. However, at the market Cathy will discover that there are other sellers willing to sell at well below $25, so she will not trade with Fred at all. In an efficient market, each seller will get as high a price as possible, and each buyer will get as low a price as possible.
Imagine that Cathy and Fred are bartering over the price. Fred offers to sell a sack of potatoes for $25. Before Cathy can agree, Emily offers a sack for $24. Fred is not willing to sell at $24, so he drops out. At this point, Dan offers to sell for $12. Emily won't sell for that amount, so it looks like the deal between Cathy and Dan might go through. At this point Bob steps in and offers $14. Now we have two people willing to pay $14 for a sack of potatoes (Cathy and Bob), but only one person (Dan) willing to sell for $14. Cathy notices this and, not wanting to lose a good deal, offers Dan $16 for his potatoes. Now Emily also offers to sell for $16, so there are two buyers and two sellers at that price (note that they could have settled on any price between $15 and $20), and the bartering can stop. But what about Fred and Alice? Fred and Alice are not willing to trade with each other, since Alice is only willing to pay $10 and Fred will not sell for any amount under $25. Alice cannot outbid Cathy or Bob to purchase from Dan, so she will not be able to trade with them. Fred cannot underbid Dan or Emily, so he will not be able to trade with Cathy. In other words, a stable equilibrium has been reached.
A supply and demand graph could also be drawn from this. The demand would be:
1 person is willing to pay $30 (Cathy).
2 people are willing to pay $20 (Cathy and Bob).
3 people are willing to pay $10 (Cathy, Bob, and Alice).
The supply would be:
1 person is willing to sell for $5 (Dan).
2 people are willing to sell for $15 (Dan and Emily).
3 people are willing to sell for $25 (Dan, Emily, and Fred).
Supply and demand match when the quantity traded is two sacks and the price is between $15 and $20. Whether Dan sells to Cathy and Emily to Bob, or the other way round, and precisely what price is agreed, cannot be determined. This is the only limitation of this simple model. Under the full assumptions of perfect competition the price would be pinned down, since there would be enough participants to determine it. For example, if the "last trade" was between someone willing to sell at $15.50 and someone willing to pay $15.51, then the price could be determined to the penny. As more participants enter, the equilibrium price is likely to be bracketed ever more closely.
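The clearing quantity and price band in this six-person example can be computed mechanically from the two lists above. The sketch below is only a toy matching of sorted valuations, ignoring tie-breaking subtleties at the boundaries:

```python
buyers  = sorted([10, 20, 30], reverse=True)   # Alice, Bob, Cathy (willingness to pay)
sellers = sorted([5, 15, 25])                  # Dan, Emily, Fred (willingness to sell)

quantity = 0
while quantity < len(buyers) and buyers[quantity] >= sellers[quantity]:
    quantity += 1                              # trade as long as a buyer's value covers a seller's cost

low  = sellers[quantity - 1]                   # lowest price all participating sellers accept
high = buyers[quantity - 1]                    # highest price all participating buyers pay
print(quantity, (low, high))                   # 2 sacks, price between 15 and 20
```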
It is important to note that this example violates the assumption of perfect competition in that there are a limited number of market participants. However this simplification shows how the equilibrium price and quantity can be determined in an easily understood situation. The results are similar when unlimited market participants and the other assumptions of perfect competition are considered.
Decision-making
Much of economics assumes that individuals seek to maximize their happiness or utility; however, whether they rationally attempt to optimize their well-being given available information is a source of much debate. In this view, which underpins much of economic writing, individuals make choices between alternatives based on their estimation of which will yield the best results. Many important economic ideas, such as the "efficient market hypothesis", rest on this view of decision making.
However, this framework, once called "homo economicus", has for decades been the focus of unease even for those who apply it. Milton Friedman once defended the idea by arguing that inaccurate assumptions could nonetheless produce accurate results. Alfred Marshall was careful to distinguish the tendency to maximize happiness from the maximization of economic well-being. The limits of rationality have been the subject of intense study, for example in Herbert Simon's model of "bounded rationality", for which he was awarded a Nobel Prize in 1978. More recently, irrational behavior and imperfect information have increasingly been the subject of formal modelling, often referred to as behavioral economics, for which Daniel Kahneman won a Nobel Prize in 2002. An example is the growing field of behavioral finance, which combines previous theory with cognitive psychology.
The newer model of information and decision making focuses on asymmetric information, where some participants have key facts that others do not, and on decisions based not only on economic pressures but on the decisions of other economic actors. Asymmetric information and behavioral dynamics lead to different conclusions: in a world of asymmetric information, markets are generally not efficient, and inefficiencies arise as means of hedging against informational disadvantages. While not yet universally accepted, this view is increasingly influential in policy, for example through the writing of Joseph Stiglitz, and in financial modelling.
Criticism of Marshall's theory of supply and demand
Marshall's theory of supply and demand runs counter to the ideas of economists from Adam Smith and David Ricardo up to the creation of the marginalist school of thought. Although Marshall's theories are dominant in elite universities today, not everyone has taken the fork in the road that he and the marginalists proposed. One theory counter to Marshall is that price is already known in a commodity before it reaches the market, negating his idea that some abstract market conveys price information. The only thing the market communicates is whether or not an object is exchangeable (in which case it changes from an object into a commodity). This would mean that the producer creates the goods without already having customers: producing blindly, hoping that someone will buy them ("buy" meaning exchange money for the commodities). Modern producers often have market studies prepared well in advance of production decisions; even so, misallocation of factors of production can still occur.
Keynesian economics also runs counter to the theory of supply and demand. In Keynesian theory, prices can become "sticky", or resistant to change, especially in the case of price decreases. This leads to market failure. Modern supporters of Keynes, such as Paul Krugman, have noted this in recent history, for example when the Boston housing market dried up in the early 1990s, with neither buyers nor sellers willing to exchange at the equilibrium price.
Gregory Mankiw's work on the irrationality of actors in the markets also undermines Marshall's simplistic view of the forces involved in supply and demand.
ECONOMIC RATIONALISM - macroeconomics
Economic rationalism is an Australian term in macroeconomics, applicable to the economic policy of many governments around the world, in particular during the 1980s.
In the USA, the nearest equivalent term was supply-side economics.
The origins of the term are unclear; however, it is likely that they relate to the works of the English philosopher John Stuart Mill (1806-1873), his treatise on economic thought, and the contrast between Rationalism and Romanticism.
To a large extent the term merely means economic liberalism.
Economic rationalism can have positive results when the 'self-interest' is considered as the financial wellbeing of society at large. This financial wellbeing may however be at the expense of environmental and cultural considerations. Economic rationalist policy is often opposed by environmentalists, multiculturalists and others at the opposite end of the political spectrum to economic rationalists themselves.
Criticism of economic rationalism
Economic rationalism is economic policy without social moral consideration, or "the view that commercial activity ... represents a sphere of activity in which moral considerations, beyond the rule of business probity dictated by enlightened self-interest, have no role to play." (Quiggin 1997)
As Margaret Thatcher famously said, "There is no such thing as society. There are individuals, and there are families." This makes one wonder: what is it, precisely, that a Prime Minister does?
Natural Rate of Unemployment and Deflation
By the end of 2001, Japanese unemployment had come to exceed 5%, a historical high. The analyses of this development by such influential publications as the Economic and Fiscal White Paper of Japan (2001) and the Labor Economic White Paper of Japan (2002) have concluded that structural factors are largely responsible, and that the current level of unemployment is therefore close to its natural rate. The purpose of this paper is to refute this view by demonstrating, with Phillips curve analysis, UV analysis and analysis based on Okun's law, that the prime cause of the two-to-three-percentage-point rise in Japanese unemployment between 1990 and 2001 was deflation, a cyclical factor rather than a structural one. The analysis also shows that the natural rate of unemployment for 2001 is between 2.5% and 3.5%.
Phillips curve-based analysis
In a stable macroeconomic environment, it is the norm that a NAIRU (non-accelerating inflation rate of unemployment) type Phillips curve relationship holds: in the longer run the Phillips curve becomes vertical, while in the short run the expected rate of inflation, supply-demand gaps and supply shocks determine the inflation rate. We first demonstrate that this relationship has not been observable in Japan in recent years.
Next, we make reference to the general equilibrium model of Akerlof, Dickens and Perry (1996), which implies that when downward rigidity of nominal wages prevails, zero inflation or deflation will lead to higher unemployment in the long term as well as the short term.
We then proceed to demonstrate that Japan's decade-long deflation in the 1990s put severe strain on the wage-adjustment capacity of Japanese firms, which have been regarded as possessing much flexibility in this regard. Based on this observation, we argue that recent Japanese inflation and unemployment rates are better explained by the non-linear Phillips curve relationship of Akerlof et al. than by a rightward shift of the ordinary, vertical, long-term Phillips curve.
UV analysis
We first summarize the theory behind UV analysis and the method of its estimation. We then make a critical assessment of the UV analysis employed by the two government White Papers mentioned above. Specifically, the White Papers (1) fail to take structural factors into account when trying to explain the shift of the Beveridge curve; (2) estimate shifts of a normal UV curve that is convex to the origin, whereas clockwise, circular movements are historically observed in the Japanese UV curve, so that what is estimated in the White Papers includes not only structural but also cyclical factors; and (3) do not consider the possibility that, because of deflation, the ability of the labor market to adjust to variations in supply and demand may be weakened.
With these points in mind, we proceed to estimate the natural rate of unemployment through UV analysis, by breaking down the unemployment rate into vacancy rates and several possible structural factors. The resulting figure is between 2.92% and 3.25%.
Our estimate of the UV curve is steeper than the ones in the White Papers. The difference can be explained by the second point raised above: because the White Papers fail to take the circular movement of the UV curve into consideration, the slope of the curve is underestimated and structural unemployment is overestimated.
Okun’s law-based analysis
Here we break down the unemployment rate into the output gap and the natural rate of unemployment, the latter being explained by structural factors. The resulting estimate of the natural rate of unemployment in 2001 is between 2.63% and 3.43%.
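As a purely illustrative sketch of that kind of decomposition (the numbers below are invented, not the paper's data), a linear Okun's-law regression reads the intercept as the natural rate:

```python
import numpy as np

output_gap   = np.array([ 2.0,  1.0,  0.0, -1.0, -2.0, -3.0])  # % of potential output (made up)
unemployment = np.array([ 2.4,  2.7,  3.0,  3.4,  3.6,  3.9])  # unemployment rate in % (made up)

okun_coef, natural_rate = np.polyfit(output_gap, unemployment, 1)
print(round(natural_rate, 2), round(okun_coef, 2))   # intercept near 3.0, negative Okun coefficient
```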
The stickiness and lagged movement of the unemployment rate, which were believed to be causing the circular movement of the UV curve, are clearly observed. We thus conclude that, for Japan, a non-linear Okun's law is more applicable than a linear one.
A Beveridge curve is a graphical representation of the relationship between unemployment and the number of job vacancies. With vacancies on the vertical axis and unemployment on the horizontal axis, it slopes downwards, because a higher rate of unemployment normally occurs alongside a lower rate of vacancies. If the curve moves outwards over time, a given level of vacancies is associated with higher and higher levels of unemployment, implying decreasing efficiency in the labour market. Inefficient labour markets are due to mismatches between available jobs and the unemployed, and to an immobile labour force. The position on the curve can indicate the current state of the economy in the business cycle: recessionary periods are indicated by high unemployment and low vacancies, corresponding to a position below the 45-degree line, while high vacancies and low unemployment indicate expansionary periods, above the 45-degree line.
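One standard way to see why the curve slopes down, and why lower matching efficiency pushes it outward, is to assume a Cobb-Douglas matching function and impose a steady state in which job separations equal new matches. The sketch below uses made-up parameter values and is not drawn from the text:

```python
# Steady state: s*(1 - u) = A * u**alpha * v**(1 - alpha); solve for v given u.
def beveridge_vacancies(u, A, alpha=0.5, s=0.03):
    return (s * (1.0 - u) / (A * u**alpha)) ** (1.0 / (1.0 - alpha))

for u in (0.04, 0.06, 0.08, 0.10):
    print(u, round(beveridge_vacancies(u, A=0.5), 3), round(beveridge_vacancies(u, A=0.4), 3))
# Vacancies fall as unemployment rises (the downward slope); the lower-efficiency
# column (A=0.4) lies above the other, i.e. the whole curve has shifted outward.
```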
The Beveridge Curve can move for the following reasons:
The matching process determines how efficiently workers find new jobs. Improvements in the matching system shift the curve towards the origin, because an efficient matching process fills vacancies and employs the unemployed faster. Improvements can include the introduction of employment agencies ('job centres'), unionisation, and increased mobility of labour.
Labour force size: as the number of people looking for jobs increases, the unemployment rate increases, shifting the curve outwards from the origin. The labour force can grow through changes in the participation rate, the age structure of the population and immigration.
Long-term unemployment pushes the curve outwards from the origin. This could be caused by deterioration of human capital or a negative perception of the unemployed by potential employers.
Frictional unemployment: a decrease in frictional unemployment would reduce the number of firms searching for employees and the number of unemployed searching for jobs, shifting the curve towards the origin. Frictional unemployment arises from job losses, resignations and job creation.
The curve is named after William Beveridge (1879-1963).
In macroeconomics, the Phillips curve is a supposed inverse relationship between inflation and unemployment. The British economist A.W. Phillips observed an inverse relationship between inflation and unemployment in the British economy in the century up to 1958 -- when inflation was high, unemployment was low, and vice-versa. As seen to the right, when drawn on a graph with the inflation rate on the vertical axis and the unemployment rate on the horizontal axis, the relationship between the variables showed a downward sloping curve, the Phillips curve (PC).
It is little known that the American economist Irving Fisher pointed to this kind of Phillips curve relationship back in the 1920s. On the other hand, Phillips' original curve described the behavior of money wages. So some believe that the PC should be called the "Fisher curve."
In the years following his 1958 paper, many economists in the advanced industrial (rich) countries believed that Phillips' results showed that there was a permanently stable relationship between inflation and unemployment. One implication of this for government policy was that governments should tolerate a reasonably high rate of inflation as this would lead to lower unemployment – there would be a trade-off between inflation and unemployment. For example, monetary policy and/or fiscal policy (i.e., deficit spending) could be used to stimulate the economy, raising gross domestic product and lowering the unemployment rate, as shown by the change marked A in the diagram. Moving along the Phillips curve, this would lead to a higher inflation rate, the cost of enjoying lower unemployment rates.
To a large extent, a leftward movement along the PC describes the path of the U.S. economy during the 1960s, though this move was less a deliberate decision to achieve low unemployment than an unplanned side-effect of spending on the Vietnam war. In other rich countries, the economic boom was more the result of conscious social-democratic or Keynesian policies.
Stagflation
In the 1970s, however, many countries experienced high levels of both inflation and unemployment, a combination known as stagflation. The original theories based on the Phillips curve suggested that this could not happen, and the idea that there was a simple, predictable, and persistent relationship between inflation and unemployment was abandoned by most if not all macroeconomists.
New theories, such as rational expectations and the NAIRU (non-accelerating inflation rate of unemployment) arose to explain how stagflation could occur. The latter theory – also known as the theory of the "natural" rate of unemployment – distinguished between the short-term Phillips curve and the long-term one. The short-term PC looked like a normal PC but shifted in the long run as expectations changed (see below). In the long run, only a single rate of unemployment (the NAIRU or "natural" rate) was consistent with a stable inflation rate. The long-run PC was thus vertical, so there was no trade-off between inflation and unemployment.
In the diagram, the long-run Phillips curve is the vertical red line. The NAIRU theory says that if the unemployment rate stays below this line, as after change A, inflationary expectations will rise. This shifts the short-run Phillips curve upward, as indicated by the arrow labelled B, making the trade-off between unemployment and inflation worse: there would be more inflation at each unemployment rate than before. Thus, by pointing to the problem of endogenously caused "inflationary acceleration", the theory explained stagflation.
The name "NAIRU" arises because with actual unemployment below it, inflation accelerates, while with unemployment above it, inflation decelerates. With the actual rate equal to it, inflation is stable, neither accelerating nor decelerating.
The rational expectations theory said that expectations of inflation were equal to what actually happened, apart from minor and temporary errors. This in turn suggested that the short run was so short as to be non-existent: any effort to reduce unemployment below the NAIRU would immediately cause inflationary expectations to rise, and thus the policy would fail. Unemployment would never deviate from the NAIRU except due to random and transitory mistakes in forming expectations about future inflation rates. In this perspective, any deviation of the actual unemployment rate from the NAIRU was an illusion.
However, in the 1990s in the U.S., it became increasingly clear that the NAIRU was unknown and likely changing in an unpredictable way. In the late 1990s, the actual unemployment rate fell below 4 percent of the labor force, much lower than almost all estimates of the NAIRU. But inflation stayed very moderate rather than accelerating. So, just as the Phillips curve had become a subject of debate, so did the NAIRU.
Further, the concept of rational expectations came under much doubt when it became clear that the main assumption of models based on it was that there exists a single (unique) equilibrium in the economy, set ahead of time and determined independently of demand conditions. The experience of the 1990s suggests that this assumption cannot be sustained.
The Phillips curve today
Pragmatic economists, such as Robert J. Gordon (http://faculty-web.at.nwu.edu/economics/gordon/indexlayers.html) of Northwestern University continue to use the Phillips curve. However, unlike the Phillips curve that was popular in the 1960s, the new version shifts, so that the "trade-off" can worsen (as in the 1970s) or get better (as in the 1990s). This forms what Gordon calls the triangle model, in which the actual inflation rate is determined by the sum of short-term Phillips curve inflation, supply shocks, and built-in inflation.
The last reflects inflationary expectations and the price/wage spiral. Supply shocks and changes in built-in inflation are the main factors shifting the short-run PC and changing the trade-off. In this theory, it is not only inflationary expectations that can cause stagflation. For example, the steep climb of oil prices during the 1970s could have this result.
Changes in built-in inflation follow the partial-adjustment logic behind most theories of the NAIRU: low unemployment encourages high inflation, as with the simple Phillips curve. But if unemployment stays low and inflation stays high for a long time, as in the late 1960s in the U.S., both inflationary expectations and the price/wage spiral accelerate. This shifts the short-run Phillips curve upward and rightward, so that more inflation is seen at any given unemployment rate (shift B in the diagram). High unemployment encourages low inflation, again as with a simple Phillips curve. But if unemployment stays high and inflation stays low for a long time, as in the early 1980s in the U.S., both inflationary expectations and the price/wage spiral slow. This shifts the short-run Phillips curve downward and leftward, so that less inflation is seen at each unemployment rate.
In between these two lies the NAIRU, where the Phillips curve does not have any inherent tendency to shift, so that the inflation rate is stable. However, there seems to be a range in the middle between "high" and "low" where built-in inflation stays stable. The ends of this "non-accelerating inflation range of unemployment rates" change over time.
Theoretical questions
The Phillips curve started as an empirical observation in search of a theoretical explanation. There are several major explanations of the short-term PC regularity.
To Milton Friedman there is a short-term correlation between inflation shocks and employment. When an inflationary surprise occurs, workers are fooled into accepting lower pay because they do not see the fall in real wages right away. Firms hire them because they see the inflation as allowing higher profits for given nominal wages. This is a movement along the Phillips curve as with change A. Eventually, workers discover that real wages have fallen, so they push for higher money wages. This causes the Phillips curve to shift upward and to the right, as with B.
Some economists reject this theory because it implies that workers suffer from money illusion. However, one of the characteristics of a modern industrial economy is that workers do not encounter their employers in an atomized and perfect market. They operate in a complex combination of imperfect markets, monopolies, monopsonies, labor unions, and other institutions. In many cases, they may lack the bargaining power to act on their expectations, no matter how rational they are, or their perceptions, no matter how free of money illusion they are. It is not that high inflation causes low unemployment (as in Milton Friedman's theory) as much as vice-versa. Low unemployment raises worker bargaining power, allowing them to successfully push for higher nominal wages. To protect profits, employers raise prices, so that low unemployment causes inflation.
Similarly, built-in inflation is not simply a matter of subjective "inflationary expectations" but also reflects the fact that high inflation can gather momentum and continue beyond the time when it was started, due to the objective price/wage spiral.
Dr Ismail Aby Jamal