Economics is a powerful social science that deals with the fundamental dilemma we face in all human societies: our desires are effectively unlimited, while what we can produce is not, and we must organise ourselves in light of these constraints. The field studies quite possibly the most complicated phenomena of any science. Every economic choice – what we desire, what we can produce – is influenced by a huge range of factors from diverse fields such as biology, psychology and engineering.
Like any science, economics must simplify: the complexity of the phenomena it studies means that generalisations about human behaviour must be made, and these are embodied in falsifiable models and theories.
Consider the mainstream neoclassical framework, which makes the following assumptions about human behaviour:
- Rational preferences: Individuals have well-ordered and well-behaved preferences amongst the set of possibilities available to them.
- Utility maximising individuals and profit maximising firms: Individuals will choose the highest utility bundle available and firms will choose production that maximises their profit.
- Full information for individuals and firms: Agents in the economy are aware of all prices, product qualities and other relevant information in the economy.
This basic framework is surprisingly powerful as a tool for building economic theory. From these assumptions we can build the neoclassical theory of the firm and the demand and supply curves. Making the additional assumptions of a large number of firms and of preferences with the property of local non-satiation (near any consumption bundle there is always another, arbitrarily close bundle the consumer strictly prefers), we obtain the first fundamental welfare theorem, which states that any competitive market equilibrium is Pareto efficient. This means that the allocation of resources achieved by a free market cannot be changed to make any individual better off without making someone else worse off, i.e. there is no ‘waste’.
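The utility-maximisation assumption above can be made concrete with a small sketch. This is purely illustrative – the Cobb-Douglas utility function, the prices and the income figure are all made up – but it shows the consumer picking the highest-utility bundle they can afford:

```python
# Illustrative sketch of utility maximisation: a consumer with a
# (hypothetical) Cobb-Douglas utility function searches a grid of
# affordable bundles and picks the one with the highest utility.

def utility(x, y, alpha=0.5):
    # Cobb-Douglas utility: U(x, y) = x^alpha * y^(1 - alpha)
    return (x ** alpha) * (y ** (1 - alpha))

def best_bundle(income, px, py, n=1000):
    # By local non-satiation the consumer spends all income, so every
    # candidate bundle exhausts the budget: px*x + py*y = income.
    best, best_u = (0.0, 0.0), -1.0
    for i in range(n + 1):
        x = (income / px) * i / n   # fraction i/n of income on good x
        y = (income - px * x) / py  # the remainder on good y
        u = utility(x, y)
        if u > best_u:
            best, best_u = (x, y), u
    return best
```

For income 100 with prices 1 and 2, the grid search lands on the bundle (50, 25) – matching the analytic Cobb-Douglas result that a consumer with equal exponents spends half their income on each good.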
The simple neoclassical framework can be very powerful as a basic framework for explaining economic phenomena and is applicable in a variety of real-life economic problems. However, all too often, economic laypeople (and even some people trained in economics) retain only the conclusions of the framework, disregarding the fact that these conclusions were predicated on assumptions that do not hold in all cases.
In doing this they miss a lot of the ‘richness’ of economic theory, which emerges when we relax the basic assumptions about human behaviour. In fact, one of the most interesting areas of economic research is information economics, which does away with the assumption of perfect information for buyers and sellers and instead works with asymmetric information, where one party in a transaction has more or better information than the other. One of the most famous papers in the field is the 1970 paper “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism” by George Akerlof, who later went on to win the Nobel Prize in Economics alongside Joseph Stiglitz and Michael Spence (also for their work in information economics). The paper is famous for its use of the used car market (although other markets are analysed) as an example of a market with an information asymmetry with respect to the quality of the goods.
Akerlof supposes there is a used car market with some cars of high quality and some of low quality (lemons). Buyers cannot tell whether the car they are purchasing is a lemon or high quality, but they do know, or can estimate, the average quality of cars on the market. Sellers, who have had time to drive their cars and become familiar with them, can judge quality far better. We therefore have an information asymmetry: one side of the market can better assess the quality of the goods.
Naturally, buyers will pay more for a high-quality car and sellers will seek a higher price for one. But since buyers cannot tell good cars from lemons, both must sell at the same price: if high-quality sellers raise their prices to reflect the higher quality of their cars, the sellers of lemons – whose quality the buyer cannot assess – can simply raise their prices to match. This changes buyer behaviour. For a randomly selected car, buyers will not pay what they would pay for a car they know is high quality, because of the chance that the car is a lemon. Instead they base their willingness to pay on the average quality of a car on the market: the more lemons in the market, the higher the chance that any given car is a lemon, and the lower the willingness to pay. Their willingness to pay therefore sits between what they would pay for a known high-quality car and what they would pay for a known lemon, weighted by their estimate of the proportion of high-quality cars to lemons.
This is good for sellers of lemons: they can now obtain a higher price than they would if buyers had perfect information. For sellers of high-quality cars, however, the price falls below what they could obtain in a perfect-information market. Naturally this encourages some high-quality sellers to exit the market, shrinking the proportion of high-quality cars on sale. Over time this leads to an even lower price (and even fewer high-quality sellers) as buyers adjust their willingness to pay downwards: with high-quality cars leaving the market, they are ever more likely to end up with a lemon. This feedback loop eventually leads to no high-quality cars being sold at all. The market is no longer efficient, because a Pareto improvement (i.e. making someone better off without making anyone worse off) is available: we could achieve it by getting high-quality sellers to trade with buyers at the perfect-information price. This market failure caused by asymmetric information is known as adverse selection.
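The unravelling loop described above can be sketched in a few lines. The numbers here are illustrative, not Akerlof's: quality is uniform on [0, 1], a seller of a quality-q car will sell at any price of at least q, and buyers value quality q at 1.5q but can only observe the average quality of the cars actually offered:

```python
# Illustrative sketch of the adverse-selection feedback loop: each
# round, low prices drive the best cars out of the market, which
# lowers average quality, which lowers the price buyers will pay.

def unravel(rounds=30):
    price = 1.0  # initially generous enough that every car is offered
    history = [price]
    for _ in range(rounds):
        # At this price only sellers with quality q <= price remain,
        # so the average quality of cars on offer is price / 2 ...
        avg_quality = price / 2
        # ... and buyers revise their willingness to pay downwards to
        # 1.5 times the average quality they now expect.
        price = 1.5 * avg_quality
        history.append(price)
    return history
```

Each round multiplies the price by 0.75, so the price falls monotonically towards zero – the feedback loop drives every high-quality car out of the market.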
Obviously, the used car example Akerlof gives in The Market for Lemons is not realistic – we know that cars are not of binary quality. But just like the basic neoclassical framework, it is a useful simplification we can build upon to analyse a broader set of problems. In particular, adverse selection has powerful effects in insurance markets, where the buyer of insurance has better information about their own health than the seller.
In such a market, just as in the market for cars, the insurer must charge the same price to all buyers (as it lacks the information to differentiate between them), and this price must be based on an estimate of the average risk of the insurance being claimed. But if the insurer prices at the average risk, the policy will only be purchased by people who know their risk is higher than average, and will be forgone by people whose risk is lower than average. This drives the insurer's cost of providing cover above the cost implied by the average risk: the insurer is now covering only a self-selecting group with the highest health risks. The rise in cost will either raise the price of insurance further or force insurers out of the market. In either case this is a market failure – individuals willing to pay for insurance and insurers willing to provide it cannot trade, due to information asymmetry.
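This is the same feedback loop as in the car market, sometimes called an insurance ‘death spiral’. A sketch with made-up numbers – individual risk uniform on [0, 1], each person valuing cover at exactly their expected loss, and the insurer forced to charge everyone the average risk of the pool that actually buys:

```python
# Illustrative sketch of the insurance death spiral: each time the
# premium rises, the lowest-risk remaining buyers drop out, raising
# the average risk of the insured pool and hence the premium again.

def death_spiral(rounds=30):
    premium = 0.5  # start by pricing at the population-average risk
    history = [premium]
    for _ in range(rounds):
        # Only people with risk r >= premium find cover worth buying,
        # so the insured pool's average risk rises to (premium + 1) / 2.
        premium = (premium + 1) / 2
        history.append(premium)
    return history
```

The premium rises monotonically towards 1: in the limit only the very highest-risk individuals remain insured, and everyone else is priced out despite being willing to pay a fair premium for cover.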
What can be done?
Government intervention can restore trade and efficiency, solving the problem of adverse selection. In insurance markets, for example, mandatory insurance programs can bring about efficient outcomes by ensuring that those covered are not simply the highest-risk individuals. In the used car market, ‘lemon laws’ attack the information problem directly by allowing buyers to return a vehicle they discover to be a dud.
Solutions need not be solely governmental. In insurance, free voluntary screening to estimate the need for cover is a possible solution in markets where low-cost, accurate screening is available. It works by offering lower prices to those who are screened, while a price based on the average risk of the unscreened population is set for the rest. This gives the lowest-risk portion of the population an incentive to reveal their information and obtain a lower price. The remaining pool now faces a higher price, since the low-risk portion has left the group – which in turn gives the lowest-risk portion of the new pool an incentive to reveal its own information. Eventually, through this cascade, every portion of the population reveals its information, and if the screening really is low cost and accurate the insurance market may be able to function efficiently. In the used car market, warranties and brands allow high-quality sellers to credibly guarantee quality to buyers, which can help counter the effects of information asymmetry.
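The screening cascade can be sketched with a few made-up risk groups. Here a screened group is charged a premium equal to its true risk, while unscreened buyers all pay the average risk of the remaining unscreened pool:

```python
# Illustrative sketch of the screening cascade: the lowest-risk group
# left in the pool always gains by screening, which raises the pooled
# price and hands the incentive to the next-lowest group.

def screening_cascade(risks):
    unscreened = sorted(risks)  # group risks, lowest first
    screened = []
    while unscreened:
        pool_price = sum(unscreened) / len(unscreened)
        if unscreened[0] >= pool_price:
            # No remaining group can undercut the pooled price. If
            # only one group is left, its risk is revealed implicitly:
            # it alone makes up the pool, so the pooled price is its
            # true risk anyway.
            break
        # The lowest-risk group screens, reveals its risk, and pays it.
        screened.append(unscreened.pop(0))
    return screened, unscreened
```

With risk groups [0.1, 0.3, 0.5, 0.7], the three lowest-risk groups screen in turn, leaving only the 0.7 group pooled – and since it is alone, it ends up paying its true risk too, so all information is effectively revealed.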
The commonly held, dogmatic belief that free markets are always and everywhere efficient forgets the assumptions required to reach that conclusion, and does nothing to build the predictive and modelling power of economics. The insights gained from information economics, from behavioural economics (which modifies the assumption of rationality) and from other subfields demonstrate the value of questioning these assumptions. We should further integrate them into both mainstream economic theory and the public economic consciousness, alongside the conclusions of the standard neoclassical model. Doing so will improve our ability to predict and model, and in turn our ability to make good decisions on the big problems of the field.
Further Reading/Mathematical Treatment: