News, notes, other stuff

13 December, 2011

Economics - Keynes General Theory



Economics lecture notes

“If you have a lot of money then you can buy any kind of sex you like.” An interesting statement, and probably a true one. If you're feeling libellous, you could just ask Max Mosley.

It is maintained in our society that money is the root driving force of the world, a thing in itself. Economists would certainly agree. The general rule is that people will never stop spending, because satisfaction can never be achieved – the more you consume, the more you want to consume. This is the premise of the economic problem, and economics is all about how you can solve this problem while keeping everybody relatively happy, or at the very least, employed.

Economists will only talk in terms of human want; they completely ignore need. Figuratively speaking, the needy have no cash to spend, and are therefore non-entities (but not really, as there are statistics for the economically active unemployed).

'Need' has no reference point; in Fregean terms, it makes sense but it has no reference within economics. 'Want' makes sense: how much are you prepared to pay for something you want? The statement “I think everybody needs houses” is therefore meaningless in economic terms.

People derive something called utility from spending – a verifiable phenomenon, and therefore it provides a reference point. Consumers will always seek to maximise their utility. Utilitarianism is the idea that everyone left to their own devices will only seek to obtain things that please them, and it treats people as entirely self-contained individuals. It has to assert the latter, or else the first premise wouldn't work. Adam Smith's ideas about the invisible hand of the market are in essence utilitarian – assuming that everyone involved is a sensible, rational human being and knows what is good for them.

David Ricardo – The labour theory of value
Theory: objects have embedded labour power in them, and this affects their worth. As an example, consider a biro versus a grand piano – the piano is worth more because more labour has been expended in making the piano than the biro.

Thomas Malthus – the iron law of population.
Theory: population grows geometrically, i.e. exponentially.
Resources, however, grow arithmetically – they are finite, in a way – so there is never enough to feed the population. Famine is the only way to bring the population back into line.

Ricardo + Malthus = Marx – the iron law of wages
Theory: everyone wants to do a certain job because they've heard it's easy and stress-free but well rewarded (journalist does not seem to fit here, but we'll use it as an example anyway). If there is suddenly a glut of young, talented people all seeking jobs in that industry at the same time, then the employer will have their pick. It then becomes much harder to get a job, and so on. There is a recurring crisis of overproduction and underconsumption. Things are only worth what people are willing to pay for them, and if everyone is unemployed, the price of an object (an apple) is going to go down; the wages of those producing the apple are then cut, and everyone still can't afford to buy the apple.

Classical economists regarded money as something neutral and transparent, simply a medium for the transmission of value. Frenchman Jean-Baptiste Say believed that 'products are paid for by products' and that the sale of one item in essence allowed a businessman to purchase other things with the profits. John Maynard Keynes refuted this, pointing out that the existence of savings accounts meant that Say's Law was completely meaningless.

Thanks to Keynes, modern economists hold a completely different view and believe that money exerts a power on the real world. This power can make people behave in peculiar ways, and is completely detached from the classical theory of economics and the theories of Adam Smith.


The world economy is described as a “gigantic confidence trick” – taxation only gathers enough money to pay off the interest that the government's debt is building up. In an active capitalist society, that debt can never be paid off. You could always print more money if people are starting to feel the pinch, but then you'll find that the value of money plummets to the mud, like it did for the Germans in the 1920s. Hyper-inflation was bad for some, but the comedic mental image of workers carrying home stacks of cash in wheelbarrows never gets old for me. The problem was that the money the government printed wasn't backed by gold – see Alfred Marshall's quantity theory of money – and it really was only worth the paper it was printed on.
The roaring 20s, by comparison, were the result of deficit spending for WWI. This led to the supply of money expanding – the multiplier/accelerator effect – which in turn led to widespread stock market speculation and rising prices.
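The multiplier effect mentioned above can be sketched with a bit of arithmetic – this is my own toy illustration, not anything from the lecture. Each round of spending, a fixed fraction (the marginal propensity to consume, or MPC, a term I'm borrowing from standard Keynesian textbooks) gets spent again, so an initial injection of money expands into a much larger amount of total spending.

```python
# Toy sketch of the Keynesian spending multiplier (illustrative only):
# each round, a fraction `mpc` of what was just spent is spent again,
# so total spending is the sum of a geometric series.

def total_spending(injection, mpc, rounds=1000):
    """Sum the re-spending rounds of an initial injection of money."""
    total = 0.0
    spend = injection
    for _ in range(rounds):
        total += spend
        spend *= mpc
    return total

# With MPC = 0.8, the multiplier converges to 1 / (1 - 0.8) = 5,
# so £100 of new spending supports roughly £500 of total spending.
print(round(total_spending(100, 0.8)))  # → 500
```

That 1 / (1 − MPC) figure is why deficit spending can punch above its weight: the same pound circulates several times before it leaks away into savings.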

The inevitable stock market crash led to the Great Depression of the 1930s and 40s. The government responded to the unemployment crisis by lowering the cost (value?) of money itself and reducing wages. The second world war was, in Keynesian terms, 'good', as war destroys surplus value and keeps people employed.

John Maynard Keynes

Keynes was an extremely influential modern economist, born in 1883. He wrote The General Theory of Employment, Interest, and Money and published it in 1936, in the midst of the Great Depression. Time magazine said of Keynes in 1999, “His radical idea that governments should spend money they don't have may have saved capitalism.” Again, that relates back to the 'confidence trick' from earlier – spending money that doesn't exist isn't necessarily a bad idea, for reasons that Keynes outlines in his book.

For our seminar, we studied Paul Krugman's interpretation of the text. He defends Keynes from the right-wing pressure groups that initially stopped circulation of the book in America – they regarded it as a “leftist tract”, propounding socialism and calling for a large central government and high taxes. In fact, the opposite is true – despite writing his book at a time when great poverty was being attributed to the failure of capitalism, and socialism could easily be seen as the 'sane' way of dealing with the economy, Keynes was adamant that the failure was down to a small number of preventable technical causes. He saw the problem of mass unemployment as narrow and technical, and therefore thought that the solution could be narrow and technical, too. He denied that a total government takeover of the economy would help matters; instead, governments should simply adopt policies to “insure adequate effective demand.” This less intrusive intervention detailed in The General Theory is summarised by Krugman in the following points:

  • Economies can and often do suffer from an overall lack of demand, which leads to involuntary unemployment
  • The economy’s automatic tendency to correct shortfalls in demand, if it exists at all, operates slowly and painfully
  • Government policies to increase demand, by contrast, can reduce unemployment quickly
  • Sometimes increasing the money supply won’t be enough to persuade the private sector to spend more, and government spending must step into the breach

Keynes goes about meticulously detailing every aspect of the then-current model of economics, mainly to lend credibility to himself so that no-one might accuse him of trying to change something he doesn't fully understand. It means that as a whole, General Theory is a cumbersome read, but Keynes writes in the preface: “The composition of this book has been for the author a long struggle of escape, and so must the reading of it be.”

Krugman says that step by step, Keynes is liberating economists from the confines of the Great Depression, the confines created by classical economics.

The conception of classical economics that Keynes fought so hard to overturn was that of a Say's Law abiding model of a barter economy, where supply automatically creates its own demand because wages and income are immediately spent, not invested. It was impossibly simplistic.

He also attacked the notion that falling wages increase employment, an idea which was popular with other economists at the time.

He also refuted the business cycle theory of boom and bust. Other economists of the day were approaching it in the wrong way and asking the wrong questions – asking about the mechanics inherent in a bust following a boom. Keynes instead identified that a bust embodies mass unemployment, and asked how to create more employment. Other economists were also obsessed with identifying the complex disequilibrium of the business cycle, whereas Keynes made the case that the market can actually operate normally without everyone being employed – that it is, in effect, a sort of stable equilibrium.

Keynes also laid out the assertion that there is a strong irrational element in economic behaviour. He arrived at this conclusion through psychological observation rather than strict interpretation of figures.

The difference between macro- and micro-economics

Microeconomics is a 'bottom-up' approach to looking at the economy; it looks at the decisions of specific individuals and businesses, whereas macroeconomics is a 'top-down' approach that looks at the decisions of countries and governments. Microeconomics looks at supply and demand and market competition. Macroeconomics looks at things like national income, unemployment levels and the behaviour of every economically active member as a whole.

Hermeneutics Seminar Paper


Here is the paper that I brought along to the seminar on 3/11/11. I'm also going to include ideas that we discussed afterwards because until we spoke about it I didn't really have a clue what I'd written.

Hermeneutics is, in short, the study of meaning in a text - or, as one German theologian (Friedrich Schleiermacher – I had fun trying to pronounce that) called it, "...the art of avoiding misunderstanding." That may make it sound a little bit like when you were forced to dissect poetry for GCSE English, but hermeneutics was first used as a tool for interpreting religious texts such as the bible - texts which have at times been wilfully misinterpreted to suit the needs of others - so having some sort of pseudo-scientific method of understanding the author's intentions is rather important. Scholars sought to achieve this through analysis of the specific words used, the syntax, and by making allowances for any historical context which may be able to explain away apparent contradictions in manner or custom.


Hermeneutics was essentially a precursor to modern philosophy of language, semiotics, and analytical philosophy: the latter being a method of tackling philosophical issues one at a time rather than coming up with sweeping answers to complicated questions - a trait all too commonly seen in idealism.
It also paved the way for the type of language seen in computer programming: language which must be logically watertight, as a computer is extremely literal and cannot infer any sort of meaning.

Gottlob Frege, a German philosopher and mathematician, published a paper in 1892 titled 'On Sense and Reference', which is thought to be the first original work on the theory of meaning. In it, Frege questions whether the 'sense' of a sign (the way in which a term refers to the object) is truly linked to the meaning that the sign is expressing.


He uses "The morning star is identical with the evening star" as an example to illustrate this point, saying that the statement is nonsensical when viewed logically - but as soon as you apply what he calls reference (the understanding of what a word or symbol is referring to - in this case, both names refer to Venus at different times of the day) and sense (which is how a sign presents its intended meaning) you can understand the thought conveyed in the sentence.

So, when broken down into terms that somebody like myself could understand:
Proposition: A = B ...does it? Why and how?
To understand this, you must understand the sense of both A and B to understand that both are referring to the same thing.
With this, you can see that A does equal B in this case, at least - but only because you have the right context. Frege is saying that this is how people can derive meaning from utterances that are essentially meaningless.
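If you like, here's a rough programming analogy of my own (not Frege's – just how I picture it): two different names can pick out the same underlying object, and "A = B" is informative precisely because the names differ in sense while the reference is identical.

```python
# A loose programming analogy for Frege's morning star / evening star
# example (my own illustration): two names, one referent.

venus = {"body": "Venus"}    # the referent itself
morning_star = venus         # one "sense" - how we pick it out at dawn
evening_star = venus         # another "sense" - how we pick it out at dusk

# The identity statement is informative because, without the context,
# you couldn't know the two names point at the same object:
print(morning_star is evening_star)  # → True
```

Within the analogy, "morning_star = evening_star" only becomes obviously true once you know what each name is bound to – which is exactly the context Frege says you need.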

Frege speaks of meaning in three levels:

  • Sign - the sounds you make, the words on a page, or a symbol
  • Sense - expressing some sort of meaning: Anthony Kenny says that it is not merely a 'mental image', but  that it is "a common property of all users of the language" - so a sign should be universally understood.
  • Reference - the ultimate understanding and connection you make in your own mind upon witnessing the sign and sense.
Frege also thinks we are obsessed with the truth-value of sentences. He highlights this by looking at works of fiction - even though we could not possibly have any reference for Odysseus in the Odyssey, we are still driven to accept a truth-value as the reference of a sentence about him, whether or not it has an actual basis in reality. The conception of a man called Odysseus still makes sense, even though we have no frame of reference to put him in. We want to impose some sort of value on the proposition in a sentence, even if we know that the object it refers to does not exist. This is how human beings are able to discuss metaphysical or abstract concepts.

Towards the end of his life, Frege became less concerned with applying his systems of logic to language, and more interested in the 'colour' present in expressions of thoughts. He said that this colour in language was seen in the form of interjections such as a swear word or a statement like 'Thank god!', which express emotion even though they're not full statements. Another example he gave was using an emotionally charged word in place of a plainer one – 'cur' instead of 'dog.'
No, I don't know what a cur is either, but it certainly sounds horrible. Frege said that such utterances have no effect on the logic of the sentence – if somebody uses 'cur' instead of 'dog' but does not feel hostile while doing so, it will not automatically render the statement false. He is essentially implying that language is absolute – what you say is what you mean, regardless of your real feelings. I get the feeling that Frege might not have understood jokes very well.

In 'The Thought', he explains the importance of context – in order to grasp the thought expressed in a present-tense statement like 'It is snowing', you need to know when it was uttered to know whether it was true. Two people saying the same thing – 'I am hungry' – are expressing two different thoughts: one could easily be true while the other is false. The same sentence can express a different thought in different contexts.

Bertrand Russell was another eminent thinker on the philosophy of language. He took particular issue with the idea of 'pragmatism' in regard to truth statements, where pragmatists would ask whether it was good to believe that a given proposition was true. He said: “It is far easier to settle the plain question of fact 'Have popes always been infallible?' than to settle the question of whether the effects of thinking them infallible are on the whole good.”
He was also uncomfortable with the idea that named objects which have a sense can be denied existence, e.g. “The round square does not exist.” Russell arrived at the conclusion that such statements are logically permissible so long as they are not referring to things with proper names, because you can argue that it is false that there is an object x that is both round and square.

Another example given by Russell is that of truth-value gaps, highlighted in 'The present king of France is bald.' Though the sentence has meaning, it is identified as positively false by Russell: not only is there no King of France, he cannot possibly be bald if he does not exist.

Ordinary Language

A concept proposed by Frege and Russell: the aim of constructing language as a precise instrument for the purposes of logic and mathematics. They felt it was important that a language should contain expressions that had a definite and unclouded sense. If sentences are allowed that are ambiguous or lacking in a truth value (that is, flowery synonyms and expressions of thought, or purely 'comment' in media law terms) then logical deduction becomes impossible.

Ludwig Wittgenstein, in his book the 'Tractatus Logico-Philosophicus', said that “language disguised the structure of thought beyond recognition. It was the task of philosophy to uncover, by analysis, the naked form of thought beneath the drapery of ordinary language.” He said that complex propositions were made up of elementary ones, and that those in turn were little slices of reality. He also believed that each of these tiny propositions were essentially mental pictures.

In the 1920s and 30s, Wittgenstein revised his opinion slightly. He thought that ordinary language was already embedded in society, and that members of certain social circles (e.g. religion) or those carrying out an activity (rugby or something) would adhere to conventions and structures that he called 'language games.' He compares understanding language to knowing how to play chess – it is a state of being rather than a process. Those in these circles use language with certain senses that only members of those circles will understand the reference for – for those outside the circle, their utterances are meaningless or don't have the same meaning.

In this, he asks why the marks on a page are ascribed any special significance – the answer is that they are given significance through understanding meaning. Anthony Kenny states that we go through certain mental events when hearing an utterance in a language we know – emotions, mental images, etc. – that just do not occur when hearing an utterance in a language we do not know.

As a side note, the above is precisely why I find visiting a non-English speaking country so relaxing; you can't tune in to people's inane conversations by accident, it's all just white noise.

Ordinary language explained badly

So, ordinary language. During the seminar, thinking of it in terms of computer programming really helped me to understand the whole thing.

Frege's assertion that statements can only be true, false or meaningless brings us to the workings of a computer at the most basic level. Binary code – 01010100101001 – is simply on or off, true or false. There is no grey area in between, no 0.5 – that is utterly without meaning. Computers are relentlessly rational and cannot infer any meaning beyond pure black and white logic.
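A quick Python sketch of this (my own analogy, nothing formal): a proposition evaluates to True or False, and a term with no reference doesn't come out false – it fails to evaluate at all, which is about as close to "meaningless" as a computer gets.

```python
# True, false, or meaningless - the trichotomy in Python terms.

print(2 + 2 == 4)   # a true proposition  → True
print(2 + 2 == 5)   # a false proposition → False

# A name with no reference isn't false - the interpreter simply
# refuses to evaluate it at all:
try:
    print(unicorn == 1)  # 'unicorn' was never defined
except NameError:
    print("meaningless")  # → meaningless
```

The third case is the interesting one: the machine doesn't assign the statement a value, it rejects it outright – exactly Frege's "makes sense but has no reference" situation.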

When Frege says that statements are meaningless if the words make sense but have no reference, he is taking things very literally, just as a machine would: a computer will only understand and use the words or bits of code that you define for it. Computers today are so powerful not only because of the physical capabilities of processors and hardware, but also because the programming used is almost infinitely complex. Visually, it would correspond to a tree with millions of branches; and yet, right at the very core, there would be a list of very basic definitions and exceptions for rudimentary things that have to be there to enable the rest of the code to make sense. From those basics, more definitions are piled on top. By defining certain terms (e.g. the computer recognises the command RUN as an order to open a certain program) the computer is given sense, and the framework for the reference is the idiosyncratic syntax of a programming language, by which a statement is given meaning. This is why a statement like “you're evil” would make no sense to a computer, and why in futuristic sitcoms most robots can't understand sarcasm.

A simple 'language' that most people have a grasp of is HTML, the set of rules that formats the graphical representation of web pages – somewhere in a browser, the rules of HTML are defined; e.g. the <b></b> tag is recognised as transforming text to bold. <b> is meaningless on its own in this blog post, even though we have the sense of what it is used for, but in an HTML file it has sense and reference and becomes a true statement.
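To push the analogy a bit further (a toy of my own making, not how real browsers actually work): a tag like <b> only gains a reference because a rule somewhere maps it to an action. Outside that rule set, the tag keeps its sense for us humans but has no reference for the machine.

```python
# A toy "browser rule table" - the only place <b> gets its reference.
# (Hypothetical rules for illustration; real rendering is far richer.)
TAG_RULES = {
    "b": "render the enclosed text in bold",
    "i": "render the enclosed text in italics",
}

def interpret_tag(tag):
    # Tags outside the rule set may make sense to a human reader,
    # but have no reference in this context.
    return TAG_RULES.get(tag, "meaningless in this context")

print(interpret_tag("b"))      # → render the enclosed text in bold
print(interpret_tag("blink"))  # → meaningless in this context
```

Swap in a rule for "blink" and the previously meaningless tag suddenly acquires a reference – which is roughly what defining terms does for any formal language.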

Computers crash when a logical path is thrown up that leads to a vague, contradictory or meaningless expression. If the code cannot resolve whether a proposition is ultimately true or false, then it comes unstuck.

Logic gates are exactly the same, and are used primarily in data storage systems and computer memory. A logic gate must use Fregean logic to validate what seems like a simple proposition, i.e. “There is nobody on the road.” That's not specific enough.

In Fregean terms, this would have to be expressed as something like, “For all possible roads, there is no man on this road. – FALSE.”
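That quantified reading can be sketched in Python too (again my own illustration, with made-up data): "for all roads, there is no man on the road" gets checked exhaustively, the way a logic circuit must – no vagueness allowed.

```python
# "For all possible roads, there is no man on this road" - evaluated
# exhaustively over a hypothetical set of roads (illustrative data).
roads = {
    "high street": ["car", "cyclist"],
    "back lane":   [],
    "motorway":    ["man", "lorry"],
}

nobody_on_any_road = all("man" not in users for users in roads.values())
print(nobody_on_any_road)  # → False (there is a man on the motorway)
```

One counterexample is enough to make the universal claim false – which is exactly why the vague English sentence had to be tightened up before a machine could evaluate it.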

Modernism, Hearst and Harmsworth

Modernism

Modernism is starkly contrasted to Empiricism and Romanticism - while the paradigm of the latter two was change, the defining characteristic of modernism was that of relativity. Modernism defines an era where things are no longer seen as black and white as they once were, and sees the introduction of nihilistic philosophers such as Nietzsche and Schopenhauer. Music, too, became more pessimistic, not as much of an expression of joy as in years past; it was instead obsessed with rather Freudian themes of death and fatality.

Relativity is the somewhat unsettling idea that there is no centre to the universe; no grand or even careless plan. In relativity, there are no overriding correct or false ideas, and in Freud's case, no conception of the 'self.' Morally, you might speak of 'cultural relativity' in defence of a certain culture's treatment of people or animals - if there is no real way of knowing what is morally sound, then who are we to criticise anyone? This sort of idea directly opposes Immanuel Kant's moral absolutism, and also defies totalitarian ideals, which are largely based on the mechanics of Kant's ethical maxims and then applied with no regard to cruel consequences.

Nietzsche

Friedrich Wilhelm Nietzsche was a German philosopher born in 1844. Nietzsche loved to compose music, and he described the almost familial relationship he had with his friend, the famed composer Wagner, as his greatest lifetime achievement. In his first book - “The Birth of Tragedy out of the Spirit of Music” - he said that the beauty of Wagner's music is in tragedy. Both men shared a mutual admiration for the writing of Arthur Schopenhauer, who was a huge influence on their works: the tragedy experienced through Wagner's music, and the pessimism seen in Nietzsche's essays. My own personal favourite quote from a Schopenhauer book I read when I was younger, regarding the disequilibrium of pain and pleasure in the world, is: “Compare the happiness of the animal eating to the pain of the animal being eaten.” It is fair to say that he was a rather gloomy man.

A famous quote from Nietzsche is “God is dead, and we have killed him.” He also said that humanity must be overcome. That we are just apes, and we know where we came from. That we don't know where we're going, and that modern medicine means we have effectively stopped evolving - which will culminate in the devolution of the human species. Now, this might even be something you have thought in passing when you see (what you think is) a particularly horrible human being shouting at, swearing at, or blowing smoke in the faces of their offspring in public. I know I have. You wonder who let these people have children in the first place, then realise you are straying dangerously close to eugenics. And history has shown that support for eugenics as an ideology, and the inevitable mass genocide that follows, is rather unsurprisingly not a great idea.

Internal moral debate is one thing, so it stands to reason that politically the idea espoused by Nietzsche above is an extremely divisive one. There are two paths, both with powerful ramifications. The first path involves evolving through technology: the future is not organic; it is intellectual. This has inspired the plot of many a decent sci-fi flick - Blade Runner and The Matrix, to name a couple. The second path involves eugenics, which ultimately throws someone like Hitler into the mix. Hitler (or more accurately, someone he had appointed to do his dirty work for him) tried to selectively breed people as though they were dogs to create the 'master' Aryan race - as a consequence, undesirable people had to be eliminated from the gene pool to prevent contamination, free up resources for the desirables, and most importantly maintain the climate of fear that is endemic in a totalitarian regime. In terms of attempting to 'improve' the gene pool, the 'Final Solution' was first geared not towards Jews, but towards disabled or mentally handicapped people. The Natural Law - the rule that all humans are simply agents looking to uphold - was that of survival of the fittest.

There's a third option, which is education. Science helps us to evolve, and understand the world. It constantly unravels new questions, which lends more strength to relativity. And if relativity really is the opposite to totalitarian ideology, then you could fairly argue that that's no bad thing.


William Randolph Hearst, Alfred Harmsworth/Lord Northcliffe

Citizen Kane - a film produced by and starring Orson Welles that is widely believed to be a biopic of Hearst's own life.
William Randolph Hearst and Lord Northcliffe were extremely influential men in the world of journalism, and their influence can still be seen today: Hearst through the sensationalism ('yellow journalism') that was a byproduct of the circulation war between him and Pulitzer, and Harmsworth through his clever application of his knowledge of the industry, target audiences and marketing tricks in 'Answers' and when working for the Daily Mail.

Hearst was born in 1863 to a wealthy family. He took control of the San Francisco Examiner from his father, and then moved to New York to take over the New York Journal. He later became involved in politics, running for mayor and later governor of New York. Hearst completely changed the format of every paper he came into control of, moving from heavy text-based layouts to front pages full of visually striking photographs. At the peak of Hearst's 'publishing empire' he owned more than 30 newspapers. Hearst hired top quality writers in order to compete with the New York World, owned and published by Joseph Pulitzer. In a particularly cheeky move, he even headhunted writers and a cartoonist from the New York World's Sunday edition, and during the fierce battle for readership Hearst would sanction more and more outlandish stories with apt headlines to draw readers in. The authenticity of such stories could sometimes be dubious, but the practice of yellow journalism is the predecessor of red-top tabloid journalism today. Apart from the 'dubious in origin' thing, because everything you read in the Daily Star is 1000% true facts, folks.

Alfred Harmsworth, a.k.a. Lord Northcliffe, was born in 1865 in Ireland. He was essentially the English Hearst in terms of being a successful publisher. Much like French chemist Louis Pasteur, nobody seemed to think that he was really anything special academically until he ventured out into the world of work and set up his magazine 'Answers to Correspondents on Every Subject Under the Sun', or simply 'Answers'. It contained bizarre articles asking questions such as whether Jews ride bikes. It also ran competitions and offered its readers cash prizes and free gifts. When the Daily Mail launched in 1896, Harmsworth brought these ideas with him.


Articles in the Daily Mail weren't to exceed 250 words - Harmsworth argued that the rising level of literacy meant that people were more able than ever to pick and choose what they read, and that the paper needed to provide short and interesting news articles.