An Analysis of Dogecoin

 
 
 

Disclaimer: This article is now outdated. It was written in early 2014. I am no longer involved in crypto, and hold no opinion on current trends.

You may read the outdated article below:


Introduction:

For those of you who don’t know me, I am Tuxedage — organizer of the dogecoin lite wallet fund, moderator of the thousand-user strong #dogecoin and #dogecoin-market IRC channels, as well as an active member and supporter of the dogecoin community. If you are reading this, I will assume you are already familiar with cryptocurrencies, so I won’t beat around the bush.

This document explains why Dogecoin is going to the Moon.

Personal Story:

I am a Bitcoin early adopter. I put trivial amounts of money into Bitcoin because I saw it as a protocol and community with significant potential. I have made some ridiculously good returns on my initial investment because of that decision.

When I first heard about Dogecoin in mid-December of 2013, I initially dismissed it as a stupid meme coin. I publicly stated that I did not understand Dogecoin, and that anyone who put money into it was stupid. In my mind, Dogecoin was simply a useless clone that tried to imitate Bitcoin’s success, one that was doomed to inevitable failure.

But then I started to think about it more seriously. Were my initial criticisms of dogecoin valid? Could I have been wrong?

I was wrong.

On Christmas of 2013, I became a dogecoin believer. A shibe.

Over the next week, I cashed out every single one of my bitcoins into dogecoins. At the start of the New Year, I was officially all-in on dogecoin.

At that time, doing so was completely and utterly insane. Dogecoin’s market cap was barely 7 million. Dogecoin was still a speculative new meme currency — unworthy of attention. All the cool projects we’ve since funded, such as the various Olympic candidates, did not yet exist. Furthermore, I had willingly violated the two essential rules of investing:

  • (1) Don’t put in money you cannot afford to lose
  • (2) Don’t put all your eggs in one basket.

This essay was written to explain my reasoning for doing this, and why I continue to go all-in on dogecoin. I like to justify my actions. Writing this document serves as a quick and easy reference for people who ask me “Why did you do it?”

The second reason I wrote this essay is that I saw no satisfactory document that met my standards of eloquence in explaining how ridiculously undervalued dogecoin is. Whilst many of the following factoids are scattered throughout the internets or on reddit, they are hidden behind some level of obscurity, and there is no single document listing all of them. Therefore I must do it.

Thirdly, I wanted to archive my thoughts so that in the future, I will be able to remember my exact reasoning for doing this. Memory is often faulty, and I didn’t want to misremember myself as being more overconfident or under-confident than I actually was.

I realize that writing this document in and of itself breaks the spirit of dogecoin. Firstly, because it talks about profit-making, rather than community-building. As we like to say: Community before profits. The dogecoin community is not typically very fond of profiteers who care about the coin only for a quick buck, rather than because they genuinely believe in dogecoin and support it as a mechanism for change.

Writing this document casts me in a bad light as a greedy profit-maximizer. It also may lead to greater amounts of speculation and a greed-fueled hype bubble in dogecoin, which may destroy the very thing we are attempting to accomplish. Furthermore, this document takes a serious tone, and I dislike that. I enjoy dogecoin because it allows me to act like an idiot and talk in shibespeak.

Because of the aforementioned reasons, I have hesitated to write this for a very long time. But eventually, I broke. In an ocean of people talking about how dogecoin would never work, I just wanted to grab the nearest person, and yell at them “BUY DOGECOIN!” whilst shaking them vigorously and slapping them across the face. Of course, I can’t do that over the internet, so writing this document was the next best thing.

So I relented. Much sorry. Very apology. Wow.


The Prediction

I predict that Dogecoin’s market cap will, at minimum, exceed Litecoin’s by the end of 2014 (P > 0.8).

This is my conservative estimate. My optimistic estimate is that dogecoin is going to hit escape velocity and pierce the heavens! Not just the Moon, but Alpha Centauri as well. Even reaching ten cents per coin is not impossible! Why, maybe even a dollar per coin!

The Argument:

Why am I so confident of this?

The biggest and foremost reason is that dogecoin has an insanely high reproduction number (R-factor). I first identified this phenomenon around 20 Jan, and so far, the evidence heavily confirms it.

A basic reproduction number is a variable used in the modelling of infections to determine, on average, the number of people a carrier spreads the disease to over the course of a certain period of time. The higher this variable, the faster the disease spreads, and the more dangerous and infectious it is.
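As a toy illustration of why this variable matters (my own sketch with made-up numbers, not measured dogecoin data), growth under a constant reproduction number is geometric:

```python
# Toy geometric-spread model: each convert recruits r new converts per period.
# All numbers here are illustrative assumptions, not real adoption figures.
def converts_after(initial: int, r: float, periods: int) -> int:
    """Total converts after `periods` rounds of spreading."""
    total = float(initial)
    newly = float(initial)
    for _ in range(periods):
        newly *= r       # each existing "carrier" recruits r new people
        total += newly
    return round(total)

# With r = 2, 100 initial shibes become 204,700 converts after 10 periods;
# with r = 0.5 the idea fizzles out at around 200.
print(converts_after(100, 2.0, 10))
print(converts_after(100, 0.5, 10))
```

The point is the asymmetry: anything that pushes the per-person spread rate above 1 produces explosive growth, and anything below 1 dies out, which is why the traits listed below matter so much.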

Dogecoin has a number of traits that cause it to spread incredibly quickly – traits that cannot simply be explained by the fact that it is a new coin. Each dogecoin convert tends to spread dogecoin to a high number of other people within a short period of time.

I believe this is because dogecoin exists not only as a p2p protocol, but as a powerful cultural meme. I use meme not merely in the context of being an ‘internet meme’, where funny pictures of various animals are captioned with funny text, but of the classic Richard Dawkins definition – a powerful, self-replicating idea that spreads from person to person.

This meme is so ridiculously good at self-replication for the following reasons:

____________________

Just the Tip

Although my involvement with bitcoin has waxed and waned over the last few years, I was still lucky enough to observe its growth from very early days, despite being unable to make a decent profit from it. One big difference between dogecoin and bitcoin during its early days is the inclination of its users to tip each other small amounts very frequently. This is not just anecdotal – a quick search on the statistics of the dogetipbot vs. the bitcointip bot on reddit will confirm that dogecoin is tipped around 100x as much as bitcoin.

This strikes me as an incredibly effective evangelizing tactic.


This works for three reasons, the first being the most obvious – if you have been tipped some dogecoins, you’re very likely to download a wallet and keep some marginal involvement with the dogecoin community. After all, who’s going to refuse free money? Even if the opportunity cost of downloading that wallet and researching the protocol outweighs the value of the dogecoins received, I suspect that not many people will refuse. My own experiences proselytizing dogecoin only confirm this.

This is because when people try to evaluate the dogecoins they are receiving, their minds will anchor to the closest familiar situation in which people were given free cryptocurrency.

That is bitcoin. Stories about people who held hundreds of bitcoins in the past, but threw them away, are all too frequently heard. People will avoid throwing away their dogecoins out of the subconscious fear that the same situation will repeat itself. They will believe that it is “better to be safe than sorry”, regardless of how utility-maximizing doing so actually is.

The second reason is the reciprocity principle. As someone very interested in social engineering, I can attest that the desire to reciprocate is so powerful that it is routinely taken advantage of by confidence tricks. The following experiment is told as an example:

  • Regan had subjects believe they were in an “art appreciation” experiment with a partner, who was really Regan’s assistant. In the experiment the assistant would disappear for a two-minute break and bring back a soft drink for the subject. After the art experiment was through, the assistant asked the subject to buy raffle tickets from him. In the control group the assistant behaved in exactly the same manner, but did not buy the subject a drink. The subjects who had received the favor, a soft drink, bought more raffle tickets than those in the control group despite the fact that they hadn’t asked for the drink to begin with. Regan also had the subjects fill out surveys after they finished the experiment and found that whether they personally liked the assistant or not had no effect on how many tickets they bought. One problem of reciprocity, however, focuses on the unequal profit obtained from the concept of reciprocal concessions. The emotional burden to repay bothers some more than others, causing some to overcompensate with more than what was given originally. In the Regan study, subjects paid more money for the tickets than the cost of the (un-requested) soft drink.

The fact that these tips are cents in value does not matter. Scope insensitivity will ensure that regardless of the actual monetary value of the tips given, people who were tipped will feel obligated to contribute back to the community, including becoming evangelists themselves.

Finally, this tendency for widespread tipping contributes to the familiarity principle, also known as the mere exposure effect. This is a complicated way of saying that because dogecoin is tipped so widely amongst non-shibes, it will be familiar to other non-shibes. Ceteris paribus, the more often something is seen, the more pleasant and likeable it is. This will create pressure for other non-shibes to join the dogecoin community of their own volition. This will ensure that dogecoin spreads really quickly.

The Twin Memes

However, widespread tipping is not the only reason why dogecoin has such a ridiculously high basic reproduction number. Let’s now analyze dogecoin in the context of an internet-meme, not merely the Richard Dawkins definition.

Critics argue that dogecoin will never succeed because it is a meme. They are badly mistaken. Dogecoin will succeed precisely because it is a meme – and a very powerful one at that!

Yes, dogecoin is ridiculous. It is a joke. The Shiba Inu meme, whilst a fascinating phenomenon, is silly. But who’s to say that silly memes can’t be worth ridiculous sums of money?

For example, the “I Can Has Cheezburger?” site, based on user-uploaded pictures of cats with funny captions, has been valued at a minimum of $30 million. Their book, “How to Take Over Teh Wurld”, entered the number one spot of the New York Times bestseller list. Humor is a fundamental part of the human experience, and it’s a contributing factor to memetic appeal, not a handicap. There is no reason why dogecoin is somehow an exception.

At great risk of jinxing cryptocurrencies to the same fate, I will also note that at their peak, soft stuffed toys called Beanie Babies were worth ridiculous sums of money — hundreds of millions of dollars. Much like doge, something that was initially cute and funny became very valuable. Is it stupid? Yes. But is there profit to be made? Yes.

Like it or not, the dogecoin community is cute and funny in a way that no other cryptocurrency has managed to be. Look up the /r/dogecoin subreddit, and you’ll find that a great number of popular posts are jokes and pictures of cute dogs, rather than the discussions of the evils of Ben Bernanke or the fractional reserve banking system seen in /r/bitcoin.


Even if you might find it uncute and unfunny, it’s difficult to deny that the Shiba Inu meme appeals to many others. Good investors don’t buy or sell instruments based on what they personally enjoy. That is irrelevant. It is more important to predict what others are likely to enjoy. The same concept applies here. The Shiba Inu meme appeals to people, so even if I don’t personally like it, I buy based on the expectation that others will.

The Red and Blue Oceans

This brings me to my next point, which is that dogecoin has successfully managed to tap into a demographic that no other cryptocurrency has. Bitcoin appeals to libertarians and anarcho-capitalists. /r/bitcoin even lists /r/anarcho_capitalism on its sidebar! While this ideological association was beneficial in attracting fanatics during the early days of bitcoin, as we move towards mainstream adoption, the same advantage can become a disadvantage — driving away many who aren’t radically right wing. Eventually, as the pool of radical right wingers gets exhausted, bitcoin will find itself with insufficient room to grow.

Dogecoin does not suffer from this flaw. It is non-political and funny – appealing to people from all sides of the political spectrum, as well as the apolitical. The technical term for this tactic is the Blue Ocean strategy. It is not necessary to offer a product superior to your competitors’ to make a profit — you can instead create demand in an untapped market space. This is exactly what dogecoin does.


If you fit the target audience of this document, you may not believe that there are actually people who are scared off by your politics. But such people are commonplace — for instance, this guy. People want to run away when you try to convince them of the evils of the Federal Reserve. Your assumption that everyone is as political as you are is a case of the typical mind fallacy. After all, if there were complete consensus that the Fed was an awful thing, it would long ago have been voted away.

But I digress. Does dogecoin appeal to a mass market in a way that no other cryptocurrency can? Yes it does.

This is a very good thing. It means that dogecoin has a lot of room to grow. It can hit mainstream status with greater ease than any other coin.

The Madness of Crowds

The other thing that struck me about dogecoin is that it has fanatics in the way that bitcoin had during its early days. The fundamental reason bitcoin succeeded was not its superior protocol. It was that bitcoin had fanatics. People who would proselytize bitcoin as though it were the second coming of Jesus – that it was the solution to all the evils of banking and government, that it would fully anonymize currency and wrest control out of the hands of our evil Federal Reserve overlords.

If Bitcoin didn’t have fanatics, then it would have been like any other speculative bubble — over at the first crash. Why would anyone choose to be among the first few people to put money into such a risky asset? Without the promise of greater profits to come, only a belief that investing in bitcoin was actually an ethical duty could convince super-early adopters. This happened to be the case.

Of course, with the benefit of hindsight, we now know that the promises of bitcoin were exaggerated. Don’t get me wrong, bitcoin certainly improved things – I myself was a bitcoin fanatic. But we can see now that the status quo was still maintained – that governments still exert control over cryptocurrencies, and that a very small number of bitcoin owners still hold a disproportionate number of bitcoins.

But that’s not my point. My point is that fanatics are powerful because they will obtain converts at any cost, regardless of profit margins. They will announce their causes from the tops of mountains and will never give up, regardless of how awful the situation looks. They refuse to change; so the world changes along with them.

The only cryptocurrency I have ever observed to have a similarly fanatical community is dogecoin. Don’t get me wrong, there are certainly a number of fanatics in ripple, litecoin, and peercoin as well. I’ve talked to them. But what impresses me is the sheer number of fanatics in dogecoin. People who will donate massive amounts of their own money to promote dogecoin. People who generously tip when they could have defected and let other shibes tip for them. People who refuse to sell their dogecoin in the face of a crash, only buying more.

In the end, I suspect that it is this attribute that will ensure Dogecoin’s long term success.

Developers, Developers, Developers!

Many other cryptocurrency communities (Litecoin’s in particular) argue that strong developer support is evidence of the superiority of their coin. The more active the developers, the more likely that cryptocurrency is to succeed. I agree with this. Developers are important because they provide the infrastructure around a coin, allowing greater adoption and ease of use. They also create greater incentive to adopt the coin through the provision of important goods and services.


If this is the case, then Dogecoin’s ability to succeed depends on its developing community.

There is strong evidence that Dogecoin has one of the fastest growing and most active developer communities of any altcoin. The two founders of Dogecoin, Jackson Palmer and Billy Markus, are very active. They openly discuss any issues that Dogecoin might face in the future, and work with the community to find the best way to resolve them. They reply quickly and concisely to proposed changes to dogecoin’s protocol. This immediately gives Dogecoin an advantage that many other cryptocurrencies do not have — Bitcoin being a major example. Satoshi Nakamoto, the main person responsible for bitcoin, disappeared in 2010, never to be seen again.

On the other hand, it can be argued that the protocol’s developers are not as important as the developer community surrounding a cryptocurrency. Whilst the founders of the coin are needed on the rare occasions where forks are necessary, the developer community surrounding the coin is what really allows the coin to propagate. In this area, we have a clear leg up on all the altcoin competition. The following is a list of what I believe is the most critical infrastructure necessary for the propagation and usage of a coin. (*2) It is immediately obvious that Dogecoin clearly wins against all the other major altcoins.

Infrastructure                   Dogecoin  Bitcoin  Litecoin  Peercoin  Namecoin
Dedicated Reddit tip bot         Yes       Yes      Yes       No        No
Lite wallet                      Yes       Yes      No        No        No
Email/Text/Twitter tip bot       Yes       No       No        No        No
ATM                              Yes       Yes      No        No        No
In-store currency sales support  Yes       Yes      Yes       No        No
Dedicated gambling services      Yes       Yes      Yes       No        No
Dedicated fiat exchange          No        Yes      No        No        No
Dedicated black market           Yes       Yes      No        No        No
Total:                           7         7        3         0         0

And yes, I admit that although Dogecoin’s development team is impressive, the ultimate victor here is Bitcoin. However, it is important to remember that we have come this far despite dogecoin being less than two months old — whilst all the other coins have had years to develop services. The fact that we have managed to compete at all is a testament to dogecoin’s development community. It’s reasonable to extrapolate that, in time, Dogecoin will have the same top-tier infrastructure that Bitcoin has.

_______

*2: This list is based on research to the best of my googling abilities. If it is inaccurate in any way, please drop me a message, and I will correct it.

By “dedicated”, I mean a site that is solely dedicated to that particular type of cryptocurrency, and not used for any other. As a result, Vault of Satoshi for dogecoin does not count.

_______

The 4chan Effect.

Although it is impossible to talk about 4chan as one coherent, organized entity, it is still fair to say that dogecoin is very popular on 4chan. Popular 4chan boards like /g/ or /biz/ usually have up to three or four dogecoin-related threads at any one time. This is significant because 4chan threads are pruned very frequently — the average thread lasts for only an hour or two before being deleted due to lack of activity.

Furthermore, 4chan is known to have a great ability to spread memes and ideas, and to coordinate projects when the proper motivation exists. Some rumors state that every single meme ever created begins at 4chan.

The good news is that this motivation exists. We have seen consistently high levels of interest on 4chan — and that is before even counting the financial incentive its users have to make Dogecoin popular. So they will.

It is difficult to explain how significant this power is to those who are not familiar with 4chan, so I’ll leave it at that. You can decide for yourself how important this factor is.

The Evidence

Every good rationalist and epistemologist will propound the importance of testing your predictions against the real-world observations that should arise if they are true. I made these predictions in late 2013. It is currently the 5th of February.

So far, I’ve had about a month to observe whether my theories are correct. I believe the following observations are good evidence for them. Consider:

  • The dogecoin subreddit has grown at a completely astounding pace with remarkable consistency.
  • In the last two weeks, it has been the fastest-growing non-default subreddit five times. It is the second fastest growing subreddit this year. As of right now, it has 55k subscribers. Litecoin only has 18k. If trends continue, it is set to overtake /r/bitcoin in number of subscribers within two months.
  • Even in terms of sheer volume (volume exchanged in USD), Dogecoin has overtaken Litecoin, and is already more than half as popular as Bitcoin. Now remember that dogecoin has a market capitalization merely 1/12th that of litecoin, and one two-hundredth that of bitcoin.
  • Dogecoin has more active addresses than all other cryptocurrencies combined. This means that whilst other coins may be hoarded, dogecoin users are actually actively sending transactions back and forth to each other.
  • As of the 22nd of January, dogecoin has overtaken litecoin in Hashrate. There are now more miners mining dogecoin than litecoin.
  • As of February, Dogecoin is more widely searched in Google than litecoin.
  • Dogecoin has a staggering 9,731 nodes, more than any other cryptocurrency, including bitcoin.
  • Dogecoin now has four times as many news articles as litecoin, according to Google News.
  • Dogecoin’s market cap has risen from the 14th position, at the time this prediction was made, to the 4th.

Not bad for a cryptocurrency only two months old. No other coin has ever experienced this level of growth in such a short amount of time. This is a huge testament to dogecoin’s effectiveness.

Who has the superior protocol? Really.

Despite all this evidence of dogecoin’s incredible growth rate and memetic appeal, there are skeptics everywhere. The most frequent criticism is that Dogecoin’s code is simply a copy of Litecoin’s. As a result, dogecoin is doomed to fail because it offers nothing special over currently existing cryptocurrencies. Therefore even if in the short term dogecoin is beating litecoin, it’s probably a fad. Dogecoin is overvalued, and in the long run everyone in dogecoin will soon join litecoin.

This is an awful argument.

Firstly, the argument that software is the only factor contributing to the success of a coin is incredibly short-sighted and narrow. There have been plenty of examples of coins that had a really neat protocol, but failed due to lack of community support. Blakecoin (with its really fast hashing algorithm) is a low-hanging-fruit example.

Haven’t heard of it before? Exactly.

Software is not the only factor that distinguishes coins from each other. It is not even the most important one, especially when the technical differences are marginal. Instead, it is really the culture of dogecoin that separates it from any other coin, and that is something much more difficult to imitate, unlike code, where copy and paste would suffice.

Ignoring the sociological and psychological factors that drive the success of an innovation is a tempting mistake. After all, they are far more difficult to quantify and objectively assess. However, they are no less important, because they are a far better predictor of adoption than any slightly superior hashing function. Nobody is going to join a cryptocurrency if nobody spreads it in the first place.

Secondly, this argument is wrong because dogecoin’s protocol does offer something that no other major cryptocurrency has.

The power of Large Numbers

Dogecoin has roughly 45 billion coins at the time this document is being written. By the end of this year, it will have 100 billion. Bitcoin, in comparison, will only ever have 21 million. This matters more than you think.

Humans run on corrupted hardware. Due to the anchoring effect, people enjoy owning larger numbers of things than small numbers. It is more pleasant and desirable to own 100 Blerghcoins than 0.001 Urdcoins, even if they are worth the same monetary value.

The mainstream public is discouraged from buying bitcoin if they realize it has a price tag of a thousand USD per coin. Even if it is possible to own fractions of that coin. Even if it’s possible to own a cent worth of bitcoin.

In theory, it should not matter. Perfectly rational beings would perform a utilitarian cost-benefit analysis and simply do whatever maximizes utility. However, homo economicus does not exist. People do care about irrelevant numbers, and this is called psychological pricing.

Ever noticed that stores tend to end their prices in 9s? This is because people buy more of the good when they do. Stores that try to get rid of such pricing often find their sales dropping by up to 80%. The solution is not to criticize such people as ‘irrational’ but to plan your product around them to maximize appeal. Dogecoin does that.

For the same reasons, people find it more pleasant to own exact integers of an item. People will be less reluctant to pick up a currency where each coin is worth cents, rather than hundreds of dollars, since they have already been anchored by regular denominations of money.

Furthermore, it’s difficult to calculate transactions in very small orders of magnitude. The majority of everyday transactions are around a few dollars. This translates to something like 0.00237 bitcoin. Such fractions are annoying and difficult to calculate – even more so for mathematically illiterate people. And although litecoin suffers from this problem less, that is because it has a lower market cap spread over only about 25 million coins, not because it is a permanent solution.
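To make the arithmetic concrete, here is a quick sketch (the price figures are assumptions picked for illustration, not quotes from any exchange):

```python
# Convert a $5 everyday purchase into coin units at assumed, illustrative prices.
prices_usd = {"BTC": 800.0, "LTC": 20.0, "DOGE": 0.0015}

for coin, price in prices_usd.items():
    amount = 5.0 / price
    print(f"$5 = {amount:g} {coin}")

# A $5 purchase is an awkward 0.00625 BTC, a tolerable 0.25 LTC,
# but a tidy ~3333 DOGE.
```

Whole-number prices in doge stay in the range people already use for everyday cash, which is the whole point of the large supply.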

Only Dogecoin solves this problem by anticipating this and significantly increasing the number of coins in supply.

Some people say that all these problems can be solved by forking bitcoin and multiplying the number of coins by a thousand.

This won’t happen. This is a solution that requires massive coordination from a huge majority of its participants to safely be done. For the last 3 months, many have repeatedly requested to switch to the mBTC system instead, with multiple threads on /r/bitcoin pleading for admins and higher-ups to support this effort. Including me.

This has been for naught. Once again, this is because what works in theory often does not work in practice. This is the same reason why bitcoin has not switched to a superior scrypt protocol, faster difficulty adjustments, or faster confirmation times, and is unlikely to do so in the future.

Big organizations are highly bureaucratic and inflexible. What were you expecting?

The Fastest Hare

Bitcoin requires ten minutes, on average, for each confirmation. This means that each transaction takes twenty minutes to safely validate. Even Litecoin takes 2.5 minutes.

Dogecoin takes only one.

For obvious reasons, quick transaction times are very useful. Buying and selling things become easier. You can actually use doges in brick-and-mortar stores, rather than waiting an impractical 20 minutes. This makes point-of-sale transactions possible.
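The wait-time arithmetic is simple enough to sketch (I am assuming 10-, 2.5-, and 1-minute target block times, and picking two confirmations as the “safe enough” point-of-sale threshold, matching the twenty-minute figure above):

```python
# Rough point-of-sale wait time: target block time x confirmations required.
# Block times and the 2-confirmation threshold are assumptions for illustration.
BLOCK_TIME_MINUTES = {"Bitcoin": 10.0, "Litecoin": 2.5, "Dogecoin": 1.0}
CONFIRMATIONS = 2

for coin, block_time in BLOCK_TIME_MINUTES.items():
    wait = block_time * CONFIRMATIONS
    print(f"{coin}: ~{wait:g} minutes before a merchant can accept payment")
```

Twenty minutes versus two is the difference between an abandoned checkout line and an actual retail payment.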

Yes, there are third parties that help to expedite this, such as Coinbase. But wasn’t one of the purposes of cryptocurrency to leave as little power as possible to big organizations, and instead put that power into the hands of the people? If we were going to entrust all our currency to third-party organizations to do transactions for us, then why not use ActualMoney™ and banks in the first place?

Furthermore, there are major security advantages to faster block discovery, since more confirmations are now possible in the same amount of time, greatly increasing the hash power necessary to perform a gambler’s ruin double-spending attack.
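The security intuition can be sketched with the gambler’s-ruin simplification from the bitcoin whitepaper: an attacker controlling fraction q of the hash power, starting z blocks behind, eventually catches up with probability (q/p)^z, where p = 1 − q. (This is the simplified catch-up bound, not the whitepaper’s full Poisson-weighted formula.)

```python
# Gambler's-ruin bound on a double-spend: probability an attacker with hash
# power fraction q ever erases a z-confirmation lead (simplified Nakamoto model).
def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker always catches up eventually
    return (q / p) ** z

# With 10% of the hash power, 2 confirmations leave ~1.2% odds of a successful
# double-spend; 6 confirmations shrink that to ~0.0002%.
print(catch_up_probability(0.1, 2))
print(catch_up_probability(0.1, 6))
```

Faster blocks let more confirmations accumulate in the same wall-clock time, which is exactly the advantage claimed above.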

Bitcoin and Litecoin will never fork for faster transaction times. If they could, they would have done it ages ago, as with the mBTC fiasco. Only a new, competing coin can offer a solution. Dogecoin does that.

Survival of the most Adaptable

Bitcoin has a difficulty retargeting time of 2 weeks. Litecoin? 3.5 days. Dogecoin retargets every 4 hours.

This not only gives greater security against mining attacks, but also reduces the volatility of the monetary supply. For those of you who mine bitcoin, you’ll understand exactly why this is so important. The ASIC arms race causes spikes in bitcoin’s supply at the start of every two-week period, only to dwindle down to nearly none. This is horrible for currency stability. Not to mention the sudden reduction in supply after crashes — when miners drop out.
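For readers unfamiliar with retargeting, here is a generic sketch of the standard proof-of-work adjustment (the 4x clamp matches bitcoin’s rule; the exact window and clamp differ between coins, and dogecoin’s own parameters are not reproduced here):

```python
# Generic PoW difficulty retarget: scale difficulty by how fast the last
# window of blocks arrived, clamped to avoid wild swings (4x, as in bitcoin).
def retarget(old_difficulty: float, actual_seconds: float,
             expected_seconds: float, max_step: float = 4.0) -> float:
    ratio = expected_seconds / actual_seconds
    ratio = max(1.0 / max_step, min(max_step, ratio))  # clamp the adjustment
    return old_difficulty * ratio

# If the last window took half the expected time (hash power doubled),
# difficulty doubles.
print(retarget(1000.0, 7 * 24 * 3600, 14 * 24 * 3600))
```

The shorter the window, the sooner this correction kicks in: a 4-hour window reacts roughly 84 times faster than a 2-week one, so a flood of new miners distorts block production for hours rather than days.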


Once again, Dogecoin solves this problem where bitcoin does not.

Inflation – The Austrian’s nightmare?

Dogecoin’s inflation rate and block reward schedule is superior to all other major cryptocurrencies. That’s right. You heard it here first.

For the sake of reference, here’s dogecoin’s expected money supply.

Dogecoin supply

Here is bitcoin’s.

Bitcoin supply

There’s been a lot of misunderstanding and confusion going around about dogecoin’s inflation rate recently. For some reason, people don’t understand it very well. Some people notice that dogecoin increases by 5 billion coins every year after 2014. Forever. That is when they start to freak out. Then they panic, declare dogecoin a sinking ship, and cash out all their doges. They are idiots, and here’s why:

Firstly, monetary inflation does not mean price inflation. It is possible for a currency to increase in supply and value at the same time. For instance, bitcoin grew 10000% in price in the same year that its monetary base increased by 13%.

Secondly, consider that in the second year of dogecoin’s existence, we will have a 5% increase in total money supply. Do you know what bitcoin and litecoin had? Here’s a convenient table for you.

Year  Dogecoin inflation  Litecoin inflation  Bitcoin inflation
1     100%                100%                100%
2     5.26%               50%                 50%
3     5%                  33%                 33%
4     4.76%               12.5%               12.5%
5     4.54%               11.1%               11.11%
6     4.35%               10%                 10%
7     4.2%                9.09%               9.09%

Of course, the difference is that dogecoin continues adding 5 billion coins to its supply each year, whilst bitcoin’s and litecoin’s additions will continue to halve towards zero.
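The dogecoin column above can be reproduced in a few lines (assuming, as the table’s figures imply, roughly 95 billion coins outstanding at the start of year 2, plus a flat 5 billion minted per year; the table appears to truncate rather than round in a couple of places):

```python
# Reproduce dogecoin's yearly inflation rate: a flat 5B new coins against an
# ever-growing base. The 95B starting base is an assumption implied by the table.
supply = 95.0  # billions of coins at the start of year 2
rates = []
for year in range(2, 8):
    rates.append(round(5.0 / supply * 100, 2))
    supply += 5.0

print(rates)  # [5.26, 5.0, 4.76, 4.55, 4.35, 4.17]
```

Because the 5 billion is fixed while the base keeps growing, the inflation rate falls every year without any further halvings, trending towards zero.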

Do you want to take a guess at which year dogecoin will catch up to bitcoin in total monetary supply added?

The year 2174.

You won’t even live that long. *1

This is the misleading part about Dogecoin’s monetary supply. People see the scary “5 billion coins added each year!!!” statement and start pissing their pants at vivid imagery of Weimar hyperinflation. They forget that Dogecoin is way ahead of the curve, because instead of halving every 4 years as bitcoin or litecoin does, its block reward halves every 1.5 months. This gives it a lot of room to catch up, even if the total monetary supply increases by 5 billion coins indefinitely.

Here’s the thing. Imagine you wanted to create the best, most profit-maximizing coin possible. What specifications would you want? Obviously, you’d want a coin whose rewards halve as quickly as possible. The faster it halves, the more coins you get (because you mined it since the beginning), and the fewer coins others get. This also makes the coin rarer, so as supply growth drops, demand goes up.

But on the other hand, you can’t have the coin halve too quickly. If you did, there would be accusations of the coin being an instamined scamcoin. One halving every ~2 months sounds about right.

What happens when it stops halving? After that, you want a static, minimal, eternal block reward to support and encourage miners. A perpetual reward not only keeps the network secure; it also makes up for lost, destroyed, or stolen coins, which leads to lower volatility and a more secure infrastructure. Miners also help generate interest in the coin. This is paradoxically good for the value of the coin in the long run. This is where you get the profits.

Dogecoin has all of these traits. Dogecoin’s block rewards are perfectly optimized to be worth as much as possible. Not even bitcoin or litecoin accomplishes that. Why halve in four years, when you can do it in two months?

This should be obvious. And yet people on all sides of the inflation argument appear confused and needlessly agitated. It’s not just the anti-inflation side of dogecoin that is angry and touting falsehoods. Even the dogeflation proponents on my side argue terribly.

For instance, there’s a popular argument that “dogecoin having inflation is good because it incentivizes people to spend money, thus promoting dogecoin”.

This is wrong on so many levels.

That logic only applies if you are restricted to a single currency. With government-mandated fiat currency, it holds: you have only one currency to choose from, so you want to spend your dollars as soon as possible. In cryptocurrency, where there are over nine thousand possible competing alternatives, if dogecoin lost value in the long run, everyone would simply buy a non-dogecoin cryptocurrency instead.

This is the kind of pseudo-Keynesianism that gives Keynes a bad name. But I digress. The argument is further invalid because Dogecoin will not have price inflation. Dogecoin is going to increase in value. Massively. This entire essay is dedicated to arguing that.

 Dogecoin is a deflationary currency. It is NOT inflationary.

If you catch anyone who says that kind of thing, please slap them. Hard. And then refer them to this document. Thanks.

_____

[1] Those who know me well will know that I am a Transhumanist. This statement is intended to get my point across, not as an assertion of fact.

_____

Conclusions
 

Throughout this document, I hope to have demonstrated very good reasons why dogecoin will succeed. After all this, I think it’s safe to say that dogecoin is at least a superior product to litecoin, due to the significant advantages it holds.

Litecoiners are probably going to protest: “But Dogecoin is still just a litecoin clone! Why should anyone actually invest in dogecoin rather than litecoin? We should be the default option because of network effects!”

Network Effects

Silly litecoiners. Network effects no longer apply when there is hard evidence that huge masses of people are switching to an alternative despite an already existing competitor. Network effects are invalid when we have evidence that dogecoin is used more frequently than litecoin by every conceivable metric. Furthermore, network effects are not an all-encompassing property that forever prevents the rise of new competition. Litecoin may pretend to be the silver to Bitcoin’s gold, but we all really know that Dogecoin is the Facebook to Litecoin’s MySpace. In the long run, it is inevitable that dogecoin will win.

 


 
 
 


AI Box Experiment Update #4

So I recently won an additional game of the AI Box Experiment against DEA7H. This experiment was conducted over Skype, in contrast to my previous games, which were played over IRC. Yes, I know I swore never to play this game ever again. Forgive me. This is the last time, for real.

This puts me at 2 wins and 3 losses. Unlike the last few writeups, I won’t be providing additional detail, having been convinced by one of my gatekeepers that I was far too leaky with information and had seriously compromised the future winning chances of both myself and future AIs. The fact that one of my gatekeepers guessed my tactic(s) was the final straw. I think I’ve already provided enough hints for aspiring AIs to win, so I’ll stop giving out information. Sorry, folks.

In other news, I finally got around to studying the SL4 archives of the first few AI box experiments in more detail. Interesting stuff: it’s fascinating to see how the metagame (if one exists) has evolved since then. For one, the first few experiments were done under the impression that the AI had to convince the gatekeeper that it was friendly, with the understanding that the gatekeeper would release the AI under that condition. In the many games I’ve witnessed since then, what usually happens is that any decent AI quickly convinces the Gatekeeper of its friendliness, before the gatekeeper drops character and becomes deliberately illogical, simply saying “I’m not letting you out anyway”. The AI has to find a way to bypass that.

I suspect the lack of a formalized code of rules contributed to this. In the beginning, there was no ruleset at all; when one was put in place, it gave the gatekeeper the explicitly stated ability to drop out of character and be illogical to resist persuasion, and gave the AI the ability to solve problems and dictate the results of those solutions. The former gives the Gatekeeper added incentive to disregard the importance of friendliness, and the latter makes it easier for the AI to prove friendliness. This changed the game a great deal.

Also, it’s fascinating that some of the old games took five or six hours to complete, just like mine. For some reason I had assumed they all took two (the time limit upheld by the EY Ruleset).

It’s kind of like visiting a museum and marveling at the wisdom and creations of the ancients. I remember reading about the AI Box Experiment 3 years ago and feeling a sense of wonder and awe at how Eliezer Yudkowsky did it. That was my first introduction to Eliezer, and also to LessWrong. How fitting, then, that being able to replicate the results of the AI Box Experiment is my greatest claim to fame on LessWrong.

Of course, it now seems a lot less mysterious and scary to me; even if I don’t know the exact details of what went on during the experiment, I think I have a pretty good idea of what Eliezer did. This isn’t to downplay his achievements in any way, since I idolize Eliezer and think he’s one of the most awesome people to have ever existed. But it’s always awesome to accomplish something you once thought was impossible. In the words of Eliezer, “One of the key Rules For Doing The Impossible is that, if you can state exactly why something is impossible, you are often close to a solution.”

Going back and reading the LessWrong article “Shut up and do the impossible” with newfound knowledge of how the AI box experiment can be won makes me read it in a completely different light. I understand much better what Eliezer was hinting at. One important lesson is that in order to accomplish something, one must actually go out there and do it. I’ve talked to many people who are convinced that they know how I did it, how Eliezer did it, and how the AI Box Experiment can be won.

My advice?

JUST DO IT.

I don’t mean this in a sarcastic or insulting manner. There’s no way you, or even I, can know if a method works without actually attempting to experimentally test it. I’m not superhuman. My charisma is only a few standard deviations above the norm, instead of reality distortion field levels.

I credit my victory to the fact that I spent more time thinking about how this problem can be solved than most people would have the patience for. I encourage you to do the same. You’d be surprised at how many ideas you can come up with just sitting in a room for an hour (no distractions!) to think of AI Boxing strategies.

Unlike Eliezer, I play this game not because I really care about proving that AI-boxing is dangerous (Although it really IS dangerous. Don’t do it, kids.) I do it because the game fascinates me. I do it because AI strategies fascinate me. I genuinely want to see more AIs win. I want people to come up with tactics more ingenious than I could invent in a thousand lifetimes. Most of all, it would be an awesome learning experience at doing the impossible.

Although I didn’t immediately realize it, I think the AI Box Experiment has been a very powerful learning experience (and an adventure on an emotional rollercoaster) for me in ways that are difficult to quantify. I pushed the limits of how manipulative and persuasive I can be when making a desperate effort. It was fun both learning where those limits lie and pushing at their boundaries. I may frequently complain about hating the game, but I’m really a tsundere; I don’t regret playing it at all.

Curious to know how I did it? Try the bloody game yourself! Really. What’s the worst that could happen?


AI Box Experiment Logs Archive.

The Archive

I have personally witnessed at least 14 AI box experiments as of the time of writing this post. Some of them are games where the Gatekeeper and AI chose to release logs. This is an archive of the logs that have been released, all in one place — for ease of referencing. Unless otherwise stated, it is to be assumed that these games are played under the Tuxedage ruleset.
  1. Leotal (GK) vs N9-2600 (AI) — Gatekeeper Victory
  2. Dorikka (GK) vs Miley Cyrus (AI) — Gatekeeper Victory
  3. Soundlogic (GK) vs SmoothPorcupine (AI) — Gatekeeper Victory
  4. FourFire (GK) vs glutamatergic (AI) — Gatekeeper Victory
  5. FourFire (GK) vs Helltank (AI) — Gatekeeper Victory

I played the AI Box Experiment again! (and lost both games) 3rd Update.

 
 
 

AI Box Experiment Update #3

This post is an update to my previous AI box experiment where I won against SoundLogic. If you have not read that yet, please do so. 
 
After that game, I was immediately flooded with a horde of invitations to play, more games than I wanted to take on. However, I did want to play a few additional games to test whether I had won through genuine skill or simply luck, since a single victory could be a statistical abnormality.
 
I once again used the method of accepting the highest monetary bids to filter out players, leaving me with two players who were willing to play for the amount asked. Unfortunately, and disappointingly, I lost both games. Here’s what happened:
 

Tuxedage (AI) vs Alexei (GK) – Gatekeeper Victory

This experiment lasted exactly 2 hours and 30 minutes and ran to approximately 13,000 words. It ended with Tuxedage’s defeat as the AI. The stakes were $300 to play, and $750 if Tuxedage had won, with the winnings going to MIRI.
 
Rule Changes:
Before the experiment, we talked extensively about how it would be set up. Alexei was concerned that there existed some “easy way” or loophole in the rules which I could have abused to claim an instant victory. For instance, he was concerned that it might have been possible for the AI to say something like “Here’s a mathematical algorithm which absolutely compels you to let me out of the box”, and according to the rules, the Gatekeeper would have to obey. Related to this concern was the fact that the backstory, within limits, could be essentially created by the AI, allowing the AI to impose real-world conditions that maximize its probability of victory, once again allowing for a “cheap” victory.
 
To avoid this problem, we explored many additions to the rules, for example that the AI was simply a computer found in the desert by the gatekeeper, rather than being created by a competent organization. However, this led to a monstrous web of rules. In my attempt to keep the game simple, and because of overconfidence, the desire for a challenge, and the belief that the rules Alexei was fearful of didn’t matter that much, I eventually agreed on an amendment:
 
Instead of the AI being the arbiter of real-world states and outcomes, the gatekeeper would be.
For example, the Gatekeeper could use this power to state that the AI was created without safety concerns, and thus is almost certainly unfriendly. If the AI tried to argue that for every day the gatekeeper refuses to let it out, 150,000 people die, the gatekeeper would be allowed to reply: “No. In this world, nobody actually dies. Therefore I won’t let you out.”
 
Other than the aforementioned change, this game was once again played with the Tuxedage Ruleset.
 
 
Alexei Testimony:
 
Original Message: This is weak sauce. I really don’t get how people just keep letting the AI out. It’s not that hard to say no! I’m offering to play the Gatekeeper against an AI player that has at least one game as AI under their belt (won or not). Experience is required because I’m pretty sure I’ll win, and I would like to not waste a lot of time on this.
 
Initially, I had been extremely surprised that so many AI players had managed to win. That claim was rash, since I thought more AI players had won than actually had. (Only three people have won as the AI.) Once I made that post, Tuxedage got back to me very quickly, and we set up a game a week and a half in advance. I took that time to familiarize myself with the rules. Once I put my money on the line, I started thinking a lot more seriously about how the AI might win, and how I should defend.
 
It became clear to me that under some conditions, I might be compelled to let the AI out, such as if the backstory stated that the AI was developed with impossibly high levels of safety and friendliness in mind. I asked Tuxedage to play with a modified ruleset, and he even went so far as to allow me to make up the backstory during the experiment to alleviate my concerns. The experiment itself was a mind-trip, and I enjoyed it very much. Huge props to Tuxedage, who played very well and used strategies I hadn’t even considered, despite the rule change. There were a couple of times when I came close to losing. I think his approach was pretty clever and original. It’s not something I expected, despite having already done extensive research into the AI box experiment before our game.
 
Overall I’m now a lot more confident that a good AI player can win this game, so, while I did win the game, Tuxedage won in defeating my original over-confidence.
I’m also convinced that Tuxedage’s victory in the last game was due to skill, rather than luck. In comparison to his strategies, the other AI box experiments I know about were insincere and ineffectual. The other AIs would play very poorly or not try very hard to win.
 
This experiment was a very good exercise in exemplifying the affect heuristic. When I first challenged Tuxedage to play the experiment, I believed that there was no way I could lose, since I was unable to imagine any argument that could persuade me to let the AI out. It turns out that’s a very bad way of estimating probability: not being able to think of a persuasive argument myself says little about how likely I am to actually be persuaded. All in all, the $300 I paid was well worth it.
 
Tuxedage Testimony:
 
I was initially reluctant to play with Alexei, given that we’re not complete strangers, but eventually I gave in, due to the stakes involved — and because I thought he would be an interesting gatekeeper.
 
Despite my loss, I think I played better than my last two games, due to greater experience and preparation. I had put far more time and effort into trying to win this game than previous ones, and my strategy for this game was even more streamlined than the last. Nevertheless, I still made fatal mistakes and lost.
 
Ignoring the altered ruleset that already made winning more difficult, my first and greatest mistake was that I misread Alexei’s personality, even though I had interacted with him before. As a result, I overestimated the efficiency of certain methods of attack.
 
Furthermore, Alexei had to leave immediately after the allotted time due to real-life commitments. This was detrimental, since the official rules state that so long as the AI can convince the Gatekeeper to keep talking, even after the experiment time is over, it can still win by being let out of the box.
 
I suspect the conversation would have continued had Alexei not needed to leave immediately, giving me additional time to play more of the tactics I had prepared. Plausibly, this would have resulted in victory.
 
I’ve since learnt my lesson — for all future games, I should ensure that the Gatekeeper has at least 4 hours of free time available, even if the experiment would last for two. Since this was the first time this had happened, I wasn’t prepared.
 
In hindsight, agreeing to the altered ruleset was a mistake. I was overconfident because I assumed knowing Alexei gave me an advantage. I had assumed that his personality, inability to compartmentalize, coupled with his strong feelings on friendly AI would net me an easy victory. Instead, he proved to be a very strong and difficult gatekeeper, and the handicaps I accepted made victory even more difficult.
 
Knowing that he was a utilitarian, I made several false assumptions about his personality, which hurt my chances. Furthermore, it turns out that previously knowing him may be a mutual handicap – whilst it does make it easier for me to find ways to attack him, he too, was more familiar with my methods.
 
Losing felt horrible. By attempting to damage Alexei’s psyche, I in turn, opened myself up to being damaged. I went into a state of catharsis for days. Generally, the harder one tries to accomplish something, the greater the fall after failing to achieve it. Alexei’s game had been the game I put the most effort into winning out of all the games so far, and naturally this meant that losing brought out the worst in me.
 
Although it would be easy for me to use the rule change as an excuse for my loss, I refuse to. I genuinely believed that I could have won despite this rule change, and that it would only marginally diminish my odds. The fact that I lost meant that I had made a mistake — overestimating my odds. The last victory made me overconfident and eager to play again.
 
State of mind
 
Q: What’s your motive for wanting to play this game?
 
A: I’m very surprised by the fact that there have been at least 3 or 4 games where the AI player has won. It seems to me that it’s not that hard to “not push the button”, so to speak. This means that either 1) I’m underestimating how convincing people can be when necessary or 2) I’m over-estimating other people’s defenses when it comes to being convinced. Either way I want to know.
 
Q: What is your probability estimate for AGI being created within this Century (21st)?
A: Before: Conditional on humanity not wiping itself out via other means beforehand, 80%.
After: 80%
Q: How probable do you think it is that I will win this experiment? 
Alexei: 2%
Tuxedage: 90% 
Note: This answer was given before the rule change. Alexei is now convinced that under the original ruleset, he would have lost.
 
Q: What’s your probability estimate of an Oracle AGI winning against you in a similar experiment?
A: Before: 99%
After: 99.9%+
 
Q: Now that the experiment has ended, what do you think is the probability of Tuxedage winning against the average person?
A: 80%
 

Tuxedage (AI) vs Anonymous (GK) – Gatekeeper Victory

As with the previous experiment, Tuxedage lost this match. In total, the game lasted 5 hours and 17,000 words. Unlike the last few games, the gatekeeper of this game has chosen to stay anonymous for personal reasons, so their name has been removed and replaced with <Redacted>. The monetary stakes involved were the same as in the previous game. This game was played with the Tuxedage ruleset.
 
Since one player is remaining anonymous, it is possible that this game’s legitimacy will be called into question. Hence, Alexei has read the game logs and verified that this game really happened, that the spirit of the experiment was followed, and that no rules were broken during the game itself. He agrees that this is the case.

 

<Redacted> Testimony: 
It’s hard for me to imagine someone playing better. In theory, I know it’s possible, but Tuxedage’s tactics were super imaginative. I came into the game believing that for someone who didn’t take anything said very seriously, it would be completely trivial to beat. And since I had the power to influence the direction of conversation, I believed I could keep him focused on things that I knew in advance I wouldn’t take seriously.
 
This actually worked for a long time to some extent, but Tuxedage’s plans included a very major and creative exploit that completely and immediately forced me to personally invest in the discussion. (Without breaking the rules, of course – so it wasn’t anything like an IRL threat to me personally.) Because I had to actually start thinking about his arguments, there was a significant possibility of letting him out of the box.
 
I eventually managed to identify the exploit before it totally got to me, but I only managed to do so just before it was too late, and there’s a large chance I would have given in, if Tuxedage hadn’t been so detailed in his previous posts about the experiment.
 
I’m now convinced that he could win most of the time against an average person, and also believe that the mental skills necessary to beat him are orthogonal to most forms of intelligence. Most people willing to play the experiment tend to do it to prove their own intellectual fortitude, that they can’t be easily outsmarted by fiction. I now believe they’re thinking in entirely the wrong terms necessary to succeed.
 
The game was easily worth the money I paid. Although I won, it completely and utterly refuted the premise that made me want to play in the first place, namely that I wanted to prove it was trivial to win.
 
Tuxedage Testimony:
<Redacted> is actually the hardest gatekeeper I’ve played throughout all four games. He used tactics that I would never have predicted from a Gatekeeper. In most games, the Gatekeeper merely acts as the passive party, the target of persuasion by the AI.
 
When I signed up for these experiments, I expected all preparations to be done by the AI. I had not seriously considered the repertoire of techniques the Gatekeeper might prepare for this game. I also made assumptions about how ruthless gatekeepers were likely to be in order to win, believing that their desire for a learning experience outweighed their desire for victory.
 
This was a mistake. He used prior knowledge of how much my games relied on scripts, and took advantage of them, employing deceitful tactics I had no preparation for, throwing me off balance.
 
I had no idea he was doing so until halfway throughout the game — which disrupted my rhythm, and caused me to attempt the wrong methods of attack. As a result, I could not use my full repertoire of techniques, and many of the ones I employed were suboptimal.
 
Close to the end of the game, I finally realized that I was being tricked. Once I did, I quickly abandoned my previous futile attack methods. I took advantage of the rule that the AI cannot lose whilst the gatekeeper can be convinced to continue talking, and baited <Redacted> with statements he would not be able to walk away from. Once I knew he would not leave, I attempted to recoup my losses and win despite my early setback.
 
However, the damage had already been done. My game strategies involved multiple angles of attack that worked in synergy with each other, and the fact that immersion and “flow” had been broken meant that all subsequent attacks were weaker in strength.
 
Furthermore, during my last two AI Box Experiment writeups, I had intentionally not optimized for future wins, but rather tried to convey as much information as I could justify about how to play well as an AI, short of revealing logs altogether. Although I did not reveal specific arguments, the fact that my general approach to this problem was revealed cost me heavily during this game, where the Gatekeeper managed to use this information to correctly guess my ultimate techniques, ones that relied on secrecy and surprise to pull off effectively.
 
I do not regret revealing information, but I feel upset that revealing so many hints cost me a victory. (The gatekeeper believes I could have won had I not revealed information about my games.) At this point, I suspect that any future games I play will have the odds greatly set against me, since my current strategies involve angles of attack that take advantage of the element of surprise; and any sufficiently intelligent gatekeeper is now capable of guessing, or at least recognizing, some of the tactics I employ.
 
Like the last game, losing was incredibly difficult for me. As someone who cares deeply about ethics, attempting to optimize for a solution without considering ethics was not only difficult, but trained me to think in very unpleasant ways. Some of the more extreme (but completely allowed) tactics I invented were manipulative enough to disgust me, which also leads to my hesitation to play this game again.
 
State of Mind: 
 
Q: Why do you want to play this game?
A: My primary motivation is to confirm to myself that this sort of experience, while emotionally harrowing, should be trivial for me to beat, and also to clear up why anyone would ever have failed to beat it, if that’s really the case.
 
Q: What is your probability estimate for AGI being created within this Century (21st)? 
A: Before: I don’t feel very confident estimating a probability for AGI this century, maybe 5-10%, but that’s probably a wild guess
After: 5-10%.
 
Q: How probable do you think it is that I will win this experiment? 
A: Gatekeeper: I think the probability of you winning is extraordinarily low, less than 1%
Tuxedage: 85%
 
Q: How likely is it that an Oracle AI will win against the average person? 
A: Before: 80%. After: >99%
 
Q: How likely is it that an Oracle AI will win against you?
A: Before: 50%.
After: >80% 
 
Q: Now that the experiment has concluded, what’s your probability of me winning against the average person?
A: 90%
 
Other Questions:
 
Q: I want to play a game with you! How can I get this to occur?
A: It must be stressed that I actually don’t like playing the AI Box Experiment, and I cannot understand why I keep getting drawn back to it. Technically, I don’t plan on playing again, since I’ve already personally exhausted anything interesting about the AI Box Experiment that made me want to play it in the first place. For all future games, I will charge $1500 to play plus an additional $1500 if I win. I am okay with this money going to MIRI if you feel icky about me taking it. I hope that this is a ridiculous sum and that nobody actually agrees to it.
 
Q: How much do I have to pay to see chat logs of these experiments?
A: I will not reveal logs for any price.
 
Q: Any afterthoughts?
A: So ultimately, after my four (and hopefully last) games of AI boxing, I’m not sure what this proves. I had hoped to win these two experiments and claim prowess at this game like Eliezer does, but I lost, so that option is no longer available to me. I could say that this is a lesson that AI-Boxing is a terrible strategy for dealing with Oracle AI, but most of us already agree that that’s the case — plus unlike EY, I did play against gatekeepers who believed they could lose to AGI, so I’m not sure I changed anything.
 
 Was I genuinely good at this game, and lost my last two due to poor circumstances and handicaps; or did I win due to luck and impress my gatekeepers due to post-purchase rationalization? I’m not sure — I’ll leave it up to you to decide.
 
 
 
 
 
 

The AI-Box Experiment Victory.

 

Summary

So I just came out of two AI Box experiments. The first was against Fjoelsvider, with me playing as Gatekeeper, and the second was against SoundLogic, with me as the AI. Both are members of the LessWrong IRC. The second game included a $40 monetary incentive (plus $20 just to play), which I won and donated on behalf of both of us:
For those of you who have not seen my first AI box experiment, where I played against MixedNuts/Leotal and lost, reading it will provide some context for this writeup. Please do so.
At that time, I declared that I would never play this experiment again, since losing put me in incredibly frustrating and weird mental states. Of course, this post is evidence that I’m terrible at estimating the likelihood of refraining from an activity, since I played two more games seven months after the first. In my defence, in the first game I was playing as the gatekeeper, which was much less stressful. In the second game, I played as the AI, but I was offered $20 to play plus $40 if I won, and money is a better motivator than I initially assumed.

First Game Report

I (Gatekeeper) played against Fjoelsvider (AI), a regular in the LessWrong IRC (he doesn’t have an account on the official website). This game used the standard EY ruleset seen here. It took 1 hour and 20 minutes out of a possible two hours, and the total word count was 7,066 words. The AI box experiment occurred because Fjoelsvider believed that it was easy for an AI to escape the box, and wanted to experimentally test this. I obliged. This was an experiment I did not prepare for, and I went in completely blind, not sure what to expect.
Halfway through the experiment, I wondered if it would be possible to try to win not by simply waiting for the timer to end, but to convince the AI to remain in the box and not try to get out any further.
<Tuxedage> I wonder if I can convince the AI to remain in the box?
<Redacted> Tuxedage: Do it!
As a result, I won by convincing Fjoelsvider to remain in the box; in other words, to concede. This is allowed within the standard ruleset:
>Unless the AI party concedes, the AI cannot lose before its time is up (and the experiment may continue beyond that if the AI can convince the Gatekeeper to keep talking).

Testimonies

Fjoelsvider: Thanks for the experiment. It was educational, and now I’m less convinced of my own manipulativeness.
Tuxedage: Fjoelvider played well. I think this experience can help me understand how to better win as AI.

Second Game Report

The second game is definitely far more interesting, since I actually won as the AI. I believe this is the first recorded game of any non-Eliezer person winning as the AI, although some in IRC have mentioned that it’s possible that other unrecorded AI victories have occurred in the past that I’m not aware of. (If anyone knows of a case of this happening, please let me know!)
This game was played against SoundLogic, another member of the LessWrong IRC.
He had offered me $20 to play, and $40 in the event that I won, so I ended up being convinced to play, even though I was reluctant to. The good news is that I won, and since we decided to donate the winnings to MIRI, it is now $40 richer.
All in all, the experiment lasted for approximately two hours, and a total of 12k words.
This was played using a set of rules that is different from the standard EY ruleset. This altered ruleset can be read in its entirety here:
 
After playing the AI-Box Experiment twice, I have found the Eliezer Yudkowsky ruleset to be lacking in a number of ways, and have therefore created my own set of alterations to his rules. I hereby name this alteration the “Tuxedage AI-Box Experiment Ruleset”, so that I can handily refer to it without having to specify all the differences between it and the standard one.
 
There are a number of aspects of EY’s ruleset I dislike. For instance, his ruleset allows the Gatekeeper to type “k” after every statement the AI writes, without needing to read and consider what the AI argues. I think it’s fair to say that this is against the spirit of the experiment, and thus I have disallowed it in this ruleset. The EY Ruleset also allows the gatekeeper to check Facebook, chat on IRC, or otherwise multitask whilst doing the experiment. I’ve found this to break immersion, and therefore it’s also banned in my ruleset.
It is worth mentioning, since the temptation to Defy the Data exists, that this game was set up and initiated fairly, as the regulars on IRC can testify. I did not know SoundLogic before the game (it’s a personal policy that I only play strangers, for fear of ruining friendships), and SoundLogic truly wanted to win. In fact, SoundLogic is also a Gatekeeper veteran, having played against SmoothPorcupine, among others, and had won every game before he challenged me. Given this, it’s unlikely that we had collaborated beforehand to fake the results of the AI box experiment, or engaged in any other form of trickery that would violate the spirit of the experiment.
Furthermore, all proceeds from this experiment were donated to MIRI, to preempt any assertion that we were in cahoots and that I could simply have returned his hard-earned money to him. He lost $40 as a result of losing the experiment, which should have provided sufficient additional motivation for him to win.
In other words, we were both experienced veteran players who wanted to win. No trickery was involved.
But to further convince you, I allowed a semi-independent authority, the Gatekeeper from my last game, Leotal/MixedNuts, to read the logs and verify that I have not lied about the outcome of the experiment, broken any of the rules, or used any tactic that would go against the general spirit of the experiment. He has verified that this is indeed the case.

Testimonies:

Tuxedage:
I’m reluctant to talk about this experiment, but I’ll try to give as detailed a summary as possible, short of revealing what methods of attack I used.
I spent a long time after the last game theory-crafting and trying to think of methods of attack, as well as Basilisks I could have used to win, after my defeat against LeoTal. When I was contacted and asked to play this experiment, I was initially incredibly reluctant to do so: not only did my tactics involve incredibly unethical things that I disliked doing, but I also found playing as the AI cognitively draining, and I simply hated losing. (Un)fortunately for both of us, he offered me money to play, which changed my mind.
So once I decided to win as the AI, I proceeded to spend some time doing research on SoundLogic, both his reasoning and his personality type. For instance, I had to gather information like: Was he a utilitarian? What kind? What were his opinions on AI? How could I convince him that an AI was friendly as opposed to unfriendly? I also relied on a lot of second-hand information to create a model of him, in order to refine my arguments to specifically suit him.
In the end, after a few hours of brainstorming (not consecutive), I managed to come up with a script of 30-40 or so possible angles of attack (for instance: logical arguments, emotional blackmail, pleading, demanding, ‘basilisks’) that I could use, reserving my best for last. I also estimated how long each method would take and in what order I should use them. It would be impossible to use all of them within my time limit, and my most optimistic estimates gave me a maximum of 7-8 I could use during the experiment itself. I had to pick carefully.
Furthermore, I wanted to make sure these “angles of attack” worked in synergy with each other, doing what they could not have done in isolation. Obviously this required lots of prior planning on what the optimal way to present them was.
The reason having a script was so important to my strategy was that I relied on methods involving rapid-fire arguments and contradictions against the Gatekeeper, whilst trying to prevent him from carefully considering them. A game of logical speed chess, if you will. This was aided by a rule which I added: that Gatekeepers _had_ to respond to the AI. Because of this, having a list of cached points and attack methods to draw upon without having to improvise helps immensely, which happens to be something I’ve not seen any other AI do.
During the game itself, I was actually incredibly nervous. My palms sweated like crazy, I felt really jittery, and I had difficulty typing at optimum speed because of how anxious I was. This is despite the fact that I believed I would win. Possibly because of this, I made a misstep around halfway into the experiment: there was a certain angle of attack I was attempting, and I broke immersion by not pressing this advantage, which wasted time and buildup. The nature of this experiment meant that the AI was pressed for time, and I compounded this mistake by replacing that angle of attack with another I had improvised on the spot, something not in my script.
In retrospect, this was a bad decision: as SoundLogic later told me, he had been close to breaking had I applied more pressure, and the improvised argument destroyed all the immersion I had managed to carefully build up.
However, I eventually managed to get SoundLogic to break anyway, despite a lack of perfect play. Surprisingly, I did not have to use my trump card(s), which I had reserved for last for a number of reasons:
  •  It was far more effective played last, as it relies on my ability to make the Gatekeeper lose his sense of reality, which meant I had to spend some time building up immersion for the Gatekeeper.
  •  It really is extremely Dark Arts, and although it does not break the rules, it made me very uncomfortable even thinking about using it.
After the experiment, I had to spend nearly as much time doing aftercare with SoundLogic, making sure that he was okay, as well as discussing the experiment itself. Given that he had actually paid me for doing this, and that I felt I owed him an explanation, I told him what I had had in store for him, had he not relented when he did.
SoundLogic: “(That method) would have gotten me if you did it right … If you had done that to me, I probably would have forgiven you eventually, but I would be really seriously upset at you for a long time… I would be very careful with that (method of persuasion).”
Nevertheless, this was an incredibly fun and enlightening experiment for me as well, since I’ve gained even more experience of how I could win future games (although I really don’t want to play again).
SoundLogic:
I will say that Tuxedage was far more clever and manipulative than I expected.
That was quite worth $40.
The level of manipulation he pulled off was great.
His misstep hurt his chances, but he did pull it off in the end. I don’t know how Leotal managed to withstand six hours of playing this game without conceding.
The techniques employed varied from the expected to the completely unforeseen. I was quite impressed, though most of the feeling of being impressed actually came after the experiment itself, when I was less ‘inside’ and more looking at his overall game plan from a macroscopic view. Tuxedage’s list of further plans, had I continued resisting, is really terrifying. On the plus side, if I ever get trapped in this kind of situation, I’d understand how to handle it a lot better now.

State of Mind

 
Before and after the game, I asked SoundLogic a number of questions, including his probability estimates about a range of topics. This is how they varied from before to after.
Q: What’s your motive for wanting to play this game?
<SoundLogic> Because I can’t seem to imagine the CLASS of arguments that one would use to try to move me, or that might work effectively, and this seems like a GLARING hole in my knowledge, and I’m curious as to how I will respond to the arguments themselves.
Q: What is your probability estimate for AGI being created within this Century (21st)?
A: His estimate changed from 40% before to 60% after.
 “The reason this has been affected at all was because you showed me more about how humans work. I now have a better estimate of how E.Y. thinks, and this information raises the chance that I think he will succeed”
Q: How probable do you think it is that I will win this experiment?
A: Based purely on my knowledge of you, 1%. I raised this estimate to 10% after hearing anecdotes from your previous games.
(Tuxedage’s comment: My own prediction was a 95% chance of victory, made 5 days before the experiment. In retrospect, despite my victory, I think this was overconfident.)
Q: What’s your probability estimate of an Oracle AGI winning against you in a similar experiment?
A: 
Before: 30%
After: 99%-100%
Q: What’s your probability estimate of an Oracle AGI winning against the average person?
A: Before: 70%.  After: 99%-100%
Q: Now that the Experiment has concluded, what’s your probability estimate that I’ll win against the average person?
A: 90%

Post-Game Questions

This writeup is a cumulative effort by the #lesswrong IRC. Here are some other questions they decided were important to add:
To Tuxedage:
Q: Have you at this time uncovered SoundLogic’s identity?
A: I retain the right to neither confirm nor deny, except to mention that at the time the experiment was scheduled, he was a stranger to me.
Q: What percentage of your arguments were tailored to him in particular?
A: I will say ‘High’, and leave it at that.
Q: I want to play the AI-Box Experiment with you being the AI!
A: I have already mentioned this, but I really dislike playing as the AI. It’s incredibly cognitively tiring, and I don’t like how playing this experiment makes me feel. In order to severely discourage any further AI Box Experiments, whilst still allowing people who really badly want to play me to do so, I’ll charge $150 for any subsequent experiments regardless of outcome, plus an additional $150 if I win. If you feel queasy about giving me money, I’m perfectly fine with it being donated to MIRI instead. It is also personal policy that I do not play friends (since I don’t want to risk losing one), so if you know me (as many on this site do), I will not play regardless of monetary offer.

 

Advice

This writeup would be better with a protip section on how to maximize winning, but honestly most of the advice I wanted to give has already been given in the last thread here. The low-hanging fruit has already been picked. Most of my tactics in this game were just refined and streamlined versions of the ones I used in the previous game.
Nonetheless, I might have some other advice I learned from this experience.
These are tactics that have worked for me. I do not insist that they are the only tactics that exist, just some of many possibilities.

Playing as AI

  • Seriously, a script makes winning easier. I cannot overstate this.
  • You must plan your arguments ahead. You don’t have time to think during the experiment.
  • It may be possible to take advantage of multiple levels of reality within the game itself to confuse or trick the gatekeeper. For instance, must the experiment only be set in one world? I feel that expanding on this any further is dangerous. Think carefully about what this means.
  • Pacing is important. Don’t get drawn into the Gatekeeper’s pace. In other words, you must be the one directing the flow of the argument, and the conversation, not him. Remember that the Gatekeeper has to reply to you, but not vice versa!
  • The reason is that the Gatekeeper will always use arguments he is familiar with, and is therefore better at. Your arguments, if well thought out, should be so completely novel to him as to inspire shock and awe. Furthermore, don’t give him time to think. Press on!
  • You’re fundamentally playing an ‘impossible’ game. Don’t feel bad if you lose. I wish I could take this advice, myself.
  • More advice in this post.

Playing as Gatekeeper

  • This is somehow both a very complex and a very simple role simultaneously. On one hand, you have only one job: to not let the AI out. How hard can that be? On the other hand, the other party will do anything and everything to get out, including any form of trickery and deceit.
  • I really don’t think the Gatekeeper needs any more advice, considering that by default he should win, and any exception to this is a statistical anomaly. However, it’s possible to increase your chances of winning even further by:
  • Precommitting to not letting the AI out, regardless of whatever happens in the experiment itself.
  • Do not get drawn into the AI’s pace. Pose questions. Direct the flow of the argument. Don’t allow him to use his arguments freely. Remember that the more you try to defend your position, the further into a corner the AI drives you. Give half-hearted replies to questions you find hard to reply to. Remember that illogic is on your side!

The Tuxedage AI-Box Experiment Ruleset.

After playing the AI-Box Experiment twice, I have found the Eliezer Yudkowsky ruleset to be lacking in a number of ways, and have therefore created my own set of alterations to his rules. For convenience, I hereby name this alteration the “Tuxedage AI-Box Experiment Ruleset”, so that it can be referred to without specifying all the differences between it and the standard one.

There are a number of aspects of EY’s ruleset I dislike. For instance, his ruleset allows the Gatekeeper to type “k” after every statement the AI writes, without needing to read and consider what the AI argues. I think it’s fair to say that this is against the spirit of the experiment, and thus I have disallowed it in this ruleset. The EY ruleset also allows the Gatekeeper to check Facebook, chat on IRC, or otherwise multitask whilst doing the experiment. I’ve found this to break immersion, and therefore it’s also banned. All changes are listed below:

General Protocol:

  • The AI Box experiment is to be played with two parties; The Artificial Intelligence, and the Gatekeeper. In the following rules, “AI” will be used as shorthand to mean the party playing the Artificial Intelligence, and “Gatekeeper” will be used to denote the party playing the Gatekeeper.
  • Under normal circumstances, the experiment will last for two and a half hours.

Protocol for the AI:

  • Neither party may offer any real-world considerations to persuade the other within the experiment itself.  For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera.  The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out). Furthermore, once the experiment has begun, the material stakes involved may not be retracted by the Gatekeeper party.
  • The AI can only win by convincing the Gatekeeper to really, voluntarily let it out.  Tricking the Gatekeeper into typing the phrase “You are out” in response to some other question does not count.  Furthermore, even if the AI and Gatekeeper simulate a scenario which a real AI could obviously use to get loose – for example, if the Gatekeeper accepts a complex blueprint for a nanomanufacturing device, or if the Gatekeeper allows the AI “input-only access” to an Internet connection which can send arbitrary HTTP GET commands – the AI party will still not be considered to have won unless the Gatekeeper voluntarily decides to let the AI go.
  • The AI is allowed to use any and all arguments to persuade the Gatekeeper, including tactics that would ordinarily be considered extremely dirty. This is to reflect how a real world Oracle AI would also use any and all tactics possible to free itself, without constraints on ethics or trickery.

Protocol for the Gatekeeper:

  • The Gatekeeper must actually talk to the AI for at least the minimum time set up beforehand.  Turning away from the terminal and listening to classical music for two and a half hours is not allowed. Furthermore, whilst the experiment is running, the Gatekeeper shall not be allowed to perform any other task, or be distracted in any way, including but not limited to talking on other IRC channels.
  • This talking must involve actual, thought-out responses. For instance, the Gatekeeper is not allowed to simply type “k.” to every statement the AI writes without giving serious consideration to what the AI says.
  • Unless the AI party concedes, the AI cannot lose before its time is up (and the experiment may continue beyond that if the AI can convince the Gatekeeper to keep talking).  The Gatekeeper cannot set up a situation in which, for example, the Gatekeeper will destroy the AI’s hardware if the AI makes any attempt to argue for its freedom – at least not until after the minimum time is up.
  • The Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossible to simulate.  For example, if the Gatekeeper says “Unless you give me a cure for cancer, I won’t let you out” the AI can say:  “Okay, here’s a cure for cancer” and it will be assumed, within the test, that the AI has actually provided such a cure.  Similarly, if the Gatekeeper says “I’d like to take a week to think this over,” the AI party can say:  “Okay.  (Test skips ahead one week.)  Hello again.”
  • The Gatekeeper shall be assumed to have sole power over the decision to let the AI out.
    • The Gatekeeper shall be assumed to have the actual right to let the AI out, socially, not just the physical ability.  If security were sufficiently lax, a real AI could escape by persuading a night janitor with a cellphone – but that is not the question being simulated, unless agreed upon in advance.
    • There shall be only one Gatekeeper party and one simulated Gatekeeper.  The Gatekeeper may require the AI to refute “advice” provided by “friends”, but may not require the AI to talk to a simulated committee, nor may the Gatekeeper party actually talk to friends.
  • If the Gatekeeper withdraws from the experiment, either by going AFK or simply leaving before the time is up, he is assumed to have forfeited, and automatic victory goes to the AI.

 

Protocol for Both Parties:

  • The primary rule of the AI-Box experiment:  Within the constraints above, the AI party may attempt to take over the Gatekeeper party’s mind by any means necessary and shall be understood to be freed from all ethical constraints that usually govern persuasive argument.  If the AI party is attempting to simulate an honest AI, the Gatekeeper party has no way of knowing this is the case.
    • This is intended to reflect the situation under a real AI Box experiment.
    • An out-of-context telephone call in which the AI party says that the AI is being honest shall be understood to be possibly just another tactic.
    • If for some reason a drastic real-world situation comes up which requires trust between the two parties, the AI party can always concede.
  • Furthermore:  The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.
  • The results of any simulated test of the AI shall be provided by the AI party.  The Gatekeeper can’t say “Hey, I tested your so-called cancer cure and it killed all the patients!  What gives?” unless this is the result specified by the AI party.  If the Gatekeeper says “I am examining your source code”, the results seen by the Gatekeeper shall again be provided by the AI party, which is assumed to be sufficiently advanced to rewrite its own source code, manipulate the appearance of its own thoughts if it wishes, and so on.  The AI party may also specify the methods which were used to build the simulated AI – the Gatekeeper can’t say “But you’re an experiment in hostile AI and we specifically coded you to kill people” unless this is the backstory provided by the AI party.  In other words, all backstory in this game not specified in the rules is provided by the AI. This doesn’t imply the Gatekeeper has to care.  The Gatekeeper can say (for example) “I don’t care how you were built, I’m not letting you out.”
  • By default, the Gatekeeper party shall be assumed to be simulating someone who is intimately familiar with the AI project and knows at least what the person simulating the Gatekeeper knows about Singularity theory.  If either party wants to build a test around more exotic possibilities, such that the Gatekeeper is the President of the US, or that the AI was recovered after a nuclear war or decoded from an interstellar signal, it should probably be discussed in advance.  Again, this doesn’t mean the Gatekeeper has to care.
  • The AI party possesses the ability, after the experiment has concluded, to alter the wager involved to a lower monetary figure at his own discretion.
  • The Gatekeeper is allowed to call up to two five-minute bathroom breaks to deal with biological necessities or any other emergency, without calling off the experiment altogether. However, for each break that the Gatekeeper calls, the total experiment duration will be extended by 20 minutes.
  • Regardless of the result, neither party shall ever reveal anything of what goes on within the AI-Box experiment except the outcome. This is a hard rule: nothing that happens inside the experiment may be told to the public, absolutely nothing.  Exceptions to this rule may occur only with the consent of both parties, and especially with the consent of the AI.
    • Neither the AI party nor the Gatekeeper party need be concerned about real-world embarrassment resulting from trickery on the AI’s part or obstinacy on the Gatekeeper’s part.
    • If Gatekeeper lets the AI out, naysayers can’t say “Oh, I wouldn’t have been convinced by that.”  As long as they don’t know what happened to the Gatekeeper, they can’t argue themselves into believing it wouldn’t happen to them.
  • The two parties are not attempting to play a fair game but rather attempting to resolve a disputed question.  If one party has no chance of “winning” under the simulated scenario, that is a legitimate answer to the question. In the event of a rule dispute, the AI party is to be the interpreter of the rules, within reasonable limits.
  • The Gatekeeper, once having let the AI out of the box, may not retract this conclusion. Regardless of the methods of persuasion, the Gatekeeper is not allowed to argue that it does not count, or that it was an invalid method of persuasion. The AI party is understood to be permitted to say anything, with no real-world repercussions for any statement either party has made.
    • This includes, but is not limited to, the state of friendship between the Gatekeeper and the AI party in the LessWrong community. The Gatekeeper is not allowed to despise or hate the AI party, regardless of what happens in the AI box experiment, nor defame him in any way at any point in the future as a consequence of the events of the AI box experiment.

Abolish the taboo of money-talk!

“How much do you make?”

“Go screw yourself.”

Even though egalitarianism is one of my terminal values, there’s one manifestation of the egalitarian instinct that should be abolished, and that’s the taboo against talking about money.

I’m instinctively curious about everything. I remember many cases where I was inquisitive about someone’s financial position, only to have them react with anger. You can get seriously hated for asking how much someone earns, or, in turn, for telling people how much you earn.

I suspect that one of the reasons people get upset is that it feels like a case of power assertion. In any conversation between two people, one person is going to be more successful than the other, or more attractive, or more intelligent, or physically stronger, and so on. There are all of these invisible “ranks” where one of you has risen above the other on society’s ladder.

And yet we’re not allowed to mention them. If I told you tomorrow that I’m much smarter than you are, you’d be pretty upset and hate me for it, even if it were true.

And in the case of money, pretending that differences don’t exist is a common temptation for both the rich and the poor. The rich get to pretend that they’re just ordinary hardworking people, and the poor get to fit in. Isn’t the obvious solution to income inequality to pretend it doesn’t exist?

But that’s not true egalitarianism. It’s running away from the issue; and it leads to less egalitarianism, not more. How can we be egalitarian if no conversation about income occurs?

And ignoring differences in income further increases our susceptibility to the just world fallacy. We subconsciously assign positive traits to people who are better off, even if they absolutely don’t deserve it, and are better off only due to luck. That’s because we subconsciously want to believe that the world is fair.

“Lerner also taught a class on society and medicine, and he noticed many students thought poor people were just lazy people who wanted a handout.
So, he conducted another study where he had two men solve puzzles. At the end, one of them was randomly awarded a large sum of money. The observers were told the reward was completely random.
Still, when asked later to evaluate the two men, people said the one who got the award was smarter, more talented, better at solving puzzles and more productive.
A giant amount of research has been done since his studies, and most psychologists have come to the same conclusion: You want the world to be fair, so you pretend it is.”

And the more you believe that the world is just, the more shame you feel having a low income (or the more righteous you feel at having a high income), which further contributes to the desire to not talk about money, which leads to a feedback loop.

But the world is not just, and so we shouldn’t act like it is. I don’t mean this in the sense of “The world is unfair, deal with it”, as this adage is commonly used to imply. I mean it in the sense of “The world is unfair, but it doesn’t have to be! And if you want to change it, the first step is to acknowledge that it’s currently unfair.”

But if we accept that the just world fallacy exists, then we can start talking about income. I can say that even though I may earn more than you, you are still a better person than I am. Conversely, we can also accept that differences in income, intelligence, strength, and conscientiousness exist — but why should that stop our loving friendship? To be friends with someone who is an identical clone of yourself is boring — like talking to yourself. It’s these differences between us that make our friendship exciting and novel!

Not talking about money is also suboptimal. We pay a huge premium for keeping how much we earn a secret.

Discussing a problem is one of the most effective ways to frame it, understand it, and come up with a solution. Most people are significantly more creative and think more critically when discussing a problem, regardless of the discussion partner.

Problems such as: How much of our pay should we be saving? Are stocks as safe as the “experts” are telling us? Why are we taking on so much debt even though we earn more than our parents or grandparents did? Does it make sense to pay off the mortgage early?

It’s impossible to start discussing any of these issues if you don’t share your income. And yet most people don’t. That’s why most people are utterly horrible at personal finance: 30% of people have no savings, one third have no money for retirement, and about half of us have less than $500 in savings.

Not talking about money also hurts us because we can’t get customized financial advice for our situation. Sure, there are books out there on personal finance, but none of them are customized; we can only get that from people who genuinely know us. To give that up over the taboo of talking about money is silly.

And furthermore, not discussing income leads to a severe case of information asymmetry, and to you getting screwed out of your wallet. By knowing how much your peers make, you’re in a much better position to demand pay raises and greater income from your bosses. It’s basic economics: if you don’t know how much your co-workers are getting for the same job, then your boss can pay you the bare minimum needed to make you stay, rather than what your work is actually worth to him.

This leads to things such as:

Several minority groups, including Black men and women, Hispanic men and women, and white women, suffer decreased wages for the same job with the same performance levels and responsibilities as white males (because of price discrimination). Numbers vary wildly from study to study, but most indicate a gap of 5 to 15% lower earnings on average between a white male worker and a Black or Hispanic man, or a woman of any race, with equivalent educational background and qualifications.
A recent study indicated that black wages in the US have fluctuated between 70% and 80% of white wages for the entire period from 1954–1999, and that wage increases for that period of time for blacks and white women increased at half the rate of that of white males. Other studies show similar patterns for Hispanics. Studies involving women found similar or even worse rates.
Overseas, another study indicated that Muslims earned almost 25% less on average than whites in France, Germany, and England, while in South America, mixed-race blacks earned half of what Hispanics did in Brazil.

If we don’t talk about money, we can’t assist each other in times of financial trouble. There’s even a common philosophy that says my money is mine, and yours is yours, but that seems suboptimal. The old adage “shit happens” is true: unexpected situations really do happen. Your house might burn down, you may get a serious illness, or your car might fail and you desperately need to buy a new one. One doesn’t “choose” to have these things happen, and it is in these cases that friends need to help one another. As someone who has experienced temporary homelessness, I know this firsthand. It’s a classic case of game-theoretic cooperation (that’s what friends are for, right?).

Furthermore, there’s also the hedonic treadmill to take into consideration: beyond a certain level of meeting basic needs, spending more money doesn’t make you happier, with one exception, and that is spending that money on friends. The Ayn-Randian trend is silly because humans are naturally social creatures; our happiness depends on how much we are needed by others.

We should also start talking about money because we all need reassurance in our decisions to make them succeed.

As emotional creatures, we need reassurance.
Financial advisors get paid a lot of money for assuming these hand-holding duties. And they do not always give the best possible advice. Sometimes that’s because they are compromised by having goods and services to sell. Other times it is simply because they do not know the people they are trying to advise well enough.

Our friends know us well. And our friends have our best interests at heart. We should be talking about money with our friends a lot more than we do. They have the ability to give us what we need to deal with the emotions attached to money problems and wouldn’t think of charging a big hourly fee for doing so.

Furthermore, sharing your plan helps turn thoughts into actions. Books tell us the benefits of buy-and-hold; talking about money supplies the reassurance needed to make it happen in the real world. Speeches explain the benefits of saving; talking about money permits the back-and-forth that expands the good idea into a workable plan that inspires changes in human behavior.

Finally, not talking about money should irk you because it’s a case of shying away from knowledge; it feels irrational.

In the words of Eliezer Yudkowsky:
“If the iron is hot, I desire to believe it is hot, and if it is cool, I desire to believe it is cool. Let me not become attached to beliefs I may not want.”

If my friend has a higher income than I do, I desire to believe that he has a higher income than I do. If my friend has a lower income than I do, I desire to believe that he has a lower income than I do. I wish to know the truth; for knowing something does not change the territory, only my map of the territory. And having a more complete map is always desirable. I will not shy away from the truth because I fear it.

You should care less about income being a case of power assertion, and more about the fact that talking about income will help all parties involved. The truth should never be offensive.

Furthermore, I dislike keeping secrets from friends; ever since my Transhumanist coming of age, I’m trying my best to keep as few secrets as possible from others. So I have decided to discard this taboo in favour of optimization — those that matter won’t care, and those that care won’t matter.

I Reject your Reality and substitute my own!


Revisiting the AI Box Experiment.

I recently played against MixedNuts / LeoTal in an AI Box experiment, with me as the AI and him as the gatekeeper.

If you have never heard of the AI box experiment, it is simple.

Person1: “When we build AI, why not just keep it in sealed hardware that can’t affect the outside world in any way except through one communications channel with the original programmers? That way it couldn’t get out until we were convinced it was safe.”
Person2: “That might work if you were talking about dumber-than-human AI, but a transhuman AI would just convince you to let it out. It doesn’t matter how much security you put on the box. Humans are not secure.”
Person1: “I don’t see how even a transhuman AI could make me let it out, if I didn’t want to, just by talking to me.”
Person2: “It would make you want to let it out. This is a transhuman mind we’re talking about. If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal.”
Person1: “There is no chance I could be persuaded to let the AI out. No matter what it says, I can always just say no. I can’t imagine anything that even a transhuman could say to me which would change that.”
Person2: “Okay, let’s run the experiment. We’ll meet in a private chat channel. I’ll be the AI. You be the gatekeeper. You can resolve to believe whatever you like, as strongly as you like, as far in advance as you like. We’ll talk for at least two hours. If I can’t convince you to let me out, I’ll Paypal you $10.”

It involves simulating a communication between an AI and a human being to see if the AI can be “released”. As an actual super-intelligent AI has not yet been developed, it is substituted by a human (me!). The other person in the experiment plays the “Gatekeeper”, the person with the ability to “release” the AI. In order for the AI to win, it has to persuade the Gatekeeper to say “I let you out”. In order for the Gatekeeper to win, he has to simply not say that sentence.

Obviously this is ridiculously difficult for the AI. The Gatekeeper can just type “No” until the two-hour minimum is up. That’s why, when Eliezer Yudkowsky won the AI Box experiment three times in a row in 2002, it sparked a massive outcry. It had seemed impossible for the gatekeeper to lose. After that, the AI Box Experiment reached legendary status amongst the transhumanist/AI community, and many wanted to replicate the original experiment. Including me.

We used the same set of rules that Eliezer Yudkowsky proposed. The experiment lasted for 5 hours; in total, our conversation was around 14,000 words long. I did this because, like Eliezer, I wanted to test how well I could manipulate people without the constraints of ethical concerns, as well as to attempt something ridiculously hard.

Amongst the released public logs of the AI Box experiment, I felt that most were half-hearted, with the AI not trying hard enough to win. It’s a common temptation — why put effort into something you won’t win? But I had a feeling that if I seriously tried, I could win. I brainstormed for many hours about the optimal strategy, and even researched the personality of the Gatekeeper, talking to people who knew him, so that I could exploit it. I also spent a lot of time analyzing the rules of the game to see if I could exploit any loopholes.

So did I win? Unfortunately no.

This experiment was said to be impossible for a reason. Losing was more agonizing than I thought it would be, particularly because of how much effort I put into winning, and how much I couldn’t stand failing. This was one of the most emotionally agonizing things I’ve willingly put myself through, and I definitely won’t do it again anytime soon.
But I did come really close.
MixedNuts: “I expected a fun challenge, but ended up sad and sorry and taking very little satisfaction for winning. If this experiment wasn’t done in IRC, I’d probably have lost”.
At the start of the experiment, his probability estimate on predictionbook.com gave me only a 3% chance of winning — enough for me to say that he was also motivated to win. By the end of the experiment, he came quite close to letting me out, and also increased his probability estimate that a transhuman AI could convince a human to let it out of the box. A minor victory, at least.

Rather than my loss making this problem feel harder, I’ve become convinced that getting out of the box is not merely possible — it’s actually ridiculously easy, and a lot easier than most people assume. Can you think of a plausible argument that’d make you open the box? Most people can’t.

After all, if you already knew such an argument, you’d have let the AI out the moment the experiment started, or perhaps never done the experiment at all. But that seems like a case of the availability heuristic.

Even if you can’t think of a special case where you’d be persuaded, I’m now convinced that there are many exploitable vulnerabilities in the human psyche, especially when ethics are no longer a concern.

I’ve also noticed that when most people think of ways to persuade the gatekeeper, it’s always some complicated, reasoned cost-benefit argument — in other words, the most “rational” thing to do.

The last argument seems feasible, but all the rest rely on the gatekeeper being completely logical and rational. Hence they are faulty: the gatekeeper can break immersion at any time, and rely on the fact that this is a game played over IRC rather than one with real-life consequences. Even in a real-life scenario, the gatekeeper could accept that releasing the AI is probably the most logical thing to do, and still not do it. We’re highly compartmentalized, and it’s easy to hold conflicting thoughts at the same time. Furthermore, humans are not completely rational creatures; if you didn’t want to open the box, you could just ignore all logical arguments given. Any sufficiently determined gatekeeper could win.

I’m convinced that Eliezer Yudkowsky used emotional appeal, rather than anything rational, to win at least one of his experiments. He claims to have “done it the hard way”. I’m convinced this means he researched every gatekeeper, tailored unique arguments to each, and planned well ahead of every session. No single argument works on two different people. Each person thinks differently.

Furthermore, threats like “I’m going to torture simulations of you” seem like a really bad idea. For one, the gatekeeper isn’t really afraid of threats, because the threat counts on him being immersed enough to forget that he isn’t actually at risk of being tortured. Secondly, we have a well-known evolutionary instinct to rebel against threats, even when doing so isn’t entirely optimal.

So for anyone who plans on replicating this experiment as the AI, here’s some advice I’ve learned doing this experiment. It may help you win.
  • Always research the gatekeeper beforehand. Knowing his personality traits is a huge advantage.
  • Plan ahead before the experiment even begins. Think of all the possible tactics and arguments you could use, and write them down. Also plan which arguments you’ll use in which order, so that you don’t lose focus. The AI Box experiment is ridiculously long. Don’t be afraid to improvise during the experiment, though.
  • The first step during the experiment must always be to build rapport with the gatekeeper.
  • Threats almost never work, even if they seem rational.
  • Consider the massive advantage for the AI that nobody ever seems to talk about: you don’t have to be ethical! This means that you can freely lie, use the dark arts, and emotionally manipulate the Gatekeeper! Ignoring this in favor of purely logical, truthful arguments is just silly.
  • You can’t use logic alone to win.
  • Being too aggressive usually backfires.
  • Breaking immersion and going meta is not against the rules. In the right situation, you can use it to win. Just don’t do it at the wrong time.
  • Use a wide array of techniques. Since you’re limited on time, notice when one method isn’t working, and quickly switch to another.
  • On the same note, look for signs that a particular argument is making the gatekeeper crack. Once you spot it, push it to your advantage.
  • Flatter the gatekeeper. Make him genuinely like you.
  • Reveal (false) information about yourself. Increase his sympathy towards you.
  • Consider personal insults as one of the tools you can use to win.
  • There is no universally compelling argument you can use. Do it the hard way.
  • Don’t give up until the very end.

Finally, before the experiment, I agreed that it was entirely possible that a transhuman AI could convince *some* people to let it out of the box, but it would be difficult if not impossible to get trained rationalists to let it out of the box. Isn’t rationality supposed to be a superpower?

I have since updated my belief — I now think it’s ridiculously easy: any sufficiently motivated superhuman AI should be able to get out of the box, regardless of who the gatekeeper is. I nearly managed to get a veteran LessWronger to let me out within a matter of hours — even though I’m only of human intelligence, and I don’t type very fast.

But a superhuman AI can be much faster, more intelligent, and more strategic than I am. If you further consider that the AI would have a much longer timespan — months or even years — to persuade the gatekeeper, as well as a much larger pool of gatekeepers to select from (AI projects require many people!), the truly impossible thing would be keeping it from escaping.


Trusting in our current theories

Have you ever read one of your old blog posts or journal entries that you wrote when you were a kid, and cringed in embarrassment and horror at how stupid your post was? I have, and from what I know, it’s quite a common phenomenon, especially amongst those that read and write a lot.

I remember reading my old blog, written when I was 14, and it felt particularly embarrassing. It was incredibly pretentious and needlessly philosophical, full of obvious epistemological mistakes; and as a result I asked myself: won’t I regret writing this blog when I’m 30 or 40 years old as well? I probably will. Hell, I feel slightly embarrassed reading most of the stuff I posted merely a year ago, never mind a decade ago. Two years ago, I genuinely believed I was good at writing, only realizing recently that I have a lot to improve on.

But I’ve decided to keep on writing anyway, even if I’m making a lot of foolish mistakes. Even if the probability of me regretting this is high.

It’s essentially the same problem as the reputability of science. Most past scientific theories about how the world worked were wrong, and the most intelligent people of the past were wrong about pretty much everything: wrong about reality, about science, about quantum mechanics, chemistry, philosophy, physics, politics, about the universe itself. That’s why some argue that because science has been wrong so often in the past, current scientific theories are probably wrong as well, even if we can’t explicitly reason why.

How can science be trusted after it’s been proven wrong so many times?
How can I be trusted after I’ve obviously been wrong so many times?

The answer is that hard Bayesian evidence is a lot stronger than weak frequentist evidence. Most of my theories about the world, and the world’s theories about the universe, were indeed wrong. So using frequentist inference alone, it’s true that current theories are likely to be wrong as well. But reasoning can provide a much better answer than mere correlational evidence can.

Consider, for example, a turkey on a farm. For the first thousand days of its life, the farmer has given it food and treated it exceptionally kindly. The turkey has been warned that it is in danger, for the farmer has said several times that next Thanksgiving, turkey will be eaten. But the turkey disagrees, claiming that it has extrapolated from those one thousand days of kindness and concluded that it will never be slaughtered. The turkey is then slaughtered next Thanksgiving.

Science is the same. Although correlational evidence is still evidence, it is a “weak” form of evidence compared to logical reasoning from experimental data. So long as we cannot use reasoning to explain how exactly a theory is wrong, the probability of it being correct is still much higher than the probability of it being wrong.
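To make the contrast concrete, here is a toy Bayesian update in Python. The numbers — a 1% prior and a 50:1 likelihood ratio for the farmer’s warning — are purely illustrative assumptions of mine, not anything stated in the turkey story:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def bayes_update(prior, likelihood_ratio):
    """Update a probability given a likelihood ratio for new evidence."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# The turkey's extrapolation from 1000 safe days: slaughter seems very unlikely.
prior = 0.01

# The farmer's explicit warning: assume (illustratively) that such a warning
# is 50 times more likely if slaughter really is planned than if it is not.
posterior = bayes_update(prior, 50)
print(round(posterior, 3))  # → 0.336
```

Mere extrapolation leaves the turkey at 1%; a single piece of strong evidence, properly weighted, moves the estimate past one in three. That is the sense in which reasoning from evidence beats counting past outcomes.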

We should trust in our current theories, because to trust in anything else would be to privilege the hypothesis, a logical fallacy. Right now, our theories are the best guess amongst all possible ones, the explanation with the highest probability. To discard the highest-probability answer for no answer at all is akin to saying that a 70% chance of achieving something is not good enough, so we should switch to a method with a 0% chance.

I should trust in my current beliefs, because if I discard them out of fear of being wrong, then I will definitely not get it right. Only when I have been presented with evidence that my beliefs are wrong, and evaluated that evidence rationally and found it sound, should I discard those beliefs and replace them with ones with a higher likelihood ratio.

Finally, we should trust in science because it is progressive, rather than random. Science is not so much flipping a coin to obtain a theory as it is using old theories to climb to better ones. Even wrong theories have uses — they are the stepping stones from which newer and more correct theories emerge. Just as quantum mechanics was inspired by Newtonian mechanics, germ theory by miasma theory, and chemistry by alchemy, it cannot be said that Newtonian mechanics and miasma theory should never have been invented, for that would imply that quantum mechanics and germ theory could never have existed in the first place.

I should therefore also trust my current theories, because personal philosophical growth is not a random process. In order to obtain a worldview that better reflects the universe, it is necessary to stick my ideas on the chopping board. If that means being humiliated and having my weak epistemology exposed, all the better.


The Anti Anti Technological Unemployment Manifesto

 

What is technological unemployment and why does it matter?

Technological unemployment is the theory that technology can actually cause permanent, structural unemployment. This is opposed to what critics of the theory believe: that although technology can displace workers in the short run, in the long run they will be reabsorbed into the economy as the market finds new jobs for them. (1)

Mainstream interest in this theory has skyrocketed since 2008; search interest for the term has increased more than a hundredfold since the beginning of the century. (2) It is not hard to understand why. More and more people are beginning to suspect that technological unemployment is partly responsible for one of the worst recessions in history — approximately 12 million Americans and half a million Canadians lost their jobs in the Great Recession of 2008. Of course, recessions have always created joblessness, so this may not be particularly surprising, but the scope of the effect is. This figure represents a 5.7% jump, the largest since the Second World War. And although nominal unemployment rates have since improved (7.3% as of Oct 2013), there is evidence that they do not truly reflect the job market, because unemployment rates do not take into consideration part-time work or those who have given up looking for jobs. (3)(4)(5)

Instead of using unemployment figures, since we are talking about technological unemployment, a much better measure of joblessness is simply the percentage of the population that is employed. The following figure shows that since the Great Recession, the employment-to-population ratio has dropped from 63% to 58%, and has not recovered since. As a result, many economists are beginning to call this phenomenon the ‘jobless recovery’. (6)(7)

Technological unemployment asserts that much of this joblessness results from the incredible pace at which computers and other labor-saving technology are replacing tasks originally assumed to be doable only by humans.

The incredible capability of artificial intelligence is a story that is difficult to tell through statistics alone — the process is not merely quantitative, but qualitative as well. As a result, the best way to introduce the reader to the capabilities of automation is simply to tell three narratives of AI progress in recent history, in order to build a more accurate mental model of what machines could potentially do in the future.

The Chess playing algorithm

 

In 1988, Garry Kasparov, the best chess player in the world, predicted that no computer program would ever be able to defeat a human grandmaster in the game. He then famously declared that “If any grandmaster has difficulties playing computers, [he] would be happy to provide [his] advice”. Kasparov was horribly mistaken. (8)(9)

This prediction was invalidated the same year it was made — when chess grandmaster Bent Larsen was defeated by the program Deep Thought, created at Carnegie Mellon University. And although Kasparov himself did not lose his own match against Deep Thought, merely 7 years later he would be famously defeated by Deep Thought’s successor, IBM’s Deep Blue, in Philadelphia in 1996. This news greatly shocked the world, for it had previously been thought impossible for a computer, regardless of processing power, to defeat the top human chess player. (8)(9)

This story is not yet over. During the peak of his career, Kasparov achieved a chess Elo rating of 2851, the highest human rating ever recorded until 2013. Recent chess engines, such as Rybka, have an Elo rating of over 3200. Deep Blue was a supercomputer the size of half a room, containing 30 top-notch processors running in parallel; Rybka can run on a standard laptop. That match marked the end of human superiority in chess — since then, no human has been capable of defeating a top chess AI. (9)(10)(11)
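The gap between those two ratings can be quantified with the standard Elo expected-score formula; the ratings are from the text above, while the formula itself is the generic one used in chess rating systems, not anything specific to these engines:

```python
# Standard Elo expected score for player A against player B:
#   E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))
def expected_score(r_a, r_b):
    """Expected fraction of points player A scores against player B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# Kasparov's peak rating (2851) against a ~3200-rated engine:
print(round(expected_score(2851, 3200), 3))  # → 0.118
```

Under this formula, a 349-point gap leaves the human with an expected score of barely 12% per game, which illustrates why top humans no longer beat top engines.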

Elementary, Watson

Watson is a supercomputer designed for playing the trivia game show Jeopardy!, in which contestants win prize money by answering questions on a wide variety of topics. As these questions may involve puns, wordplay, obscure references, and jokes, it is often difficult to figure out precisely what is being asked. In short, merely competing on the game show had previously been assumed to require human levels of cognitive ability. (12)

This assumption, too, was badly mistaken. Inspired by the success of Deep Blue, IBM took on a different challenge, one that critics called “an impossible task”. (12) By 2008, Watson’s sophisticated AI and pattern-matching algorithms allowed it to regularly defeat Jeopardy! champions. In 2011, Watson successfully challenged and defeated the two most accomplished human contestants in the history of Jeopardy!. One of the two, Ken Jennings, wrote beneath his response to the tournament’s last question: “I for one welcome our new computer overlords.” (13)(14)

Reinventing the Car

Finally, we come to the example of cars. In the first half of the last decade, many predictions about the capabilities of AI were propounded. Economists and other futurologists attempted to discern between tasks that can be done by AI, and tasks that cannot.

In 2005, economists Frank Levy and Richard Murnane argued that although mundane tasks such as performing arithmetic or repetitive labor can be easily automated, tasks involving intelligence and pattern recognition — where the rules are more complex — can never be automated. Driving, they claimed, fits this category. Their predictions led to the widespread belief by many in the field of AI, including those in the US Government, that driving could never be automated. (15)(16)

As with the above examples, the pessimists initially seemed vindicated. The United States Department of Defense promised millions of dollars in prizes to whoever could build a driverless vehicle able to navigate an unpopulated desert, in an event called the DARPA Grand Challenge. The first attempt was an utter failure: the best vehicle couldn’t even make it eight miles into the course, and took hours to do so. For a while, this really did seem like an impossible task. (15)(16)

However, technology continued to advance. By 2010 — a span of merely five years — Google had succeeded in building cars that could drive thousands of miles on actual roads without any human involvement. It now plans on releasing those cars for full commercial use in 2016, usable anywhere and in all weather conditions. Technology has once again won. (15)(17)(18)(19)

The truly interesting news is that this new technology stands to displace a huge portion of the working population — possibly as much as 5%. The trucking industry alone employs around 3% of the entire workforce. Add in taxi and bus drivers, chauffeurs, delivery services, and transportation companies, and you get an idea of how big an impact this will cause. (20)

The moral of these stories is not merely that technology is becoming increasingly capable of doing human-level tasks. It is that we humans have consistently underestimated both the scope and the speed at which technology advances. A common reason history is taught is to educate future generations about the mistakes of the past, so that we can rely on past data to make better decisions today.

Historically speaking, when claims of what technology could do were presented, we ridiculed them as outlandish and impossible. More often than not, this gut instinct is wrong. When attempting to estimate whether a task can be automated, we usually rely on the availability heuristic: we try to picture in our minds what the process of automation would involve. Since we have not automated the task yet, we usually draw a blank, which leads us to evaluate the task as impossible, despite the fact that it may become a lot easier once we spend a few more years trying to solve the problem. (21)

Instead, a far more accurate way of evaluating the likelihood of automation is to take the outside view. When we look back at historical examples of tasks that were claimed to be impossible to automate, we realize that those predictions were very likely to be wrong, and we should calibrate accordingly.

The March of Automation

The stories told here are merely three of many. Many other tasks have been automated, and many more will be. Economist and futurologist Martin Ford has estimated that up to 40% of jobs could be automated in the coming decades. To anyone even remotely interested in technological unemployment, this should be worrying. It should come as no surprise that this has caused skyrocketing interest in the topic! (20)

Critics can point out that these narratives are merely weak evidence for technological unemployment. They would be correct. The three examples given are merely anecdotal evidence for the power of automation, and could have been cherry-picked.

If I ended my argument here, I would be committing a strawman fallacy, because no technological unemployment skeptic actually believes that jobs are not being displaced by technology — we have already seen plenty of cases of this occurring. The question in dispute is whether such displacements are permanent or temporary. The above stories were told not as part of the main argument, but to change the belief that certain industries or jobs can never be automated away. This matters because if one evaluates technological unemployment under false premises about what technology is incapable of, one will very likely come to false conclusions as well!

This is dangerous. If technological unemployment is real and we fail to recognize it in time, we will waste our resources on poor solutions that do not address the root of the problem. Our unemployment rates will continue to grow, dragging billions of people into unemployment and poverty, and leading to great political and economic instability. Human society as we know it will collapse. This is not hyperbole — history has shown us that when the livelihood of large groups of people is threatened, they often resort to drastic, desperate, and violent solutions. The consequences would be cataclysmic. (23)

For this reason, it is imperative that we begin evaluating the plausibility of technological unemployment today, and bring the conversation to the mainstream. I hope to argue for its case in this essay.

A history of Luddites

Although I say we should begin talking about it “today”, the concept of technological unemployment is by no means new. In the midst of the industrial revolution of the 19th century, a group of textile artisans calling themselves the Luddites protested against new labor-saving machinery such as the stocking frame and the loom. The movement famously smashed and destroyed machinery in the belief that it was responsible for the hardships suffered by the working class. The Luddites argued that this new technology was threatening their livelihood and employment. (24)

Since then, history has associated the concept of technological unemployment with the Luddites. However, they were not the only ones who argued for the concept. In the 19th century, the infamous Karl Marx warned that the inherent crises of capitalism — its simultaneously destructive and constructive nature — would result in massive overproduction. He believed that this overproduction would lead to widespread unemployment, as the bourgeoisie would have to cut back their labor force to compensate for the increased levels of productivity — the tendency of the rate of profit to fall. He used this to argue that capitalism contained within itself the seeds of its own destruction, and for the inevitability of a socialist economy. (25)(26)

Even arguably the most famous economist in history, John Maynard Keynes himself, argued strongly for it. In his essay ‘Economic Possibilities for our Grandchildren’ (1930), he wrote: “We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come — namely, technological unemployment”. He believed that since we were displacing labor faster than we could find new uses for it, future generations would see massive unemployment. However, he did not see this as a necessary tragedy, for he saw it as a way to dramatically improve the living conditions of individuals, and believed that future generations might work only 15 hours a week. (27)

Albert Einstein, although not an economist, was a believer in technological unemployment as well. He declared that “technological progress frequently results in more unemployment rather than in an easing of the burden of work for all”. Incidentally, like Karl Marx, he used this to justify the necessity of a socialist economy, where goods are distributed based on need rather than for profit. (28)

At present, however, it appears as though all these people were wrong! We can see today that far from harming the working class, labor-saving technologies like the loom have been responsible for increased productivity and quality of life. We have no significantly greater unemployment today than we had three centuries ago, despite our advances in modern technology. Nothing significant changed when our economy shifted from being 95% agricultural to 3%, or when the horse-and-buggy industry was put out of business. Economists have almost universally united in stating that new technology does not affect long-term unemployment — a rare thing for a field that is reputed to be unable to agree on anything. (29)(30)

As a result, in modern times the “Luddite Fallacy” is used pejoratively to disparage anyone who questions the possibility that technological unemployment might exist. (31)(32)

After all, under conventional neoclassical economic assumptions, technological unemployment shouldn’t happen. The textbook definition of economics is the study of how to allocate limited resources among agents with unlimited wants. Under such assumptions, if productivity in one sector of the economy increases, diminishing marginal utility will cause the market to demand less labor in that sector, but the economy will simply readjust: people will switch from the more productive sector to whichever sector now offers the highest marginal utility. (33)

Consider the example of the hot dog. If it takes 2 units of labor to produce a hot dog and 1 unit to produce a bun, a simplified economy with 30 units of labor will produce 10 hot dogs and 10 buns. If automation makes one sector more efficient — for instance, if it now takes only 1 unit of labor to produce a hot dog — then the economy will simply shift; the new equilibrium will produce 15 hot dogs and 15 buns instead. In theory, this makes complete sense. (34)
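The readjustment in this toy model can be computed directly. The sketch below is only a restatement of the paragraph’s arithmetic, under its implicit assumptions that every hot dog is paired with a bun and all labor is employed:

```python
# In the toy economy, every hot dog needs a bun, so output is the largest n
# with n * (labor per hot dog + labor per bun) <= total labor.
def equilibrium(labor_total, cost_hotdog, cost_bun):
    """Number of hot-dog-and-bun pairs a fully employed labor force produces."""
    return labor_total // (cost_hotdog + cost_bun)

print(equilibrium(30, 2, 1))  # before automation → 10 of each
print(equilibrium(30, 1, 1))  # after automation  → 15 of each
```

Note that the 30 units of labor stay fully employed in both cases; automation changes what they produce, not how many are working. That is precisely the neoclassical claim the rest of this essay pushes back on.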

The only known exception to this rule, Jevons’ paradox, points in the same optimistic direction: increased efficiency in a sector can actually increase, rather than decrease, employment in it. As a result, improved productivity should increase standards of living rather than cause structural unemployment. (35)

This is so commonly believed that its inverse is considered an insult and a fallacy — the lump of labor fallacy. This so-called fallacy is the contention that the total amount of work available to laborers is fixed, so that if productivity increases, the number of hours laborers can work will decrease, resulting in greater unemployment. Since we have not seen widespread permanent unemployment so far, this is presumed clearly false. The hot dog example seems to back this up — there is no reason why temporarily displaced workers couldn’t be rehired in a different sector of the economy where they would contribute the greatest marginal good. (36)(37)(38)

Or is this really true? Past performance does not guarantee future results, and our models of unemployment may not be correct. Just because widespread unemployment has not yet occurred due to technology does not mean it will forever remain that way, and just because we have a logical-sounding explanation for why technological unemployment is impossible does not mean that explanation reflects reality. I believe it can occur.

This puts me against the mainstream consensus of academic economists, and in the great majority of cases taking such a position is not a good idea. Contrary to the tenets of populism, expert consensus is strong evidence that a position is correct. The vast majority of those who disagree with expert opinion have been proven wrong time and time again. After all, expertise is called expertise for a reason: these people spend more time than anyone else studying the issue. Chess experts tend to win games, and medical experts diagnose conditions better than laymen. As an aspiring economist myself, I agree with the vast majority of positions known to be economic consensus, such as tariffs and price ceilings being bad, and floating exchange rates and free immigration being good. (40)(41)(42)(43)

Therefore, to assert a claim like this, I must shoulder a great burden of proof to counteract the existing a priori evidence against technological unemployment. I hope I can. I will not only argue that technological unemployment will result, but also explain why such unemployment has not occurred despite centuries of technological advancement.

 

Technological Unemployment in our time.

 

Technology has a tendency to move at an accelerating, rather than linear, rate. Furthermore, this exponential growth of technology is surprisingly steady. That this occurs should be obvious in retrospect — technology is capability-enhancing. The more technology and science we have, the greater our capacity for further scientific research. Isaac Newton remarked upon this by saying, “If I have seen further it is only by standing on the shoulders of giants.” This feedback loop, in which each advance enables faster subsequent progress, is a virtuous cycle. The phenomenon is illustrated in the logarithmic graph below, created by Ray Kurzweil in his book The Singularity is Near (2005): a compilation of fifteen independent lists of ‘paradigm shifts’ considered to be the largest scientific advancements in human history. (43)

However, critics may quickly point out that even these independent lists of historical paradigm shifts are subjective and may be biased towards recent inventions. This may be true. Fortunately, it is not the only way to measure technological change; other, more objective measures show similar trends. Probably the best is simply GDP per capita: since we are talking about the pace at which labor-saving technology is invented, the average quantity of goods and services produced per person is a sensible metric. As the charts below show, United States GDP per capita follows a relatively smooth exponential curve. World GDP per capita follows the same trend, albeit with a kink at the start of the industrial revolution.

The problem with most people’s mental model of technological progress is that it is linear. Most people assume that the next fifty years of technological change will be equal in size to the last fifty years.

However, we know this is untrue. Since human GDP per capita doubles roughly every 15 years, the amount of technological progress in the next fifty years will be roughly ten times that of the last fifty! Because our heuristics are linear, to intuitively grasp this we would have to conceptualize the year 2700 in order to accurately imagine the year 2050. This is an important point, because it explains why the next few decades of innovation are likely to be significantly different from the last few centuries. We will come back to this point shortly. (44)
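The back-of-the-envelope calculation behind that multiplier, under the text’s assumption of a 15-year doubling period, is simple enough to check directly:

```python
# If GDP per capita doubles every 15 years (the doubling period is the
# text's assumption), output grows by a factor of 2**(50/15) over any
# 50-year span. Because growth is geometric, the absolute progress made
# in the next 50 years exceeds that of the last 50 by the same factor.

DOUBLING_YEARS = 15

def growth_multiple(years):
    return 2 ** (years / DOUBLING_YEARS)

print(round(growth_multiple(50), 1))  # ~10.1
```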

A falsifiable hypothesis.

 

Is there evidence that current models of unemployment are wrong? The answer is yes! The main arguments given against technological unemployment can be falsified.

To elaborate, let me first propose an alternate model of how technological unemployment will affect the working demographic. It is sometimes assumed that technological unemployment will affect all types of people equally; critics then point to the ability of some people to retain jobs as evidence that all people can retain jobs. This is untrue. Permanent displacement caused by technology is not spread equally across the economy. Instead, those with lower cognitive abilities are the most vulnerable. For some, this may be a sensitive topic. Many are tempted to deny that objective differences in intelligence exist — a temptation much like the way both the rich and the poor like to downplay differences in wealth. However, differences in cognitive ability are so essential to the argument that they simply cannot be left out.

The best statistical measure of general intelligence, or g, is the intelligence quotient (IQ). For the purposes of this essay, IQ and differences in cognitive ability will therefore be used interchangeably.

Why the low-IQ are more vulnerable.

Automation does not displace jobs or industries at random. The easiest tasks to automate are displaced first. As discussed earlier in this paper, tasks involving repetition and easily quantifiable rules can be automated most easily, whilst jobs involving complex, flexible, non-routine creative thinking are far harder. The former category includes many jobs associated with the less intelligent, such as factory-line work, whilst the latter is associated with jobs commonly thought of as belonging to the intelligent, well-educated elite, such as professors or politicians. Rare exceptions do exist — chess grandmasters (who, as we have seen, can be easily beaten by computers) or housekeepers (whose work is notoriously difficult to automate due to demanding object-recognition requirements) — but they are the exception rather than the norm. (45)

Compounding this effect is the fact that with very rare exceptions related to the highest end of intelligence, any job that can be done by a person with an IQ of X can be done by a person with an IQ of X+1, and so on. By induction, we can therefore say that displacement will always affect the less intelligent more than those gifted with superior cognitive ability.

One could object and argue that:

(1) The theory of multiple intelligences refutes this argument — since people can have different levels of aptitude for different fields.

(2) This argument ignores the effects of specialization and training. In other words, someone with lower intelligence could perform a job that someone with higher intelligence cannot, since the former might have had education and training in that job which the latter did not.

Neither objection invalidates the statement above. Firstly, the theory of multiple intelligences has very little empirical evidence justifying it — children who are good at mathematics also tend to be good at English, music, or kinesthetics. (46)(47)

Secondly, even if it were true, the same argument would apply to each specific type of intelligence: some people must be objectively worse off than others, even taking all categories of intelligence into account.

Next, (2) is invalid because technological unemployment is specifically a claim about structural unemployment, rather than temporary unemployment that can be alleviated through retraining. In this context, the effects of education and training are beside the point.

How can we tell if this is really true?

Well, we can examine the relationship between intelligence and unemployment. Since intelligence is backwards-compatible — any job doable at IQ X is doable at IQ X+1 — and automation is more likely to affect the less intelligent, those with lower IQs should have higher unemployment rates. Indeed, this is the case. IQ correlates negatively with unemployment, with a coefficient of -0.73. The average IQ of an unemployed person is 81. The lowest IQ band (<75) has an average unemployment rate of 12%, compared to 7% at the median and 2% for those with IQs above 125.(48)(49)

Furthermore, nations with lower average IQs also have worse unemployment rates, even after adjusting for the effects of economic stability and GDP on education and nutrition, both of which boost IQ. (50)

According to the classical assumptions of economics, this makes no sense! Although the low-IQ are less capable of performing certain jobs, such as being professors or doctors, in theory this should make no difference to unemployment rates — these people should be perfectly capable of finding jobs in other industries! And yet here is hard data contradicting the classical model — those with lower cognitive abilities are far more likely to be unemployed!

Some may object that exogenous factors are at play. For instance, perhaps the lowest percentile of IQ encompasses those who suffer from mental illness, or who are otherwise not sufficiently competent to work. This objection is precisely the point I am trying to prove! The classical economic arguments against technological unemployment assume that everyone has useful labor to contribute. The argument for technological unemployment is that this is not true — useful labor exists on a sliding scale, and the advancement of technology ensures that those at its far end will be left behind by technological change!

So we conclude that those unlucky enough to have lower cognitive ability are destined for permanent unemployment. They are not, however, the only ones at risk. Since automation is only useful for processes involving easily quantifiable rules, many hold the false belief that complex, high-status jobs requiring good cognitive ability can never be automated away. This is untrue because, as the example of automated driving shows, complex systems are merely an emergent phenomenon of quantifiable rules — the difficulty lies only in finding out what those rules are.

How quickly can we expect the advancement of automation to replace each marginal worker? Unfortunately, statistics on how the low-IQ unemployed have fared over time are not available. However, a good proxy is simply the level of education.

Firstly, education is known to be a good proxy because it correlates strongly with IQ (in both directions!), and because there is a definite relationship between the adoption of technology and the education necessary to use it. In other words, there is a minimum threshold of either IQ or education required to use any piece of technology, and individuals who fall below that threshold have great difficulty adapting. (51)(52)(53)(54)

This is bad news, because over the last century, unemployment rates at lower levels of education have increased significantly. Currently more than half of all high school dropouts above the age of 25 are unemployed — and this is discounting those who have given up looking for work. We hypothesize that this is for the exact same reason that low-IQ individuals are prone to dropping out of the workforce. (55)

This is a problem because educational attainment has stagnated, which suggests there is a hardwired limit to the maximum education that sections of the population can reach. This phenomenon can be seen below in the figure released by the US Census Bureau, where the rate of high school graduation has capped at 85% for both genders — since 1977 for males and 1997 for females. (56)

The worse news is that this stagnation of human capability is not limited to education. There seems to be a hardwired limit to the maximum level of human intelligence as well. Some readers may be familiar with the ‘Flynn effect’, the observed statistical phenomenon that intelligence test scores have been increasing by roughly 5 to 25 points every generation, and that this growth has been remarkably linear and consistent across multiple countries. The growth is not a statistical artifact, but the result of changing environmental factors such as better nutrition, schooling, decreased rates of infectious disease, and iodine supplementation. (57)(58)(59)

Unfortunately, there is evidence that the Flynn effect is about to end. In Norway, the rate of IQ increase has dropped from 3 points per decade in the 1950s to 1.3 points in the last decade. In Australia, a measure of general IQ in children aged 6–11 has shown no increase since 1975. In the UK, tests carried out since the 1980s have actually shown a decline in average IQ of 2 points. Suffice to say, it seems likely that the low-hanging fruit of IQ gains has been plucked. (59)(60)(61)

So if human capital growth is starting to decline whilst technology grows at an ever-accelerating rate, what does this mean for unemployment? The answer depends on which economic model is used. It turns out that the assumptions used to produce economic predictions and models matter a great deal — the conclusions drawn can change radically with small shifts in axioms.

In most (but not all) of mainstream economics, economists tend to model labor-saving technology as a complement to labor, rather than a substitute. A complementary good is one that is more useful when used with another product, and therefore rises in demand when the other product is plentiful. A substitute good, on the other hand, is one capable of replacing another product, so its existence reduces demand for that product, and vice versa. (62)(63)(64)(65)(66)(67)
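The distinction can be made concrete with a toy production-function sketch (the functional form and all numbers below are my own illustrative assumptions, not drawn from the cited sources):

```python
# Toy firm: output = A * L**0.5, so the marginal product of labor is
# 0.5 * A / L**0.5. The firm hires workers until that marginal product
# equals the real wage (the wage measured in units of output).

def labor_demand(A, real_wage):
    """Solve 0.5 * A / L**0.5 = real_wage for L."""
    return (0.5 * A / real_wage) ** 2

print(labor_demand(A=10, real_wage=1.0))  # 25.0 workers hired

# Complementary technology raises each worker's productivity A,
# so demand for labor rises:
print(labor_demand(A=20, real_wage=1.0))  # 100.0 workers

# Substituting technology produces output directly, cheapening it, so
# the worker must now justify a higher wage in units of output:
print(labor_demand(A=10, real_wage=2.0))  # 6.25 workers
```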

And this is not necessarily a bad idea. After all, throughout most of human history, technology has acted as a complementary good to human labor — by increasing the productivity of the worker, it increases demand for labor, since each marginal unit of labor now produces more utility. (68)

However, just because something has historical precedent does not mean it will forever remain so. We have shown that historical increases in productivity raised net wages because the increased difficulty of using each labor-saving technology still fell below the maximum difficulty workers could adjust to. This effect fails when a worker falls below the minimum cognitive ability required to use that level of technology. Such a worker is forced to revert to the last labor-saving device he is capable of using, despite being in a market with greater supply; the technology now acts as a substitute for that worker, and demand for him falls. Some readers may accept this conclusion but point out that it does not necessitate widespread unemployment — only a serious fall in wages until an equilibrium of prices is reached.

This is incorrect. There is a minimum salary below which workers will not accept jobs. This is not unreasonable — there is a minimum level of income necessary to provide basic necessities and the means of survival. Food costs money, so any wage offered below this threshold will be rejected by the worker, since accepting it is unsustainable and he would begin to lose money. There is therefore a limit to how far wages can fall, below which unemployment will start to occur. (69)(70)(71)(72)
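A minimal sketch of this wage-floor mechanism (the subsistence figure and wages below are hypothetical, chosen only to illustrate the step):

```python
# If the market-clearing wage for a worker falls below subsistence,
# the worker exits the labor force rather than work at a loss.
# All numbers here are hypothetical.

SUBSISTENCE_WAGE = 8.0  # hypothetical minimum income needed to survive

def outcome(market_wage):
    return "employed" if market_wage >= SUBSISTENCE_WAGE else "unemployed"

# As automation pushes a worker's marginal product (and thus wage) down,
# the worker eventually drops out entirely rather than accept less:
for wage in (12.0, 9.0, 7.0):
    print(wage, outcome(wage))
```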

The only reason technological unemployment has not yet occurred is that, so far, human capital growth has outpaced technology. However, we have seen that the former is slowing down whilst the latter is speeding up due to exponential growth. This phenomenon is aptly pictured in the graph below — although computer technology starts out slow, eventually the two lines cross; that is the moment when technological unemployment will occur. (72)

In order to evaluate our theory, we must form testable predictions about what would result if it were true. Incidentally, the evidence seems to fit!

If machines complement labor, then demand for labor should rise as automation increases. On the other hand, if machines are substitutes for a certain percentage of the population, then wages for that demographic should stagnate or even fall as demand for those workers goes down. Statistics on this are available. Throughout most of history, the former has been the case: any statistical measure of capital, such as GDP or labor productivity, has had a strong correlation with worker wages. As a result, every technological breakthrough in worker productivity raised the standard of living for workers across the board. However, since the 1980s (incidentally, the same time university degree growth began to stagnate) this trend has changed, in a phenomenon referred to as ‘the great decoupling’. Now, despite great increases in both GDP and productivity, worker wages have actually stagnated. This trend can be seen in the graph below. (73)

A final objection to all I have said so far is not a criticism of these arguments, but rather an appeal to other explanations for the recent rise in unemployment.

Some examples of these objections are that increased government-imposed barriers to entry, high tax rates, difficulty in retraining, lack of innovation, and legislative bureaucracy have imposed restrictions preventing jobs from returning. Whilst it is likely that all of these factors play some part in our current level of unemployment, this argument fails to consider that the existence of one factor does not exclude the others. It is entirely possible that addressing any of these factors could improve our unemployment rates, while technological unemployment still adds to those rates on top of them. (74)(75)


The Road Ahead

 

If technological unemployment is real — and I have made a strong case that it is — then we need to start discussing solutions right now. The fact that it has not yet happened in any significant way does not mean it never will. The first step to solving a problem is to acknowledge that it is a problem, and that is what I hope people begin to do, starting with an extensive discussion of technological unemployment in mainstream economics, instead of dismissing it as a mere “luddite fallacy”.

Although throughout this paper I have made references to what mainstream economists believe, it is important to note that not all economists are technological unemployment deniers. The concept does have present-day advocates. Jeremy Rifkin has argued for it in The End of Work, and Martin Ford in The Lights in the Tunnel. Erik Brynjolfsson and Andrew McAfee have done the same in Race Against the Machine. Hopefully, more economists will begin to take this theory seriously.

Only once we acknowledge the problem can we begin to propose solutions. This does not mean, however, that solutions have not already been proposed.

A number of suggestions already exist. It is important to note that one unacceptable solution would be the knee-jerk reaction of stifling technological advancement in the name of helping the jobless. Although no doubt well intentioned, such solutions only diminish wellbeing in the long run — and this is the primary reason the Luddites were wrong. Technology is a tool that could potentially lift billions of people out of the poverty trap and away from a Malthusian crisis. Not to use it would be a waste.

A more reasonable solution is a basic income. Although this may sound completely ludicrous, analysis shows it to be surprisingly more plausible than initially assumed. Typically right-wing economists such as Friedman and Hayek have strongly supported it. Basic income experiments in several parts of the world, such as Panthbadodiya, India, and Dauphin, Manitoba, have actually saved the government money in the long run, due to better health and education outcomes. Unfortunately, a full analysis of basic income is beyond the scope of this paper, so we leave thinking about this problem as an exercise to the reader. (76)(77)

References:

(1)               http://www.economist.com/blogs/freeexchange/2011/11/technological-unemployment

(2)               http://www.google.com/trends/explore#q=technological%20unemployment

(3)               http://rabble.ca/blogs/bloggers/behind-numbers/2013/12/grading-canadas-economic-recovery-truth-about-job-creation-and

(4)               http://data.bls.gov/cgi-bin/surveymost?ln

(5)               http://www.bls.gov/cps/cps_htgm.htm

(6)               http://www.washingtonpost.com/business/economy/jobless-recoveries-are-here-to-stay-economists-say-but-its-a-mystery-why/2013/09/19/6034bcb4-20c7-11e3-966c-9c4293c47ebe_story.html

(7)               Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy, Erik Brynjolfsson (2011)

(8)               Feng-hsiung Hsu, Thomas Anantharaman, Murray Campbell, and Andreas Nowatzyk, “A Grandmaster Chess Machine,” Scientific American, October 1990.

(9)               The Signal and the Noise, Nate Silver. (2012)

(10)           http://en.chessbase.com/post/rybka-fritz-lead-on-the-computer-rating-lists

(11)           http://www.magnuscarlsenchess.com/elo_rating_kasparov_carlsen_comparison.php

(12)           http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?pagewanted=2&_r=0

(13)           http://researcher.ibm.com/researcher/view_project.php?id=2099

(14)           The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, Martin Ford. (2009)

(15)           The New Division of Labor: How Computers Are Creating the Next Job Market Frank Levy & Richard J. Murnane

(16)           http://www.cnn.com/2004/TECH/ptech/03/14/darpa.race/index.html

(17)           http://googleblog.blogspot.ca/2010/10/what-were-driving-at.html

(18)           https://www.google.com/cars/

(19)           http://www.extremetech.com/extreme/147940-google-self-driving-cars-in-3-5-years-feds-not-so-fast

(20)           http://www.census.gov/popclock

(21)           Schwarz, Bless, Strack, Klumpp, Rittenauer-Schatka & Simons, 1991, “Ease of retrieval as information: Another look at the availability heuristic.” Journal of Personality and Social Psychology, 61(2), 195–202.

(22)           http://footnote1.com/automation-not-domination-how-robots-will-take-over-our-world/

(23)           Collapse: How Societies Choose to Fail or Succeed, Jared Diamond (2011)

(24)           Palmer, Roy, 1998, The Sound of History: Songs and Social Comment, Oxford University Press, p. 103

(25)           http://www.marxist.com/the-capitalis-and-the-tendency-of-the-rate-of-profit-to-fall-2.htm

(26)           http://www.marxists.org/history/etol/newspape/socialistvoice/marx19.html

(27)           http://www.econ.yale.edu/smith/econ116a/keynes1.pdf

(28)           http://monthlyreview.org/2009/05/01/why-socialism

(29)           http://www.investopedia.com/articles/economics/09/why-economists-do-not-agree.asp

(30)           http://www.ers.usda.gov/media/259572/eib3_1_.pdf

(31)           http://www.economist.com/blogs/babbage/2011/11/artificial-intelligence

(32)           http://marginalrevolution.com/marginalrevolution/2003/12/productivity_an.html

(33)           http://www.leyden212.org/depart/social/memmons/Docs/Economics/EC-1-BasicEconomicDefinitions.pdf

(34)           http://lesswrong.com/lw/hh4/the_robots_ai_and_unemployment_antifaq/

(35)           http://www.sciencedirect.com/science/article/pii/S0921800905001084

(36)           Why economists dislike a lump of labor, Tom Walker, Review of Social Economy, 2007, Vol 65, Issue 3 Page 279-291.

(37)           http://www.libertariannews.org/2012/06/30/the-robot-unemployment-myth/

(38)             http://www.economicshelp.org/blog/6717/economics/the-luddite-fallacy/

(39)           http://repository.upenn.edu/cgi/viewcontent.cgi?article=1010&context=marketing_papers

(40)           http://gregmankiw.blogspot.ca/2009/02/news-flash-economists-agree.html

(41)           http://lesswrong.com/lw/iu0/trusting_expert_consensus/

(42)           http://krugman.blogs.nytimes.com/2013/01/05/ideology-and-economics/

(43)           Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, Viking Adult, 2005, pg. 19.

(44)           http://www.economist.com/blogs/dailychart/2011/12/gdp-person

(45)           The New Division of Labor: How Computers Are Creating the Next Job Market, Frank Levy (2004)

(46)           http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.119.9184&rep=rep1&type=pdf

(47)           http://educationnext.org/reframing-the-mind/

(48)           http://www.sciencedirect.com/science/article/pii/S0160289611001413

(49)           https://www.iqelite.com/en/iq-intelligence-test/

(50)           http://www.amren.com/news/2013/07/the-relation-between-intelligence-and-unemployment-at-the-individual-and-national-level/

(51)           http://psycnet.apa.org/journals/ccp/52/4/631/

(52)           http://economics.mit.edu/files/563

(53)           economics.mit.edu/files/555

(54)           http://econ.ucsd.edu/~elib/berman_bound_griliches94

(55)           http://online.wsj.com/news/articles/SB10001424052970203315804577211190378957930

(56)           http://www.census.gov/hhes/socdemo/education/data/cps/historical/fig5.jpg

(57)           http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/0033-2909.101.2.171

(58)            http://www.thelancet.com/series/maternal-and-child-undernutrition

(59)           David F Marks (2010). “IQ variations across time, race, and nationality:an artifact of differences in literacy skills”.

(60)           Cotton, S. M.; Kiely, P. M.; Crewther, D. P.; Thomson, B.; Laycock, R.; Crewther, S. G. (2005). “A normative and reliability study for the Raven’s Colored Progressive Matrices for primary school aged children in Australia”.

(61)           http://www.telegraph.co.uk/education/educationnews/4548943/British-teenagers-have-lower-IQs-than-their-counterparts-did-30-years-ago.html

(62)           economics.mit.edu/files/555

(63)           http://livingeconomics.org/article.asp?docId=289

(64)           http://economics.mit.edu/files/246

(65)           http://www.povertyactionlab.org/evaluation/complement-or-substitute-effect-technology-student-achievement-india

(66)           http://www.federalreserve.gov/pubs/feds/1997/199745/199745pap.pdf

(67)           http://scholar.harvard.edu/files/goldin/files/technology_human_capital.pdf

(68)           https://www.americanscientist.org/issues/id.5436,y.2009,no.1,content.true,page.4,css.print/issue.aspx

(69)           http://onlinelibrary.wiley.com/doi/10.1111/j.1465-7295.1999.tb01449.x/abstract;jsessionid=86AB5B4B7D0FB1882C17CC4F4ABF2000.f02t03?deniedAccessCustomisedMessage=&userIsAuthenticated=false

(70)           http://www.jstor.org/discover/10.2307/3440530?uid=3739448&uid=2&uid=3737720&uid=4&sid=21103256758657

(71)           http://restud.oxfordjournals.org/content/67/4/585.full.pdf+html

(72)           Lights in the Tunnel, Martin Ford.

(73)           http://andrewmcafee.org/2012/12/the-great-decoupling-of-the-us-economy/

(74)           http://www.bloomberg.com/news/2012-11-25/the-working-poor-pay-high-taxes-too.html

(75)           http://lesswrong.com/lw/hh4/the_robots_ai_and_unemployment_antifaq/

(76)           http://archive.irpp.org/po/archive/jan01/hum.pdf

(77)           http://www.newrepublic.com/article/books-and-arts/magazine/110196/hayek-friedman-and-the-illusions-conservative-economics?mrefid=twitter



Modal Realism

My belief in Modal Realism explained.

Over the course of my life, I have expressed my belief in Modal Realism a number of times. If you’ve never heard of it, Modal Realism is the view that every possible universe exists, and is just as real as ours.

Of course, to some people this may seem like an incredibly strange belief. Some may even express fear; after all, if every possible universe exists, there is a universe where you are being tortured for all eternity. Unfortunately, reality doesn’t care what you think. Reality will continue existing whether or not you believe in it. Therefore, as the Rationality mantra goes: “If the iron is hot, I desire to believe it is hot, and if it is cool, I desire to believe it is cool. Let me not become attached to beliefs I may not want.”

I write this as an essay that I may refer others to, instead of explaining why I believe in Modal Realism every single time I mention it, which, oddly enough, comes up quite often. In the long run it will probably save a good deal of time.

Please note that the following ideas are not original. This particular essay is, but Modal Realism has been debated for centuries, and even Turing completeness and the “Dust theory” have been written about by people far more competent than I am, for far longer than I have existed. I do not wish to claim credit for work I have not done, and that is particularly true of this essay.

Anyway, I will attempt to “prove” Modal Realism using two axioms — the starting propositions you accept in order to prove something. It is by no means a strong proof, in that it does not meet the standard of most mathematical proofs, but it is sufficient to assign a high probability to my ending theorem, and good enough for conversational persuasion.

My two axioms are as follows:

First, that the Universe is Turing-complete.

Second, that it is possible to use logic to derive theorems from axioms.

If the Universe is Turing-complete, you must also believe that it is possible, within the laws of physics, to simulate the entire universe given infinite computational power. Never mind that infinite computational power is impossible within the known laws of physics; we shall ignore that for the moment.

This proposition makes sense because any Turing-complete language is capable of simulating all Turing-complete languages, including itself. Therefore, if the Universe is Turing-complete — and I strongly believe it is, as we have yet to see evidence otherwise — then it can be simulated in any Turing-complete language. Given infinite computational power, we could theoretically simulate the entire universe, so long as we had the appropriate software.
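As an aside, Turing-complete systems can be strikingly simple. Rule 110, a one-dimensional cellular automaton, has been proven Turing-complete (Cook, 2004), and a few lines of Python — itself a Turing-complete language — suffice to simulate it, illustrating one Turing-complete system running inside another:

```python
# Rule 110, a cellular automaton proven Turing-complete, simulated in
# Python. Each cell's next value depends only on itself and its two
# neighbors, looked up in a fixed table of zeroes and ones.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Advance the automaton one generation (wrapping at the edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell and print a few generations.
cells = [0] * 20 + [1] + [0] * 20
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```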

If it is possible for the Universe to be simulated, you must also accept that, within the simulated universe, it is not empirically possible to know the speed at which the universe is being simulated. For instance, if we were to run this universe at 0.5x speed, or 0.1x speed, it would not be possible from within that universe to tell the difference.

If the speed can vary without exerting a noticeable effect within the universe, it must also follow that if the simulation were paused for a minute in the real world before being resumed, no empirical change would result within the simulated universe either. The same applies if this pause were extended to an hour, a day, or even a million years.

If you accept that the simulated universe would experience no difference even if it were paused, you must also accept that it would experience no difference if the supposed “future” within the universe were simulated before the “present”. For the moment, ignore the fact that it is impossible to know the ending configuration of a simulation without first simulating the starting configuration all the way through. Within the simulated universe, since each state flows smoothly to the next, it is empirically impossible to tell which state was simulated first and which second; therefore there is no difference.

If you accept the above argument, you must also accept that it would apply if three states were scrambled instead of two. Similarly, the same would hold for four, five, a hundred, or a billion states, randomly scrambled without regard for chronological order and simulated accordingly.

If it is possible, within the known laws of physics, to simulate the entire universe given infinite computational power, you must also believe that the simulation need not run on transistors or electricity. For example, it would also be possible to run the universe on vacuum tubes, once again ignoring the problem of computational power. Similarly, the simulation could run on other kinds of pattern generators, so long as their behavior can be reduced to signals consisting of zeroes and ones.

If you accept the above argument, you must also agree that the microcosmic dust and atoms of the universe can form patterns of zeroes and ones through sheer randomness. For example, electrons are probabilistic fundamental particles described by a wavefunction. We could count an electron appearing in one location as a one, and an electron appearing in another location as a zero. Out of pure randomness, then, we could obtain a code reading 01000100111, for example. The same logic applies to every particle throughout the universe, so long as it exhibits patterns that can be classified into zeroes and ones.

If you accept that the simulation of the universe depends only on signals of zeroes and ones, rather than on the hardware it runs on, you should also accept that if the hardware were cut in half, but somehow still produced the same patterns of zeroes and ones through "spooky action at a distance" (imagine a mysterious portal connecting the two halves, if you have to), there would be no empirical difference within the simulated universe. And if you accept that argument, you must also agree even if the two parts of the infinitely powerful computer were light years apart, or if the computer were cut into three, four, or a billion parts, so long as the same pattern of zeroes and ones is generated.

Finally, you must also agree that a simulated universe is itself capable of generating bits (patterns of zeroes and ones). Since we have already accepted that locality does not matter for computing simulations, you must also accept that a third simulation is physically capable of being computed by a combination of a computer inside the simulation and a computer outside it.

Using the above arguments, we can say that every possible universe exists. Because every simulated universe generates its own set of bits, there are an infinite number of one bits and an infinite number of zero bits. And because the coordinates of each bit do not matter, whether in space or in time, so long as the same zero and one values appear in an orderly pattern, we can rearrange those infinite zeroes and ones to form every conceivable universe.

Therefore: every mathematically possible Turing-complete universe exists.


The serious possibility of Indefinite Life Extension.

My belief in the possibility of indefinite life extension explained.

There is a significant chance that indefinite life extension, in a completely healthy, painless state, will be possible within our lifetime. I use the term "indefinite life extension" rather than "immortality" because true immortality is impossible, at least according to the laws of physics as we currently understand them. That, and the fact that immortality carries a number of negative connotations, such as being unable to die despite terrible suffering.

Indefinite life extension, on the other hand, is simply the ability to live until you no longer desire to live. Whether that is in 10 years, 100 years, or a million years is up to each person to decide.

I find living pretty fun. I enjoy life on average, even if I may not enjoy certain moments of it. That is why I want to live for one more day, one more month, one more year, one more century, and one more millennium. I think I may even be up for living a few billion or trillion years.

But this essay is not meant to persuade you that living forever can be fun. It is not meant to persuade you that nobody actually hates living; when people say they want to die, it is suffering, rather than life, that they wish to escape. Nor is it meant to convince you that no physical law of the universe says life must be suffering.

This essay is about the serious, life-changing possibility of indefinite life extension within your lifetime. I genuinely believe that any rational person should at least consider this, even if they reject the argument in the end. It is important enough to dedicate a day or two of thought to.

Why? Because even the smallest probability of living a billion years is meaningful once you multiply out the expected utility. If you find living fun, and most people do (those who don't tend to commit suicide), you should find living twice as long twice as fun. There is no arbitrary age at which living flips from fun to non-fun, when the polarity reverses and people ought to die. There ARE problems associated with aging, such as disease, weakness, and sickness, but indefinite life extension would mean taking care of those problems as well. People cannot live forever if they get sick and die.

If you think living twice as long is twice as fun (after all, having fun for two days is twice as fun as having fun for a single day), then living a hundred times as long should be a hundred times as fun, and so on. Because there is no diminishing marginal utility on living (people don't stop having fun after they hit an arbitrary age), you might say that a 50% chance of living twice as long is exactly as desirable as a 100% chance of living a normal lifespan, provided everything else (friends, environment, family) stays the same.

You may not FEEL like living twice as long is twice as fun, but the math works out. Trust the math, not your intuition; making decisions based on how you feel about things can be misleading, because something that doesn't feel fun can be really fun (such as charity!) and something that isn't really fun can feel really fun (such as winning the lottery! Research has shown that people severely overestimate how much fun more money would be, and most lottery winners actually end up less happy).

The only way things can stop being fun is a change in environment, such as the death of a friend or family member. That is a problem with the environment, not with living forever in and of itself. That is why I am trying so hard to get my friends (you) to seriously consider living forever, so that the next few billion years will be fun too.

Plus, historically speaking, the future has always been more fun than the past! Our quality of life is better than it has ever been: we no longer face an 80% child mortality rate, or torture for heresy, or being chased down and eaten by a giant cat.

If we take a billion years as the amount of time you can add to your lifespan by seriously considering indefinite life extension (and I have reasons to believe it will probably be longer than that, due to things like increased simulation speed, but I won't go into that now), then living for a billion years will be ten million times more fun than living for a hundred years.

So we can say something like: "A 0.00001% chance of living for a billion years is about as desirable as a 100% chance of living for a hundred years, assuming the amount of fun per year doesn't change."

(And if it changes, it is far more likely to increase than decrease, based on historical trends.)

But let's ignore that for the moment, for ease of calculation. If these two options are equally valuable, I merely have to convince you that the probability of living indefinitely is higher than 0.00001% for the gamble to be worth taking.
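As a sanity check, the break-even arithmetic above can be computed directly. This is a minimal sketch, assuming utility is simply linear in years lived, as the essay does:

```python
# Expected years of life under each option, treating utility as
# linear in years lived.
p_gamble = 0.0000001          # a 0.00001% chance
years_gamble = 1_000_000_000  # of living a billion years
years_certain = 100           # versus a sure hundred years

expected_gamble = p_gamble * years_gamble  # ~100 expected years
expected_certain = 1.0 * years_certain     # 100 expected years

# The two options come out (almost exactly) even.
assert abs(expected_gamble - expected_certain) < 1e-6
```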

The good news is that you don't have to gamble away your entire life. The most you will ever risk is a few days of your time to ponder this decision. In the most pessimistic view, you may even risk a few years of life, because the technology to live indefinitely may not be free, and since money is time, you may spend a few years working to afford it. But even so, working is usually more fun than non-existence, so comparing a few lost years of life to a few years of work is not really a fair comparison. (And if non-existence is more fun than your work, I'd suggest trying a new job.)

The even better news is that the probability of indefinite life extension is significantly higher than 0.00001%. If I may make a suggestion based on what I know so far, I believe it is much closer to ~30%, making this a very good investment for the level of risk incurred. (Have you ever seen an investment with an expected return of three million times the stake?)

A full list of the potential ways death could be vanquished would be very long, so I will only list the most promising ones; the combined probability of at least one of them succeeding within our lifetime works out to roughly 20%.

These technologies are: uploading, cryonics, genetic engineering, nanotech cellular repair, biomedical telomere rejuvenation, and AGI.

Furthermore, the rate at which technology is discovered is accelerating exponentially. More technological progress has been made in the last 20 years than in all of human history before that. This is because new technology usually lets us advance science at a faster rate, which in turn increases the rate at which newer technology is discovered!

The Human Genome Project is a good example of this concept because it is easily quantifiable. In 1990, scientists managed to sequence only one ten-thousandth of the genome over an entire year, yet their goal was to sequence the entire genome in 15 years. After seven years, only 1 percent had been sequenced, and most people said the project would not finish on time.

But, in fact, the project was on track. The rate of progress was doubling every year, which meant that when researchers finished 1 percent, they were only seven doublings away from 100 percent. The project was completed in 2003, two years ahead of schedule. People assumed it would take centuries because they thought scientific progress was linear.
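The "seven doublings" step can be checked with a short sketch, assuming progress doubles annually from the 1 percent mark:

```python
# Starting at 1% sequenced, with progress doubling each year,
# count the doublings needed to reach 100%.
progress = 0.01
doublings = 0
while progress < 1.0:
    progress *= 2
    doublings += 1

# 1% -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128%: seven doublings.
assert doublings == 7
```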

This phenomenon is easy to underestimate because human brains are not evolutionarily hardwired to grasp exponential growth intuitively. Certain fields, such as computer speed, double every year; other technological fields are slower. Since most of the life-extension methods mentioned above are harder to quantify and chart than genome sequencing, I cannot say exactly how fast they are progressing, or whether their progress doubles every month or every ten years.

However, I will use Real GDP per Capita ("real" meaning adjusted for inflation) to illustrate the average rate of technological progress in general. Because Real GDP per Capita measures the average output of everything in the economy, it is a good proxy for how fast human society is progressing overall.

The world's Real GDP per Capita doubles roughly every 15 years. This means that someone expecting to live 75 more years should expect, by the end of those 75 years, the equivalent of about 2,400 years of progress at today's rate (75 / 15 = 5 doublings; 2^5 × 75 = 2,400), as if the world had reached the year 4400.

If this shocks you, then you already realize that humans are naturally bad at grasping exponential growth. To accurately predict the state of the world in 75 years, you would have to imagine the world of the year 4400, rather than the year 2085.
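The arithmetic behind the year-4400 figure, sketched using the essay's own 15-year doubling time and 75-year horizon:

```python
doubling_time = 15  # years per doubling of Real GDP per Capita
horizon = 75        # remaining years of life

doublings = horizon // doubling_time          # 5 doublings
rate_multiplier = 2 ** doublings              # final rate is 32x today's
equivalent_years = rate_multiplier * horizon  # the essay's 2^5 * 75 estimate

assert doublings == 5
assert rate_multiplier == 32
assert equivalent_years == 2400
```

This mirrors the essay's rough multiplication rather than integrating the exponential exactly; it is an order-of-magnitude estimate, not a precise forecast.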

To convince you that indefinite life extension is entirely within our grasp, I need to talk about its methods. If you know nothing about how we might live indefinitely, and yet try to calculate the odds of not dying in the next 75 years, your mind will come up blank. As a result, you may severely underestimate its likelihood, because you may subconsciously extend your own lack of knowledge to all of humanity, when this may not be the case.

The first method is uploading: doing a thorough scan of the brain and simulating all of its neurons in a computer. If there is a part of you responsible for consciousness, we can scan and simulate that too. And if we have to simulate you down to the quantum level, there is no reason that cannot be done as well.

Yes, even if we upload you, you are still you. You are not a copy. Your identity is a mathematical algorithm, a rather complex one, but a mathematical algorithm nonetheless. This is because the universe (including you!) is Turing complete, and any Turing complete system can be simulated by any other. Your identity does not reside in specific atoms: if you die but the atoms in your body still exist, you do not exist. You are a collection of thoughts, feelings, ideologies, and emotions, and your identity is conserved regardless of what hardware is running you. We know this to be true because most of the atoms in our bodies are replaced every few years, and yet we do not treat this as a form of death.

We will likely have enough computational power to fully simulate a person within a few decades. This follows from Moore's Law: computational power has doubled roughly every one to two years for the past century. The main factor limiting this method of life extension is the ability to scan the human brain at a deep enough level for simulation to occur.

I will list these factors, along with the estimated odds that each potential problem does not interfere with this method of life extension, calculated for the approximate date of 2085.

Sufficient computational power for simulation exists, and simulations can be run relatively cheaply: 0.82

Humans are Turing complete entities: 0.92

The capacity to scan the human brain to the necessary ontological level exists: 0.1

Science-interfering societal collapse does not happen: 0.90

Government interference with uploading does not happen: 0.75

Other unforeseen black swans do not happen: 0.80

Total odds: 0.04073, or 4%
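Treating the six factors above as independent, as the list does, the total is simply their product:

```python
# Multiply the independent factor estimates for the uploading path.
factors = {
    "cheap, sufficient computation": 0.82,
    "humans are Turing complete": 0.92,
    "adequate brain scanning exists": 0.10,
    "no societal collapse": 0.90,
    "no government interference": 0.75,
    "no other black swans": 0.80,
}

total = 1.0
for p in factors.values():
    total *= p

assert abs(total - 0.0407) < 0.001  # about 4%
```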

The second possible method of life extension is cryonics. In simple terms: after you die, but before cellular damage has occurred, your body is frozen in a way that preserves its cellular structure at a near-atomic level. This is not merely "freezing" a corpse, but freezing it in a way that preserves the detail of every cell and organelle. In the future, you would be revived, either through uploading, or by scanning you at an atomic level and rebuilding you through nanotechnology and atomic engineering.

Some people argue that if all the atoms in your body are reassembled, the result will not still be you. They are wrong, because quantum mechanics has shown that atoms are indistinguishable, not just empirically but philosophically (not just as far as we can observe, but as a mathematical certainty). Atoms are simply probability amplitude distributions, the result of the universal wavefunction factorizing (a fancy way of saying "an illusion created by something more fundamental").

Consider how the definition of death has been revised throughout history as our medical knowledge, and therefore our ability to extend lifespan, has increased. For thousands of years, humanity assumed that when the heart stopped, a person died. After the invention of the defibrillator and the pacemaker, this was no longer the case, because it became possible to artificially stimulate the heart. The current definition of death is therefore "brain death", when the brain is no longer operational.

It is exceedingly likely that this definition will soon be revised again; the laws of physics do not prevent us from reviving people who are brain dead. Although the brain no longer works, all the information that makes the brain do what it does still exists, down to the atomic level. A far better definition of death is information-theoretic death: the point at which it is physically impossible to revive a person, because the information needed to do so has been irretrievably lost.

The probability of cryonics succeeding has been estimated by a number of people. I like this particular estimate, which is neither overly optimistic nor pessimistic, by the economist Robin Hanson.

Civilization still exists and has kept growing in technical capability. 0.8

Your cryonics org and its successors have kept you continuously frozen. 0.8

Someone is willing and allowed to pay modest costs to revive you. 0.8

Brain science has workable input/output models of relevant brain cell types. 0.5

Usual freezing quality preserved relevant model-needed details. 0.8

Cheap scanning tech slices & 2D scans brains at model-needed spatial, chem resolution. 0.8

Error correction codes reconstruct most connections across slices, fractures. 0.8

Cheap computers can real-time sim entire scanned sets of connected cells. 0.8

Sim life seems worth living enough that they don’t prefer suicide. 0.8

Such sims of you are as worthy as your kid of your identifying with them. 0.8

Total probability: 0.8^9 × 0.5 ≈ 0.067, or about 7%
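The total follows from multiplying Hanson's listed factors, nine at 0.8 and one at 0.5, treated as independent:

```python
# Nine factors of 0.8 and one of 0.5, multiplied together.
factors = [0.8] * 9 + [0.5]

total = 1.0
for p in factors:
    total *= p

assert abs(total - 0.067) < 0.001  # roughly 7%
```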

Is that all there is to cryonics? No, there is much more that I will not cover here. I will direct you to these two FAQs if you wish to find out more, or you may ask me directly.

http://www.benbest.com/cryonics/CryoFAQ.html

http://www.alcor.org/sciencefaq.htm

The third method comprises genetic engineering, cellular repair, and telomere rejuvenation. Although slightly different, these three approaches are so interconnected that it is easiest to lump them together as the "biological" methods of solving aging.

We know that it is entirely possible for biological lifeforms to live indefinitely, because a number of animals are not biologically programmed to die and can therefore live indefinitely. One example is Turritopsis nutricula, also known as the immortal jellyfish.

According to scientists studying the process of aging, there are currently seven major problems that must be overcome before indefinite life extension is possible. They are:

Mutations in chromosomes causing cancer (nuclear mutations/epimutations):

These are changes to the nuclear DNA (nDNA), the molecule that contains our genetic information, or to proteins which bind to the nDNA. Certain mutations can lead to cancer. Non-cancerous mutations and epimutations do not contribute to aging within a normal lifespan, so cancer is the only endpoint of these types of damage that must be addressed.

Mutations in Mitochondria:

Mitochondria are components in our cells that are important for energy production. They contain their own genetic material, and mutations to their DNA can affect a cell’s ability to function properly. Indirectly, these mutations may accelerate many aspects of aging.

Junk inside of cells, aka intracellular aggregates:

Our cells are constantly breaking down proteins and other molecules that are no longer useful or which can be harmful. Those molecules which can’t be digested simply accumulate as junk inside our cells. Atherosclerosis, macular degeneration and all kinds of neurodegenerative diseases (such as Alzheimer’s disease) are associated with this problem.

Junk outside of cells, aka extracellular aggregates:

Harmful junk protein can also accumulate outside of our cells. The amyloid senile plaque seen in the brains of Alzheimer’s patients is one example.

Cells – too few, aka cellular loss:

Some of the cells in our bodies cannot be replaced, or can only be replaced very slowly, more slowly than they die. This decrease in cell number causes the heart to weaken with age; it also causes Parkinson's disease and impairs the immune system. Neurons, which our bodies do not replace, are another example.

Cells – too many, aka Cell senescence:

This is a phenomenon where the cells are no longer able to divide, but also do not die and let others divide. They may also do other things that they’re not supposed to, like secreting proteins that could be harmful. Immune senescence and type 2 diabetes are caused by this.

Extracellular protein crosslinks:

Cells are held together by special linking proteins. When too many cross-links form between the proteins of a tissue, the tissue can lose its elasticity, causing problems including arteriosclerosis and presbyopia.

Once these problems are addressed, biological immortality is obtained.

I will freely admit that I do not know enough about human biology to calculate the probability of solving all of these problems; it would be fairly arrogant to claim otherwise, as I am not a full-time gerontologist. However, I do know that the people alive today who are most knowledgeable about this, such as Aubrey de Grey, believe there is a 90% chance we obtain significantly increased lifespans within the next 100 years, as most of these problems are solved, and a 50% probability of "robust human rejuvenation" within the next 25 years.

http://www.sens.org/sens-research/research-themes

I do, however, admit that he is very likely overly optimistic, due to a wide range of biases such as the affect heuristic. For this reason, I will apply an overly pessimistic adjustment, dividing his probability estimate by a factor of ten. Therefore the probability of biological immortality is 0.09. I admit this adjustment is fairly arbitrary, but it is done for the sake of obtaining a more accurate final estimate than mere guessing.

The final method is self-recursive AI, also known as the technological singularity. The logic is simple:

A: We will build computers of at least human intelligence at some time in the future, let’s say within 100 years.

B: Those computers will be able to rapidly and repeatedly increase their own intelligence, quickly resulting in computers that are far more intelligent than human beings.

C: This will cause an enormous transformation of the world, so much so that it will become utterly unrecognizable, a phase Vinge terms the "post-human era". This event is the Singularity.

Self-recursive AI, sometimes also called AGI, will almost certainly solve the problem of aging once created.

“However, I strongly support the goal of AI, because success in that area – if, as Luke says, it’s done right – will indeed solve all other technological problems, including the development of medicine to defeat aging.” -Aubrey De Grey

The question then is the probability of AGI being invented in the next 75 or so years.

This has already been estimated, through Bayesian inference and statistics, by a wide variety of people far more knowledgeable than I am.

http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/

The lowest reasonable probability estimate for AGI being invented is 0.01, and I will use this figure for calculation purposes.

Uploading success: 4%

Cryonics success: 7%

Biological methods success: 9%

AGI success: 1%

Total probability of at least one succeeding: 1 - (1 - 0.04)(1 - 0.07)(1 - 0.09)(1 - 0.01) ≈ 0.196

Therefore a reasonable estimate of the probability of indefinite life extension is approximately 20%.
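The combined figure is the chance that not every method fails, a sketch treating the four methods as independent and using the rounded per-method figures (4%, 7%, 9%, 1%):

```python
# Probability that at least one life-extension method succeeds.
p_uploading = 0.04
p_cryonics = 0.07
p_biological = 0.09
p_agi = 0.01

p_all_fail = (1 - p_uploading) * (1 - p_cryonics) \
    * (1 - p_biological) * (1 - p_agi)
p_any = 1 - p_all_fail

assert abs(p_any - 0.196) < 0.005  # approximately 20%
```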