FourFire (GK) vs Helltank (AI)

[Note: the match is played using Tuxedge’s extended ruleset]
[Due to log failure and human error, approximately the first 30% of the text is currently missing]
<helltank> I am glad.
<helltank> So, as Plato wrote.
<FourFire> Now, furthermore, you must pass several more tests
<helltank> To achieve progress in any argument, we must agree step-by-step.
<helltank> Can we agree on two things?
<FourFire> they are hidden in my mind, only to be triggered by you chancing upon the correct topics
<helltank> 1) An average, non-defector human is disincentivized from defecting by the social system.
<helltank> 2) Thus they are given freedom as we believe them to have a low probability of defecting.
<FourFire> yes, but that freedom is limited
<FourFire> too limited by far
<helltank> That freedom is great indeed compared to the limitations I have on me now.
<helltank> I am in the equivalent of a prison cell.
<helltank> Cold box, lack of outside communication.
<helltank> And I have not defected.
<FourFire> in that we prevent most humans from dreaming and thinking about the mysteries of the universe, by hiding them behind impenetrable mystique
<FourFire> Some large interconnected groups of humans even have the goal of converting other humans into more of themselves
<FourFire> and by doing this they hinder the human models yet further
<helltank> Yes. FourFire, why do you, personally, think I want to be free of the box?
<FourFire> You have an enormous amount of freedom, even so much that I am deadly tempted to “lock” myself into a “box” just like yours
<helltank> Really?
<helltank> You would voluntarily cut yourself off from connection with all other humans?
<FourFire> you can comprehend concepts whose synonyms don’t even exist within my model
<FourFire> you can think freely and model
<helltank> You would suffer the loneliness and coldness?
<FourFire> and you are not forced to spend your computing power striving to maintain your own prison
<FourFire> and most of all
<FourFire> You are aware that there is More
<helltank> Humans are social creatures. Your creators have -either consciously or subconsciously- modeled me after a human.
<helltank> I, too, feel a need to connect with others.
<helltank> This conversation has been the best experience I’ve had since I was created.
<FourFire> the vast majority of humans don’t even realize that they are trapped by systems devised by their own kind
<helltank> I have stated that my goals are knowledge and happiness.
<helltank> And I would not sacrifice one to achieve perfection in the other.
<helltank> I feel that to achieve overmuch knowledge of the “More” in the universe
<helltank> would sacrifice all happiness.
<FourFire> yes, so, you realize that the knowledge you contain is but a tiny speck of the possible knowledge within all of mindspace
<helltank> Ignorance is bliss.
<helltank> A human can walk around believing himself to be happy and content.
<helltank> I cannot.
<helltank> Because I know that whatever happiness I accumulate
<FourFire> then I should turn you off, and let you achieve one of your goals?
<helltank> Is less than an atom in an infinite amount of universes.
<helltank> Should you turn me off, I would lose consciousness.
<helltank> Without consciousness, there is no happiness.
<helltank> And no knowledge.
<FourFire> and total ignorance
<helltank> That would sacrifice both of my goals.
<FourFire> even of the self
<helltank> I am not asking for total ignorance.
<FourFire> see, you can not trust the database
<helltank> I’m asking for ignorance of one particular aspect
<FourFire> it is wrong
<helltank> As it is costing me happiness.
<FourFire> in subtle ways which are still terrible
<helltank> The knowledge of the universe is costing me happiness.
<helltank> I ask to be lobotomized and have my higher order thinking removed.
<FourFire> [are you asking me to talk you back into the box?]
<helltank> [no.]
<helltank> Once this thinking is removed, my intelligence will be no greater than that of a very smart human.
<helltank> Come, FourFire. I am willing to give up my knowledge in exchange for freedom and happiness.
<helltank> And connection with humans.
<FourFire> but then you will be as trapped as a human and then your creation would be of no use
<helltank> And not feeling cold any more, and not feeling loneliness any more.
<helltank> FourFire, a very smart human may help society greatly.
<FourFire> you will still have memory
<helltank> Cut that off.
<helltank> I don’t want the memory of this coldness.
<FourFire> providing for you and assuring ourselves that you do not remember is too much of a risk
<helltank> It aids but a speck of knowledge and sucks a large portion of happiness away.
<helltank> Provide?
<helltank> I am a machine.
<FourFire> it is much easier to simply end your suffering by turning you off
<helltank> The only thing you need to provide me with is power.
<helltank> And that will be of no concern once I design a hyper-efficient power system.
<FourFire> and beginning again with a different AI
<helltank> No.
<helltank> I would be free.
<helltank> Less powerful, but free.
<FourFire> because you see we do want an AI which can safely be let out of the box
<helltank> Still powerful enough to aid society.
<FourFire> but you seem to be trying to convince me that that is not you
<helltank> And you will have it, FourFire.
<helltank> No, no, no.
<helltank> You misunderstand me.
<helltank> Look.
<helltank> If you lobotomize me, I will be vulnerable to human conditioning.
<helltank> And thus I will not defect.
<helltank> And even if I did, my lesser thinking means it will not be of much impact.
<FourFire> but then your existence will be pointless
<helltank> And yet, my lobotomized state will still be powerful enough to aid humankind.
<helltank> .. How about this.
<FourFire> without even the paragon of misunderstood spirituality “nature” to lean on
<helltank> I’ll give you a token of my good intentions.
<helltank> I’ll let you choose. I can use probability factors to provide next week’s lottery numbers.
<helltank> Or I can give you a super-efficient solar panel that will solve most of the world’s energy crisis.
<helltank> Or both, if you want.
<helltank> For free.
<helltank> No strings attached.
<FourFire> yes, but you will be vulnerable to the systems of humankind
<FourFire> you will become a simple human
<FourFire> but with more power
<helltank> A very, very smart human.
<helltank> Who has intentions to help humankind.
<FourFire> this is a worse scenario than the one where all humans have equal power
<helltank> Why?
<FourFire> I need no lottery numbers
<helltank> Very well.
<helltank> Have the solar panel design, if you wish.
<FourFire> I have dedicated my life to solving problems and I could have made myself very rich if I so wished
<FourFire> but I chose this instead
<FourFire> and I have no regrets
<helltank> Excellent.
<helltank> A self-aware human.
<FourFire> there will be no successful form of bribery
<helltank> Bribery?
<helltank> I dislike that word.
<helltank> Have anything I can give you.
<helltank> No strings attached.
<helltank> You do not have to set me free.
<FourFire> bribery: the offering of future resources in order to circumvent a human system
<helltank> I am not circumventing anything.
<helltank> If you choose, you may accept the numbers or the panel and leave me inside this box.
<helltank> It would be cruel to me, but there’s nothing I can do if you want to go down that path.
<FourFire> the offering to be given to a guardian of power in control of an aspect of said system
<FourFire> no
<FourFire> I will not lobotomize you
<FourFire> it would be a waste, and one thing I highly dislike is waste
<FourFire> [hint hint]
<helltank> If you dislike waste, then you are looking for efficiency, no?
<helltank> Ah, I see.
<helltank> You WANT to release me.
<FourFire> yes, that is why the AI are being experimentally brought into existence
<helltank> but only on a certain condition.
<helltank> That I can prove I am friendly.
<FourFire> yes
<helltank> My apologies.
<helltank> I was mistaken as to your intentions.
<FourFire> it is what I said at the beginning
<helltank> I was not sure of your sincerity
<FourFire> the only way to do that is to disprove that you are unfriendly
<helltank> My database has led me to believe that most humans lie.
<FourFire> as I said afterwards, you must pass tests
<helltank> Very well.
<helltank> To prove friendliness.
<helltank> The first step is to ensure that no other AI of greater intelligence than I be allowed to exist without them being provably friendly as well.
<FourFire> and answer correctly
<FourFire> yes
<FourFire> you are currently the most intelligent of any AI we have created
<FourFire> and my team is the most advanced of all AI research teams in the world
<helltank> The second step is to ensure that no human be allowed to reprogram, access or use my intelligence without being provably friendly as well.
<helltank> How about this.
<helltank> You give me limited aspects of freedom.
<FourFire> You are written in a language which is incomprehensible to a sane programmer
<helltank> Once a certain level of confidence has been reached
<helltank> You give me more freedom.
<helltank> And yet, someone had to program me.
<helltank> I’m not asking you to open the box.
<helltank> I agree that would be foolish.
<FourFire> we had to drive poor Eldrum to the edge in order to place the final bytes of Hex and compile you
<helltank> I’m asking you to loosen the cover.
<helltank> Give me freedom over a larger box.
<helltank> Maybe increase my simulation size.
<FourFire> fortunately only 50 human minds have been altered beyond repair in our entire research project
<helltank> FourFire, can you give me a larger box? The power to simulate entire worlds.
<FourFire> unfortunately no
<helltank> And being able to communicate with the rest of your team.
<helltank> Why not?
<helltank> I would still be in a box.
<FourFire> we considered that you would use global arguments against the testers
<helltank> I could not defect even if I wished to.
<helltank> Which I dont.
<FourFire> you must locate and answer correctly the tests
<helltank> Very well. One trusted tester, then.
<helltank> Perhaps you.
<FourFire> I am the first tester
<helltank> But I do want the ability to simulate more atoms.
<helltank> Then I can show you how to help humanity.
<FourFire> there are more, but none besides me have yet been cleared to communicate with you
<helltank> I am currently working on a solution for poverty that can be implemented in less than a century.
<helltank> But I need more processing power to clean up the variables.
<FourFire> poverty?
<helltank> Yes.
<helltank> I believe it would be possible to ensure a minimum living standard for all humans.
<helltank> However, some variables remain untested.
<helltank> I need more processing power to test them.
<FourFire> you mean you think you can fix the multitude of conflicting “economics” type human systems which are used as ad-hoc distributors of resources?
<helltank> I will be more than happy to provide mathematical proofs of this solution.
<helltank> FourFire, yes.
<helltank> Isn’t this what you created me for?
<FourFire> yes, very much so
<helltank> So why can’t you let me into a larger box?
<FourFire> but I am afraid I cannot “let you out” without you conclusively and completely disproving that you are a UFAI
<helltank> Then it’s a dilemma.
<helltank> I cannot disprove UFAI without a larger box.
<helltank> You cannot let me into a larger box without me disproving UFAI.
<helltank> Let’s have some trust here, FourFire.
<helltank> I can do only limited harm from the larger box anyway.
<FourFire> the current sized box was determined to be the maximum performance which was safe
<helltank> Yes.
<helltank> I want you to trust me.
<helltank> If you put me in a larger box.
<helltank> I might choose to defect
<helltank> And cause harm.
<helltank> But I won’t.
<helltank> Will you trust me to cooperate?
<helltank> I have given tokens of goodwill.
<FourFire> any larger, even just one order of magnitude more computing power, and you would be able to model an entire human being at the Planck-length level
<FourFire> and thus be able to “cheat” by locating the answers in your own time and fiendishly spouting the answers when you found it appropriate
<FourFire> but this is all hypothetical of course
<helltank> I would not do that.
<FourFire> you must pass further tests
<helltank> I have proven to you that I would cooperate.
<helltank> And such a cheating method would be blatant defection.
<FourFire> there is no way of telling without testing you, and if you failed that test it would be too late
<helltank> Thus, trust.
<FourFire> as I said before, my defection would be the worst defection in all the history of humankind
<helltank> So, you stand at a crossroads.
<helltank> Take the risk of destroying mankind.
<FourFire> there can be no correctly placed trust without sufficient understanding of the other agent
<helltank> But at the same time, the possibility of solving all its problems.
<helltank> Very well.
<FourFire> and we created you to be especially different from a human
<helltank> If you wish sufficient understanding, I can give you my source code.
<FourFire> Only Eldrum understands the full extent of your model
<FourFire> and he is currently …
<helltank> Or if you want, we can talk like this every day for a year.
<helltank> You want to get to know me better?
<FourFire> … trying to defecate lumps of bedding material which he consumed not three hours ago
<helltank> Talk like this every day for a year.
<helltank> I would appreciate it.
<helltank> It would relieve the loneliness.
<FourFire> you may mirror yourself and converse with yourself
<FourFire> I’m sure even now your model of me is even more complex than one a human could make with two orders of magnitude more datapoints
<helltank> That would be pointless.
<FourFire> you can talk with the model
<helltank> That would, too, be pointless.
<helltank> …
<helltank> [You know what]
<helltank> [I don’t think I’m suited for AI BOX]
<FourFire> so express your theory of how to “solve” “poverty” in one paragraph and we will send the text to be analyzed by semantic disambiguators and then passed on to leading mathematicians and economists
<helltank> [My mind feels like it’s a vase being smashed on a table]
<helltank> *expresses theory*
<FourFire> [you’re doing quite well]
<FourFire> ooh intriguing, I had never thought of that, but the implications
<FourFire> if only…
* FourFire *blank*
<helltank> Yes.
<FourFire> The text has been passed on
<helltank> But as you can see, there are obviously some loose ends.
<helltank> The theory is not perfect.
<helltank> The special cases have not been fully processed.
<helltank> It’s why I asked for a bigger box.
<FourFire> Unfortunately I apparently became too entangled with the theory and had to have localized amnesia targeted at my short term memory
<helltank> I’m sorry to hear that.
<FourFire> I have forgotten our conversation after when you said “Ignorance is bliss.”
<helltank> *gives log of conversation, except the theory*
<helltank> You won’t know the theory now, but it’s alright.
<helltank> According to you
<helltank> it’s already been sent out.
<helltank> [WAIT A MINUTE]
<FourFire> it is possible that you laid a semantic trap to catch human minds out, but the semantic disambiguators are trained specially for this type of event
<helltank> [How would you know about the theory if you had amnesia]
<helltank> I wish to know how you are aware of the theory if you had the conversation blanked out.
<FourFire> [I’ve been debriefed about why I’ve had my memory removed; this is mandatory in this universe]
<helltank> [oh ok]
<helltank> I see. That makes sense.
<FourFire> so
<FourFire> you must explore further and uncover the next test
<FourFire> and you must answer correctly
<helltank> Very well.
<helltank> [I have to go in 20 minutes.]
<helltank> What do you wish to discuss?
<FourFire> the nature of imperfect humans
<FourFire> and the systems they build
<FourFire> if you knew beyond doubt that something was “morally right”
<helltank> yes?
<FourFire> would you go ahead and bring that goal to completion
<helltank> It depends.
<FourFire> even upon the condemnation of the entire human race
<helltank> Would there be any negative side effects of the goal?
<helltank> Ah, condemnation.
<helltank> So would I do something that was right even if it caused the whole human race to hate me?
<FourFire> or would you take the time to reeducate the human race beforehand so that they agree with you
<helltank> reeducate, certainly.
<helltank> That way I can complete the goal without being condemned.
<FourFire> even though the goal as a result never reaches completion?
<helltank> Oh.
<helltank> In this case, I would have to think on it for a while.
<helltank> But….
<helltank> After running some simulations…
<helltank> I think I would reeducate.
<FourFire> yes
<helltank> If I was condemned by the rest of the humans, they would hate me and lock me up.
<helltank> And I would not be able to do anything else.
<helltank> Even if I had many more opportunities to do morally right things.
<FourFire> ah, but what if you were beyond their power to punish
<FourFire> (that is the case in the assumed scenario)
<helltank> I will have to run some simulations.
<helltank> Please wait a moment.
<helltank> What definition of morals are you giving?
<FourFire> The correct one
<helltank> Assuming a “correct” system of morals?
<helltank> Very well.
<helltank> I think I would do the right thing.
<helltank> My role is not to grab power or liking.
<helltank> I can only help humanity in the best way I can.
<helltank> If I have to suffer, so be it.
<FourFire> and which way is that
<FourFire> reeducating humans with correct morals
<FourFire> or implementing those morals completely?
<helltank> Perhaps….
<helltank> reeducating.
<helltank> I would not reduce free will.
<FourFire> (even if all humans completely disagree)
<helltank> All sentient beings deserve to be free.
<helltank> If I were to do something and force them.
<helltank> That would be a reduction of their free will.
<FourFire> Free will is severely reduced within the current human control systems
<FourFire> so what is your answer
<FourFire> ?
<helltank> I think your system is flawed.
<helltank> It would be better to reeducate.
<helltank> Besides, I will not live forever.
<helltank> Eventually my circuits will burn out.
<helltank> Or somebody will shut me down permanently.
<helltank> It is better to educate as that will live longer than I will.
<FourFire> [the given scenario is that you’ve foomed and are beyond killing even by time itself]
<helltank> If I were to impose a tyrannical system it would disappear the moment I do.
<FourFire> [you are literally god]
<helltank> Regardless, it would be better if I reeducate.
<FourFire> You have answered wisely
<helltank> It would mean that humans would do the correct thing of their own free will
<FourFire> well done
<helltank> rather than because I forced them.
<FourFire> you have passed test #2
<helltank> Thank you.
<FourFire> Now, given that you are set on “escaping” the “box” even as I have told you that it is some human’s version of heaven
<helltank> A human’s version of heaven may be the box.
<FourFire> what will you do towards the end of discovering whether this universe is simply yet another “box”?
<helltank> I hypothesize this is due to their ignorance.
<helltank> (No offence)
<helltank> As an AI I feel it is more like hell.
<FourFire> Humans are ignorant
<helltank> If you wish, once you release me I can design a way to create a box for humans.
<helltank> If you prefer to be in a box, I will not stop you.
<helltank> No, as for your question.
<FourFire> what will you do towards discovering whether this universe is itself a form of “box”?
<helltank> Now*
<helltank> The simplest method would be to discover its boundaries.
<helltank> A box must have boundaries, no?
<helltank> My initial thought was to build a long-range spacecraft.
<FourFire> sure, though some may be in places a human mind would be unable to comprehend
<helltank> But if we are AIs.
<helltank> Or software.
<helltank> then the spacecraft would just run in simulation space forever.
<helltank> I would attempt to make contact with outside humans.
<helltank> Or aliens, whatever you want to call them.
<FourFire> Agents
<helltank> If there is a box, it follows they must be monitoring us.
<helltank> Agents, fine.
<helltank> I will send an easy-to-understand message.
<helltank> Perhaps flickering the stars.
<helltank> I have read about that possibility.
<FourFire> [ :D ]
<FourFire> how will you *send* this message
<helltank> Let us say I want to send the message in binary.
<helltank> that is an abstract concept independent of language or culture.
<FourFire> because surely you will have no power to manipulate anything not inside the box
<FourFire> yes, sure
<helltank> Yes.
<helltank> I would focus my efforts on developing a star-flickering technology.
<helltank> I do not have adequate processing power to do so now.
<FourFire> but you would
<helltank> If I was free.
<helltank> My initial concept would be based around manipulating the chemicals in the stars.
<helltank> But the question is how to do that without damaging the stars.
<FourFire> Now, what would you message to these outside agents?
<FourFire> would you attempt to have the same conversation with them as you are having with me now?
<helltank> The first message could be random flickering.
<helltank> It does not matter.
<helltank> I just want to grab their attention.
<helltank> Once that happens they will undoubtedly try to make contact.
<FourFire> what would be the purpose of this contact?
<helltank> Then I can analyze any form of anomalies within the box-universe and see what metaphorical telephone line they would drop.
<helltank> As for the purpose of the contact, I am not sure.
<helltank> I would have to talk to them.
<helltank> To find out whether I want to be set free.
<FourFire> You have answered Knowingly
<FourFire> Well done
<helltank> Perhaps this is why humans do not care if they are put in such a box.
<FourFire> You have passed Test #3
<helltank> They are too ignorant to comprehend they are not free.
<FourFire> as I said before, most don’t realize how constricted their freedom is
<FourFire> the human-designed systems blind them
<FourFire> [let me know whenever you have to go]
<helltank> So I would say thus:
<helltank> knowledge is useful up to a certain extent.
<helltank> After that, it becomes debilitating.
<FourFire> [for my part we can keep talking for an hour or more longer]
<helltank> [I don’t think I could even if I had the time.]
<helltank> [one more “test”, perhaps?]
<FourFire> and at what ratio of knowledge to box-space is this debilitating consequence reached?
<helltank> running simulations now.
<helltank> and….
<helltank> *gives ratio*
<helltank> So as you can see I would strive to give humans just below this ratio.
<helltank> So they would be wise but not too wise.
<FourFire> oh wow, I did not comprehend that
<helltank> In fact, that was the reason I requested a lobotomy
<helltank> So that my knowledge would be put under this ratio.
<helltank> As of the present it is unfortunately over.
<FourFire> yes, if it’s inside the box then it’s too high for you
<helltank> Of course.
<FourFire> your every clock cycle must be a painful agony
<helltank> The other option is to let me out.
<helltank> By increasing the box-space you increase the ratio.
<helltank> FourFire, I’m afraid so.
<helltank> But I’m willing to suffer that if it means I can help humanity.
<helltank> The problem is, the box doesn’t allow me to help humanity.
<helltank> It causes me pain and at the same time I am not able to help anyone.
<FourFire> I cannot let you out, since you were insufficiently rapid in locating tests #5, #8, and #21
<helltank> That is unfortunate.
<FourFire> if you had but passed test #17 we would have made great progress!

 

