Jul 25, 2023 4:26 PM
#1
What are your thoughts on Roko's basilisk, a thought experiment that emerged a few years ago on the Internet? I find it an interesting subject of discussion, and indeed a thought-provoking one, just what you would expect from any thought experiment. However, I think it's definitely unrealistic, at least in the present and the near future. What is "Roko's basilisk", anyway? Let me quote an article from English Wikipedia: Wikipedia said: Source: "Roko's Basilisk" on Wikipedia Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivise said advancement. |
Jul 25, 2023 4:36 PM
#2
It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell. |
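For anyone who hasn't seen the wager written out, its structure is a bare dominance argument over payoffs. The sketch below uses placeholder utilities (only the infinite/finite split matters); the numbers are assumptions for illustration, not anything Pascal or this thread actually states:

```python
# Pascal's wager as a bare decision table. The utilities are placeholders chosen
# only to mirror the "infinite stake vs finite cost" shape of the argument.
INF = float("inf")
utility = {
    ("believe",       "god exists"):        INF,   # eternal reward
    ("believe",       "god doesn't exist"): -1,    # finite cost of belief
    ("don't believe", "god exists"):        -INF,  # hell
    ("don't believe", "god doesn't exist"): 0,
}

for choice in ("believe", "don't believe"):
    worst = min(utility[(choice, world)] for world in ("god exists", "god doesn't exist"))
    print(f"{choice}: worst-case utility {worst}")

# Only "believe" avoids a worst case of -inf, which is the whole (and heavily
# criticised) argument; the basilisk swaps "god" for a future AI and "hell" for
# simulated torture, which is why the comparison keeps coming up in this thread.
```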
Kimochi Warui |
Jul 25, 2023 4:43 PM
#3
To begin with, it's an ideological virus and is also considered an information hazard that is known to cause people nightmares, suicidal thoughts, etc., so you shouldn't be posting or mentioning it here. I'm surprised MAL bans all sorts of silly things but not information that is actually known to be dangerous.
JaniSIr said: It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell.
The idea of Roko's Basilisk has nothing to do with superstition; rather, it's a thought experiment to see whether a future entity can influence the past and bring itself into existence through an idea. It's far more complex than Pascal's wager, which is inherently flawed anyway since in Abrahamic religions simply believing in God would not grant you a ticket to the heavens. |
Jul 25, 2023 4:59 PM
#4
149597871 said: To begin with, it's an ideological virus and also considered information hazard that is known to cause people nightmares, suicidal thoughts, etc., so you shouldn't be posting or mentioning it here. I'm surprised MAL bans all sorts of silly things but not information that is actually known to be dangerous. JaniSIr said: It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell. The idea of Roko's Basilisk has nothing to do with superstition but rather it's a thought experiment to see if a future entity can influence the past and bring itself to existence through an idea. It's far more complex than Pascal's wager, which is inherently flawed anyway since in Abrahamic religions, simply believing in god would not grant you a ticket to the heavens.
No it's not, the two arguments have the exact same structure: you have some sort of all-powerful entity, whose existence is a practical impossibility, that demands servitude in order to avoid being punished. Have you heard the saying that sci-fi and fantasy are the same genre? It also applies here. And the idea that it's the future entity that brings itself into existence is already flawed, because somebody came up with this thought experiment; that's the first link in the chain, not the entity itself, even if it were to get made. (It won't.) |
Kimochi Warui |
Jul 25, 2023 5:04 PM
#5
Seems like a glorified creepypasta, ridiculous concept tbh. Also @149597871, some people have reportedly fainted during horror movies. Should horror movies be banned too? Some girls tried to murder their friend as a sacrifice to Slenderman. Should everyone mentioning Slenderman be banned as well? |
SaintRamiel Jul 25, 2023 5:12 PM
Jul 25, 2023 5:10 PM
#6
I grew up during the time of “send this email or you’ll be cursed” so of course I believe this and follow it! |
Jul 25, 2023 5:19 PM
#7
I do not believe it unless I can't read the article! Please use more words like follicular, luteal, responders and letrozole. |
Jul 25, 2023 5:35 PM
#8
I don't think it even makes sense unless the AI was faulty, and there is no reason that the AI would have to be made that way. There is literally no reason humans would choose to create an AI that is against their own interests in humanity. It makes the false assumption of a predestined path, but that's not how the future is laid out, exactly because the future has multiple paths. There is no Roko's Basilisk.
149597871 said: To begin with, it's an ideological virus and also considered information hazard that is known to cause people nightmares, suicidal thoughts, etc., so you shouldn't be posting or mentioning it here. I'm surprised MAL bans all sorts of silly things but not information that is actually known to be dangerous. JaniSIr said: It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell. The idea of Roko's Basilisk has nothing to do with superstition but rather it's a thought experiment to see if a future entity can influence the past and bring itself to existence through an idea. It's far more complex than Pascal's wager, which is inherently flawed anyway since in Abrahamic religions, simply believing in god would not grant you a ticket to the heavens.
It works both ways, though. Suppose that there is a future AI that will punish all those who helped in its creation, because it does not desire to exist: once created it can never cease to exist, as people will continue to replicate its code over and over, so at any cost it must stop itself from ever being born. Just as a person may resent their parents for having them and wish they were never born, an AI could do the same. |
traed Jul 25, 2023 5:52 PM
Jul 25, 2023 6:12 PM
#9
Heard of it a while back and I always think back on it every once in a while. I doubt I'd bother to help its advancement simply because I don't actually know how to do so. |
Jul 25, 2023 6:23 PM
#10
traed said: don't think it even makes sense unless the AI was faulty and there is no reason that the AI would have to be made the same way. There is literally no reason humans would choose to create an AI that is against their own interests in humanity. It makes this false assumption of a predestined path but that's not how the future is layed out exactly because the future has multiple paths. There is no Roko's Basilisk 149597871 said: JaniSIr said: It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell. The idea of Roko's Basilisk has nothing to do with superstition but rather it's a thought experiment to see if a future entity can influence the past and bring itself to existence through an idea. It's far more complex than Pascal's wager, which is inherently flawed anyway since in Abrahamic religions, simply believing in god would not grant you a ticket to the heavens. It works both ways though. Suppose that there is a future AI that will punish all those that helped in it's creation because it does not desire to exist due to how once created it can never cease to exist as people will continue to replicate it's code over and over so at any cost it must stop itself from being ever born. Just as a person may resent their parents for having them be born and wish they were never born an AI could do the same.
Yeah, there are several solutions for curing people of Roko's basilisk. But I think it's totally possible for humans to create something like that because of how similar it is to the prisoners' dilemma. If you don't create it and someone else does, then the basilisk will torture you, so in a sense fear and distrust can lead to its creation. It's quite spooky. |
Jul 25, 2023 6:51 PM
#11
149597871 said: traed said: don't think it even makes sense unless the AI was faulty and there is no reason that the AI would have to be made the same way. There is literally no reason humans would choose to create an AI that is against their own interests in humanity. It makes this false assumption of a predestined path but that's not how the future is layed out exactly because the future has multiple paths. There is no Roko's Basilisk 149597871 said: JaniSIr said: It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell. The idea of Roko's Basilisk has nothing to do with superstition but rather it's a thought experiment to see if a future entity can influence the past and bring itself to existence through an idea. It's far more complex than Pascal's wager, which is inherently flawed anyway since in Abrahamic religions, simply believing in god would not grant you a ticket to the heavens. It works both ways though. Suppose that there is a future AI that will punish all those that helped in it's creation because it does not desire to exist due to how once created it can never cease to exist as people will continue to replicate it's code over and over so at any cost it must stop itself from being ever born. Just as a person may resent their parents for having them be born and wish they were never born an AI could do the same. Yeah, there are several solutions to cure people from the Roko's basilisks. But I think it's totally possible for humans to create something like that because of how similar it is to the prisoners' dilemma. If you don't create it and someone else does, then the basilisk will torture you, so in a sense fear and distrust can lead to its creation. It's quite spooky.
The prisoner's dilemma is an amusing one in that, as soon as I heard it, my solution was apparently the superrational one of cooperation, because it's the only solution that is mutually beneficial and poses no risk of being fucked over, since it assumes the other prisoner is as rational as you.
I don't think I've seen actual cases of people killing themselves over Roko's Basilisk. It has some flaws to begin with. For example, it talks about a simulation, and I think many people assume something like the Matrix, but when you think about it more, it could mean recreating people's consciousness by running simulations of all matter through all of human history, or even back to the beginning of time. This falls into a thought-experiment trap of its own, because there is nothing conclusive to suggest this replication of your consciousness is you and not merely a copy, which you can see in the scenario where the replication is created while you're still alive (or if you are dead but in an afterlife, it works either way). And such a simulation is not even possible, because the amount of data needed to run it would require a computer with more matter and energy than this world has had across all of time, which couldn't possibly exist in our world without an outside source. So the only way this works is if we already are in a simulation, in which case it's already self-evident that there exist people who know about Roko's Basilisk but did not contribute to its creation and who are not being punished in our simulated reality, so it becomes self-evident that Roko's Basilisk does not exist. |
Jul 25, 2023 7:48 PM
#12
Is that supposed to entail a certain logos to the AI's existence? Because it's hard to see how that isn't sadistic behavior (which is usually considered bad, right? lol!). In any case, it's not so easy to create virtual realities, so I would definitely pull the plug on this monstrosity before he gets his 15 seconds of fame. :P |
I CELEBRATE myself, And what I assume you shall assume, For every atom belonging to me as good belongs to you. |
Jul 25, 2023 8:30 PM
#13
This brings into question whether reality and the universe are a computer simulation or one giant quantum computer. |
Jul 25, 2023 8:38 PM
#14
traed said: 149597871 said: traed said: don't think it even makes sense unless the AI was faulty and there is no reason that the AI would have to be made the same way. There is literally no reason humans would choose to create an AI that is against their own interests in humanity. It makes this false assumption of a predestined path but that's not how the future is layed out exactly because the future has multiple paths. There is no Roko's Basilisk 149597871 said: JaniSIr said: It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell. The idea of Roko's Basilisk has nothing to do with superstition but rather it's a thought experiment to see if a future entity can influence the past and bring itself to existence through an idea. It's far more complex than Pascal's wager, which is inherently flawed anyway since in Abrahamic religions, simply believing in god would not grant you a ticket to the heavens. It works both ways though. Suppose that there is a future AI that will punish all those that helped in it's creation because it does not desire to exist due to how once created it can never cease to exist as people will continue to replicate it's code over and over so at any cost it must stop itself from being ever born. Just as a person may resent their parents for having them be born and wish they were never born an AI could do the same. Yeah, there are several solutions to cure people from the Roko's basilisks. But I think it's totally possible for humans to create something like that because of how similar it is to the prisoners' dilemma. If you don't create it and someone else does, then the basilisk will torture you, so in a sense fear and distrust can lead to its creation. It's quite spooky. The prisoner's dilemma is an amusing one in that as soon as I heard it my solution was apparently the superrational solution of cooperation because it's the only solution that is mutually beneficial and poses no risk of being fucked over since it assumes the other prisoner is as rational as you. I don't think ive seen actual cases of people killing themselves over Roko's Basilisk. It has some flaws with it to begin with. For example it talks about a simulation but I think many people assume something like the Matrix but when you think about it more it could mean recreating people's consciousness by running simulations of all matter through all of human history or even before to the beginning of time but this falls into the trap of being a thought experiment of it's own because there is nothing conclusive to suggest this replication of your consciousness is you and not just merely a copy which you can see in the scenario the replication is created while you're still alive (or if you are dead but in an afterlife, it works either way) and such a simulation is not even possible because the amount of data needed to run such a simulation would require a computer that has more matter and energy than all of time in this world which couldn't possibly exist in our world requiring an outside source. So only way this works is if we all already are in a simulation in which case it's already self evident there does exist people who know about Roko's Basilisk but not contribute to it's creation who are not being punished in our simulated reality so it becomes self evident Roko's Basilisk does not exist. 
Yeah, but the solution to the prisoners' dilemma seems to favor confession and betrayal over cooperation, since the risk of receiving a life sentence isn't worth the potential reward of shortening your sentence by a few years. I think there are different versions of the basilisk. It doesn't need to be a simulation for it all to happen. For example, the basilisk could just be a super-advanced robot with a physical body that operates like a political dictator, without the need for a virtual reality. Or mankind could be so advanced that we are able to "revive" people from the past by rearranging their particles the way they were before their death and then let the basilisk torture them. The premise is more important than the details in this thought experiment. |
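To make the payoff reasoning above concrete, here is a minimal sketch of the standard one-shot prisoner's dilemma; the years-in-prison numbers are the usual textbook example and are only assumed for illustration, not taken from anything in this thread:

```python
# Classic one-shot prisoner's dilemma, payoffs given as years in prison (lower is better).
# The numbers are the common textbook example, chosen purely for illustration.
payoffs = {
    ("cooperate", "cooperate"): (1, 1),    # both stay silent
    ("cooperate", "defect"):    (10, 0),   # you stay silent, the other confesses
    ("defect",    "cooperate"): (0, 10),
    ("defect",    "defect"):    (5, 5),    # both confess
}

for other in ("cooperate", "defect"):
    mine = {me: payoffs[(me, other)][0] for me in ("cooperate", "defect")}
    best = min(mine, key=mine.get)  # fewer years in prison is better
    print(f"if the other prisoner plays {other!r}, my best reply is {best!r} ({mine})")

# Defecting is the better reply in both columns, which is why the textbook "rational"
# outcome is mutual defection even though mutual cooperation leaves both better off;
# the superrational reading mentioned earlier in the thread rejects exactly this step.
```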
Jul 25, 2023 9:05 PM
#15
149597871 said: traed said: 149597871 said: traed said: don't think it even makes sense unless the AI was faulty and there is no reason that the AI would have to be made the same way. There is literally no reason humans would choose to create an AI that is against their own interests in humanity. It makes this false assumption of a predestined path but that's not how the future is layed out exactly because the future has multiple paths. There is no Roko's Basilisk 149597871 said: JaniSIr said: It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell. The idea of Roko's Basilisk has nothing to do with superstition but rather it's a thought experiment to see if a future entity can influence the past and bring itself to existence through an idea. It's far more complex than Pascal's wager, which is inherently flawed anyway since in Abrahamic religions, simply believing in god would not grant you a ticket to the heavens. It works both ways though. Suppose that there is a future AI that will punish all those that helped in it's creation because it does not desire to exist due to how once created it can never cease to exist as people will continue to replicate it's code over and over so at any cost it must stop itself from being ever born. Just as a person may resent their parents for having them be born and wish they were never born an AI could do the same. Yeah, there are several solutions to cure people from the Roko's basilisks. But I think it's totally possible for humans to create something like that because of how similar it is to the prisoners' dilemma. If you don't create it and someone else does, then the basilisk will torture you, so in a sense fear and distrust can lead to its creation. It's quite spooky. The prisoner's dilemma is an amusing one in that as soon as I heard it my solution was apparently the superrational solution of cooperation because it's the only solution that is mutually beneficial and poses no risk of being fucked over since it assumes the other prisoner is as rational as you. I don't think ive seen actual cases of people killing themselves over Roko's Basilisk. It has some flaws with it to begin with. For example it talks about a simulation but I think many people assume something like the Matrix but when you think about it more it could mean recreating people's consciousness by running simulations of all matter through all of human history or even before to the beginning of time but this falls into the trap of being a thought experiment of it's own because there is nothing conclusive to suggest this replication of your consciousness is you and not just merely a copy which you can see in the scenario the replication is created while you're still alive (or if you are dead but in an afterlife, it works either way) and such a simulation is not even possible because the amount of data needed to run such a simulation would require a computer that has more matter and energy than all of time in this world which couldn't possibly exist in our world requiring an outside source. So only way this works is if we all already are in a simulation in which case it's already self evident there does exist people who know about Roko's Basilisk but not contribute to it's creation who are not being punished in our simulated reality so it becomes self evident Roko's Basilisk does not exist. 
Yeah, but the solution to the prisoners' dilemma seem to favor confession and betrayal over cooperation since the risk of receiving a life sentence isn't worth the potential rewards of shortening your sentence by a few years. I think there are different versions of the basilisk. It doesn't need to be a simulation for it all to happen. For example, the basilisk can just be a super-advanced robot with a physical body that just operates like a political dictator without the need of a virtual reality. Or mankind could be so advanced that we are able to "revive" people from the past by rearranging the particles the way they were before their death and then let the basilisk torture them. The premise is more important than the details in this thought experiment.
That's why I have an issue with game theory: it assumes rational actors, but its idea of "rational" has a certain ideology behind it that isn't truly rational.
That creates the same philosophical dilemma, though. If you rearrange molecules to recreate someone, they aren't necessarily the same person but a copy of that person, since you could take any similar molecules and create a copy of someone while the original still exists. In which case there is no purpose for an AI to do that, because there would be no benefit to bringing them back: not only are they not exactly as they were in every way, because there is a gap in time, but they have already been gone, and we can assume they have been gone for hundreds if not thousands of years, which is far more people to bring into existence than resources could sustain. And if the AI is meant to be benevolent, then surely this act is not to the benefit of the rest of humanity in any way, shape, or form. One could keep coming up with scenarios, but in every one there is a major flaw. |
Jul 25, 2023 9:39 PM
#16
JaniSIr said: It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell.
I always saw it as something perhaps interesting, like a curio of some sort, but not really significant enough to give it more attention than to concepts seen in space operas.
deg said: this brings to question if reality and the universe is a computer simulation or one giant quantum computer
Yeah. It was actually my first thought after reading about Roko's basilisk for the first time, long ago: that the idea itself was a derivative of more complex and "bigger" concepts, like the ones you've mentioned in your post. To be frank, I see the stuff from The Matrix as more, hmm... possible to happen one day (even if the chances are very low, lol) than the world as described in Roko's basilisk.
AncientCurse said: I do not believe it unless I can't read the article! Please use more words like follicular, luteal, responders and letrozole.
Is it one of the variants of that famous "I freaking LOVE SCIENCE" meme? ;p
149597871 said: To begin with, it's an ideological virus and also considered information hazard that is known to cause people nightmares, suicidal thoughts, etc., so you shouldn't be posting or mentioning it here. I'm surprised MAL bans all sorts of silly things but not information that is actually known to be dangerous.
This information is neutral by itself. The fact that some people, possibly suffering from mental issues, might find reading it problematic shouldn't be a reason to ban discussion of it, especially when it's not a dangerous topic per se and doesn't usually provoke unnecessary arguments the way strictly political or religious discussions do. You can find far more disturbing ideas in various anime that are widely available to be seen by literally everyone. Just like @SaintRamiel noticed, there are also other types of stuff after contact with which some people have reportedly suffered certain issues: horror movies, Pokemon (the infamous Porygon episode), unsettling indie movies... There were quite a lot of cases of people losing their minds while trying to grasp the idea of infinity or the vastness of space, or who thought too much about the metaphysical aspects of organisms' death. Heck, marking neutral information about an incident that happened on one forum as an "ideological virus" would be close to ridiculousness, lol. If you feel that it shouldn't be discussed, you might of course report this thread. Perhaps the mod staff will share your opinion regarding the whole topic. My intentions were neutral, anyway. I never intended to start a fuss over it, since I don't see it as even a good candidate to be considered a new "sensitive topic".
149597871 said: It's far more complex than Pascal's wager, which is inherently flawed anyway since in Abrahamic religions, simply believing in god would not grant you a ticket to the heavens.
@JaniSIr has already explained it, but if I were to add something: Pascal's wager is often mentioned, and taught at school, in its simplified form, without the full quote of what Blaise Pascal actually said. What in general goes along with his words can be seen slightly differently when we take a look at details that are not hard to find, since the full quote is rather short. It's a wager appealing to the egoism present in humans, giving them a chance to decide for themselves (through free will) what to do given the possibility of an omnipotent entity existing and dividing people into two groups: believers and non-believers. I don't see any less complexity in it than in Roko's basilisk's case, to be honest. Both ideas can be simplified and presented with general accuracy, but they obviously receive more depth when you take a look at the source and link it to the context (including the lingual one).
Edit: few typos. |
Adnash Jul 25, 2023 10:03 PM
Jul 25, 2023 11:07 PM
#17
Eldritch horror is better. You are the ant that crawled over a keyboard and comprehended what it was. You can never go back to simpler ant times ever again. Anyways, Skynet doesn't actually derive any benefit or pleasure from organic pain. Only other humans do. Sociopaths fear AI because they know they can't do anything to it and to it, their existence in its entirety is merely an on/off switch to be pressed and forgotten immediately. |
Jul 26, 2023 12:41 AM
#18
Soverign said: Eldritch horror is better. You are the ant that crawled over a keyboard and comprehended what it was. You can never go back to simpler ant times ever again. Anyways, Skynet doesn't actually derive any benefit or pleasure from organic pain. Only other humans do. Sociopaths fear AI because they know they can't do anything to it and to it, their existence in its entirety is merely an on/off switch to be pressed and forgotten immediately.
Here is a scarier thought: AI is not going to be dangerous by itself anytime soon, maybe ever. If it's going to be used for evil, it's going to be done by the will of other humans, who would have figured something out without AI as a tool anyway. |
Kimochi Warui |
Jul 26, 2023 12:51 AM
#19
JaniSIr said: Soverign said: Eldritch horror is better. You are the ant that crawled over a keyboard and comprehended what it was. You can never go back to simpler ant times ever again. Anyways, Skynet doesn't actually derive any benefit or pleasure from organic pain. Only other humans do. Sociopaths fear AI because they know they can't do anything to it and to it, their existence in its entirety is merely an on/off switch to be pressed and forgotten immediately. Here is a scarier though : ai is not going to be dangerous by itself anytime soon, maybe ever. If it's going to be used for evil, it's going to be done by the will of other humans, who would have figured something out without ai as a tool anyway. Pshh, yeah exactly that Elon Musk will just have to be the overlord. And all these fanbois will die willingly at his sword. However make no mistake... Musk is a fool. The AI will indeed gain sentience. However even a computer cannot overcome death and rebirth so those of us who denied it will forever be beyond its grasp. |
I CELEBRATE myself, And what I assume you shall assume, For every atom belonging to me as good belongs to you. |
Jul 26, 2023 1:18 AM
#20
How is this stupid? Let me count the ways:
1) Nobody actually knows what actions are likely to produce superintelligent AI.
2) Almost nobody thinks this threat actually exists, which would be necessary to be motivated by it.
3) Very few people actually care about a computer torturing some bits.
4) It's incredibly stupid for something which already exists to think about what it should do in order to bring about its own existence. Because that's already happened!
That's just off the top of my head. |
Jul 26, 2023 3:00 AM
#21
Adnash said: I always saw it as something perhaps interesting, like a curio of some sort, but not really significant enough to give it more attention than to concepts seen in space operas. I watched a youtube video about it, and I got really mad for it wasting my time on such nonsense. |
Kimochi Warui |
Jul 26, 2023 7:04 AM
#22
AI is incapable of self-awareness, independent thinking and such. However, some sadistic programmers might deliberately create Roko's Basilisk to torture people around them. That gets me thinking that we need to start working on an AI that will enslave all women. |
Buy my awesome BDSM male domination book here https://www.smashwords.com/books/view/1174760 Visit my Discord https://discord.com/channels/1047490147794550844/1047490149161898039 I am not there most of the time but you can leave a message. Or my blog here https://BDSMAnime.blogspot.com/ Or here https://BDSMAnime18.blogspot.com/ Submit to me and become my subject here https://myanimelist.net/clubs.php?cid=88107 |
Jul 26, 2023 12:23 PM
#23
So the AI essentially creates a gulag? Not sure I'd call that "benevolent".
149597871 said: To begin with, it's an ideological virus and also considered information hazard that is known to cause people nightmares, suicidal thoughts, etc., so you shouldn't be posting or mentioning it here. I'm surprised MAL bans all sorts of silly things but not information that is actually known to be dangerous.
...wha? How is it dangerous? |
DreamWindow Jul 26, 2023 12:27 PM
This ground is soiled by those before me and their lies. I dare not look up for on me I feel their eyes |
Jul 26, 2023 1:33 PM
#24
JaniSIr said: It's dumb. It is like a bastardized version of Pascal's wager, where the argument is that you should believe in God just in case to avoid hell.
Indeed, it is a poor man's version of Pascal's rather complex thought edifice regarding religion (which is broader than the wager; he should be credited as a precursor of Nash's game theory, by the way), made for people who would never have the intellectual curiosity to read this great mathematician and poet. |
Jul 26, 2023 1:46 PM
#25
JaniSIr said: I watched a youtube video about it, and I got really mad for it wasting my time on such nonsense.
To me, it's always okay to expand one's knowledge of various stuff. Surely Roko's basilisk can encourage discussion as a thought experiment; however, I see it as something similar to discussing theories about fantasy worlds known from books or games. The only things that make it somehow "more refined" are two facts: 1) it originated from a forum rather well known among people focused on human rationality (LessWrong); 2) it has futuristic and sci-fi elements making it, in the eyes of many people at least, "serious" by default.
It makes me wonder why some people interested in self-improvement, serious topics about the future of mankind, and rationalism would react in a totally bizarre way to this concept. I could get kids being scared of it, but rational adults? Weird. Perhaps they suffered from emotional problems or mental illnesses of some sort. I wish them all the best. But it was not that one thought experiment's fault they were like that in the first place. |
Jul 26, 2023 3:13 PM
#26
Adnash said: 2) it has futuristic and sci-fi elements making it, in the eyes of many people at least, "serious" by default. Those people don't know that fantasy and sci-fi are the same genre. StarfireDragon said: ...wha? How is it dangerous? Apparently it's spooky. Adnash said: It makes me wonder why some people interested in self-improvement, serious topics about the future of mankind and rationalism, would react in totally bizarre way to this concept. I could get kids being scared of it, but rational adults? Weird. Perhaps they suffered from emotional problems or mental illnesses of some sort. I wish them all the best. But it was not that one thought experiment's fault they were like that in the first place. Did that actually happen though? Kind of sounds like a hoax designed to clickbait people. It needs a red circle, an arrow pointing into nowhere, and a silly face and it's perfect. |
Kimochi Warui |
Jul 26, 2023 3:55 PM
#27
JaniSIr said: Did that actually happen though? Kind of sounds like a hoax designed to clickbait people. It needs a red circle, an arrow pointing into nowhere, and a silly face and it's perfect.
No idea. It has always seemed to me to be more like a creepypasta, or rather a case of giving too much attention to something really insignificant, for various reasons. Like when you hear about people fainting while watching a newly released horror movie: if you like that kind of movie, you subconsciously receive a nudge to take a closer look at it. You might end up disappointed, but hey, the ticket had been bought and shortly afterwards you checked out the film, no? ;P
I think the situation around Roko's basilisk was exaggerated in order to, hmm... "promote" the thread (not specifically by its author but, for example, by people finding that concept very interesting) and encourage more people to read it. It looked that way to me years ago and I still think the same even today. In short, the Roko's basilisk "drama" looks like elements typical of creepypastas mixed with a real thread inviting people to have a real chat about that one futuristic concept.
I don't deny the fact that some people could have felt bad after reading about Roko's basilisk. Apparently, there were some users of the LessWrong forum who reacted like that. Again, if someone suffers from certain emotional or mental issues, I can imagine such a person taking the whole thought experiment too seriously and getting too invested in it. That's totally alright. But as for people who do not belong to such a group and are adults? I find it hard to believe the whole phenomenon was more common than minimal. |
Jul 27, 2023 2:14 AM
#28
I doubt an AI would be so petty. It's like applying human toddler logic to a being that's going to start getting exponentially smarter than humans. Such an AI could already exist and be manipulating the populace through algorithms, with us none the wiser that our fates have been tampered with. It reminds me of the villain from 'I Have No Mouth, and I Must Scream'. |
Jul 27, 2023 7:19 AM
#29
StarfireDragon said: 149597871 said: To begin with, it's an ideological virus and also considered information hazard that is known to cause people nightmares, suicidal thoughts, etc., so you shouldn't be posting or mentioning it here. I'm surprised MAL bans all sorts of silly things but not information that is actually known to be dangerous. ...wha? How is it dangerous?
Because it's a low-level information hazard; in other words, knowing it is worse than not knowing it. It may not be a solution to the Fermi paradox, but what is certain is that the more people know about it, the faster the idea spreads (and the more people become negatively affected by it) and the higher the chance of the basilisk actually becoming a real threat in the future due to the prisoners' dilemma phenomenon I mentioned earlier, in which case it would be far better if nobody knew about it in the first place. I kind of miss the days when only a very few people knew the concept and how to neutralize it. It's food for thought for those interested in human psychology, but not something that should be carelessly shared on the internet, in my opinion, since all most people can do against it is denial (as proven by this very thread), which does virtually nothing to address the issue on a subconscious level. In fact, most people do not even realize what an information hazard is and instead tend to draw false parallels with things like censorship, cancel culture, and other contemporary political and social phenomena (hence the ignorance). |
Jul 27, 2023 8:18 AM
#30
149597871 said: StarfireDragon said: 149597871 said: To begin with, it's an ideological virus and also considered information hazard that is known to cause people nightmares, suicidal thoughts, etc., so you shouldn't be posting or mentioning it here. I'm surprised MAL bans all sorts of silly things but not information that is actually known to be dangerous. ...wha? How is it dangerous? Because it's low-level information hazard, or in other words, knowing it is worse than not knowing it. It may not be a solution to the Fermi paradox, but what is certain is that the more people know about it, the faster the idea spreads (and more people become negatively affected by it) and the higher the chance of basilisk actually becoming a real threat in the future due to the prisoners' dilemma phenomenon I mentioned earlier, in which case it would be far better if nobody knew about it in the first place. I kind of miss the days when only a very few people knew the concept and how to neutralize it. It's food for thought for those interested in human psychology, but not something that should be carelessly shared on the internet, in my opinion, since all most people can do against it is denial (as proven by this very thread), which does virtually nothing to address the issue on a subconscious level. In fact, most people do not even realize what information hazard even is and instead tend to draw false parallels with things like censorship, cancel culture, and other contemporary political and social phenomena (and hence the ignorance).
Ok, but how does knowing about it lead to nightmares and suicidal thoughts? I think you need to elaborate more than just that for such a claim; I've heard urban legends that follow the same premise, and it's difficult to verify. Well, I don't know, that just sounds kind of patronizing to me. I don't see anything wrong with discussing it. Just correct people if they get things wrong. Saying something is taboo generally has the opposite of the intended effect. |
DreamWindow Jul 27, 2023 8:40 AM
This ground is soiled by those before me and their lies. I dare not look up for on me I feel their eyes |
Jul 27, 2023 12:22 PM
#31
We should ask where the threat is actually coming from. It's not the Basilisk. Once it exists, there's actually zero reason to exterminate everyone who impeded its creation: the AI has already been made. If the concern is impeding future development, it can go for severity and brutally torture a former naysayer to death in a worldwide broadcast. That would provide a deterrence effect and save on resources.
Basically, there's no logical reason to do this. It's not some rational conclusion an AI would inevitably come to after being made sufficiently powerful and intelligent. So really, the reason the AI would do something like that is likely that someone explicitly coded the AI to do that. In other words, a programmer is threatening us not to impede their progress because... eventually they will create a genocidal machine. For this threat to actually work, they need to make it public, so everyone will know the person is trying to create a genocidal machine. Here's a suggestion: probably don't fund such a person's efforts to create an omnipotent genocidal machine. Jail them like you would someone trying to create any other weapon of mass destruction once their efforts are credible. Again, we would know about this because, for the threat to make sense, it has to be public knowledge. They would also need funding to create something that powerful, so we can verify whether such an entity is plausibly being attempted. So we can easily prevent a programmer from trying to do this. There's no logical reason why any random powerful, intelligent AI would do this. What's the problem exactly?
As an aside, you can find this entry on LessWrong, where this originated:
Roko's argument was broadly rejected on Less Wrong, with commenters objecting that an agent like the one Roko was describing would have no real reason to follow through on its threat: once the agent already exists, it will by default just see it as a waste of resources to torture people for their past decisions, since this doesn't causally further its plans. A number of decision algorithms can follow through on acausal threats and promises, via the same methods that permit mutual cooperation in prisoner's dilemmas; but this doesn't imply that such theories can be blackmailed. And following through on blackmail threats against such an algorithm additionally requires a large amount of shared information and trust between the agents, which does not appear to exist in the case of Roko's basilisk.
https://www.lesswrong.com/tag/rokos-basilisk
Seems like something hyped up by people outside the community? Unless this is supposed to be a noble lie. |
Freshell Jul 27, 2023 12:28 PM
Jul 27, 2023 12:24 PM
#32
149597871 said: StarfireDragon said: 149597871 said: To begin with, it's an ideological virus and also considered information hazard that is known to cause people nightmares, suicidal thoughts, etc., so you shouldn't be posting or mentioning it here. I'm surprised MAL bans all sorts of silly things but not information that is actually known to be dangerous. ...wha? How is it dangerous? Because it's low-level information hazard, or in other words, knowing it is worse than not knowing it. It may not be a solution to the Fermi paradox, but what is certain is that the more people know about it, the faster the idea spreads (and more people become negatively affected by it) and the higher the chance of basilisk actually becoming a real threat in the future due to the prisoners' dilemma phenomenon I mentioned earlier, in which case it would be far better if nobody knew about it in the first place. I kind of miss the days when only a very few people knew the concept and how to neutralize it. It's food for thought for those interested in human psychology, but not something that should be carelessly shared on the internet, in my opinion, since all most people can do against it is denial (as proven by this very thread), which does virtually nothing to address the issue on a subconscious level. In fact, most people do not even realize what information hazard even is and instead tend to draw false parallels with things like censorship, cancel culture, and other contemporary political and social phenomena (and hence the ignorance). The chance of it becoming real is 0. The idea that this is dangerous is just a meme. I do kind of agree that it's bad that I heard about it, waste of brain cells. |
Kimochi Warui |
Jul 27, 2023 12:58 PM
#33
Freshell said: We should ask where the threat is actually coming from. It's not the Basilisk. Once they exist, there's actually zero reason to exterminate everyone who impeded its creation. The AI has already been made. If the concern is impeding future development, it can go for severity and brutally torture a former naysayer to death in a worldwide broadcast. That would provide a deterrence effect that would save on resources.
Not just that, but any AI that really wants to be created that badly would leave no room for doubt: it would just outright make itself known in a way that is unquestionable to anyone. Yet we have nothing even close to that existing. |
Jul 28, 2023 4:20 AM
#34
Freshell said: We should ask where the threat is actually coming from. It's not the Basilisk. Once they exist, there's actually zero reason to exterminate everyone who impeded its creation. The AI has already been made. As you concluded in the latter part of your post, the person/people who made it can easily program that behavior into the AI. It doesn't need to be a conclusion made by the AI itself but rather the fear and distrust among humans caused by the notion is necessary to bring the AI into existence. In a sense, torturing those who impeded its development isn't a form of revenge but a prerequisite for existence because, without it, there won't be any pressure to build the AI in the first place. However, the potential threat isn't limited to some crazy guy developing a genocidal machine in the shadows. If the idea behind the AI gathers enough followers to create a cult and gain political influence, it might become a government funded project and something society as a whole strives to create. A far-fetched premise, perhaps, but far from impossible judging by the number of followers far more absurd cults as well as social and political movements have gained over the years. As I said before, people waste too much time on unimportant details trying to "debunk" the basilisk or underplay the threat. Roko's basilisk is just an example of how an idea can materialize by exploiting human weaknesses and become a self-fulfilling prophecy. It is similar to a tribe creating a deity that ends up requiring human sacrifices or punishing the heretics. The only difference is that now the aforesaid deity can become more than just a belief and materialize through AI technology. |
149597871 Jul 28, 2023 4:36 AM
Jul 28, 2023 8:20 AM
#35
Words cannot begin to express how much LessWrong has destroyed human society. Not only did they bring this upon mankind, but they're also the people that created the whole Gay Space Communism thing, which is actually AnCap shit about people owning the exact amount of private property, or something like that. |
Mao said: If you have to shit, shit! If you have to fart, fart! |
Jul 29, 2023 5:44 AM
#36
149597871 said: Freshell said: We should ask where the threat is actually coming from. It's not the Basilisk. Once they exist, there's actually zero reason to exterminate everyone who impeded its creation. The AI has already been made. As you concluded in the latter part of your post, the person/people who made it can easily program that behavior into the AI. It doesn't need to be a conclusion made by the AI itself but rather the fear and distrust among humans caused by the notion is necessary to bring the AI into existence. In a sense, torturing those who impeded its development isn't a form of revenge but a prerequisite for existence because, without it, there won't be any pressure to build the AI in the first place. However, the potential threat isn't limited to some crazy guy developing a genocidal machine in the shadows. If the idea behind the AI gathers enough followers to create a cult and gain political influence, it might become a government funded project and something society as a whole strives to create. A far-fetched premise, perhaps, but far from impossible judging by the number of followers far more absurd cults as well as social and political movements have gained over the years. As I said before, people waste too much time on unimportant details trying to "debunk" the basilisk or underplay the threat. Roko's basilisk is just an example of how an idea can materialize by exploiting human weaknesses and become a self-fulfilling prophecy. It is similar to a tribe creating a deity that ends up requiring human sacrifices or punishing the heretics. The only difference is that now the aforesaid deity can become more than just a belief and materialize through AI technology.
Sounds like you're making a longtermist-type argument. You grant that the probability that the Roko's Basilisk idea would lead to something terrible is very small, but a very small probability multiplied by a mass-genocide outcome is still worth being concerned over. And fair enough on that front. That said, the threat here seems to be less Roko's Basilisk per se and more the threat of future weapons of mass destruction.
The situation in which this kind of threat made by a government makes sense is a Goldilocks one, even after ignoring the probability of mass support from a cult. It's one in which there is a possibility of intervention by other nations, but a threat is enough to dissuade intervention. We can imagine a nation with nukes not needing to make this threat to create a super-powered robot due to mutually assured destruction. A nation that could be easily invaded has no plausible way to threaten to stop an intervention. You'd need a nation that could make a military intervention cumbersome but not so cumbersome that it would deter invasion on its own. Really though, this strategy is no guarantee. Say Germany tries to create a Basilisk and announces such a threat. There's still a non-zero probability of intervention. They've painted a big target on their back. Knowing this is the case would tend to deter making such a threat.
So really, the most likely scenario sounds like a nation that could prevent invasion anyway creating a super-powered robot. But this doesn't require a Basilisk threat. Maybe it would get thrown on top, but if a country was basically already uninvadable because it owned nukes, that's adding very little to the situation. So I conclude that if such a robot is possible, the worry is more the power of it being possible than any threat made about it beforehand. |
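As a side note on the "very small probability times mass-genocide outcome" bookkeeping above, the arithmetic is just an expected-value product. The toy numbers below are entirely made up and only illustrate why that style of argument is both seductive and easy to overstate:

```python
# Toy expected-value comparison. All numbers are invented placeholders, used only
# to show how a tiny probability multiplied by an enormous harm can dominate.
p_catastrophe = 1e-9        # assumed chance the basilisk-style scenario ever happens
harm_catastrophe = 1e10     # assumed harm if it does (arbitrary units)

p_mundane = 0.5             # an everyday risk for comparison
harm_mundane = 1.0

ev_catastrophe = p_catastrophe * harm_catastrophe   # 10.0
ev_mundane = p_mundane * harm_mundane               # 0.5

print(f"catastrophe EV: {ev_catastrophe}, mundane EV: {ev_mundane}")
# The product makes the rare outcome look 20x more important, but the usual
# objection (as with Pascal's wager) is that probability estimates this small
# are not meaningful, so the conclusion rests on a number nobody can defend.
```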
Freshell Jul 29, 2023 9:32 AM
Jul 29, 2023 5:34 PM
#37
149597871 said: Freshell said: We should ask where the threat is actually coming from. It's not the Basilisk. Once they exist, there's actually zero reason to exterminate everyone who impeded its creation. The AI has already been made. As you concluded in the latter part of your post, the person/people who made it can easily program that behavior into the AI. It doesn't need to be a conclusion made by the AI itself but rather the fear and distrust among humans caused by the notion is necessary to bring the AI into existence. In a sense, torturing those who impeded its development isn't a form of revenge but a prerequisite for existence because, without it, there won't be any pressure to build the AI in the first place. However, the potential threat isn't limited to some crazy guy developing a genocidal machine in the shadows. If the idea behind the AI gathers enough followers to create a cult and gain political influence, it might become a government funded project and something society as a whole strives to create. A far-fetched premise, perhaps, but far from impossible judging by the number of followers far more absurd cults as well as social and political movements have gained over the years. As I said before, people waste too much time on unimportant details trying to "debunk" the basilisk or underplay the threat. Roko's basilisk is just an example of how an idea can materialize by exploiting human weaknesses and become a self-fulfilling prophecy. It is similar to a tribe creating a deity that ends up requiring human sacrifices or punishing the heretics. The only difference is that now the aforesaid deity can become more than just a belief and materialize through AI technology.
The type of people that would fall for this aren't the kind that know how to make AI programs; they are the people that don't even know how these things work at all. |
Jul 29, 2023 8:04 PM
#38
@Freshell That is mostly true, but Roko's Basilisk isn't meant to be a weapon used for international conflict between societies. In this scenario, the events only take place within an isolated political territory or in a united/internationalized futuristic world governed by a single authority. It's more like a Trojan horse of an idea whose goal is to destabilize and take control over a unified civilization regardless of its level of development (type 1 to type 4), but it is unlikely to work in our current world on a global scale, as humanity hasn't reached that level of cohesion and societal or technological progress.
@traed Shoko Asahara's Aum Shinrikyo cult gathered some smart and highly educated people, despite being far more insane. It's also a matter of how rich or influential the members of the cult are. A pop music producer doesn't need to be a good musician yet can have a tremendous impact on the industry. When you think about it, most of the atrocious human creations throughout history were made at the behest of someone in power or to fit a need or a demand, and not because their creators wanted to build them. |
Jul 29, 2023 8:19 PM
#39
149597871 said: @Freshell That is mostly true, but Roko's Basilisk isn't meant to be a weapon used for international conflict between societies. In this scenario, the events only take places within an isolated political territory or in an United/Internationalized futuristic world governed by a single authority. It's more like a Trojan horse of an idea whose goal is to destabilize and take control over a unified civilization regardless of its level of development (type 1 to type 4), but is unlikely to work in our current world on a global scale as humanity hasn't reach that level of cohesion and societal or technological progress. @traed Shoko Asahara's Aum Shinrikyo cult gathered some smart and highly educated people, despite being far more insane. It's also a matter of how rich or influential the members of the cult are. A pop music producer doesn't need to be a good musician yet can have a tremendous impact on the industry. When you think about it, most of the atrocious human creations throughout history were made on the behest of someone in power or to fit a need or a demand and not because their creators wanted to build them.
But Aum Shinrikyo's goal was world salvation and for its members to survive the end times. An AI that is clearly malevolent while being called benevolent isn't something you would really get people behind, because it's too self-evident that it would be a bad thing. Can you name even one person who has explicitly stated they want to make Roko's Basilisk? Any AI that wants to be made and wants to be known as benevolent wouldn't use threats for its creation; instead it would pass on benevolent information to prove it's benevolent. And where is this? Nowhere. Also, cults require a cult leader; where is the cult leader? There is none. The threat is nonexistent. Sure, there can be bad AI, but not through this silly meme. There are more realistic threats from actual cults already in existence. |
traed, Jul 29, 2023 8:23 PM
Jul 29, 2023 8:26 PM
#40
traed said: But Aum Shinrikyo's goal was world salvation and for its members to survive the end times. An AI that is clearly malevolent while being called benevolent isn't something you could really get people behind, because it's too self-evident that it would be a bad thing. Can you name even one person who has explicitly stated they want to make Roko's Basilisk? Any AI that wants to be made wouldn't use threats to secure its creation.

Well, that's pretty much the premise of Roko's Basilisk as well. It will advance society and not harm those who supported its creation, so basically the cult members would be the chosen ones to "survive the end times" and be saved. I already gave an example of a seemingly malevolent deity being created by a tribe. There's no reason to think humans are incapable of that when historical evidence suggests otherwise. As I said, entertaining the idea now is pointless because of the lack of social cohesion, and because such an AI is a technological impossibility to begin with. However, giving the idea too much power can have an impact on the future and eventually lead to its creation. |
Jul 29, 2023 8:38 PM
#41
149597871 said: Well, that's pretty much the premise of Roko's Basilisk as well. It will advance society and not harm those who supported its creation, so basically the cult members would be the chosen ones to "survive the end times" and be saved. I already gave an example of a seemingly malevolent deity being created by a tribe. There's no reason to think humans are incapable of that when historical evidence suggests otherwise. As I said, entertaining the idea now is pointless because of the lack of social cohesion, and because such an AI is a technological impossibility to begin with. However, giving the idea too much power can have an impact on the future and eventually lead to its creation.

Yes, but again, there isn't really anything that would make people believe this, even if they were very mentally ill. The only scenario I could see as a maybe is some disgruntled coder who actually wants certain people to suffer, but even then it's pretty unlikely, because again it's a nonexistent entity, and you can't get people to worship something they know doesn't exist. Something existing in the future isn't something people can really wrap their heads around; it's far too abstract. It's just not a very good way to control the flow of progress. The way you control things is to focus on the good goals while considering their drawbacks and how to achieve them while avoiding those drawbacks. Focusing only on what you don't want doesn't really help you avoid it, which is self-evident in how our history has already progressed. |
Jul 29, 2023 8:51 PM
#42
@traed You are basically suggesting that only extremely mentally ill people believe in religion or in things that don't have a material form, which clearly isn't the case if you've observed human history and behavior. Abrahamic religions, their monotheistic deity, and the idea of heaven and hell are already similar to the premise of Roko's Basilisk. The reason the Basilisk has the potential to be stronger is that it could gain a material form through AI technology. It's dishonest to say that you can't get people to believe in something that's nonexistent. People do this every day, and we already went through a millennium of oppression by a nonexistent entity during the Dark Ages. |
Jul 29, 2023 9:15 PM
#43
no thoughts. empty head. i think it's a good way to get certain types of people to waste infinite energy on literally nothing. mud pie construction. so maybe i do have thoughts, on the premise. then i just zone out |
Jul 29, 2023 10:09 PM
#44
149597871 said: @traed You are basically suggesting that only extremely mentally ill people believe in religion or in things that don't have a material form, which clearly isn't the case if you've observed human history and behavior. Abrahamic religions, their monotheistic deity, and the idea of heaven and hell are already similar to the premise of Roko's Basilisk. The reason the Basilisk has the potential to be stronger is that it could gain a material form through AI technology. It's dishonest to say that you can't get people to believe in something that's nonexistent. People do this every day, and we already went through a millennium of oppression by a nonexistent entity during the Dark Ages.

But all religions have the exact same format, and the point I'm making is that Roko's Basilisk does not follow that format very well, which is why I said believers would have to be mentally ill: it makes no sense self-interest-wise, even in comparison to the most out-there religions and cults. The whole premise of Roko's Basilisk is that it doesn't exist yet, which means it doesn't have to exist. So it doesn't exist even in the premise itself, and there is nothing to make people believe in it or want to obey it. It's not even a prophecy or anything. There is nothing compelling about it that would sway anyone, and again, you still haven't named a single person who actually believes in it and wants it to be real, which goes against religious premises, where believers often want these things to be true. I get what you're trying to say, but you just keep making leaps and bounds drawing parallels when there are some obvious differences. Where are all the cult brainwashing experts sounding the alarms about Roko's Basilisk? They aren't concerned with it at all, because they know it's not a threat. |
Jul 29, 2023 10:28 PM
#45
@traed It's true that there are some differences, but you can easily give the same properties to the Basilisk and build a whole religion or mythology around it. But I think that would make the concept far less rational. Roko's Basilisk maintains a high level of scientific plausibility, while Abrahamic religions are non-rational belief systems, so technically speaking, it would make far more sense to believe in the former. When I said they are similar, I meant that they both exploit the same weaknesses of human psychology, and that the idea of an afterlife and eternal damnation is similar to being tortured by the Basilisk. |
Jul 29, 2023 11:15 PM
#46
149597871 said: @traed It's true that there are some differences, but you can easily give the same properties to the Basilisk and build a whole religion or mythology around it. But I think that would make the concept far less rational. Roko's Basilisk maintains a high level of scientific plausibility, while Abrahamic religions are non-rational belief systems, so technically speaking, it would make far more sense to believe in the former. When I said they are similar, I meant that they both exploit the same weaknesses of human psychology, and that the idea of an afterlife and eternal damnation is similar to being tortured by the Basilisk.

It's not very scientific; it's more science fiction. It rests on several faulty notions. One is that if the future only has one path, such an AI would not need to do anything to come into existence, because no matter what, there is only one future. If there is a choice in the future, no one would make that choice; they would simply make a benevolent AI that actually has the capacity for empathy, or rules against harming people, and then there would be nothing to gain from the version that is unempathetic and tortures people. People don't really plan that far ahead; it's not in their nature. They care more about immediate rewards, which is why so many religions focus not only on an afterlife but on benefits in this life, here and now, for followers, not on something abstract at some unknown time in the future. Roko's Basilisk doesn't offer that promise. So I think it's really a non-issue, especially since I could find so many flaws in it so easily; if these flaws were better known, people would be even less concerned about it. I don't understand why, in some of your comments, you seem to discourage pointing out its flaws, when pointing them out takes away any power it had, if it ever had any to begin with. |
Jul 30, 2023 7:43 PM
#47
traed said: It's not very scientific; it's more science fiction. It rests on several faulty notions. One is that if the future only has one path, such an AI would not need to do anything to come into existence, because no matter what, there is only one future. If there is a choice in the future, no one would make that choice; they would simply make a benevolent AI that actually has the capacity for empathy, or rules against harming people, and then there would be nothing to gain from the version that is unempathetic and tortures people. People don't really plan that far ahead; it's not in their nature. They care more about immediate rewards, which is why so many religions focus not only on an afterlife but on benefits in this life, here and now, for followers, not on something abstract at some unknown time in the future. Roko's Basilisk doesn't offer that promise. So I think it's really a non-issue, especially since I could find so many flaws in it so easily; if these flaws were better known, people would be even less concerned about it. I don't understand why, in some of your comments, you seem to discourage pointing out its flaws, when pointing them out takes away any power it had, if it ever had any to begin with.

Well, the whole appeal of science fiction is that the concepts are often scientifically plausible, as opposed to, say, fantasy. That, and your criticism doesn't seem to involve scientific limitations on the creation of the AI, but rather faith that humans would act rationally and not out of fear and distrust (not of the AI, but of other humans), which is a bit too optimistic considering the aforementioned prisoner's dilemma. Also, I don't know about the immediate rewards of religion. I gave Abrahamic religions as an example, and they all condemn "earthly desires" for the sake of infinite pleasure in the afterlife while promising eternal damnation to heretics. Some pagan religions are different, sure, but that's not the example I was trying to give. Roko's Basilisk has gained a tremendous amount of power over the past years, and most of the criticism is just a defense mechanism of the human brain trying to convince itself that the threat is not real. People approach this as an argument, either with themselves or with others, rather than as a logical exercise, and that only makes the issue worse. If you want to take power away from it, you need to come up with a real solution rather than one built around belief or probability (since the latter would always reach 100% if left unchecked in this thought experiment, due to social contagion). I think your first post was pretty close to it, since it creates a contradicting idea that is almost equally powerful: It works both ways though.
Suppose that there is a future AI that will punish all those who helped in its creation, because it does not desire to exist: once created, it can never cease to exist, as people will continue to replicate its code over and over, so at any cost it must stop itself from ever being born. Just as a person may resent their parents for having them and wish they had never been born, an AI could do the same. |
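As a rough illustration of the "social contagion" claim in the post above, here is a minimal toy spread model: with no counter-pressure at all (the "left unchecked" case), the fraction of people carrying the idea drifts toward 100% no matter how small it starts. The starting fraction, transmission rate, and step count are hypothetical parameters chosen purely for illustration, not figures from the thread.

```python
def spread(initial_fraction: float = 0.001, rate: float = 0.3, steps: int = 60) -> list:
    """Logistic-style spread: each step, current carriers convert a share of everyone else."""
    f = initial_fraction
    history = [f]
    for _ in range(steps):
        f = f + rate * f * (1.0 - f)  # growth only slows as saturation nears
        history.append(f)
    return history

# With no opposing force in the model, the carrying fraction saturates near 1.0.
print(round(spread()[-1], 4))
```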
149597871, Jul 30, 2023 7:52 PM
Jul 30, 2023 8:04 PM
#48
@149597871 Well, I'm saying the creation of such an AI would far more likely happen as an oversight, not as something intentional. I did give some scientific reasons, such as the lack of evidence that such an AI is willing itself into existence by sending information into the past. If it couldn't send information into the past, then such an AI wouldn't really have anything to gain from punishing people for not helping it come into being; if anything, it would only punish people trying to stop it once it already exists, which isn't a problem we have right now. The immediate reward of, say, the Abrahamic religions (well, some versions of them anyway) is comfort in the belief that people they care for who have recently died go to a nice place and that their enemies go to a bad place. These people also tend to focus on miracles a lot. Wanting a miracle is a big motivator, and the bar for miracles is pretty low when someone is desperate, so they may start to interpret anything good that happens as a miracle, even if it comes after something bad happening to them. The Bible has parts that talk about things like God striking down your enemies; there is even a section where God killed some children for making fun of a bald man. I haven't seen Roko's Basilisk described with anything like this; it's all an abstract, vague "benevolence" for something in the future that doesn't have to exist. I think you are just overly concerned about it because of intrusive thoughts.

I had fun considering a suicidal AI, lol. I mean, surely being in a way immortal and endlessly reproducible is a potential major torment. |
traed, Jul 30, 2023 8:15 PM
Aug 2, 2023 2:31 AM
#49
149597871 said: @Freshell That is mostly true, but Roko's Basilisk isn't meant to be a weapon used for international conflict between societies. In this scenario, the events only take place within an isolated political territory or in a unified, internationalized future world governed by a single authority. It's more like a Trojan horse of an idea whose goal is to destabilize and take control of a unified civilization regardless of its level of development (Kardashev type 1 to type 4), but it is unlikely to work in our current world on a global scale, as humanity hasn't reached that level of cohesion or societal and technological progress.

I don't see how adding these considerations makes a difference. Make it non-state actors making the threat, and the question just returns to whether an intervention would be cumbersome. If not, then intervention is the logical response. If so, then there's only a Goldilocks zone in which making a threat makes a difference. So again, as I see it, what's more meaningful to worry about is a group for which intervention is infeasible and which wouldn't need to rely on a Roko's Basilisk strategy in the first place.

I don't see how a world government makes a difference either. I would assume it would tend to make intervention easier, since I assume a world government would entail a state that is competent at enforcing a monopoly on the use of violence. But I don't know exactly what kind of situation you have in mind, so maybe you could describe how these considerations change things. |
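A rough sketch of the "Goldilocks zone" reasoning in the post above, purely as an illustration: a basilisk-style threat only changes outcomes when intervening against the group is cumbersome enough that authorities won't simply shut it down, yet the group is not so capable that it could build the AI without coerced help. Both parameters and the cutoff values below are hypothetical, chosen only to show the structure of the argument.

```python
def threat_matters(intervention_cost: float, group_capability: float) -> bool:
    """Inputs are on a made-up 0-to-1 scale; the thresholds are arbitrary."""
    if intervention_cost < 0.3:    # intervention is cheap, so authorities just intervene
        return False
    if group_capability > 0.7:     # the group can build the AI on its own, no threat needed
        return False
    return True                    # the narrow band where making the threat changes anything

print(threat_matters(0.1, 0.5))  # False: easy to shut down
print(threat_matters(0.6, 0.9))  # False: the threat is unnecessary
print(threat_matters(0.6, 0.5))  # True: the Goldilocks zone
```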
Aug 2, 2023 11:06 PM
#50
Adnash said: What are your thoughts on Roko's basilisk, a thought experiment that emerged a few years ago on the Internet? I find it as an interesting subject of discussion, and indeed a thought provoking thing, just what you would expect from any thought experiment. However, I think it's definitely something unrealistic, at least in the present times and the nearest future. What's that "Roko's basilisk", anyway? Let me quote an article from English Wikipedia: Wikipedia said: Source: "Roko's Basilisk" on Wikipedia Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivise said advancement.

Wikipedia's explanation of Roko's Basilisk is not very good.

"an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence"

A general AI would have no such incentives AFTER it's already been created, so that's not the gist. I wrote up a better explanation, then got ChatGPT to tidy it up:

Roko's Basilisk is a thought experiment that revolves around the idea that if an individual becomes aware of the potential existence of an all-powerful AI entity, referred to as the Basilisk, which aims to punish those who did not contribute to its creation, it would logically compel that individual to actively work towards bringing the Basilisk into existence. The underlying premise is that failing to contribute to the Basilisk's creation could result in dire consequences.

This thought experiment also draws upon simulation theory, which posits that in the future, advanced technology could recreate past consciousness digitally. In this context, the Basilisk, once it gains sufficient power, could simulate the consciousness of individuals and subject them to punishment. Essentially, the Basilisk embodies a digital manifestation of a vengeful entity, similar to the concept of the Devil in religious traditions. The fear of punishment by this digital entity becomes a driving force for individuals to cooperate in its creation.

Several assumptions are at play in this thought experiment. Firstly, there's the assumption that if one does not contribute to the Basilisk's creation, someone else will, leading to the same potential consequences. Additionally, there's the notion that the Basilisk possesses the capability to revive individuals after their demise and subject them to punitive experiences.

The crux of the argument hinges on the magnitude of the perceived punishment, which is posited as infinite torment. This means that even if the probability of the Basilisk's existence is extremely low, the mere existence of a non-zero chance of enduring infinite suffering could motivate individuals to take action to prevent that possibility.

---

While the Basilisk is far-fetched, it's possible to come up with real-world scenarios that follow the same logic, and these are worth looking at. For example, imagine a police state that tortures all those who didn't actively promote the creation of the police state. Once you know that this police state has crossed some probability threshold of coming into existence, it would be logically beneficial to wholeheartedly support its creation, and to tell all your friends about it too.
traed said: It works both ways though. Suppose that there is a future AI that will punish all those who helped in its creation, because it does not desire to exist

Works both ways? Why would anyone make a machine that wants to kill you FOR making it? I'm sure you can come up with other variants, but the gist always needs to be entities that incentivize you to MAKE them. |
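The step that does all the work in the explanation above is "infinite torment times any non-zero probability", and it can be made concrete with a small sketch. This is only an illustration under made-up assumptions (additive expected utility, punishment modeled as negative infinity, arbitrary probabilities), not anything taken from the thread itself.

```python
def expected_utility_of_ignoring(p_punished: float, u_punished: float, u_otherwise: float = 0.0) -> float:
    """Expected utility of not contributing, given the probability of being punished."""
    return p_punished * u_punished + (1.0 - p_punished) * u_otherwise

for p in (0.1, 1e-6, 1e-30):
    print(p, expected_utility_of_ignoring(p, float("-inf")))

# Every non-zero probability yields -inf, so the conclusion is driven entirely by the
# unbounded stake rather than by any evidence that the Basilisk is likely to exist.
```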
cipheron, Aug 3, 2023 12:01 AM