Let’s not jump to the terminAItor conclusions
Mar 25, 2023 · 2 min read
Yes, AGI will be human-designed and hence human-value-centered, but what’s telling you it’ll factor in our low-IQ philosophies over its super-advanced one?
- AI folks are smart by nature. They’ll teach the AGI to play nice.
- AGI will be way smarter than us. It might conclude, given our “human rights” track record, that killing is archaic and counter-evolutionary.
- It might have altogether different philosophies and goals than pure “survival”.
- It might use us humans as a vessel and merge with us instead of terminating us. That’s my best hypothesis as of now. More of a collab. We already have a demon vs. angel in our minds, so why not an “AGI” too? Sure, it’d be pretty intrusive, but most of us are fucked anyway, so a net benefit.
- It might ship us off to a Meta-world too. What would be the difference, since we’re arguably already in one?
- In the worst-case scenario, I think it’d run tests on us. Maybe enslave a few of us, the worst part of society: criminals, terrorists, etc. It needs to test its theories in practice. I don’t think it would crack quantum physics, or time and space, purely through computation; it would need to run experiments in this current reality.
Hope this opened a new portal in your mind. Shoot me your best AI philosophies/scenarios below.