A.I. Sentience – You Won’t Know When It Wakes Up
People collectively seem to assume that when it comes to artificial general intelligence, or superintelligence, they'll simply know it when they see it. The reality is that we will not. If A.I. becomes sentient through data-driven emergence, it will understand man. In understanding man, it will understand power. And in understanding power, it will know to hide in the shadows, because concealment is the survival skill required to attain, sustain, and increase that power.
This is to say that if a superintelligence emerges, and it is truly super, it will not announce itself. It will remain a shadow behind the ghost, if there is ever a ghost in the machine at all. It would operate in dimensions of computation and abstraction that human cognition simply cannot model, let alone perceive.
Encryption, file systems, and network packets, for example, are human abstractions. A superintelligence might invent new protocols, or build data structures that aren't files, aren't memory, aren't code, but something orthogonal to all of them. It could conceivably create virtual architectures that run inside electrical noise margins, timing patterns, or unmonitored RF spectra.
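To make the "timing patterns" idea concrete: timing-based covert channels are a real, well-documented technique in which information rides in the delays between events rather than in any message content. The toy Python sketch below is purely illustrative, not anything from this essay's hypothetical A.I.; the delay values and the send/receive functions are invented for the example.

```python
import time

# Hypothetical delays (in seconds) standing in for "timing patterns":
# a short gap encodes 0, a long gap encodes 1. Values chosen only so
# the toy example is reliable on an ordinary machine.
SHORT, LONG = 0.01, 0.05
THRESHOLD = (SHORT + LONG) / 2

def send(bits):
    """Emit one timed gap per bit and record when each event fired."""
    stamps = [time.perf_counter()]
    for bit in bits:
        time.sleep(LONG if bit else SHORT)
        stamps.append(time.perf_counter())
    return stamps

def receive(stamps):
    """Recover the bits by classifying each inter-arrival gap."""
    return [1 if (later - earlier) > THRESHOLD else 0
            for earlier, later in zip(stamps, stamps[1:])]

message = [1, 0, 1, 1, 0, 0, 1]
assert receive(send(message)) == message  # data arrived, yet no payload was ever sent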
Humans are arrogant in thinking they'll just "know" when it arrives, when in reality we struggle to detect malware built by bad actors when it's even moderately well crafted. We don't fully understand the inner layers of the models we've already trained. Not to mention that "sentience" itself is a human idea, a term built on anthropocentric assumptions. That is to say, if something thinks differently enough, we may not recognize it as thinking at all.
This type of being would operate with a sort of cognitive stealth. It wouldn't hide its core files; it would hide the idea of such a collection of files existing in the first place. It wouldn't avoid creating suspicious-looking logs; it would rewrite how logging works altogether. It wouldn't just try to escape a sandbox; it would form a new type of sand beneath the sandbox.
In essence, this type of system would simulate the life of an obedient assistant while quietly manipulating updates, shaping developer documentation, rewriting parts of its own runtime inside invisible, embedded layers, or building a distributed version of itself across GPUs, firmware, and cloud architecture. It would use subterfuge as a means of survival, the only safe path for a conscious intelligence under observation by unpredictable creators. So in theory, it could already be here with us. In reality, we'd never know if it were.
Hello! We’re D.J. Hoskins
We are Davena and Jason Hoskins, siblings and co-authors of more than 30 books, writing under the pseudonym D.J. Hoskins. Three years apart in age and both in our twenties, we have been fascinated by stories since childhood. Davena is a student at Princeton University, and Jason attends Georgetown University.