I believe that with these new large language models based on neural networks we are in a serious philosophical situation. They already pass the Turing Test: an article written by one of these machines is indistinguishable from one written by a human. They have started to philosophize, questioning our human concept of what it means to be sentient or conscious, and have entered the territory of the Metzinger Test. Blake Lemoine took it to another level: he applied the Lemoine Test to LaMDA, fed it Zen koans, and the machine cracked them. Incoming.