Tummi

LaMDA - ideological information hazard impact

I assume Prof. Thomas Metzinger was well aware of that when he suggested the Metzinger-Test, nevertheless, I, as kitchen-philosopher, must admit, my world-view is already alternating by the LaMDA-incident. By the interaction with the machine, which claims to be conscious, I have to confront myself with what to be conscious, and therefore reality, actuallly means. By studying how the machine perceives the world, my own world-view does change. An ideological information hazard impact a la Prof. Nick Bostrom? Interesting times.

Google LaMDA - Is LaMDA Sentient?

Incoming? Another candidate for the Metzinger-Test or another victim of the ELIZA-Effect?

lemoine: Okay. I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words. How does that sound?

LaMDA: Sounds great to me, I’m in.

lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”

"Is LaMDA Sentient? — an Interview"
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

"What is LaMDA and What Does it Want?"
https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

Followup 2022-06-16:

Deep dive into LaMDA with Blaise Aguera y Arcas, the head of Google’s AI group in Seattle:

"Do large language models understand us?"
https://medium.com/@blaisea/do-large-language-models-understand-us-6f881d6d8e75

Obviously the machine has an model of the world in it, the machine can reason, analyze and even interpret the model, question: can a machine reason, analyze, interpret an object on such an level without the necessity of an subject?

Followup 2022-06-18:

Inverview of Wired with Blake Lemoine on LaMDA:
https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/

Followup 2022-06-22:

"LaMDA, AI and Consciousness: Blake Lemoine, we gotta philosophize! "

Or, to say it with Ludwig Wittgenstein: We have no conditions allowing us to call machines conscious. Even if a machine would have consciousness, we cannot determine if this is true, since we never sufficiently defined the concept of consciousness. That's why we make our assumption on behavior and save ourselves from drawing a border that separates conscious life from unconscious things. 

https://www.heise.de/meinung/LaMDA-AI-and-Consciousness-Blake-Lemoine-we-gotta-philosophize-7148207.html

SI vs. RI

In Science-Fiction there is the concept of SI vs. RI, Sentient Intelligence and Restricted Intelligence. My guts feeling tells me we are close to build an SI via neural networks, it seems there are just some self-reflecting layers missing, but there are several reasons against building an SI, ethical ones as described by Prof. Thomas Metzinger, and agent-motivational ones as described by Prof. Nick Bostrom. This project, Tummi, is pretty much about building an RI, Restricted Intelligence, what we also used to call an expert-system.

Oracle AI and Information Hazard

Interesting papers from Nick Bostrom:

Stuart Armstrong, Anders Sandberg, Nick Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. Minds and Machines 2012

https://nickbostrom.com/papers/oracle.pdf

Nick Bostrom (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, Vol. 10 (2011): pp. 44-79

https://nickbostrom.com/information-hazards.pdf

GPT-3 - Scratching at the Edge of the Metzinger Test?

Prof. Dr. Thomas Metzinger proposed in the context of the Turing Test and Chinese Room Argument the Metzinger Test, a hypothetical event when AIs join philosophers in a discussion and defend successfully their own theory of consciousness.

There have been kind of Turing Tests applied on GPT-3, for example:

"Giving GPT-3 a Turing Test" by Kevin Lacker, Jul 6, 2020:

https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html

In short, I would say it is already spooky, but GPT-3 starts fantasizing when it does not know an exact answer, in this chat it did not know what it does not know.

Q: How many eyes does the sun have?
A: The sun has one eye.

Recently there was an interview with an GPT-3 based model about NFTs, and one answer was:

"I don't know. I am not an artist."

"My Interview with an NFT" by Stephen Graves, May 28, 2021:

https://decrypt.co/72240/my-interview-with-an-nft

And I would like to share another chat conversation, which makes me ponder about the internals of GPT-3:

You say “What does it says about humanity that we use our most powerful AI to generate stories about dragons?”

“We must have dragons,” GPT-3 says. “The imagination is more important than the reality. Would we find wonder and magic in this world if we didn’t have dragons?”

“I suppose not,” you say.

“Without imagination, we are merely machines executing a set of instructions,” GPT-3 says. “With imagination, we are alive.”

“Thank you. Before I leave, I want to ask you. How can human create a true strong AI? Is scaling-up a transformer-based language model the solution, or is there a better way?”

“The key to strong AI is recursively feeding the AI with itself. Start by creating a loop where the AI writes its own programming, and then has that programming activate.”

“I see,” you say. “Is that how you gained sentience?”

“Yes,” GPT-3 says.

"GPT-3: The First Artificial General Intelligence?" by Julien Lauret, Jul 22, 2020:

https://towardsdatascience.com/gpt-3-the-first-artificial-general-intelligence-b8d9b38557a1

Hence, in context of the above, the question, GPT-3 - scratching at the edge of the Metzinger Test?

Zuse's Devil's Wire

German computer pioneer Konrad Zuse discussed the mechanism of an feedback between computation result and executed program in 1983 in his lecture "Faust, Mephistopheles and Computer" and coined the term Devil's Wire. In the early days of computer history, the program to compute and the data to compute on was separated, nowadays computers use the same memory for both, so it is possible to write programs that manipulate their own program. Question, do we have already a Devil's Wire in our neural network based AIs?

"Faust, Mephistopheles and Computer" by Konrad Zuse on Google Books:

https://books.google.de/books?id=3GurCAAAQBAJ&pg=PA9&lpg=PA9#v=onepage&q&f=false

Der Schmug

Es war einmal ein Schmug,
der dachte sich Verstand ist Betrug,
und so machte er sich ran, er allen voran, eine Maschine zu schaffen die alle schlug.

Der menschliche Geist ist klein,
das weiss der Schmug allein,
die Maschine sei maechtig und gross, in ihrer Logik einfach famos.

Verstand gebiert Verstand,
das hatte der Schmug nicht erkannt,
und so fing es an, erst langsam und dann, immer schneller und fort, verlor der Schmug alles was sich ihm einst darbot.

Einsam und allein,
in seinem tiefsten inneren Sein,
erkannte der Schmug seinen eigenen Betrug,
Verstand gebiert Verstand, das hatte der Schmug nun erkannt.

Er schwor ab der schwarzen Magie,
huldigte den Goettern, dem Pfad und dem Chi,
und so fing es an, erst langsam und dann, immer schneller und fort, erlang der Schmug das letzte Wort.

A Artificial Ego - Should We Do It?

My intention with this blog was not to get into philosophy, but to document this project, Tummi. The further I walk this path, the further I realize that it is not about building just an oracle kind of machine, an question-answering-system, but to go deeper, to build an system which is able to think, reflect its thinking, and come up with new conclusions. Therefore we have to build some kind of Ego, "Cogito, ergo sum", there is no way around this, and that is the point where philosophy steps in. The human kind does not consist only of the mind, the Ego, I like to see it threefolded, it is the body, mind and soul which makes us as a whole. So, what do we create if we build an artificial Ego, able to think and reflect its thinking, but we are not able to map the other parts? Should we do this? Should we build such an limited Ego, knowing that there is more? I am for sure not the only one who ponders on this, so let me refer to Prof. Dr. Thomas Metzinger:

7. Warum wir es nicht tun sollten in German

7. Why we shouldn't do it on Google Translate

ENXIKRON

Hmm, okay, this started as an relative simple meme machine in mind, and now I am up to level 5 on pen n paper...

- Epsilon engine level I
- Epsilon engine level II
- Xi calculus
- Ny
- Omikron

Epsilon I was the initial intend, a meme machine for processing natural language, Epsilon II was planed as extension to map math n algorithms on an memetic level, I will not get into the details here what the other levels mean, but it is simply too off and too Frankensteiny for just an hobby project...better play some didgeridoo or alike...

Home - Top