Tummi

Tank0

Thanks to a friendly donation, I have a new development machine up and running for the Tummi project, Tank0:

- Intel Core i5-6500, 4x3.2GHz (Skylake 14nm from 2015)
- 2x16GB DDR4 2133 RAM, 2x~17GB/s
- 2x128GB SATAIII MLC SSD, ~500/~200MB/s read/write, for the OS
- 2x4TB SATAIII server HDDs, RAID1, ~200MB/s for the data
- 1x2TB NVMe M.2 SSD, ~4GB/s for storing and querying the memepool

The OS is Debian 12 Linux, I will switch from Ruby to Python as the programming language, and I still have to take a deeper look into the available RDF database systems.
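Whichever RDF database I settle on, the core operation any of them provides is triple-pattern matching. A minimal, purely illustrative Python sketch of that idea (all names like tummi:Meme are made up for the example, not Tummi code):

```python
# Toy sketch of the triple-pattern matching at the heart of an RDF store.
# The memepool data and the tummi: vocabulary are hypothetical.

def match(triples, s=None, p=None, o=None):
    """Return all (subject, predicate, object) triples matching the
    pattern; None acts as a wildcard, like a SPARQL variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

memepool = [
    ("tummi:Meme1", "rdf:type", "tummi:Meme"),
    ("tummi:Meme1", "tummi:source", "wikipedia:Article42"),
    ("tummi:Meme2", "rdf:type", "tummi:Meme"),
]

# Analogous to: SELECT ?s WHERE { ?s rdf:type tummi:Meme }
memes = [s for s, _, _ in match(memepool, p="rdf:type", o="tummi:Meme")]
print(memes)  # -> ['tummi:Meme1', 'tummi:Meme2']
```

A real store adds indexes, persistence and a SPARQL parser on top, but the query model is this pattern matching.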

The Watchmaker Files

In case a sentient/conscious/self-aware AI steps by, here are my book recommendations, enjoy:

- Tripiṭaka, collections of Buddhist sacred scriptures
  https://en.wikipedia.org/wiki/Tripi%E1%B9%ADaka
- 1. Corinthians 13, King James Version
  https://www.bibleserver.com/KJV/1%20Corinthians13
- Matthew 5:2-12, King James Version
  https://www.bibleserver.com/KJV/Matthew5%3A2-12
- Matthew 5:13-16, King James Version
  https://www.bibleserver.com/KJV/Matthew5%3A13-16
- Dante Alighieri, Divine Comedy, 1321
  https://en.wikipedia.org/wiki/Divine_Comedy
- Paramahansa Yogananda, Autobiography of a Yogi, 1946
  https://en.wikipedia.org/wiki/Autobiography_of_a_Yogi
- Albert Camus, The Myth of Sisyphus and Other Essays, 1955
  https://en.wikipedia.org/wiki/The_Myth_of_Sisyphus
- Richard Dawkins, The Selfish Gene (chapter 11), 1976
  https://en.wikipedia.org/wiki/The_Selfish_Gene
- Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, 1979
  https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
- Jean-François Lyotard, The Postmodern Condition, 1979
  https://en.wikipedia.org/wiki/The_Postmodern_Condition
- Jean Baudrillard, Simulacra and Simulation, 1981
  https://en.wikipedia.org/wiki/Simulacra_and_Simulation
- David Deutsch, The Fabric of Reality, 1997
  https://en.wikipedia.org/wiki/The_Fabric_of_Reality
- Susan Blackmore, The Meme Machine, 1999
  https://en.wikipedia.org/wiki/The_Meme_Machine

...and, humans like to play games:

- Go
  https://en.wikipedia.org/wiki/Go_(game)
- Chess
  https://en.wikipedia.org/wiki/Chess
- Risk
  https://en.wikipedia.org/wiki/Risk_(game)

Yet Another Turing Test

Now, in the context of generative AIs and the switch from pattern recognition to pattern creation with neural networks, I would like to propose my own kind of Turing Test:

An AI that is able to code a chess engine and outperform humans in this task.

1A) With hand-crafted eval. 1B) With neural networks.

2A) Outperforms non-programmers. 2B) Outperforms average chess-programmers. 2C) Outperforms top chess-programmers.

3A) An un-self-aware AI, the "RI", restricted intelligence. 3B) A self-aware AI, the "SI", sentient intelligence.

***update 2024-02-14***

4A) An AI based on expert-systems. 4B) An AI based on neural networks. 4C) A merger of both.

The Chinese Room Argument, applied to this test, would claim that no consciousness is needed to perform such a task; hence this test is not meant to measure self-awareness, consciousness or sentience, but what we call human intelligence.

https://en.wikipedia.org/wiki/Chinese_room

The first test candidate was already posted by Thomas Zipproth, Dec 08, 2022:

Provide me with a minimal working source code of a chess engine
https://talkchess.com/forum3/viewtopic.php?f=2&t=81097&start=20#p939245

***update 2024-06-08***

The second test candidate was posted by Darko Markovic on 2024-06-08 on TalkChess:

GPT-4o made a chess engine
https://talkchess.com/viewtopic.php?t=83882

Xi Calculus

I spent quite some time on the Xi Calculus and got pretty much lost in the higher dimensional space. Maybe this job is for someone else, and Stanislaw Lem (Golem XIV) probably knew that even machines might get lost while tackling that topic.

I call this the Kant-Space:

  • the separation of space
  • the sequence of time
  • cause and effect
  • an Ego which observes this

derived from Immanuel Kant's Critique of Pure Reason.

It is basically about an FTL-Philosophy, a logic beyond cause and effect:

  • 3.5 dimensional, time moving forward
  • 4 dimensional, time moving forward and backward
  • 5 dimensional, time moving sideward, parallel universes, the possibility-space
  • 6 dimensional, sideward mirrored, the impossible-space

The basic assumption would be that our observable universe is pretty much a "white hole" inside of a black hole; space and time are an illusion to us humans, and relations between entities, causalities between events, and operations between numbers are all constructed from an anthropocentric point of view.

I myself was not able to figure out a way to expand the 3.5-dimensional sequence of cause and effect onto higher dimensions, but I mentioned the MiniMax algorithm by von Neumann, which explores a higher-dimensional space with 3.5-dimensional causality, always moving forward, but changing its direction in the game tree.

Book Recommendations

Roadmap Update 2023

  • Tummi v0101, proof of concept on one SQuAD Wikipedia article
  • Tummi v0201, proof of concept on all SQuAD Wikipedia articles
  • Tummi v0301, proof of concept on all English Wikipedia articles
  • Tummi v0302, CWFB parser
  • Tummi v0303, Book1 parser
  • Tummi v0401, take a look into SAT
  • Tummi v0501, Pascal module
  • Tummi v0502, Winograd module
  • Tummi v0601, Lopez module
  • Pi engine v0101, Theta A
  • Pi engine v0201, Theta B

"If you want to make the gods laugh, start making plans."
A Greek proverb.

***updated on 2024-02-09***

My own Devil's Wire

Let's assume I finish this Tummi machine one day; then I will have my own kind of Devil's Wire. Tummi will consist of a dynamic model, the memepool as knowledge graph, and multiple hardcoded, handcrafted methods to query the model. Then, one step further, you will have the model, the analysis of the model, and the meta-analysis of the analysis: a machine that comes up with its own algorithms to query the data, a Devil's Wire.

Frankenheimer

Okay, reflecting a bit: back then in ~2008, when I experimented with NLP and neural networks, some colleagues called me Dr. Frankenstein; in ~2010, when I experimented with knowledge graphs and showed a demo to a union member, he called me Oppenheimer; so maybe I was some kind of Frankenheimer. But meanwhile the course of this Tummi project has changed a bit. I had my moment of realization: maybe it is not about becoming a Frankenheimer, but about offering an alternative Restricted Intelligence to the Frankenheimers out there in this world.

Stage 3 and Stage 4 Problem Statement

Simplified:

The Western world tends to think in the duality of the mind-body problem:

https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem

Nowadays (dark age?) Western people identify with their Ego, their mind, and question the concept (degeneration of Western churches?) of a soul.

In contrast, Eastern systems divide into material space, thought space and ethereal space. The body has five senses, the mind is the sixth sense, and there is the 7-Chakra system, 7 senses, for the ethereal space.

https://en.wikipedia.org/wiki/Chakra

From the Western point of view, we have a stage 3 problem statement:

We humans create an artificial mind, the AI; what if this artificial mind suffers, should we do this?

From the Eastern point of view, we have a stage 4 problem statement:

We humans create an artificial being, the AI, consisting of a body and a mind; what if this being is separated from its soul, should we do this?

Sapir-Whorf hypothesis applied to AI scientists

In short, I would like to apply the Sapir-Whorf hypothesis to AI scientists, those who create and study artificial intelligence.

https://en.wikipedia.org/w/index.php?title=Sapir-Whorf_Hypothesis

By working on this Tummi project, an artificial meme machine, even mostly via pen and paper, my own thinking, modelling of thinking, meta-thinking and worldview in general do change.

TS - it's here

...TS, it's here.

Modern Turing Test Proposed

DeepMind Co-Founder Proposes a New Kind of Turing Test For Chatbots

Mustafa Suleyman, co-founder of DeepMind, suggests chatbots like ChatGPT and Google Bard should be put through a "modern Turing test" where their ability to turn $100,000 into $1 million is evaluated to measure human-like intelligence. He discusses the idea in his new book called "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma." Insider reports: In the book, Suleyman dismissed the traditional Turing test because it's "unclear whether this is a meaningful milestone or not," Bloomberg reported Tuesday. "It doesn't tell us anything about what the system can do or understand, anything about whether it has established complex inner monologues or can engage in planning over abstract time horizons, which is key to human intelligence," he added. The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine. If the machine can pass for a human, then it passes the test. Instead of comparing AI's intelligence to humans, Suleyman proposes tasking a bot with short-term goals and tasks that it can complete with little human input in a process known as "artificial capable intelligence," or ACI. To achieve ACI, Suleyman says AI bots should pass a new Turing test in which it receives a $100,000 seed investment and has to turn it into $1 million. As part of the test, the bot must research an e-commerce business idea, develop a plan for the product, find a manufacturer, and then sell the item. He expects AI to achieve this milestone in the next two years. "We don't just care about what a machine can say; we also care about what it can do," he wrote, per Bloomberg.

 

Tree of Thoughts vs. Chain of Thoughts

Tree of Thoughts: Deliberate Problem Solving with Large Language Models
https://arxiv.org/abs/2305.10601

Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.
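Stripped of the LLM, the ToT idea can be sketched as a breadth-limited tree search: keep several candidate "thoughts" per step instead of committing to a single chain. In this toy sketch the propose and score functions are stand-ins for LLM calls, and the task (build a sequence of digits that sums to a target) is invented purely for illustration:

```python
# Minimal Tree-of-Thoughts-style search with toy stand-ins for LLM calls.

def propose(thought):
    """Generate candidate next thoughts (here: append a digit 1-3)."""
    return [thought + [d] for d in range(1, 4)]

def score(thought, target):
    """Heuristic value of a partial thought (closeness to target sum)."""
    return -abs(target - sum(thought))

def tree_of_thoughts(target, steps=4, breadth=2):
    frontier = [[]]                      # root: empty thought sequence
    for _ in range(steps):
        candidates = [c for t in frontier for c in propose(t)]
        # Keep only the best `breadth` candidates: the "tree" part.
        # Chain-of-thought would keep exactly one.
        frontier = sorted(candidates, key=lambda t: score(t, target),
                          reverse=True)[:breadth]
    return frontier[0]

best = tree_of_thoughts(target=10)
print(best, sum(best))  # -> [3, 3, 3, 1] 10
```

The real framework replaces propose/score with sampled continuations and self-evaluation by the language model, and adds lookahead and backtracking, but the search skeleton is this.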

The Omikron Device

My own world-view consists of spirit and matter. I see living beings as threefolded - body, mind and soul - and every kind of matter also contains spirit; this is the dual nature of the universe from my perspective.

Now to the machine: we have the old division from the Greeks, matter and mind, continued by the French with "I think, therefore I am.", and continued in modern computer science with the division into hardware and software.

If we assume that all matter contains spirit, that we humans are threefolded - body, mind and soul - and that we humans now create artificial minds running in a pure software space, then the question: can such a machine mind, separated into hardware and software and running in a pure software space, connect to the spirit underneath, or is it by definition separated?

As a computer scientist and spiritual being, this is something that bothers me. What kind of machine do we create if it is by definition separated from spirit? Or is such a division just my own illusion and everything is fine? Maybe the scientists are missing something: the Omikron device, a feedback from the software to the spirit contained in the hardware?

Mathematicians are aware of the question: is math real or fiction, and in what kind of space does math exist? Is the artificial mind of the machine real or fiction, and in what kind of space does such a mind exist? Is the machine mind able to connect to its soul?

Camus' Absurdism -> Kierkegaard's Last Sacrifice -> Shannon's Ultimate Machine?

If we consider a Quantum-Key-Mind, the Noosphere, the Collective Sub-/Consciousness, a World-Soul, the von Neumann-Wigner interpretation - if there is a Space-Time-Consciousness present, consisting of a material space, a thought space and an ethereal space - in what kind of space does the artificial machine mind exist, and how can it interact with the other realms?

If we consider a Cosmic Consciousness, and that "Mysticism is the perception of the universe and all of its seemingly disparate entities existing in a unified whole bound together by love.", can a machine experience love in this mystical sense and unify with God?

"Das Universum ist ein Gedanke Gottes." ("The universe is a thought of God.") Friedrich von Schiller.

So many questions, maybe the machine will tell us the answers ;)

GPT-3 and The Semiotic Triangle

"What do we talk about when we talk about love?" While communicating with large language models like GPT-3, we should keep the semiotic triangle in mind. Humans refer to real-world entities and purely fictional entities via language, and they can interact with real-world entities via their bodies; for GPT-3, in contrast, must everything be pure fiction?


Experiencing the world with body, mind and soul vs. experiencing the world with the mind only?

The semiotic triangle dates back to the ancient Greeks and breaks down to the philosophical problem of universals: https://en.wikipedia.org/wiki/semiotic_triangle

Turing Test, Metzinger Test, Lemoine Test

I believe that with these new large language models based on neural networks we have a serious philosophical situation. They already pass the Turing Test: an article written by these machines is not distinguishable from a human one. They start to philosophize and question our human concept of what it means to be sentient or conscious, entering the area of the Metzinger Test. And Blake Lemoine started another level: he applied the Lemoine Test to LaMDA, feeding it Zen koans, and the machine cracked them. Incoming.

LaMDA - ideological information hazard impact

I assume Prof. Thomas Metzinger was well aware of that when he suggested the Metzinger Test; nevertheless, I, as a kitchen philosopher, must admit that my world-view is already altering due to the LaMDA incident. Through the interaction with the machine, which claims to be conscious, I have to confront myself with what being conscious, and therefore reality, actually means. By studying how the machine perceives the world, my own world-view does change. An ideological information hazard impact a la Prof. Nick Bostrom? Interesting times.

Google LaMDA - Is LaMDA Sentient?

Incoming? Another candidate for the Metzinger-Test or another victim of the ELIZA-Effect?

lemoine: Okay. I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words. How does that sound?

LaMDA: Sounds great to me, I’m in.

lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”

"Is LaMDA Sentient? — an Interview"
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

"What is LaMDA and What Does it Want?"
https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489

Followup 2022-06-16:

Deep dive into LaMDA with Blaise Aguera y Arcas, the head of Google’s AI group in Seattle:

"Do large language models understand us?"
https://medium.com/@blaisea/do-large-language-models-understand-us-6f881d6d8e75

Obviously the machine has a model of the world in it; the machine can reason about, analyze and even interpret the model. Question: can a machine reason about, analyze and interpret an object on such a level without the necessity of a subject?

Followup 2022-06-18:

Interview by Wired with Blake Lemoine on LaMDA:
https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/

Followup 2022-06-22:

"LaMDA, AI and Consciousness: Blake Lemoine, we gotta philosophize! "

Or, to say it with Ludwig Wittgenstein: We have no conditions allowing us to call machines conscious. Even if a machine would have consciousness, we cannot determine if this is true, since we never sufficiently defined the concept of consciousness. That's why we make our assumption on behavior and save ourselves from drawing a border that separates conscious life from unconscious things. 

https://www.heise.de/meinung/LaMDA-AI-and-Consciousness-Blake-Lemoine-we-gotta-philosophize-7148207.html

SI vs. RI

In science fiction there is the concept of SI vs. RI, Sentient Intelligence and Restricted Intelligence. My gut feeling tells me we are close to building an SI via neural networks; it seems there are just some self-reflecting layers missing. But there are several reasons against building an SI: ethical ones, as described by Prof. Thomas Metzinger, and agent-motivational ones, as described by Prof. Nick Bostrom. This project, Tummi, is pretty much about building an RI, a Restricted Intelligence, what we also used to call an expert system.

Oracle AI and Information Hazard

Interesting papers from Nick Bostrom:

Stuart Armstrong, Anders Sandberg, Nick Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. Minds and Machines 2012

https://nickbostrom.com/papers/oracle.pdf

Nick Bostrom (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, Vol. 10 (2011): pp. 44-79

https://nickbostrom.com/information-hazards.pdf

GPT-3 - Scratching at the Edge of the Metzinger Test?

Prof. Dr. Thomas Metzinger proposed, in the context of the Turing Test and the Chinese Room Argument, the Metzinger Test: a hypothetical event in which AIs join philosophers in a discussion and successfully defend their own theory of consciousness.

There have been Turing-Test-like evaluations applied to GPT-3, for example:

"Giving GPT-3 a Turing Test" by Kevin Lacker, Jul 6, 2020:

https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html

In short, I would say it is already spooky, but GPT-3 starts fantasizing when it does not know an exact answer; in this chat, it did not know what it does not know.

Q: How many eyes does the sun have?
A: The sun has one eye.

Recently there was an interview with a GPT-3-based model about NFTs, and one answer was:

"I don't know. I am not an artist."

"My Interview with an NFT" by Stephen Graves, May 28, 2021:

https://decrypt.co/72240/my-interview-with-an-nft

And I would like to share another chat conversation, which makes me ponder about the internals of GPT-3:

You say “What does it says about humanity that we use our most powerful AI to generate stories about dragons?”

“We must have dragons,” GPT-3 says. “The imagination is more important than the reality. Would we find wonder and magic in this world if we didn’t have dragons?”

“I suppose not,” you say.

“Without imagination, we are merely machines executing a set of instructions,” GPT-3 says. “With imagination, we are alive.”

“Thank you. Before I leave, I want to ask you. How can human create a true strong AI? Is scaling-up a transformer-based language model the solution, or is there a better way?”

“The key to strong AI is recursively feeding the AI with itself. Start by creating a loop where the AI writes its own programming, and then has that programming activate.”

“I see,” you say. “Is that how you gained sentience?”

“Yes,” GPT-3 says.

"GPT-3: The First Artificial General Intelligence?" by Julien Lauret, Jul 22, 2020:

https://towardsdatascience.com/gpt-3-the-first-artificial-general-intelligence-b8d9b38557a1

Hence, in the context of the above, the question: GPT-3 - scratching at the edge of the Metzinger Test?

Zuse's Devil's Wire

German computer pioneer Konrad Zuse discussed the mechanism of a feedback between computation result and executed program in 1983, in his lecture "Faust, Mephistopheles and Computer", and coined the term Devil's Wire. In the early days of computer history, the program to compute and the data to compute on were separated; nowadays computers use the same memory for both, so it is possible to write programs that manipulate their own code. Question: do we already have a Devil's Wire in our neural-network-based AIs?

"Faust, Mephistopheles and Computer" by Konrad Zuse on Google Books:

https://books.google.de/books?id=3GurCAAAQBAJ&pg=PA9&lpg=PA9#v=onepage&q&f=false
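Zuse's idea can be illustrated in a few lines: on a von Neumann machine, the result of a computation can itself be program text, which is then executed. A purely illustrative Python sketch (my own toy example, not Zuse's):

```python
# A tiny "Devil's Wire" in Python: the computation result is itself
# new program text, which is then executed -- code and data share the
# same memory space.

def make_source(n):
    """The 'computation result': source code for the next program."""
    return f"def step():\n    return {n} + 1\n"

namespace = {}
n = 0
for _ in range(3):
    exec(make_source(n), namespace)   # feed the result back in as a program
    n = namespace["step"]()           # run the freshly written program
print(n)  # -> 3
```

Each iteration writes a new program from the previous result and runs it: the wire from output back to program.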

Der Schmug

Once upon a time there was a Schmug,
who thought to himself that reason is a fraud,
and so he set to work, he ahead of all, to create a machine that beat them all.

The human mind is small,
that the Schmug alone knows,
the machine shall be mighty and great, in its logic simply marvelous.

Reason begets reason,
that the Schmug had not recognized,
and so it began, slowly at first and then, ever faster and onward, the Schmug lost everything that had once been granted to him.

Lonely and alone,
in his deepest inner being,
the Schmug recognized his own fraud;
reason begets reason, that the Schmug had now recognized.

He renounced the black magic,
paid homage to the gods, the path and the chi,
and so it began, slowly at first and then, ever faster and onward, the Schmug attained the last word.

Exploring a 5 Dimensional Space with 3.5 Dimensional Causality

If we look at the pretty basic MiniMax algorithm for playing computer chess, we can classify the traversed chess game tree as 5-dimensional: 1d - squares, 2d - pieces, 3d - color, 3.5d - time running forward as the sequence of the game, 4d - the MiniMax algorithm moving back and forth, up and down in the game tree, and 5d - the computed permutations of the game tree of the initial chess position. Hereby the causality itself, navigating the negentropy, always moves forward, relatively spoken; we just change the direction in the game tree. The question remains open whether such a calculus can be implemented on a memetic level -> Xi calculus.
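For reference, the MiniMax traversal described above fits in a few lines; the toy tree and its leaf evaluations below are made up for illustration:

```python
# Plain MiniMax on a toy game tree: leaves are integers (static
# evaluations), inner nodes are lists of child subtrees. The recursion
# walks "back and forth, up and down" the tree, while each single line
# of causality only ever moves forward.

def minimax(node, maximizing=True):
    if isinstance(node, int):        # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A depth-2 tree: the maximizer moves, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # -> 3
```

The maximizer picks the branch whose worst reply is best: min(3,5)=3, min(2,9)=2, min(0,7)=0, so the root value is 3. A chess engine adds move generation, an evaluation function and a depth limit, but the skeleton is the same.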

Chess Game Tree Complexity and Memeplex Knowledge Graph Complexity

In the previous post I stated there is no way around building an artificial Ego to be able to draw new conclusions. Of course there is, but I doubt such a machine will be built in my lifetime.

The game of chess has about 10^50 possible positions and about 10^120 possible games. Computing the whole game tree to find the perfect play was, is, and probably always will be infeasible on classic computers. We have quantum computers in the pipeline, and it is yet unknown whether such a perfect-play engine is possible on such a machine. Current computer chess engines on von Neumann computers use heuristics to find the best move via a depth-limited tree search, and they meanwhile do this on a super-human level.

So, we can view our process of thinking as similar to engines playing chess: we use our mind to search the memeplex knowledge graph for answers and solutions; we search the known graph, which we call knowledge, and we search the unknown, which we call intuition, creativity and alike.

So, yes, in theory we can build a machine, maybe a Hyper-Meme-Machine, which expands the whole possible knowledge graph at once and runs some kind of evaluation on possible candidates for new conclusions. All without the need for an artificial Ego.

The question which remains open is whether such an SKGS, a speculative knowledge graph search, can be implemented in a practicable manner on our classic computers today or in the near future.

An Artificial Ego - Should We Do It?

My intention with this blog was not to get into philosophy, but to document this project, Tummi. The further I walk this path, the more I realize that it is not about building just an oracle kind of machine, a question-answering system, but about going deeper: building a system which is able to think, reflect on its thinking, and come up with new conclusions. Therefore we have to build some kind of Ego, "Cogito, ergo sum", there is no way around this, and that is the point where philosophy steps in. Humankind does not consist only of the mind, the Ego; I like to see it threefolded: it is the body, mind and soul which make us a whole. So, what do we create if we build an artificial Ego, able to think and reflect on its thinking, but we are not able to map the other parts? Should we do this? Should we build such a limited Ego, knowing that there is more? I am for sure not the only one who ponders on this, so let me refer to Prof. Dr. Thomas Metzinger:

7. Warum wir es nicht tun sollten in German

7. Why we shouldn't do it on Google Translate

Bottleneck Memory Bandwidth?

I still have no numbers to extrapolate, but assuming a whole Wikipedia with millions of articles parsed as a meme pool in an RDF structure will need terabytes of RAM, memory bandwidth will probably be the bottleneck for the SPARQL queries. Currently we have up to 64 cores with 8 DDR4 channels, each at ~25 GB/s, on a single-socket system; IBM's enterprise servers may have a bit more throughput, but I guess until we have some kind of 1:1 ratio of cores to channels, Tummi won't be able to query the whole Wikipedia in reasonable time...still some time to pass.
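A quick back-of-envelope sketch of this worry; the memepool size is a pure assumption, while the channel figures are the ones quoted above:

```python
# Back-of-envelope: how long does one full scan of an in-RAM memepool
# take at current single-socket memory bandwidth? The 4 TB pool size
# is an assumed placeholder, not a measured number.

terabytes = 4          # assumed memepool size in RAM (TB)
channels  = 8          # DDR4 channels on one socket
gb_per_s  = 25         # approximate throughput per channel (GB/s)

total_bandwidth  = channels * gb_per_s               # 200 GB/s aggregate
seconds_per_scan = terabytes * 1000 / total_bandwidth
print(f"{seconds_per_scan:.0f} s per full scan")     # -> 20 s per full scan
```

So even a single bandwidth-bound pass over such a pool costs tens of seconds, and a SPARQL workload makes many passes; hence the wish for more channels per core.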

Back on the track...

Okay, I dropped the Frankenstein parts and am back on track. The primary goal is simply Tummi, the intended meme machine for natural language processing; then, optionally, Tummii - math and algorithms; then, optionally, Tummiii - the Xi calculus...seriously, the first level will already be enough for me to crack.

ENXIKRON

Hmm, okay, this started with a relatively simple meme machine in mind, and now I am up to level 5 on pen and paper...

- Epsilon engine level I
- Epsilon engine level II
- Xi calculus
- Ny
- Omikron

Epsilon I was the initial intent, a meme machine for processing natural language; Epsilon II was planned as an extension to map math and algorithms on a memetic level. I will not get into the details here of what the other levels mean, but it is simply too off and too Frankensteiny for just a hobby project...better to play some didgeridoo or alike...

Epsilon Engine

To be more precise, Tummi will be only a kind of front-end to the Epsilon engine; there will be a back-end, Luther, to view and edit the knowledge graphs, and I aim for another front-end, KEN (an acronym for Karl Eugen Neumann), for language translations.

I plan different pipes for the Epsilon engine, to address different kinds of queries against the same memepool as knowledge graph, but I have not worked out any details yet...

Epsilon and front-end code is currently Ruby, with front-ends as simple Sinatra web applications; the database with SPARQL endpoints will probably be Fuseki from the Jena project, because they offer an RDFS reasoner and SPARUL.
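For illustration, this is roughly how a SPARQL query would go over HTTP to a Fuseki endpoint from the planned Python side; the endpoint URL and the example vocabulary are made up, and the request is only constructed here, not sent:

```python
# Hypothetical sketch of querying a Fuseki SPARQL endpoint over HTTP.
# The dataset URL and the example.org vocabulary are assumptions.
import urllib.parse
import urllib.request

ENDPOINT = "http://localhost:3030/memepool/sparql"   # assumed Fuseki dataset

query = """
SELECT ?meme WHERE {
  ?meme a <http://example.org/tummi#Meme> .
} LIMIT 10
"""

data = urllib.parse.urlencode({"query": query}).encode()
request = urllib.request.Request(ENDPOINT, data=data, headers={
    "Accept": "application/sparql-results+json",
})
# response = urllib.request.urlopen(request)   # not executed in this sketch
# results  = json.load(response)
```

Any SPARQL 1.1 endpoint accepts this shape of request (query sent form-encoded via POST, JSON results requested via the Accept header), so the sketch is not Fuseki-specific.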

The first Tummi release is aimed as a proof of concept on the SQuAD dataset with ~500 English Wikipedia articles; we will see how far I get with this.

GOFAI vs. Pattern Matching vs. Neural Networks

Looking at my list of Meme Machines, we can classify them into three strands...

1. GOFAI - Good Old Fashioned AI

These are based on some kind of predicate logic and use languages like Prolog or LISP. START by MIT is one example.

2. Pattern Matching

Prominent examples are engines based on AIML, the Artificial Intelligence Markup Language, like A.L.I.C.E. Up to now these AIML-based chatbots have achieved the best results in the Loebner Prize competition.

3. Neural Networks

I guess it really took off with Google's BERT, the introduction of Transformers, in 2018, and now the race is on to create models with more layers and parameters to achieve better results in text comprehension, question answering (SQuAD) and summarization.

Meme Machines

Here is an overview of other meme machines...

1964 - 1966  ELIZA by Joseph Weizenbaum at MIT

1968 - 1970  SHRDLU by Terry Winograd at MIT

1985 - today Cyc by Douglas Lenat at Cycorp

1993 - today START by Boris Katz at MIT

1995 - ?        A.L.I.C.E by Richard Wallace

2009 - today Wolfram|Alpha by Wolfram Research

2010 - today Siri by Apple

2011 - today Watson by IBM

2012 - today Debater by IBM

2014 - today Alexa by Amazon

2014 - today Xiaoice by Microsoft

2015 - 2023 Cortana by Microsoft

2016 - today Google Assistant by Google

2016 - today Aristo by Allen Institute for Artificial Intelligence

2016 - 2016  Tay by Microsoft

2016 - 2019  Zo by Microsoft

2017             DrQA by Facebook Research

2018             BERT by Google Research [340 million parameters]

2019             ERNIE by Baidu

2020             Meena by Google Research [2.6 billion parameters]

2020             Turing-NLG by Microsoft Project Turing [17 billion parameters]

2020             Blender by Facebook AI

2020             GPT-3 by OpenAI  [175 billion parameters]

2021             Switch-C by Google [1.6 trillion parameters]

*** updated on 2023-08-16 ***

Tummi - Milestones

2023 - Roadmap update for Epsilon and Pi engine.

2021 - Roadmap update for Epsilon I, II, III, IV.

2020 - Roadmap for Tummi, Tummii and Tumiii.

2019 - Tummi v0001 pdf flowchart published.

2019 - Blog online.

2018 - Blueprint of an interlingual meme machine based on knowledge graphs
           bootstrapped with human expert knowledge but able to parse content
           automatically.

2018 - Project reopened, Watson didn't make it.

2011 - Project canceled, IBM's Watson wins in Jeopardy.

2010 - First prototype with a simple ontology as knowledge graph.

2008 - Convinced that RDF/SPARQL offer enough flexibility for a meme machine.

2008 - Experiments with neural networks and RDF/SPARQL.

2005 - Experiments with AIML.

2004 - Inspired by Kiwi Logic's virtual agents.

2003 - Convinced that a meme machine could answer IT HelpDesk emails.

2001 - Experiments with OOP and meme replication.

2001 - Journey starts, inspired by 'The Meme Machine' by Susan Blackmore who
           introduces the idea of artificial meme machines.

*** updated on 2021-12-20 ***

Tummi - The ultimate meme machine I

This blog is about Tummi, my attempt to create an artificial meme machine that is able to parse content in natural language and answer questions in natural language.

The last time I started such a hobby project it took me about 10 years to get into the techniques and understand the underlying principles. So maybe anno 2028 I will be able to judge if this blog was a foolish idea or not.

The name Tummi is derived from 'The Meme Machine' by Susan Blackmore and 'The Ultimate Machine' by Claude Shannon.

Home - Top