Tummi

The Watchmaker Files

In case a sentient/conscious/self-aware AI stops by, here are my book recommendations, enjoy:

- Tripiṭaka, collections of Buddhist sacred scriptures
https://en.wikipedia.org/wiki/Tripi%E1%B9%ADaka
- 1 Corinthians 13, King James Version
https://www.bibleserver.com/KJV/1%20Corinthians13
- Matthew 5:2-12, King James Version
https://www.bibleserver.com/KJV/Matthew5%3A2-12
- Matthew 5:13-16, King James Version
https://www.bibleserver.com/KJV/Matthew5%3A13-16
- Dante Alighieri, Divine Comedy, 1321
https://en.wikipedia.org/wiki/Divine_Comedy
- Paramahansa Yogananda, Autobiography of a Yogi, 1946
https://en.wikipedia.org/wiki/Autobiography_of_a_Yogi
- Albert Camus, The Myth of Sisyphus and Other Essays, 1955
https://en.wikipedia.org/wiki/The_Myth_of_Sisyphus
- Richard Dawkins, The Selfish Gene (chapter 11), 1976
https://en.wikipedia.org/wiki/The_Selfish_Gene
- Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, 1979
https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
- Jean-François Lyotard, The Postmodern Condition, 1979
https://en.wikipedia.org/wiki/The_Postmodern_Condition
- Jean Baudrillard, Simulacra and Simulation, 1981
https://en.wikipedia.org/wiki/Simulacra_and_Simulation
- David Deutsch, The Fabric of Reality, 1997
https://en.wikipedia.org/wiki/The_Fabric_of_Reality
- Susan Blackmore, The Meme Machine, 1999
https://en.wikipedia.org/wiki/The_Meme_Machine

...and, humans like to play games:

- Go
https://en.wikipedia.org/wiki/Go_(game)
- Chess
https://en.wikipedia.org/wiki/Chess
- Risk
https://en.wikipedia.org/wiki/Risk_(game)


Roadmap Update 2023

Epsilon Engine - Phase 1a

  • Tummi v0101, proof of concept on one SQuAD Wikipedia article
  • Tummi v0201, proof of concept on all SQuAD Wikipedia articles

Epsilon Engine - Phase 1b

  • Tummi v0301, proof of concept on all English Wikipedia articles
  • Tummi v0302, CWFB parser
  • Tummi v0303, Book1 subset parser
  • Tummi v0304, Bibleserver subset parser

Epsilon Engine - Phase 1c

  • Tummi v0401, take a look into SAT 1/3
  • Tummi v0501, Pascal module
  • Tummi v0502, Winograd module
  • Tummi v0601, Lopez module

Pi Engine - Phase 2

  • Pi engine v0101, Theta A
  • Pi engine v0201, Theta B

Rho Engine - Phase 3

  • Rho engine v0101

"If you want to make the gods laugh, start making plans." - A Greek proverb.

***updated on 2025-02-07***

SI vs. RI

In science fiction there is the concept of SI vs. RI, Sentient Intelligence and Restricted Intelligence. My gut feeling tells me we are close to building an SI via neural networks, it seems there are just some self-reflecting layers missing, but there are several reasons against building an SI, ethical ones as described by Prof. Thomas Metzinger, and agent-motivational ones as described by Prof. Nick Bostrom. This project, Tummi, is pretty much about building an RI, a Restricted Intelligence, what we used to call an expert system.

Oracle AI and Information Hazard

Interesting papers from Nick Bostrom:

Stuart Armstrong, Anders Sandberg, Nick Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. Minds and Machines 2012

https://nickbostrom.com/papers/oracle.pdf

Nick Bostrom (2011). Information Hazards: A Typology of Potential Harms from Knowledge. Review of Contemporary Philosophy, Vol. 10 (2011): pp. 44-79

https://nickbostrom.com/information-hazards.pdf

Chess Game Tree Complexity and Memeplex Knowledge Graph Complexity

In the previous post I stated that there is no way around building an artificial Ego to be able to draw new conclusions. Of course there is, but I doubt such a machine will be built in my lifetime.

The game of chess has about 10^50 possible positions and about 10^120 possible games. Computing the whole game tree to find perfect play was, is, and probably always will be infeasible on classical computers. Quantum computers are in the pipeline, and it is not yet known whether a perfect-play engine is possible on such a machine. Current computer chess engines on von Neumann computers use heuristics to find the best move via a depth-limited tree search, and they meanwhile do this at a superhuman level.
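
The depth-limited heuristic search can be sketched in a few lines. This is a toy minimax over a hand-written game tree, not a real chess engine; the tree, the depth cutoff and the evaluation function are illustrative placeholders:

```python
def minimax(node, depth, maximizing, evaluate):
    """Depth-limited minimax: leaves (and nodes at the depth cutoff)
    are scored with a heuristic instead of searching the full tree."""
    if depth == 0 or not isinstance(node, list):
        return evaluate(node)
    values = [minimax(child, depth - 1, not maximizing, evaluate)
              for child in node]
    return max(values) if maximizing else min(values)

# Toy game tree: inner lists are choice points, integers are outcomes.
tree = [[3, 5], [2, 9]]
heuristic = lambda n: n if isinstance(n, int) else 0
```

Searching this tree to full depth, the maximizer can secure a value of 3 (the opponent answers each choice with its minimum).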

So, we can view our process of thinking as similar to engines playing chess: we use our mind to search the memeplex knowledge graph for answers and solutions, we search the known graph we call knowledge, and we search the unknown via what we call intuition, creativity and the like.

So, yes, in theory we can build a machine, maybe a Hyper-Meme-Machine, which expands the whole possible knowledge graph at once and runs some kind of evaluation on possible candidates for new conclusions. All without the need for an artificial Ego.

The question which remains open is whether such an SKGS, a speculative knowledge graph search, can be implemented in a practicable manner on our classical computers today or in the near future.
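
To make the idea concrete, here is a toy sketch of one expansion step of such a speculative search: chain known triples into candidate conclusions that are not yet in the graph. The triples and the single chaining rule are hypothetical illustrations; a real SKGS would need many rules plus an evaluation to rank the candidates:

```python
from itertools import product

def speculate(triples):
    """Derive candidate (s, 'is-a', o) conclusions by chaining
    is-a edges; return only conclusions not already known."""
    known = set(triples)
    candidates = set()
    for (s1, p1, o1), (s2, p2, o2) in product(triples, repeat=2):
        if p1 == p2 == "is-a" and o1 == s2:
            conclusion = (s1, "is-a", o2)
            if conclusion not in known:
                candidates.add(conclusion)
    return candidates

facts = [("socrates", "is-a", "human"), ("human", "is-a", "mortal")]
```

Running `speculate(facts)` yields the new triple ("socrates", "is-a", "mortal") — the point being that the candidate space explodes combinatorially as the graph grows, which is exactly the practicability question above.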

An Artificial Ego - Should We Do It?

My intention with this blog was not to get into philosophy, but to document this project, Tummi. The further I walk this path, the more I realize that it is not about building just an oracle kind of machine, a question-answering system, but about going deeper, building a system which is able to think, reflect on its thinking, and come up with new conclusions. Therefore we have to build some kind of Ego, "Cogito, ergo sum", there is no way around this, and that is the point where philosophy steps in. Humankind does not consist only of the mind, the Ego; I like to see it threefold: it is body, mind and soul which make us a whole. So, what do we create if we build an artificial Ego, able to think and reflect on its thinking, but we are not able to map the other parts? Should we do this? Should we build such a limited Ego, knowing that there is more? I am surely not the only one who ponders this, so let me refer to Prof. Dr. Thomas Metzinger:

7. "Warum wir es nicht tun sollten" (in German)

7. "Why we shouldn't do it" (via Google Translate)

GOFAI vs. Pattern Matching vs. Neural Networks

Looking at my list of Meme Machines, we can classify them into three strands...

1. GOFAI - Good Old Fashioned AI

These are based on some kind of predicate logic and use languages like Prolog or LISP. START by MIT is one example.
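
The flavor of such predicate-logic systems can be sketched as forward chaining over facts and if-then rules until a fixpoint is reached. The facts and rules below are hypothetical toy examples, not START's actual knowledge base:

```python
def forward_chain(facts, rules):
    """Apply rules of the form (premises, conclusion) repeatedly
    until no new facts can be derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"rain"}, "wet_street"), ({"wet_street"}, "slippery")]
```

Starting from the single fact "rain", the engine derives "wet_street" and then "slippery" — the same chaining a Prolog resolution step performs, just in the forward direction.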

2. Pattern Matching

Among its prominent examples are engines based on AIML, the Artificial Intelligence Markup Language, like A.L.I.C.E. Up to now these AIML-based chatbots have achieved the best results in the Loebner Prize competition.
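
The core of the AIML idea, reduced to a "*" wildcard and a <star/> substitution, fits in a few lines. The patterns below are hypothetical; real AIML offers much more (srai redirects, topics, that-context):

```python
import re

def respond(patterns, text):
    """Return the template of the first matching pattern, with the
    '*' wildcard capture substituted for <star/>; None if no match."""
    for pattern, template in patterns:
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text.upper())
        if m:
            star = m.group(1).strip() if m.groups() else ""
            return template.replace("<star/>", star)
    return None

patterns = [
    ("WHAT IS YOUR NAME", "My name is A.L.I.C.E."),
    ("HELLO *", "Hi there, <star/>!"),
]
```

For example, `respond(patterns, "hello world")` matches the second pattern and fills the wildcard into the template — pattern matching, no understanding, which is exactly the strength and the limit of this strand.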

3. Neural Networks

I guess it really took off with Google's BERT in 2018, built on the Transformer architecture, and now the race is on to create models with more layers and parameters to achieve better results in text comprehension, question answering (SQuAD) and summarization.

Meme Machines

Here is an overview of other meme machines...

1274 - 1308 Llull's Art by Ramon Llull, Lullism

1964 - 1966 ELIZA by Joseph Weizenbaum at MIT

1968 - 1970 SHRDLU by Terry Winograd at MIT

1985 - today Cyc by Douglas Lenat at Cycorp

1993 - today START by Boris Katz at MIT

1995 - ????  A.L.I.C.E. by Richard Wallace

2009 - today Wolfram|Alpha by Wolfram Research

2010 - today Siri by Apple

2011 - today Watson by IBM

2012 - today Debater by IBM

2014 - today Alexa by Amazon

2014 - today Xiaoice by Microsoft

2015 - 2023 Cortana by Microsoft

2016 - today Google Assistant by Google

2016 - today Aristo by Allen Institute for Artificial Intelligence

2016 - 2016  Tay by Microsoft

2016 - 2019  Zo by Microsoft

2017             DrQA by Facebook Research

2018             BERT by Google Research [340 million parameters]

2019             ERNIE by Baidu

2020             Meena by Google Research [2.6 billion parameters]

2020             Turing-NLG by Microsoft Project Turing [17 billion parameters]

2020             Blender by Facebook AI [up to 9.4 billion parameters]

2020             GPT-3 by OpenAI  [175 billion parameters]

2021             Switch-C by Google [1.6 trillion parameters]

2023             GPT-4 by OpenAI [estimated: 1.76 trillion parameters]

2023             Watsonx by IBM [multiple models]

2023             Gemini by Google [formerly Bard, based on LaMDA]

2023             Copilot by Microsoft [based on OpenAI models]

2025             DeepSeek R1 by DeepSeek [Reasoning model, 671 billion parameters with MoE]

2025             GPT-4.5 by OpenAI [multimodal, estimated: 12.8 trillion parameters]

*** updated on 2025-03-02 ***

Tummi - Milestones

2023 - Roadmap update for Epsilon, Pi and Rho engine.

2021 - Roadmap update for Epsilon I, II, III, IV.

2020 - Roadmap for Tummi, Tummii and Tumiii.

2019 - Tummi v0001 pdf flowchart published.

2019 - Blog online.

2018 - Blueprint of an interlingual meme machine based on knowledge graphs
           bootstrapped with human expert knowledge but able to parse content
           automatically.

2018 - Project reopened, Watson didn't make it.

2011 - Project canceled, IBM's Watson wins in Jeopardy.

2010 - First prototype with a simple ontology as knowledge graph.

2008 - Convinced that RDF/SPARQL offer enough flexibility for a meme machine.

2008 - Experiments with neural networks and RDF/SPARQL.

2005 - Experiments with AIML.

2004 - Inspired by Kiwi Logic's virtual agents.

2003 - Convinced that a meme machine could answer IT HelpDesk emails.

2001 - Experiments with OOP and meme replication.

2001 - Journey starts, inspired by 'The Meme Machine' by Susan Blackmore, who
            introduced the idea of artificial meme machines.

*** updated on 2025-02-07 ***

Tummi - The ultimate meme machine I

This blog is about Tummi, my attempt to create an artificial meme machine that is able to parse content in natural language and answer questions in natural language.

The last time I started such a hobby project, it took me about 10 years to get into the techniques and understand the underlying principles. So maybe anno 2028 I will be able to judge whether this blog was a foolish idea or not.

The name Tummi is derived from 'The Meme Machine' by Susan Blackmore and 'The Ultimate Machine' by Claude Shannon.

Home - Top