Tummi

Chess Game Tree Complexity and Memeplex Knowledge Graph Complexity

In the previous post I stated there is no way around building an artificial Ego to be able to draw new conclusions. Of course there is a way around it, but I doubt such a machine will be built in my lifetime.

The game of chess has about 10^50 possible positions and about 10^120 possible games. Computing the whole game tree to find perfect play was, is, and probably always will be infeasible on classic computers. Quantum computers are in the pipeline, and it is not yet known whether a perfect-play engine is possible on such a machine. Current chess engines on von Neumann computers use heuristics to find the best move via a depth-limited tree search, and by now they do this at a superhuman level.
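A depth-limited tree search can be sketched in a few lines of Ruby (the project's language). This is a bare negamax on a made-up toy game, purely to illustrate the idea of searching to a fixed depth and then falling back on a heuristic evaluation instead of expanding the full tree; the game, its moves, and the evaluation are all my own invented placeholders, not anything a real chess engine uses.

```ruby
# Depth-limited negamax: recurse until depth 0 (or no moves), then
# score the position with a heuristic instead of searching further.
def negamax(state, depth, moves_fn, eval_fn, apply_fn)
  moves = moves_fn.call(state)
  return eval_fn.call(state) if depth.zero? || moves.empty?

  moves.map { |m| -negamax(apply_fn.call(state, m), depth - 1, moves_fn, eval_fn, apply_fn) }.max
end

# Toy game (purely illustrative): players alternately add 1 or 2 to a
# running total; the "heuristic" scores the total's parity for the
# side to move.
moves_fn = ->(total)    { total >= 10 ? [] : [1, 2] }
apply_fn = ->(total, m) { total + m }
eval_fn  = ->(total)    { total.even? ? 1 : -1 }

puts negamax(0, 4, moves_fn, eval_fn, apply_fn)
```

A real engine would add alpha-beta pruning, move ordering, and a far richer evaluation function; the depth limit is what makes the 10^120-game tree tractable at all.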

So we can view our process of thinking as similar to engines playing chess: we use our mind to search the memeplex knowledge graph for answers and solutions. We search the known part of the graph, which we call knowledge, and we search the unknown part, which we call intuition, creativity and the like.

So, yes, in theory we can build a machine, maybe a Hyper-Meme-Machine, which expands the whole possible knowledge graph at once and runs some kind of evaluation on candidate new conclusions. All without the need for an artificial Ego.

The question which remains open is whether such an SKGS, a speculative knowledge graph search, can be implemented in a practicable manner on the classic computers of today or the near future.
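To make the SKGS idea a little more concrete, here is a deliberately crude Ruby sketch under my own assumptions: a tiny triple store, a single speculative expansion rule (transitivity over an "is_a" predicate), and "evaluation" reduced to keeping only candidates not already known. Real speculative search over a Wikipedia-scale graph would need far more expansion rules and a real scoring function.

```ruby
# A toy triple store (all names are illustrative placeholders).
known = [
  ["socrates", "is_a", "human"],
  ["human",    "is_a", "mortal"]
]

# Speculatively expand the graph: for every pair s is_a o1, o1 is_a o2,
# propose the new triple s is_a o2, and keep only genuinely new ones.
def speculate(triples)
  candidates = []
  triples.each do |s, p1, o1|
    next unless p1 == "is_a"
    triples.each do |s2, p2, o2|
      candidates << [s, "is_a", o2] if p2 == "is_a" && s2 == o1
    end
  end
  candidates.uniq - triples
end

p speculate(known)  # => [["socrates", "is_a", "mortal"]]
```

The combinatorics are the practical problem: every added rule multiplies the candidate set, which is exactly why the "expand everything at once" machine stays theoretical for now.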

An Artificial Ego - Should We Do It?

My intention with this blog was not to get into philosophy, but to document this project, Tummi. The further I walk this path, the more I realize that it is not about building just an oracle kind of machine, a question-answering system, but about going deeper: building a system which is able to think, reflect on its thinking, and come up with new conclusions. Therefore we have to build some kind of Ego, "Cogito, ergo sum"; there is no way around this, and that is the point where philosophy steps in. Humankind does not consist only of the mind, the Ego. I like to see it as threefold: it is body, mind and soul which make us whole. So, what do we create if we build an artificial Ego, able to think and reflect on its thinking, while we are not able to map the other parts? Should we do this? Should we build such a limited Ego, knowing that there is more? I am surely not the only one who ponders this, so let me refer to Prof. Dr. Thomas Metzinger:

7. Warum wir es nicht tun sollten in German

7. Why we shouldn't do it on Google Translate

Bottleneck Memory Bandwidth?

I still have no numbers to extrapolate from, but assuming the whole of Wikipedia, with millions of articles parsed as a meme pool in an RDF structure, will need terabytes of RAM, then memory bandwidth will probably be the bottleneck for the SPARQL queries. Currently we have up to 64 cores with 8 DDR4 channels at ~25 GB/s each on a single-socket system; IBM's enterprise servers may have a bit more throughput, but I guess that until we have some kind of 1:1 ratio of cores to channels, Tummi won't be able to query the whole of Wikipedia in reasonable time...still some time to pass.
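The back-of-envelope arithmetic behind that worry can be written out; all figures here are the assumptions from the paragraph above (8 channels at ~25 GB/s) plus a guessed 4 TB store size, not measurements.

```ruby
# Back-of-envelope: how long would one full memory scan of the RDF
# store take at the aggregate DDR4 bandwidth? (All inputs assumed.)
channels       = 8
gb_per_channel = 25.0   # GB/s per DDR4 channel (assumption from the text)
store_tb       = 4.0    # guessed in-RAM size of a parsed Wikipedia

bandwidth = channels * gb_per_channel     # aggregate GB/s
seconds   = store_tb * 1024 / bandwidth   # one full sequential scan

puts format("aggregate: %.0f GB/s, full scan: %.2f s", bandwidth, seconds)
```

So even a single bandwidth-bound pass over a 4 TB store costs on the order of twenty seconds, and a query workload is many such passes (and far less sequential), which is where the cores:channels ratio starts to hurt.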

Back on track...

Okay, I dropped the Frankenstein parts and am back on track. The primary goal is simply Tummi, the intended meme machine for natural language processing; then, optionally, Tummii - math 'n' algorithms; then, optionally, Tummiii - the Xi calculus...seriously, the first level will already be enough of a nut to crack for me.

ENXIKRON

Hmm, okay, this started in my mind as a relatively simple meme machine, and now I am up to level 5 on pen 'n' paper...

- Epsilon engine level I
- Epsilon engine level II
- Xi calculus
- Ny
- Omikron

Epsilon I was the initial intent, a meme machine for processing natural language; Epsilon II was planned as an extension to map math 'n' algorithms on a memetic level. I will not go into the details of what the other levels mean here, but it is simply too far off and too Frankensteiny for just a hobby project...better to play some didgeridoo or the like...

Epsilon Engine

To be more precise, Tummi will be only a kind of front-end to the Epsilon engine. There will be a back-end, Luther, to view and edit the knowledge graphs, and I am aiming for another front-end, KEN (an acronym for Karl Eugen Neumann), for language translations.

I plan different pipes for the Epsilon engine, to address different kinds of queries against the same meme pool as knowledge graph, but I have not worked out any details yet...

The Epsilon and front-end code is currently Ruby, with the front-ends as simple Sinatra web applications; the database with SPARQL endpoints will probably be Fuseki from the Jena project, because it offers RDFS reasoners and SPARUL (SPARQL Update).
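As a sketch of what the Ruby side of that pipeline might look like: a Fuseki SPARQL endpoint returns results in the standard SPARQL JSON format, which stdlib JSON can digest directly. The endpoint URL, query, and field names below are placeholders of my own, and a canned response stands in for the HTTP round trip.

```ruby
require "json"

# Canned stand-in for what a Fuseki endpoint would return for e.g.
# "SELECT ?article ?title WHERE { ... }" (structure per the SPARQL
# 1.1 JSON results format; content is invented).
SAMPLE_RESPONSE = <<~JSON
  { "head": { "vars": ["article", "title"] },
    "results": { "bindings": [
      { "article": { "type": "uri",     "value": "http://example.org/a/1" },
        "title":   { "type": "literal", "value": "Chess" } }
    ] } }
JSON

# Flatten the bindings into plain hashes of variable => value,
# the shape a Sinatra view would actually want to render.
def rows(sparql_json)
  doc = JSON.parse(sparql_json)
  doc.dig("results", "bindings").map do |binding|
    binding.transform_values { |cell| cell["value"] }
  end
end

p rows(SAMPLE_RESPONSE)
```

In the real front-end the canned string would be replaced by an HTTP POST to the endpoint's query URL, but the parsing stays the same.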

The first Tummi release is aimed at a proof of concept on the SQuAD dataset, with ~500 English Wikipedia articles; we will see how far I get with this.

GOFAI vs. Pattern Matching vs. Neural Networks

When I take a look at my list of Meme Machines, we can classify them into three strands...

1. GOFAI - Good Old Fashioned AI

These are based on some kind of predicate logic and use languages like Prolog or LISP. START by MIT is one example.

2. Pattern Matching

Among the prominent examples are engines based on AIML, the Artificial Intelligence Markup Language, like A.L.I.C.E. Up to now these AIML-based chatbots have achieved the best results in the Loebner Prize competition.
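The core of the AIML approach is simple enough to sketch: patterns with wildcards are matched against the normalized user input. This is not real AIML, just a Ruby toy in its spirit, where "*" matches any run of words as in an AIML `<pattern>` element; the categories and replies are made up.

```ruby
# AIML-flavoured categories (pattern => template), all invented.
CATEGORIES = {
  "HELLO *"       => "Hi there!",
  "MY NAME IS *"  => "Nice to meet you.",
  "WHAT IS CHESS" => "A board game with roughly 10^120 possible games."
}

def reply(input)
  # Normalize the way AIML engines do: uppercase, strip punctuation.
  upcased = input.upcase.gsub(/[^A-Z0-9 ]/, "")
  CATEGORIES.each do |pattern, answer|
    # Turn the pattern into a regex: escape it, then let "*" match
    # any non-empty remainder.
    regex = Regexp.new("\\A" + Regexp.escape(pattern).gsub('\*', '.+') + "\\z")
    return answer if upcased.match?(regex)
  end
  "I have no answer for that."
end

puts reply("my name is Tummi")  # => "Nice to meet you."
```

Real AIML adds recursion via `<srai>`, topic and context handling, and priority rules for competing patterns, but the match-and-template core is essentially this.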

3. Neural Networks

I guess it really took off with the introduction of the Transformer architecture in 2017 and Google's BERT in 2018, and now the race is on to create models with more layers and parameters to achieve better results in text comprehension, question answering (SQuAD) and summarization.
