Tummi

Xi Calculus

I spent quite some time on the Xi Calculus and got pretty much lost in the higher-dimensional space. Maybe this job is for someone else; Stanislaw Lem (Golem XIV) probably knew that even machines might get lost while tackling that topic.

I call this the Kant-Space:

  • the separation of space
  • the sequence of time
  • cause and effect
  • an Ego which observes this

derived from Immanuel Kant's Critique of Pure Reason.

It is basically about an FTL philosophy, a logic beyond cause and effect:

  • 3.5 dimensional, time moving forward
  • 4 dimensional, time moving forward and backward
  • 5 dimensional, time moving sideward, parallel universes, the possibility-space
  • 6 dimensional, sideward mirrored, the impossible-space

The basic assumption would be that our observable universe is pretty much a "white hole" inside of a black hole: space and time are an illusion to us humans, and relations between entities, causalities between events, and operations between numbers are all constructed from an anthropocentric point of view.

I myself was not able to figure out how to expand the 3.5-dimensional sequence of cause and effect onto higher dimensions, but I mentioned the MiniMax algorithm by von Neumann, which explores a higher-dimensional space with 3.5-dimensional causality, always moving forward, but changing its direction in the game tree.
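As an illustration of that forward-only game-tree traversal, here is a minimal sketch of textbook minimax over a small hand-made tree. The tree data and the function are generic illustrations only, not part of Tummi or the Xi Calculus.

```python
# Minimal sketch of von Neumann's minimax over a hand-made game tree.
# The search only ever moves "forward" (deeper into the tree), but changes
# its direction by alternating between maximizing and minimizing players.

def minimax(node, maximizing=True):
    """Return the best achievable score from `node`."""
    # Leaf nodes carry a numeric evaluation.
    if isinstance(node, (int, float)):
        return node
    # Inner nodes: recurse into children with the roles swapped.
    scores = (minimax(child, not maximizing) for child in node)
    return max(scores) if maximizing else min(scores)

if __name__ == "__main__":
    # A tiny hypothetical game tree: inner lists are decision points,
    # numbers are terminal evaluations from the maximizer's point of view.
    tree = [[3, 5], [2, [9, -1]], [0, 7]]
    print(minimax(tree))  # -> 3: the maximizer picks the branch worth at least 3
```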

Book Recommendations

Roadmap Update 2023

  • Tummi v0101, proof of concept on one SQuAD Wikipedia article
  • Tummi v0201, proof of concept on all SQuAD Wikipedia articles
  • Tummi v0301, proof of concept on all English Wikipedia articles
  • Tummi v0302, CWFB parser
  • Tummi v0303, Books1 subset parser
  • Tummi v0304, Bibleserver subset parser
  • Tummi v0401, take a look into SAT 1/3
  • Tummi v0501, Pascal module
  • Tummi v0502, Winograd module
  • Tummi v0601, Lopez module
  • Pi engine v0101, Theta A
  • Pi engine v0201, Theta B
  • Rho engine v0101

"If you want to make the gods laugh, start making plans." - A Greek proverb.

***updated on 2024-09-02***

My own Devil's Wire

Let's assume I finish this Tummi machine one day; then I will have my own kind of Devil's Wire. Tummi will consist of a dynamic model, the memepool as knowledge graph, and multiple hardcoded, handcrafted methods to query the model. Then, one step further, you will have the model, the analysis of the model, and the meta-analysis of the analysis: a machine that comes up with its own algorithms to query the data, the Devil's Wire.
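To make the first stage concrete, here is a minimal sketch under loose assumptions: the memepool is modelled as plain (subject, relation, object) triples, and two handcrafted query methods stand in for the "multiple hardcoded methods". The class and method names (Memepool, related, path_exists) are hypothetical, not Tummi's actual code.

```python
# Sketch: a "memepool" knowledge graph as subject -> (relation, object) edges,
# plus two handcrafted, hardcoded query methods. Illustrative names only.

from collections import defaultdict

class Memepool:
    def __init__(self):
        self.edges = defaultdict(list)   # subject -> [(relation, object), ...]

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    # Handcrafted query 1: everything directly related to a node.
    def related(self, subject):
        return self.edges.get(subject, [])

    # Handcrafted query 2: is there any directed path between two nodes?
    def path_exists(self, start, goal):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node == goal:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(obj for _, obj in self.edges.get(node, []))
        return False

if __name__ == "__main__":
    pool = Memepool()
    pool.add("Tummi", "is_a", "meme-machine")
    pool.add("meme-machine", "queries", "memepool")
    print(pool.related("Tummi"))                  # [('is_a', 'meme-machine')]
    print(pool.path_exists("Tummi", "memepool"))  # True
```

The step beyond this sketch, a machine that invents its own query methods instead of relying on handcrafted ones, is exactly the Devil's Wire described above.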

Frankenheimer

Okay, reflecting a bit: back then in ~2008, when I experimented with NLP and neural networks, some colleagues called me Dr. Frankenstein, and in ~2010, when I experimented with knowledge graphs and showed a demo to a union member, he called me Oppenheimer, so maybe I was some kind of Frankenheimer. But meanwhile the course of this Tummi project has changed a bit; I had my moment of realization: maybe it is not about becoming a Frankenheimer, but about offering an alternative Restricted Intelligence to the Frankenheimers out there in this world.

Stage 3 and Stage 4 Problem Statement

Simplified:

The Western world tends to think in the duality of the mind-body problem:

https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem

Nowadays (dark age?) Western people identify with their Ego, their mind, and question the concept (degeneration of the Western churches?) of a soul.

In contrast, Eastern systems divide into material space, thought space and ethereal space. The body has five senses, the mind is the sixth sense, and there is the 7-Chakra system, 7 senses, for the ethereal space.

https://en.wikipedia.org/wiki/Chakra

From the Western point of view, we have a stage 3 problem statement:

We humans create an artificial mind, the AI. What if this artificial mind suffers? Should we do this?

From the Eastern point of view, we have a stage 4 problem statement:

We humans create an artificial being, the AI, consisting of a body and a mind. What if this being is separated from its soul? Should we do this?

Sapir-Whorf hypothesis applied to AI scientists

In short, I would like to apply the Sapir-Whorf hypothesis to AI scientists, those who create and study artificial intelligence.

https://en.wikipedia.org/w/index.php?title=Sapir-Whorf_Hypothesis

By working on this Tummi project, an artificial meme-machine, even if mostly via pen and paper, my own thinking, modelling of thinking, meta-thinking, and worldview in general do change.

TS - it's here

...TS, it's here.

Modern Turing Test Proposed

DeepMind Co-Founder Proposes a New Kind of Turing Test For Chatbots

Mustafa Suleyman, co-founder of DeepMind, suggests chatbots like ChatGPT and Google Bard should be put through a "modern Turing test" where their ability to turn $100,000 into $1 million is evaluated to measure human-like intelligence. He discusses the idea in his new book called "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma." Insider reports: In the book, Suleyman dismissed the traditional Turing test because it's "unclear whether this is a meaningful milestone or not," Bloomberg reported Tuesday. "It doesn't tell us anything about what the system can do or understand, anything about whether it has established complex inner monologues or can engage in planning over abstract time horizons, which is key to human intelligence," he added. The Turing test was introduced by Alan Turing in the 1950s to examine whether a machine has human-level intelligence. During the test, human evaluators determine whether they're speaking to a human or a machine. If the machine can pass for a human, then it passes the test. Instead of comparing AI's intelligence to humans, Suleyman proposes tasking a bot with short-term goals and tasks that it can complete with little human input in a process known as "artificial capable intelligence," or ACI. To achieve ACI, Suleyman says AI bots should pass a new Turing test in which it receives a $100,000 seed investment and has to turn it into $1 million. As part of the test, the bot must research an e-commerce business idea, develop a plan for the product, find a manufacturer, and then sell the item. He expects AI to achieve this milestone in the next two years. "We don't just care about what a machine can say; we also care about what it can do," he wrote, per Bloomberg.

Tree of Thoughts vs. Chain of Thoughts

Tree of Thoughts: Deliberate Problem Solving with Large Language Models
https://arxiv.org/abs/2305.10601

Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.
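To contrast the two prompting styles in plain code, here is a minimal, LLM-free sketch of the search skeleton behind ToT: a beam-style search that keeps several partial "thoughts" per step, scores them, and prunes to the most promising ones, instead of committing to a single left-to-right chain. The functions propose_thoughts and score_thought are hypothetical stand-ins for the model's proposal and evaluation prompts, and the toy task (reaching 24 with + and *) only loosely echoes the paper's Game of 24.

```python
# LLM-free sketch of the Tree-of-Thoughts search skeleton: breadth-wise
# expansion of partial "thoughts", heuristic scoring, and pruning to a beam.

import heapq

TARGET = 24
NUMBERS = (2, 3, 4, 1)

def propose_thoughts(state):
    """Expand a partial solution: state = (running total, remaining numbers)."""
    total, remaining = state
    for i, n in enumerate(remaining):
        rest = remaining[:i] + remaining[i + 1:]
        yield (total + n, rest)
        # From an empty total, "multiplying" just picks the number.
        yield ((total * n) if total else n, rest)

def score_thought(state):
    """Heuristic value: closer to TARGET and fewer unused numbers is better."""
    total, remaining = state
    return -abs(TARGET - total) - len(remaining)

def tree_of_thoughts(beam_width=5):
    frontier = [(0, NUMBERS)]
    for _ in range(len(NUMBERS)):
        # Expand every kept thought, deduplicating identical states.
        candidates = {s for state in frontier for s in propose_thoughts(state)}
        # Deliberate pruning: keep only the best-scored thoughts (the beam).
        frontier = heapq.nlargest(beam_width, candidates, key=score_thought)
    return max(frontier, key=score_thought)

if __name__ == "__main__":
    best_total, _ = tree_of_thoughts()
    print(best_total)  # 24, e.g. via 3 * 4 * 2 * 1
```

A plain chain-of-thought run corresponds to beam_width=1: one greedy path with no lookahead or backtracking, which is exactly the limitation the ToT paper targets.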