OntoCore - Ontologic operating system Core (Under construction)

Basic Properties
Integrating Architecture

abstract machine core OntoCore
OntoCore (OC) is the Ontologic operating system Core and accordingly the component of OntoLi+-x that comprises the core of

  • operating system (os),
  • Virtual Machine (VM),
  • Virtual Virtual Machine (VVM), and
  • Abstract Machine (AM),

and handles all kinds of basic operating system kernel tasks.

basic properties

  • verified,
  • reflective,
  • n-dimensional,
  • real-time capable,
  • capability-based,
  • contract-based, and
  • system-inherent Ontologics and SoftBionics (SB)
    • Artificial Intelligence (AI),
    • Machine Learning (ML)

corresponding subcomponents:

  • capability-based microkernel OntoL4 and
  • verified, capability-based microkernel OntoS1,

and also the other Ontologic components.

We reduced the Application Binary Interface (ABI) of operating system primitives even further than others, or rather removed them mostly in practice and completely in concept with a kernel-less microkernel, so that a Haven is already the Heaven.
To show this, we recall the relevant features of operating systems that are referenced in the section Exotic Operating System of the webpage Links to Software. The following text passages can be found in the original materials about these operating systems.

1. Kernel-Less Operating System (KLOS)
In KLOS each process has its own private address space and virtual memory mapping and functions independent of all other processes in the system.
This is made possible by the use of a subtle trick involving segmentation and Task State Segments (TSS).

The so-called Event Core is the heart of the KLOS architecture. The amount of processing done in the Event Core is minimal and restricted to transferring control to the unprivileged domain and performing an optional flush of the Translation Look-aside Buffer (TLB; a cache that the Memory Management Unit (MMU) uses to improve virtual address translation speed). It is important to note that the Event Core is not just another name for the kernel as viewed in traditional operating systems. Unlike traditional kernels, in KLOS there are no down-calls to the Event Core.
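The division of labor can be sketched as follows (a minimal toy model with hypothetical names, not KLOS code): the core only transfers control to unprivileged handlers and optionally flushes a simulated TLB, and all real work happens outside the core.

```python
# Toy model of a KLOS-style Event Core (illustrative only).

class Process:
    def __init__(self, name):
        self.name = name
        self.handlers = {}               # event -> unprivileged handler

class EventCore:
    def __init__(self):
        self.tlb = {}                    # simulated translation cache
        self.current = None

    def raise_event(self, process, event, arg):
        """Transfer control to the process's unprivileged handler."""
        if self.current is not process:  # address-space switch
            self.tlb.clear()             # optional TLB flush on switch
            self.current = process
        handler = process.handlers[event]
        return handler(arg)              # no down-calls back into the core

core = EventCore()
p = Process("editor")
p.handlers["timer"] = lambda ticks: f"{p.name} handled {ticks} ticks"
print(core.raise_event(p, "timer", 3))   # -> editor handled 3 ticks
```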

2. Systems Programming using Address-spaces and Capabilities for Extensibility (SPACE)
Virtualized resources as the fundamental abstraction presented to the application developer are removed. Instead SPACE supports spaces, domains, and portals as structuring mechanisms, which can be used to implement system services in multiple protection domains.

The key to SPACE is the exploitation of protection domains to structure the operating system in terms of multiple kernels. At this point, our Ontologic System Architecture (OSA) integrates a Multi-Agent System (MAS), an Agent-Based Operating System (ABOS), and a microkernel.
By using the protection domains in processor architectures that support kernel-based systems, it is possible to build an operating system without relying on a central kernel.
SPACE generalizes the processor architecture by abstracting address spaces, privileged modes, and exceptions. These mechanisms are used to construct software using spaces, domains, and portals.
In SPACE the only abstractions present in the kernel are a generalization of the exception or interrupt handling mechanism. This generalized exception handling mechanism could be implemented in hardware; the result would be a truly kernel-less operating system.
Ultimately, the kernel is the hardware.

Address spaces correspond to Memory Management Unit (MMU) contexts, but instead of two privileged modes, SPACE supports an arbitrary number.
Exceptions are extended in SPACE through three related mechanisms:

  • 1. The first mechanism translates a hardware exception and related information (e.g. processor exception registers) into a generalized exception and parameter pair (X,v).
  • 2. The second mechanism maps generalized exceptions into portals. A portal associates an (X,v) pair with a generalized exception handler.
    Portals also specify the portal type implementation to use for traversing the portal to the exception handler and later returning.
    The ability to traverse a portal is a capability. A cross-domain call is then defined as passing through a portal. A cross-domain (communication) mechanism is used as a generalization of interrupt handling.
  • 3. The third mechanism used to extend exceptions in SPACE allows different implementations of the generalized exception and portal type mechanisms to be associated with each address space.
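The three mechanisms above can be sketched as a small dispatch model (illustrative names only, not the SPACE implementation): a hardware exception is translated into a generalized (X,v) pair, portals map that pair to a handler, and traversing a portal requires a capability.

```python
# Hedged sketch of SPACE-style generalized exceptions and portals.

# 1. translate a hardware exception into a generalized (X, v) pair
def translate(hw_exception):
    table = {"page_fault": ("MEM", 0), "syscall_80": ("CALL", 80)}
    return table[hw_exception]

# 2. portals map (X, v) pairs to handlers in some protection domain
portals = {}

def register_portal(x, v, handler, capability):
    portals[(x, v)] = (handler, capability)

def traverse(x, v, caller_caps, arg):
    handler, needed = portals[(x, v)]
    if needed not in caller_caps:        # traversing a portal is a capability
        raise PermissionError("no capability for portal")
    return handler(arg)                  # cross-domain call

register_portal("CALL", 80, lambda n: n + 1, capability="cap:svc80")
x, v = translate("syscall_80")
print(traverse(x, v, {"cap:svc80"}, 41))   # -> 42
```

The third mechanism (per-address-space implementations) would correspond to giving each address space its own `portals` table.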

This questions their use in a practical operating system: elimination of the privileged mode, tagging of memory operations, and hardware support for portals/token-chains.
Hardware protection does not come for free, though its costs are diffuse and difficult to quantify. Costs of hardware protection include maintenance of page tables, soft TLB misses, cross-processor TLB maintenance, hard paging exceptions, and the additional cache pressure caused by operating system code and data supporting hardware protection. In addition, TLB access is on the critical path of many processor designs and so might affect both processor clock speed and pipeline depth. Hardware protection increases the cost of calls into the kernel and process context switches. On processors with an untagged TLB, such as most current implementations of the x86 architecture, a process context switch requires flushing the TLB, which incurs refill costs.
(See also Fluke Alta.)

3. Flux µ-kernel environment (Fluke)
Fluke provides a virtualizable architecture so that microkernels meet recursive virtual machines.
A complete virtual machine interface is provided at each level; efficiency derives from needing to implement only new functionality at each level. This infrastructure allows common operating system functionality, such as process management, demand paging, fault tolerance, and debugging support, to be provided by cleanly modularized, independent, stackable virtual machine monitors, implemented as ordinary user processes.

3.1 Alta
Software-based protection mechanisms can take several forms, including type-safe languages, annotated code systems, and checkable (or provable) code accompanied by a proof. All of these systems use compile-time, load-time, or run-time checks to prevent a program from making illegal memory accesses.
Alta provides essentially the same operating system and process model as does the basic Fluke, but executes Java byte code instead of machine language, and uses type safety instead of the MMU to provide protection.

3.2 Flask
Flask is a security-enhanced version of the Fluke kernel and operating system that applies Mandatory Access Control (MAC) to the Fluke microkernel.

4. VirtuOS
VirtuOS exploits hardware-based virtualization (e.g. Xen or another hypervisor) to isolate and protect vertical slices of existing operating system kernels (e.g. Linux) in separate service domains, and to provide full protection of isolated system domains, or rather improved isolation of kernel parts or components in virtualized containers (see e.g. the BSD Unix based OntoLix). Each service domain represents a partition of an existing kernel, which implements a subset of that kernel's functionality.
VirtuOS's user library dispatches system calls directly to service domains using an exception-less system call model (see Linux kernel with FlexSC and also external synchrony (see Xsyncfs)), avoiding the cost of a system call trap in many cases (compare with kernel-less KLOS without privileged mode or ring transition and with hardware-based virtualized resources, and SPACE without privileged mode but with support of an arbitrary number of them and without virtualized resources but hardware-based (protection for) cross-domain calls).

It uses the Peripheral Component Interconnect (PCI) passthrough and input-output memory management unit (IOMMU) facilities of hardware-based virtual machine monitors to provide service domains with direct access to physical devices.
The hypervisor's PCI passthrough mode allows guest domains other than Dom0 direct access to PCI devices, without requiring emulation or paravirtualization. These guests have full ownership of those devices and can access them using unchanged drivers.
To make PCI passthrough safe, the physical presence of an IOMMU is required. An IOMMU remaps and restricts addresses and interrupts used by memory-mapped I/O devices. It thus protects from faulty devices that may make errant DMA accesses or inject unassigned interrupts. Thus, devices and drivers are isolated so that neither failing devices nor drivers can adversely affect other domains (replace the IOMMU with contract-based channels (CBCs) and manifest-based programs (MBPs) like in 9. Singularity).

Exception-less system calls avoid this overhead. Instead of executing system calls in the context of the current task, a user-level library places system call requests into a buffer that is shared with kernel worker threads that execute the system call on the task's behalf, without requiring a mode switch. Effective exception-less system call handling assumes that kernel worker threads run on different cores from the user threads they serve, or else the required context switch and its associated cost would negate its benefits.
A key challenge to realizing the potential gains of this model lies in how to synchronize user and kernel threads with each other. Since application code is generally designed to expect a synchronous return from the system call, user-level M:N threading is required, in which a thread can context-switch with low overhead to another thread while a system call is in progress. Alternatively, applications can be rewritten to exploit asynchronous communication, such as for event-driven servers [using libflexsc also based on FlexSC-Threads]. VirtuOS uses the exception-less model for its system call dispatch, but the kernel worker threads execute in a separate virtual machine.
Exception-less system calls are also used for event-driven or event-based servers and for the general-purpose distributed memory caching system Memcached.
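The exception-less model described above can be modeled with two threads and a shared buffer (a simplification under our own assumptions; the real systems use shared syscall pages and per-core worker threads):

```python
# Sketch of the exception-less system-call idea (FlexSC/VirtuOS style).
import queue, threading

requests = queue.Queue()                 # stands in for the shared syscall page

def kernel_worker():
    """Runs on 'another core'; executes calls without a mode switch."""
    while True:
        call, args, done = requests.get()
        if call == "exit":
            break
        if call == "add":                # stand-in for a real system call
            done["result"] = sum(args)
        done["event"].set()              # signal completion to the user thread

worker = threading.Thread(target=kernel_worker)
worker.start()

def syscall(call, args):                 # user-level dispatch: no trap,
    done = {"event": threading.Event()}  # just a buffer entry plus a wait
    requests.put((call, args, done))
    done["event"].wait()
    return done["result"]

result = syscall("add", [2, 3])
print(result)                            # -> 5
requests.put(("exit", None, None))
worker.join()
```

The synchronous `wait()` here is exactly the point where FlexSC-Threads would instead context-switch to another user-level thread.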

5. Spring
Sun engineers used non-standard terminology for a number of common components, which makes discussing the system somewhat confusing. For instance, Mach tasks are referred to as domains, ports as doors, and the kernel as the nucleus.

5.1 Nucleus
The Spring kernel was divided into two parts: a virtual memory system and the nucleus.
IPC model with doors between domains and name service.
The Spring kernel is not multi-threaded and almost immediately hands off the vast majority of requests to the servers, so under this model it is only the servers which, in theory, need to be threaded.

5.2 InterProcess Communication (IPC)
One major difference between Mach and Spring was the IPC system. In Mach, the system was arranged as a set of one-way asynchronous pipes (ports) between programs, a concept derived from Unix pipes. In programming, however, the most common method of communications is the procedure call, or call/return, which Mach did not support directly. Call/return semantics could only be supported via additional code in higher-level libraries based on the underlying ports mechanism, thereby adding complexity.

Spring directly supported call/return semantics in the basic communications system. This resulted in a change of terminology from ports in Mach, to doors in Spring. Doors were known to the kernel only; programs were handed a handle to the door with an identifier which was unique to that program. The system worked similarly to ports for the initial message; messages sent to a door were examined by the nucleus in order to find the target application and translate the door handle, but the nucleus then recorded small amounts of information from the caller in order to be able to return data quickly.

Additionally, the Mach model was asynchronous: the call would return if and when the server had data. This followed the original Unix model of pipes, which allowed other programs to run if the server was busy. However, for a call/return system this has serious drawbacks, because the task scheduler has to run to select the next program to be serviced. Hopefully this was the server the call was requesting data from, but this was not guaranteed. Under Spring, IPC is synchronous; control is immediately passed to the server without running the scheduler, improving the round trip time in the common case when the server can immediately return.
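The door mechanism can be sketched as follows (a hypothetical, simplified model, not Spring code): the nucleus keeps the global door table, translates each caller's private handle, and hands control synchronously to the server.

```python
# Illustrative model of Spring-style doors.

class Nucleus:
    def __init__(self):
        self.doors = {}        # global door id -> server function
        self.handles = {}      # (process, local handle) -> global door id

    def create_door(self, server_fn):
        door_id = len(self.doors)
        self.doors[door_id] = server_fn
        return door_id

    def grant(self, process, door_id, local_handle):
        # each program gets its own identifier for the door
        self.handles[(process, local_handle)] = door_id

    def door_call(self, process, local_handle, arg):
        door_id = self.handles[(process, local_handle)]  # translate handle
        # control passes directly to the server without running the
        # scheduler; the nucleus records only what it needs to return fast
        return self.doors[door_id](arg)

nucleus = Nucleus()
d = nucleus.create_door(lambda name: f"resolved:{name}")
nucleus.grant("client-A", d, local_handle=7)
print(nucleus.door_call("client-A", 7, "/etc/passwd"))
```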

5.3 Subcontracts
Although not directly related to Spring per se, existing mechanisms for supporting different flavors of calls were not well defined. In order to provide a richer interface, the concept of subcontracts (see also Singularity) was developed.

5.4 Related Work
The L4 microkernel shares a number of features with Spring's kernel. In particular it also uses a synchronous call/return system for IPC, and has a similar Virtual Machine (VM) model.

5.5 Interface Definition Language (IDL)
Spring concentrated on programmability: making the system easier to develop for. The primary addition in this respect was the development of a rich Interface Definition Language (IDL). In addition to functions and their parameters, Spring's interfaces also included information about what errors can be raised and the namespace they belong to. Given a proper language, programs, including operating system servers, could import multiple interfaces and combine them as if they were objects native to that language - notably C++. Some time later the Spring IDL was adopted with minor changes as the Common Object Request Broker Architecture Interface Definition Language (CORBA IDL).
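The kind of information such an interface description carries can be sketched as a plain data structure (a hypothetical structure, not the actual Spring or CORBA IDL syntax): operations plus the errors they may raise and the namespace they belong to, which a tool can then check statically.

```python
# Sketch of what a Spring-like IDL description might capture.

interface = {
    "namespace": "spring.naming",        # the namespace the interface belongs to
    "name": "Resolver",
    "operations": {
        "resolve": {"params": ["name: string"],
                    "returns": "object",
                    "raises": ["NotFound", "NoPermission"]},
    },
}

def check_raises(iface, op, error):
    """Static-style check: may this operation raise this error?"""
    return error in iface["operations"][op]["raises"]

print(check_raises(interface, "resolve", "NotFound"))     # -> True
print(check_raises(interface, "resolve", "DiskFull"))     # -> False
```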

6. Pebble (for embedded applications)
Pebble shares with SPACE the idea of cross-domain communication as a generalization of interrupt handling.

6.1 Key Ideas
Minimal supervisor-mode nucleus, responsible for little more than context switches.

  • Most operating system functionality is provided by servers that execute in user mode without special privileges.

Safe extensibility: the system is constructed from untrusted components and reconfigured while running.

  • Software components execute in separate protection domains enforced by hardware (MMU).

Inter-domain communication is performed by portals, which are generated dynamically.
Portals may manipulate parameters.
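The idea of dynamically generated, parameter-manipulating portals can be modeled with closures (the real system emits specialized machine code; the names here are illustrative):

```python
# Hedged sketch of Pebble-style dynamically generated portal code.

def generate_portal(target, transform=None):
    """Build a specialized, loop-free transfer stub for one portal."""
    if transform is None:
        def portal(arg):                 # specialized: no parameter handling
            return target(arg)
    else:
        def portal(arg):                 # specialized: manipulates parameters
            return target(transform(arg))
    return portal

# a server in its own protection domain
def file_server(request):
    return {"status": "ok", "request": request}

# portal specialized to rewrite a parameter on the way through
p = generate_portal(file_server, transform=str.upper)
print(p("read"))
```

Because each portal stub contains only the code its specific transfer needs, the per-call cost stays small, which is the basis for the "no performance reason to co-locate" claim below.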

6.2 Hardware vs. Software Protection in Component-Based Applications
Software protection: type safe languages (e.g. Java) or software fault isolation.
Traditional hardware protection: components run in separate protection domains enforced by MMU.

  • Slow communication encourages co-location of components.

Pebble approach: hardware protection with efficient communication via specialized portals.

  • Components may be written in any language.
  • The cost of portal call is small enough so that there is no performance reason to co-locate components in a single protection domain.

6.3 Notes: Design Philosophy
A small kernel:

  • The kernel is responsible only for switching between protection domains.
  • If something can be run at user level, it is.
  • Pebble's kernel is much smaller than contemporary kernels (Unix, Inferno, L4; an old version of L4 was meant at that time. The kernels of the current L4 versions are also very small.)

Typical Pebble components: CPU scheduler, device drivers (including interrupt handling), VM, file systems.

  • portal code is generated dynamically and performs portal-specific actions.
  • portal code runs in kernel mode and may access or modify private data structures belonging to certain servers (e.g. VM).
  • portal code has no loops and may not be blocked.
  • each protection domain has a private portal table.

No performance reason to co-locate services (run multiple services in the same protection domain).

  • cross-domain transfers via portals are very efficient.
  • can run untrusted code written in any language (not just type-safe languages such as Java) in a separate protection domain under hardware memory protection without a large performance penalty.

6.4 Related Work
Pebble includes a combination of old ideas:

  • Microkernels: Mach and L4.
  • Protection domains and portals: SPACE.
  • Specialized code generation: Synthesis and Synthetix.
  • Efficient parameter passing: Lightweight Remote Procedure Call (LRPC).
  • Object oriented OS: Spring.
  • Dynamically replaceable components: Kea.
  • User-level implementation of high-level abstractions: Exo-Kernel

Is it just a combination of L4 and Synthesis?

6.5 Pebble vs. Related Operating Systems
Pebble takes the ideas of Synthesis and Synthetix to the extreme.
What happens when all system calls and all IPC are customized?
Pebble has a smaller nucleus than modern kernels like L4, while keeping their performance advantage for specialized system calls and IPC. (This comparison is based on older implementations of the L4 microkernel and need not hold for the latest implementations of it.)
Pebble extends operating system functionality by calling new user-level servers via efficient portals. Other operating systems load code into the kernel or provide a library of low-level abstractions.
Pebble provides safety by enforcing protection domains with hardware MMU. Other operating systems use special languages, static or dynamic checks for code that is loaded into the kernel.

7. OntoS1
OntoS1 is our verified microkernel based on our (verified) OntoL4 microkernel.

Instead of leveraging hardware-based protection to speed up (some of) the context switching, like for example TrustZone (TZ), a verified kernel can be used to run all operating system servers and applications in protected mode, or rather without any privileged mode (see KLOS and SPACE).

8. Coyotos
Type safety is another way to enforce operating system security. Coyotos combines capabilities with language-level verification techniques.

9. Singularity
Singularity systems incorporate three key architectural features:

  • 1. Software-Isolated Processes (SIPs; Isolation-Based Processes (IBPs)) for protection of programs and system services provide an environment for program execution protected from external interference.
  • 2. Contract-Based Channels (CBCs) for communication enable fast, verifiable message-based communication between processes.
  • 3. Manifest-Based Programs (MBPs) for verification of system properties define the code that runs within software-isolated processes and specify its verifiable behavioral properties.

At the same time, two collaborating research groups had begun to verify the L4 microkernel as well, because we were acting in these fields for the development of our OS.

9.1 Software-Isolated Process (SIP)
SIPs cannot share writable memory with other SIPs and are isolated by software verification, not hardware protection. In other words, SIPs are closed object spaces.

SIPs rely on programming language type safety (see Exokernel and SPIN) and memory safety for isolation (see also the Exokernel, which uses software protection techniques like type-safe languages and sandboxing), instead of memory management hardware. Through a combination of static verification and runtime checks, Singularity verifies that user code in a SIP cannot access memory regions outside the SIP. With process isolation provided by software rather than hardware, multiple SIPs can reside in a single physical or virtual address space.
Richer systems can be built by layering SIPs into multiple address spaces at both user and kernel protection levels. Aiken et al. evaluate the trade-offs between software and hardware isolation (compare with the metaspace concept of Cognac based on Apertos and hardware-based virtualized containers of exception-less VirtuOS).
Communication between SIPs incurs low overhead and SIPs are inexpensive to create and destroy, as compared to hardware protected processes (see Table 1). Unlike a hardware protected process, a SIP can be created without creating page tables or flushing TLBs. Context switches between SIPs also have very low overhead as TLBs and virtually addressed caches need not be flushed.
Protection domains can provide an additional level of hardware-based protection or isolation around SIPs. Each protection domain consists of a distinct virtual address space (compare with kernel-less KLOS without privileged mode or ring transition and with hardware-based virtualized resources, and SPACE without privileged mode but with support of an arbitrary number of them and without virtualized resources but hardware-based (protection for) cross-domain call).
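The software-isolation invariant can be modeled in a few lines (a toy model, not Singularity code): isolation holds as long as no object is reachable from two SIPs at once, which message passing with ownership transfer preserves even inside a single address space.

```python
# Illustrative model of software-isolated processes sharing one address space.

class SIP:
    def __init__(self, name):
        self.name = name
        self.objects = set()             # this SIP's closed object space

def send(sender, receiver, obj):
    """Message passing transfers ownership; memory is never shared."""
    sender.objects.remove(obj)           # sender loses access...
    receiver.objects.add(obj)            # ...receiver gains it

a, b = SIP("A"), SIP("B")
msg = "block-42"
a.objects.add(msg)
send(a, b, msg)
print(msg in a.objects, msg in b.objects)   # -> False True
```

No page tables are touched and no TLB is flushed by `send`, which is the source of the low overheads cited above.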

9.2 Contract-Based Channel (CBC)
A channel is a bi-directional message conduit with exactly two endpoints. A channel endpoint belongs to exactly one thread at a time.
Communication across a channel is described by a verifiable channel contract. The two ends of a channel are not symmetric in a contract.
A contract consists of message declarations and a set of named protocol states. Message declarations state the number and types of arguments for each message and an optional message direction. Each state specifies the possible message sequences leading to other states in the state machine (see communication of agents listed in the section Intelligent/Cognitive Agent and handling of state machines in the Agent-based Unified Modeling Language (AUML) listed in the section Formal Modeling of the webpage Links to Software).
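A channel contract of this kind can be sketched as a checkable state machine (an illustrative contract, loosely modeled on the CBC idea; the states and messages are our own example):

```python
# Sketch of a contract-based channel as a verifiable state machine.

contract = {
    "start": {"Open": "ready"},          # state -> {message -> next state}
    "ready": {"Request": "waiting", "Close": "closed"},
    "waiting": {"Reply": "ready"},
    "closed": {},
}

class Channel:
    def __init__(self, contract):
        self.contract = contract
        self.state = "start"

    def send(self, message):
        allowed = self.contract[self.state]
        if message not in allowed:       # conformance is mechanically checkable
            raise ValueError(f"{message} not allowed in state {self.state}")
        self.state = allowed[message]

ch = Channel(contract)
for m in ["Open", "Request", "Reply", "Close"]:
    ch.send(m)
print(ch.state)                          # -> closed
```

In Singularity such conformance is verified statically against the contract rather than checked at runtime as here.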

9.3 Manifest-Based Program (MBP)
Every component in Singularity is described by a manifest, including the kernel, device drivers, and user applications.
A manifest describes an MBP's code resources, its required system resources, its desired capabilities, and its dependencies on other programs.
It is a machine-checkable, declarative expression of the MBP's expected behavior. The primary purpose of the manifest is to allow static and dynamic verification of properties of the MBP. For example, the manifest of a device driver provides sufficient information to allow install-time verification that the driver will not access hardware used by a previously installed device driver. Additional MBP properties which are verified by Singularity include type and memory safety, absence of privileged mode instructions, conformance to channel contracts, usage of only declared channel contracts, and correctly versioned ABI usage.
A basic manifest is insufficient to verify that an MBP is type-safe or that it uses only a specific subset of channel contracts. Verification of the safety of compiled code requires additional metadata in MBP binary files.
To facilitate static verification of as many run-time properties as possible, code for Singularity MBPs is delivered to the system as compiled Microsoft Intermediate Language (MSIL) binaries (MSIL is a superset of the Common Intermediate Language (CIL) and a CPU-independent byte code format).

When an MBP is invoked, the manifest guides the placement of code into a SIP for execution, the connection of channels between the new SIP and other SIPs, and the granting of access to system resources by the SIP (see object and agent migration).
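The install-time driver check mentioned above can be sketched like this (hypothetical manifest fields; the real manifests are richer): a new driver is admitted only if its claimed hardware resources do not overlap those of any installed driver.

```python
# Sketch of an install-time manifest check for hardware resource conflicts.

installed = [
    {"name": "net0", "hardware": {"irq:11", "ioport:0x300"}},
]

def can_install(manifest):
    claimed = manifest["hardware"]
    for other in installed:
        if claimed & other["hardware"]:  # overlapping resource claims
            return False
    return True

new_driver = {"name": "snd0", "hardware": {"irq:11"}}   # conflicts on irq:11
ok_driver  = {"name": "kbd0", "hardware": {"irq:1"}}
print(can_install(new_driver), can_install(ok_driver))   # -> False True
```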

9.4 Compile-Time Reflection (CTR)
The core feature of compile-time reflection (CTR) is a high-level construct in Sing#, called a transform, which allows programmers to write inspection and generation code in a pattern matching and template style. The generated code and metadata can be statically verified to ensure it is well-formed, type-safe, and does not violate system safety properties. At the same time, a programmer can avoid the complexities of reflection APIs.
[Interesting to note is that we have the Evolutionary operating system (Evoos) with Run-Time Reflection (RTR), and we use proper reflection or RTR because the OntoBot is based on the SimAgent Toolkit, which utilizes on-the-fly compilers. This implies that an agent, specifically a cognitive agent, as part of an operating system or an overall system architecture has not been envisioned by the developers of the Singularity operating system.]

10. Barrelfish
Message-based rather than shared-data communication offers tangible benefits: instead of sequentially manipulating shared data structures, which is limited by the latency of remote data access, the ability to pipeline and batch messages encoding remote operations allows a single core to achieve greater throughput and reduces interconnect utilization.

11. Asbestos
Asbestos proposes the marriage of previous ideas in systems: the capability-based operating system, the per-process namespace, the virtualizable kernel interface (the logical extension of system call interposition libraries), and the decentralized MAC. We added the Distributed Mandatory Access Control (DMAC).
Asbestos uses decentralized, fine-grained mandatory access control (MAC) primitives to solve this problem in a flexible and scalable manner. Subjects on the system, such as processes, I/O channels, and files, are assigned labels, and special privilege is needed to remove a label once applied.
The Asbestos operating system makes non-discretionary access control mechanisms available to unprivileged users, giving them fine-grained, end-to-end control over the dissemination of information.
Asbestos provides protection through a new labeling scheme that, unlike schemes in previous operating systems, allows data to be declassified by individual users within categories they control. The categories, called tags, use the same namespace as communication endpoints, making them a kind of generalization of capabilities. As in a capability system, processes can dynamically generate new tags and distribute them independently. Processes can specify temporary label restrictions on sent messages to avoid the unintentional use of privilege.
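The label mechanism can be modeled in miniature (a toy model of the idea, not the real Asbestos label algebra): data carries tags, a message may only flow to a receiver cleared for every tag on it, and a process holding the privilege for a tag may declassify it.

```python
# Toy model of Asbestos-style labels and declassification.

def may_send(message_tags, receiver_clearance, declassify_privilege=set()):
    # a process with privilege for a tag may remove (declassify) it
    effective = message_tags - declassify_privilege
    return effective <= receiver_clearance   # all remaining tags must be cleared

secret = {"tag:alice-private"}
print(may_send(secret, receiver_clearance=set()))                    # -> False
print(may_send(secret, receiver_clearance=set(),
               declassify_privilege={"tag:alice-private"}))          # -> True
```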
Asbestos's new process abstraction, the event process, efficiently supports server processes that must spin off many versions inhabiting distinct security compartments. Event processes impose less overhead on the operating system than forked address spaces, so many thousands of them can coexist without resource strain.

12. SASOS4Fun
The SASOS4Fun is an architecture of a Single Address Space Operating System (SASOS) with many interesting features, like

  • Software Isolated Processes (SIPs),
  • the support of the Object-Oriented (OO 1) and Ontology-Oriented (OO 2) paradigm,
  • even the support of the Ontologic(-Oriented) (OO 3) paradigm by taking a kernel-less architecture and the Zero Ontology O#,
  • a type-safe language, and
  • a log-structured file system based on consistent hashing with finger tables and hash tables, that:
    • store intermediate key-score pairs, with the score being the hash of the data used as the data address,
    • have table sizes, which can be determined by sampling their input, so that insertions and queries of a key-data address pair are done with a guaranteed high performance of needing only constant time (O(1)), and also
    • organize the key-score pairs within each hash table slot as a B+ tree for preventing the inaccuracy of sampling from degrading the performance,

and that in addition offers optional on-the-fly/incremental partitioning, formatting, checking, and repairing, so that the file system does not suffer from the problem of scalability, as well as a configurable checkpoint, snapshot, or/and versioning management system.
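The core of the storage scheme described above can be sketched in a few lines (a simplification under our own assumptions; finger tables, sampling, and the B+ trees per slot are omitted): the score is the hash of the data and serves as the data address, and a key-to-score index gives constant-time insertion and lookup on average.

```python
# Hedged sketch of hash-of-data addressing with a key -> score index.
import hashlib

blocks = {}      # score (data address) -> data
index = {}       # key -> score (the key-score pairs)

def put(key, data):
    score = hashlib.sha256(data).hexdigest()   # hash of the data = address
    blocks[score] = data                       # content-addressed storage
    index[key] = score                         # record the key-score pair
    return score

def get(key):
    return blocks[index[key]]                  # two O(1) average-case lookups

put("motd", b"hello, single address space")
print(get("motd"))
```

A side effect of this addressing is deduplication: two keys pointing at identical data share one block.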

13. But there is one more thing:
We removed the remaining hardware generalization of the exception or interrupt handling mechanism, system call trap, hardware MMU and IOMMU, etc. of kernel-less operating systems with the integration of system features like:

  • capability-based (see L4),
  • verified,
  • cross-domain communication (see Spring, SPACE, and Pebble),
  • hypervisor (see L4 and also VirtuOS),
  • exception-less system call model (see FlexSC and VirtuOS),
  • external synchrony (see Xsyncfs),
  • asynchrony (see Integrating Architecture and VirtuOS),
  • type safety instead of the MMU (see Fluke Alta and Coyotos),
  • verifiable Contract-Based Channels (CBCs) and Manifest-Based Programs (MBPs) (see Singularity) instead of the IOMMU,
  • message-based system rather than shared-data communication (see Barrelfish) over CBCs for example,
  • and so on

in a dynamic, flexible way as required, whereby our OntoCore also has the integrated system features of:

  • verified OntoS1,
  • Design by Smart Contract,
  • Ontology-Oriented (OO 2) and Ontologic(-Oriented) (OO 3) instead of the MMU and the IOMMU,
  • Ontology-Oriented (OO 2) and Ontologic(-Oriented) (OO 3) for CBC and the MBP,
  • and much more.

OntoCore with AI microkernel
OntoL4 and Artificial Intelligence (micro)kernel (AI microkernel)
trusted multi-agent real-time artificial intelligence operating system kernel

  • VM respectively operating system core itself with the semantics, the Binary Decision Diagrams (BDDs), the Boolean Satisfiability (SAT) solvers
  • graph-based transformations can be realized with a relational database and a deductive database respectively their BDD based substitutes, so we got a homogeneous system of logic paradigms, graphs, databases, and graph-based functionalities
  • "The key problem is that some Boolean functions do not have a compact representation as BDDs and the size of the BDD representation of a Boolean function is very sensitive to the variable ordering used. Bounded model checking [1 [Symbolic model checking without BDDs]] has been proposed as a technique for overcoming the space problem by replacing BDDs with SATisfiability (SAT) checking techniques because typical SAT checkers use only polynomial amount of memory."
  • combines methods based on Binary Decision Diagrams (BDDs) and Boolean SATisfiability (SAT) Solvers to increase the efficiency of SAT-based Bounded Model Checking (BMC)
  • variants of the operating system core respectively Artificial Intelligence (AI) system core or AI microkernel of the OntoCore of our Ontologic Systems (OSs) comprising the verified microkernel OntoS1.
  • optimized variant of the initial variant with a Linux or BSD Unix-like operating system for an embedded system
  • special variant of the optimized variant with an operating system based on a dialect of the programming language C or another suited programming language (see also the Further steps of the 2nd of May 2016) and a Binary Decision Diagram (BDD; see the Website update of the 27th of May 2016 (yesterday)) and a size of 1 to 5 MB.
  • special variants based on an object-relational paradigm with the Horn clause logic and the First-Order Logic (FOL) extended and represented with stable model, well-founded, or/and answer set semantics (see the work "Knowledge Bus: Generating Application-focused Databases from Large Ontologies" and "A Hidden Herbrand Theorem-Combining the Object and Logic Paradigms"), and a
    • Binary Decision Diagram (BDD) for the Datalog and the Prolog part (see the work "Well-Founded Semantics for Extended Logic Programs with Dynamic Preferences"),
    • Boolean SATisfiability (SAT) solver instead of a BDD (see the work "Bounded LTL Model Checking with Stable Models"), and
    • Boolean SATisfiability (SAT) solver and a BDD (see the works "Improving SAT-based Bounded Model Checking by Means of BDD-based Approximate Traversals" and "CirCUs: A Hybrid Satisfiability Solver").
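The quoted sensitivity of BDD size to variable ordering can be demonstrated with a toy reduced ordered BDD builder (a small illustration, not the model-checking machinery): the same function f = (x1 ∧ y1) ∨ (x2 ∧ y2) yields BDDs of different sizes under two orderings.

```python
# Toy ROBDD builder: count internal nodes of f under two variable orderings.

def build_bdd(func, order):
    unique = {}                                  # (var, lo, hi) -> node id

    def build(assign, i):
        if i == len(order):
            return 1 if func(assign) else 0      # terminal node
        var = order[i]
        lo = build({**assign, var: 0}, i + 1)
        hi = build({**assign, var: 1}, i + 1)
        if lo == hi:                             # redundant test: skip node
            return lo
        key = (var, lo, hi)
        if key not in unique:                    # share isomorphic subgraphs
            unique[key] = 2 + len(unique)
        return unique[key]

    build({}, 0)
    return len(unique)                           # number of internal nodes

f = lambda a: (a["x1"] and a["y1"]) or (a["x2"] and a["y2"])
interleaved = build_bdd(f, ["x1", "y1", "x2", "y2"])
separated   = build_bdd(f, ["x1", "x2", "y1", "y2"])
print(interleaved, separated)                    # interleaved order is smaller
```

For larger instances of this function family the gap grows exponentially, which is exactly the space problem that SAT-based bounded model checking sidesteps.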

Smodels can also be used for computing answer sets, and other interesting works are included in the OntoBot just right from the start of our OS.
Furthermore, we listed the point "well-structured and -formed" on the webpage Overview, because we are already discussing the syntactical side of the bridge from syntax ("the chalk on the board drawn by grammar rules") to semantics ("the meaning of the symbols drawn on the board with the chalk") and the internals of the abstract machine core. For sure, we should have added the point "well-founded" as well on said webpage, but we left it for the joy of searching and testing the intelligence of external entities, as it is the case with the Well-Structured Transition System (WSTS), which is a simple implication.

Moreover, the trade-offs between complexity, flexibility, and efficiency on the one side and addition of extensions, expressiveness, loss of expected conclusions, etc. on the other side, which taken together describe a network of continua or a regulatory system, and the related decision-making, selection, or configuration problem, as mentioned in the document "Well-Founded Semantics for Extended Logic Programs with Dynamic Preferences", are handled automatically by our reflective OS as well.

Luckily, these results coincide with our general concept of an n-dimensional (fractal, reflective, holonic) OS with continua between arbitrary extrema everywhere, like the following for example:

  • zero and one (see also many-valued or n-valued logics and probability theory for example),
  • timeless and real-time,
  • centralized and distributed,
  • monolithic system and multiple subsystems,
  • monolithic operating system and kernel-less multi-agent-based operating system (continuum includes e.g. microkernel with actor operating system (added on the 5th of May 2016)),
  • hardware-level and software-level virtualization (added on the 5th of May 2016),
  • this and that,

which can be created, and manually or automatically configured, adjusted, and extended as well as planned, assured, controlled, and improved by the Quality Management (QM) system (added on the 5th of May 2016) at runtime. Obviously, it is possible to provide an all-in-one operating system core that covers all requirements.
We come back to some aspects of this n-dimensional foundation when we discuss for example the core abstract machine of our Ontologic System Architecture (OSA).

Integrated System of Abstract Machines
combine a variant of our OntoCore comprising one of the verified microkernels OntoL4 and OntoS1 with:

  • an abstract machine, like e.g. the Simple Extensible Abstract Machine (SEAM in the work "A Virtual Machine for Multi-Language Execution"; see also the work "Virtual Virtual Machine (VVM)"),
    and a:
  • graph traversal and transformation machine and language for common graphs, such as Maude or PROGRES for example,
  • hypergraph rewriting framework, such as "Mapping Relational Operations onto Hypergraph Model" for example, and
  • Multi-dimensional Dynamic (Linear Temporal) Logic Programming (MD(LT)LP) Multi-Agent System (MAS) environment, such as Minerva for example,

to three different systems (see also the Further steps of the 28th of May 2016 and 1st of June 2016).
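How such a graph traversal and transformation machine applies its rules can be sketched with a minimal innermost term rewriting function; the Peano addition rules below are a standard textbook example, not part of Maude or PROGRES:

```python
# Terms are nested tuples, e.g. ("add", ("zero",), y); numbers are Peano terms.
def rewrite(term):
    # Innermost rewriting with two rules:  add(0, y)    -> y
    #                                      add(s(x), y) -> s(add(x, y))
    if not isinstance(term, tuple):
        return term
    term = tuple(rewrite(t) for t in term)   # normalize subterms first
    if term[0] == "add":
        x, y = term[1], term[2]
        if x == ("zero",):
            return y
        if x[0] == "s":
            return ("s", rewrite(("add", x[1], y)))
    return term

one = ("s", ("zero",))
two = ("s", one)
print(rewrite(("add", two, one)))  # ("s", ("s", ("s", ("zero",))))
```

Systems like Maude generalize exactly this step from terms to rewriting modulo equations and, via PROGRES, to graphs.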

Data Integration
The OntoCore component features an integrated data structure. It unifies the data structures

  • Binary Decision Tree (BDT),
  • Binary Decision Diagram (BDD),
    • Ordered BDD (OBDD),
  • and so on

on the one side and

  • Binary Search Tree (BST; also called ordered or sorted binary trees),
  • self-balancing BST,
    • B-tree,
      • B+-tree,
      • B*-tree,
  • R-tree,
  • red-black tree,
  • and so on

on the other side.
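The decision diagram side of this integration can be sketched with a minimal reduced Ordered BDD (OBDD) that applies the classical elimination and sharing rules; the variable order and the boolean function below are arbitrary illustrations:

```python
# Minimal reduced OBDD with hash-consing: equal subgraphs are shared and
# redundant tests (low == high) are eliminated, in contrast to a full
# Binary Decision Tree, which stores every path explicitly.
nodes = {}            # unique table: (var, low, high) -> node id
FALSE, TRUE = 0, 1    # terminal nodes

def mk(var, low, high):
    if low == high:               # elimination rule: test is redundant
        return low
    key = (var, low, high)
    if key not in nodes:          # sharing rule via the unique table
        nodes[key] = len(nodes) + 2
    return nodes[key]

# Build the OBDD of f(x1, x2) = x1 AND x2 with variable order x1 < x2.
n_x2 = mk("x2", FALSE, TRUE)
root = mk("x1", FALSE, n_x2)
print(len(nodes))   # 2 internal nodes instead of 3 in the full decision tree
```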
We had the initial idea to overlay both groups of data structures respectively to integrate them and their applications, including

  • transition system,
  • graph transformation system,
  • reasoning,
  • verification,
  • unification of data and process in relation with for example
    • First-Order Logic (FOL) programming,
    • High(er)-Order Logic (HOL) programming,
    • Concurrent Logic (CL) programming,
    • Multi-dimensional Dynamic Logic (MDL) programming,
    • Concurrent Constraint (CC) programming,
    • and so on,
  • Disjunctive Deductive DataBase (DDDB),
  • and so on

on the one side and

  • File System (FS),
  • Relational DataBase Management System (RDBMS),
  • Object-Oriented DBMS (OODBMS),
  • and so on

on the other side, because they have the same goals and functionalities. This integration would be a part of our OntoS1 and Onto# (see again the microkernels in the section Exotic Operating System and, for example, the Simple Extensible Abstract Machine (SEAM) described in "A Virtual Machine for Multi-Language Execution"), as well as our OntoBase and Ontologic File System (OntoFS) components.

so that it fits together with the component and the underlying Ontologic data storage Base (OntoBase) and Ontologic File System (OntoFS).

"Evolving algebras provide models for arbitrary computational processes, that can at once be viewed as abstract machine, and as formal specifications. [... T]he states of evolving algebras are interpreted vocabularies [..., which] is a collection of names [or signs]."

This is especially interesting seen from different points of view and for various reasons:

  • For example, the concept itself is interesting, because it integrates the fields of abstract machine and formal specification, so that in these fields again everything is connected with each other somehow.
  • It also reflects on a more foundational level the same approach done before with common, real-time, and reflective operating systems and agent-based systems and their specifications and implementations. In fact, this led to the foundational concept and design of one part of our Ontologic System (OS), specifically of the design of the OntoCore and OntoBot components.
  • But even better, an evolving algebra is basically a state transition system, which leads us to Well-Structured Transition Systems (WSTS) in this relation again (see also the Website update of the 2nd of June 2016 for example). Everything is connected with each other in a complex but still readily comprehensible manner. Keeping our OS understandable by selecting a small set of prior art was also a goal when creating it.
  • Furthermore, we have interpreted vocabularies as the states of the WSTS and also domain models respectively ontologies, which is directly connected with for example natural languages, natural intelligence, and artificial intelligence as well. A more far-reaching implication is that all these abstract machines and virtual machines as well as operating systems are not needed at all seen from the conceptual point of view. Simply said, computing can be done instantaneously like breathing air.
  • In addition, we are talking about a kernel-less reflective system, which can be applied here another time as well, as it was done with the ultimate grounding of our OS, which is simply said the void or nothing and everything simultaneously (see the related Clarification of the 29th of April 2016). In fact, our ontologic model constitutes a totally different kind of computing substrate or, better said, it constitutes some kind of a computing aether or computing spirit.
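The reading of an evolving algebra as a state transition system over interpreted vocabularies can be sketched as follows; the countdown machine, its function names, and its rules are purely illustrative:

```python
# Minimal evolving-algebra (Abstract State Machine) sketch: a state is an
# interpreted vocabulary (here a dict of names to values), and one step
# fires all guarded updates simultaneously.
def step(state):
    updates = {}
    if state["mode"] == "run" and state["n"] > 0:
        updates["n"] = state["n"] - 1          # rule 1: count down
    if state["mode"] == "run" and state["n"] == 0:
        updates["mode"] = "halt"               # rule 2: stop at zero
    new_state = dict(state)
    new_state.update(updates)                  # apply the update set at once
    return new_state

state = {"mode": "run", "n": 2}
while state["mode"] != "halt":
    state = step(state)
print(state)  # {'mode': 'halt', 'n': 0}
```

The run of the machine is exactly a path through a transition system whose states are the interpreted vocabularies, which is the reading used above.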

We also concluded around the year 2003 that fuzzy logic is only a lazy approach for solving problems developed and used by scientists and engineers, because of their inabilities to measure precisely, handle complexity thoroughly, and program correctly, and showed with our abstract machine core that fuzzy logic, probabilistic logic, and similar paradigms can be realized with the classical boolean logic in the very end.
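A minimal sketch of this reduction, under the assumption of a fixed-point discretization with an arbitrarily chosen resolution of 8 bits, shows how the fuzzy connectives min and max collapse into plain boolean comparator logic:

```python
# A membership degree in [0, 1] is discretized to a K-bit integer, and the
# fuzzy connectives become unsigned comparator circuits over those bits,
# i.e. pure boolean logic in the very end. K = 8 is an arbitrary choice.
K = 8
SCALE = (1 << K) - 1                # 255 discretization steps

def to_bits(degree):
    # discretize a membership degree in [0, 1] to a K-bit integer
    return round(degree * SCALE)

def fuzzy_and(a, b):
    # fuzzy AND (min) realized as an unsigned comparator
    return a if a < b else b

def fuzzy_or(a, b):
    # fuzzy OR (max) realized the same way
    return a if a > b else b

a, b = to_bits(0.3), to_bits(0.7)
print(fuzzy_and(a, b), fuzzy_or(a, b))  # 76 178
```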

Another important point to mention is that using an approach based on fuzzy logic, probabilistic logic, and similar paradigms is a profound contradiction to the important requirement of trust in and reliability of a system, specifically in relation with the application of Artificial Intelligence (AI) on the large scale. One cannot trust such a system due to its design. In contrast, our Ontologic System (OS) provides results that are founded on a rational, correct, and verified basis. If we get a wrong result, then we have the possibility to correct the processing, which is not given with a system based on uncertainty, imprecision, and chance respectively probability theory. We do not play dice.

some details of our Caliber and OntoCore (OC) software component, which comprises the operating system, virtual machine, and abstract machine core, both of which matured further in this way.
Specifically the processing, interplay, and integration of special

  • theories and concepts, such as for example the
    • ones listed on the webpage Literature and
    • category theory,
  • techniques based on for example
    • graphs,
    • Binary Decision Diagrams (BDDs),
    • Artificial Neural Networks (ANNs),
    • multi-modalities (e.g. semantic segmentation),


  • technologies

as part of the ontologic computing.

The work "On the isomorphism of sign, logic, and language" is one of the mathematical expressions of and sources of inspiration for our

  • abstract machine core,
  • way of handling infinite(ly-many)-valued logics, such as for example
    • fuzzy logic,
    • probabilistic logic, and
    • possibilistic logic,

    by classical logics, and

  • foundation of our ontologic paradigm,

because it has a "semiotic model which is only linearly complex" or O(n).

Multi-Agent System (MAS) Virtual Machine (VM) based multimodal user interface, which by the way confirmed this part of our approach after we had already developed it ourselves and integrated its MAS VMs with the core agent-oriented or agent-based operating system. But what about an OO 1, 2, and 3 BDI MAS VM ROS with M⁴UI including a 3D computer graphics software with game engine?

Haskell "Modeling a Separation Kernel [on L4]", which shows that our Ontologic System (OS) is sound all the way down to the bare metal and even beyond, and also keep in mind that "P-logic is a modal mu-calculus that supports direct expression of recursively-defined properties of complex data structures. In a modal logic, the intended domain of interpretations is not simply sets, but a family of sets with a particular structure."
As we have seen with the work "Categorial Type Logics" as well (see also the related note in the Website update of the 17th of May 2016), this work also presents multimodality in a different field than a User Interface (UI) system (see also the work "Modal logics and mu-calculi: an introduction" and our multilingual multimodal multiparadigmatic OntoBot software component).

"Concurrency is Logic" and "Concurrent Constraint Programming is Logic"

Now add the works related with

  • Petri nets,
  • concurrent systems,
  • rewriting logic,
  • graph logic,
  • linear logic,
  • "Dynamic Logic Programming with Multiple Dimensions" or Multi-dimensional DLP,
  • "Higher-Order Logic as Constraint Logic Programming" respectively
  • our multimodal multiparadigmatic OntoBot software component,

guess how hypergraph rewriting works on the textual level respectively as a symbol system, and find out again by taking this path that "Concurrent Constraint Programming is Logic".
In this context we would also like to mention that the Simple Extensible Abstract Machine (SEAM) described in "A Virtual Machine for Multi-Language Execution" is also tightly related with another multiparadigmatic programming language used for reflective systems, Concurrent Constraint Programming (CCP), and Natural Language Processing (NLP).

  • object-oriented, capability-based, and smart contract-based programming language, and
  • programming and proving framework

  • validated,
  • verified,
  • capability-based, and
  • smart contract-based

operating system kernel OntoL4, with the capability-based, secure scripting language and the POSIX extension Capsicum, and apply them as required and reasoned by our OntoCore and OntoBot

Model Integration
Boolean functions are canonically represented by a multi-rooted Directed Acyclic Graph (DAG). The Binary Decision Diagram (BDD) data structure is used as a substitute for relational databases (see e.g. SQL) and deductive databases, inference engines, and reasoners (see e.g. Datalog and Prolog). Therefore, we used them for example as a substitute for Resource Description Framework (RDF) triple stores (see e.g. the related query language SPARQL Protocol and RDF Query Language (SPARQL)) and f-logic or frame-logic based Web Ontology Language (OWL) databases, inference engines, and reasoners (see e.g. XSB based on Prolog, Flora-2 relying on XSB, and F-OWL based on Flora-2) as well.
Furthermore, graph-based transformations can be realized with such relational and deductive databases respectively their BDD based substitutes, so we got a homogeneous system of logic paradigms, graphs, databases, and graph-based functionalities.
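The substitution described above can be sketched with a Datalog-style transitive closure computed as a fixpoint of relational joins; plain Python sets stand in here for the canonical BDDs a symbolic engine would use, and the edge relation is made up:

```python
# A binary relation as a set of tuples; a BDD engine would represent the
# same relation as a boolean function over bit-encoded pairs and compute
# the identical fixpoint with symbolic conjunction and quantification.
edge = {(0, 1), (1, 2), (2, 3)}

def join(r, s):
    # relational composition: {(x, z) | (x, y) in r and (y, z) in s}
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

def transitive_closure(r):
    # Datalog:  path(X, Y) :- edge(X, Y).
    #           path(X, Z) :- path(X, Y), edge(Y, Z).
    closure = set(r)
    while True:
        new = closure | join(closure, r)
        if new == closure:
            return closure
        closure = new

print(sorted(transitive_closure(edge)))
# [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```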

Let us connect some dots in the following:

  • Linear-time Temporal Logic or Linear Temporal Logic (LTL) leads us to agent systems based on LTL in the section Intelligent/Cognitive Agent.
  • Modal logic leads us to dynamic logic.
  • Modal logic leads us to arrow logic and other logics (see the Website update of April, May, and June 2016).
  • Dynamic logic leads us to the Dynamic Logic Programming Agent Architecture Minerva listed in the section Intelligent/Cognitive Agent.
  • Binary relation leads us to the Relational Data Model (RDM) and the triple store.
  • Binary relation, proof checking and verification lead us to the Binary Decision Diagram (BDD) and everything listed in the section Formal Verification.
  • RDM leads us to the Relational DataBase Management System (RDBMS).
  • Triple store leads us to Object-Oriented Database Management System (OODBMS), graph traversal, 3D graphical interface, Natural Language Processing (NLP), Virtual Reality (VR), and Artificial Intelligence (AI) including semantic (world wide) web (see the work "Investigation of the use of the object-oriented paradigm in the construction of a triple store based on dynamic hashing").
  • Binary relation and arrow logic lead us to the Arrow system.
  • Arrow system leads us to reflection and rewriting.
  • Graphical logic or visual logic leads us to automatic graph transformation alone or in combination with a DBMS (see the Website update of April and May 2016).
  • Rewriting leads us to OntoBot.
  • Rewriting and graph transformation lead us to PROGRES.
  • Model-based system engineering leads us to the Unified Modelling Language (UML) and our models, such as e.g. the Structured Petri-Net-Entity-Relationship Model, and everything listed in the section Formal Modeling.
  • Formal modeling leads us to OntoBlender.
  • Real-Time Future Interval Logic (RTFIL) and Real-Time Graphical Interval Logic (RTGIL) lead us to concurrent programming and concurrent systems and real-time systems.
  • Concurrent and real-time systems lead us to real-time operating systems, real-time actor systems, real-time agent systems, etc. and everything listed in the section Exotic Operating System.
  • This and that lead us to this and that by binary relations, and finally to our Ontologic System. We leave it for the children to continue with connecting the dots.

Now we see here:

  • repetitions of single techniques, which lead to their meta-level,
  • repetitions of combinations of these single techniques, which lead to their meta-level,
  • increase of density and complexity of system functionality on the one hand,
  • emergence of an easily understandable coherent core as result on the other hand, and
  • attainment of a critical mass of highly artificially intelligent functionality, including reflection, of the coherent core,

which shows nicely how everything fits and works together. More remarkably, we have not even mentioned at this point the many other features of our OS such as the other SoftBionics (SB) techniques, including e.g. Machine Learning (ML), multi-modal user interface, AR, VR, and MR, and so on, as well as our reduction of this coherent core and synthesis of the OntoCore (OC).

It should be noted that we already did all these works in the period from 1999 to 2005, and (virtually) all of the related website updates and clarifications are only the resolution of what we have published in a very dense and minimalistic way in November of 2006 and the proof that the implementation of the OS was already possible at that time. The only really new additions were the

  • substitution of simple graphs with hypergraphs 2 months later in January 2007,
  • further reduction of the combination of a microkernel and a binary relation based data store to the OntoCore (OC) in May 2016, though the latter has to be seen more as some kind of a manual beautifying because the OS is able to do such steps all alone, and
  • Content-Addressable Storage (CAS) (see the Ontonics, OntoLix and OntoLinux Further steps of the 2nd of May 2016 for Associative Memory (AM)).

quote from the section Hint of the webpage Project Status:
"If you are trying to build your own system, develop a strategy for automating as much as possible without stepping out of the whole concept."

Following this path leads to our abstract machine core in a first step and ends at our Ontologic System (OS) with its Ontologic(-Oriented) (OO 3) and Ontologic Computing (OC) paradigms in the last step.

The OntoCore component is integrated with the ..., which increases the power of our Ontologic System Architecture (OSA) even more with

    • ,
    • , and
    • ,
  • , and also
  • , and
  • .

But keep in mind that our abstract machine core respectively ontologic model is language-transparent, -agnostic, and -independent.

Knowledge Integration
In this way, we could further ... with ...

the functionalities and capabilities of the fields of

  • Computer-Aided technologies (CAx) as well as
  • Knowledge-Based Engineering (KBE) Environment (KBEE)

with the combined functionalities of the OntoBot and the OntoBlender, that are used as part of the special assistance and recommender system when conducting highly complex numerical calculations and simulations for example, while the Natural Language Processing (NLP) feature makes a natural interaction with the various CAx assistants possible.

Intelligence Integration
Another original and unique feature of our Ontologic System (OS) created by C.S. is our range of verified Artificial Intelligence (AI) functionalities, which comprises both the AI functionalities and the software that implements said AI functionalities.

The very foundation is our verified and real-time capable OntoCore with its capability-based microkernel OntoS1 and Artificial Intelligence (AI) system core or AI microkernel.
In addition, we and others calculated all variants of a foundational n-layered Artificial Neural Network (ANN) respectively the complete ANN. What we did with SoftBionics (SB) around the year 2005 was to develop it further as:

  • a building block system for all kinds of ANN that are not based on probabilistic methods but comprise the latter ones (see also the same discussion in relation with (formal) fuzzy logic, probabilistic logic, possibilistic logic, and discretized logic in the Clarification of the 14th of May 2016),
  • a part of hybrid AI systems (AI 3), such as for example Computer Vision (CV) or more precisely electromagnetic radiation imaging, because we have our Ontoscope technology, and
  • verified ANN,
  • proof-carrying AI (see webpage Overview),
  • and so on.
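The building block view of such an ANN can be sketched as a plain composition of layer functions; the step-threshold layers and the classical two-layer XOR construction below are illustrative choices only, not part of the OS:

```python
# A network is assembled from layer building blocks rather than from
# probabilistic machinery; the weights below are made-up numbers.
def dense(weights, bias):
    # one fully connected layer with a step threshold, as a building block
    def layer(xs):
        return [int(sum(w * x for w, x in zip(row, xs)) + b > 0)
                for row, b in zip(weights, bias)]
    return layer

def network(*layers):
    # compose the blocks into an n-layered feedforward network
    def run(xs):
        for layer in layers:
            xs = layer(xs)
        return xs
    return run

# XOR assembled from two blocks (the classic two-layer construction).
hidden = dense([[1, 1], [-1, -1]], [-0.5, 1.5])   # OR and NAND units
output = dense([[1, 1]], [-1.5])                  # AND unit
xor = network(hidden, output)
print([xor([a, b]) for a in (0, 1) for b in (0, 1)])  # [[0], [1], [1], [0]]
```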

Logic without Syntax

related parts of the OntoCore and the OntoBot, which realize our common sense AI functionality

Through the seamless integration of OntoCore with

that are directly connected with SoftBionic (SB) components, such as for example

  • Artificial Intelligence (AI),
  • Machine Learning (ML),
  • Evolutionary Computing (EC), and
  • other softcomputing techniques,

as well as

  • Natural Language Processing (NLP),
  • Natural Image Processing (NIP), and
  • other modalities or senses.

An essential part of our OS, specifically of the belief system, are also all aspects of trust, including

  • authentication and
  • verification,

and all kinds of trusted environments are included in our OS.

With our OS, devices such as e.g. a smart router or a gatekeeper are not needed at all, because the solutions already included in the foundational components, specifically in the OntoCore and the Linux kernel, are even smarter on the one hand and provide even more advantages on the other hand.

Because we have

  • many multimedia systems that work in real-time as well, such as many Augmented Reality (AR) and Virtual Reality (VR) environments and systems referenced in the section Mixed Reality of the webpage Links to Software for example,

we are pleased to make the information official that we connected the hovercams of the original and unique multimedia project Hovercity with the OntoGlobe/OntoEarth project of Ontonics, so that our virtual earth is no longer made up of still images only and instead is full of life.

The OntoCore component provides the foundation for the development of many interesting, sophisticated, and complex Multilingual Multimodal Multidimensional Multiparadigmatic Multimedia (M⁵) software and hardware applications and systems respectively Ontologic Applications by incorporating

  • ,
  • , and
  • .

Interestingly, the work also mentions the Object-Oriented Database Management System (OODBMS) Oggetto, graph traversals, 3D graphical interface, Natural Language Processing (NLP), and as highlights Virtual Reality (VR) and Artificial Intelligence (AI).
This work was not a source of inspiration for the development of our Ontologic System, but was found when looking for further literature related with Oggetto. See also the OntoCore (OC) software component, specifically its variants based on BDDs

we have already proven the existence of Artificial Consciousness with our Ontologic System, specifically with the derivation and creation of our abstract machine core OntoCore and our Caliber/Calibre.

These system properties should have been understood already as essential properties of the Ontologic System (OS). Indeed, the OS is a kind of magic, real tangible magic.

© and/or ® 2006-2014
Christian Stroetmann GmbH