
Extremely fast handling of even the largest Big Data

For the first time, Relavance is bringing to market a technology that attains the ultimate goal in data management. It is the only truly associative, atomic database model: every piece of information is atomic in nature and can be associated with any other piece of information. There are NO restrictions, NO constraints, NO rows, NO views, and NO cubes.

Unlike linear database models, associative databases are three-dimensional by default and, in principle, n-dimensional. The advantages are numerous and impressive.


New Associative Intelligent Technologies™© as Enablers for Data Management Automation


Raw Disk - Hardware Storage Efficiency Optimization Methodologies (5-10x)

A methodology for mapping data directly onto a permanent storage system using a deterministic algorithm that takes exactly 4 lookups to reach any of 4 billion file / storage nodes. By contrast, current methods based on b-trees and modified b-trees take an average of 24 lookups for half a billion file nodes. This represents an efficiency and performance increase approaching a full order of magnitude over existing file storage systems, and it does not suffer from the root-node single-point-of-failure limitation that they do. The system also self-optimizes as it runs, learning to fine-tune goal-oriented performance features.
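To make the fixed-lookup idea concrete, here is a purely illustrative sketch; the four-level layout, 32-bit keys, and all names below are assumptions for illustration, not the actual Raw Disk format. A 256-way index of depth four covers 256^4 = 2^32 (about 4.3 billion) slots, so any identifier resolves in exactly four lookups regardless of how full the structure is:

```python
# Illustrative sketch: a four-level, 256-way radix index.
# Any 32-bit identifier resolves in exactly four table lookups,
# because 256**4 = 2**32 (about 4.29 billion addressable slots).
# Hypothetical structure for illustration; not the Raw Disk on-disk format.

class RadixIndex:
    def __init__(self):
        self.root = [None] * 256          # level-1 table

    def _digits(self, key32):
        # Split a 32-bit key into four 8-bit indices, one per level.
        return [(key32 >> shift) & 0xFF for shift in (24, 16, 8, 0)]

    def put(self, key32, value):
        digits = self._digits(key32)
        node = self.root
        for d in digits[:-1]:
            if node[d] is None:
                node[d] = [None] * 256    # allocate the next 256-way table
            node = node[d]
        node[digits[-1]] = value

    def get(self, key32):
        node = self.root
        for d in self._digits(key32):     # always exactly 4 lookups
            if node is None:
                return None
            node = node[d]
        return node

idx = RadixIndex()
idx.put(0xDEADBEEF, "block @ offset 42")
print(idx.get(0xDEADBEEF))               # -> "block @ offset 42"
```

The lookup count is constant by construction, and because every level-1 entry is an independent subtree, there is no single root page whose loss takes the whole index down.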


Link Box - Inter-Processor Communication Efficiency Optimization Methodologies (5-10x)

A methodology for interconnecting sets of up to 64 processing nodes, each consisting of up to 64 cores, in a networked cluster. It enables point-to-point communication over existing NIC hardware with set-up times in the tens of microseconds rather than the milliseconds typical today, and it requires no specialized hardware such as that used in the very high-performance Infiniband and Myrinet products.
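As a rough illustration of the scale involved (the 12-bit addressing scheme, UDP framing, and names below are assumptions, not the Link Box protocol): 64 nodes of 64 cores give 4,096 endpoints, which fit in a 12-bit identifier that can be carried over ordinary commodity NICs.

```python
# Illustrative sketch (assumed layout, not the Link Box wire protocol):
# 64 nodes x 64 cores = 4,096 endpoints, each addressable by a 12-bit id
# (6 bits node, 6 bits core), carried point to point over standard UDP.
import socket, struct

def endpoint_id(node, core):
    assert 0 <= node < 64 and 0 <= core < 64
    return (node << 6) | core              # 12-bit point-to-point address

def split_id(eid):
    return (eid >> 6) & 0x3F, eid & 0x3F   # -> (node, core)

def send(sock, dest_ip, dest_port, src_eid, dst_eid, payload: bytes):
    # 4-byte header: source and destination endpoint ids, then the payload.
    header = struct.pack("!HH", src_eid, dst_eid)
    sock.sendto(header + payload, (dest_ip, dest_port))

# Example: core 3 on node 5 sends to core 0 on node 12 (loopback demo).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send(sock, "127.0.0.1", 9999, endpoint_id(5, 3), endpoint_id(12, 0), b"hello")
```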


Network Memory System - ‘n’-Dimensional Informational Inter-Connect Optimization

An architecture and set of methodologies that enable fully automated, intelligent distribution of datasets over a network of processing nodes, usually a networked server farm of physical or virtual machines. It operates at the base network-communications level as a self-optimizing, point-to-point data-routing overlay.
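The sketch below is only one way such a self-optimizing overlay could behave; the class, peer names, and smoothing constants are assumptions for illustration, not the Network Memory System design. It keeps a per-destination latency estimate and forwards each message along the currently fastest peer, folding every measurement back into the estimate.

```python
# Illustrative sketch (assumed design, not the Network Memory System itself):
# a self-optimizing point-to-point overlay that keeps per-destination latency
# estimates and forwards each message along the currently fastest peer.
import random, time
from collections import defaultdict

class Overlay:
    def __init__(self, peers):
        self.peers = peers
        # Optimistic initial estimate (0.0 s) so every peer is tried at least once.
        self.latency = defaultdict(float)     # (dest, peer) -> smoothed delay

    def best_peer(self, dest):
        return min(self.peers, key=lambda p: self.latency[(dest, p)])

    def send(self, dest, payload):
        peer = self.best_peer(dest)
        t0 = time.perf_counter()
        self._transmit(peer, dest, payload)   # stand-in for the real network hop
        measured = time.perf_counter() - t0
        # Self-optimization: fold the measured delay into the running estimate.
        key = (dest, peer)
        self.latency[key] = 0.8 * self.latency[key] + 0.2 * measured
        return peer

    def _transmit(self, peer, dest, payload):
        time.sleep(random.uniform(0.001, 0.005))   # simulated link delay

overlay = Overlay(peers=["node-a", "node-b", "node-c"])
for _ in range(12):
    chosen = overlay.send("dataset-17", b"chunk")
print("preferred route after 12 sends:", chosen)
```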

Automated Concept / Model-Based Data Distribution (over ‘n’ processing nodes)

Built on the previous technologies, this is a methodology and architecture that automatically assimilates, segments, and distributes any dataset over a distributed set of processing nodes according to a run-time-definable, high-level organizational model that can generically accommodate any dataset. The system intrinsically tracks where every piece of data is placed, so each piece can be accessed directly, without any search query or map-reduce operations.
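One simple way to picture "no search needed" is a placement function derived from the model itself; the model fields, hashing choice, and node names below are illustrative assumptions, not the actual mechanism. Because placement is a deterministic computation over the record's keys, reading the record back is the same computation, not a lookup or a scan.

```python
# Illustrative sketch (assumed model and helpers, not the shipped system):
# a run-time-definable model maps every record deterministically to a node,
# so retrieval is a direct computation rather than a search or map-reduce job.
import hashlib

NODES = [f"node-{i}" for i in range(8)]

# "Model": which fields identify a record and drive its placement.
model = {"dataset": "customers", "placement_keys": ["country", "customer_id"]}

def place(record, model, nodes=NODES):
    key = "|".join(str(record[k]) for k in model["placement_keys"])
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]          # same record -> same node, always

record = {"country": "DE", "customer_id": 4711, "name": "Example GmbH"}
print("stored on:", place(record, model))
# To read it back, re-compute place() from the record's keys:
print("read from:", place({"country": "DE", "customer_id": 4711}, model))
```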


Virtual-PK-Based Data Organization (enabling cross-domain / cross-base poly-table querying)

A methodology for generalizing all datasets through correlation and integration with a meta-architecture, enabling automatic compatibility between datasets and thus allowing cross-dataset querying without runtime joins or the implementation of a data warehouse.
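A minimal sketch of the idea, with hypothetical names and structures that are not the shipped Virtual-PK implementation: records from different datasets that describe the same entity share one virtual primary key, so a cross-dataset query becomes a direct lookup on that key instead of a runtime join.

```python
# Illustrative sketch (hypothetical names; not the Virtual-PK implementation):
# records from different datasets describing the same entity are assigned one
# shared virtual primary key (VPK), so a cross-dataset query is a direct
# lookup on the VPK instead of a runtime join.
vpk_registry = {}     # natural identifier -> virtual primary key
by_vpk = {}           # vpk -> {dataset name: record}

def vpk_for(natural_key):
    return vpk_registry.setdefault(natural_key, len(vpk_registry) + 1)

def ingest(dataset, natural_key, record):
    vpk = vpk_for(natural_key)
    by_vpk.setdefault(vpk, {})[dataset] = record
    return vpk

# Two independent source systems, correlated at ingest time.
ingest("crm",     "cust:10045", {"name": "Example GmbH", "tier": "gold"})
ingest("billing", "cust:10045", {"open_balance": 120.50})

# "Cross-dataset query": everything known about the entity, with no join.
print(by_vpk[vpk_registry["cust:10045"]])
```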

Self-Distributing Queries (over ‘n’ processing nodes)

A methodology based on run-time-definable, high-level organizational models, such that the resulting organization supports completely generic ad-hoc querying across any model complexity and any number of processing nodes, complete with automatic segmentation of queries.
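The scatter-gather sketch below shows the general shape of automatic query segmentation; the shard layout, predicate interface, and thread-based execution are assumptions for illustration, not the actual query engine. A query is split into one sub-query per node, executed in parallel, and the partial results are merged.

```python
# Illustrative sketch (assumed interfaces, not the shipped query engine):
# a query is automatically segmented into one sub-query per processing node,
# executed in parallel, and the partial results are merged for the caller.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-node shards of one logical dataset.
shards = {
    "node-0": [{"country": "DE", "amount": 10}, {"country": "FR", "amount": 7}],
    "node-1": [{"country": "DE", "amount": 5}],
    "node-2": [{"country": "US", "amount": 12}],
}

def sub_query(node, predicate):
    # Each node answers only for its own shard of the data.
    return [row for row in shards[node] if predicate(row)]

def distributed_query(predicate):
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(lambda n: sub_query(n, predicate), shards))
    return [row for part in parts for row in part]   # merge partial results

rows = distributed_query(lambda r: r["country"] == "DE")
print(sum(r["amount"] for r in rows))                # -> 15
```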


Byte-Stream Factoring and Feature Identification and Extraction

A methodology that uses adaptive machine learning and Associative Intelligence techniques to automatically analyze data feeds, identify data types, and extract the data features appropriate to any relevant knowledge-base classification and categorization of those data type elements.
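The toy example below only gestures at the task: hand-written heuristics stand in for the adaptive machine-learning classifier, and the field names and patterns are assumptions, not the actual feature set. It scans an incoming byte stream, splits it into tokens, and tags each token with an inferred type and simple extracted features.

```python
# Illustrative sketch (hand-written heuristics standing in for the adaptive
# machine-learning classifier; field names and patterns are assumptions):
# scan an incoming byte stream, split it into tokens, and tag each token
# with an inferred data type plus simple extracted features.
import re

PATTERNS = [
    ("date",   re.compile(rb"^\d{4}-\d{2}-\d{2}$")),
    ("email",  re.compile(rb"^[\w.+-]+@[\w-]+\.[\w.]+$")),
    ("number", re.compile(rb"^-?\d+(\.\d+)?$")),
]

def factor(stream: bytes):
    for token in stream.split(b","):
        token = token.strip()
        dtype = next((name for name, pat in PATTERNS if pat.match(token)), "text")
        yield {"token": token, "type": dtype, "length": len(token)}

feed = b"2024-03-01, jane@example.com, 42.5, free text"
for item in factor(feed):
    print(item)
```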

