LLMs Love Graphs
As we move into the next phase of AI adoption, where organisations must feed their own data into large language models to realise their benefits, it's crucial to model the relationships in that data explicitly. Large language models thrive on these relationships; that's where the power lies.
The Semantic Layer
By employing these open technologies and standards, any organisation can construct a Shared Semantic Layer, which gives a uniform and consistent understanding of its data. Most importantly, this makes technologists consciously think about designing data products in a way that directly delivers business value, in business terms.
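As a sketch of what a single entry in such a layer could look like, here is a small RDF fragment built with Python's rdflib; the namespace, class, and property names are invented for illustration, not a prescribed vocabulary:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical namespace for an organisation's shared ontology.
EX = Namespace("https://example.org/ontology/")

g = Graph()
g.bind("ex", EX)

# Define a business concept as an OWL class, described in business terms.
g.add((EX.Customer, RDF.type, OWL.Class))
g.add((EX.Customer, RDFS.label, Literal("Customer")))
g.add((EX.Customer, RDFS.comment,
       Literal("A person or organisation that purchases our products.")))

# Annotate a data product with the business concept it delivers.
g.add((EX.CustomerChurnReport, EX.deliversConcept, EX.Customer))

print(g.serialize(format="turtle"))
```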
The Semantic Router
Here's the idea: the router initially maps the question to relevant classes within the organisation's upper ontology—a structured representation of the key concepts your business is focused on. Utilising these classes, the router then retrieves the corresponding 'Semantic Data Product' from the organisation's Semantic Layer.
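Here is a minimal sketch of that flow in Python. A production router would map questions to ontology classes with an LLM or embedding similarity; plain keyword matching is used here only to keep the example self-contained, and every class name and URL is hypothetical:

```python
# Hypothetical upper ontology: classes and some terms that signal them.
UPPER_ONTOLOGY = {
    "Customer": ["churn", "retention", "account"],
    "Product": ["catalogue", "pricing", "inventory"],
}

# Hypothetical Semantic Layer registry: class -> Semantic Data Product.
SEMANTIC_LAYER = {
    "Customer": "https://example.org/data-products/customer",
    "Product": "https://example.org/data-products/product",
}

def route(question: str) -> list:
    """Map a question to ontology classes, then return matching data products."""
    q = question.lower()
    classes = [
        cls for cls, keywords in UPPER_ONTOLOGY.items()
        if any(keyword in q for keyword in keywords)
    ]
    return [SEMANTIC_LAYER[cls] for cls in classes]

print(route("Why is customer churn rising this quarter?"))
# -> ['https://example.org/data-products/customer']
```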
Human In The Loop
Knowledge Graphs and Ontologies provide the sophisticated tools required to articulate our complex desires and ethical frameworks. They offer a robust mechanism for anchoring humanity at the critical juncture where fluid information crystallises into discrete units. By weaving thoughtful and considered human insight into this AI feedback loop, we're not just making incremental improvements; we're laying the architectural groundwork for a more enlightened form of artificial intelligence.
The Working Memory Graph
To build a WMG, the LLM processes a question and returns a graph of nodes that use URLs as identifiers; these URLs link to ground truths stored in the organisation's Knowledge Graph. The WMG can also incorporate nodes representing conceptual understanding, establishing connections between the LLM's numerical vectors and the KG's ontological classes.
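A minimal sketch of such a structure in Python, where every node is keyed by a URL that would resolve to a ground truth in the Knowledge Graph (the URLs and labels below are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class WMGNode:
    url: str    # identifier resolving to a ground truth in the org's KG
    label: str  # human-readable name for the concept or entity

@dataclass
class WorkingMemoryGraph:
    nodes: dict = field(default_factory=dict)  # url -> WMGNode
    edges: list = field(default_factory=list)  # (subject_url, predicate, object_url)

    def add_node(self, url: str, label: str) -> None:
        self.nodes[url] = WMGNode(url, label)

    def add_edge(self, subject: str, predicate: str, obj: str) -> None:
        self.edges.append((subject, predicate, obj))

# The LLM's answer to a question might be assembled into a WMG like this:
wmg = WorkingMemoryGraph()
wmg.add_node("https://example.org/kg/Customer", "Customer")
wmg.add_node("https://example.org/kg/ChurnRate", "Churn Rate")
wmg.add_edge("https://example.org/kg/Customer", "hasMetric",
             "https://example.org/kg/ChurnRate")
```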
Reinventing The Wheel
So let's get Data Mesh and Data Contracts right by building them upon the solid foundations provided by Knowledge Graph technology. Let's reinvent the wheel in the right way, founding it upon a proven technology that honours interconnectivity.
Seeing The Big Picture
Some of us are talking about Data Meshes, others about Semantic Layers, and yet another group about Enterprise Search. I can't help wondering if we are all just talking about different aspects of the same thing. When these aspects are connected, they combine to form one thing: a Knowledge Graph.
Transformers and GNNs
Transformers analyse sentences by assigning importance to each word in relation to others, helping them predict or generate the next words in a sentence. This 'attention mechanism' evaluates pairwise interactions between all tokens in a sequence, and these interactions can be seen as edges in a complete graph. Thus, Transformers can be thought of as graph-based models where tokens represent nodes and attention weights represent edges.
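As a toy illustration of this view, the snippet below computes a self-attention matrix over three tokens (with random projections standing in for learned weights) and reads it as the weighted adjacency matrix of a complete directed graph:

```python
import numpy as np

tokens = ["graphs", "love", "attention"]
d = 4  # embedding dimension for this toy example
rng = np.random.default_rng(0)

# Random query/key matrices stand in for a trained model's projections.
Q = rng.normal(size=(len(tokens), d))
K = rng.normal(size=(len(tokens), d))

scores = Q @ K.T / np.sqrt(d)  # pairwise interactions between all tokens
attention = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax

# attention[i, j] is the weight of the edge from token i to token j in a
# complete directed graph whose nodes are the tokens.
for i, src in enumerate(tokens):
    for j, dst in enumerate(tokens):
        print(f"{src} -> {dst}: {attention[i, j]:.2f}")
```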
Learning Occurs In Networks
All learning occurs in networks. This ranges from gene expression networks to networks of cells, from neural networks in brains to artificial neural networks in large generative models like ChatGPT.
Can LLMs Reason?
While LLMs are capable of performing inductive reasoning, they will likely struggle with true deductive reasoning. There is, however, a caveat: LLMs may learn to mimic deductive reasoning so convincingly that it becomes difficult to tell whether they are truly reasoning or merely simulating it. Currently, many AI labs are likely training the next generation of models on large reasoning datasets, hoping that, with sufficiently vast datasets, deep networks, and human reviewers, these models will approximate reasoning to a degree that is functionally indistinguishable from true reasoning. Huge amounts of money and resources are being spent on the bet that simply scaling up will work.
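To make the distinction concrete, the sketch below implements deductive inference as symbolic rule application in Python: a forward-chaining loop whose conclusions are guaranteed by the rules rather than estimated from patterns. The facts and rule are hypothetical examples:

```python
# Facts and rules are triples; '?x' is a variable bound during matching.
facts = {("socrates", "is_a", "human")}
rules = [
    # If ?x is_a human, then ?x is_a mortal.
    (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
]

def deduce(facts, rules):
    """Forward-chain the rules over the facts until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            _, pred, obj = premise
            _, c_pred, c_obj = conclusion
            for subj, f_pred, f_obj in list(derived):
                if f_pred == pred and f_obj == obj:  # premise matches; bind ?x
                    new_fact = (subj, c_pred, c_obj)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(deduce(facts, rules))
# Derives ('socrates', 'is_a', 'mortal') with certainty, not probability.
```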
AI Hype
The leaking of the Q* algorithm coincided with a period of high volatility within OpenAI, culminating in the sacking and subsequent reinstatement of Sam Altman, and ultimately the departure of Chief Scientist Ilya Sutskever. The leak also generated a lot of speculation about the name. The ‘Q’ part seemed relatively uncontroversial, with most commentators agreeing that it was likely a reference to Q-learning. Q-learning is a type of reinforcement learning; it’s a model-free algorithm, which means it doesn’t require a model of the environment. Instead, it learns from experience by interacting with the environment.
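For the curious, the core of tabular Q-learning fits in a few lines. This is a minimal sketch of the update rule, with made-up state, action, and reward values purely for illustration:

```python
import numpy as np

# Tabular Q-learning: model-free, so there is no model of the environment's
# dynamics; the agent learns purely from observed transitions.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def update(state: int, action: int, reward: float, next_state: int) -> None:
    """Apply the core Q-learning update for one observed transition."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])

# One hypothetical step of experience: in state 0, taking action 1 yields
# reward 1.0 and lands the agent in state 2.
update(state=0, action=1, reward=1.0, next_state=2)
print(Q[0])  # the estimate for (state 0, action 1) has moved toward the reward
```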
ICLR 2023 Knowledge Graph Citations
Knowledge Graph citations are quite diffuse within the wider GNN cluster, from which I conclude that Knowledge Graphs have broad application within GNNs.
Everything Connects to Everything Else
As Leonardo da Vinci said, “Learn how to see. Realize that everything connects to everything else.” Graphs take interconnectivity seriously, and Knowledge Graphs allow any organisation to connect most of its internal data together.