Power Laws
Networks are ubiquitous, manifesting themselves in cells, ecologies and societies! As we harness the power of these networks to model and interpret our complex data structures, it's vital to understand what makes them statistically distinctive, especially as we increasingly employ them in AI.
Many familiar phenomena in our lives follow the 'normal distribution', represented visually by a bell-shaped curve. For instance, if we measured the heights of all students in a classroom, a few shorter students would appear on the left, the majority would cluster around the centre, and a few taller ones would sit on the right. Such distributions formed the backbone of early machine-learning algorithms: many linear regression implementations, for example, assume normally distributed errors.
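As a rough sketch of that bell curve (the mean and spread below are purely illustrative, and NumPy is assumed to be available):

```python
import numpy as np

# Illustrative assumption: student heights ~ Normal(mean=170 cm, sd=8 cm).
rng = np.random.default_rng(seed=0)
heights = rng.normal(loc=170, scale=8, size=1_000)

# Most samples cluster near the mean -- the bell shape, in histogram form.
within_one_sd = np.mean(np.abs(heights - 170) <= 8)
print(f"mean height: {heights.mean():.1f} cm")
print(f"share within one standard deviation: {within_one_sd:.0%}")  # roughly 68%
```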
However, networks, especially 'scale-free networks', don't follow a normal distribution. Instead, quantities such as the number of connections per node follow 'power laws'. Visually, a power-law distribution has a tall, narrow head and a long, tapering tail. Consider a social platform's follower counts as an analogy: a tiny fraction of accounts boasts millions of followers (the head), while the vast majority have far fewer (the tapering tail).
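One way to see this shape, as a sketch rather than a model of any real platform, is to grow a network by preferential attachment (the Barabási–Albert model) and inspect its degree distribution; the network size and parameters below are illustrative, and networkx is assumed to be available:

```python
import networkx as nx
import numpy as np

# Preferential attachment: each new node links to m existing nodes,
# favouring those that are already well connected.
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)
degrees = np.array([deg for _, deg in G.degree()])

# A few hubs accumulate a large share of the links, while most nodes
# have only a handful -- the heavy-tailed signature of a power law.
print(f"max degree: {degrees.max()}")
print(f"median degree: {np.median(degrees):.0f}")
print(f"nodes with degree <= 5: {np.mean(degrees <= 5):.0%}")
```

The contrast with the classroom-heights example is the point: here the largest degree typically sits far above the median, an outcome a bell curve would make vanishingly unlikely.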
Deep neural networks, especially those built on transformers, learn relationships between items of data. Knowledge graphs, similarly, capture relationships between structured data items. These relationships form networks, and as we merge our AI and data networks, it becomes important to understand the implications of this difference in distribution.
So, what could power laws mean for us? Here are some speculative thoughts:
⭕ Reinforcing feedback loops are related to many observed power-law distributions. Can power laws help us introduce more non-linear dynamics in AI?
⭕ Phase transitions in thermodynamic systems are associated with the emergence of power-law distributions of certain quantities. Graphs undergo something similar: once connectivity crosses a critical threshold, a giant connected component abruptly emerges (see the sketch after this list). Given that phase transitions are related to the concept of emergence, might power laws help explain or predict the emergent capabilities that we see in large neural networks?
⭕ Are these phase transitions somehow linked to the self-organising complexity found in nature? Do scale-free networks provide a way to reverse entropy within a local system? Is this why life relies on networks so extensively? If so, how can we best harness and comprehend this anti-entropic force in our machine learning and data networks?
⭕ And crucially, as we pave the way for these next-generation DATA+AI networks, we must be cautious. The pitfalls of power laws have already manifested starkly in digital platforms like the web and social media. Drawing lessons from those experiences is essential.
⭕ Embrace Complexity: https://medium.com/experience-stack/embrace-complexity-part-1-39483f10a47f
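For the phase transition mentioned in the list above, here is a minimal sketch of the classic Erdős–Rényi giant-component threshold (the graph size and degree values are illustrative, and networkx is assumed to be available):

```python
import networkx as nx

# As the average degree crosses ~1, a giant connected component
# abruptly appears -- the canonical phase transition in random graphs.
n = 2_000
for avg_degree in (0.5, 0.9, 1.1, 2.0, 4.0):
    p = avg_degree / (n - 1)  # edge probability giving this average degree
    G = nx.erdos_renyi_graph(n, p, seed=7)
    largest = max(nx.connected_components(G), key=len)
    print(f"avg degree {avg_degree}: largest component spans "
          f"{len(largest) / n:.0%} of the nodes")
```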