Blog

  • Reinforcement Learning through the Lens of Categorical Cybernetics

    This is an overview of the 'RL lens', a construction that we recently introduced to understand some reinforcement learning algorithms such as Q-learning.
  • The Blurry Boundary between Economics and Operations Research

    In which we bring back together the estranged fraternal disciplines of economics and operations research and map out how we can combine them to design cybernetic economies.
  • Exploring best response dynamics

    I explore the effect of players following their best response dynamics in large random normal form games.
  • The Build Your Own Open Games Engine Bootcamp — Part I: Lenses

    The first installment of a multi-part series demystifying the underlying mechanics of the open games engine in a simple manner.
  • Building a Neural Network from First Principles using Free Categories and Para(Optic)

    In this post we will look at how dependent types can allow us to effortlessly implement the category theory of machine learning directly, opening up a path to new generalisations.
  • Enriched Closed Lenses

    I'm going to record something that I think is known to everyone doing research on categorical cybernetics, but which I don't think has been written down anywhere: an even more general version of mixed optics that replaces the backwards actegory with an enrichment. With it, I'll make sense of a curious definition appearing in The Compiler Forest.
  • Modular Error Reporting with Dependent Lenses

    Dependent lenses are useful for general-purpose programming, but in which way exactly? This post demonstrates the use of dependent lenses as input/output-conversion processes, using parsing and error location reporting as a driving example.
  • Value Chain Integrity

    In which we discuss how knowledge travels through the economy, and how, when, and where it forms clusters.
  • Colimits of Selection Functions

    In Towards Foundations of Categorical Cybernetics we built a category whose objects are selection functions and whose morphisms are lenses. It was a key step in how we justified open games in that paper: they're just parametrised lenses weighted by selection functions. In this post I'll show that by adding dependent types and stirring, we can get a nicer category that does the same job but has all colimits, and comes extremely close to having all limits. Fair warning: this post assumes quite a bit of category-theoretic background.
  • On Organization

    In which we describe organization and organizations as tectonic plates shaped by clashing beliefs.
  • Learning with Invariant Preferences

    A system whose architecture has invariant preferences will act to bring about or avoid certain states of the world, no matter what it learns. A lot of people have already put a lot of thought into the issue of good and bad world-states, including very gnarly issues of how to agree on what they should be. What I'm proposing is a technological missing link: how to bridge from that level of abstraction to low-level neural network architectures.
  • The Attention-Seeking Rational Actor

    In which we establish an underlying model for human behavior and claim that all economies are just a variation of the attention economy.
  • Stocks, Flows, Transformations: The Cybernetic Economy

    An Economic Pattern Language (@econpatterns for short) takes the economy and disassembles it into its constituent parts. But first, this blog post describes the economy as a whole.
  • Iteration with Optics

    In this post I'll describe the theory of how to add iteration to categories of optics. Iteration is required for almost all applications of categorical cybernetics beyond game theory, and is something we've been handling only semi-formally for some time. The only tool we need is already one we have inside the categorical cybernetics framework: parametrisation weighted by a lax monoidal functor. I'll end with a conjecture that this is an instance of a general procedure to force states in a symmetric monoidal category.
  • Passive Inference is Compositional, Active Inference is Emergent

    This post is a writeup of a talk I gave at the Causal Cognition in Humans and Machines workshop in Oxford, about some work in progress I have with Toby Smithe. To a large extent this is my take on the theoretical work in Toby's PhD thesis, with the emphasis shifted from category theory and neuroscience to numerical computation and AI. In the last section I will outline my proposal for how to build AGI.
  • How to Stay Locally Safe in a Global World

    Suppose your name is x and you have a very important state machine that you cherish with all your heart. Because you love this state machine so much, you don't want it to malfunction, and you have a subset of its states which you consider to be safe. If your state machine ever leaves this safe space you are in big trouble, so you ask the following question.
  • AI Safety Meets Value Chain Integrity

    Advanced AI making economic decisions in supply chains and markets creates poorly-understood risks, especially by undermining the fundamental concept of individuality of agents. We propose to research these risks by building and simulating models.
  • About the CyberCat Institute blog

    This is a short summary of the post. It is meant to explain how to write for our blog.
  • A Software Engine For Game Theoretic Modelling - Part 2

    Some time ago, in a previous blog post, we introduced our software engine for game theoretic modelling. In this post, we expand on how to apply the engine to use cases relevant to the Ethereum ecosystem. We will consider an analysis of a simplified staking protocol. Our focus will be on compositionality – what this means from the perspective of representing protocols and from the perspective of analyzing protocols.
  • What is Categorical Cybernetics?

    Categorical cybernetics, or CyberCat to its friends, is – no surprise – the application of methods of (applied) category theory to cybernetics. The "category theory" part is clear enough, but the term "cybernetics" is notoriously fluid, and throughout history has meant more or less whatever the writer wanted it to mean. So, let’s lay down some boundaries.