
From Loops to Klein Bottles: Uncovering Hidden Topology in High Dimensional Data
Motivation: Dimensionality reduction is vital to the analysis of high dimensional data. It allows for better understanding of the data, so that one can formulate useful analyses. Dimensionality reduction produces a set of points in a vector space of dimension n, where n is much smaller than the number of features N in the data set. If the number n is 1, 2, or 3, it is possible to visualize the data and obtain insights. If n is larger, then it is more difficult. One interesting situation, though, is where the data concentrates around a non-linear surface whose dimension is 1, 2, or 3, but which can only be embedded in a dimension higher than 3. We will discuss such examples in this post.
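As a minimal sketch of the basic idea (using plain NumPy, not any particular library's API): data that appears to live in many dimensions can concentrate on a surface of much lower intrinsic dimension. Here points on a circle (a 1-dimensional surface) are embedded isometrically in R^10, and a PCA via the SVD recovers the fact that essentially all the variance lies in two directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample 200 points on a unit circle: intrinsic dimension 1
theta = rng.uniform(0, 2 * np.pi, 200)
circle = np.column_stack([np.cos(theta), np.sin(theta)])  # shape (200, 2)

# Embed the circle in R^10 via a random linear isometry,
# so the raw feature count N = 10 hides the simple shape
Q, _ = np.linalg.qr(rng.standard_normal((10, 2)))
X = circle @ Q.T  # shape (200, 10)

# PCA via SVD: nearly all variance lies in the top 2 components
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
var_ratio = (s ** 2) / (s ** 2).sum()
print(var_ratio[:3])  # the first two entries dominate; the rest are ~0
```

A Klein bottle is the more interesting case: its intrinsic dimension is 2, but no embedding into R^3 exists, so linear projections to 3 or fewer dimensions necessarily distort it, which is where topological methods earn their keep.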

How to Use BluelightAI Cobalt with Tabular Data
BluelightAI Cobalt is built to quickly give you deep insights into complex data. You may have seen examples where Cobalt quickly reveals something hidden in text or image data, leveraging the power of neural embedding models. But what about tabular data, the often-underappreciated workhorse of machine learning and data science tasks? Can Cobalt bring the power of TDA to understanding structured tabular datasets?
Yes! Using tabular data in Cobalt is easy and straightforward. We’ll show how to do this with a quick exploration of a simple tabular dataset from the UCI repository. This dataset consists of physicochemical data on around 6500 samples of different wines, together with quality ratings and a tag for whether the wine is red or white.
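For readers who want to follow along, the combined red/white table can be assembled with ordinary pandas before handing it to any analysis tool. The sketch below uses a tiny inline stand-in for the real UCI wine-quality CSVs (the actual files have 11 physicochemical columns plus a quality score; the column names here are a representative subset, and `is_red` is our illustrative name for the red/white tag):

```python
import pandas as pd

# Toy stand-in for the UCI wine-quality CSVs (illustrative columns only)
red = pd.DataFrame({
    "fixed acidity": [7.4, 7.8],
    "alcohol": [9.4, 9.8],
    "quality": [5, 5],
})
white = pd.DataFrame({
    "fixed acidity": [7.0, 6.3],
    "alcohol": [8.8, 9.5],
    "quality": [6, 6],
})

# Tag each sample, then stack into one table, mirroring how the
# ~6500-sample combined dataset is built from the two source files
red["is_red"] = 1
white["is_red"] = 0
wine = pd.concat([red, white], ignore_index=True)
print(wine.shape)  # (4, 4)
```

In practice you would read the two CSVs from the UCI repository instead of constructing them inline; the concatenation step is the same.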

Geometry of Features in Mechanistic Interpretability
This post is motivated by the observation in Open Problems in Mechanistic Interpretability by Sharkey, Chughtai, et al. that “SDL (sparse dictionary learning) leaves feature geometry unexplained”, and that it is desirable to utilize geometric structures to gain interpretability for sparse autoencoder features.
We strongly agree, and the goal of this post is to describe one method for imposing such structures on data sets in general. Of course, it applies particularly to the case of sparse autoencoder features in LLMs. The need for geometric structures on feature sets arises generally in the data science of wide data sets (those with many columns), such as the activation data sets that occur in complex neural networks. We will give some examples in the life sciences, and conclude with one derived from LLMs.

Topological Data Analysis and Mechanistic Interpretability
In this post, we’ll look at some ways to use topological data analysis (TDA) for mechanistic interpretability.
We’ll first show how one can apply TDA in a very simple way to the internals of convolutional neural networks to obtain information about the “responsibilities” of the various layers, as well as about the training process. For LLMs, though, simply approaching weights or activations “raw” yields limited insights, and one needs additional methods like sparse autoencoders (SAEs) to obtain useful information about the internals. We will discuss this methodology, and give a few initial examples where TDA helps reveal structure in SAE feature geometry.

BluelightAI Cobalt, the Platform for Illuminating and Improving AI Models, Now Available Globally on PyPI
Groundbreaking AI illumination and improvement platform BluelightAI Cobalt is now available for worldwide distribution via PyPI, the Python Package Index.
Recognizing the growing need for data-centric quality and performance solutions in the AI/ML market, BluelightAI, the leading provider of Topological Data Analysis (TDA) for datasets and models, has opened distribution of the Cobalt platform to data scientists globally.
Benefiting from its uncanny ability to reveal previously hidden data patterns, teams around the world are using BluelightAI Cobalt for tasks such as selecting the best-performing embedding models for ecommerce search, detecting fraud patterns, curating data for model building, and performing comprehensive error analysis prior to deployment.

Intelligent Search Results for Ecommerce: BluelightAI and Marqo Join Forces
Improving Ecommerce Search with Automated Query Analysis
The vector database market was estimated at $1.6 billion in 2023 and is projected to reach $13.3 billion by 2033. Ecommerce is becoming a driving force in the vector database market, where these databases power advanced product search, enabling precise, personalized, relevant results. Together with Marqo, we are enhancing search and driving an increase in revenue for ecommerce companies.

Curate Your Datasets with Cobalt for Higher Performing Models
Any model serving customers has a certain risk/reward ratio that must be considered. The reward is the potential value the business receives (the automation of routine or mundane tasks, the associated saved costs, etc.).