Mapping Concept Evolution in Qwen3
David Fooshee (PhD), John Carlsson (PhD), Gunnar Carlsson (PhD)
We often describe Large Language Models (LLMs) as "black boxes." We observe the input and the output, but the internal machinery, the billions of calculations occurring in between, remains largely opaque. We can see that a model understands concepts, but we rarely discern how it constructs them. Understanding the "how" is vital: it gives us better leverage for controlling LLMs and AI systems, and for diagnosing possible malfunctions, such as the introduction of unacceptable biases or the production of undesirable language or modes of communication. This kind of control would also make adapting LLM technology to specific application domains, such as financial or legal documents, simpler and more direct.