Supercharging AI with Supercomputing
Here at Origami Labs, we like to punch above our weight. One way we do this is by collaborating with others, typically bringing our domain expertise together with their complex systems.
We are currently utilising the Isambard-AI cluster at the Bristol Centre for Supercomputing (BriCS), supplied by the AI Research Resource (AIRR) and funded by UK Research and Innovation (UKRI), to develop state-of-the-art, end-to-end trained computer vision models. We’re helping to beta-test the Phase 2 system in exchange for access to high-performance computing resources we simply couldn’t afford alone.
One issue we’ve identified with using off-the-shelf foundation models is that you don’t actually know how a model has been trained. Sure, there’s a paper and a repository providing details, but how do you verify that the released weights actually match that description?
By selectively using trusted national compute resources, we ensure full transparency over the datasets and pipelines that shape our models. Rather than relying on opaque, third-party pretrained weights, we are training from the ground up – controlling every layer of the process. This gives us confidence in the provenance of our models and actively mitigates the risk of adversarial data poisoning or supply chain compromise.
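As a small illustration of what provenance checking can look like in practice, here is a minimal sketch of verifying a downloaded checkpoint against a digest published by its authors before any weights are loaded. The function names and the idea of a separately published digest are our assumptions for the example, not a specific tool we use:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large checkpoints need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_checkpoint(path: str, expected_hex: str) -> bool:
    """Return True only if the file's digest matches the published value.

    A mismatch means the artefact is not the one described -- whether through
    corruption, a stale mirror, or deliberate tampering -- and should be rejected.
    """
    return sha256_of(path) == expected_hex
```

A digest check like this only proves the file is the one that was published; it says nothing about how that file was produced, which is precisely why training from the ground up on trusted infrastructure matters.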
This approach reflects our broader philosophy: systems deployed in the real world must be built on verifiable, secure, and reproducible foundations. Whether we’re optimising ISR tasking workflows using novel computing architectures or building robust AI pipelines, we prioritise integrity at every layer of the stack.
Utilising AIRR resources is one way we reinforce that commitment, not as a requirement, but as a deliberate part of building trusted, sovereign, high-performance AI.