Research papers
Submitted for publication where the work warrants it. Technical reports for the rest.
Active research in emerging technologies — funded under the Australian Government R&D Tax Incentive.
Datakey is a research-first firm with active engagements spanning distributed systems, graph neural networks, software and neural architectures, on-device AI, zero-boundary databases and GPU computing.
If you have an idea or project that needs structured exploration, experimentation, and new knowledge to validate, a partnership with us can deliver your project at substantially reduced costs.
The Australian R&D Tax Incentive is a refundable tax offset for qualifying research and development. For eligible companies the scheme returns up to 43.5% of qualifying R&D costs, substantially reducing a project's net cost. For example, $100,000 of qualifying R&D spend can attract an offset of up to $43,500.
43.5%
refundable tax offset on qualifying R&D for eligible Australian companies.
Datakey has 3+ years of structured research under this scheme applied to internal and client engagements. Done well, this approach makes ambitious research affordable and deepens the technology itself.
The primary pathway: the client funds the R&D project, structured to qualify for the scheme. Datakey contributes research depth and technical execution.
Engagement scope ranges from focused research efforts over 1–3 months through to multi-year programmes with deepening outcomes.
Our preferred engagement. Where research is publicly valuable, we structure the research as a public contribution via peer-reviewed publications and open-source releases. Same engagement structure, different community outcome.
Repositories, libraries, and reference implementations as open source contributions.
The technical deliverables produced by the research.
Registrations and the technical record for your tax claim.
The engineering layer beneath distributed intelligent systems. Systems-level architecture for coordination at scale.
Multiple specialised models coordinating through joint embeddings or common datasets for shared focus.
Intelligence coordinating across nodes, devices, and organisational boundaries — browser, device, edge, and cloud — in real-time.
Apple Silicon unified memory, zero-copy CPU/GPU compute, on-device inference at scale.
On-device intelligence where privacy, locality, and autonomy are primary constraints. Intelligence pipelines running on user hardware.
Database engines with zero serialisation from the wire to both CPU and GPU compute.
Foundational research toward a new database architecture, where CPU and GPU operate on the same physical bytes for massively parallel GPU-level query execution. Read the whitepaper →
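The zero-serialisation idea can be sketched in a few lines: if the wire format is also the in-memory format, a query can execute directly on the received bytes, with no parse or deserialise step between the network and compute. The single-column layout below is purely illustrative, not Datakey's actual wire format.

```python
import struct

def sum_column(wire_bytes: bytes, column_offset: int, row_count: int) -> int:
    """Sum a fixed-width int32 column straight from the wire buffer."""
    # memoryview.cast gives a zero-copy typed view over the same bytes:
    # no intermediate objects are built, the query runs on the buffer itself.
    view = memoryview(wire_bytes)[column_offset:column_offset + 4 * row_count]
    return sum(view.cast("i"))

# Example: 4 rows of one int32 column, native byte order, no header.
payload = struct.pack("=4i", 10, 20, 30, 40)
print(sum_column(payload, 0, 4))  # -> 100
```

The same principle extends to GPU execution on unified-memory hardware, where CPU and GPU address the same physical bytes and the buffer never needs to be copied or re-encoded.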
Benchmark paper measuring throughput, atomic operations, and query performance against incumbents — Redis, Kafka, Postgres, DuckDB, pgvector, Chroma, Qdrant, Milvus. Project · mgraph →
Zero-copy methodologies, decentralised permission structures, and cryptographically verifiable lineage for high-throughput graph synchronisation.
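Verifiable lineage can be illustrated with a minimal hash-chained log: each graph mutation records the hash of the previous entry, so any tampering with history breaks the chain and is detectable. This is a generic sketch of the technique, not Datakey's scheme; the function names and entry layout are hypothetical.

```python
import hashlib
import json

def append_entry(chain: list, mutation: dict) -> None:
    """Append a mutation, chaining it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "mutation": mutation}, sort_keys=True)
    chain.append({"prev": prev_hash, "mutation": mutation,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every link; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev_hash, "mutation": entry["mutation"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, {"op": "add_node", "id": "n1"})
append_entry(chain, {"op": "add_edge", "src": "n1", "dst": "n2"})
print(verify(chain))  # -> True
```

Because each hash commits to the whole prior history, replicas synchronising a graph can verify the lineage of an update without trusting the sender.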
Cooperative frameworks where multiple specialised models coordinate through graph-based representations, applied to AI productivity SaaS at scale. trudi.ai →
Graph-based knowledge representation, retrieval-augmented generation over structured ontologies, and domain-aware search applied to AI productivity SaaS. trudi.ai →
Other engagement models
01 Research depth surfaced as judgement on strategic direction before technical work begins.
02 Research output implemented as paid delivery on top of the substrate the work produced.
03 Same engagement structure with closed-IP outcomes and shared upside pathways.
Capabilities
01 Where most active research lives: graph neural networks, on-device inference, distributed coordination.
02 Where the systems-architecture thesis becomes delivery capability: wire formats, transports, runtime layout.
03 Why we research what we research, in this order, and how IP is structured around it.
We reply within two business days. If a call would be faster, book a thirty-minute conversation.