TensorFusion Docs

Getting Started

Get started with TensorOS - the plug-and-play AI platform

Getting Started with TensorOS

TensorOS is currently under active development. This documentation will be updated as features become available. Please contact us for early access.

What is TensorOS?

TensorOS is a full-stack AI platform that combines IaaS, PaaS, and SaaS into a single plug-and-play solution for small and medium-sized enterprises. It simplifies AI infrastructure by providing everything you need to run AI workloads out of the box.

Key Features

  • Plug-and-Play Deployment — Deploy as a software platform or all-in-one hardware appliance
  • Full-Stack AI Platform — Integrated IaaS + PaaS + SaaS in one package
  • GPU Virtualization — Powered by TensorFusion Engine for true GPU memory/error isolation
  • Built-in Model Management — Train, fine-tune, and serve models from a unified interface
  • Resource Optimization — Automatic GPU scheduling and oversubscription

Coming Soon

Detailed installation guides, configuration references, and tutorials are in preparation. In the meantime:

  • Check out TensorFusion Engine for the underlying GPU virtualization technology
  • Contact our sales team for a demo or early access
