How It Works

TMA.sh turns a GitHub repository into a live Telegram Mini App. You connect a repo, link a Telegram bot, and push code. The platform handles the rest: building, hosting, CDN distribution, and bot configuration.

Every deployment follows the same path from source code to a live Mini App:

  1. Connect — Link a GitHub repository and a Telegram bot to a TMA.sh project.
  2. Push — Push code to the repository. A GitHub webhook notifies TMA.sh.
  3. Build — A build container clones the repo, installs dependencies, and runs the build command. The output is a static directory (HTML, CSS, JS, assets).
  4. Upload — Built assets are uploaded to R2 (Cloudflare’s object storage), and KV routing tables are updated to point the project’s subdomain to the new deployment.
  5. Configure — The Telegram bot’s menu button URL is updated automatically to point to the deployment.
  6. Live — The Mini App is accessible at {project}.tma.sh.

git push → GitHub webhook → Build container → R2 upload → KV routing → Live

The entire pipeline typically completes in under a minute for small projects.
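The steps above can be sketched as a sequence of stages run in order. This is an illustrative sketch only; the stage names and `runPipeline` helper are assumptions, not TMA.sh's actual internals.

```typescript
// Hypothetical pipeline stages, mirroring the documented flow:
// clone repo → build → upload to R2 → update KV routing → configure bot.
type Stage = "clone" | "build" | "upload" | "route" | "configureBot";

const stages: Stage[] = ["clone", "build", "upload", "route", "configureBot"];

// Runs each stage in order for one commit; a failing stage aborts the
// deployment, so the returned list shows how far the pipeline got.
async function runPipeline(
  commit: string,
  handlers: Record<Stage, (commit: string) => Promise<void>>
): Promise<Stage[]> {
  const completed: Stage[] = [];
  for (const stage of stages) {
    await handlers[stage](commit);
    completed.push(stage);
  }
  return completed;
}
```

The key property is ordering: routing is only updated after the upload succeeds, so a half-built deployment never receives traffic.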

TMA.sh supports two deployment types:

Production — Triggered by pushes to the default branch (usually main). The production deployment is what your users see. It is served at {project}.tma.sh and any custom domains you configure.

Preview — Triggered by pull requests. Each PR gets its own URL at pr{number}--{project}.tma.sh and its own test bot. Preview deployments are useful for QA, stakeholder review, and testing changes before they reach production. See Preview Environments for details.
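The two URL patterns above can be captured in a small helper. The function and field names here are hypothetical; only the hostname formats come from the documentation.

```typescript
// Computes the serving hostname for a deployment, following the documented
// patterns: {project}.tma.sh for production, pr{number}--{project}.tma.sh
// for previews. The DeploymentRef shape is an illustrative assumption.
interface DeploymentRef {
  project: string;   // project slug, e.g. "shop"
  prNumber?: number; // set only for preview deployments
}

function deploymentHost(ref: DeploymentRef): string {
  return ref.prNumber !== undefined
    ? `pr${ref.prNumber}--${ref.project}.tma.sh` // preview
    : `${ref.project}.tma.sh`;                   // production
}
```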

By default, every push to the default branch triggers a production deployment. This behavior is controlled by the auto-deploy toggle in project settings. When disabled, deployments must be triggered manually from the dashboard or CLI.

Auto-deploy applies per-project. You can have some projects deploy on every push while others require manual intervention.
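The decision a webhook handler would make under these rules can be sketched as follows; the field names are illustrative assumptions, not TMA.sh's schema.

```typescript
// Per-project settings relevant to the auto-deploy decision described above.
interface ProjectSettings {
  autoDeploy: boolean;   // the per-project toggle
  defaultBranch: string; // usually "main"
}

// Returns true when an incoming push should trigger a production
// deployment: the toggle is on AND the push is to the default branch.
function shouldAutoDeploy(settings: ProjectSettings, pushedBranch: string): boolean {
  return settings.autoDeploy && pushedBranch === settings.defaultBranch;
}
```

With the toggle off, every push is ignored and a deployment must be started from the dashboard or CLI instead.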

The platform is organized around a few core entities:

Organization
├── Project
│   ├── Deployment (production or preview)
│   ├── Bot (production + preview bots)
│   ├── Secret (environment variables, scoped by environment)
│   └── Domain (custom domains)
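The hierarchy above could be modeled with types like these. The interfaces are an illustrative sketch, not TMA.sh's actual schema; field names are assumptions.

```typescript
// Illustrative model of the entity tree: an organization owns projects,
// and each project owns its deployments, bots, secrets, and domains.
type BotEnvironment = "production" | "preview" | "development";

interface Organization { id: string; name: string; projects: Project[]; }
interface Project {
  id: string;
  slug: string; // used for {slug}.tma.sh
  deployments: Deployment[];
  bots: Bot[];
  secrets: Secret[];
  domains: Domain[];
}
interface Deployment { id: string; commit: string; kind: "production" | "preview"; }
interface Bot { id: string; environment: BotEnvironment; }
interface Secret { key: string; value: string; environment: BotEnvironment; }
interface Domain { hostname: string; }
```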

Every user gets a personal organization on signup. All projects belong to an organization, not directly to a user. This means team collaboration is built in from day one — invite members to your organization and they get access to all its projects.

A project represents a single Telegram Mini App. It is linked to one GitHub repository and one or more Telegram bots. Each project has its own subdomain ({project}.tma.sh), build configuration, secrets, and deployment history.

A deployment is an immutable snapshot of your built application at a specific commit. Production deployments serve live traffic. Preview deployments are tied to pull requests. Old deployments are retained for instant rollback.
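Because deployments are immutable and the subdomain is just a routing pointer, instant rollback amounts to repointing that entry at an older deployment. A minimal sketch, assuming a hypothetical key-value routing interface:

```typescript
// Assumed minimal interface over the routing table (e.g. a KV namespace).
interface RoutingTable {
  put(key: string, value: string): Promise<void>;
}

// Rollback: repoint the hostname at a previously built deployment.
// No rebuild is needed because the old assets are still in storage.
async function rollback(
  table: RoutingTable,
  hostname: string,
  previousDeploymentId: string
): Promise<void> {
  await table.put(hostname, previousDeploymentId);
}
```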

Each project has at least one Telegram bot for production. Preview environments get their own bots with environment: 'preview' so that testing never affects your production bot’s state or menu configuration. Bots can be scoped to one of three environments: production, preview, or development.

Secrets are environment variables injected at build time. They are scoped by environment (production, preview, or development), so you can use different API keys or configuration for testing versus production.
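The scoping rule can be sketched as a simple filter that selects the secrets injected into one environment's build. The function and types here are illustrative assumptions.

```typescript
// Secret scoping as described above: each secret belongs to exactly one
// environment, and a build only receives secrets for its own environment.
type SecretEnv = "production" | "preview" | "development";

interface ScopedSecret { key: string; value: string; environment: SecretEnv; }

// Returns the key/value pairs injected into a build for one environment.
function secretsFor(all: ScopedSecret[], env: SecretEnv): Record<string, string> {
  const out: Record<string, string> = {};
  for (const s of all) {
    if (s.environment === env) out[s.key] = s.value;
  }
  return out;
}
```

This is why the same key (say, `API_KEY`) can safely hold a test credential in preview and a live credential in production.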

Projects are served at {project}.tma.sh by default. You can add custom domains (e.g., app.example.com) with automatic TLS provisioning.

TMA.sh runs entirely on Cloudflare’s infrastructure:

| Component       | Service               | Purpose                                               |
| --------------- | --------------------- | ----------------------------------------------------- |
| API             | Workers               | Request handling, webhook processing, bot management  |
| Database        | D1                    | Organizations, projects, deployments, bots, secrets   |
| Assets          | R2                    | Built application files (HTML, JS, CSS, images)       |
| Routing         | KV                    | Subdomain-to-deployment mapping, fast lookups         |
| Build queue     | Queues                | Ordered build job processing                          |
| Build execution | Containers            | Isolated build environments                           |
| User API routes | Workers for Platforms | Per-project server-side API endpoints                 |

This architecture means deployments are globally distributed with no single point of failure, and scaling is handled automatically.
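The routing row in the table above boils down to a single fast lookup per request: the incoming hostname is the key, the active deployment is the value. A minimal sketch, assuming the hostname itself is the KV key (the key format and store interface are assumptions):

```typescript
// Assumed read-only view of the KV routing namespace.
interface RoutingStore {
  get(key: string): Promise<string | null>;
}

// Resolves an incoming hostname (production or preview) to the deployment
// ID whose assets should be served, or null if nothing is deployed there.
async function resolveDeployment(
  store: RoutingStore,
  hostname: string // e.g. "shop.tma.sh" or "pr42--shop.tma.sh"
): Promise<string | null> {
  return store.get(hostname);
}
```

Because the mapping lives in a globally replicated store, the lookup happens close to the user rather than at a central origin.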