Agents, tokens, shell access, and host operations
Loopback’s agent is the on-machine component that connects your servers to the control plane. This page covers tokens, installation, remote shell, and related host actions—with extra depth for security and operations buyers.
For the fleet narrative (why agents exist next to Kubernetes and bare metal), read Agent and fleet management.
Why agents exist
Agents enable:
- Heartbeat and version reporting (feeds reconciliation and support).
- Command channel for maintenance and diagnostics (capabilities vary by deployment).
- Network mesh configuration (WireGuard) applied consistently across hosts.
- Optional modules such as the eBPF host firewall (Host firewall (eBPF)).
- Integration with monitoring sources and staged update deliveries.
Without an agent, a host may remain unknown or unmanaged beyond raw cloud provisioning—Loopback cannot enforce intent at the OS edge.
Agent tokens
Agent tokens are workspace-scoped credentials used when installing the agent. They are not user passwords and must not be emailed casually.
Create a token
You can create tokens with:
- Optional description (ops note: ticket id, environment, owner).
- Long-lived vs short-lived semantics (exact behavior depends on deployment defaults; confirm with your operator).
RBAC: token mint, list, and revoke are separate capabilities; see Access control & permissions and Auditing and fine-grained access.
Security practices
- Rotate tokens on a schedule aligned with CI secret rotation.
- Revoke immediately if a token leaks (treat like a bearer secret).
- Limit who can mint tokens: anyone who can mint creates a path for unauthorized machines to join the workspace fleet if the network path allows.
- Prefer short-lived tokens for high-risk environments.
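The rotation practice above can be automated with a simple age check. This is a minimal sketch; the token fields (`description`, `created_at`) and the 90-day window are assumptions for illustration, not Loopback's actual token schema.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # assumption: align with your CI secret rotation window

def tokens_due_for_rotation(tokens, now=None):
    """Return descriptions of tokens older than MAX_AGE (field names are illustrative)."""
    now = now or datetime.now(timezone.utc)
    return [t["description"] for t in tokens if now - t["created_at"] > MAX_AGE]

tokens = [
    {"description": "prod-web TICKET-101",
     "created_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"description": "staging TICKET-202",
     "created_at": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]
# With "now" pinned to 2024-07-01, only the January token exceeds 90 days.
print(tokens_due_for_rotation(tokens, now=datetime(2024, 7, 1, tzinfo=timezone.utc)))
```

Feed the output into your ticketing system so rotation work lands with the team that owns each environment.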
Failure modes
- Expired token during install → re-mint and rerun bootstrap.
- Blocked egress → agent cannot check in; fix firewall / proxy / air-gap mirror with operator guidance.
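For the blocked-egress case, a quick TCP reachability probe from the host narrows the problem before involving your operator. The sketch below is a generic best-effort check; the endpoint you probe should be whatever API endpoint your agent is configured with (the hostname shown is a placeholder).

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Best-effort TCP reachability probe for a control-plane endpoint."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Placeholder endpoint; substitute your workspace's configured API host.
# print(can_reach("loopback.example.internal", 443))
```

A `False` here points at firewall, proxy, or air-gap mirror configuration rather than the agent itself.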
Installing the agent
The platform serves an install script (URL from the console or API) that:
- Downloads the agent binary for your environment (stable vs preview channel depends on deployment).
- Configures systemd on Linux with:
  - API endpoint (production vs staging).
  - Token placeholder you replace or pass via environment.
  - Optional verbose diagnostics flags in non-production.
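The systemd configuration the installer lays down looks roughly like the sketch below. The unit name, file paths, and environment variable names are illustrative assumptions; the install script writes the real values for your deployment.

```ini
# /etc/systemd/system/loopback-agent.service  (illustrative sketch, not the shipped unit)
[Unit]
Description=Loopback agent (illustrative)
After=network-online.target
Wants=network-online.target

[Service]
# Endpoint and token variable names are assumptions.
Environment=LOOPBACK_API_ENDPOINT=https://api.example.invalid
EnvironmentFile=-/etc/loopback/agent.env
ExecStart=/usr/local/bin/loopback-agent --endpoint ${LOOPBACK_API_ENDPOINT}
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Keeping the token in an `EnvironmentFile` rather than the unit file itself makes rotation a file swap plus restart, with no unit edits.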
Typical bootstrap
- Create agent token in Loopback UI/API.
- On the server: run the curl | bash style flow as root only when your policy permits unattended scripts; some enterprises prefer packaged installs, so ask your operator.
- Verify the agent checks in: the host transitions to a managed state in the UI.
- Confirm network mesh and any modules converge (may take minutes).
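The verification steps above (check-in and convergence) can be scripted as a poll with a deadline. The sketch is deliberately generic: `fetch_state` stands in for whatever API or CLI call returns the host's state in your deployment.

```python
import time

def wait_for_state(fetch_state, wanted, timeout=600, interval=10):
    """Poll fetch_state() until it returns `wanted` or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_state() == wanted:
            return True
        time.sleep(interval)
    return False

# Usage with a stand-in fetcher; replace the lambda with a real API/CLI call.
states = iter(["provisioning", "updating", "managed"])
print(wait_for_state(lambda: next(states), "managed", timeout=5, interval=0))
```

Convergence of the mesh and optional modules can take minutes, so size the timeout generously.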
Agent updates
Update deliveries (operator-controlled) can roll out new agent versions in waves to avoid thundering herds. Hosts briefly report an updating state; prolonged failure needs support triage (disk, permissions, SELinux, proxy).
Details: Agent install and updates.
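Wave-based rollout is commonly implemented by hashing each host into a stable bucket, so a given host always lands in the same wave across releases. A minimal sketch of that pattern (the hashing scheme is an assumption, not Loopback's actual rollout logic):

```python
import hashlib

def wave_for(hostname: str, waves: int = 4) -> int:
    """Deterministically assign a host to a rollout wave (0 = earliest)."""
    digest = hashlib.sha256(hostname.encode()).digest()
    return digest[0] % waves

hosts = ["web-1", "web-2", "db-1", "cache-1"]
rollout = {h: wave_for(h) for h in hosts}
# Promote the update wave by wave, pausing promotion if failure rates rise.
```

The value of determinism is operational: a host that fails in wave 0 of one release will be in wave 0 of the next, so canary coverage stays consistent.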
Shell sessions
Loopback can open interactive shell sessions to hosts through the API. This is a break-glass feature:
- Power level: often equivalent to root SSH for many workflows.
- Risk: insider threat, credential exposure in scrollback, accidental production changes.
RBAC: treat shell as tier-0; most production roles should not have it. See Access control & permissions.
Recommended governance
- Require a ticket ID in your process (even if not enforced in software).
- Time-bound access via temporary role assignment in your IdP/ITSM where possible.
- Log session start/end to SIEM; review monthly samples for pattern anomalies.
- Prefer immutable infra patterns so shells are rare; if you shell daily, fix automation gaps instead.
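The SIEM-logging recommendation can be as simple as emitting one structured event per session boundary. The field names below are illustrative assumptions; map them onto your SIEM's schema.

```python
import json
from datetime import datetime, timezone

def shell_audit_event(action, host, user, ticket):
    """Build a session-boundary audit event; field names are illustrative."""
    return json.dumps({
        "event": f"shell.session.{action}",   # "start" or "end"
        "host": host,
        "user": user,
        "ticket": ticket,                     # the process control from above
        "ts": datetime.now(timezone.utc).isoformat(),
    })

# Ship one event at session start and one at session end:
# print(shell_audit_event("start", "db-1", "alice", "TICKET-123"))
```

Pairing start/end events also gives you session duration for the monthly anomaly review.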
When shells are still justified
- Joint debugging with vendor support, with read-only observers on the call.
- Disaster recovery when automation is blocked by an external dependency.
- Bootstrap of legacy systems during migration windows.
Host power and hardware actions
Where the provider API supports it, Loopback exposes power management (on/off/reset classes). Expect asynchronous behavior—cloud APIs can take tens of seconds; refresh host status after a short wait.
Caution: hard power events are disruptive; align with maintenance windows and Kubernetes cordon/drain flows for cluster nodes.
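Because power actions are asynchronous, clients typically poll the host's status with backoff rather than assuming immediate effect. A sketch of that pattern with an injected status function (the function name and states are assumptions):

```python
import time

def await_power_state(get_state, wanted, attempts=6, base_delay=5):
    """Poll get_state() with exponential backoff until `wanted` or attempts run out."""
    for i in range(attempts):
        if get_state() == wanted:
            return True
        time.sleep(base_delay * (2 ** i))  # 5s, 10s, 20s, ... matches tens-of-seconds APIs
    return False

# Usage with a stand-in; replace the lambda with a real status query.
seq = iter(["resetting", "resetting", "on"])
print(await_power_state(lambda: next(seq), "on", attempts=5, base_delay=0))
```

For cluster nodes, run this only after cordon/drain has completed, per the caution above.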
Load-balancer integration on hosts
Hosts may participate in Loopback-managed edge fabrics (WGLB-class) and host-integrated firewall modules (LBFW / eBPF). Effective policy is layered with Firewalls and Load balancing.
Ask your operator for the network diagram that matches your workspace—edge ACLs, host firewall, and cloud security groups must be designed together.