
Amazon Managed Grafana

Amazon Managed Grafana (AMG) is AWS's fully managed Grafana service — the Grafana visualization engine operated by AWS, with native integrations to AWS data sources (CloudWatch, AMP / Managed Prometheus, Timestream, X-Ray, OpenSearch), authentication via AWS SSO / IAM Identity Center, and per-workspace isolation. Stub page — expand on future AMG-internals sources.

Why it shows up on this wiki

AMG is the customer-facing dashboarding tier that pairs with the AWS observability stack:

  • CloudWatch / CloudWatch Logs Insights = metrics + logs source.
  • Amazon Managed Prometheus = Prometheus-API-compatible metrics source.
  • X-Ray = traces source.
  • AMG = the dashboard / alert / visualization surface over all of the above.

AWS operates the Grafana server, including upgrades, HA, and authentication integration; customers get per-workspace role-based access and per-workspace plugin / data-source configuration.
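Workspace provisioning goes through the AMG control-plane API (`grafana` in the AWS SDKs). A minimal sketch of the request parameters for boto3's `create_workspace`, assuming IAM Identity Center sign-in and a service-managed data-source role; the workspace name and role ARN are placeholders, not from any source:

```python
# Sketch: request parameters for boto3's grafana.create_workspace().
# All names/ARNs below are illustrative placeholders.

def workspace_params(name: str, role_arn: str) -> dict:
    """Build the kwargs for boto3.client('grafana').create_workspace()."""
    return {
        "workspaceName": name,
        "accountAccessType": "CURRENT_ACCOUNT",      # this account only
        "authenticationProviders": ["AWS_SSO"],      # IAM Identity Center sign-in
        "permissionType": "SERVICE_MANAGED",         # AWS manages the data-source role
        "workspaceRoleArn": role_arn,
        # Data sources AMG should pre-wire IAM permissions for:
        "workspaceDataSources": ["CLOUDWATCH", "PROMETHEUS", "XRAY"],
    }

params = workspace_params(
    "eks-observability",
    "arn:aws:iam::123456789012:role/amg-workspace-role",  # placeholder ARN
)
# With credentials configured, this dict would be passed as:
#   boto3.client("grafana").create_workspace(**params)
```

Per-workspace isolation falls out of this shape: each `create_workspace` call yields its own endpoint, role, and plugin/data-source configuration.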

Tenancy shape seen in ingested sources

At Generali Malaysia, AMG is used to give each application owner per-project dashboards scoped to an EKS namespace. One AMG workspace exposes many dashboards; dashboards are sliced by the cluster / namespace dimension that already segments the K8s workload. This is the same tenancy axis as the cost-allocation-tag scheme (patterns/eks-cost-allocation-tags) — one consistent dimension across billing and observability.
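The per-namespace slicing is typically done with a dashboard-level template variable rather than N copies of each panel. A sketch of what that variable looks like in the dashboard JSON, assuming Container Insights metrics in CloudWatch (the region and metric name are illustrative, not from the source):

```python
# Sketch: a Grafana dashboard template variable that scopes every panel
# to one EKS namespace. Region and metric name are assumptions.

def namespace_variable(region: str = "ap-southeast-1") -> dict:
    """A 'query'-type Grafana variable backed by the CloudWatch data source."""
    return {
        "name": "namespace",
        "type": "query",
        "datasource": {"type": "cloudwatch"},
        # CloudWatch plugin helper: enumerate values of the Namespace dimension.
        "query": f"dimension_values({region}, ContainerInsights, "
                 f"pod_cpu_utilization, Namespace)",
        "refresh": 2,  # re-query on time-range change
    }

var = namespace_variable()
```

Panels then reference `$namespace` in their dimension filters, so one workspace and one dashboard definition serve every project along the same tenancy axis as the cost-allocation tags.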

Dashboards shown:

  • Cluster health
  • Node performance
  • Pod resource utilization
  • Application performance indicators (custom per-project)

All are driven by CloudWatch as the data source, via the CloudWatch → AMG integration.
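Concretely, a pod-utilization panel in this setup would query the CloudWatch `ContainerInsights` metric namespace. A sketch of one panel target, assuming the Container Insights agent is publishing EKS metrics; the cluster name is a placeholder:

```python
# Sketch: one CloudWatch metrics query target for a
# "Pod resource utilization" panel. Cluster name is a placeholder.

def pod_cpu_target(cluster: str = "prod-eks") -> dict:
    """CloudWatch query against Container Insights, filtered by namespace."""
    return {
        "namespace": "ContainerInsights",       # Container Insights metric namespace
        "metricName": "pod_cpu_utilization",
        "statistic": "Average",
        "dimensions": {
            "ClusterName": cluster,
            "Namespace": "$namespace",          # per-project dashboard variable
        },
        "period": "60",
    }

target = pod_cpu_target()
```

The same pattern covers the other dashboards listed above (e.g. `node_cpu_utilization` for node performance, cluster-level metrics for cluster health), swapping only the metric name and dimensions.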

Seen in

  • sources/2026-03-23-aws-generali-malaysia-eks-auto-mode — the per-namespace observability substrate for Generali's multi-tenant EKS cluster. One AMG workspace, N dashboards (one per business-unit project). CloudWatch as data source, not direct Prometheus.
  • sources/2026-04-06-aws-unlock-efficient-model-deployment-simplified-inference-operator-setup-on-amazon-sagemaker-hyperpod — the dashboard surface for LLM-inference observability on AWS's 2026-04-06 SageMaker HyperPod Inference Operator. Post cites "built-in integration with HyperPod Observability provides immediate visibility into inference metrics, cache performance, and routing efficiency through Amazon Managed Grafana dashboards." Key inference metrics surfaced via AMG: time-to-first-token (TTFT), end-to-end inference latency, GPU utilization. No internal architecture disclosed — AMG appears here as the dashboard layer, not the metrics pipeline.