# Dashboards reference

{/* DO NOT EDIT: generated via: bazel run //doc/admin/observability:write_monitoring_docs */}

This document contains a complete reference on Sourcegraph's available dashboards, as well as details on how to interpret the panels and metrics.

To learn more about Sourcegraph's metrics and how to view these dashboards, see [our metrics guide](https://sourcegraph.com/docs/admin/observability/metrics).

## Frontend

<p class="subtitle">Serves all end-user browser and API requests.</p>

To see this dashboard, visit `/-/debug/grafana/d/frontend/frontend` on your Sourcegraph instance.

### Frontend: Search at a glance

#### frontend: 99th_percentile_search_request_duration

<p class="subtitle">99th percentile successful search request duration over 5m</p>

Refer to the [alerts reference](alerts#frontend-99th_percentile_search_request_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
```
</details>
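
The query above estimates the percentile from Prometheus histogram buckets, so its resolution is bounded by the configured bucket boundaries. As a hedged variation (assuming the standard `instance` label that Prometheus attaches at scrape time), the same percentile can be split per instance to localize a slow replica:

```
histogram_quantile(0.99, sum by (le, instance)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
```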

<br />

#### frontend: 90th_percentile_search_request_duration

<p class="subtitle">90th percentile successful search request duration over 5m</p>

Refer to the [alerts reference](alerts#frontend-90th_percentile_search_request_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
```
</details>

<br />

#### frontend: timeout_search_responses

<p class="subtitle">Timeout search responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-timeout_search_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_search_streaming_response{status=~"timeout|partial_timeout",source="browser"}[5m])) / sum(increase(src_search_streaming_response{source="browser"}[5m])) * 100
```
</details>
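
This panel divides two `increase()` counters: responses whose `status` is `timeout` or `partial_timeout` over all streaming responses, scaled to a percentage. A minimal sketch that keeps the `status` label in the numerator so full and partial timeouts chart as separate series (same labels as the reference query):

```
sum by (status)(increase(src_search_streaming_response{status=~"timeout|partial_timeout",source="browser"}[5m])) / ignoring(status) group_left sum(increase(src_search_streaming_response{source="browser"}[5m])) * 100
```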

<br />

#### frontend: hard_error_search_responses

<p class="subtitle">Hard error search responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-hard_error_search_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_search_streaming_response{status="error",source="browser"}[5m])) / sum(increase(src_search_streaming_response{source="browser"}[5m])) * 100
```
</details>

<br />

#### frontend: search_no_results

<p class="subtitle">Searches with no results every 5m</p>

Refer to the [alerts reference](alerts#frontend-search_no_results) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_search_streaming_response{status="no_results",source="browser"}[5m])) / sum(increase(src_search_streaming_response{source="browser"}[5m])) * 100
```
</details>

<br />

#### frontend: search_alert_user_suggestions

<p class="subtitle">Search alert user suggestions shown every 5m</p>

Refer to the [alerts reference](alerts#frontend-search_alert_user_suggestions) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (alert_type)(increase(src_search_streaming_response{status="alert",alert_type!~"timed_out",source="browser"}[5m])) / ignoring(alert_type) group_left sum(increase(src_search_streaming_response{source="browser"}[5m])) * 100
```
</details>
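
The `ignoring(alert_type) group_left` clause makes the division work: the numerator carries an `alert_type` label that the aggregate denominator lacks, so matching ignores that label and `group_left` allows the many-to-one join while the numerator keeps its labels. If the per-type split is not needed, dropping `by (alert_type)` removes the join entirely; a sketch:

```
sum(increase(src_search_streaming_response{status="alert",alert_type!~"timed_out",source="browser"}[5m])) / sum(increase(src_search_streaming_response{source="browser"}[5m])) * 100
```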

<br />

#### frontend: page_load_latency

<p class="subtitle">90th percentile page load latency over all routes over 10m</p>

Refer to the [alerts reference](alerts#frontend-page_load_latency) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route!="raw",route!="blob",route!~"graphql.*"}[10m])))
```
</details>
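
The route exclusions drop raw file, blob, and GraphQL traffic, whose latency profiles would otherwise skew the percentile. To attribute a regression to a specific endpoint, the same buckets can be split by the `route` label that the filters above already rely on; a sketch:

```
histogram_quantile(0.9, sum by (le, route)(rate(src_http_request_duration_seconds_bucket{route!="raw",route!="blob",route!~"graphql.*"}[10m])))
```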

<br />

### Frontend: Search-based code intelligence at a glance

#### frontend: 99th_percentile_search_codeintel_request_duration

<p class="subtitle">99th percentile code-intel successful search request duration over 5m</p>

Refer to the [alerts reference](alerts#frontend-99th_percentile_search_codeintel_request_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))
```
</details>

<br />

#### frontend: 90th_percentile_search_codeintel_request_duration

<p class="subtitle">90th percentile code-intel successful search request duration over 5m</p>

Refer to the [alerts reference](alerts#frontend-90th_percentile_search_codeintel_request_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))
```
</details>

<br />

#### frontend: hard_timeout_search_codeintel_responses

<p class="subtitle">Hard timeout search code-intel responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-hard_timeout_search_codeintel_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
```
</details>
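
Hard timeouts are counted from two places: responses with `status="timeout"`, plus alert responses with `alert_type="timed_out"`; the query sums both before dividing by all code-intel search responses. A sketch that charts the two sources as separate series instead of summing them (an `or` union, with the same labels as the reference query):

```
sum by (status)(increase(src_graphql_search_response{status="timeout",source="browser",request_name="CodeIntelSearch"}[5m])) or sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name="CodeIntelSearch"}[5m]))
```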

<br />

#### frontend: hard_error_search_codeintel_responses

<p class="subtitle">Hard error search code-intel responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-hard_error_search_codeintel_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
```
</details>

<br />

#### frontend: partial_timeout_search_codeintel_responses

<p class="subtitle">Partial timeout search code-intel responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-partial_timeout_search_codeintel_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name="CodeIntelSearch"}[5m])) * 100
```
</details>

<br />

#### frontend: search_codeintel_alert_user_suggestions

<p class="subtitle">Search code-intel alert user suggestions shown every 5m</p>

Refer to the [alerts reference](alerts#frontend-search_codeintel_alert_user_suggestions) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
```
</details>

<br />

### Frontend: Search GraphQL API usage at a glance

#### frontend: 99th_percentile_search_api_request_duration

<p class="subtitle">99th percentile successful search API request duration over 5m</p>

Refer to the [alerts reference](alerts#frontend-99th_percentile_search_api_request_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))
```
</details>

<br />

#### frontend: 90th_percentile_search_api_request_duration

<p class="subtitle">90th percentile successful search API request duration over 5m</p>

Refer to the [alerts reference](alerts#frontend-90th_percentile_search_api_request_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))
```
</details>

<br />

#### frontend: hard_error_search_api_responses

<p class="subtitle">Hard error search API responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-hard_error_search_api_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (status)(increase(src_graphql_search_response{status=~"error",source="other"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="other"}[5m]))
```
</details>

<br />

#### frontend: partial_timeout_search_api_responses

<p class="subtitle">Partial timeout search API responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-partial_timeout_search_api_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_graphql_search_response{status="partial_timeout",source="other"}[5m])) / sum(increase(src_graphql_search_response{source="other"}[5m]))
```
</details>

<br />

#### frontend: search_api_alert_user_suggestions

<p class="subtitle">Search API alert user suggestions shown every 5m</p>

Refer to the [alerts reference](alerts#frontend-search_api_alert_user_suggestions) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out",source="other"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{status="alert",source="other"}[5m]))
```
</details>

<br />

### Frontend: Site configuration client update latency

#### frontend: frontend_site_configuration_duration_since_last_successful_update_by_instance

<p class="subtitle">Duration since last successful site configuration update (by instance)</p>

The duration since the configuration client used by the "frontend" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
src_conf_client_time_since_last_successful_update_seconds{job=~`(sourcegraph-)?frontend`,instance=~`${internalInstance:regex}`}
```
</details>
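
`${internalInstance:regex}` is a Grafana dashboard variable, interpolated when the panel renders. To run the query directly against Prometheus, substitute a literal regex; a sketch matching all instances (`.*` is an example value, not part of the source):

```
src_conf_client_time_since_last_successful_update_seconds{job=~`(sourcegraph-)?frontend`,instance=~`.*`}
```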

<br />

#### frontend: frontend_site_configuration_duration_since_last_successful_update_by_instance

<p class="subtitle">Maximum duration since last successful site configuration update (all "frontend" instances)</p>

Refer to the [alerts reference](alerts#frontend-frontend_site_configuration_duration_since_last_successful_update_by_instance) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`(sourcegraph-)?frontend`,instance=~`${internalInstance:regex}`}[1m]))
```
</details>

<br />

### Frontend: Codeintel: Precise code intelligence usage at a glance

#### frontend: codeintel_resolvers_total

<p class="subtitle">Aggregate graphql operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_resolvers_99th_percentile_duration

<p class="subtitle">Aggregate successful graphql operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>
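
This query yields one series per `le` bucket, which Grafana renders as a duration heatmap. When a single summary number is more useful than the full distribution, the same buckets feed `histogram_quantile`, mirroring the percentile panels elsewhere on this page; a sketch:

```
histogram_quantile(0.99, sum by (le)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```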

<br />

#### frontend: codeintel_resolvers_errors_total

<p class="subtitle">Aggregate graphql operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_resolvers_error_rate

<p class="subtitle">Aggregate graphql operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100403` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>
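
The denominator adds errors back onto the operation counter, which suggests `src_codeintel_resolvers_total` counts successful operations only. Under that assumption, the complementary success rate is the same expression with the numerator swapped; a sketch:

```
sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```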

<br />

#### frontend: codeintel_resolvers_total

<p class="subtitle">Graphql operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_resolvers_99th_percentile_duration

<p class="subtitle">99th percentile successful graphql operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: codeintel_resolvers_errors_total

<p class="subtitle">Graphql operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_resolvers_error_rate

<p class="subtitle">Graphql operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100413` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Codeintel: Auto-index enqueuer

#### frontend: codeintel_autoindex_enqueuer_total

<p class="subtitle">Aggregate enqueuer operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_autoindex_enqueuer_99th_percentile_duration

<p class="subtitle">Aggregate successful enqueuer operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_autoindex_enqueuer_errors_total

<p class="subtitle">Aggregate enqueuer operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_autoindex_enqueuer_error_rate

<p class="subtitle">Aggregate enqueuer operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: codeintel_autoindex_enqueuer_total

<p class="subtitle">Enqueuer operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_autoindex_enqueuer_99th_percentile_duration

<p class="subtitle">99th percentile successful enqueuer operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: codeintel_autoindex_enqueuer_errors_total

<p class="subtitle">Enqueuer operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_autoindex_enqueuer_error_rate

<p class="subtitle">Enqueuer operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100513` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Codeintel: dbstore stats

#### frontend: codeintel_uploads_store_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_store_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100602` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_store_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100603` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: codeintel_uploads_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_store_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: codeintel_uploads_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100612` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100613` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Workerutil: lsif_indexes dbworker/store stats

#### frontend: workerutil_dbworker_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_total{domain='codeintel_index_jobs',job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>
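
Unlike most groups on this page, this one exposes only aggregate panels. A hedged sketch of a per-operation breakdown of the same counter, following the `sum by (op)` convention used elsewhere (it assumes this metric carries an `op` label like its siblings):

```
sum by (op)(increase(src_workerutil_dbworker_store_total{domain='codeintel_index_jobs',job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```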

<br />

#### frontend: workerutil_dbworker_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain='codeintel_index_jobs',job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: workerutil_dbworker_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100702` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='codeintel_index_jobs',job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: workerutil_dbworker_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100703` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='codeintel_index_jobs',job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_total{domain='codeintel_index_jobs',job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_workerutil_dbworker_store_errors_total{domain='codeintel_index_jobs',job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Codeintel: lsifstore stats

#### frontend: codeintel_uploads_lsifstore_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_lsifstore_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_lsifstore_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100802` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_lsifstore_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100803` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: codeintel_uploads_lsifstore_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_lsifstore_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: codeintel_uploads_lsifstore_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100812` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploads_lsifstore_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100813` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Codeintel: gitserver client

#### frontend: gitserver_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100902` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100903` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: gitserver_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100910` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100911` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: gitserver_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100912` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=100913` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Codeintel: uploadstore stats

#### frontend: codeintel_uploadstore_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploadstore_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploadstore_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101002` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploadstore_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101003` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: codeintel_uploadstore_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploadstore_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: codeintel_uploadstore_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: codeintel_uploadstore_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Gitserver: Gitserver Client

#### frontend: gitserver_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: gitserver_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: gitserver_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Gitserver: Gitserver Repository Service Client

#### frontend: gitserver_repositoryservice_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_repositoryservice_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_repositoryservice_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_gitserver_repositoryservice_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_repositoryservice_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_repositoryservice_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_gitserver_repositoryservice_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: gitserver_repositoryservice_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_repositoryservice_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_repositoryservice_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op,scope)(rate(src_gitserver_repositoryservice_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: gitserver_repositoryservice_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: gitserver_repositoryservice_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_repositoryservice_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op,scope)(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Batches: dbstore stats

#### frontend: batches_dbstore_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_dbstore_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_dbstore_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_dbstore_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: batches_dbstore_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_dbstore_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: batches_dbstore_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_dbstore_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101313` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Batches: service stats

#### frontend: batches_service_total

<p class="subtitle">Aggregate service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_service_99th_percentile_duration

<p class="subtitle">Aggregate successful service operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_service_errors_total

<p class="subtitle">Aggregate service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_service_error_rate

<p class="subtitle">Aggregate service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101403` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: batches_service_total

<p class="subtitle">Service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_service_99th_percentile_duration

<p class="subtitle">99th percentile successful service operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: batches_service_errors_total

<p class="subtitle">Service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_service_error_rate

<p class="subtitle">Service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101413` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Batches: HTTP API File Handler

#### frontend: batches_httpapi_total

<p class="subtitle">Aggregate http handler operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_httpapi_99th_percentile_duration

<p class="subtitle">Aggregate successful http handler operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_httpapi_errors_total

<p class="subtitle">Aggregate http handler operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_httpapi_error_rate

<p class="subtitle">Aggregate http handler operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

#### frontend: batches_httpapi_total

<p class="subtitle">Http handler operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_httpapi_99th_percentile_duration

<p class="subtitle">99th percentile successful http handler operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: batches_httpapi_errors_total

<p class="subtitle">Http handler operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: batches_httpapi_error_rate

<p class="subtitle">Http handler operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101513` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Out-of-band migrations: up migration invocation (one batch processed)

#### frontend: oobmigration_total

<p class="subtitle">Migration handler operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: oobmigration_99th_percentile_duration

<p class="subtitle">Aggregate successful migration handler operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: oobmigration_errors_total

<p class="subtitle">Migration handler operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101602` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: oobmigration_error_rate

<p class="subtitle">Migration handler operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101603` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Out-of-band migrations: down migration invocation (one batch processed)

#### frontend: oobmigration_total

<p class="subtitle">Migration handler operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: oobmigration_99th_percentile_duration

<p class="subtitle">Aggregate successful migration handler operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: oobmigration_errors_total

<p class="subtitle">Migration handler operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101702` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: oobmigration_error_rate

<p class="subtitle">Migration handler operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101703` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

### Frontend: Zoekt Configuration GRPC server metrics

#### frontend: zoekt_configuration_grpc_request_rate_all_methods

<p class="subtitle">Request rate across all methods over 2m</p>

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))
```
</details>

<br />

#### frontend: zoekt_configuration_grpc_request_rate_per_method

<p class="subtitle">Request rate per-method over 2m</p>

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)
```
</details>

<br />

#### frontend: zoekt_configuration_error_percentage_all_methods

<p class="subtitle">Error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))) ))
```
</details>

<br />
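
Because the numerator matches `grpc_code!="OK"`, every non-OK status counts as a failure, including codes like `Canceled` that can occur in normal operation (for example, when a caller abandons a request). When digging into a spike, a stricter ad-hoc variant, shown here only as a sketch, can exclude such codes from the numerator:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!~"OK|Canceled",instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))) ))
```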

#### frontend: zoekt_configuration_grpc_error_percentage_per_method

<p class="subtitle">Error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_configuration_method:regex}`,grpc_code!="OK",instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)) ))
```
</details>

<br />

#### frontend: zoekt_configuration_p99_response_time_per_method

<p class="subtitle">99th percentile response time per method over 2m</p>

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101820` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
```
</details>

<br />

#### frontend: zoekt_configuration_p90_response_time_per_method

<p class="subtitle">90th percentile response time per method over 2m</p>

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101821` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
```
</details>

<br />

#### frontend: zoekt_configuration_p75_response_time_per_method

<p class="subtitle">75th percentile response time per method over 2m</p>

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101822` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
```
</details>

<br />

#### frontend: zoekt_configuration_p99_9_response_size_per_method

<p class="subtitle">99.9th percentile total response size per method over 2m</p>

The 99.9th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101830` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
```
</details>

<br />

#### frontend: zoekt_configuration_p90_response_size_per_method

<p class="subtitle">90th percentile total response size per method over 2m</p>

The 90th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101831` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
```
</details>

<br />

#### frontend: zoekt_configuration_p75_response_size_per_method

<p class="subtitle">75th percentile total response size per method over 2m</p>

The 75th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101832` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
```
</details>

<br />

#### frontend: zoekt_configuration_p99_9_invididual_sent_message_size_per_method

<p class="subtitle">99.9th percentile individual sent message size per method over 2m</p>

The 99.9th percentile size of every individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101840` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
```
</details>

<br />

#### frontend: zoekt_configuration_p90_invididual_sent_message_size_per_method

<p class="subtitle">90th percentile individual sent message size per method over 2m</p>

The 90th percentile size of every individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101841` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
```
</details>

<br />

#### frontend: zoekt_configuration_p75_invididual_sent_message_size_per_method

<p class="subtitle">75th percentile individual sent message size per method over 2m</p>

The 75th percentile size of every individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101842` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
```
</details>

<br />

#### frontend: zoekt_configuration_grpc_response_stream_message_count_per_method

<p class="subtitle">Average streaming response message count per-method over 2m</p>

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101850` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)))
```
</details>

<br />
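
The ratio divides messages sent on server streams by streams started, so a value of 50 means each streaming call carried roughly 50 response messages on average over the window. To inspect one method in isolation, the `grpc_method` matcher can be pinned to an exact name instead of the dashboard variable (`ExampleMethod` below is a placeholder, not a real method):

```
((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",grpc_method="ExampleMethod",grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",grpc_method="ExampleMethod",grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))))
```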

#### frontend: zoekt_configuration_grpc_all_codes_per_method

<p class="subtitle">Response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101860` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method, grpc_code)
```
</details>

<br />

### Frontend: Zoekt Configuration GRPC "internal error" metrics

#### frontend: zoekt_configuration_grpc_clients_error_percentage_all_methods

<p class="subtitle">Client baseline error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "zoekt_configuration" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))))))
```
</details>

<br />

#### frontend: zoekt_configuration_grpc_clients_error_percentage_per_method

<p class="subtitle">Client baseline error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_configuration" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### frontend: zoekt_configuration_grpc_clients_all_codes_per_method

<p class="subtitle">Client baseline response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_configuration" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101902` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

#### frontend: zoekt_configuration_grpc_clients_internal_error_percentage_all_methods

<p class="subtitle">Client-observed gRPC internal error percentage across all methods over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "zoekt_configuration" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101910` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))))))
```
</details>

<br />

#### frontend: zoekt_configuration_grpc_clients_internal_error_percentage_per_method

<p class="subtitle">Client-observed gRPC internal error percentage per-method over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "zoekt_configuration" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101911` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### frontend: zoekt_configuration_grpc_clients_internal_error_all_codes_per_method

<p class="subtitle">Client-observed gRPC internal error response code rate per-method over 2m</p>

The rate of gRPC internal-error response codes per method, aggregated across all "zoekt_configuration" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=101912` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",is_internal_error="true",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

### Frontend: Zoekt Configuration GRPC retry metrics

#### frontend: zoekt_configuration_grpc_clients_retry_percentage_across_all_methods

<p class="subtitle">Client retry percentage across all methods over 2m</p>

The percentage of gRPC requests that were retried across all methods, aggregated across all "zoekt_configuration" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))))))
```
</details>

<br />
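
Here `is_retried="true"` selects attempts that were retries, so the panel reports retries as a share of all attempts; 10 retried attempts out of 200 total in the window is a 5% retry rate. On low-traffic instances this percentage can swing wildly, so a hypothetical variant (both thresholds are illustrative) might gate it on a minimum request volume:

```
(100.0 * (sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",is_retried="true"}[2m])) / sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))) > 20 and sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) > 0.1
```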

#### frontend: zoekt_configuration_grpc_clients_retry_percentage_per_method

<p class="subtitle">Client retry percentage per-method over 2m</p>

The percentage of gRPC requests that were retried aggregated across all "zoekt_configuration" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",is_retried="true",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### frontend: zoekt_configuration_grpc_clients_retry_count_per_method

<p class="subtitle">Client retry count per-method over 2m</p>

The count of gRPC requests that were retried aggregated across all "zoekt_configuration" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102002` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}",is_retried="true"}[2m])) by (grpc_method))
```
</details>

<br />

### Frontend: Internal Api GRPC server metrics

#### frontend: internal_api_grpc_request_rate_all_methods

<p class="subtitle">Request rate across all methods over 2m</p>

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m]))
```
</details>

<br />

#### frontend: internal_api_grpc_request_rate_per_method

<p class="subtitle">Request rate per-method over 2m</p>

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)
```
</details>

<br />

#### frontend: internal_api_error_percentage_all_methods

<p class="subtitle">Error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m]))) ))
```
</details>

<br />

#### frontend: internal_api_grpc_error_percentage_per_method

<p class="subtitle">Error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${internal_api_method:regex}`,grpc_code!="OK",instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)) ))
```
</details>

<br />

#### frontend: internal_api_p99_response_time_per_method

<p class="subtitle">99th percentile response time per method over 2m</p>

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102120` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
```
</details>

<br />

#### frontend: internal_api_p90_response_time_per_method

<p class="subtitle">90th percentile response time per method over 2m</p>

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102121` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
```
</details>

<br />

#### frontend: internal_api_p75_response_time_per_method

<p class="subtitle">75th percentile response time per method over 2m</p>

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102122` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
```
</details>

<br />

#### frontend: internal_api_p99_9_response_size_per_method

<p class="subtitle">99.9th percentile total response size per method over 2m</p>

The 99.9th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102130` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
```
</details>

<br />

#### frontend: internal_api_p90_response_size_per_method

<p class="subtitle">90th percentile total response size per method over 2m</p>

The 90th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102131` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
```
</details>

<br />

#### frontend: internal_api_p75_response_size_per_method

<p class="subtitle">75th percentile total response size per method over 2m</p>

The 75th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102132` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
```
</details>

<br />

#### frontend: internal_api_p99_9_invididual_sent_message_size_per_method

<p class="subtitle">99.9th percentile individual sent message size per method over 2m</p>

The 99.9th percentile size of every individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102140` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
```
</details>

<br />

#### frontend: internal_api_p90_invididual_sent_message_size_per_method

<p class="subtitle">90th percentile individual sent message size per method over 2m</p>

The 90th percentile size of every individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102141` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
```
</details>

<br />

#### frontend: internal_api_p75_invididual_sent_message_size_per_method

<p class="subtitle">75th percentile individual sent message size per method over 2m</p>

The 75th percentile size of every individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102142` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
```
</details>

<br />

#### frontend: internal_api_grpc_response_stream_message_count_per_method

<p class="subtitle">Average streaming response message count per-method over 2m</p>

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102150` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)))
```
</details>

<br />

#### frontend: internal_api_grpc_all_codes_per_method

<p class="subtitle">Response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102160` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_handled_total{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method, grpc_code)
```
</details>

<br />

### Frontend: Internal Api GRPC "internal error" metrics

#### frontend: internal_api_grpc_clients_error_percentage_all_methods

<p class="subtitle">Client baseline error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))))))
```
</details>

<br />

#### frontend: internal_api_grpc_clients_error_percentage_per_method

<p class="subtitle">Client baseline error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### frontend: internal_api_grpc_clients_all_codes_per_method

<p class="subtitle">Client baseline response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

#### frontend: internal_api_grpc_clients_internal_error_percentage_all_methods

<p class="subtitle">Client-observed gRPC internal error percentage across all methods over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "internal_api" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))))))
```
</details>

<br />

#### frontend: internal_api_grpc_clients_internal_error_percentage_per_method

<p class="subtitle">Client-observed gRPC internal error percentage per-method over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "internal_api" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### frontend: internal_api_grpc_clients_internal_error_all_codes_per_method

<p class="subtitle">Client-observed gRPC internal error response code rate per-method over 2m</p>

The rate of gRPC internal-error response codes per method, aggregated across all "internal_api" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to normal application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",is_internal_error="true",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

### Frontend: Internal API gRPC retry metrics

#### frontend: internal_api_grpc_clients_retry_percentage_across_all_methods

<p class="subtitle">Client retry percentage across all methods over 2m</p>

The percentage of gRPC requests that were retried across all methods, aggregated across all "internal_api" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))))))
```
</details>

<br />

#### frontend: internal_api_grpc_clients_retry_percentage_per_method

<p class="subtitle">Client retry percentage per-method over 2m</p>

The percentage of gRPC requests that were retried, aggregated across all "internal_api" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService",is_retried="true",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### frontend: internal_api_grpc_clients_retry_count_per_method

<p class="subtitle">Client retry count per-method over 2m</p>

The count of gRPC requests that were retried, aggregated across all "internal_api" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}",is_retried="true"}[2m])) by (grpc_method))
```
</details>

<br />

### Frontend: Internal service requests

#### frontend: internal_indexed_search_error_responses

<p class="subtitle">Internal indexed search error responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-internal_indexed_search_error_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(code) (increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100
```
</details>

<br />

#### frontend: internal_unindexed_search_error_responses

<p class="subtitle">Internal unindexed search error responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-internal_unindexed_search_error_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(code) (increase(searcher_service_request_total{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total[5m])) * 100
```
</details>

<br />

#### frontend: 99th_percentile_gitserver_duration

<p class="subtitle">99th percentile successful gitserver query duration over 5m</p>

Refer to the [alerts reference](alerts#frontend-99th_percentile_gitserver_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,category)(rate(src_gitserver_request_duration_seconds_bucket{job=~"(sourcegraph-)?frontend"}[5m])))
```
</details>

<br />

#### frontend: gitserver_error_responses

<p class="subtitle">Gitserver error responses every 5m</p>

Refer to the [alerts reference](alerts#frontend-gitserver_error_responses) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend",code!~"2.."}[5m])) / ignoring(code) group_left sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend"}[5m])) * 100
```
</details>

<br />

#### frontend: observability_test_alert_warning

<p class="subtitle">Warning test alert metric</p>

Refer to the [alerts reference](alerts#frontend-observability_test_alert_warning) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102420` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(owner) (observability_test_metric_warning)
```
</details>

<br />

#### frontend: observability_test_alert_critical

<p class="subtitle">Critical test alert metric</p>

Refer to the [alerts reference](alerts#frontend-observability_test_alert_critical) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102421` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(owner) (observability_test_metric_critical)
```
</details>

<br />

### Frontend: Authentication API requests

#### frontend: sign_in_rate

<p class="subtitle">Rate of API requests to sign-in</p>

Rate (QPS) of requests to sign-in

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))
```
</details>

<br />

#### frontend: sign_in_latency_p99

<p class="subtitle">99th percentile of sign-in latency</p>

99th percentile of sign-in latency

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-in",method="post"}[5m])) by (le))
```
</details>

<br />

#### frontend: sign_in_error_rate

<p class="subtitle">Percentage of sign-in requests by http code</p>

Percentage of sign-in requests grouped by http code

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))*100
```
</details>

<br />
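
The query above reports the share of every response code. If you only want the percentage of sign-in requests failing with server errors, a minimal variant (assuming the same metric and `code` label used above) restricts the numerator to 5xx codes:

```
sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post",code=~"5.."}[5m]))
/
sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))
* 100
```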

#### frontend: sign_up_rate

<p class="subtitle">Rate of API requests to sign-up</p>

Rate (QPS) of requests to sign-up

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))
```
</details>

<br />

#### frontend: sign_up_latency_p99

<p class="subtitle">99th percentile of sign-up latency</p>

99th percentile of sign-up latency

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-up",method="post"}[5m])) by (le))
```
</details>

<br />

#### frontend: sign_up_code_percentage

<p class="subtitle">Percentage of sign-up requests by http code</p>

Percentage of sign-up requests grouped by http code

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))*100
```
</details>

<br />

#### frontend: sign_out_rate

<p class="subtitle">Rate of API requests to sign-out</p>

Rate (QPS) of requests to sign-out

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102520` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))
```
</details>

<br />

#### frontend: sign_out_latency_p99

<p class="subtitle">99th percentile of sign-out latency</p>

99th percentile of sign-out latency

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102521` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-out"}[5m])) by (le))
```
</details>

<br />

#### frontend: sign_out_error_rate

<p class="subtitle">Percentage of sign-out requests that return non-303 http code</p>

Percentage of sign-out requests grouped by http code

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102522` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
 sum by (code)(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))*100
```
</details>

<br />

#### frontend: account_failed_sign_in_attempts

<p class="subtitle">Rate of failed sign-in attempts</p>

Failed sign-in attempts per minute

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102530` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_frontend_account_failed_sign_in_attempts_total[1m]))
```
</details>

<br />
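
Note that Prometheus `rate()` is always per-second, even over a `[1m]` window, so the raw query yields attempts per second. A sketch that matches the per-minute framing of this panel simply scales by 60:

```
sum(rate(src_frontend_account_failed_sign_in_attempts_total[1m])) * 60
```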

#### frontend: account_lockouts

<p class="subtitle">Rate of account lockouts</p>

Account lockouts per minute

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102531` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_frontend_account_lockouts_total[1m]))
```
</details>

<br />

### Frontend: External HTTP Request Rate

#### frontend: external_http_request_rate_by_host

<p class="subtitle">Rate of external HTTP requests by host over 1m</p>

Shows the rate of external HTTP requests made by Sourcegraph to other services, broken down by host.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (host) (rate(src_http_client_external_request_count{host=~`${httpRequestHost:regex}`}[1m]))
```
</details>

<br />

#### frontend: external_http_request_rate_by_host_by_code

<p class="subtitle">Rate of external HTTP requests by host and response code over 1m</p>

Shows the rate of external HTTP requests made by Sourcegraph to other services, broken down by host and response code.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (host, status_code) (rate(src_http_client_external_request_count{host=~`${httpRequestHost:regex}`}[1m]))
```
</details>

<br />

### Frontend: Cody API requests

#### frontend: cody_api_rate

<p class="subtitle">Rate of API requests to cody endpoints (excluding GraphQL)</p>

Rate (QPS) of requests to Cody-related endpoints. `completions.stream` covers the conversational endpoints; `completions.code` covers the code autocomplete endpoints.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102700` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum by (route, code)(irate(src_http_request_duration_seconds_count{route=~"^completions.*"}[5m]))
```
</details>

<br />

### Frontend: Cloud KMS and cache

#### frontend: cloudkms_cryptographic_requests

<p class="subtitle">Cryptographic requests to Cloud KMS every 1m</p>

Refer to the [alerts reference](alerts#frontend-cloudkms_cryptographic_requests) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_cloudkms_cryptographic_total[1m]))
```
</details>

<br />

#### frontend: encryption_cache_hit_ratio

<p class="subtitle">Average encryption cache hit ratio per workload</p>

- Encryption cache hit ratio (hits/(hits+misses)) - minimum across all instances of a workload.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
min by (kubernetes_name) (src_encryption_cache_hit_total/(src_encryption_cache_hit_total+src_encryption_cache_miss_total))
```
</details>

<br />

#### frontend: encryption_cache_evictions

<p class="subtitle">Rate of encryption cache evictions - sum across all instances of a given workload</p>

- Rate of encryption cache evictions (caused by cache exceeding its maximum size) - sum across all instances of a workload

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102802` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (kubernetes_name) (irate(src_encryption_cache_eviction_total[5m]))
```
</details>

<br />

### Frontend: Periodic Goroutines

#### frontend: running_goroutines

<p class="subtitle">Number of currently running periodic goroutines</p>

The number of currently running periodic goroutines by name and job.
A value of 0 indicates the routine isn't currently running; it is awaiting its next scheduled run.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (src_periodic_goroutine_running{job=~".*frontend.*"})
```
</details>

<br />
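
To list only the routines that are currently idle, a minimal sketch (using the same metric) keeps just the series whose value is zero:

```
sum by (name, job_name) (src_periodic_goroutine_running{job=~".*frontend.*"}) == 0
```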

#### frontend: goroutine_success_rate

<p class="subtitle">Success rate for periodic goroutine executions</p>

The rate of successful executions of each periodic goroutine.
A low or zero value could indicate that a routine is stalled or encountering errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_total{job=~".*frontend.*"}[5m]))
```
</details>

<br />

#### frontend: goroutine_error_rate

<p class="subtitle">Error rate for periodic goroutine executions</p>

The rate of errors encountered by each periodic goroutine.
A sustained high error rate may indicate a problem with the routine's configuration or dependencies.

Refer to the [alerts reference](alerts#frontend-goroutine_error_rate) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102910` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_errors_total{job=~".*frontend.*"}[5m]))
```
</details>

<br />

#### frontend: goroutine_error_percentage

<p class="subtitle">Percentage of periodic goroutine executions that result in errors</p>

The percentage of executions that result in errors for each periodic goroutine.
A value above 5% indicates that a significant portion of routine executions are failing.

Refer to the [alerts reference](alerts#frontend-goroutine_error_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102911` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_errors_total{job=~".*frontend.*"}[5m])) / sum by (name, job_name) (rate(src_periodic_goroutine_total{job=~".*frontend.*"}[5m]) > 0) * 100
```
</details>

<br />

#### frontend: goroutine_handler_duration

<p class="subtitle">95th percentile handler execution time</p>

The 95th percentile execution time for each periodic goroutine handler.
Longer durations might indicate increased load or processing time.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102920` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_duration_seconds_bucket{job=~".*frontend.*"}[5m])))
```
</details>

<br />

#### frontend: goroutine_loop_duration

<p class="subtitle">95th percentile loop cycle time</p>

The 95th percentile loop cycle time for each periodic goroutine (excluding sleep time).
This represents how long a complete loop iteration takes before sleeping for the next interval.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102921` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_loop_duration_seconds_bucket{job=~".*frontend.*"}[5m])))
```
</details>

<br />

#### frontend: tenant_processing_duration

<p class="subtitle">95th percentile tenant processing time</p>

The 95th percentile processing time for individual tenants within periodic goroutines.
Higher values indicate that tenant processing is taking longer and may affect overall performance.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102930` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_tenant_duration_seconds_bucket{job=~".*frontend.*"}[5m])))
```
</details>

<br />

#### frontend: tenant_processing_max

<p class="subtitle">Maximum tenant processing time</p>

The maximum processing time for individual tenants within periodic goroutines.
Consistently high values might indicate problematic tenants or inefficient processing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102931` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name, job_name) (rate(src_periodic_goroutine_tenant_duration_seconds_sum{job=~".*frontend.*"}[5m]) / rate(src_periodic_goroutine_tenant_duration_seconds_count{job=~".*frontend.*"}[5m]))
```
</details>

<br />

#### frontend: tenant_count

<p class="subtitle">Number of tenants processed per routine</p>

The number of tenants processed by each periodic goroutine.
Unexpected changes can indicate tenant configuration issues or scaling events.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102940` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name, job_name) (src_periodic_goroutine_tenant_count{job=~".*frontend.*"})
```
</details>

<br />

#### frontend: tenant_success_rate

<p class="subtitle">Rate of successful tenant processing operations</p>

The rate of successful tenant processing operations.
A healthy routine should maintain a consistent processing rate.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102941` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_tenant_success_total{job=~".*frontend.*"}[5m]))
```
</details>

<br />

#### frontend: tenant_error_rate

<p class="subtitle">Rate of tenant processing errors</p>

The rate of tenant processing operations that result in errors.
Consistent errors indicate problems with specific tenants.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102950` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*frontend.*"}[5m]))
```
</details>

<br />

#### frontend: tenant_error_percentage

<p class="subtitle">Percentage of tenant operations resulting in errors</p>

The percentage of tenant operations that result in errors.
Values above 5% indicate significant tenant processing problems.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=102951` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*frontend.*"}[5m])) / (sum by (name, job_name) (rate(src_periodic_goroutine_tenant_success_total{job=~".*frontend.*"}[5m])) + sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*frontend.*"}[5m])))) * 100
```
</details>

<br />

### Frontend: Database connections

#### frontend: max_open_conns

<p class="subtitle">Maximum open connections</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="frontend"})
```
</details>

<br />

#### frontend: open_conns

<p class="subtitle">Established</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_open{app_name="frontend"})
```
</details>

<br />

#### frontend: in_use

<p class="subtitle">Used</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="frontend"})
```
</details>

<br />
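
Neither gauge alone shows pool saturation. A sketch that combines this panel with the maximum-open panel above (assuming both gauges carry the same `app_name` and `db_name` labels, as their queries suggest) expresses utilization as a percentage:

```
sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="frontend"})
/
sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="frontend"})
* 100
```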

#### frontend: idle

<p class="subtitle">Idle</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="frontend"})
```
</details>

<br />

#### frontend: mean_blocked_seconds_per_conn_request

<p class="subtitle">Mean blocked seconds per conn request</p>

Refer to the [alerts reference](alerts#frontend-mean_blocked_seconds_per_conn_request) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="frontend"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="frontend"}[5m]))
```
</details>

<br />

#### frontend: closed_max_idle

<p class="subtitle">Closed by SetMaxIdleConns</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103030` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="frontend"}[5m]))
```
</details>

<br />

#### frontend: closed_max_lifetime

<p class="subtitle">Closed by SetConnMaxLifetime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103031` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="frontend"}[5m]))
```
</details>

<br />

#### frontend: closed_max_idle_time

<p class="subtitle">Closed by SetConnMaxIdleTime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103032` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="frontend"}[5m]))
```
</details>

<br />

### Frontend: (frontend|sourcegraph-frontend) (CPU, Memory)

#### frontend: cpu_usage_percentage

<p class="subtitle">CPU usage</p>

Refer to the [alerts reference](alerts#frontend-cpu_usage_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
```
</details>

<br />

#### frontend: memory_usage_percentage

<p class="subtitle">Memory usage percentage (total)</p>

An estimate for the active memory in use, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
```
</details>

<br />

#### frontend: memory_working_set_bytes

<p class="subtitle">Memory usage bytes (total)</p>

An estimate for the active memory in use in bytes, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_memory_working_set_bytes{name=~"^(frontend|sourcegraph-frontend).*"})
```
</details>

<br />

#### frontend: memory_rss

<p class="subtitle">Memory (RSS)</p>

The total anonymous memory in use by the application, which includes Go stack and heap. This memory is non-reclaimable, and high usage may trigger OOM kills. Note: the metric is named RSS to match the cadvisor name, but "anonymous" is more accurate.

Refer to the [alerts reference](alerts#frontend-memory_rss) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_rss{name=~"^(frontend|sourcegraph-frontend).*"} / container_spec_memory_limit_bytes{name=~"^(frontend|sourcegraph-frontend).*"}) by (name) * 100.0 
```
</details>

<br />

#### frontend: memory_total_active_file

<p class="subtitle">Memory usage (active file)</p>

This metric shows the total active file-backed memory currently in use by the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_total_active_file_bytes{name=~"^(frontend|sourcegraph-frontend).*"} / container_spec_memory_limit_bytes{name=~"^(frontend|sourcegraph-frontend).*"}) by (name) * 100.0 
```
</details>

<br />

#### frontend: memory_kernel_usage

<p class="subtitle">Memory usage (kernel)</p>

The kernel usage metric shows the amount of memory used by the kernel on behalf of the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_kernel_usage{name=~"^(frontend|sourcegraph-frontend).*"} / container_spec_memory_limit_bytes{name=~"^(frontend|sourcegraph-frontend).*"}) by (name) * 100.0 
```
</details>

<br />

### Frontend: Container monitoring (not available on server)

#### frontend: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
	- Determine if the pod was OOM killed using `kubectl describe pod (frontend|sourcegraph-frontend)` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p (frontend|sourcegraph-frontend)`.
- **Docker Compose:**
	- Determine if the container was OOM killed using `docker inspect -f '{{json .State}}' (frontend|sourcegraph-frontend)` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the (frontend|sourcegraph-frontend) container in `docker-compose.yml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs (frontend|sourcegraph-frontend)` (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^(frontend|sourcegraph-frontend).*"}) > 60)
```
</details>

<br />

#### frontend: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

Refer to the [alerts reference](alerts#frontend-container_cpu_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
```
</details>

<br />

#### frontend: container_memory_usage

<p class="subtitle">Container memory usage by instance</p>

Refer to the [alerts reference](alerts#frontend-container_memory_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
```
</details>

<br />

#### frontend: fs_io_operations

<p class="subtitle">Filesystem reads and writes rate by instance over 1h</p>

This value indicates the number of filesystem read and write operations by containers of this service.
When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with issues in this service's containers.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(name) (rate(container_fs_reads_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]) + rate(container_fs_writes_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]))
```
</details>

<br />

### Frontend: Provisioning indicators (not available on server)

#### frontend: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

Refer to the [alerts reference](alerts#frontend-provisioning_container_cpu_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])
```
</details>

<br />

#### frontend: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

Refer to the [alerts reference](alerts#frontend-provisioning_container_memory_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])
```
</details>

<br />

#### frontend: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

Refer to the [alerts reference](alerts#frontend-provisioning_container_cpu_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])
```
</details>

<br />

#### frontend: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

Refer to the [alerts reference](alerts#frontend-provisioning_container_memory_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])
```
</details>

<br />

#### frontend: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This value indicates the total number of times the container main process or child processes were terminated by OOM killer.
When it occurs frequently, it is an indicator of underprovisioning.

Refer to the [alerts reference](alerts#frontend-container_oomkill_events_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^(frontend|sourcegraph-frontend).*"})
```
</details>

<br />

### Frontend: Golang runtime monitoring

#### frontend: go_goroutines

<p class="subtitle">Maximum active goroutines</p>

A high value here indicates a possible goroutine leak.

Refer to the [alerts reference](alerts#frontend-go_goroutines) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_goroutines{job=~".*(frontend|sourcegraph-frontend)"})
```
</details>

<br />

#### frontend: go_gc_duration_seconds

<p class="subtitle">Maximum go garbage collection duration</p>

Refer to the [alerts reference](alerts#frontend-go_gc_duration_seconds) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_gc_duration_seconds{job=~".*(frontend|sourcegraph-frontend)"})
```
</details>

<br />

### Frontend: Kubernetes monitoring (only available on Kubernetes)

#### frontend: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#frontend-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*(frontend|sourcegraph-frontend)"}) / count by (app) (up{app=~".*(frontend|sourcegraph-frontend)"}) * 100
```
</details>

<br />

### Frontend: Search: Ranking

#### frontend: total_search_clicks

<p class="subtitle">Total number of search clicks over 6h</p>

The total number of search clicks across all search types over a 6 hour window.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h]))
```
</details>

<br />

#### frontend: percent_clicks_on_top_search_result

<p class="subtitle">Percent of clicks on top search result over 6h</p>

The percent of clicks that were on the top search result, excluding searches with very few results (3 or fewer).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (ranked) (increase(src_search_ranking_result_clicked_bucket{le="1",resultsLength=">3"}[6h])) / sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h])) * 100
```
</details>

<br />

#### frontend: percent_clicks_on_top_3_search_results

<p class="subtitle">Percent of clicks on top 3 search results over 6h</p>

The percent of clicks that were on the first 3 search results, excluding searches with very few results (3 or fewer).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103602` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (ranked) (increase(src_search_ranking_result_clicked_bucket{le="3",resultsLength=">3"}[6h])) / sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h])) * 100
```
</details>

<br />

#### frontend: distribution_of_clicked_search_result_type_over_6h_in_percent

<p class="subtitle">Distribution of clicked search result type over 6h</p>

The distribution of clicked search results by result type. At every point in time, the values should sum to 100.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_search_ranking_result_clicked_count{type="repo"}[6h])) / sum(increase(src_search_ranking_result_clicked_count[6h])) * 100
```
</details>

<br />
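
The query shown covers only the `repo` result type; the dashboard repeats it for each type. A sketch that computes the whole distribution in one query (assuming the metric's `type` label enumerates all result types) is:

```
sum by (type) (increase(src_search_ranking_result_clicked_count[6h]))
/ ignoring(type) group_left
sum(increase(src_search_ranking_result_clicked_count[6h]))
* 100
```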

#### frontend: percent_zoekt_searches_hitting_flush_limit

<p class="subtitle">Percent of zoekt searches that hit the flush time limit</p>

The percent of Zoekt searches that hit the flush time limit. These searches don't visit all matches, so they may miss relevant results or be non-deterministic.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(zoekt_final_aggregate_size_count{reason="timer_expired"}[1d])) / sum(increase(zoekt_final_aggregate_size_count[1d])) * 100
```
</details>

<br />

### Frontend: Email delivery

#### frontend: email_delivery_failures

<p class="subtitle">Email delivery failure rate over 30 minutes</p>

Refer to the [alerts reference](alerts#frontend-email_delivery_failures) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_email_send{success="false"}[30m])) / sum(increase(src_email_send[30m])) * 100
```
</details>

<br />

#### frontend: email_deliveries_total

<p class="subtitle">Total emails successfully delivered every 30 minutes</p>

Total emails successfully delivered.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum (increase(src_email_send{success="true"}[30m]))
```
</details>

<br />

#### frontend: email_deliveries_by_source

<p class="subtitle">Emails successfully delivered every 30 minutes by source</p>

Emails successfully delivered by source, i.e. product feature.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103711` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (email_source) (increase(src_email_send{success="true"}[30m]))
```
</details>

<br />

### Frontend: Sentinel queries (only on sourcegraph.com)

#### frontend: mean_successful_sentinel_duration_by_query

<p class="subtitle">Mean successful sentinel search duration by query</p>

Mean search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source)
```
</details>

<br />
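
`$sentinel_sampling_duration` here (and in the other sentinel panels below) is a Grafana dashboard variable, so the query will not parse in Prometheus as written. Substituting a concrete range (here `1h`, chosen only for illustration) gives a runnable form:

```
sum by (source) (rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[1h]))
/
sum by (source) (rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[1h]))
```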

#### frontend: mean_sentinel_stream_latency_by_query

<p class="subtitle">Mean successful sentinel stream latency by query</p>

Mean time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source)
```
</details>

<br />

#### frontend: 90th_percentile_successful_sentinel_duration_by_query

<p class="subtitle">90th percentile successful sentinel search duration by query</p>

90th percentile search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))
```
</details>

<br />

#### frontend: 90th_percentile_successful_stream_latency_by_query

<p class="subtitle">90th percentile successful sentinel stream latency by query</p>

90th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))
```
</details>

<br />

#### frontend: 90th_percentile_unsuccessful_duration_by_query

<p class="subtitle">90th percentile unsuccessful sentinel search duration by query</p>

90th percentile search duration of _unsuccessful_ sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103820` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~`searchblitz.*`, status!=`success`}[$sentinel_sampling_duration])) by (le, source))
```
</details>

<br />

#### frontend: 75th_percentile_successful_sentinel_duration_by_query

<p class="subtitle">75th percentile successful sentinel search duration by query</p>

75th percentile search duration of successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103830` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))
```
</details>

<br />

#### frontend: 75th_percentile_successful_stream_latency_by_query

<p class="subtitle">75th percentile successful sentinel stream latency by query</p>

75th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103831` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))
```
</details>

<br />

#### frontend: 75th_percentile_unsuccessful_duration_by_query

<p class="subtitle">75th percentile unsuccessful sentinel search duration by query</p>

75th percentile search duration of _unsuccessful_ sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103840` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~`searchblitz.*`, status!=`success`}[$sentinel_sampling_duration])) by (le, source))
```
</details>

<br />

#### frontend: unsuccessful_status_rate

<p class="subtitle">Unsuccessful status rate</p>

The rate of unsuccessful sentinel queries, broken down by failure type.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103850` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_graphql_search_response{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (status)
```
</details>

<br />
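The `[$sentinel_sampling_duration]` range in the sentinel queries above is a Grafana dashboard variable, so these queries cannot be pasted into Prometheus as-is. To run one directly, substitute a literal range; for example (the `30m` window here is an arbitrary illustration, not necessarily the dashboard's default):

```
sum(rate(src_graphql_search_response{source=~"searchblitz.*", status!="success"}[30m])) by (status)
```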

### Frontend: Incoming webhooks

#### frontend: p95_time_to_handle_incoming_webhooks

<p class="subtitle">P95 time to handle incoming webhooks</p>

p95 response time for incoming webhook requests from code hosts.

Increases in response time can indicate that the database is under too much load to keep up with the incoming requests.

See the [incoming webhooks documentation](https://sourcegraph.com/docs/admin/config/webhooks/incoming) for more details on webhook requests.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=103900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum  (rate(src_http_request_duration_seconds_bucket{route=~"webhooks|github.webhooks|gitlab.webhooks|bitbucketServer.webhooks|bitbucketCloud.webhooks"}[5m])) by (le, route))
```
</details>

<br />
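When investigating a latency increase here, it can help to check whether webhook traffic volume changed at the same time. A sketch of a companion request-rate query, using the `_count` series that Prometheus exposes alongside the `_bucket` histogram above:

```
sum by (route) (rate(src_http_request_duration_seconds_count{route=~"webhooks|github.webhooks|gitlab.webhooks|bitbucketServer.webhooks|bitbucketCloud.webhooks"}[5m]))
```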

### Frontend: Search aggregations: proactive and expanded search aggregations

#### frontend: insights_aggregations_total

<p class="subtitle">Aggregate search aggregations operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: insights_aggregations_99th_percentile_duration

<p class="subtitle">Aggregate successful search aggregations operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: insights_aggregations_errors_total

<p class="subtitle">Aggregate search aggregations operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104002` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: insights_aggregations_error_rate

<p class="subtitle">Aggregate search aggregations operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104003` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />
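As a worked example of the error-rate formula (assuming, as the denominator suggests, that `src_insights_aggregations_total` counts successful operations): if a 5m window saw 95 successful operations and 5 errors, the panel would show 5 / (95 + 5) * 100 = 5%.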

#### frontend: insights_aggregations_total

<p class="subtitle">Search aggregations operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: insights_aggregations_99th_percentile_duration

<p class="subtitle">99th percentile successful search aggregations operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op,extended_mode)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
</details>

<br />

#### frontend: insights_aggregations_errors_total

<p class="subtitle">Search aggregations operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
</details>

<br />

#### frontend: insights_aggregations_error_rate

<p class="subtitle">Search aggregations operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/frontend/frontend?viewPanel=104013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
</details>

<br />

## Git Server

<p class="subtitle">Stores, manages, and operates Git repositories.</p>

To see this dashboard, visit `/-/debug/grafana/d/gitserver/gitserver` on your Sourcegraph instance.

#### gitserver: go_routines

<p class="subtitle">Go routines</p>



This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
go_goroutines{app="gitserver", instance=~`${shard:regex}`}
```
</details>

<br />
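The `${shard:regex}` pattern in this and the following queries is a Grafana template variable that expands to a regex matching the selected gitserver shards. To run a query outside Grafana, replace it with a concrete regex; the shard names below are hypothetical:

```
go_goroutines{app="gitserver", instance=~"gitserver-0.*|gitserver-1.*"}
```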

#### gitserver: disk_space_remaining

<p class="subtitle">Disk space remaining</p>

Indicates disk space remaining for each gitserver instance. When disk space is low, gitserver may experience slowdowns or fail to fetch repositories.

Refer to the [alerts reference](alerts#gitserver-disk_space_remaining) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(src_gitserver_disk_space_available{instance=~`${shard:regex}`} / src_gitserver_disk_space_total{instance=~`${shard:regex}`}) * 100
```
</details>

<br />
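Since the panel expresses free space as a percentage, spot-checking shards below a given threshold is straightforward. A minimal sketch, using a hypothetical 10% threshold (the actual thresholds are defined in the alerts reference):

```
(src_gitserver_disk_space_available{instance=~`${shard:regex}`} / src_gitserver_disk_space_total{instance=~`${shard:regex}`}) * 100 < 10
```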

#### gitserver: high_memory_git_commands

<p class="subtitle">Number of git commands that exceeded the threshold for high memory usage</p>

This graph tracks the number of git subcommands run by gitserver that exceeded the threshold for high memory usage.
It does not drive an alert on its own, but it is useful for understanding gitserver's memory usage.

If gitserver frequently serves requests with status code KILLED, this graph can help correlate those failures
with high memory usage.

Spikes in this graph are not necessarily a problem, but if subcommands or the gitserver service itself are getting
OOM killed while this graph shows spikes, increasing the memory allocation may help.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sort_desc(sum(sum_over_time(src_gitserver_exec_high_memory_usage_count{instance=~`${shard:regex}`}[2m])) by (cmd))
```
</details>

<br />

#### gitserver: running_git_commands

<p class="subtitle">Git commands running on each gitserver instance</p>

A high value signals load.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance, cmd) (src_gitserver_exec_running{instance=~`${shard:regex}`})
```
</details>

<br />

#### gitserver: git_commands_received

<p class="subtitle">Rate of git commands received</p>

The per-second rate of git commands received, broken down by command.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (cmd) (rate(src_gitserver_exec_duration_seconds_count{instance=~`${shard:regex}`}[5m]))
```
</details>

<br />

#### gitserver: git_command_cpu_usage_seconds_by_scope

<p class="subtitle">Git command CPU usage seconds by requester scope</p>

CPU time consumed by git subcommands, grouped by propagated requester scope and CPU kind.
Use this to identify high-CPU callers and whether time is spent in user or system CPU.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
topk(20, sum by (scope, kind) (rate(src_gitserver_exec_cpu_seconds_total{instance=~`${shard:regex}`}[5m])))
```
</details>

<br />

#### gitserver: echo_command_duration_test

<p class="subtitle">Echo test command duration</p>



Refer to the [alerts reference](alerts#gitserver-echo_command_duration_test) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_gitserver_echo_duration_seconds)
```
</details>

<br />

#### gitserver: repo_corrupted

<p class="subtitle">Number of times a repo corruption has been identified</p>

A non-zero value here indicates that a problem has been detected with the gitserver repository storage.
Repository corruptions are never expected; this is a real issue. Gitserver should recover from them
by recloning repositories, but this may take a while depending on repository size.

Refer to the [alerts reference](alerts#gitserver-repo_corrupted) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100021` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_repo_corrupted[5m]))
```
</details>

<br />
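Because the panel shows a rate, a brief corruption event can be easy to miss on longer time ranges. A sketch for counting corruption events over a longer window instead (the `1d` window is an arbitrary illustration):

```
sum(increase(src_gitserver_repo_corrupted[1d]))
```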

#### gitserver: repository_clone_queue_size

<p class="subtitle">Repository clone queue size</p>

Refer to the [alerts reference](alerts#gitserver-repository_clone_queue_size) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100030` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_gitserver_clone_queue)
```
</details>

<br />

#### gitserver: src_gitserver_client_concurrent_requests

<p class="subtitle">Number of concurrent requests running against gitserver client</p>

This metric is informational only. It indicates the current number of concurrently running requests, per process, against gitserver gRPC.

It does not by itself indicate a problem with the instance, but it can give a good indication of load spikes or request throttling.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100031` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (job, instance) (src_gitserver_client_concurrent_requests)
```
</details>

<br />

### Git Server: Gitserver (CPU, Memory)

Gitserver leverages memory mapping to optimize file reads: it is generally expected to consume all the memory provided to it, if it can. When it finds data that is not available in memory yet, this causes a 'page fault', and the data is loaded into memory from disk.

A trend to watch out for: when something in the application takes a lot of memory while active-file memory previously used nearly all of the remaining memory, then:

1. 'Memory (RSS)' goes up, due to in-application usage
2. 'Memory usage (Active file)' goes down, as file data held in memory is evicted
3. 'Page faults' go up, as less data is held in memory (and with that, IOPS, disk read throughput, ...)

This can also happen without 'Memory (RSS)' increasing, if the provisioned memory is insufficient to start with.
A small degree of this behaviour is generally expected, but if it happens significantly or causes user-noticeable impact, gitserver could likely benefit from more memory. Consult more user-facing metrics to make a final determination on the appropriate resource allocation.

_See https://en.wikipedia.org/wiki/Memory-mapped_file and the related articles for more information about memory maps._
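To watch for the eviction pattern described above on a single graph, the underlying metrics of the panels below can be plotted side by side. A minimal sketch, as two separate panel queries (both metrics appear in the panels that follow):

```
# Anonymous memory (RSS) as % of the memory limit
max by (name) (container_memory_rss{name=~"^gitserver.*"} / container_spec_memory_limit_bytes{name=~"^gitserver.*"}) * 100

# File-backed (active file) memory as % of the memory limit
max by (name) (container_memory_total_active_file_bytes{name=~"^gitserver.*"} / container_spec_memory_limit_bytes{name=~"^gitserver.*"}) * 100
```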

#### gitserver: cpu_usage_percentage

<p class="subtitle">CPU usage</p>

Refer to the [alerts reference](alerts#gitserver-cpu_usage_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}
```
</details>

<br />

#### gitserver: memory_usage_percentage

<p class="subtitle">Memory usage percentage (total)</p>

An estimate for the active memory in use, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}
```
</details>

<br />

#### gitserver: memory_working_set_bytes

<p class="subtitle">Memory usage bytes (total)</p>

An estimate for the active memory in use in bytes, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_memory_working_set_bytes{name=~"^gitserver.*"})
```
</details>

<br />

#### gitserver: memory_rss

<p class="subtitle">Memory (RSS)</p>

The total anonymous memory in use by the application, which includes Go stack and heap. This memory is non-reclaimable, and high usage may trigger OOM kills. Note: the metric is named RSS to match the cadvisor name, but "anonymous" is more accurate.

Refer to the [alerts reference](alerts#gitserver-memory_rss) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_rss{name=~"^gitserver.*"} / container_spec_memory_limit_bytes{name=~"^gitserver.*"}) by (name) * 100.0 
```
</details>

<br />

#### gitserver: memory_total_active_file

<p class="subtitle">Memory usage (active file)</p>

This metric shows the total active file-backed memory currently in use by the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_total_active_file_bytes{name=~"^gitserver.*"} / container_spec_memory_limit_bytes{name=~"^gitserver.*"}) by (name) * 100.0 
```
</details>

<br />

#### gitserver: memory_kernel_usage

<p class="subtitle">Memory usage (kernel)</p>

The kernel usage metric shows the amount of memory used by the kernel on behalf of the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_kernel_usage{name=~"^gitserver.*"} / container_spec_memory_limit_bytes{name=~"^gitserver.*"}) by (name) * 100.0 
```
</details>

<br />

#### gitserver: memory_major_page_faults

<p class="subtitle">Gitserver page faults</p>

The rate of major page faults for gitserver, averaged over a 5 minute window. If this number increases significantly, it indicates that more git API calls need to load data from disk: there may not be enough memory to efficiently serve the current volume of concurrent API requests.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100120` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
rate(container_memory_failures_total{failure_type="pgmajfault", name=~"^gitserver.*"}[5m])
```
</details>

<br />

#### gitserver: cpu_throttling_time

<p class="subtitle">Container CPU throttling time %</p>

A high value indicates that the container is spending too much time waiting for CPU cycles.

Refer to the [alerts reference](alerts#gitserver-cpu_throttling_time) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100130` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) ((rate(container_cpu_cfs_throttled_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~`${shard:regex}`}[5m]) / rate(container_cpu_cfs_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~`${shard:regex}`}[5m])) * 100)
```
</details>

<br />

#### gitserver: cpu_usage_seconds

<p class="subtitle">Cpu usage seconds</p>

- This value should not exceed 75% of the CPU limit over a longer period of time.
- We cannot alert on this as we don`t know the resource allocation.
- If this value is high for a longer time, consider increasing the CPU limit for the container.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100131` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_cpu_usage_seconds_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~`${shard:regex}`}[5m]))
```
</details>

<br />
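The 75%-of-limit guidance above can be checked directly, assuming cAdvisor exposes `container_spec_cpu_quota` and `container_spec_cpu_period` for the container (these series are absent when no CPU limit is set). A sketch that expresses usage as a percentage of the configured limit:

```
(
  sum by (container_label_io_kubernetes_pod_name) (rate(container_cpu_usage_seconds_total{container_label_io_kubernetes_container_name="gitserver"}[5m]))
/
  sum by (container_label_io_kubernetes_pod_name) (container_spec_cpu_quota{container_label_io_kubernetes_container_name="gitserver"} / container_spec_cpu_period{container_label_io_kubernetes_container_name="gitserver"})
) * 100
```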

### Git Server: Gitservice for internal cloning

#### gitserver: gitservice_request_duration

<p class="subtitle">95th percentile gitservice request duration per shard</p>

A high value means any internal service trying to clone a repo from gitserver is slowed down.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{instance=~`${shard:regex}`}[5m])) by (le, gitservice))
```
</details>

<br />

#### gitserver: gitservice_request_rate

<p class="subtitle">Gitservice request rate per shard</p>

Per shard gitservice request rate

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_gitservice_duration_seconds_count{instance=~`${shard:regex}`}[5m])) by (gitservice)
```
</details>

<br />

#### gitserver: gitservice_requests_running

<p class="subtitle">Gitservice requests running per shard</p>

Per shard gitservice requests running

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_gitserver_gitservice_running{instance=~`${shard:regex}`}) by (gitservice)
```
</details>

<br />

### Git Server: Gitserver cleanup jobs

#### gitserver: janitor_tasks_total

<p class="subtitle">Total housekeeping tasks by type and status</p>

The rate of housekeeping tasks performed in repositories, broken down by task type and success/failure status

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_janitor_tasks_total{instance=~`${shard:regex}`}[5m])) by (housekeeping_task, status)
```
</details>

<br />

#### gitserver: p90_janitor_tasks_latency_success_over_5m

<p class="subtitle">90th percentile latency of successful tasks by type over 5m</p>

The 90th percentile latency of successful housekeeping tasks, broken down by task type

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum(rate(src_gitserver_janitor_tasks_latency_bucket{instance=~`${shard:regex}`, status="success"}[5m])) by (le, housekeeping_task))
```
</details>

<br />

#### gitserver: p95_janitor_tasks_latency_success_over_5m

<p class="subtitle">95th percentile latency of successful tasks by type over 5m</p>

The 95th percentile latency of successful housekeeping tasks, broken down by task type

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_gitserver_janitor_tasks_latency_bucket{instance=~`${shard:regex}`, status="success"}[5m])) by (le, housekeeping_task))
```
</details>

<br />

#### gitserver: p99_janitor_tasks_latency_success_over_5m

<p class="subtitle">99th percentile latency of successful tasks by type over 5m</p>

The 99th percentile latency of successful housekeeping tasks, broken down by task type

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum(rate(src_gitserver_janitor_tasks_latency_bucket{instance=~`${shard:regex}`, status="success"}[5m])) by (le, housekeeping_task))
```
</details>

<br />

#### gitserver: p90_janitor_tasks_latency_failure_over_5m

<p class="subtitle">90th percentile latency of failed tasks by type over 5m</p>

The 90th percentile latency of failed housekeeping tasks, broken down by task type

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100320` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum(rate(src_gitserver_janitor_tasks_latency_bucket{instance=~`${shard:regex}`, status="failure"}[5m])) by (le, housekeeping_task))
```
</details>

<br />

#### gitserver: p95_janitor_tasks_latency_failure_over_5m

<p class="subtitle">95th percentile latency of failed tasks by type over 5m</p>

The 95th percentile latency of failed housekeeping tasks, broken down by task type

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100321` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_gitserver_janitor_tasks_latency_bucket{instance=~`${shard:regex}`, status="failure"}[5m])) by (le, housekeeping_task))
```
</details>

<br />

#### gitserver: p99_janitor_tasks_latency_failure_over_5m

<p class="subtitle">99th percentile latency of failed tasks by type over 5m</p>

The 99th percentile latency of failed housekeeping tasks, broken down by task type

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100322` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum(rate(src_gitserver_janitor_tasks_latency_bucket{instance=~`${shard:regex}`, status="failure"}[5m])) by (le, housekeeping_task))
```
</details>

<br />

#### gitserver: pruned_files_total_over_5m

<p class="subtitle">Files pruned by type over 5m</p>

The rate of files pruned during cleanup, broken down by file type

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100330` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_janitor_pruned_files_total{instance=~`${shard:regex}`}[5m])) by (filetype)
```
</details>

<br />

#### gitserver: data_structure_count_over_5m

<p class="subtitle">Data structure counts over 5m</p>

The count distribution of various Git data structures in repositories

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100340` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_gitserver_janitor_data_structure_count_bucket{instance=~`${shard:regex}`}[5m])) by (le, data_structure))
```
</details>

<br />

#### gitserver: janitor_data_structure_size

<p class="subtitle">Data structure sizes</p>

The size distribution of various Git data structures in repositories

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100350` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_gitserver_janitor_data_structure_size_bucket{instance=~`${shard:regex}`}[5m])) by (le, data_structure))
```
</details>

<br />

#### gitserver: janitor_time_since_optimization

<p class="subtitle">Time since last optimization</p>

The time elapsed since last optimization of various Git data structures

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100360` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_gitserver_janitor_time_since_last_optimization_seconds_bucket{instance=~`${shard:regex}`}[5m])) by (le, data_structure))
```
</details>

<br />

#### gitserver: janitor_data_structure_existence

<p class="subtitle">Data structure existence</p>

The rate at which data structures are reported to exist in repositories

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100370` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_janitor_data_structure_existence_total{instance=~`${shard:regex}`, exists="true"}[5m])) by (data_structure)
```
</details>

<br />

### Git Server: Git Command Corruption Retries

#### gitserver: git_command_retry_attempts_rate

<p class="subtitle">Rate of git command corruption retry attempts over 5m</p>

The rate of git command retry attempts due to corruption detection.
A non-zero value indicates that gitserver is detecting potential corruption and attempting retries.
This metric helps track how often the retry mechanism is triggered.

Refer to the [alerts reference](alerts#gitserver-git_command_retry_attempts_rate) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_retry_attempts_total{instance=~`${shard:regex}`}[5m]))
```
</details>

<br />

#### gitserver: git_command_retry_success_rate

<p class="subtitle">Rate of successful git command corruption retries over 5m</p>

The rate of git commands that succeeded after retry attempts.
This indicates how effective the retry mechanism is at resolving transient corruption issues.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_retry_success_total{instance=~`${shard:regex}`}[5m]))
```
</details>

<br />

#### gitserver: git_command_retry_failure_rate

<p class="subtitle">Rate of failed git command corruption retries over 5m</p>

The rate of git commands that failed even after all retry attempts were exhausted.
These failures will result in repository corruption marking and potential recloning.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_retry_failure_total{instance=~`${shard:regex}`}[5m]))
```
</details>

<br />

#### gitserver: git_command_retry_different_error_rate

<p class="subtitle">Rate of corruption retries that failed with non-corruption errors over 5m</p>

The rate of retry attempts that failed with errors other than corruption.
This indicates that repository state or environment changed between the original command and retry attempt.
Common causes include network issues, permission changes, or concurrent repository modifications.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_retry_different_error_total{instance=~`${shard:regex}`}[5m]))
```
</details>

<br />

#### gitserver: git_command_retry_success_ratio

<p class="subtitle">Ratio of successful corruption retries to total corruption retry attempts over 5m</p>

The percentage of retry attempts that ultimately succeeded.
A high ratio indicates that most corruption errors are transient and resolved by retries.
A low ratio may indicate persistent corruption issues requiring investigation.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_gitserver_retry_success_total{instance=~`${shard:regex}`}[5m])) / sum(rate(src_gitserver_retry_attempts_total{instance=~`${shard:regex}`}[5m]))
```
</details>

<br />
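The ratio above is aggregated across all shards, so a single misbehaving shard can be masked by healthy ones. A sketch of the same ratio broken down per instance:

```
sum by (instance) (rate(src_gitserver_retry_success_total{instance=~`${shard:regex}`}[5m])) / sum by (instance) (rate(src_gitserver_retry_attempts_total{instance=~`${shard:regex}`}[5m]))
```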

### Git Server: Periodic Goroutines

#### gitserver: running_goroutines

<p class="subtitle">Number of currently running periodic goroutines</p>

The number of currently running periodic goroutines by name and job.
A value of 0 indicates the routine isn't currently running; it is awaiting its next scheduled run.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (src_periodic_goroutine_running{job=~".*gitserver.*"})
```
</details>

<br />

#### gitserver: goroutine_success_rate

<p class="subtitle">Success rate for periodic goroutine executions</p>

The rate of successful executions of each periodic goroutine.
A low or zero value could indicate that a routine is stalled or encountering errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_total{job=~".*gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: goroutine_error_rate

<p class="subtitle">Error rate for periodic goroutine executions</p>

The rate of errors encountered by each periodic goroutine.
A sustained high error rate may indicate a problem with the routine's configuration or dependencies.

Refer to the [alerts reference](alerts#gitserver-goroutine_error_rate) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_errors_total{job=~".*gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: goroutine_error_percentage

<p class="subtitle">Percentage of periodic goroutine executions that result in errors</p>

The percentage of executions that result in errors for each periodic goroutine.
A value above 5% indicates that a significant portion of routine executions are failing.

Refer to the [alerts reference](alerts#gitserver-goroutine_error_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_errors_total{job=~".*gitserver.*"}[5m])) / sum by (name, job_name) (rate(src_periodic_goroutine_total{job=~".*gitserver.*"}[5m]) > 0) * 100
```
</details>

<br />

#### gitserver: goroutine_handler_duration

<p class="subtitle">95th percentile handler execution time</p>

The 95th percentile execution time for each periodic goroutine handler.
Longer durations might indicate increased load or processing time.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100520` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_duration_seconds_bucket{job=~".*gitserver.*"}[5m])))
```
</details>

<br />

#### gitserver: goroutine_loop_duration

<p class="subtitle">95th percentile loop cycle time</p>

The 95th percentile loop cycle time for each periodic goroutine (excluding sleep time).
This represents how long a complete loop iteration takes before sleeping for the next interval.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100521` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_loop_duration_seconds_bucket{job=~".*gitserver.*"}[5m])))
```
</details>

<br />

#### gitserver: tenant_processing_duration

<p class="subtitle">95th percentile tenant processing time</p>

The 95th percentile processing time for individual tenants within periodic goroutines.
Higher values indicate that tenant processing is taking longer and may affect overall performance.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100530` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_tenant_duration_seconds_bucket{job=~".*gitserver.*"}[5m])))
```
</details>

<br />

#### gitserver: tenant_processing_max

<p class="subtitle">Maximum tenant processing time</p>

The maximum processing time for individual tenants within periodic goroutines.
Consistently high values might indicate problematic tenants or inefficient processing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100531` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name, job_name) (rate(src_periodic_goroutine_tenant_duration_seconds_sum{job=~".*gitserver.*"}[5m]) / rate(src_periodic_goroutine_tenant_duration_seconds_count{job=~".*gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: tenant_count

<p class="subtitle">Number of tenants processed per routine</p>

The number of tenants processed by each periodic goroutine.
Unexpected changes can indicate tenant configuration issues or scaling events.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100540` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name, job_name) (src_periodic_goroutine_tenant_count{job=~".*gitserver.*"})
```
</details>

<br />

#### gitserver: tenant_success_rate

<p class="subtitle">Rate of successful tenant processing operations</p>

The rate of successful tenant processing operations.
A healthy routine should maintain a consistent processing rate.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100541` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_tenant_success_total{job=~".*gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: tenant_error_rate

<p class="subtitle">Rate of tenant processing errors</p>

The rate of tenant processing operations that result in errors.
Consistent errors indicate problems with specific tenants.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100550` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: tenant_error_percentage

<p class="subtitle">Percentage of tenant operations resulting in errors</p>

The percentage of tenant operations that result in errors.
Values above 5% indicate significant tenant processing problems.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100551` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*gitserver.*"}[5m])) / (sum by (name, job_name) (rate(src_periodic_goroutine_tenant_success_total{job=~".*gitserver.*"}[5m])) + sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*gitserver.*"}[5m])))) * 100
```
</details>

<br />

### Git Server: Network I/O pod metrics (only available on Kubernetes)

#### gitserver: network_sent_bytes_aggregate

<p class="subtitle">Transmission rate over 5m (aggregate)</p>

The rate of bytes sent over the network across all pods

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~`.*gitserver.*`}[5m]))
```
</details>

<br />

#### gitserver: network_received_packets_per_instance

<p class="subtitle">Transmission rate over 5m (per instance)</p>

The rate of bytes sent over the network by individual pods

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### gitserver: network_received_bytes_aggregate

<p class="subtitle">Receive rate over 5m (aggregate)</p>

The rate of bytes received from the network across all pods

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~`.*gitserver.*`}[5m]))
```
</details>

<br />

#### gitserver: network_received_bytes_per_instance

<p class="subtitle">Receive rate over 5m (per instance)</p>

The rate of bytes received from the network by individual pods

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### gitserver: network_transmitted_packets_dropped_by_instance

<p class="subtitle">Transmit packet drop rate over 5m (by instance)</p>

An increase in dropped packets could be a leading indicator of network saturation.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100620` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_packets_dropped_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### gitserver: network_transmitted_packets_errors_per_instance

<p class="subtitle">Errors encountered while transmitting over 5m (per instance)</p>

An increase in transmission errors could indicate a networking issue.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100621` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_errors_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### gitserver: network_received_packets_dropped_by_instance

<p class="subtitle">Receive packet drop rate over 5m (by instance)</p>

An increase in dropped packets could be a leading indicator of network saturation.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100622` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_packets_dropped_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### gitserver: network_transmitted_packets_errors_by_instance

<p class="subtitle">Errors encountered while receiving over 5m (per instance)</p>

An increase in errors while receiving could indicate a networking issue.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100623` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_errors_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

### Git Server: VCS Clone metrics

#### gitserver: vcs_syncer_999_successful_clone_duration

<p class="subtitle">99.9th percentile successful Clone duration over 1m</p>

The 99.9th percentile duration for successful `Clone` VCS operations. This is the time taken to clone a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (type, le) (rate(vcssyncer_clone_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="true"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_99_successful_clone_duration

<p class="subtitle">99th percentile successful Clone duration over 1m</p>

The 99th percentile duration for successful `Clone` VCS operations. This is the time taken to clone a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (type, le) (rate(vcssyncer_clone_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="true"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_95_successful_clone_duration

<p class="subtitle">95th percentile successful Clone duration over 1m</p>

The 95th percentile duration for successful `Clone` VCS operations. This is the time taken to clone a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100702` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (type, le) (rate(vcssyncer_clone_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="true"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_successful_clone_rate

<p class="subtitle">Rate of successful Clone VCS operations over 1m</p>

The rate of successful `Clone` VCS operations, i.e. repositories cloned from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (type) (rate(vcssyncer_clone_duration_seconds_count{type=~`${vcsSyncerType:regex}`, success="true"}[1m]))
```
</details>

<br />

#### gitserver: vcs_syncer_999_failed_clone_duration

<p class="subtitle">99.9th percentile failed Clone duration over 1m</p>

The 99.9th percentile duration for failed `Clone` VCS operations. This is the time taken to clone a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100720` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (type, le) (rate(vcssyncer_clone_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="false"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_99_failed_clone_duration

<p class="subtitle">99th percentile failed Clone duration over 1m</p>

The 99th percentile duration for failed `Clone` VCS operations. This is the time taken to clone a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100721` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (type, le) (rate(vcssyncer_clone_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="false"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_95_failed_clone_duration

<p class="subtitle">95th percentile failed Clone duration over 1m</p>

The 95th percentile duration for failed `Clone` VCS operations. This is the time taken to clone a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100722` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (type, le) (rate(vcssyncer_clone_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="false"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_failed_clone_rate

<p class="subtitle">Rate of failed Clone VCS operations over 1m</p>

The rate of failed `Clone` VCS operations, i.e. failed attempts to clone a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100730` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (type) (rate(vcssyncer_clone_duration_seconds_count{type=~`${vcsSyncerType:regex}`, success="false"}[1m]))
```
</details>

<br />

### Git Server: VCS Fetch metrics

#### gitserver: vcs_syncer_999_successful_fetch_duration

<p class="subtitle">99.9th percentile successful Fetch duration over 1m</p>

The 99.9th percentile duration for successful `Fetch` VCS operations. This is the time taken to fetch a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (type, le) (rate(vcssyncer_fetch_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="true"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_99_successful_fetch_duration

<p class="subtitle">99th percentile successful Fetch duration over 1m</p>

The 99th percentile duration for successful `Fetch` VCS operations. This is the time taken to fetch a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (type, le) (rate(vcssyncer_fetch_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="true"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_95_successful_fetch_duration

<p class="subtitle">95th percentile successful Fetch duration over 1m</p>

The 95th percentile duration for successful `Fetch` VCS operations. This is the time taken to fetch a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100802` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (type, le) (rate(vcssyncer_fetch_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="true"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_successful_fetch_rate

<p class="subtitle">Rate of successful Fetch VCS operations over 1m</p>

The rate of successful `Fetch` VCS operations, i.e. repositories fetched from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (type) (rate(vcssyncer_fetch_duration_seconds_count{type=~`${vcsSyncerType:regex}`, success="true"}[1m]))
```
</details>

<br />

#### gitserver: vcs_syncer_999_failed_fetch_duration

<p class="subtitle">99.9th percentile failed Fetch duration over 1m</p>

The 99.9th percentile duration for failed `Fetch` VCS operations. This is the time taken to fetch a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100820` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (type, le) (rate(vcssyncer_fetch_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="false"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_99_failed_fetch_duration

<p class="subtitle">99th percentile failed Fetch duration over 1m</p>

The 99th percentile duration for failed `Fetch` VCS operations. This is the time taken to fetch a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100821` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (type, le) (rate(vcssyncer_fetch_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="false"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_95_failed_fetch_duration

<p class="subtitle">95th percentile failed Fetch duration over 1m</p>

The 95th percentile duration for failed `Fetch` VCS operations. This is the time taken to fetch a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100822` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (type, le) (rate(vcssyncer_fetch_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="false"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_failed_fetch_rate

<p class="subtitle">Rate of failed Fetch VCS operations over 1m</p>

The rate of failed `Fetch` VCS operations, i.e. fetches of a repository from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100830` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (type) (rate(vcssyncer_fetch_duration_seconds_count{type=~`${vcsSyncerType:regex}`, success="false"}[1m]))
```
</details>

<br />

### Git Server: VCS Is_cloneable metrics

#### gitserver: vcs_syncer_999_successful_is_cloneable_duration

<p class="subtitle">99.9th percentile successful Is_cloneable duration over 1m</p>

The 99.9th percentile duration for successful `Is_cloneable` VCS operations. This is the time taken to check whether a repository is cloneable from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (type, le) (rate(vcssyncer_is_cloneable_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="true"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_99_successful_is_cloneable_duration

<p class="subtitle">99th percentile successful Is_cloneable duration over 1m</p>

The 99th percentile duration for successful `Is_cloneable` VCS operations. This is the time taken to check whether a repository is cloneable from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (type, le) (rate(vcssyncer_is_cloneable_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="true"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_95_successful_is_cloneable_duration

<p class="subtitle">95th percentile successful Is_cloneable duration over 1m</p>

The 95th percentile duration for successful `Is_cloneable` VCS operations. This is the time taken to check whether a repository is cloneable from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100902` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (type, le) (rate(vcssyncer_is_cloneable_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="true"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_successful_is_cloneable_rate

<p class="subtitle">Rate of successful Is_cloneable VCS operations over 1m</p>

The rate of successful `Is_cloneable` VCS operations, i.e. checks of whether a repository is cloneable from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100910` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (type) (rate(vcssyncer_is_cloneable_duration_seconds_count{type=~`${vcsSyncerType:regex}`, success="true"}[1m]))
```
</details>

<br />

#### gitserver: vcs_syncer_999_failed_is_cloneable_duration

<p class="subtitle">99.9th percentile failed Is_cloneable duration over 1m</p>

The 99.9th percentile duration for failed `Is_cloneable` VCS operations. This is the time taken to check whether a repository is cloneable from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100920` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (type, le) (rate(vcssyncer_is_cloneable_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="false"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_99_failed_is_cloneable_duration

<p class="subtitle">99th percentile failed Is_cloneable duration over 1m</p>

The 99th percentile duration for failed `Is_cloneable` VCS operations. This is the time taken to check whether a repository is cloneable from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100921` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (type, le) (rate(vcssyncer_is_cloneable_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="false"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_95_failed_is_cloneable_duration

<p class="subtitle">95th percentile failed Is_cloneable duration over 1m</p>

The 95th percentile duration for failed `Is_cloneable` VCS operations. This is the time taken to check whether a repository is cloneable from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100922` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (type, le) (rate(vcssyncer_is_cloneable_duration_seconds_bucket{type=~`${vcsSyncerType:regex}`, success="false"}[1m])))
```
</details>

<br />

#### gitserver: vcs_syncer_failed_is_cloneable_rate

<p class="subtitle">Rate of failed Is_cloneable VCS operations over 1m</p>

The rate of failed `Is_cloneable` VCS operations, i.e. checks of whether a repository is cloneable from the upstream source.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=100930` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (type) (rate(vcssyncer_is_cloneable_duration_seconds_count{type=~`${vcsSyncerType:regex}`, success="false"}[1m]))
```
</details>

<br />

### Git Server: Gitserver Backend

#### gitserver: concurrent_backend_operations

<p class="subtitle">Number of concurrently running backend operations</p>

The number of requests currently being handled by the gitserver backend layer, at the point in time of scraping.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
src_gitserver_backend_concurrent_operations
```
</details>

<br />
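
Because this metric is a gauge sampled at scrape time, short spikes between scrapes never appear on the panel. To look for recent peaks you could instead take the maximum over a window; a minimal ad-hoc sketch:

```
# Highest concurrency level observed over the last hour (per scrape sample).
max_over_time(src_gitserver_backend_concurrent_operations[1h])
```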

#### gitserver: gitserver_backend_total

<p class="subtitle">Aggregate backend operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_backend_total{job=~"^gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_backend_99th_percentile_duration

<p class="subtitle">Aggregate successful backend operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_gitserver_backend_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_backend_errors_total

<p class="subtitle">Aggregate backend operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_backend_error_rate

<p class="subtitle">Aggregate backend operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m])) / (sum(increase(src_gitserver_backend_total{job=~"^gitserver.*"}[5m])) + sum(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))) * 100
```
</details>

<br />

#### gitserver: gitserver_backend_total

<p class="subtitle">Backend operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_backend_total{job=~"^gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_backend_99th_percentile_duration

<p class="subtitle">99th percentile successful backend operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101021` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_backend_duration_seconds_bucket{job=~"^gitserver.*"}[5m])))
```
</details>

<br />

#### gitserver: gitserver_backend_errors_total

<p class="subtitle">Backend operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101022` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_backend_error_rate

<p class="subtitle">Backend operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101023` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_gitserver_backend_total{job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))) * 100
```
</details>

<br />
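
Note that in these error-rate queries the denominator adds the error count to the base total, which suggests that `src_gitserver_backend_total` counts only successful operations (an inference from the query shape, not something this reference states). The same expression can double as an ad-hoc filter that surfaces only problematic operations; a sketch using a hypothetical 5% threshold:

```
# Per-operation error rate over 5m, keeping only operations above 5%.
  sum by (op)(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))
/ (  sum by (op)(increase(src_gitserver_backend_total{job=~"^gitserver.*"}[5m]))
   + sum by (op)(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))) * 100
> 5
```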

### Git Server: Gitserver Client

#### gitserver: gitserver_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_total{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^*.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))) * 100
```
</details>

<br />

#### gitserver: gitserver_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_client_total{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^*.*"}[5m])))
```
</details>

<br />

#### gitserver: gitserver_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^*.*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))) * 100
```
</details>

<br />
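
The client metrics carry both `op` and `scope` labels, but every panel above breaks out by both. If you only want to know which callers generate the most traffic, you can aggregate by `scope` alone; a minimal sketch (the dashboard's `job` matcher is omitted here):

```
# Client operations every 5m, aggregated by calling scope only.
sum by (scope)(increase(src_gitserver_client_total[5m]))
```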

### Git Server: Gitserver Repository Service Client

#### gitserver: gitserver_repositoryservice_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_repositoryservice_client_total{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_repositoryservice_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_gitserver_repositoryservice_client_duration_seconds_bucket{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_repositoryservice_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_repositoryservice_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^*.*"}[5m])) / (sum(increase(src_gitserver_repositoryservice_client_total{job=~"^*.*"}[5m])) + sum(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^*.*"}[5m]))) * 100
```
</details>

<br />

#### gitserver: gitserver_repositoryservice_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_repositoryservice_client_total{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_repositoryservice_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op,scope)(rate(src_gitserver_repositoryservice_client_duration_seconds_bucket{job=~"^*.*"}[5m])))
```
</details>

<br />

#### gitserver: gitserver_repositoryservice_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^*.*"}[5m]))
```
</details>

<br />

#### gitserver: gitserver_repositoryservice_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^*.*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_repositoryservice_client_total{job=~"^*.*"}[5m])) + sum by (op,scope)(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^*.*"}[5m]))) * 100
```
</details>

<br />

### Git Server: Repos disk I/O metrics

#### gitserver: repos_disk_reads_sec

<p class="subtitle">Read request rate over 1m (per instance)</p>

The number of read requests that were issued to the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />
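
All of the disk panels share the same shape: a node_exporter disk counter is joined onto `gitserver_mount_point_info` on `(device, nodename)`, so only the device backing the repos directory is charted. To see which device that is on your instance, you can query the info metric by itself; a minimal sketch:

```
# Lists the device/nodename pairs backing gitserver's repos directory.
gitserver_mount_point_info{mount_name="reposDir"}
```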

#### gitserver: repos_disk_writes_sec

<p class="subtitle">Write request rate over 1m (per instance)</p>

The number of write requests that were issued to the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### gitserver: repos_disk_read_throughput

<p class="subtitle">Read throughput over 1m (per instance)</p>

The amount of data that was read from the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### gitserver: repos_disk_write_throughput

<p class="subtitle">Write throughput over 1m (per instance)</p>

The amount of data that was written to the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### gitserver: repos_disk_read_duration

<p class="subtitle">Average read duration over 1m (per instance)</p>

The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101320` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />
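
Stripped of the mount-point join, the duration panels are a ratio of two counter rates: seconds spent serving requests divided by requests completed, which yields average seconds per request. The core computation, sketched without the join (so it returns every device, not just the one gitserver uses):

```
# Average read latency per device over 1m: time spent reading / reads completed.
rate(node_disk_read_time_seconds_total[1m]) / rate(node_disk_reads_completed_total[1m])
```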

#### gitserver: repos_disk_write_duration

<p class="subtitle">Average write duration over 1m (per instance)</p>

The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101321` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### gitserver: repos_disk_read_request_size

<p class="subtitle">Average read request size over 1m (per instance)</p>

The average size of read requests that were issued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101330` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### gitserver: repos_disk_write_request_size

<p class="subtitle">Average write request size over 1m (per instance)</p>

The average size of write requests that were issued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101331` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### gitserver: repos_disk_reads_merged_sec

<p class="subtitle">Merged read request rate over 1m (per instance)</p>

The number of read requests merged per second that were queued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101340` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### gitserver: repos_disk_writes_merged_sec

<p class="subtitle">Merged write request rate over 1m (per instance)</p>

The number of write requests merged per second that were queued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101341` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### gitserver: repos_disk_average_queue_size

<p class="subtitle">Average queue size over 1m (per instance)</p>

The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), gitserver could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101350` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

### Git Server: Git Service GRPC server metrics

#### gitserver: git_service_grpc_request_rate_all_methods

<p class="subtitle">Request rate across all methods over 2m</p>

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m]))
```
</details>

<br />

#### gitserver: git_service_grpc_request_rate_per_method

<p class="subtitle">Request rate per-method over 2m</p>

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)
```
</details>

<br />

#### gitserver: git_service_error_percentage_all_methods

<p class="subtitle">Error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m]))) ))
```
</details>

<br />

#### gitserver: git_service_grpc_error_percentage_per_method

<p class="subtitle">Error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${git_service_method:regex}`,grpc_code!="OK",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)) ))
```
</details>

<br />

#### gitserver: git_service_p99_response_time_per_method

<p class="subtitle">99th percentile response time per method over 2m</p>

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101420` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
```
</details>

<br />

#### gitserver: git_service_p90_response_time_per_method

<p class="subtitle">90th percentile response time per method over 2m</p>

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101421` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
```
</details>

<br />

#### gitserver: git_service_p75_response_time_per_method

<p class="subtitle">75th percentile response time per method over 2m</p>

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101422` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
```
</details>

<br />

#### gitserver: git_service_p99_9_response_size_per_method

<p class="subtitle">99.9th percentile total response size per method over 2m</p>

The 99.9th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101430` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
```
</details>

<br />

#### gitserver: git_service_p90_response_size_per_method

<p class="subtitle">90th percentile total response size per method over 2m</p>

The 90th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101431` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
```
</details>

<br />

#### gitserver: git_service_p75_response_size_per_method

<p class="subtitle">75th percentile total response size per method over 2m</p>

The 75th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101432` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
```
</details>

<br />

#### gitserver: git_service_p99_9_invididual_sent_message_size_per_method

<p class="subtitle">99.9th percentile individual sent message size per method over 2m</p>

The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101440` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
```
</details>

<br />

#### gitserver: git_service_p90_invididual_sent_message_size_per_method

<p class="subtitle">90th percentile individual sent message size per method over 2m</p>

The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101441` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
```
</details>

<br />

#### gitserver: git_service_p75_invididual_sent_message_size_per_method

<p class="subtitle">75th percentile individual sent message size per method over 2m</p>

The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101442` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
```
</details>

<br />

#### gitserver: git_service_grpc_response_stream_message_count_per_method

<p class="subtitle">Average streaming response message count per-method over 2m</p>

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101450` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)))
```
</details>

<br />

#### gitserver: git_service_grpc_all_codes_per_method

<p class="subtitle">Response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101460` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_handled_total{grpc_method=~`${git_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method, grpc_code)
```
</details>

<br />
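
The response-code panel above includes successful codes. When triaging, it can help to keep only failures; a minimal sketch restricted to non-`OK` codes (omitting the dashboard-only method and shard variables):

```
# Rate of failing response codes per method for the git service, over 2m.
sum(rate(grpc_server_handled_total{grpc_service=~"gitserver.v1.GitserverService",grpc_code!="OK"}[2m])) by (grpc_method, grpc_code)
```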

### Git Server: Git Service GRPC "internal error" metrics

#### gitserver: git_service_grpc_clients_error_percentage_all_methods

<p class="subtitle">Client baseline error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "git_service" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService"}[2m])))))))
```
</details>

<br />

#### gitserver: git_service_grpc_clients_error_percentage_per_method

<p class="subtitle">Client baseline error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "git_service" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${git_service_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${git_service_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### gitserver: git_service_grpc_clients_all_codes_per_method

<p class="subtitle">Client baseline response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "git_service" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${git_service_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

#### gitserver: git_service_grpc_clients_internal_error_percentage_all_methods

<p class="subtitle">Client-observed gRPC internal error percentage across all methods over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "git_service" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "git_service" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService"}[2m])))))))
```
</details>

<br />

#### gitserver: git_service_grpc_clients_internal_error_percentage_per_method

<p class="subtitle">Client-observed gRPC internal error percentage per-method over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "git_service" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "git_service" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${git_service_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${git_service_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### gitserver: git_service_grpc_clients_internal_error_all_codes_per_method

<p class="subtitle">Client-observed gRPC internal error response code rate per-method over 2m</p>

The rate of gRPC internal-error response codes per method, aggregated across all "git_service" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "git_service" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",is_internal_error="true",grpc_method=~"${git_service_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

### Git Server: Git Service GRPC retry metrics

#### gitserver: git_service_grpc_clients_retry_percentage_across_all_methods

<p class="subtitle">Client retry percentage across all methods over 2m</p>

The percentage of gRPC requests that were retried across all methods, aggregated across all "git_service" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService"}[2m])))))))
```
</details>

<br />

#### gitserver: git_service_grpc_clients_retry_percentage_per_method

<p class="subtitle">Client retry percentage per-method over 2m</p>

The percentage of gRPC requests that were retried aggregated across all "git_service" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService",is_retried="true",grpc_method=~"${git_service_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${git_service_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### gitserver: git_service_grpc_clients_retry_count_per_method

<p class="subtitle">Client retry count per-method over 2m</p>

The count of gRPC requests that were retried aggregated across all "git_service" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101602` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${git_service_method:regex}",is_retried="true"}[2m])) by (grpc_method))
```
</details>

<br />
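
The retry counter distinguishes retried from first-attempt requests via the `is_retried` label, which is what makes the percentage panels above possible. For raw retry volume across the whole service rather than per method, a minimal sketch:

```
# Total rate of retried git service gRPC requests, over 2m.
sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService",is_retried="true"}[2m]))
```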

### Git Server: Repository Service GRPC server metrics

#### gitserver: repository_service_grpc_request_rate_all_methods

<p class="subtitle">Request rate across all methods over 2m</p>

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m]))
```
</details>

<br />

#### gitserver: repository_service_grpc_request_rate_per_method

<p class="subtitle">Request rate per-method over 2m</p>

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])) by (grpc_method)
```
</details>

<br />

#### gitserver: repository_service_error_percentage_all_methods

<p class="subtitle">Error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m]))) ))
```
</details>

<br />

#### gitserver: repository_service_grpc_error_percentage_per_method

<p class="subtitle">Error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101711` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${repository_service_method:regex}`,grpc_code!="OK",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])) by (grpc_method)) ))
```
</details>

<br />

#### gitserver: repository_service_p99_response_time_per_method

<p class="subtitle">99th percentile response time per method over 2m</p>

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101720` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))
```
</details>

<br />

#### gitserver: repository_service_p90_response_time_per_method

<p class="subtitle">90th percentile response time per method over 2m</p>

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101721` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))
```
</details>

<br />

#### gitserver: repository_service_p75_response_time_per_method

<p class="subtitle">75th percentile response time per method over 2m</p>

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101722` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))
```
</details>

<br />

#### gitserver: repository_service_p99_9_response_size_per_method

<p class="subtitle">99.9th percentile total response size per method over 2m</p>

The 99.9th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101730` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))
```
</details>

<br />

#### gitserver: repository_service_p90_response_size_per_method

<p class="subtitle">90th percentile total response size per method over 2m</p>

The 90th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101731` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))
```
</details>

<br />

#### gitserver: repository_service_p75_response_size_per_method

<p class="subtitle">75th percentile total response size per method over 2m</p>

The 75th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101732` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))
```
</details>

<br />

#### gitserver: repository_service_p99_9_invididual_sent_message_size_per_method

<p class="subtitle">99.9th percentile individual sent message size per method over 2m</p>

The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101740` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))
```
</details>

<br />

#### gitserver: repository_service_p90_invididual_sent_message_size_per_method

<p class="subtitle">90th percentile individual sent message size per method over 2m</p>

The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101741` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))
```
</details>

<br />

#### gitserver: repository_service_p75_invididual_sent_message_size_per_method

<p class="subtitle">75th percentile individual sent message size per method over 2m</p>

The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101742` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))
```
</details>

<br />

#### gitserver: repository_service_grpc_response_stream_message_count_per_method

<p class="subtitle">Average streaming response message count per-method over 2m</p>

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101750` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])) by (grpc_method)))
```
</details>

<br />

#### gitserver: repository_service_grpc_all_codes_per_method

<p class="subtitle">Response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101760` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_handled_total{grpc_method=~`${repository_service_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])) by (grpc_method, grpc_code)
```
</details>

<br />

### Git Server: Repository Service GRPC "internal error" metrics

#### gitserver: repository_service_grpc_clients_error_percentage_all_methods

<p class="subtitle">Client baseline error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "repository_service" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))))))
```
</details>

<br />

#### gitserver: repository_service_grpc_clients_error_percentage_per_method

<p class="subtitle">Client baseline error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "repository_service" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_method=~"${repository_service_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_method=~"${repository_service_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### gitserver: repository_service_grpc_clients_all_codes_per_method

<p class="subtitle">Client baseline response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "repository_service" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101802` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_method=~"${repository_service_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

#### gitserver: repository_service_grpc_clients_internal_error_percentage_all_methods

<p class="subtitle">Client-observed gRPC internal error percentage across all methods over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "repository_service" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "repository_service" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))))))
```
</details>

<br />
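
The panel above reports internal errors as a share of all requests. When triaging, it can also help to know what fraction of the failures themselves are internal. The query below is a sketch derived from the same `src_grpc_method_status` metric, not a panel on the generated dashboard:

```
(100.0 * ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_code!="OK",is_internal_error="true"}[2m]))) / (sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_code!="OK"}[2m])))))
```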

#### gitserver: repository_service_grpc_clients_internal_error_percentage_per_method

<p class="subtitle">Client-observed gRPC internal error percentage per-method over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "repository_service" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "repository_service" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_method=~"${repository_service_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_method=~"${repository_service_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### gitserver: repository_service_grpc_clients_internal_error_all_codes_per_method

<p class="subtitle">Client-observed gRPC internal error response code rate per-method over 2m</p>

The rate of gRPC internal-error response codes per method, aggregated across all "repository_service" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "repository_service" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an `internal error`) as opposed to `normal` application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101812` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverRepositoryService",is_internal_error="true",grpc_method=~"${repository_service_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

### Git Server: Repository Service GRPC retry metrics

#### gitserver: repository_service_grpc_clients_retry_percentage_across_all_methods

<p class="subtitle">Client retry percentage across all methods over 2m</p>

The percentage of gRPC requests that were retried across all methods, aggregated across all "repository_service" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverRepositoryService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverRepositoryService"}[2m])))))))
```
</details>

<br />

#### gitserver: repository_service_grpc_clients_retry_percentage_per_method

<p class="subtitle">Client retry percentage per-method over 2m</p>

The percentage of gRPC requests that were retried, aggregated across all "repository_service" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverRepositoryService",is_retried="true",grpc_method=~"${repository_service_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_method=~"${repository_service_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### gitserver: repository_service_grpc_clients_retry_count_per_method

<p class="subtitle">Client retry count per-method over 2m</p>

The count of gRPC requests that were retried, aggregated across all "repository_service" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=101902` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverRepositoryService",grpc_method=~"${repository_service_method:regex}",is_retried="true"}[2m])) by (grpc_method))
```
</details>

<br />

### Git Server: Site configuration client update latency

#### gitserver: gitserver_site_configuration_duration_since_last_successful_update_by_instance

<p class="subtitle">Duration since last successful site configuration update (by instance)</p>

The duration since the configuration client used by the "gitserver" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
src_conf_client_time_since_last_successful_update_seconds{job=~`.*gitserver`,instance=~`${shard:regex}`}
```
</details>

<br />

#### gitserver: gitserver_site_configuration_duration_since_last_successful_update_by_instance

<p class="subtitle">Maximum duration since last successful site configuration update (all "gitserver" instances)</p>

Refer to the [alerts reference](alerts#gitserver-gitserver_site_configuration_duration_since_last_successful_update_by_instance) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`.*gitserver`,instance=~`${shard:regex}`}[1m]))
```
</details>

<br />
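
For an ad-hoc check outside Grafana, the same expression can be turned into a boolean filter (dropping the `${shard:regex}` dashboard variable). The 300-second threshold below is illustrative only; the actual alert threshold is defined in the alerts reference:

```
max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`.*gitserver`}[1m])) > 300
```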

### Git Server: HTTP handlers

#### gitserver: healthy_request_rate

<p class="subtitle">Requests per second, by route, when status code is 200</p>

The number of healthy HTTP requests per second to the internal HTTP API.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code=~"2.."}[5m]))
```
</details>

<br />

#### gitserver: unhealthy_request_rate

<p class="subtitle">Requests per second, by route, when status code is not 200</p>

The number of unhealthy HTTP requests per second to the internal HTTP API.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code!~"2.."}[5m]))
```
</details>

<br />

#### gitserver: request_rate_by_code

<p class="subtitle">Requests per second, by status code</p>

The number of HTTP requests per second by status code.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (code) (rate(src_http_request_duration_seconds_count{app="gitserver"}[5m]))
```
</details>

<br />

#### gitserver: 95th_percentile_healthy_requests

<p class="subtitle">95th percentile duration by route, when status code is 200</p>

The 95th percentile duration by route when the status code is 200.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code=~"2.."}[5m])) by (le, route))
```
</details>

<br />

#### gitserver: 95th_percentile_unhealthy_requests

<p class="subtitle">95th percentile duration by route, when status code is not 200</p>

The 95th percentile duration by route when the status code is not 200.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code!~"2.."}[5m])) by (le, route))
```
</details>

<br />
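
Both duration panels use `histogram_quantile` over the `src_http_request_duration_seconds_bucket` histogram; the first argument selects the quantile. As a sketch (not itself a dashboard panel), a median (p50) variant of the healthy-request query only changes that argument:

```
histogram_quantile(0.50, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code=~"2.."}[5m])) by (le, route))
```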

### Git Server: Database connections

#### gitserver: max_open_conns

<p class="subtitle">Maximum open</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="gitserver"})
```
</details>

<br />

#### gitserver: open_conns

<p class="subtitle">Established</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_open{app_name="gitserver"})
```
</details>

<br />

#### gitserver: in_use

<p class="subtitle">Used</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="gitserver"})
```
</details>

<br />

#### gitserver: idle

<p class="subtitle">Idle</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="gitserver"})
```
</details>

<br />

#### gitserver: mean_blocked_seconds_per_conn_request

<p class="subtitle">Mean blocked seconds per conn request</p>

Refer to the [alerts reference](alerts#gitserver-mean_blocked_seconds_per_conn_request) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102220` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="gitserver"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="gitserver"}[5m]))
```
</details>

<br />
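
The ratio above divides the cumulative blocked seconds by the count of connection requests that had to wait (`src_pgsql_conns_waited_for`), so the result reads as average seconds blocked per waiting request. To sanity-check the denominator, you can plot the raw wait count on its own; this is a sketch over the same metric, not a generated panel:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="gitserver"}[5m]))
```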

#### gitserver: closed_max_idle

<p class="subtitle">Closed by SetMaxIdleConns</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102230` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="gitserver"}[5m]))
```
</details>

<br />

#### gitserver: closed_max_lifetime

<p class="subtitle">Closed by SetConnMaxLifetime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102231` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="gitserver"}[5m]))
```
</details>

<br />

#### gitserver: closed_max_idle_time

<p class="subtitle">Closed by SetConnMaxIdleTime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102232` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="gitserver"}[5m]))
```
</details>

<br />

### Git Server: Container monitoring (not available on server)

#### gitserver: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
	- Determine if the pod was OOM killed using `kubectl describe pod gitserver` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p gitserver`.
- **Docker Compose:**
	- Determine if the pod was OOM killed using `docker inspect -f '\{\{json .State\}\}' gitserver` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the gitserver container in `docker-compose.yml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs gitserver` (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^gitserver.*"}) > 60)
```
</details>

<br />

#### gitserver: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

Refer to the [alerts reference](alerts#gitserver-container_cpu_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}
```
</details>

<br />

#### gitserver: container_memory_usage

<p class="subtitle">Container memory usage by instance</p>

Refer to the [alerts reference](alerts#gitserver-container_memory_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}
```
</details>

<br />

#### gitserver: fs_io_operations

<p class="subtitle">Filesystem reads and writes rate by instance over 1h</p>

This value indicates the number of filesystem read and write operations by containers of this service.
When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with gitserver issues.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(name) (rate(container_fs_reads_total{name=~"^gitserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^gitserver.*"}[1h]))
```
</details>

<br />

### Git Server: Provisioning indicators (not available on server)

#### gitserver: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

Refer to the [alerts reference](alerts#gitserver-provisioning_container_cpu_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[1d])
```
</details>

<br />

#### gitserver: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

Git Server is expected to use up all the memory it is provided.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[1d])
```
</details>

<br />

#### gitserver: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

Refer to the [alerts reference](alerts#gitserver-provisioning_container_cpu_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[5m])
```
</details>

<br />

#### gitserver: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

Git Server is expected to use up all the memory it is provided.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[5m])
```
</details>

<br />

#### gitserver: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer.
When this occurs frequently, it is an indicator of underprovisioning.

Refer to the [alerts reference](alerts#gitserver-container_oomkill_events_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^gitserver.*"})
```
</details>

<br />

### Git Server: Golang runtime monitoring

#### gitserver: go_goroutines

<p class="subtitle">Maximum active goroutines</p>

A high value here indicates a possible goroutine leak.

Refer to the [alerts reference](alerts#gitserver-go_goroutines) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_goroutines{job=~".*gitserver"})
```
</details>

<br />

#### gitserver: go_gc_duration_seconds

<p class="subtitle">Maximum go garbage collection duration</p>

Refer to the [alerts reference](alerts#gitserver-go_gc_duration_seconds) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_gc_duration_seconds{job=~".*gitserver"})
```
</details>

<br />

### Git Server: Kubernetes monitoring (only available on Kubernetes)

#### gitserver: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#gitserver-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/gitserver/gitserver?viewPanel=102600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*gitserver"}) / count by (app) (up{app=~".*gitserver"}) * 100
```
</details>

<br />

## Postgres

<p class="subtitle">Postgres metrics, exported from postgres_exporter (not available on server).</p>

To see this dashboard, visit `/-/debug/grafana/d/postgres/postgres` on your Sourcegraph instance.

#### postgres: connections

<p class="subtitle">Active connections</p>

Refer to the [alerts reference](alerts#postgres-connections) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (job) (pg_stat_activity_count{datname!~"template.*|postgres|cloudsqladmin"}) OR sum by (job) (pg_stat_activity_count{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})
```
</details>

<br />

#### postgres: usage_connections_percentage

<p class="subtitle">Connections in use</p>

Refer to the [alerts reference](alerts#postgres-usage_connections_percentage) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(pg_stat_activity_count) by (job) / (sum(pg_settings_max_connections) by (job) - sum(pg_settings_superuser_reserved_connections) by (job)) * 100
```
</details>

<br />
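
The denominator in the query above is the effective connection budget: `max_connections` minus the superuser-reserved slots. To inspect that budget on its own, a sketch using the same settings metrics (not a generated panel):

```
sum(pg_settings_max_connections) by (job) - sum(pg_settings_superuser_reserved_connections) by (job)
```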

#### postgres: transaction_durations

<p class="subtitle">Maximum transaction durations</p>

Refer to the [alerts reference](alerts#postgres-transaction_durations) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100002` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (job) (pg_stat_activity_max_tx_duration{datname!~"template.*|postgres|cloudsqladmin",job!="codeintel-db"}) OR sum by (job) (pg_stat_activity_max_tx_duration{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})
```
</details>

<br />

### Postgres: Database and collector status

#### postgres: postgres_up

<p class="subtitle">Database availability</p>

A non-zero value indicates the database is online.

Refer to the [alerts reference](alerts#postgres-postgres_up) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
pg_up
```
</details>

<br />

#### postgres: invalid_indexes

<p class="subtitle">Invalid indexes (unusable by the query planner)</p>

A non-zero value indicates that Postgres failed to build an index. Expect degraded performance until the index is manually rebuilt.

Refer to the [alerts reference](alerts#postgres-invalid_indexes) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (relname)(pg_invalid_index_count)
```
</details>

<br />

#### postgres: pg_exporter_err

<p class="subtitle">Errors scraping postgres exporter</p>

This value indicates issues retrieving metrics from postgres_exporter.

Refer to the [alerts reference](alerts#postgres-pg_exporter_err) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
pg_exporter_last_scrape_error
```
</details>

<br />

#### postgres: migration_in_progress

<p class="subtitle">Active schema migration</p>

A value of 0 indicates that no migration is in progress.

Refer to the [alerts reference](alerts#postgres-migration_in_progress) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
pg_sg_migration_status
```
</details>

<br />

### Postgres: Object size and bloat

#### postgres: pg_table_size

<p class="subtitle">Table size</p>

Total size of this table.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (relname)(pg_table_bloat_size)
```
</details>

<br />

#### postgres: pg_table_bloat_ratio

<p class="subtitle">Table bloat ratio</p>

Estimated bloat ratio of this table (high bloat = high overhead).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (relname)(pg_table_bloat_ratio) * 100
```
</details>

<br />

#### postgres: pg_index_size

<p class="subtitle">Index size</p>

Total size of this index.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (relname)(pg_index_bloat_size)
```
</details>

<br />

#### postgres: pg_index_bloat_ratio

<p class="subtitle">Index bloat ratio</p>

Estimated bloat ratio of this index (high bloat = high overhead).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (relname)(pg_index_bloat_ratio) * 100
```
</details>

<br />

### Postgres: Provisioning indicators (not available on server)

#### postgres: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

Refer to the [alerts reference](alerts#postgres-provisioning_container_cpu_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])
```
</details>

<br />

#### postgres: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

Refer to the [alerts reference](alerts#postgres-provisioning_container_memory_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])
```
</details>

<br />

#### postgres: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

Refer to the [alerts reference](alerts#postgres-provisioning_container_cpu_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])
```
</details>

<br />

#### postgres: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

Refer to the [alerts reference](alerts#postgres-provisioning_container_memory_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])
```
</details>

<br />

#### postgres: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer.
When this occurs frequently, it is an indicator of underprovisioning.

Refer to the [alerts reference](alerts#postgres-container_oomkill_events_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^(pgsql|codeintel-db|codeinsights).*"})
```
</details>

<br />

### Postgres: Kubernetes monitoring (only available on Kubernetes)

#### postgres: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#postgres-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/postgres/postgres?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) / count by (app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) * 100
```
</details>

<br />

## Precise Code Intel Worker

<p class="subtitle">Handles conversion of uploaded precise code intelligence bundles.</p>

To see this dashboard, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker` on your Sourcegraph instance.

### Precise Code Intel Worker: Codeintel: LSIF uploads

#### precise-code-intel-worker: codeintel_upload_handlers

<p class="subtitle">Handler active handlers</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_codeintel_upload_processor_handlers{job=~"^precise-code-intel-worker.*"})
```
</details>

<br />

#### precise-code-intel-worker: codeintel_upload_processor_upload_size

<p class="subtitle">Sum of upload sizes in bytes being processed by each precise code-intel worker instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(instance) (src_codeintel_upload_processor_upload_size{job="precise-code-intel-worker"})
```
</details>

<br />

#### precise-code-intel-worker: codeintel_upload_processor_total

<p class="subtitle">Handler operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_upload_processor_99th_percentile_duration

<p class="subtitle">Aggregate successful handler operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_upload_processor_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_upload_processor_errors_total

<p class="subtitle">Handler operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_upload_processor_error_rate

<p class="subtitle">Handler operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />
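
The error-rate query adds errors to the operation total in the denominator before taking the percentage. The complementary success rate is therefore just `100` minus that expression; a sketch, not a generated panel:

```
100 - (sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100)
```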

### Precise Code Intel Worker: Codeintel: dbstore stats

#### precise-code-intel-worker: codeintel_uploads_store_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_store_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_store_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />
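
To isolate a single operation instead of faceting over all of them, the `op` label in the query above can be pinned to one value. A minimal sketch, using a hypothetical operation name `GetUploadByID`:

```
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*",op="GetUploadByID"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*",op="GetUploadByID"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*",op="GetUploadByID"}[5m]))) * 100
```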

### Precise Code Intel Worker: Codeintel: lsifstore stats

#### precise-code-intel-worker: codeintel_uploads_lsifstore_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />
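
If a single average is easier to track than the full distribution, the mean duration can be derived from the histogram's companion series. A sketch, assuming the standard Prometheus `_sum` and `_count` series are exported alongside `_bucket`:

```
sum(rate(src_codeintel_uploads_lsifstore_duration_seconds_sum{job=~"^precise-code-intel-worker.*"}[5m]))
/
sum(rate(src_codeintel_uploads_lsifstore_duration_seconds_count{job=~"^precise-code-intel-worker.*"}[5m]))
```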

#### precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_lsifstore_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

### Precise Code Intel Worker: Workerutil: lsif_uploads dbworker/store stats

#### precise-code-intel-worker: workerutil_dbworker_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_total{domain='codeintel_upload',job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: workerutil_dbworker_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain='codeintel_upload',job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: workerutil_dbworker_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='codeintel_upload',job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: workerutil_dbworker_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='codeintel_upload',job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_total{domain='codeintel_upload',job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_errors_total{domain='codeintel_upload',job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

### Precise Code Intel Worker: Codeintel: gitserver client

#### precise-code-intel-worker: gitserver_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: gitserver_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: gitserver_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: gitserver_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100403` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

#### precise-code-intel-worker: gitserver_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: gitserver_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
```
</details>

<br />

#### precise-code-intel-worker: gitserver_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: gitserver_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100413` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_gitserver_client_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_gitserver_client_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

### Precise Code Intel Worker: Codeintel: uploadstore stats

#### precise-code-intel-worker: codeintel_uploadstore_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploadstore_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploadstore_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploadstore_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploadstore_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: codeintel_uploadstore_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100513` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

### Precise Code Intel Worker: Database connections

#### precise-code-intel-worker: max_open_conns

<p class="subtitle">Maximum open</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="precise-code-intel-worker"})
```
</details>

<br />

#### precise-code-intel-worker: open_conns

<p class="subtitle">Established</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_open{app_name="precise-code-intel-worker"})
```
</details>

<br />

#### precise-code-intel-worker: in_use

<p class="subtitle">Used</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="precise-code-intel-worker"})
```
</details>

<br />

#### precise-code-intel-worker: idle

<p class="subtitle">Idle</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="precise-code-intel-worker"})
```
</details>

<br />

#### precise-code-intel-worker: mean_blocked_seconds_per_conn_request

<p class="subtitle">Mean blocked seconds per conn request</p>

Refer to the [alerts reference](alerts#precise-code-intel-worker-mean_blocked_seconds_per_conn_request) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100620` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="precise-code-intel-worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="precise-code-intel-worker"}[5m]))
```
</details>

<br />
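
Sustained blocking usually means the connection pool is saturated. A quick way to gauge saturation from the pool metrics shown in this section (a sketch, not a shipped panel) is the ratio of used to maximum connections:

```
sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="precise-code-intel-worker"})
/
sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="precise-code-intel-worker"})
* 100
```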

#### precise-code-intel-worker: closed_max_idle

<p class="subtitle">Closed by SetMaxIdleConns</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100630` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="precise-code-intel-worker"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: closed_max_lifetime

<p class="subtitle">Closed by SetConnMaxLifetime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100631` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="precise-code-intel-worker"}[5m]))
```
</details>

<br />

#### precise-code-intel-worker: closed_max_idle_time

<p class="subtitle">Closed by SetConnMaxIdleTime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100632` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="precise-code-intel-worker"}[5m]))
```
</details>

<br />

### Precise Code Intel Worker: Precise-code-intel-worker (CPU, Memory)

#### precise-code-intel-worker: cpu_usage_percentage

<p class="subtitle">CPU usage</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
```
</details>

<br />

#### precise-code-intel-worker: memory_usage_percentage

<p class="subtitle">Memory usage percentage (total)</p>

An estimate for the active memory in use, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
```
</details>

<br />

#### precise-code-intel-worker: memory_working_set_bytes

<p class="subtitle">Memory usage bytes (total)</p>

An estimate for the active memory in use in bytes, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100702` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_memory_working_set_bytes{name=~"^precise-code-intel-worker.*"})
```
</details>

<br />

#### precise-code-intel-worker: memory_rss

<p class="subtitle">Memory (RSS)</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_rss{name=~"^precise-code-intel-worker.*"} / container_spec_memory_limit_bytes{name=~"^precise-code-intel-worker.*"}) by (name) * 100.0 
```
</details>

<br />
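
The panel above reports RSS as a percentage of the configured memory limit. To chart absolute RSS in bytes instead, drop the limit divisor (a sketch using the same cAdvisor metric):

```
max by (name) (container_memory_rss{name=~"^precise-code-intel-worker.*"})
```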

#### precise-code-intel-worker: memory_total_active_file

<p class="subtitle">Memory usage (active file)</p>

This metric shows the total active file-backed memory currently in use by the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100711` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_total_active_file_bytes{name=~"^precise-code-intel-worker.*"} / container_spec_memory_limit_bytes{name=~"^precise-code-intel-worker.*"}) by (name) * 100.0 
```
</details>

<br />

#### precise-code-intel-worker: memory_kernel_usage

<p class="subtitle">Memory usage (kernel)</p>

The kernel usage metric shows the amount of memory used by the kernel on behalf of the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100712` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_kernel_usage{name=~"^precise-code-intel-worker.*"} / container_spec_memory_limit_bytes{name=~"^precise-code-intel-worker.*"}) by (name) * 100.0 
```
</details>

<br />

### Precise Code Intel Worker: Container monitoring (not available on server)

#### precise-code-intel-worker: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
	- Determine if the pod was OOM killed using `kubectl describe pod precise-code-intel-worker` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p precise-code-intel-worker`.
- **Docker Compose:**
	- Determine if the container was OOM killed using `docker inspect -f '\{\{json .State\}\}' precise-code-intel-worker` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the precise-code-intel-worker container in `docker-compose.yml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs precise-code-intel-worker` (note this will include logs from the previous and currently running container).
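
To identify which container instances the panel's query has flagged, the outer `count by(name)` can be dropped so the raw series (with their labels) are listed instead. A sketch of the inner expression:

```
(time() - container_last_seen{name=~"^precise-code-intel-worker.*"}) > 60
```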

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^precise-code-intel-worker.*"}) > 60)
```
</details>

<br />

#### precise-code-intel-worker: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
```
</details>

<br />

#### precise-code-intel-worker: container_memory_usage

<p class="subtitle">Container memory usage by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100802` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
```
</details>

<br />

#### precise-code-intel-worker: fs_io_operations

<p class="subtitle">Filesystem reads and writes rate by instance over 1h</p>

This value indicates the number of filesystem read and write operations by containers of this service.
When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with precise-code-intel-worker issues.
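
To see whether reads or writes dominate, the combined rate can be split into its two components. A sketch, issued as two separate queries over the same metrics:

```
# Read operations per second, per container.
sum by(name) (rate(container_fs_reads_total{name=~"^precise-code-intel-worker.*"}[1h]))

# Write operations per second, per container.
sum by(name) (rate(container_fs_writes_total{name=~"^precise-code-intel-worker.*"}[1h]))
```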

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100803` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(name) (rate(container_fs_reads_total{name=~"^precise-code-intel-worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^precise-code-intel-worker.*"}[1h]))
```
</details>

<br />

### Precise Code Intel Worker: Provisioning indicators (not available on server)

#### precise-code-intel-worker: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])
```
</details>

<br />

#### precise-code-intel-worker: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])
```
</details>

<br />

#### precise-code-intel-worker: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100910` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])
```
</details>

<br />

#### precise-code-intel-worker: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100911` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])
```
</details>

<br />

#### precise-code-intel-worker: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100912` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^precise-code-intel-worker.*"})
```
</details>

<br />

### Precise Code Intel Worker: Golang runtime monitoring

#### precise-code-intel-worker: go_goroutines

<p class="subtitle">Maximum active goroutines</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_goroutines{job=~".*precise-code-intel-worker"})
```
</details>

<br />

#### precise-code-intel-worker: go_gc_duration_seconds

<p class="subtitle">Maximum go garbage collection duration</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_gc_duration_seconds{job=~".*precise-code-intel-worker"})
```
</details>

<br />

### Precise Code Intel Worker: Kubernetes monitoring (only available on Kubernetes)

#### precise-code-intel-worker: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#precise-code-intel-worker-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*precise-code-intel-worker"}) / count by (app) (up{app=~".*precise-code-intel-worker"}) * 100
```
</details>

<br />
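
When the percentage dips, listing the individual unavailable pods is often the fastest next step. A sketch using the same `up` series as the panel above:

```
up{app=~".*precise-code-intel-worker"} == 0
```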

## Syntactic Indexing

<p class="subtitle">Handles syntactic indexing of repositories.</p>

To see this dashboard, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing` on your Sourcegraph instance.

### Syntactic Indexing: Syntactic indexing scheduling: summary

#### syntactic-indexing: codeintel_syntactic_enqueuer_jobs_proposed

<p class="subtitle">Syntactic indexing jobs proposed for insertion over 5m</p>

Syntactic indexing jobs are proposed for insertion into the queue
based on round-robin scheduling across recently modified repos.

This should be equal to the sum of inserted + updated + skipped,
but is shown separately for clarity.
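
A quick sanity check of this invariant is to subtract the three components from the proposed count; the result should hover around zero. A sketch combining the queries of the panels in this group:

```
sum(increase(src_codeintel_syntactic_enqueuer_jobs_proposed[5m]))
-
(
    sum(increase(src_codeintel_syntactic_enqueuer_jobs_inserted[5m]))
  + sum(increase(src_codeintel_syntactic_enqueuer_jobs_updated[5m]))
  + sum(increase(src_codeintel_syntactic_enqueuer_jobs_skipped[5m]))
)
```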

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_syntactic_enqueuer_jobs_proposed[5m]))
```
</details>

<br />

#### syntactic-indexing: codeintel_syntactic_enqueuer_jobs_inserted

<p class="subtitle">Syntactic indexing jobs inserted over 5m</p>

Syntactic indexing jobs are inserted into the queue if there is a proposed
repo commit pair (R, X) such that there is no existing job for R in the queue.

If this number is close to the number of proposed jobs, it may indicate that
the scheduler is not able to keep up with the rate of incoming commits.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_syntactic_enqueuer_jobs_inserted[5m]))
```
</details>

<br />

#### syntactic-indexing: codeintel_syntactic_enqueuer_jobs_updated

<p class="subtitle">Syntactic indexing jobs updated in-place over 5m</p>

Syntactic indexing jobs are updated in-place when the scheduler attempts to
enqueue a repo commit pair (R, X) and discovers that the queue already had some
other repo commit pair (R, Y) where Y is an ancestor of X. In that case, the
job is updated in-place to point to X, to reflect the fact that users looking
at the tip of the default branch of R are more likely to benefit from newer
commits being indexed.
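
The share of proposals that land as in-place updates can be derived directly from these counters (a sketch; values near 1 would mean almost every proposal finds an older queued job for the same repo):

```
sum(increase(src_codeintel_syntactic_enqueuer_jobs_updated[5m]))
/
sum(increase(src_codeintel_syntactic_enqueuer_jobs_proposed[5m]))
```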

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100002` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_syntactic_enqueuer_jobs_updated[5m]))
```
</details>

<br />

#### syntactic-indexing: codeintel_syntactic_enqueuer_jobs_skipped

<p class="subtitle">Syntactic indexing jobs skipped over 5m</p>

Insertion of a syntactic indexing job is skipped when the scheduler attempts to
enqueue a repo commit pair (R, X) and discovers that the queue already has the
same job (most likely) or another job (R, Y) where Y is not an ancestor of X.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100003` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_syntactic_enqueuer_jobs_skipped[5m]))
```
</details>

<br />

### Syntactic Indexing: Workerutil: syntactic_scip_indexing_jobs dbworker/store stats

#### syntactic-indexing: workerutil_dbworker_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_total{domain='syntactic_scip_indexing_jobs',job=~"^syntactic-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### syntactic-indexing: workerutil_dbworker_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain='syntactic_scip_indexing_jobs',job=~"^syntactic-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### syntactic-indexing: workerutil_dbworker_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='syntactic_scip_indexing_jobs',job=~"^syntactic-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### syntactic-indexing: workerutil_dbworker_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='syntactic_scip_indexing_jobs',job=~"^syntactic-code-intel-worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_total{domain='syntactic_scip_indexing_jobs',job=~"^syntactic-code-intel-worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_errors_total{domain='syntactic_scip_indexing_jobs',job=~"^syntactic-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

### Syntactic Indexing: Codeintel: gitserver client

#### syntactic-indexing: gitserver_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_total{job=~"^syntactic-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### syntactic-indexing: gitserver_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^syntactic-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### syntactic-indexing: gitserver_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^syntactic-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### syntactic-indexing: gitserver_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^syntactic-code-intel-worker.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^syntactic-code-intel-worker.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^syntactic-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

#### syntactic-indexing: gitserver_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_total{job=~"^syntactic-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### syntactic-indexing: gitserver_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^syntactic-code-intel-worker.*"}[5m])))
```
</details>

<br />

#### syntactic-indexing: gitserver_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_errors_total{job=~"^syntactic-code-intel-worker.*"}[5m]))
```
</details>

<br />

#### syntactic-indexing: gitserver_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_errors_total{job=~"^syntactic-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_gitserver_client_total{job=~"^syntactic-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_gitserver_client_errors_total{job=~"^syntactic-code-intel-worker.*"}[5m]))) * 100
```
</details>

<br />

### Syntactic Indexing: Database connections

#### syntactic-indexing: max_open_conns

<p class="subtitle">Maximum open</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="syntactic-code-intel-worker"})
```
</details>

<br />

#### syntactic-indexing: open_conns

<p class="subtitle">Established</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_open{app_name="syntactic-code-intel-worker"})
```
</details>

<br />

#### syntactic-indexing: in_use

<p class="subtitle">Used</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="syntactic-code-intel-worker"})
```
</details>

<br />

#### syntactic-indexing: idle

<p class="subtitle">Idle</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="syntactic-code-intel-worker"})
```
</details>

<br />

#### syntactic-indexing: mean_blocked_seconds_per_conn_request

<p class="subtitle">Mean blocked seconds per conn request</p>

Refer to the [alerts reference](alerts#syntactic-indexing-mean_blocked_seconds_per_conn_request) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100320` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="syntactic-code-intel-worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="syntactic-code-intel-worker"}[5m]))
```
</details>

<br />

#### syntactic-indexing: closed_max_idle

<p class="subtitle">Closed by SetMaxIdleConns</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100330` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="syntactic-code-intel-worker"}[5m]))
```
</details>

<br />

#### syntactic-indexing: closed_max_lifetime

<p class="subtitle">Closed by SetConnMaxLifetime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100331` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="syntactic-code-intel-worker"}[5m]))
```
</details>

<br />

#### syntactic-indexing: closed_max_idle_time

<p class="subtitle">Closed by SetConnMaxIdleTime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100332` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="syntactic-code-intel-worker"}[5m]))
```
</details>

<br />

### Syntactic Indexing: Syntactic-code-intel-worker (CPU, Memory)

#### syntactic-indexing: cpu_usage_percentage

<p class="subtitle">CPU usage</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^syntactic-code-intel-worker.*"}
```
</details>

<br />

#### syntactic-indexing: memory_usage_percentage

<p class="subtitle">Memory usage percentage (total)</p>

An estimate for the active memory in use, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^syntactic-code-intel-worker.*"}
```
</details>

<br />

#### syntactic-indexing: memory_working_set_bytes

<p class="subtitle">Memory usage bytes (total)</p>

An estimate for the active memory in use in bytes, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_memory_working_set_bytes{name=~"^syntactic-code-intel-worker.*"})
```
</details>

<br />

#### syntactic-indexing: memory_rss

<p class="subtitle">Memory (RSS)</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_rss{name=~"^syntactic-code-intel-worker.*"} / container_spec_memory_limit_bytes{name=~"^syntactic-code-intel-worker.*"}) by (name) * 100.0 
```
</details>

<br />

#### syntactic-indexing: memory_total_active_file

<p class="subtitle">Memory usage (active file)</p>

This metric shows the total active file-backed memory currently in use by the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_total_active_file_bytes{name=~"^syntactic-code-intel-worker.*"} / container_spec_memory_limit_bytes{name=~"^syntactic-code-intel-worker.*"}) by (name) * 100.0 
```
</details>

<br />

#### syntactic-indexing: memory_kernel_usage

<p class="subtitle">Memory usage (kernel)</p>

The kernel usage metric shows the amount of memory used by the kernel on behalf of the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_kernel_usage{name=~"^syntactic-code-intel-worker.*"} / container_spec_memory_limit_bytes{name=~"^syntactic-code-intel-worker.*"}) by (name) * 100.0 
```
</details>

<br />

### Syntactic Indexing: Container monitoring (not available on server)

#### syntactic-indexing: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
	- Determine if the pod was OOM killed using `kubectl describe pod syntactic-code-intel-worker` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p syntactic-code-intel-worker`.
- **Docker Compose:**
	- Determine if the container was OOM killed using `docker inspect -f '\{\{json .State\}\}' syntactic-code-intel-worker` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the syntactic-code-intel-worker container in `docker-compose.yml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs syntactic-code-intel-worker` (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^syntactic-code-intel-worker.*"}) > 60)
```
</details>

<br />

#### syntactic-indexing: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^syntactic-code-intel-worker.*"}
```
</details>

<br />

#### syntactic-indexing: container_memory_usage

<p class="subtitle">Container memory usage by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^syntactic-code-intel-worker.*"}
```
</details>

<br />

#### syntactic-indexing: fs_io_operations

<p class="subtitle">Filesystem reads and writes rate by instance over 1h</p>

This value indicates the number of filesystem read and write operations by containers of this service.
When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with syntactic-code-intel-worker issues.
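
For diagnosing short spikes, a tighter window and a read/write split can help; a sketch using the same counters:

```
# reads only, 5m window
sum by (name) (rate(container_fs_reads_total{name=~"^syntactic-code-intel-worker.*"}[5m]))
# writes only, 5m window
sum by (name) (rate(container_fs_writes_total{name=~"^syntactic-code-intel-worker.*"}[5m]))
```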

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(name) (rate(container_fs_reads_total{name=~"^syntactic-code-intel-worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^syntactic-code-intel-worker.*"}[1h]))
```
</details>

<br />

### Syntactic Indexing: Provisioning indicators (not available on server)

#### syntactic-indexing: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^syntactic-code-intel-worker.*"}[1d])
```
</details>

<br />

#### syntactic-indexing: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntactic-code-intel-worker.*"}[1d])
```
</details>

<br />

#### syntactic-indexing: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^syntactic-code-intel-worker.*"}[5m])
```
</details>

<br />

#### syntactic-indexing: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntactic-code-intel-worker.*"}[5m])
```
</details>

<br />

#### syntactic-indexing: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100612` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^syntactic-code-intel-worker.*"})
```
</details>

<br />

### Syntactic Indexing: Golang runtime monitoring

#### syntactic-indexing: go_goroutines

<p class="subtitle">Maximum active goroutines</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_goroutines{job=~".*syntactic-code-intel-worker"})
```
</details>

<br />

#### syntactic-indexing: go_gc_duration_seconds

<p class="subtitle">Maximum go garbage collection duration</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_gc_duration_seconds{job=~".*syntactic-code-intel-worker"})
```
</details>

<br />

### Syntactic Indexing: Kubernetes monitoring (only available on Kubernetes)

#### syntactic-indexing: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#syntactic-indexing-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntactic-indexing/syntactic-indexing?viewPanel=100800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*syntactic-code-intel-worker"}) / count by (app) (up{app=~".*syntactic-code-intel-worker"}) * 100
```
</details>

<br />

## Redis

<p class="subtitle">Metrics from both redis databases.</p>

To see this dashboard, visit `/-/debug/grafana/d/redis/redis` on your Sourcegraph instance.

### Redis: Redis Store

#### redis: redis-store_up

<p class="subtitle">Redis-store availability</p>

A value of 1 indicates the service is currently running.

Refer to the [alerts reference](alerts#redis-redis-store_up) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
redis_up{app="redis-store"}
```
</details>

<br />

### Redis: Redis Cache

#### redis: redis-cache_up

<p class="subtitle">Redis-cache availability</p>

A value of 1 indicates the service is currently running.

Refer to the [alerts reference](alerts#redis-redis-cache_up) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
redis_up{app="redis-cache"}
```
</details>

<br />

### Redis: Provisioning indicators for Redis Cache (not available on server)

#### redis: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

Refer to the [alerts reference](alerts#redis-provisioning_container_cpu_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[1d])
```
</details>

<br />

#### redis: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

Refer to the [alerts reference](alerts#redis-provisioning_container_memory_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[1d])
```
</details>

<br />

#### redis: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

Refer to the [alerts reference](alerts#redis-provisioning_container_cpu_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[5m])
```
</details>

<br />

#### redis: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

Refer to the [alerts reference](alerts#redis-provisioning_container_memory_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[5m])
```
</details>

<br />

#### redis: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer.
If this occurs frequently, it is an indicator of underprovisioning.
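
Because the underlying series is a cumulative counter, recent kills stand out more clearly as an increase over a window; a variant sketch:

```
# OOM kills in the last hour, per container
max by (name) (increase(container_oom_events_total{name=~"^redis-cache.*"}[1h]))
```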

Refer to the [alerts reference](alerts#redis-container_oomkill_events_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^redis-cache.*"})
```
</details>

<br />

### Redis: Provisioning indicators for Redis Store (not available on server)

#### redis: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

Refer to the [alerts reference](alerts#redis-provisioning_container_cpu_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[1d])
```
</details>

<br />

#### redis: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

Refer to the [alerts reference](alerts#redis-provisioning_container_memory_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[1d])
```
</details>

<br />

#### redis: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

Refer to the [alerts reference](alerts#redis-provisioning_container_cpu_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[5m])
```
</details>

<br />

#### redis: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

Refer to the [alerts reference](alerts#redis-provisioning_container_memory_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[5m])
```
</details>

<br />

#### redis: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer.
If this occurs frequently, it is an indicator of underprovisioning.

Refer to the [alerts reference](alerts#redis-container_oomkill_events_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^redis-store.*"})
```
</details>

<br />

### Redis: Kubernetes monitoring for Redis Cache (only available on Kubernetes)

#### redis: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#redis-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*redis-cache"}) / count by (app) (up{app=~".*redis-cache"}) * 100
```
</details>

<br />

### Redis: Kubernetes monitoring for Redis Store (only available on Kubernetes)

#### redis: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#redis-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/redis/redis?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*redis-store"}) / count by (app) (up{app=~".*redis-store"}) * 100
```
</details>

<br />

## Worker

<p class="subtitle">Manages background processes.</p>

To see this dashboard, visit `/-/debug/grafana/d/worker/worker` on your Sourcegraph instance.

### Worker: Active jobs

#### worker: worker_job_count

<p class="subtitle">Number of worker instances running each job</p>

The number of worker instances running each job type.
Each job type must be handled by at least one worker instance.
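
A quick way to spot unowned job types is to look for series at zero; a sketch (it assumes instances export a 0 value for job types they are not running — if the series is simply absent instead, an `absent()`-style check per job name is needed):

```
# job types with no worker instance currently running them
sum by (job_name) (src_worker_jobs{job=~"^worker.*"}) < 1
```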

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100000` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum by (job_name) (src_worker_jobs{job=~"^worker.*"})
```
</details>

<br />

#### worker: worker_job_codeintel-upload-janitor_count

<p class="subtitle">Number of worker instances running the codeintel-upload-janitor job</p>

Refer to the [alerts reference](alerts#worker-worker_job_codeintel-upload-janitor_count) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum (src_worker_jobs{job=~"^worker.*", job_name="codeintel-upload-janitor"})
```
</details>

<br />

#### worker: worker_job_codeintel-commitgraph-updater_count

<p class="subtitle">Number of worker instances running the codeintel-commitgraph-updater job</p>

Refer to the [alerts reference](alerts#worker-worker_job_codeintel-commitgraph-updater_count) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum (src_worker_jobs{job=~"^worker.*", job_name="codeintel-commitgraph-updater"})
```
</details>

<br />

#### worker: worker_job_codeintel-autoindexing-scheduler_count

<p class="subtitle">Number of worker instances running the codeintel-autoindexing-scheduler job</p>

Refer to the [alerts reference](alerts#worker-worker_job_codeintel-autoindexing-scheduler_count) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum (src_worker_jobs{job=~"^worker.*", job_name="codeintel-autoindexing-scheduler"})
```
</details>

<br />

### Worker: Database ANALYZE

#### worker: dbanalyze_running

<p class="subtitle">DB ANALYZE job running status</p>

Indicates whether the DB ANALYZE job is currently running (1) or idle (0) for each database. If there are corresponding load or latency spikes on the database while this gauge is 1, consider disabling the dbanalyze worker job and checking whether the situation improves.
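
Because the metric is a 0/1 gauge, averaging it over a window yields the fraction of time the job was running; a sketch (per database, over the last hour):

```
# duty cycle of the ANALYZE job: 0.25 means it ran for ~15 minutes of the hour
max by (database) (avg_over_time(src_dbanalyze_running{job=~"^worker.*"}[1h]))
```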

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (database) (src_dbanalyze_running{job=~"^worker.*"})
```
</details>

<br />

### Worker: Database record encrypter

#### worker: records_encrypted_at_rest_percentage

<p class="subtitle">Percentage of database records encrypted at rest</p>

Percentage of database records that are encrypted at rest.
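
For a single instance-wide figure rather than a per-table breakdown, the same two counters can be aggregated without the tableName grouping (a sketch):

```
sum(src_records_encrypted_at_rest_total) / (sum(src_records_encrypted_at_rest_total) + sum(src_records_unencrypted_at_rest_total)) * 100
```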

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max(src_records_encrypted_at_rest_total) by (tableName)) / ((max(src_records_encrypted_at_rest_total) by (tableName)) + (max(src_records_unencrypted_at_rest_total) by (tableName))) * 100
```
</details>

<br />

#### worker: records_encrypted_total

<p class="subtitle">Database records encrypted every 5m</p>

Number of database records encrypted every 5m.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (tableName)(increase(src_records_encrypted_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: records_decrypted_total

<p class="subtitle">Database records decrypted every 5m</p>

Number of database records decrypted every 5m.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (tableName)(increase(src_records_decrypted_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: record_encryption_errors_total

<p class="subtitle">Encryption operation errors every 5m</p>

Number of database record encryption/decryption errors every 5m.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_record_encryption_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

### Worker: Codeintel: Repository commit graph updates

#### worker: codeintel_commit_graph_processor_total

<p class="subtitle">Update operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_commit_graph_processor_99th_percentile_duration

<p class="subtitle">Aggregate successful update operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_commit_graph_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_commit_graph_processor_errors_total

<p class="subtitle">Update operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_commit_graph_processor_error_rate

<p class="subtitle">Update operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Codeintel: Auto-index scheduler

#### worker: codeintel_autoindexing_total

<p class="subtitle">Auto-indexing job scheduler operations every 10m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))
```
</details>

<br />

#### worker: codeintel_autoindexing_99th_percentile_duration

<p class="subtitle">Aggregate successful auto-indexing job scheduler operation duration distribution over 10m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_autoindexing_duration_seconds_bucket{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))
```
</details>

<br />

#### worker: codeintel_autoindexing_errors_total

<p class="subtitle">Auto-indexing job scheduler operation errors every 10m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))
```
</details>

<br />

#### worker: codeintel_autoindexing_error_rate

<p class="subtitle">Auto-indexing job scheduler operation error rate over 10m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100403` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) / (sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) + sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))) * 100
```
</details>

<br />

### Worker: Codeintel: dbstore stats

#### worker: codeintel_uploads_store_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_store_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_store_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

#### worker: codeintel_uploads_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_store_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
```
</details>
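
Where the p99 is dominated by a few outliers, the same histogram supports any quantile; a p90 sketch:

```
histogram_quantile(0.90, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
```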

<br />

#### worker: codeintel_uploads_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100513` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Codeintel: lsifstore stats

#### worker: codeintel_uploads_lsifstore_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_lsifstore_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_lsifstore_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100602` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_lsifstore_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100603` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

#### worker: codeintel_uploads_lsifstore_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_lsifstore_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))
```
</details>

<br />

#### worker: codeintel_uploads_lsifstore_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100612` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: codeintel_uploads_lsifstore_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100613` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Codeintel: gitserver client

#### worker: gitserver_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100702` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100703` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^worker.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

#### worker: gitserver_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100711` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^worker.*"}[5m])))
```
</details>

<br />

#### worker: gitserver_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100712` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100713` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_gitserver_client_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Repositories

#### worker: syncer_sync_last_time

<p class="subtitle">Time since last sync</p>

A high value here indicates issues synchronizing repo metadata.
If the value is persistently high, make sure all external services have valid tokens.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(timestamp(vector(time()))) - max(src_repoupdater_syncer_sync_last_time)
```
</details>
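
Since `timestamp(vector(time()))` evaluates to the current evaluation time, the expression is effectively "now minus the most recent sync timestamp"; a simpler equivalent sketch, assuming the metric stores the Unix timestamp of the last sync:

```
# seconds since the most recent repo metadata sync
time() - max(src_repoupdater_syncer_sync_last_time)
```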

<br />

#### worker: src_repoupdater_max_sync_backoff

<p class="subtitle">Time since oldest sync</p>

Refer to the [alerts reference](alerts#worker-src_repoupdater_max_sync_backoff) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_repoupdater_max_sync_backoff)
```
</details>

<br />

#### worker: src_repoupdater_syncer_sync_errors_total

<p class="subtitle">Site level external service sync error rate</p>

Refer to the [alerts reference](alerts#worker-src_repoupdater_syncer_sync_errors_total) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100802` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (family) (rate(src_repoupdater_syncer_sync_errors_total{owner!="user",reason!="invalid_npm_path",reason!="internal_rate_limit"}[5m]))
```
</details>

<br />

#### worker: syncer_sync_start

<p class="subtitle">Repo metadata sync was started</p>

Refer to the [alerts reference](alerts#worker-syncer_sync_start) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (family) (rate(src_repoupdater_syncer_start_sync{family="Syncer.SyncExternalService"}[9h0m0s]))
```
</details>

<br />

#### worker: syncer_sync_duration

<p class="subtitle">95th repositories sync duration</p>

Refer to the [alerts reference](alerts#worker-syncer_sync_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, max by (le, family, success) (rate(src_repoupdater_syncer_sync_duration_seconds_bucket[1m])))
```
</details>

<br />

#### worker: source_duration

<p class="subtitle">95th repositories source duration</p>

Refer to the [alerts reference](alerts#worker-source_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100812` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, max by (le) (rate(src_repoupdater_source_duration_seconds_bucket[1m])))
```
</details>

<br />

#### worker: syncer_synced_repos

<p class="subtitle">Repositories synced</p>

Refer to the [alerts reference](alerts#worker-syncer_synced_repos) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100820` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(rate(src_repoupdater_syncer_synced_repos_total[1m]))
```
</details>

<br />

#### worker: sourced_repos

<p class="subtitle">Repositories sourced</p>

Refer to the [alerts reference](alerts#worker-sourced_repos) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100821` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(rate(src_repoupdater_source_repos_total[1m]))
```
</details>

<br />

#### worker: sched_auto_fetch

<p class="subtitle">Repositories scheduled due to hitting a deadline</p>

Refer to the [alerts reference](alerts#worker-sched_auto_fetch) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100830` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(rate(src_repoupdater_sched_auto_fetch[1m]))
```
</details>

<br />

#### worker: sched_manual_fetch

<p class="subtitle">Repositories scheduled due to user traffic</p>

Check worker logs if this value is persistently high.
This metric is not meaningful if there are no user-added code hosts.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100831` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(rate(src_repoupdater_sched_manual_fetch[1m]))
```
</details>

<br />

#### worker: sched_loops

<p class="subtitle">Scheduler loops</p>

Refer to the [alerts reference](alerts#worker-sched_loops) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100840` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(rate(src_repoupdater_sched_loops[1m]))
```
</details>

<br />

#### worker: src_repoupdater_stale_repos

<p class="subtitle">Repos that haven't been fetched in more than 8 hours</p>

Refer to the [alerts reference](alerts#worker-src_repoupdater_stale_repos) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100841` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_repoupdater_stale_repos)
```
</details>

<br />

#### worker: sched_error

<p class="subtitle">Repositories schedule error rate</p>

Refer to the [alerts reference](alerts#worker-sched_error) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100842` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(rate(src_repoupdater_sched_error[1m]))
```
</details>

<br />

#### worker: src_repoupdater_cleanup_failed_repos

<p class="subtitle">Repos that have failed cleanup more than 5 times consecutively</p>

Refer to the [alerts reference](alerts#worker-src_repoupdater_cleanup_failed_repos) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100843` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_repoupdater_cleanup_failed_repos)
```
</details>

<br />

### Worker: Repo state syncer

#### worker: state_syncer_running

<p class="subtitle">State syncer is running</p>

A value of 1 indicates the state syncer is currently running.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max (src_repo_statesyncer_running)
```
</details>

<br />

#### worker: repos_deleted_total

<p class="subtitle">Total number of repos deleted</p>

The total number of repos deleted across all gitservers by the state syncer.
A high number here is not necessarily an issue; dig deeper into the other charts in this section to determine whether those deletions were correct.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_repo_statesyncer_repos_deleted)
```
</details>

<br />

#### worker: repos_deleted_from_primary_total

<p class="subtitle">Total number of repos deleted from primary</p>

The total number of repos deleted from the primary shard.
Check the reasons why they were deleted.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100902` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (reason) (src_repo_statesyncer_repos_deleted{is_primary="true"})
```
</details>

<br />

#### worker: repos_deleted_from_secondary_total

<p class="subtitle">Total number of repos deleted from secondary</p>

The total number of repos deleted from secondary shards.
Check the reasons why they were deleted.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=100903` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (reason) (src_repo_statesyncer_repos_deleted{is_primary="false"})
```
</details>

<br />

### Worker: External services

#### worker: src_repoupdater_external_services_total

<p class="subtitle">The total number of external services</p>

Refer to the [alerts reference](alerts#worker-src_repoupdater_external_services_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_repoupdater_external_services_total)
```
</details>

<br />

#### worker: repoupdater_queued_sync_jobs_total

<p class="subtitle">The total number of queued sync jobs</p>

Refer to the [alerts reference](alerts#worker-repoupdater_queued_sync_jobs_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_repoupdater_queued_sync_jobs_total)
```
</details>

<br />

#### worker: repoupdater_completed_sync_jobs_total

<p class="subtitle">The total number of completed sync jobs</p>

Refer to the [alerts reference](alerts#worker-repoupdater_completed_sync_jobs_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_repoupdater_completed_sync_jobs_total)
```
</details>

<br />

#### worker: repoupdater_errored_sync_jobs_percentage

<p class="subtitle">The percentage of external services that have failed their most recent sync</p>

Refer to the [alerts reference](alerts#worker-repoupdater_errored_sync_jobs_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_repoupdater_errored_sync_jobs_percentage)
```
</details>

<br />

#### worker: github_graphql_rate_limit_remaining

<p class="subtitle">Remaining calls to GitHub graphql API before hitting the rate limit</p>

Refer to the [alerts reference](alerts#worker-github_graphql_rate_limit_remaining) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (src_github_rate_limit_remaining_v2{resource="graphql"})
```
</details>

<br />

#### worker: github_rest_rate_limit_remaining

<p class="subtitle">Remaining calls to GitHub rest API before hitting the rate limit</p>

Refer to the [alerts reference](alerts#worker-github_rest_rate_limit_remaining) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101021` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (src_github_rate_limit_remaining_v2{resource="rest"})
```
</details>

<br />

#### worker: github_search_rate_limit_remaining

<p class="subtitle">Remaining calls to GitHub search API before hitting the rate limit</p>

Refer to the [alerts reference](alerts#worker-github_search_rate_limit_remaining) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101022` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (src_github_rate_limit_remaining_v2{resource="search"})
```
</details>

<br />

#### worker: github_graphql_rate_limit_wait_duration

<p class="subtitle">Time spent waiting for the GitHub graphql API rate limiter</p>

Indicates how long we're waiting on the rate limit once it has been exceeded.
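
To compare waiting across the three GitHub API resources at once, the same counter can be aggregated by resource (a sketch):

```
# seconds spent waiting per second, by API resource (graphql, rest, search)
sum by (resource) (rate(src_github_rate_limit_wait_duration_seconds[5m]))
```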

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101030` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="graphql"}[5m]))
```
</details>

<br />

#### worker: github_rest_rate_limit_wait_duration

<p class="subtitle">Time spent waiting for the GitHub rest API rate limiter</p>

Indicates how long we're waiting on the rate limit once it has been exceeded.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101031` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="rest"}[5m]))
```
</details>

<br />

#### worker: github_search_rate_limit_wait_duration

<p class="subtitle">Time spent waiting for the GitHub search API rate limiter</p>

Indicates how long we're waiting on the rate limit once it has been exceeded.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101032` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="search"}[5m]))
```
</details>

<br />

#### worker: gitlab_rest_rate_limit_remaining

<p class="subtitle">Remaining calls to GitLab rest API before hitting the rate limit</p>

Refer to the [alerts reference](alerts#worker-gitlab_rest_rate_limit_remaining) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101040` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (src_gitlab_rate_limit_remaining{resource="rest"})
```
</details>

<br />

#### worker: gitlab_rest_rate_limit_wait_duration

<p class="subtitle">Time spent waiting for the GitLab rest API rate limiter</p>

Indicates how long we're waiting on the rate limit once it has been exceeded.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101041` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (rate(src_gitlab_rate_limit_wait_duration_seconds{resource="rest"}[5m]))
```
</details>

<br />

#### worker: src_internal_rate_limit_wait_duration_bucket

<p class="subtitle">95th percentile time spent successfully waiting on our internal rate limiter</p>

Indicates how long we're waiting on our internal rate limiter when communicating with a code host.
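
An average wait per URN can also be derived from the histogram's companion series; a sketch (it assumes the standard `_sum` series accompanies the `_bucket`/`_count` series used by the panels here):

```
# mean successful wait time per URN over 5m
sum by (urn) (rate(src_internal_rate_limit_wait_duration_sum{failed="false"}[5m]))
  / sum by (urn) (rate(src_internal_rate_limit_wait_duration_count{failed="false"}[5m]))
```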

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101050` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_internal_rate_limit_wait_duration_bucket{failed="false"}[5m])) by (le, urn))
```
</details>

<br />

#### worker: src_internal_rate_limit_wait_error_count

<p class="subtitle">Rate of failures waiting on our internal rate limiter</p>

The rate at which waits on our internal rate limiter fail.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101051` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (urn) (rate(src_internal_rate_limit_wait_duration_count{failed="true"}[5m]))
```
</details>

<br />

### Worker: Permissions

#### worker: user_success_syncs_total

<p class="subtitle">Total number of user permissions syncs</p>

Indicates the total number of user permissions syncs completed.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_repo_perms_syncer_success_syncs{type="user"})
```
</details>

<br />

#### worker: user_success_syncs

<p class="subtitle">Number of user permissions syncs [5m]</p>

Indicates the number of users permissions syncs completed.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_repo_perms_syncer_success_syncs{type="user"}[5m]))
```
</details>

<br />

#### worker: user_initial_syncs

<p class="subtitle">Number of first user permissions syncs [5m]</p>

Indicates the number of permissions syncs done for the first time for the user.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_repo_perms_syncer_initial_syncs{type="user"}[5m]))
```
</details>

<br />
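
For a rough sense of how many of these syncs are first-time syncs, the two series above can be combined; a sketch, assuming first-time syncs are also counted in the successful-sync counter:

```
sum(increase(src_repo_perms_syncer_initial_syncs{type="user"}[5m]))
/
sum(increase(src_repo_perms_syncer_success_syncs{type="user"}[5m]))
```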

#### worker: repo_success_syncs_total

<p class="subtitle">Total number of repo permissions syncs</p>

Indicates the total number of repo permissions syncs completed.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_repo_perms_syncer_success_syncs{type="repo"})
```
</details>

<br />

#### worker: repo_success_syncs

<p class="subtitle">Number of repo permissions syncs over 5m</p>

Indicates the number of repo permissions syncs completed.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_repo_perms_syncer_success_syncs{type="repo"}[5m]))
```
</details>

<br />

#### worker: repo_initial_syncs

<p class="subtitle">Number of first repo permissions syncs over 5m</p>

Indicates the number of permissions syncs performed for the first time for a repo.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_repo_perms_syncer_initial_syncs{type="repo"}[5m]))
```
</details>

<br />

#### worker: users_consecutive_sync_delay

<p class="subtitle">Max duration between two consecutive permissions sync for user</p>

Indicates the max delay between two consecutive permissions sync for a user during the period.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101120` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(max_over_time (src_repo_perms_syncer_perms_consecutive_sync_delay{type="user"} [1m]))
```
</details>

<br />

#### worker: repos_consecutive_sync_delay

<p class="subtitle">Max duration between two consecutive permissions sync for repo</p>

Indicates the max delay between two consecutive permissions sync for a repo during the period.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101121` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(max_over_time (src_repo_perms_syncer_perms_consecutive_sync_delay{type="repo"} [1m]))
```
</details>

<br />

#### worker: users_first_sync_delay

<p class="subtitle">Max duration between user creation and first permissions sync</p>

Indicates the max delay between user creation and the user's first permissions sync.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101130` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(max_over_time(src_repo_perms_syncer_perms_first_sync_delay{type="user"}[1m]))
```
</details>

<br />

#### worker: repos_first_sync_delay

<p class="subtitle">Max duration between repo creation and first permissions sync over 1m</p>

Indicates the max delay between repo creation and the repo's first permissions sync.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101131` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(max_over_time(src_repo_perms_syncer_perms_first_sync_delay{type="repo"}[1m]))
```
</details>

<br />

#### worker: permissions_found_count

<p class="subtitle">Number of permissions found during user/repo permissions sync</p>

Indicates the number of permissions found during user/repo permissions syncs.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101140` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (type) (src_repo_perms_syncer_perms_found)
```
</details>

<br />

#### worker: permissions_found_avg

<p class="subtitle">Average number of permissions found during permissions sync per user/repo</p>

Indicates the average number of permissions found during permissions sync per user/repo.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101141` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
avg by (type) (src_repo_perms_syncer_perms_found)
```
</details>

<br />

#### worker: perms_syncer_outdated_perms

<p class="subtitle">Number of entities with outdated permissions</p>

Refer to the [alerts reference](alerts#worker-perms_syncer_outdated_perms) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101150` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (type) (src_repo_perms_syncer_outdated_perms)
```
</details>

<br />

#### worker: perms_syncer_sync_duration

<p class="subtitle">95th permissions sync duration</p>

Refer to the [alerts reference](alerts#worker-perms_syncer_sync_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101160` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, max by (le, type) (rate(src_repo_perms_syncer_sync_duration_seconds_bucket[1m])))
```
</details>

<br />

#### worker: perms_syncer_sync_errors

<p class="subtitle">Permissions sync error rate</p>

Permissions sync errors are often transient and rarely actionable.
- Check the network connectivity between Sourcegraph and the code host.
- Check whether the API rate limit quota is exhausted on the code host.

A smoothed variant of the panel query is sketched after the technical details below.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101170` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (type) (ceil(rate(src_repo_perms_syncer_sync_errors_total[1m])))
```
</details>

<br />
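
Because these errors are often transient, a longer window helps separate blips from sustained failures; a sketch of the panel query smoothed over 1h (the window choice is illustrative):

```
max by (type) (rate(src_repo_perms_syncer_sync_errors_total[1h]))
```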

#### worker: perms_syncer_scheduled_repos_total

<p class="subtitle">Total number of repos scheduled for permissions sync</p>

Indicates how many repositories have been scheduled for a permissions sync.
More information about repository permissions synchronization is available [here](https://sourcegraph.com/docs/admin/permissions/syncing#scheduling).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101171` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(rate(src_repo_perms_syncer_schedule_repos_total[1m]))
```
</details>

<br />

### Worker: Gitserver: Gitserver Client

#### worker: gitserver_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^worker.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

#### worker: gitserver_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_client_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^worker.*"}[5m])))
```
</details>

<br />

#### worker: gitserver_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^worker.*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Gitserver: Gitserver Repository Service Client

#### worker: gitserver_repositoryservice_client_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_repositoryservice_client_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_repositoryservice_client_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_gitserver_repositoryservice_client_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_repositoryservice_client_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_repositoryservice_client_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_gitserver_repositoryservice_client_total{job=~"^worker.*"}[5m])) + sum(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

#### worker: gitserver_repositoryservice_client_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_repositoryservice_client_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_repositoryservice_client_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op,scope)(rate(src_gitserver_repositoryservice_client_duration_seconds_bucket{job=~"^worker.*"}[5m])))
```
</details>

<br />

#### worker: gitserver_repositoryservice_client_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: gitserver_repositoryservice_client_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101313` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,scope)(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^worker.*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_repositoryservice_client_total{job=~"^worker.*"}[5m])) + sum by (op,scope)(increase(src_gitserver_repositoryservice_client_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Batches: dbstore stats

#### worker: batches_dbstore_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_dbstore_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_dbstore_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_dbstore_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101403` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

#### worker: batches_dbstore_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_dbstore_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))
```
</details>

<br />

#### worker: batches_dbstore_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_dbstore_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101413` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Batches: service stats

#### worker: batches_service_total

<p class="subtitle">Aggregate service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_service_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_service_99th_percentile_duration

<p class="subtitle">Aggregate successful service operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_service_errors_total

<p class="subtitle">Aggregate service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_service_error_rate

<p class="subtitle">Aggregate service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

#### worker: batches_service_total

<p class="subtitle">Service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_service_99th_percentile_duration

<p class="subtitle">99th percentile successful service operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m])))
```
</details>

<br />

#### worker: batches_service_errors_total

<p class="subtitle">Service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: batches_service_error_rate

<p class="subtitle">Service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101513` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Codeinsights: insights queue processor

#### worker: query_runner_worker_handlers

<p class="subtitle">Handler active handlers</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_query_runner_worker_processor_handlers{job=~"^worker.*"})
```
</details>

<br />

#### worker: query_runner_worker_processor_total

<p class="subtitle">Handler operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: query_runner_worker_processor_99th_percentile_duration

<p class="subtitle">Aggregate successful handler operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_query_runner_worker_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: query_runner_worker_processor_errors_total

<p class="subtitle">Handler operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101612` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: query_runner_worker_processor_error_rate

<p class="subtitle">Handler operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101613` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Codeinsights: dbstore stats

#### worker: workerutil_dbworker_store_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101702` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101703` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_errors_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

#### worker: workerutil_dbworker_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_workerutil_dbworker_store_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101711` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m])))
```
</details>

<br />

#### worker: workerutil_dbworker_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101712` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_workerutil_dbworker_store_errors_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101713` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_workerutil_dbworker_store_errors_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_errors_total{domain='insights_query_runner_jobs',job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Periodic Goroutines

#### worker: running_goroutines

<p class="subtitle">Number of currently running periodic goroutines</p>

The number of currently running periodic goroutines by name and job.
A value of 0 indicates the routine isn't currently running; it is awaiting its next scheduled run.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (src_periodic_goroutine_running{job=~".*worker.*"})
```
</details>

<br />

#### worker: goroutine_success_rate

<p class="subtitle">Success rate for periodic goroutine executions</p>

The rate of successful executions of each periodic goroutine.
A low or zero value could indicate that a routine is stalled or encountering errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_total{job=~".*worker.*"}[5m]))
```
</details>

<br />

#### worker: goroutine_error_rate

<p class="subtitle">Error rate for periodic goroutine executions</p>

The rate of errors encountered by each periodic goroutine.
A sustained high error rate may indicate a problem with the routine's configuration or dependencies.

Refer to the [alerts reference](alerts#worker-goroutine_error_rate) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_errors_total{job=~".*worker.*"}[5m]))
```
</details>

<br />

#### worker: goroutine_error_percentage

<p class="subtitle">Percentage of periodic goroutine executions that result in errors</p>

The percentage of executions that result in errors for each periodic goroutine.
A value above 5% indicates that a significant portion of routine executions are failing.

Refer to the [alerts reference](alerts#worker-goroutine_error_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_errors_total{job=~".*worker.*"}[5m])) / sum by (name, job_name) (rate(src_periodic_goroutine_total{job=~".*worker.*"}[5m]) > 0) * 100
```
</details>

<br />
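
To investigate a single routine, the same expression can be pinned to one `name` label; in this sketch `"my-routine"` is a hypothetical value, so substitute a routine name from the panel legend:

```
# "my-routine" is a placeholder; use a real routine name from the panel legend
sum by (job_name) (rate(src_periodic_goroutine_errors_total{name="my-routine",job=~".*worker.*"}[5m]))
/
sum by (job_name) (rate(src_periodic_goroutine_total{name="my-routine",job=~".*worker.*"}[5m]) > 0) * 100
```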

#### worker: goroutine_handler_duration

<p class="subtitle">95th percentile handler execution time</p>

The 95th percentile execution time for each periodic goroutine handler.
Longer durations might indicate increased load or processing time.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101820` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_duration_seconds_bucket{job=~".*worker.*"}[5m])))
```
</details>

<br />

#### worker: goroutine_loop_duration

<p class="subtitle">95th percentile loop cycle time</p>

The 95th percentile loop cycle time for each periodic goroutine (excluding sleep time).
This represents how long a complete loop iteration takes before sleeping for the next interval.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101821` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_loop_duration_seconds_bucket{job=~".*worker.*"}[5m])))
```
</details>

<br />

#### worker: tenant_processing_duration

<p class="subtitle">95th percentile tenant processing time</p>

The 95th percentile processing time for individual tenants within periodic goroutines.
Higher values indicate that tenant processing is taking longer and may affect overall performance.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101830` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_tenant_duration_seconds_bucket{job=~".*worker.*"}[5m])))
```
</details>

<br />

#### worker: tenant_processing_max

<p class="subtitle">Maximum tenant processing time</p>

The maximum processing time for individual tenants within periodic goroutines.
Consistently high values might indicate problematic tenants or inefficient processing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101831` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name, job_name) (rate(src_periodic_goroutine_tenant_duration_seconds_sum{job=~".*worker.*"}[5m]) / rate(src_periodic_goroutine_tenant_duration_seconds_count{job=~".*worker.*"}[5m]))
```
</details>

<br />

#### worker: tenant_count

<p class="subtitle">Number of tenants processed per routine</p>

The number of tenants processed by each periodic goroutine.
Unexpected changes can indicate tenant configuration issues or scaling events.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101840` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name, job_name) (src_periodic_goroutine_tenant_count{job=~".*worker.*"})
```
</details>

<br />

#### worker: tenant_success_rate

<p class="subtitle">Rate of successful tenant processing operations</p>

The rate of successful tenant processing operations.
A healthy routine should maintain a consistent processing rate.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101841` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_tenant_success_total{job=~".*worker.*"}[5m]))
```
</details>

<br />

#### worker: tenant_error_rate

<p class="subtitle">Rate of tenant processing errors</p>

The rate of tenant processing operations that result in errors.
Consistent errors indicate problems with specific tenants.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101850` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*worker.*"}[5m]))
```
</details>

<br />

#### worker: tenant_error_percentage

<p class="subtitle">Percentage of tenant operations resulting in errors</p>

The percentage of tenant operations that result in errors.
Values above 5% indicate significant tenant processing problems.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101851` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*worker.*"}[5m])) / (sum by (name, job_name) (rate(src_periodic_goroutine_tenant_success_total{job=~".*worker.*"}[5m])) + sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*worker.*"}[5m])))) * 100
```
</details>

<br />

### Worker: Database connections

#### worker: max_open_conns

<p class="subtitle">Maximum open</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="worker"})
```
</details>

<br />

#### worker: open_conns

<p class="subtitle">Established</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_open{app_name="worker"})
```
</details>

<br />

#### worker: in_use

<p class="subtitle">Used</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101910` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="worker"})
```
</details>

<br />

#### worker: idle

<p class="subtitle">Idle</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101911` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="worker"})
```
</details>

<br />

#### worker: mean_blocked_seconds_per_conn_request

<p class="subtitle">Mean blocked seconds per conn request</p>

Refer to the [alerts reference](alerts#worker-mean_blocked_seconds_per_conn_request) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101920` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="worker"}[5m]))
```
</details>

<br />
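
When this ratio looks high, inspecting the numerator on its own can help; a sketch of the total seconds clients spent blocked waiting for a connection in each 5m window:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="worker"}[5m]))
```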

#### worker: closed_max_idle

<p class="subtitle">Closed by SetMaxIdleConns</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101930` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="worker"}[5m]))
```
</details>

<br />

#### worker: closed_max_lifetime

<p class="subtitle">Closed by SetConnMaxLifetime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101931` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="worker"}[5m]))
```
</details>

<br />

#### worker: closed_max_idle_time

<p class="subtitle">Closed by SetConnMaxIdleTime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=101932` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="worker"}[5m]))
```
</details>

<br />

### Worker: Worker (CPU, Memory)

#### worker: cpu_usage_percentage

<p class="subtitle">CPU usage</p>

Refer to the [alerts reference](alerts#worker-cpu_usage_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}
```
</details>

<br />

#### worker: memory_usage_percentage

<p class="subtitle">Memory usage percentage (total)</p>

An estimate for the active memory in use, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}
```
</details>

<br />

#### worker: memory_working_set_bytes

<p class="subtitle">Memory usage bytes (total)</p>

An estimate for the active memory in use in bytes, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102002` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_memory_working_set_bytes{name=~"^worker.*"})
```
</details>

<br />

#### worker: memory_rss

<p class="subtitle">Memory (RSS)</p>

The total anonymous memory in use by the application, which includes Go stack and heap. This memory is non-reclaimable, and high usage may trigger OOM kills. Note: the metric is named RSS to match the cadvisor name, but "anonymous" is more accurate.

Refer to the [alerts reference](alerts#worker-memory_rss) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_rss{name=~"^worker.*"} / container_spec_memory_limit_bytes{name=~"^worker.*"}) by (name) * 100.0 
```
</details>

<br />
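
The panel reports RSS as a percentage of the memory limit; to see absolute bytes instead, the raw cadvisor series can be queried directly. A sketch:

```
max by (name) (container_memory_rss{name=~"^worker.*"})
```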

#### worker: memory_total_active_file

<p class="subtitle">Memory usage (active file)</p>

This metric shows the total active file-backed memory currently in use by the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_total_active_file_bytes{name=~"^worker.*"} / container_spec_memory_limit_bytes{name=~"^worker.*"}) by (name) * 100.0 
```
</details>

<br />

#### worker: memory_kernel_usage

<p class="subtitle">Memory usage (kernel)</p>

The kernel usage metric shows the amount of memory used by the kernel on behalf of the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_kernel_usage{name=~"^worker.*"} / container_spec_memory_limit_bytes{name=~"^worker.*"}) by (name) * 100.0 
```
</details>

<br />

### Worker: Container monitoring (not available on server)

#### worker: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for other reasons.

- **Kubernetes:**
	- Determine if the pod was OOM killed using `kubectl describe pod worker` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p worker`.
- **Docker Compose:**
	- Determine if the pod was OOM killed using `docker inspect -f '\{\{json .State\}\}' worker` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the worker container in `docker-compose.yml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs worker` (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^worker.*"}) > 60)
```
</details>

<br />
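
For a point-in-time view rather than a count of events, you can inspect how long ago each worker container was last seen; a sketch (the result is in seconds):

```
time() - container_last_seen{name=~"^worker.*"}
```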

#### worker: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

Refer to the [alerts reference](alerts#worker-container_cpu_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}
```
</details>

<br />

#### worker: container_memory_usage

<p class="subtitle">Container memory usage by instance</p>

Refer to the [alerts reference](alerts#worker-container_memory_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}
```
</details>

<br />

#### worker: fs_io_operations

<p class="subtitle">Filesystem reads and writes rate by instance over 1h</p>

This value indicates the number of filesystem read and write operations by containers of this service.
When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with worker issues.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(name) (rate(container_fs_reads_total{name=~"^worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^worker.*"}[1h]))
```
</details>

<br />
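
If the combined rate spikes, splitting reads from writes narrows down the culprit; a sketch with two separate expressions over the same underlying counters:

```
# reads per second, averaged over 1h
sum by(name) (rate(container_fs_reads_total{name=~"^worker.*"}[1h]))

# writes per second, averaged over 1h
sum by(name) (rate(container_fs_writes_total{name=~"^worker.*"}[1h]))
```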

### Worker: Provisioning indicators (not available on server)

#### worker: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

Refer to the [alerts reference](alerts#worker-provisioning_container_cpu_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[1d])
```
</details>

<br />

#### worker: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

Refer to the [alerts reference](alerts#worker-provisioning_container_memory_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[1d])
```
</details>

<br />

#### worker: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

Refer to the [alerts reference](alerts#worker-provisioning_container_cpu_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[5m])
```
</details>

<br />

#### worker: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

Refer to the [alerts reference](alerts#worker-provisioning_container_memory_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[5m])
```
</details>

<br />

#### worker: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer.
Frequent occurrences are an indicator of underprovisioning.

Refer to the [alerts reference](alerts#worker-container_oomkill_events_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^worker.*"})
```
</details>

<br />
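
Because the panel shows a cumulative counter, a steady non-zero value is expected after any past kill; to see only new kills within a window, a sketch using `increase` (the 1d window is illustrative):

```
max by (name) (increase(container_oom_events_total{name=~"^worker.*"}[1d]))
```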

### Worker: Golang runtime monitoring

#### worker: go_goroutines

<p class="subtitle">Maximum active goroutines</p>

A high value here indicates a possible goroutine leak.

Refer to the [alerts reference](alerts#worker-go_goroutines) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_goroutines{job=~".*worker"})
```
</details>

<br />
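
A leak usually shows up as a sustained upward slope rather than a high absolute value; a sketch estimating the per-second growth of the goroutine count over 30m (the window is illustrative):

```
max by (instance) (deriv(go_goroutines{job=~".*worker"}[30m]))
```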

#### worker: go_gc_duration_seconds

<p class="subtitle">Maximum go garbage collection duration</p>

Refer to the [alerts reference](alerts#worker-go_gc_duration_seconds) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_gc_duration_seconds{job=~".*worker"})
```
</details>

<br />

### Worker: Kubernetes monitoring (only available on Kubernetes)

#### worker: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#worker-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*worker"}) / count by (app) (up{app=~".*worker"}) * 100
```
</details>

<br />

### Worker: Own: repo indexer dbstore

#### worker: workerutil_dbworker_store_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_total{domain='own_background_worker_store',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain='own_background_worker_store',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='own_background_worker_store',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_dbworker_store_errors_total{domain='own_background_worker_store',job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_total{domain='own_background_worker_store',job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_errors_total{domain='own_background_worker_store',job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />
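
Note the shape of this expression: the totals and errors series are separate counters, so the denominator is their sum. As a hypothetical worked example, if a 5-minute window saw 95 operations in the totals series and 5 in the errors series, the panel reports 5 / (95 + 5) * 100 = 5%. Every `*_error_rate` panel in this document follows the same pattern.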

#### worker: workerutil_dbworker_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_workerutil_dbworker_store_total{domain='own_background_worker_store',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain='own_background_worker_store',job=~"^worker.*"}[5m])))
```
</details>

<br />

#### worker: workerutil_dbworker_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_workerutil_dbworker_store_errors_total{domain='own_background_worker_store',job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: workerutil_dbworker_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102513` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_workerutil_dbworker_store_errors_total{domain='own_background_worker_store',job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_total{domain='own_background_worker_store',job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_errors_total{domain='own_background_worker_store',job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Own: repo indexer worker queue

#### worker: own_background_worker_handlers

<p class="subtitle">Handler active handlers</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_own_background_worker_processor_handlers{job=~"^worker.*"})
```
</details>

<br />

#### worker: own_background_worker_processor_total

<p class="subtitle">Handler operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_own_background_worker_processor_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: own_background_worker_processor_99th_percentile_duration

<p class="subtitle">Aggregate successful handler operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_own_background_worker_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: own_background_worker_processor_errors_total

<p class="subtitle">Handler operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102612` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_own_background_worker_processor_errors_total{job=~"^worker.*"}[5m]))
```
</details>

<br />

#### worker: own_background_worker_processor_error_rate

<p class="subtitle">Handler operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102613` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_own_background_worker_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_own_background_worker_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_own_background_worker_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
```
</details>

<br />

### Worker: Own: index job scheduler

#### worker: own_background_index_scheduler_total

<p class="subtitle">Own index job scheduler operations every 10m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_own_background_index_scheduler_total{job=~"^worker.*"}[10m]))
```
</details>

<br />

#### worker: own_background_index_scheduler_99th_percentile_duration

<p class="subtitle">99th percentile successful own index job scheduler operation duration over 10m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_own_background_index_scheduler_duration_seconds_bucket{job=~"^worker.*"}[10m])))
```
</details>

<br />

#### worker: own_background_index_scheduler_errors_total

<p class="subtitle">Own index job scheduler operation errors every 10m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102702` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_own_background_index_scheduler_errors_total{job=~"^worker.*"}[10m]))
```
</details>

<br />

#### worker: own_background_index_scheduler_error_rate

<p class="subtitle">Own index job scheduler operation error rate over 10m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102703` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_own_background_index_scheduler_errors_total{job=~"^worker.*"}[10m])) / (sum by (op)(increase(src_own_background_index_scheduler_total{job=~"^worker.*"}[10m])) + sum by (op)(increase(src_own_background_index_scheduler_errors_total{job=~"^worker.*"}[10m]))) * 100
```
</details>

<br />

### Worker: Site configuration client update latency

#### worker: worker_site_configuration_duration_since_last_successful_update_by_instance

<p class="subtitle">Duration since last successful site configuration update (by instance)</p>

The duration since the configuration client used by the "worker" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
src_conf_client_time_since_last_successful_update_seconds{job=~`^worker.*`,instance=~`${instance:regex}`}
```
</details>

<br />

#### worker: worker_site_configuration_duration_since_last_successful_update_by_instance

<p class="subtitle">Maximum duration since last successful site configuration update (all "worker" instances)</p>

Refer to the [alerts reference](alerts#worker-worker_site_configuration_duration_since_last_successful_update_by_instance) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/worker/worker?viewPanel=102801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`^worker.*`,instance=~`${instance:regex}`}[1m]))
```
</details>

<br />
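
To check staleness ad hoc, a boolean variant of the panel query works well. This is a sketch with the Grafana `${instance:regex}` variable removed so it can run outside the dashboard; the 300-second threshold is an illustrative assumption, not the alert's actual threshold (see the alerts reference above for that):

```
# Returns a series only when some worker instance has gone more than five
# minutes without a successful site configuration update.
max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`^worker.*`}[1m])) > 300
```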

## Searcher

<p class="subtitle">Performs unindexed searches (diff and commit search, text search for unindexed branches).</p>

To see this dashboard, visit `/-/debug/grafana/d/searcher/searcher` on your Sourcegraph instance.

#### searcher: traffic

<p class="subtitle">Requests per second by code over 10m</p>

This graph shows the average number of requests per second searcher has
experienced over the last 10 minutes.

The code is the HTTP status code; 200 is success. We have a special code,
"canceled", which is common when a large search request finds enough results
before searching all possible repos.

Note: A search query is translated into an unindexed search query per unique
(repo, commit). This means a single user query may result in thousands of
requests to searcher.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (code) (rate(searcher_service_request_total{instance=~`${instance:regex}`}[10m]))
```
</details>

<br />
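
To focus an ad-hoc query on just the failing portion of this traffic, exclude the success and cancellation codes described above; a minimal sketch using the same metric, with the dashboard's instance variable dropped:

```
# Requests per second that ended in something other than 200 or "canceled",
# broken out by HTTP status code.
sum by (code) (rate(searcher_service_request_total{code!~"200|canceled"}[10m]))
```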

#### searcher: replica_traffic

<p class="subtitle">Requests per second per replica over 10m</p>

This graph shows the average number of requests per second searcher has
experienced over the last 10 minutes, broken down per replica.

The code is the HTTP status code; 200 is success. We have a special code,
"canceled", which is common when a large search request finds enough results
before searching all possible repos.

Note: A search query is translated into an unindexed search query per unique
(repo, commit). This means a single user query may result in thousands of
requests to searcher.

Refer to the [alerts reference](alerts#searcher-replica_traffic) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance) (rate(searcher_service_request_total{instance=~`${instance:regex}`}[10m]))
```
</details>

<br />

#### searcher: concurrent_requests

<p class="subtitle">Amount of in-flight unindexed search requests (per instance)</p>

This graph is the amount of in-flight unindexed search requests per instance.
Consistently high numbers here indicate you may need to scale out searcher.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance) (searcher_service_running{instance=~`${instance:regex}`})
```
</details>

<br />

#### searcher: unindexed_search_request_errors

<p class="subtitle">Unindexed search request errors every 5m by code</p>

Refer to the [alerts reference](alerts#searcher-unindexed_search_request_errors) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (code)(increase(searcher_service_request_total{code!="200",code!="canceled",instance=~`${instance:regex}`}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total{instance=~`${instance:regex}`}[5m])) * 100
```
</details>

<br />
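
The query above divides a per-code numerator by a single aggregate denominator. The `ignoring(code) group_left` modifier is what makes that many-to-one division legal in PromQL: the denominator carries no `code` label, so matching must ignore it, and `group_left` keeps one result per code from the left-hand side. A stripped-down sketch of the same construct:

```
# Many-to-one division: one aggregate denominator, one result per code.
sum by (code) (increase(searcher_service_request_total[5m]))
  / ignoring(code) group_left
sum(increase(searcher_service_request_total[5m]))
```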

### Searcher: Cache store

#### searcher: store_fetching

<p class="subtitle">Amount of in-flight unindexed search requests fetching code from gitserver (per instance)</p>

Before we can search a commit we fetch the code from gitserver then cache it
for future search requests. This graph is the current number of search
requests which are in the state of fetching code from gitserver.

Generally this number should remain low since fetching code is fast, but
expect bursts. In the case of instances with a monorepo you would expect this
number to stay low for the duration of fetching the code (which in some cases
can take many minutes).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance) (searcher_store_fetching{instance=~`${instance:regex}`})
```
</details>

<br />

#### searcher: store_fetching_waiting

<p class="subtitle">Amount of in-flight unindexed search requests waiting to fetch code from gitserver (per instance)</p>

We limit the number of requests which can fetch code to prevent overwhelming
gitserver. This gauge is the number of requests waiting to be allowed to speak
to gitserver.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance) (searcher_store_fetch_queue_size{instance=~`${instance:regex}`})
```
</details>

<br />

#### searcher: store_fetching_fail

<p class="subtitle">Amount of unindexed search requests that failed while fetching code from gitserver over 10m (per instance)</p>

This graph should be zero, since fetching happens in the background and is
not influenced by user timeouts and the like. Upticks are expected during
gitserver rollouts. If this graph regularly shows non-zero values, please
reach out to support.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance) (rate(searcher_store_fetch_failed{instance=~`${instance:regex}`}[10m]))
```
</details>

<br />

### Searcher: Index use

#### searcher: searcher_hybrid_final_state_total

<p class="subtitle">Hybrid search final state over 10m</p>

This graph is about our interactions with the search index (zoekt) to help
complete unindexed search requests. Searcher will use indexed search for the
files that have not changed between the unindexed commit and the index.

This graph should mostly be "success". The next most common state should be
"search-canceled", which happens when result limits are hit or the user
starts a new search. After that comes "diff-too-large", which happens if the
commit is too far from the indexed commit. Any other state should be rare and
is likely a sign for further investigation.

Note: On sourcegraph.com "zoekt-list-missing" is also common, since
sourcegraph.com indexes only a subset of repositories.

For a full list of possible state see
[recordHybridFinalState](https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/sourcegraph/sourcegraph-public-snapshot%24+f:cmd/searcher+recordHybridFinalState).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (state)(increase(searcher_hybrid_final_state_total{instance=~`${instance:regex}`}[10m]))
```
</details>

<br />
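
To track the healthy share of these outcomes over time, the "success" state described above can be divided by the total; a sketch using the same metric:

```
# Percentage of hybrid searches reaching the "success" final state.
sum(increase(searcher_hybrid_final_state_total{state="success"}[10m]))
  / sum(increase(searcher_hybrid_final_state_total[10m])) * 100
```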

#### searcher: searcher_hybrid_retry_total

<p class="subtitle">Hybrid search retrying over 10m</p>

This graph should mostly be 0. Retries happen when the underlying index
changes while a search is in flight, or when Zoekt goes down. Occasional
bursts can therefore be expected, but if this graph is regularly above 0 it
is a sign for further investigation.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (reason)(increase(searcher_hybrid_retry_total{instance=~`${instance:regex}`}[10m]))
```
</details>

<br />

### Searcher: Cache disk I/O metrics

#### searcher: cache_disk_reads_sec

<p class="subtitle">Read request rate over 1m (per instance)</p>

The number of read requests that were issued to the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### searcher: cache_disk_writes_sec

<p class="subtitle">Write request rate over 1m (per instance)</p>

The number of write requests that were issued to the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### searcher: cache_disk_read_throughput

<p class="subtitle">Read throughput over 1m (per instance)</p>

The amount of data that was read from the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### searcher: cache_disk_write_throughput

<p class="subtitle">Write throughput over 1m (per instance)</p>

The amount of data that was written to the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### searcher: cache_disk_read_duration

<p class="subtitle">Average read duration over 1m (per instance)</p>

The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100320` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### searcher: cache_disk_write_duration

<p class="subtitle">Average write duration over 1m (per instance)</p>

The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100321` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### searcher: cache_disk_read_request_size

<p class="subtitle">Average read request size over 1m (per instance)</p>

The average size of read requests that were issued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100330` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### searcher: cache_disk_write_request_size

<p class="subtitle">Average write request size over 1m (per instance)</p>

The average size of write requests that were issued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100331` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### searcher: cache_disk_reads_merged_sec

<p class="subtitle">Merged read request rate over 1m (per instance)</p>

The number of read requests merged per second that were queued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100340` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### searcher: cache_disk_writes_merged_sec

<p class="subtitle">Merged writes request rate over 1m (per instance)</p>

The number of write requests merged per second that were queued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100341` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### searcher: cache_disk_average_queue_size

<p class="subtitle">Average queue size over 1m (per instance)</p>

The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), searcher could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100350` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />
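
The expression works because `node_disk_io_time_weighted_seconds_total` accumulates queue-depth-weighted seconds; by Little's law, its per-second rate equals the average number of requests queued or in service, which is how iostat computes avgqu-sz. For example, a rate of 0.5 means the device averaged half an outstanding request over the window.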

### Searcher: Searcher GRPC server metrics

#### searcher: searcher_grpc_request_rate_all_methods

<p class="subtitle">Request rate across all methods over 2m</p>

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m]))
```
</details>

<br />

#### searcher: searcher_grpc_request_rate_per_method

<p class="subtitle">Request rate per-method over 2m</p>

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method)
```
</details>

<br />

#### searcher: searcher_error_percentage_all_methods

<p class="subtitle">Error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m]))) ))
```
</details>

<br />

#### searcher: searcher_grpc_error_percentage_per_method

<p class="subtitle">Error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${searcher_method:regex}`,grpc_code!="OK",instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method)) ))
```
</details>

<br />

#### searcher: searcher_p99_response_time_per_method

<p class="subtitle">99th percentile response time per method over 2m</p>

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100420` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
```
</details>

<br />

#### searcher: searcher_p90_response_time_per_method

<p class="subtitle">90th percentile response time per method over 2m</p>

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100421` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
```
</details>

<br />

#### searcher: searcher_p75_response_time_per_method

<p class="subtitle">75th percentile response time per method over 2m</p>

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100422` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
```
</details>

<br />

#### searcher: searcher_p99_9_response_size_per_method

<p class="subtitle">99.9th percentile total response size per method over 2m</p>

The 99.9th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100430` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
```
</details>

<br />

#### searcher: searcher_p90_response_size_per_method

<p class="subtitle">90th percentile total response size per method over 2m</p>

The 90th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100431` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
```
</details>

<br />

#### searcher: searcher_p75_response_size_per_method

<p class="subtitle">75th percentile total response size per method over 2m</p>

The 75th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100432` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
```
</details>

<br />

#### searcher: searcher_p99_9_invididual_sent_message_size_per_method

<p class="subtitle">99.9th percentile individual sent message size per method over 2m</p>

The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100440` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
```
</details>

<br />

#### searcher: searcher_p90_invididual_sent_message_size_per_method

<p class="subtitle">90th percentile individual sent message size per method over 2m</p>

The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100441` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
```
</details>

<br />

#### searcher: searcher_p75_invididual_sent_message_size_per_method

<p class="subtitle">75th percentile individual sent message size per method over 2m</p>

The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100442` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
```
</details>

<br />

#### searcher: searcher_grpc_response_stream_message_count_per_method

<p class="subtitle">Average streaming response message count per-method over 2m</p>

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100450` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method)))
```
</details>

<br />

#### searcher: searcher_grpc_all_codes_per_method

<p class="subtitle">Response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100460` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_handled_total{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method, grpc_code)
```
</details>

<br />
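
During an incident it is often more useful to drop the successful responses from this panel's query; a hedged ad-hoc variant with the dashboard variables removed:

```
# Rate of non-OK gRPC response codes per method for the searcher service.
sum by (grpc_method, grpc_code) (rate(grpc_server_handled_total{grpc_code!="OK",grpc_service=~"searcher.v1.SearcherService"}[2m]))
```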

### Searcher: Searcher GRPC "internal error" metrics

#### searcher: searcher_grpc_clients_error_percentage_all_methods

<p class="subtitle">Client baseline error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "searcher" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService"}[2m])))))))
```
</details>

<br />

#### searcher: searcher_grpc_clients_error_percentage_per_method

<p class="subtitle">Client baseline error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "searcher" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### searcher: searcher_grpc_clients_all_codes_per_method

<p class="subtitle">Client baseline response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "searcher" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

#### searcher: searcher_grpc_clients_internal_error_percentage_all_methods

<p class="subtitle">Client-observed gRPC internal error percentage across all methods over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "searcher" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "searcher" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService"}[2m])))))))
```
</details>

<br />

#### searcher: searcher_grpc_clients_internal_error_percentage_per_method

<p class="subtitle">Client-observed gRPC internal error percentage per-method over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "searcher" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "searcher" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### searcher: searcher_grpc_clients_internal_error_all_codes_per_method

<p class="subtitle">Client-observed gRPC internal error response code rate per-method over 2m</p>

The rate of gRPC internal-error response codes per method, aggregated across all "searcher" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "searcher" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",is_internal_error="true",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

### Searcher: Searcher GRPC retry metrics

#### searcher: searcher_grpc_clients_retry_percentage_across_all_methods

<p class="subtitle">Client retry percentage across all methods over 2m</p>

The percentage of gRPC requests that were retried across all methods, aggregated across all "searcher" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService"}[2m])))))))
```
</details>

<br />

#### searcher: searcher_grpc_clients_retry_percentage_per_method

<p class="subtitle">Client retry percentage per-method over 2m</p>

The percentage of gRPC requests that were retried, aggregated across all "searcher" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService",is_retried="true",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### searcher: searcher_grpc_clients_retry_count_per_method

<p class="subtitle">Client retry count per-method over 2m</p>

The count of gRPC requests that were retried, aggregated across all "searcher" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100602` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}",is_retried="true"}[2m])) by (grpc_method))
```
</details>

<br />

### Searcher: Codeintel: Symbols API

#### searcher: codeintel_symbols_api_total

<p class="subtitle">Aggregate API operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_api_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_api_99th_percentile_duration

<p class="subtitle">Aggregate successful API operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_symbols_api_duration_seconds_bucket{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_api_errors_total

<p class="subtitle">Aggregate API operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100702` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_api_errors_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_api_error_rate

<p class="subtitle">Aggregate API operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100703` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_api_errors_total{job=~"^searcher.*"}[5m])) / (sum(increase(src_codeintel_symbols_api_total{job=~"^searcher.*"}[5m])) + sum(increase(src_codeintel_symbols_api_errors_total{job=~"^searcher.*"}[5m]))) * 100
```
</details>

<br />

#### searcher: codeintel_symbols_api_total

<p class="subtitle">API operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,parseAmount)(increase(src_codeintel_symbols_api_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_api_99th_percentile_duration

<p class="subtitle">99th percentile successful API operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100711` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op,parseAmount)(rate(src_codeintel_symbols_api_duration_seconds_bucket{job=~"^searcher.*"}[5m])))
```
</details>

<br />

#### searcher: codeintel_symbols_api_errors_total

<p class="subtitle">API operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100712` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_api_error_rate

<p class="subtitle">API operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100713` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^searcher.*"}[5m])) / (sum by (op,parseAmount)(increase(src_codeintel_symbols_api_total{job=~"^searcher.*"}[5m])) + sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^searcher.*"}[5m]))) * 100
```
</details>

<br />

### Searcher: Codeintel: Symbols parser

#### searcher: searcher

<p class="subtitle">In-flight parse jobs</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_codeintel_symbols_parsing{job=~"^searcher.*"})
```
</details>

<br />

#### searcher: searcher

<p class="subtitle">Parser queue size</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_codeintel_symbols_parse_queue_size{job=~"^searcher.*"})
```
</details>

<br />

#### searcher: searcher

<p class="subtitle">Parse queue timeouts</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100802` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_codeintel_symbols_parse_queue_timeouts_total{job=~"^searcher.*"})
```
</details>

<br />
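The query above plots the cumulative timeout counter, so the panel value only ever grows. To see how many timeouts occurred recently instead, a sketch using a 5m increase of the same metric:

```
sum(increase(src_codeintel_symbols_parse_queue_timeouts_total{job=~"^searcher.*"}[5m]))
```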

#### searcher: searcher

<p class="subtitle">Parse failures every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100803` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
rate(src_codeintel_symbols_parse_failed_total{job=~"^searcher.*"}[5m])
```
</details>

<br />

#### searcher: codeintel_symbols_parser_total

<p class="subtitle">Aggregate parser operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_parser_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_parser_99th_percentile_duration

<p class="subtitle">Aggregate successful parser operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_symbols_parser_duration_seconds_bucket{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_parser_errors_total

<p class="subtitle">Aggregate parser operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100812` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_parser_error_rate

<p class="subtitle">Aggregate parser operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100813` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^searcher.*"}[5m])) / (sum(increase(src_codeintel_symbols_parser_total{job=~"^searcher.*"}[5m])) + sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^searcher.*"}[5m]))) * 100
```
</details>

<br />

#### searcher: codeintel_symbols_parser_total

<p class="subtitle">Parser operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100820` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_symbols_parser_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_parser_99th_percentile_duration

<p class="subtitle">99th percentile successful parser operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100821` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_parser_duration_seconds_bucket{job=~"^searcher.*"}[5m])))
```
</details>

<br />

#### searcher: codeintel_symbols_parser_errors_total

<p class="subtitle">Parser operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100822` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_parser_error_rate

<p class="subtitle">Parser operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100823` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^searcher.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_parser_total{job=~"^searcher.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^searcher.*"}[5m]))) * 100
```
</details>

<br />

### Searcher: Codeintel: Symbols cache janitor

#### searcher: searcher

<p class="subtitle">Size in bytes of the on-disk cache</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
src_diskcache_store_symbols_cache_size_bytes
```
</details>

<br />

#### searcher: searcher

<p class="subtitle">Cache eviction operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
rate(src_diskcache_store_symbols_evictions_total[5m])
```
</details>

<br />

#### searcher: searcher

<p class="subtitle">Cache eviction operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=100902` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
rate(src_diskcache_store_symbols_errors_total[5m])
```
</details>

<br />

### Searcher: Codeintel: Symbols repository fetcher

#### searcher: searcher

<p class="subtitle">In-flight repository fetch operations</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
src_codeintel_symbols_fetching
```
</details>

<br />

#### searcher: searcher

<p class="subtitle">Repository fetch queue size</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_codeintel_symbols_fetch_queue_size{job=~"^searcher.*"})
```
</details>

<br />

#### searcher: codeintel_symbols_repository_fetcher_total

<p class="subtitle">Aggregate fetcher operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_repository_fetcher_99th_percentile_duration

<p class="subtitle">Aggregate successful fetcher operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_symbols_repository_fetcher_duration_seconds_bucket{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_repository_fetcher_errors_total

<p class="subtitle">Aggregate fetcher operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_repository_fetcher_error_rate

<p class="subtitle">Aggregate fetcher operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^searcher.*"}[5m])) / (sum(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^searcher.*"}[5m])) + sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^searcher.*"}[5m]))) * 100
```
</details>

<br />

#### searcher: codeintel_symbols_repository_fetcher_total

<p class="subtitle">Fetcher operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_repository_fetcher_99th_percentile_duration

<p class="subtitle">99th percentile successful fetcher operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101021` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_repository_fetcher_duration_seconds_bucket{job=~"^searcher.*"}[5m])))
```
</details>

<br />

#### searcher: codeintel_symbols_repository_fetcher_errors_total

<p class="subtitle">Fetcher operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101022` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_repository_fetcher_error_rate

<p class="subtitle">Fetcher operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101023` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^searcher.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^searcher.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^searcher.*"}[5m]))) * 100
```
</details>

<br />

### Searcher: Codeintel: Symbols gitserver client

#### searcher: codeintel_symbols_gitserver_total

<p class="subtitle">Aggregate gitserver client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_gitserver_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_gitserver_99th_percentile_duration

<p class="subtitle">Aggregate successful gitserver client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_symbols_gitserver_duration_seconds_bucket{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_gitserver_errors_total

<p class="subtitle">Aggregate gitserver client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_gitserver_error_rate

<p class="subtitle">Aggregate gitserver client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^searcher.*"}[5m])) / (sum(increase(src_codeintel_symbols_gitserver_total{job=~"^searcher.*"}[5m])) + sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^searcher.*"}[5m]))) * 100
```
</details>

<br />

#### searcher: codeintel_symbols_gitserver_total

<p class="subtitle">Gitserver client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_symbols_gitserver_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_gitserver_99th_percentile_duration

<p class="subtitle">99th percentile successful gitserver client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_gitserver_duration_seconds_bucket{job=~"^searcher.*"}[5m])))
```
</details>

<br />

#### searcher: codeintel_symbols_gitserver_errors_total

<p class="subtitle">Gitserver client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^searcher.*"}[5m]))
```
</details>

<br />

#### searcher: codeintel_symbols_gitserver_error_rate

<p class="subtitle">Gitserver client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^searcher.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_gitserver_total{job=~"^searcher.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^searcher.*"}[5m]))) * 100
```
</details>

<br />

### Searcher: Rockskip

#### searcher: p95_rockskip_search_request_duration

<p class="subtitle">95th percentile search request duration over 5m</p>

The 95th percentile duration of search requests to Rockskip in seconds. Lower is better.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_rockskip_service_search_request_duration_seconds_bucket[5m])) by (le))
```
</details>

<br />

#### searcher: rockskip_in_flight_search_requests

<p class="subtitle">Number of in-flight search requests</p>

The number of search requests currently being processed by Rockskip.
If there is little traffic and requests are served quickly relative to the Prometheus polling window,
it is possible for this number to be 0 even while search requests are being processed.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_rockskip_service_in_flight_search_requests)
```
</details>

<br />
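Because this is a sampled gauge, short-lived requests can fall between Prometheus scrapes, as noted above. A sketch that instead plots the highest per-instance value observed in each 5m window, which makes brief spikes visible:

```
max(max_over_time(src_rockskip_service_in_flight_search_requests[5m]))
```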

#### searcher: rockskip_search_request_errors

<p class="subtitle">Search request errors every 5m</p>

The number of search requests that returned an error in the last 5 minutes.
The errors tracked here are application errors; gRPC errors are not included.
We generally want this to be 0.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_rockskip_service_search_request_errors[5m]))
```
</details>

<br />

#### searcher: p95_rockskip_index_job_duration

<p class="subtitle">95th percentile index job duration over 5m</p>

The 95th percentile duration of index jobs in seconds.
The range of values is very large because the metric measures quick delta updates as well as full index jobs.
Lower is better.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_rockskip_service_index_job_duration_seconds_bucket[5m])) by (le))
```
</details>

<br />

#### searcher: rockskip_in_flight_index_jobs

<p class="subtitle">Number of in-flight index jobs</p>

The number of index jobs currently being processed by Rockskip.
This includes delta updates as well as full index jobs.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_rockskip_service_in_flight_index_jobs)
```
</details>

<br />

#### searcher: rockskip_index_job_errors

<p class="subtitle">Index job errors every 5m</p>

The number of index jobs that returned an error in the last 5 minutes.
If the errors are persistent, users will see alerts in the UI.
The service logs will contain more detailed information about the kind of errors.
We generally want this to be 0.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_rockskip_service_index_job_errors[5m]))
```
</details>

<br />

#### searcher: rockskip_number_of_repos_indexed

<p class="subtitle">Number of repositories indexed by Rockskip</p>

The number of repositories indexed by Rockskip.
Apart from an initial transient phase in which many repos are being indexed,
this number should be low, relatively stable, and only increase in small increments.
To verify that this number makes sense, compare ROCKSKIP_MIN_REPO_SIZE_MB with the repository sizes reported by the gitserver_repos table.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101220` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_rockskip_service_repos_indexed)
```
</details>

<br />

#### searcher: p95_rockskip_index_queue_age

<p class="subtitle">95th percentile index queue delay over 5m</p>

The 95th percentile age of index jobs in seconds.
A high delay might indicate a resource issue.
Consider increasing indexing bandwidth by either increasing the number of queues or the number of symbol services.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101221` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum(rate(src_rockskip_service_index_queue_age_seconds_bucket[5m])) by (le))
```
</details>

<br />

#### searcher: rockskip_file_parsing_requests

<p class="subtitle">File parsing requests every 5m</p>

The number of search requests in the last 5 minutes that were handled by parsing a single file, as opposed to searching the Rockskip index.
This is an optimization to speed up symbol sidebar queries.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101222` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_rockskip_service_file_parsing_requests[5m]))
```
</details>

<br />

### Searcher: Site configuration client update latency

#### searcher: searcher_site_configuration_duration_since_last_successful_update_by_instance

<p class="subtitle">Duration since last successful site configuration update (by instance)</p>

The duration since the configuration client used by the "searcher" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
src_conf_client_time_since_last_successful_update_seconds{job=~`.*searcher`,instance=~`${instance:regex}`}
```
</details>

<br />
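For a quick ad-hoc check you can filter for instances whose site configuration looks stale. A sketch, with a 300s threshold chosen arbitrarily for illustration (the alert on the next panel defines the supported threshold):

```
src_conf_client_time_since_last_successful_update_seconds{job=~`.*searcher`} > 300
```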

#### searcher: searcher_site_configuration_duration_since_last_successful_update_by_instance

<p class="subtitle">Maximum duration since last successful site configuration update (all "searcher" instances)</p>

Refer to the [alerts reference](alerts#searcher-searcher_site_configuration_duration_since_last_successful_update_by_instance) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`.*searcher`,instance=~`${instance:regex}`}[1m]))
```
</details>

<br />

### Searcher: Periodic Goroutines

#### searcher: running_goroutines

<p class="subtitle">Number of currently running periodic goroutines</p>

The number of currently running periodic goroutines by name and job.
A value of 0 indicates the routine isn't currently running; it is awaiting its next scheduled run.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (src_periodic_goroutine_running{job=~".*searcher.*"})
```
</details>

<br />

#### searcher: goroutine_success_rate

<p class="subtitle">Success rate for periodic goroutine executions</p>

The rate of successful executions of each periodic goroutine.
A low or zero value could indicate that a routine is stalled or encountering errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_total{job=~".*searcher.*"}[5m]))
```
</details>

<br />

#### searcher: goroutine_error_rate

<p class="subtitle">Error rate for periodic goroutine executions</p>

The rate of errors encountered by each periodic goroutine.
A sustained high error rate may indicate a problem with the routine's configuration or dependencies.

Refer to the [alerts reference](alerts#searcher-goroutine_error_rate) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_errors_total{job=~".*searcher.*"}[5m]))
```
</details>

<br />

#### searcher: goroutine_error_percentage

<p class="subtitle">Percentage of periodic goroutine executions that result in errors</p>

The percentage of executions that result in errors for each periodic goroutine.
A value above 5% indicates that a significant portion of routine executions are failing.

Refer to the [alerts reference](alerts#searcher-goroutine_error_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_errors_total{job=~".*searcher.*"}[5m])) / sum by (name, job_name) (rate(src_periodic_goroutine_total{job=~".*searcher.*"}[5m]) > 0) * 100
```
</details>

<br />
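To list only the routines currently above the 5% threshold mentioned above, the panel expression can be wrapped in a comparison; a sketch:

```
(sum by (name, job_name) (rate(src_periodic_goroutine_errors_total{job=~".*searcher.*"}[5m])) / sum by (name, job_name) (rate(src_periodic_goroutine_total{job=~".*searcher.*"}[5m]) > 0) * 100) > 5
```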

#### searcher: goroutine_handler_duration

<p class="subtitle">95th percentile handler execution time</p>

The 95th percentile execution time for each periodic goroutine handler.
Longer durations might indicate increased load or processing time.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101420` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_duration_seconds_bucket{job=~".*searcher.*"}[5m])))
```
</details>

<br />

#### searcher: goroutine_loop_duration

<p class="subtitle">95th percentile loop cycle time</p>

The 95th percentile loop cycle time for each periodic goroutine (excluding sleep time).
This represents how long a complete loop iteration takes before sleeping for the next interval.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101421` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_loop_duration_seconds_bucket{job=~".*searcher.*"}[5m])))
```
</details>

<br />

#### searcher: tenant_processing_duration

<p class="subtitle">95th percentile tenant processing time</p>

The 95th percentile processing time for individual tenants within periodic goroutines.
Higher values indicate that tenant processing is taking longer and may affect overall performance.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101430` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job_name, le) (rate(src_periodic_goroutine_tenant_duration_seconds_bucket{job=~".*searcher.*"}[5m])))
```
</details>

<br />

#### searcher: tenant_processing_max

<p class="subtitle">Maximum tenant processing time</p>

The maximum processing time for individual tenants within periodic goroutines.
Consistently high values might indicate problematic tenants or inefficient processing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101431` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name, job_name) (rate(src_periodic_goroutine_tenant_duration_seconds_sum{job=~".*searcher.*"}[5m]) / rate(src_periodic_goroutine_tenant_duration_seconds_count{job=~".*searcher.*"}[5m]))
```
</details>

<br />

#### searcher: tenant_count

<p class="subtitle">Number of tenants processed per routine</p>

The number of tenants processed by each periodic goroutine.
Unexpected changes can indicate tenant configuration issues or scaling events.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101440` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name, job_name) (src_periodic_goroutine_tenant_count{job=~".*searcher.*"})
```
</details>

<br />

#### searcher: tenant_success_rate

<p class="subtitle">Rate of successful tenant processing operations</p>

The rate of successful tenant processing operations.
A healthy routine should maintain a consistent processing rate.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101441` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_tenant_success_total{job=~".*searcher.*"}[5m]))
```
</details>

<br />

#### searcher: tenant_error_rate

<p class="subtitle">Rate of tenant processing errors</p>

The rate of tenant processing operations that result in errors.
Consistent errors indicate problems with specific tenants.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101450` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*searcher.*"}[5m]))
```
</details>

<br />

#### searcher: tenant_error_percentage

<p class="subtitle">Percentage of tenant operations resulting in errors</p>

The percentage of tenant operations that result in errors.
Values above 5% indicate significant tenant processing problems.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101451` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*searcher.*"}[5m])) / (sum by (name, job_name) (rate(src_periodic_goroutine_tenant_success_total{job=~".*searcher.*"}[5m])) + sum by (name, job_name) (rate(src_periodic_goroutine_tenant_errors_total{job=~".*searcher.*"}[5m])))) * 100
```
</details>

<br />

### Searcher: Database connections

#### searcher: max_open_conns

<p class="subtitle">Maximum open</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="searcher"})
```
</details>

<br />

#### searcher: open_conns

<p class="subtitle">Established</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_open{app_name="searcher"})
```
</details>

<br />

#### searcher: in_use

<p class="subtitle">Used</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="searcher"})
```
</details>

<br />

#### searcher: idle

<p class="subtitle">Idle</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="searcher"})
```
</details>

<br />

#### searcher: mean_blocked_seconds_per_conn_request

<p class="subtitle">Mean blocked seconds per conn request</p>

Refer to the [alerts reference](alerts#searcher-mean_blocked_seconds_per_conn_request) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101520` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="searcher"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="searcher"}[5m]))
```
</details>

<br />

#### searcher: closed_max_idle

<p class="subtitle">Closed by SetMaxIdleConns</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101530` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="searcher"}[5m]))
```
</details>

<br />

#### searcher: closed_max_lifetime

<p class="subtitle">Closed by SetConnMaxLifetime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101531` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="searcher"}[5m]))
```
</details>

<br />

#### searcher: closed_max_idle_time

<p class="subtitle">Closed by SetConnMaxIdleTime</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101532` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="searcher"}[5m]))
```
</details>

<br />

### Searcher: Searcher (CPU, Memory)

#### searcher: cpu_usage_percentage

<p class="subtitle">CPU usage</p>

Refer to the [alerts reference](alerts#searcher-cpu_usage_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}
```
</details>

<br />

#### searcher: memory_usage_percentage

<p class="subtitle">Memory usage percentage (total)</p>

An estimate for the active memory in use, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}
```
</details>

<br />

#### searcher: memory_working_set_bytes

<p class="subtitle">Memory usage bytes (total)</p>

An estimate for the active memory in use in bytes, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101602` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_memory_working_set_bytes{name=~"^searcher.*"})
```
</details>

<br />

#### searcher: memory_rss

<p class="subtitle">Memory (RSS)</p>

The total anonymous memory in use by the application, which includes Go stack and heap. This memory is non-reclaimable, and high usage may trigger OOM kills. Note: the metric is named RSS to match the cadvisor name, but "anonymous" is more accurate.

Refer to the [alerts reference](alerts#searcher-memory_rss) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_rss{name=~"^searcher.*"} / container_spec_memory_limit_bytes{name=~"^searcher.*"}) by (name) * 100.0 
```
</details>

<br />
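The panel expresses anonymous memory as a percentage of the memory limit. When sizing limits it can be more convenient to look at the remaining headroom in bytes; a sketch using the same two metrics:

```
max by (name) (container_spec_memory_limit_bytes{name=~"^searcher.*"} - container_memory_rss{name=~"^searcher.*"})
```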

#### searcher: memory_total_active_file

<p class="subtitle">Memory usage (active file)</p>

This metric shows the total active file-backed memory currently in use by the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_total_active_file_bytes{name=~"^searcher.*"} / container_spec_memory_limit_bytes{name=~"^searcher.*"}) by (name) * 100.0 
```
</details>

<br />

#### searcher: memory_kernel_usage

<p class="subtitle">Memory usage (kernel)</p>

The kernel usage metric shows the amount of memory used by the kernel on behalf of the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101612` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_kernel_usage{name=~"^searcher.*"} / container_spec_memory_limit_bytes{name=~"^searcher.*"}) by (name) * 100.0 
```
</details>

<br />

### Searcher: Container monitoring (not available on server)

#### searcher: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value changing independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
	- Determine if the pod was OOM killed using `kubectl describe pod searcher` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p searcher`.
- **Docker Compose:**
	- Determine if the container was OOM killed using `docker inspect -f '\{\{json .State\}\}' searcher` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the searcher container in `docker-compose.yml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs searcher` (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^searcher.*"}) > 60)
```
</details>

<br />

#### searcher: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

Refer to the [alerts reference](alerts#searcher-container_cpu_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}
```
</details>

<br />

#### searcher: container_memory_usage

<p class="subtitle">Container memory usage by instance</p>

Refer to the [alerts reference](alerts#searcher-container_memory_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101702` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}
```
</details>

<br />

#### searcher: fs_io_operations

<p class="subtitle">Filesystem reads and writes rate by instance over 1h</p>

This value indicates the number of filesystem read and write operations by containers of this service.
When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with searcher issues.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101703` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(name) (rate(container_fs_reads_total{name=~"^searcher.*"}[1h]) + rate(container_fs_writes_total{name=~"^searcher.*"}[1h]))
```
</details>

<br />

### Searcher: Provisioning indicators (not available on server)

#### searcher: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

Refer to the [alerts reference](alerts#searcher-provisioning_container_cpu_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}[1d])
```
</details>

<br />

#### searcher: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

Refer to the [alerts reference](alerts#searcher-provisioning_container_memory_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}[1d])
```
</details>

<br />

#### searcher: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

Refer to the [alerts reference](alerts#searcher-provisioning_container_cpu_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}[5m])
```
</details>

<br />

#### searcher: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

Refer to the [alerts reference](alerts#searcher-provisioning_container_memory_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}[5m])
```
</details>

<br />

#### searcher: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This value indicates the total number of times the container main process or child processes were terminated by OOM killer.
When it occurs frequently, it is an indicator of underprovisioning.

Refer to the [alerts reference](alerts#searcher-container_oomkill_events_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101812` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^searcher.*"})
```
</details>

<br />

### Searcher: Golang runtime monitoring

#### searcher: go_goroutines

<p class="subtitle">Maximum active goroutines</p>

A high value here indicates a possible goroutine leak.

Refer to the [alerts reference](alerts#searcher-go_goroutines) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_goroutines{job=~".*searcher"})
```
</details>

<br />
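A goroutine leak typically shows up as a count that grows steadily rather than oscillating with load. A sketch that surfaces sustained growth over the last hour; the 0.1/s threshold (roughly 360 goroutines per hour) is an arbitrary illustration:

```
max by(instance) (deriv(go_goroutines{job=~".*searcher"}[1h])) > 0.1
```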

#### searcher: go_gc_duration_seconds

<p class="subtitle">Maximum go garbage collection duration</p>

Refer to the [alerts reference](alerts#searcher-go_gc_duration_seconds) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=101901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_gc_duration_seconds{job=~".*searcher"})
```
</details>

<br />

### Searcher: Kubernetes monitoring (only available on Kubernetes)

#### searcher: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#searcher-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/searcher/searcher?viewPanel=102000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*searcher"}) / count by (app) (up{app=~".*searcher"}) * 100
```
</details>

<br />

## Syntect Server

<p class="subtitle">Handles syntax highlighting for code files.</p>

To see this dashboard, visit `/-/debug/grafana/d/syntect-server/syntect-server` on your Sourcegraph instance.

#### syntect-server: syntax_highlighting_errors

<p class="subtitle">Syntax highlighting errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_syntax_highlighting_requests{status="error"}[5m])) / sum(increase(src_syntax_highlighting_requests[5m])) * 100
```
</details>

<br />

#### syntect-server: syntax_highlighting_timeouts

<p class="subtitle">Syntax highlighting timeouts every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_syntax_highlighting_requests{status="timeout"}[5m])) / sum(increase(src_syntax_highlighting_requests[5m])) * 100
```
</details>

<br />

#### syntect-server: syntax_highlighting_panics

<p class="subtitle">Syntax highlighting panics every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_syntax_highlighting_requests{status="panic"}[5m]))
```
</details>

<br />

#### syntect-server: syntax_highlighting_worker_deaths

<p class="subtitle">Syntax highlighter worker deaths every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_syntax_highlighting_requests{status="hss_worker_timeout"}[5m]))
```
</details>

<br />

### Syntect Server: Syntect-server (CPU, Memory)

#### syntect-server: cpu_usage_percentage

<p class="subtitle">CPU usage</p>

Refer to the [alerts reference](alerts#syntect-server-cpu_usage_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}
```
</details>

<br />

#### syntect-server: memory_usage_percentage

<p class="subtitle">Memory usage percentage (total)</p>

An estimate for the active memory in use, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}
```
</details>

<br />

#### syntect-server: memory_working_set_bytes

<p class="subtitle">Memory usage bytes (total)</p>

An estimate for the active memory in use in bytes, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_memory_working_set_bytes{name=~"^syntect-server.*"})
```
</details>

<br />

#### syntect-server: memory_rss

<p class="subtitle">Memory (RSS)</p>

The total anonymous memory in use by the application, which includes Go stack and heap. This memory is non-reclaimable, and high usage may trigger OOM kills. Note: the metric is named RSS to match the cadvisor name, but "anonymous" is more accurate.

Refer to the [alerts reference](alerts#syntect-server-memory_rss) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_rss{name=~"^syntect-server.*"} / container_spec_memory_limit_bytes{name=~"^syntect-server.*"}) by (name) * 100.0 
```
</details>

<br />

#### syntect-server: memory_total_active_file

<p class="subtitle">Memory usage (active file)</p>

This metric shows the total active file-backed memory currently in use by the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_total_active_file_bytes{name=~"^syntect-server.*"} / container_spec_memory_limit_bytes{name=~"^syntect-server.*"}) by (name) * 100.0 
```
</details>

<br />

#### syntect-server: memory_kernel_usage

<p class="subtitle">Memory usage (kernel)</p>

The kernel usage metric shows the amount of memory used by the kernel on behalf of the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_kernel_usage{name=~"^syntect-server.*"} / container_spec_memory_limit_bytes{name=~"^syntect-server.*"}) by (name) * 100.0 
```
</details>

<br />

### Syntect Server: Container monitoring (not available on server)

#### syntect-server: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
	- Determine if the pod was OOM killed using `kubectl describe pod syntect-server` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p syntect-server`.
- **Docker Compose:**
	- Determine if the pod was OOM killed using `docker inspect -f '\{\{json .State\}\}' syntect-server` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the syntect-server container in `docker-compose.yml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs syntect-server` (note this will include logs from the previous and currently running container).
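
To cross-check from Prometheus whether such restarts line up with OOM kills, you can query the cadvisor OOM counter directly (the same metric behind the `container_oomkill_events_total` panel further down this page); a minimal sketch:

```
# Total OOM kill events observed for syntect-server containers
max by (name) (container_oom_events_total{name=~"^syntect-server.*"})
```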

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^syntect-server.*"}) > 60)
```
</details>

<br />

#### syntect-server: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

Refer to the [alerts reference](alerts#syntect-server-container_cpu_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}
```
</details>

<br />

#### syntect-server: container_memory_usage

<p class="subtitle">Container memory usage by instance</p>

Refer to the [alerts reference](alerts#syntect-server-container_memory_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}
```
</details>

<br />

#### syntect-server: fs_io_operations

<p class="subtitle">Filesystem reads and writes rate by instance over 1h</p>

This value indicates the number of filesystem read and write operations by containers of this service.
When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with `syntect-server` issues.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(name) (rate(container_fs_reads_total{name=~"^syntect-server.*"}[1h]) + rate(container_fs_writes_total{name=~"^syntect-server.*"}[1h]))
```
</details>

<br />

### Syntect Server: Provisioning indicators (not available on server)

#### syntect-server: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

Refer to the [alerts reference](alerts#syntect-server-provisioning_container_cpu_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}[1d])
```
</details>

<br />

#### syntect-server: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

Refer to the [alerts reference](alerts#syntect-server-provisioning_container_memory_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}[1d])
```
</details>

<br />

#### syntect-server: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

Refer to the [alerts reference](alerts#syntect-server-provisioning_container_cpu_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}[5m])
```
</details>

<br />

#### syntect-server: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

Refer to the [alerts reference](alerts#syntect-server-provisioning_container_memory_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}[5m])
```
</details>

<br />

#### syntect-server: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This value indicates the total number of times the container main process or child processes were terminated by OOM killer.
When it occurs frequently, it is an indicator of underprovisioning.

Refer to the [alerts reference](alerts#syntect-server-container_oomkill_events_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^syntect-server.*"})
```
</details>

<br />

### Syntect Server: Kubernetes monitoring (only available on Kubernetes)

#### syntect-server: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#syntect-server-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*syntect-server"}) / count by (app) (up{app=~".*syntect-server"}) * 100
```
</details>

<br />

## Zoekt

<p class="subtitle">Indexes repositories, populates the search index, and responds to indexed search queries.</p>

To see this dashboard, visit `/-/debug/grafana/d/zoekt/zoekt` on your Sourcegraph instance.

#### zoekt: total_repos_aggregate

<p class="subtitle">Total number of repos (aggregate)</p>

Sudden changes can be caused by indexing configuration changes.

Additionally, a discrepancy between "index_num_assigned" and "index_queue_cap" could indicate a bug.

Legend:
- index_num_assigned: # of repos assigned to Zoekt
- index_num_indexed: # of repos Zoekt has indexed
- index_queue_cap: # of repos Zoekt is aware of, including those that it has finished indexing
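
To surface the discrepancy between `index_num_assigned` and `index_queue_cap` mentioned above, a minimal sketch (not a generated panel) that should hover around zero on a healthy instance:

```
# A persistent non-zero gap could indicate a bug
sum(index_queue_cap) - sum(index_num_assigned)
```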

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (__name__) ({__name__=~"index_num_assigned|index_num_indexed|index_queue_cap"})
```
</details>

<br />

#### zoekt: total_repos_per_instance

<p class="subtitle">Total number of repos (per instance)</p>

Sudden changes can be caused by indexing configuration changes.

Additionally, a discrepancy between "index_num_assigned" and "index_queue_cap" could indicate a bug.

Legend:
- index_num_assigned: # of repos assigned to Zoekt
- index_num_indexed: # of repos Zoekt has indexed
- index_queue_cap: # of repos Zoekt is aware of, including those that it has finished indexing

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (__name__, instance) ({__name__=~"index_num_assigned|index_num_indexed|index_queue_cap",instance=~"${instance:regex}"})
```
</details>

<br />

#### zoekt: repos_stopped_tracking_total_aggregate

<p class="subtitle">The number of repositories we stopped tracking over 5m (aggregate)</p>

Repositories we stop tracking are soft-deleted during the next cleanup job.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(index_num_stopped_tracking_total[5m]))
```
</details>

<br />

#### zoekt: repos_stopped_tracking_total_per_instance

<p class="subtitle">The number of repositories we stopped tracking over 5m (per instance)</p>

Repositories we stop tracking are soft-deleted during the next cleanup job.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance) (increase(index_num_stopped_tracking_total{instance=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### zoekt: average_resolve_revision_duration

<p class="subtitle">Average resolve revision duration over 5m</p>

Refer to the [alerts reference](alerts#zoekt-average_resolve_revision_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(resolve_revision_seconds_sum[5m])) / sum(rate(resolve_revision_seconds_count[5m]))
```
</details>

<br />

#### zoekt: get_index_options_error_increase

<p class="subtitle">The number of repositories we failed to get indexing options over 5m</p>

When considering whether to index a repository, we ask the frontend for the
index configuration of each repository. The most likely reason this would fail
is a failure to resolve branch names to git SHAs.

This value can spike during deployments and similar events. Only sustained
periods of errors indicate an underlying issue; when sustained, repositories
will not get updated indexes.
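
To distinguish a transient spike from a sustained error period, one option is a subquery that checks whether the 5m error increase has stayed above zero for the last 30 minutes; a sketch, not one of the generated alert conditions:

```
# Returns a value only if errors have been continuous for the past 30m
min_over_time((sum(increase(get_index_options_error_total[5m])))[30m:]) > 0
```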

Refer to the [alerts reference](alerts#zoekt-get_index_options_error_increase) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100021` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(get_index_options_error_total[5m]))
```
</details>

<br />

### Zoekt: Zoekt-indexserver (CPU, Memory)

#### zoekt: cpu_usage_percentage

<p class="subtitle">CPU usage</p>

Refer to the [alerts reference](alerts#zoekt-cpu_usage_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}
```
</details>

<br />

#### zoekt: memory_usage_percentage

<p class="subtitle">Memory usage percentage (total)</p>

An estimate for the active memory in use, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}
```
</details>

<br />

#### zoekt: memory_working_set_bytes

<p class="subtitle">Memory usage bytes (total)</p>

An estimate for the active memory in use in bytes, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_memory_working_set_bytes{name=~"^zoekt-indexserver.*"})
```
</details>

<br />

#### zoekt: memory_rss

<p class="subtitle">Memory (RSS)</p>

The total anonymous memory in use by the application, which includes Go stack and heap. This memory is non-reclaimable, and high usage may trigger OOM kills. Note: the metric is named RSS to match the cadvisor name, but "anonymous" is more accurate.

Refer to the [alerts reference](alerts#zoekt-memory_rss) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_rss{name=~"^zoekt-indexserver.*"} / container_spec_memory_limit_bytes{name=~"^zoekt-indexserver.*"}) by (name) * 100.0 
```
</details>

<br />

#### zoekt: memory_total_active_file

<p class="subtitle">Memory usage (active file)</p>

This metric shows the total active file-backed memory currently in use by the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_total_active_file_bytes{name=~"^zoekt-indexserver.*"} / container_spec_memory_limit_bytes{name=~"^zoekt-indexserver.*"}) by (name) * 100.0 
```
</details>

<br />

#### zoekt: memory_kernel_usage

<p class="subtitle">Memory usage (kernel)</p>

The kernel usage metric shows the amount of memory used by the kernel on behalf of the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_kernel_usage{name=~"^zoekt-indexserver.*"} / container_spec_memory_limit_bytes{name=~"^zoekt-indexserver.*"}) by (name) * 100.0 
```
</details>

<br />

### Zoekt: Zoekt-webserver (CPU, Memory)

Zoekt-webserver leverages memory mapping to optimize file reads: it is generally expected to consume all the memory provided to it, if it can. When it needs data that is not yet available in memory, this causes a 'page fault' and the data is loaded into memory from disk.

A trend to watch out for: when something in the application takes a lot of memory while active file-backed memory previously occupied nearly all of the remainder, then:

1. 'Memory (RSS)' goes up, due to in-application usage
2. 'Memory usage (active file)' goes down, as file data held in memory is evicted
3. 'Page faults' go up, as less data is held in memory (and with that, IOPS, disk read throughput, ...)

This can also happen without 'Memory (RSS)' increasing, if the provisioned memory is insufficient to begin with.
A small degree of this behaviour is generally expected, but if it happens significantly or causes user-noticeable impact, the Zoekt webserver could likely benefit from more memory. Consult more user-facing metrics to make a final determination on appropriate resource allocation.

_See https://en.wikipedia.org/wiki/Memory-mapped_file and the related articles for more information about memory maps._
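
To watch this trade-off directly, the three signals can be plotted together (for example in Grafana Explore); a sketch reusing the same cadvisor metrics as the panels in this section:

```
# Anonymous memory (RSS) as a percentage of the limit
max(container_memory_rss{name=~"^zoekt-webserver.*"} / container_spec_memory_limit_bytes{name=~"^zoekt-webserver.*"}) by (name) * 100.0

# Active file-backed memory as a percentage of the limit
max(container_memory_total_active_file_bytes{name=~"^zoekt-webserver.*"} / container_spec_memory_limit_bytes{name=~"^zoekt-webserver.*"}) by (name) * 100.0

# Major page fault rate over 5m
rate(container_memory_failures_total{failure_type="pgmajfault", name=~"^zoekt-webserver.*"}[5m])
```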

#### zoekt: cpu_usage_percentage

<p class="subtitle">CPU usage</p>

Refer to the [alerts reference](alerts#zoekt-cpu_usage_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}
```
</details>

<br />

#### zoekt: memory_usage_percentage

<p class="subtitle">Memory usage percentage (total)</p>

An estimate for the active memory in use, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}
```
</details>

<br />

#### zoekt: memory_working_set_bytes

<p class="subtitle">Memory usage bytes (total)</p>

An estimate for the active memory in use in bytes, which includes anonymous memory, file memory, and kernel memory. Some of this memory is reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_memory_working_set_bytes{name=~"^zoekt-webserver.*"})
```
</details>

<br />

#### zoekt: memory_rss

<p class="subtitle">Memory (RSS)</p>

The total anonymous memory in use by the application, which includes Go stack and heap. This memory is non-reclaimable, and high usage may trigger OOM kills. Note: the metric is named RSS to match the cadvisor name, but "anonymous" is more accurate.

Refer to the [alerts reference](alerts#zoekt-memory_rss) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_rss{name=~"^zoekt-webserver.*"} / container_spec_memory_limit_bytes{name=~"^zoekt-webserver.*"}) by (name) * 100.0 
```
</details>

<br />

#### zoekt: memory_total_active_file

<p class="subtitle">Memory usage (active file)</p>

This metric shows the total active file-backed memory currently in use by the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_total_active_file_bytes{name=~"^zoekt-webserver.*"} / container_spec_memory_limit_bytes{name=~"^zoekt-webserver.*"}) by (name) * 100.0 
```
</details>

<br />

#### zoekt: memory_kernel_usage

<p class="subtitle">Memory usage (kernel)</p>

The kernel usage metric shows the amount of memory used by the kernel on behalf of the application. Some of it may be reclaimable, so high usage does not necessarily indicate memory pressure.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(container_memory_kernel_usage{name=~"^zoekt-webserver.*"} / container_spec_memory_limit_bytes{name=~"^zoekt-webserver.*"}) by (name) * 100.0 
```
</details>

<br />

### Zoekt: Memory mapping metrics

#### zoekt: memory_map_areas_percentage_used

<p class="subtitle">Process memory map areas percentage used (per instance)</p>

Processes have a limited number of memory map areas that they can use. In Zoekt, memory map areas
are mainly used for loading shards into memory for queries (via mmap). However, memory map areas
are also used for loading shared libraries, etc.

_See https://en.wikipedia.org/wiki/Memory-mapped_file and the related articles for more information about memory maps._

Once the memory map limit is reached, the Linux kernel will prevent the process from creating any
additional memory map areas. This could cause the process to crash.
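
To see how much headroom remains before the limit is hit, the raw count and limit behind this panel can also be inspected separately; a sketch using the same `proc_metrics` series as the panel query:

```
# Current number of memory map areas per instance
proc_metrics_memory_map_current_count{instance=~`${instance:regex}`}

# The kernel-imposed limit (on Linux, governed by the vm.max_map_count sysctl)
proc_metrics_memory_map_max_limit{instance=~`${instance:regex}`}
```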

Refer to the [alerts reference](alerts#zoekt-memory_map_areas_percentage_used) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(proc_metrics_memory_map_current_count{instance=~`${instance:regex}`} / proc_metrics_memory_map_max_limit{instance=~`${instance:regex}`}) * 100
```
</details>

<br />

#### zoekt: memory_major_page_faults

<p class="subtitle">Webserver page faults</p>

The number of major page faults in a 5 minute window for Zoekt webservers. If this number increases significantly, it indicates that more searches need to load data from disk. There may not be enough memory to efficiently support the amount of repo data being searched.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
rate(container_memory_failures_total{failure_type="pgmajfault", name=~"^zoekt-webserver.*"}[5m])
```
</details>

<br />

### Zoekt: Search requests

#### zoekt: indexed_search_request_duration_p99_aggregate

<p class="subtitle">99th percentile indexed search duration over 1m (aggregate)</p>

This dashboard shows the 99th percentile of search request durations over the last minute (aggregated across all instances).

Large duration spikes can be an indicator of saturation and / or a performance regression.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, name)(rate(zoekt_search_duration_seconds_bucket[1m])))
```
</details>

<br />

#### zoekt: indexed_search_request_duration_p90_aggregate

<p class="subtitle">90th percentile indexed search duration over 1m (aggregate)</p>

This dashboard shows the 90th percentile of search request durations over the last minute (aggregated across all instances).

Large duration spikes can be an indicator of saturation and / or a performance regression.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name)(rate(zoekt_search_duration_seconds_bucket[1m])))
```
</details>

<br />

#### zoekt: indexed_search_request_duration_p75_aggregate

<p class="subtitle">75th percentile indexed search duration over 1m (aggregate)</p>

This dashboard shows the 75th percentile of search request durations over the last minute (aggregated across all instances).

Large duration spikes can be an indicator of saturation and / or a performance regression.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name)(rate(zoekt_search_duration_seconds_bucket[1m])))
```
</details>

<br />

#### zoekt: indexed_search_request_duration_p99_by_instance

<p class="subtitle">99th percentile indexed search duration over 1m (per instance)</p>

This dashboard shows the 99th percentile of search request durations over the last minute (broken out per instance).

Large duration spikes can be an indicator of saturation and / or a performance regression.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, instance)(rate(zoekt_search_duration_seconds_bucket{instance=~`${instance:regex}`}[1m])))
```
</details>

<br />

#### zoekt: indexed_search_request_duration_p90_by_instance

<p class="subtitle">90th percentile indexed search duration over 1m (per instance)</p>

This dashboard shows the 90th percentile of search request durations over the last minute (broken out per instance).

Large duration spikes can be an indicator of saturation and / or a performance regression.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, instance)(rate(zoekt_search_duration_seconds_bucket{instance=~`${instance:regex}`}[1m])))
```
</details>

<br />

#### zoekt: indexed_search_request_duration_p75_by_instance

<p class="subtitle">75th percentile indexed search duration over 1m (per instance)</p>

This dashboard shows the 75th percentile of search request durations over the last minute (broken out per instance).

Large duration spikes can be an indicator of saturation and / or a performance regression.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, instance)(rate(zoekt_search_duration_seconds_bucket{instance=~`${instance:regex}`}[1m])))
```
</details>

<br />

#### zoekt: indexed_search_num_concurrent_requests_aggregate

<p class="subtitle">Amount of in-flight indexed search requests (aggregate)</p>

This dashboard shows the current number of indexed search requests that are in-flight, aggregated across all instances.

In-flight search requests include both running and queued requests.

The number of in-flight requests can serve as a proxy for the general load that webserver instances are under.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100420` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name) (zoekt_search_running)
```
</details>

<br />

#### zoekt: indexed_search_num_concurrent_requests_by_instance

<p class="subtitle">Amount of in-flight indexed search requests (per instance)</p>

This dashboard shows the current number of indexed search requests that are in-flight, broken out per instance.

In-flight search requests include both running and queued requests.

The number of in-flight requests can serve as a proxy for the general load that webserver instances are under.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100421` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance, name) (zoekt_search_running{instance=~`${instance:regex}`})
```
</details>

<br />

#### zoekt: indexed_search_concurrent_request_growth_rate_1m_aggregate

<p class="subtitle">Rate of growth of in-flight indexed search requests over 1m (aggregate)</p>

This dashboard shows the rate of growth of in-flight requests, aggregated across all instances.

In-flight search requests include both running and queued requests.

This metric gives a notion of how quickly the indexed-search backend is working through its request load
(taking into account the request arrival rate and processing time). A sustained high rate of growth
can indicate that the indexed-search backend is saturated.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100430` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name) (deriv(zoekt_search_running[1m]))
```
</details>

<br />

#### zoekt: indexed_search_concurrent_request_growth_rate_1m_per_instance

<p class="subtitle">Rate of growth of in-flight indexed search requests over 1m (per instance)</p>

This dashboard shows the rate of growth of in-flight requests, broken out per instance.

In-flight search requests include both running and queued requests.

This metric gives a notion of how quickly the indexed-search backend is working through its request load
(taking into account the request arrival rate and processing time). A sustained high rate of growth
can indicate that the indexed-search backend is saturated.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100431` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance) (deriv(zoekt_search_running[1m]))
```
</details>

<br />

#### zoekt: indexed_search_request_errors

<p class="subtitle">Indexed search request errors every 5m by code</p>

Refer to the [alerts reference](alerts#zoekt-indexed_search_request_errors) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100440` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (code)(increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100
```
</details>

<br />

#### zoekt: zoekt_shards_sched

<p class="subtitle">Current number of zoekt scheduler processes in a state</p>

Each ongoing search request starts its life as an interactive query. If it
takes too long it becomes a batch query. Between state transitions it can be queued.

If you have a high number of batch queries, it is a sign of a heavy load of
slow queries. Alternatively, your systems may be underprovisioned and normal
search queries are taking too long.

For a full explanation of the states see https://github.com/sourcegraph/zoekt/blob/930cd1c28917e64c87f0ce354a0fd040877cbba1/shards/sched.go#L311-L340
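
As an example, the share of scheduler processes currently queued can be derived from the same gauge. Note that the exact `state` label value used here is an assumption, so confirm the values exposed by your instance first:

```
# Share of zoekt scheduler processes in the queued state
# ("queued" is an assumed label value; inspect sum by (state) (zoekt_shards_sched) to confirm)
sum(zoekt_shards_sched{state="queued"}) / sum(zoekt_shards_sched)
```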

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100450` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (type, state) (zoekt_shards_sched)
```
</details>

<br />

#### zoekt: zoekt_shards_sched_total

<p class="subtitle">Rate of zoekt scheduler process state transitions in the last 5m</p>

Each ongoing search request starts its life as an interactive query. If it
takes too long it becomes a batch query. Between state transitions it can be queued.

If you have a high number of batch queries, it is a sign of a heavy load of
slow queries. Alternatively, your systems may be underprovisioned and normal
search queries are taking too long.

For a full explanation of the states see https://github.com/sourcegraph/zoekt/blob/930cd1c28917e64c87f0ce354a0fd040877cbba1/shards/sched.go#L311-L340

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100451` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (type, state) (rate(zoekt_shards_sched[5m]))
```
</details>

<br />

### Zoekt: Git fetch durations

#### zoekt: 90th_percentile_successful_git_fetch_durations_5m

<p class="subtitle">90th percentile successful git fetch durations over 5m</p>

Long git fetch times can be a leading indicator of saturation.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name)(rate(index_fetch_seconds_bucket{success="true"}[5m])))
```
</details>

<br />

#### zoekt: 90th_percentile_failed_git_fetch_durations_5m

<p class="subtitle">90th percentile failed git fetch durations over 5m</p>

Long git fetch times can be a leading indicator of saturation.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name)(rate(index_fetch_seconds_bucket{success="false"}[5m])))
```
</details>

<br />

### Zoekt: Indexing results

#### zoekt: repo_index_state_aggregate

<p class="subtitle">Index results state count over 5m (aggregate)</p>

This dashboard shows the outcomes of recently completed indexing jobs across all index-server instances.

A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts.

Legend:
- fail -> the indexing job failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)
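
To quantify how persistent failures are, the failing share of completed indexing jobs can be computed from the same counter the panel uses; a minimal sketch:

```
# Percentage of indexing jobs that failed over the last 5m
sum(increase(index_repo_seconds_count{state="fail"}[5m])) / sum(increase(index_repo_seconds_count[5m])) * 100
```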

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (state) (increase(index_repo_seconds_count[5m]))
```
</details>

<br />

#### zoekt: repo_index_state_per_instance

<p class="subtitle">Index results state count over 5m (per instance)</p>

This dashboard shows the outcomes of recently completed indexing jobs, split out across each index-server instance.

(You can use the "instance" filter at the top of the page to select a particular instance.)

A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts.

Legend:
- fail -> the indexing job failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (instance, state) (increase(index_repo_seconds_count{instance=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### zoekt: repo_index_success_speed_heatmap

<p class="subtitle">Successful indexing durations</p>

Latency increases can indicate bottlenecks in the indexserver.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le, state) (increase(index_repo_seconds_bucket{state="success"}[$__rate_interval]))
```
</details>

<br />

#### zoekt: repo_index_fail_speed_heatmap

<p class="subtitle">Failed indexing durations</p>

Failures happening after a long time indicate timeouts.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le, state) (increase(index_repo_seconds_bucket{state="fail"}[$__rate_interval]))
```
</details>

<br />

#### zoekt: repo_index_success_speed_p99

<p class="subtitle">99th percentile successful indexing durations over 5m (aggregate)</p>

This dashboard shows the p99 duration of successful indexing jobs aggregated across all Zoekt instances.

Latency increases can indicate bottlenecks in the indexserver.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100620` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, name)(rate(index_repo_seconds_bucket{state="success"}[5m])))
```
</details>

<br />

#### zoekt: repo_index_success_speed_p90

<p class="subtitle">90th percentile successful indexing durations over 5m (aggregate)</p>

This dashboard shows the p90 duration of successful indexing jobs aggregated across all Zoekt instances.

Latency increases can indicate bottlenecks in the indexserver.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100621` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name)(rate(index_repo_seconds_bucket{state="success"}[5m])))
```
</details>

<br />

#### zoekt: repo_index_success_speed_p75

<p class="subtitle">75th percentile successful indexing durations over 5m (aggregate)</p>

This dashboard shows the p75 duration of successful indexing jobs aggregated across all Zoekt instances.

Latency increases can indicate bottlenecks in the indexserver.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100622` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name)(rate(index_repo_seconds_bucket{state="success"}[5m])))
```
</details>

<br />

#### zoekt: repo_index_success_speed_p99_per_instance

<p class="subtitle">99th percentile successful indexing durations over 5m (per instance)</p>

This dashboard shows the p99 duration of successful indexing jobs broken out per Zoekt instance.

Latency increases can indicate bottlenecks in the indexserver.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100630` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, instance)(rate(index_repo_seconds_bucket{state="success",instance=~`${instance:regex}`}[5m])))
```
</details>

<br />

#### zoekt: repo_index_success_speed_p90_per_instance

<p class="subtitle">90th percentile successful indexing durations over 5m (per instance)</p>

This dashboard shows the p90 duration of successful indexing jobs broken out per Zoekt instance.

Latency increases can indicate bottlenecks in the indexserver.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100631` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, instance)(rate(index_repo_seconds_bucket{state="success",instance=~`${instance:regex}`}[5m])))
```
</details>

<br />

#### zoekt: repo_index_success_speed_p75_per_instance

<p class="subtitle">75th percentile successful indexing durations over 5m (per instance)</p>

This dashboard shows the p75 duration of successful indexing jobs broken out per Zoekt instance.

Latency increases can indicate bottlenecks in the indexserver.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100632` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, instance)(rate(index_repo_seconds_bucket{state="success",instance=~`${instance:regex}`}[5m])))
```
</details>

<br />

#### zoekt: repo_index_failed_speed_p99

<p class="subtitle">99th percentile failed indexing durations over 5m (aggregate)</p>

This dashboard shows the p99 duration of failed indexing jobs aggregated across all Zoekt instances.

Failures happening after a long time indicate timeouts.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100640` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, name)(rate(index_repo_seconds_bucket{state="fail"}[5m])))
```
</details>

<br />

#### zoekt: repo_index_failed_speed_p90

<p class="subtitle">90th percentile failed indexing durations over 5m (aggregate)</p>

This dashboard shows the p90 duration of failed indexing jobs aggregated across all Zoekt instances.

Failures happening after a long time indicate timeouts.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100641` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name)(rate(index_repo_seconds_bucket{state="fail"}[5m])))
```
</details>

<br />

#### zoekt: repo_index_failed_speed_p75

<p class="subtitle">75th percentile failed indexing durations over 5m (aggregate)</p>

This dashboard shows the p75 duration of failed indexing jobs aggregated across all Zoekt instances.

Failures happening after a long time indicate timeouts.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100642` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name)(rate(index_repo_seconds_bucket{state="fail"}[5m])))
```
</details>

<br />

#### zoekt: repo_index_failed_speed_p99_per_instance

<p class="subtitle">99th percentile failed indexing durations over 5m (per instance)</p>

This dashboard shows the p99 duration of failed indexing jobs broken out per Zoekt instance.

Failures happening after a long time indicate timeouts.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100650` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, instance)(rate(index_repo_seconds_bucket{state="fail",instance=~`${instance:regex}`}[5m])))
```
</details>

<br />

#### zoekt: repo_index_failed_speed_p90_per_instance

<p class="subtitle">90th percentile failed indexing durations over 5m (per instance)</p>

This dashboard shows the p90 duration of failed indexing jobs broken out per Zoekt instance.

Failures happening after a long time indicate timeouts.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100651` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, instance)(rate(index_repo_seconds_bucket{state="fail",instance=~`${instance:regex}`}[5m])))
```
</details>

<br />

#### zoekt: repo_index_failed_speed_p75_per_instance

<p class="subtitle">75th percentile failed indexing durations over 5m (per instance)</p>

This dashboard shows the p75 duration of failed indexing jobs broken out per Zoekt instance.

Failures happening after a long time indicate timeouts.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100652` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, instance)(rate(index_repo_seconds_bucket{state="fail",instance=~`${instance:regex}`}[5m])))
```
</details>

<br />

### Zoekt: Indexing queue statistics

#### zoekt: indexed_num_scheduled_jobs_aggregate

<p class="subtitle"># scheduled index jobs (aggregate)</p>

A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning.
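
Whether the queue is in fact trending upward can be checked with a derivative over a longer window; a hedged sketch using a PromQL subquery:

```
# Positive values mean the aggregate index queue has been growing over the last 30m
deriv((sum(index_queue_len))[30m:])
```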

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(index_queue_len)
```
</details>

<br />

#### zoekt: indexed_num_scheduled_jobs_per_instance

<p class="subtitle"># scheduled index jobs (per instance)</p>

A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
index_queue_len{instance=~`${instance:regex}`}
```
</details>

<br />

#### zoekt: indexed_indexing_delay_heatmap

<p class="subtitle">Repo indexing delay heatmap</p>

The indexing delay represents the amount of time from when Zoekt received a repo indexing job to when the repo was indexed.
It includes the time the repo spent in the indexing queue, as well as the time it took to actually index the repo. This metric
only includes successfully indexed repos.

Large indexing delays can be an indicator of:
	- resource saturation
	- each Zoekt replica having too many jobs to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le) (increase(index_indexing_delay_seconds_bucket{state=~"success|success_meta"}[$__rate_interval]))
```
</details>

<br />

#### zoekt: indexed_indexing_delay_p90_aggregate

<p class="subtitle">90th percentile indexing delay over 5m (aggregate)</p>

This dashboard shows the p90 indexing delay aggregated across all Zoekt instances.

The indexing delay represents the amount of time from when Zoekt received a repo indexing job to when the repo was indexed.
It includes the time the repo spent in the indexing queue, as well as the time it took to actually index the repo. This metric
only includes successfully indexed repos.

Large indexing delays can be an indicator of:
	- resource saturation
	- each Zoekt replica having too many jobs to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100720` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name)(rate(index_indexing_delay_seconds_bucket{state=~"success|success_meta"}[5m])))
```
</details>

<br />

#### zoekt: indexed_indexing_delay_p50_aggregate

<p class="subtitle">50th percentile indexing delay over 5m (aggregate)</p>

This dashboard shows the p50 indexing delay aggregated across all Zoekt instances.

The indexing delay represents the amount of time from when Zoekt received a repo indexing job to when the repo was indexed.
It includes the time the repo spent in the indexing queue, as well as the time it took to actually index the repo. This metric
only includes successfully indexed repos.

Large indexing delays can be an indicator of:
	- resource saturation
	- each Zoekt replica having too many jobs to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100721` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.50, sum by (le, name)(rate(index_indexing_delay_seconds_bucket{state=~"success|success_meta"}[5m])))
```
</details>

<br />

#### zoekt: indexed_indexing_delay_p90_per_instance

<p class="subtitle">90th percentile indexing delay over 5m (per instance)</p>

This dashboard shows the p90 indexing delay, broken out per Zoekt instance.

The indexing delay represents the amount of time from when Zoekt received a repo indexing job to when the repo was indexed.
It includes the time the repo spent in the indexing queue, as well as the time it took to actually index the repo.

Large indexing delays can be an indicator of:
	- resource saturation
	- each Zoekt replica having too many jobs to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100730` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, instance)(rate(index_indexing_delay_seconds_bucket{instance=~`${instance:regex}`}[5m])))
```
</details>

<br />

#### zoekt: indexed_indexing_delay_p50_per_instance

<p class="subtitle">50th percentile indexing delay over 5m (per instance)</p>

This dashboard shows the p50 indexing delay, broken out per Zoekt instance.

The indexing delay represents the amount of time from when Zoekt received a repo indexing job to when the repo was indexed.
It includes the time the repo spent in the indexing queue, as well as the time it took to actually index the repo.

Large indexing delays can be an indicator of:
	- resource saturation
	- each Zoekt replica having too many jobs to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100731` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.50, sum by (le, instance)(rate(index_indexing_delay_seconds_bucket{instance=~`${instance:regex}`}[5m])))
```
</details>

<br />

### Zoekt: Compound shards

#### zoekt: compound_shards_aggregate

<p class="subtitle"># of compound shards (aggregate)</p>

The total number of compound shards aggregated over all instances.

This number should be consistent if the number of indexed repositories doesn't change.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(index_number_compound_shards) by (app)
```
</details>

<br />

#### zoekt: compound_shards_per_instance

<p class="subtitle"># of compound shards (per instance)</p>

The total number of compound shards per instance.

This number should be consistent if the number of indexed repositories doesn't change.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(index_number_compound_shards{instance=~`${instance:regex}`}) by (instance)
```
</details>

<br />

#### zoekt: average_shard_merging_duration_success

<p class="subtitle">Average successful shard merging duration over 1 hour</p>

Average duration of a successful merge over the last hour.

The duration depends on the target compound shard size: the larger the compound shard, the longer a merge takes.
Since the target compound shard size is set when zoekt-indexserver starts, the average duration should be consistent.
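
To see merge throughput split by outcome, a sketch built from the same histogram these panels use is the following; it returns the per-second rate of completed merges, broken out by the `error` label:

```
sum by (error) (rate(index_shard_merging_duration_seconds_count[1h]))
```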

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(index_shard_merging_duration_seconds_sum{error="false"}[1h])) / sum(rate(index_shard_merging_duration_seconds_count{error="false"}[1h]))
```
</details>

<br />

#### zoekt: average_shard_merging_duration_error

<p class="subtitle">Average failed shard merging duration over 1 hour</p>

Average duration of a failed merge over the last hour.

This curve should be flat. Any deviation should be investigated.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(index_shard_merging_duration_seconds_sum{error="true"}[1h])) / sum(rate(index_shard_merging_duration_seconds_count{error="true"}[1h]))
```
</details>

<br />

#### zoekt: shard_merging_errors_aggregate

<p class="subtitle">Number of errors during shard merging (aggregate)</p>

Number of errors during shard merging aggregated over all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100820` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(index_shard_merging_duration_seconds_count{error="true"}) by (app)
```
</details>

<br />

#### zoekt: shard_merging_errors_per_instance

<p class="subtitle">Number of errors during shard merging (per instance)</p>

Number of errors during shard merging per instance.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100821` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(index_shard_merging_duration_seconds_count{instance=~`${instance:regex}`, error="true"}) by (instance)
```
</details>

<br />

#### zoekt: shard_merging_merge_running_per_instance

<p class="subtitle">If shard merging is running (per instance)</p>

Set to 1 if shard merging is running.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100830` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (instance) (index_shard_merging_running{instance=~`${instance:regex}`})
```
</details>

<br />

#### zoekt: shard_merging_vacuum_running_per_instance

<p class="subtitle">If vacuum is running (per instance)</p>

Set to 1 if vacuum is running.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100831` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (instance) (index_vacuum_running{instance=~`${instance:regex}`})
```
</details>

<br />

### Zoekt: Network I/O pod metrics (only available on Kubernetes)

#### zoekt: network_sent_bytes_aggregate

<p class="subtitle">Transmission rate over 5m (aggregate)</p>

The rate of bytes sent over the network across all pods.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~`.*indexed-search.*`}[5m]))
```
</details>

<br />

#### zoekt: network_sent_bytes_per_instance

<p class="subtitle">Transmission rate over 5m (per instance)</p>

The rate of bytes sent over the network by individual pods.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### zoekt: network_received_bytes_aggregate

<p class="subtitle">Receive rate over 5m (aggregate)</p>

The rate of bytes received from the network across all pods.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100910` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~`.*indexed-search.*`}[5m]))
```
</details>

<br />

#### zoekt: network_received_bytes_per_instance

<p class="subtitle">Receive rate over 5m (per instance)</p>

The rate of bytes received from the network by individual pods.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100911` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### zoekt: network_transmitted_packets_dropped_by_instance

<p class="subtitle">Transmit packet drop rate over 5m (by instance)</p>

An increase in dropped packets could be a leading indicator of network saturation.
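
To put the drop rate in context, it can help to express it as a fraction of all transmitted packets. This is a sketch, assuming the standard cAdvisor counter `container_network_transmit_packets_total` is also scraped (the receive-side panels below can be rewritten the same way):

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_packets_dropped_total{container_label_io_kubernetes_pod_name=~`.*indexed-search.*`}[5m]))
/
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_packets_total{container_label_io_kubernetes_pod_name=~`.*indexed-search.*`}[5m]))
```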

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100920` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_packets_dropped_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### zoekt: network_transmitted_packets_errors_per_instance

<p class="subtitle">Errors encountered while transmitting over 5m (per instance)</p>

An increase in transmission errors could indicate a networking issue.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100921` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_errors_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### zoekt: network_received_packets_dropped_by_instance

<p class="subtitle">Receive packet drop rate over 5m (by instance)</p>

An increase in dropped packets could be a leading indicator of network saturation.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100922` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_packets_dropped_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

#### zoekt: network_received_packets_errors_per_instance

<p class="subtitle">Errors encountered while receiving over 5m (per instance)</p>

An increase in errors while receiving could indicate a networking issue.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=100923` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_errors_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
```
</details>

<br />

### Zoekt: Zoekt Webserver GRPC server metrics

#### zoekt: zoekt_webserver_grpc_request_rate_all_methods

<p class="subtitle">Request rate across all methods over 2m</p>

The number of gRPC requests received per second across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m]))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_request_rate_per_method

<p class="subtitle">Request rate per-method over 2m</p>

The number of gRPC requests received per second broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_started_total{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method)
```
</details>

<br />

#### zoekt: zoekt_webserver_error_percentage_all_methods

<p class="subtitle">Error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m]))) ))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_error_percentage_per_method

<p class="subtitle">Error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_webserver_method:regex}`,grpc_code!="OK",instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method)) ))
```
</details>

<br />

#### zoekt: zoekt_webserver_p99_response_time_per_method

<p class="subtitle">99th percentile response time per method over 2m</p>

The 99th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
```
</details>

<br />

#### zoekt: zoekt_webserver_p90_response_time_per_method

<p class="subtitle">90th percentile response time per method over 2m</p>

The 90th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101021` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
```
</details>

<br />

#### zoekt: zoekt_webserver_p75_response_time_per_method

<p class="subtitle">75th percentile response time per method over 2m</p>

The 75th percentile response time per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101022` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
```
</details>

<br />

#### zoekt: zoekt_webserver_p99_9_response_size_per_method

<p class="subtitle">99.9th percentile total response size per method over 2m</p>

The 99.9th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101030` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
```
</details>

<br />

#### zoekt: zoekt_webserver_p90_response_size_per_method

<p class="subtitle">90th percentile total response size per method over 2m</p>

The 90th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101031` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
```
</details>

<br />

#### zoekt: zoekt_webserver_p75_response_size_per_method

<p class="subtitle">75th percentile total response size per method over 2m</p>

The 75th percentile total per-RPC response size per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101032` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
```
</details>

<br />

#### zoekt: zoekt_webserver_p99_9_individual_sent_message_size_per_method

<p class="subtitle">99.9th percentile individual sent message size per method over 2m</p>

The 99.9th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101040` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.999, sum by (le, name, grpc_method)(rate(grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
```
</details>

<br />

#### zoekt: zoekt_webserver_p90_individual_sent_message_size_per_method

<p class="subtitle">90th percentile individual sent message size per method over 2m</p>

The 90th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101041` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
```
</details>

<br />

#### zoekt: zoekt_webserver_p75_individual_sent_message_size_per_method

<p class="subtitle">75th percentile individual sent message size per method over 2m</p>

The 75th percentile size of each individual protocol buffer sent by the service per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101042` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_response_stream_message_count_per_method

<p class="subtitle">Average streaming response message count per-method over 2m</p>

The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101050` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method)))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_all_codes_per_method

<p class="subtitle">Response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method, aggregated across all instances.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101060` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method, grpc_code)
```
</details>

<br />

### Zoekt: Zoekt Webserver GRPC "internal error" metrics

#### zoekt: zoekt_webserver_grpc_clients_error_percentage_all_methods

<p class="subtitle">Client baseline error percentage across all methods over 2m</p>

The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "zoekt_webserver" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))))))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_clients_error_percentage_per_method

<p class="subtitle">Client baseline error percentage per-method over 2m</p>

The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_webserver" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_clients_all_codes_per_method

<p class="subtitle">Client baseline response codes rate per-method over 2m</p>

The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_webserver" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_clients_internal_error_percentage_all_methods

<p class="subtitle">Client-observed gRPC internal error percentage across all methods over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "zoekt_webserver" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_webserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))))))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_clients_internal_error_percentage_per_method

<p class="subtitle">Client-observed gRPC internal error percentage per-method over 2m</p>

The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "zoekt_webserver" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_webserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_clients_internal_error_all_codes_per_method

<p class="subtitle">Client-observed gRPC internal error response code rate per-method over 2m</p>

The rate of gRPC internal-error response codes per method, aggregated across all "zoekt_webserver" clients.

**Note**: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_webserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.

When debugging, knowing that a particular error comes from the grpc-go library itself (an "internal error") as opposed to "normal" application code can be helpful when trying to fix it.

**Note**: Internal errors are detected via a very coarse heuristic (seeing if the error starts with `grpc:`, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",is_internal_error="true",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method, grpc_code))
```
</details>

<br />

### Zoekt: Zoekt Webserver GRPC retry metrics

#### zoekt: zoekt_webserver_grpc_clients_retry_percentage_across_all_methods

<p class="subtitle">Client retry percentage across all methods over 2m</p>

The percentage of gRPC requests that were retried across all methods, aggregated across all "zoekt_webserver" clients.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))))))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_clients_retry_percentage_per_method

<p class="subtitle">Client retry percentage per-method over 2m</p>

The percentage of gRPC requests that were retried, aggregated across all "zoekt_webserver" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService",is_retried="true",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method))))))
```
</details>

<br />

#### zoekt: zoekt_webserver_grpc_clients_retry_count_per_method

<p class="subtitle">Client retry count per-method over 2m</p>

The count of gRPC requests that were retried, aggregated across all "zoekt_webserver" clients, broken out per method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}",is_retried="true"}[2m])) by (grpc_method))
```
</details>

<br />

### Zoekt: Data disk I/O metrics

#### zoekt: data_disk_reads_sec

<p class="subtitle">Read request rate over 1m (per instance)</p>

The number of read requests that were issued to the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
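
To identify which device (and node) actually backs the index directory, you can inspect the info metric that all of these panels join on; its `device` and `nodename` labels identify the disk being measured:

```
zoekt_indexserver_mount_point_info{mount_name="indexDir"}
```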

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### zoekt: data_disk_writes_sec

<p class="subtitle">Write request rate over 1m (per instance)</p>

The number of write requests that were issued to the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### zoekt: data_disk_read_throughput

<p class="subtitle">Read throughput over 1m (per instance)</p>

The amount of data that was read from the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### zoekt: data_disk_write_throughput

<p class="subtitle">Write throughput over 1m (per instance)</p>

The amount of data that was written to the device per second.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### zoekt: data_disk_read_duration

<p class="subtitle">Average read duration over 1m (per instance)</p>

The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101320` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### zoekt: data_disk_write_duration

<p class="subtitle">Average write duration over 1m (per instance)</p>

The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101321` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### zoekt: data_disk_read_request_size

<p class="subtitle">Average read request size over 1m (per instance)</p>

The average size of read requests that were issued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101330` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### zoekt: data_disk_write_request_size

<p class="subtitle">Average write request size over 1m (per instance)</p>

The average size of write requests that were issued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101331` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
```
</details>

<br />

#### zoekt: data_disk_reads_merged_sec

<p class="subtitle">Merged read request rate over 1m (per instance)</p>

The number of read requests merged per second that were queued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101340` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### zoekt: data_disk_writes_merged_sec

<p class="subtitle">Merged writes request rate over 1m (per instance)</p>

The number of write requests merged per second that were queued to the device.

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101341` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

#### zoekt: data_disk_average_queue_size

<p class="subtitle">Average queue size over 1m (per instance)</p>

The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).
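
A useful companion to the average queue size is the fraction of time the device is busy (iostat's `%util`). This is a sketch following the same join pattern as the panels above, assuming node_exporter's standard `node_disk_io_time_seconds_total` counter is scraped:

```
(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_seconds_total{instance=~`node-exporter.*`}[1m])))))
```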

Note: Disk statistics are per _device_, not per _service_. In certain environments (such as common docker-compose setups), zoekt could be one of _many services_ using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101350` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~`node-exporter.*`}[1m])))))
```
</details>

<br />

### Zoekt: [indexed-search-indexer] Golang runtime monitoring

#### zoekt: go_goroutines

<p class="subtitle">Maximum active goroutines</p>

A high value here indicates a possible goroutine leak.
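
To distinguish a steady leak from ordinary load spikes, it can help to look at the trend rather than the absolute count. A sketch using the same metric; a persistently positive slope suggests a leak:

```
max by (instance) (deriv(go_goroutines{job=~".*indexed-search-indexer"}[30m]))
```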

Refer to the [alerts reference](alerts#zoekt-go_goroutines) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_goroutines{job=~".*indexed-search-indexer"})
```
</details>

<br />

#### zoekt: go_gc_duration_seconds

<p class="subtitle">Maximum go garbage collection duration</p>

Refer to the [alerts reference](alerts#zoekt-go_gc_duration_seconds) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_gc_duration_seconds{job=~".*indexed-search-indexer"})
```
</details>

<br />

### Zoekt: [indexed-search] Golang runtime monitoring

#### zoekt: go_goroutines

<p class="subtitle">Maximum active goroutines</p>

A high value here indicates a possible goroutine leak.

Refer to the [alerts reference](alerts#zoekt-go_goroutines) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_goroutines{job=~".*indexed-search"})
```
</details>

<br />

#### zoekt: go_gc_duration_seconds

<p class="subtitle">Maximum go garbage collection duration</p>

Refer to the [alerts reference](alerts#zoekt-go_gc_duration_seconds) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(instance) (go_gc_duration_seconds{job=~".*indexed-search"})
```
</details>

<br />

### Zoekt: Kubernetes monitoring (only available on Kubernetes)

#### zoekt: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#zoekt-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/zoekt/zoekt?viewPanel=101600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*indexed-search"}) / count by (app) (up{app=~".*indexed-search"}) * 100
```
</details>

<br />

## Prometheus

<p class="subtitle">Sourcegraph's all-in-one Prometheus and Alertmanager service.</p>

To see this dashboard, visit `/-/debug/grafana/d/prometheus/prometheus` on your Sourcegraph instance.

### Prometheus: Metrics

#### prometheus: metrics_cardinality

<p class="subtitle">Metrics with highest cardinalities</p>

The 10 highest-cardinality metrics collected by this Prometheus instance.
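
A companion query showing which scrape jobs contribute the most series overall (rather than per metric):

```
topk(10, count by (job)({__name__!=""}))
```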

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
topk(10, count by (__name__, job)({__name__!=""}))
```
</details>

<br />

#### prometheus: samples_scraped

<p class="subtitle">Samples scraped by job</p>

The number of samples scraped after metric relabeling was applied by this Prometheus instance.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(job) (scrape_samples_post_metric_relabeling{job!=""})
```
</details>

<br />

#### prometheus: prometheus_rule_eval_duration

<p class="subtitle">Average prometheus rule group evaluation duration over 10m by rule group</p>

A high value here indicates that Prometheus rule evaluation is taking longer than expected.
It might indicate that certain rule groups are taking too long to evaluate, or that Prometheus is under-provisioned.

Rules that Sourcegraph ships with are grouped under `/sg_config_prometheus`. [Custom rules are grouped under `/sg_prometheus_addons`](https://sourcegraph.com/docs/admin/observability/metrics#prometheus-configuration).
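
To judge whether a group is at risk of missing its schedule, compare its evaluation time to its configured interval. This is a sketch, assuming the `prometheus_rule_group_interval_seconds` metric exposed by recent Prometheus versions; a ratio approaching 1 means the group consumes its entire interval:

```
sum by (rule_group) (avg_over_time(prometheus_rule_group_last_duration_seconds[10m]))
/
sum by (rule_group) (prometheus_rule_group_interval_seconds)
```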

Refer to the [alerts reference](alerts#prometheus-prometheus_rule_eval_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(rule_group) (avg_over_time(prometheus_rule_group_last_duration_seconds[10m]))
```
</details>

<br />

#### prometheus: prometheus_rule_eval_failures

<p class="subtitle">Failed prometheus rule evaluations over 5m by rule group</p>

Rules that Sourcegraph ships with are grouped under `/sg_config_prometheus`. [Custom rules are grouped under `/sg_prometheus_addons`](https://sourcegraph.com/docs/admin/observability/metrics#prometheus-configuration).

Refer to the [alerts reference](alerts#prometheus-prometheus_rule_eval_failures) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(rule_group) (rate(prometheus_rule_evaluation_failures_total[5m]))
```
</details>

<br />

### Prometheus: Alerts

#### prometheus: alertmanager_notification_latency

<p class="subtitle">Alertmanager notification latency over 1m by integration</p>

Refer to the [alerts reference](alerts#prometheus-alertmanager_notification_latency) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(integration) (rate(alertmanager_notification_latency_seconds_sum[1m]))
```
</details>

<br />

#### prometheus: alertmanager_notification_failures

<p class="subtitle">Failed alertmanager notifications over 1m by integration</p>

Refer to the [alerts reference](alerts#prometheus-alertmanager_notification_failures) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(integration) (rate(alertmanager_notifications_failed_total[1m]))
```
</details>

<br />

### Prometheus: Internals

#### prometheus: prometheus_config_status

<p class="subtitle">Prometheus configuration reload status</p>

A `1` indicates Prometheus reloaded its configuration successfully.
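
An alert-style expression that is non-empty while the most recent reload failed (the same pattern works for `alertmanager_config_last_reload_successful` in the next panel):

```
prometheus_config_last_reload_successful == 0
```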

Refer to the [alerts reference](alerts#prometheus-prometheus_config_status) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
prometheus_config_last_reload_successful
```
</details>

<br />

#### prometheus: alertmanager_config_status

<p class="subtitle">Alertmanager configuration reload status</p>

A `1` indicates Alertmanager reloaded its configuration successfully.

Refer to the [alerts reference](alerts#prometheus-alertmanager_config_status) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
alertmanager_config_last_reload_successful
```
</details>

<br />

#### prometheus: prometheus_tsdb_op_failure

<p class="subtitle">Prometheus tsdb failures by operation over 1m by operation</p>

Refer to the [alerts reference](alerts#prometheus-prometheus_tsdb_op_failure) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
increase(label_replace({__name__=~"prometheus_tsdb_(.*)_failed_total"}, "operation", "$1", "__name__", "(.+)s_failed_total")[5m:1m])
```
</details>

<br />

#### prometheus: prometheus_target_sample_exceeded

<p class="subtitle">Prometheus scrapes that exceed the sample limit over 10m</p>

Refer to the [alerts reference](alerts#prometheus-prometheus_target_sample_exceeded) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
increase(prometheus_target_scrapes_exceeded_sample_limit_total[10m])
```
</details>

<br />

#### prometheus: prometheus_target_sample_duplicate

<p class="subtitle">Prometheus scrapes rejected due to duplicate timestamps over 10m</p>

Refer to the [alerts reference](alerts#prometheus-prometheus_target_sample_duplicate) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
increase(prometheus_target_scrapes_sample_duplicate_timestamp_total[10m])
```
</details>

<br />

### Prometheus: Container monitoring (not available on server)

#### prometheus: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
	- Determine if the pod was OOM killed using `kubectl describe pod prometheus` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p prometheus`.
- **Docker Compose:**
	- Determine if the pod was OOM killed using `docker inspect -f '{{json .State}}' prometheus` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the prometheus container in `docker-compose.yml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs prometheus` (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^prometheus.*"}) > 60)
```
</details>

<br />

#### prometheus: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

Refer to the [alerts reference](alerts#prometheus-container_cpu_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}
```
</details>

<br />

#### prometheus: container_memory_usage

<p class="subtitle">Container memory usage by instance</p>

Refer to the [alerts reference](alerts#prometheus-container_memory_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}
```
</details>

<br />

#### prometheus: fs_io_operations

<p class="subtitle">Filesystem reads and writes rate by instance over 1h</p>

This value indicates the number of filesystem read and write operations performed by containers of this service.
When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with issues in the `prometheus` containers.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(name) (rate(container_fs_reads_total{name=~"^prometheus.*"}[1h]) + rate(container_fs_writes_total{name=~"^prometheus.*"}[1h]))
```
</details>

<br />

### Prometheus: Provisioning indicators (not available on server)

#### prometheus: provisioning_container_cpu_usage_long_term

<p class="subtitle">Container cpu usage total (90th percentile over 1d) across all cores by instance</p>

Refer to the [alerts reference](alerts#prometheus-provisioning_container_cpu_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}[1d])
```
</details>

<br />

#### prometheus: provisioning_container_memory_usage_long_term

<p class="subtitle">Container memory usage (1d maximum) by instance</p>

Refer to the [alerts reference](alerts#prometheus-provisioning_container_memory_usage_long_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}[1d])
```
</details>

<br />

#### prometheus: provisioning_container_cpu_usage_short_term

<p class="subtitle">Container cpu usage total (5m maximum) across all cores by instance</p>

Refer to the [alerts reference](alerts#prometheus-provisioning_container_cpu_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}[5m])
```
</details>

<br />

#### prometheus: provisioning_container_memory_usage_short_term

<p class="subtitle">Container memory usage (5m maximum) by instance</p>

Refer to the [alerts reference](alerts#prometheus-provisioning_container_memory_usage_short_term) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}[5m])
```
</details>

<br />

#### prometheus: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total by instance</p>

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer.
When this occurs frequently, it is an indicator of underprovisioning.
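
Because the panel reports a lifetime maximum per container, it can help to check whether kills happened recently. A minimal sketch (run in the Grafana Explore view) using the same cAdvisor counter the panel uses; the 1h window is an arbitrary choice:

```
increase(container_oom_events_total{name=~"^prometheus.*"}[1h])
```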

Refer to the [alerts reference](alerts#prometheus-container_oomkill_events_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^prometheus.*"})
```
</details>

<br />

### Prometheus: Kubernetes monitoring (only available on Kubernetes)

#### prometheus: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#prometheus-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/prometheus/prometheus?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*prometheus"}) / count by (app) (up{app=~".*prometheus"}) * 100
```
</details>

<br />

## Executor

<p class="subtitle">Executes jobs in an isolated environment.</p>

To see this dashboard, visit `/-/debug/grafana/d/executor/executor` on your Sourcegraph instance.

### Executor: Executor: Executor jobs

#### executor: multiqueue_executor_dequeue_cache_size

<p class="subtitle">Unprocessed executor job dequeue cache size for multiqueue executors</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
multiqueue_executor_dequeue_cache_size{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}
```
</details>

<br />

### Executor: Executor: Executor jobs

#### executor: executor_handlers

<p class="subtitle">Executor active handlers</p>

Refer to the [alerts reference](alerts#executor-executor_handlers) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_executor_processor_handlers{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"})
```
</details>

<br />

#### executor: executor_processor_total

<p class="subtitle">Executor operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_executor_processor_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: executor_processor_99th_percentile_duration

<p class="subtitle">Aggregate successful executor operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_executor_processor_duration_seconds_bucket{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: executor_processor_errors_total

<p class="subtitle">Executor operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: executor_processor_error_rate

<p class="subtitle">Executor operation error rate over 5m</p>

Refer to the [alerts reference](alerts#executor-executor_processor_error_rate) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_executor_processor_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

### Executor: Executor: Queue API client

#### executor: apiworker_apiclient_queue_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_queue_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_apiworker_apiclient_queue_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_queue_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_queue_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

#### executor: apiworker_apiclient_queue_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_queue_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_apiworker_apiclient_queue_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m])))
```
</details>

<br />

#### executor: apiworker_apiclient_queue_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_queue_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

### Executor: Executor: Files API client

#### executor: apiworker_apiclient_files_total

<p class="subtitle">Aggregate client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_files_99th_percentile_duration

<p class="subtitle">Aggregate successful client operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_apiworker_apiclient_files_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_files_errors_total

<p class="subtitle">Aggregate client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_files_error_rate

<p class="subtitle">Aggregate client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

#### executor: apiworker_apiclient_files_total

<p class="subtitle">Client operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_files_99th_percentile_duration

<p class="subtitle">99th percentile successful client operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_apiworker_apiclient_files_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m])))
```
</details>

<br />

#### executor: apiworker_apiclient_files_errors_total

<p class="subtitle">Client operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_apiclient_files_error_rate

<p class="subtitle">Client operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100313` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

### Executor: Executor: Job setup

#### executor: apiworker_command_total

<p class="subtitle">Aggregate command operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_99th_percentile_duration

<p class="subtitle">Aggregate successful command operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_errors_total

<p class="subtitle">Aggregate command operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_error_rate

<p class="subtitle">Aggregate command operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100403` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

#### executor: apiworker_command_total

<p class="subtitle">Command operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_99th_percentile_duration

<p class="subtitle">99th percentile successful command operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])))
```
</details>

<br />

#### executor: apiworker_command_errors_total

<p class="subtitle">Command operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_error_rate

<p class="subtitle">Command operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100413` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

### Executor: Executor: Job execution

#### executor: apiworker_command_total

<p class="subtitle">Aggregate command operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_99th_percentile_duration

<p class="subtitle">Aggregate successful command operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_errors_total

<p class="subtitle">Aggregate command operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_error_rate

<p class="subtitle">Aggregate command operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

#### executor: apiworker_command_total

<p class="subtitle">Command operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_99th_percentile_duration

<p class="subtitle">99th percentile successful command operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])))
```
</details>

<br />

#### executor: apiworker_command_errors_total

<p class="subtitle">Command operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_error_rate

<p class="subtitle">Command operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100513` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

### Executor: Executor: Job teardown

#### executor: apiworker_command_total

<p class="subtitle">Aggregate command operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_99th_percentile_duration

<p class="subtitle">Aggregate successful command operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_errors_total

<p class="subtitle">Aggregate command operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100602` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_error_rate

<p class="subtitle">Aggregate command operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100603` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

#### executor: apiworker_command_total

<p class="subtitle">Command operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_99th_percentile_duration

<p class="subtitle">99th percentile successful command operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])))
```
</details>

<br />

#### executor: apiworker_command_errors_total

<p class="subtitle">Command operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100612` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
```
</details>

<br />

#### executor: apiworker_command_error_rate

<p class="subtitle">Command operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100613` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
```
</details>

<br />

### Executor: Executor: Compute instance metrics

#### executor: node_cpu_utilization

<p class="subtitle">CPU utilization (minus idle/iowait)</p>

Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage.
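
If utilization looks unexpectedly high, breaking the same node_exporter metric down by CPU mode can show where the time goes. A minimal sketch under the same label assumptions as the panel (the dashboard's `$instance` filter is dropped for brevity):

```
sum by (sg_instance, mode) (rate(node_cpu_seconds_total{sg_job=~"sourcegraph-executors"}[5m]))
```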

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100700` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_cpu_seconds_total{sg_job=~"sourcegraph-executors",mode!~"(idle|iowait)",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance) / count(node_cpu_seconds_total{sg_job=~"sourcegraph-executors",mode="system",sg_instance=~"$instance"}) by (sg_instance) * 100
```
</details>

<br />

#### executor: node_cpu_saturation_cpu_wait

<p class="subtitle">CPU saturation (time waiting)</p>

Indicates the average summed time that some (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, the CPU is underpowered for the workload and more powerful machines should be provisioned. This can only ever cover a subset of processes, because for processes to be waiting for CPU time there must be other process(es) consuming that CPU time.
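
"Higher than normal" presupposes a baseline. One way to sketch a baseline is to average the same pressure rate over a longer window with a subquery; the `1d:5m` subquery range is an arbitrary choice, and `$instance` is the dashboard variable:

```
avg_over_time(rate(node_pressure_cpu_waiting_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[5m])[1d:5m])
```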

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100701` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
rate(node_pressure_cpu_waiting_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])
```
</details>

<br />

#### executor: node_memory_utilization

<p class="subtitle">Memory utilization</p>

Indicates memory utilization as a percentage, i.e. total memory minus available memory (which includes cache and buffers). Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism, down-sizing machines, etc.
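
When sizing machines it can help to translate the percentage into absolute numbers. A minimal sketch using the same node_exporter gauges the panel is built on (the dashboard's `$instance` filter is omitted):

```
sum by (sg_instance) (node_memory_MemTotal_bytes{sg_job=~"sourcegraph-executors"} - node_memory_MemAvailable_bytes{sg_job=~"sourcegraph-executors"})
```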

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100710` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
(1 - sum(node_memory_MemAvailable_bytes{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}) by (sg_instance) / sum(node_memory_MemTotal_bytes{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}) by (sg_instance)) * 100
```
</details>

<br />

#### executor: node_memory_saturation_vmeff

<p class="subtitle">Memory saturation (vmem efficiency)</p>

Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes of near 100% and above, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned as there is no memory pressure. Sustained numbers &gt;~100% may be a sign of imminent memory exhaustion, while sustained 0% &lt; x &lt; ~100% figures are very serious.
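
The panel's ratio sums four pgsteal counters over four pgscan counters; the shape of the calculation is easier to see reduced to a single reclaim path. A sketch using just the kswapd pair of the same vmstat metrics:

```
rate(node_vmstat_pgsteal_kswapd{sg_job=~"sourcegraph-executors"}[5m]) / rate(node_vmstat_pgscan_kswapd{sg_job=~"sourcegraph-executors"}[5m]) * 100
```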

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100711` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
(rate(node_vmstat_pgsteal_anon{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_direct{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_file{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_kswapd{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) / (rate(node_vmstat_pgscan_anon{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_direct{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_file{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_kswapd{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) * 100
```
</details>

<br />

#### executor: node_memory_saturation_pressure_stalled

<p class="subtitle">Memory saturation (fully stalled)</p>

Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with the vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100712` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
rate(node_pressure_memory_stalled_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])
```
</details>

<br />

#### executor: node_io_disk_utilization

<p class="subtitle">Disk IO utilization (percentage time spent in IO)</p>

Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but for SSDs and RAID arrays this is less likely to be the case, as they are capable of serving multiple requests in parallel; other metrics, such as throughput and request queue size, should be factored in.
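
Because 100% busy time is ambiguous for parallel devices, throughput is a useful companion signal. A minimal sketch using the standard node_exporter disk counters (the panel's device relabeling is omitted here):

```
sum by (sg_instance, device) (rate(node_disk_read_bytes_total{sg_job=~"sourcegraph-executors"}[5m]) + rate(node_disk_written_bytes_total{sg_job=~"sourcegraph-executors"}[5m]))
```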

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100720` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(label_replace(label_replace(rate(node_disk_io_time_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk) * 100
```
</details>

<br />

#### executor: node_io_disk_saturation

<p class="subtitle">Disk IO saturation (avg IO queue size)</p>

Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already and/or replacing the faulty drive(s), if any.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100721` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(label_replace(label_replace(rate(node_disk_io_time_weighted_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk)
```
</details>

<br />

#### executor: node_io_disk_saturation_pressure_full

<p class="subtitle">Disk IO saturation (avg time of all processes stalled)</p>

Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. where no processes could make progress.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100722` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
rate(node_pressure_io_stalled_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])
```
</details>

<br />

#### executor: node_io_network_utilization

<p class="subtitle">Network IO utilization (Rx)</p>

Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
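
The panel query multiplies bytes by 8 to report bits received per second, summed across all interfaces. To see which link carries the traffic, the same metric can be broken down by its `device` label; a minimal sketch:

```
sum by (sg_instance, device) (rate(node_network_receive_bytes_total{sg_job=~"sourcegraph-executors"}[5m])) * 8
```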

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100730` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_receive_bytes_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance) * 8
```
</details>

<br />

#### executor: node_io_network_saturation

<p class="subtitle">Network IO saturation (Rx packets dropped)</p>

Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap, but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100731` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_receive_drop_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
```
</details>

<br />

#### executor: node_io_network_saturation

<p class="subtitle">Network IO errors (Rx)</p>

Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100732` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_receive_errs_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
```
</details>

<br />

#### executor: node_io_network_utilization

<p class="subtitle">Network IO utilization (Tx)</p>

Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100740` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_transmit_bytes_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance) * 8
```
</details>

<br />

#### executor: node_io_network_saturation

<p class="subtitle">Network IO saturation (Tx packets dropped)</p>

Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, the network link is congested, etc.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100741` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_transmit_drop_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
```
</details>

<br />

#### executor: node_io_network_saturation

<p class="subtitle">Network IO errors (Tx)</p>

Number of packet transmission errors. This is distinct from Tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise, etc.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100742` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_transmit_errs_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
```
</details>

<br />

### Executor: Executor: Docker Registry Mirror instance metrics

#### executor: node_cpu_utilization

<p class="subtitle">CPU utilization (minus idle/iowait)</p>

Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100800` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_cpu_seconds_total{sg_job=~"sourcegraph-executors-registry",mode!~"(idle|iowait)",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance) / count(node_cpu_seconds_total{sg_job=~"sourcegraph-executors-registry",mode="system",sg_instance=~"docker-registry"}) by (sg_instance) * 100
```
</details>

<br />

#### executor: node_cpu_saturation_cpu_wait

<p class="subtitle">CPU saturation (time waiting)</p>

Indicates the average summed time that some (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, the CPU is underpowered for the workload and more powerful machines should be provisioned. This can only ever cover a subset of processes, because for processes to be waiting for CPU time there must be other process(es) consuming that CPU time.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100801` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
rate(node_pressure_cpu_waiting_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])
```
</details>

<br />

#### executor: node_memory_utilization

<p class="subtitle">Memory utilization</p>

Indicates memory utilization as a percentage, i.e. total memory minus available memory (which includes cache and buffers). Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism, down-sizing machines, etc.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100810` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
(1 - sum(node_memory_MemAvailable_bytes{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}) by (sg_instance) / sum(node_memory_MemTotal_bytes{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}) by (sg_instance)) * 100
```
</details>

<br />

#### executor: node_memory_saturation_vmeff

<p class="subtitle">Memory saturation (vmem efficiency)</p>

Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes of near 100% and above, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned as there is no memory pressure. Sustained numbers &gt;~100% may be a sign of imminent memory exhaustion, while sustained 0% &lt; x &lt; ~100% figures are very serious.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100811` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
(rate(node_vmstat_pgsteal_anon{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgsteal_direct{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgsteal_file{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgsteal_kswapd{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) / (rate(node_vmstat_pgscan_anon{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgscan_direct{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgscan_file{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgscan_kswapd{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) * 100
```
</details>

<br />

#### executor: node_memory_saturation_pressure_stalled

<p class="subtitle">Memory saturation (fully stalled)</p>

Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with the vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100812` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
rate(node_pressure_memory_stalled_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])
```
</details>

<br />

#### executor: node_io_disk_utilization

<p class="subtitle">Disk IO utilization (percentage time spent in IO)</p>

Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but for SSDs and RAID arrays this is less likely to be the case, as they are capable of serving multiple requests in parallel; other metrics, such as throughput and request queue size, should be factored in.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100820` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(label_replace(label_replace(rate(node_disk_io_time_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk) * 100
```
</details>

<br />

#### executor: node_io_disk_saturation

<p class="subtitle">Disk IO saturation (avg IO queue size)</p>

Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already and/or replacing the faulty drive(s), if any.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100821` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(label_replace(label_replace(rate(node_disk_io_time_weighted_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk)
```
</details>

<br />

#### executor: node_io_disk_saturation_pressure_full

<p class="subtitle">Disk IO saturation (avg time of all processes stalled)</p>

Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. where no processes could make progress.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100822` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
rate(node_pressure_io_stalled_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])
```
</details>

<br />

#### executor: node_io_network_utilization

<p class="subtitle">Network IO utilization (Rx)</p>

Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100830` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_receive_bytes_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance) * 8
```
</details>

<br />

#### executor: node_io_network_saturation

<p class="subtitle">Network IO saturation (Rx packets dropped)</p>

Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap, but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100831` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_receive_drop_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
```
</details>

<br />

#### executor: node_io_network_saturation

<p class="subtitle">Network IO errors (Rx)</p>

Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100832` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_receive_errs_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
```
</details>

<br />

#### executor: node_io_network_utilization

<p class="subtitle">Network IO utilization (Tx)</p>

Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100840` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_transmit_bytes_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance) * 8
```
</details>

<br />

#### executor: node_io_network_saturation

<p class="subtitle">Network IO saturation (Tx packets dropped)</p>

Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, the network link is congested, etc.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100841` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_transmit_drop_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
```
</details>

<br />

#### executor: node_io_network_saturation

<p class="subtitle">Network IO errors (Tx)</p>

Number of packet transmission errors. This is distinct from Tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise, etc.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100842` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum(rate(node_network_transmit_errs_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
```
</details>

<br />

### Executor: Golang runtime monitoring

#### executor: go_goroutines

<p class="subtitle">Maximum active goroutines</p>

A high value here indicates a possible goroutine leak.

Refer to the [alerts reference](alerts#executor-go_goroutines) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(sg_instance) (go_goroutines{sg_job=~".*sourcegraph-executors"})
```
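
A leak typically shows up as a monotonic climb rather than a plateau. As a rough heuristic sketch (same labels assumed), the slope of the gauge over an hour can make a slow leak visible:

```
# a sustained positive slope suggests goroutines are accumulating
deriv(go_goroutines{sg_job=~".*sourcegraph-executors"}[1h]) > 0
```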
</details>

<br />

#### executor: go_gc_duration_seconds

<p class="subtitle">Maximum go garbage collection duration</p>

Refer to the [alerts reference](alerts#executor-go_gc_duration_seconds) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/executor/executor?viewPanel=100901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Plane team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by(sg_instance) (go_gc_duration_seconds{sg_job=~".*sourcegraph-executors"})
```
</details>

<br />

## Global Containers Resource Usage

<p class="subtitle">Container usage and provisioning indicators of all services.</p>

To see this dashboard, visit `/-/debug/grafana/d/containers/containers` on your Sourcegraph instance.

### Global Containers Resource Usage: Containers (not available on server)

#### containers: container_memory_usage

<p class="subtitle">Container memory usage of all services</p>

This value indicates the memory usage of all containers.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|searcher|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}
```
</details>

<br />

#### containers: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

This value indicates the CPU usage of all containers.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|searcher|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}
```
</details>

<br />

### Global Containers Resource Usage: Containers: Provisioning Indicators (not available on server)

#### containers: container_memory_usage_provisioning

<p class="subtitle">Container memory usage (5m maximum) of services that exceed 80% memory limit</p>

Containers that exceed 80% of their memory limit. The value indicates potentially underprovisioned resources.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|searcher|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}[5m]) >= 80
```
</details>

<br />

#### containers: container_cpu_usage_provisioning

<p class="subtitle">Container cpu usage total (5m maximum) across all cores of services that exceed 80% cpu limit</p>

Containers that exceed 80% of their CPU limit. The value indicates potentially underprovisioned resources.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|searcher|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}[5m]) >= 80
```
</details>

<br />

#### containers: container_oomkill_events_total

<p class="subtitle">Container OOMKILL events total</p>

This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer.
Frequent occurrences are an indicator of underprovisioned memory.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100120` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name) (container_oom_events_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|searcher|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}) >= 1
```
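
The panel query shows the cumulative counter per container, which resets when the container restarts. To surface recent kills rather than lifetime totals, a hedged sketch over the same counter:

```
# containers OOM-killed within the last hour
sum by (name) (increase(container_oom_events_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|searcher|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}[1h])) > 0
```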
</details>

<br />

#### containers: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM-killed or terminated for some other reason.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/containers/containers?viewPanel=100130` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|searcher|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}) > 60)
```
</details>

<br />

## Code Intelligence &gt; Autoindexing

<p class="subtitle">The service at `internal/codeintel/autoindexing`.</p>

To see this dashboard, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing` on your Sourcegraph instance.

### Code Intelligence &gt; Autoindexing: Codeintel: Autoindexing &gt; Summary

#### codeintel-autoindexing:

<p class="subtitle">Auto-index jobs inserted over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_dbstore_indexes_inserted[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_error_rate

<p class="subtitle">Auto-indexing job scheduler operation error rate over 10m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^${source:regex}.*"}[10m])) / (sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^${source:regex}.*"}[10m])) + sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^${source:regex}.*"}[10m]))) * 100
```
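
This expression follows the pattern used by the `_error_rate` panels throughout these dashboards: errors divided by the sum of the operation and error counters, scaled to a percentage. Schematically (placeholders, not real metric names):

```
# error rate (%) = errors / (operations + errors) * 100
  sum(increase(<errors_total>[10m]))
/ (sum(increase(<operations_total>[10m])) + sum(increase(<errors_total>[10m])))
* 100
```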
</details>

<br />

### Code Intelligence &gt; Autoindexing: Codeintel: Autoindexing &gt; Service

#### codeintel-autoindexing: codeintel_autoindexing_total

<p class="subtitle">Aggregate service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_99th_percentile_duration

<p class="subtitle">Aggregate successful service operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_autoindexing_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
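
This query yields per-bucket rates, i.e. a latency distribution (typically rendered as a heatmap). The same buckets can be collapsed into a single aggregate percentile with `histogram_quantile`, mirroring the per-operation duration panels below:

```
# aggregate 99th percentile derived from the same bucket rates
histogram_quantile(0.99, sum by (le)(rate(src_codeintel_autoindexing_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```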
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_errors_total

<p class="subtitle">Aggregate service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_error_rate

<p class="subtitle">Aggregate service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_total

<p class="subtitle">Service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_99th_percentile_duration

<p class="subtitle">99th percentile successful service operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_autoindexing_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_errors_total

<p class="subtitle">Service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_error_rate

<p class="subtitle">Service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Autoindexing: Codeintel: Autoindexing &gt; GQL transport

#### codeintel-autoindexing: codeintel_autoindexing_transport_graphql_total

<p class="subtitle">Aggregate resolver operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_transport_graphql_99th_percentile_duration

<p class="subtitle">Aggregate successful resolver operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_autoindexing_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_transport_graphql_errors_total

<p class="subtitle">Aggregate resolver operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_transport_graphql_error_rate

<p class="subtitle">Aggregate resolver operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_transport_graphql_total

<p class="subtitle">Resolver operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_transport_graphql_99th_percentile_duration

<p class="subtitle">99th percentile successful resolver operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_autoindexing_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_transport_graphql_errors_total

<p class="subtitle">Resolver operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_transport_graphql_error_rate

<p class="subtitle">Resolver operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Autoindexing: Codeintel: Autoindexing &gt; Store (internal)

#### codeintel-autoindexing: codeintel_autoindexing_store_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_autoindexing_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_store_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_store_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_store_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_autoindexing_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100313` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Autoindexing: Codeintel: Autoindexing &gt; Background jobs (internal)

#### codeintel-autoindexing: codeintel_autoindexing_background_total

<p class="subtitle">Aggregate background operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_background_99th_percentile_duration

<p class="subtitle">Aggregate successful background operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_autoindexing_background_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_background_errors_total

<p class="subtitle">Aggregate background operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_background_error_rate

<p class="subtitle">Aggregate background operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100403` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_background_total

<p class="subtitle">Background operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_background_99th_percentile_duration

<p class="subtitle">99th percentile successful background operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_autoindexing_background_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_background_errors_total

<p class="subtitle">Background operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_background_error_rate

<p class="subtitle">Background operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100413` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Autoindexing: Codeintel: Autoindexing &gt; Inference service (internal)

#### codeintel-autoindexing: codeintel_autoindexing_inference_total

<p class="subtitle">Aggregate service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_inference_99th_percentile_duration

<p class="subtitle">Aggregate successful service operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_autoindexing_inference_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_inference_errors_total

<p class="subtitle">Aggregate service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_inference_error_rate

<p class="subtitle">Aggregate service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_inference_total

<p class="subtitle">Service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_inference_99th_percentile_duration

<p class="subtitle">99th percentile successful service operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_autoindexing_inference_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_inference_errors_total

<p class="subtitle">Service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_inference_error_rate

<p class="subtitle">Service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100513` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Autoindexing: Codeintel: Luasandbox service

#### codeintel-autoindexing: luasandbox_total

<p class="subtitle">Aggregate service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: luasandbox_99th_percentile_duration

<p class="subtitle">Aggregate successful service operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_luasandbox_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: luasandbox_errors_total

<p class="subtitle">Aggregate service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100602` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: luasandbox_error_rate

<p class="subtitle">Aggregate service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100603` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-autoindexing: luasandbox_total

<p class="subtitle">Service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: luasandbox_99th_percentile_duration

<p class="subtitle">99th percentile successful service operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_luasandbox_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-autoindexing: luasandbox_errors_total

<p class="subtitle">Service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100612` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: luasandbox_error_rate

<p class="subtitle">Service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100613` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Autoindexing: Codeintel: Autoindexing &gt; Janitor task &gt; Codeintel autoindexing janitor unknown repository

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_janitor_unknown_repository_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_janitor_unknown_repository_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100711` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_autoindexing_janitor_unknown_repository_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100712` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100713` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Autoindexing: Codeintel: Autoindexing &gt; Janitor task &gt; Codeintel autoindexing janitor unknown commit

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_janitor_unknown_commit_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_janitor_unknown_commit_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_autoindexing_janitor_unknown_commit_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100812` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100813` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Autoindexing: Codeintel: Autoindexing &gt; Janitor task &gt; Codeintel autoindexing janitor expired

#### codeintel-autoindexing: codeintel_autoindexing_janitor_expired_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_janitor_expired_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_expired_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_autoindexing_janitor_expired_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_expired_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100910` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_janitor_expired_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_expired_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100911` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_autoindexing_janitor_expired_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_expired_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100912` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_janitor_expired_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-autoindexing: codeintel_autoindexing_janitor_expired_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100913` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_autoindexing_janitor_expired_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_janitor_expired_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_janitor_expired_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

## Code Intelligence &gt; Code Nav

<p class="subtitle">The service at internal/codeintel/codenav`.</p>

To see this dashboard, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav` on your Sourcegraph instance.

### Code Intelligence &gt; Code Nav: Codeintel: CodeNav &gt; Service

#### codeintel-codenav: codeintel_codenav_total

<p class="subtitle">Aggregate service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_99th_percentile_duration

<p class="subtitle">Aggregate successful service operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_codenav_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_errors_total

<p class="subtitle">Aggregate service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100002` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_error_rate

<p class="subtitle">Aggregate service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100003` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_total

<p class="subtitle">Service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_99th_percentile_duration

<p class="subtitle">99th percentile successful service operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_codenav_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_errors_total

<p class="subtitle">Service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_error_rate

<p class="subtitle">Service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Code Nav: Codeintel: CodeNav &gt; LSIF store

#### codeintel-codenav: codeintel_codenav_lsifstore_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_lsifstore_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_codeintel_codenav_lsifstore_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_lsifstore_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_lsifstore_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_lsifstore_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_lsifstore_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_codenav_lsifstore_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_lsifstore_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_lsifstore_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Code Nav: Codeintel: CodeNav &gt; GQL Transport

#### codeintel-codenav: codeintel_codenav_transport_graphql_total

<p class="subtitle">Aggregate resolver operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_transport_graphql_99th_percentile_duration

<p class="subtitle">Aggregate successful resolver operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_codenav_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_transport_graphql_errors_total

<p class="subtitle">Aggregate resolver operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_transport_graphql_error_rate

<p class="subtitle">Aggregate resolver operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_transport_graphql_total

<p class="subtitle">Resolver operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_transport_graphql_99th_percentile_duration

<p class="subtitle">99th percentile successful resolver operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_codenav_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_transport_graphql_errors_total

<p class="subtitle">Resolver operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_transport_graphql_error_rate

<p class="subtitle">Resolver operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Code Nav: Codeintel: CodeNav &gt; Store

#### codeintel-codenav: codeintel_codenav_store_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_codenav_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_store_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_store_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_store_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_codenav_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-codenav: codeintel_codenav_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100313` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

## Code Intelligence &gt; Policies

<p class="subtitle">The service at `internal/codeintel/policies`.</p>

To see this dashboard, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies` on your Sourcegraph instance.

### Code Intelligence &gt; Policies: Codeintel: Policies &gt; Service

#### codeintel-policies: codeintel_policies_total

<p class="subtitle">Aggregate service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_99th_percentile_duration

<p class="subtitle">Aggregate successful service operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_policies_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_errors_total

<p class="subtitle">Aggregate service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100002` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_error_rate

<p class="subtitle">Aggregate service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100003` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-policies: codeintel_policies_total

<p class="subtitle">Service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_99th_percentile_duration

<p class="subtitle">99th percentile successful service operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_policies_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_errors_total

<p class="subtitle">Service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_error_rate

<p class="subtitle">Service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Policies: Codeintel: Policies &gt; Store

#### codeintel-policies: codeintel_policies_store_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_policies_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_store_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_store_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-policies: codeintel_policies_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_store_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_policies_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Policies: Codeintel: Policies &gt; GQL Transport

#### codeintel-policies: codeintel_policies_transport_graphql_total

<p class="subtitle">Aggregate resolver operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_transport_graphql_99th_percentile_duration

<p class="subtitle">Aggregate successful resolver operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_policies_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_transport_graphql_errors_total

<p class="subtitle">Aggregate resolver operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_transport_graphql_error_rate

<p class="subtitle">Aggregate resolver operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-policies: codeintel_policies_transport_graphql_total

<p class="subtitle">Resolver operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_transport_graphql_99th_percentile_duration

<p class="subtitle">99th percentile successful resolver operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_policies_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_transport_graphql_errors_total

<p class="subtitle">Resolver operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-policies: codeintel_policies_transport_graphql_error_rate

<p class="subtitle">Resolver operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Policies: Codeintel: Policies &gt; Repository Pattern Matcher task

#### codeintel-policies: codeintel_background_policies_updated_total_total

<p class="subtitle">Lsif repository pattern matcher repositories pattern matcher every 5m</p>

Number of configuration policies whose repository membership list was updated

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_background_policies_updated_total_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />
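
Because this is a background task, a five-minute window can legitimately read zero between runs. When checking that the pattern matcher is running at all, a wider window is often more informative; a sketch stretching the same counter over one day, with `frontend` assumed as the source value:

```
sum(increase(src_codeintel_background_policies_updated_total_total{job=~"^frontend.*"}[1d]))
```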

## Code Intelligence &gt; Uploads

<p class="subtitle">The service at `internal/codeintel/uploads`.</p>

To see this dashboard, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads` on your Sourcegraph instance.

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Service

#### codeintel-uploads: codeintel_uploads_total

<p class="subtitle">Aggregate service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_99th_percentile_duration

<p class="subtitle">Aggregate successful service operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_uploads_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_errors_total

<p class="subtitle">Aggregate service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100002` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_error_rate

<p class="subtitle">Aggregate service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100003` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_total

<p class="subtitle">Service operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_99th_percentile_duration

<p class="subtitle">99th percentile successful service operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_uploads_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_errors_total

<p class="subtitle">Service operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_error_rate

<p class="subtitle">Service operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Store (internal)

#### codeintel-uploads: codeintel_uploads_store_total

<p class="subtitle">Aggregate store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_store_99th_percentile_duration

<p class="subtitle">Aggregate successful store operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_store_errors_total

<p class="subtitle">Aggregate store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_store_error_rate

<p class="subtitle">Aggregate store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_store_total

<p class="subtitle">Store operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_store_99th_percentile_duration

<p class="subtitle">99th percentile successful store operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_store_errors_total

<p class="subtitle">Store operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_store_error_rate

<p class="subtitle">Store operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; GQL Transport

#### codeintel-uploads: codeintel_uploads_transport_graphql_total

<p class="subtitle">Aggregate resolver operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_graphql_99th_percentile_duration

<p class="subtitle">Aggregate successful resolver operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_uploads_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_graphql_errors_total

<p class="subtitle">Aggregate resolver operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_graphql_error_rate

<p class="subtitle">Aggregate resolver operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_graphql_total

<p class="subtitle">Resolver operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_graphql_99th_percentile_duration

<p class="subtitle">99th percentile successful resolver operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_uploads_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_graphql_errors_total

<p class="subtitle">Resolver operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_graphql_error_rate

<p class="subtitle">Resolver operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; HTTP Transport

#### codeintel-uploads: codeintel_uploads_transport_http_total

<p class="subtitle">Aggregate http handler operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_http_99th_percentile_duration

<p class="subtitle">Aggregate successful http handler operation duration distribution over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le)(rate(src_codeintel_uploads_transport_http_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_http_errors_total

<p class="subtitle">Aggregate http handler operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_http_error_rate

<p class="subtitle">Aggregate http handler operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_http_total

<p class="subtitle">Http handler operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_http_99th_percentile_duration

<p class="subtitle">99th percentile successful http handler operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_uploads_transport_http_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_http_errors_total

<p class="subtitle">Http handler operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_transport_http_error_rate

<p class="subtitle">Http handler operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100313` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Expiration task

#### codeintel-uploads: codeintel_background_repositories_scanned_total

<p class="subtitle">Lsif upload repository scan repositories scanned every 5m</p>

Number of repositories scanned for data retention

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_background_repositories_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_background_upload_records_scanned_total

<p class="subtitle">Lsif upload records scan records scanned every 5m</p>

Number of codeintel upload records scanned for data retention

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_background_upload_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_background_commits_scanned_total

<p class="subtitle">Lsif upload commits scanned commits scanned every 5m</p>

Number of commits reachable from a codeintel upload record scanned for data retention

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_background_commits_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_background_upload_records_expired_total

<p class="subtitle">Lsif upload records expired uploads scanned every 5m</p>

Number of codeintel upload records marked as expired

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100403` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_background_upload_records_expired_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />
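
The four counters in this section describe successive stages of the retention sweep: repositories scanned, upload records scanned, reachable commits scanned, and records finally marked expired. A derived view that is not on the dashboard, but can help when tuning retention policies, is the fraction of scanned upload records that ended up expired; a sketch, assuming a `frontend` source:

```
  sum(increase(src_codeintel_background_upload_records_expired_total{job=~"^frontend.*"}[5m]))
/ sum(increase(src_codeintel_background_upload_records_scanned_total{job=~"^frontend.*"}[5m]))
```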

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Janitor task &gt; Codeintel uploads janitor unknown repository

#### codeintel-uploads: codeintel_uploads_janitor_unknown_repository_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_unknown_repository_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_unknown_repository_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_unknown_repository_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />
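
Each janitor task exposes the same scanned/altered pair, so the ratio of the two approximates how much of the candidate set a cleanup pass actually touched. A sketch of that derived ratio for this task (not a dashboard panel; `frontend` is an assumed source value):

```
  sum(increase(src_codeintel_uploads_janitor_unknown_repository_records_altered_total{job=~"^frontend.*"}[5m]))
/ sum(increase(src_codeintel_uploads_janitor_unknown_repository_records_scanned_total{job=~"^frontend.*"}[5m]))
```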

#### codeintel-uploads: codeintel_uploads_janitor_unknown_repository_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100510` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_unknown_repository_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100511` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, op)(rate(src_codeintel_uploads_janitor_unknown_repository_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_unknown_repository_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100512` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_unknown_repository_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100513` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Janitor task &gt; Codeintel uploads janitor unknown commit

#### codeintel-uploads: codeintel_uploads_janitor_unknown_commit_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_unknown_commit_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_unknown_commit_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100601` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_unknown_commit_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_unknown_commit_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100610` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_unknown_commit_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100611` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_janitor_unknown_commit_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_unknown_commit_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100612` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_unknown_commit_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100613` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Janitor task &gt; Codeintel uploads janitor abandoned

#### codeintel-uploads: codeintel_uploads_janitor_abandoned_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100700` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_abandoned_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_abandoned_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100701` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_abandoned_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_abandoned_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100710` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_abandoned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_abandoned_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100711` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_janitor_abandoned_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_abandoned_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100712` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_abandoned_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_abandoned_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100713` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_abandoned_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_abandoned_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_abandoned_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Janitor task &gt; Codeintel uploads expirer unreferenced

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100800` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_expirer_unreferenced_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100801` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_expirer_unreferenced_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100810` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100811` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_expirer_unreferenced_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100812` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100813` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Janitor task &gt; Codeintel uploads expirer unreferenced graph

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100900` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_expirer_unreferenced_graph_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100901` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_expirer_unreferenced_graph_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100910` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100911` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_expirer_unreferenced_graph_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100912` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100913` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Janitor task &gt; Codeintel uploads hard deleter

#### codeintel-uploads: codeintel_uploads_hard_deleter_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_hard_deleter_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_hard_deleter_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_hard_deleter_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_hard_deleter_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_hard_deleter_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_hard_deleter_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_hard_deleter_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_hard_deleter_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_hard_deleter_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_hard_deleter_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_hard_deleter_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_hard_deleter_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_hard_deleter_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Janitor task &gt; Codeintel uploads janitor audit logs

#### codeintel-uploads: codeintel_uploads_janitor_audit_logs_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_audit_logs_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_audit_logs_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_audit_logs_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_audit_logs_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_audit_logs_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_janitor_audit_logs_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_audit_logs_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101112` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_audit_logs_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101113` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Janitor task &gt; Codeintel uploads janitor scip documents

#### codeintel-uploads: codeintel_uploads_janitor_scip_documents_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_scip_documents_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_scip_documents_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_janitor_scip_documents_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_scip_documents_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_scip_documents_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_janitor_scip_documents_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_scip_documents_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_janitor_scip_documents_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101213` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Reconciler task &gt; Codeintel uploads reconciler scip metadata

#### codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_reconciler_scip_metadata_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_reconciler_scip_metadata_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101310` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101311` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_reconciler_scip_metadata_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101312` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101313` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

### Code Intelligence &gt; Uploads: Codeintel: Uploads &gt; Reconciler task &gt; Codeintel uploads reconciler scip data

#### codeintel-uploads: codeintel_uploads_reconciler_scip_data_records_scanned_total

<p class="subtitle">Records scanned every 5m</p>

The number of candidate records considered for cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_reconciler_scip_data_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_data_records_altered_total

<p class="subtitle">Records altered every 5m</p>

The number of candidate records altered as part of cleanup.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_codeintel_uploads_reconciler_scip_data_records_altered_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_data_total

<p class="subtitle">Job invocation operations every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101410` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_data_99th_percentile_duration

<p class="subtitle">99th percentile successful job invocation operation duration over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101411` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum  by (le,op)(rate(src_codeintel_uploads_reconciler_scip_data_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_data_errors_total

<p class="subtitle">Job invocation operation errors every 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101412` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_errors_total{job=~"^${source:regex}.*"}[5m]))
```
</details>

<br />

#### codeintel-uploads: codeintel_uploads_reconciler_scip_data_error_rate

<p class="subtitle">Job invocation operation error rate over 5m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101413` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
```
</details>

<br />

## Telemetry

<p class="subtitle">Monitoring telemetry services in Sourcegraph.</p>

To see this dashboard, visit `/-/debug/grafana/d/telemetry/telemetry` on your Sourcegraph instance.

### Telemetry: Telemetry Gateway Exporter: Events export and queue metrics

#### telemetry: telemetry_gateway_exporter_queue_size

<p class="subtitle">Telemetry event payloads pending export</p>

The number of events queued to be exported.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_telemetrygatewayexporter_queue_size)
```
</details>

<br />

#### telemetry: telemetry_gateway_exporter_queue_growth

<p class="subtitle">Rate of growth of events export queue over 30m</p>

A positive value indicates the queue is growing.
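A quick way to check for sustained growth is to compare this panel's derivative against zero; the following is an illustrative sketch reusing the panel's own query (the `> 0` cutoff is arbitrary, not the configured alert threshold):

```
# Returns a value only while the queue has grown, on average, over the last 30m.
max(deriv(src_telemetrygatewayexporter_queue_size[30m])) > 0
```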

Refer to the [alerts reference](alerts#telemetry-telemetry_gateway_exporter_queue_growth) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(deriv(src_telemetrygatewayexporter_queue_size[30m]))
```
</details>

<br />

#### telemetry: src_telemetrygatewayexporter_exported_events

<p class="subtitle">Events exported from queue per hour</p>

The number of events being exported.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(increase(src_telemetrygatewayexporter_exported_events[1h]))
```
</details>

<br />

#### telemetry: telemetry_gateway_exporter_batch_size

<p class="subtitle">Number of events exported per batch over 30m</p>

The number of events exported in each batch. The largest bucket is the maximum number of events exported per batch.
If the distribution trends toward the maximum bucket, events export throughput is at or approaching saturation; try increasing `TELEMETRY_GATEWAY_EXPORTER_EXPORT_BATCH_SIZE` or decreasing `TELEMETRY_GATEWAY_EXPORTER_EXPORT_INTERVAL`.
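To make the trend easier to read at a glance, a high quantile can be charted over the same histogram; a hedged sketch (the 0.95 quantile is our choice for illustration, not part of the generated dashboard):

```
# Estimated 95th percentile events per export batch over 30m; values pinned
# at the largest bucket boundary suggest batches are saturated.
histogram_quantile(0.95, sum by (le) (rate(src_telemetrygatewayexporter_batch_size_bucket[30m])))
```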

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le) (rate(src_telemetrygatewayexporter_batch_size_bucket[30m]))
```
</details>

<br />

### Telemetry: Telemetry Gateway Exporter: Events export job operations

#### telemetry: telemetrygatewayexporter_exporter_total

<p class="subtitle">Events exporter operations every 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_exporter_total{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_exporter_99th_percentile_duration

<p class="subtitle">Aggregate successful events exporter operation duration distribution over 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_telemetrygatewayexporter_exporter_duration_seconds_bucket{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_exporter_errors_total

<p class="subtitle">Events exporter operation errors every 30m</p>

Refer to the [alerts reference](alerts#telemetry-telemetrygatewayexporter_exporter_errors_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_exporter_errors_total{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_exporter_error_rate

<p class="subtitle">Events exporter operation error rate over 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100103` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_exporter_errors_total{job=~"^worker.*"}[30m])) / (sum(increase(src_telemetrygatewayexporter_exporter_total{job=~"^worker.*"}[30m])) + sum(increase(src_telemetrygatewayexporter_exporter_errors_total{job=~"^worker.*"}[30m]))) * 100
```
</details>

<br />

### Telemetry: Telemetry Gateway Exporter: Events export queue cleanup job operations

#### telemetry: telemetrygatewayexporter_queue_cleanup_total

<p class="subtitle">Events export queue cleanup operations every 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_queue_cleanup_total{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_queue_cleanup_99th_percentile_duration

<p class="subtitle">Aggregate successful events export queue cleanup operation duration distribution over 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_telemetrygatewayexporter_queue_cleanup_duration_seconds_bucket{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_queue_cleanup_errors_total

<p class="subtitle">Events export queue cleanup operation errors every 30m</p>

Refer to the [alerts reference](alerts#telemetry-telemetrygatewayexporter_queue_cleanup_errors_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_queue_cleanup_errors_total{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_queue_cleanup_error_rate

<p class="subtitle">Events export queue cleanup operation error rate over 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100203` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_queue_cleanup_errors_total{job=~"^worker.*"}[30m])) / (sum(increase(src_telemetrygatewayexporter_queue_cleanup_total{job=~"^worker.*"}[30m])) + sum(increase(src_telemetrygatewayexporter_queue_cleanup_errors_total{job=~"^worker.*"}[30m]))) * 100
```
</details>

<br />

### Telemetry: Telemetry Gateway Exporter: Events export queue metrics reporting job operations

#### telemetry: telemetrygatewayexporter_queue_metrics_reporter_total

<p class="subtitle">Events export backlog metrics reporting operations every 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_total{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_queue_metrics_reporter_99th_percentile_duration

<p class="subtitle">Aggregate successful events export backlog metrics reporting operation duration distribution over 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100301` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_telemetrygatewayexporter_queue_metrics_reporter_duration_seconds_bucket{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_queue_metrics_reporter_errors_total

<p class="subtitle">Events export backlog metrics reporting operation errors every 30m</p>

Refer to the [alerts reference](alerts#telemetry-telemetrygatewayexporter_queue_metrics_reporter_errors_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100302` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_errors_total{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_queue_metrics_reporter_error_rate

<p class="subtitle">Events export backlog metrics reporting operation error rate over 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100303` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_errors_total{job=~"^worker.*"}[30m])) / (sum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_total{job=~"^worker.*"}[30m])) + sum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_errors_total{job=~"^worker.*"}[30m]))) * 100
```
</details>

<br />

### Telemetry: Telemetry persistence

#### telemetry: telemetry_v2_export_queue_write_failures

<p class="subtitle">Failed writes to events export queue over 5m</p>

Telemetry V2 writes send events into the `telemetry_events_export_queue` for the exporter to periodically export.

Refer to the [alerts reference](alerts#telemetry-telemetry_v2_export_queue_write_failures) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(increase(src_telemetry_export_store_queued_events{failed="true"}[5m])) / sum(increase(src_telemetry_export_store_queued_events[5m]))) * 100
```
</details>

<br />

#### telemetry: telemetry_v2_event_logs_write_failures

<p class="subtitle">Failed write V2 events to V1 'event_logs' over 5m</p>

Telemetry V2 writes also attempt to `tee` events into the legacy V1 events format in the `event_logs` database table for long-term local persistence.

Refer to the [alerts reference](alerts#telemetry-telemetry_v2_event_logs_write_failures) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum(increase(src_telemetry_teestore_v1_events{failed="true"}[5m])) / sum(increase(src_telemetry_teestore_v1_events[5m]))) * 100
```
</details>

<br />

### Telemetry: Telemetry Gateway Exporter: (off by default) User metadata export job operations

#### telemetry: telemetrygatewayexporter_usermetadata_exporter_total

<p class="subtitle">(off by default) user metadata exporter operations every 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_usermetadata_exporter_total{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_usermetadata_exporter_99th_percentile_duration

<p class="subtitle">Aggregate successful (off by default) user metadata exporter operation duration distribution over 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum  by (le)(rate(src_telemetrygatewayexporter_usermetadata_exporter_duration_seconds_bucket{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_usermetadata_exporter_errors_total

<p class="subtitle">(off by default) user metadata exporter operation errors every 30m</p>

Refer to the [alerts reference](alerts#telemetry-telemetrygatewayexporter_usermetadata_exporter_errors_total) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_usermetadata_exporter_errors_total{job=~"^worker.*"}[30m]))
```
</details>

<br />

#### telemetry: telemetrygatewayexporter_usermetadata_exporter_error_rate

<p class="subtitle">(off by default) user metadata exporter operation error rate over 30m</p>

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/telemetry/telemetry?viewPanel=100503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_telemetrygatewayexporter_usermetadata_exporter_errors_total{job=~"^worker.*"}[30m])) / (sum(increase(src_telemetrygatewayexporter_usermetadata_exporter_total{job=~"^worker.*"}[30m])) + sum(increase(src_telemetrygatewayexporter_usermetadata_exporter_errors_total{job=~"^worker.*"}[30m]))) * 100
```
</details>

<br />

## OpenTelemetry Collector

<p class="subtitle">The OpenTelemetry collector ingests OpenTelemetry data from Sourcegraph and exports it to the configured backends.</p>

To see this dashboard, visit `/-/debug/grafana/d/otel-collector/otel-collector` on your Sourcegraph instance.

### OpenTelemetry Collector: Receivers

#### otel-collector: otel_span_receive_rate

<p class="subtitle">Spans received per receiver per minute</p>

Shows the rate of spans accepted by the configured receiver.

A trace is a collection of spans, and a span represents a unit of work or operation; spans are the building blocks of traces.
The spans have only been accepted by the receiver, which means they still have to move through the configured pipeline to be exported.
For more information on tracing and on configuring an OpenTelemetry receiver, see https://opentelemetry.io/docs/collector/configuration/#receivers.

See the Exporters section for spans that have made it through the pipeline and been exported.

Depending on the configured processors, received spans might be dropped and not exported. For more information on configuring processors, see
https://opentelemetry.io/docs/collector/configuration/#processors.
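As a rough cross-check of pipeline loss, receiver and exporter rates can be compared directly; a minimal sketch, assuming both metric families are scraped from the same collector instance:

```
# Approximate spans per second that were accepted but not (yet) exported;
# a persistently positive gap suggests processors are dropping spans or
# queues are backing up.
sum(rate(otelcol_receiver_accepted_spans[5m])) - sum(rate(otelcol_exporter_sent_spans[5m]))
```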

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (receiver) (rate(otelcol_receiver_accepted_spans[1m]))
```
</details>

<br />

#### otel-collector: otel_span_refused

<p class="subtitle">Spans refused per receiver</p>

Refer to the [alerts reference](alerts#otel-collector-otel_span_refused) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (receiver) (rate(otelcol_receiver_refused_spans[1m]))
```
</details>

<br />

### OpenTelemetry Collector: Exporters

#### otel-collector: otel_span_export_rate

<p class="subtitle">Spans exported per exporter per minute</p>

Shows the rate of spans being sent by the exporter.

A trace is a collection of spans, and a span represents a unit of work or operation; spans are the building blocks of traces.
The rate of spans here indicates spans that have made it through the configured pipeline and have been sent to the configured export destination.

For more information on configuring an exporter for the OpenTelemetry collector, see https://opentelemetry.io/docs/collector/configuration/#exporters.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (exporter) (rate(otelcol_exporter_sent_spans[1m]))
```
</details>

<br />

#### otel-collector: otel_span_export_failures

<p class="subtitle">Span export failures by exporter</p>

Shows the rate of spans that failed to be sent by the configured exporter. A number higher than 0 for a long period can indicate a problem with the exporter configuration, or with the destination the spans are being exported to.

For more information on configuring an exporter for the OpenTelemetry collector, see https://opentelemetry.io/docs/collector/configuration/#exporters.
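Following the error-rate convention used by other panels in this document, a hedged sketch of the export failure percentage per exporter might look like:

```
# Illustrative only: percentage of spans that failed to export, per exporter.
sum by (exporter) (rate(otelcol_exporter_send_failed_spans[1m]))
/
(sum by (exporter) (rate(otelcol_exporter_sent_spans[1m])) + sum by (exporter) (rate(otelcol_exporter_send_failed_spans[1m])))
* 100
```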

Refer to the [alerts reference](alerts#otel-collector-otel_span_export_failures) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (exporter) (rate(otelcol_exporter_send_failed_spans[1m]))
```
</details>

<br />

### OpenTelemetry Collector: Queue Length

#### otel-collector: otelcol_exporter_queue_capacity

<p class="subtitle">Exporter queue capacity</p>

Shows the capacity of the retry queue (in batches).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (exporter) (rate(otelcol_exporter_queue_capacity{job=~"^.*"}[1m]))
```
</details>

<br />

#### otel-collector: otelcol_exporter_queue_size

<p class="subtitle">Exporter queue size</p>

Shows the current size of the retry queue.
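Capacity and size are most useful read together; a minimal sketch of queue utilization, assuming both metrics are plain gauges exposed by the collector:

```
# Fraction of the retry queue in use, per exporter; values near 1 mean the
# queue is close to saturation.
max by (exporter) (otelcol_exporter_queue_size) / max by (exporter) (otelcol_exporter_queue_capacity)
```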

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (exporter) (rate(otelcol_exporter_queue_size{job=~"^.*"}[1m]))
```
</details>

<br />

#### otel-collector: otelcol_exporter_enqueue_failed_spans

<p class="subtitle">Exporter enqueue failed spans</p>

Shows the rate of spans that failed to be enqueued by the configured exporter. A number higher than 0 for a long period can indicate a problem with the exporter configuration.

Refer to the [alerts reference](alerts#otel-collector-otelcol_exporter_enqueue_failed_spans) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (exporter) (rate(otelcol_exporter_enqueue_failed_spans{job=~"^.*"}[1m]))
```
</details>

<br />

### OpenTelemetry Collector: Processors

#### otel-collector: otelcol_processor_dropped_spans

<p class="subtitle">Spans dropped per processor per minute</p>

Shows the rate of spans dropped by the configured processor.

Refer to the [alerts reference](alerts#otel-collector-otelcol_processor_dropped_spans) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100300` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (processor) (rate(otelcol_processor_dropped_spans[1m]))
```
</details>

<br />

### OpenTelemetry Collector: Collector resource usage

#### otel-collector: otel_cpu_usage

<p class="subtitle">Cpu usage of the collector</p>

Shows CPU usage as reported by the OpenTelemetry collector.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100400` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (job) (rate(otelcol_process_cpu_seconds{job=~"^.*"}[1m]))
```
</details>

<br />

#### otel-collector: otel_memory_resident_set_size

<p class="subtitle">Memory allocated to the otel collector</p>

Shows the allocated memory Resident Set Size (RSS) as reported by the OpenTelemetry collector.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100401` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (job) (rate(otelcol_process_memory_rss{job=~"^.*"}[1m]))
```
</details>

<br />

#### otel-collector: otel_memory_usage

<p class="subtitle">Memory used by the collector</p>

Shows how much memory is being used by the otel collector. High memory usage might indicate that:

* the configured pipeline is keeping a lot of spans in memory for processing
* spans are failing to be sent and the exporter is configured to retry
* a batch processor is configured with a large batch size

For more information on configuring processors for the OpenTelemetry collector see https://opentelemetry.io/docs/collector/configuration/#processors.
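The panel above charts an allocation rate; for an absolute figure to compare against, a hedged companion query over the collector's RSS gauge (the same `otelcol_process_memory_rss` metric used earlier on this dashboard) could be:

```
# Current resident set size of the collector process, per job (illustrative
# companion query, not part of the generated dashboard).
sum by (job) (otelcol_process_memory_rss)
```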

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100402` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (job) (rate(otelcol_process_runtime_total_alloc_bytes{job=~"^.*"}[1m]))
```
</details>

<br />

### OpenTelemetry Collector: Container monitoring (not available on server)

#### otel-collector: container_missing

<p class="subtitle">Container missing</p>

This value is the number of times a container has not been seen for more than one minute. If you observe this
value changing independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.

- **Kubernetes:**
	- Determine if the pod was OOM killed using `kubectl describe pod otel-collector` (look for `OOMKilled: true`) and, if so, consider increasing the memory limit in the relevant `Deployment.yaml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `kubectl logs -p otel-collector`.
- **Docker Compose:**
	- Determine if the pod was OOM killed using `docker inspect -f '\{\{json .State\}\}' otel-collector` (look for `"OOMKilled":true`) and, if so, consider increasing the memory limit of the otel-collector container in `docker-compose.yml`.
	- Check the logs before the container restarted to see if there are `panic:` messages or similar using `docker logs otel-collector` (note this will include logs from the previous and currently running container).

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100500` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
count by(name) ((time() - container_last_seen{name=~"^otel-collector.*"}) > 60)
```
</details>

<br />

#### otel-collector: container_cpu_usage

<p class="subtitle">Container cpu usage total (1m average) across all cores by instance</p>

Refer to the [alerts reference](alerts#otel-collector-container_cpu_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100501` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_cpu_usage_percentage_total{name=~"^otel-collector.*"}
```
</details>

<br />

#### otel-collector: container_memory_usage

<p class="subtitle">Container memory usage by instance</p>

Refer to the [alerts reference](alerts#otel-collector-container_memory_usage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100502` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
cadvisor_container_memory_usage_percentage_total{name=~"^otel-collector.*"}
```
</details>

<br />

#### otel-collector: fs_io_operations

<p class="subtitle">Filesystem reads and writes rate by instance over 1h</p>

This value indicates the number of filesystem read and write operations by containers of this service.
When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with otel-collector issues.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100503` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(name) (rate(container_fs_reads_total{name=~"^otel-collector.*"}[1h]) + rate(container_fs_writes_total{name=~"^otel-collector.*"}[1h]))
```
</details>

<br />

### OpenTelemetry Collector: Kubernetes monitoring (only available on Kubernetes)

#### otel-collector: pods_available_percentage

<p class="subtitle">Percentage pods available</p>

Refer to the [alerts reference](alerts#otel-collector-pods_available_percentage) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100600` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by(app) (up{app=~".*otel-collector"}) / count by (app) (up{app=~".*otel-collector"}) * 100
```
</details>

<br />

## Completions

<p class="subtitle">Cody chat and code completions.</p>

To see this dashboard, visit `/-/debug/grafana/d/completions/completions` on your Sourcegraph instance.

### Completions: Completions requests

#### completions: api_request_rate

<p class="subtitle">Rate of completions API requests</p>

Rate (QPS) of requests to Cody chat and code completion endpoints.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/completions/completions?viewPanel=100000` on your Sourcegraph instance.


<details>
<summary>Technical details</summary>

Query:

```
sum by (code)(irate(src_http_request_duration_seconds_count{route=~"^cody.completions.*"}[5m]))
```
</details>

<br />

## Deep Search

<p class="subtitle">Monitoring for Deep Search question processing.</p>

To see this dashboard, visit `/-/debug/grafana/d/deepsearch/deepsearch` on your Sourcegraph instance.

### Deep Search: Question processing

#### deepsearch: deepsearch_questions_in_flight

<p class="subtitle">Number of questions currently being processed</p>

The number of deep search questions currently being processed.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/deepsearch/deepsearch?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_deepsearch_questions_in_flight)
```
</details>

<br />

#### deepsearch: deepsearch_questions_in_flight_growth

<p class="subtitle">Rate of growth of in-flight questions over 1h</p>

A positive value indicates the queue is growing faster than it's being processed.

Refer to the [alerts reference](alerts#deepsearch-deepsearch_questions_in_flight_growth) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/deepsearch/deepsearch?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(deriv(src_deepsearch_questions_in_flight[1h]))
```
</details>
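
`deriv()` reports a per-second regression slope over the window, so even small positive values can translate into significant hourly growth. A sketch of a variant that reports the absolute change over the same window instead:

```
# Absolute change in in-flight questions over the last hour, rather than
# the per-second slope that deriv() returns.
max(delta(src_deepsearch_questions_in_flight[1h]))
```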

<br />

#### deepsearch: deepsearch_question_processing_rate

<p class="subtitle">Questions processed per minute</p>

Rate of deep search questions being processed.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/deepsearch/deepsearch?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_deepsearch_question_processing_total{operation="question"}[5m])) * 60
```
</details>

<br />

#### deepsearch: deepsearch_question_processing_error_rate

<p class="subtitle">Question processing error rate over 5m</p>

Percentage of deep search questions that result in an error.

Refer to the [alerts reference](alerts#deepsearch-deepsearch_question_processing_error_rate) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/deepsearch/deepsearch?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(src_deepsearch_question_processing_errors_total{operation="question"}[5m])) / (sum(rate(src_deepsearch_question_processing_total{operation="question"}[5m])) > 0) * 100
```
</details>
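
The `> 0` filter on the denominator is a common PromQL guard: it drops denominator samples with no processed questions, so the ratio returns no data instead of dividing by zero. A generic sketch of the pattern, with placeholder metric names:

```
# errors_total and requests_total are placeholders; the "> 0" guard makes
# the expression empty (not infinite) when nothing was processed.
sum(rate(errors_total[5m])) / (sum(rate(requests_total[5m])) > 0) * 100
```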

<br />

#### deepsearch: deepsearch_question_processing_p99_duration

<p class="subtitle">99th percentile question processing duration</p>

99th percentile time to process a deep search question.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/deepsearch/deepsearch?viewPanel=100020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum(rate(src_deepsearch_question_processing_duration_seconds_bucket{operation="question"}[5m])) by (le))
```
</details>

<br />

#### deepsearch: deepsearch_question_processing_p50_duration

<p class="subtitle">50th percentile question processing duration</p>

Median time to process a deep search question.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/deepsearch/deepsearch?viewPanel=100021` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.50, sum(rate(src_deepsearch_question_processing_duration_seconds_bucket{operation="question"}[5m])) by (le))
```
</details>

<br />

### Deep Search: LLM streaming

#### deepsearch: deepsearch_llm_stream_fatal_errors

<p class="subtitle">Fatal LLM stream errors over 5m</p>

Number of fatal errors during LLM streaming in the last 5 minutes.

Refer to the [alerts reference](alerts#deepsearch-deepsearch_llm_stream_fatal_errors) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/deepsearch/deepsearch?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_deepsearch_question_processing_errors_total{operation="llm_stream_fatal"}[5m]))
```
</details>

<br />

#### deepsearch: deepsearch_llm_stream_non_fatal_errors

<p class="subtitle">Non-fatal LLM stream errors over 5m</p>

Number of non-fatal errors during LLM streaming.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/deepsearch/deepsearch?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Code Understanding team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_deepsearch_question_processing_errors_total{operation="llm_stream_non_fatal"}[5m]))
```
</details>

<br />

## Sourcegraph external API

<p class="subtitle">Monitoring for the Sourcegraph external API.</p>

To see this dashboard, visit `/-/debug/grafana/d/externalapi/externalapi` on your Sourcegraph instance.

### Sourcegraph external API: Request rate and errors

#### externalapi: externalapi_request_rate

<p class="subtitle">Requests per second by service</p>

Rate of external API requests by service.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (rpc_service)(rate(rpc_server_duration_milliseconds_count{rpc_service=~"$rpc_service"}[5m]))
```
</details>
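
`$rpc_service` is a Grafana dashboard variable, so the query as written only works inside Grafana. A sketch of the same query runnable directly against Prometheus, using a match-all pattern in place of the variable:

```
# Replace the Grafana variable with a concrete regex when querying
# Prometheus directly; ".*" matches every service.
sum by (rpc_service)(rate(rpc_server_duration_milliseconds_count{rpc_service=~".*"}[5m]))
```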

<br />

#### externalapi: externalapi_request_rate_by_method

<p class="subtitle">Requests per second by method</p>

Rate of external API requests by RPC method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (rpc_method)(rate(rpc_server_duration_milliseconds_count{rpc_service=~"$rpc_service"}[5m]))
```
</details>

<br />

#### externalapi: externalapi_error_rate

<p class="subtitle">Error rate over 5m</p>

Percentage of external API requests that result in an error.

Refer to the [alerts reference](alerts#externalapi-externalapi_error_rate) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(rate(rpc_server_duration_milliseconds_count{rpc_service=~"$rpc_service",rpc_connect_error_code!=""}[5m])) / (sum(rate(rpc_server_duration_milliseconds_count{rpc_service=~"$rpc_service"}[5m])) > 0) * 100
```
</details>

<br />

#### externalapi: externalapi_errors_by_code

<p class="subtitle">Errors by error code over 5m</p>

Rate of external API errors by ConnectRPC error code.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (rpc_connect_error_code)(rate(rpc_server_duration_milliseconds_count{rpc_service=~"$rpc_service",rpc_connect_error_code!=""}[5m]))
```
</details>

<br />

### Sourcegraph external API: Latency

#### externalapi: externalapi_p99_duration

<p class="subtitle">99th percentile request duration</p>

99th percentile external API request duration.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le)(rate(rpc_server_duration_milliseconds_bucket{rpc_service=~"$rpc_service"}[5m])))
```
</details>

<br />

#### externalapi: externalapi_p90_duration

<p class="subtitle">90th percentile request duration</p>

90th percentile external API request duration.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.90, sum by (le)(rate(rpc_server_duration_milliseconds_bucket{rpc_service=~"$rpc_service"}[5m])))
```
</details>

<br />

#### externalapi: externalapi_p50_duration

<p class="subtitle">50th percentile request duration</p>

Median external API request duration.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.50, sum by (le)(rate(rpc_server_duration_milliseconds_bucket{rpc_service=~"$rpc_service"}[5m])))
```
</details>

<br />

#### externalapi: externalapi_p99_duration_by_method

<p class="subtitle">99th percentile request duration by method</p>

99th percentile external API request duration per RPC method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le, rpc_method)(rate(rpc_server_duration_milliseconds_bucket{rpc_service=~"$rpc_service"}[5m])))
```
</details>

<br />

#### externalapi: externalapi_p50_duration_by_method

<p class="subtitle">50th percentile request duration by method</p>

Median external API request duration per RPC method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.50, sum by (le, rpc_method)(rate(rpc_server_duration_milliseconds_bucket{rpc_service=~"$rpc_service"}[5m])))
```
</details>

<br />

### Sourcegraph external API: Request and response sizes

#### externalapi: externalapi_p99_request_size

<p class="subtitle">99th percentile request size</p>

99th percentile external API request message size.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le)(rate(rpc_server_request_size_bytes_bucket{rpc_service=~"$rpc_service"}[5m])))
```
</details>

<br />

#### externalapi: externalapi_p99_response_size

<p class="subtitle">99th percentile response size</p>

99th percentile external API response message size.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/externalapi/externalapi?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.99, sum by (le)(rate(rpc_server_response_size_bytes_bucket{rpc_service=~"$rpc_service"}[5m])))
```
</details>

<br />

## Periodic Goroutines

<p class="subtitle">Overview of all periodic background routines across Sourcegraph services.</p>

To see this dashboard, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines` on your Sourcegraph instance.

### Periodic Goroutines: Periodic Goroutines Overview

#### periodic-goroutines: total_running_goroutines

<p class="subtitle">Total number of running periodic goroutines across all services</p>

The total number of running periodic goroutines across all services.
This provides a high-level overview of system activity.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(src_periodic_goroutine_running)
```
</details>

<br />

#### periodic-goroutines: goroutines_by_service

<p class="subtitle">Number of running periodic goroutines by service</p>

The number of running periodic goroutines broken down by service.
This helps identify which services are running the most background routines.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (job) (src_periodic_goroutine_running)
```
</details>

<br />

#### periodic-goroutines: top_error_producers

<p class="subtitle">Top 10 periodic goroutines by error rate</p>

The top 10 periodic goroutines with the highest error rates.
These routines may require immediate attention or investigation.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
topk(10, sum by (name, job) (rate(src_periodic_goroutine_errors_total[5m])))
```
</details>

<br />

#### periodic-goroutines: top_time_consumers

<p class="subtitle">Top 10 slowest periodic goroutines</p>

The top 10 periodic goroutines with the longest average execution time.
These routines may be candidates for optimization or load distribution.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
topk(10, max by (name, job) (rate(src_periodic_goroutine_duration_seconds_sum[5m]) / rate(src_periodic_goroutine_duration_seconds_count[5m])))
```
</details>
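
The `rate(..._sum) / rate(..._count)` ratio is the standard PromQL idiom for mean duration over a window. A sketch computing the same average for a single routine, where `some-routine` is a placeholder name:

```
# Hypothetical: mean execution time of one periodic goroutine over 5m.
sum(rate(src_periodic_goroutine_duration_seconds_sum{name="some-routine"}[5m]))
/
sum(rate(src_periodic_goroutine_duration_seconds_count{name="some-routine"}[5m]))
```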

<br />

### Periodic Goroutines: Drill down

#### periodic-goroutines: filtered_success_rate

<p class="subtitle">Success rate for selected goroutines</p>

The rate of successful executions for the filtered periodic goroutines.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job) (rate(src_periodic_goroutine_total{name=~'${routineName:regex}', job=~'${serviceName:regex}'}[5m]))
```
</details>

<br />

#### periodic-goroutines: filtered_error_rate

<p class="subtitle">Error rate for selected goroutines</p>

The rate of errors for the filtered periodic goroutines.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job) (rate(src_periodic_goroutine_errors_total{name=~'${routineName:regex}', job=~'${serviceName:regex}'}[5m]))
```
</details>

<br />

#### periodic-goroutines: filtered_duration

<p class="subtitle">95th percentile execution time for selected goroutines</p>

The 95th percentile execution time for the filtered periodic goroutines.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job, le) (rate(src_periodic_goroutine_duration_seconds_bucket{name=~'${routineName:regex}', job=~'${serviceName:regex}'}[5m])))
```
</details>

<br />

#### periodic-goroutines: filtered_loop_time

<p class="subtitle">95th percentile loop time for selected goroutines</p>

The 95th percentile loop time for the filtered periodic goroutines.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job, le) (rate(src_periodic_goroutine_loop_duration_seconds_bucket{name=~'${routineName:regex}', job=~'${serviceName:regex}'}[5m])))
```
</details>

<br />

#### periodic-goroutines: filtered_tenant_count

<p class="subtitle">Number of tenants processed by selected goroutines</p>

Number of tenants processed by each selected periodic goroutine.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100120` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max by (name, job) (src_periodic_goroutine_tenant_count{name=~'${routineName:regex}', job=~'${serviceName:regex}'})
```
</details>

<br />

#### periodic-goroutines: filtered_tenant_duration

<p class="subtitle">95th percentile tenant processing time for selected goroutines</p>

The 95th percentile processing time for individual tenants.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100121` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by (name, job, le) (rate(src_periodic_goroutine_tenant_duration_seconds_bucket{name=~'${routineName:regex}', job=~'${serviceName:regex}'}[5m])))
```
</details>

<br />

#### periodic-goroutines: filtered_tenant_success_rate

<p class="subtitle">Tenant success rate for selected goroutines</p>

The rate of successful tenant processing operations.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100130` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job) (rate(src_periodic_goroutine_tenant_success_total{name=~'${routineName:regex}', job=~'${serviceName:regex}'}[5m]))
```
</details>

<br />

#### periodic-goroutines: filtered_tenant_error_rate

<p class="subtitle">Tenant error rate for selected goroutines</p>

The rate of tenant processing operations resulting in errors.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/periodic-goroutines/periodic-goroutines?viewPanel=100131` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Platform team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (name, job) (rate(src_periodic_goroutine_tenant_errors_total{name=~'${routineName:regex}', job=~'${serviceName:regex}'}[5m]))
```
</details>

<br />

## Background Jobs Dashboard

<p class="subtitle">Overview of all background jobs in the system.</p>

To see this dashboard, visit `/-/debug/grafana/d/background-jobs/background-jobs` on your Sourcegraph instance.

### Background Jobs Dashboard: DBWorker Store Operations

#### background-jobs: operation_rates_by_method

<p class="subtitle">Rate of operations by method (5m)</p>

Shows the rate of different dbworker store operations.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100000` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op) (rate(src_workerutil_dbworker_store_total{domain=~"$dbworker_domain"}[5m]))
```
</details>

<br />

#### background-jobs: error_rates

<p class="subtitle">Rate of errors by method (5m)</p>

Rate of errors by operation type. Check specific operations with high error rates.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100001` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (op) (rate(src_workerutil_dbworker_store_errors_total{domain=~"$dbworker_domain"}[5m]))
```
</details>

<br />

#### background-jobs: p90_duration_by_method

<p class="subtitle">90th percentile duration by method</p>

90th percentile latency for dbworker store operations.

Investigate database query performance and indexing for the affected operations. Look for slow queries in database logs.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100010` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.9, sum by(le, op) (rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain=~"$dbworker_domain"}[5m])))
```
</details>

<br />

#### background-jobs: p50_duration_by_method

<p class="subtitle">Median duration by method</p>

Median latency for dbworker store operations.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100011` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.5, sum by(le, op) (rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain=~"$dbworker_domain"}[5m])))
```
</details>

<br />

#### background-jobs: p90_duration_by_domain

<p class="subtitle">90th percentile duration by domain</p>

90th percentile latency for dbworker store operations.

Investigate database performance for the specific domain. May indicate issues with specific database tables or query patterns.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100012` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.9, sum by(le, domain) (rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain=~"$dbworker_domain"}[5m])))
```
</details>

<br />

#### background-jobs: p50_duration_by_method

<p class="subtitle">Median operation duration by method</p>

Median latency for dbworker store operations by method.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100013` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.5, sum by(le, op) (rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain=~"$dbworker_domain"}[5m])))
```
</details>

<br />

#### background-jobs: dequeue_performance

<p class="subtitle">Dequeue operation metrics</p>

Rate of dequeue operations by domain. This is critical for worker performance.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100020` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (domain) (rate(src_workerutil_dbworker_store_total{op="Dequeue", domain=~"$dbworker_domain"}[5m]))
```
</details>

<br />

#### background-jobs: error_percentage_by_method

<p class="subtitle">Percentage of operations resulting in error by method</p>

Refer to the [alerts reference](alerts#background-jobs-error_percentage_by_method) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100021` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum by (op) (rate(src_workerutil_dbworker_store_errors_total{domain=~"$dbworker_domain"}[5m])) / (sum by (op) (rate(src_workerutil_dbworker_store_errors_total{domain=~"$dbworker_domain"}[5m])) + sum by (op) (rate(src_workerutil_dbworker_store_total{domain=~"$dbworker_domain"}[5m])))) * 100
```
</details>

<br />

#### background-jobs: error_percentage_by_domain

<p class="subtitle">Percentage of operations resulting in error by domain</p>

Refer to the [alerts reference](alerts#background-jobs-error_percentage_by_domain) for 2 alerts related to this panel.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100022` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum by (domain) (rate(src_workerutil_dbworker_store_errors_total{domain=~"$dbworker_domain"}[5m])) / (sum by (domain) (rate(src_workerutil_dbworker_store_errors_total{domain=~"$dbworker_domain"}[5m])) + sum by (domain) (rate(src_workerutil_dbworker_store_total{domain=~"$dbworker_domain"}[5m])))) * 100
```
</details>

<br />

#### background-jobs: operation_latency_heatmap

<p class="subtitle">Distribution of operation durations</p>

Distribution of operation durations, showing the spread of latencies across all operations.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100023` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le) (rate(src_workerutil_dbworker_store_duration_seconds_bucket{domain=~"$dbworker_domain"}[5m]))
```
</details>

<br />

### Background Jobs Dashboard: DBWorker Resetter

#### background-jobs: resetter_duration

<p class="subtitle">Time spent running the resetter</p>

Refer to the [alerts reference](alerts#background-jobs-resetter_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100100` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.95, sum by(le, domain) (rate(src_dbworker_resetter_duration_seconds_bucket{domain=~"$resetter_domain"}[5m])))
```
</details>

<br />

#### background-jobs: resetter_runs

<p class="subtitle">Number of times the resetter ran</p>

The number of times the resetter ran in the last 5 minutes.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100101` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (domain) (increase(src_dbworker_resetter_total{domain=~"$resetter_domain"}[5m]))
```
</details>

<br />

#### background-jobs: resetter_failures

<p class="subtitle">Number of times the resetter failed to run</p>

Refer to the [alerts reference](alerts#background-jobs-resetter_failures) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100102` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (domain) (increase(src_dbworker_resetter_errors_total{domain=~"$resetter_domain"}[5m]))
```
</details>

<br />

#### background-jobs: reset_records

<p class="subtitle">Number of stalled records reset back to 'queued' state</p>

The number of stalled records that were reset back to the queued state in the last 5 minutes.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100110` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (domain) (increase(src_dbworker_resetter_record_resets_total{domain=~"$resetter_domain"}[5m]))
```
</details>

<br />

#### background-jobs: failed_records

<p class="subtitle">Number of stalled records marked as 'failed'</p>

Refer to the [alerts reference](alerts#background-jobs-failed_records) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100111` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (domain) (increase(src_dbworker_resetter_record_reset_failures_total{domain=~"$resetter_domain"}[5m]))
```
</details>

<br />

#### background-jobs: stall_duration

<p class="subtitle">Duration jobs were stalled before being reset</p>

Distribution of how long jobs were stalled before being reset.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100120` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (le) (rate(src_dbworker_resetter_stall_duration_seconds_bucket{domain=~"$resetter_domain"}[5m]))
```
</details>

<br />

#### background-jobs: stall_duration_p90

<p class="subtitle">90th percentile of stall duration</p>

Refer to the [alerts reference](alerts#background-jobs-stall_duration_p90) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100121` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
histogram_quantile(0.9, sum by(le, domain) (rate(src_dbworker_resetter_stall_duration_seconds_bucket{domain=~"$resetter_domain"}[5m])))
```
</details>

<br />

#### background-jobs: reset_vs_failure_ratio

<p class="subtitle">Ratio of jobs reset to queued versus marked as failed</p>

Ratio of reset jobs to failed jobs. Higher values indicate healthier job processing.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100122` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
(sum by (domain) (increase(src_dbworker_resetter_record_resets_total{domain=~"$resetter_domain"}[1h]))) / on(domain) (sum by (domain) (increase(src_dbworker_resetter_record_reset_failures_total{domain=~"$resetter_domain"}[1h]) > 0) or on(domain) sum by (domain) (increase(src_dbworker_resetter_record_resets_total{domain=~"$resetter_domain"}[1h]) * 0 + 1))
```
</details>
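
The denominator here uses a PromQL fallback idiom: divide by the failure count when it is nonzero, otherwise the `* 0 + 1` clause manufactures a constant 1 carrying the same `domain` label, so domains with zero failures report their raw reset count instead of producing a division-by-zero gap. A generic sketch of the idiom, with placeholder metric names:

```
# "failures if > 0, else 1" per domain; failures_total and resets_total
# are placeholders. The "* 0 + 1" term keeps the domain label intact.
(sum by (domain) (increase(failures_total[1h])) > 0)
or on(domain)
(sum by (domain) (increase(resets_total[1h])) * 0 + 1)
```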

<br />

### Background Jobs Dashboard: Worker Queue Metrics

#### background-jobs: aggregate_queue_size

<p class="subtitle">Total number of jobs queued across all domains</p>

Refer to the [alerts reference](alerts#background-jobs-aggregate_queue_size) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100200` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(max by (domain) (src_workerutil_queue_depth))
```
</details>

<br />

#### background-jobs: max_queue_duration

<p class="subtitle">Maximum time a job has been in queue across all domains</p>

Refer to the [alerts reference](alerts#background-jobs-max_queue_duration) for 1 alert related to this panel.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100201` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
max(src_workerutil_queue_duration_seconds)
```
</details>

<br />

#### background-jobs: queue_growth_rate

<p class="subtitle">Rate of queue growth/decrease</p>

Rate at which the queue is growing. Positive values indicate more jobs are being added than processed.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100202` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum(increase(src_workerutil_queue_depth[30m]))/1800
```
</details>
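
Dividing the 30-minute `increase()` by 1800 (the number of seconds in 30 minutes) converts the change into a per-second rate. Since queue depth is presumably a gauge rather than a counter, a `deriv()`-based sketch is an alternative way to express the same idea:

```
# Per-second regression slope of total queue depth over 30m; assumes
# src_workerutil_queue_depth is a gauge.
sum(deriv(src_workerutil_queue_depth[30m]))
```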

<br />

#### background-jobs: queue_depth_by_domain

<p class="subtitle">Number of jobs in queue by domain</p>

Number of queued jobs per domain. Large values may indicate workers are not keeping up with incoming jobs.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100210` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (domain) (max by (domain) (src_workerutil_queue_depth))
```
</details>

<br />

#### background-jobs: queue_duration_by_domain

<p class="subtitle">Maximum queue time by domain</p>

Maximum time a job has been waiting in queue per domain. Long durations indicate potential worker stalls.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100211` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (domain) (max by (domain) (src_workerutil_queue_duration_seconds))
```
</details>

<br />

#### background-jobs: queue_growth_by_domain

<p class="subtitle">Rate of change in queue size by domain</p>

Rate of change in queue size per domain. Consistently positive values indicate jobs are being queued faster than processed.

This panel has no related alerts.

To see this panel, visit `/-/debug/grafana/d/background-jobs/background-jobs?viewPanel=100212` on your Sourcegraph instance.

<sub>*Managed by the Sourcegraph Services team.*</sub>

<details>
<summary>Technical details</summary>

Query:

```
sum by (domain) (idelta(src_workerutil_queue_depth[10m])) / 600
```
</details>

<br />