Update monitoring.md (#2724)

Signed-off-by: patcher9 <patcher99@dokulabs.com>
patcher9 2024-07-26 04:43:00 +05:30 committed by GitHub
parent 7fefac74ba
commit 71c957f8ee


# GPT4All Monitoring
GPT4All integrates with [OpenLIT](https://github.com/openlit/openlit) OpenTelemetry auto-instrumentation to perform real-time monitoring of your LLM application and GPU hardware.
Monitoring can enhance your GPT4All deployment with auto-generated traces and metrics for
- **Performance Optimization:** Analyze latency, cost and token usage to ensure your LLM application runs efficiently, identifying and resolving performance bottlenecks swiftly.
- **User Interaction Insights:** Capture each prompt and response to understand user behavior and usage patterns better, improving user experience and engagement.
- **Detailed GPU Metrics:** Monitor essential GPU parameters such as utilization, memory consumption, temperature, and power usage to maintain optimal hardware performance and avert potential issues.
## Setup Monitoring
With [OpenLIT](https://github.com/openlit/openlit), you can automatically monitor traces and metrics for your LLM deployment:
```shell
pip install openlit
```
```python
from gpt4all import GPT4All
import openlit

openlit.init() # start
# openlit.init(collect_gpu_stats=True) # Optional: To configure GPU monitoring

model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf')

with model.chat_session():
    model.generate('hello')
    print(model.current_chat_session)
```
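To give an intuition for what "auto-instrumentation" means above: libraries like OpenLIT wrap the model's methods at runtime so that every call emits telemetry (OpenTelemetry spans, in OpenLIT's case) without changes to your application code. Here is a toy, self-contained sketch of that wrapping technique — `instrument` and `FakeModel` are hypothetical names for illustration only, and real instrumentation records spans rather than a plain list:

```python
import time
import functools

def instrument(cls, method_name, records):
    """Wrap cls.method_name so each call records its latency.

    A toy stand-in for what auto-instrumentation libraries do with
    OpenTelemetry spans.
    """
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        try:
            return original(self, *args, **kwargs)
        finally:
            records.append({"method": method_name,
                            "latency_s": time.perf_counter() - start})

    setattr(cls, method_name, wrapper)

# Hypothetical stand-in for GPT4All, so the sketch runs on its own
class FakeModel:
    def generate(self, prompt):
        return "response to " + prompt

records = []
instrument(FakeModel, "generate", records)
out = FakeModel().generate("hello")  # latency is now captured in `records`
```

The caller's code is unchanged; the wrapper transparently times each call, which is why a single `openlit.init()` is enough to start collecting traces.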
## Visualization

### OpenLIT UI

Connect to OpenLIT's UI to start exploring the collected LLM performance metrics and traces. Visit the OpenLIT [Quickstart Guide](https://docs.openlit.io/latest/quickstart) for step-by-step details.

### Grafana, DataDog, & Other Integrations

You can also send the data collected by OpenLIT to popular monitoring tools like Grafana and DataDog. For detailed instructions on setting up these connections, please refer to the OpenLIT [Connections Guide](https://docs.openlit.io/latest/connections/intro).
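Since OpenLIT exports telemetry over OTLP, pointing it at a different backend is typically a matter of the standard OpenTelemetry endpoint variable. A minimal sketch — the address below is a placeholder for your own collector (e.g. an OTel Collector or vendor agent), and some backends additionally require auth headers, which the Connections Guide covers:

```shell
# Placeholder address: replace with your collector's OTLP endpoint
# before starting your GPT4All application.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
```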