Package io.github.zhengzhengyiyi.api
Class AiClient
java.lang.Object
io.github.zhengzhengyiyi.api.AiClient
A client for interacting with a local Ollama AI server.
Provides methods to send chat requests and check server status.
This client communicates with the Ollama REST API running on localhost.
Usage Example:

    AiClient client = new AiClient();

    // Check server status first
    client.checkServerStatus().thenAccept(available -> {
        if (available) {
            // Send chat request
            client.sendChatRequest("tinyllama:latest", "Hello, how are you?")
                  .thenAccept(response -> System.out.println("AI Response: " + response));
        } else {
            System.out.println("Ollama server is not running");
        }
    });
Since:
    1.0.0
Constructor Summary

    AiClient()
        Constructs a new AiClient with a default HTTP client.

Method Summary

    CompletableFuture<Boolean> checkServerStatus()
        Checks if the Ollama server is running and accessible.
    CompletableFuture<String> sendChatRequest(String model, String message)
        Sends a chat request to the Ollama server with the specified model and message.
Constructor Details

AiClient
public AiClient()
    Constructs a new AiClient with a default HTTP client. The HTTP client is configured with default settings suitable for communicating with the local Ollama server.
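The exact default configuration is not documented. A minimal sketch of what such a constructor might look like, assuming the JDK's java.net.http.HttpClient is the underlying client (the class name AiClientSketch and the 10-second connect timeout are illustrative assumptions, not the documented behavior):

```java
import java.net.http.HttpClient;
import java.time.Duration;

public class AiClientSketch {
    private final HttpClient httpClient;

    public AiClientSketch() {
        // Hypothetical defaults: the actual settings used by AiClient are not documented.
        this.httpClient = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10)) // assumed timeout
                .build();
    }

    HttpClient httpClient() {
        return httpClient;
    }
}
```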
Method Details
sendChatRequest
Sends a chat request to the Ollama server with the specified model and message. This method sends an asynchronous HTTP POST request to the Ollama chat API and returns a CompletableFuture that will be completed with the AI's response.
Request Flow:
- Escapes the message content for JSON safety
- Builds the JSON request body with model and message
- Sends POST request to /api/chat endpoint
- Parses the response to extract the AI's content
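The request-building steps above can be sketched as follows. The /api/chat endpoint comes from the description; the port 11434 (Ollama's well-known default), the JSON field names, and the hand-rolled escaping are assumptions for illustration, and a real client would normally use a JSON library:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ChatRequestSketch {
    // Step 1: escape the message content for JSON safety
    // (minimal sketch; escape backslashes first so later escapes are not doubled).
    static String escapeJson(String s) {
        return s.replace("\\", "\\\\")
                .replace("\"", "\\\"")
                .replace("\n", "\\n")
                .replace("\r", "\\r")
                .replace("\t", "\\t");
    }

    // Step 2: build the JSON request body with model and message.
    // Field names follow Ollama's chat API; treat them as assumptions here.
    static String buildBody(String model, String message) {
        return "{\"model\":\"" + escapeJson(model) + "\","
             + "\"messages\":[{\"role\":\"user\",\"content\":\"" + escapeJson(message) + "\"}],"
             + "\"stream\":false}";
    }

    // Step 3: build the POST request to the /api/chat endpoint.
    // Port 11434 is Ollama's default; the docs above say only "localhost".
    static HttpRequest buildRequest(String model, String message) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/chat"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(buildBody(model, message)))
                .build();
    }
}
```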
Parameters:
    model - the AI model to use for generating the response (e.g., "tinyllama:latest")
    message - the user's message to send to the AI
Returns:
    a CompletableFuture that will be completed with the AI's response text
Throws:
    RuntimeException - if the HTTP request fails or returns a non-200 status code
checkServerStatus
Checks if the Ollama server is running and accessible. This method sends a GET request to the Ollama tags API endpoint to verify that the server is responsive. The check is performed asynchronously and does not block the calling thread.
Returns:
    a CompletableFuture that will be completed with:
    true - if the server responds with HTTP 200 status
    false - if the server is unreachable or returns an error status
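A minimal sketch of how such a non-blocking check could be implemented with java.net.http. The /api/tags path comes from the description above; the port 11434 and the use of sendAsync are assumptions about the implementation:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class ServerStatusSketch {
    private final HttpClient httpClient = HttpClient.newHttpClient();

    // Build the GET request to the Ollama tags endpoint.
    // Port 11434 is Ollama's default and is an assumption here.
    static HttpRequest buildStatusRequest() {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/tags"))
                .GET()
                .build();
    }

    // Complete with true on HTTP 200, false on any other status
    // or on a connection failure (the future never completes exceptionally).
    public CompletableFuture<Boolean> checkServerStatus() {
        return httpClient.sendAsync(buildStatusRequest(), HttpResponse.BodyHandlers.discarding())
                .thenApply(resp -> resp.statusCode() == 200)
                .exceptionally(e -> false); // unreachable server -> false, not an exception
    }
}
```

The exceptionally(e -> false) step is what lets callers branch on a plain boolean, as in the usage example at the top of this page, instead of handling connection exceptions themselves.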