Class AiClient

java.lang.Object
io.github.zhengzhengyiyi.api.AiClient

public class AiClient extends Object
A client for interacting with a local Ollama AI server. Provides methods to send chat requests and check server status. This client communicates with the Ollama REST API running on localhost.

Usage Example:

 
 AiClient client = new AiClient();
 
 // Check server status first
 client.checkServerStatus().thenAccept(available -> {
     if (available) {
         // Send chat request
         client.sendChatRequest("tinyllama:latest", "Hello, how are you?")
               .thenAccept(response -> System.out.println("AI Response: " + response));
     } else {
         System.out.println("Ollama server is not running");
     }
 });
 
 
Since:
1.0.0
  • Constructor Details

    • AiClient

      public AiClient()
      Constructs a new AIClient with a default HTTP client. The HTTP client is configured with default settings suitable for communicating with the local Ollama server.
  • Method Details

    • sendChatRequest

      public CompletableFuture<String> sendChatRequest(String model, String message)
      Sends a chat request to the Ollama server with the specified model and message.

      This method sends an asynchronous HTTP POST request to the Ollama chat API and returns a CompletableFuture that will be completed with the AI's response.

      Request Flow:

      1. Escapes the message content for JSON safety
      2. Builds the JSON request body with model and message
      3. Sends POST request to /api/chat endpoint
      4. Parses the response to extract the AI's content
      Parameters:
      model - the AI model to use for generating the response (e.g., "tinyllama:latest")
      message - the user's message to send to the AI
      Returns:
      a CompletableFuture that will be completed with the AI's response text
      Throws:
      RuntimeException - if the HTTP request fails or returns a non-200 status code
      See Also:
      • escapeJson(String)
      • extractContentFromResponse(String)
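      The first two steps of the request flow can be sketched as follows. Since escapeJson(String) is private and its exact rules are not shown, this minimal escaper and the body layout (which follows the public Ollama /api/chat schema) are illustrative, not the client's actual code:

      ```java
      public class ChatRequestBody {
          // Minimal escaper covering the characters most likely to break a JSON
          // string; the client's private escapeJson(String) may handle more cases.
          static String escapeJson(String s) {
              StringBuilder sb = new StringBuilder();
              for (char c : s.toCharArray()) {
                  switch (c) {
                      case '"':  sb.append("\\\""); break;
                      case '\\': sb.append("\\\\"); break;
                      case '\n': sb.append("\\n");  break;
                      case '\r': sb.append("\\r");  break;
                      case '\t': sb.append("\\t");  break;
                      default:   sb.append(c);
                  }
              }
              return sb.toString();
          }

          // Steps 1-2 of the flow: escape the content, then build the JSON body
          // for a POST to /api/chat.
          static String buildBody(String model, String message) {
              return "{\"model\":\"" + escapeJson(model) + "\","
                   + "\"messages\":[{\"role\":\"user\",\"content\":\"" + escapeJson(message) + "\"}],"
                   + "\"stream\":false}";
          }

          public static void main(String[] args) {
              System.out.println(buildBody("tinyllama:latest", "Say \"hi\""));
          }
      }
      ```

      Escaping before interpolation matters here: a quote or newline in the user's message would otherwise produce malformed JSON and a failed request.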
    • checkServerStatus

      public CompletableFuture<Boolean> checkServerStatus()
      Checks if the Ollama server is running and accessible.

      This method sends a GET request to the Ollama tags API endpoint to verify that the server is responsive. The check is performed asynchronously and does not block the calling thread.

      Returns:
      a CompletableFuture that will be completed with:
      • true - if the server responds with HTTP 200 status
      • false - if the server is unreachable or returns an error status
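      One way to consume this Boolean future together with sendChatRequest is to gate the chat call with thenCompose. The sketch below uses completed futures standing in for live server calls, so the control flow is visible without a running Ollama instance; chatIfAvailable is a hypothetical helper, not part of AiClient:

      ```java
      import java.util.concurrent.CompletableFuture;
      import java.util.function.Supplier;

      public class StatusGate {
          // Runs the chat step only when the status check reported the server
          // as available; otherwise completes immediately with a fallback.
          static CompletableFuture<String> chatIfAvailable(
                  CompletableFuture<Boolean> status,
                  Supplier<CompletableFuture<String>> chat) {
              return status.thenCompose(available ->
                      available ? chat.get()
                                : CompletableFuture.completedFuture("Ollama server is not running"));
          }

          public static void main(String[] args) {
              CompletableFuture<String> up = chatIfAvailable(
                      CompletableFuture.completedFuture(true),
                      () -> CompletableFuture.completedFuture("AI Response: hello"));
              CompletableFuture<String> down = chatIfAvailable(
                      CompletableFuture.completedFuture(false),
                      () -> CompletableFuture.completedFuture("unused"));
              System.out.println(up.join());   // AI Response: hello
              System.out.println(down.join()); // Ollama server is not running
          }
      }
      ```

      thenCompose (rather than thenAccept) keeps the result as a single flat CompletableFuture<String>, so callers can attach further stages or error handling in one chain.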