Spring AI stalls indefinitely if you try to use a local model running in LM Studio. The problem is that LM Studio doesn't handle the HTTP/2 connections that Spring's HTTP clients use by default.
The fix is simple: when building the OpenAiApi instance, configure the webClientBuilder() and restClientBuilder() options to use HTTP/1.1 instead.
import java.net.http.HttpClient;
import java.time.Duration;

import org.springframework.ai.model.NoopApiKey;
import org.springframework.ai.openai.api.OpenAiApi;
import org.springframework.http.client.JdkClientHttpRequestFactory;
import org.springframework.http.client.reactive.JdkClientHttpConnector;
import org.springframework.web.client.RestClient;
import org.springframework.web.reactive.function.client.WebClient;

// define a custom HTTP connector for HTTP/1.1 (used by the WebClient for streaming calls)
JdkClientHttpConnector httpConnector = new JdkClientHttpConnector(HttpClient.newBuilder()
.version(HttpClient.Version.HTTP_1_1)
.connectTimeout(Duration.ofSeconds(30))
.build());
// define a custom HTTP request factory for HTTP/1.1 (used by the RestClient for non-streaming calls)
JdkClientHttpRequestFactory httpRequestFactory = new JdkClientHttpRequestFactory(HttpClient.newBuilder()
.version(HttpClient.Version.HTTP_1_1)
.connectTimeout(Duration.ofSeconds(30))
.build());
// build the OpenAI API client
OpenAiApi api = OpenAiApi.builder()
.baseUrl(url)
.apiKey(new NoopApiKey())
.webClientBuilder(
// Force HTTP/1.1 for streaming
WebClient.builder().clientConnector(httpConnector))
.restClientBuilder(
// Force HTTP/1.1 for non-streaming
RestClient.builder().requestFactory(httpRequestFactory))
.build();
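For completeness, here's a minimal sketch of wiring that OpenAiApi into a chat model and making a call. It assumes Spring AI 1.x's OpenAiChatModel and OpenAiChatOptions builders (from org.springframework.ai.openai); the model name is a placeholder for whatever model LM Studio currently has loaded, so adjust it to your setup.
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.ai.openai.OpenAiChatOptions;

// wire the HTTP/1.1-configured OpenAiApi into a chat model
OpenAiChatModel chatModel = OpenAiChatModel.builder()
        .openAiApi(api)
        .defaultOptions(OpenAiChatOptions.builder()
                .model("qwen2.5-7b-instruct") // placeholder: whatever model LM Studio has loaded
                .build())
        .build();

// blocking call (goes through the RestClient)
String reply = chatModel.call("Say hello in one sentence.");

// streaming call (goes through the WebClient)
chatModel.stream("Write a short haiku.")
        .doOnNext(System.out::print)
        .blockLast();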
Source: https://github.com/spring-projects/spring-ai/issues/2441