Spring AI with Ollama

Introduction to Spring AI with Ollama

Welcome to a comprehensive guide on leveraging Spring AI with Ollama to develop AI-driven applications in Java. This tutorial covers everything from setting up Ollama locally and configuring your development environment to creating an application that uses large language models for text generation.

Setting Up Ollama

Installing Ollama

To use Ollama locally, you must first install it on your machine. This means downloading the Ollama software from the official repository; it runs natively, without Docker. Follow the installation instructions in the README on the official Ollama GitHub page.
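
For example, on Linux the Ollama documentation provides a one-line install script (macOS and Windows have downloadable installers). A typical Linux setup looks like this:

# Install Ollama on Linux using the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama server if it isn't already running as a service
# (it listens on http://localhost:11434 by default)
ollama serve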

Downloading the Mistral Model

After setting up Ollama, you can download the Mistral model using the Ollama CLI. Mistral is designed for a broad range of applications, offering robust text-generation capabilities.


// Command to download the Mistral model through the Ollama CLI
ollama pull mistral
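
Once the download finishes, you can confirm the model is available and give it a quick test from the terminal:

# List locally available models
ollama list

# Run a one-off prompt against Mistral to verify it works
ollama run mistral "Say hello in one sentence."
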
Exploring Other Available Models

Ollama supports several models, each with unique characteristics tailored to different tasks:

  • Orca-mini: Ideal for small-scale, quick-response applications.
  • Llama2: Suitable for more demanding tasks requiring deeper context understanding.
  • Custom models: Developed for specialized tasks; check the Ollama repository for more details.
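
Each of these can be pulled the same way as Mistral, using its name from the Ollama model library:

# Download alternative models by name
ollama pull orca-mini
ollama pull llama2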

Integrating Ollama with Spring AI: Maven and Gradle Dependencies

Maven Dependency Configuration

To integrate Ollama with Spring AI in a Maven project, add the following dependency to your pom.xml file. It pulls the Spring AI Ollama starter and its transitive dependencies into your project.


<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
    <version>1.0.0</version> <!-- Use the latest available version -->
</dependency>
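
If you would rather not pin a version on each artifact, Spring AI also publishes a BOM (spring-ai-bom) that you can import in dependencyManagement. A sketch, assuming the same version as the starter above:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
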
Gradle Dependency Configuration

For Gradle-based projects, you will include the Spring AI Ollama dependency in your build.gradle file. This allows Gradle to manage the library and its associated dependencies.

// Add this to the dependencies block of your build.gradle
dependencies {
    implementation 'org.springframework.ai:spring-ai-ollama-spring-boot-starter:1.0.0' // Use the latest version
}
Repository Configuration

Since Spring AI artifacts are typically published in the Spring Milestone and Snapshot repositories, you might need to add these repositories to your build file if the dependencies are not found in Maven Central. Here's how you can do it:

<!-- For Maven, add this inside the <repositories> section of your pom.xml -->
<repository>
    <id>spring-milestones</id>
    <name>Spring Milestones Repository</name>
    <url>https://repo.spring.io/milestone</url>
</repository>

// For Gradle, add this to your build.gradle
repositories {
    maven { url 'https://repo.spring.io/milestone' }
}

These configurations ensure that your project is set up to utilize the latest Spring AI capabilities with Ollama, allowing you to build and run AI-enhanced applications seamlessly.
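
Before writing any code, point Spring AI at your local Ollama server and choose the model in application.properties. A minimal sketch, assuming Ollama runs on its default port; property names (for example spring.ai.ollama.chat.options.model) have shifted between Spring AI milestones, so verify against the reference documentation for your version:

# Base URL of the locally running Ollama server (11434 is Ollama's default port)
spring.ai.ollama.base-url=http://localhost:11434

# Model to use for chat completions
spring.ai.ollama.chat.options.model=mistral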

Creating a Simple Java Application with Ollama

Sample Java Application

This example demonstrates how to create a Java application that uses the Ollama Mistral model to generate text responses. The application sets up a basic Spring Boot controller to handle API requests.


package codeKatha;

import org.springframework.ai.ollama.OllamaChatClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@RestController
class ChatController {

    // Auto-configured by the Spring AI Ollama starter
    @Autowired
    private OllamaChatClient chatClient;

    @GetMapping("/generateText")
    public String generateText(@RequestParam String prompt) {
        // call(String) sends the prompt to the configured model and returns the generated text
        return chatClient.call(prompt);
    }
}
Running Your Application

Compile and run your Spring Boot application. Access the endpoint '/generateText' with a prompt parameter to see how the Mistral model generates text based on your input.
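
For example, assuming the application runs on the default port 8080, you can exercise the endpoint with curl:

# Send a URL-encoded prompt to the /generateText endpoint
curl "http://localhost:8080/generateText?prompt=Tell%20me%20a%20short%20story"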

Conclusion

Integrating Spring AI with Ollama offers powerful capabilities for developing AI-powered applications. This guide provides the foundational knowledge needed to implement LLMs in your Java projects, enabling you to harness the potential of AI in your software solutions.
