
Monday, September 9, 2024

Exploring Spring AI 1.0.0 M2: A Hands-On Guide with Code Samples

Welcome to our deep dive into Spring AI 1.0.0 M2, the latest release designed to simplify AI integration within the Spring framework. This update brings a host of enhancements that streamline the process of incorporating AI functionalities into your applications. In this post, we'll explore these new features and walk through a practical example of using Spring AI 1.0.0 M2 with a TensorFlow model.

What's New in Spring AI 1.0.0 M2?

Spring AI 1.0.0 M2 introduces several exciting updates:

  • Simplified Configuration: Less boilerplate code and easier setup for AI components (see the short sketch after this list).
  • Expanded AI Framework Support: Enhanced compatibility with TensorFlow, PyTorch, and other popular AI frameworks.
  • Performance Enhancements: Improved efficiency and scalability for AI applications.
  • Improved Documentation: Updated guides and examples for quicker implementation.
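
To give a feel for the simplified configuration mentioned above, here is a minimal sketch of the fluent ChatClient API that Spring AI auto-configures when a chat model starter (for example, the OpenAI starter with its API key set in application properties) is on the classpath. The package and controller names are illustrative and may shift between milestones, so treat this as a sketch rather than this post's main example:

package com.example.springai;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Minimal sketch: Spring AI auto-configures a ChatClient.Builder
// when a chat model starter and its credentials are configured.
@RestController
public class ChatController {

    private final ChatClient chatClient;

    public ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @GetMapping("/ai/chat")
    public String chat(@RequestParam String message) {
        // Send the user's message and return the model's text response
        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}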

Getting Started with Spring AI 1.0.0 M2

To showcase the capabilities of Spring AI 1.0.0 M2, we'll create a simple Spring Boot application that uses TensorFlow for text classification. Follow these steps to get started:

1. Create a New Spring Boot Project

Generate a new Spring Boot project using Spring Initializr. Add the following dependencies:

  • Spring Web
  • Spring Boot DevTools
  • Spring Data JPA (optional)

2. Add TensorFlow Dependency

Update your pom.xml file with the TensorFlow dependency:

<dependencies>
    <!-- Spring Boot Starter Web -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- TensorFlow Java library (legacy API used by the service below;
         1.15.0 is the last release of the org.tensorflow:tensorflow artifact) -->
    <dependency>
        <groupId>org.tensorflow</groupId>
        <artifactId>tensorflow</artifactId>
        <version>1.15.0</version>
    </dependency>
</dependencies>

3. Build the AI Component

Create a service class to handle interactions with the TensorFlow model:

package com.example.springai.service;

import org.springframework.stereotype.Service;
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Tensor;

@Service
public class TextClassificationService {

    private final SavedModelBundle model;

    public TextClassificationService() {
        // Load the pre-trained TensorFlow model
        this.model = SavedModelBundle.load("path/to/saved_model", "serve");
    }

    public String classifyText(String text) {
        try (Tensor<String> inputTensor = Tensor.create(text.getBytes("UTF-8"), String.class)) {
            // Run inference
            Tensor<?> result = model.session().runner()
                    .feed("input_tensor_name", inputTensor)
                    .fetch("output_tensor_name")
                    .run()
                    .get(0);

            // Process result
            float[][] output = result.copyTo(new float[1][1]);
            return output[0][0] > 0.5 ? "Positive" : "Negative";
        } catch (Exception e) {
            throw new RuntimeException("Error during text classification", e);
        }
    }
}

Note: Replace "path/to/saved_model", "input_tensor_name", and "output_tensor_name" with the actual paths and names used in your TensorFlow model.
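
If you are not sure which tensor names your SavedModel exposes, TensorFlow's saved_model_cli tool can print the model's signatures, or you can iterate the loaded graph's operations from inside the service above. The helper method below is only an illustrative sketch using the TF Java 1.x API:

    // Illustrative helper: print every operation in the loaded graph so you
    // can spot the input/output tensor names to feed and fetch.
    private void printOperationNames() {
        model.graph().operations().forEachRemaining(op ->
                System.out.println(op.name() + " : " + op.type()));
    }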

4. Create a Controller

Add a REST controller to expose an API endpoint for text classification:

package com.example.springai.controller;

import com.example.springai.service.TextClassificationService;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TextClassificationController {

    private final TextClassificationService classificationService;

    public TextClassificationController(TextClassificationService classificationService) {
        this.classificationService = classificationService;
    }

    @GetMapping("/classify")
    public String classify(@RequestParam String text) {
        return classificationService.classifyText(text);
    }
}

Running the Application

  1. Build and Run Your Application

    Use Maven to build and run your Spring Boot application:

    mvn clean install
    mvn spring-boot:run
  2. Test the API

    Open your browser or use a tool like curl or Postman to test the endpoint:


    curl "http://localhost:8080/classify?text=I%20love%20Spring%20AI"

    You should receive a response indicating whether the text is classified as "Positive" or "Negative."

Monday, February 19, 2024

Spring Boot into Action with Gemini: A Guide to Integrating AI Power into Your Microservices

 Unleashing Gemini's Power: Integrating with Spring Boot for Enhanced Microservices

In today's dynamic microservices landscape, seamless communication and flexible discovery are paramount. Gemini, Google's innovative language model, offers fascinating potential for enriching your applications with its generative capabilities. But how do you integrate this exciting technology with Spring Boot, the popular microservices framework? Fear not, for this blog will guide you through the process, providing a roadmap to success and a code example to kickstart your journey.

Why Gemini and Spring Boot?

  • Powerful Generation: Gemini generates text, translates languages, writes different kinds of creative content, and answers your questions in an informative way.
  • Flexible Integration: Spring Boot's modularity makes it easy to integrate various technologies, creating a foundation for Gemini's seamless integration.
  • Microservices Advantage: Integrating Gemini into your microservices architecture allows for centralized access and distributed usage, enhancing efficiency and scalability.

Integration Approaches:

  1. Direct API Calls: Use Spring's RestTemplate to make direct HTTP calls to the Gemini API from your Spring Boot application.
  2. Dedicated Service: Develop a separate microservice specifically for interacting with Gemini and expose its functionalities through a REST API (a sketch of this approach follows the direct-call example below).

Code Example (Direct API Calls):

Java
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@RestController
public class GeminiIntegrationApp {

    @Autowired
    private RestTemplate restTemplate;

    // RestTemplate is not auto-configured by Spring Boot, so define it as a bean
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(GeminiIntegrationApp.class, args);
    }

    @GetMapping("/generate-text/{prompt}")
    public String generateText(@PathVariable String prompt) {
        String url = "https://language.googleapis.com/v1/projects/<your-project-id>/locations/global/generateText";
        String requestBody = "{\"input\": {\"text\": \"" + prompt + "\"}}";
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        headers.set("Authorization", "Bearer <your-access-token>");
        HttpEntity<String> entity = new HttpEntity<>(requestBody, headers);
        ResponseEntity<Map> response = restTemplate.postForEntity(url, entity, Map.class);
        return (String) response.getBody().get("generatedText");
    }
}

Remember:

  • Replace <your-project-id> and <your-access-token> with your actual values.
  • This is a basic example, and you may need to adjust it based on your specific requirements and use case.
  • Explore advanced features like authentication, error handling, and caching for a robust solution.
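
If you prefer the dedicated-service approach (option 2 above), a minimal sketch might look like the following. The GeminiClientService class, its generate method, and the endpoint and response fields are illustrative placeholders that you would align with the actual Gemini API and your authentication setup:

Java
package com.example.gemini;

import java.util.Map;

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

// Hypothetical facade service: other microservices call this one REST-exposed
// component instead of talking to Gemini directly.
@Service
public class GeminiClientService {

    // Placeholder endpoint; replace with the actual Gemini API URL you call.
    private static final String GEMINI_URL =
            "https://language.googleapis.com/v1/projects/<your-project-id>/locations/global/generateText";

    private final RestTemplate restTemplate = new RestTemplate();

    public String generate(String prompt, String accessToken) {
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        headers.setBearerAuth(accessToken);

        String body = "{\"input\": {\"text\": \"" + prompt + "\"}}";
        HttpEntity<String> entity = new HttpEntity<>(body, headers);

        // Response shape is assumed; adjust the key to whatever the API returns.
        Map<?, ?> response = restTemplate.postForObject(GEMINI_URL, entity, Map.class);
        return response != null ? (String) response.get("generatedText") : null;
    }
}

Centralizing the call in one service like this keeps credentials and rate limiting in a single place, which is the microservices advantage mentioned earlier.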

Beyond the Basics:

  • Consider using a service discovery mechanism like Eureka or Consul for dynamic discovery and load balancing.
  • Implement security measures to protect your API access and sensitive data.
  • Explore advanced Gemini features like translation, code generation, and question answering to unlock its full potential.

Conclusion:

Integrating Gemini with Spring Boot opens a world of possibilities, empowering your microservices with the magic of generative AI. By following this guide and exploring further, you can unleash the power of Gemini and enhance your applications in exciting ways. Remember, the key lies in understanding your needs, selecting the right approach, and implementing the integration with security and efficiency in mind. Start your Gemini journey today and discover the transformative power of AI for your microservices!