Releases: pinecone-io/pinecone-java-client

v3.1.0 Release

27 Nov 00:56

This version of the Pinecone Java SDK introduces support for specifying a base URL for control and data plane operations.

Features

Support for passing a base URL for control and data plane operations

Users can now specify the base URL (host:port) for both control and data plane operations using the Builder inner class of the Pinecone class. This is achieved with the withHost() method, supporting both HTTP and HTTPS endpoints. For HTTP endpoints on the data plane, users must also disable TLS using the withTlsEnabled() method.

The following example demonstrates how to perform control and data plane operations against a locally running emulator:

import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.configs.PineconeConfig;
import org.openapitools.db_control.client.ApiClient;
import org.openapitools.db_control.client.api.ManageIndexesApi;
import org.openapitools.db_control.client.model.DeletionProtection;
import java.util.Arrays;

...

String host = "http://localhost:5080";
Pinecone pinecone = new Pinecone.Builder("apiKey")
        .withHost(host)     // set host to localhost and port to 5080
        .withTlsEnabled(false)  // disable TLS for data plane operations
        .build();

// Create serverless index
pinecone.createServerlessIndex("serverless-index", "cosine", 3, "aws", "us-east-1", DeletionProtection.DISABLED);
// Create pod index
pinecone.createPodsIndex("pod-index", 3, "us-east-1-aws", "p1.x1");

// Describe serverless index
pinecone.describeIndex("serverless-index");
// Describe pod index
pinecone.describeIndex("pod-index");

// Get index connection object for serverless-index
Index serverlessIndexConnection = pinecone.getIndexConnection("serverless-index");
// Get index connection object for pod-index
Index podIndexConnection = pinecone.getIndexConnection("pod-index");

// Upsert records into serverless index
serverlessIndexConnection.upsert("v1", Arrays.asList(1f, 2f, 3f));
// Upsert records into pod index
podIndexConnection.upsert("v1", Arrays.asList(1f, 2f, 3f));

// Query by vectorId from serverless index
serverlessIndexConnection.queryByVectorId(3, "v1");
// Query by vectorId from pod index
podIndexConnection.queryByVectorId(3, "v1");

// Delete serverless index
pinecone.deleteIndex("serverless-index");
// Delete pod index
pinecone.deleteIndex("pod-index");

What's Changed

Full Changelog: v3.0.0...v3.1.0

v3.0.0 Release

24 Oct 16:14

This version of the Pinecone Java SDK introduces Inference and Import. It also supports version 2024-10 of the Pinecone API. You can read more about versioning here.

Features

Embed

The Inference client has an operation called Embed, which allows users to generate embeddings for input data.

import io.pinecone.clients.Inference;
import io.pinecone.clients.Pinecone;
import org.openapitools.inference.client.ApiException;
import org.openapitools.inference.client.model.Embedding;
import org.openapitools.inference.client.model.EmbeddingsList;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
...

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
Inference inference = pinecone.getInferenceClient();

// Prepare input sentences to be embedded
List<String> inputs = new ArrayList<>();
inputs.add("The quick brown fox jumps over the lazy dog.");
inputs.add("Lorem ipsum");

// Specify the embedding model and parameters
String embeddingModel = "multilingual-e5-large";

Map<String, Object> parameters = new HashMap<>();
parameters.put("input_type", "query");
parameters.put("truncate", "END");

// Generate embeddings for the input data
EmbeddingsList embeddings = inference.embed(embeddingModel, parameters, inputs);

// Get embedded data
List<Embedding> embeddedData = embeddings.getData();

Rerank

The Inference client has another operation called Rerank, which gives users the ability to rerank documents in descending order of relevance against a given query. Reranking documents is a common "second-pass" ranking strategy broadly used in retrieval applications.

import io.pinecone.clients.Inference;
import io.pinecone.clients.Pinecone;
import org.openapitools.inference.client.ApiException;
import org.openapitools.inference.client.model.RerankResult;

import java.util.*;
...

Pinecone pinecone = new Pinecone.Builder(System.getenv("PINECONE_API_KEY")).build();
Inference inference = pinecone.getInferenceClient();

// The model to use for reranking
String model = "bge-reranker-v2-m3";

// The query to rerank documents against
String query = "The tech company Apple is known for its innovative products like the iPhone.";

// Add the documents to rerank
List<Map<String, String>> documents = new ArrayList<>();
Map<String, String> doc1 = new HashMap<>();
doc1.put("id", "vec1");
doc1.put("my_field", "Apple is a popular fruit known for its sweetness and crisp texture.");
documents.add(doc1);

Map<String, String> doc2 = new HashMap<>();
doc2.put("id", "vec2");
doc2.put("my_field", "Many people enjoy eating apples as a healthy snack.");
documents.add(doc2);

Map<String, String> doc3 = new HashMap<>();
doc3.put("id", "vec3");
doc3.put("my_field", "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces.");
documents.add(doc3);

Map<String, String> doc4 = new HashMap<>();
doc4.put("id", "vec4");
doc4.put("my_field", "An apple a day keeps the doctor away, as the saying goes.");
documents.add(doc4);

// The fields to rank the documents by. If not provided, the default is "text"
List<String> rankFields = Arrays.asList("my_field");

// The number of results to return sorted by relevance. Defaults to the number of inputs
int topN = 2;

// Whether to return the documents in the response
boolean returnDocuments = true;

// Additional model-specific parameters for the reranker
Map<String, String> parameters = new HashMap<>();
parameters.put("truncate", "END");

// Send ranking request
RerankResult result = inference.rerank(model, query, documents, rankFields, topN, returnDocuments, parameters);

// Get ranked data
System.out.println(result.getData());

Import

AsyncIndex now exposes additional methods for working with Import operations. An Import is a long-running, asynchronous operation that gives users the ability to import vectors directly from object storage (e.g. S3) into a Pinecone index. It is intended to be used with large-scale jobs. For small-scale jobs (e.g. <1000 vectors), we recommend continuing to use upsert.

import org.openapitools.db_data.client.ApiException;
import org.openapitools.db_data.client.model.ImportErrorMode;
import org.openapitools.db_data.client.model.ImportModel;
import org.openapitools.db_data.client.model.ListImportsResponse;
import org.openapitools.db_data.client.model.StartImportResponse;
...

// Initialize pinecone object
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
// Get async imports connection object
AsyncIndex asyncIndex = pinecone.getAsyncIndexConnection("PINECONE_INDEX_NAME");

// s3 uri
String uri = "s3://path/to/file.parquet";

// Start an import
StartImportResponse response = asyncIndex.startImport(uri, "123-456-789", ImportErrorMode.OnErrorEnum.CONTINUE);

// List imports
ListImportsResponse listImportsResponse = asyncIndex.listImports(100, "some-pagination-token");

// Describe import
ImportModel importDetails = asyncIndex.describeImport("1");

// Cancel import
asyncIndex.cancelImport("2");

Breaking Changes

Import Statements Update:

In this version, we have made changes to the OpenAPI generated code. The package control is now renamed to db_control, so the import paths for control plane model classes such as ApiException, IndexModel, CollectionModel, etc. have changed. Please ensure that you modify your code to reflect the new import paths as follows:

- import org.openapitools.control.client.ApiException;
+ import org.openapitools.db_control.client.ApiException;

- import org.openapitools.control.client.model.IndexModel;
+ import org.openapitools.db_control.client.model.IndexModel; 

- import org.openapitools.client.model.CollectionModel;
+ import org.openapitools.db_control.client.model.CollectionModel;

- import org.openapitools.client.model.IndexList;
+ import org.openapitools.db_control.client.model.IndexList;

What's Changed

Full Changelog: v2.1.0...v3.0.0

v2.1.0 Release

18 Sep 19:12
27ea488

Added: Support to disable TLS for data plane operations

This release adds support for disabling TLS verification for data plane operations. Users can disable it by setting the enableTLS parameter of the PineconeConfig class to false. We do not recommend going to production with TLS verification disabled. The following example shows how to disable TLS verification:

import io.pinecone.clients.Index;
import io.pinecone.configs.PineconeConfig;
import io.pinecone.configs.PineconeConnection;
import io.pinecone.unsigned_indices_model.QueryResponseWithUnsignedIndices;
import io.pinecone.proto.UpsertResponse;
import java.util.Arrays;

public class DisableTLSExample {
    public static void main(String[] args) {
        PineconeConfig config = new PineconeConfig("api");
        config.setHost("localhost:5081");
        config.setTLSEnabled(false);
        PineconeConnection connection = new PineconeConnection(config);
        Index index = new Index(connection, "example-index");
        
        // Data plane operations
        // 1. Upsert data
        UpsertResponse upsertResponse = index.upsert("v1", Arrays.asList(1f, 2f, 3f));
        // 2. Query data
        QueryResponseWithUnsignedIndices queryResponse = index.queryByVectorId(1, "v1", true, true);
    }
}

What's Changed

New Contributors

Full Changelog: v2.0.0...v2.1.0

v2.0.0 Release

19 Jul 19:52
e5f79bb

Added: API versioning

This updated release of the Pinecone Java SDK depends on API version 2024-07. This v2 SDK release line should continue to receive fixes as long as the 2024-07 API version is supported.

Added: Deletion Protection

Use deletion protection to prevent your most important indexes from accidentally being deleted. This feature is available for both serverless and pod indexes.

To enable this feature for existing pod indexes, use configurePodsIndex():

import io.pinecone.clients.Pinecone;
import org.openapitools.control.client.model.DeletionProtection;
...
        
Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
pinecone.configurePodsIndex(indexName, DeletionProtection.ENABLED);

When deletion protection is enabled, calls to deleteIndex() will fail until you first disable the deletion protection.

// To disable deletion protection for a pod index
pinecone.configurePodsIndex(indexName, DeletionProtection.DISABLED);
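
A minimal sketch of this behavior (the exact exception type is an assumption; errors surfaced by the SDK extend PineconeException):

import io.pinecone.exceptions.PineconeException;
...

try {
    // Fails while deletion protection is still enabled
    pinecone.deleteIndex(indexName);
} catch (PineconeException e) {
    System.out.println("Delete blocked: " + e.getMessage());
    // Disable protection first, then delete
    pinecone.configurePodsIndex(indexName, DeletionProtection.DISABLED);
    pinecone.deleteIndex(indexName);
}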

If you want to enable this feature at the time of index creation, the index creation methods now accept DeletionProtection as an enum argument. The feature is disabled by default.

import io.pinecone.clients.Pinecone;
import org.openapitools.control.client.model.IndexModel;
import org.openapitools.control.client.model.DeletionProtection;

...

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
        
String indexName = "example-index";
String similarityMetric = "cosine";
int dimension = 1538;
String cloud = "aws";
String region = "us-west-2";

IndexModel indexModel = pinecone.createServerlessIndex(indexName, similarityMetric, dimension, cloud, region, DeletionProtection.ENABLED);

What's Changed

Full Changelog: v1.2.2...v2.0.0

v1.2.2 Release

28 Jun 18:36
8034dab

Added: Support for proxy configuration using proxyHost and proxyPort

Users can now configure proxy settings without having to manually instantiate a network handler for both data and control plane operations. Until now, users had to instantiate multiple Pinecone classes, pass a customManagedChannel for data plane operations, and configure an OkHttpClient separately for control plane operations, involving more complex setup steps.

The update simplifies proxy configuration within the SDK, ensuring easier setup by allowing users to specify proxyHost and proxyPort.

Note:
Users need to set up certificate authorities (CAs) to establish secure connections. Certificates verify server identities and encrypt data exchanged between the SDK and servers.

Example

  1. The following example demonstrates how to configure a proxy for data plane operations via NettyChannelBuilder vs. using the newly added proxy config support:

Before:

import io.grpc.HttpConnectProxiedSocketAddress;
import io.grpc.ManagedChannel;
import io.grpc.ProxiedSocketAddress;
import io.grpc.ProxyDetector;
import io.pinecone.clients.Index;
import io.pinecone.configs.PineconeConfig;
import io.pinecone.configs.PineconeConnection;
import io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.NegotiationType;
import io.grpc.netty.NettyChannelBuilder;
import io.pinecone.exceptions.PineconeException;

import javax.net.ssl.SSLException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.concurrent.TimeUnit;

import java.util.Arrays;

...
String apiKey = System.getenv("PINECONE_API_KEY");
String proxyHost = System.getenv("PROXY_HOST");
int proxyPort = Integer.parseInt(System.getenv("PROXY_PORT"));

PineconeConfig config = new PineconeConfig(apiKey);
String endpoint = System.getenv("PINECONE_HOST");
NettyChannelBuilder builder = NettyChannelBuilder.forTarget(endpoint);

ProxyDetector proxyDetector = new ProxyDetector() {
    @Override
    public ProxiedSocketAddress proxyFor(SocketAddress targetServerAddress) {
        SocketAddress proxyAddress = new InetSocketAddress(proxyHost, proxyPort);
        
        return HttpConnectProxiedSocketAddress.newBuilder()
                .setTargetAddress((InetSocketAddress) targetServerAddress)
                .setProxyAddress(proxyAddress)
                .build();
    }
};

// Create custom channel
try {
    builder = builder.overrideAuthority(endpoint)
            .negotiationType(NegotiationType.TLS)
            .keepAliveTimeout(5, TimeUnit.SECONDS)
            .sslContext(GrpcSslContexts.forClient().build())
            .proxyDetector(proxyDetector);
} catch (SSLException e) {
    throw new PineconeException("SSL error opening gRPC channel", e);
}

// Build the managed channel with the configured options
ManagedChannel channel = builder.build();
config.setCustomManagedChannel(channel);
PineconeConnection connection = new PineconeConnection(config);
Index index = new Index(connection, "PINECONE_INDEX_NAME");
// Data plane operations
// 1. Upsert data
System.out.println(index.upsert("v1", Arrays.asList(1F, 2F, 3F, 4F)));
// 2. Describe index stats
System.out.println(index.describeIndexStats());

After:

import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;

...
String apiKey = System.getenv("PINECONE_API_KEY");
String proxyHost = System.getenv("PROXY_HOST");
int proxyPort = Integer.parseInt(System.getenv("PROXY_PORT"));

Pinecone pinecone = new Pinecone.Builder(apiKey)
        .withProxy(proxyHost, proxyPort)
        .build();

Index index = pinecone.getIndexConnection("PINECONE_INDEX_NAME");
// Data plane operation routed through the proxy server
// 1. Upsert data
System.out.println(index.upsert("v1", Arrays.asList(1F, 2F, 3F, 4F)));
// 2. Describe index stats
System.out.println(index.describeIndexStats());

  2. The following example demonstrates how to configure a proxy for control plane operations via OkHttpClient vs. using the newly added proxy config support:

Before:

import io.pinecone.clients.Pinecone;
import okhttp3.OkHttpClient;
import java.net.InetSocketAddress;
import java.net.Proxy;

...
String apiKey = System.getenv("PINECONE_API_KEY");
String proxyHost = System.getenv("PROXY_HOST");
int proxyPort = Integer.parseInt(System.getenv("PROXY_PORT"));

// Instantiate OkHttpClient instance and configure the proxy
OkHttpClient client = new OkHttpClient.Builder()
        .proxy(new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort)))
        .build();

// Instantiate Pinecone class with the custom OkHttpClient object
Pinecone pinecone = new Pinecone.Builder(apiKey)
        .withOkHttpClient(client)
        .build();

// Control plane operation routed through the proxy server
System.out.println(pinecone.describeIndex("PINECONE_INDEX"));

After:

import io.pinecone.clients.Pinecone;

...
String apiKey = System.getenv("PINECONE_API_KEY");
String proxyHost = System.getenv("PROXY_HOST");
int proxyPort = Integer.parseInt(System.getenv("PROXY_PORT"));

Pinecone pinecone = new Pinecone.Builder(apiKey)
        .withProxy(proxyHost, proxyPort)
        .build();

// Control plane operation routed through the proxy server
System.out.println(pinecone.describeIndex("PINECONE_INDEX"));

Fixed: Adding source tag and setting custom user agent string for gRPC

The user agent string was not correctly set up for gRPC calls: instead of using the custom sourceTag + user agent string, the user agent would always default to "netty-java-grpc/<grpc_version>". This update fixes the issue so that, if a source tag is set by the user, it is appended to the custom user agent string.
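
As a reference, here is a minimal sketch of setting a source tag via PineconeConfig (the setSourceTag setter name is assumed; with this fix the tag should appear in the gRPC user agent instead of being dropped):

import io.pinecone.clients.Index;
import io.pinecone.configs.PineconeConfig;
import io.pinecone.configs.PineconeConnection;
import java.util.Arrays;
...

PineconeConfig config = new PineconeConfig(System.getenv("PINECONE_API_KEY"));
config.setHost(System.getenv("PINECONE_HOST"));
// Assumed setter: attach a source tag so it is appended to the gRPC user agent string
config.setSourceTag("my_source_tag");
PineconeConnection connection = new PineconeConnection(config);
Index index = new Index(connection, "example-index");

// Data plane call whose gRPC user agent should now include the source tag
index.upsert("v1", Arrays.asList(1f, 2f, 3f));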

What's Changed

  • Allow : in the source tag and add pinecone_test as a source tag for all integration tests by @rohanshah18 in #137
  • Add proxy configuration for OkHTTPClient and NettyChannelBuilder by @rohanshah18 in #136
  • Fix useragent for grpc by @rohanshah18 in #138
  • Update pinecone client version to 1.2.2 and remove redundant client version declaration by @rohanshah18 in #139

Full Changelog: v1.2.1...v1.2.2

v1.2.1 Release

31 May 21:56
c7c02ee

Fixed: Uber jar

The META-INF/services directory contains service provider configuration files. It wasn't shaded correctly, so users were seeing a NameResolverProvider error when running a data plane operation such as upsert using the uber jar.
Error:
Exception in thread "main" java.lang.IllegalArgumentException: Could not find a NameResolverProvider for index-name-somehost.pinecone.io.
The error is now fixed and users can use the uber jar for data plane operations successfully.

Example

The following example demonstrates how to use the uber jar in a pom.xml for a Maven project:

<dependencies>
   <dependency>
      <groupId>io.pinecone</groupId>
      <artifactId>pinecone-client</artifactId>
      <version>1.2.1</version>
      <classifier>all</classifier>
   </dependency>
</dependencies>

What's Changed

Full Changelog: v1.2.0...v1.2.1

v1.2.0 Release

24 May 18:43
e01dac4

Added: apiException as the cause to HttpErrorMapper.mapHttpStatusError to facilitate easier debugging

When a request fails before sending or receiving an HTTP response, the exception cause is now considered. Previously, the client would return an empty error message in such cases.
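
For illustration, a minimal sketch (failure scenario and exception type assumed) of inspecting the attached cause when a request fails:

import io.pinecone.clients.Pinecone;
import io.pinecone.exceptions.PineconeException;
...

Pinecone pinecone = new Pinecone.Builder(System.getenv("PINECONE_API_KEY")).build();
try {
    pinecone.describeIndex("example-index");
} catch (PineconeException e) {
    // The underlying failure is now attached as the cause instead of being dropped
    System.out.println(e.getMessage());
    System.out.println(e.getCause());
}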

Added: list vector IDs with pagination token and limit but without prefix

We have added the ability to list vector IDs with a pagination token and limit but without a prefix. Until now, users had to use one of the following methods in order to use a pagination token, i.e. they had to pass a prefix as well:

  1. list(String namespace, String prefix, String paginationToken)
  2. list(String namespace, String prefix, String paginationToken, int limit)

Example

The following demonstrates how to use the list endpoint with limit and pagination token to get vector IDs from a specific namespace.

import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.ListResponse;
...

Pinecone pinecone = new Pinecone.Builder(System.getenv("PINECONE_API_KEY")).build();
String indexName = "example-index";
Index index = pinecone.getIndexConnection(indexName);
// get the pagination token
String paginationToken = index.list("example-namespace", 3).getPagination().getNext();
// get vectors with limit 3 with the paginationToken obtained from the previous step
ListResponse listResponse = index.list("example-namespace", 3, paginationToken);

What's Changed

New Contributors

Full Changelog: v1.1.0...v1.2.0

v1.1.0 Release

09 May 22:02
891debd

Added: List vector IDs

We have added the ability to list vector IDs as a part of data plane operations. By default, the list operation returns up to 100 IDs at a time in sorted order. If the limit parameter is set, list returns up to that number of IDs instead. The list operation can be called using any of the following methods:

  1. list()
  2. list(String namespace)
  3. list(String namespace, int limit)
  4. list(String namespace, String prefix)
  5. list(String namespace, String prefix, int limit)
  6. list(String namespace, String prefix, String paginationToken)
  7. list(String namespace, String prefix, String paginationToken, int limit)

Briefly, the parameters are explained below:

  1. prefix – The prefix with which vector IDs must start to be included in the response.
  2. paginationToken – The token to paginate through the list of vector IDs.
  3. limit – The maximum number of vector IDs you want to retrieve.
  4. namespace – The namespace to list vector IDs from.

Example

The following demonstrates how to use the list endpoint to get vector IDs from a specific namespace, filtered by a given prefix.

import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.ListResponse;
...

Pinecone pinecone = new Pinecone.Builder(System.getenv("PINECONE_API_KEY")).build();
String indexName = "example-index";
Index index = pinecone.getIndexConnection(indexName);
ListResponse listResponse = index.list("example-namespace", "prefix-");

What's Changed

Full Changelog: v1.0.0...v1.1.0

v1.0.0 Release

12 Apr 18:16
e259bd4
  • Existing users will want to check out the v1.0.0 Migration Guide for a walkthrough of all the new features and changes.
  • New users should start with the README

Serverless indexes are currently in public preview, so make sure to review the current limitations and test thoroughly before using in production.

Changes overview

  • Renamed PineconeControlPlaneClient to Pinecone and added overloaded methods, so you are not required to
    construct request objects.
  • Added data plane wrappers Index and AsyncIndex, which eliminate the need to create Java classes for request objects. The Index class wraps the blocking gRPC stub, while AsyncIndex wraps the asynchronous gRPC stub for data plane operations.
  • Removed PineconeClient and PineconeConnectionConfig, and renamed PineconeClientConfig to PineconeConfig.
    PineconeConfig supports setting custom gRPC-managed channels for data plane operations along with setting a source
    tag.
  • Updated dependencies to address vulnerabilities:
    • io.grpc:grpc-protobuf: from 1.57.0 to 1.61.0
    • io.grpc:grpc-stub: from 1.57.0 to 1.61.0
    • io.grpc:grpc-netty: from 1.57.0 to 1.61.0
    • com.squareup.okhttp3:okhttp: from 4.10.0 to 4.12.0
  • Added the following model classes to address the limitation of Java not having a native datatype for unsigned 32-bit integers, which is the expected datatype of Pinecone's backend API. Sparse indices will now accept Java long (rather than int), with an input range of [0, 2^32 - 1]. Anything outside of this range will throw a PineconeValidationException:
    • QueryResponseWithUnsignedIndices.java
    • ScoredVectorWithUnsignedIndices.java
    • SparseValuesWithUnsignedIndices.java
    • VectorWithUnsignedIndices.java
  • Added read units as a part of queryResponse (see the sketch below).
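
For example, a minimal sketch of reading read units from a query response (getter names assumed from the Usage field added in #74, listed below):

import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.unsigned_indices_model.QueryResponseWithUnsignedIndices;
...

Pinecone pinecone = new Pinecone.Builder(System.getenv("PINECONE_API_KEY")).build();
Index index = pinecone.getIndexConnection("example-index");
// Query by vector ID with values and metadata included
QueryResponseWithUnsignedIndices response = index.queryByVectorId(3, "v1", true, true);
// Assumed getters: usage information, including read units, is attached to the response
System.out.println(response.getUsage().getReadUnits());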

What's Changed

  • Revert "Revert "Refactor dataplane "" by @rohanshah18 in #69
  • Add data plane wrapper and accept sparse indices as unsigned 32-bit integer by @rohanshah18 in #70
  • Refactor to improve user-experience and delete support for endpoint construction for data plane operations using projectId, env, and indexName by @rohanshah18 in #72
  • Add Usage to QueryResponseWithUnsignedIndices by @austin-denoble in #74
  • Regenerate gRPC classes after removing grpc-gateway / grpc -> OpenAPI spec metadata by @austin-denoble in #78
  • Add PineconeDataPlaneClient Interface by @austin-denoble in #76
  • Refactor configs and disable collections and configure tests by @rohanshah18 in #77
  • Fix Collection and ConfigureIndex tests by @austin-denoble in #79
  • Add upsert(List, String namespace), Rename clients, add createIndexConnection() and createAsyncIndexConnection() by @rohanshah18 in #80
  • Refactor CollectionTest, ConfigureIndexTest, and IndexManager to improve integration test speed and reliability by @austin-denoble in #81
  • Add Create Index Error + Optional-argument integration tests for Pod + Serverless by @austin-denoble in #82
  • Add data plane tests for serverless index by @rohanshah18 in #83
  • Add user-agent to control plane operations, add sourceTag, re-enable Pinecone client unit tests by @austin-denoble in #84
  • Update OkHttpClient dependency version to 4.12.0 to address vulnerability issues and clean up codebase by @rohanshah18 in #86
  • Poll for Index ready during cleanup in ConfigureIndexTest by @austin-denoble in #87
  • Update gRPC version to 1.60.2 to address vulnerability concerns and fix data plane integration tests by @rohanshah18 in #88
  • Handle nulls for some creation and configure methods and provide alternative method for creating collections by @aulorbe in #91
  • Add TestIndexResourcesManager and CleanupAllTestResourcesListener by @austin-denoble in #89
  • Abstract away Request objs in the ConfigureIndex method by @aulorbe in #93
  • Add concurrent HashMap for storing indexNames and connection objects by @rohanshah18 in #92
  • Add new CreateServerlessIndex method by @aulorbe in #94
  • [Fix] Accept additional properties in API JSON responses by @jhamon in #95
  • [Chore] Build output shows which tests are run by @jhamon in #96
  • [Chore] Bump GHA gradle action by @jhamon in #97
  • Add new createPodsIndex method by @aulorbe in #98
  • Deprecate createIndex method by @aulorbe in #105
  • Add new queryByVectorId and queryByVector functions to IndexInterface by @austin-denoble in #106
  • Add javadoc docstrings to Pinecone class by @austin-denoble in #104
  • Add doc-strings for unsigned indices model classes by @rohanshah18 in #108
  • Updated javadoc for Pinecone class by @ssmith-pc in #111
  • Add javadoc docstrings to IndexInterface, Index, AsyncIndex classes by @austin-denoble in #109
  • Add docstrings for configs by @rohanshah18 in #110
  • Update README and examples to reflect v1 changes by @aulorbe in #107

New Contributors

Full Changelog: v0.8.1...v1.0.0

v0.8.1 Release

09 Apr 21:22
9a53ca7

Updated: Class PodSpecMetadataConfig is replaced with Java class CreateIndexRequestSpecPodMetadataConfig

When new properties are added to API responses, it causes the Java client to error. The generated control plane Java code was therefore updated to ignore additional fields in API responses. As a result of this change, users who were relying on PodSpecMetadataConfig will now have to replace it with the CreateIndexRequestSpecPodMetadataConfig class.

Example

The following example shows how to replace PodSpecMetadataConfig with CreateIndexRequestSpecPodMetadataConfig.

// v0.8.0 
PodSpecMetadataConfig podSpecMetadataConfig = new PodSpecMetadataConfig();
List<String> indexedItems = Arrays.asList("A", "B", "C", "D");
podSpecMetadataConfig.setIndexed(indexedItems);

CreateIndexRequestSpecPod requestSpecPod = new CreateIndexRequestSpecPod()
                .pods(2)
                .podType("p1.x2")
                .replicas(2)
                .metadataConfig(podSpecMetadataConfig)
                .sourceCollection("step");

// v0.8.1: replace the class name
CreateIndexRequestSpecPodMetadataConfig podSpecMetadataConfig = new CreateIndexRequestSpecPodMetadataConfig();
List<String> indexedItems = Arrays.asList("A", "B", "C", "D");
podSpecMetadataConfig.setIndexed(indexedItems);

CreateIndexRequestSpecPod requestSpecPod = new CreateIndexRequestSpecPod()
                .pods(2)
                .podType("p1.x2")
                .replicas(2)
                .metadataConfig(podSpecMetadataConfig)
                .sourceCollection("step");

What's Changed

  • Update changelogs, sdk version, and user-agent for v0.8.1 release by @rohanshah18 in #103
  • [Fix] Accept additional properties in API JSON responses by @jhamon in #101

Full Changelog: v0.8.0...v0.8.1