Package com.rpl.rama

Interface Depot

All Superinterfaces:
AutoCloseable, Closeable, PartitionedObject

public interface Depot extends PartitionedObject, Closeable
Interface for thread-safe depot clients. These are also known as "foreign depots". A depot client is retrieved using a RamaClusterManager on a real cluster, or an InProcessCluster in a test environment.

Static methods on this interface implement the built-in depot partitioning schemes for use when declaring a depot in a module with declareDepot.
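As a sketch of how these pieces fit together (the module and depot names here are hypothetical, and the InProcessCluster/launchModule calls should be checked against the Rama testing docs), a module might declare a depot with one of the partitioning schemes below, and a test might then retrieve a depot client and append to it:

```java
import com.rpl.rama.*;
import com.rpl.rama.module.*;
import com.rpl.rama.ops.Ops;
import com.rpl.rama.test.*;
import java.util.Arrays;

public class SensorModule implements RamaModule {
  @Override
  public void define(Setup setup, Topologies topologies) {
    // Partition appended records by their first element (see hashBy below)
    setup.declareDepot("*sensorDepot", Depot.hashBy(Ops.FIRST));
  }

  public static void main(String[] args) throws Exception {
    // In a test environment, an in-process cluster stands in for a real one
    try (InProcessCluster cluster = InProcessCluster.create()) {
      cluster.launchModule(new SensorModule(), new LaunchConfig(4, 2));
      // Retrieve a thread-safe depot client ("foreign depot")
      Depot depot = cluster.clusterDepot(SensorModule.class.getName(), "*sensorDepot");
      depot.append(Arrays.asList("sensor-1", 42));
    }
  }
}
```

On a real cluster the client would come from RamaClusterManager instead of InProcessCluster; the rest of the usage is the same.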
  • Nested Class Summary

    Nested Classes
    static interface: Builder-style object to specify options when declaring a depot in a module
    static interface: Interface for implementing custom depot partitioners
  • Field Summary

    Fields
    Modifier and Type
    Field
    Description
    static final Object
    Object used as the return value in a depot migration to delete a record.
  • Method Summary

    Modifier and Type, Method, Description
    Map<String,Object> append(Object data)
    Append data to depot using AckLevel.ACK.
    Map<String,Object> append(Object data, AckLevel ackLevel)
    Append data to depot using the specified AckLevel.
    CompletableFuture<Map<String,Object>> appendAsync(Object data)
    Append data to depot using AckLevel.ACK.
    CompletableFuture<Map<String,Object>> appendAsync(Object data, AckLevel ackLevel)
    Append data to depot using the specified AckLevel.
    static com.rpl.rama.impl.NativeDepotPartitioning disallow()
    Disallows depot clients from appending to this depot.
    DepotPartitionInfo getPartitionInfo(long partitionIndex)
    Retrieves the start and end offsets of the log for the specified partition index of the depot.
    CompletableFuture<DepotPartitionInfo> getPartitionInfoAsync(long partitionIndex)
    Retrieves the start and end offsets of the log for the specified partition index of the depot.
    static com.rpl.rama.impl.NativeDepotPartitioning hashBy(com.rpl.rama.impl.NativeRamaFunction1 keyExtractor)
    Depot partitioning based on hashing a value from the data being appended.
    static <T extends RamaFunction1> com.rpl.rama.impl.NativeDepotPartitioning hashBy(Class<T> keyExtractorClass)
    Depot partitioning based on hashing a value from the data being appended.
    static <T extends RamaFunction1> com.rpl.rama.impl.NativeDepotPartitioning hashBy(String key)
    Depot partitioning based on hashing a value from the data being appended.
    static com.rpl.rama.impl.NativeDepotPartitioning random()
    Depot partitioning where each append goes to a random partition.
    List<Object> read(long partitionIndex, long startOffset, long endOffset)
    Reads records from the log for the specified partition index of the depot, between the given start and end offsets.
    CompletableFuture<List<Object>> readAsync(long partitionIndex, long startOffset, long endOffset)
    Reads records from the log for the specified partition index of the depot, between the given start and end offsets.

    Methods inherited from interface java.io.Closeable

    close

    Methods inherited from interface com.rpl.rama.PartitionedObject

    getObjectInfo
  • Field Details

  • Method Details

    • random

      static com.rpl.rama.impl.NativeDepotPartitioning random()
      Depot partitioning where each append goes to a random partition.
      Returns:
      Partitioning scheme implementation
    • hashBy

      static com.rpl.rama.impl.NativeDepotPartitioning hashBy(com.rpl.rama.impl.NativeRamaFunction1 keyExtractor)
      Depot partitioning based on hashing a value from the data being appended. This scheme always appends data with the same hashing value to the same depot partition and evenly spreads data with different hashing values across all partitions.

      Example: Depot.hashBy(Ops.FIRST) hashes based on the first value in the appended data
      Parameters:
      keyExtractor - Built-in function to extract hashing value from appended data
      Returns:
      Partitioning scheme implementation
    • hashBy

      static <T extends RamaFunction1> com.rpl.rama.impl.NativeDepotPartitioning hashBy(Class<T> keyExtractorClass)
      Depot partitioning based on hashing a value from the data being appended. This scheme always appends data with the same hashing value to the same depot partition and evenly spreads data with different hashing values across all partitions.

      This version uses a custom extractor class. This class must be on the classpath of both the module and client.
      Parameters:
      keyExtractorClass - Class implementing function to extract hashing value from appended data
      Returns:
      Partitioning scheme implementation
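A sketch of a custom extractor class (the class name, record shape, and "userId" key are hypothetical; the `invoke` method name should be checked against the RamaFunction1 docs):

```java
import com.rpl.rama.RamaFunction1;
import java.util.Map;

// Hypothetical extractor: partitions appended records by their "userId" entry.
// This class must be on the classpath of both the module and the client.
public class UserIdExtractor implements RamaFunction1<Map, Object> {
  @Override
  public Object invoke(Map record) {
    return record.get("userId");
  }
}

// In the module definition:
// setup.declareDepot("*userDepot", Depot.hashBy(UserIdExtractor.class));
```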
    • hashBy

      static <T extends RamaFunction1> com.rpl.rama.impl.NativeDepotPartitioning hashBy(String key)
      Depot partitioning based on hashing a value from the data being appended. This scheme always appends data with the same hashing value to the same depot partition and evenly spreads data with different hashing values across all partitions.

      This version extracts the hashing value by looking up the specified key in the record. The record must implement the Map interface.
      Parameters:
      key - Key to use to extract the hashing value from the depot record
      Returns:
      Partitioning scheme implementation
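For example (a sketch assuming a depot client `depot` for a depot declared with `Depot.hashBy("userId")` in its module; the key and record fields are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// The record must implement Map so the partitioner can look up the key
Map<String, Object> record = new HashMap<>();
record.put("userId", "alice");
record.put("action", "login");

// All records whose "userId" hashes to the same value land on the
// same depot partition
depot.append(record);
```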
    • disallow

      static com.rpl.rama.impl.NativeDepotPartitioning disallow()
      Disallows depot clients from appending to this depot. Appends can still be done from topologies using depotPartitionAppend. Useful for publishing an event stream from a module.
      Returns:
      Partitioning scheme implementation
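Module-side declaration sketch (the depot name is hypothetical):

```java
// Only topologies in this module may append to this depot, via
// depotPartitionAppend; depot clients can still read from it,
// making it suitable for publishing an event stream from the module.
setup.declareDepot("*events", Depot.disallow());
```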
    • append

      Map<String,Object> append(Object data)
      Append data to depot using AckLevel.ACK. Blocks until completion.
      Parameters:
      data - Data to append
      Returns:
      Map from topology name to streaming ack return for colocated stream topologies
    • append

      Map<String,Object> append(Object data, AckLevel ackLevel)
      Append data to depot using specified AckLevel. Blocks until completion.
      Parameters:
      data - Data to append
      ackLevel - Ack level to use
      Returns:
      Map from topology name to streaming ack return for colocated stream topologies if AckLevel.ACK is set. Otherwise returns an empty map.
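As a sketch (assuming a depot client `depot` and a record `event`; AckLevel.APPEND_ACK is an assumption here and should be checked against the AckLevel docs):

```java
import com.rpl.rama.AckLevel;
import java.util.Map;

// Blocks until colocated stream topologies have processed the record,
// and returns their streaming ack returns
Map<String, Object> acks = depot.append(event, AckLevel.ACK);

// A weaker level such as AckLevel.APPEND_ACK (an assumption) would return
// once the record is appended, with an empty map
Map<String, Object> empty = depot.append(event, AckLevel.APPEND_ACK);
```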
    • appendAsync

      CompletableFuture<Map<String,Object>> appendAsync(Object data)
      Append data to depot using AckLevel.ACK. Non-blocking.
      Parameters:
      data - Data to append
      Returns:
      Future that's delivered with map from topology name to streaming ack return for colocated stream topologies
    • appendAsync

      CompletableFuture<Map<String,Object>> appendAsync(Object data, AckLevel ackLevel)
      Append data to depot using specified AckLevel. Non-blocking.
      Parameters:
      data - Data to append
      ackLevel - Ack level to use
      Returns:
      Future that's delivered once the AckLevel is met. If AckLevel.ACK is set, it's delivered with a map from topology name to streaming ack return for colocated stream topologies. Otherwise, the delivered map is empty.
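Because appendAsync is non-blocking, many appends can be pipelined and awaited together (a sketch assuming a depot client `depot` and a list `events`):

```java
import com.rpl.rama.AckLevel;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Fire off all appends without blocking between them
CompletableFuture<?>[] futures = new CompletableFuture[events.size()];
for (int i = 0; i < events.size(); i++) {
  futures[i] = depot.appendAsync(events.get(i), AckLevel.ACK);
}

// Wait for every append to reach the requested AckLevel
CompletableFuture.allOf(futures).join();
```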
    • getPartitionInfo

      DepotPartitionInfo getPartitionInfo(long partitionIndex)
      Retrieves the start and end offsets of the log for the specified partition index of the depot. Blocks until completion.
      Parameters:
      partitionIndex - The partition index for which to retrieve offset information.
      Returns:
      A DepotPartitionInfo object that provides methods to access the offsets.
    • getPartitionInfoAsync

      CompletableFuture<DepotPartitionInfo> getPartitionInfoAsync(long partitionIndex)
      Retrieves the start and end offsets of the log for the specified partition index of the depot. Non-blocking.
      Parameters:
      partitionIndex - The partition index for which to retrieve offset information.
      Returns:
      A Future that completes with a DepotPartitionInfo object providing methods to access the offsets.
    • read

      List<Object> read(long partitionIndex, long startOffset, long endOffset)
      Reads records from the log for the specified partition index of the depot, between the given start and end offsets. Blocks until completion.
      Parameters:
      partitionIndex - The partition index from which to read records.
      startOffset - The starting offset (inclusive) from which to fetch records.
      endOffset - The ending offset (exclusive) up to which records will be fetched.
      Returns:
      A list of records.
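Combining this with getPartitionInfo gives a way to scan a whole partition (a sketch assuming a depot client `depot`; the DepotPartitionInfo accessor names startOffset()/endOffset() are assumptions, since this page only says the object "provides methods to access the offsets"):

```java
import java.util.List;

// Fetch the current bounds of partition 0's log, then read everything
// between them (start inclusive, end exclusive)
DepotPartitionInfo info = depot.getPartitionInfo(0);
List<Object> records = depot.read(0, info.startOffset(), info.endOffset());
```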
    • readAsync

      CompletableFuture<List<Object>> readAsync(long partitionIndex, long startOffset, long endOffset)
      Reads records from the log for the specified partition index of the depot, between the given start and end offsets. Non-blocking.
      Parameters:
      partitionIndex - The partition index from which to read records.
      startOffset - The starting offset (inclusive) from which to fetch records.
      endOffset - The ending offset (exclusive) up to which records will be fetched.
      Returns:
      A Future that completes with the list of records.
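The same scan can be done without blocking by chaining the two futures (a sketch with the same assumed accessor names as above):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Fetch offsets, then read the records, all off the calling thread
CompletableFuture<List<Object>> recordsFuture =
    depot.getPartitionInfoAsync(0)
         .thenCompose(info -> depot.readAsync(0, info.startOffset(), info.endOffset()));

recordsFuture.thenAccept(records -> records.forEach(System.out::println));
```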