Package com.rpl.rama
Interface Depot
All Superinterfaces:
AutoCloseable, Closeable, PartitionedObject
Interface for thread-safe depot clients. These are also known as "foreign depots". A depot client is retrieved using a RamaClusterManager on a real cluster or an InProcessCluster in a test environment. Static methods on this class implement built-in depot partitioning schemes for use when declaring a depot in a module with declareDepot.
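A minimal sketch of obtaining and using a depot client. The module name, depot name, and appended record are hypothetical; the sketch assumes RamaClusterManager.open() and clusterDepot(moduleName, depotName) as the entry points and that the manager can be closed with try-with-resources.

  import com.rpl.rama.Depot;
  import com.rpl.rama.RamaClusterManager;

  public class DepotClientSketch {
    public static void main(String[] args) throws Exception {
      // Connect to a running cluster; module and depot names are placeholders.
      try (RamaClusterManager manager = RamaClusterManager.open()) {
        Depot depot = manager.clusterDepot("com.example.MyModule", "*my-depot");
        depot.append("example record"); // blocks until AckLevel.ACK is satisfied
      }
    }
  }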
Nested Class Summary
Nested Classes:
- static interface: Builder-style object to specify options when declaring a depot in a module
- static interface: Interface for implementing custom depot partitioners
Field Summary
Fields:
- TOMBSTONE: Object used as a return value in a depot migration to delete a record
Method Summary
- append(Object data): Append data to depot using AckLevel.ACK.
- append(Object data, AckLevel ackLevel): Append data to depot using specified AckLevel.
- appendAsync(Object data): Append data to depot using AckLevel.ACK.
- appendAsync(Object data, AckLevel ackLevel): Append data to depot using specified AckLevel.
- static com.rpl.rama.impl.NativeDepotPartitioning disallow(): Disallows depot clients from appending to this depot.
- getPartitionInfo(long partitionIndex): Retrieves the start and end offsets of the log for the specified partition index of the depot.
- getPartitionInfoAsync(long partitionIndex): Retrieves the start and end offsets of the log for the specified partition index of the depot.
- static com.rpl.rama.impl.NativeDepotPartitioning hashBy(com.rpl.rama.impl.NativeRamaFunction1 keyExtractor): Depot partitioning based on hashing a value from the data being appended.
- static <T extends RamaFunction1> com.rpl.rama.impl.NativeDepotPartitioning hashBy(Class<T> keyExtractorClass): Depot partitioning based on hashing a value from the data being appended.
- static com.rpl.rama.impl.NativeDepotPartitioning hashBy(key): Depot partitioning based on hashing a value from the data being appended.
- static com.rpl.rama.impl.NativeDepotPartitioning random(): Depot partitioning where each append goes to a random partition.
- read(long partitionIndex, long startOffset, long endOffset): Reads records from the log for the specified partition index of the depot, between the given start and end offsets.
- readAsync(long partitionIndex, long startOffset, long endOffset): Reads records from the log for the specified partition index of the depot, between the given start and end offsets.

Methods inherited from interface com.rpl.rama.PartitionedObject:
getObjectInfo
Field Details
TOMBSTONE
Object used as the return value in a depot migration to delete a record.
Method Details
random
static com.rpl.rama.impl.NativeDepotPartitioning random()
Depot partitioning where each append goes to a random partition.
Returns:
Partitioning scheme implementation
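As a sketch, a depot declared with random partitioning inside a module definition. The module and depot names are placeholders, and define(Setup, Topologies) with setup.declareDepot is assumed to be the declaration entry point.

  import com.rpl.rama.*;
  import com.rpl.rama.module.*;

  public class RandomPartitionModule implements RamaModule {
    @Override
    public void define(Setup setup, Topologies topologies) {
      // Each client append to "*events" lands on an arbitrary depot partition.
      setup.declareDepot("*events", Depot.random());
    }
  }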
hashBy
static com.rpl.rama.impl.NativeDepotPartitioning hashBy(com.rpl.rama.impl.NativeRamaFunction1 keyExtractor)
Depot partitioning based on hashing a value from the data being appended. This scheme always appends data with the same hashing value to the same depot partition and evenly spreads data with different hashing values across all partitions.
Example: Depot.hashBy(Ops.FIRST) hashes based on the first value in the appended data.
Parameters:
keyExtractor - Built-in function to extract the hashing value from the appended data
Returns:
Partitioning scheme implementation
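For example, a depot partitioned by the first element of each appended record. Names are placeholders, and Ops is assumed to live in com.rpl.rama.ops.

  import com.rpl.rama.*;
  import com.rpl.rama.module.*;
  import com.rpl.rama.ops.Ops;

  public class HashByFirstModule implements RamaModule {
    @Override
    public void define(Setup setup, Topologies topologies) {
      // Records appended as lists like [userId, event] are partitioned by
      // userId, since Ops.FIRST extracts the first element of the data.
      setup.declareDepot("*user-events", Depot.hashBy(Ops.FIRST));
    }
  }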
hashBy
static <T extends RamaFunction1> com.rpl.rama.impl.NativeDepotPartitioning hashBy(Class<T> keyExtractorClass)
Depot partitioning based on hashing a value from the data being appended. This scheme always appends data with the same hashing value to the same depot partition and evenly spreads data with different hashing values across all partitions.
This version uses a custom extractor class. The class must be on the classpath of both the module and the client.
Parameters:
keyExtractorClass - Class implementing a function to extract the hashing value from the appended data
Returns:
Partitioning scheme implementation
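A sketch of a custom extractor class used with this overload. It assumes RamaFunction1 lives in com.rpl.rama.ops and exposes a single invoke method; the record shape and field name are hypothetical.

  import com.rpl.rama.ops.RamaFunction1;

  import java.util.Map;

  // Extracts the hashing value from a Map-shaped record. Must be on the
  // classpath of both the module and any client appending to the depot.
  public class ExtractUserId implements RamaFunction1<Object, Object> {
    @Override
    public Object invoke(Object record) {
      return ((Map) record).get("userId");
    }
  }

  // In the module definition:
  //   setup.declareDepot("*user-events", Depot.hashBy(ExtractUserId.class));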
hashBy
Depot partitioning based on hashing a value from the data being appended. This scheme always appends data with the same hashing value to the same depot partition and evenly spreads data with different hashing values across all partitions.
This version obtains the hashing value by looking up the specified key in the record. The record must implement the Map interface.
Parameters:
key - Key to use to extract the hashing value from the depot record
Returns:
Partitioning scheme implementation
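A sketch of the key-based overload, assuming the key value is passed directly to hashBy. The depot name, key, and record contents are hypothetical.

  import com.rpl.rama.*;
  import com.rpl.rama.module.*;

  public class HashByKeyModule implements RamaModule {
    @Override
    public void define(Setup setup, Topologies topologies) {
      // Map records with the same "accountId" value go to the same partition.
      setup.declareDepot("*transactions", Depot.hashBy("accountId"));
    }
  }

  // Client side, appending a Map record:
  //   Map<String, Object> record = new HashMap<>();
  //   record.put("accountId", 123);
  //   record.put("amount", 50);
  //   depot.append(record);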
disallow
static com.rpl.rama.impl.NativeDepotPartitioning disallow()
Disallows depot clients from appending to this depot. Appends can still be done from topologies using depotPartitionAppend. Useful for publishing an event stream from a module.
Returns:
Partitioning scheme implementation
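For example, a module that publishes an event stream that only its own topologies can write to. Names are placeholders.

  import com.rpl.rama.*;
  import com.rpl.rama.module.*;

  public class PublishOnlyModule implements RamaModule {
    @Override
    public void define(Setup setup, Topologies topologies) {
      // Depot clients cannot append to "*outgoing-events"; topology code in
      // this module would write to it with depotPartitionAppend.
      setup.declareDepot("*outgoing-events", Depot.disallow());
    }
  }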
append
Append data to depot using AckLevel.ACK. Blocks until completion.
Parameters:
data - Data to append
Returns:
Map from topology name to streaming ack return for colocated stream topologies
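A small usage sketch. The depot client would be obtained as shown in the class description, the record is arbitrary example data, and the return type is assumed to be a java.util.Map per the description above.

  import com.rpl.rama.Depot;

  import java.util.Map;

  public class AppendSketch {
    static void recordEvent(Depot depot, Object event) {
      // Blocks until colocated stream topologies have processed the record.
      Map ackReturns = depot.append(event);
      System.out.println("Ack returns: " + ackReturns);
    }
  }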
append
Append data to depot using the specified AckLevel. Blocks until completion.
Parameters:
data - Data to append
ackLevel - Ack level to use
Returns:
Map from topology name to streaming ack return for colocated stream topologies if AckLevel.ACK is set. Otherwise returns an empty map.
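For example, a fire-and-forget style append. This assumes AckLevel exposes a NONE level alongside ACK, which should be checked against the AckLevel documentation.

  import com.rpl.rama.AckLevel;
  import com.rpl.rama.Depot;

  public class AppendWithAckLevelSketch {
    static void recordEventNoAck(Depot depot, Object event) {
      // With an ack level other than ACK, the returned map is empty.
      depot.append(event, AckLevel.NONE);
    }
  }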
appendAsync
Append data to depot using AckLevel.ACK. Non-blocking.
Parameters:
data - Data to append
Returns:
Future that's delivered with a map from topology name to streaming ack return for colocated stream topologies
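A sketch of a non-blocking append, assuming the returned future is a CompletableFuture; the record is example data.

  import com.rpl.rama.Depot;

  public class AppendAsyncSketch {
    static void recordEventAsync(Depot depot, Object event) {
      // Returns immediately; the future completes once colocated stream
      // topologies have processed the record (AckLevel.ACK).
      depot.appendAsync(event)
           .thenRun(() -> System.out.println("append acked"));
    }
  }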
appendAsync
Append data to depot using the specified AckLevel. Non-blocking.
Parameters:
data - Data to append
ackLevel - Ack level to use
Returns:
Future that's delivered once the AckLevel is met. If AckLevel.ACK is set, delivered with a map from topology name to streaming ack return for colocated stream topologies. Otherwise, the delivered map is empty.
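For example, waiting only for the record to be durably appended rather than for stream topologies to process it. AckLevel.APPEND_ACK is assumed from the AckLevel documentation, and the returned future is assumed to be a CompletableFuture.

  import com.rpl.rama.AckLevel;
  import com.rpl.rama.Depot;

  public class AppendAsyncAckLevelSketch {
    static void recordEventAppendAck(Depot depot, Object event) {
      // Completes once the record is appended to the depot; the delivered map
      // is empty because AckLevel.ACK is not used.
      depot.appendAsync(event, AckLevel.APPEND_ACK)
           .thenRun(() -> System.out.println("record appended"));
    }
  }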
getPartitionInfo
Retrieves the start and end offsets of the log for the specified partition index of the depot. Blocks until completion.
Parameters:
partitionIndex - The partition index for which to retrieve offset information
Returns:
A DepotPartitionInfo object that provides methods to access the offsets
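A small sketch; DepotPartitionInfo is assumed to be importable from com.rpl.rama, and its accessor names are omitted since they are not shown here.

  import com.rpl.rama.Depot;
  import com.rpl.rama.DepotPartitionInfo;

  public class PartitionInfoSketch {
    static void inspectPartition(Depot depot) {
      // Blocks until the offsets for partition 0 are retrieved. The returned
      // object exposes the start and end offsets of that partition's log.
      DepotPartitionInfo info = depot.getPartitionInfo(0);
      System.out.println(info);
    }
  }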
getPartitionInfoAsync
Retrieves the start and end offsets of the log for the specified partition index of the depot. Non-blocking.
Parameters:
partitionIndex - The partition index for which to retrieve offset information
Returns:
A Future that completes with a DepotPartitionInfo object providing methods to access the offsets
read
Reads records from the log for the specified partition index of the depot, between the given start and end offsets. Blocks until completion.
Parameters:
partitionIndex - The partition index from which to read records
startOffset - The starting offset (inclusive) from which to fetch records
endOffset - The ending offset (exclusive) up to which records will be fetched
Returns:
A list of records
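A sketch of reading a slice of a depot partition's log. The offsets are arbitrary, and the return type is assumed to be a java.util.List per the description above.

  import com.rpl.rama.Depot;

  import java.util.List;

  public class DepotReadSketch {
    static void readSlice(Depot depot) {
      // Reads records at offsets 0 through 99 (end offset is exclusive)
      // from partition 0 of the depot.
      List records = depot.read(0, 0, 100);
      for (Object record : records) {
        System.out.println(record);
      }
    }
  }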
readAsync
Reads records from the log for the specified partition index of the depot, between the given start and end offsets. Non-blocking.
Parameters:
partitionIndex - The partition index from which to read records
startOffset - The starting offset (inclusive) from which to fetch records
endOffset - The ending offset (exclusive) up to which records will be fetched
Returns:
A Future that completes with the list of records