Pravega Client API 101

2021-02-21
Introduction

The fundamentals of stream semantics in Pravega are learned through familiarity with its client APIs. In this article, we will overview Pravega’s client APIs with a handful of simple examples. As we reach the end, you should see Pravega in action, understand the guarantees afforded by Pravega streams, and have some familiarity with several of the facilities provided by the client API.

Pravega client APIs provide read and write access to data streams. Streams store sequences of bytes. Writers commit new sequences of bytes at the tail position(s) of a stream. Writes to a single stream can be split across shards or segments; and, when writes are accompanied by routing keys, these writes can be ordered within their determined segments.

Streams have scaling policies that allow them to split into several parallel segments. A stream is akin to a UNIX file opened in append mode, where several writers are guaranteed to append after each other and not overwrite each other’s contents. An open append-only file typically has one data stream, whereas a stream in Pravega can have many parallel data streams, called segments, allowing an influx of writes to scale horizontally across a cluster according to scaling policies and routing keys. Unlike most other distributed message-passing or data storage systems, the parallelism of a stream can change over time according to write throughput. Writes can be distributed across these parallel stream segments, and there can be more or fewer of them over time.

A stream with its scaling policy fixed to a single segment is very similar to an append-only file handle in UNIX, where multiple writers’ contents are naturally ordered by when their writes arrived, without a threat of overwrites. As a stream splits into more segments for more parallel writes, order can be preserved among specific writes by using routing keys. A routing key routes a write deterministically to one and only one segment, allowing a natural order of writes to continue to be preserved.

To use Pravega wisely, one must consider the above factors with respect to their application. What writes need to be ordered? How many different streams do you have? What routing keys are needed? Etc. For example, consider the impact of certain types of stream queries: putting all events from all sensors into a single stream may require reading all events from all sensors when reprocessing events from a single sensor.

In this article, we will first cover the event stream client and its semantics of ordered writes with sequential reads in single-segment streams, unordered writes with non-sequential reads in multi-segment streams, and ordered writes with semi-sequential reads using routing keys in multi-segment streams. We will then cover the batch and byte stream clients and their unique capabilities.

Overviewing single-segment streams that act like files

Connecting to and configuring Pravega

Rather than find a complicated example use-case that puts Pravega through its paces in terms of streams and segments and dynamic scaling, let’s first get to know Pravega’s client APIs by using streams with a fixed scaling policy of 1 segment. We’ll need a StreamConfiguration to define this:

StreamConfiguration streamConfig = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)).build();

With this streamConfig, we can create streams that feel a bit more like traditional append-only files. Streams exist within scopes, which provide a namespace for related streams. We use a StreamManager to tell a Pravega cluster controller to create our scope and our streams. But first, in order to work with Pravega, we will need to know at least one controller address:

URI controllerURI = URI.create("tcp://localhost:9090");

To use this URI, we must stand up a controller at localhost:9090 by cloning Pravega’s GitHub repo and running ./gradlew :standalone:startStandalone:

Figure 1: Pravega running in standalone mode.

With a standalone server running, we can invoke the StreamManager to create our scope and a numbers stream:

try (StreamManager streamManager = StreamManager.create(controllerURI)) {
    streamManager.createScope("tutorial");
    streamManager.createStream("tutorial", "numbers", streamConfig);
}

Note that all managers, factories and clients are auto-closeable and ought to be closed. Closing your instances promptly avoids resource leaks and untimely resource deallocation.

Whether we’re creating a reader or writer, we need a ClientConfig to at least tell the client its controller addresses:

ClientConfig clientConfig = ClientConfig.builder().controllerURI(controllerURI).build();

Sequential writes and reads on a single-segment stream

Let’s start by creating an EventStreamWriter for storing some numbers:

EventWriterConfig writerConfig = EventWriterConfig.builder().build();
EventStreamClientFactory factory = EventStreamClientFactory.withScope("tutorial", clientConfig);
EventStreamWriter<Integer> writer = factory.createEventWriter("numbers", new JavaSerializer<Integer>(), writerConfig);

And use it to write some numbers. Note that writeEvent() returns a CompletableFuture, which can be captured for later use or left to be resolved by flush() or close(); writes destined for the same segment are applied in the order writeEvent() is called:

writer.writeEvent(1);
writer.writeEvent(2);
writer.writeEvent(3);
writer.flush();

Notice how we’re able to write unadorned native Java objects and primitives to a stream. When instantiating the EventStreamWriter above, we passed in a JavaSerializer<Integer> instance. Pravega uses a Serializer interface in its writers and readers to simplify the act of writing and reading an object’s bytes to and from streams. The JavaSerializer can handle any Serializable object. If you’ll be passing around Strings, be sure to look at UTF8StringSerializer for a more compatible in-stream encoding. For interoperability, you’ll probably want to use a JsonSerializer, CborSerializer or AvroSerializer, but, for illustration purposes, we’re using the built-in and less portable JavaSerializer. Pravega Schema Registry is another appealing option.
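
If none of the built-in serializers fit, the Serializer interface itself is small: it converts objects to and from a ByteBuffer. Below is a minimal, hypothetical sketch of a custom serializer that stores integers as UTF-8 text; the class name and encoding choice are illustrative only, not something the examples in this article depend on.

import io.pravega.client.stream.Serializer;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative custom Serializer: integers as UTF-8 text rather than Java serialization.
public class IntAsTextSerializer implements Serializer<Integer> {
    @Override
    public ByteBuffer serialize(Integer value) {
        return ByteBuffer.wrap(value.toString().getBytes(StandardCharsets.UTF_8));
    }

    @Override
    public Integer deserialize(ByteBuffer serializedValue) {
        byte[] bytes = new byte[serializedValue.remaining()];
        serializedValue.get(bytes);
        return Integer.valueOf(new String(bytes, StandardCharsets.UTF_8));
    }
}

Such a serializer can be passed to createEventWriter() or createReader() anywhere JavaSerializer appears in the examples here.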

We can read back these events using a reader. Readers are associated with reader groups, which track the readers’ progress and allow more than one reader to coordinate over which segments they’ll read.

ReaderGroupManager readerGroupManager = ReaderGroupManager.withScope("tutorial", clientConfig);
ReaderGroupConfig readerGroupConfig = ReaderGroupConfig.builder().stream("tutorial/numbers").build();
readerGroupManager.createReaderGroup("numReader", readerGroupConfig);

Here we used the ReaderGroupManager to create a new reader group on numbers called numReader.

Now we can attach a single EventStreamReader instance to our reader group and read our 3 numbers from our numbers stream:

EventStreamReader<Integer> reader = factory.createReader("myId", "numReader",
        new JavaSerializer<Integer>(), ReaderConfig.builder().build());
Integer intEvent;
while ((intEvent = reader.readNextEvent(1000).getEvent()) != null) {
    System.out.println(intEvent);
}
reader.close();

Outputs:

1
2
3

Note that readNextEvent() takes a timeout in milliseconds, which is the maximum amount of time the call will block before returning an EventRead instance whose getEvent() returns null. In this example, we are terminating on the timeout that occurs when we reach the tail of the stream, but timeouts can also occur before a reader reaches the tail. Checkpoints can likewise cause getEvent() to return null. A checkpoint is a special event processed by reader groups that allows each reader to persist its progress and causes the reader group to save its positions across the stream; a read can be identified as a checkpoint event with isCheckpoint(). For these reasons, a more robust looping mechanism should be used in practical applications. We will improve this loop the next time we see it below.
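
To illustrate the checkpoint case, a read loop can inspect the EventRead itself rather than only the event value. This is a minimal sketch only; it still terminates on the first tail timeout, and the improved loop later in this article addresses the timeout case separately.

// Sketch: keep reading through checkpoint reads, which carry no event payload.
EventRead<Integer> eventRead;
do {
    eventRead = reader.readNextEvent(1000);
    if (eventRead.isCheckpoint()) {
        continue; // a checkpoint read returns null from getEvent() but is not the end of the stream
    }
    if (eventRead.getEvent() != null) {
        System.out.println(eventRead.getEvent());
    }
} while (eventRead.getEvent() != null || eventRead.isCheckpoint());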

Reader group good-to-knows

If a reader is not properly closed, it will need to be explicitly marked as offline in the reader group before it can come back online or before its work can be assigned to other readers; when doing so, its last known position can be provided. If a reader is gracefully shut down, it is removed from its reader group and its segments are redistributed among the remaining readers. A reader group cannot tell on its own when a reader has shut down ungracefully. A dead reader needs to be marked as offline for other readers to pick up its work or for that reader to rejoin the reader group; otherwise, the reader’s assigned segments will never be read:

try (ReaderGroup readerGroup = readerGroupManager.getReaderGroup("numReader")) {
    readerGroup.readerOffline("myId", null);
}

A reader group tracks each reader’s position in a stream. If all readers in a reader group are closed, when they come back online, they will automatically pick up where they left off. Should any readers remain online, they will pick up the work of the offline readers. When a closed or offline reader comes back online, it joins an active reader group as a new reader and negotiates a share of the work.

Pravega Samples has a ReaderGroupPruner example that shows how to detect any dead workers and call readerOffline(readerId, lastPosition) on their behalf. The null value for lastPosition above indicates that it should use the position(s) for that reader stored in the last reader group checkpoint or the beginning of the stream if there are no checkpoints. This could lead to duplicates if events are not idempotent or if checkpoints aren’t explicitly used to ensure exactly once processing.

In order to re-use reader group names and reader IDs when starting over from the beginning of a stream or from a newly chosen stream cut, you need to either delete and recreate the reader group or reset the reader group’s ReaderGroupConfig:

readerGroupManager.deleteReaderGroup("numReader");
// or
try (ReaderGroup readerGroup = readerGroupManager.getReaderGroup("numReader")) {
    readerGroup.resetReaderGroup(readerGroupConfig);
}

Pravega keeps reader group metadata for you inside internal metadata streams until you decide to delete it. When you are finished with a reader group, it is advised to delete the reader group as shown above.

Seeking around and to the tail end of a stream

We’ve just explored reading from the beginning of a stream to the end in sequential order. But can we seek through the stream? Of course we can! Let’s explore that now using a new tailNumReader reader group that is given the current tail stream cut as its starting point. (A stream cut is a set of positions across all of a stream’s segments within a specific time step, or epoch; the tail is the end of the stream at that point in time.)

StreamInfo streamInfo;
StreamCut tail; // current end of stream
try (StreamManager streamManager = StreamManager.create(controllerURI)) {
    streamInfo = streamManager.getStreamInfo("tutorial", "numbers");
    tail = streamInfo.getTailStreamCut();
}
ReaderGroupConfig readerGroupConfig = ReaderGroupConfig.builder().stream("tutorial/numbers", tail).build();
readerGroupManager.createReaderGroup("tailNumReader", readerGroupConfig);

We can now create an EventStreamReader instance named tailReader that reads from the end of the stream. If we append to the stream, we will only read the new events with this reader group. Let’s append to numbers and read with tailNumReader. As already mentioned, our previous while loop was a bit too simplistic. Let’s improve it by checking for the end of the stream using the reader group’s count of unread bytes:

writer.writeEvent(4);
writer.writeEvent(5);
writer.writeEvent(6);
writer.flush();

try (ReaderGroup readerGroup = readerGroupManager.getReaderGroup("tailNumReader");
     EventStreamReader<Integer> tailReader = factory.createReader("tailId", "tailNumReader",
             new JavaSerializer<Integer>(), ReaderConfig.builder().build())) {
    Integer intEvent;
    while ((intEvent = tailReader.readNextEvent(2000).getEvent()) != null
            || readerGroup.getMetrics().unreadBytes() != 0) {
        System.out.println(intEvent);
    }
}

Outputs:

4
5
6
null

We’ve now skipped the first 3 numbers and read just the next 3 numbers that we subsequently wrote to the stream. Pravega’s client API offers StreamCuts, Positions and EventPointers for advancing through streams. A StreamCut advances an entire reader group to a specific position across the segments in a stream. StreamCuts can be saved off and used later.
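
A Position captures a reader’s location across its assigned segments, while an EventPointer identifies a single event. As a small, hypothetical illustration (not part of the original example code), an EventPointer taken from an EventRead can be used to re-fetch exactly that event later, assuming the reader is still open:

// Capture a pointer while reading...
EventRead<Integer> eventRead = tailReader.readNextEvent(2000);
EventPointer pointer = eventRead.getEventPointer();

// ...and fetch the same event again later without replaying the stream.
try {
    System.out.println(tailReader.fetchEvent(pointer));
} catch (NoSuchEventException e) {
    // the event has since been truncated out of the stream
}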

With a 2-second timeout on readNextEvent() in the above example, we did output one null, because unreadBytes() is a lagging metric. But now, should an unexpected timeout occur mid-stream, our loop won’t terminate until we’re certain we’ve reached the end of the stream. The above while loop is robust, even if we unnecessarily retry a few times due to lag in the metric. Otherwise, we’re simplifying our read loops in this article by taking advantage of the facts that our streams aren’t continuous and our standalone server isn’t under load.

There are many ways to control your read loop. For example, in our Boomi Connector, we read time- and count-based micro-batches to be fed into Boomi pipelines. The Boomi Connector read loop shows the importance of handling ReinitializationRequiredException and TruncatedDataException in long-running applications or reusable integrations.
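
The essential shape of such a loop is small enough to sketch here. ReinitializationRequiredException signals that the reader group has been reset or deleted and the reader must be recreated, while TruncatedDataException signals that data at the reader’s position has been truncated away and reading can simply continue. The following is a minimal, hypothetical sketch, not the Boomi Connector’s actual code; keepRunning and process() are placeholders:

while (keepRunning) {
    try {
        EventRead<Integer> eventRead = reader.readNextEvent(1000);
        if (eventRead.getEvent() != null) {
            process(eventRead.getEvent()); // hypothetical application callback
        }
    } catch (TruncatedDataException e) {
        // data behind our position was truncated; the next readNextEvent() resumes at available data
    } catch (ReinitializationRequiredException e) {
        // the reader group was reset or deleted; recreate the reader before continuing
        reader.close();
        reader = factory.createReader("myId", "numReader",
                new JavaSerializer<Integer>(), ReaderConfig.builder().build());
    }
}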

Overviewing multi-segment streams that act less like files

Besides single-segment streams, streams can have many parallel segments, and the number of those segments can change over time, where the successor and predecessor relation between segments is preserved across time. While this increases the parallelism of streams, it also introduces a semantic of non-sequential and semi-sequential reads, which applications must consider as they compose their solutions with Pravega.

Figure 2: Diagram showing how stream segments can split and merge over time.

Exploring stream parallelism further will help us to conceive of how to compose our applications with Pravega. We will look first at placing writes across parallel segments randomly and then deterministically by routing keys.

To do this, we’ll need to create a new parallel-numbers stream that has more than 1 segment, let’s say, 5:

StreamConfiguration parallelConfig = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(5)).build();
try (StreamManager streamManager = StreamManager.create(controllerURI)) {
    streamManager.createStream("tutorial", "parallel-numbers", parallelConfig);
}

Non-sequential writes and reads on a multi-segment stream

If we again write successive integers to this stream of 5 fixed segments, we will find that they aren’t necessarily read back in order.

EventStreamWriter<Integer> paraWriter = factory.createEventWriter("parallel-numbers",
        new JavaSerializer<Integer>(), writerConfig);
paraWriter.writeEvent(1);
paraWriter.writeEvent(2);
paraWriter.writeEvent(3);
paraWriter.writeEvent(4);
paraWriter.writeEvent(5);
paraWriter.writeEvent(6);
paraWriter.close();

ReaderGroupConfig readerGroupConfig = ReaderGroupConfig.builder().stream("tutorial/parallel-numbers").build();
readerGroupManager.createReaderGroup("paraNumReader", readerGroupConfig);

try (EventStreamReader<Integer> reader = factory.createReader("paraId", "paraNumReader",
        new JavaSerializer<Integer>(), ReaderConfig.builder().build())) {
    Integer intEvent;
    while ((intEvent = reader.readNextEvent(1000).getEvent()) != null) {
        System.out.println(intEvent);
    }
}

Outputs will be non-deterministic, for example:

6
1
4
5
2
3

Semi-sequential writes and reads preserving order with routing keys

For unrelated events, these kinds of out-of-order reads are acceptable, and the added stream parallelism allows us to operate at higher throughput rates when using multiple writers and readers.

Often, however, you’ll need to preserve the order of related events. In Pravega, as in other systems, this is done with per-event routing keys: events that share the same routing key are delivered to readers in the order their writes were acknowledged to the writers by the segment stores.

Let’s consider an example where we want to preserve the order of several independent series of events. In this last example with EventStreamClientFactory, we’ll use the decades of the number line, writing the numbers of each decade in order along with their respective routing keys of “ones,” “tens,” “twenties,” and so on:

try (StreamManager streamManager = StreamManager.create(controllerURI)) {
    streamManager.createStream("tutorial", "parallel-decades", parallelConfig);
}
EventStreamWriter<Integer> decWriter = factory.createEventWriter("parallel-decades",
        new JavaSerializer<Integer>(), writerConfig);

Map<String, Integer> decades = new LinkedHashMap<>();
decades.put("ones", 0);
decades.put("tens", 10);
decades.put("twenties", 20);
decades.entrySet().stream().forEach(decade -> {
    IntStream.range(decade.getValue(), decade.getValue() + 10).forEachOrdered(n -> {
        try {
            decWriter.writeEvent(decade.getKey(), n).get(); // block for the ack; get() is what can throw the exceptions handled below
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    });
});
decWriter.close();

The “ones” (0-9), “tens” (10-19) and “twenties” (20-29) are written to our parallel-decades stream. Since we have 3 sequences across 5 segments, each decade sequence is potentially in its own segment.

When we read this back, the values of each decade will be in order, but the sequences may be interleaved, and the lower decades won’t necessarily come first:

ReaderGroupConfig readerGroupConfig = ReaderGroupConfig.builder().stream("tutorial/parallel-decades").build();
readerGroupManager.createReaderGroup("paraDecReader", readerGroupConfig);
EventStreamReader<Integer> reader = factory.createReader("decId", "paraDecReader",
        new JavaSerializer<Integer>(), ReaderConfig.builder().build());
Integer intEvent;
while ((intEvent = reader.readNextEvent(1000).getEvent()) != null) {
    System.out.println(intEvent);
}
reader.close();

For me, this outputs:

10
0
20
11
1
21
12
2
22
13
3
23
14
4
24
15
5
25
16
6
26
17
7
27
18
8
28
19
9
29

As we can see, the decades are interspersed, but they remain in order with respect to themselves. Notice how Pravega doesn’t store routing keys on its own, because it treats user data very transparently. To preserve routing keys, simply include them in your events, and consider using a JSON or Avro serializer.
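
One straightforward way to do that is to make the key a field of the event itself, so readers can recover it even though Pravega doesn’t hand it back. The class below is a hypothetical example, not something used elsewhere in this article:

// Hypothetical event type that carries its own routing key alongside its payload.
public class SensorReading implements java.io.Serializable {
    public String sensorId; // also used as the routing key when writing
    public double value;
}

// Writing: route by the same field that is stored inside the event.
// writer.writeEvent(reading.sensorId, reading);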

Batch client API overview

When reading events, EventStreamReader preserves the successor-predecessor relationship between segments: a predecessor segment is read in its entirety before any of its successor segments are read. For some applications, such as search, this constraint is overly restrictive in terms of performance. The batch API frees you of this constraint and allows you to read segments in parallel, irrespective of which segments succeed which others.

While streams are expected to use auto-scaling policies, applications can also manually scale their streams using the Controller API. For the sake of brevity, instead of waiting for a stream to auto-scale, let’s set up an example where we manually scale the stream, causing the stream to have predecessor and successor segments, and then check how the Batch API allows us to process all segments in parallel, rather than waiting for predecessor segments to finish processing before moving on to successor segments.

Let’s create a single-segment stream and write to it as before:

try (StreamManager streamManager = StreamManager.create(controllerURI)) {
    streamManager.createScope("tutorial");
    streamManager.createStream("tutorial", "scaled-stream", streamConfig);
}
EventStreamWriter<Integer> scaledWriter = factory.createEventWriter("scaled-stream", new JavaSerializer<Integer>(), writerConfig);
scaledWriter.writeEvent(1);
scaledWriter.writeEvent(2);
scaledWriter.writeEvent(3);
scaledWriter.flush();

Now that we have an ordinal sequence in a single segment, let’s scale to two segments and write some more. In order to scale our segments, we need to understand that a segment is defined by a routing key range. A single-segment stream has one key range that is always 0.0 – 1.0. Routing keys are hashed to a decimal between 0 and 1, and writes are assigned to segments thereby (Fig. 2). To split our stream into two segments, we need to define two key ranges that divide the one key range, such as 0.0 – 0.5 and 0.5 – 1.0. We then use the Controller API to command our stream to undergo a scaling operation with the two new key ranges. We first retrieve the ID of the segment we want to seal and scale:

// get segment ID and scale stream
ScheduledExecutorService executor = ExecutorServiceHelpers.newScheduledThreadPool(4, "executor");
Controller controller = new ControllerImpl(ControllerImplConfig.builder().clientConfig(clientConfig).build(), executor);
StreamSegments streamSegments = controller.getCurrentSegments("tutorial", "scaled-stream").get();
System.out.println("Number of segments: " + streamSegments.getNumberOfSegments());
long segmentId = streamSegments.getSegments().iterator().next().getSegmentId();
System.out.println("Segment ID to scale: " + segmentId);

Map<Double, Double> newKeyRanges = new HashMap<>();
newKeyRanges.put(0.0, 0.5);
newKeyRanges.put(0.5, 1.0);
CompletableFuture<Boolean> scaleStream = controller.scaleStream(Stream.of("tutorial/scaled-stream"),
        Collections.singletonList(segmentId),
        newKeyRanges, executor).getFuture();

Output:

Number of segments: 1
Segment ID to scale: 0

Now that we’ve split our one key range into 2 key ranges, we have 2 segments on our stream after the 1st segment. When the scaleStream future completes, we will know if the scaling operation succeeded, and we will be able to write to our once serial, now parallel stream! Since our stream becomes unordered anyway now that it has more than 1 segment, let’s take advantage of that and write unordered in parallel:

// write 4 thru 9 to scaled stream
if (scaleStream.join()) {
    List<CompletableFuture<Void>> writes = IntStream.range(4, 10).parallel()
            .mapToObj(scaledWriter::writeEvent)
            .collect(Collectors.toList());
    Futures.allOf(writes).join();
} else {
    throw new RuntimeException("Oops, something went wrong!");
}
controller.close();
scaledWriter.close();

If you’re fluent in Java, you may have noticed that ExecutorServiceHelpers and Futures are new. Pravega is chock-full of hidden gems with respect to API helpers and testing utilities.

Our stream is now composed of a single segment with ordered 1,2,3 followed by two segments with unordered 4 – 9, or {S1} -> {S2, S3}. When reading from such a stream under normal circumstances, a reader group will exhaust S1 before moving on to read from S2 and S3. The batch API will allow us to ignore the ordering of segments and process them all at once. The batch client presently hands out iterators, so let’s use this Java hack to transform iterators to streams (since I’m assuming Java 8):

private <T> java.util.stream.Stream<T> iteratorToStream(Iterator<T> itor) {
    return StreamSupport.stream(((Iterable<T>) () -> itor).spliterator(), false);
}

First we’ll use BatchClientFactory to collect all segment ranges into a set so we can use parallelStream() to simulate shipping them off to workers for event processing:

// get segments and their events with batch client
BatchClientFactory batchClient = BatchClientFactory.withScope("tutorial", clientConfig);
StreamSegmentsIterator segments = batchClient.getSegments(Stream.of("tutorial/scaled-stream"), null, null);
Set<SegmentIterator<Integer>> segmentIterators = new HashSet<>();
iteratorToStream(segments.getIterator())
        .collect(Collectors.toSet())
        .parallelStream()
        .flatMap(segmentRange -> {
            System.out.println("Segment ID: " + segmentRange.getSegmentId());
            SegmentIterator<Integer> segmentIterator = batchClient.readSegment(segmentRange, new JavaSerializer<Integer>());
            segmentIterators.add(segmentIterator);
            return iteratorToStream(segmentIterator);
        })
        .forEach(System.out::println);
segmentIterators.stream().forEach(SegmentIterator::close);
batchClient.close();

Example output:

Segment ID: 0
Segment ID: 4294967298
7
5
4
1
2
3
Segment ID: 4294967297
6
8
9

Byte stream client API overview

The last thing we’ll look at is Pravega’s byte stream client. Byte streams are restricted to single-segment streams, but, unlike event streams, they have no event framing: a reader cannot distinguish separate writes for you, so, if needed, you must do so by convention or by protocol in a layer above the reader. The byte stream client implements OutputStream, InputStream and NIO channels, allowing easy integration with Java. Let’s look at a simple example using ByteStreamClientFactory:

try (StreamManager streamManager = StreamManager.create(controllerURI)) {
    streamManager.createScope("tutorial");
    streamManager.createStream("tutorial", "int-bytestream", streamConfig);
}
try (ByteStreamClientFactory byteFactory = ByteStreamClientFactory.withScope("tutorial", clientConfig)) {
    try (ByteStreamWriter byteWriter = byteFactory.createByteStreamWriter("int-bytestream");
         DataOutputStream outStream = new DataOutputStream(byteWriter)) {
        outStream.writeInt(1);
        outStream.writeInt(2);
        outStream.writeInt(3);
    }
    try (ByteStreamReader byteReader = byteFactory.createByteStreamReader("int-bytestream");
         DataInputStream inStream = new DataInputStream(byteReader)) {
        System.out.println(inStream.readInt());
        System.out.println(inStream.readInt());
        System.out.println(inStream.readInt());
    }
}

In this example, we used ByteStreamWriter and ByteStreamReader as the underlying OutputStream and InputStream for Java 1.0’s DataOutputStream and DataInputStream, respectively. One thing to know about ByteStreamReader is that, unlike EventStreamReader, it blocks forever on read and doesn’t time out.
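
Because the reader sees only raw bytes, record boundaries are your own concern, as noted above. One common convention is a simple length prefix. The helpers below are a minimal, hypothetical sketch layered on the same DataOutputStream and DataInputStream used in the example; they are not part of the Pravega API:

// Length-prefix framing: each record is written as a 4-byte length followed by its bytes.
void writeRecord(DataOutputStream out, byte[] record) throws IOException {
    out.writeInt(record.length);
    out.write(record);
}

byte[] readRecord(DataInputStream in) throws IOException {
    int length = in.readInt();
    byte[] record = new byte[length];
    in.readFully(record); // blocks until the whole record has been read
    return record;
}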

Things to know not covered here

Pravega provides an additional revisioned stream client as well as Flink Connectors that expose Flink APIs over the event stream and batch clients. We will cover these in more detail in future articles. RevisionedStreamClient uses optimistic locking to provide strong consistency across writers, ensuring all writers have full knowledge of all stream contents as they issue writes in coordination with distributed processing. Lastly, our Flink Connectors provide end-to-end exactly once across pipelines of writers and readers, seamless integration with checkpoints and savepoints, and Table API with support for both batch and stream processing. Pravega’s 0.8 release introduces a new Key Value Table client built atop segments and state synchronizers.

Many advanced features of EventStreamClientFactory’s writers and readers have not been overviewed here but will get special attention in future articles. Some of these capabilities include automatic checkpoints, checkpoints used for exactly once processing, watermarks for time-oriented processing, writing with atomic transactions, and more.

Cleaning up after ourselves

As stated above, we should practice good hygiene and clean up after ourselves by deleting our reader groups, our streams and our scope (streams must be sealed, and scopes must be empty, to be deleted):

factory.close();
readerGroupManager.deleteReaderGroup("numReader");
readerGroupManager.deleteReaderGroup("tailNumReader");
readerGroupManager.deleteReaderGroup("paraNumReader");
readerGroupManager.deleteReaderGroup("paraDecReader");
readerGroupManager.close();
try (StreamManager streamManager = StreamManager.create(controllerURI)) {
    streamManager.sealStream("tutorial", "numbers");
    streamManager.deleteStream("tutorial", "numbers");
    streamManager.sealStream("tutorial", "parallel-numbers");
    streamManager.deleteStream("tutorial", "parallel-numbers");
    streamManager.sealStream("tutorial", "parallel-decades");
    streamManager.deleteStream("tutorial", "parallel-decades");
    streamManager.sealStream("tutorial", "scaled-stream");
    streamManager.deleteStream("tutorial", "scaled-stream");
    streamManager.sealStream("tutorial", "int-bytestream");
    streamManager.deleteStream("tutorial", "int-bytestream");
    streamManager.deleteScope("tutorial");
}

Conclusion

Understanding the fundamental aspects of streams as outlined above and their sequential, non-sequential and semi-sequential nature is all you need to compose applications with Pravega. Of course, Pravega has several more client interfaces and many advanced features, and we’ve only scratched the surface. But you’ve just learned the basics, and you’re ready to start building apps!

Final review

We’ve skimmed the surface area of Pravega’s client APIs, configuring streams, ordering writes in segments, creating reader groups and instantiating EventStreamWriters and EventStreamReaders.

We’ve seen how streams have varying semantics related to ordered and unordered writes and reads, depending on whether writes and reads happen within the same segment or across segments.

BatchClientFactory provides direct access to segments and, for the sake of raw performance, omits the successor ordering of segments in time across a stream.

The ByteStreamClientFactory APIs provide implementations of InputStream, OutputStream and NIO channels for working directly with the bytes in single-segment streams.

The basics covered here prepare an individual to think about how to decompose their problems according to the semantics provided by Pravega and how to begin composing solutions to those problems.

Get the code

Code for the above examples is available on GitHub in the form of easily executed JUnit tests:

https://github.com/pravega/blog-samples/tree/master/pravega-client-api-101

Acknowledgements

Special thanks belong to Flavio Junqueira, Sandeep Shridhar, Claudio Fahey, Ashish Batwara, and Tom Kaitchuck for their feedback, corrections and valuable insights. Thanks especially to the Pravega team as a whole for building such an awesome product and to you, the reader, for your patience in getting to this point and for all the solutions you’ll build on top of Pravega!
