
the history of the Teletype Model 33 and punched paper tape, Telex service and long-distance communication costs, the connection between TTY and the Teletype company name, 110-baud modems and 10-characters-per-second transmission, Wang Laboratories field offices connected via Telex, the evolution from Java Enumeration to Iterator to Iterable, Vector.elements() and the absence of an Enumerable interface, the introduction of Iterator and Iterable in JDK 1.2 and 1.5 respectively, the legacy collections Vector and Hashtable and their method-level synchronization overhead, Java 8 lambdas and streams as the major language feature, default methods enabling compatible interface evolution, the long-standing problem of not being able to add methods to published interfaces, Brian Goetz as the main designer of the Spliterator concept, Eclipse Collections and RichIterable as an alternative to streams, the GS Collections to Eclipse Collections history, C# LINQ as a competing influence that pressured Java to add streams, the design decision to separate lazy stream operations from eager collection operations, intermediate vs. terminal operations in stream pipelines, why streams cannot be consumed twice and the buffering problem with forking streams, primitive specializations of streams (IntStream, LongStream, DoubleStream) and the original compromise of Java primitives vs. objects, Spliterator characteristics, the SUBSIZED optimization that avoids intermediate storage and merge steps for array-based collections, how Spliterator splitting works for parallel execution and the fork/join pool, Amdahl's law and minimizing single-threaded setup for parallel streams, why Spliterator.trySplit mutates in place rather than returning two new spliterators, HashSet being SIZED but not SUBSIZED due to bucket distribution, ArrayList vs. LinkedList performance considerations for streams, streams from non-collection sources like BufferedReader.lines() and String.lines(), infinite streams with
Stream.generate(), the limitations of streams for reactive or socket-based processing, the for-each approach as an alternative to toList for live data sources, the upcoming topics of fork/join pools and parallel stream configuration, the JavaOne conference
Stuart Marks on Twitter: @stuartmarks
Hi Stuart. Welcome back on the airhacks.fm podcast.
Hi Adam, good to be back. Thanks for inviting me back on and we have some exciting news to share.
So I got an email. So by the way, you are the best-prepared guest on the show.
What makes you say that?
Yeah, because I received an email from you with, you know, some feedback to recent episodes and
with some pointers to the episode with Maurice. But the exciting news is what you told me:
that there will be a new JEP which will integrate a Teletype Model 33 with Java, right?
So we could very easily create punched cards with Java 28.
Yeah, maybe actually the Teletype, the Teletype Model 33.
There is a connection from previous topics to this. But the Teletype Model 33 didn't
have punched cards. It had punched paper tape. Ah, okay, you see? Yeah, I mean, this
JEP is in the making. This is not released yet. This is a preview, right? So
it might be, it might be an incubating JEP. It might be incubating for a very long time.
This could be a good joke actually, right? Yeah, yeah, maybe announce it on April 1st or
something. Yeah. But what's interesting is what you wrote: this TTY comes from
the name of the company, right? Right, yeah. So there was a company called Teletype.
And that's kind of what they did. You could hook one of these up to the phone line to another
teletype or to a computer. You could type on it. And what you typed was transmitted
over a 110-baud modem, which is about 10 characters per second, to the other end. And so, yeah,
that was the original. The way we got onto this topic was that in a previous
episode, Maurice Naftalin mentioned using punched paper tape. He didn't know how it was used,
but I used to use it. And so that's how this teletype comes in. It had a punched paper tape
punch and reader. And, yeah. And so that's how people in those days, I guess, I don't know when
this started, the 1960s maybe, or maybe the 1950s. But in those days, you could make long-distance
phone calls, but they were very expensive, like several dollars per minute. And so if you wanted
to transmit written information very quickly, then the teletype machine would help you do that.
And it ran over a service called Telex. It could be run either over private phone lines or
over the public phone lines. But basically, what you did is, offline, you typed your
message and punched it to paper tape. And then you made the very expensive phone call and
read in the paper tape that you just punched. And that typed at 10 characters per second,
which is faster than, I mean, it's faster than most people can type. Not everybody.
It's like ChatGPT, right? This is also similar.
But, you know, something like that, you can kind of see the characters appearing.
Yeah, exactly. Because it was an electric typewriter.
Now, maybe, you know, OpenAI is working on a new teletype, you know, this is why they make it
faster, this is why they need so, so much resources. But I opened the Teletype Model 33
Wikipedia link, which I'll put in the show notes. And the machine actually looks surprisingly
like a, yeah, okay, typewriter, right? And, yeah. And what I read is there were half a million
made by 1975. Oh, 1974. Yeah. Okay. Yeah. So that was just before faxes started
becoming really popular. But yeah. So if you wanted to send written communication in real time
to somebody else who had a teletype machine, you could do that. And it would only take,
you know, it was relatively inexpensive, right? So if you picked up the phone and talked to somebody,
you know, if you spent a couple minutes on the phone, that's, you know, dollars, $10, $20.
Yeah. I made a call to Europe once to arrange some hotel reservations. And it was $50 to make
the phone call, right? I mean, it's ridiculous. And yeah. So if you could type out your message,
you could use punched paper tape to type it very fast. And you'd only be on the
expensive long-distance call for a minute or two. Yeah. Assuming that 10 characters per
second are faster than you, because you are paying for the same line, right? So you're
paying for time, regardless. You're paying for time. Yes. Exactly. Not for the number of characters.
Yeah. So you are saying 10 characters per second was substantially faster than
a human could speak, right? Or more efficient, not faster, because you get straight to
whatever you have to say. Yeah. Yeah. You know, you pick up
the phone and you say, hi, Adam, how are you? How's the weather in Germany? You know, I mean,
this is why Germans never say that, right? You just say: this is what's going on, come on,
right? So this is like, yeah, exactly. But this is actually cool. But I remember, I think I
saw such machines. And also the Telex, they were in the post offices. Could it, is it possible?
I think so. I mean, because I remember the name Telex. Yeah. It's, well, I assume that was in
Germany. Yeah. So in Germany, I mean, my recollection of this is not very good, but I think the
post office ran all the phone lines as well. And so in Germany there were a lot more public
services provided by the Deutsche Post. So it wouldn't surprise me at all if there was, you know,
a service they provided where you could write out a message and they would send it to somebody
via Telex. And, um, did you know the American movie Three Amigos, uh, with Chevy Chase?
Yes. Yeah. Chevy Chase and Steve Martin and, yeah. And they send a message at the beginning of
the movie. And it was too expensive, so they had to shorten the message. And I think this was a
Telex. Well, I don't remember what era that was set in, but it was like horseback riding,
if I'm not mistaken. So that might have been a telegram. Telegram. Okay. Okay. Telegram is
even earlier, like Morse code and stuff like that. Yeah. I think it could be Morse code. I
forget, that was a long time ago. So that was earlier. And so then the thing about
Telex was that, um, in a small office, everybody had phone lines. And, you know, one of these machines
was not terribly expensive, so you could install them in small offices. Like, I remember,
a thousand dollars. Okay. Yeah. In a previous episode, we talked about my father's
history working for Wang Laboratories. So that's where I saw a Telex machine. And they connected
all of the field offices for the company via Telex. And they could send messages back and forth
that way. So internal corporate announcements were sent that way. So Telex is similar to modems,
right? Well, they transmitted characters over phone lines using modems. Yeah. And so,
yeah. So it was 110 baud. And remember, the most recent modems, before they became obsolete, were,
I think, 56 kilobit, something like that. 56 or 64, maybe, but that was already ISDN, I think.
I had the US Robotics 33.6, I think this was already fast. And the 56k was compression, whatever
it was. Right. Yeah. So that's pretty amazing. You think about that, and that's like,
you know, 400 times faster or something like that. Mm-hmm. Crazy. Um, yeah. So because you prepared
already so well, I think we should start with Java 8 and lambdas and streams, right?
So this would be the chronological order. We could. Yeah. So, to pick up where we
left off in the previous episode, I think I finished off with Java 7. Yeah. So Java 8,
that was, um, well, I mean, what is there to say about it? So,
the big, the big feature that everybody remembers from Java 8 was lambdas
and streams. No question about that. Um, I think the other thing that people forget,
but which is equally, if not more, important, is default methods, which basically
provided the ability to evolve an interface. But you needed this
for streams, to make all collections compatible, right? With the earlier ones. Well, yeah. So
we could have gotten away without it. But I think it was one of those things that made the
APIs a lot nicer. And so I think the idea of streams and lambdas stands on its own, right? So you
can, you know, suppose you create a stream from somewhere and, uh, you could chain things to it,
like filter, map, you know, flatMap, that sort of thing. And you could use
lambdas to plug functions into those methods. And that all works great. Um,
the question is how you create streams. And so you could certainly have a bunch of utility methods,
like a static utility method where you'd say, you know, create me a stream from
a collection, and probably under the covers what it would do is take an iterator,
do some things, and turn the iterator into a stream. And that would work, but there would be
a couple of problems with it. One is that since the main way to get elements out of a collection
is with an iterator, and an iterator presents elements only one at a time, it makes it really
difficult to run streams in parallel. And remember, one of the big deals with
streams was to be able to run in parallel. And in object systems, the way to
specify an interface but let the implementer provide an optimized implementation
that is tailored to the implementation is to have an instance method,
which the implementer can then override. And you can see that
all over the place in Java. I mean, that's such an obvious thing, but it bears
repeating. So that was the problem. It's still the case that
you can take any collection, turn it into an iterator, then turn that into a spliterator, and
then turn that into a stream. But basically, you're really forced into processing the elements
one at a time, and it would be really hard to recover any parallelism from them. But if you're
able to override the stream creation method in a particular collection implementation,
then the collection implementation can provide its own way of saying, oh, well, if we want to run
our entries in parallel, then I have access to all the elements, so I can split them up
and present them to an arbitrary number of threads. And that's how we get parallelism. And what I
just described was the action of a spliterator. And so most of the collections
implement their own spliterators. The typical example is an ArrayList. So an ArrayList
stores... You came up with the name Spliterator? Oh, no, no, no, that was Brian Goetz.
Okay. Brian was the main mover behind this. I mean, my contribution was
to try stuff out, do investigations and, you know, make comments on the API.
I mean, there were a bunch of interesting design decisions that we confronted
early on. But yeah, we went back and forth, because we wanted,
well, it started off by saying, well, let's take the iterator concept and expand it so that
it is splittable instead of one at a time. So we had a splittable iterator. And then
that sort of got squished together, you know, and kind of turned into "spliterator."
But one of the things we realized early on was that this would not work at all unless
we were able to add methods to interfaces. And this had been a long-standing
problem with Java up until Java 8. If you had an interface and somebody
implemented it, the requirement was to implement all the methods. Great. And suppose you
came along later and said, oh, I want to add a method to this interface. You could do
that, but the problem is there might be many
classes out there that implemented that interface, and they wouldn't have an implementation of that new
method. And so if an application called it, they would get a NoSuchMethodError or some sort of
linkage error, I actually forget what would happen. So basically a rule was
established very early on in Java object design, which was: once you've created and published
an interface for people to implement with their classes, you cannot change it.
And of course, object systems need to evolve. Over time, you discover the need for new things,
you discover new techniques, better ways to express things. And so various systems
worked around this issue by saying, well, I have a Foo interface, but I need to extend it.
Ideally, I would add a method. So I'll extend the Foo interface and
create a new interface, Foo2, that extends Foo, and add the new methods there. And then
later on, oh, I need another new method, so I'll create a Foo3 that extends Foo2, and
so forth. And so there are some systems that evolve their interfaces that way, which is a real
pain, because you can't just pass around a Foo and call the new methods on it. So anyway, this is
all lead-up to: in designing the streams API and its infrastructure
like spliterators, it very quickly turned into a requirement that we would need to find a way
to extend interfaces compatibly. And that's what default methods provide.
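[Editor's note] A minimal sketch of what default methods enable. Bag and SingletonBag are invented for illustration, but Collection.stream() in the real JDK is a default method with essentially this shape:

```java
import java.util.Iterator;
import java.util.List;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

// A published interface. Adding stream() later does not break old
// implementors, because they inherit the default body.
interface Bag<E> extends Iterable<E> {
    // Hypothetical method added after publication. The fallback builds
    // a stream from the general-purpose spliterator; an implementation
    // may override it with something tailored to its own layout.
    default Stream<E> stream() {
        return StreamSupport.stream(spliterator(), false);
    }
}

// An implementor written before stream() existed still compiles and runs.
class SingletonBag<E> implements Bag<E> {
    private final E element;
    SingletonBag(E element) { this.element = element; }
    @Override public Iterator<E> iterator() {
        return List.of(element).iterator();
    }
}
```

SingletonBag never mentions stream(), yet `new SingletonBag<>("x").stream()` works — that is exactly the compatible evolution described above.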
Yeah. And this got big feedback back then. I remember at all the JavaOne conferences there were discussions,
you know, how to do that. Lots of criticism, discussions. The community, there was still java.net
around, and Java blogs and JavaWorld. And everyone talked about default methods. And then,
of course, what's the difference between an interface and an abstract class, you know.
Right. This was the common discussion back then. Also the syntax of lambdas. And I think
Java was a little bit under pressure, because back then C# introduced LINQ. And
they were earlier than Java. And very popular. And now no one talks about it anymore, I would say.
Either people just like it, or it's just normal. And this is why everyone said, okay, we need it in
Java: streams. And because C# had LINQ, this was Language Integrated Query, which is similar
to streams, right? Yeah. Well, it turns out there were a bunch of things all going on at the same
time. And I think the idea of extending iterator to do more things was certainly
fairly common knowledge. Now here I'll give a shout-out to my friend Donald Raab, who is one of the
original authors and maintainers of what is now Eclipse Collections. But before that it was
called GS Collections, when it was at Goldman Sachs. And before that it was an internal
project called Caramel. And he was on the Lambda expert group. And so there was
a lot of influence from his thinking. And you can still see that in Eclipse Collections today.
Basically, streams give you
higher-order functions like map, filter, reduce, that kind of stuff. And what Eclipse Collections did
was they took the iterator concept, right? Well, you have an iterator, but the thing from which
you can get an iterator is an Iterable. And they chose that as their extension point. And so
the primary interface in Eclipse Collections is called RichIterable. RichIterable extends
the JDK Iterable, but it provides dozens of new higher-order methods. And so it's
sort of functionally equivalent to streams, but it has, you know,
many, many more functions. So yes. And the interesting part is that the iterator actually came late;
at the beginning there were enumerations, right? Oh, well, okay, so now you're bouncing
at the beginning with enumerations, right? Oh, well, in the, okay, so now you're, okay, bouncing
around in time. Yes. So in JDK 1.0, there was enumeration. Exactly. Yeah, iterator came along with the
original collections framework in 1.2. So that was, I mean, it did come up, come somewhat later,
but that was, that was very early on. The, yeah, and the enumeration had, has more elements and
elements, next element. And there was an naming and string tokenizer, of course, used enumerations.
I used this a lot. Yeah, this is what I remember. So this is a, I used the enumerations all the time.
And end in JDK 1.3, there was a naming enumeration introduced. It was interesting. But yeah,
I also remember, you know, the enumeration and then the introduction of iterator and iterable,
because there was a bigger deal, because we all, we used already enumeration project and then
iterators were on the horizon. Right. And they were similar. So Enumeration had hasMoreElements
and nextElement, and Iterator was a little bit shorter: there was next
and hasNext, right? Right. I am not entirely sure. I do not think,
actually, I should be able to call this up pretty quickly. I think Iterable didn't come
until several releases later. Yeah. And the reason was that Iterable is the thing
that was introduced in order to support the for-each loop in Java 5. So Iterable was
JDK 1.5. Yep. Very late. But Enumeration worked basically just with Vector, I think,
StringTokenizer, and I guess Hashtable, right? I assume, because back then there were no
collections. Right. Well, yeah, there were what we call the legacy collections.
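[Editor's note] The three generations of iteration discussed here, side by side — Enumeration from JDK 1.0 via Vector.elements() (note there is no "Enumerable" interface declaring it), Iterator from the 1.2 collections framework, and the for-each loop compiled against Iterable.iterator() since Java 5:

```java
import java.util.Enumeration;
import java.util.Iterator;
import java.util.List;
import java.util.Vector;

public class IterationStyles {
    public static void main(String[] args) {
        Vector<String> v = new Vector<>(List.of("a", "b", "c"));

        // JDK 1.0: Enumeration, from Vector.elements().
        Enumeration<String> e = v.elements();
        while (e.hasMoreElements()) {
            System.out.println(e.nextElement());
        }

        // JDK 1.2: Iterator, from the collections framework.
        Iterator<String> it = v.iterator();
        while (it.hasNext()) {
            System.out.println(it.next());
        }

        // Java 5: the for-each loop, which desugars to Iterable.iterator().
        for (String s : v) {
            System.out.println(s);
        }
    }
}
```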
Yeah, which were thread-safe, which is great. So Hashtable is thread-safe and Vector is thread-safe.
So if you use them, no problems. And hey, you could even use a virtual thread
with Hashtable and Vector right now, because there's no pinning anymore, right? I suppose so. Yeah.
It means a JDK 1.0 project can now migrate to JDK 25 and use virtual threads without any performance
penalties, right? Maybe. Yeah, we'll see. Maybe. Well, anyway, the thing about them being,
I mean, certainly people took advantage of them being thread-safe, but here's a bit of
trivia. The problem is, I don't know if we talked about this in a
previous episode, but the problem is that providing thread safety at the method level for Vector and
Hashtable, or hash-table as many of us call it, is at the wrong level, because usually you
want to do more like atomic transactions on things. And the other thing is that I think
there was this idea that, well, the JIT compiler, or the JVM, should be able
to remove the overhead of synchronization. And it turns out that in the general case, that's not
actually true. And so you definitely notice the overhead of using Vector
versus, say, ArrayList, even in a single-threaded environment. Yeah, because if you
access, you know, individual elements, basically the entire thing is locked, the entire container is
synchronized, right? That's the problem. I'm just looking at Vector, because I was
curious about the relation between Vector and Enumeration. And it seems like
it is not directly implemented; there's no interface it comes from, you know? So I don't know.
Okay. So you remember the teletype, and you
cannot remember the Enumeration. Okay, that was actually before I started to work on the JDK.
So, all right, well, one thing I do, and our listeners can't see this, but I have
all of the old versions of the Javadoc on my laptop. So we can get to them at a moment's notice.
Hey, cool. Oh, right. Of course, there it is, okay. Yeah, what was it?
Elements. Vector.elements() returned an Enumeration, but there's no interface that says
this is a thing that can give you an Enumeration. There's no Enumerable, as it were. Exactly. Because
I searched where elements() comes from, and it is just a method. There is no interface which defines
elements(). Yeah. Okay. So in the very, very beginning of Java, we had Vector.elements(),
and you get the Enumeration back, and Enumeration was similar, if not almost identical, to Iterator,
I would say, right? So it's interesting: why did you have to rename that? Do you
remember? You could just have reused it, because it is similar somehow. Enumeration has hasMoreElements
and nextElement, two methods, maybe because it was not generic, and an Iterator has
hasNext, next, and remove, and forEachRemaining. So, yeah. So remember, all of this stuff was
introduced in JDK 1.2 with the original collections framework. And so there were no generics then.
But yeah, I think part of it was, I don't know all of the reasons Josh
Bloch introduced Iterator as an interface. I think part of the reason is that iterator is actually
a reasonably well-known term of art in computer science, in language constructs,
so, having an iterator. I had heard of iterators before I started working on Java,
because other languages had them. So that's one thing. And then for some odd but pragmatic
reasons, he also decided that Iterator would need to have a remove method, which is rather
strange. But it kind of makes sense, because the main use case there is: suppose you want to go
through all the elements of a map. And this is basic collections stuff in 1.2. And remember,
so the programming style at that time was to mutate things in place. So suppose you had a map
with a bunch of entries in it, key value pairs. And you wanted to look at all the entries in a map
and remove the ones that were no longer necessary. How would you do that? And so basically the original
collections way of doing this was to get the maps entry set and get an iterator from that.
And so now you get map entries, which are key value pairs. You can look at the key in the value and say,
oh, yeah, I don't need this anymore. And then you call it remove on the iterator to get rid of it.
Yeah. So so that's what it was there for. Although it does stick out because it's very strange,
because most of the time, you know, when you iterate, you just want to read the elements. So it's
strange to have remove in there. Nowadays, we would filter a new new list without. Yeah, exactly.
Usually you run through a stream and filter out the things you're you're you're not interested in.
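[Editor's note] The 1.2-era idiom just described, next to its modern replacement (removeIf, itself a default method added in Java 8). The map contents are made up for illustration:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class MapRemoval {
    public static void main(String[] args) {
        Map<String, Integer> scores = new HashMap<>(
                Map.of("alice", 10, "bob", -1, "carol", 7));

        // JDK 1.2 style: iterate the entry set and remove in place.
        // This is the one legitimate use of Iterator.remove.
        Iterator<Map.Entry<String, Integer>> it = scores.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue() < 0) {
                it.remove();
            }
        }

        // Modern style: removeIf on a collection view (or run the entries
        // through a stream and filter into a fresh collection instead).
        scores.values().removeIf(score -> score < 0);

        System.out.println(scores.keySet()); // "bob" is gone
    }
}
```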
And there's even an interesting thing called PrimitiveIterator. Now you're surprised, huh?
Actually, you know, it's actually kind of interesting. Is that in JDK 8?
Yeah, 1.8. PrimitiveIterator.OfInt was introduced there. Yeah, it's a little weird, because
those are actually not used anywhere. And, if I'm not mistaken,
those can't actually be used anywhere, I think. You know, it's kind of interesting.
I think they're leftovers. Because, interestingly, there's a default method forEachRemaining,
where you can pass in a consumer. So you could maybe even, indirectly, feed it
System.currentTimeMillis or something, so you would end up having something endless, almost like a
stream. Uh-huh. So I will look at this, because it looks interesting. This is like PrimitiveIterator.OfInt,
and then you get an int iterator. But what were we covering? Oh, well, so I was going to say, okay.
So one of the issues with streams... okay, we'll return
back to Java 8, right? So we talked about default methods. And the idea was that
we needed to go around and add the ability to create streams from a bunch of existing classes.
Mostly collections, but not always. I mean, there were several other places
in the libraries where we wanted to create streams. But the main thing was collections, because
if you had a bunch of elements in a collection, you wanted to stream them. So that was the
primary use case for default methods. Okay. So that's great, because collections can
contain objects. Because you needed the default method to add this stream method without breaking
the old implementations, right? That's right. This was the use case. Yeah. And so that
was great. That works fine for objects. So we have Collection<E>, with E the
generic type variable. And generics at the time, and in fact it's still the
case, generics must be over reference types. So they're all objects.
You cannot have generics over primitives. And so one of the compromises at the
time, I mean, it basically harks back to the original compromise made in Java,
which was, instead of having a pure object-oriented model, Java 1.0 introduced this notion
of primitives. And the reason for that is the original team did not believe they could
make the system go fast enough if things like int were actually objects. And of course the Smalltalk
guys say, oh, blah, blah, blah, you know, in Smalltalk, Integer is an object,
and so forth. And there were many, many arguments about that. But the original Java designers said,
you know, in order to make this work, in order to have arrays of them and to treat, like, a
byte array as if it were straight memory, we really need to have primitives. So they made that
decision very, very early on. I don't know who exactly made that decision, but it
was certainly a James Gosling-era position. Okay. So the object model we have has, you know,
java.lang.Object and a hierarchy under that, plus the eight primitives.
Okay, so when we introduced streams, we knew that people would want to operate on streams over,
essentially, primitive data. But we didn't want to add eight variations of streams. So there's this
compromise, which was: okay, what are the most useful primitives that people will likely want to
use with streams? And so three of the primitives were chosen: int, long, and double.
And so that's why, as you said, not only is there java.util.stream.Stream<T>,
but there's also IntStream, LongStream, and DoubleStream. And we got on this topic because
you noticed PrimitiveIterator. And so there's PrimitiveIterator.OfInt, OfLong, and
OfDouble. And those were added as part of the same stuff. But it turns out that, I think,
those, I'm not entirely sure. I did an exploration of those and it's like, oh, you ought to be able to
do something really cool with the primitive iterators. But it turns out you can't. And I'm not sure why.
I think we added them in anticipation. We call these
primitive specializations of stream: IntStream, LongStream, DoubleStream. So as we added those
primitive specializations, we also needed primitive specializations of Spliterator.
And, I wasn't part of this effort, but whoever added
those also probably said, hmm, we probably also need primitive specializations of Iterator. And so
that's where those came from. And I think they might be used in a little corner of the API
somewhere. But in general, I don't think I've ever seen any code that actually uses them.
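[Editor's note] The three chosen specializations in a short sketch — how IntStream and friends stay in the primitive world and avoid boxing, and how boxed() crosses back to objects:

```java
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class PrimitiveStreams {
    public static void main(String[] args) {
        // Stream<Integer> boxes every element; mapToInt crosses into
        // the primitive specialization so sum() works on plain ints.
        int boxedSum = Stream.of(1, 2, 3, 4).mapToInt(Integer::intValue).sum();

        // IntStream, LongStream and DoubleStream avoid boxing entirely.
        long sum = IntStream.rangeClosed(1, 100).sum();          // 5050
        double avg = IntStream.of(2, 4, 6).average().orElse(0);  // 4.0

        // Crossing back to the object world is an explicit step: boxed().
        Stream<Integer> boxed = IntStream.range(0, 3).boxed();

        System.out.println(boxedSum + " " + sum + " " + avg + " "
                + boxed.count());
    }
}
```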
So I'm a bit disappointed, because we are talking about three useless classes and you are not
excited. Usually you would say, I will immediately deprecate them, and in the next Java release
I will delete them. You know, this would be the perfect case for you. Yeah. Because
I'm speaking here with, uh, Dr. Deprecator, Mr. Deprecator.
This is true. This is true. But that's interesting. They could be candidates,
right? Maybe. I'm clicking around the interface here. The problem is that if they're used in APIs,
then it's hard to deprecate something, because
that means you have to go around and deprecate the methods that return them. So, for instance,
you can get a PrimitiveIterator.OfDouble from a Spliterator.OfDouble,
and you can get a Spliterator.OfDouble from a DoubleStream. So, you know,
they are hooked into the APIs. But in practice, I think they're not useful.
Um, but deprecating is difficult, because it means we have to actually remove more stuff
from the APIs. So sorry, sorry to disappoint you. No, I thought, you know, you would have
some smooth deprecation procedure, you know, like, just... Yeah. No, occasionally
there is something that is completely useless,
that crept into the API by mistake, and we just get rid of those. But no, these have dependencies on
the rest of the APIs. So, yeah. So actually, this is good news, because it is hard to find
useless stuff in Java, right? Yeah. I mean, if we were to scan other languages, maybe, you know,
we could immediately remove 50 percent, but in Java, everything that's inside is useful.
Just maybe, maybe, maybe. But interesting part. So if we had enumeration and enumeration works,
in a for loop, more or less, iterator because of iterable, we have the for each loop, but you are
still looping and streams are different. So, um, in streams, there is no loop. You're just saying
what you would like to do in magic happens behind the scenes. And this is why, um, if you say map
and then filter, and at the end there is, I think it's called a terminal operation, so at the end
count or toList or whatever. And if I call toList or count, then behind the scenes
the iterator is called indirectly, or the Spliterator goes to the collection and partitions
the collection depending on some heuristics. And it can be multi-threaded or single-threaded,
depending on what you are doing. And this is lots of magic. So this is
actually some of the most complex stuff in Java, I would say, right? It's pretty complex. Yeah.
It's one of those things: it's magic until you
understand it. And it is a little hard to understand, but it's not that hard to understand.
But I give Brian a lot of credit for this, because he came up with the Spliterator
concept and he refined it over several, you know, there were several cycles of
evolution in the Spliterator. And where he finally ended up, this was mainly implementation
concerns. Well, it's a marriage of API concerns and implementation concerns, what it would take
to implement a Spliterator. So if you have a collection, implementing a Spliterator is
not that hard, but you do have to understand a fair amount of stuff in order to do it effectively.
But it is a little weird, because a Spliterator can be run in a sort of iterative
mode, where you can just say, get me the next element. Or, in fact, process all the remaining
elements with this lambda, and that's forEachRemaining. But then
there's also this wrinkle in there, which is, you can ask a Spliterator to split itself.
And it's a little strange, because I always forget which
half is returned. But suppose you have a Spliterator S1 and you say S1.trySplit().
The semantics are, as close to half as possible: S1 mutates itself so that it represents
only half of the original elements, and it returns a new Spliterator S2 that represents the
other half. And I always have to revisit this. I always forget which of S1 and S2
is on the right or the left. Today, we would maybe return two spliterators and make the old one
useless, right? To be immutable. Well, you know, it's an interesting design decision. I actually don't
know why it was designed that way. Well, actually, I don't know,
but I can guess. And one of the things is, there's a very pragmatic implementation reason.
From a functional programming standpoint, you'd say, oh, that's ridiculous. You should
say S1.trySplit(), and it should return S2 and S3, which each represent
half. And that's arguably the cleaner way to represent it in an API. But in
order to do that, you have to return two things, which means you need to return an aggregate,
which means you need to allocate memory. Plus, you need to allocate a new Spliterator.
And actually, Brian did a bunch of the implementation here. And if I recall correctly,
and this is a bit of a guess, but I think it is
close to the truth: when you're writing parallel programs, inevitably
there is a certain amount of setup you have to do before you can actually start to run things in
parallel. And what you need to do is drive the amount of single-threaded setup code
to an absolute minimum, so that you can get to the parallel stuff as quickly as possible.
And this is Amdahl's law. If you're familiar with that, it's a simple proportion.
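For reference, Amdahl's law as a quick numeric sketch (the class and method names here are mine, not from the conversation): with parallel fraction p and n processors, the speedup is 1 / ((1 − p) + p/n), capped at 1/(1 − p) no matter how many cores you add.

```java
public class Amdahl {
    // Amdahl's law: overall speedup when a fraction p of the work is
    // parallelizable and runs on n processors.
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // Even a small serial fraction dominates: with 5% serial work,
        // the speedup is capped near 20x regardless of core count.
        System.out.println(speedup(0.95, 8));    // ~5.9
        System.out.println(speedup(0.95, 1024)); // ~19.6
    }
}
```

This is why the single-threaded setup (the splitting phase) has to be as cheap as possible.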
But, you know, suppose you had an infinite number of processors. We don't have visuals here,
so let's try to hand-wave this. Suppose,
as soon as possible, you have to distribute the work, right? This is the point.
Yeah. Well, so the thing is, if you have some setup code that takes a certain amount
of time, then no matter how many processors you have, even if you could apply an infinite speedup
to the parallel portion, the single-threaded portion will dominate. Exactly. And so,
in order to get as much throughput as possible, you need to absolutely minimize the
amount of single-threaded stuff. So I think that's why the Spliterator is the way it is, right?
If you allocate more memory, it just means more work to do
before getting to the parallel stuff. Because the splitting of spliterators
occurs during the setup processing, before the parallel stuff kicks in.
Yeah. So basically, I think the easiest way to think about this is an array, right? You have an array
of elements. You can process them one at a time, left to right, or you can say, I have a Spliterator
that represents all the elements. And so
what happens is the stream framework says, hmm, I have n processors that I want to try to keep
busy. So what I'm going to do is call split on the spliterators until I get,
I think it's 2n or 4n, spliterators. The reason it kind of over-splits
is to try to smooth out lumpiness in the data. But basically, as part of the setup
process, the stream framework calls split a bunch of times. And the responsibility of the Spliterator
isn't that hard, right? All you have to do is say, I'm a Spliterator
that represents a sub-range of an array, and I have split called on me.
And I say, oh, okay, just divide it in half, return a new Spliterator that
represents one half, and adjust myself so that I represent the other half. And then if you only
have one element, you return null or something like that; there's some way to say,
sorry, I can't split. So conceptually, it's actually not that bad. All right, it's not that
difficult, right? So the stream framework calls split a bunch of times on the spliterators.
And then it wraps tasks around the spliterators and dumps them into a fork/join pool.
And then the fork/join pool does its work.
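The array-range spliterator just described can be sketched roughly like this (a minimal illustration of the mutate-in-place trySplit contract; class name is mine, and the real JDK ArraySpliterator is more elaborate):

```java
import java.util.Spliterator;
import java.util.function.Consumer;

// A minimal sketch (not the JDK's implementation) of a spliterator over a
// sub-range of an array, illustrating the mutate-in-place trySplit contract.
class ArrayRangeSpliterator<T> implements Spliterator<T> {
    private final T[] array;
    private int origin;      // next element to visit, inclusive
    private final int fence; // one past the last element, exclusive

    ArrayRangeSpliterator(T[] array, int origin, int fence) {
        this.array = array; this.origin = origin; this.fence = fence;
    }

    @Override public boolean tryAdvance(Consumer<? super T> action) {
        if (origin >= fence) return false;
        action.accept(array[origin++]);
        return true;
    }

    // Mutates this spliterator to keep the second half and returns a NEW
    // spliterator covering the first half; returns null when too small to split.
    @Override public Spliterator<T> trySplit() {
        int lo = origin, mid = (lo + fence) >>> 1;
        if (lo >= mid) return null;   // sorry, I can't split
        origin = mid;                 // this one now covers [mid, fence)
        return new ArrayRangeSpliterator<>(array, lo, mid);
    }

    @Override public long estimateSize() { return fence - origin; }

    @Override public int characteristics() {
        return ORDERED | SIZED | SUBSIZED; // exact sizes, before and after splits
    }
}

public class TrySplitDemo {
    public static void main(String[] args) {
        Spliterator<String> s1 =
            new ArrayRangeSpliterator<>(new String[] {"a", "b", "c", "d"}, 0, 4);
        Spliterator<String> s2 = s1.trySplit(); // s2 gets "a","b"; s1 keeps "c","d"
        System.out.println(s2.estimateSize() + " " + s1.estimateSize()); // 2 2
    }
}
```

Note the answer to the which-half question, at least in this sketch: the returned spliterator covers the earlier half, and the receiver mutates to cover the later half.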
Yeah. And there's characteristics(), which returns an int, also interesting, with all of that:
DISTINCT, SORTED, SIZED and NONNULL, IMMUTABLE and CONCURRENT and SUBSIZED. So, lots of
characteristics going on. And so you can specify, or suggest, the behavior
of the Spliterator. Yeah. And it's interesting, because it's one of those things where,
this is kind of a comment on our new release model versus the old release model.
In the old release model, we had a few years to develop stuff, and we weren't necessarily able to
prove out all the ideas. And I think the Spliterator characteristics are an
example of that. Because in retrospect, I'm
of the opinion that most of those Spliterator characteristics are not useful. I mean,
some of them enable some optimizations, but the optimizations are
so narrow that it's not worthwhile. So, for instance, SORTED is an interesting one.
I think the only optimization is, if you have a stream pipeline and you put a sorted
operator in the middle of that pipeline, it checks the Spliterator and
says, oh, if it's already sorted, I don't have to do anything. But, you know, how often does that
happen, right? However, and this is Brian again, I know Brian has said this
repeatedly, the SIZED and SUBSIZED characteristics really do enable the most important
optimization of all, which is the ability to avoid creating temporary or intermediate storage.
So, for instance, suppose you have an ArrayList.
An ArrayList has its elements in an array, and so it knows exactly how many of them there
are, right? And that's what SIZED means. And SUBSIZED is interesting,
which is, if you take an ArrayList spliterator and split it, it knows exactly how large
each split is, right? So if you have 100 elements and you call split,
you get exactly 50 on each side, right? And you might say, well, how can you not know?
I think the typical counterexample of that is a HashSet. A HashSet
hashes things into buckets in an array. And so you know how big the array is, but you don't know
how the elements are distributed within that array. So if you split the array in half,
you actually don't know how many elements are in each half, because
you don't know how the buckets are populated. So a HashSet is SIZED,
but not SUBSIZED. That's the difference there. Okay, so let's go back to the ArrayList.
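As a side note, the SIZED/SUBSIZED difference just described can be checked directly (a small standalone sketch, not from the conversation):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Spliterator;

// Inspect SIZED/SUBSIZED on an ArrayList vs. a HashSet spliterator.
public class CharacteristicsDemo {
    public static void main(String[] args) {
        Spliterator<Integer> fromList =
            new ArrayList<>(List.of(1, 2, 3, 4)).spliterator();
        Spliterator<Integer> fromSet =
            new HashSet<>(List.of(1, 2, 3, 4)).spliterator();

        // ArrayList: exact size now, and exact sizes after every split.
        System.out.println(fromList.hasCharacteristics(Spliterator.SIZED));    // true
        System.out.println(fromList.hasCharacteristics(Spliterator.SUBSIZED)); // true

        // HashSet: exact size for the whole set, but splits only estimate.
        System.out.println(fromSet.hasCharacteristics(Spliterator.SIZED));     // true
        System.out.println(fromSet.hasCharacteristics(Spliterator.SUBSIZED));  // false
    }
}
```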
So, SUBSIZED is really important, I think.
What you said right now is that a HashSet is an array of buckets,
and you don't know how long the chains within
each bucket are, right? You could calculate them, but it would cost time, so
you just don't know. Yeah. Well, HashSet is implemented using HashMap,
which implements its main table as an array. But each array slot is a bucket,
which might have zero, one, or more elements. And if there's more than one element,
they're in a linked list. Yeah, and there is no way to efficiently ask, how big are you?
It would be inefficient. Yeah. So if you split the array in half,
which is kind of the obvious thing to do, you don't know how many elements are inside.
You could go count, but that might take you a long time. Exactly. Okay. So, SUBSIZED is
important because, okay, back to ArrayList, right? If you split an array sub-range,
then each split also knows its exact size. And that's pretty important, because
there are a lot of things where, consider, suppose you have an ArrayList and you say
arrayList.stream().map(something).toList(). What toList does
under the covers is it says, all right, I'm going to ask my Spliterator what characteristics
it has. And it says, oh, it's SIZED, and not only that, it's SUBSIZED. So I know exactly how
large a destination array to create. So if I have 100 elements in my ArrayList,
I know the resulting list, and the resulting list is going to wrap an array.
So it creates an array of 100 elements. And then every split knows exactly how big it is, and
also where it goes in the destination array. So if you have a split that takes elements
35 through 47 of the source array, the results from the map operation are going to go into elements
35 through 47 of the destination array. So, you can imagine
that conceptually it would be easy to say, okay, pass each element
through the stream pipeline and deposit it somewhere, and at the end the splits are merged.
But if you have a Spliterator source that is SUBSIZED, then you can skip the whole merging
operation, because the operations deposit the results directly into the destination. So what
that means is there is no merge; you are asking the partitions to put the elements in place, right?
You're asking the first partition, the second, the third, the fourth,
instead of merging, right? This is why SUBSIZED is used as an optimization, so that you don't have
to merge. Yeah. So I guess the way I would say it is that each of the tasks has enough information
to store its results directly into the results array. So there's no
excess copying. Okay. So they copy into the right place
directly, without merging, right? This is the difference. Yeah. So, in the general case,
you would have a task that allocates temporary storage for the number of elements it processes
and then writes into there. And then there's a merge step that copies those elements into the final
destination. And that step is skipped if your Spliterator is SIZED and SUBSIZED.
So anyway, of all the characteristics,
that means, Stuart, that we should pick the right collection, one which is SUBSIZED, at the
beginning, so we get the performance. Right. Yeah. And
basically that means things that are array-based, right? So a list from List.of, or an ArrayList,
or a plain array; if you create a stream from an array, then that will also be SIZED and SUBSIZED.
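This too can be checked directly (a small sketch, not from the conversation), comparing an array-backed source with a LinkedList:

```java
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import java.util.Spliterator;

// Compare split-size knowledge of an array-backed source vs. a LinkedList.
public class SubsizedSources {
    public static void main(String[] args) {
        Integer[] arr = {1, 2, 3, 4, 5, 6};

        Spliterator<Integer> fromArray = Arrays.spliterator(arr);
        Spliterator<Integer> fromLinked =
            new LinkedList<>(List.of(1, 2, 3, 4, 5, 6)).spliterator();

        // Arrays: exact size of every split. LinkedList: size known up front,
        // but splits can only estimate, so the merge-skipping optimization is lost.
        System.out.println(fromArray.hasCharacteristics(Spliterator.SUBSIZED));  // true
        System.out.println(fromLinked.hasCharacteristics(Spliterator.SUBSIZED)); // false
    }
}
```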
A LinkedList? No. So that means, actually, try to avoid linked lists, right?
Oh, yeah. I mean, yet another reason not to use linked lists.
Yeah. Interesting one. I never thought about this before; now it's obvious.
But, yeah, you don't like linked lists, seems like. Yeah. You know, it's kind of, there's a whole history,
there's a whole history about linked lists, a lot of misinformation out there over the years.
And we, let's see, did we, I don't know if we covered linked lists in, I don't think so.
I was going to say, I was musing to myself whether Morris, Naftlin, and I covered this in the
second edition of the Java, Generics, and Connections book. I don't think we did, because I think,
well, anyway, or we might have touched on it briefly. I did do a fair amount of coaching with
Jose Pomard, who wrote a, within the past year, to wrote a fairly, fairly lengthy array list versus
linked list article. And you can find that on, either on dev.java or inside.java. I can dig up the link
for you. Yes. So this is interesting. But I thought the entire time about that. So it is actually,
we could have a stream just with an iterator, something that returns a Spliterator, without
collections, right? So I could read from a socket or whatever, and it could implement stream
operations without having a collection, actually. It should work, as long as the Spliterator
gets something. Yes. Well, and there are examples of that. BufferedReader is an
example of that. So if you have a BufferedReader that wraps a socket or a file or something like
that, you can ask it for its lines, and it returns you a
stream of lines. There's also Files.lines, and String.lines, right? I think so. String has a
lines method, I'm pretty sure.
I use that; it is very useful with multi-line strings. Yeah. So all of those,
all of those are true. Yeah. So there are a bunch of sources of streams that
are not collections. Yeah, you're right. String is the best example, because String is not
a collection in the Java sense. It is a collection of characters,
but not in the collections-framework sense. And with a multi-line string, it's
interesting: with lines, you get the lines of the string. Oh, yeah. And now, I already know
the answer: we cannot have reactive streams, right? Endless streams, where,
for instance, we could read endlessly from a socket. Well, okay. So the Java 8
streams concept does support infinite streams, but I would not say
that it supports reactive streams. Actually, I don't know that much about reactive streams.
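The non-collection sources just mentioned look like this in practice (a self-contained sketch; the reader here wraps an in-memory string, but in real use it might wrap a file or socket):

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.List;

// Two non-collection stream sources: String.lines() (Java 11+)
// and BufferedReader.lines().
public class LinesDemo {
    public static void main(String[] args) {
        String text = "alpha\nbeta\ngamma";

        // String.lines(): a stream straight from a multi-line string.
        List<String> upper = text.lines()
                                 .map(String::toUpperCase)
                                 .toList();
        System.out.println(upper); // [ALPHA, BETA, GAMMA]

        // BufferedReader.lines(): same idea, over any Reader.
        long count = new BufferedReader(new StringReader(text)).lines().count();
        System.out.println(count); // 3
    }
}
```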
Infinite streams is like Stream.generate: you put in a random-number supplier to get an endless
stream that will never end. Right. Or you can get, from java.util.Random,
various streams of ints, longs, and doubles.
Yeah. Or, though, can those be infinite? I think there are various overloads of those; you can say,
I want a hundred random numbers, and it dumps them into a stream. But you're right. Actually,
that was a silly question from me, because what you can do is Stream.generate and just pass
whatever; you can wrap a socket with whatever produces, you know, bytes, and you have it.
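A quick sketch of those infinite sources (standalone, not from the conversation): an infinite stream must be cut down with a short-circuiting step like limit() before a terminal operation such as toList(), or it will never finish.

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Stream;

// Infinite stream sources: Stream.generate with a supplier, and Random.ints.
public class InfiniteStreams {
    public static void main(String[] args) {
        // An endless stream of random numbers, trimmed to five elements.
        List<Integer> fiveRandoms = Stream.generate(new Random()::nextInt)
                                          .limit(5)
                                          .toList();
        System.out.println(fiveRandoms.size()); // 5

        // Random also offers primitive streams: ints() alone is effectively
        // endless, while the ints(100) overload is bounded to 100 values.
        long n = new Random().ints(100).count();
        System.out.println(n); // 100
    }
}
```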
There is this infinite stream pointed at a socket, but you have to do
something with it, right? Okay. If the socket blocks, we are done. Basically, we can say
toList is problematic, right? It will wait until the stream
completes, so nothing will happen. Yeah. So suppose you did
this, right? Suppose you had a socket and you wrapped a reader around it, and then a BufferedReader,
and then lines, and then said toList. Well, that would read every line from the socket.
And if there was no data available in the socket, but the socket was still open, it would just
block. Exactly. But yes, it would accumulate an arbitrary number of lines,
which might fill up your available memory. And I think the reason that people might do that is
they expect that whoever is at the other end is going to send a finite but unknown
amount of data, and then when the connection is closed, we're going to get a nice list
of strings. Yeah. But the connection has to be closed. And it would be cool, this is what I
meant, if you could just have a stream and do maps, filters, and whatever on the fly,
without waiting until it's closed, you know? Yeah. Well, it's
interesting. I don't think that's supported out of the box
by Java streams. If, instead of toList, you do forEach, would
it work? Yeah, that's a good question. I think, well, if you were
willing to process them one at a time, then, yeah, sure. I guess the question is,
I mean, people have asked for things like batching and whatnot. And
there's nothing directly in the stream framework that does that. You can write
your own Spliterator that does some things. But then the question is, you know,
what are your batching criteria, right? Like, suppose you're reading stuff off a socket.
It comes over and over again, right? So if you do something
with stuff like Kafka, for instance, this is something similar. You get endless
streams of messages, and Kafka has a Java stream-like API, which is declarative.
And what they just do is say, okay, read from here and store there, or join,
or whatever. Or if you are doing something
ChatGPT-like, right? You also get a stream of characters, and you would like to draw the characters
in the terminal. And of course, what do you usually do? You know, threads, and just write, whatever.
But sometimes I really think about it, and I say, actually, streams should also work for this.
toList is problematic, but maybe forEach. And it could actually be a nice abstraction,
because with a stream I could say, I get the character, I can convert it with map,
or filter out characters I'm not interested in, just like a transformation
pipeline. I'm not necessarily interested in a collection. Maybe I'm reading
character by character and doing something with it, like a pipeline. Yeah, that's true.
I think, though, that one of the difficulties is, even if you did something like forEach,
then for simple things, if all forEach does is push to some terminal buffer
or something like that, sure, that's fine. But my hunch is that sooner or later
you will want to demarcate your batches. And obviously it depends on what the
application is. Well, one typical thing is that you
want to read stuff and build up a batch, and then hand that
batch off so that you can process it all in parallel. But you don't
want to block, and then read three lines and say, okay, I'm going to hand this off to 16
cores to handle three lines in parallel. Of course not. What I thought about right now is
terminal applications, right? So I'm building a small Java utility, and I'm receiving data from
somewhere and writing to the terminal, or to a file, or both; I don't know, a very simple thing.
It doesn't have to be a high-performance multi-core application; it could be a simple
utility. And I thought, for simple utility stuff,
it would be a nice abstraction, because
map is nice. There are lots of mappers and filters now, and you can pass method references,
so the abstraction is right. And as long as you have the
collection, you can transform it to another collection.
But if the collection is alive, so it lives and provides data on the fly,
then it doesn't work. Yeah. Okay, so I will experiment with it.
So we actually covered, surprisingly,
the internals of streams. Mm-hmm. Well covered, right?
I think so. Actually, I did think of a couple of key design decisions that your listeners
might be interested in. One I talked about already, which was whether to use
Iterable as the extension point or Stream as the extension point, and
there's a reason for that. The primary one was that
the stream operations are all lazy, whereas the collection operations are all immediate,
or eager, I guess. And so we wanted to keep them separate. If we had put them on
Iterable, then every collection would have had this large mixture
of methods that were both lazy and eager. Uh-huh. You mean, if you had put map and everything
on Iterable, right? Okay. Yeah. And in fact, that's what
Eclipse Collections did with RichIterable, right? All of their collection
implementations have all of the regular collection methods and the higher-order functions.
And it's just a different design decision. Uh-huh. We wanted to organize our APIs
differently. Another one, which caused a fair amount of controversy, is that once you consume
a stream, you cannot consume it again. Enumeration, the name implies maybe that it could be lazy,
right? But Iterable, Iterator, it implies it is eager, because iterator, next,
give me the next; it is action-based. Yeah. I don't think
that's true. Okay. This was my impression. I think you're reading too much into it.
I think the main thing is, one of the things we did wrestle with for a long time
was, so to speak, okay, step one: split the new lazy
higher-order functions off into a new interface called Stream. So now,
on the one hand, we have collections. On the other hand, we have streams. And streams
have their own mini-language API. So, you alluded to them earlier, there are intermediate
operations versus terminal operations. Uh-huh. And, you know, I think when you're learning
streams, you invoke the higher-order functions and they're just lazy.
So if you have a list and you say, let's say, list.stream().map(...), you get nothing.
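A tiny sketch of that laziness (standalone demo, class name is mine); peek() is used only to make the element flow visible:

```java
import java.util.List;

// Intermediate operations are lazy: nothing runs until a terminal operation.
public class LazyDemo {
    public static void main(String[] args) {
        var pipeline = List.of(1, 2, 3).stream()
                           .peek(n -> System.out.println("saw " + n))
                           .map(n -> n * 10);
        System.out.println("nothing printed yet"); // peek has not fired

        // Only the terminal operation pulls elements through the pipeline.
        long count = pipeline.filter(n -> n > 10).count();
        System.out.println(count); // 2 -- and "saw 1..3" printed just above
    }
}
```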
Yeah. Because map is an intermediate operation, and nothing happens until you call a terminal
operation. Uh-huh. And I think we spent a lot of time on that. One of the reasons
is that we wanted there to be a concept of a terminal operation, so that it could look upstream and
do some optimizations, like the SIZED and SUBSIZED optimization. And, you know,
there are different ways to do it. I think Eclipse Collections, Scala, and Kotlin have chosen
differently. So there, if you have list.map, you get a list with all the elements mapped.
And I think that's more convenient for people. But if you're not careful, what it does is
it ends up creating an intermediate collection each time, so if you say list.map.filter.map
or something like that, we wanted to avoid having to create an intermediate
collection for all of those. And so that's why they're lazy, and that's why there's a separate
concept of a terminal operation. But then the other thing is, once you've applied a terminal
operation, actually, once you've applied any operation to a stream. So if you say stream.map,
what you get back is a new stream. Well, what happens if you save the previous
stream and attempt to chain something off of that? You get an IllegalStateException
or something like that, saying the stream seems closed or something. Yeah. The stream
has been consumed already. Yeah. And it doesn't come up in
practice if you write typical stream pipelines, but if you're not careful,
sometimes you can get in trouble. It happens to me sometimes,
because sometimes I don't like to keep a list. I have this stream,
for instance in a record, and I would like to keep the stream and not the list. And then,
of course, I can use it just once, right? Yeah. Because otherwise I would have
to call stream.toList() and then convert the list to a stream again. But
okay, I have the stream already. I can use it later, but I can only use it once.
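The single-use rule can be seen directly (a small standalone sketch): chaining a second operation off a stream reference that was already used throws IllegalStateException.

```java
import java.util.List;
import java.util.stream.Stream;

// A stream may be operated on only once.
public class ConsumedOnce {
    public static void main(String[] args) {
        Stream<Integer> s = List.of(1, 2, 3).stream();
        Stream<Integer> mapped = s.map(n -> n + 1); // s is now consumed

        try {
            s.filter(n -> n > 1); // second use of s
        } catch (IllegalStateException e) {
            // "stream has already been operated upon or closed"
            System.out.println("caught: " + e.getMessage());
        }
        System.out.println(mapped.count()); // the new stream is still usable: 3
    }
}
```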
And I don't use it for efficiency. I use it because it's less code, you know; if I have the
stream, I don't have to do a toList and then stream again. I just have the stream, and I'm
careful that it works. Yeah. That's true. So if you're careful, it will work.
But I think sometimes people get into trouble when they want to say, okay, I have a
stream, I want to do some processing steps, and now I have a stream that represents some interesting
set of elements, and I want to process it two different ways. Exactly. I can't fork
the stream. Yeah. And the problem is, if you wanted to add kind
of a fork operation in the middle of a stream, then you would have two
downstream consumers. And there's this big question which arises, which is, what if the downstream
consumers consume at different rates? Somebody has to do some buffering somewhere. And we don't know
whether they're going to be consuming at different rates, so defensively, we would have to do an
arbitrary amount of buffering. On the other hand, we could say, okay, if you fork a stream,
then henceforth the downstream consumers run in lockstep. And that's totally at odds with the
parallel execution strategy. So can we fork a stream? We cannot, right? No. And so what
we did was say, okay, we like the parallel
execution model of a straight stream pipeline, what I described earlier with splitting and
dispatching out to the fork/join pool. But if you want to put a fork on a stream, the streams
framework isn't going to do that for you automatically, because it might require arbitrary
buffering, and we don't want to do that under the covers. So what we say is, if you're willing to
pay the cost of buffering, then what you should do is dump the stream
elements into a list, and then get two streams out of that list and process them independently.
So you can't fork a stream directly, but the way to fork a stream is
to create your own intermediate collection. And we wanted that to be explicit in the code.
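The explicit "fork" just described can be sketched like this (standalone example, names are mine): materialize the pipeline into a list, then run two independent streams over it.

```java
import java.util.List;

// "Forking" a stream by buffering into an explicit intermediate collection.
public class ForkByList {
    public static void main(String[] args) {
        List<Integer> materialized = List.of(1, 2, 3, 4, 5).stream()
                                         .map(n -> n * n)
                                         .toList(); // the explicit buffer

        // Two independent consumers, each with its own fresh stream.
        int sum = materialized.stream().mapToInt(Integer::intValue).sum();
        long evens = materialized.stream().filter(n -> n % 2 == 0).count();

        System.out.println(sum);   // 55
        System.out.println(evens); // 2
    }
}
```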
Mm-hmm. So that was one of the design decisions. Yeah, it's interesting talking to you, because
your angle is efficiency, and my angle is, write simple code first, and then measure,
and we see whether it's fast enough or not, usually. But I have, of course,
to reinvite you back, because we have another topic, which we covered only a little bit:
fork/join pools, of course. Right. And also some exciting technologies we didn't fully cover.
Telex was covered a little bit, but there is more of that ancient history.
Yeah. And I wanted to talk about how to delete characters from a paper tape.
We had it last time, I think, with the paper tape. Yeah. I don't know. Up to you.
It's just trivia. Don't worry. Yeah. But no, trivia is great, and the entire podcast is
about trivia, and lots of fun. And I'm interested: maybe we can talk a little bit more about
parallel, because it's not that obvious whether it is always worth it to call parallel, and
what happens if you do stream and then parallel. Yeah. And what is the relation to the
fork/join pool? Maybe what is work stealing? You know, there are lots of cliffhangers, and
how to configure it. Because there were hacks in Java, I don't know whether you remember,
for how to pass the thread pool from an application server to a parallel stream. Right. Yeah.
There was some discussion about that, and there are a bunch of
issues there. Well, I can certainly make some intelligent comments on it;
I'm not an expert in that. Yeah, there is a bunch of trivia. It's a little more than trivia,
actually; there are really
some serious design considerations. And in the worst case, we will talk about how to delete,
you know, the holes from the tape. Yes. That's true. We always have enough
fallbacks, you know. Yeah. Okay. And don't forget, there's also deprecation.
In Java or in tapes? Oh, in Java. Okay, I was already curious about the tapes. How to
deprecate the holes? Okay. Then thank you. And see you at JFokus,
maybe? No, I'm not going to be at JFokus, but I will be at JavaOne. I think I
saw you at JavaOne last year. Maybe I'll see you. Yeah, I'll be at JavaOne as well.
I'm looking forward to the conference. Right. Yeah. And last time was great. I'm
a little bit afraid of this one, because it cannot be improved anymore, so I'm really curious.
Yeah, last year was great. So I'm looking forward to it.
We're going to do our best to make it as good as we can. So, looking forward
to seeing you in March. Yeah. JavaOne. See you there. Okay. Great. Talk to you again. Bye. Bye.
