Computing Thoughts: Bruce Eckel's Programming Blog

A Model To Fund Open-Source Projects


Place a bounty on the next release of an open-source project. Until the bounty is met, those who need/want the newest release must pay an amount in order to get it. Previous releases remain free. Once the bounty is met, the new release becomes free.

The Problem of Funding

I think open-source is one of the most important cultural and business innovations produced (so far) by the computing and network revolution. We’ve only begun to feel the benefits of this shift and anything we can do to accelerate it will amplify those benefits.

The biggest hole in the system is funding. We’ve been trying to use existing donation models and it just isn’t working. If individuals are encouraged to donate to a tip jar, they tend to put it off (and eventually forget) because (A) they are just trying it out and don’t know yet if it’s going to be useful and (B) it’s a hassle to go through the tipping process and they are focused on trying to solve their problem right now. Once the moment has passed, it becomes easier and easier to forget.

Business folk don’t know what to do with open-source. It doesn’t fit into their world-view. A paid product is familiar territory and they are fine with buying it. But if it’s open-source, that means free, right? Why should we go through the hassle and paperwork if we can just use it? There are numerous disincentives to pay for it, and no noticeable benefits. Even if the programmers prod the purchasing department to do the right thing, it seems like needless work to the purchasing department. They’re not being stingy or mean—it’s just that everything else has higher priority than doing something that isn’t forced upon you. That’s just the way old-school business works.

Eventually cultural norms might develop, but in the meantime we need a structure that bridges the world of open-source and the world of business.

Release Bounties

Here’s how it works. For projects that create significant releases containing distinct new features (it might work in other situations as well, but that is not how I’m initially imagining it), on your next release you set a funding bounty. Typically this bounty will be based on the amount of effort you put into the release, but other factors can weigh in as well, such as trying to hire a new person for the project or paying debts you’ve incurred in developing the project thus far. In my opinion it is valuable to be completely transparent about the needs the bounty covers, because it helps engage customers in the process.

I think most projects will set different prices for individual users than for companies. One thing to keep in mind is that companies can have complex procurement processes so you need to factor in the time and effort involved (see later in this article about the benefits of having an external service to manage these issues). A typical way to charge a company might be based on number of programmers or employees in the company. Sometimes you’ll have to work out a special contract with that company, but if the result is beneficial enough it’s worth it.

Every time someone pays for the new release, the bounty is reduced by that amount. Once the bounty is paid off, the release becomes free to everyone. This way, customers who don’t care or can’t afford it have the option of waiting until the release becomes free. Customers that need it right away will pay for it. The rate at which the bounty is paid off gives feedback to the developers about how valuable the new feature set is.
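The accounting here is simple enough to sketch in a few lines of code. This is just my own illustration of the mechanism described above; the class and method names are invented, not part of any existing service:

```java
// Hypothetical sketch of release-bounty accounting; not an existing service.
public class ReleaseBounty {
  private long remaining;  // Amount still owed before the release becomes free

  public ReleaseBounty(long amount) { remaining = amount; }

  // Every purchase of the new release pays down the bounty:
  public void purchase(long payment) {
    remaining = Math.max(0, remaining - payment);
  }

  // Once the bounty reaches zero, the release is free to everyone:
  public boolean isFree() { return remaining == 0; }

  public long remaining() { return remaining; }
}
```

A real service would also need to record who has paid (so they get immediate access) and publish the remaining balance, since the pay-down rate is the feedback signal described above.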

You might even provide premiums like stickers, shirts, hats, etc., for individuals who are extra enthusiastic about the project and want to donate at a higher level in exchange for those items (much like the reward levels in services like Kickstarter).


The biggest benefit is the ability to create a steady income from your work. My hope is that it will liberate people from having to work a regular job so they can devote all their time to working on their open-source projects. I believe this will dramatically speed up software development in general, because more people will be able to work on the projects they love.

This mechanism also provides valuable feedback from your customers. Similar to systems like Kickstarter (which doesn’t fund you unless your minimum goal is reached, thus telling you whether there’s enough interest in your project), paying down a bounty gives customers more investment in what features you produce next.

I think it could also open channels to companies who might be willing to fund the development of feature-sets that help those companies with their particular needs. Although such channels are hypothetically available now, once there’s a payment mechanism in place it becomes much easier for a company to budget, request, and pay for new feature sets.

One very significant benefit is that you don’t have to estimate. You create the feature set first, and then you know how much effort it took to create it and that tells you what the bounty is. It might also incentivize you to make smaller, more frequent releases. I think it will certainly encourage you to discover and add the features that people want to pay for, and thus provides a valuable form of feedback.

This system might even provide funding for an organization like the Python Software Foundation (PSF). If the PSF were to create and host the infrastructure for this service, it could take a percentage for doing so (and I think folks in the Python open-source community would be enthusiastic about another way to fund the PSF, so the percentage could probably be higher than what Kickstarter, for example, charges). Under the umbrella of the PSF, payments made to open-source projects might even qualify as nonprofit donations, which could enable you to charge companies more, since they would then benefit from a tax break. In addition, the PSF is probably better equipped to handle the procurement paperwork for companies, and once that process was completed for the PSF, bounty payments for all projects under the PSF could be made through that pre-existing account, making it much easier and faster for the company.

Other products might be enabled using this approach. If there’s a need for online training, for example, the current open-source approach doesn’t free up time for the developers to work on it, especially if they have regular jobs. You might (openly) add something to the current bounty to help support the development of such materials (perhaps hiring someone to do it), then create a new bounty on those materials once they’ve been created (or you might choose to always charge something for your online training even if it’s been funded through a bounty—people who want it will appreciate its existence either way).


A bounty looks a bit like a “reverse Kickstarter,” since you are funding something that has already been created; this means there is much less risk, because you presumably already know the project and trust the developers. One difference is that it is version-based rather than whole-project-based.

Because all previous versions of the open-source project are free, this allows individuals and companies to choose whether they want or need the newest version, or if they are fine waiting for the bounty to be paid by others.

This approach could also enable things like sponsorship for a version. One company might decide it wants to pay off the entire bounty and the developers might give that company some kind of public acknowledgement.

Once a pathway for open-source project support has been opened, it invites further collaboration. Companies might ask for and fund specific features that they need. It might even promote more training and consulting around the project.

I am not suggesting that this approach is the solution for all open-source projects. For example, in the previous post I talked about my experience with ReadTheDocs. That service doesn’t generally provide quantifiable features on the user end (however, there might be some features on the developer end that could support release bounties). This approach will generally be most appropriate for products and libraries that steadily add features within new releases, which I think covers a large number of projects.

This approach also doesn’t solve the (important) problem of funding the startup of an open-source project, which is different and closer to traditional crowdfunding, where you are taking a distinct risk that the project might never reach fruition. For that we probably need to draw on the wisdom earned both from venture investing and services like Kickstarter.

I might have been able to think of this approach because of my Reinventing Business project. That project has certainly changed the way I look at possibilities in the world.


Pycon 2016

Pycon was wonderful this year. It was held at the Portland convention center, which I like, and about 40% of attendees were women; my perception was that around the same percentage of the speakers were women. I credit Guido van Rossum with requesting that the conference and the community become more inclusive (his goal is 50%). This is an amazing accomplishment in our field; my usual experience is in the low single digits. It has been shown that diverse companies make more money, and I think that the Python community will grow and become more successful because of this goal.

It seemed to me that the women felt comfortable enough to talk about what it is actually like to be a woman in this field. My take: it’s easy to think things have gotten better, and perhaps they have, but the situation is still pretty miserable and we have a long way to go (concerning diversity in general). This situation is holding the whole field back and I’m proud that the Python community is leading the way.

I’ve gone to many Pycons, including giving two keynotes in past years and speaking at Pycons in other countries. But this year I wanted to expand my experience by doing things that I haven’t before.

This included volunteering. I stumbled into bag-stuffing the first day. I did this once years ago, and the process was completely different. I think this is because volunteering is often more self-organized. In this case, it had evolved from people stuffing a bag one at a time to people cycling around holding their bag open and others dropping items into the open bag. It seemed much more efficient to me, and I found it very interesting to see how it had evolved.

I also signed up ahead of time to work at the registration desk and the green room (where speakers get ready to go to their rooms). Apparently the registration software is improved every year, to make it easier for inexperienced people (like me) to quickly become functional. Years ago, when I worked with the now-defunct Software Development conference (which couldn’t adapt to the Internet), one of the big, expensive problems they always had was registration. It was interesting to see how the open-source solution (at least, I think it was open source) had improved things, and will no doubt continue to do so.

In the green room, it turned out that one of the hosts for one of the speaking rooms had not shown up, and the green-room manager handed me a radio and said “you’re it.” (The radios were awkward and unfamiliar and I wish we could have just texted instead, or even better, used some kind of app that maintained status on a big board in the green room).

There are runners that walk people from the green room to the room where they speak. This seems like a simple job, but it’s very useful because speakers can then focus on mentally preparing themselves rather than worrying about finding their room (and potentially becoming flustered). The runner also talks to the speaker, and I think this can be calming. Pycon often has first-time speakers, so this is an important role.

The room-manager role I was doing covers three consecutive talks in a single room. I introduced each speaker, then held up signs for 15, 10, and 5 minutes, and one for stopping. Each sign was held up 5 minutes before the actual time. I didn’t find out how this time-shifting practice evolved, but apparently it works. All three speakers finished with enough time for questions, at which point I ran around with a microphone. A common problem is that people start asking their question before the mike arrives, but for each talk I reminded people that this was being recorded and that folks watching on YouTube want to hear the question. I was also watching for the next questioner while the current question was being asked, and bringing the microphone to the new questioner during that time. That might have helped as well, but I think the biggest effect was simply reminding the audience about home viewers.

I enjoyed all three presentations, and also being involved with the production.

In the last year I’ve started to watch, as part of my research, presentations on YouTube, Vimeo, etc. It’s been a huge addition to the way I learn. I also realize what a big effect this must be starting to have on the rest of the world. Anyone with Internet can have the benefits of conference presentations. Pycon has always been a leader in this area (and they get the presentations up very quickly), and I was quite aware this year that I can see any presentation on YouTube. For that reason I decided to try to only do things that were not recorded, and to take advantage of physical presence.

Considering that the conferences I hold are all open-spaces conferences, and that the first open-spaces (and lightning talks) I ever saw were at Pycon, it surprised me that I wasn’t more attuned to them. I attended three this year: one on freelancing, one on conference organization (where I was able to contribute from my experiences), and one called “Random Storytelling.” I attended this last one because my friend (and book designer) Daniel Will-Harris has invented a technique for creating stories—in fact, he’s in China right now setting up a system to teach it to drama students there. I thought I should look into this in case there were some ideas for him.

It turned out to be telling stories from your life. For me, it felt like I had suddenly stepped out of Pycon and into Burning Man, and I ended up telling a story about the first time I went to Burning Man. The session was a very connecting experience, and a group of us ended up going out to dinner.

Post-Storytelling Dinner

The group included Audrey and Daniel Roy Greenfeld (second and third from left, flanked by the ReadTheDocs guys), who met at Pycon and got married. Audrey created the Cookiecutter project, which I spent some time with during the sprints, creating a basic introduction. Also at that dinner were the creators of another project I sprinted on.

For the first time at Pycon, I held two open-spaces sessions, the first about Teal organizations and how we might start them, and the second to gather feedback for a book on Python concurrency, which looks very likely to be my next project.

The book session was extremely helpful. Kevin Quick came, who works at GoDaddy and helped create the Thespian Python actor system. This is in active use at GoDaddy and seems a likely candidate to demonstrate actors in the book.

My intent with the book is to expect Python proficiency but not to assume the reader knows anything about concurrency. This seemed to appeal to the group.

We also talked about print books vs. ebooks. I have been pushing for ebook-only publication, but this conversation set me back. Everyone in the group declared that, although they like ebooks well enough for “regular” books, for a programming book in particular, a print book is essential. One thing that impressed me was the description, which I have heard several times subsequently, of having a mental model of the physical book including where certain topics are located.

Wednesday night I had dinner with Guido van Rossum and a group from Dropbox, also to discuss the book. Guido was the first person I told when I got the idea, to ask whether he thought it filled a need; he was quite positive and said that just within Dropbox it would be good to have some way to educate people.

Dinner with Guido and Dropbox

This group also declared they wanted a print version of the book. Both groups also expressed interest when I floated the idea of holding an open-spaces conference (like the Winter Tech Forum) around Python concurrency, starting next summer.

After hearing this feedback and spending time during the Sprints with the ReadTheDocs group, I’ve started visualizing a scenario of publishing the book as it’s being developed (much as I did with the original version of Thinking in Java) on ReadTheDocs, then creating a print version for sale. If indeed people really want a print book, this model should produce enough income to pay for the project. If I enjoy it enough to create a seminar, then that would certainly make it worthwhile (I got rather burned out after giving C++ and Java seminars, but Python concurrency might be fun; in addition I’ve learned a lot of tricks over time that make seminars more enjoyable for both me and the participants).

ReadTheDocs provides different download formats (as well as just using it online), including HTML, PDF and Epub. During my time sprinting with them, we discussed the possibility of creating a process to produce a camera-ready PDF that could go directly to a print-on-demand service like LightningSource (which I use; they also make it effortless to switch to a press run and are owned by Ingram, which will handle distribution for you if you want). This process might become a business model to support ReadTheDocs: if you want to go to print, they could have several levels of paid help: basic (turn your docs into camera-ready PDF), self-publish (hand-hold you through the process but you remain the publisher), or ReadTheDocs might even become your publisher, receiving a portion of sales.


The Sprints

Finally, for the first time in all my Pycons I stayed for the sprints. This is one of those life experiences where learning something new can lead to regret that you haven’t been doing it all along, but oh well. And Guido pointed out that this might not be true, because I participated in what might have been the very first sprint after a Pycon in Washington DC, when a group of us went to the Zope Corporation in Virginia and were coached by Jim Fulton — Guido said this is the moment when Jim invented the sprint. But I probably wasn’t ready for it then.

People at the conference seem hesitant to join in, I think because they imagine that to sprint you have to understand the project and be good at something. My own experience disproves this. Indeed, much of the time my strength was my ignorance. I think the initial contact and onboarding experience is very important; if it’s not easy and positive it can turn people away. So if you are clueless about a project, just sit down at a sprint and declare that you’re going through the newbie install process—any project that I know of will be quite happy to have their onboarding checked, because they know it too well to spot the missing parts themselves.

Apparently the normal pattern is that about a quarter of the conference remains for the sprints (then trickles away over the days). The afternoon of the last day of the conference includes different projects giving tiny descriptions of what they are doing, followed by sprint coaching for new folks.

There is pre-sprint coaching, but you really can just show up, wander around, and find some projects. You learn a ton regardless of whether you’re able to contribute something, and it’s likely you can help in some way. If a project isn’t working for you, use the same “Law of Two Feet” as we have in open spaces and find another project.


ReadTheDocs

This was my first sprint. I’ve read a lot of documentation hosted on ReadTheDocs, but I didn’t know how the service worked. I started by following the installation directions for those wanting to build documentation on ReadTheDocs. I learned a lot by doing this and by being able to ask questions of the creators (see the conversation about publishing earlier in this post). The place I helped was in discovering a configuration problem that, as it turns out, had been plaguing them and wasting a lot of time, so they were elated to be able to fix it. This is what I mean—anyone can be helpful, even if it’s just by bumbling around and tripping over a problem.

On top of that, I feel like I have enough of a grasp of ReadTheDocs that I can start using it for my own projects.


Invoke

Build tools have always been a kind of hobby horse for me; at one point in the distant past I was pushing make to its limits, enough to see that its creators had given up in despair upon realizing that what they were really doing was creating a programming language.

Once you know that, you realize that the best solution is to start with an existing programming language, especially one like Python which is really good at manipulating everything. And Python decorators are perfect for annotating things to be tasks/targets.

I’ve made a couple of attempts to create a decorator-based build system, but I’d much rather use something that someone else has spent a lot of time making. Invoke, by Jeff Forcier, seems like it might be that tool. It’s quite thorough, but it doesn’t yet do timestamp dependencies the way make does. Although I couldn’t get close to making those changes myself, I was able to find a directed-acyclic-graph library that Jeff might be able to use for the task. Since that library was Python 2-only and Jeff wants libraries that support both Python 2 and 3, I modified it to produce Pythondagger23, which works with both versions. I’m hoping it will be a step towards a more powerful Invoke.


Cookiecutter

When I found out what Cookiecutter does I decided it could be very important to me. I made a very introductory overview of the tool so others could quickly understand its value. The project decided to use this as its official first introduction.

MicroBit Python

The MicroPython project had BBC MicroBit devices and asked us to create new demonstration examples.

My goal was just to get one working and to program something on the 5 by 5 grid of LEDs on the back, just as a kind of baseline test.

I went down a few wrong paths at first, probably because other parts of the project required creating a full set of build tools and flashing the MicroPython image onto the device. Fortunately I had help from (sorry I forgot your name) who coached me through installing parts of the system until he realized I just wanted to play with the LEDs, at which point he flashed my device from his system and we found that the Mu editor was the thing to get. We didn’t establish whether Mu took care of everything necessary to make the connection to the device, or if some driver we had incidentally installed during our explorations was also necessary.

Here’s the code we ended up creating:

from microbit import *
import random

pixX = random.randint(0, 4)
pixY = random.randint(0, 4)

def step(n):
    # Take one random step, clamped to the edges of the 5x5 grid:
    r = n + random.randint(-1, 1)
    if r < 0:
        return 0
    if r > 4:
        return 4
    return r

while True:
    display.set_pixel(pixX, pixY, 6)  # Show the dot
    sleep(100)
    display.set_pixel(pixX, pixY, 0)  # Erase it before moving
    pixX = step(pixX)
    pixY = step(pixY)

This makes a dot that wanders around the 5x5 matrix and bumps into walls and corners. We didn’t submit it because it seemed too simple compared to the other examples, but if the project still wants to use it, feel free.

The final night for me was Saturday, and I went to dinner with Barry Warsaw and the Mailman sprinters, so Barry and I were able to catch up. He’s still very happy working with Canonical. We figured out that we had both worked on the GNU Emacs C++ mode. I created the mode way back when I was working at the University of Washington School of Oceanography with Tom Keffer (who later used our work to start Rogue Wave, a supplier of C++ libraries). However, I didn’t do very much with it; I just got the basic mode working. Sometime later Barry came along and spent a lot of time making it useful, especially the very tricky reformatting, which I remember taking a shot at before discovering what a big job it was. This all happened years before we met during the (relatively) early days of Python, but he remembered my name at the top of the authors list. It was fun to see how our paths had crossed so long ago without either of us knowing it.


Java 8 Parallel Operations Are Not As Simple As They Seem

As an exploration of the uncertainties of streams and parallel streams, let’s look at a problem that seems simple: summing an incremental sequence of numbers. There turns out to be a surprising number of ways to do this, and I’ll take the risk of comparing them through timing—trying to be careful, but acknowledging that I might fall into one of the many fundamental pitfalls when timing code execution. The results may have some flaws (there’s no “warming up” of the JVM, for example), but I think it nonetheless gives some useful indications.

I’ll start with a timing method timeTest() which takes a LongSupplier, measures the length of the getAsLong() call, compares the result with a checkValue and displays the results. It also assigns the result to a volatile value just in case Java is tempted to optimize away the calculation.

Note that everything rigorously uses longs wherever possible; I spent a bit of time chasing down quiet overflows before realizing that I’d missed longs in important places.
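Here's the kind of quiet overflow I mean; this is my own minimal demonstration, not code from the article. Without the long casts, the multiplication happens in int arithmetic and silently wraps before the result is widened to long:

```java
public class QuietOverflow {
  public static void main(String[] args) {
    int SZ = 100_000_000;
    // All-int arithmetic: SZ * (SZ + 1) overflows int and wraps silently:
    long wrong = SZ * (SZ + 1) / 2;
    // Casting to long first keeps the full 64-bit result:
    long right = (long)SZ * ((long)SZ + 1) / 2;
    System.out.println(wrong == right);  // false
  }
}
```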

All the numbers and discussions about time and memory refer to “my machine.” Your experience will probably be different.

import java.util.function.*;
import java.util.stream.*;

public class Summing {
  static volatile long x;
  static void timeTest(String id, long checkValue,
    LongSupplier operation) {
    System.out.print(id + ": ");
    long t = System.currentTimeMillis();
    long result = operation.getAsLong();
    long duration = System.currentTimeMillis() - t;
    if(result == checkValue)
      System.out.println(duration + "ms");
      System.out.format("result: %d\ncheckValue: %d\n",
        result, checkValue);
    x = result; // Prevent optimization
  }
  public static int SZ = 100_000_000;
  // This even works:
  // public static int SZ = 1_000_000_000;
  // Gauss's formula:
  public static final long CHECK =
    (long)SZ * ((long)SZ + 1)/2;
  public static void main(String[] args) {
    timeTest("Sum Stream", CHECK, () ->
      LongStream.rangeClosed(0, SZ).sum());
    timeTest("Sum Stream Parallel", CHECK, () ->
      LongStream.rangeClosed(0, SZ).parallel().sum());
    timeTest("Sum Iterated", CHECK, () ->
      LongStream.iterate(0, i -> i + 1)
        .limit(SZ + 1).sum());
    // Takes longer, runs out of memory above 1_000_000:
    // timeTest("Sum Iterated Parallel", CHECK, () ->
    //   LongStream.iterate(0, i -> i + 1)
    //     .parallel()
    //     .limit(SZ+1).sum());
  }
}
/* Output:
Sum Stream: 625ms
Sum Stream Parallel: 158ms
Sum Iterated: 2521ms
*/

The CHECK value is calculated using the formula created by Carl Friedrich Gauss while still in primary school in the late 1700s.
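The trick attributed to the young Gauss is to pair terms from opposite ends of the sequence, so every pair has the same sum:

```latex
\sum_{k=0}^{n} k
  = \underbrace{(0 + n) + (1 + (n-1)) + (2 + (n-2)) + \cdots}_{\frac{n+1}{2}\ \text{pairs, each summing to}\ n}
  = \frac{n(n+1)}{2}
```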

This first version of main() uses the straightforward approach of generating a Stream and calling sum(). We see the benefits of streams in that a SZ of a billion is handled without overflow (I use a smaller number so the program doesn’t take so long to run). Using the basic range operation with parallel() is notably faster.

If iterate() is used to produce the sequence the slowdown is dramatic, probably because the lambda must be called each time a number is generated. But if we try to parallelize that, the result not only takes longer than the non-parallel version but it also runs out of memory when SZ gets above a million. Of course you wouldn’t use iterate() when you could use range(), but if you’re generating something other than a simple sequence you must use iterate(). Applying parallel() is a reasonable thing to attempt, but produces these surprising results. We can make some initial observations regarding the stream parallel algorithms:

  • Stream parallelism divides the incoming data into pieces so the algorithm(s) can be applied to those separate pieces.

  • Arrays split cheaply, evenly and with perfect knowledge of split sizes.

  • Linked Lists have none of these properties; “splitting” a linked list only means dividing it into “first element” and “rest of list,” which is relatively useless.

  • Stateless generators behave like arrays; the use of range above is stateless.

  • Iterative generators behave like linked lists; iterate() is an iterative generator.
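You can observe these properties directly by asking each kind of stream's Spliterator (the object that does the splitting for parallel execution) about itself. This little experiment is my own, not part of the original timing code:

```java
import java.util.Spliterator;

public class SplitDemo {
  public static void main(String[] args) {
    // A range knows its exact size and splits evenly:
    Spliterator.OfLong ranged =, 100).spliterator();
    System.out.println(ranged.estimateSize());                        // 100
    System.out.println(ranged.hasCharacteristics(Spliterator.SIZED)); // true
    Spliterator.OfLong half = ranged.trySplit();
    // The two halves' sizes sum to the original 100:
    System.out.println(half.estimateSize() + " + " + ranged.estimateSize());

    // An iterated stream can't know its size; "splitting" it can only
    // buffer elements into arrays, which is why the parallel iterate()
    // version above ran out of memory:
    Spliterator.OfLong iterated =, i -> i + 1).spliterator();
    System.out.println(iterated.hasCharacteristics(Spliterator.SIZED)); // false
    System.out.println(iterated.estimateSize() == Long.MAX_VALUE);      // true
  }
}
```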

Now let’s try solving the problem by filling an array with values, then summing over the array. Because the array is only allocated once, it seems unlikely we’ll run into garbage collection timing issues.

First we’ll try an array filled with primitive longs:

import java.util.*;

public class Summing2 {
  static long basicSum(long[] ia) {
    long sum = 0;
    final int sz = ia.length;
    for(int i = 0; i < sz; i++)
      sum += ia[i];
    return sum;
  }
  // Approximate largest value of SZ before
  // running out of memory on my machine:
  public static int SZ = 20_000_000;
  public static final long CHECK =
    (long)SZ * ((long)SZ + 1)/2;
  public static void main(String[] args) {
    long[] la = new long[SZ+1];
    Arrays.parallelSetAll(la, i -> i);
    Summing.timeTest("Array Stream Sum", CHECK, () ->;
    Summing.timeTest("Parallel", CHECK, () ->;
    Summing.timeTest("Basic Sum", CHECK, () ->
      basicSum(la));
    // Destructive summation:
    Summing.timeTest("parallelPrefix", CHECK, () -> {
      Arrays.parallelPrefix(la, Long::sum);
      return la[la.length - 1];
    });
  }
}
/* Output:
Array Stream Sum: 114ms
Parallel: 27ms
Basic Sum: 33ms
parallelPrefix: 49ms
*/

The first limitation is memory size; because the array is allocated up front, we can’t create anything nearly as large as the previous version. Parallelizing speeds things up, even a bit faster than just looping through using basicSum(). Interestingly, Arrays.parallelPrefix() doesn’t speed things up as much as we might imagine. However, any of these techniques might be more useful under other conditions—that’s why you can’t make any deterministic statements about what to do, other than “you have to try it out.”

Finally, consider the effect of using wrapped Longs instead:

import java.util.*;

public class Summing3 {
  static long basicSum(Long[] ia) {
    long sum = 0;
    final int sz = ia.length;
    for(int i = 0; i < sz; i++)
      sum += ia[i];
    return sum;
  }
  // Approximate largest value of SZ before
  // running out of memory on my machine:
  public static int SZ = 10_000_000;
  public static final long CHECK =
    (long)SZ * ((long)SZ + 1)/2;
  public static void main(String[] args) {
    Long[] La = new Long[SZ+1];
    Arrays.parallelSetAll(La, i -> (long)i);
    Summing.timeTest("Long Array Stream Reduce",
      CHECK, () ->, Long::sum));
    Summing.timeTest("Long Basic Sum", CHECK, () ->
      basicSum(La));
    // Destructive summation:
    Summing.timeTest("Long parallelPrefix", CHECK, () -> {
      Arrays.parallelPrefix(La, Long::sum);
      return La[La.length - 1];
    });
  }
}
/* Output:
Long Array Stream Reduce: 1046ms
Long Basic Sum: 21ms
Long parallelPrefix: 3287ms
*/

Now the amount of memory available is approximately cut in half, and the amount of time required has exploded in all cases except basicSum(), which simply loops through the array. Surprisingly, Arrays.parallelPrefix() takes significantly longer than any other approach.

I separated the parallel() version because running it inside the above program caused a lengthy garbage collection, distorting the results:

import java.util.*;

public class Summing4 {
  public static void main(String[] args) {
    Long[] La = new Long[Summing3.SZ+1];
    Arrays.parallelSetAll(La, i -> (long)i);
    Summing.timeTest("Long Parallel",
      Summing3.CHECK, () ->, Long::sum));
  }
}
/* Output:
Long Parallel: 1008ms
*/

It’s slightly faster than the non-parallel() version, but not significantly.

A big reason for this increase in time is the processor memory cache. With the primitive longs of the earlier examples, the array la is contiguous memory. The processor can more easily anticipate the use of this array and keep the cache filled with the array elements that will be needed next. Accessing the cache is much, much faster than going out to main memory. The Long parallelPrefix calculation appears to suffer because it not only reads two array elements for each calculation, it also writes the result back into the array, and each of these produces an out-of-cache reference for the Long.

In Summing3 and Summing4, La is an array of Long, which is not a contiguous array of data, but a contiguous array of references to Long objects. Even though that array of references will probably be kept in cache, the objects they point to will almost always be out-of-cache.

These examples have used different SZ values to show the memory limits. To do a time comparison, here are the results with SZ set to the smallest value of 10 million:

Sum Stream: 69ms
Sum Stream Parallel: 18ms
Sum Iterated: 277ms
Array Stream Sum: 57ms
Parallel: 14ms
Basic Sum: 16ms
parallelPrefix: 28ms
Long Array Stream Reduce: 1046ms
Long Basic Sum: 21ms
Long parallelPrefix: 3287ms
Long Parallel: 1008ms

While the various new built-in “parallel” tools in Java 8 are terrific, I’ve seen them treated as magical panaceas: “All you need to do is throw in parallel() and it will run faster!” I hope I’ve begun to show that this is not the case at all, and that blindly applying the built-in “parallel” operations can sometimes even make things run a lot slower.

Language and library support for concurrency and parallelism seem like perfect candidates for the term "leaky abstraction," but I've started to wonder whether there's really any abstraction at all: when writing these kinds of programs you are never shielded from any of the underlying systems and tools, even including the CPU cache. Ultimately, if you've been very careful, what you create will work in a particular situation, but it won't work in a different situation. Sometimes the difference is the way two machines are configured, or the estimated load for the program. This is not specific to Java per se; it is the nature of concurrent and parallel programming.

You might argue that a pure functional language doesn't have these same restrictions. Indeed, a pure functional language solves a large number of problems. But ultimately, if you write a system that has, for example, a queue somewhere, and things aren't tuned right or the input rate isn't correctly estimated or throttled (and throttling means different things and has different impacts in different situations), that queue will either fill up and block, or overflow. In the end, you must understand all the details, and any issue can break your system. It's a very different kind of programming.

I’d like feedback on this; if I’ve made any obvious missteps please describe them in the comments. Thanks!


Video Of Belarus Conference Opening Keynote

Here is the opening keynote presentation I gave at the Jet Conference in Minsk, Belarus, last Fall, titled A Language is More Than a Language.

For some reason, all my travels in recent years have been exceptional, and this one was no different. Everyone there was really warm and enthusiastic.


Are Java 8 Lambdas Closures?

(Significantly rewritten 11/25/2015)

Based on what I'd heard, I was surprised to discover that the short answer is "yes, with a caveat that, after explanation, isn't terrible." So, a qualified yes.

For the longer answer, we must first explore the question of “why, again, are we doing all this?”

Abstraction over Behavior

The simplest way to look at the need for lambdas is that they describe what computation should be performed, rather than how it should be performed. Traditionally, we’ve used external iteration, where we specify exactly how to step through a sequence and perform operations:

import java.util.*;

interface Pet {
    void speak();
}

class Rat implements Pet {
    public void speak() { System.out.println("Squeak!"); }
}

class Frog implements Pet {
    public void speak() { System.out.println("Ribbit!"); }
}

public class InternalVsExternalIteration {
    public static void main(String[] args) {
        List<Pet> pets = Arrays.asList(new Rat(), new Frog());
        for(Pet p : pets) // External iteration
            p.speak();
        pets.forEach(Pet::speak); // Internal iteration
    }
}

The for loop represents external iteration and specifies exactly how it is done. This kind of code is redundant, and duplicated throughout our programs. With the forEach, however, we tell it to call speak (here, using a method reference, which is more succinct than a lambda) for each element, but we don’t have to specify how the loop works. The iteration is handled internally, inside the forEach.

This “what not how” is the basic motivation for lambdas. But to understand closures, we must look more deeply, into the motivation for functional programming itself.

Functional Programming

Lambdas/Closures are there to aid functional programming. Java 8 is not suddenly a functional programming language, but (like Python) now has some support for functional programming on top of its basic object-oriented paradigm.

The core idea of functional programming is that you can create and manipulate functions, including creating functions at runtime. Thus, functions become another thing that your programs can manipulate (instead of just data). This adds a lot of power to programming.

A pure functional programming language includes other restrictions, notably data invariance. That is, you don’t have variables, only unchangeable values. This sounds overly constraining at first (how can you get anything done without variables?) but it turns out that you can actually accomplish everything with values that you can with variables (you can prove this to yourself using Scala, which is itself not a pure functional language but has the option to use values everywhere). Invariant functions take arguments and produce results without modifying their environment, and thus are much easier to use for parallel programming because an invariant function doesn’t have to lock shared resources.
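As a small illustration (my own sketch, in Java rather than a pure functional language), here is an invariant function: it reads its arguments and produces a result without modifying anything outside itself, so any number of threads can call it at once without locking:

```java
import java.util.Arrays;
import java.util.List;

public class PureFunctions {
    // A pure function: the result depends only on the argument,
    // and no external state is read or modified:
    static int sumOfSquares(List<Integer> values) {
        return values.stream()
                     .mapToInt(v -> v * v)
                     .sum();
    }
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4);
        // Safe to call from many threads; nothing shared is mutated:
        System.out.println(sumOfSquares(nums)); // 30
    }
}
```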

Before Java 8, the only way to create functions at runtime was through bytecode generation and loading (which is quite messy and complex).

Lambdas provide two basic features:

  1. More succinct function-creation syntax.

  2. The ability to create functions at runtime, which can then be passed/manipulated by other code.

Closures concern this second issue.
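As a quick sketch of that second feature (names mine), here is a new function assembled at runtime from two existing ones, then passed around and called like any other value:

```java
import java.util.function.Function;

public class RuntimeFunctions {
    public static void main(String[] args) {
        Function<Integer, Integer> times2 = x -> x * 2;
        Function<Integer, Integer> plus3 = x -> x + 3;
        // Compose a brand-new function at runtime:
        Function<Integer, Integer> combined = times2.andThen(plus3);
        System.out.println(combined.apply(10)); // 23
    }
}
```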

What is a Closure?

A closure uses variables that are outside of the function scope. This is not a problem in traditional procedural programming – you just use the variable – but when you start producing functions at runtime it does become a problem. To see the issue, I’ll start with a Python example. Here, make_fun() is creating and returning a function called func_to_return, which is then used by the rest of the program:


def make_fun():
    # Outside the scope of the returned function:
    n = 0

    def func_to_return(arg):
        nonlocal n
        # Without 'nonlocal' n += arg produces:
        # local variable 'n' referenced before assignment
        print(n, arg, end=": ")
        arg += 1
        n += arg
        return n

    return func_to_return

x = make_fun()
y = make_fun()

for i in range(5):
    print(x(i))

print("=" * 10)

for i in range(10, 15):
    print(y(i))

""" Output:
0 0: 1
1 1: 3
3 2: 6
6 3: 10
10 4: 15
==========
0 10: 11
11 11: 23
23 12: 36
36 13: 50
50 14: 65
"""

Notice that func_to_return manipulates two fields that are outside its scope: n and arg (depending on what it is, arg might be a copy, or it might refer to something outside its scope). The nonlocal declaration is required because of the way Python works: if you just start using a variable, it assumes that variable is local. Here, the compiler (yes, Python has a compiler and yes, it actually does some – admittedly quite limited – static type checking) sees that n += arg uses n which, within the scope of func_to_return, hasn’t been initialized, and generates an error message. But if we say that n is nonlocal, Python realizes that we’re using the n that’s defined outside the function scope, and which has been initialized, so it’s OK.

Now we encounter the problem: if we simply return func_to_return, what happens to n, which is outside the scope of func_to_return? Ordinarily we’d expect n to go out of scope and become unavailable, but if that happens then func_to_return won’t work. In order to support dynamic creation of functions, func_to_return must “close over” and keep alive n when the function is returned, and that’s what happens – thus the term closure.

To test make_fun(), we call it twice and capture the resulting function in x and y. The fact that x and y produce completely different results shows that each call to make_fun() produces a completely independent func_to_return with completely independent closed-over storage for n.

Java 8 Lambdas

Now let’s see what the same example looks like with lambdas:

import java.util.function.*;

public class AreLambdasClosures {
    public Function<Integer, Integer> make_fun() {
        // Outside the scope of the returned function:
        int n = 0;
        return arg -> {
            System.out.print(n + " " + arg + ": ");
            arg += 1;
            // n += arg; // Produces error message
            return n + arg;
        };
    }
    public void try_it() {
        Function<Integer, Integer>
            x = make_fun(),
            y = make_fun();
        for(int i = 0; i < 5; i++)
            System.out.println(x.apply(i));
        for(int i = 10; i < 15; i++)
            System.out.println(y.apply(i));
    }
    public static void main(String[] args) {
        new AreLambdasClosures().try_it();
    }
}
/* Output:
0 0: 1
0 1: 2
0 2: 3
0 3: 4
0 4: 5
0 10: 11
0 11: 12
0 12: 13
0 13: 14
0 14: 15
*/

It’s a mixed bag: we can indeed access n, but we immediately run into trouble when we try to modify n. The error message is: local variables referenced from a lambda expression must be final or effectively final.

It turns out that, in Java, lambdas close over values, not variables. Java requires those values to be unchanging, as if they had been declared final, so they must be final whether you declare them that way or not. Thus, "effectively final." And thus, Java has "closures with restrictions," which might not be "perfect" closures, but which are nonetheless quite useful.

If we create heap-based objects, we can modify the object, because the compiler only cares that the reference is not modified. For example:

import java.util.function.*;

class myInt {
    int i = 0;
}

public class AreLambdasClosures2 {
    public Consumer<Integer> make_fun2() {
        myInt n = new myInt();
        return arg -> n.i += arg;
    }
}

This compiles without complaint, and you can test it by putting the final keyword on the definition for n. Of course, if you use this with any kind of concurrency, you have the problem of mutable shared state.
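If the closed-over object might be touched by multiple threads, one common remedy (my sketch, names mine, not part of the original example) is to close over an AtomicInteger: the reference stays effectively final, while the mutation of the closed-over state becomes thread-safe:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntUnaryOperator;

public class AtomicClosure {
    static IntUnaryOperator makeCounter() {
        // The reference n is effectively final; the AtomicInteger
        // handles safe mutation of the closed-over state:
        AtomicInteger n = new AtomicInteger(0);
        return arg -> n.addAndGet(arg);
    }
    public static void main(String[] args) {
        IntUnaryOperator f = makeCounter();
        System.out.println(f.applyAsInt(5)); // 5
        System.out.println(f.applyAsInt(7)); // 12
    }
}
```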

Lambda expressions accomplish – at least partially – the desired goal: it’s now possible to create functions dynamically. If you step outside the bounds, you get an error message, but there’s generally a way to solve the problem. It’s not as straightforward as the Python solution, but this is Java, after all, and we’ve been trained to take what we are given. And ultimately the end result, while somewhat constricted (face it, everything in Java is somewhat constricted) is not too shabby.

I asked why the feature wasn't just called "closures" instead of "lambdas," since it has the characteristics of a closure. The answer I got was that closure is a loaded and ill-defined term, likely to create more heat than light. When someone says "real closures," it too often means "what closure meant in the first language I encountered that had something called closures."

I don’t see an OO versus FP (functional programming) debate here; that is not my intention. Indeed, I don’t really see a “versus” issue. OO is good for abstracting over data (and just because Java forces objects on you doesn’t mean that objects are the answer to every problem), while FP is good for abstracting over behavior. Both paradigms are useful, and mixing them together has been even more useful for me, both in Python and now in Java 8. (I have also recently been using Pandoc, written in the pure FP Haskell language, and I’ve been extremely impressed with that, so it seems there is a valuable place for pure FP languages as well).
