People of Golem: Team CGI (Computer Generated Imagery)


Hey there! As we mentioned in our Mid Term Goals post, we would first briefly introduce the teams and later follow up with more detailed team portraits, explaining how each team works, who they are, what their tasks are and, most importantly, what they are currently working on and will strive to deliver in the medium term (6 to 8 months on average).

Without further ado, let’s dig into the activities of the CGI team, which, along with the Brass team, is responsible for Golem’s first use case - Blender rendering - and for researching various verticals such as verification and new CGI software integrations. Their tasks are:

  1. Blender Verification - research, development and maintenance
  2. New rendering use cases - researching possibilities of integrating commercial software, and proof of concept implementations
  3. Video transcoding - use case development and implementation
  4. Consulting and advising on everything related to graphics

The team consists of Magda Stasiewicz (lead), Witold Dzięcioł, Daniel Bigos and Michał Szostakiewicz. We asked them a few questions, so let’s let them do the talking!

Which other teams do you have to interact with or help, and how exactly?

All teams have to interact with each other at some point - but we’ll list the main ones for clarity:

  1. Brass
  2. QA (obvious)
  3. RnD
  4. UX
  5. Clay, rather occasionally
  6. Unlimited - knowledge sharing; transcoding for Unlimited
  7. External contractor: ITL

Verification goals (Goal 1): Local, SGX-based local, redundant, and SGX-based redundant verification

It’s not a coincidence that many of our milestones include the word “verification” in their names. In Golem’s trustless environment, mechanisms protecting requestors from malicious providers are among the most important aspects of its security. Because such mechanisms are needed, we approach this problem from several different angles, and the verification of computation results is one of them.

The CGI team focuses on the verification of Blender tasks, and all the following milestones refer to Blender rendering. We intend to provide a few solutions:

Local verification
When the requestor receives a result image from a provider, they randomly crop sample areas from it. The next step is to render the corresponding randomly chosen areas on the requestor’s side (hence the name - local verification). In the final stage, the samples are compared and a verdict is made. You can find a detailed description of the algorithm here.
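To give a flavour of the idea, here is a minimal Python sketch of the sampling-and-comparison step. It is not the actual Golem implementation - the sample size, the acceptance threshold and the `render_sample_locally` helper are illustrative assumptions.

```python
# Minimal sketch of local verification (illustrative, not Golem's code):
# crop random areas from the provider's result, render the same areas
# locally, and compare them pixel-wise.
import random
import numpy as np
from PIL import Image

SAMPLE_SIZE = 64      # side of a square sample, in pixels (assumed)
NUM_SAMPLES = 3       # how many random areas to check (assumed)
MAX_MEAN_DIFF = 2.0   # acceptance threshold on mean per-channel difference (assumed)

def random_boxes(width, height, n=NUM_SAMPLES, size=SAMPLE_SIZE):
    """Pick n random crop boxes (left, top, right, bottom) inside the image."""
    boxes = []
    for _ in range(n):
        left = random.randint(0, width - size)
        top = random.randint(0, height - size)
        boxes.append((left, top, left + size, top + size))
    return boxes

def verify_result(provider_image_path, render_sample_locally):
    """render_sample_locally(box) is a hypothetical helper assumed to return
    a PIL image of the same crop rendered on the requestor's machine."""
    result = Image.open(provider_image_path).convert("RGB")
    for box in random_boxes(*result.size):
        provided = np.asarray(result.crop(box), dtype=np.float32)
        reference = np.asarray(render_sample_locally(box).convert("RGB"), dtype=np.float32)
        if np.abs(provided - reference).mean() > MAX_MEAN_DIFF:
            return False  # reject: the sample differs too much from the local render
    return True  # accept the render
```

Since a cheating provider cannot predict which areas will be checked, even a few random samples make large-scale tampering risky.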


This kind of verification was first released with Golem’s 0.17.0 version. Since then, fixes and improvements have been made, but the general concept has remained the same.

SGX-based local verification

When we add the power of SGX to the equation, we can do something very similar to what was described above, but without the need for the requestor to render anything on their machine. For instance, we can imagine a situation in which the requestor’s computer doesn’t have enough RAM to render their scene, so they decide to use Golem. But - oops! - the results can’t be locally verified, because the requestor is unable to render even the small samples required for local verification. Besides, rendering can often be very resource- and time-consuming, and one might not want to use their own machine even for computing partial renders.

In such a case, we can ask a provider (either the same one who rendered the image or a different one) to run the entire verification algorithm - identical to the one used in the ‘basic’ local verification - inside an SGX enclave, and expect to get the result (meaning the verdict on whether the render should be accepted) along with a cryptographic proof of this computation. This way, we can be sure that the verification was indeed performed and, most importantly, we can trust that its result is correct.
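To illustrate how the requestor side might consume such a verdict-plus-proof pair, here is a loose sketch. The type and field names are illustrative assumptions, not Golem’s actual interfaces, and the attestation check is reduced to comparing the enclave measurement - a real deployment would validate the full attestation quote.

```python
# Loose sketch (illustrative names): the enclave returns a verdict together
# with an attestation quote, and the requestor only trusts verdicts produced
# by the expected verifier enclave.
from dataclasses import dataclass

@dataclass
class EnclaveVerdict:
    accept: bool        # the enclave's decision: should the render be accepted?
    quote: bytes        # attestation quote produced alongside the verdict
    mrenclave: bytes    # measurement (hash) of the enclave code that ran the verifier

def trust_verdict(verdict: EnclaveVerdict, expected_mrenclave: bytes) -> bool:
    # Only accept verdicts coming from the expected verifier enclave; otherwise
    # the "proof" says nothing about which code actually produced the verdict.
    if verdict.mrenclave != expected_mrenclave:
        return False
    return verdict.accept
```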

Currently, we are able to run the local verification inside an SGX enclave thanks to Graphene - what’s left to do is implementing the mechanism for executing it on the provider’s machine. Also, before this verification scheme can be released in Golem, Graphene needs to become more stable, as we still run into issues when using it for some Blender scenes.

Redundant verification

Verification involving redundant computation is a different model. The idea is simple: subtasks could partially overlap, so some areas of the image are rendered by more than one provider. Next, we cross-check these parts and draw conclusions.


Interpreting the results - reaching a verdict on which providers sent wrong images when conflicts arise - is not an easy problem and depends on the chosen overlapping scheme. One possibility is to use the nodes’ reputations to estimate the probability that a particular provider submitted a dishonest result, so this information can also be taken into account. There’s still research to be done in this field before we can proceed to the implementation.
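As a rough illustration of the cross-check step only (not the actual algorithm - the threshold and the assumption that both crops cover the same scene area are simplifications):

```python
# Sketch of cross-checking an overlapping region between two providers' renders.
# A mismatch signals a conflict to be resolved by the verdict logic above.
import numpy as np
from PIL import Image

def overlap_matches(render_a: Image.Image, render_b: Image.Image,
                    overlap_box: tuple, max_mean_diff: float = 2.0) -> bool:
    """overlap_box is (left, top, right, bottom); for simplicity we assume the
    same coordinates point at the same scene area in both renders."""
    a = np.asarray(render_a.crop(overlap_box), dtype=np.float32)
    b = np.asarray(render_b.crop(overlap_box), dtype=np.float32)
    return float(np.abs(a - b).mean()) <= max_mean_diff
```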

SGX-based redundant verification

This type of verification is closely connected to the variant above. The only difference is that some of the subtasks can come from users who perform the computation in an SGX enclave, so we are sure their results are correct. We can think of these users as users with the maximum possible reputation score in the Golem Network.

This verification type will be implemented after redundant verification is ready and, of course, after SGX computations are enabled for Blender in Golem.

Goal 2: Video transcoding in Golem Brass

This is a really exciting goal. Video transcoding is a common problem that can be easily (at least in theory) parallelised. For the Golem integration, we chose to use FFmpeg - one of the most popular video manipulation programs.

The requestor divides the video file into chunks; each of them can be transcoded independently by providers and then collected back, so the general scheme is very similar to the one used by Golem’s Blender tasks.
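To make the divide-transcode-merge scheme concrete, here is a minimal sketch driving FFmpeg from Python. It is purely illustrative - the chunk length, codecs and container are assumptions, and as described below, the real implementation has to deal with far more formats and edge cases.

```python
# Illustrative divide / transcode / merge sketch using FFmpeg via subprocess.
import glob
import os
import subprocess

def split(src, chunk_seconds=30, out_dir="chunks"):
    """Cut the input into chunks without re-encoding; each chunk can be sent
    to a different provider."""
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", src, "-c", "copy", "-map", "0",
         "-f", "segment", "-segment_time", str(chunk_seconds),
         os.path.join(out_dir, "chunk%03d.mkv")],
        check=True)
    return sorted(glob.glob(os.path.join(out_dir, "chunk*.mkv")))

def transcode(chunk, video_codec="libx264"):
    """The part a provider would run: re-encode the video stream of one chunk."""
    out = chunk.replace(".mkv", ".out.mkv")
    subprocess.run(["ffmpeg", "-i", chunk, "-c:v", video_codec, "-c:a", "copy", out],
                   check=True)
    return out

def merge(chunks, dst="output.mkv", list_path="list.txt"):
    """Concatenate the transcoded chunks back on the requestor's side."""
    with open(list_path, "w") as f:
        f.writelines(f"file '{os.path.abspath(c)}'\n" for c in chunks)
    subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_path,
                    "-c", "copy", dst], check=True)
```

A requestor-side flow would then be roughly `merge([transcode(c) for c in split("input.mp4")])`, with the transcode step executed by providers rather than locally.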

However, we’ve run across many problems. FFmpeg isn’t an easy piece of software to work with, and sometimes its behaviour can be rather unexpected. It’s also worth noting that the problem itself is quite complicated on the implementation level - the multiplicity of video formats, video and audio codecs, and the presence of other streams (for instance, subtitles) in a file makes it really tricky to develop a stable application. Nevertheless, we are approaching the end of development and plan on releasing the transcoding use case this year.

But we’re not quite finished with this goal… we are happy to let you know that we are working on the Golem Online Transcoding web service!

This is going to be an online application where users can transcode their files with Golem without having to install it on their PCs. The user defines their task on the website, and our web server forwards it to the Golem Network via a Golem node run by us. The release is planned to happen around the time of the release of the transcoding integration itself.

It’s worth mentioning that the web service is going to be the first app of this kind, and thanks to its simple UX it will be usable even by completely non-technical people.

Goal 3: Commercial renderer integrations

Unfortunately, we can’t say much about this yet, because licensing matters are still pending. We have prepared a proof-of-concept integration of one of the most popular commercial renderers. Once the licensing issues are resolved and we are positive that we can use this software on top of the Golem Network, we will start improving the integration to the point where it can be used by artists for real renders.

Goal 4: Video transcoding in Golem Unlimited

The goal is to make it possible to compute transcoding tasks within the safe environment of Golem Unlimited, where there is no danger of data leaking or of interacting with malicious providers, because the network consists only of user-chosen, trusted nodes. It can be an attractive proposition for organizations that have a good number of workstations at their disposal which they don’t use all the time, and that face the necessity of transcoding large amounts of video files.

The divide-transcode-merge scheme stays valid here, just as in Golem Brass.

Goal 5: Compositing in Blender

Compositing is a process run in rendering post-production. Some of its effects require the whole image to already be rendered, which means compositing can’t be done by a provider who was assigned only a part of a frame.

Full frame compositing in Blender

When the provider was rendering a whole frame (or frames), compositing can be performed on their side right after the rendering finishes. This milestone is about enabling this mechanism as part of the Blender tasks.
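As a rough illustration of what running compositing on the provider’s side means in Blender’s Python API (bpy), the switches involved look roughly like the snippet below - this is not Golem’s task code, just a minimal sketch.

```python
# Minimal bpy sketch (run with: blender -b scene.blend -P this_script.py).
# Not Golem's task code - just the Blender-side settings involved.
import bpy

scene = bpy.context.scene
scene.use_nodes = True                  # use the scene's compositor node tree
scene.render.use_compositing = True     # run the compositor as part of the render
bpy.ops.render.render(write_still=True) # render the full frame, compositing included
```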

Generic compositing in Blender

When the frame was divided into smaller subtasks, the requestor has to run compositing on their own machine after the parts of the image are merged into one. It’s still to be determined how exactly this should be done from a UX point of view.

Goal 6: Blender 2.80 support

The Blender community has recently been excited about the new version 2.80, and so are we! Many artists have already switched to the new version, even though it’s still in the beta phase. We are investigating the possibility of adding support for it to Golem; for now, it seems the biggest challenge is going to be integrating the new GPU render engine, Eevee.
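For reference, switching a scene to Eevee is a one-line change in Blender 2.80’s Python API; part of what makes the integration harder is that Eevee, unlike Cycles, needs a GPU/OpenGL context to render. A minimal sketch (engine identifier as used by the 2.80 series):

```python
# Sketch: selecting the render engine in Blender 2.80's Python API.
# 'BLENDER_EEVEE' is the 2.80-series identifier; Cycles remains 'CYCLES'.
import bpy

bpy.context.scene.render.engine = 'BLENDER_EEVEE'
bpy.ops.render.render(write_still=True)  # note: Eevee requires a GPU/OpenGL context
```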

CGI Team: next deliverable

The next news you are going to hear from the CGI team is most probably the release of the video transcoding use case in Golem Brass. We are finishing the implementation; the next step will be thorough testing by the QA team and, of course, applying the fixes they suggest. First, it will be released to the testnet, and in the following Golem version we intend to enable it on the mainnet as well. This will be the moment when users’ feedback is most important - all comments and suggestions will be very much appreciated!

Around the time of the mainnet release, we will also publish the video transcoding web service.

What makes the CGI team exciting?

This may sound like a cliché, but we are a young, dynamic team. We love exploring new fields, which is why we enjoy every kind of research we do while working on Golem. We love solving hard - and thus interesting - theoretical tasks, but we’re also not afraid of getting our hands dirty. Having contact with truly cutting-edge technologies, such as SGX, is one of the things we appreciate most in this job. We really do learn something new every single day.

Also, we have a sort of first-mover advantage in many fields! We were the first team formed and one of the first teams working with Graphene, we are developing Online Transcoding - as mentioned, the first app which doesn't require Golem to be installed locally - and we do a lot of research, so there’s always something new in the pipeline.


We hope you enjoyed this first Team Update on our CGI team! We would love to hear your feedback, so feel free to share it in the Reddit thread announcing the blogpost. Stay tuned - next up, our colleagues from the Brass team will update you on their milestones and insights.

Want info on Golem? Check our Website!