Q&A With Piotr Janiuk: Part II - Intel SGX, GNT and more

Some 10 days ago, we published the first part of a Q&A that Piotr, our acting CEO (and all-time CTO), answered for one of our users. As promised, here is part II (and we will probably publish a third instalment with follow-up questions).

This second part comes right on time as we prepare for the release of Graphene, the LibraryOS we have been building in a working group with Intel, ITL, Chia-Che Tsai and Don Porter. Last week, we published an interview with Mona Vij, R&D lead at Intel, that explained the corporation's relationship with the project. In this post, you can learn why we chose to go down the Intel SGX route, which resulted in Graphene.

Another fascinating topic Piotr covers in his answers is the GNT token and its velocity. As you may know, we want to open up avenues that give GNT more value (value in the context of its function, not its price), and the question on token velocity is an excellent primer for understanding GNT.

Q: What are your thoughts on SGX? Would embracing this technology mean that Golem service providers would be limited to Intel chips only?

PJ: Firstly, I would like to underline that Golem is not limited to Intel CPUs - allow me to explain further below:

The discovery protocol mentioned in the answer to question no 2 allows adding new types of resources. This applies to all classes of Trusted Execution Environments (TEEs), including Intel Software Guard Extensions (Intel SGX).
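To make this concrete, here is a minimal sketch of what such an extensible resource description could look like. All names and fields below are invented for illustration; they are not Golem's actual discovery protocol.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class TeeKind(Enum):
    NONE = "none"
    INTEL_SGX = "intel-sgx"   # currently the most mature TEE option
    OTHER = "other"           # placeholder for future TEE classes


@dataclass
class ProviderCapabilities:
    cpu_cores: int
    gpu_model: Optional[str] = None
    tee: TeeKind = TeeKind.NONE


def matches(offer: ProviderCapabilities, demand: ProviderCapabilities) -> bool:
    """Return True if a provider's offer satisfies a requestor's demand."""
    if offer.cpu_cores < demand.cpu_cores:
        return False
    if demand.gpu_model is not None and offer.gpu_model != demand.gpu_model:
        return False
    # A requestor asking for a specific TEE only matches providers exposing it.
    if demand.tee is not TeeKind.NONE and offer.tee is not demand.tee:
        return False
    return True
```

The point of such a scheme is that a new TEE class is just another enum value and matching rule, so no special status is baked in for any single vendor.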

We chose to participate in the development of Graphene mostly because Intel SGX is, by far, the most mature TEE in terms of offered features and recognition. That maturity is essential to us because, as stated in the answer to question no 6, we are continuously looking for additional solutions to increase the quality of the Golem network (be that computation integrity or confidentiality).

Redundant verification in conjunction with Intel SGX may allow the network to reach high levels of integrity and confidentiality, even when the network also contains many CPUs without such features.
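As an illustration of the redundant-verification idea (not Golem's actual implementation), here is a toy sketch in which the same task is dispatched to several providers and a result is only accepted when enough of them agree; an SGX-attested result, verified out of band, could additionally anchor trust in the outcome.

```python
from collections import Counter
from typing import List


def verify_redundantly(results: List[bytes], quorum: int) -> bytes:
    """Accept a result only if at least `quorum` providers agree on it."""
    value, count = Counter(results).most_common(1)[0]
    if count < quorum:
        raise ValueError("no quorum: providers disagree, re-run the task")
    return value


# Usage: three providers, two agree, so the majority result is accepted.
outputs = [b"frame-0042", b"frame-0042", b"frame-XXXX"]
print(verify_redundantly(outputs, quorum=2))  # b'frame-0042'
```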

The other TEE solutions are not quite there yet, but by design, they should be easy to add to the network. For now, if requestors would like to benefit from the additional security features offered by SGX, they should be able to do so. In the future, choosing other technologies should be just as easy.

We've already implemented a working integration of Blender in Graphene for Golem, but both the integration and the Graphene LibOS itself still need additional development work before they are launched on mainnet - which goes along well with the answer to the fifth question. The Task API should allow developers to prepare their own integrations, utilizing many different types of resources (e.g., multiple GPU or CPU types, including Intel CPUs with SGX enabled).
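As a hypothetical example of what a Task API integration could look like, the sketch below describes a Graphene-wrapped Blender task that requests SGX-capable providers. The API and all names are invented for illustration; the real Task API may differ.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TaskDefinition:
    image: str           # the integration to run, e.g. a Blender bundle
    args: List[str]      # task-specific arguments
    require_sgx: bool    # ask the network for SGX-capable providers only


render_task = TaskDefinition(
    image="blender-graphene",   # hypothetical Graphene-wrapped Blender image
    args=["--scene", "scene.blend", "--frames", "1-10"],
    require_sgx=True,           # route only to providers with SGX enabled
)
```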

The critical point here is that SGX and TEEs, in general, represent classes of technology focused on what's broadly understood as trustworthiness. However, hardware with these features is not privileged over any other categories of resources in Golem.

For example, the WASM integration takes advantage of an execution environment that comes with its specific set of features. Another execution environment is QEMU, which we are observing closely.

Q: Where have you got to with enterprise partnerships?

PJ: We are currently focusing on the product and not actively looking for enterprise partnerships. However, as stated in the answer to question no 4, there are companies and individuals interested in our technology; at this stage, we are looking for opportunities to deliver a better product rather than to establish partnerships.

We are an active participant in the Graphene Consortium (https://grapheneproject.io/), where Intel is one of the official contributors. For now, it seems that Golem is the leading force behind the direction and the development of the platform.

Additional info has been published in our AMA.

Q: Have you been working with any dApp developers to integrate Golem into their offerings?

PJ: We have spent a considerable amount of time interviewing potential users and analyzing the resulting use cases (and we still do). Our main goal, though, is to understand how Golem should be presented to developers in order to be successful - we are not looking for dApp developers, or any application developers, to start integrating with Golem right now.

Once the Task API is released, we expect external integrations with Golem to emerge. This API will most probably also allow users to make the best use of the Graphene features.

In the meantime, we plan to release a few more integrations (e.g., the WASM integration allowing for a variety of apps, and the transcoding integration) to ensure that the platform handles different classes of tasks correctly.

Q: What are your thoughts around the velocity argument when it comes to the value of the token?

PJ: I'll refer implicitly to GNT, which is an example of a utility token, not an arbitrary one. Token utility is of prime importance.

If the sole purpose of the token on a platform is to be a medium of exchange, then favouring this token over any other with similar characteristics (e.g., average transaction time, transaction fees denominated in USD, and so on) is not necessary from the platform usability perspective. Of course, there is the recurring argument that the network effect of a project which utilizes its own medium-of-exchange token is an essential factor, so forking the project to use any other token would not be any more profitable than using the original token. This seems rational, provided the network effects mentioned above actually exist.

On the other hand, one can imagine forking all projects which implement their ERC20 as a medium of exchange only. For each project, its native token would be replaced with ETH. From the user perspective, this would result in better UX, as only a single token would have to be used.

An additional rationale is that, currently, any ERC20 transaction already requires ETH to pay gas fees anyway. It would be an interesting exercise to model such a scenario and try to estimate its effect on the ETH price in general, and on the value of the ETH still held by each such project. This simulated scenario implicitly assumes that speculation is either ruled out of the picture or at least significantly reduced (which is, of course, not realistic).

In short, if a team decides to create a separate token for the project, and the network effect cannot be guaranteed quickly, then the utility of the token is of prime importance. However, it seems that currently there is still room for additional project-specific tokens used only as a medium of exchange.

Getting back to your question, it seems you are referring to Vitalik's famous blogpost about medium-of-exchange tokens (https://vitalik.ca/general/2017/10/17/moe.html). On the one hand, he highlights a few important aspects of the described token model. On the other hand, a significant part of his reasoning may not necessarily be correct, as argued in the document written by Scott Locklin (https://basicattentiontoken.org/wp-content/uploads/2018/12/token-econ.pdf).

Long story short, velocity may impact the token value both negatively and positively. If velocity is limited due to technical reasons, it may directly render the project unattractive, which can result in a decline of the token value. Conversely, high token velocity in a scenario where only a fraction of tokens is in active use may attract more users to the platform, increasing the token's value.
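To make the first effect concrete, here is a toy calculation based on the equation of exchange (M * V = P * Q) that underlies Vitalik's argument: with the platform's yearly transaction throughput held fixed, a higher velocity means the token supply needs to support a smaller market cap. All numbers below are invented for illustration.

```python
# Equation-of-exchange toy model: market cap = (P * Q) / V.
# Numbers are invented; this is not a model of GNT specifically.

throughput_usd = 10_000_000   # P * Q: value transacted on the platform per year
supply = 1_000_000_000        # M: total token supply

for velocity in (5, 10, 20):  # V: times a token changes hands per year
    market_cap = throughput_usd / velocity   # M * price = (P * Q) / V
    price = market_cap / supply              # implied per-token price
    print(f"V={velocity:>2}: market cap ${market_cap:,.0f}, price ${price:.6f}")
```

Doubling velocity halves the implied market cap, which is the core of the negative case; the positive case in the paragraph above falls outside this simple model.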

These were pretty simple and somewhat contrived examples, but it seems that there is no straightforward correlation between the velocity and the value of the token. Nevertheless, creating incentives to stake the token appears to be an important piece of this puzzle.

This is why a token economic layer should incentivize users of the platform to stake, or the project should introduce some sink mechanisms. Depending on the overall token model, smart time-locking may be enough, and burning tokens may not be necessary.
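Continuing the toy model above (and, again, not describing GNT's actual mechanics), time-locking can be viewed as a velocity sink: tokens locked in staking cannot circulate, so the same transaction throughput must be carried by a smaller float, raising the implied per-token value.

```python
# Staking as a velocity sink, under the same invented equation-of-exchange
# assumptions as the previous sketch. Purely illustrative.

throughput_usd = 10_000_000   # P * Q, as before
supply = 1_000_000_000
velocity_of_float = 10        # velocity of the tokens that do circulate

for staked_fraction in (0.0, 0.5, 0.9):
    circulating = supply * (1 - staked_fraction)
    # Only the circulating float services transactions.
    price = throughput_usd / (velocity_of_float * circulating)
    print(f"staked {staked_fraction:.0%}: implied price ${price:.6f}")
```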

Moreover, this is a definite argument for having a separate token: users probably wouldn't be happy if they had to stake their ETH to take full advantage of a dApp's functionality, especially because the staking time may be arbitrarily long.

With the above arguments in mind, the final thought is that the most important thing is to deliver a good, working product or platform, with great UX around the token in use - and this is what Golem is trying to do. The intrinsic incentives to stake make sense only when the product can actually be used; otherwise, it mostly boils down to speculation.


Thanks for reading, once again! We highly encourage everyone to post follow-ups to these questions, and to the previous post, in the Reddit thread.

We hope you enjoyed this Q&A section and look forward to your feedback.

More information about Golem on our website.