Extended reality and decentralized systems

I have come to the conclusion that, in one or two generations at most, desktop computers may go from being a platform to being just an access point. Something similar may happen to consoles (which are just specialized computers). Laptops will likely see a similar development, although they may still need to do actual computing without network access.

So, is it possible that in the future consumers’ computer hardware actually becomes less powerful? Well, I think so, but only on the outside.

This conclusion is based on current developments: new streaming services like Google Stadia promise low-latency experiences, making it possible to play computer games that are executed remotely. This is a difficult task: when playing a game, there should be no perceived delay between performing an action (like pressing a key) and seeing the computer’s response.
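To make that concrete, here is a back-of-the-envelope motion-to-photon latency budget for a hypothetical streaming session. All stage timings below are illustrative assumptions, not measurements of Stadia or any other service:

```python
# Back-of-the-envelope motion-to-photon latency budget for a cloud
# gaming session. All figures are illustrative assumptions.

budget_ms = {
    "input capture + send": 5,   # controller/key event to the network
    "network uplink":       15,  # client -> data center (one way)
    "server game tick":     8,   # simulate one frame at ~120 Hz
    "video encode":         5,   # hardware video encoder
    "network downlink":     15,  # data center -> client (one way)
    "video decode":         5,   # hardware decoder on the client
    "display scan-out":     8,   # wait for the next screen refresh
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"  {stage:22s} {ms:3d} ms")
print(f"Estimated motion-to-photon latency: {total} ms")

# Roughly 100 ms is often cited as the point where input lag becomes
# clearly noticeable in games; this budget leaves some headroom.
assert total < 100
```

Under these assumptions the whole round trip fits in about 60 ms, which shows why every stage (not just the network) has to be squeezed.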

Note that it does not matter whether Google is able to pull it off. Sooner or later, some company will make it happen.

Taking this idea further, I can see how any extended reality (XR) system could benefit from these developments. One of the biggest problems of VR (and in some cases AR) is that a lot of processing power is required to create realistic, convincing virtual worlds. Nowadays this is solved by:
a) Attaching the headset to a (powerful) computer, which reduces mobility.
b) Using an embedded system, which provides mobility but reduces computing power and limits autonomy.
Currently, neither of these provides a truly realistic virtual experience.

But what happens if we remove the processing load from the device? When these processes are done remotely, there is virtually no limit* to the available computing power. The XR system only needs to interpret and display what it receives from the outside. Autonomy and battery life will still be an issue, because data transfer also consumes a lot of power, but processing power will no longer be a limiting factor (or at least, it can be minimized). It is far easier to develop one efficient system than multiple ones that must also interact with each other.
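As a sketch of what such a “display-only” device could look like, here is a minimal, hypothetical thin-client loop. The server address, the wire format (a 4-byte length prefix before each encoded frame) and the helper stubs are all invented for illustration:

```python
import socket
import struct

# Hypothetical thin XR client: all heavy rendering happens remotely;
# the device only sends its head pose and displays the frames it
# receives back. Everything named here is an assumption for the sketch.

SERVER = ("render-farm.example.com", 9000)  # hypothetical endpoint

def get_head_pose() -> tuple[float, ...]:
    # Placeholder: a real headset would read this from its IMU/trackers.
    return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)  # x, y, z, yaw, pitch, roll

def decode(frame: bytes) -> bytes:
    # Placeholder for a hardware video decoder (H.265, AV1, ...).
    return frame

def display(image: bytes) -> None:
    # Placeholder for scan-out to the headset displays.
    print(f"displaying a {len(image)}-byte frame")

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # Read exactly n bytes from the socket.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the stream")
        buf += chunk
    return buf

def run_client() -> None:
    with socket.create_connection(SERVER) as sock:
        while True:
            # Upstream: only the tiny head-pose packet leaves the device.
            sock.sendall(struct.pack("!6f", *get_head_pose()))
            # Downstream: a fully rendered, encoded frame comes back.
            (length,) = struct.unpack("!I", recv_exact(sock, 4))
            display(decode(recv_exact(sock, length)))
```

Note how little the device itself has to do: receive, decode, display, and report its pose.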

This ‘unlimited’ power can come from different places: one example is a server farm located close** to the user who requests the processing. Another option gaining traction is edge computing, where multiple small devices each do a fraction of the whole process and the partial results are combined on their way back to the user. In the latter case, latency is reduced because these devices sit very close to the delivery end point (possibly even in the same location).
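A toy illustration of that fan-out/fan-in pattern, with threads standing in for edge nodes (the tile-based split is my own assumption, not how any particular edge platform works):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy fan-out/fan-in: a frame's workload is split into tiles, each tile
# is handled by a nearby "edge node" (here, just a thread), and the
# partial results are recombined for the user.

def render_tile(tile_id: int, scene_data: str) -> str:
    # Stand-in for the expensive work one edge device would do.
    return f"tile-{tile_id}({scene_data})"

def render_frame_on_edge(scene_data: str, num_nodes: int = 4) -> list[str]:
    # Fan out: one tile per edge node, processed in parallel.
    with ThreadPoolExecutor(max_workers=num_nodes) as nodes:
        futures = [nodes.submit(render_tile, i, scene_data)
                   for i in range(num_nodes)]
        # Fan in: combine the partial results back into a full frame.
        return [f.result() for f in futures]

print(render_frame_on_edge("frame-42"))
```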

I believe this is very likely to happen, regardless of the purpose or the application. It is very difficult to imagine a future where the flow of data does not increase (if not exponentially, at least linearly), whether it is streaming games in 4K, serving 8K content (and beyond), or websites powered by WebGL displaying high-end graphics. The speed of data processing and transfer must increase in a more interconnected future.
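Some rough arithmetic backs this up. At standard resolutions, and assuming 24 bits per pixel at 60 frames per second, uncompressed video bandwidth grows very fast:

```python
# Rough bandwidth arithmetic behind the "flow of data keeps growing"
# claim. Standard resolutions; 24 bits per pixel at 60 fps is assumed.

def raw_gbps(width: int, height: int, fps: int = 60, bpp: int = 24) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    return width * height * fps * bpp / 1e9

for name, (w, h) in {"1080p": (1920, 1080),
                     "4K":    (3840, 2160),
                     "8K":    (7680, 4320)}.items():
    print(f"{name}: {raw_gbps(w, h):5.1f} Gbit/s uncompressed")

# Each step roughly quadruples the pixel count (~3, ~12 and ~48 Gbit/s
# here), so even with efficient codecs the required bitrate keeps
# climbing as content moves from 4K to 8K and beyond.
```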

Furthermore, it seems plausible that the following business model could work:
– Customers make a ‘small’ upfront investment (for the access point). This may be partially or fully subsidized as well.
– Then customers pay a monthly fee (subscription) for continued use of the remote system (hardware and applications).
In fact, that is the direction Microsoft and Sony seem to be taking with Xbox Game Pass and PlayStation Now, respectively. Will we see a generation of consoles less powerful than the previous one?

As an isolated event, this may not seem important. However, if we add other new technologies, like autonomous cars removing the need to own a car, I start to ask myself: what will ownership look like in the future? And what other ‘things’ that we own (and expect to always own) will be disrupted in a similar way in the near future?


Notes:
*This will be the case for probably 99% of the applications people use. There are, of course, certain tasks that require a huge amount of computing power, like some scientific simulations where even a supercomputer falls short. These will also benefit from decentralized systems, but probably in different ways (for example, for intermediate steps). I do not take these into account because, although very cool, such systems are the exception rather than the norm.
**Close, to reduce the latency of the process. Although extremely fast, information does need some time to reach the remote system. When loading a website this time hardly matters, but to achieve low latency, everything counts.
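A quick estimate of that physical floor, assuming light in optical fiber travels at roughly 200,000 km/s (about two-thirds of the speed of light in vacuum):

```python
# Physical lower bound on network latency: distance alone sets a floor
# on round-trip time, before any processing or queueing is added.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s, expressed per millisecond

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for km in (10, 100, 1000, 5000):
    print(f"{km:5d} km away -> at least {round_trip_ms(km):5.2f} ms RTT")
```

A server 1,000 km away already costs at least 10 ms of round trip, which is a large slice of the latency budget discussed above; hence the push to move servers (or edge nodes) closer to users.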
