Discuss Scratch

certainly--stormy
Scratcher
12 posts

Prioritizing GPU usage

Is it possible to prioritize usage of either the integrated or a dedicated graphics card? From what I know, running a lot of different lines of code at once that are individually light on the processor seems like a job for a graphics card, and a lot of projects would probably run much faster if the GPU were utilized more.
ScolderCreations
Scratcher
1000+ posts

Prioritizing GPU usage

Is it even possible to use the GPU in the web browser?
dinonil
Scratcher
89 posts

Prioritizing GPU usage

ScolderCreations wrote:

Is it even possible to use the GPU in the web browser?
You can compile JavaScript to shader code and run it on the GPU using WebGL, but it wouldn't make much sense

Last edited by dinonil (June 9, 2022 22:37:50)

ScolderCreations
Scratcher
1000+ posts

Prioritizing GPU usage

dinonil wrote:

ScolderCreations wrote:

Is it even possible to use the GPU in the web browser?
You can compile JavaScript to shader code and run it on the GPU using WebGL
Interesting, but can this be done JIT?
dinonil
Scratcher
89 posts

Prioritizing GPU usage

ScolderCreations wrote:

Interesting, but can this be done JIT?
It is possible. It could be like TurboWarp, but instead of compiling to JS, it would compile Scratch blocks to shader code.
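Just to illustrate the idea (purely a hypothetical sketch, not how TurboWarp or the Scratch VM actually works): such a compiler could map operator opcodes from project.json to GLSL expression strings, e.g.

    // Hypothetical mapping from a couple of real sb3 opcodes to GLSL expressions
    const OPCODE_TO_GLSL = {
      operator_add:      (a, b) => `(${a} + ${b})`,
      operator_subtract: (a, b) => `(${a} - ${b})`,
      operator_multiply: (a, b) => `(${a} * ${b})`,
    };

    // Compiling the block expression "((x + 1) * y)" into a GLSL string:
    const expr = OPCODE_TO_GLSL.operator_multiply(
      OPCODE_TO_GLSL.operator_add('x', '1.0'),
      'y'
    ); // -> "((x + 1.0) * y)"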

Last edited by dinonil (June 9, 2022 22:42:31)

uwv
Scratcher
1000+ posts

Prioritizing GPU usage

GPU.js might work for something like this

(the speed is genuinely really amazing)
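For anyone curious, basic usage of the gpu.js library looks roughly like this (a rough sketch; listA / listB stand in for whatever lists of numbers you'd feed it):

    const { GPU } = require('gpu.js');
    const gpu = new GPU();

    // The kernel function is transpiled to a shader and runs once per output index.
    const addLists = gpu
      .createKernel(function (a, b) {
        return a[this.thread.x] + b[this.thread.x];
      })
      .setOutput([1000000]); // one million additions per call

    const result = addLists(listA, listB); // listA / listB: plain arrays of numbers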

Last edited by uwv (June 9, 2022 22:59:32)

dinonil
Scratcher
89 posts

Prioritizing GPU usage

uwv wrote:

GPU.js might work for something like this
(the speed is genuinely really amazing)
But that only compiles JavaScript to shader code, not Scratch blocks. It would be best if the VM used both at the same time: a compiler that compiles Scratch blocks to shader code (like TurboWarp does with JavaScript) for more complex code, and the vanilla interpreter for simpler code.

Last edited by dinonil (June 9, 2022 23:12:28)

CST1229
Scratcher
1000+ posts

Prioritizing GPU usage

dinonil wrote:

(#7)

uwv wrote:

GPU.js might work for something like this
(the speed is genuinely really amazing)
But that only compiles JavaScript to shader code, not Scratch blocks. It would be best if the VM used both at the same time: a compiler that compiles Scratch blocks to shader code (like TurboWarp does with JavaScript) for more complex code, and the vanilla interpreter for simpler code.
It could just compile the entire interpreter to shader code (or possibly even the whole editor!).
EDIT: Apparently gpu.js doesn't support much JavaScript (not even strings!), so it can really only compile a few blocks into shader code.

Last edited by CST1229 (June 12, 2022 06:12:18)

Chiroyce
Scratcher
1000+ posts

Prioritizing GPU usage

ScolderCreations wrote:

Is it even possible to use the GPU in the web browser?
For rendering on a canvas - yes, the GPU can be used via WebGL (Scratch uses the GPU for rendering, mainly for rasterizing the SVG costumes)
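And for the original question (integrated vs dedicated), the only knob a web page gets is a hint when the context is created, which browsers are free to ignore. A small sketch of that hint:

    const canvas = document.querySelector('canvas');
    // 'high-performance' asks for the dedicated GPU,
    // 'low-power' prefers power saving (usually the integrated one)
    const gl = canvas.getContext('webgl', { powerPreference: 'high-performance' });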
uwv
Scratcher
1000+ posts

Prioritizing GPU usage

CST1229 wrote:

(#8)

dinonil wrote:

(#7)

uwv wrote:

GPU.js might work for something like this
(the speed is genuinely really amazing)
But that only compiles JavaScript to shader code, not Scratch blocks. It would be best if the VM used both at the same time: a compiler that compiles Scratch blocks to shader code (like TurboWarp does with JavaScript) for more complex code, and the vanilla interpreter for simpler code.
It could just compile the entire interpreter to shader code (or possibly even the whole editor!).
EDIT: Apparently gpu.js doesn't support much JavaScript (not even strings!) so it only really can compile a few blocks into shader code.
All the mathematical functions can be compiled (the slowest part of Scratch), which would make everything significantly faster, for example in 3D projects.
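e.g. a rough gpu.js-style sketch of the math-heavy part of a 3D project (rotate and perspective-project a big list of points; all the numbers here are made up):

    const { GPU } = require('gpu.js');
    const gpu = new GPU();

    // screen-space x for 100000 points, all computed on the GPU in one call
    const projectX = gpu
      .createKernel(function (xs, zs, angle, focal) {
        let x = xs[this.thread.x];
        let z = zs[this.thread.x];
        // rotate around the Y axis, then perspective-divide
        let rx = x * Math.cos(angle) - z * Math.sin(angle);
        let rz = x * Math.sin(angle) + z * Math.cos(angle) + 4.0;
        return (focal * rx) / rz;
      })
      .setOutput([100000]);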
Vadik1
Scratcher
500+ posts

Prioritizing GPU usage

TL;DR: close to impossible to implement, and even if it were implemented, it wouldn't give that much of a speed improvement

ScolderCreations wrote:

Is it even possible to use the GPU in the web browser?
Yes, using either WebGL, WebGL 2 or the upcoming WebGPU.

dinonil wrote:

ScolderCreations wrote:

Is it even possible to use the GPU in the web browser?
You can compile JavaScript to shader code and run it on the GPU using WebGL, but it wouldn't make much sense
In WebGL (and also OpenGL) the code that runs on the GPU has to be written in a special shader language called GLSL, which has a massive number of limitations compared to normal programming languages (all loops need to run a predetermined number of times, known at compile time; no recursion is allowed; shaders always take in and output a predetermined amount of data; no dynamic memory allocation; etc.)
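To make that concrete, a WebGL 1 (GLSL ES 1.00) fragment shader has to look roughly like this (a sketch; the 16 is arbitrary):

    const fragmentSource = `
      precision mediump float;
      uniform float values[16];          // fixed-size arrays only, no dynamic allocation
      void main() {
        float sum = 0.0;
        for (int i = 0; i < 16; i++) {   // loop bound must be known at compile time
          sum += values[i];
        }
        // for (int i = 0; i < someUniformInt; i++) ...  // rejected in GLSL ES 1.00
        gl_FragColor = vec4(sum, 0.0, 0.0, 1.0);
      }
    `;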

certainly--stormy wrote:

Is it possible to prioritize usage of either the integrated or a dedicated graphics card? From what I know, running a lot of different lines of code at once that are individually light on the processor seems like a job for a graphics card, and a lot of projects would probably run much faster if the GPU were utilized more.
This wouldn't really work in the context of Scratch. Multiple CPU cores are all independent of one another and each can run completely different code. Multiple cores on a GPU, however, are all linked up so that they always execute the same instruction at once. Conditional logic on a GPU is done by the instruction decoder going through all possible branches in the code (a known, predetermined number of instructions), while each GPU core decides to execute the instructions for its branch and ignore the instructions for the other branches. In modern GPUs it is probably done in other, more optimal ways, but all the current limitations of WebGL point towards it being designed to run on hardware like I described.

As a result, the GPU can only be used to speed up data processing when a lot of data needs to be processed in the same way and each part of the data can be processed independently of the others.



Another thing to point out is how WebGL works. It only supports rendering points, lines and triangles, and only supports 2 types of shaders for them: vertex and fragment. The data for the things that need to be drawn is stored in buffers (lists of data in VRAM). When rendering, the data from the buffers is split into equally sized chunks and those chunks are sent to be processed by a vertex shader. The vertex shader outputs a position on the screen (with depth) and possibly some varyings. Varyings are numbers; the shader determines how many of them there are, and they always take up a predetermined amount of space.

Then, depending on what primitive is being drawn (points, lines or triangles), the output data from the vertex shader is grouped (into 1/2/3 vertices) and hardware-accelerated filling occurs. For every pixel, a fragment shader is executed. Varyings are only known at each vertex, but their values are smoothly interpolated across the entire primitive and those interpolated values are passed into the fragment shader. The fragment shader, based on the varyings, can either calculate and output an RGBA color or discard itself. Both types of shaders have the ability to read from a specified position on one of 16 active textures. Rendering can be done either to the canvas or to other textures.

WebGL 2 supports rendering to multiple render targets at once, and for each of them the fragment shader can output a different color. WebGL 2 also supports collecting the varyings from the vertex shader into another buffer. Instead of only 8-bit-integer-per-channel textures, WebGL 2 additionally supports 32-bit floating point textures and rendering to them.

WebGL and WebGL 2 don't have that many features and don't provide any built-in way of doing general-purpose data processing. But with enough effort, many things can be worked around by implementing them as rendering.
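To make the "implement it as rendering" part concrete, here is a stripped-down sketch of general-purpose computation in WebGL 1 (the sizes, the shader math and the missing error checking are purely for illustration):

    function compile(gl, type, source) {
      const shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      return shader; // error checking omitted
    }

    const gl = document.createElement('canvas').getContext('webgl');

    const vs = compile(gl, gl.VERTEX_SHADER, `
      attribute vec2 pos;
      varying vec2 uv;
      void main() {
        uv = pos * 0.5 + 0.5;
        gl_Position = vec4(pos, 0.0, 1.0);
      }
    `);
    const fs = compile(gl, gl.FRAGMENT_SHADER, `
      precision mediump float;
      varying vec2 uv;
      void main() {
        // "the computation": one result per pixel, packed into the red channel
        gl_FragColor = vec4(sin(uv.x * 6.28318) * 0.5 + 0.5, 0.0, 0.0, 1.0);
      }
    `);

    const prog = gl.createProgram();
    gl.attachShader(prog, vs);
    gl.attachShader(prog, fs);
    gl.linkProgram(prog);
    gl.useProgram(prog);

    // A full-screen quad (two triangles) is the vehicle for the computation.
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER,
      new Float32Array([-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1]), gl.STATIC_DRAW);
    const loc = gl.getAttribLocation(prog, 'pos');
    gl.enableVertexAttribArray(loc);
    gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

    // Render into a 256x1 texture standing in for an output list of 256 numbers.
    const W = 256, H = 1;
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, W, H, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    const fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);

    gl.viewport(0, 0, W, H);
    gl.drawArrays(gl.TRIANGLES, 0, 6);

    // Reading the result back is the expensive part: the CPU has to wait for the GPU.
    const out = new Uint8Array(W * H * 4);
    gl.readPixels(0, 0, W, H, gl.RGBA, gl.UNSIGNED_BYTE, out);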

uwv wrote:

all the mathematical functions can be compiled (so the slowest part of scratch) which would make everything significantly faster. for example 3d projects
Since most blocks can't be compiled to shaders, and those that can are surrounded by code that can't, it would require a lot of back and forth between the CPU and GPU. That is exactly the thing that should always be avoided for performance reasons.

Normally the GPU has a queue of things it needs to do; the CPU adds things to the queue at its own pace, while the GPU executes instructions from the queue at its own pace. Asking the GPU to do something and then waiting for the result disrupts this. While waiting, the CPU ends up doing nothing, and because it wasn't adding new instructions to the queue the whole time it was waiting, the queue becomes empty once the result finally arrives, so the GPU stops and has nothing to do. This is also why Scratch's touching blocks run entirely on the CPU if there are fewer than 4000 pixels to compare.
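In API terms (continuing the sketch above; gl2 here stands for a WebGL 2 context):

    // readPixels blocks the CPU until the GPU has finished everything queued before it:
    gl.readPixels(0, 0, W, H, gl.RGBA, gl.UNSIGNED_BYTE, out); // CPU stalls here

    // WebGL 2 at least lets you poll a fence instead of blocking outright:
    const sync = gl2.fenceSync(gl2.SYNC_GPU_COMMANDS_COMPLETE, 0);
    (function poll() {
      const status = gl2.clientWaitSync(sync, 0, 0); // timeout 0 = just check, don't wait
      if (status === gl2.ALREADY_SIGNALED || status === gl2.CONDITION_SATISFIED) {
        gl2.readPixels(0, 0, W, H, gl2.RGBA, gl2.UNSIGNED_BYTE, out);
      } else {
        requestAnimationFrame(poll); // keep the queue fed with other work meanwhile
      }
    })();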

Last edited by Vadik1 (June 19, 2022 13:22:42)

dinonil
Scratcher
89 posts

Prioritizing GPU usage

Vadik1 wrote:

In WebGL (and also OpenGL) the code that runs on the GPU has to be written in a special shader language called GLSL, which has a massive number of limitations compared to normal programming languages (all loops need to run a predetermined number of times, known at compile time; no recursion is allowed; shaders always take in and output a predetermined amount of data; no dynamic memory allocation; etc.)
I think you can also run pre-compiled shaders, instead of making a super complex GLSL transpiler.
igtnathan5
Scratcher
1000+ posts

Prioritizing GPU usage

Could you modify the project.json file? I mean, TurboWarp already has a way of putting something that isn't in native Scratch into a project that still works in native Scratch (the <turbowarp?> block) by modifying the project.json file; I've seen this used in a game before.
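For reference, an .sb3 file is just a zip with project.json inside, so editing it programmatically could look roughly like this (a sketch using the JSZip library; what you would actually change in the blocks is left out on purpose):

    const JSZip = require('jszip');
    const fs = require('fs');

    async function patchProject(path) {
      const zip = await JSZip.loadAsync(fs.readFileSync(path));
      const project = JSON.parse(await zip.file('project.json').async('string'));

      // ...edit project.targets[...].blocks here to insert the non-native block...

      zip.file('project.json', JSON.stringify(project));
      fs.writeFileSync(path, await zip.generateAsync({ type: 'nodebuffer' }));
    }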
