Discuss Scratch
- Discussion Forums
- » Advanced Topics
- » Prioritizing GPU usage
- certainly--stormy
-
12 posts
Prioritizing GPU usage
Is it possible to prioritize usage of either the integrated or a dedicated graphics card? From my understanding, running a lot of different lines of code at once that are individually light on the processor seems like a job for a graphics card, and a lot of projects would probably run much faster if the GPU were utilized more instead.
- ScolderCreations
-
1000+ posts
Prioritizing GPU usage
Is it even possible to use the GPU in the web browser?
- dinonil
-
89 posts
Prioritizing GPU usage
> ScolderCreations wrote:
> Is it even possible to use the GPU in the web browser?

You can compile javascript to shader code and run it on the GPU using WebGL, but it wouldn't make much sense.
Last edited by dinonil (June 9, 2022 22:37:50)
- ScolderCreations
-
1000+ posts
Prioritizing GPU usage
> dinonil wrote:
> > ScolderCreations wrote:
> > Is it even possible to use the GPU in the web browser?
> You can compile javascript to shader code and run it on the GPU using WebGL

Interesting, but can this be done JIT?
- dinonil
-
89 posts
Prioritizing GPU usage
> ScolderCreations wrote:
> Interesting, but can this be done JIT?

It is possible. It could be like Turbowarp, but instead of compiling to JS, it compiles scratch blocks to shader code.
Last edited by dinonil (June 9, 2022 22:42:31)
- dinonil
-
89 posts
Prioritizing GPU usage
GPU.js might work for something like this.
But that only compiles javascript to shader code, not scratch blocks. It would be best if the VM utilized both a compiler that compiles scratch blocks to shader code (like how Turbowarp compiles to javascript) for more complex code, and the vanilla interpreter for simpler code, at the same time.
(the speed is genuinely really amazing)
https://assets.scratch.mit.edu/get_image/.%2E/527d5bb674220e6cf5562b6b719a6618.png
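For anyone curious what that looks like in practice, here is a minimal GPU.js sketch (purely illustrative - the matrix size and kernel are made up, and this is not the code behind the screenshot above). GPU.js parses a plain JavaScript function and compiles it to a fragment shader, running it once per output element on the GPU:

```js
// Minimal GPU.js sketch: multiply two 512x512 matrices on the GPU.
// GPU.js translates the kernel function below into GLSL behind the scenes.
// (Node-style import; in the browser you would load the gpu.js script instead.)
const { GPU } = require('gpu.js');

const size = 512;
const gpu = new GPU();

const multiply = gpu.createKernel(function (a, b) {
  let sum = 0;
  // Loop bounds come from compile-time constants, mirroring GLSL's rules.
  for (let i = 0; i < this.constants.size; i++) {
    sum += a[this.thread.y][i] * b[i][this.thread.x];
  }
  return sum;
}, { constants: { size }, output: [size, size] });

// Plain nested arrays of numbers are fine as inputs (strings are not).
const randomMatrix = () =>
  Array.from({ length: size }, () =>
    Array.from({ length: size }, () => Math.random()));

const c = multiply(randomMatrix(), randomMatrix()); // runs on the GPU
```

Kernels like this only deal with numbers and fixed-size arrays, which lines up with the limitation mentioned further down in the thread.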
Last edited by dinonil (June 9, 2022 23:12:28)
- CST1229
-
1000+ posts
Prioritizing GPU usage
> dinonil wrote (#7):
> GPU.js might work for something like this.
> But that only compiles javascript to shader code, not scratch blocks. It would be best if the VM utilized both a compiler that compiles scratch blocks to shader code (like how Turbowarp compiles to javascript) for more complex code, and the vanilla interpreter for simpler code, at the same time.
> (the speed is genuinely really amazing)
> https://assets.scratch.mit.edu/get_image/.%2E/527d5bb674220e6cf5562b6b719a6618.png

It could just compile the entire interpreter to shader code (or possibly even the whole editor!).

EDIT: Apparently gpu.js doesn't support much JavaScript (not even strings!), so it can only really compile a few blocks into shader code.
Last edited by CST1229 (June 12, 2022 06:12:18)
- Chiroyce
-
1000+ posts
Prioritizing GPU usage
> ScolderCreations wrote:
> Is it even possible to use the GPU in the web browser?

For rendering on a canvas - yes, the GPU can be used via WebGL (Scratch uses the GPU for rendering, mainly for rasterizing the SVG costumes).
- uwv
-
1000+ posts
Prioritizing GPU usage
> CST1229 wrote (#8):
> > dinonil wrote (#7):
> > GPU.js might work for something like this.
> > But that only compiles javascript to shader code, not scratch blocks. It would be best if the VM utilized both a compiler that compiles scratch blocks to shader code (like how Turbowarp compiles to javascript) for more complex code, and the vanilla interpreter for simpler code, at the same time.
> > (the speed is genuinely really amazing)
> > https://assets.scratch.mit.edu/get_image/.%2E/527d5bb674220e6cf5562b6b719a6618.png
> It could just compile the entire interpreter to shader code (or possibly even the whole editor!).
> EDIT: Apparently gpu.js doesn't support much JavaScript (not even strings!), so it can only really compile a few blocks into shader code.

all the mathematical functions can be compiled (so the slowest part of scratch), which would make everything significantly faster - for example 3d projects.
- Vadik1
-
500+ posts
Prioritizing GPU usage
TL;DR: close to impossible to implement, and even if it were implemented, it wouldn't give that much of a speed improvement.
> ScolderCreations wrote:
> Is it even possible to use the GPU in the web browser?

Yes, using either WebGL, WebGL 2 or the upcoming WebGPU.
> dinonil wrote:
> > ScolderCreations wrote:
> > Is it even possible to use the GPU in the web browser?
> You can compile javascript to shader code and run it on the GPU using WebGL, but it wouldn't make much sense.

In WebGL (and also OpenGL) the code that runs on the GPU has to be written in a special shader language called GLSL, which has a massive number of limitations compared to normal programming languages (all loops need to run a predetermined number of times, known at compile time; no recursion is allowed; shaders always take in and output a predetermined amount of data; no dynamic memory allocation; etc.).
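As a rough illustration of those restrictions (a made-up example, not anything Scratch or Turbowarp actually ships): a WebGL 1 fragment shader has a fixed set of inputs and outputs, and even its loop bounds have to be constants the compiler can see. The texture name u_data and the loop bound of 16 are just placeholders.

```js
// Illustrative only: averaging 16 texels from an input texture.
// Note the constant loop bound, the fixed inputs (uniform/varying) and the
// single fixed output (gl_FragColor) - no recursion, no dynamic allocation.
const fragmentSource = `
  precision mediump float;
  uniform sampler2D u_data;   // input data packed into a texture
  varying vec2 v_texCoord;    // interpolated per-pixel coordinate
  void main() {
    float sum = 0.0;
    for (int i = 0; i < 16; i++) {   // bound must be known at compile time
      sum += texture2D(u_data, v_texCoord + vec2(float(i) / 16.0, 0.0)).r;
    }
    gl_FragColor = vec4(sum / 16.0, 0.0, 0.0, 1.0);
  }
`;

// Compiling it goes through the standard WebGL API:
const gl = document.createElement('canvas').getContext('webgl');
const shader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(shader, fragmentSource);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(shader));
}
```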
> certainly--stormy wrote:
> Is it possible to prioritize usage of either the integrated or a dedicated graphics card? From my understanding, running a lot of different lines of code at once that are individually light on the processor seems like a job for a graphics card, and a lot of projects would probably run much faster if the GPU were utilized more instead.

This wouldn't really work in the context of Scratch. Multiple CPU cores are all independent of one another and each can run completely different code. Multiple cores on a GPU, however, are all linked up, so that all of them always execute the same instruction at once. Conditional logic on a GPU is done by the instruction decoder going through all possible branches in the code (a known, predetermined number of instructions), while each GPU core decides to execute the instructions for its branch and ignore the instructions for the other branches. In modern GPUs it is probably done in other, more optimal ways, but all of the current limitations of WebGL point towards it being designed to run on hardware like I described.
As a result, the GPU can only be used to speed up data processing when a lot of data needs to be processed in the same way and each piece of data can be processed independently of the others.
Another thing to point out is how WebGL works. It only supports rendering points, lines and triangles, and only supports two types of shaders for them: vertex and fragment. The data for the things that need to be drawn is stored in buffers (lists of data in VRAM). When rendering, the data from the buffers is split into equally sized chunks, and those chunks are sent to be processed by a vertex shader. The vertex shader outputs a position on the screen (with depth) and possibly some varyings. Varyings are numbers; their count is predetermined by the shader and they always take up a predetermined amount of space. Then, depending on what primitive is being drawn (points, lines or triangles), the output data from the vertex shader is grouped (into 1/2/3 vertices) and hardware-accelerated filling occurs. For every pixel, a fragment shader is executed. Varyings are only known at each vertex, but their values are smoothly interpolated across the entire primitive and those interpolated values are passed into the fragment shader. The fragment shader, based on the varyings, can either calculate and output an RGBA color or discard itself. Both types of shaders have the ability to read from a specified position in one of 16 active textures. Rendering can be done either to the canvas or to other textures.

WebGL 2 supports rendering to multiple render targets at once, and for each of them the fragment shader can output a different color. WebGL 2 also supports collecting varyings from the vertex shaders into another buffer. In addition to 8-bit-integer-per-channel textures, WebGL 2 also supports 32-bit floating point textures and rendering to them.
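To put that vocabulary into code, here is a bare-bones, illustrative WebGL 1 snippet (not Scratch's actual renderer): the positions live in a buffer, the vertex shader outputs a position plus a varying, and the fragment shader receives that varying interpolated for every pixel of the triangle it fills.

```js
const gl = document.createElement('canvas').getContext('webgl');

// A buffer: a list of data in VRAM. Here, 3 vertices of one triangle.
const positions = new Float32Array([-1, -1,  1, -1,  0, 1]);
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

// Vertex shader: runs once per vertex, outputs a position and a varying.
const vsSource = `
  attribute vec2 a_position;
  varying vec3 v_color;
  void main() {
    gl_Position = vec4(a_position, 0.0, 1.0);
    v_color = vec3(a_position * 0.5 + 0.5, 1.0);
  }
`;
// Fragment shader: runs once per covered pixel, sees the interpolated varying.
const fsSource = `
  precision mediump float;
  varying vec3 v_color;
  void main() { gl_FragColor = vec4(v_color, 1.0); }
`;

function compile(type, source) {
  const s = gl.createShader(type);
  gl.shaderSource(s, source);
  gl.compileShader(s);
  return s;
}
const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);
gl.useProgram(program);

// Tell WebGL how to split the buffer into per-vertex chunks (2 floats each).
const loc = gl.getAttribLocation(program, 'a_position');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

// Draw: group every 3 vertices into a triangle and fill it.
gl.drawArrays(gl.TRIANGLES, 0, 3);
```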
WebGL and WebGL 2 don't have that many features and don't provide any built-in way of doing general purpose data processing. But with enough effort, many things can be worked around by implementing them as rendering.
> uwv wrote:
> all the mathematical functions can be compiled (so the slowest part of scratch), which would make everything significantly faster - for example 3d projects.

Since most blocks can't be compiled to shaders, and those that can are surrounded by code that can't, it will require a lot of back and forth between the CPU and GPU. That is exactly the thing that should always be avoided for performance reasons.
Normally the GPU has a queue of what it needs to do: the CPU adds things to the queue at its own pace, while the GPU executes instructions from the queue at its own pace. Asking the GPU to do something and then waiting for the result disrupts this. While waiting, the CPU ends up doing nothing, and because it wasn't adding new instructions to the queue the whole time it was waiting, the queue is empty once it finally gets the result, and now the GPU stops and has nothing to do. This is also why Scratch's touching blocks run entirely on the CPU if there are fewer than 4000 pixels to compare.
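A concrete example of that stall (illustrative only): gl.readPixels is synchronous, so the CPU blocks until the GPU has drained its queue before any pixel data comes back.

```js
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');

// These calls just get appended to the GPU's queue and return immediately.
gl.clearColor(0.0, 1.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);

// readPixels is synchronous: the CPU now waits for the GPU to finish
// everything queued so far, then copies the pixels back over the bus.
const pixels = new Uint8Array(4 * gl.drawingBufferWidth * gl.drawingBufferHeight);
gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight,
              gl.RGBA, gl.UNSIGNED_BYTE, pixels);

// While the CPU was blocked it queued no new work, so immediately after
// the read the GPU's queue is empty and the GPU idles as well.
```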
Last edited by Vadik1 (June 19, 2022 13:22:42)
- dinonil
-
89 posts
Prioritizing GPU usage
> Vadik1 wrote:
> In WebGL (and also OpenGL) the code that runs on the GPU has to be written in a special shader language called GLSL, which has a massive number of limitations compared to normal programming languages (all loops need to run a predetermined number of times, known at compile time; no recursion is allowed; shaders always take in and output a predetermined amount of data; no dynamic memory allocation; etc.).

I think you can also run pre-compiled shaders, instead of making a super complex GLSL transpiler.
- igtnathan5
-
1000+ posts
Prioritizing GPU usage
Could you modify the project.json file? I mean, Turbowarp already has a method of doing something that's not in native Scratch yet still works in native Scratch (the <turbowarp?> block) by modifying the project.json file - I've seen this used in a game before.