Discuss Scratch

rdococ
Scratcher
500+ posts

64-bit integers

rj8wjeiw wrote:

rdococ wrote:

Personally, I think Scratch should represent numbers internally as exact fractions, with a numerator and a denominator. This would fix most problems with floating point precision, but it might incur a performance penalty when they need to be simplified.
First of all, this would break existing projects using doubles. Also, it makes operations that are important in double arithmetic (sin, cos, tan, sqrt, log, etc.) undefined, making it impossible to take advantage of the exact representation. It would also be extremely costly to display such numbers, and in the end would lead to confusion.

The suggestion is: Keep doubles in variables, and add uint64_t variables and uint64_t arrays to make programming more enjoyable rather than copying JavaScript.

Most Scratchers would have no idea what you mean by “uint64_t”.

Personally, I think the absolute ‘best’ solution would be to have three internal representations of numbers (integer, float, and fraction), and convert between them on the fly.

This would not break any existing projects unless they relied on the quirks of floating point numbers. The numbers would be displayed in decimal as they always have been - this change would be mostly invisible to most Scratchers but it would make certain results more intuitive, such as 0.1 + 0.2 = 0.3.

That said, I don't think this should be a high priority thing if there are more pressing issues or important suggestions to consider first.

Last edited by rdococ (Feb. 15, 2020 21:54:32)

Sheep_maker
Scratcher
1000+ posts

64-bit integers

Flowermanvista wrote:

I'm just going to say this again: JavaScript only has one natively supported number type, which is a double float. If you make a number in JavaScript, it is a double float - no ifs, ands, or buts.
JavaScript also supports BigInts, which are integers that have no theoretical limit. However, browser support for BigInt doesn't cover all of Scratch's supported browsers.

rdococ wrote:

Personally, I think Scratch should represent numbers internally as exact fractions, with a numerator and a denominator. This would fix most problems with floating point precision, but it might incur a performance penalty when they need to be simplified.
Scheme has a distinction between exact and inexact numbers; perhaps a similar distinction could be made (silently, so as not to disturb the children) on Scratch?

- Sheep_maker This is a kumquat-free signature. :P
This is my signature. It appears below all my posts. Discuss it on my profile, not the forums. Here's how to make your own.
Yyyyyy754
New to Scratch
12 posts

64-bit integers

Sheep_maker wrote:

Flowermanvista wrote:

I'm just going to say this again: JavaScript only has one natively supported number type, which is a double float. If you make a number in JavaScript, it is a double float - no ifs, ands, or buts.
JavaScript also supports BigInts, which are integers that have no theoretical limit. However, browser support for BigInt doesn't cover all of Scratch's supported browsers.

rdococ wrote:

Personally, I think Scratch should represent numbers internally as exact fractions, with a numerator and a denominator. This would fix most problems with floating point precision, but it might incur a performance penalty when they need to be simplified.
Scheme has a distinction between exact and inexact numbers; perhaps a similar distinction could be made (silently, so as not to disturb the children) on Scratch?
No. The suggestion did NOT say the existing double/string variables are being replaced. So you are going off-topic.

Unsigned 64-bit integers must not be able to be mixed with Scratch variables in operations, must follow the overflow rule on all operations, must have OR, AND, XOR, NOT and bit shifts, and must crash on division by 0 (no sin, log, or sqrt).

rdococ wrote:

rj8wjeiw wrote:

rdococ wrote:

Personally, I think Scratch should represent numbers internally as exact fractions, with a numerator and a denominator. This would fix most problems with floating point precision, but it might incur a performance penalty when they need to be simplified.
First of all, this would break existing projects using doubles. Also, it makes operations that are important in double arithmetic (sin, cos, tan, sqrt, log, etc.) undefined, making it impossible to take advantage of the exact representation. It would also be extremely costly to display such numbers, and in the end would lead to confusion.

The suggestion is: Keep doubles in variables, and add uint64_t variables and uint64_t arrays to make programming more enjoyable rather than copying JavaScript.

Most Scratchers would have no idea what you mean by “uint64_t”.

Personally, I think the absolute ‘best’ solution would be to have three internal representations of numbers (integer, float, and fraction), and convert between them on the fly.

This would not break any existing projects unless they relied on the quirks of floating point numbers. The numbers would be displayed in decimal as they always have been - this change would be mostly invisible to most Scratchers but it would make certain results more intuitive, such as 0.1 + 0.2 = 0.3.

That said, I don't think this should be a high priority thing if there are more pressing issues or important suggestions to consider first.
The unsigned 64-bit integers would not necessarily use the stdint.h syntax. And, once again, replacing EXISTING doubles/strings is off-topic.
Sheep_maker
Scratcher
1000+ posts

64-bit integers

Yyyyyy754 wrote:

Sheep_maker wrote:

Flowermanvista wrote:

I'm just going to say this again: JavaScript only has one natively supported number type, which is a double float. If you make a number in JavaScript, it is a double float - no ifs, ands, or buts.
JavaScript also supports BigInts, which are integers that have no theoretical limit. However, browser support for BigInt doesn't cover all of Scratch's supported browsers.

rdococ wrote:

Personally, I think Scratch should represent numbers internally as exact fractions, with a numerator and a denominator. This would fix most problems with floating point precision, but it might incur a performance penalty when they need to be simplified.
Scheme has a distinction between exact and inexact numbers; perhaps a similar distinction could be made (silently, so as not to disturb the children) on Scratch?
No. The suggestion did NOT say the existing double/string variables are being replaced. So you are going off-topic.

Unsigned 64-bit integers must not be able to be mixed with Scratch variables in operations, must follow the overflow rule on all operations, must have OR, AND, XOR, NOT and bit shifts, and must crash on division by 0 (no sin, log, or sqrt).
I'm pretty sure current Scratch behaviour can be mimicked for this new data type. Variables can currently store any type, so they should be able to store uint64s too. Since Scratch already automatically converts strings to numbers, when an arithmetic operation mixes a normal number and a uint64, the uint64 can be internally converted to a normal JS number, and similarly for the math functions. Division by a uint64 zero could be given a hardcoded behaviour where it returns Infinity.
((uint64 (2)::operators) + (uint64 (4)::operators)) // 6 (internally, a uint64)
((uint64 (2)::operators) + (4)) // 6 (internally, a float)
([sqrt v] of (uint64 (4)::operators)) // 2 (internally, a float)
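Roughly, that coercion could look like this in JavaScript (an illustration only; BigInt stands in for the hypothetical uint64 type, and none of this is actual scratch-vm code):

const U64_MASK = (1n << 64n) - 1n;

function addScratch(a, b) {
    if (typeof a === 'bigint' && typeof b === 'bigint') {
        return (a + b) & U64_MASK;   // uint64 + uint64 stays a uint64, wrapping at 2^64
    }
    const toNum = v => (typeof v === 'bigint' ? Number(v) : v);
    return toNum(a) + toNum(b);      // mixed operands degrade to an ordinary double
}

function divScratch(a, b) {
    if (typeof a === 'bigint' && typeof b === 'bigint') {
        return b === 0n ? Infinity : a / b;   // hardcoded: uint64 division by zero gives Infinity
    }
    const toNum = v => (typeof v === 'bigint' ? Number(v) : v);
    return toNum(a) / toNum(b);
}

// addScratch(2n, 4n) -> 6n (a uint64); addScratch(2n, 4) -> 6 (a double)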
I never explicitly said JS number and string values had to be replaced in my post, and the suggestion never said they couldn't be replaced. In addition, the first part of my post was about the implementation of this suggestion (BigInts are used for BigUint64Arrays), which can excuse the second part of the post (I'm assuming this, at least, from suggestion 7.3 of the Constructive Sticky, but there's no need to be super pedantic about the forum rules I guess)

Edit: The uint64 _ reporter above is for demonstration purposes

Last edited by Sheep_maker (Feb. 16, 2020 07:00:19)


- Sheep_maker This is a kumquat-free signature. :P
This is my signature. It appears below all my posts. Discuss it on my profile, not the forums. Here's how to make your own.
Yyyyyy754
New to Scratch
12 posts

64-bit integers

Casting to uint64_t:

((x)|[0]::grey)

Casting to double:

((x::grey)+(-0))

(Note: in double arithmetic, -0+0=0, but -0+-0=-0)

So,
(((-1)+(-0))|[0]::grey)
must return the uint64_t value 18446744073709551615, and
(([1]\<\<[63]::grey)+([1]\<\<[63]::grey))
must return the uint64_t value 0.
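For reference, both wrap-around results can be checked in JavaScript with BigInt (purely illustrative; Scratch has no such blocks):

BigInt.asUintN(64, -1n);                        // 18446744073709551615n
BigInt.asUintN(64, (1n << 63n) + (1n << 63n));  // 0n, since 2^63 + 2^63 = 2^64 wraps to 0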

Are you sure an operation on uint64_t and double should return a float? Can you give sources stating that in C++, operations on uint64_t and double return a float rather than a double?

Last edited by Yyyyyy754 (Feb. 16, 2020 07:17:23)

imfh
Scratcher
1000+ posts

64-bit integers

The way Scratch currently works, the operator blocks all cast input values to JavaScript Numbers.

Scratch casts like this before performing any math (including abs, tan, etc.):
add (args) {
    return Cast.toNumber(args.NUM1) + Cast.toNumber(args.NUM2);
}

And this is the cast function:
/**
    * Scratch cast to number.
    * Treats NaN as 0.
    * In Scratch 2.0, this is captured by `interp.numArg.`
    * @param {*} value Value to cast to number.
    * @return {number} The Scratch-casted number value.
    */
static toNumber (value) {
    // If value is already a number we don't need to coerce it with
    // Number().
    if (typeof value === 'number') {
        // Scratch treats NaN as 0, when needed as a number.
        // E.g., 0 + NaN -> 0.
        if (Number.isNaN(value)) {
            return 0;
        }
        return value;
    }
    const n = Number(value);
    if (Number.isNaN(n)) {
        // Scratch treats NaN as 0, when needed as a number.
        // E.g., 0 + NaN -> 0.
        return 0;
    }
    return n;
}

Something in here would need to be changed if the default operators were to support 64-bit arithmetic.
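For what it's worth, here is a purely hypothetical sketch (in the same excerpt style as above, not actual scratch-vm code) of how the cast could be taught about a 64-bit integer type, represented here as a BigInt, without touching existing double inputs:

static toNumber (value) {
    // Hypothetical new case: keep 64-bit integers intact instead of
    // squeezing them through a double, which loses precision above 2^53.
    if (typeof value === 'bigint') {
        return BigInt.asUintN(64, value);
    }
    if (typeof value === 'number') {
        // Scratch treats NaN as 0, when needed as a number.
        if (Number.isNaN(value)) {
            return 0;
        }
        return value;
    }
    const n = Number(value);
    if (Number.isNaN(n)) {
        return 0;
    }
    return n;
}

The operator blocks themselves would also need a rule for mixed operands, since a BigInt and a Number can't be added directly in JavaScript, along the lines of the coercion sketched earlier in the thread.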

Scratch to Pygame converter: https://scratch.mit.edu/discuss/topic/600562/
chrdagos
Scratcher
500+ posts

64-bit integers

45afc4td wrote:

uint64_t [test v] = (-1) :: variables
return (test) :: looks
18446744073709551615

uint64_t [test v] = (13) :: variables
[test v] = ((test) ^ ((test) \<\< (1) :: operators) :: operators) :: variables stack
return (test) :: looks
23

I demand 64-bit integers, along with standard operations like bitshift “<<”, “>>”, xor “^”, or “|”, etc.
I support the bitshift and xor, since that would make my Xorshift PRNG a lot simpler, but Scratch is supposed to be simple to use, and not many people even KNOW what a bitshift is.
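For context, one step of a 64-bit xorshift generator really is nothing but shifts and XORs. A sketch using BigInt (the shift amounts and seed are Marsaglia's usual xorshift64 constants; this is just an illustration, not a Scratch feature):

const MASK64 = (1n << 64n) - 1n;

function xorshift64(state) {
    state ^= (state << 13n) & MASK64;   // left shift, keeping only 64 bits
    state ^= state >> 7n;               // logical right shift
    state ^= (state << 17n) & MASK64;
    return state & MASK64;
}

let seed = 88172645463325252n;   // a conventional nonzero starting state
seed = xorshift64(seed);         // each call produces the next pseudo-random state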
NxNmultiply
Scratcher
100+ posts

64-bit integers

Highly support. Many people are confused or bored by double precision, and unsigned 64-bit integers should not interact with Scratch variables except through special conversion blocks (uint64_t to wchar, uint64_t to double, double to uint64_t, wchar to uint64_t).
Flowermanvista
Scratcher
1000+ posts

64-bit integers

Sheep_maker wrote:

Flowermanvista wrote:

I'm just going to say this again: JavaScript only has one natively supported number type, which is a double float. If you make a number in JavaScript, it is a double float - no ifs, ands, or buts.
JavaScript also supports BigInts, which are integers that have no theoretical limit. However, browser support for BigInt doesn't cover all of Scratch's supported browsers.
Well, this is certainly news to me.

Last edited by Flowermanvista (April 11, 2020 19:37:23)


Add a SPOOKY SKELETON to your project!

The Scratch 3 Project Save Troubleshooter - find out why your project won't save

ST, Please Add A Warning When A Size Limit Is Exceeded

My Dumb Creations - THE BEST ANIMATION | The Windows 98 Experience (made on Windows 98) | nobody cares about Me… | the2000 Reveals His New Profile Picture | Not Dumb Creations - Ten Years
Ctrl+Shift+Down for more…
Do evil kumquats keep eating your signature? Assert your dominance and eat the evil kumquats. Did you know that kumquats are only about the size of an olive?
NxNmultiply
Scratcher
100+ posts

64-bit integers

Flowermanvista wrote:

Sheep_maker wrote:

Flowermanvista wrote:

I'm just going to say this again: JavaScript only has one natively supported number type, which is a double float. If you make a number in JavaScript, it is a double float - no ifs, ands, or buts.
JavaScript also supports BigInts, which are integers that have no theoretical limit. However, browser support for BigInt doesn't cover all of Scratch's supported browsers.
Well, this is certainly news to me.
But it is off-topic, because Scratch 3.0 must run on Firefox ESR 52.4.1 (32-bit); BigInts therefore cannot be used to implement unsigned 64-bit integers, and they would probably be slower anyway.
PkmnQ
Scratcher
1000+ posts

64-bit integers

Support.

If all you want is computations, sure, make a list.
If you want to show it in decimal, oh boy that's another story. It's hard and slow.

Here's a 256-bit integer project. Yes, it is 192 bits too much, but my point still stands.

How long does it take to set the integer to a million? Around 7.5 seconds.
How long does it take to display the integer? Around 33 seconds.
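An illustration of why the display step dominates (a sketch, not the actual project code): with the integer stored as a list of 32-bit limbs, every decimal digit needs a full long-division pass over the whole list, so the conversion is roughly quadratic in the number's size.

function limbsToDecimal(limbs) {        // limbs: base-2^32 digits, most significant first
    const digits = [];
    let work = limbs.slice();
    while (work.some(limb => limb !== 0)) {
        let remainder = 0;
        // One long-division pass: divide the whole limb list by 10.
        work = work.map(limb => {
            const value = remainder * 0x100000000 + limb;
            remainder = value % 10;
            return Math.floor(value / 10);
        });
        digits.unshift(remainder);      // the remainder is the next decimal digit
    }
    return digits.length ? digits.join('') : '0';
}

// limbsToDecimal([1, 0]) === '4294967296'   (a single limb worth 2^32)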

This is an account that exists.

Here, have a useful link:
The Official List of Rejected Suggestions by Za-Chary

NxNmultiply
Scratcher
100+ posts

64-bit integers

uint64_t originates from stdint.h, so perhaps there could be an extension named stdint.h that has all the types from it, as well as the operators relevant to integers, including arrays (which are suggested as well). After all, the extensions feature of Scratch 3.0 is pretty much the same concept as the #include feature of C or C++. Because uint8_t is a character type, this would also provide alternative string manipulation, though restricted to only CP437 characters (or whatever the system 8-bit codepage is).

Last edited by NxNmultiply (June 27, 2020 16:03:16)

HTML-Fan
Scratcher
1000+ posts

64-bit integers

The 64-bit idea doesn't work well with Scratch's goal of giving beginners something easy to learn. If you want to use bit operations, then use C++ or JS. If you can work with bit operations, then you should be able to use a text-based programming language. And you can build your own bit operations - I think there's a 2^x function somewhere, not sure. Otherwise just repeat * 2 and / 2.
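What that amounts to, spelled out in JavaScript (a sketch; plain JS numbers, so it only stays exact for integers below 2^53):

const shiftLeft  = (x, n) => Math.floor(x) * 2 ** n;               // x << n
const shiftRight = (x, n) => Math.floor(Math.floor(x) / 2 ** n);   // x >> n, for x >= 0

// shiftLeft(13, 1) === 26; shiftRight(26, 3) === 3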

Joke of the century: Just made a good remix of this with Scratch's music extension.
                      BE MOIST B) AND CHECK OUT
_____ ______ _ _
|_ _| | _ (_) (_)
| |_ _____ | | | |_ _ __ ___ ___ _ __ ___ _ ___ _ __ ___ #RoadToMoist100
| \ \ /\ / / _ \ | | | | | '_ ` _ \ / _ \ '_ \/ __| |/ _ \| '_ \/ __|
| |\ V V / (_) | | |/ /| | | | | | | __/ | | \__ \ | (_) | | | \__ \
\_/ \_/\_/ \___/ |___/ |_|_| |_| |_|\___|_| |_|___/_|\___/|_| |_|___/
6d66yh
Scratcher
100+ posts

64-bit integers

There should be the following:
//Create static variable button
//Edit static button (rename, remove, change type)

(cast (i::extension) to type [uint8_t v] [const *]::extension) // other types: int8_t, uint16_t, int16_t, uint32_t, int32_t, uint64_t, int64_t, float, double, long double
([]+[]::extension) // a more 'internal' addition than the more abstract addition provided by Scratch, can add pointers to integers as well
([]-[]::extension) ([]*[]::extension) ([]/[]::extension) ([]%[]::extension) ([]|[]::extension) ([]&[]::extension) ([]^[]::extension) ([]&&[]::extension) ([]||[]::extension) ([]\<\<[]::extension) ([]\>\>[]::extension)
([]+=[]::extension) ([]-=[]::extension) ([]\<\<=[]::extension) ([]\>\>=[]::extension) (...)
([]++::extension) ([]--::extension) (++[]::extension) (--[]::extension)
(~[]::extension) (![]::extension) (+[]::extension) (-[]::extension) ([]?[]:[]::extension)
([]=[]::extension) // variable assignment
([]\<[]::extension) ([]==[]::extension) ([]\<=[]::extension) ([]\>[]::extension) ([]!=[]::extension) ([]\>=[]::extension)
(*[]::extension) (&[]::extension) // with addition block this allows array access
//Note: access violation, division by integer zero, etc. will make the project return integers like 0xC0000005, with an integer log for each project
(local variable (i::extension) of type [uint8_t v] [const *]::extension) // user drags the variable out like a custom block input would be dragged out
(pointer to screen::extension) // potentially faster graphical projects with native array access of screen pixels
[];::extension
\{{
...
}\}::extension
(sizeof []::extension)
(allocate [] bytes::extension) (free []::extension)

Example of clearing the screen black:
define clear
((local variable (i::extension) of type [int32_t v] []::extension)=[0]::extension);::extension
while ((i::extension)<[307200]::extension){
((*((pointer to screen::extension)+(i::extension)::extension)::extension)=[0]::extension);::extension
((i::extension)++::extension);::extension
}::control

As for the difficulty of software implementation, the operations are actually simple hardware operators. So, the developers could put inline assembly in Javascript for native integers.

Last edited by 6d66yh (April 23, 2021 20:03:53)


Integer arithmetic suggestion: https://scratch.mit.edu/discuss/post/5163608/
6d66yh
Scratcher
100+ posts

64-bit integers

Bump.

Integer arithmetic suggestion: https://scratch.mit.edu/discuss/post/5163608/
Jonathan50
Scratcher
1000+ posts

64-bit integers

6d66yh wrote:

As for the difficulty of software implementation, the operations are actually simple hardware operators. So, the developers could put inline assembly in Javascript for native integers.
You can't put assembly for a computer into JavaScript, though what you're describing can apparently be done with WebAssembly.

If JavaScript programmers have gotten on fine with just double-precision floats for 2½ decades (except for Wasm and BigInts), I think they're mostly good enough for Scratchers too. If you really want a 64-bit integer, you can represent it with one number for the high-order 32 bits and one for the low-order 32 bits.
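A sketch of that hi/lo representation (illustrative only; each half holds 32 bits in an ordinary JS number, which stays exact well below 2^53):

function add64(a, b) {                         // a and b are [hi, lo] pairs
    const lo = a[1] + b[1];                    // at most 2^33 - 2, still exact
    const carry = lo >= 0x100000000 ? 1 : 0;   // did the low half overflow 32 bits?
    return [
        (a[0] + b[0] + carry) % 0x100000000,   // wrap the high half modulo 2^32
        lo % 0x100000000                       // keep only the low 32 bits
    ];
}

// add64([0, 0xFFFFFFFF], [0, 1]) -> [1, 0]: the carry moves into the high half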

I agree with SheepMaker that if this were to be added, the arithmetic operators should be made generic. No new blocks would be necessary. This is how it was in Scratch 1.x, when there were arbitrary precision integers. (Fun fact: Squeak has exact fractions, like rdococ mentioned and like Lisp and Scheme; but the Scratch Team disabled them by changing division. If you change division back in Scratch 1.4, they work fine. )

Not yet a Knight of the Mu Calculus.
6d66yh
Scratcher
100+ posts

64-bit integers

Jonathan50 wrote:

I agree with SheepMaker that if this were to be added, the arithmetic operators should be made generic. No new blocks would be necessary.
What? There definitely would be new blocks necessary. Look at the type casting, the bitwise operators, assignment, pointer dereferencing and such. And example code to clear the screen black was given.

Integer arithmetic suggestion: https://scratch.mit.edu/discuss/post/5163608/
Jonathan50
Scratcher
1000+ posts

64-bit integers

6d66yh wrote:

What? There definitely would be new blocks necessary. Look at the type casting, and bitwise operators, assignment, pointer dereferencing and such. And example code to make black screen was given.
OK, I was just considering precision. The bitwise operations can be implemented with arithmetic. There isn't enough reason to justify adding them as primitives, though I do think custom reporters would be great. As far as efficiency is concerned, at least bit shifts can be done with just one block, and then the overhead of using multiplication/division would be insignificant compared to the overhead that comes merely from using any block, so a bit shift block would yield very little gain. An invisible JIT compiler – one is to some degree being added to Snap! – could yield far greater gains for such code.
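For example, bitwise AND on non-negative integers falls out of just mod, halving, and addition (a sketch of the arithmetic-only approach, not a proposal for a new block):

function andViaArithmetic(a, b) {
    let result = 0;
    let place = 1;                       // value of the current bit position
    while (a > 0 && b > 0) {
        if (a % 2 === 1 && b % 2 === 1) {
            result += place;             // both lowest bits set, so this bit survives
        }
        a = Math.floor(a / 2);           // "shift right" by halving
        b = Math.floor(b / 2);
        place *= 2;
    }
    return result;
}

// andViaArithmetic(13, 26) === 8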

In high-level programming languages like JavaScript, Lisp, ML, and Java, you can just pass around compound data as you please. A datum like an array or object is represented by a pointer, but the programmer doesn't need to know the address is a number or perform arithmetic on it. When it's no longer needed, the garbage collector reclaims the memory. So what you want can be accomplished with first class lists (with a constant factor time and space overhead, unless something like JavaScript's typed arrays were to be added.)

Not yet a Knight of the Mu Calculus.
rdococ
Scratcher
500+ posts

64-bit integers

Sheep_maker wrote:

rdococ wrote:

Personally, I think Scratch should represent numbers internally as exact fractions, with a numerator and a denominator. This would fix most problems with floating point precision, but it might incur a performance penalty when they need to be simplified.
Scheme has a distinction between exact and inexact numbers; perhaps a similar distinction could be made (silently, so as not to disturb the children) on Scratch?
Yes, this is the exact kind of thing I was going for. Quiet conversions between numerical types to ensure accuracy where it's necessary (e.g. for 0.1 + 0.2 = 0.3), and performance when that's more important (e.g. a pen drawing loop that calculates coordinates).

Although, I don't know if JavaScript has native support for integers, never mind exact fractions. I always had the impression that all JavaScript numbers were 64-bit floats, like how Lua did it before 5.3.
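As an aside, JavaScript's BigInt (mentioned earlier in the thread) is enough to build exact fractions on top of. A toy sketch, not anything Scratch or scratch-vm actually does:

const gcd = (a, b) => (b === 0n ? a : gcd(b, a % b));

function makeFraction(num, den) {
    const g = gcd(num < 0n ? -num : num, den < 0n ? -den : den);
    return { num: num / g, den: den / g };   // keep fractions in lowest terms
}

const addFractions = (x, y) =>
    makeFraction(x.num * y.den + y.num * x.den, x.den * y.den);

const sum = addFractions(makeFraction(1n, 10n), makeFraction(2n, 10n));
// sum is { num: 3n, den: 10n }: exactly 3/10, unlike 0.1 + 0.2 with doubles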

Everyone suggesting C operators wrote:

<snip>
You're completely missing the point. Scratch is meant to be a way for kids to learn programming concepts, not for adults who understand the minute differences between 64-bit integers and floats and how to manipulate them precisely.

The only way Scratch should distinguish between different types of number is silently, to avoid unintuitive results from floating point errors while preserving speed when it's necessary. Anything else is unnecessary fluff that will only confuse kids.

Last edited by rdococ (April 25, 2021 10:02:18)

6d66yh
Scratcher
100+ posts

64-bit integers

rdococ wrote:

Everyone suggesting C operators wrote:

<snip>
That's the point. Changing Scratch's existing abstract numeric type is off-topic here and should not be discussed in this thread. Adding native integers separately, the way it is done in C, C++, etc., is what this suggestion is all about. Even the ‘double’ type among the ones suggested would still be different from var, because it preserves the IEEE interpretation of NaN, makes it possible to build arrays of it by casting malloc's output, and allows casting its pointer to a 64-bit integer pointer to read its bits natively.

Last edited by 6d66yh (April 25, 2021 10:22:17)


Integer arithmetic suggestion: https://scratch.mit.edu/discuss/post/5163608/

Powered by DjangoBB