Discuss Scratch

WooHooBoy
Scratcher
1000+ posts

Downtime

cars_456 wrote:

WooHooBoy wrote:

Carpit999 wrote:

[/offtopic] (Extra text removed by Carpit999 - Please be positive.)
I'm honestly wondering what the purpose of this post was.


Cars, can we close the topic now? This thread is derailing seriously
I still have not found out why the downtime has been happening.
Then you didn't look

thisandagain wrote:

So many conspiracy theories! LOL.

Here is what happened:


In this particular instance one of our many DB replicas had an issue with memory pressure on MySQL indexes, which led to a huge drop in performance. This can be caused by custom queries that create large temporary tables. When something like this happens, a system called HAProxy detects that the replica has become unhealthy and removes it from rotation (which is why it appeared to come back so quickly). HAProxy polls each DB for health at a specific interval, so while it is able to recover pretty quickly, there can be momentary outages (which is what causes backend read errors (timeouts), 404s, or 500s) that affect the subset of users whose traffic is being handled by that replica. Hope that helps clarify.
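
To make that check-then-remove-from-rotation behavior concrete, here is a minimal toy model in Python. The replica names, timing, and error text are invented for illustration; this is a sketch of the general pattern, not Scratch's actual HAProxy configuration.

# Toy model of HAProxy-style health checking: reads are routed only to
# replicas that passed their most recent check, and checks run on a timer.
class Replica:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def health_check(self):
        # Real HAProxy would run a TCP or MySQL check with a timeout here.
        return self.healthy

class Balancer:
    def __init__(self, replicas):
        self.replicas = replicas
        self.in_rotation = list(replicas)  # only updated when checks run

    def run_health_checks(self):
        # Called at a fixed interval; unhealthy replicas drop out of rotation.
        self.in_rotation = [r for r in self.replicas if r.health_check()]

    def route_read(self):
        if not self.in_rotation:
            raise RuntimeError("backend read error")
        replica = self.in_rotation.pop(0)  # simple round-robin
        self.in_rotation.append(replica)
        return replica.name

db1, db2 = Replica("db-replica-1"), Replica("db-replica-2")
lb = Balancer([db1, db2])

db1.healthy = False       # memory pressure tanks db1...
print(lb.route_read())    # ...but until the next poll it can still get
print(lb.route_read())    # traffic: the momentary timeouts/500s above.

lb.run_health_checks()    # the next poll pulls db1 out of rotation
print(lb.route_read())    # from here on, every read lands on db2

The gap between "replica gets sick" and "next health check runs" is exactly the window in which a subset of users sees errors, which matches the description above.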
Carpit999
Scratcher
100+ posts

Downtime

WooHooBoy wrote:

cars_456 wrote:

WooHooBoy wrote:

Carpit999 wrote:

[/offtopic] (Extra text removed by Carpit999 - Please be positive.)
I'm honestly wondering what the purpose of this post was.


Cars, can we close the topic now? This thread is derailing seriously
I still have not found out why the downtime has been happening.
Then you didn't look

thisandagain wrote:

So many conspiracy theories! LOL.

Here is what happened:


In this particular instance one of our many DB replicas had an issue with memory pressure on MySQL indexes, which led to a huge drop in performance. This can be caused by custom queries that create large temporary tables. When something like this happens, a system called HAProxy detects that the replica has become unhealthy and removes it from rotation (which is why it appeared to come back so quickly). HAProxy polls each DB for health at a specific interval, so while it is able to recover pretty quickly, there can be momentary outages (which is what causes backend read errors (timeouts), 404s, or 500s) that affect the subset of users whose traffic is being handled by that replica. Hope that helps clarify.
No. It may have been for an external server Scratch internal crash, but the external server (This is the server we control) is actually 404ed by a downtime error, while the internal (MIT) Servers have crashed, causing intensity 404.
iamunknown2
Scratcher
1000+ posts

Downtime

Carpit999 wrote:

WooHooBoy wrote:

cars_456 wrote:

WooHooBoy wrote:

Carpit999 wrote:

[/offtopic] (Extra text removed by Carpit999 - Please be positive.)
I'm honestly wondering what the purpose of this post was.


Cars, can we close the topic now? This thread is derailing seriously
I still have not found out why the downtime has been happening.
Then you didn't look

thisandagain wrote:

So many conspiracy theories! LOL.

Here is what happened:


In this particular instance one of our many DB replicas had an issue with memory pressure on MySQL indexes, which led to a huge drop in performance. This can be caused by custom queries that create large temporary tables. When something like this happens, a system called HAProxy detects that the replica has become unhealthy and removes it from rotation (which is why it appeared to come back so quickly). HAProxy polls each DB for health at a specific interval, so while it is able to recover pretty quickly, there can be momentary outages (which is what causes backend read errors (timeouts), 404s, or 500s) that affect the subset of users whose traffic is being handled by that replica. Hope that helps clarify.
No. It may have been for an external server Scratch internal crash, but the external server (This is the server we control) is actually 404ed by a downtime error, while the internal (MIT) Servers have crashed, causing intensity 404.
Haha, you are arguing against the person who controls the servers.
Carpit999
Scratcher
100+ posts

Downtime

iamunknown2 wrote:

Carpit999 wrote:

WooHooBoy wrote:

cars_456 wrote:

WooHooBoy wrote:

Carpit999 wrote:

[/offtopic] (Extra text removed by Carpit999 - Please be positive.)
I'm honestly wondering what the purpose of this post was.


Cars, can we close the topic now? This thread is derailing seriously
I still have not found out why the downtime has been happening.
Then you didn't look

thisandagain wrote:

So many conspiracy theories! LOL.

Here is what happened:


In this particular instance one of our many DB replicas had an issue with memory pressure on MySQL indexes, which led to a huge drop in performance. This can be caused by custom queries that create large temporary tables. When something like this happens, a system called HAProxy detects that the replica has become unhealthy and removes it from rotation (which is why it appeared to come back so quickly). HAProxy polls each DB for health at a specific interval, so while it is able to recover pretty quickly, there can be momentary outages (which is what causes backend read errors (timeouts), 404s, or 500s) that affect the subset of users whose traffic is being handled by that replica. Hope that helps clarify.
No. It may have been for an external server Scratch internal crash, but the external server (This is the server we control) is actually 404ed by a downtime error, while the internal (MIT) Servers have crashed, causing intensity 404.
Haha, you are arguing against the person who controls the servers.
?

By the way, your signature suggests emacs, which is not available for Windows.
WooHooBoy
Scratcher
1000+ posts

Downtime

Carpit999 wrote:

iamunknown2 wrote:

Carpit999 wrote:

WooHooBoy wrote:

cars_456 wrote:

WooHooBoy wrote:

Carpit999 wrote:

[/offtopic] (Extra text removed by Carpit999 - Please be positive.)
I'm honestly wondering what the purpose of this post was.


Cars, can we close the topic now? This thread is derailing seriously
I still have not found out why the downtime has been happening.
Then you didn't look

thisandagain wrote:

So many conspiracy theories! LOL.

Here is what happened:


In this particular instance one of our many DB replicas had an issue with memory pressure on MySQL indexes, which led to a huge drop in performance. This can be caused by custom queries that create large temporary tables. When something like this happens, a system called HAProxy detects that the replica has become unhealthy and removes it from rotation (which is why it appeared to come back so quickly). HAProxy polls each DB for health at a specific interval, so while it is able to recover pretty quickly, there can be momentary outages (which is what causes backend read errors (timeouts), 404s, or 500s) that affect the subset of users whose traffic is being handled by that replica. Hope that helps clarify.
No. It may have been for an external server Scratch internal crash, but the external server (This is the server we control) is actually 404ed by a downtime error, while the internal (MIT) Servers have crashed, causing intensity 404.
Haha, you are arguing against the person who controls the servers.
?
Thisandagain is an admin and he's right in this case.

By the way, your signature suggests emacs, which is not available for Windows.
Use cygwin to run it?
iamunknown2
Scratcher
1000+ posts

Downtime

Carpit999 wrote:

No. It may have been for an external server Scratch internal crash, but the external server (This is the server we control) is actually 404ed by a downtime error, while the internal (MIT) Servers have crashed, causing intensity 404.
I think you're making this up as you go along to look smart.

There is no external server or internal server. There is a client we control and a server they control.

If my client (that is, my phone) crashes, the others won't crash along with it. Each client is independent. For everyone's client to fail would make as much sense as everyone on the planet fainting at the same time.
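
A minimal sketch of that client/server split in Python, just to pin down the terms; the URL is only an example. The client half runs on your device, everything behind the URL runs on theirs.

import urllib.request

# The client half: runs on YOUR device. If this process crashes, only this
# one client is affected; the server and every other client keep going.
def fetch(url):
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status, resp.read(100)

status, first_bytes = fetch("https://scratch.mit.edu/")
print(status)  # 200 if the server answered; an exception if it did not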
Carpit999
Scratcher
100+ posts

Downtime

iamunknown2 wrote:

Carpit999 wrote:

No. It may have been for an external server Scratch internal crash, but the external server (This is the server we control) is actually 404ed by a downtime error, while the internal (MIT) Servers have crashed, causing intensity 404.
I think you're making this up as you go along to look smart.

There is no external server or internal server. There is a client we control and a server they control.

If my client (that is, my phone) crashes, the others won't crash along with it. Each client is independent. For everyone's client to fail would make as much sense as everyone on the planet fainting at the same time.
(That's 75-40 wrong)
NoMod-Programming
Scratcher
1000+ posts

Downtime

WooHooBoy wrote:

Carpit999 wrote:

iamunknown2 wrote:

Carpit999 wrote:

WooHooBoy wrote:

cars_456 wrote:

WooHooBoy wrote:

Carpit999 wrote:

[/offtopic] (Extra text removed by Carpit999 - Please be positive.)
I'm honestly wondering what the purpose of this post was.


Cars, can we close the topic now? This thread is derailing seriously
I still have not found out why the downtime has been happening.
Then you didn't look

thisandagain wrote:

So many conspiracy theories! LOL.

Here is what happened:


In this particular instance one of our many DB replicas had an issue with memory pressure on MySQL indexes, which led to a huge drop in performance. This can be caused by custom queries that create large temporary tables. When something like this happens, a system called HAProxy detects that the replica has become unhealthy and removes it from rotation (which is why it appeared to come back so quickly). HAProxy polls each DB for health at a specific interval, so while it is able to recover pretty quickly, there can be momentary outages (which is what causes backend read errors (timeouts), 404s, or 500s) that affect the subset of users whose traffic is being handled by that replica. Hope that helps clarify.
No. It may have been for an external server Scratch internal crash, but the external server (This is the server we control) is actually 404ed by a downtime error, while the internal (MIT) Servers have crashed, causing intensity 404.
Haha, you are arguing against the person who controls the servers.
?
Thisandagain is an admin and he's right in this case.
Yup. That's like arguing with Obama on how to run the US
Jonathan50
Scratcher
1000+ posts

Downtime

This topic has seriously derailed and has already been solved…
Carpit999
Scratcher
100+ posts

Downtime

liam48D wrote:

Thanks for the explanation, thisandagain

Carpit999 wrote:

scratchyone wrote:

cars_456 wrote:

liam48D wrote:

Carpit999 wrote:

liam48D wrote:

Carpit999 wrote:

Well, the most common cause is the WWW. The web has been planned to be COMPLETELY reset. The server is horrible too.
The world wide web is going to be completely reset? Well, that might cause some issues… Source?
GWR
Huh?
Guinness World Records. Proved by real sources, real people.
What do world records have to do with the internet being reset?
Check GWR 2015
Some of us don't own the GWR – could you quote it, with the page number, so that it can be confirmed by somebody else who owns it? I'm interested
I'd have to get my GWR book. Let me take a minute to find it
jhfHIDDEN!OMGLOOEOEKLFJKFHFHDFJHJDFS
Carpit999
Scratcher
100+ posts

Downtime

Jonathan50 wrote:

This topic has seriously derailed and has already been solved…
No. I still have NO idea why the downtime happened. Thisandagain's explanation may be right, but that is not enough for me. The internal servers crashed. OK, solved. One more thing.
iamunknown2
Scratcher
1000+ posts

Downtime

Carpit999 wrote:

iamunknown2 wrote:

Carpit999 wrote:

No. It may have been for an external server Scratch internal crash, but the external server (This is the server we control) is actually 404ed by a downtime error, while the internal (MIT) Servers have crashed, causing intensity 404.
I think you're making this up as you go along to look smart.

There is no external server or internal server. There is a client we control and a server they control.

If my client (that is, my phone) crashes, the others won't crash along with it. Each client is independent. For everyone's client to fail would make as much sense as everyone on the planet fainting at the same time.
(That's 75-40 wrong)
What is your idea of how the server sends stuff to my phone, then?
TheMonsterOfTheDeep
Scratcher
1000+ posts

Downtime

Carpit999 wrote:

Jonathan50 wrote:

This topic has seriously derailed and has already been solved…
No. I still have NO idea why the downtime happened. Thisandagain's explanation may be right, but that is not enough for me. The internal servers crashed. OK, solved. One more thing.
Thisandagain explained why the downtime happened.

The only way you wouldn't know is if you didn't read his post…
iamunknown2
Scratcher
1000+ posts

Downtime

Carpit999 wrote:

Jonathan50 wrote:

This topic has seriously derailed and has already been solved…
No. I still have NO idea why the downtime happened. Thisandagain's explanation may be right, but that is not enough for me. The internal servers crashed. OK, solved. One more thing.
Thisandagain is part of the SCRATCH TEAM. He is probably speaking on behalf of the guy(s) that run(s) the servers.
Why do you think the server owner is wrong?
Firedrake969
Scratcher
1000+ posts

Downtime

iamunknown2 wrote:

Carpit999 wrote:

Jonathan50 wrote:

This topic has seriously derailed and has already been solved…
No. I still have NO idea why the downtime happened. Thisandagain's explanation may be right, but that is not enough for me. The internal servers crashed. OK, solved. One more thing.
Thisandagain is part of the SCRATCH TEAM. He is probably speaking on behalf of the guy(s) that run(s) the servers.
Why do you think the server owner is wrong?
0.o
Carpit999
Scratcher
100+ posts

Downtime

liam48D wrote:

Thanks for the explanation, thisandagain

Carpit999 wrote:

scratchyone wrote:

cars_456 wrote:

liam48D wrote:

Carpit999 wrote:

liam48D wrote:

Carpit999 wrote:

Well, the most common cause is the WWW. The web has been planned to be COMPLETELY reset. The server is horrible too.
The world wide web is going to be completely reset? Well, that might cause some issues… Source?
GWR
Huh?
Guinness World Records. Proved by real sources, real people.
What do world records have to do with the internet being reset?
Check GWR 2015
Some of us don't own the GWR – could you quote it, with the page number, so that it can be confirmed by somebody else who owns it? I'm interested
Pg. 139, GWR 2015 60th Anniversary: "Seven experts, including Moussa Guebra (BFA), left, form the first group capable of rebooting the World Wide Web, or at least certain aspects of it, in the event of a major catastrophe such as a cyber attack. They are the backup for a security system called DNSSEC that adds a digital signature to Web site names, helping in the battle to stop hackers redirecting surfers to fake sites. Should a disaster take out the DNSSEC, five of the seven global keyholders would be summoned to a secure US location to save the day. Each of the team has a swipe card that provides 1/5 of the reboot key." See? They are planning to reset the whole internet.
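
The "five of seven keyholders, each holding a fragment" idea the book describes is a standard threshold secret-sharing arrangement. Here is a minimal sketch of Shamir's scheme in Python with made-up numbers; it illustrates the general technique only, not how the actual DNSSEC key ceremony is implemented.

import random

PRIME = 2**127 - 1  # toy prime field, big enough for a demo secret

def make_shares(secret, k, n):
    # Hide the secret as the constant term of a random degree-(k-1)
    # polynomial; each share is one point on that polynomial.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, k=5, n=7)
assert reconstruct(shares[:5]) == 123456789   # any 5 of the 7 shares suffice
assert reconstruct(shares[2:7]) == 123456789  # a different 5 also works

Any five shares reconstruct the key, while four or fewer reveal nothing about it; it is a recovery mechanism, not a reset switch.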
Carpit999
Scratcher
100+ posts

Downtime

(Username removed by Carpit999) wrote:

iamunknown2 wrote:

Carpit999 wrote:

Jonathan50 wrote:

This topic has seriously derailed and has already been solved…
No. I still have NO idea why the downtime happened. Thisandagain's explanation may be right, but that is not enough for me. The internal servers crashed. OK, solved. One more thing.
Thisandagain is part of the SCRATCH TEAM. He is probably speaking on behalf of the guy(s) that run(s) the servers.
Why do you think the server owner is wrong?
I DON'T
Firedrake969
Scratcher
1000+ posts

Downtime

So there's some truth to that, but it's not “rebooting” the internet - it's in case the DNSSEC ever gets taken down, so you can still use “normal” URLs instead of IPs.

No clue why you think it's related to Scratch, especially when the guy IN CHARGE OF THE SCRATCH SERVERS tells you why…
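
Concretely, what DNS gives you is name-to-address lookup, nothing Scratch-specific. A one-line sketch in Python; the hostname is just an example.

import socket

# DNS resolves a human-readable name to the IP your browser really connects
# to. If name resolution broke, you could in principle still use the raw IP.
ip = socket.gethostbyname("scratch.mit.edu")
print(ip)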
Carpit999
Scratcher
100+ posts

Downtime

Firedrake969 wrote:

So there's some truth to that, but it's not “rebooting” the internet - it's in case the DNSSEC ever gets taken down, so you can still use “normal” URLs instead of IPs.

No clue why you think it's related to Scratch, especially when the guy IN CHARGE OF THE SCRATCH SERVERS tells you why…
NO dip!
iamunknown2
Scratcher
1000+ posts

Downtime

Carpit999 wrote:

(Username removed by Carpit999) wrote:

iamunknown2 wrote:

Carpit999 wrote:

Jonathan50 wrote:

This topic has seriously derailed and has already been solved…
No. I still have NO idea why the downtime happened. Thisandagain's explanation may be right, but that is not enough for me. The internal servers crashed. OK, solved. One more thing.
Thisandagain is part of the SCRATCH TEAM. He is probably speaking on behalf of the guy(s) that run(s) the servers.
Why do you think the server owner is wrong?
I DON'T
Then why do you doubt thisandagain's explanation?
