Hannu Krosing <hannu@skype.net> writes:
> > But that said, realistically *any* solution has to obtain a lock at some time
> > to make the schema change. I would say pretty much any O(1) (constant time)
> > outage is at least somewhat acceptable as contrasted with the normal index
> > build which locks out other writers for at least O(n lg n) time. Anything on
> > the order of 100ms is probably as good as it gets here.
>
> For me any delay less than the client timeout is acceptable and anything
> more than that is not. N sec is ok, N+1 is not. It's as simple as that.
I don't think the client timeout is directly relevant here. If your client
timeout is 20s and the outage lasts 19s, how many requests have queued up
behind it? If you normally process requests in under 200ms and receive 10
requests per second (so you're handling at least 2 simultaneously), you now
have 190 requests queued up. Those requests consume resources and will slow
down your server. If they slow things down too much, you will start missing
your 200ms deadline.
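To make the arithmetic concrete, here is a quick Python sketch using the
hypothetical numbers above (the in-flight count is just Little's law):

    # Back-of-envelope queueing arithmetic for a complete stall.
    # All figures are the hypothetical ones from the paragraph above.
    arrival_rate = 10      # requests per second
    service_time = 0.200   # seconds per request in the normal case
    stall = 19.0           # seconds the server is unresponsive

    in_flight = arrival_rate * service_time  # Little's law: ~2 concurrent
    backlog = arrival_rate * stall           # ~190 requests queued
    print(in_flight, backlog)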
More likely your system is engineered to use queueing and simultaneous
dispatch to absorb spikes in load up to a certain margin. Say you know it can
handle spikes of up to 2x the regular rate: then you can tolerate a service
outage of up to your 200ms deadline. If you can handle spikes of up to 4x the
regular rate, you can tolerate an outage of up to 600ms.
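One way to arrive at those numbers (my reading of the argument, not anything
measured): the spare capacity, (headroom - 1) times the regular rate, is what
drains the backlog, and if you require the drain to finish within one
deadline you get max_outage = (headroom - 1) * deadline. A quick sketch:

    # Implied rule: spare capacity drains the backlog, and the drain
    # must finish within one deadline for latencies to stay in bounds.
    deadline = 0.200                # seconds
    for headroom in (2, 4):         # capacity as a multiple of regular load
        max_outage = (headroom - 1) * deadline
        print(headroom, max_outage) # 2x -> 0.2s, 4x -> ~0.6s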
Moreover, even if you had the extra resources available to absorb a 19s
backlog of requests, how long would it take to clear it? If you have narrow
headroom on meeting the deadline in the first place, and now even less
because of the resources tied up by the queue, it will take you a long time
to clear the backlog.
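As a rough sketch of how sensitive that is to headroom (assuming the drain
rate stays constant, which is optimistic since the queue itself eats
resources and shrinks the effective headroom):

    # Time to clear a 19s backlog: it drains at (headroom - 1) times the
    # arrival rate, so a backlog of arrival_rate * 19s clears in
    # 19 / (headroom - 1) seconds.
    stall = 19.0
    for headroom in (1.1, 1.5, 2.0, 4.0):
        print(headroom, stall / (headroom - 1))  # ~190s, 38s, 19s, ~6.3s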
We periodically ran into load spikes or other performance problems that
caused things to get very slow and stay slow for a while. Letting things
settle out usually worked, but occasionally we had to restart the whole
system to clear out the request queue.
--
greg