Currently we are rebasing the timeline every time we receive a frame from the server. This is needlessly expensive and can be improved.
Two perf holes:

1. the server is sending us more frames soon
2. we have a lot of pending mutations that haven't been seen by the server
(2) can sometimes imply (1); however, (1) can also happen on its own if other clients are sending lots of changes.
To optimize for (1), the server can send the client a hint containing the number of outstanding frames. This lets the client make better decisions about when to rebase.
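As a minimal sketch of how the client could act on such a hint (the field name `outstanding_frames_hint`, the policy struct, and the threshold are illustrative assumptions, not the actual protocol):

```rust
// Hypothetical rebase policy driven by a server-provided hint of how
// many frames are still in flight. Names are assumptions for illustration.
struct RebasePolicy {
    /// Defer rebasing while the server reports more than this many
    /// frames still on the way.
    max_outstanding_frames: u32,
}

impl RebasePolicy {
    /// Returns true when it is worth rebasing now, rather than waiting
    /// for the remaining frames to arrive and rebasing once at the end.
    fn should_rebase(&self, outstanding_frames_hint: u32) -> bool {
        outstanding_frames_hint <= self.max_outstanding_frames
    }
}
```

With `max_outstanding_frames: 0`, a hint of 3 defers the rebase until the server reports its queue has drained, collapsing several rebases into one.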
To optimize for (2), we can take the following facts into account:

- how many pending mutations have yet to be acknowledged by the server
- how fast other timelines are changing (we can determine this by looking at the timelines table in the db)
We can come up with heuristics based on these facts to balance user experience against rebase cost.
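One possible heuristic combining the two facts above might look like this (the thresholds and function shape are assumptions, sketched purely for illustration):

```rust
// Hypothetical heuristic: a rebase replays every pending mutation, so
// cost grows with the pending queue; and when remote timelines churn
// quickly, the result of a rebase goes stale almost immediately.
fn should_rebase(pending_mutations: usize, remote_frames_per_sec: f64) -> bool {
    // Small queues are cheap to replay regardless of churn. (assumed threshold)
    const MAX_CHEAP_PENDING: usize = 10;
    // Above this rate, an expensive rebase won't stay current for long. (assumed)
    const HIGH_CHURN: f64 = 5.0;

    // Cheap rebases are always fine; expensive ones are only worth it
    // when remote churn is low enough for the result to stick.
    pending_mutations <= MAX_CHEAP_PENDING || remote_frames_per_sec < HIGH_CHURN
}
```

The exact thresholds would need tuning against real workloads; the point is only that both signals feed a single go/no-go decision.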
Note: we can also improve rebase perf through mutation batching. Currently we pass a single mutation at a time to the reducer, which involves many round trips through wasmi and is not very fast. It would be much faster to pass batches of mutations to the reducer, allowing it to optimize internally, for example by merging many mutations into a single insert/update statement.
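A sketch of the merging idea, outside of wasmi: the reducer receives a slice of mutations and coalesces consecutive inserts into the same table into one multi-row INSERT. The `Mutation` shape and the generated SQL strings are assumptions for illustration, not the real reducer API.

```rust
// Hypothetical mutation shape; the real reducer's types will differ.
enum Mutation {
    Insert { table: String, values: String },
    Update { table: String, set: String },
}

/// Coalesce consecutive inserts into the same table into a single
/// multi-row INSERT, leaving other mutations as individual statements.
fn reduce_batch(mutations: &[Mutation]) -> Vec<String> {
    let mut stmts = Vec::new();
    let mut i = 0;
    while i < mutations.len() {
        match &mutations[i] {
            Mutation::Insert { table, .. } => {
                // Gather the run of inserts targeting the same table.
                let mut rows = Vec::new();
                while let Some(Mutation::Insert { table: t, values }) = mutations.get(i) {
                    if t != table {
                        break;
                    }
                    rows.push(format!("({values})"));
                    i += 1;
                }
                stmts.push(format!("INSERT INTO {table} VALUES {}", rows.join(", ")));
            }
            Mutation::Update { table, set } => {
                stmts.push(format!("UPDATE {table} SET {set}"));
                i += 1;
            }
        }
    }
    stmts
}
```

A batch of N inserts then crosses the wasmi boundary once and executes as one statement, instead of N reducer invocations each paying the interpreter round trip.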