Whenever a new history node is committed after some undo steps, instead
of creating a new branch in the undo graph, we first append the inverses
of the undone nodes, walking from the end of the undo list back to the
current position, before adding the new node.
For example, let's assume that the undo history is A-B-C, that a single
undo has been done (bringing us to state B), and that a new change D is
committed. Instead of creating a new branch starting at B, we add the
inverse of C (denoted ^C) at the end, and D afterwards. This results in
the undo history A-B-C-^C-D. Since C-^C collapses to a null change, this
is equivalent to A-B-D, but without having lost the C branch of the
history.
If a new change is committed while no undo has been done, the new history
node is simply appended to the list, as was the case previously.
This simplifies the user interaction: two bindings are now sufficient
to walk the entire undo history, instead of needing extra bindings to
switch branches whenever they occur.
The <a-u> and <a-U> bindings are now free.
It also simplifies the implementation, as the graph traversal and
branching code are not needed anymore. The parent and child of a node are
now respectively the previous and the next elements in the list, so there
is no need to store their ID as part of the node.
Only the committing of an undo group is slightly more complex, as inverse
history nodes need to be added depending on the current position in the
undo list.
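A minimal sketch of that committing logic, using hypothetical
Modification/HistoryNode/History types rather than Kakoune's actual
ones:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real buffer modification types.
enum class ModType { Insert, Erase };

struct Modification
{
    ModType type;
    size_t pos;
    std::string content;

    Modification inverse() const
    {
        return {type == ModType::Insert ? ModType::Erase : ModType::Insert,
                pos, content};
    }
};

struct HistoryNode { std::vector<Modification> modifications; };

// The inverse node reverts a node: each modification inverted, in
// reverse order.
static HistoryNode inverse(const HistoryNode& node)
{
    HistoryNode inv;
    for (auto it = node.modifications.rbegin();
         it != node.modifications.rend(); ++it)
        inv.modifications.push_back(it->inverse());
    return inv;
}

struct History
{
    std::vector<HistoryNode> nodes;
    size_t cursor = 0; // number of nodes currently applied

    // Undo/redo only move the cursor along the list, reverting or
    // applying the node at that position; no branch switching needed.
    void commit(HistoryNode node)
    {
        // If some nodes were undone, first append their inverses so
        // the list stays linear and no branch of the history is lost.
        const size_t size = nodes.size();
        for (size_t i = size; i > cursor; --i)
            nodes.push_back(inverse(nodes[i - 1]));
        nodes.push_back(std::move(node));
        cursor = nodes.size();
    }
};
```

With history A-B-C and the cursor at 2 after one undo, commit(D) yields
A-B-C-^C-D: C-^C cancels out, so redoing everything ends in state D,
while C stays reachable by undoing.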
The following article was the initial motivation for this change:
https://github.com/zaboople/klonk/blob/master/TheGURQ.md
This commit fixes a bug in Buffer::advance where it would first access
m_lines[-1], and only then check whether that access was out of bounds.
The check is now done before the invalid access can occur.
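A minimal sketch of the pattern being fixed, with an illustrative
helper over a plain line container rather than the real Buffer::advance:

```cpp
#include <string>
#include <vector>

struct Coord { int line; int column; };

// Illustrative: step a coordinate back to the previous line when the
// column underflows. The buggy version indexed lines[coord.line - 1]
// before verifying coord.line > 0; the check must come first.
Coord retreat_line(Coord coord, const std::vector<std::string>& lines)
{
    if (coord.line == 0)             // check before touching lines[-1]
        return {0, 0};
    --coord.line;
    coord.column = (int)lines[coord.line].size();
    return coord;
}
```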
The real technical limits are lines bigger than 2 GiB and buffers with
more than 2 Gi lines; refactor buffer loading to make it possible to
load files up to those limits.
Fix an overflow in the hash_data function at the same time.
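A minimal sketch of the hash side of the fix, assuming the overflow
came from using a narrower integer for the data length; the function
shown is an illustrative FNV-1a style hash, not Kakoune's actual
hash_data:

```cpp
#include <cstddef>
#include <cstdint>

// The point is that `len` and the loop index are size_t, so hashing
// data larger than 2 GiB no longer overflows a 32-bit counter.
uint32_t hash_data(const char* input, size_t len)
{
    uint32_t hash = 0x811c9dc5u;          // FNV offset basis
    for (size_t i = 0; i < len; ++i)
    {
        hash ^= (unsigned char)input[i];
        hash *= 0x01000193u;              // FNV prime
    }
    return hash;
}
```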
Only include the value for int/str/bool options; for the rest, just
write '<option name>=...'.
This should reduce the cost of some patterns, such as repeatedly adding
a value inside a list option.
It seems very unlikely that the actual value would be matched by a hook
regex string for non-primitive types.
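A minimal sketch of the filtering, with a hypothetical Option struct
standing in for the real option interface:

```cpp
#include <string>

// Hypothetical minimal option representation, for illustration only.
struct Option
{
    std::string name;
    std::string value;   // string representation of the value
    bool is_primitive;   // true for int/str/bool options
};

// Build the string matched by option-change hook filters: include the
// value only for primitive options, elide it otherwise.
std::string option_hook_param(const Option& opt)
{
    if (opt.is_primitive)
        return opt.name + "=" + opt.value;
    return opt.name + "=...";
}
```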
Do not access Buffer::m_changes to find the inserted range; return it
directly from Buffer::insert and Buffer::replace. This fixes a wrong
behaviour where replacing at end-of-file would lose the selected end of
line (as the implementation does not actually replace that end of
line).
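A minimal sketch of the interface change over a toy flat-string buffer;
the real methods operate on line/column coordinates and are more
involved:

```cpp
#include <cstddef>
#include <string>

struct BufferCoord { size_t pos; };           // simplified: flat offsets
struct BufferRange { BufferCoord begin, end; };

// Toy buffer, to show the interface change: insert/replace return the
// effective inserted range directly, so callers no longer need to
// recover it from the change list.
struct Buffer
{
    std::string content;

    BufferRange insert(BufferCoord pos, const std::string& text)
    {
        content.insert(pos.pos, text);
        return {pos, {pos.pos + text.size()}};
    }

    BufferRange replace(BufferCoord begin, BufferCoord end,
                        const std::string& text)
    {
        content.replace(begin.pos, end.pos - begin.pos, text);
        return {begin, {begin.pos + text.size()}};
    }
};
```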
When compiling the code with `-Wp,-D_GLIBCXX_ASSERTIONS`, the process
gets aborted, likely because iterators to standard containers are
not obtained in a safe way.
Fixes #3226.
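The exact offending code is not identified here, but a typical pattern
that trips _GLIBCXX_ASSERTIONS is forming a pointer or iterator from an
out-of-bounds operator[], sketched below:

```cpp
#include <vector>

int* end_pointer(std::vector<int>& v)
{
    // Unsafe: operator[] with index == size() is out of bounds and
    // aborts when _GLIBCXX_ASSERTIONS is defined.
    // return &v[v.size()];

    // Safe: pointer arithmetic on data() stays within the rules.
    return v.data() + v.size();
}
```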
The debug buffer is a bit special, as lots of events might mutate it;
permitting it to be modified leads to some buggy behaviour.
For example, `pipe` uses a ForwardChangeTracker to track buffer
changes, but when applied on a debug buffer with the profile flag on,
each shell execution triggers an additional modification of the buffer
while the changes are being applied, making an assertion fail because
changes are no longer happening in a forward way.
Trying to modify a debug buffer will now raise an error immediately.
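A minimal sketch of the guard, with a hypothetical is_debug flag and a
plain std::runtime_error standing in for the real buffer flags and
error type:

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

struct Buffer
{
    bool is_debug = false;   // hypothetical stand-in for a debug flag
    std::string content;

    void throw_if_read_only() const
    {
        // Reject the modification up front instead of letting later
        // change tracking hit a failing assertion.
        if (is_debug)
            throw std::runtime_error("the debug buffer cannot be modified");
    }

    void insert(size_t pos, const std::string& text)
    {
        throw_if_read_only();
        content.insert(pos, text);
    }
};
```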
clamp could change the ordering of coordinates when one of them was
past the end.
Say in a buffer with 1 line of 2 chars:
{0, 1} was clamped to {0, 1}
{1, 0} was clamped to {0, 0}
That reversed their ordering, and might be the root cause of the bug
lurking in the undo range computation.
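A minimal sketch of an ordering-preserving clamp, assuming a simplified
non-empty buffer with non-empty lines, where every coordinate at or past
the last valid one maps to that single last coordinate:

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Coord
{
    int line, column;
    bool operator<(const Coord& other) const
    {
        return line != other.line ? line < other.line
                                  : column < other.column;
    }
};

// Ordering-preserving clamp: anything at or beyond the last valid
// coordinate maps to that same coordinate; in-range coordinates only
// have their column clamped within their own line.
Coord clamp(Coord coord, const std::vector<std::string>& lines)
{
    const Coord last{(int)lines.size() - 1,
                     (int)lines.back().size() - 1};
    if (not (coord < last))
        return last;
    coord.line = std::max(coord.line, 0);
    coord.column = std::clamp(coord.column, 0,
                              (int)lines[coord.line].size() - 1);
    return coord;
}
```

With the 1-line, 2-char buffer above, {0, 1} and {1, 0} both map to
{0, 1}, so their relative order can no longer be reversed.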
We were preserving the history in that case, so on fifo buffers (which
set the NoUndo flag until the fifo is closed) we still had the history
from the "previous life" of the buffer, leading to crashes when trying
to apply it.
Fixes #1518
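A minimal sketch of the resulting behaviour, with hypothetical members:
when undo support is disabled, any existing history is dropped rather
than kept around:

```cpp
#include <cstddef>
#include <vector>

struct HistoryNode { /* buffer modifications */ };

// Hypothetical members; the point is that a buffer flagged NoUndo no
// longer keeps history accumulated in a "previous life".
struct Buffer
{
    bool no_undo = false;
    std::vector<HistoryNode> history;
    size_t history_cursor = 0;

    void commit_undo_group()
    {
        if (no_undo)
        {
            // Drop any stale history so it can never be applied to
            // content it no longer matches.
            history.clear();
            history_cursor = 0;
            return;
        }
        // ... normal history recording ...
    }
};
```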
Until now, the buffer had two recognized end coordinates: either
{ line_count, 0 } or { line_count - 1, line[line_count - 1].length }.
Now the only correct end coord is { line_count, 0 }, removing the need
for various special cases.
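A minimal sketch with an illustrative buffer type: the end coordinate
is always { line_count, 0 }, so checks against it no longer need a
second form:

```cpp
struct BufferCoord
{
    int line, column;
    bool operator==(const BufferCoord& other) const
    {
        return line == other.line and column == other.column;
    }
};

// Illustrative buffer: a single recognized end coordinate removes the
// special casing of { line_count - 1, last_line_length }.
struct Buffer
{
    int line_count = 1;

    BufferCoord end_coord() const { return {line_count, 0}; }
    bool is_end(BufferCoord coord) const { return coord == end_coord(); }
};
```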