The tabs <-> spaces conversion functions did not create a ScopedEdition,
leading to uncommitted modifications. Fixing this fixed the interactive
error, but the error still existed in non-interactive contexts, so redo
now considers that there is no redo child if there are uncommitted
modifications (which is correct, as it means we are currently creating
a new leaf in the undo tree).
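Roughly, the new redo-side check (a sketch with hypothetical names, not
the actual code):

    // Uncommitted modifications mean we are creating a new leaf, so the
    // current history node cannot have a redo child yet.
    bool has_redo_child(const Buffer& buffer)
    {
        return not buffer.is_modified()
           and buffer.current_history_node().has_child();
    }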
Fixes #5124
Only record non-synthesized insertions, removing the need to
re-record on replay and fixing the last replay getting dropped by
macro execution.
Fixes #5122
<c-n>/<c-p> handling in insert mode always dropped the last key in
the last_insert() vector (in order to replace it with the actual
completion text inserted); this was not valid for synthesized keys,
which are not added to that vector in the first place.
Take the opportunity to merge the insert completion handling code
between <c-n>/<c-p> and direct menu selection.
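A sketch of the merged handler (hypothetical names; the point is that
the trigger key is only popped when it was actually recorded):

    void insert_completion(StringView text, bool trigger_was_recorded)
    {
        // <c-n>/<c-p> recorded their trigger key and need it replaced by
        // the completion text; synthesized keys were never recorded, so
        // there is nothing to pop for them.
        if (trigger_was_recorded)
            m_last_insert.pop_back();
        for (auto c : text)
            m_last_insert.push_back(Key{c});
        insert_text(text);
    }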
Fixes #5120
The wrap highlighter was pulling all lines up to the cursor, regardless
of whether the cursor was going to be made visible. This could lead to
a display buffer containing many more lines than the actual viewport,
which could eventually lead to a menu being anchored out of view, which
in turn led to past-the-end buffer accesses.
Fix the wrap highlighter not to pull lines up to the cursor when it
is not tasked with making the cursor visible, and always trim the
eventual extra lines after highlighting.
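The trimming step, roughly (hypothetical names):

    // After highlighting, drop any display lines beyond the viewport so
    // later anchoring can never index past the end of the display buffer.
    if (display_buffer.lines().size() > viewport_height)
        display_buffer.lines().resize(viewport_height);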
Fixes #5118
After extracting the whole buffer content, a line can end up with
only non-range highlighters pending, which makes its range become
0.0,0.0. After running highlighting on the extracted range, the line
gets re-inserted, but taking the min of the existing range and the
inserted range wrongly returns 0.0. Avoid this by detecting that the
0.0,0.0 range does not actually mean anything when we have no ranged
atoms.
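A sketch of the guard (hypothetical names):

    // A {0.0, 0.0} range on a line with no ranged atoms is a placeholder,
    // not a real position; taking the min against it would wrongly pin
    // the merged range to 0.0.
    BufferRange merge_ranges(BufferRange existing, BufferRange inserted,
                             bool has_ranged_atoms)
    {
        if (not has_ranged_atoms) // existing range is meaningless
            return inserted;
        return {std::min(existing.begin, inserted.begin),
                std::max(existing.end, inserted.end)};
    }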
Fixes #5001
Framed info boxes need one cell for the border and one for inner
space padding; that's 4 extra columns when counting both sides.
Frameless boxes have neither border nor padding, so 0 extra columns.
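In code form (a sketch, hypothetical name):

    // border (1 cell) + inner padding (1 cell), counted on both sides
    ColumnCount info_box_extra_columns(bool framed)
    {
        return framed ? 2 * (1 + 1) : 0;
    }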
Closes #5106
This sometimes allocates saves too eagerly, but it removes a branch
in save release that executed on every thread failure, which seems
slightly better.
When creating a new save, we had to clear all iterators so that they
held valid values. This operation is relatively costly because it gets
optimized to a memset whose call overhead is pretty high (as we usually
have fewer than 32 bytes to clear). Bypass this by storing a bitmap of
valid iterators.
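A sketch of the layout (hypothetical names):

    template<typename Iterator>
    struct Saves
    {
        uint32_t valid_mask = 0; // bit i set => iterators[i] holds a value
        Iterator iterators[32];  // intentionally left uninitialized

        void set(size_t i, Iterator it)
        {
            iterators[i] = it;
            valid_mask |= 1u << i;
        }
        bool is_valid(size_t i) const { return valid_mask & (1u << i); }
    };

Clearing a save is now a single store to valid_mask instead of a memset
over all iterator slots.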
Consider
    sh -c 'sleep 5 & sleep inf'
Since the shell is non-interactive, there is no job control.
This makes the shell spawn the "sleep 5" process in the shell's own
process group[1] - presumably because only interactive shells need to
forward signals to all processes in their foreground job.
When this non-interactive shell process is cancelled with SIGINT,
"sleep 5" keeps running [2]. At least the dash shell implements this
by running "signal(SIGINT, SIG_IGN)" in the forked child. Unless the
child process explicitly overrides that (to SIG_DFL for example), it
will ignore SIGINT. Probably the reason for this behavior is to feign
consistency with interactive shells, without needing to actually run
background jobs in a dedicated process group like interactive shells
do. Bash documents this behavior[3]:
> When job control is not in effect, asynchronous commands ignore
> SIGINT and SIGQUIT in addition to these inherited handlers.
Several of our scripts[4] - most prominently ":make" - use the
"</dev/null >/dev/null 2>&1 &" pattern to run potentially long-running
processes in the background, without blocking the editor.
On <c-c>, we send SIGINT to our process group.
As explained above, this will generally not terminate any background processes.
This problem has been masked by a behavior that is unique to using
both Bash and its "eval" builtin. Given
    nop %sh{
        rm -f /tmp/fifo
        mkfifo /tmp/fifo
        (
            eval make >/tmp/fifo 2>&1 &
        ) >/dev/null 2>&1 </dev/null
    }
    edit -fifo /tmp/fifo *my-fifo*
When running this and pressing Control+C, Bash actually terminates
the Make processes. However, if I remove the "eval", it no longer does.
This doesn't seem like something we should rely on.
Other commands like ":git blame" don't use "eval", so they cannot be
cancelled today.
Fix these issues by sending SIGTERM instead of SIGINT, which should
apply to the whole process group with pretty much the same effect.
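The change itself is tiny; roughly (assuming the handler previously
sent SIGINT to the process group):

    // pid 0 targets every process in our own process group; unlike
    // SIGINT, SIGTERM is not ignored by "&" children of non-interactive
    // shells.
    kill(0, SIGTERM); // was: kill(0, SIGINT)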
Barely tested, let's see if this breaks some weird build system.
In the future, we might allow more fine-grained control over which
processes are cancelled by <c-c>.
{{{
Alternative solution:
With the above fix, scripts can opt out of being terminated by <c-c>
by using setsid (though that's unfortunately not POSIX, and may
require nesting quotes) or the classic Unix double-fork trick to
create a daemon process.
Though it is certainly possible that someone expects commands like
this to survive <c-c>:
    nop %sh{ tail -f my-log </dev/null 2>&1 | grep some-error > some-file 2>&1 & }
I think it would be ideal to stick to SIGINT and match the semantics
of a non-interactive shell, to avoid muddying the waters.
Background processes could still **opt into** being terminated by
<c-c>, for example by providing a simple program in libexec/ that does
    // interruptible.c
    #include <signal.h>
    #include <unistd.h>

    int main(int argc, char** argv) {
        signal(SIGINT, SIG_DFL);  // undo the SIG_IGN inherited by "&" children
        execv(argv[1], &argv[1]); // replace ourselves with the wrapped command
        return 1;                 // only reached if execv fails
    }
used as
diff --git a/rc/tools/make.kak b/rc/tools/make.kak
index b88f7e538..f6e041908 100644
--- a/rc/tools/make.kak
+++ b/rc/tools/make.kak
@@ -16,3 +16,3 @@ define-command -params .. \
mkfifo ${output}
- ( eval "${kak_opt_makecmd}" "$@" > ${output} 2>&1 & ) > /dev/null 2>&1 < /dev/null
+ ( eval "interruptible ${kak_opt_makecmd}" "$@" > ${output} 2>&1 & ) > /dev/null 2>&1 < /dev/null
Unfortunately, it's inconvenient to add "interruptible" to commands
like clang-parse and git-blame because they background a whole subshell
with many commands, so we'd need to nest quotes. Also, I'm not sure
if this brings any benefit.
So I didn't explore this further yet, although we can definitely do that.
}}}
Fixes #3751
[1]: https://stackoverflow.com/questions/45106725/why-do-shells-ignore-sigint-and-sigquit-in-backgrounded-processes/45106961#45106961
[2]: https://unix.stackexchange.com/questions/372541/why-doesnt-sigint-work-on-a-background-process-in-a-script/677742#677742
[3]: https://www.gnu.org/software/bash/manual/html_node/Signals.html
[4]: clang-parse, ctags-*, git blame, git log, gopls references,
grep, jedi-complete, lint-*, make; I don't think any of these
should be uninterruptible.
Insert repeat will now only record non-synthesized keys, and when
played back will execute mappings as well. While constructing tests,
with the specific goal of fixing
https://github.com/alexherbo2/auto-pairs.kak/issues/38, this appeared
to be the best approach. Another option would have been to evaluate the
mappings only when recording, but that gave other issues (see
tests/normal/repeat-insert/repeat-insert-mapped).
At this point, repeat-insert may essentially be just a hardcoded macro;
at least I haven't identified a difference. If that is really the case,
it may make sense to give it a dedicated register and implement it as
a macro.
Fixes #3600
I saw a crash when running
    git log --oneline %arg{@}
    hook -once buffer NormalIdle .* %{
        execute-keys -draft \
            %{gk!} \
            %{git diff --quiet || echo "Unstaged changes";} \
            %{git diff --quiet --cached || echo "Staged changes";} \
            <ret>
    }
Backtrace (I still have GDB attached):
#4 0x00006502c740b13f in Kakoune::operator- (rhs=..., lhs=...) at /home/johannes/git/kakoune/src/units.hh:33
33 { return RealType(lhs.m_value - rhs.m_value); }
(gdb) up
#5 Kakoune::Buffer::next (coord=..., this=0x6502c90d7ff0) at /home/johannes/git/kakoune/src/buffer.inl.hh:18
18 if (coord.column < m_lines[coord.line].length() - 1)
(gdb) up
#6 FifoWatcher::read_fifo (this=0x6502c90d9e48) at buffer_utils.cc:252
252 m_buffer.erase(pos, m_buffer.next(pos));
This was introduced in 582c3c56b (Do not add trailing newline to
non-scrolling fifo buffers, 2024-01-28).
The problem seems to be that we call "m_buffer.next()" on a position
that is past the end of the buffer, so m_lines[coord.line] is
out-of-bounds. Fix it.
For some reason I have not managed to reproduce the crash, not even
with sanitize=address.
There might be another problem: m_had_trailing_newline is intentionally
uninitialized because it is supposed to be read only on the second
read() with a positive return value. Unfortunately, I think it's
possible that e.g. a NormalIdle hook inserts some text before the
first positive read(). Then, these lines
    const bool is_first = pos == BufferCoord{0,0};
    if (not m_scroll and (is_first or m_had_trailing_newline))
        pos = m_buffer.next(pos);
will read the uninitialized "m_had_trailing_newline". Fix that too,
to be on the safe side. Sadly, I don't have a test for this one either,
so I'm not sure.
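The defensive part is just default-initializing the member (a sketch):

    // Ensure a hook that inserts text before the first positive read()
    // observes a defined value.
    bool m_had_trailing_newline = false;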
If the first byte in the multi-byte utf8 sequence does not match,
it means the "other" character is not set, so none of the sequence
bytes will match (as they all have the MSB set). This tightens
the critical loop, which ends up running faster in most cases.
Avoid the costly shared object function call in the common case
where codepoints are ascii.
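A sketch of the fast path (hypothetical names):

    inline char32_t read_codepoint(const char*& it, const char* end)
    {
        unsigned char byte = *it;
        if (byte < 0x80) // MSB clear: plain ascii, decode inline
        {
            ++it;
            return byte;
        }
        // hypothetical out-of-line decoder for multi-byte sequences
        return read_codepoint_multibyte(it, end);
    }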
The regex benchmark gets a nice speedup:
Regex                                 | Before   | After
--------------------------------------+----------+---------
'Twain'                               | 25 ms    | 15 ms
'(?i)Twain'                           | 74 ms    | 57 ms
'[a-z]shing'                          | 323 ms   | 303 ms
'Huck[a-zA-Z]+|Saw[a-zA-Z]+'          | 26 ms    | 17 ms
'\b\w+nn\b'                           | 424 ms   | 393 ms
'[a-q][^u-z]{13}x'                    | 869 ms   | 815 ms
'Tom|Sawyer|Huckleberry|Finn'         | 33 ms    | 24 ms
'(?i)Tom|Sawyer|Huckleberry|Finn'     | 319 ms   | 281 ms
'.{0,2}(Tom|Sawyer|Huckleberry|Finn)' | 1294 ms  | 1293 ms
'.{2,4}(Tom|Sawyer|Huckleberry|Finn)' | 1470 ms  | 1429 ms
'Tom.{10,25}river|river.{10,25}Tom'   | 69 ms    | 61 ms
'[a-zA-Z]+ing'                        | 447 ms   | 408 ms
'\s[a-zA-Z]{0,12}ing\s'               | 539 ms   | 543 ms
'([A-Za-z]awyer|[A-Za-z]inn)\s'       | 588 ms   | 552 ms
'["'][^"']{0,30}[?!\.]["']'           | 92 ms    | 81 ms
For hash maps, using fnv1a is faster as it is a much simpler algorithm
that we can afford to inline. For files, murmur3 should win as it
processes bytes four at a time.
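For reference, the 32-bit fnv1a is one xor and one multiply per byte
(standard published constants):

    inline uint32_t fnv1a(const char* data, size_t len)
    {
        uint32_t hash = 2166136261u; // FNV offset basis
        for (size_t i = 0; i < len; ++i)
        {
            hash ^= (unsigned char)data[i];
            hash *= 16777619u; // FNV prime
        }
        return hash;
    }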
Internally, all lines have a trailing "\n".
Buffers created empty (like fifo buffers) start with a single line.
When reading data into fifo buffers, we insert *before* the last line's
trailing newline ("last newline"). This enables autoscrolling (enabled
with "edit -scroll") as long as the cursor is on the last newline.
When autoscrolling is disabled, we have a special case to insert
*after* the last newline. This means that a cursor on that newline
won't be moved. Then we transplant the newline character from the
beginning to the end of the buffer. This special case happens only on
the very first fifo read; on subsequent reads, the cursor at position
1.1 will not be moved anyway because insertions happen below 1.1.
Since we always insert (effectively) before the last newline, fifo
buffers have a trailing empty line.
For autoscrolling buffers this seems correct; it gives users an
obvious way to toggle autoscrolling.
For non-scrolling buffers the newline is redundant. Remove it.
This requires keeping track of whether the last newline comes from
the fifo, or was added by us. The shortest fix I could find
is to always append to the buffer if not scrolling, and then delete
the added newline character if applicable.
    m_buffer.insert(m_scroll ? pos : m_buffer.next(pos), StringView(data, data+count));
    if (not m_scroll and not m_had_trailing_newline)
        m_buffer.erase(pos, m_buffer.next(pos));
Maybe that's the best fix overall; but erasing at the end seems better
than erasing in the middle, so do that whenever possible.
Reported in https://lists.sr.ht/~mawww/kakoune/%3CZbTK7qit9nzvrMkx@gmail.com%3E
It turns out diffing was pretty fast, but applying the diff was
sub-optimal, as it was constantly inserting/erasing lines, which
led to lots of unnecessary shifting. Fix this by manually tracking
read/write iterators and only shifting when necessary (on keeps
and inserts).
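A sketch of the approach, with standard types standing in for the
buffer's line storage (the in-place variant tracks a write cursor into
the same vector instead of building a new one):

    enum class Op { Keep, Remove, Insert };
    struct DiffOp { Op op; size_t len; const std::string* lines; };

    std::vector<std::string> apply_diff(std::vector<std::string>& old_lines,
                                        const std::vector<DiffOp>& diff)
    {
        std::vector<std::string> result;
        size_t read = 0;
        for (auto& d : diff)
        {
            switch (d.op)
            {
            case Op::Keep: // shift the kept lines exactly once
                for (size_t i = 0; i < d.len; ++i)
                    result.push_back(std::move(old_lines[read++]));
                break;
            case Op::Remove: // just advance the read cursor
                read += d.len;
                break;
            case Op::Insert:
                result.insert(result.end(), d.lines, d.lines + d.len);
                break;
            }
        }
        return result;
    }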