- avoid UBSAN conversions
- print N/A when no data is available (e.g. as unprivileged user)
- fix rate calculation to show bytes (instead of a thousandth)
- print bytes as a human-readable number (e.g. 8MB) instead of 8388608
- stabilize sorting by adjusting NAN values to a very tiny negative number (see the sketch after this list)
- sort cases by identifier
- use checked snprintf
- color nice value of 0 as gray
- color cpu and memory percentages of 0.0 as gray
- color number of threads of 1 as gray
- color idle and sleeping state as gray
- color tgid matching pid (indicating main thread) as gray
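A minimal sketch of the NAN adjustment mentioned above (the helper name and the exact sentinel value are assumptions, not htop's actual code):
#include <math.h>
/* Map NAN to a tiny negative value so rows without data sort below
   real (non-negative) percentages and the comparison stays total. */
static double adjustNan(double num) {
   return isnan(num) ? -0.0000001 : num;
}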
If no terminal name can be found, fall back to a generic display method
using the major and minor device numbers.
Print the special value '(none)' in case both are zero.
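A rough sketch of that fallback (function and variable names are assumptions):
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysmacros.h>
/* Format the tty device for display when no name is known;
   device 0:0 means there is no controlling terminal at all. */
static void formatTty(char* buffer, size_t n, dev_t tty_nr) {
   if (major(tty_nr) == 0 && minor(tty_nr) == 0)
      snprintf(buffer, n, "(none)");
   else
      snprintf(buffer, n, "%u:%u", (unsigned) major(tty_nr), (unsigned) minor(tty_nr));
}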
Use only one enum instead of a global and a platform-specific one.
Drop the Platform_numberOfFields global variable.
Set a known size for the Process_fields array.
This achieves two things:
- Allows for simple tie-breaking if values compare equal (needed to make sorting the tree-view stable)
- Allows for platform-dependent overriding of the sort-order for specific fields
Also fixes a small oversight on DragonFlyBSD when default-sorting.
Implements the suggestion from https://github.com/htop-dev/htop/issues/399#issuecomment-747861013
Thanks to the refactors from 0bd5c8fb5da and 6393baa74e5, this was really easy
and clean to do.
It maintains the "Tree view always by PID" option in the Settings, which
results in some specific behaviors such as "clicking on the column header to
exit tree view" and "picking a new sort order to exit tree view", for the sake
of the muscle memory of long-time htop users. :)
* This removes duplicated code that adjusts the sort direction from every
OS-specific folder.
* Most fields in a regular htop screen are OS-independent, so it makes
sense to try Process_compare first and only fall back to the OS-specific
compareByKey function for OS-specific fields (see the sketch after this list).
* This will allow us to override the sortKey in a global way without having
to edit each OS-specific file.
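A rough sketch of that dispatch (the helper name Platform_compareByKey is an assumption, not necessarily htop's actual API):
/* Compare OS-independent fields in one generic place and forward only
   unknown, OS-specific fields to the platform code. */
static int compareField(const Process* p1, const Process* p2, ProcessField field) {
   switch (field) {
   case PID:
      return (p1->pid > p2->pid) - (p1->pid < p2->pid);
   /* ... other OS-independent fields ... */
   default:
      return Platform_compareByKey(p1, p2, field);  /* OS-specific fallback */
   }
}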
By storing the per-process m_resident and m_virt values in the form
htop wants to display them in (KB, not pages), we no longer need to
have definitions of pageSize and pageSizeKB in the common CRT code.
These variables were never really CRT (i.e. display) related in the
first place. It turns out the darwin platform code doesn't need to
use these at all (the process values are extracted from the kernel
in bytes not pages) and the other platforms can each use their own
local pagesize variables, in more appropriate locations.
Some platforms were actually already doing this, so this change is
removing duplication of logic and variables there.
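A minimal sketch of the idea (names are assumptions): convert page counts to KB where the values are read, using a platform-local page size, so the display code never has to know about pages.
#include <unistd.h>
/* Platform-local page size in KB, queried once. */
static long localPageSizeKB(void) {
   static long kb;
   if (kb == 0)
      kb = sysconf(_SC_PAGESIZE) / 1024;
   return kb;
}
/* At acquisition time (illustrative):
   proc->m_resident = residentPages * localPageSizeKB();  */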
RichString_writeFrom takes a top spot during performance analysis due to the
calls to mbstowcs() and iswprint().
Most of the time we know in advance that we are only going to print regular
ASCII characters.
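A sketch of the resulting fast path, as pure illustration rather than htop's actual RichString internals: printable ASCII bytes can be copied one-to-one, with no mbstowcs() or iswprint() involved.
#include <string.h>
/* Copy an ASCII string, mapping non-printable bytes to '?';
   no multibyte or wide-character processing required. */
static void writeAscii(char* dest, size_t destSize, const char* src) {
   if (destSize == 0)
      return;
   size_t len = strnlen(src, destSize - 1);
   for (size_t i = 0; i < len; i++) {
      unsigned char c = (unsigned char) src[i];
      dest[i] = (c >= 0x20 && c < 0x7F) ? (char) c : '?';
   }
   dest[len] = '\0';
}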
Currently, when two unsigned values are compared via `a - b` and b is
actually bigger than a, the result is not a negative number (as -1 would
be expected) but a huge positive number, because the subtraction is
performed as an unsigned subtraction.
Avoid such over-/underflow-prone operations; use comparisons instead.
Modern compilers will generate sane code, like:
xor eax, eax
cmp rdi, rsi
seta al
sbb eax, 0
ret
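The comparison-based pattern that produces code like the above might look like this:
/* Overflow-free three-way comparison of unsigned values:
   yields -1, 0 or 1 and never wraps around. */
static int compareUnsigned(unsigned long long a, unsigned long long b) {
   return (a > b) - (a < b);
}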
man:sysconf(3) states:
The values obtained from these functions are system configuration constants.
They do not change during the lifetime of a process.
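So such values can be queried once and cached, e.g. (a sketch; the names are assumptions):
#include <unistd.h>
/* Fetch the clock tick rate once and reuse it on every refresh
   instead of calling sysconf() repeatedly. */
static long clockTicks(void) {
   static long ticks;
   if (ticks == 0)
      ticks = sysconf(_SC_CLK_TCK);
   return ticks;
}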
When building on a 32-bit system, the compiler warned that the
following line uses a constant whose value is the overflow result
of a compile-time computation:
Process.c (line 109): } else if (number < 10000 * ONE_M) {
Namely, this constant expression:
10000 * ONE_M
was intended to produce the following value:
10485760000
However, the result overflowed to produce:
1895825408
The reason for this overflow is as follows:
o The macros are expanded:
10000 * (ONE_K * ONE_K)
10000 * (1024L * 1024L)
o The parenthesized expression is evaluated:
10000 * (1048576L)
o The left operand ("10000", an int) is converted to long by the
usual arithmetic conversions:
10000L * (1048576L)
Unbound by integer sizes, that last multiplication
would produce the following value:
10485760000
However, on a 32-bit machine, where a long is 32 bits
(really 31 bits when talking about positive numbers),
the maximum value that can be computed is 2**31-1:
2147483647
Consequently, the computation overflows.
o The compiler produces a long int value that is the
result of the overflow (10485760000 % 2**31):
1895825408L
Strictly speaking, signed overflow is undefined behavior,
so this is not even a portable description of what happens.
The solution is to use a long long int (or, even better,
an unsigned long long int) type for the constant expression;
the C standard mandates a sufficiently large maximum value
for such types.
Hence, the following change is made to the bad line:
- } else if (number < 10000 * ONE_M) {
+ } else if (number < 10000ULL * ONE_M) {
However, the whole line is now patently silly, because the
variable "number" is typed "unsigned long", and so it will
always be less than the constant expression (the compiler
will warn about this, too).
Hence, "number" must be typed "unsigned long long"; however,
this necessitates changing all of the string formats from
something like "%lu" to something like "%llu".
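For example, the format strings change along these lines (illustrative, not an exact htop hunk):
- snprintf(buffer, n, "%lu ", number);
+ snprintf(buffer, n, "%llu ", number);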
Et voila! This commit is born.
Then, for the sake of completeness, the declared types of the
constant-expression macros are updated (see the sketch after this list):
o ONE_K is made unsigned (a "UL" instead of "L")
o ONE_T is computed by introducing "1ULL *"
o Similar changes are made for ONE_DECIMAL_{K,T}
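The resulting definitions would look roughly like this (a sketch; the exact htop definitions may differ slightly):
#define ONE_K 1024UL
#define ONE_M (ONE_K * ONE_K)
#define ONE_G (ONE_K * ONE_M)
#define ONE_T (1ULL * ONE_K * ONE_G)
#define ONE_DECIMAL_K 1000UL
#define ONE_DECIMAL_M (ONE_DECIMAL_K * ONE_DECIMAL_K)
#define ONE_DECIMAL_G (ONE_DECIMAL_K * ONE_DECIMAL_M)
#define ONE_DECIMAL_T (1ULL * ONE_DECIMAL_K * ONE_DECIMAL_G)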
Also, a non-portable overflow-conversion to a signed value
has been replaced with a portable comparison:
- if ((long long) number == -1LL) {
+ if (number == ULLONG_MAX) {
It might be worth reviewing the rest of the code for other
cases where overflows are not handled correctly; even at
runtime, it's often necessary to check for overflow unless
such behavior is expected (especially for signed integer
values, for which overflow is undefined behavior).
PR htop-dev/htop#70 got rid of the infrastructure for generating header
files, but it left behind some code duplication.
Some of the cases are things that belong in the header file and don't need
to be repeated in the C file. Other cases are things that belong in the
C file and don't need to be in the header file.
In this commit I tried to fix all of these that I could find. When given
a choice I preferred keeping things out of the header file, unless they
were being used by someone else.
Reasoning:
- implementation was unsound -- broke down when I added a fairly
basic macro definition expanding to a struct initializer in a *.c
file.
- made it way too easy (e.g. via otherwise totally innocuous git
commands) to end up with timestamps such that it always ran
MakeHeader.py but never used its output, leading to overbuild noise
when running what should be a null 'make'.
- but mostly: it's just an awkward way of dealing with C code.
Promote the Arg union to a core data type in Object.c such
that it is visible everywhere (many source files need it),
and correct declarations of several functions that use it.
The Process_sendSignal function is also corrected to have
the expected return type (bool, not void) - an error being
masked by ignoring this not-quite-harmless warning. I've
also added error checking to the kill(2) call here, which
was previously overlooked / missing (?).
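The corrected pieces would look roughly like this (a sketch; htop's real Process_sendSignal takes its Process struct, shortened to a pid here):
#include <stdbool.h>
#include <sys/types.h>
#include <signal.h>
/* Object.h: Arg as a core data type visible everywhere. */
typedef union {
   int i;
   void* v;
} Arg;
/* Process.c: report whether the signal was actually delivered. */
bool Process_sendSignal(pid_t pid, Arg sgn) {
   return kill(pid, sgn.i) == 0;
}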
A logic mistake in pull request #746 causes <sys/sysmacros.h> to be
*not* included when AC_HEADER_MAJOR (before autoconf-2.70) finds
'major' in <sys/types.h>. Though htop would still build, it would
still produce deprecation warnings on systems using glibc 2.25-2.27.
Fix the logic and suppress the warning.
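The intended include logic is roughly as follows (a sketch based on the macros AC_HEADER_MAJOR defines; the HAVE_SYS_SYSMACROS_H check is an assumption about the configure workaround):
#include "config.h"
/* Prefer the header AC_HEADER_MAJOR points at, but also pull in
   <sys/sysmacros.h> whenever it exists, so glibc 2.25-2.27 does not
   warn about the deprecated definitions in <sys/types.h>. */
#ifdef MAJOR_IN_MKDEV
#include <sys/mkdev.h>
#elif defined(MAJOR_IN_SYSMACROS) || defined(HAVE_SYS_SYSMACROS_H)
#include <sys/sysmacros.h>
#endif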
Also, include config.h in Process.c for the sake of strengthening the
code.
Signed-off-by: Kang-Che Sung <explorer09@gmail.com>
glibc 2.28 no longer defines 'major' and 'minor' in <sys/types.h> and
requires us to include <sys/sysmacros.h>. (glibc 2.25 starts
deprecating the macros in <sys/types.h>.) Now do include the latter if
found on the system.
At the moment, let's also utilize AC_HEADER_MAJOR in the configure
script. However, as Autoconf 2.69 has not yet updated the
AC_HEADER_MAJOR macro to reflect the glibc change [1], add a
workaround.
Fixes #663. Supersedes pull request #729.
Reference:
[1] https://git.savannah.gnu.org/gitweb/?p=autoconf.git;a=commit;h=e17a30e987d7ee695fb4294a82d987ec3dc9b974
Signed-off-by: Kang-Che Sung <explorer09@gmail.com>
Linux commit 06eb61844d841d0032a9950ce7f8e783ee49c0d0 ("sched/debug:
Add explicit TASK_IDLE printing") exposes kthreads idling using
TASK_IDLE in procfs as "I (idle)".
Until now, when sorting the STATE ("S") column, htop used the raw
value of the state character for comparison; however, that led to the
undesirable effect of TASK_IDLE ('I') tasks being sorted above tasks
that were running ('R').
Thus, explicitly recognize the idle process state, and sort it below
others.
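A sketch of how the state sort can special-case idle (the helper name is an assumption):
/* Give 'I' (TASK_IDLE) a sort key larger than any other state
   character, so idle kernel threads end up below everything else. */
static int stateSortKey(char state) {
   return state == 'I' ? 0x100 : (unsigned char) state;
}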
This is/was necessary only on macOS, because you needed root in order
to read the process list. This was never necessary on Linux, and
it also raises security concerns, so now it needs to be enabled
explicitly at build time.
In all the cases where sprintf was being used within htop, snprintf
could have been used. This patch replaces all uses of sprintf with
snprintf which makes sure that if a buffer is too small to hold the
resulting string, the string is simply cut short instead of causing
a buffer overflow, which leads to undefined behaviour.
`sizeof(variable)` was used in these cases, as opposed to `sizeof
variable` (my personal preference), because `sizeof(variable)` was
already used in one way or another in other parts of the code.
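A generic illustration of the difference (not a specific htop call site):
#include <stdio.h>
static void example(const char* input) {
   char buffer[8];
   /* sprintf(buffer, "%s", input);  -- writes past the end for long input */
   snprintf(buffer, sizeof(buffer), "%s", input);  /* keeps at most 7 chars + '\0' */
}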
BFS-patched kernels can have kernel threads with priority -101.
This change makes priority -101 display as "RT", just like priority -100.
Related: https://github.com/hishamhm/htop/issues/314
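A sketch of the adjusted check (the names and exact condition are assumptions):
#include <stdio.h>
/* Map a kernel priority to the text shown in the PRI column;
   anything at or below -100 is treated as real-time. */
static const char* priorityText(long priority, char* buf, size_t n) {
   if (priority <= -100)
      return "RT";
   snprintf(buf, n, "%ld", priority);
   return buf;
}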
On Darwin, htop needs to run with root privileges to display information
about other users' processes. This commit makes running htop SUID root a
bit safer.