With that recent change, we started keeping track of Content-* headers of
parsed messages. We could ask admins to run a command to reparse messages for
all accounts. But instead we just do it automatically when opening the
account. We keep track of whether we did the upgrade. And we do it in the
background. Those recent changes were to add optional fields to the IMAP fetch
"bodystructure" responses. There is a small chance that an IMAP client
requests these fields before they are properly populated by the reparse (only
existing messages are affected, new incoming messages are parsed with the new
code). We could try to detect whether the upgrade has completed, and change
IMAP behaviour based on that. But the complexity and long-term maintenance
burden doesn't seem worth it. Worst case, we'll temporarily claim some
relatively unimportant headers aren't present on a message. Most email clients
won't even look at those fields, but will parse the message themselves
instead.
NOTIFY is like IDLE, but where IDLE watches just the selected mailbox, NOTIFY
can watch all mailboxes. With NOTIFY, a client can also ask a server to
immediately return configurable fetch attributes for new messages, e.g. a
message preview, certain header fields, or simply the entire message.
Mild testing with evolution and fairemail.
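For example, a client might register for events roughly like this (a
hypothetical exchange along the lines of RFC 5465; the exact events and fetch
attributes are chosen by the client):

C: t1 NOTIFY SET (SELECTED (MessageNew (UID FLAGS PREVIEW BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)]) MessageExpunge FlagChange)) (PERSONAL (MessageNew MessageExpunge))
S: t1 OK NOTIFY done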
The gmail iOS/Android apps were showing mime image parts as (garbled) text
instead of rendering them as images. By returning all the optional fields in
the bodystructure fetch attribute, the gmail app renders the image as the user
expects. So we now add all fields. We didn't before, because we weren't
keeping track of the Content-MD5, Content-Language and Content-Location header
fields, since they aren't that useful.
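For reference, the optional fields are the trailing "extension" fields of a
bodystructure part: MD5, disposition, language and location. A made-up example
of a single image part with all of them present (mostly NIL):

* 12 FETCH (BODYSTRUCTURE ("IMAGE" "PNG" ("NAME" "cat.png") NIL NIL "BASE64" 51630 NIL ("INLINE" ("FILENAME" "cat.png")) NIL NIL))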
Messages in mailboxes have to be reparsed:
./mox reparse
Without reparsing, imap responses will claim the extra fields
(content-disposition) are absent for existing messages, instead of not claiming
anything at all, which is what we did before.
Accounts and all/some mailboxes can get their "uid validity" bumped ("./mox
bumpuidvalidity $account [$mailbox]"), which should trigger clients to load all
messages from scratch, but gmail doesn't appear to notice, so it would be
better to remove & re-add the account in gmail.
For issue #327, also relevant to issue #217.
Download as eml is useful with firefox, because opening the raw message in a
new tab and then downloading it causes firefox to request the url without
cookies, so it ends up saving a "403 - forbidden" response.
Exporting a selection is useful during all kinds of testing. Makes it easy to
export an entire thread, or just some messages.
The export popover now has buttons for each combination of mbox/maildir vs
zip/tgz/tar. Before you may have had to select the email format and archive
format first, followed by a click. Now it's just a click.
We were already generating previews of plain text parts for the webmail
interface, but we didn't store them, so we were generating the previews each
time messages were listed.
Now we store previews in the database for faster handling. And we also generate
previews for html parts if needed. We use the first part that has textual
content.
For IMAP, the previews can be requested by an IMAP client. When we get the
"LAZY" variant, which doesn't require us to generate a preview, we generate it
anyway, because it should be fast enough. So we don't make clients first ask
for "PREVIEW (LAZY)" and then send another request for "PREVIEW".
We now also generate a preview when a message is added to the account, except
for imports: it would slow us down, the previews aren't urgent, and they will
be generated on demand at first request.
We normally check errors for all operations. But for some cleanup calls, e.g.
"defer file.Close()", we didn't. Now we also check and log most of those.
Partly because those errors can point to mishandling or unexpected code paths
(e.g. a file unexpectedly already closed). And partly to make it easier to use
"errcheck" to find the real missing error checks; right now there is too much
noise.
The log.Check function can now be used unconditionally for checking and logging
about errors. It adjusts the log level if the error is caused by a network
connection being closed, or a context is canceled or its deadline reached, or a
socket deadline is reached.
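Typical use in a defer, roughly (a sketch; the exact mlog signature may
differ):

defer func() {
	err := msgFile.Close()
	log.Check(err, "closing message file")
}()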
In store/search.go, we would make a copy of a byte array, but then still use
the original instead of the copy. This could result in search operations not
finding messages that do contain the content, but only under very unlikely
conditions. We'll keep running ineffassign with "make check", it's useful
enough.
Example symptom, when deleting a message in the webmail (which moves to Trash):
l=error m="duplicating message in old mailbox for current sessions" err="link data/accounts/mjl/msg/I/368638 data/accounts/mjl/msg/J/368640: no such file or directory" pkg=webmail
Problem introduced a few weeks ago, when moving messages started duplicating
the message first; the copy is erased once all references (in IMAP sessions)
to the old mailbox have been removed.
The intent to remove the account is stored in the database. At startup, if
there are any such references, they are applied by removing the account
directories and the entry in the database. This ensures the account directory
is properly removed, even on incomplete shutdown.
Don't add an account when its directory already exists.
Initialize store and switchboard first, then open account, and close in reverse
order.
Replace all "CheckClosed" calls with "WaitClosed", future changings will keep
an account reference open for a bit after the last regular close, so we can't
know that an account should be closed during tests.
Remove one parameter from the (still too long) "start test server" function in
imapserver testing code.
We weren't appending the individual changes to the slice, but the entire
slice. Since "Change" is an "any", this isn't a type error. So make Change a
non-empty interface (I had seen an issue like this coming and should have made
it an interface then; at least now we have a reasonable method to get the
modseq of a change).
Found while working on an imap webpush prototype.
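A sketch of the bug class (the "chAdd"/"chRemove" values and method name are
made up, not the actual code). With Change defined as "any", appending a whole
slice compiles fine:

var pending []Change
changes := []Change{chAdd, chRemove}
pending = append(pending, changes) // oops: appends the []Change as one element

With Change a non-empty interface, e.g. one with a method returning the
change's modseq, that line no longer compiles, and the intended form is
required:

pending = append(pending, changes...)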
Since a recent change (likely since implementing MULTIAPPEND), the temporary
files weren't removed any more. When changing it, I must have had the wrong
mental model about the MessageAdd method, assuming it would remove the temp
file.
Noticed during tests.
In case the precis check failed, our return of a nil account cleared acc, and
we were then trying to close it, resulting in a nil pointer dereference.
Rewrite the return statements so we don't overwrite the named return variables.
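A hypothetical sketch of the pitfall (Account and checkPrecis stand in for the
real types/checks; this is not the actual mox code):

func openAccount(name string) (acc *Account, err error) {
	acc = &Account{Name: name}
	defer func() {
		if err != nil {
			acc.Close() // nil pointer dereference if a return cleared acc
		}
	}()
	if err := checkPrecis(name); err != nil {
		// Bad: "return nil, err" overwrites the named return acc with nil,
		// so the deferred cleanup closes a nil account. Setting only err
		// (or returning "acc, err") avoids that.
		return nil, err
	}
	return acc, nil
}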
Just to be cautious. This hasn't happened yet in practice that I'm aware of.
But in theory, mox could crash after it has written the message file during
delivery, but before the database transaction was committed. In that case, a
message file for the "next message id" is already present. Any future delivery
attempts will get assigned the same message id by the database, but writing the
file will fail because there already is one, causing delivery to fail (until
the file is moved away).
When opening an account, we now check in the file system if newer files exist
than we expect based on the last existing message in the database. If so, we
adjust the message ID the database will assign next.
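A rough sketch of the idea (not mox's actual file layout or code): find the
highest numeric filename under the account's msg directory and make sure the
database never hands out an ID at or below it.

var maxID int64
err := filepath.WalkDir(filepath.Join(accountDir, "msg"), func(p string, d fs.DirEntry, err error) error {
	if err != nil || d.IsDir() {
		return err
	}
	if id, perr := strconv.ParseInt(d.Name(), 10, 64); perr == nil && id > maxID {
		maxID = id
	}
	return nil
})
if err == nil && maxID >= nextMessageID {
	nextMessageID = maxID + 1 // skip past files left by an interrupted delivery
}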
Keeping the message files around, and the message details in the database, is
useful for IMAP sessions that haven't seen/processed the removal of a message
yet and try to fetch it. Before, we would return errors. Similarly, a session
that has a mailbox selected that is removed can (at least in theory) still read
messages.
The mechanics to do this need keeping removed mailboxes around too. JMAP needs
that anyway, so we now keep modseq/createseq/expunged history for mailboxes
too. And while we're at it, for annotations as well.
For future JMAP support, we now also keep the mailbox parent id around for a
mailbox, with an upgrade step to set the field for existing mailboxes and
fixing up potential missing parents (which could possibly have happened in an
obscure corner case that I doubt anyone ran into).
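Taken together, the mailbox record now carries roughly this extra bookkeeping
(a sketch; field names are illustrative, not necessarily the exact schema):

type Mailbox struct {
	ID        int64
	Name      string
	ParentID  int64  // for future JMAP support
	CreateSeq ModSeq // modseq at which the mailbox was created
	ModSeq    ModSeq // bumped on every change, including annotations
	Expunged  bool   // tombstone kept as history for sessions/QRESYNC/JMAP
	// ...
}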
We normally recover from those situations, printing stack traces instead of
crashing the program. But during tests, we're not looking at the prometheus
metrics or all the output. Without these checks, such panics could go
unnoticed. Seems like a reasonable thing to add; unhandled panics haven't been
encountered in tests so far.
DeliverMessage() is now MessageAdd(), and it takes a Mailbox object that it
modifies but doesn't write to the database (the caller must do it, and plenty
of times can do it more efficiently by doing it once for multiple messages).
The new AddOpts let the caller influence how many checks and how much of the
work MessageAdd() does. The zero-value AddOpts enables all checks and all the
work, but callers can take responsibility for some of the checks/work if they
can do it more efficiently themselves.
This simplifies the code in most places, and makes it more efficient. The
checks to update per-mailbox keywords are a bit simpler now too.
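A sketch of the calling pattern (signatures paraphrased from the description
above, not necessarily the exact API):

// Deliver a batch of messages to one mailbox within a transaction.
func deliverBatch(acc *store.Account, log mlog.Log, tx *bstore.Tx, mb *store.Mailbox, msgs []*store.Message, files []*os.File) error {
	for i, m := range msgs {
		// Zero-value AddOpts: MessageAdd does all checks and work itself.
		if err := acc.MessageAdd(log, tx, mb, m, files[i], store.AddOpts{}); err != nil {
			return err
		}
	}
	// MessageAdd only updates mb in memory (message counts, keywords); the
	// caller writes it to the database once for the whole batch.
	return tx.Update(mb)
}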
We are also more careful to close the junk filter without saving it in case of
errors.
Still part of more upcoming changes.
In the common case, the message goes into the same directory as the previous
delivery. That means we don't have to try to create the directory (fewer
syscalls) and that we can sync the dir to disk.
This also tweaks the defer handling in case of a late failure.
Fix up a test or two. Simplify the XOR logic when we train the junk filter:
only if junk or nonjunk is set, but not when both (or neither) are set, i.e.
when the values aren't the same.
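In other words, the training condition boils down to a simple inequality,
roughly (field and function names are illustrative):

if m.Junk != m.Notjunk {
	train(m)
}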
Locking the account when we do consistency checks prevents spurious test
failures that may have been introduced in the previous commit.
MULTIAPPEND modifies the existing APPEND command to allow multiple messages. it
is somewhat more involved than a regular append of a single message since the
operation (of adding multiple messages) must be atomic. either all are added,
or none are.
we check as early as possible if the messages won't cause an over-quota error.
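an abbreviated exchange (message bodies elided); note the single tagged
response and the single APPENDUID range covering both messages:

C: a1 APPEND Archive (\Seen) {310}
S: + Ready for literal data
C: ...first message, 310 bytes...
C:  (\Seen \Answered) {404}
S: + Ready for literal data
C: ...second message, 404 bytes...
S: a1 OK [APPENDUID 1725849600 3040:3041] done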
not just /private. /shared/ is the more commonly implemented namespace, because
it is easier to implement: you don't need per-user/account storage of metadata.
i initially approached it from the other direction: we don't have a mechanism
to share metadata with other accounts, so everything is private, and i assumed
that would be what a user would prefer. but email clients make the decisions,
and they'll likely try the /shared/ namespace.
i added the metadata extension to the imapserver recently. then i wondered how
a client would efficiently find changed metadata. turns out the qresync rfc
mentions that metadata changes should set a new modseq on the mailbox.
shouldn't be hard, except that we were not explicitly keeping track of modseqs
per mailbox. we only kept them for messages, and we were just looking up the
latest message modseq when we needed the modseq (we keep db entries for
expunged messages, so this worked out fine). that approach isn't enough
anymore. so now we keep track of modseq & createseq for mailboxes, just as for
messages. and we also track modseq/createseq for annotations. there's a good
chance jmap is going to need it.
this also adds consistency checks for modseq/createseq on mailboxes and
annotations to the account storage. it helped spot cases i missed where the
values need to be updated.
they are intended to be used by the server to automatically mark some messages
as important, based on server-defined heuristics. we don't have such heuristics
at the moment. perhaps in the future, but until then there are no plans.
we already supported special-use flags. settable through the webmail interface,
and new accounts already got standard mailboxes with special-use flags
predefined. but now the IMAP "CREATE" command implements creating mailboxes
with special-use flags.
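for example (a hypothetical exchange per RFC 6154):

C: c1 CREATE Archive (USE (\Archive))
S: c1 OK done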
it makes a new field available on stored messages: not when they were
received (over smtp) or appended to the mailbox (over imap), but when they were
last "saved" in the mailbox. copy/move of a message (eg to the trash) resets
the "savedate" value. this helps implement "remove messages from trash after X
days".
logging of login attempts happens in the background, because we don't want to
block regular operation with disk syncs for such logging. however, when a line
is logged, we evaluate some attributes of a connection, notably the username.
but right around the time we do authentication, we change the username on a
connection. so we were reading and writing at the same time. this is now fixed
by evaluating the attributes before we pass off the logger to the goroutine.
found by the go race detector.
this allows setting per-mailbox and per-server annotations (metadata). we have
a fixed maximum for total number of annotations (1000) and their total size
(1000000 bytes). this size isn't held against the regular quota for simplicity.
we send unsolicited metadata responses when a connection is in the idle
command and a change to a metadata item is made.
we currently only implement the /private/ namespace. we should implement the
/shared/ namespace, for mox-global metadata annotations. only the admin should
be able to configure those, probably through the config file, cli, or admin web
interface.
for issue #290
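for example (a hypothetical exchange per RFC 5464):

C: m1 SETMETADATA Projects (/private/comment "for the big launch")
S: m1 OK done
C: m2 GETMETADATA Projects (/private/comment)
S: * METADATA Projects (/private/comment "for the big launch")
S: m2 OK done

another connection that has Projects selected and is in IDLE would get an
unsolicited response naming the changed entry (without its value):

S: * METADATA Projects /private/comment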
accounts with this option enabled can only get a new randomly generated
password. this prevents password reuse across services and weak
passwords. existing accounts keep their current ability to set custom
passwords. only admins can change this setting for an account.
related to issue #286 by skyguy
and show them in the account and admin interfaces. this should help with
debugging, to find misconfigured clients, and potentially find attackers trying
to login.
we include details like login name, account name, protocol, authentication
mechanism, ip addresses, tls connection properties, user-agent. and of course
the result.
we group entries by their details. repeat connections don't cause new records
in the database, they just increase the count on the existing record.
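a sketch of the kind of record kept per unique combination (field names are
illustrative, not the exact schema):

type LoginAttempt struct {
	Last, First  time.Time // most recent and first occurrence
	Count        int64     // bumped for repeat connections with the same details
	AccountName  string
	LoginAddress string
	Protocol     string // imap, submission, web, ...
	AuthMech     string // plain, scram-sha-256, ...
	RemoteIP     string
	TLS          string // summary of tls connection properties
	UserAgent    string
	Result       string // ok, bad credentials, ...
}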
we keep data for at most 30 days, and at most 10k entries per account, to
prevent unbounded growth. successful login attempts are all kept for the full
30 days. if a bad user causes so many entries that this becomes a problem, it
will be time to talk to the user...
there is no pagination/searching yet in the admin/account interfaces. so the
list may be long. we only show the 10 most recent login attempts by default.
the rest is only shown on a separate page.
there is no way yet to disable this. may come later, either as global setting
or per account.