to work around clients, like the gmail smtp client, that try to
authenticate with a webpki-issued certificate (which we don't know).
i tried specifying a list of accepted (subjects of) CA certs during the
tls handshake (with just 1 entry, with "xmox.nl" as common name), which
clients can use to influence their cert selection. however, the gmail
smtp client ignores it, so not a solution for the issue where this was
raised. also, specifying a list of accepted certs could cause other
clients to not send their client cert anymore, breaking existing setups.
i also considered only asking for tls client auth when at least one
account has a tls pubkey configured. but decided against it since any
account can add one on their own (without system admin interaction),
changing behaviour of the system and potentially breaking existing
submission/tls configs.
we now also print the "subject" and "issuer" of certs when tls client
auth fails, which should be useful for future debugging.
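a rough sketch of the kind of tls configuration involved (illustrative go, not
mox's actual code; names are made up): request an optional client certificate
without advertising acceptable CAs, and log the subject/issuer of whatever the
client presents.

    package sketch

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
    )

    // clientAuthTLSConfig requests an optional tls client certificate without
    // sending a list of acceptable CAs, and logs the subject and issuer of
    // whatever certificate a client presents.
    func clientAuthTLSConfig(serverCert tls.Certificate) *tls.Config {
        return &tls.Config{
            Certificates: []tls.Certificate{serverCert},
            ClientAuth:   tls.RequestClientCert, // request, but don't require
            VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
                if len(rawCerts) == 0 {
                    return nil // no client certificate presented
                }
                cert, err := x509.ParseCertificate(rawCerts[0])
                if err != nil {
                    return fmt.Errorf("parsing tls client certificate: %v", err)
                }
                log.Printf("tls client cert, subject %q, issuer %q", cert.Subject.String(), cert.Issuer.String())
                // ...match cert.PublicKey against tls public keys configured for accounts...
                return nil
            },
        }
    }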
for issue #359
We were first getting UIDs in a transaction with a lock, then getting the
changes and processing them in a special way, and then processing for qresync
in a new transaction. The special processing of changes is now gone; it seems
to have skipped adding/removing uids to the session, which can't be correct.
The new approach uses a single lock and transaction to process the whole
opening of the mailbox, does not process any changes as part of the open, and
gets rid of the special "initial" mode for processing a mailbox.
Once clients enable this extension, commands can no longer refer to "message
sequence numbers" (MSNs), but can only refer to messages with UIDs. This means
both sides no longer have to carefully keep their sequence numbers in sync
(error-prone), and don't have to keep track of a mapping of sequence numbers to
UIDs (saves resources).
With UIDONLY enabled, all untagged FETCH responses are replaced with UIDFETCH responses.
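Roughly, an exchange with UIDONLY enabled looks like this (illustrative, not an
exact transcript):

    C: a UID FETCH 12345 (FLAGS)
    S: * 12345 UIDFETCH (FLAGS (\Seen))
    S: a OK done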
NOTIFY is like IDLE, but where IDLE watches just the selected mailbox, NOTIFY
can watch all mailboxes. With NOTIFY, a client can also ask a server to
immediately return configurable fetch attributes for new messages, e.g. a
message preview, certain header fields, or simply the entire message.
Mild testing with evolution and fairemail.
For long searches in big mailboxes, without any matches, we would previously
keep working and not say anything. Clients could interpret this silence as a
broken connection at some point. We now send an untagged "we're still
searching" OK response with code INPROGRESS every 10 seconds while we're still
searching, to prevent the client from closing the connection. We also send how
many messages we've processed, and usually also how many we need to process in
total. Clients can use this to show a progress bar.
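Such a response looks roughly like this (numbers illustrative): the first
number is how many messages have been processed, the second the total to
process.

    S: * OK [INPROGRESS ("tag1" 10000 25000)] still searching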
Attackers scanning the internet can use it to easily create a database of
hosts, software and versions. Let's not make it too easy to find old versions
that may be vulnerable to potential bugs found in the future. We could try
hiding the name "mox" as well, but the banner will still be identifyable, so
there isn't much point, and the public knowing approximately which software is
running can be useful for debugging.
The ID command in IMAP is used by clients to announce their software and
version. We only respond with our version when the user is authenticated.
There are still ways to discover the version number. But they don't involve
standard banner scanning, so someone would have to specifically target mox. We
could tighten that in the future.
For issue #322, based on email. Thanks everyone for discussing.
We normally check errors for all operations. But for some cleanup calls, eg
"defer file.Close()", we didn't. Now we also check and log most of those.
Partially because those errors can point to some mishandling or unexpected code
paths (eg a file unexpectedly already closed). And in part to make it easier to
use "errcheck" to find the real missing error checks; there is too much noise now.
The log.Check function can now be used unconditionally for checking and logging
about errors. It adjusts the log level if the error is caused by a network
connection being closed, or a context is canceled or its deadline reached, or a
socket deadline is reached.
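A minimal sketch of that pattern (hypothetical helper, not mox's exact
log.Check signature):

    package sketch

    import (
        "context"
        "errors"
        "log/slog"
        "net"
        "os"
    )

    // check logs an error from a cleanup call, lowering the log level for
    // errors that are expected during connection teardown.
    func check(log *slog.Logger, err error, msg string) {
        if err == nil {
            return
        }
        level := slog.LevelError
        if errors.Is(err, net.ErrClosed) || errors.Is(err, context.Canceled) ||
            errors.Is(err, context.DeadlineExceeded) || errors.Is(err, os.ErrDeadlineExceeded) {
            level = slog.LevelInfo
        }
        log.LogAttrs(context.Background(), level, msg, slog.Any("err", err))
    }

    // usage: defer func() { check(log, f.Close(), "closing temp file") }()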
Example symptom, when deleting a message in the webmail (which moves to Trash):
l=error m="duplicating message in old mailbox for current sessions" err="link data/accounts/mjl/msg/I/368638 data/accounts/mjl/msg/J/368640: no such file or directory" pkg=webmail
The problem was introduced a few weeks ago, when moving messages started
duplicating the message first; the copy is erased once all references (in IMAP
sessions) to the old mailbox have been removed.
And merge the duplicate list of capabilities. We had each capability on its own
line for cross-referencing with the RFCs, and all capabilities again on a single
line to use in the server greeting. Now it's just one list.
Since a recent change (likely since implementing MULTIAPPEND), the temporary
files weren't removed any more. When changing it, I must have had the wrong
mental model about the MessageAdd method, assuming it would remove the temp
file.
Noticed during tests.
It still blocks on reading partial flushes from clients, preventing progress
and eventually timing out. The flate library needs more changes to make this
work.
Connections from iOS mail sometimes timed out, not always.
The extension is simply not announced, code is still present.
Keeping the message files around, and the message details in the database, is
useful for IMAP sessions that haven't seen/processed the removal of a message
yet and try to fetch it. Before, we would return errors. Similarly, a session
that has a mailbox selected that is removed can (at least in theory) still read
messages.
The mechanics to do this require keeping removed mailboxes around too. JMAP needs
that anyway, so we now keep modseq/createseq/expunged history for mailboxes
too. And while we're at it, for annotations as well.
For future JMAP support, we now also keep the mailbox parent id around for a
mailbox, with an upgrade step to set the field for existing mailboxes and
fixing up potential missing parents (which could possibly have happened in an
obscure corner case that I doubt anyone ran into).
DeliverMessage() is now MessageAdd(), and it takes a Mailbox object that it
modifies but doesn't write to the database (the caller must do that, and can
often do it more efficiently by doing it once for multiple messages).
The new AddOpts let the caller influence how many checks and how much of the
work MessageAdd() does. The zero-value AddOpts enables all checks and all the
work, but callers can take responsibility for some of the checks/work if they
can do it more efficiently themselves.
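A sketch of the zero-value-options idea (hypothetical option fields, not the
actual AddOpts):

    package sketch

    // The zero value means "do all checks and all the work"; callers opt out
    // of work they have already done themselves, e.g. once for a whole batch.
    type addOpts struct {
        SkipQuotaCheck    bool // caller already checked quota for the batch
        SkipTraining      bool // caller (re)trains the junk filter itself
        SkipMailboxCounts bool // caller updates mailbox counts once at the end
    }

    func messageAdd(opts addOpts) error {
        if !opts.SkipQuotaCheck {
            // check quota...
        }
        // add the message, honoring the remaining options...
        return nil
    }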
This simplifies the code in most places, and makes it more efficient. The
checks to update per-mailbox keywords are a bit simpler now too.
We are also more careful to close the junk filter without saving it in case of
errors.
Still part of more upcoming changes.
We effectively held the account write-locked by using a writable transaction
while processing the FETCH command. We did this because we may have to update
\Seen flags, for non-PEEK attribute fetches. This meant other FETCHes would
block, and other write access to the account too.
We now read the messages in a read-only transaction. We gather messages that
need marking as \Seen, and make that change in one (much shorter) database
transaction at the end of the FETCH command.
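A sketch of that gather-then-write pattern (hypothetical transaction helpers,
not mox's actual storage API):

    package sketch

    import "context"

    // Store is a stand-in for the database: a long read-only view while
    // writing FETCH responses, and one short write transaction at the end.
    type Store interface {
        Read(ctx context.Context, fn func(tx Tx) error) error
        Write(ctx context.Context, fn func(tx Tx) error) error
    }

    type Tx interface {
        MarkSeen(uid uint32) error
    }

    func fetch(ctx context.Context, db Store, uids []uint32) error {
        var markSeen []uint32
        err := db.Read(ctx, func(tx Tx) error {
            for _, uid := range uids {
                // ...read the message and write the FETCH response...
                // remember messages that need \Seen (in the real code only for
                // non-PEEK fetches of messages not already marked \Seen).
                markSeen = append(markSeen, uid)
            }
            return nil
        })
        if err != nil {
            return err
        }
        return db.Write(ctx, func(tx Tx) error {
            for _, uid := range markSeen {
                if err := tx.MarkSeen(uid); err != nil {
                    return err
                }
            }
            return nil
        })
    }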
In practice, it doesn't seem too sensible to mark messages as seen
automatically. Most clients probably use the PEEK-variant of attribute fetches.
Related to issue #128.
Writing to a connection goes through the flate library to compress. That writes
the compressed bytes to the underlying connection. But that underlying
connection is wrapped to raise a panic with an i/o error instead of returning a
normal error. Jumping out of flate leaves the internal state of the compressor
in an undefined state. So far so good. But as part of cleaning up the connection,
we could try to flush output again. Specifically: If we were writing user data,
we had switched from tracing of protocol data to tracing of user data, and we
registered a defer that restored the tracing kind and flushed (to ensure data
was traced at the right level). That flush would cause a write into the
compressor again, which could panic with an out of bounds slice access due to
its inconsistent internal state.
This fix prevents that compressor panic in two ways:
1. We wrap the flate.Writer with a moxio.FlateWriter that keeps track of
whether a panic came out of an operation on it. If so, any further operation
raises the same panic. This prevents access to the inconsistent internal flate
state entirely.
2. Once we raise an i/o error, we mark the connection as broken and that makes
flushes a no-op.
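A minimal sketch of the wrapper from point 1 (hypothetical type, not the actual
moxio.FlateWriter):

    package sketch

    import "io"

    // panicWriter remembers a panic that escaped from a write, and raises the
    // same panic on any further use, so the now-inconsistent compressor state
    // is never touched again. Flush would be wrapped the same way.
    type panicWriter struct {
        w        io.Writer
        panicked any // non-nil after a panic escaped from Write
    }

    func (pw *panicWriter) Write(p []byte) (n int, err error) {
        if pw.panicked != nil {
            panic(pw.panicked)
        }
        defer func() {
            if x := recover(); x != nil {
                pw.panicked = x
                panic(x)
            }
        }()
        return pw.w.Write(p)
    }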
REPLACE can be used to update draft messages as you are editing, instead of
requiring an APPEND, a STORE of \Deleted, and an EXPUNGE. REPLACE works
atomically.
It has a syntax similar to APPEND, and additionally lets you specify the
message to replace in the currently selected mailbox. The regular REPLACE
command works on a message sequence number, the UID REPLACE command on a UID.
The destination mailbox of the updated message can be different, for example to
move a draft message from the Drafts folder to the Sent folder.
We have to do quite some bookkeeping, e.g. for updating (message) counts for
the mailbox, checking quota, un/retraining the junk filter. During a
synchronizing literal, we check the parameters early and reject if the replace
would fail (eg over quota, bad destination mailbox).
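An exchange looks roughly like this (values illustrative): the message with UID
17 in the selected mailbox is replaced, the new version is stored in Drafts with
a new UID, and the old message is expunged.

    C: a UID REPLACE 17 Drafts {312}
    S: + continue
    C: ...312 bytes of message data...
    S: * OK [APPENDUID 1725385870 456] new uid
    S: * 5 EXPUNGE
    S: a OK replace done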
MULTIAPPEND modifies the existing APPEND command to allow multiple messages. it
is somewhat more involved than a regular append of a single message since the
operation (of adding multiple messages) must be atomic. either all are added,
or none are.
we check as early as possible whether the messages would cause an over-quota error.
our previous approach was to hope clients did the ID command right after the
AUTHENTICATE command. with more extensions implemented, that's a stretch;
clients are doing other commands in between.
the new approach is to allow more commands, but wait at most 1 second. clients
are still assumed to send the ID command soon after authenticate. we also still
ensure login attempts are logged on connection teardown, so we aren't missing
any logging, we just may get it slightly delayed. seems reasonable.
we now also keep the useragent value around, and we use it when initializing
the login attempt, because the ID command can happen at any time, also before
the AUTHENTICATE command.
only with "return" including "metadata". so clients can quickly get certain
metadata (eg for display, such as a color) for mailboxes.
this also adds a protocol token type "mailboxt" that properly encodes to utf7
if required.
i added the metadata extension to the imapserver recently. then i wondered how
a client would efficiently find changed metadata. turns out the qresync rfc
mentions that metadata changes should set a new modseq on the mailbox.
shouldn't be hard, except that we were not explicitly keeping track of modseqs
per mailbox. we only kept them for messages, and we were just looking up the
latest message modseq when we needed the modseq (we keep db entries for
expunged messages, so this worked out fine). that approach isn't enough
anymore. so now we keep track of modseq & createseq for mailboxes, just as for
messages. and we also track modseq/createseq for annotations. there's a good
chance jmap is going to need it.
this also adds consistency checks for modseq/createseq on mailboxes and
annotations to the account storage. it helped spot cases i missed where the
values need to be updated.
to compress the entire IMAP connection. tested with thunderbird, meli, k9, ios
mail. the initial implementation had interoperability issues with some of these
clients: if they write the deflate stream and flush in "partial mode", the go
stdlib flate reader does not return any data (until there is an explicit
zero-length "sync flush" block, or until the history/sliding window is full),
blocking progress, resulting in clients closing the seemingly stuck connection
after considering the connection timed out. this includes a copy of the flate
package with a new reader that returns partially flushed blocks earlier.
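the core of the compression layer is small (sketch with the stdlib API, not the
actual code): wrap the connection's reads and writes with compress/flate and
flush the compressor after writing each response.

    package sketch

    import (
        "compress/flate"
        "io"
        "net"
    )

    // compressed wraps a connection after COMPRESS=DEFLATE is accepted: reads
    // are decompressed, writes are compressed and must be flushed after each
    // response so the client sees it without waiting for a full block.
    type compressed struct {
        conn net.Conn
        r    io.ReadCloser
        w    *flate.Writer
    }

    func newCompressed(conn net.Conn) (*compressed, error) {
        fw, err := flate.NewWriter(conn, flate.DefaultCompression)
        if err != nil {
            return nil, err
        }
        return &compressed{conn: conn, r: flate.NewReader(conn), w: fw}, nil
    }

    func (c *compressed) Read(p []byte) (int, error)  { return c.r.Read(p) }
    func (c *compressed) Write(p []byte) (int, error) { return c.w.Write(p) }
    func (c *compressed) Flush() error                { return c.w.Flush() }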
this also adds imap trace logging to imapclient.Conn, which was useful for
debugging.
we already supported special-use flags, settable through the webmail interface,
and new accounts already got standard mailboxes with special-use flags
predefined. but now the IMAP "CREATE" command implements creating mailboxes
with special-use flags.
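for example (values illustrative):

    C: a CREATE Archive/2024 (USE (\Archive))
    S: a OK mailbox created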
it makes a new field available on stored messages. not when they were
received (over smtp) or appended to the mailbox (over imap), but when they were
last "saved" in the mailbox. copy/move of a message (eg to the trash) resets
the "savedate" value. this helps implement "remove messages from trash after X
days".
logging of login attempts happens in the background, because we don't want to
block regular operation with disk syncs for such logging. however, when a line
is logged, we evaluate some attributes of the connection, notably the username.
but around the time we do authentication, we change the username on the
connection, so we were reading and writing at the same time. this is now fixed
by evaluating the attributes before we pass off the logger to the goroutine.
found by the go race detector.
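the general shape of the fix (sketch, not the actual code): evaluate the
connection attributes into plain values before handing them to the logging
goroutine, instead of letting the goroutine read fields the connection keeps
mutating.

    package sketch

    import "log/slog"

    type conn struct {
        username string // mutated when authentication succeeds
        log      *slog.Logger
    }

    // racy: the goroutine reads c.username while it may be written concurrently.
    func (c *conn) logAttemptRacy(result string) {
        go func() {
            c.log.Info("login attempt", "username", c.username, "result", result)
        }()
    }

    // fixed: evaluate the attribute first, pass the copy to the goroutine.
    func (c *conn) logAttempt(result string) {
        username := c.username
        go func() {
            c.log.Info("login attempt", "username", username, "result", result)
        }()
    }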
this allows setting per-mailbox and per-server annotations (metadata). we have
a fixed maximum for total number of annotations (1000) and their total size
(1000000 bytes). this size isn't held against the regular quota for simplicity.
we send unsolicited metadata responses when a connection is in the idle
command and a change to a metadata item is made.
we currently only implement the /private/ namespace. we should implement the
/shared/ namespace, for mox-global metadata annotations. only the admin should
be able to configure those, probably through the config file, cli, or admin web
interface.
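an example exchange (values illustrative):

    C: a SETMETADATA INBOX (/private/comment "my own comment")
    S: a OK stored
    C: b GETMETADATA "INBOX" /private/comment
    S: * METADATA "INBOX" (/private/comment "my own comment")
    S: b OK done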
for issue #290
and don't do needless normalization for the username we got from scram: the
scram package would have failed if the name wasn't already normalized.
unicode may not be specified for sasl with imap (i'm not sure), but there's no
point in accepting it in smtpserver but not in imapserver.
given the "false" flag above when opening the account by email.
the login disabled case is handled after the various auth mechanisms in a
single place.
noticed while making other changes.
and show them in the account and admin interfaces. this should help with
debugging, to find misconfigured clients, and potentially find attackers trying
to login.
we include details like login name, account name, protocol, authentication
mechanism, ip addresses, tls connection properties, user-agent. and of course
the result.
we group entries by their details. repeat connections don't cause new records
in the database, they just increase the count on the existing record.
we keep data for at most 30 days, and at most 10k entries per account, to
prevent unbounded growth. for successful login attempts, we store them all for
30 days. if a bad user causes so many entries that this becomes a problem, it
will be time to talk to the user...
there is no pagination/searching yet in the admin/account interfaces. so the
list may be long. we only show the 10 most recent login attempts by default.
the rest is only shown on a separate page.
there is no way yet to disable this. may come later, either as global setting
or per account.
for admins to open an imap connection preauthenticated for an account (by address), also when
it is disabled for logins.
useful for migrations. the admin typically doesn't know the password of the
account, so couldn't get an imap session (for synchronizing) before.
tested with "mox localserve" and running:
mutt -e 'set tunnel="MOXCONF=/home/mjl/.config/mox-localserve/mox.conf ./mox admin imapserve mox@localhost"'
may also work with interimap, but untested.
i initially assumed imap would be done fully on file descriptor 0, but mutt
expects imap output on fd 1. that's the default now. flag -fd0 is for others
that expect it on fd0.
for issue #175, suggested by DanielG
to facilitate migrations from/to other mail setups.
a domain can be added in "disabled" mode (or can be disabled/enabled later on).
you can configure a disabled domain, but incoming/outgoing messages involving
the domain are rejected with temporary error codes (as this may occur during a
migration, remote servers will try again, hopefully to the correct machine or
after this machine has been configured correctly). also, no acme tls certs will
be requested for disabled domains (the autoconfig/mta-sts dns records may still
point to the current/previous machine). accounts with addresses at disabled
domains can still login, unless logins are disabled for their accounts.
an account now has an option to disable logins. you can specify an error
message to show. this will be shown in smtp, imap and the web interfaces. it
could contain a message about migrations, and possibly a URL to a page with
information about how to migrate. incoming/outgoing email involving accounts
with login disabled is still accepted/delivered as normal (unless the domain
involved in the messages is disabled too). account operations by the admin,
such as importing/exporting messages, still work.
in the admin web interface, listings of domains/accounts show if they are disabled.
domains & accounts can be enabled/disabled through the config file, cli
commands and admin web interface.
for issue #175 by RobSlgm
Intended for future use with chatmail servers. Standard email ports may be
blocked on some networks, while the HTTPS port may be accessible.
This is a squashed commit of PR #255 by s0ph0s-dog.