Compare commits

...

849 Commits
v0.0.3 ... main

Author SHA1 Message Date
Mechiel Lukkien
833a67fe3d add config option to disable tls client auth during tls handshakes
to work around clients, like the gmail smtp client, that try to
authenticate with a webpki-issued certificate (which we don't know).

i tried specifying a list of accepted (subjects of) CA certs during the
tls handshake (with just 1 entry, with "xmox.nl" as common name), which
clients can use to influence their cert selection.  however, the gmail
smtp client ignores it, so not a solution for the issue where this was
raised. also, specifying a list of accepted certs could cause other
clients to not send their client cert anymore, breaking existing setups.

i also considered only asking for tls client auth when at least one
account has a tls pubkey configured. but decided against it since any
account can add one on their own (without system admin interaction),
changing behaviour of the system and potentially breaking existing
submission/tls configs.

we now also print the "subject" and "issuer" of certs when tls client
auth fails, should be useful for future debugging.

for issue #359
2025-06-09 12:33:10 +02:00
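As a rough illustration of what such a toggle maps to in Go's crypto/tls (a hedged sketch, not mox's actual code; "noTLSClientAuth" is a made-up name for the new option):

	package sketch

	import "crypto/tls"

	// With client auth enabled we only request a certificate (RequestClientCert),
	// never require one; with the new option set we stop asking entirely, so
	// clients like the gmail smtp client won't offer an unrelated webpki cert.
	func tlsServerConfig(cert tls.Certificate, noTLSClientAuth bool) *tls.Config {
		c := &tls.Config{
			Certificates: []tls.Certificate{cert},
			ClientAuth:   tls.RequestClientCert, // ask, but don't require
		}
		if noTLSClientAuth {
			c.ClientAuth = tls.NoClientCert // don't ask during the handshake at all
		}
		return c
	}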
Mechiel Lukkien
f5b8c64b84 fix typos 2025-06-09 10:00:53 +02:00
Mechiel Lukkien
f1259ee80e add config option to disable rate limiting for the webserver, and take a reverse proxy into account when finding the ip to use for webserver rate limiting
another approach i looked at was enabling/disabling rate limiting per
web handler. but we want to apply the rate limit as early as possible
(not after we've already done quite some work for the request), and with
per-handler rate limits on/off the code would be sprinkled with calls to
rate limiting. this is probably good enough for now. other protocols are
less likely to need this.

we were always using the ip address of the connection for rate limiting.
but some setups have a reverse proxy in front. if any handler on a
http/https port is marked as "forwarded" (with a reverse proxy), we use
the ip address from the x-forwarded-for header (like we already did for
authentication requests over http).

for issue #346
2025-05-15 20:16:49 +02:00
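A minimal sketch of the IP selection described above (illustrative, not mox's actual code; the "forwarded" flag stands in for the per-handler setting):

	package sketch

	import (
		"net"
		"net/http"
		"strings"
	)

	// ratelimitIP returns the IP to rate limit on: the connection's remote IP by
	// default, or the client IP from X-Forwarded-For when the matched handler is
	// marked as being behind a reverse proxy.
	func ratelimitIP(r *http.Request, forwarded bool) string {
		if forwarded {
			if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
				// The header can hold a comma-separated list; the last entry
				// was added by our own reverse proxy.
				l := strings.Split(xff, ",")
				return strings.TrimSpace(l[len(l)-1])
			}
		}
		host, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			return r.RemoteAddr
		}
		return host
	}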
Mechiel Lukkien
bb438488c5 add "Fail" transport, that immediately fails delivery
allows configs that prevent outgoing deliveries (globally, per domain,
or per account) from/to certain domains.

for issue #347
2025-05-15 17:59:49 +02:00
Mechiel Lukkien
91bfff220e add mx preference to smtpclient.GatherDestinations
mostly so moxtools can show the mx preferences in its output
2025-05-15 16:37:53 +02:00
Mechiel Lukkien
cc627af263 fix tests for previous commit
the build for go1.23 generated a different doc.go...
2025-05-15 14:42:07 +02:00
Mechiel Lukkien
76e58f4a63 fix building previous commit with go1.23 2025-05-15 14:25:18 +02:00
Mechiel Lukkien
4a14abc254 add "mox smtp dial" subcommand, for debugging connectivity issues
with various options for the tls connection.
2025-05-15 14:12:08 +02:00
Mechiel Lukkien
70bbfc8f10 fix findings from gopls 2025-05-13 09:33:37 +02:00
Mechiel Lukkien
aff279711c add ":z" to docker-compose volumes and remove deprecated version field from yml files
the ":z" is required on selinux systems, like fedora. and doesn't seem
to hurt on other systems.
2025-05-12 10:29:22 +02:00
Alice
2e0eea88b0
fix typos in referrer header
It should be referer in both cases, which, whilst misspelled, is the actual header name
2025-04-25 19:16:32 +01:00
Mechiel Lukkien
baacdbca18
when registering login attempts, use X-Forwarded-For header for finding the IP address
Which we already did for the rate limiting.

Hopefully solves issue #338.
2025-04-22 09:05:34 +02:00
Mechiel Lukkien
ee99e82cf4
add v0.0.15 to website and rotate apidiff 2025-04-18 21:25:37 +02:00
Mechiel Lukkien
b7262d536d
nit, tweaking release process order 2025-04-18 20:53:28 +02:00
Mechiel Lukkien
794ef75d17
accept incoming DMARC and TLS reports with reporting addresses containing catchall separator(s)
Such as "-" when addresses are dmarc-reports@ and tls-reports@.

Existing configuration files can have these combinations. We don't allow them
to be created through the webadmin interface, as this is a likely source of
confusion about how addresses will be matched. We already didn't allow regular
addresses containing catchall separators.
2025-04-18 12:36:01 +02:00
Mechiel Lukkien
4eddf5885d
change default dmarc & tls reporting address so they don't contain a dash
The defaults for a new domain were dmarc-reports@ and tls-reports@. But some
setups use "-" as catchall separator, which currently would cause messages to
those addresses to be rejected with a "no such user" smtp error.

Better to prevent these issues in the future by using dmarcreports@ and
tlsreports@ localparts.

The config checks don't enforce that the DMARC and TLS reporting addresses
don't contain the localpart catchall separator. A next commit will fix
accepting incoming reports to such addresses.
2025-04-18 11:39:45 +02:00
Mechiel Lukkien
53f391ad18
fix flaky test where closing the imapclient connection fails because the server has also closed the tls connection 2025-04-18 09:23:30 +02:00
Mechiel Lukkien
14af5bbb12
when reparsing all messages, actually store the new mime structure in the database 2025-04-18 09:05:09 +02:00
Mechiel Lukkien
75bb1bfa2f
queue: before removing files from the queue, close them, so removing doesn't fail on windows
Mostly relevant for localserve, since full operation doesn't work on windows.
2025-04-17 21:08:07 +02:00
Mechiel Lukkien
5f9f45983d
use smaller batch size when reparsing all messages, to stay responsive when making changes on slower machines 2025-04-17 09:47:53 +02:00
Mechiel Lukkien
0ce0296a9f
update public suffix list 2025-04-16 20:09:11 +02:00
Mechiel Lukkien
805ae0d827
update to latest golang.org/x dependencies 2025-04-16 20:06:58 +02:00
Mechiel Lukkien
1b2b152cb5
add "mox config account list", printing all accounts and whether they are disabled
based on a question from wisse on slack
2025-04-16 20:06:58 +02:00
Mechiel Lukkien
31c22618f5
automatically reparse all messages, in the background, after addition of header fields in the parsed mime form of messages in the message index database
With that recent change, we would keep track of Content-* headers of parsed
messages. We could ask admins to run a command to reparse messages for all
accounts. But instead we just do it automatically when opening the account. We
keep track whether we did the upgrade. And we do it in the background. Those
recent changes were to add optional fields to the IMAP fetch "bodystructure"
responses. There is a small chance that an IMAP client requests these fields
before they are properly populated with the reparse (only existing messages,
new incoming messages are parsed with the new code). We could try to detect
whether the upgrade has completed, and change IMAP behaviour based on that. But
the complexity and long-term maintenance burden doesn't seem worth it. Worst
case, we'll temporarily claim some relatively unimportant headers aren't
present on a message. Most email clients won't even look at those fields, but
will parse the message themselves instead.
2025-04-16 20:06:58 +02:00
Mechiel Lukkien
07533252b3
message: when parsing a message, don't treat absent header and empty header value the same
We now use "*string" for such header fields, for Content-* fields, as used in
the imapserver when responding to FETCH commands. We'll now return NIL for an
absent header, and "" (empty string) if the header value is empty.
2025-04-16 20:06:45 +02:00
Mechiel Lukkien
3fe765dce9
imapserver: fix fuzz tests
The acc.Close() at the end of the fuzzing would find inconsistencies. For
example, message files on disk that aren't in the database file. I don't
understand what is happening there, the database file on disk does have those
messages, and it seems the database file is getting replaced. Running the
same code not as a fuzzing test but as a regular Go test doesn't show the
problem. So it seems to be some interaction with fuzzing. The problem is
"solved" (feels more like side-stepped), by starting each fuzz test with a
clean database. We still open & close the account in each fuzz test, and it
doesn't find consistency problems.
2025-04-16 11:21:01 +02:00
Mechiel Lukkien
e7b562e3f2
imapclient: first step towards making package usable as imap client with other imap servers, and minor imapserver bug fix
The imapclient needs more changes, like more strict parsing, before it can be a
generally usable IMAP client, these are a few steps towards that.

- Fix a bug in the imapserver METADATA responses for TOOMANY and MAXSIZE.
- Split low-level IMAP protocol handling (new Proto type) from the higher-level
  client command handling (existing Conn type). The idea is that some simple
  uses of IMAP can get by with just using these commands, while more intricate
  uses of IMAP (like a synchronizing client that needs to talk to all kinds of
  servers with different behaviours and implemented extensions) can write custom
  commands and read untagged responses or command completion results
  explicitly. The lower-level method names have clearer names now, like
  ReadResponse instead of Response.
- Merge the untagged responses and (command completion) "Result" into a new
  type Response. Makes function signatures simpler. And make Response implement
  the error interface, and change command methods to return the Response as error
  if the result is NO or BAD. Simplifies error handling, and still provides the
  option to continue after a NO or BAD.
- Add UIDSearch/MSNSearch commands, with a custom "search program", so mostly
  to indicate these commands exist.
- More complete coverage of types for response codes, for easier handling.
- Automatically handle any ENABLED or CAPABILITY untagged response or response
  code for IMAP command methods on type Conn.
- Make difference between MSN vs UID versions of
  FETCH/STORE/SEARCH/COPY/MOVE/REPLACE commands more clear. The original MSN
  commands now have MSN prefixed to their name, so they are grouped together in
  the documentation.
- Document which capabilities are needed for a command.
2025-04-15 08:37:18 +02:00
Mechiel Lukkien
2c1283f032
imapclient: clean up function signature of New, allowing for future options too 2025-04-11 21:04:13 +02:00
Mechiel Lukkien
af3e9351bc
imapserver: simplify and fix logic around processing changes while opening a mailbox (with SELECT or EXAMINE)
We were first getting UIDs in a transaction with a lock. Then getting the
changes and processing them in a special way. And then processing for qresync
in a new transaction. The special processing of changes is now gone, it seems
to have skipped adding/removing uids to the session, which can't be correct.
The new approach just uses a lock and transaction to process the whole
opening of the mailbox, doesn't process any changes as part of the open, and
gets rid of the special "initial" mode for processing a mailbox.
2025-04-11 20:28:35 +02:00
Mechiel Lukkien
fd5167fdb3
imapserver: enable test that checked that an expunged message can still be read in sessions when they haven't processed the deletion yet.
We've been keeping track of references before we erase the message file for a
while now.
2025-04-11 18:27:42 +02:00
Mechiel Lukkien
1a6d268e1d
imapserver: check for UIDNEXT overflow when adding a message to a mailbox
Return an error, with instructions so a user may be able to work around the
issue.
2025-04-11 18:22:29 +02:00
Mechiel Lukkien
507ca73b96
imapserver: implement UIDONLY extension, RFC 9586
Once clients enable this extension, commands can no longer refer to "message
sequence numbers" (MSNs), but can only refer to messages with UIDs. This means
both sides no longer have to carefully keep their sequence numbers in sync
(error-prone), and don't have to keep track of a mapping of sequence numbers to
UIDs (saves resources).

With UIDONLY enabled, all FETCH responses are replaced with UIDFETCH responses.
2025-04-11 11:45:49 +02:00
Mechiel Lukkien
8bab38eac4
imapserver: implement NOTIFY extension from RFC 5465
NOTIFY is like IDLE, but where IDLE watches just the selected mailbox, NOTIFY
can watch all mailboxes. With NOTIFY, a client can also ask a server to
immediately return configurable fetch attributes for new messages, e.g. a
message preview, certain header fields, or simply the entire message.

Mild testing with evolution and fairemail.
2025-04-11 10:06:34 +02:00
Mechiel Lukkien
5a7d5fce98
run ineffassign (fast) before staticcheck (slow) 2025-04-07 18:40:54 +02:00
Mechiel Lukkien
902de0e1f9
queue: in log lines about delivery, we had both "attempts" starting at 0 and "attempt" starting at 1, keep only "attempts" starting at 1
from eric l, thanks!
2025-04-07 13:35:42 +02:00
Mechiel Lukkien
39c21f80cd
imapserver: return proper response for FETCH of "BODY[1.MIME]" where 1 is a message
MIME returns the part headers. If 1 is a message, i.e. a message/rfc822 or
message/global, for example when top-level is a multipart/mixed, we were
returning the MIME headers from the message, not from the part.

We also shouldn't be returning a MIME-Version header or the separating newline
for MIME. Those are for MIME headers of a message, but the "MIME" fetch body
part is always about the part.

Found after looking into FETCH BODY handling for issue #327.
2025-04-07 12:15:13 +02:00
Mechiel Lukkien
462568d878
webmail: for "cid"/content-id's used in html, look for them in all other parts, not just when there is a multipart/related in the message
The gmail apps generate messages consisting of multipart/mixed, with text/html
referring to a sibling image/jpeg. We weren't resolving that cid before.

Related to issue #327.
2025-04-07 11:10:14 +02:00
Mechiel Lukkien
2defbce0bc
imapserver: return all the extensible fields for bodystructure, notably for content-disposition
The gmail iOS/Android apps were showing mime image parts as (garbled) text
instead of rendering them as image. By returning all the optional fields in the
bodystructure fetch attribute, the gmail app renders the image as expected by
the user. So we now add all fields. We didn't before, because we weren't
keeping track of Content-MD5, Content-Language and Content-Location header
fields, since they aren't that useful.

Messages in mailboxes have to be reparsed:
	./mox reparse

Without reparsing, imap responses will claim the extra fields
(content-disposition) are absent for existing messages, instead of not claiming
anything at all, which is what we did before.

Accounts and all/some mailboxes can get their "uid validity" bumped ("./mox
bumpuidvalidity $account [$mailbox]"), which should trigger clients to load all
messages from scratch, but gmail doesn't appear to notice, so it would be
better to remove & add the account in gmail.

For issue #327, also relevant to issue #217.
2025-04-05 15:46:17 +02:00
Mechiel Lukkien
69d2699961
write base64 message parts with 76 data bytes on a line instead of 78
As required by RFC 2045 (MIME). The 78-byte lines work in practice, except that
SpamAssassin has rules that give messages with 78-byte lines spam points.

Mentioned by kjetilho on irc.
2025-04-03 10:22:15 +02:00
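For reference, the RFC 2045 limit is 76 encoded characters per line, and 57 raw bytes encode to exactly 76 base64 characters. A sketch of a compliant body writer (illustrative, not mox's implementation):

	package sketch

	import (
		"encoding/base64"
		"io"
	)

	// writeBase64 writes data as base64 with at most 76 encoded characters per
	// line, by encoding in 57-byte chunks (57 bytes -> 76 base64 characters).
	func writeBase64(w io.Writer, data []byte) error {
		const chunk = 57
		for len(data) > 0 {
			n := chunk
			if len(data) < n {
				n = len(data)
			}
			line := base64.StdEncoding.EncodeToString(data[:n])
			if _, err := io.WriteString(w, line+"\r\n"); err != nil {
				return err
			}
			data = data[n:]
		}
		return nil
	}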
Mechiel Lukkien
00c8db98e6
start more function names/calls with x when they handle errors through panics
mostly the imapserver and smtpserver connection write and read methods.
2025-04-02 13:59:46 +02:00
Mechiel Lukkien
deb57462a4
update list of sponsors, add logos and link to the nlnet projects 2025-04-02 11:24:59 +02:00
Mechiel Lukkien
479bf29124
imapserver: implement the MULTISEARCH extension, with its ESEARCH command 2025-03-31 18:34:23 +02:00
Mechiel Lukkien
5dcf674761
webmail: reconnect automatically in more cases
Before, we would only reconnect the SSE connection when the previous one lasted
10 minutes.  For some reason, firefox disconnects SSE connections when there is
any network change. Running the docker integration tests changes the network a
few times in quick succession, preventing further automatic reconnects.

This changes the "stop reconnection automatically" period from 10 minutes to 5
seconds.
2025-03-30 14:54:29 +02:00
Mechiel Lukkien
aba0061073
small tweak to docs and website, mentioning EAI in the context of internationalized email 2025-03-30 11:03:06 +02:00
Mechiel Lukkien
cc5e3165ea
imapserver: implement "inprogress" response code (RFC 9585) for keepalive during long search
For long searches in big mailboxes, without any matches, we would previously
keep working and not say anything. Clients could interpret this silence as a
broken connection at some point. We now send a "we're still searching" untagged
OK response with code INPROGRESS every 10 seconds while we're still searching,
to prevent the client from closing the connection. We also send how many
messages we've processed, and usually also how many we need to process in grand
total. Clients can use this to show a progress bar.
2025-03-30 10:43:02 +02:00
Mechiel Lukkien
3e128d744e
for the web interfaces, ensure the effective configured http paths end in a slash to prevent 404s and/or errors accessing the web interfaces
The default paths for the web interfaces, such as /admin/, /account/, /webmail/
and /webapi/ end with a slash. They should end with a slash because we use the
path when restricting cookies to just that web interface. You could configure
paths not ending with a slash, but due to using http.StripPrefix, and our
handler, some of those requests may not work properly.

We now warn if configured paths don't end with a trailing slash when parsing
the config file. We normally error out when such things happen, but users
probably have paths without trailing slashes configured, and we don't want to
break them on a future upgrade. We now use an effective path that includes the
trailing slash.

We would always redirect requests to the configured paths but without trailing
slash to the path with trailing slash, and that stays.

For issue #325 by odama626.
2025-03-29 22:00:55 +01:00
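A sketch of the effective-path idea (illustrative, not the actual mox code):

	package sketch

	import (
		"net/http"
		"strings"
	)

	// mount registers h under an effective path that always ends in a slash.
	// Configured paths without the trailing slash keep working (a config check
	// would only warn), and http.ServeMux itself redirects e.g. "/admin" to
	// "/admin/" for a registered subtree, so cookies stay scoped to the path.
	func mount(mux *http.ServeMux, configuredPath string, h http.Handler) {
		path := configuredPath
		if !strings.HasSuffix(path, "/") {
			path += "/" // effective path used for registration and cookies
		}
		mux.Handle(path, http.StripPrefix(strings.TrimSuffix(path, "/"), h))
	}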
Mechiel Lukkien
3a3a11560e
web interfaces: don't include version number in html, only return it after authentication
second round for issue #322
2025-03-29 20:46:53 +01:00
Mechiel Lukkien
eeeabdc6de
fix build with previous commit that didn't sync frontend
not at my sharpest...
2025-03-29 20:16:05 +01:00
Mechiel Lukkien
3ac38aacca
imapserver: fix storing previews when requested over imap and they are missing from the database
found while testing.
2025-03-29 20:13:10 +01:00
Mechiel Lukkien
6ab31c15b7
imapserver: actually announce PREVIEW extension 2025-03-29 18:28:33 +01:00
Mechiel Lukkien
a5d74eb718
webmail: add buttons to download a message as eml, and export 1 or more messages as mbox/maildir in zip/tgz/tar, like for entire mailboxes
Download as eml is useful with firefox, because opening the raw message in a
new tab, and then downloading it, causes firefox to request the url without
cookies, causing it to save a "403 - forbidden" response.

Exporting a selection is useful during all kinds of testing. Makes it easy to
export an entire thread, or just some messages.

The export popover now has buttons for each combination of mbox/maildir vs
zip/tgz/tar. Before you may have had to select the email format and archive
format first, followed by a click. Now it's just a click.
2025-03-29 18:10:23 +01:00
Mechiel Lukkien
d6e55b5f36
don't use strings.Lines, it's only available in go1.24 and we support go1.23 too 2025-03-28 18:20:18 +01:00
Mechiel Lukkien
68729fa5a3
in smtp banner and imap ID command response when unauthenticated, don't send the mox version number
Attackers scanning the internet can use it to easily create a database of
hosts, software and versions. Let's not make it too easy to find old versions
that may be vulnerable to potential bugs found in the future. We could try
hiding the name "mox" as well, but the banner will still be identifiable, so
there isn't much point, and the public knowing approximately which software is
running can be useful for debugging.

The ID command in IMAP is used by clients to announce their software and
version. We only respond with our version when the user is authenticated.

There are still ways to discover the version number. But they don't involve
standard banner scanning, so someone would have to specifically target mox. We
could tighten that in the future.

For issue #322, based on email. Thanks everyone for discussing.
2025-03-28 17:50:40 +01:00
Mechiel Lukkien
789e4875ca
update to latest bstore 2025-03-28 17:39:20 +01:00
Mechiel Lukkien
6bf80d91bc
sync frontend api doc/client
Forgot to build after change just before commit...
2025-03-28 17:39:20 +01:00
Mechiel Lukkien
aa631c604c
imapserver: implement PREVIEW extension (RFC 8970), and store previews in message database
We were already generating previews of plain text parts for the webmail
interface, but we didn't store them, so were generating the previews each time
messages were listed.

Now we store previews in the database for faster handling. And we also generate
previews for html parts if needed. We use the first part that has textual
content.

For IMAP, the previews can be requested by an IMAP client. When we get the
"LAZY" variant, which doesn't require us to generate a preview, we generate it
anyway, because it should be fast enough. So don't make clients first ask for
"PREVIEW (LAZY)" and then again a request for "PREVIEW".

We now also generate a preview when a message is added to the account. Except
for imports. It would slow us down, the previews aren't urgent, and they will
be generated on-demand at first-request.
2025-03-28 17:10:17 +01:00
Mechiel Lukkien
8b418a9ca2
update golang.org/x dependencies 2025-03-28 17:01:12 +01:00
Mechiel Lukkien
027e5754a0
update to go1.23 and replace golang.org/x/exp/maps with stdlib maps 2025-03-28 17:01:06 +01:00
Mechiel Lukkien
7a87522be0
rename variables, struct fields and functions to include an "x" when they can panic for handling errors
and document the convention in develop.txt.
spurred by running errcheck again (it has been a while). it still has too many
false positives to enable by default.
2025-03-24 16:12:22 +01:00
Mechiel Lukkien
a2c79e25c1
check and log errors more often in deferred cleanup calls, and log remote-induced errors at lower priority
We normally check errors for all operations. But for some cleanup calls, eg
"defer file.Close()", we didn't. Now we also check and log most of those.
Partially because those errors can point to some mishandling or unexpected code
paths (eg a file unexpectedly already closed). And in part to make it easier to use
"errcheck" to find the real missing error checks, there is too much noise now.

The log.Check function can now be used unconditionally for checking and logging
about errors. It adjusts the log level if the error is caused by a network
connection being closed, or a context is canceled or its deadline reached, or a
socket deadline is reached.
2025-03-24 14:06:05 +01:00
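The deferred-cleanup pattern could look roughly like this; the check helper below is a sketch following the description (using the stdlib slog), not mox's actual log.Check:

	package sketch

	import (
		"context"
		"errors"
		"io"
		"log/slog"
		"net"
		"os"
	)

	// check logs a non-nil error, at debug level when it is an expected
	// remote- or deadline-induced error.
	func check(log *slog.Logger, err error, msg string) {
		if err == nil {
			return
		}
		level := slog.LevelError
		if errors.Is(err, net.ErrClosed) || errors.Is(err, context.Canceled) ||
			errors.Is(err, context.DeadlineExceeded) || errors.Is(err, os.ErrDeadlineExceeded) {
			level = slog.LevelDebug
		}
		log.Log(context.Background(), level, msg, "err", err)
	}

	func copyFile(log *slog.Logger, dst io.Writer, path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		// Cleanup errors are checked and logged instead of silently dropped.
		defer func() { check(log, f.Close(), "closing source file") }()
		_, err = io.Copy(dst, f)
		return err
	}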
Mechiel Lukkien
15a8ce8c0b
fix warnings by ineffassign, with one actual issue
In store/search.go, we would make a copy of a byte array, but then still use
the original instead of the copy. Could result in search operations not finding
messages that do have the content, but under very unlikely conditions only.

We'll keep running ineffassign with "make check", useful enough.
2025-03-24 10:25:33 +01:00
Mechiel Lukkien
04b1f030b7
update to latest bstore, which now properly handles modifications during Query.ForEach 2025-03-24 10:02:50 +01:00
Mechiel Lukkien
88ec5c6fbe
add rfc 4155 about mbox files, and cross-reference in the import/export code for mbox files 2025-03-23 13:59:09 +01:00
Mechiel Lukkien
a68a9d8a48
check whether mailboxes have message/etc counts through an "upgrade" boolean flag
Instead of using the per-mailbox flag, and going through all mailboxes when
opening an account.
2025-03-23 12:52:59 +01:00
Mechiel Lukkien
b37faa06bd
After queueing a message in the web APIs, don't let context cancelation prevent the follow-up message changes from completing
Adding to the queue is done in a transaction, the queue db file is mox-global.
Appending the message to the Sent folder, removing it from Drafts, marking the
original message as answered/forwarded, is done in a separate database
transaction that gets the ctx passed in. If the ctx was canceled in between,
the queueing was finished, but the rest wasn't completed.

Reported by mteege, thanks!
2025-03-23 11:07:39 +01:00
Mechiel Lukkien
b0e4dcdb61
sync to latest autocert 2025-03-21 21:47:59 +01:00
Mechiel Lukkien
773d8cc959
update to latest github.com/mjl-/adns, synced to go1.24.1 2025-03-21 18:42:02 +01:00
Mechiel Lukkien
70aedddc90
webmail: when composing, no longer remove the last remaining To address with the ctrl+backspace shortcut
On reply, with too many Cc/Bcc, I usually hit ctrl+backspace a few time. I just
want to clear the addresses, but I practically always still want a To address.
2025-03-21 13:51:53 +01:00
Mechiel Lukkien
297e83188c
Check for queued messages when removing an address, and more completely cleanup accounts when removing.
When removing an address, we want to make sure any queued messages for the
account still have their address associated with the account. E.g.
through a catchall address.

Before removing an account, we fail deliveries still in the queue for the
account. We remove any addresses on the suppression list (which are stored in
the queue database, not the account database file that is removed completely).
We also clear all sessions for the webmail/webaccount interfaces. For the
webmail, further operations will fail, and the reconnection attempt will cause
the login popup with a message about an unknown session token.
2025-03-21 13:36:10 +01:00
Mechiel Lukkien
75036c3a71
Before moving message files in imapserver and webmail API, ensure the message destination directory for the newly assigned IDs exist.
Example symptom, when deleting a message in the webmail (which moves to Trash):

        l=error m="duplicating message in old mailbox for current sessions" err="link data/accounts/mjl/msg/I/368638 data/accounts/mjl/msg/J/368640: no such file or directory" pkg=webmail

Problem introduced a few weeks ago, where moving messages started duplicating
the message first, and the copy is erased once all references (in IMAP
sessions) to the old mailbox have been removed.
2025-03-21 10:18:39 +01:00
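The fix amounts to creating the destination directory before linking; a minimal sketch (paths and names illustrative):

	package sketch

	import (
		"os"
		"path/filepath"
	)

	// linkMessage hard-links a message file to the location for its newly
	// assigned ID, creating the destination directory first: new IDs can land
	// in a directory that doesn't exist yet.
	func linkMessage(oldPath, newPath string) error {
		if err := os.MkdirAll(filepath.Dir(newPath), 0770); err != nil {
			return err
		}
		return os.Link(oldPath, newPath)
	}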
Mechiel Lukkien
99f9eb438f
Minor cleanup: use the ModSeq from the Mailbox in a ChangeMailboxAdd, no need to add the ModSeq again 2025-03-20 00:10:47 +01:00
Mechiel Lukkien
9ca50ab207
imapserver: When trying to replace a message in a non-existent mailbox, do still consume the message if it is a non-synchronized literal
Not likely to happen in the wild.
2025-03-19 22:00:34 +01:00
Mechiel Lukkien
5294a63c26
When logging structs, do log fields of type time.Time (timestamps)
The simplistic logging approach we've followed so far is to not log struct
fields that are themselves structs, which time.Time is. So we skipped it, but do
log it now.
2025-03-19 21:52:31 +01:00
Mechiel Lukkien
719dc2bee1
webmail: Don't abort SSE connection when a metadata/annotation change is made (broadcasted)
Missing case...
2025-03-16 14:02:45 +01:00
Mechiel Lukkien
26793e407a
imapserver: Fix broadcasting change when modifying metadata key
We were not broadcasting the correct change, at least the modseq was missing in
case of an update.
2025-03-16 13:57:44 +01:00
Mechiel Lukkien
ac4b006ecd
When removing an account, wait until the last account reference has gone away before removing the files.
The intent to remove the account is stored in the database. At startup, if
there are any such references, they are applied by removing the account
directories and the entry in the database. This ensures the account directory
is properly removed, even on incomplete shutdown.

Don't add an account when its directory already exists.
2025-03-15 14:20:35 +01:00
Mechiel Lukkien
c4255a96f8
In tests, make initializing store/, its switchboard and an account more consistent.
Initialize store and switchboard first, then open account, and close in reverse
order.

Replace all "CheckClosed" calls with "WaitClosed", future changings will keep
an account reference open for a bit after the last regular close, so we can't
know that an account should be closed during tests.

Remove one parameter from the (still too long) "start test server" function in
imapserver testing code.
2025-03-15 11:15:23 +01:00
Mechiel Lukkien
eadbda027c
Fix bug gathering "changes" to broadcast during a mailbox rename in certain situations
We weren't appending the individual changes to the slice, but the entire slice.
Since "Change" is an "any", this isn't a type error. So make a Change a
non-empty interface (I had seen an issue like this coming, should have made it
an interface then, at least now we have a reasonable method to get the modseq
of a change).

Found while working on an imap webpush prototype.
2025-03-15 10:45:35 +01:00
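The bug class in isolation (names illustrative): with an empty interface element type, appending the whole slice as one element still compiles.

	package sketch

	// Change is the broadcast change type. With "any" as the element type both
	// appends below type-check; only the second one is correct. A non-empty
	// interface turns the first into a compile error.
	type Change any

	func gatherChanges(perMessage []Change) []Change {
		var changes []Change
		changes = append(changes, perMessage)    // bug: appends the slice as a single element
		changes = append(changes, perMessage...) // fix: appends the individual changes
		return changes
	}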
Mechiel Lukkien
0cf0bfb8a6
We won't be implementing IMAP UNAUTHENTICATE.
Doesn't seem like it's a common thing to do. And it's just a bit risky, it's
too easy to forget to clear some part of the authentication state on a
connection (especially future changes that forget to clear a new field
during unauthenticate). If a strong use case ever pops up, we can reconsider.

Also update the roadmap a bit.
2025-03-12 10:01:00 +01:00
Mechiel Lukkien
60da7f34b8
Make error message in imapserver tests about missing untagged responses more readable. 2025-03-10 19:00:44 +01:00
Mechiel Lukkien
397fd1f5e7
imapserver: Make list of announced capabilities more readable.
And merge the duplicate list of capabilities. We had each on a line for
cross-referencing with the RFC, and all capabilities again but on a single line
to use in the server greeting. Now it's just one list.
2025-03-10 11:50:32 +01:00
Mechiel Lukkien
a553a107f0
Cleanup temporary files created during IMAP APPEND command.
Since a recent change (likely since implementing MULTIAPPEND), the temporary
files weren't removed any more. When changing it, I must have had the wrong
mental model about the MessageAdd method, assuming it would remove the temp
file.

Noticed during tests.
2025-03-10 09:26:24 +01:00
Mechiel Lukkien
0857e81a6c
Prevent spurious warnings about thread ids not being correct for messages that are expunged but not yet erased.
Erasing a message (removing the message file from the file system) was made a
separate step a few days ago. The verifydata command checks for consistency of
the data, but didn't correctly skip checking expunged-but-not-yet-erased
messages, leading to the warning.

A similar consistency check in store/account.go does check for that.

I was warned by my nightly backup+verifydata script.
2025-03-08 09:03:41 +01:00
Mechiel Lukkien
2314397078
Fix recently introduced bug when authenticating with a password.
In case the precis check failed, our return of a nil account cleared acc, and
we were then trying to close it, resulting in a nil pointer dereference.

Rewrite the return statements so we don't overwrite the named return variables.
2025-03-07 21:30:20 +01:00
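The shape of that bug, sketched minimally (illustrative, not the actual code):

	package sketch

	import "errors"

	type Account struct{ open bool }

	func (a *Account) Close() { a.open = false } // dereferences a: panics when a is nil

	// acc is a named return. The early "return nil, err" overwrites acc before
	// the deferred cleanup runs, so the defer calls Close on a nil pointer.
	// Setting only the error (and not reassigning acc) avoids the crash.
	func open(precisOK bool) (acc *Account, rerr error) {
		acc = &Account{open: true}
		defer func() {
			if rerr != nil {
				acc.Close()
			}
		}()
		if !precisOK {
			return nil, errors.New("precis check failed") // acc becomes nil here
		}
		return acc, nil
	}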
Mechiel Lukkien
1c58d38280
webmail: When completing a recipient address, quote the "name" if necessary for proper interpretation.
Especially relevant when the name contains a comma, e.g. "lastname, firstname".
Or when it contains parentheses, e.g. "(organization)".

When sending to an address with a comma that isn't quoted, we would actually
interpret it as two addresses: One without an "@" before the comma, and the
second part after the comma with half of the name and the email address. This
resulted in an error message.

When sending to a recipient with unquoted parentheses in the name, those
parentheses would be interpreted as a generic email header comment, and left
out.

For issue #305 by mattfbacon.
2025-03-07 15:48:24 +01:00
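For reference, the same quoting rule as expressed by Go's stdlib (the fix itself lives in the webmail's TypeScript frontend; addresses below are illustrative):

	package main

	import (
		"fmt"
		"net/mail"
	)

	func main() {
		// Address.String quotes display names containing specials like "," or "(":
		//   "Lastname, Firstname" <user@example.com>
		a := mail.Address{Name: "Lastname, Firstname", Address: "user@example.com"}
		fmt.Println(a.String())

		// Unquoted, "(Organization)" would be parsed as a header comment and dropped.
		b := mail.Address{Name: "Firstname (Organization)", Address: "user@example.com"}
		fmt.Println(b.String())
	}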
Mechiel Lukkien
9a8bb1134b
Allow multiple localpart catch all separators, e.g. both "+" and "-", for addresses you+anything@example.com and you-anything@example.com
The original config option stays, and we still use it for the common case where
we have a single separator. The "+" is configured by default. It is optional,
just like the new option "LocalpartCatchallSeparators" (plural).

When parsing the config file, we combine LocalpartCatchallSeparator and
LocalpartCatchallSeparators into a single list
LocalpartCatchallSeparatorsEffective, which we use throughout the code.

For issue #301 by janc13
2025-03-07 14:42:19 +01:00
Mechiel Lukkien
d0b241499f
smtpserver: In localserve mode, don't reject messages "From" domain "localhost" if it doesn't resolve to an IP
Mox does not look up names from the /etc/hosts file, only through DNS. But
"localhost" may not resolve through DNS, or when offline a DNS server may not
even be available. We will want deliveries to work in "mox localserve" mode.

Found by dstotijn.
2025-03-07 11:39:24 +01:00
Mechiel Lukkien
2fc75b5b7b
When adding a new domain, only set up RSA DKIM keys, not ed25519.
We'll need RSA DKIM keys for a long time to come because many systems don't
support ed25519 DKIM signatures. We've been adding both types of keys when
adding a new domain, and adding two DKIM signatures to outgoing messages.
This works fine in practice, other mail servers are correctly ignoring the
ed25519 signature if they don't understand it. Unfortunately, it causes noise
in DMARC reports: Systems will warn that a DKIM check failed.  Sometimes with a
vague message about a missing key, or a 0-bit key. Sometimes they leave the
selector out of the report, making it hard to understand what's going on.  This
causes postmasters to investigate because they think something is wrong, only
to eventually find out it's all fine. So we're causing needless chores for
postmasters. By having only an RSA DKIM signature, we skip that noise. This
also reduces the number of DNS records postmasters have to add for a domain.

The small ed25519 DKIM DNS TXT records would make them preferable over the
long multi-string RSA DKIM DNS TXT records (which are often hard to add
correctly through DNS operator web interfaces), but as mentioned, we'll have to
add the RSA DKIM keys anyway.

Another reason why RSA keys _may_ be preferable over ed25519 keys is that with
RSA, signing is more computationally expensive than verifying, while it's the
other way around for ed25519 keys.

Admins can always add an ed25519 DKIM key to their domain. And we can always
switch back to adding them to new domains by default in the future.

For issue #299.
2025-03-07 11:15:29 +01:00
Mechiel Lukkien
d78aa9d1d7
Fix previous commit, add missing error check and minor test refactor.
Unclear how I botched this up at the last minute before committing...
2025-03-07 10:30:55 +01:00
Mechiel Lukkien
51f58a52c9
When opening an account, check for unexpected message files in the file system, and adjust the next message ID autoincrement sequence in the database to prevent future message delivery failures.
Just to be cautious. This hasn't happened yet in practice that I'm aware of.
But in theory, mox could crash after it has written the message file during
delivery, but before the database transaction was committed. In that case, a
message file for the "next message id" is already present. Any future delivery
attempts will get assigned the same message id by the database, but writing the
file will fail because there already is one, causing delivery to fail (until
the file is moved away).

When opening an account, we now check in the file system if newer files exist
than we expect based on the last existing message in the database. If so, we
adjust the message ID the database will assign next.
2025-03-07 10:15:27 +01:00
Mechiel Lukkien
493cfee3e1
Mention NLnet funding continued in 2024/2025. 2025-03-06 20:26:25 +01:00
Mechiel Lukkien
64f2f788b1
Run modernize to rewrite some older go constructs to newer ones
Mostly using slice.Sort, using min/max, slices.Concat, range of int and
fmt.Appendf for byte slices instead of strings.
2025-03-06 17:33:06 +01:00
Mechiel Lukkien
f6132bdbc0
imapserver: Disable compress=deflate extension
It still blocks on reading partial flushes from clients, preventing progress
and eventually timing out. The flate library needs more changes to make this
work.

Connections from iOS mail sometimes timed out, not always.

The extension is simply not announced, code is still present.
2025-03-06 11:36:33 +01:00
Mechiel Lukkien
e572d01341
Don't allow mailboxes named "." or ".." and normalize names during imports too
It only serves to confuse. When exporting such mailboxes in zip files or tar
files, extracting will cause trouble.
2025-03-06 11:36:33 +01:00
Mechiel Lukkien
7872b138a5
Use consistent lower-case names when logging tls version and ciphersuite
Less shouty than upper case names.
2025-03-06 11:36:33 +01:00
Mechiel Lukkien
aa2b24d861
webserver: don't raise a 500 server error for static file requests with overlong names
The Open call returns an errno ENAMETOOLONG. We didn't handle that specially,
so turned it into a "500 internal server error" response. When serving static
files, we should just return "404 file not found" errors. The file obviously
does not exist.

Saw a few overlong requests from bad bots not recognizing "data:" uris inlined
in html files, trying to request them.
2025-03-06 11:36:33 +01:00
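A sketch of the error mapping (illustrative, not the webserver's actual code):

	package sketch

	import (
		"errors"
		"net/http"
		"os"
		"syscall"
	)

	// serveFile answers 404 when the requested name cannot exist (not found, or
	// too long for the filesystem), and 500 only for genuinely unexpected errors.
	func serveFile(w http.ResponseWriter, r *http.Request, path string) {
		f, err := os.Open(path)
		if err != nil {
			if errors.Is(err, os.ErrNotExist) || errors.Is(err, syscall.ENAMETOOLONG) {
				http.NotFound(w, r)
				return
			}
			http.Error(w, "500 - internal server error", http.StatusInternalServerError)
			return
		}
		defer f.Close()
		// ... stat the file and hand it to http.ServeContent as usual ...
	}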
Mechiel Lukkien
06b7c8bba0
Fix fuzzing for imapserver
Broken since introducing LoginAttempts. The fuzzing functions didn't get the
store.Init() call, and would hang on trying to send to the loginattemptwriter.
2025-03-06 11:36:33 +01:00
Mechiel Lukkien
edfc24a701
rename a few variables for code consistency 2025-03-06 11:36:33 +01:00
Mechiel Lukkien
96667a87eb
Run go test with the -fullpath flag
Makes it easy to open the file in subpackages when an error occurs.
2025-03-06 11:36:29 +01:00
Mechiel Lukkien
a5c64e4361
make code less indented 2025-03-06 11:35:44 +01:00
Mechiel Lukkien
577944310c
Improve expunged message/UID tracking in IMAP sessions, track synchronization history for mailboxes/annotations.
Keeping the message files around, and the message details in the database, is
useful for IMAP sessions that haven't seen/processed the removal of a message
yet and try to fetch it. Before, we would return errors. Similarly, a session
that has a mailbox selected that is removed can (at least in theory) still read
messages.

The mechanics to do this need keeping removed mailboxes around too. JMAP needs
that anyway, so we now keep modseq/createseq/expunged history for mailboxes
too. And while we're at it, for annotations as well.

For future JMAP support, we now also keep the mailbox parent id around for a
mailbox, with an upgrade step to set the field for existing mailboxes and
fixing up potential missing parents (which could possibly have happened in an
obscure corner case that I doubt anyone ran into).
2025-03-06 11:35:44 +01:00
Mechiel Lukkien
684c716e4d
Add missing wlocks around message delivery to account, mostly for tests. 2025-03-06 11:35:43 +01:00
Mechiel Lukkien
2da280f2bb
Fail tests if unhandled panics happened.
We normally recover from those situations, printing stack traces instead of
crashing the program. But during tests, we're not looking at the prometheus
metrics or all the output. Without these checks, such panics could go
unnoticed. Seems like a reasonable thing to add, unhandled panics haven't been
encountered in tests.
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
bc50c3bf7f
In imapserver with RENAME of Inbox, we didn't check for the metadata quota.
Rename of Inbox is special, it copies the mailbox including metadata.
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
f5b67b5d3d
Clean up the loginattemptclear goroutine with store.Close()
It is called a lot from the test code, so it would spawn lots of those goroutines.
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
2beb30cc20
Refactor how messages are added to mailboxes
DeliverMessage() is now MessageAdd(), and it takes a Mailbox object that it
modifies but doesn't write to the database (the caller must do it, and plenty
of times can do it more efficiently by doing it once for multiple messages).
The new AddOpts let the caller influence how many checks and how much of the
work MessageAdd() does. The zero-value AddOpts enable all checks and all the
work, but callers can take responsibility for some of the checks/work if they
can do it more efficiently themselves.

This simplifies the code in most places, and makes it more efficient. The
checks to update per-mailbox keywords are a bit simpler too now.

We are also more careful to close the junk filter without saving it in case of
errors.

Still part of more upcoming changes.
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
7855a32852
switch from docker-compose to "docker compose"
now that my laptop doesn't have docker-compose anymore
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
82371ad15b
simplify cleaning up temp files in gentestdata.go 2025-03-06 11:35:43 +01:00
Mechiel Lukkien
9ce552368b
Minor tweaks. 2025-03-06 11:35:43 +01:00
Mechiel Lukkien
ea64936a67
Cleanup message file when DeliverMailbox fails.
Part of larger changes.
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
5ba51adb14
When retraining ham/spam messages, don't make existence of the messages optional.
If messages that should exist don't, that's a real error we don't want to hide.
Part of larger changes.
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
3b731b7afe
various nits 2025-03-06 11:35:43 +01:00
Mechiel Lukkien
7756150a69
Small tweak to LinkOrCopy, including defer for error handling 2025-03-06 11:35:43 +01:00
Mechiel Lukkien
ffc7ed96bc
When delivering a message to a mailbox, remember last dir we delivered to
In the common case, it's the same as the previous delivery. That means we don't
have to try to create the directory (fewer syscalls) and that we can sync the
dir to disk.

This also tweaks the defer handling in case of a late failure.
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
1037a756fa
when delivering a message to a mailbox, lazily parse the parsed form of the message
it isn't always needed, so this can improve performance a bit.
came up as part of other refactoring.
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
3050baa15a
consistently use store.CloseRemoveTempFile for closing and removing temp files 2025-03-06 11:35:43 +01:00
Mechiel Lukkien
b822533df3
imapserver: Don't keep account write-locked during IMAP FETCH command
We effectively held the account write-locked by using a writable transaction
while processing the FETCH command. We did this because we may have to update
\Seen flags, for non-PEEK attribute fetches. This meant other FETCHes would
block, and other write access to the account too.

We now read the messages in a read-only transaction. We gather messages that
need marking as \Seen, and make that change in one (much shorter) database
transaction at the end of the FETCH command.

In practice, it doesn't seem too sensible to mark messages as seen
automatically. Most clients probably use the PEEK-variant of attribute fetches.

Related to issue #128.
2025-03-06 11:35:43 +01:00
Mechiel Lukkien
caaace403a
Add package smtp as fuzzing target since its addition in previous commit
The previous commit fixed an array out of bounds access that resulted in a
panic on an smtpserver connection. The panic is recovered and marked as
"unhandled panic" in metrics and the connection closed.
2025-03-06 11:15:25 +01:00
Martin Holst Swende
f10bb2c1ae
smtp: add data reader fuzzer + fix OOB read 2025-03-06 09:57:13 +01:00
Mechiel Lukkien
44d37892b8
imapserver: REPLACE commands when in read-only mode should fail 2025-02-26 18:39:41 +01:00
Mechiel Lukkien
d7bd50b5a5
imapserver: fix spurious test failure due to recently added account consistency check
By removing the message file while holding the account wlock. We were seeing
messages that weren't removed yet.
2025-02-26 18:33:01 +01:00
Mechiel Lukkien
f235b6ad83
imapclient: log traces of sensitive data with traceauth, and of bulk data with tracedata
Similar to the imapserver. This also fixes tracing of APPEND messages, which
was completely absent before.
2025-02-26 18:13:20 +01:00
Mechiel Lukkien
9c40205343
imapserver: Prevent spurious test failures due to compression layer being closed and TLS close-writes failing 2025-02-26 15:41:46 +01:00
Mechiel Lukkien
062c3ac182
when writing updated word counts to the junk filter, remove entries where both counts are 0
no point in keeping them around.
also pass on error when getting a word from database returned an error.
2025-02-26 15:07:27 +01:00
Mechiel Lukkien
394bdef39d
In storage consistency checks, verify the junk filter has the expected word counts
Fix up a test or two. Simplify the XOR logic when we train the junk filter:
Only train if junk or nonjunk is set, but not when both (or neither) are set, i.e. when
the values aren't the same.

Locking the account when we do consistency checks prevents spurious test
failures that may have been introduced in the previous commit.
2025-02-26 14:44:05 +01:00
Mechiel Lukkien
aa85baf511
add consistency check (enabled during tests only) for unexpected message files in the account message directory 2025-02-26 11:40:36 +01:00
Mechiel Lukkien
17de90e29d
imapserver: Prevent spurious unhandled panics for connections with compress=deflate that break
Writing to a connection goes through the flate library to compress. That writes
the compressed bytes to the underlying connection. But that underlying
connection is wrapped to raise a panic with an i/o error instead of returning a
normal error.  Jumping out of flate leaves the internal state of the compressor
in undefined state. So far so good. But as part of cleaning up the connection,
we could try to flush output again. Specifically: If we were writing user data,
we had switched from tracing of protocol data to tracing of user data, and we
registered a defer that restored the tracing kind and flushed (to ensure data
was traced at the right level). That flush would cause a write into the
compressor again, which could panic with an out of bounds slice access due to
its inconsistent internal state.

This fix prevents that compressor panic in two ways:

1. We wrap the flate.Writer with a moxio.FlateWriter that keeps track of
   whether a panic came out of an operation on it. If so, any further operation
   raises the same panic. This prevents access to the inconsistent internal flate
   state entirely.
2. Once we raise an i/o error, we mark the connection as broken and that makes
   flushes a no-op.
2025-02-26 11:26:54 +01:00
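A sketch of the first fix, a writer wrapper that remembers a panic (the real moxio.FlateWriter may differ in detail):

	package sketch

	import (
		"compress/flate"
		"io"
	)

	// flateWriter wraps a flate.Writer and remembers a panic that escaped one of
	// its methods. Any later use re-raises the same panic instead of touching the
	// compressor's now-inconsistent internal state.
	type flateWriter struct {
		fw       *flate.Writer
		panicked any // non-nil after a panic escaped a method
	}

	func newFlateWriter(w io.Writer) (*flateWriter, error) {
		fw, err := flate.NewWriter(w, flate.DefaultCompression)
		if err != nil {
			return nil, err
		}
		return &flateWriter{fw: fw}, nil
	}

	func (w *flateWriter) Write(p []byte) (int, error) {
		w.guard()
		defer w.capture()
		return w.fw.Write(p)
	}

	func (w *flateWriter) Flush() error {
		w.guard()
		defer w.capture()
		return w.fw.Flush()
	}

	func (w *flateWriter) guard() {
		if w.panicked != nil {
			panic(w.panicked)
		}
	}

	func (w *flateWriter) capture() {
		if x := recover(); x != nil {
			w.panicked = x
			panic(x)
		}
	}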
Mechiel Lukkien
ea55c85938
for trace logging, log size of the data (but not for redacted auth data, could be a password) 2025-02-26 10:14:07 +01:00
Mechiel Lukkien
92a87acfcb
Implement IMAP REPLACE extension, RFC 8508.
REPLACE can be used to update draft messages as you are editing, instead of
requiring an APPEND, a STORE of \Deleted, and an EXPUNGE. REPLACE works
atomically.

It has a syntax similar to APPEND, just allows you to specify the message to
replace that's in the currently selected mailbox. The regular REPLACE-command
works on a message sequence number, the UID REPLACE commands on a uid. The
destination mailbox, of the updated message, can be different. For example to
move a draft message from the Drafts folder to the Sent folder.

We have to do quite some bookkeeping, e.g. for updating (message) counts for
the mailbox, checking quota, un/retraining the junk filter. During a
synchronizing literal, we check the parameters early and reject if the replace
would fail (eg over quota, bad destination mailbox).
2025-02-25 23:27:19 +01:00
Mechiel Lukkien
1066eb4c9f
imapclient: add a type Append for messages for the APPEND-command, and accept multiple for servers with MULTIAPPEND capability
and a few nits.
2025-02-25 23:24:37 +01:00
Mechiel Lukkien
88a68e9143
imapserver: properly accept literal8 for APPEND, since we claim to implement the BINARY extension
it's not just for the APPEND with "UTF8()", also any regular append needs to
accept literal8. found testing with pimalaya.
2025-02-25 23:07:56 +01:00
Mechiel Lukkien
78e0c0255f
imapserver: implement MULTIAPPEND extension, rfc 3502
MULTIAPPEND modifies the existing APPEND command to allow multiple messages. it
is somewhat more involved than a regular append of a single message since the
operation (of adding multiple messages) must be atomic. either all are added,
or none are.

we check as early as possible if the messages won't cause an over-quota error.
2025-02-24 15:47:47 +01:00
Mechiel Lukkien
b56d6c4061
imapserver: try harder to get the user-agent (from the ID command) into the loginattempt
our previous approach was to hope clients did the ID command right after the
AUTHENTICATE command. with more extensions implemented, that's a stretch,
clients are doing other commands in between.

the new approach is to allow more commands, but wait at most 1 second. clients
are still assumed to send the ID command soon after authenticate. we also still
ensure login attempts are logged on connection teardown, so we aren't missing
any logging, just may get it slightly delayed. seems reasonable.

we now also keep the useragent value around, and we use it when initializing the
login attempt. because the ID command can happen at any time, also before the
AUTHENTICATE command.
2025-02-24 09:54:38 +01:00
Mechiel Lukkien
d27fc1e7fc
gofmt 2025-02-23 22:40:34 +01:00
Mechiel Lukkien
f117cc0fe1
website: mention tls-alpn-01 and http-01 acme challenge types are implemented, but not dns-01 yet
prompted by question by rawtaz on irc
2025-02-23 22:28:07 +01:00
Mechiel Lukkien
0ed820e3b0
imapserver: implement rfc 9590, returning metadata in the extended list command
only with "return" including "metadata". so clients can quickly get certain
metadata (eg for display, such as a color) for mailboxes.

this also adds a protocol token type "mailboxt" that properly encodes to utf7
if required.
2025-02-23 22:12:18 +01:00
Mechiel Lukkien
2809136451
imap metadata extension: allow keys in the /shared/ namespace too
not just /private. /shared/ is the more commonly implemented namespace, because
it is easier to implement: you don't need per-user/account storage of metadata.
i initially approached it from the other direction: we don't have a mechanism
to share metadata with other accounts, so everything is private, and i assumed
that would be what a user would prefer. but email clients make the decisions,
and they'll likely try the /shared/ namespace.
2025-02-23 20:19:07 +01:00
Mechiel Lukkien
463e801909
add more rfc's and shuffle roadmap once more 2025-02-23 12:08:11 +01:00
Mechiel Lukkien
3b224ea0c2
consistent simpler parsing of domains in cli commands
prompted by previous commit, making me look at dns.ParseDomain calls.
2025-02-23 11:34:51 +01:00
Mechiel Lukkien
151729af08
in dns.ParseDomain, don't allow ipv4 addresses (ipv6 addresses were already rejected)
we are expecting a DNS domain name there.
also highlighted a wrong test in the smtp server.
2025-02-23 11:33:31 +01:00
Mechiel Lukkien
797c1cf9f0
do not log an error for tls requests with ipv6 addresses as sni server name
ip addresses are invalid in server names. for ipv6 addresses, the
autocert.GetCertificate calls would return an error, which we logged, and
increased a metric about. but the alerts for this situation aren't helpful. so
recognize ip addresses early. if we are lenient about unknown server names (for
incoming smtp deliveries), we switch to the fallback hostname, otherwise we
return an error.

this was the error logged:

	l=error m="requesting certificate" err="acme/autocert: server name component count invalid"

for ipv4 addresses, the name wouldn't be in our allowlist and should already
have caused us to switch to the fallback hostname.
2025-02-23 10:46:39 +01:00
Mechiel Lukkien
cad585a70e
webmail: when trying to empty an already empty mailbox, make it a user error, not server error
server errors could cause error logging.
2025-02-22 23:11:34 +01:00
Mechiel Lukkien
9f3cb7340b
update modseq when changing mailbox/server metadata, and also for specialuse changes, and keep track of modseq for mailboxes
i added the metadata extension to the imapserver recently. then i wondered how
a client would efficiently find changed metadata. turns out the qresync rfc
mentions that metadata changes should set a new modseq on the mailbox.
shouldn't be hard, except that we were not explicitly keeping track of modseqs
per mailbox. we only kept them for messages, and we were just looking up the
latest message modseq when we needed the modseq (we keep db entries for
expunged messages, so this worked out fine). that approach isn't enough
anymore. so now we keep track of modseq & createseq for mailboxes, just as for
messages. and we also track modseq/createseq for annotations. there's a good
chance jmap is going to need it.

this also adds consistency checks for modseq/createseq on mailboxes and
annotations to the account storage. it helped spot cases i missed where the
values need to be updated.
2025-02-22 22:52:18 +01:00
Mechiel Lukkien
7c7473ef0e
fix tests on bsds, since previous commit
the tls resumption test was failing due to the switch from net.Pipe to unix domain
socket pairs. on bsds, they have an empty name (on linux it is "@"), which
prevents tls resumption from working.
2025-02-21 20:38:37 +01:00
Mechiel Lukkien
f40f94670e
implement IMAP extension COMPRESS=DEFLATE, rfc 4978
to compress the entire IMAP connection. tested with thunderbird, meli, k9, ios
mail. the initial implementation had interoperability issues with some of these
clients: if they write the deflate stream and flush in "partial mode", the go
stdlib flate reader does not return any data (until there is an explicit
zero-length "sync flush" block, or until the history/sliding window is full),
blocking progress, resulting in clients closing the seemingly stuck connection
after considering the connection timed out. this includes a copy of the flate
package with a new reader that returns partially flushed blocks earlier.

this also adds imap trace logging to imapclient.Conn, which was useful for
debugging.
2025-02-21 14:56:17 +01:00
Mechiel Lukkien
3f6c45a41f
for trace-level logging in console format (as opposed to logfmt), print the trace as quoted string
so we can easily see the exact bytes on the wire, instead of having \n's
expanded as newlines. much easier to read. we had this in the past, but it must
have been lost in a refactor.
2025-02-20 17:42:00 +01:00
Mechiel Lukkien
95d2002e77
announce support for namespace extension in imap capabilities line
we already implemented it as part of imap4rev2, but older clients need to be
told we implement it.
2025-02-20 08:32:33 +01:00
Mechiel Lukkien
a458920721
pass "go vet" again, can't use unkeyed struct fields from other package 2025-02-19 23:06:11 +01:00
Mechiel Lukkien
6ed97469b7
imapclient: parse fetch attribute "internaldate" as time.Time instead of keeping it as string
similar to the SAVEDATE fetch attribute implemented recently.
2025-02-19 23:01:23 +01:00
Mechiel Lukkien
02c4715724
remove intention to implement \important special-use mailbox and $important message flag, rfc 8457
they are intended to be used by the server to automatically mark some messages
as important, based on server-defined heuristics. we don't have such heuristics
at the moment. perhaps in the future, but until then there are no plans.
2025-02-19 22:44:04 +01:00
Mechiel Lukkien
5e4d80d48e
implement the WITHIN IMAP extension, rfc 5032
for IMAP "SEARCH" command criteria "YOUNGER" and "OLDER".
2025-02-19 21:29:14 +01:00
Mechiel Lukkien
dcaa99a85c
implement IMAP CREATE-SPECIAL-USE extension for the mailbox create command, part of rfc 6154
we already supported special-use flags. settable through the webmail interface,
and new accounts already got standard mailboxes with special-use flags
predefined. but now the IMAP "CREATE" command implements creating mailboxes
with special-use flags.
2025-02-19 20:39:26 +01:00
Mechiel Lukkien
7288e038e6
implement imap savedate extension, rfc 8514
it makes a new field available on stored messages. not when they were
received (over smtp) or appended to the mailbox (over imap), but when they were
last "saved" in the mailbox. copy/move of a message (eg to the trash) resets
the "savedate" value. this helps implement "remove messages from trash after X
days".
2025-02-19 17:11:20 +01:00
Mechiel Lukkien
cbe5bb235c
fix data race in code for logging login attempts
logging of login attempts happens in the background, because we don't want to
block regular operation with disk syncs for such logging. however, when a line
is logged, we evaluate some attributes of a connection, notably the username.
but when we do authentication, we change the username on a connection. so
we were reading and writing at the same time. this is now fixed by evaluating
the attributes before we pass off the logger to the goroutine.

found by the go race detector.
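
the general shape of the fix, as a small self-contained sketch (hypothetical
names, not mox's actual types):

	package main

	import "fmt"

	type conn struct {
		username string
		remoteIP string
	}

	func logAttempt(username, remoteIP string) {
		fmt.Println("login attempt:", username, remoteIP)
	}

	func main() {
		c := &conn{remoteIP: "198.51.100.1"}

		// racy: go func() { logAttempt(c.username, c.remoteIP) }()
		// the goroutine would read c.username later, while authentication
		// code writes it concurrently.

		// fixed: evaluate the attributes now, hand plain copies to the goroutine.
		username, remoteIP := c.username, c.remoteIP
		go logAttempt(username, remoteIP)

		c.username = "mjl@example.org" // this write no longer races with the logger
		// (a real program would synchronize before exiting)
	}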
2025-02-19 15:23:19 +01:00
Mechiel Lukkien
de6262b90a
make test for imap getmetadata reliable by sorting output by key 2025-02-19 14:58:22 +01:00
Mechiel Lukkien
f30c44eddb
implement the imap metadata extension, rfc 5464
this allows setting per-mailbox and per-server annotations (metadata). we have
a fixed maximum for total number of annotations (1000) and their total size
(1000000 bytes). this size isn't held against the regular quota for simplicity.
we send unsolicited metadata responses when a connection is in the idle
command and a change to a metadata item is made.

we currently only implement the /private/ namespace.  we should implement the
/shared/ namespace, for mox-global metadata annotations.  only the admin should
be able to configure those, probably through the config file, cli, or admin web
interface.

for issue #290
2025-02-17 22:57:33 +01:00
Mechiel Lukkien
9dff879164
in domain/dns self-check, for unused services, check that port is 0 like how we told users to configure it and fix checking for errors during srv lookups 2025-02-16 17:42:24 +01:00
Mechiel Lukkien
1c4bf8909c
webmail: when forwarding, include the subject,date,from,reply-to,to,cc headers in the message
mentioned some time ago by ilijamt
2025-02-16 16:45:02 +01:00
Mechiel Lukkien
4765bf3b2c
shuffle entries in roadmap
it hasn't been updated in a while. this isn't the full picture either, but at
least closer to the planned order.
2025-02-16 16:28:48 +01:00
Mechiel Lukkien
3d0dc3a79d
in domain/dns self-check, for unexpected SRV records for "srv autoconfig", show the values of the unexpected records
should be more helpful in understanding what's wrong.

feedback from mteege, thanks!
2025-02-16 16:21:01 +01:00
Mechiel Lukkien
6f678125a5
in domain/dns self-check, provide config snippet for HostTLSRPT if it isn't configured and the admin should check again for the DNS records
feedback from mteege, thanks!
2025-02-16 16:12:44 +01:00
Mechiel Lukkien
1d6f45e592
in domain/dns self-check, don't warn about reverse dns that resolves to multiple names
this is fine. we just need to check if the expected name is among them.

feedback from mteege, thanks!
2025-02-16 15:55:31 +01:00
Mechiel Lukkien
6da5f8f586
add config option to an account destination to reject messages that don't pass a dmarc-like aligned spf/aligned dkim check
intended for automated processors that don't want to send messages to senders
without verified domains (because the address may be forged, and the processor
doesn't want to bother innocent bystanders).

such delivery attempts will fail with a permanent error immediately, typically
resulting in a DSN message to the original sender. the configurable error
message will normally be included in the DSN, so it could have alternative
instructions.
2025-02-15 17:34:06 +01:00
Mechiel Lukkien
f33870ba85
move the large commands for generating api docs to separate shell script 2025-02-15 12:56:59 +01:00
Mechiel Lukkien
3e53abc4db
add account config option to prevent the account from setting their own custom password, and enable by default for new accounts
accounts with this option enabled can only get a new randomly
generated password. this prevents password reuse across services and weak
passwords. existing accounts keep their current ability to set custom
passwords. only admins can change this setting for an account.

related to issue #286 by skyguy
2025-02-15 12:44:18 +01:00
Mechiel Lukkien
09975a3100
when warning about weak passwords, mention passwords reused at other services in particular
based on issue #286
2025-02-15 11:48:10 +01:00
Mechiel Lukkien
46c1693ee9
when delivering over smtp, do not require the other server to announce the 8bitmime extension unless in pedantic mode
all relevant systems nowadays should be accepting "8-bit" messages. before this
change, we would fail delivery for 8bit messages when the remote server doesn't
announce the 8bitmime smtp extension.  even though that system would likely
just accept our message.

also, there's a good chance the non-8bitmime-supporting system is some
intermediate minimal mail server like openbsd spamd, which was fixed to
announce the 8bitmime extension in the past year.

in theory, we could rewrite the message to be 7bit-only if it is a mime
message. but it's probably not worth the trouble.  also see
https://cr.yp.to/smtp/8bitmime.html

as alternative to PR #287 by mattanja (who also reported the issue on matrix),
thanks!
2025-02-15 10:11:17 +01:00
BlankEclair
93b627ceab
main: fix reading passwords longer than 64 bytes
Fixes #284
2025-02-09 22:55:38 +11:00
Mechiel Lukkien
c210b50433
update publicsuffix list to latest version
and add note to (pre)release process to update it
2025-02-07 12:02:39 +01:00
Mechiel Lukkien
2f0997682b
quickstart: check if domain was registered recently, and warn about potential deliverability issues
we use 6 weeks as the cutoff, but this is fuzzy, and will vary by mail
server/service provider.

we check the domain age using RDAP, the replacement for whois. it is a
relatively simple protocol, with HTTP/JSON requests. we fetch the
"registration"-related events to look for a date of registration.
RDAP is not available for all country-level TLDs, but is for most (all?) ICANN
global top level domains. some random cctlds i noticed without rdap: .sh, .au,
.io.

the rdap implementation is very basic, only parsing the fields we need. we
don't yet cache the dns registry bootstrap file from iana. we should once we
use this functionality from the web interface, with more calls.
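
a rough sketch of such a lookup (not the mox implementation; the rdap.org
aggregator url and the 6-week cutoff are just illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
		"time"
	)

	type rdapDomain struct {
		Events []struct {
			EventAction string    `json:"eventAction"`
			EventDate   time.Time `json:"eventDate"`
		} `json:"events"`
	}

	func main() {
		// rdap.org redirects to the authoritative registry's RDAP server.
		resp, err := http.Get("https://rdap.org/domain/example.com")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var d rdapDomain
		if err := json.NewDecoder(resp.Body).Decode(&d); err != nil {
			panic(err)
		}
		for _, e := range d.Events {
			if e.EventAction == "registration" && time.Since(e.EventDate) < 6*7*24*time.Hour {
				fmt.Println("domain was registered recently:", e.EventDate)
			}
		}
	}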
2025-02-07 11:22:39 +01:00
Mechiel Lukkien
c7354cc22b
also unicode-normalize usernames (email addresses) when logging into the imapserver and webapps
and don't do needless normalization for the username we got from scram: the
scram package would have failed if the name wasn't already normalized.

unicode may not be specified for sasl with imap (i'm not sure), but there's no
point in accepting it over smtpserver but not in imapserver.
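
the normalization itself is a one-liner; a tiny illustration with
golang.org/x/text/unicode/norm (illustrative, not mox's login path):

	package main

	import (
		"fmt"

		"golang.org/x/text/unicode/norm"
	)

	func main() {
		a := "andré@example.com"       // é as a single composed code point
		b := "andre\u0301@example.com" // e followed by a combining acute
		fmt.Println(a == b)                                   // false
		fmt.Println(norm.NFC.String(a) == norm.NFC.String(b)) // true
	}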
2025-02-06 15:38:45 +01:00
Mechiel Lukkien
7b3ebb2647
imapserver: remove unreachable check for logindisabled
given the "false" flag above when opening the account by email.
the login disabled case is handled after the various auth mechanisms in a
single place.

noticed while making other changes.
2025-02-06 15:28:01 +01:00
Mechiel Lukkien
e5e15a3965
add prometheus metrics for errors when getting certificates through acme (typically from let's encrypt)
and add an alerting rule for it.
we certainly want a heads up when there are issues with the certificates.
2025-02-06 15:12:36 +01:00
Mechiel Lukkien
1277d78cb1
keep track of login attempts, both successful and failures
and show them in the account and admin interfaces. this should help with
debugging, to find misconfigured clients, and potentially find attackers trying
to login.

we include details like login name, account name, protocol, authentication
mechanism, ip addresses, tls connection properties, user-agent. and of course
the result.

we group entries by their details. repeat connections don't cause new records
in the database, they just increase the count on the existing record.

we keep data for at most 30 days. and we keep at most 10k entries per account.
to prevent unbounded growth. for successful login attempts, we store them all
for 30d. if a bad user causes so many entries this becomes a problem, it will
be time to talk to the user...

there is no pagination/searching yet in the admin/account interfaces. so the
list may be long. we only show the 10 most recent login attempts by default.
the rest is only shown on a separate page.

there is no way yet to disable this. may come later, either as global setting
or per account.
2025-02-06 14:16:13 +01:00
Mechiel Lukkien
d08e0d3882
webmail: fix dark mode
broken in v0.0.14, probably when introducing the css variables.
i had noticed this issue at the time, and thought i fixed it, but clearly not.

for issue #278, reported by gdunstone
2025-02-03 18:28:48 +01:00
Mechiel Lukkien
091faa8048
webmail: fix parsing search filter "start:<date>" and "end:<date>"
we were only properly parsing values of "<date>T<time>" or just "<time>".
so you could select a date in the form (or type it), but it would be treated as
just a word of text to search for in messages. so it would quietly do the wrong
thing.
2025-01-30 12:15:44 +01:00
Mechiel Lukkien
ef77f58e08
webmail: add button to create a mailbox below another one
before this, you could use the button at the top of the list of mailboxes to
create a submailbox somewhere, and you would have to specify the full path of
the new mailbox name. now you can just open up your Lists/.../ mailbox, and
create a mailbox below that hierarchy.
2025-01-30 11:55:57 +01:00
Mechiel Lukkien
ad26fd265d
webmail: add button to mark a mailbox and its children as read
this sets the seen flag on all messages in the mailbox and its children.
2025-01-30 11:50:52 +01:00
Mechiel Lukkien
c8fd9ca664
webmail: after clicking on the "create mailbox" button, automatically put focus on the input field for the new mailbox name 2025-01-30 11:02:12 +01:00
Mechiel Lukkien
f9280b0891
reduce logging about db schema initializations during tests
they were a bit too noisy, not helpful
2025-01-30 10:21:16 +01:00
Mechiel Lukkien
807d01ee21
simplify/cleanup common smtpserver test code 2025-01-29 21:56:00 +01:00
Mechiel Lukkien
ec7904c0ee
add fail2ban snippet to FAQ
from unguamorray in issue #274
2025-01-29 20:58:31 +01:00
Mechiel Lukkien
df17ae2321
in email to postmaster about new mox version, don't mention "mox backup" explicitly, it's in all the release notes nowadays 2025-01-29 20:27:33 +01:00
Mechiel Lukkien
6ed736241d
also use "password-encrypted" for the 2nd autoconfig configuration
intended for deltachat, which doesn't look at the value. encrypted may be a
better default.

as discussed in #251
2025-01-27 08:31:13 +01:00
Mechiel Lukkien
49e2eba52b
add cli command "mox admin imapserve $preauthaddress"
for admins to open an imap connection preauthenticated for an account (by address), even when
logins are disabled for the account.

useful for migrations. the admin typically doesn't know the password of the
account, so couldn't get an imap session (for synchronizing) before.

tested with "mox localserve" and running:

	mutt -e 'set tunnel="MOXCONF=/home/mjl/.config/mox-localserve/mox.conf ./mox admin imapserve mox@localhost"'

may also work with interimap, but untested.

i initially assumed imap would be done fully on file descriptor 0, but mutt
expects imap output on fd 1. that's the default now. flag -fd0 is for others
that expect it on fd0.

for issue #175, suggested by DanielG
2025-01-25 22:18:26 +01:00
Mechiel Lukkien
2d3d726f05
add config options to disable a domain and to disable logins for an account
to facilitate migrations from/to other mail setups.

a domain can be added in "disabled" mode (or can be disabled/enabled later on).
you can configure a disabled domain, but incoming/outgoing messages involving
the domain are rejected with temporary error codes (as this may occur during a
migration, remote servers will try again, hopefully to the correct machine or
after this machine has been configured correctly). also, no acme tls certs will
be requested for disabled domains (the autoconfig/mta-sts dns records may still
point to the current/previous machine). accounts with addresses at disabled
domains can still login, unless logins are disabled for their accounts.

an account now has an option to disable logins. you can specify an error
message to show. this will be shown in smtp, imap and the web interfaces. it
could contain a message about migrations, and possibly a URL to a page with
information about how to migrate. incoming/outgoing email involving accounts
with login disabled are still accepted/delivered as normal (unless the domain
involved in the messages is disabled too). account operations by the admin,
such as importing/exporting messages, still work.

in the admin web interface, listings of domains/accounts show if they are disabled.
domains & accounts can be enabled/disabled through the config file, cli
commands and admin web interface.

for issue #175 by RobSlgm
2025-01-25 20:39:20 +01:00
Mechiel Lukkien
132efdd9fb
don't use non-constant for string formatting
found by go1.24rc
2025-01-24 17:00:39 +01:00
Mechiel Lukkien
3e2695323c
add config option to reject incoming deliveries with an error during the smtp transaction
useful when a catchall is configured, and messages to some address need to be
rejected.

would have been nicer if this could be part of a ruleset. but evaluating a
ruleset requires us to have the message (so we can match on headers, etc). but
we can't reject messages to individual recipients during the DATA command in
smtp. that would reject the entire delivery attempt.

for issue #156 by ally9335
2025-01-24 16:51:21 +01:00
Mechiel Lukkien
8b26e3c972
consistently add details about configuration errors when parsing domains.conf
e.g. which domain, account, address, alias, the error is about.

we were adding context some of the time. this introduces helpers for adding
errors that make it easier to add details to the error messages.
2025-01-24 15:06:55 +01:00
Mechiel Lukkien
890c75367a
mox backup: skip message files that were added to queue or account message directories while making the backup, instead of storing them and warning about them
by storing them, a restore may need the -fix flag to become usable again.
it makes more sense to just skip these files. they are not part of the
consistent snapshot.
2025-01-24 12:24:57 +01:00
Mechiel Lukkien
76e96ee673
Change "mox backup $destdir" from storing only data files to $destdir to storing those under $destdir/data and now also copying config files to $destdir/config. (#150)
Upgrade note: Admins may want to check their backup scripts.

Based on feedback in issue #150.
2025-01-24 11:45:43 +01:00
Mechiel Lukkien
3d52efbdf9
fix apidiff.sh to always generate a new apidiff/next.txt file 2025-01-23 23:02:36 +01:00
Mechiel Lukkien
6aa2139a54
do not use results from junk filter if we have less than 50 positive classifications to base the decision on
useful for new accounts. we don't want to start rejecting incoming messages for
having a score near 0.5 because of too little training material. we err on the
side of allowing messages in. the user will mark them as junk, training the
filter. once enough non-junk has come in, we'll start the actual filtering.

for issue #64 by x8x, and i've also seen this concern on matrix
2025-01-23 22:55:50 +01:00
Mechiel Lukkien
8fac9f862b
attempt to fix workflow again
sigh, this is why you don't use cloud tools that you can't run locally...
2025-01-23 18:40:05 +01:00
Mechiel Lukkien
7df54071d7
update to github action actions/upload-artifact@v4 from v3
we'll now get a coverage file artifact for each of the builds. we do two
builds, and the last was likely overwriting the coverage file "artifact" of the
first.

hopefully fixes the test. can't test it locally...
2025-01-23 18:29:43 +01:00
Mechiel Lukkien
acc1c133b0
admin check: do not raise error when forward-confirmed reverse dns does not match hostname
this should be relatively common with setups involving NAT.
so we do warn about it when NAT isn't active since it could highlight potential
misconfiguration.

for issue #239 by exander77
2025-01-23 18:11:00 +01:00
s0ph0s
3c77e076e2
Add support for negotiating IMAP and SMTP on the HTTPS port 443 using TLS ALPN "imap" and "smtp"
Intended for future use with chatmail servers. Standard email ports may be
blocked on some networks, while the HTTPS port may be accessible.

This is a squashed commit of PR #255 by s0ph0s-dog.
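
a minimal sketch of the negotiation mechanism (handler names are hypothetical,
this is not the code from the PR): offer the extra ALPN protocol ids on the
https listener and dispatch on what was negotiated.

	package main

	import (
		"crypto/tls"
		"log"
		"net"
	)

	// hypothetical handlers; real ones would speak imap, smtp submission, http.
	func serveIMAP(c net.Conn) { c.Close() }
	func serveSMTP(c net.Conn) { c.Close() }
	func serveHTTP(c net.Conn) { c.Close() }

	func main() {
		cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem") // illustrative paths
		if err != nil {
			log.Fatal(err)
		}
		cfg := &tls.Config{
			Certificates: []tls.Certificate{cert},
			NextProtos:   []string{"imap", "smtp", "http/1.1"},
		}
		ln, err := tls.Listen("tcp", ":443", cfg)
		if err != nil {
			log.Fatal(err)
		}
		for {
			conn, err := ln.Accept()
			if err != nil {
				continue
			}
			go func(c *tls.Conn) {
				if err := c.Handshake(); err != nil {
					c.Close()
					return
				}
				switch c.ConnectionState().NegotiatedProtocol {
				case "imap":
					serveIMAP(c)
				case "smtp":
					serveSMTP(c)
				default:
					serveHTTP(c)
				}
			}(conn.(*tls.Conn))
		}
	}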
2025-01-23 11:16:20 +01:00
Mechiel Lukkien
0203dfa9d9
webmail: fix nil pointer dereference when searching for attachment types, eg "a:spreadsheet"
for issue #272 by mattfbacon
2025-01-23 11:03:08 +01:00
Mechiel Lukkien
008de1cafb
webmail: in message view, under More, add button to open currently displayed part (either text or html) as raw text (but decoded if in base64/quoted-printable/etc). 2025-01-22 21:19:24 +01:00
Mechiel Lukkien
7647264a72
web interfaces: when there is no login session, and a non-existent path is requested, mention the web interface this is about
may help users understand when /admin/ isn't enabled on a hostname but the
account web interface is at /. the error will now say: no session for "account"
web interface. it hopefully tells users that their request isn't going to an
admin interface, but ends up at the account web interface.

for issue #268
2025-01-22 20:15:14 +01:00
Mechiel Lukkien
f15f2d68fc
webmail: more helpful error message when emptying a mailbox that is already empty
and mention in a tooltip too that "empty mailbox" only affects messages in the
mailbox, not submailboxes or their messages.

prompted by a question on matrix/irc
2025-01-22 20:09:19 +01:00
Mechiel Lukkien
315f10d5f2
add release to website 2025-01-20 12:54:45 +01:00
Mechiel Lukkien
5fcea1eb3b
rotate apidiff/next.txt for release 2025-01-20 12:49:20 +01:00
Mechiel Lukkien
be1065a6c4
add another makefile testing target 2025-01-13 23:23:00 +01:00
Mechiel Lukkien
b85401a83d
fix command gentestdata for testing upgrades
not working since tlspubkey auth
2025-01-13 23:22:14 +01:00
Mechiel Lukkien
dd92ed5117
update to latest golang.org/x dependencies 2025-01-13 22:29:42 +01:00
Mechiel Lukkien
871f70151c
smtpserver: allow using a "message from" address from an allowed alias as smtp mail from during submission
mail clients will use these message from addresses also for smtp mail from, so
sending over smtp would fail for these cases. for the webmail and webapi they
already succeeded since we just took the "message from" address as "smtp mail
from" address.

for issue #266 by Robby-, thanks for reporting!
2025-01-13 21:34:59 +01:00
Mechiel Lukkien
d4d2a0fd99
webmail: when listing messages in backend to send to frontend, don't error out when there's a large plain text part
by not trying to parse the full message for the MessageItem, but only reading
headers when needed.

before previous commit, we wouldn't try reading such messages in full either.
2025-01-13 16:13:25 +01:00
Mechiel Lukkien
1e15a10b66
webmail: fix js error rerendering additional headers after updated keywords
i've seen the error a few times:

	msgheaderElem.children[(msgheaderElem.children.length - 1)] is undefined

i've seen it happen after sending a reply (with the "answered" flag added).
the updateKeywords callback would render the message again, but the code for
rendering the "additional headers" table rows again was making invalid
assumptions.

the approach is now changed. the backend now just immediately sends the
additional headers to the frontend. before, the frontend would first render the
base message, then render again once the headers came in for the parsed
message. this also prevents a reflow for the (quite common) case that one of
the additional headers is present in the message.
2025-01-13 14:53:43 +01:00
Mechiel Lukkien
f7193bd4c3
webmail: fix css to not show text on button (actually html "a" element for links) for downloaded (visited) attachments in blue 2025-01-13 11:22:44 +01:00
Mechiel Lukkien
5a14a5b067
smtpserver: when doing slow writes due to spammy incoming delivery, try a bit harder to prevent a timeout for the other side (if it is mox/itself!)
based on question from wneessen
2025-01-13 11:13:26 +01:00
Mechiel Lukkien
b8bf99e082
ensure kind "acme-tls-alpn-01" is registered on the http handler
previous code couldn't possibly be triggered by my reading.

encountered during PR #255
2025-01-13 10:43:55 +01:00
Mechiel Lukkien
eb88e2651a
dkim: add reference to rfc that says not to accept rsa keys < 1024 bits
saw it mentioned on HN recently
2025-01-13 10:35:25 +01:00
Mechiel Lukkien
e5eaf4d46f
fix race in imapserver tests 2024-12-25 16:50:23 +01:00
Mechiel Lukkien
9b429cce4f
try harder to start docker integration tests with clean slate
for some reason "docker-compose down" takes a very long time, and doesn't
actually stop containers if you add a timeout.
2024-12-25 16:44:54 +01:00
Mechiel Lukkien
965a2b426f
webadmin: when loading the page with webserver routes, internal service handlers would always be shown with "admin" as the internal service, and saving the handler would overwrite the correct setting
fix this by properly loading the correct internal service.

for issue #264 reported by kiekerjan, thanks!
2024-12-24 22:02:28 +01:00
Mechiel Lukkien
f7666d1582
fix verifying dane-ta connections for outgoing email where the dane-ta record is not for the first certificate in the chain after the leaf certificate.
tls servers send a list of certificates for the connection. the first is the
leaf certificate. that's the one for the server itself. that's the one we want
to verify. the others are intermediate CA's. and possibly even the root CA
certificate that it hopes is trusted at the client (though sending it doesn't
make it trusted). with dane-ta, the public key of an intermediate or root CA
certificate is listed in the TLSA record. when verifying, we add any
intermediate/root CA that matches a dane-ta tlsa record to the trusted root CA
certs. we should also have added CA certs that didn't match a TLSA record to
the "intermediates" of x509.VerifyOptions. because we didn't,
x509.Certificate.Verify couldn't verify the chain from the trusted dane-ta ca
cert to the leaf cert. we would only verify a dane-ta connection correctly if
the dane-ta-trusted ca cert was the one immediately following the leaf cert,
not when there were one or more additional intermediate certs.

this showed when connecting to mx.runbox.com.

problem reported by robbo5000 on matrix, thanks!
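
a simplified sketch of the verification shape after the fix (hypothetical
names, not the actual mox code):

	package example

	import "crypto/x509"

	// chain[0] is the leaf the remote server sent, the rest are CA certs.
	// daneTAMatch reports whether a cert matches a DANE-TA (usage 2) TLSA record.
	func verifyDANETA(chain []*x509.Certificate, hostname string, daneTAMatch func(*x509.Certificate) bool) error {
		roots := x509.NewCertPool()
		intermediates := x509.NewCertPool()
		for _, cert := range chain[1:] {
			if daneTAMatch(cert) {
				roots.AddCert(cert) // trusted through the TLSA record
				continue
			}
			// the fix: CA certs that don't match a TLSA record still go into
			// Intermediates, so Verify can build a chain when the trusted
			// DANE-TA cert isn't directly after the leaf.
			intermediates.AddCert(cert)
		}
		_, err := chain[0].Verify(x509.VerifyOptions{
			DNSName:       hostname,
			Roots:         roots,
			Intermediates: intermediates,
		})
		return err
	}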
2024-12-21 16:09:53 +01:00
Mechiel Lukkien
aa9a06680f
update to golang.org/x/net/html (slow parsing fixed) and other golang.org/x deps 2024-12-21 09:44:11 +01:00
Mechiel Lukkien
d082aaada8
only use constant strings in string formatting
builds with go1.24rc1 fail on these.
only the case in smtpserver could be triggered externally.
2024-12-14 09:38:56 +01:00
Mechiel Lukkien
5320ec1c5b
quickstart: for -existing-webserver, also add a tls key/cert placeholder for mail.$domain
unless mail.$domain is the mx hostname.

after question about which tls certs are needed from robbo5000 on matrix
2024-12-08 10:18:57 +01:00
Mechiel Lukkien
2255ebcf11
quickstart: write all output to a file "quickstart.log" for later reference
quite some output is printed. you could remember to tee it all to a file. but
that's probably often realized only after having run the quickstart. you can
also copy/paste it all from the terminal, but that's sometimes annoying to do.
writing to a file is more helpful to users.

this has been requested a few times in the past on irc/matrix (i forgot who).
2024-12-07 21:14:43 +01:00
Mechiel Lukkien
35af7e30a6
do not try to get a tls cert for autoconfig.<domain> at startup if there is no listener with autoconfig enabled
reduces needless logging in setups that don't use autoconfig.
2024-12-07 20:28:52 +01:00
Mechiel Lukkien
cbe418ec59
try clarifying that aliases are lists, not to be used for simply adding an address to an account
for issue #244 by exander77
2024-12-07 19:10:02 +01:00
Mechiel Lukkien
f7b58c87b1
instead of using loglevel error for printing a warning, just log it at "warn" level, and don't log message parsing errors at loglevel error
Mechiel Lukkien
94fb48c2dc
mox retrain: make the account parameter optional, and retrain all accounts when absent
for more easily retraining all accounts. users should be retraining their
accounts with the next release, due to the fix in the previous commit.
2024-12-07 17:00:00 +01:00
Mechiel Lukkien
17baf9a883
junk filter: fix adjusting word counts after train/untrain
after seeing some junk messages pass the filter, i investigated word counts in
junkfilter.db. i had seen suspicious counts that were just around powers of
two. did not make sense at the time. more investigating makes it clear: instead
of setting new word counts when updating the junk filter, we were adding the
new value to the current value (instead of just setting the new value). so the
counts got approximately doubled when being updated.

users should retrain the junk filter after this update using the "retrain"
subcommand.

this also adds logging for the hypothetical case where numbers would get
decreased below zero (which would wrap around due to uints).

and this fixes junk filter tests that were passing wrong parameters to
train/untrain...
2024-12-07 16:53:53 +01:00
Mechiel Lukkien
69a4995449
move func PartStructure from webhook to queue, so it isn't tracked anymore for apidiff changes
the types in webhook should be subjected to apidiff'ing, this was a shared
function. it is better off in package queue. also change the apidiff script so
it leaves apidiff/next.txt empty when there aren't any changes. makes it easier
to rotate the files after releases where nothing changed (a common occurrence).
2024-12-07 13:57:07 +01:00
Mechiel Lukkien
0871bf5219
move checking whether a message needs smtputf8 (has utf8 in any of the header sections) to package message 2024-12-07 13:05:09 +01:00
Mechiel Lukkien
3f727cf380
webmail: move 2 config options from localstorage to the settings popup, storing their values on the server
these settings are applied anywhere the webmail is open.  the settings are for
showing keyboard shortcuts in the lower right after a mouse interaction, and
showing additional headers.  the shortcuts were configurable in the "help" popup
before.  the additional headers were only configurable through the developer
console before.

the "mailto:" (un)register buttons are now in the settings popup too.
2024-12-07 12:32:54 +01:00
Mechiel Lukkien
4d3c4115f8
webmail: don't bind to shortcuts ctrl-l, ctrl-u and ctrl-I
ctrl-l is commonly "focus on browser address bar".
ctrl-u is commonly "view source".
ctrl-I (shift i) is commonly "open developer console".

these keys are more useful to leave for the browser.  ctrl-l and ctrl-u (moving
to a message without opening it) can still be had by also pressing shift.
the previous ctrl-shift-i (show all headers) is now just ctrl-i.

this has been requested in the past on irc/matrix (i forgot who).
2024-12-07 12:29:12 +01:00
Mechiel Lukkien
0a77bc5955
tweak documentation for sasl and scram 2024-12-06 15:59:22 +01:00
Mechiel Lukkien
ce75852b7c
add missing space in x-mox-reason that's been bothering me for a while 2024-12-06 15:49:22 +01:00
Mechiel Lukkien
b750668152
add metrics that track how many error/warn/info logging is happening 2024-12-06 15:07:42 +01:00
Mechiel Lukkien
056b571fb6
webmail: don't consume keyboard events while login form is open
e.g. ctrl-l, for going to address bar to go to another site.
2024-12-06 14:57:20 +01:00
Mechiel Lukkien
e59f894a94
add an option for the smtp delivery listener to enable/disable tls session tickets
the field is optional. if absent, the default behaviour is currently to disable
session tickets. users can set the option if they want to try if delivery from
microsoft is working again. in a future version, we can switch the default to
enabling session tickets.

the previous fix was to disable session tickets for all tls connections,
including https. that was a bit much.

for issue #237
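
the underlying knob is the stdlib one; roughly (the listener field and its
plumbing here are illustrative):

	package example

	import "crypto/tls"

	func deliveryTLSConfig(certs []tls.Certificate, enableSessionTickets bool) *tls.Config {
		return &tls.Config{
			Certificates:           certs,
			SessionTicketsDisabled: !enableSessionTickets, // default: disabled
		}
	}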
2024-12-06 14:50:02 +01:00
Mechiel Lukkien
42793834f8
add Content-Disposition and Filename to the payload of incoming webhooks
for each message part. The ContentDisposition value is the base value without
header key/value parameters. the Filename field is the likely filename of the
part. the different email clients encode filenames differently. there is a
standard mime mechanism from rfc 2231. and there is the q/b-word-encoding from
rfc 2047. instead of letting users of the webhook api deal with those
differences, we provide just the parsed filename.

for issue #258 by morki, thanks for reporting!
2024-12-06 14:19:39 +01:00
Mechiel Lukkien
8804d6b60e
implement tls client certificate authentication
the imap & smtp servers now allow logging in with tls client authentication and
the "external" sasl authentication mechanism. email clients like thunderbird,
fairemail, k9, macos mail implement it. this seems to be the most secure among
the authentication mechanism commonly implemented by clients. a useful property
is that an account can have a separate tls public key for each device/email
client.  with tls client cert auth, authentication is also bound to the tls
connection. a mitm cannot pass the credentials on to another tls connection,
similar to scram-*-plus. though part of scram-*-plus is that clients verify
that the server knows the client credentials.

for tls client auth with imap, we send a "preauth" untagged message by default.
that puts the connection in authenticated state. given the imap connection
state machine, further authentication commands are not allowed. some clients
don't recognize the preauth message, and try to authenticate anyway, which
fails. a tls public key has a config option to disable preauth, keeping new
connections in unauthenticated state, to work with such email clients.

for smtp (submission), we don't require an explicit auth command.

both for imap and smtp, we allow a client to authenticate with another
mechanism than "external". in that case, credentials are verified, and have to
be for the same account as the tls client auth, but the address can be another
one than the login address configured with the tls public key.

only the public key is used to identify the account that is authenticating. we
ignore the rest of the certificate. expiration dates, names, constraints, etc
are not verified. no certificate authorities are involved.

users can upload their own (minimal) certificate. the account web interface
shows openssl commands you can run to generate a private key, minimal cert, and
a p12 file (the format that email clients seem to like...) containing both
private key and certificate.

the imapclient & smtpclient packages can now also use tls client auth. and so
does "mox sendmail", either with a pem file with private key and certificate,
or with just an ed25519 private key.

there are new subcommands "mox config tlspubkey ..." for
adding/removing/listing tls public keys from the cli, by the admin.
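
a hedged sketch of the core mechanism (not mox's code; the account lookup is
hypothetical): request an optional client certificate during the handshake,
then identify the account purely by a hash of the certificate's public key.

	package example

	import (
		"crypto/sha256"
		"crypto/tls"
	)

	// hypothetical: maps a sha-256 of the SubjectPublicKeyInfo to an account.
	var accountByPubKey map[[32]byte]string

	func serverConfig(certs []tls.Certificate) *tls.Config {
		return &tls.Config{
			Certificates: certs,
			// ask for, but don't require, a client certificate. verification
			// is ours: no CAs, and expiration/names/constraints are ignored.
			ClientAuth: tls.RequestClientCert,
		}
	}

	func accountFromConn(conn *tls.Conn) (string, bool) {
		state := conn.ConnectionState()
		if len(state.PeerCertificates) == 0 {
			return "", false
		}
		hash := sha256.Sum256(state.PeerCertificates[0].RawSubjectPublicKeyInfo)
		account, ok := accountByPubKey[hash]
		return account, ok
	}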
2024-12-06 10:08:17 +01:00
Mechiel Lukkien
5f7831a7f0
move config-changing code from package mox-/ to admin/
needed for upcoming changes, where (now) package admin needs to import package
store. before, because package store imports mox- (for accessing the active
config), that would lead to a cyclic import. package mox- keeps its active
config, package admin has the higher-level config-changing functions.
2024-12-02 22:03:18 +01:00
Mechiel Lukkien
de435fceba
switch to math/rand/v2 in most places
this allows removing some ugly instantiations of an rng based on the current
time.

Intn is now IntN for our concurrency-safe prng wrapper to match the randv2 api.

v2 exists since go1.22, which we already require.
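
the shape of the change, roughly:

	package main

	import (
		"fmt"
		mathrand "math/rand"
		randv2 "math/rand/v2"
		"time"
	)

	func main() {
		// before: instantiate and seed an rng based on the current time.
		old := mathrand.New(mathrand.NewSource(time.Now().UnixNano()))
		fmt.Println(old.Intn(10))

		// after: math/rand/v2 is seeded automatically; note IntN, not Intn.
		fmt.Println(randv2.IntN(10))
	}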
2024-11-29 13:45:19 +01:00
Mechiel Lukkien
96a3ecd52c
use reflect.TypeFor instead of kludgy reflect.TypeOf
TypeFor was introduced in go1.22, which we already require.
2024-11-29 13:17:13 +01:00
Mechiel Lukkien
afb182cb14
smtpserver: add prometheus metric for failing starttls handshakes for incoming deliveries
and add an alerting rule if the failure rate becomes >10% (e.g. expired
certificate).

the prometheus metrics includes a reason, including potential tls alerts, if
remote smtp clients would send those (openssl s_client -starttls does).

inspired by issue #237, where incoming connections were aborted by remote. such
errors would show up as "eof" in the metrics.
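
roughly the kind of metric involved (the metric name here is hypothetical),
using the common prometheus client library:

	package example

	import (
		"github.com/prometheus/client_golang/prometheus"
		"github.com/prometheus/client_golang/prometheus/promauto"
	)

	var starttlsErrors = promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "mox_smtpserver_starttls_errors_total", // hypothetical name
			Help: "Failed STARTTLS handshakes for incoming deliveries, by reason.",
		},
		[]string{"reason"}, // e.g. "eof" or a tls alert sent by the remote
	)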
2024-11-29 12:43:21 +01:00
Mechiel Lukkien
09e7ddba9e
web apps: add autocomplete attribute for usernames and passwords
hinted at by chromium developer console
2024-11-29 10:40:22 +01:00
Mechiel Lukkien
96d86ad6f1
add ability to include custom css & js in web interface (webmail, webaccount, webadmin), and use css variables in webmail for easier customization
if files {webmail,webaccount,webadmin}.{css,js} exist in the configdir (where
the mox.conf file lives), their contents are included in the web apps.

the webmail now uses css variables, mostly for colors. so you can write a
custom webmail.css that changes the variables, e.g.:

	:root {
		--color: blue
	}

you can also look at css class names and override their styles.

in the future, we may want to make some css variables configurable in the
per-user settings in the webmail. should reduce the number of variables first.

any custom javascript is loaded first. if it defines a global function
"moxBeforeDisplay", that is called each time a page loads (after
authentication) with the DOM element of the page content as parameter. the
webmail is a single persistent page. this can be used to make some changes to
the DOM, e.g. inserting some elements. we'll have to see how well this works in
practice. perhaps some patterns emerge (e.g. adding a logo), and we can make
those use-cases easier to achieve.

helps partially with issue #114, and based on questions from laura-lilly on
matrix.
2024-11-29 10:17:07 +01:00
Mechiel Lukkien
9e8c8ca583
webmail: fix dragging the corner of the compose popup when it's on top of a message view with an iframe (for an html message)
the pointer events for moving the mouse would be consumed by the iframe. that
broke resizing of the compose popup.  we now disable pointerevents on the main
ui when we are dragging the corner of the compose popup.

this is similar to an earlier change about the draggable split bar between the
message list and the message view (when showing an html message).
2024-11-28 18:36:58 +01:00
Mechiel Lukkien
1f604c6a3d
webmail: when marking message as unread, also clear its (non)junk flags 2024-11-28 18:24:03 +01:00
Mechiel Lukkien
ee48cf0dfd
webmail: fix using the compose window/popup after saving a draft message failed
we kept the "save draft" promise, and would wait for it again for other
operations (eg close, save again, send), which wouldn't make progress.

can easily be reproduced by saving a message with a control character in an
address or the subject. saving the draft will fail.

for issue #256 by ally9335, thanks for reporting
2024-11-28 17:24:58 +01:00
Mechiel Lukkien
bd693805fd
webmail: tweak color for label about encrypted/signed messages
it wasn't very readable, probably since the change that introduced dark mode.
2024-11-28 16:46:24 +01:00
Mechiel Lukkien
d7f057709f
include goversion used to compile mox in the mox version 2024-11-28 16:28:05 +01:00
Mechiel Lukkien
636bb91df6
webaccount: tweak text about opening apple mobileconfig profile files, it has gotten harder to use in ios18
since ios18, downloaded files don't go immediately to the settings (which is
somewhat understandable given potential for abuse), but go to the Files app.
opening them in the Files app then adds them to the settings where they can be
installed.
2024-11-28 16:06:20 +01:00
Mechiel Lukkien
01deecb684
smtpserver: log an error message at debug level when we cannot parse a message for the smtputf8 check
instead of not logging any message. this should make it easier to debug.

based on delivery issue due to smtputf8 seen by wneessen.
2024-11-25 13:25:12 +01:00
Mechiel Lukkien
7f5e1087d4
admin: better handling of disabled mta-sts during self-check
if admin has disabled mta-sts for a domain, we still check for records &
policies, but won't mark it as error when they don't exist. we do now keep
warning that mta-sts isn't enabled, otherwise we would start showing a green
"ok".

this also fixes the mta-sts code returning ErrNoPolicy when mtasts.<domain>
doesn't exist. the dns lookup is done with the regular "net" package dns lookup
code, not through adns, so we look for two types of DNSError's.

noticed a while ago when testing with MTA-STS while debugging TLS connection
issues with MS.
2024-11-24 13:30:29 +01:00
Mechiel Lukkien
726c0931f7
admin: in self-check for spf records against our ip's, don't try checking the unspecified addresses (0.0.0.0 and ::), and warn if there are no explicitly configured ips
based on question by spectral369 on #mox on matrix
2024-11-24 12:41:00 +01:00
Matt Fellenz
501f594a0a
Split paste into addr field by commas 2024-11-23 15:11:57 +01:00
Mechiel Lukkien
32d4e9a14c
log when mox root process cannot forward signals to unprivileged child
and give the mox.service permissions to send such signals.
2024-11-21 21:59:36 +01:00
Mechiel Lukkien
3d4cd00430
when opening an account by email address, such as during login attempts, and the address is an alias, fail with proper error "no such credentials" instead of with error "no such account", which printed a stack trace
was encountered during smtp session. but could also happen for imapserver and
webmail.

in smtpserver, we now log error messages for smtp errors that cause us to print
a stack trace. would have made logging output more helpful (without having to
turn on trace-level logging).

hopefully solves issue #238 by mwyvr, thanks for reporting!
2024-11-10 23:20:17 +01:00
Mechiel Lukkien
0e338b0530
for aliases, enable "public posting" by default when creating an alias
and explain in more detail what it means in the webadmin interface.
will hopefully bring less confusion.

for issue #244 by exander77, thanks for reporting
2024-11-10 22:25:08 +01:00
Mechiel Lukkien
c13f1814fc
also use "SRV 0 0 port ." in webadmin pages
for issue #240, thanks bwbroersma for reporting and patch
2024-11-10 22:24:47 +01:00
Benjamin W. Broersma
355488028d
More RFC compliant SRV service not available
Fix #240.
2024-11-07 15:01:02 +01:00
Mechiel Lukkien
68c130f60e
add v0.0.13 to website 2024-11-06 23:20:44 +01:00
Mechiel Lukkien
22c8911bf3
disable tls session tickets to workaround deliverability issues with incoming email from microsoft
for issue #237
2024-11-06 10:19:23 +01:00
startup-001-steve
76f7b9ebf6
added link to Matrix Chat Room
and make matrix.to url a link and wrap text
2024-11-01 12:11:10 +01:00
Mechiel Lukkien
8fa197b19d
imapserver: for the "bodystructure" fetch response item, add the content-type parameters for multiparts so clients will get the mime boundary without having to parse the message themselves
"bodystructure" is like "body", but bodystructure allows returning more
information. we chose not to do that, initially because it was easier to
implement, and more recently because we can't easily return the additional
content-md5 field for leaf parts (since we don't have it in parsed form). but
now we just return the extended form for multiparts, and non-extended form for
leaf parts. likely no one would be looking for any content-md5-value for leaf
parts anyway. knowing the boundary is much more likely to be useful.

for issue #217 by danieleggert, thanks for reporting!
2024-11-01 11:28:25 +01:00
Mechiel Lukkien
598c5ea6ac
smtpserver: when logging recipients, actually show something about the recipient
before this change, we were logging an empty string, which turned into "[]",
looking like an empty array. misleading and unhelpful.

this is fixed by making struct fields on type recipient "exported" so they can
get logged, and by changing the logging code to log nested
struct/pointer/interface fields if we otherwise wouldn't log anything
(when only logging more basic data types).

we'll now get log lines like:

	l=info m="deliver attempt to unknown user(s)" pkg=smtpserver recipients="[addr=bogus@test.example]"

for issue #232 by snabb, thanks for reporting!
2024-11-01 10:38:31 +01:00
Mechiel Lukkien
879477a01f
webmail: during "send and archive", don't fail with error message when message that is being responded to is already in archive folder
before this change, when archiving, we would move all messages from the thread
that are in the same mailbox as that of the response message to the archive
mailbox. so if the message that was being responded to was already in the
archive mailbox, the message would be moved from archive mailbox to archive
mailbox, resulting in an error.

with this change, when archiving, we move the thread messages that are in the
same mailbox as is currently open (independent of the mailbox the message lives
in, a common situation in the threading view). if there is no open mailbox
(search results), we still use the mailbox of the message being responded to as
reference.

with this new approach, we won't get errors moving a message to an archive
mailbox when it's already there. well, you can still get that error, but then
you've got the archive mailbox open, or you're in a search result and
responding to an archived message. the error should at least help understand
that nothing is happening.

we are only moving the messages from one active/reference mailbox because we
don't want to move messages from the thread that are in the Sent mailbox, and
we also don't want to move duplicate messages (cross-posts to mailing lists)
that are in other mailboxes. moving only the messages from the current active
mailbox seems safe, and should do what users would expect most of the
time.

for issue #233 by mattfbacon, thanks for reporting!
2024-11-01 09:39:40 +01:00
Mechiel Lukkien
04305722a7
webmail: if we don't have loaded account settings yet, abort loading the popup after showing an error that the settings aren't available yet
missing returning/throwing error.

based on screenshot with unhandled js error in issue #218 by mgkirs
2024-10-10 14:29:52 +02:00
Mechiel Lukkien
0fbf24160c
add a handler for the acme http-01 validation mechanism to all plain http (non-tls) webservers (ports), not only to the one listening on port 80
because this mechanism is most needed behind a reverse proxy, where acme
tls-alpn-01 won't work (because the reverse proxy won't pass on the alpn
extensions). if that's the case, there is obviously a webserver on port 443.
and it is likely also running on port 80. so before this change, if tls-alpn-01
isn't available, http-01 also wasn't available, leaving no validation
mechanisms.

for issue #218 by mgkirs, thanks for reporting and details. hope this helps.
2024-10-10 14:04:13 +02:00
Mechiel Lukkien
354b9f4d98
tweak docs for release process 2024-10-06 13:07:11 +02:00
Mechiel Lukkien
bd842d3ff5
add upcoming release to website, and rotate apidiff 2024-10-06 12:48:56 +02:00
Mechiel Lukkien
5699686870
generate apidiff 2024-10-06 10:46:50 +02:00
Mechiel Lukkien
fdc0560ac4
for messages retired from the delivery queue, set "success" field properly, and include the smtp code/enhanced code on success too (not only on failure)
noticed some time ago when looking at my retired messages queue.
2024-10-05 11:06:42 +02:00
Mechiel Lukkien
fb65ec0676
webmail: fix loading a "view" (messages in a mailbox) when the "initial" message cannot be parsed
when we send a list of messages from the mox backend to the js frontend, we
include a parsed form of the "initial" message: the one we immediately show,
typically the top-most (unread) message. however, if that message could not be
parsed (due to invalid header syntax), we would fail the entire operation of
loading the view.

with this change, we simply don't return a parsed form of an initial message if
we cannot parse it. that will cause the webmail frontend to not select &
display a message immediately. if you then try to open the message, you'll
still get an error message as before. but at least the view has been loaded,
and you can open the raw message to inspect the contents.

for issue #219 by wneessen
2024-10-05 09:50:40 +02:00
Mechiel Lukkien
5d97bf198a
add support for parsing the imap "bodystructure" extensible form
not generating it yet from imapserver because we don't have content-md5
available. we could send "nil" instead of any actual content-md5 header (and
probably no contemporary messages include a content-md5 header), but it would
not be correct. if no known clients have problems in practice with absent
extensible data, it's better to just leave the bodystructure as is, without
extensible data.

for issue #217 by danieleggert
2024-10-04 22:55:43 +02:00
Mechiel Lukkien
81c179bb4c
fix embarrassing bug in checking if a string is ascii
result reversed

for issue #179 and issue #157
2024-10-04 20:05:28 +02:00
Mechiel Lukkien
edb6e8d15c
webmail: fix displaying a message in separate window if there was no known viewmode (text or html or html with externals)
we were sending a zero value for ViewMode, which the frontend js rejected
during parsing.

noticed during testing.
2024-10-04 16:37:32 +02:00
Mechiel Lukkien
32b549b260
add more details to x-mox-reason message header added during delivery, for understanding why a message is accepted/rejected
we add various information while analysing an incoming message. like
dkim/spf/ip reputation. and content-based junk filter threshold/result and
ham/spam words used.

for issue #179 by Fell and #157 by mattfbacon
2024-10-04 16:01:30 +02:00
Mechiel Lukkien
98d0ff22bb
update to latest dependencies 2024-10-04 09:44:59 +02:00
Mechiel Lukkien
9a4fa8633f
add missing file from previous commit 2024-10-04 09:34:37 +02:00
Mechiel Lukkien
8f7fc3773b
add subcommand that prints licenses, and link to licenses from the webadmin/webaccount/webmail interfaces 2024-10-04 09:31:31 +02:00
Mechiel Lukkien
7d3f307156
acme port config option, explain why using a https reverse proxy will not work for acme tls-alpn-01 verification
related to #218 by mgkirs
2024-10-03 21:16:19 +02:00
Mechiel Lukkien
7ecc3f68ce
for the smtp login method, use challenges "Username:" and "Password:" as an attempt to improve interoperability
there is only an internet-draft about the required behaviour. it says clients
should ignore the strings. some clients do check the string. most servers
appear to use "Username:" and "Password:" as challenge. we'll follow them,
hoping to improve interoperability.

for issue #223 by gdunstone, and with analysis from wneessen of go-mail.
thanks!
2024-10-03 20:29:40 +02:00
Mechiel Lukkien
bbc419c6ab
in webadmin when managing aliases, mention an alias member won't receive a message if the member address is in the message From header
this is a typical case if you made an alias to test how it works, with your
account. we may have to make this behaviour optional in the future.

for issue #220 by wneessen, thanks for reporting!
2024-10-03 20:20:14 +02:00
Mechiel Lukkien
c7315cb72d
handle scram errors more gracefully, not aborting the connection
for some errors during the scram authentication protocol, we would treat some
errors that a client connection could induce as server errors, printing a stack
trace and aborting the connection.

this change recognizes those errors and sends regular "authentication failed"
or "protocol error" error messages to the client.

for issue #222 by wneessen, thanks for reporting
2024-10-03 15:18:09 +02:00
Mechiel Lukkien
b0c4b09010
add "RcptTo" to webapi MessageGet result
otherwise, if the recipient was a bcc, there's no good way to see why the
message was received.

incoming webhooks already have this rcptto field, but that's not always the
moment you want to process it.

for mattanja on matrix, thanks for reporting!
2024-09-30 10:43:48 +02:00
Mechiel Lukkien
a7bdc41cd4
reject attempts at starttls for smtp & imap when no tls config is present
we didn't announce starttls as capability, but clients can still try them. we
would try to do a handshake with a nil certificate, which would cause a
goroutine panic (which is handled gracefully, shutting down the connection).

found with code that was doing starttls unconditionally.
2024-09-15 17:18:50 +02:00
Mechiel Lukkien
0977b7a6d3
get rid of some more gnulinuxisms
to get builds on openbsd going
2024-09-14 20:53:21 +02:00
Mechiel Lukkien
661e77c622
remove linuxism
should make build get further on openbsd
2024-09-14 14:22:39 +02:00
Mechiel Lukkien
b7ba0482ba
don't run install scripts when installing js dependencies 2024-09-08 09:49:24 +02:00
Mechiel Lukkien
594182aae5
webmail: rename query string param "token" to "singleUseToken" to be less scary in access logs
these singleusetokens can be redeemed once. so when you see it in the logs, it
can't be used again. they are short-lived anyway.

this change should help prevent me periodically investigating token handling...
2024-08-23 15:08:27 +02:00
Mechiel Lukkien
a977082b89
when login sessions to admin/account/webmail interfaces expire or are no longer valid, explain the behaviour in the message
before, we would just say "session expired". now we say "session expired (after
12 hours inactivity)" (for admin) or "session expired (after 24 hours
inactivity)" for account/webmail. for unknown sessions in the admin interface,
we also explain that server restarts and 10 more new sessions can be the
reason.

for issue #202 by ally9335
2024-08-23 14:48:45 +02:00
Mechiel Lukkien
dfe4a54e0b
webmail: when a ui element (eg button) is disabled, make that clear with styles
since we have more of our own styling (probably since dark mode), we weren't
indicating anymore that a button was disabled. this actually only applies to
the button for the current mailbox of a message, when attempting to move it.

we now don't show any hover effects in that case, and we show the button
semitransparent.
2024-08-23 14:28:05 +02:00
Mechiel Lukkien
b77f44ab58
webmail: add setting to show html version of a message by default, instead of text version
related to issue #196 by GildedHonour
2024-08-23 14:02:55 +02:00
Mechiel Lukkien
fe9afb40bc
webmail: for html-only messages, ensure the "html" button is shown as active
instead of both "html" and "html with external resources" being shown as inactive.
2024-08-23 13:39:16 +02:00
Mechiel Lukkien
a485df830d
webapi: minor tweaks in docs 2024-08-23 12:12:13 +02:00
Mechiel Lukkien
6c488ead0b
webapi: implement adding "alternative files" to messages sent with the Send method
with new field "AlternativeFiles" in the JSON body, or with "alternativefile" form file uploads.

can be used if there is a (full) alternative representation (alternative to
text and/or html part), like a calendar item, or PDF file.

for issue #188 by morki
2024-08-23 12:00:25 +02:00
Mechiel Lukkien
62bd2f4427
for incoming smtp deliveries with starttls, use cert of hostname if sni hostname is unknown
instead of failing the connection because no certificates are available.

this may improve interoperability. perhaps the remote smtp client that's doing
the delivery will decide they do like the tls cert for our (mx) hostname after
all.

this only applies to incoming smtp deliveries. for other tls connections
(https, imaps/submissions and imap/submission with starttls) we still cause
connections for unknown sni hostnames to fail. in case no sni was present, we
were already falling back to a cert for the (listener/mx) hostname, that
behaviour hasn't changed.

for issue #206 by RobSlgm
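
a minimal sketch of the fallback with tls.Config.GetCertificate (the cert
lookup is hypothetical, not mox's code):

	package example

	import (
		"crypto/tls"
		"strings"
	)

	// certs maps a lowercased hostname to its certificate; hostCert is the
	// certificate for our own (listener/mx) hostname.
	func incomingTLSConfig(certs map[string]*tls.Certificate, hostCert *tls.Certificate) *tls.Config {
		return &tls.Config{
			GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
				if c, ok := certs[strings.ToLower(hello.ServerName)]; ok {
					return c, nil
				}
				// unknown (or absent) sni: fall back to the cert for our own
				// hostname instead of failing the handshake.
				return hostCert, nil
			},
		}
	}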
2024-08-23 11:04:21 +02:00
Mechiel Lukkien
7e7f6d48f1
install latest versions of staticcheck & shadow
they tend to break each 6 months with a new go toolchain.
listing fixed versions probably causes more failures than always using the
latest versions.
2024-08-22 22:06:30 +02:00
Mechiel Lukkien
17346d6def
smtpclient: handle server closing connection after writing its response to RCPT TO
if icloud.com has your ip blocklisted, it will close the smtp connection after
writing a response to RCPT TO, before writing a response to a pipelined DATA
command. this is similar to the case (already handled) where a mail server
would close the connection after a response to MAIL FROM when pipelined.

we now recognize this situation (unexpected EOF before we get a response to
DATA, with all RCPT TO's failed), and treat the last response to RCPT TO as the
result.

for issue #198 by soheilpro, thanks for reporting and sending an smtpclient
trace that showed the behaviour.
2024-08-22 21:59:53 +02:00
Mechiel Lukkien
c16162eebc
update to golang.org/x/{crypto,net,text,sync,tools}@latest 2024-08-22 20:45:35 +02:00
Mechiel Lukkien
09b13ed4d5
update to golang.org/x/mod@latest 2024-08-22 20:41:06 +02:00
Mechiel Lukkien
e7e023c6d0
update dependency golang.org/x/sys to latest 2024-08-22 20:39:41 +02:00
Mechiel Lukkien
5678b03324
recognize more charsets than utf-8/iso-8859-1/us-ascii when parsing message headers with address
as they occur in From/To headers, for example: "From:
=?iso-8859-2?Q?Krist=FDna?= <k@example.com>".  we are using net/mail to parse
such headers. most address-parsing functions in that package will only decode
charsets utf-8, iso-8859-1 and us-ascii. we have to be careful to always use
net/mail.AddressParser with a WordDecoder that understands more than the
basics.

for issue #204 by morki, thanks for reporting!
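
the careful-parsing point, as a small sketch (the use of
golang.org/x/net/html/charset here is illustrative):

	package main

	import (
		"fmt"
		"io"
		"mime"
		"net/mail"

		"golang.org/x/net/html/charset"
	)

	func main() {
		p := mail.AddressParser{
			WordDecoder: &mime.WordDecoder{
				CharsetReader: func(label string, input io.Reader) (io.Reader, error) {
					// handles many more charsets than the net/mail default.
					return charset.NewReaderLabel(label, input)
				},
			},
		}
		addr, err := p.Parse("=?iso-8859-2?Q?Krist=FDna?= <k@example.com>")
		if err != nil {
			panic(err)
		}
		fmt.Println(addr.Name, addr.Address) // Kristýna k@example.com
	}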
2024-08-22 17:36:49 +02:00
Mechiel Lukkien
0bb4501472
update to latest bbolt (db library) v1.3.11
with a fix for releasing pages allocated during a transaction that was rolled
back.

also bumps required go version to go1.22
2024-08-22 16:22:09 +02:00
Mechiel Lukkien
016fde8d78
fix parsing message headers with addresses that need double quotes
we are using Go's net/mail to parse message headers. it can parse addresses,
and properly decodes email addresses with double quotes (e.g. " "@example.com).
however, it gives us an address without the double quotes in the localpart,
effectively an invalid address. we now have a workaround to parse such
not-quite-addresses.

for issue #199 reported by gene-hightower, thanks for reporting!
2024-08-22 16:03:52 +02:00
Mechiel Lukkien
79b641cdc6
webmail: remove todo for vi editing mode in textarea
users should install a plugin.
i wrote https://addons.mozilla.org/en-US/firefox/addon/vi-editing-mode/, seems
good enough for now.
2024-08-19 15:55:32 +02:00
Mechiel Lukkien
2c003991bb
webmail: put attached files before inline files
some emails have text and html versions. the html can have several logo images.
and there may be a pdf attached. when gathering attachments to show in webmail,
the pdf would come last. it could happen the logo images would get a link to
click, and the pdf would be behind the "more ..." button. by putting
"multipart/mixed" files before the "multipart/related" in the list, it's more
likely that useful files can be clicked immediately, and unimportant logo files
are behind the "more"-button.
2024-08-05 12:10:10 +02:00
Mechiel Lukkien
0a4999f33e
webmail: improve dragging with mouse events over the message iframe
before, the iframe was consuming the mouse events, preventing the dragging to
the right from working properly. the workaround was to drag over the area with
the header, above the message iframe.

with this change, we disable pointer events over the entire right area, which
includes the iframe.
2024-08-03 14:49:38 +02:00
Mechiel Lukkien
aead738836
attempt at improving interoperability with outlook 365 using the smtp "login" sasl auth mechanism
by sending the (encoded) string "User Name" as mentioned by the internet-draft,
https://datatracker.ietf.org/doc/html/draft-murchison-sasl-login-00#section-2.1

that document says clients should ignore the challenge (which is why we were
not making any effort and just sending an empty challenge). but it also says some
clients require the challenge "Username:" instead of "User Name", implying that
it's important to not send an empty challenge. we can't send both challenges
though...

for issue #51
2024-07-18 21:17:33 +02:00
Mechiel Lukkien
c629ae26af
don't prevent the html pages from loading a favicon, and provide one by default
for issue #186 by morki, thanks for reporting and providing sample favicons.

generated by the mentioned generator at favicon.io, with the ubuntu font and a
fuchsia-like color.

the favicon is served for listeners/domains that have the
admin/account/webmail/webapi endpoints enabled, i.e. user-facing. the mta-sts,
autoconfig, etc urls don't serve the favicon.

admins can create webhandler routes to serve another favicon. these webhandler
routes are evaluated before the favicon route (a "service handler").
2024-07-08 21:58:10 +02:00
KiekerJan
151bd1a9c0 Set syslog facility to mail 2024-07-01 12:12:39 +02:00
Mechiel Lukkien
7e54280a9d
show the same spf record for a domain in the dnsrecords and dnscheck output/pages
before, the suggested records would show "v=spf1 mx ~all", while the dnscheck
page would suggest "v=spf1 ip4:... ip6:... -all".

the two places now show the same record: explicitly listing the configured ip's
(so the common case of a valid message is fast and doesn't require lookups of
mx hosts and their addresses), but still including "mx" (may prevent issues
while migrating to new ips in the future and doesn't hurt for legit messages),
and "~all" (for compatibility with some old systems that don't look at
dkim/dmarc when they evaluate spf and reach "-all")

based on #176 created by rdelaage, with record mismatch spotted by RobSlgm,
thanks!
2024-06-28 14:50:39 +02:00
Mechiel Lukkien
367e968199
fix parsing Authentication-Results header with a "reason=..." part
noticed in gopherwatch logging
2024-06-28 10:39:46 +02:00
Mechiel Lukkien
73373a19c1
in dnscheck, warn when dane is not configured (through static host keys), instead of showing "OK"
if no host keys are configured, show as warning (yellow) that dane isn't
configured, and show instructions to enable it.

for issue #185 by morki, thanks for reporting!
2024-06-27 15:57:04 +02:00
Mechiel Lukkien
e350af7eed
during dnscheck, if an srv accountconfig record with just a dot, for a non-existent service, is missing, show a warning instead of an error
the suggested dns records mention that these records are optional, but the
dnscheck makes it look serious. not helpful.

also remove unneeded whitespace in list of errors/warnings.

for issue #184 by morki, thanks for reporting!
2024-06-27 15:12:52 +02:00
Mechiel Lukkien
beee03574a
mention that imported messages are not deduplicated
so importing twice can result in duplicates.

related to issue #180
2024-06-24 11:46:50 +02:00
Mechiel Lukkien
fdcd2eb0eb
webadmin: remove stray text "pre" on the "required dns records" page 2024-06-24 10:22:42 +02:00
Mechiel Lukkien
9bab3124f6
show correct host tlsrpt record in dns selfcheck, and make all suggested dns records absolute
the host tlsrpt record implied it was for the domain, but should have been for
the mail host.

some dns records were absolute, others weren't. now they all are for
consistency.

for issue #182 by mdavids, thanks for reporting!
2024-06-22 11:46:12 +02:00
Mechiel Lukkien
ac3596a7d7
try fixing race in tests of ctl socket
there were a few test failures on the github runners. i can't reproduce it
locally. but i can see how they are happening: a goroutine running servectlcmd
could still be doing cleanup (removing files) while a next ctl command was
being run. with this change, we wait for servectlcmd to be done before starting
on a next test.
2024-06-10 23:07:01 +02:00
Mechiel Lukkien
8254e9ce66
webmail: only show "edit" button on drafts, and similar for "e" shortcut
always showing the "edit" button was a bug.
2024-06-10 20:19:17 +02:00
Mechiel Lukkien
a4f7e71457
webmail: ensure white background when viewing attachments, for the black text of plain text attachments
otherwise, in dark mode, the plain text iframe content would be black text on
the dark background of the iframe as set by webmail. i can't find a way to set
the text color of the iframe content from the page that contains it.
2024-06-10 20:11:26 +02:00
Mechiel Lukkien
f56b04805b
make tests pass with "go test -count n" with n > 1
by closing initialized resources during tests.
2024-06-10 18:18:20 +02:00
Mechiel Lukkien
dde2258f69
update to latest sconf, for improved error messages for mixed tab/space indenting in config files
based on chat with niklas/broitzer
2024-06-10 18:02:47 +02:00
Mechiel Lukkien
aef99a72d8
imapserver: prevent unbounded memory allocations when handling a command
some commands, like search, can specify any number of literals, of arbitrary
size.  we already limited individual literals to 100kb. but you could specify
many of them, causing unbounded memory consumption. this change adds a limit of
1000 literals in a command, and a limit of 1mb of total combined memory for
literals. once the limits are exceeded, a TOOBIG error code is returned.

unbounded memory use could only be triggered on authenticated connections.

this addresses the same issue as CVE-2024-34055 for cyrus-imap, by damian
poddebniak.
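an illustrative sketch of such per-command accounting (mox's imapserver
internals differ):
```
package main

import (
	"errors"
	"fmt"
)

const (
	maxLiteralSize  = 100 << 10 // per literal, as before
	maxLiterals     = 1000      // new: literals per command
	maxLiteralTotal = 1 << 20   // new: combined literal size per command
)

var errTooBig = errors.New("TOOBIG: literal limits exceeded")

// literalTracker accumulates literal usage while parsing one command.
type literalTracker struct {
	count int
	total int64
}

func (t *literalTracker) add(size int64) error {
	t.count++
	t.total += size
	if size > maxLiteralSize || t.count > maxLiterals || t.total > maxLiteralTotal {
		return errTooBig
	}
	return nil
}

func main() {
	var t literalTracker
	fmt.Println(t.add(50 << 10)) // <nil>
	fmt.Println(t.add(2 << 20))  // TOOBIG: single literal too large
}
```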
2024-06-10 15:00:18 +02:00
Mechiel Lukkien
614576e409
improve http request handling for internal services and multiple domains
per listener, you could enable the admin/account/webmail/webapi handlers. but
that would serve those services on their configured paths (/admin/, /,
/webmail/, /webapi/) on all domains mox would be webserving, including any
non-mail domains. so your www.example.org/admin/ would be serving the admin web
interface, with no way to disable that.

with this change, the admin interface is only served on requests to (based on
Host header):
- ip addresses
- the listener host name (explicitly configured in the listener, with fallback
  to global hostname)
- "localhost" (for ssh tunnel/forwarding scenario's)

the account/webmail/webapi interfaces are served on the same domains as the
admin interface, and additionally:
- the client settings domains, as optionally configured in each Domain in
  domains.conf. typically "mail.<yourdomain>".

this means the internal services are no longer served on other domains
configured in the webserver, e.g. www.example.org/admin/ will not be handled
specially.

the order of evaluation of routes/services is also changed:
before this change, the internal handlers would always be evaluated first.
with this change, only the system handlers for
MTA-STS/autoconfig/ACME-validation will be evaluated first. then the webserver
handlers. and finally the internal services (admin/account/webmail/webapi).
this allows an admin to configure overrides for some of the domains (per
hostname-matching rules explained above) that would normally serve these
services.

webserver handlers can now be configured that pass the request to an internal
service: in addition to the existing static/redirect/forward config options,
there is now an "internal" config option, naming the service
(admin/account/webmail/webapi) for handling the request. this allows enabling
the internal services on custom domains.

for issue #160 by TragicLifeHu, thanks for reporting!
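a rough sketch of the host check described above, assuming the Host header has
already been lowercased and stripped of any port (mox's routing code is more
involved):
```
package main

import (
	"fmt"
	"net"
	"strings"
)

// serveAdmin reports whether the admin interface should be served for this
// request host: an ip address, the listener hostname, or "localhost".
func serveAdmin(host, listenerHostname string) bool {
	host = strings.TrimSuffix(host, ".")
	if host == "localhost" || net.ParseIP(host) != nil {
		return true
	}
	return strings.EqualFold(host, listenerHostname)
}

func main() {
	fmt.Println(serveAdmin("mail.example.org", "mail.example.org")) // true
	fmt.Println(serveAdmin("www.example.org", "mail.example.org"))  // false: handled by webserver routes instead
}
```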
2024-05-11 11:13:14 +02:00
Mechiel Lukkien
9152384fd3
use debug logging in tests
by setting the loglevel to debug in package mlog.
we restore the "info" logging in main.
except for "mox localserve", which still sets debug by default.
2024-05-10 15:51:48 +02:00
Mechiel Lukkien
bf8cfd9724
add debug logging about bstore db schema upgrades
bstore was updated to v0.0.6 to add this logging.
this simplifies some of the db-handling code in mtastsdb,tlsrptdb,dmarcdb. we
now call the package-level Init() and Close() in all tests properly.
2024-05-10 14:44:37 +02:00
Mechiel Lukkien
3e4cce826e
webaccount: change xcheckf to handle mox.ErrConfig as user error
like in webadmin
2024-05-09 22:45:44 +02:00
Mechiel Lukkien
3f000fd4e0
make most fields of junk filter configurable by account itself
finally remove the message saying that not all config options can be configured
through the web interface.
2024-05-09 22:45:16 +02:00
Mechiel Lukkien
ebb8ad06b5
use shorter smtp.NewAddress() instead of smtp.Address{...} 2024-05-09 21:26:22 +02:00
Mechiel Lukkien
1179d9d80a
webmail: when opening message in new tab, set document title to subject, message from address(es) and id of message 2024-05-09 21:19:58 +02:00
Mechiel Lukkien
a06a4de5ec
for ctl commands, read all lines before processing, to prevent out of sync protocol when handling errors.
the protocol is often: read one or more lines. only then return error. if we
would return an error after reading 1 line, parsing it and failing, the writer
(client connecting) may be busy writing more lines, not reading an error
response yet.
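the pattern, sketched with a hypothetical fixed number of lines per command:
```
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// readLines reads exactly n protocol lines before any of them is interpreted,
// so a parse error is never sent back while the peer is still writing.
func readLines(r *bufio.Reader, n int) ([]string, error) {
	lines := make([]string, 0, n)
	for i := 0; i < n; i++ {
		s, err := r.ReadString('\n')
		if err != nil {
			return nil, err
		}
		lines = append(lines, strings.TrimSuffix(s, "\n"))
	}
	return lines, nil // only now does the caller parse and possibly report an error
}

func main() {
	r := bufio.NewReader(strings.NewReader("cmd\narg1\narg2\n"))
	fmt.Println(readLines(r, 3))
}
```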
2024-05-09 21:11:20 +02:00
Mechiel Lukkien
1a0a396713
webmail: in list of From address to use in compose window, don't add the catchall address
it was even selected by default.
2024-05-09 20:55:03 +02:00
Mechiel Lukkien
1fc8f165f7
clarify behaviour of backup command
from RobSlgm, issue #172
2024-05-09 17:48:22 +02:00
Mechiel Lukkien
83004bb18e
give more helpful pointers for dns-related settings
in quickstart, add troubleshooting hints.
in suggested dns records, explain the multiline long dkim record should
probably be converted into a single string.

the quickstart output is quite long already. i'm hoping for a "mox setup" in
the future where we help a user step-by-step to a fully working system. we'll
have more space to present hints and check the settings after a user has made
changes. it's on the roadmap.

based on issues #158 and #164, thanks vipas84 and RobSlgm for reporting and
discussion.
2024-05-09 17:28:29 +02:00
Mechiel Lukkien
30ac690c8f
when removing account, remove its data directory instead of leaving it around
recreating the account would resurface the old messages, certainly not what you'd expect.
it's about time to just remove the files. we do ask admins to confirm that when
removing through admin interface. it's also in the "mox config account rm" help
output now.

for issue #162 by RobSlgm with feedback from x8x, thanks!
2024-05-09 16:30:11 +02:00
Mechiel Lukkien
a2c9cfc55b
webadmin: don't show runtime typecheck error for invalid values in dmarc and tls reports
several fields in dmarc and tls reports have known string values. we have a Go
string type for them. sherpats (through sherpadoc) turns those strings into
typescript enums, and sherpats generates runtime-typechecking code (to enforce
correct types for incoming json, to prevent failing deeper in the code when we
get invalid data (much harder to debug)). the Go not-really-enum types allow
other values, and real-world reports have unknown/unspecified/invalid values.
this uses the sherpadoc -rename flag to turn those enums into regular untyped
strings, so sherpats doesn't generate enum-enforcing runtime type checking
code.

this required an update to sherpadoc, to properly handle renaming a type to a
basic type instead of another named type.

for issue #161 by RobSlgm, thanks for reporting!
2024-05-09 15:58:14 +02:00
Mechiel Lukkien
44a6927379
add hint about systemd ReadWritePaths if hardlinking fails on linux due to cross-device link
may help admin figure out more easily how to work around this.

for issue #170 by rdelaage
2024-05-09 14:25:24 +02:00
Mechiel Lukkien
4d28a02621
webmail: better save/close/cancel buttons in compose window
- keep them on the right side of the window (more important now that we can resize)
- merge the close & cancel buttons into a close button, with a popup asking what to do for changes not saved as draft.
2024-05-09 11:46:00 +02:00
Mechiel Lukkien
76aa96ab6f
webadmin: consistent pattern for client api calls wrapped in async/await
adding await in the closure. makes no functional difference. but let's stick to one form.
2024-05-09 11:31:04 +02:00
Mechiel Lukkien
98ce133203
update to latest adns with fix for endless loop for incoming corrupt packets 2024-05-09 11:30:42 +02:00
Mechiel Lukkien
09ee89d5c8
update roadmap 2024-05-09 10:51:11 +02:00
Mechiel Lukkien
72be3e8423
webadmin: propagate error when quota size cannot be parsed, improve parsing and hint in error message
- the error wasn't caught because the parseInt() call wasn't evaluated inside the promise.
- we no longer require that the input (e.g. 2G) is the same as how we would format it (2g).
- tooltips and error message should now steer people to using these units.

feedback from pmarini-nc in #115, thanks!
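the actual change is in the webadmin typescript; a Go rendition of the more
lenient, case-insensitive unit parsing could look like this (the 1024-based
units are an assumption of this sketch):
```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize accepts values like "2g", "2G" or "500m" and returns bytes.
func parseSize(s string) (int64, error) {
	s = strings.ToLower(strings.TrimSpace(s))
	mult := int64(1)
	switch {
	case strings.HasSuffix(s, "g"):
		mult, s = 1<<30, strings.TrimSuffix(s, "g")
	case strings.HasSuffix(s, "m"):
		mult, s = 1<<20, strings.TrimSuffix(s, "m")
	case strings.HasSuffix(s, "k"):
		mult, s = 1<<10, strings.TrimSuffix(s, "k")
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing quota size (use a number with optional k/m/g unit): %v", err)
	}
	return n * mult, nil
}

func main() {
	fmt.Println(parseSize("2G")) // 2147483648 <nil>
}
```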
2024-05-09 10:46:18 +02:00
Mechiel Lukkien
db3e44913c
update to latest bbolt
with two changes, both not resulting in different behaviour for us.
2024-05-09 10:32:27 +02:00
Sebastian Haas
587beb75b1 fix typo in SRV validation message
_.tcp => ._tcp
2024-05-07 07:47:26 +02:00
Mechiel Lukkien
a16c08681b
webmail: change many inline styles to using css classes, and add dark mode
this started with looking into the dark mode of PR #163 by mattfbacon. it's a
very good solution, especially for the amount of code. while looking into dark
mode, some common problems with inverting colors are:
- box-shadows start "glowing", which isn't great. likewise, semitransparent
  layers would become brighter, not darker.
- while popups/overlays in light mode just stay the same white, in dark mode
  they should become lighter than the regular content because box shadows don't
  give enough contrast in dark mode.

while looking at adding explicit styles for dark mode, it turns out that's
easier when we work more with css rules/classes instead of inline styles (so we
can use the @media rule).

so we now also create css rules instead of working with inline styles a lot.
benefits:
- creating css rules is useful for items that repeat. they'll have a single css
  class. changing a style on a css class is now reflected in all elements of that
  kind (with that class)
- css class names are helpful when inspecting the DOM while developing: they
  typically describe the function of the element.

most css classes are defined near where they are used, often while making the
element using the class (the css rule is created on first use).

this change moves colors used for styling to a single place in webmail/lib.ts.
each property can get two values: one for regular/light mode, one for dark mode.
that should prevent forgetting one of them and makes it easy to configure both.
this change sets colors for the dark mode. i think the popups look better than
in PR #163, but in other ways it may be worse. this is a start, we can tweak
the styling.

if we can reduce the number of needed colors some more, we could make them
configurable in the webmail settings in the future. so this is also a step
towards making the ui look configurable as discussed in issue #107.
2024-05-06 09:13:50 +02:00
Mechiel Lukkien
195c57f06e
update website with latest release v0.0.11 2024-04-30 20:54:32 +02:00
Mechiel Lukkien
7ba18609cd
rotate apidiff/next.txt before release 2024-04-30 20:52:50 +02:00
Mechiel Lukkien
78a59b3476
webadmin: after looking up cid, show it
seems like the useful line of that functionality got lost...
2024-04-29 21:14:05 +02:00
Mechiel Lukkien
5f00f7662e
update readme and docs 2024-04-29 21:10:25 +02:00
Mechiel Lukkien
e34b2c3730
remove log.Print added for debugging 2024-04-29 21:09:41 +02:00
Mechiel Lukkien
b7ec84b80a
queue: when shutting down, wait for pending deliveries before signaling that shutdown is complete
also fixes flaky test, which is how i found it
2024-04-28 22:48:51 +02:00
Mechiel Lukkien
ff6cca1bf9
fix flaky test: close account before marking thread-upgrade as finished
store/threads_test.go opens an account, starts the threading upgrade, waits for
it to finish, runs some tests, and closes the account at the end, verifying all
references are gone. the "thread upgrade" goroutine has its own account
reference. it closes its account after having signaled completion of the
upgrade. in between that time, all checks from the tests could run, its account
closed and its no-more-account-references check would fail. the fix is
hopefully to mark the thread upgrade process finished after closing the
account. hard to verify, but this only happens very rarely.
2024-04-28 14:09:40 +02:00
Mechiel Lukkien
b3a693ee31
update to latest golang.org/x dependencies 2024-04-28 13:53:37 +02:00
Mechiel Lukkien
8cc795b2ec
in smtp submission, if a fromid is present in the mailfrom command, use it when queueing
it's the responsibility of the sender to use unique fromid's.
we do check if that's the case, and return an error if not.

also make it more clear that "unique smtp mail from addresses" map to the
"FromIDLoginAddresses" account config field.

based on feedback from cuu508 for #31, thanks!
2024-04-28 13:18:25 +02:00
Mechiel Lukkien
32cf6500bd
when removing an address, remove it as member from aliases
unless the address is the last member, then the admin must either remove the
alias first, or add new members. we don't want to accidentally remove an alias
address.

in the admin page for removing addresses, we warn the admin that the address
will be removed from any aliases.
2024-04-28 11:44:51 +02:00
Mechiel Lukkien
e2924af8d2
ensure senderaccount is always set for messages in queue
before, the smtpserver that queued a dsn would set an empty senderaccount,
which was interpreted in a few places as the globally configured postmaster
account. the empty senderaccount would be used by the smtpserver that queued a
dsn with null return path. we now set the postmaster account when we add a
message to the queue. more code in the queue pretty much needs a non-empty
senderaccount, such as the filters when listing, and the suppression list.
2024-04-28 11:03:47 +02:00
Mechiel Lukkien
6e7f15e0e4
smtpserver tests: use shared function to check expected smtp error codes 2024-04-24 21:00:20 +02:00
Mechiel Lukkien
f749eb2a05
use css white-space: pre-wrap for email addresses displayed
since email addresses can contain multiple consecutive spaces.
this is a valid address: "   "@localhost
and this is a different valid address: " "@localhost

webmail still todo
2024-04-24 20:37:56 +02:00
Mechiel Lukkien
fece75cfe7
automatically install typescript into ./node_modules if missing during build
simplifies process.
2024-04-24 19:48:01 +02:00
Mechiel Lukkien
d9f5625a89
regenerate apidiff, removal due to sherpadoc cleanup 2024-04-24 19:37:47 +02:00
Mechiel Lukkien
960a51242d
add aliases/lists: when sending to an alias, the message gets delivered to all members
the members must currently all be addresses of local accounts.

a message sent to an alias is accepted if at least one of the members accepts
it. if no member accepts it (e.g. due to bad reputation of the sender), the
message is rejected.

if a message is submitted to both an alias address and to recipients that are
members of the alias in an smtp transaction, the message will be delivered to
such members only once.  the same applies if the address in the message
from-header is the address of a member: that member won't receive the message
(they sent it). this prevents duplicate messages.

aliases have three configuration options:
- PostPublic: whether anyone can send through the alias, or only members.
  members-only lists can be useful inside organizations for internal
  communication. public lists can be useful for support addresses.
- ListMembers: whether members can see the addresses of other members. this can
  be seen in the account web interface. in the future, we could export this in
  other ways, so clients can expand the list.
- AllowMsgFrom: whether messages can be sent through the alias with the alias
  address used in the message from-header. the webmail knows it can use that
  address, and will use it as from-address when replying to a message sent to
  that address.

ideas for the future:
- allow external addresses as members. still with some restrictions, such as
  requiring a valid dkim-signature so delivery has a chance to succeed. will
  also need configuration of an admin that can receive any bounces.
- allow specifying specific members who can send through the list (instead of
  all members).

for github issue #57 by hmfaysal.
also relevant for #99 by naturalethic.
thanks to damir & marin from sartura for discussing requirements/features.
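for illustration, the three options as a Go struct; the field names come from
the text above, the surrounding config structure is not shown here:
```
package example

// Alias options as described above (sketch; not the full mox config type).
type Alias struct {
	PostPublic   bool // anyone may post, not only members; useful for support addresses
	ListMembers  bool // members may see the other members' addresses in the account web interface
	AllowMsgFrom bool // messages may use the alias address in the message from-header
}
```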
2024-04-24 19:15:30 +02:00
Mechiel Lukkien
1cf7477642
localserve: change queue to deliver to localserve smtp server
instead of skipping on any smtp and delivering messages to accounts.
we dial the ip of the smtp listener, which is localhost:1025 by default.

the smtp server now uses a mock dns resolver during spf & dkim verification for
hosted domains (localhost by default), so they should pass.

the advantage is that we get regular full smtp server behaviour for delivering
in localserve, including webhooks, and potential first-time sender delays
(though this is disabled by default now).

incoming deliveries now go through normal address resolution, where before we
would always deliver to mox@localhost. we still accept email for unknown
recipients to mox@localhost.

this will be useful for the upcoming alias/list functionality.

localserve will now generate a dkim key when creating a new config. existing
users may wish to reset (remove) their localserve directory, or add a dkim key.
2024-04-24 11:40:22 +02:00
Mechiel Lukkien
2bb4f78657
remove spurious empty line to fix build, and update roadmap 2024-04-22 14:32:50 +02:00
Mechiel Lukkien
bf5cfca6b9
webmail: add export functionality
per mailbox or for all mailboxes, in maildir or mbox format, as a tar/tgz/zip
archive or, for a single mbox, without an archive, and either for a single
mailbox or recursively. the webaccount already had an option to export all
mailboxes; it now looks similar to the webmail version.
2024-04-22 13:41:40 +02:00
Mechiel Lukkien
a3f5fd26a6
webmail: less boilerplate code for api functions
open the account at the beginning of the api handler, and close accounts there too
2024-04-21 21:32:24 +02:00
Mechiel Lukkien
ed0c520562
webmail: single db transaction while fetching parsed message 2024-04-21 20:45:06 +02:00
Mechiel Lukkien
8ad32f9ede
improve docs about IPs and ipv4/ipv6 used for outgoing connections
based on feedback from alex on irc, thanks!
2024-04-21 17:22:00 +02:00
Mechiel Lukkien
884f5b5b3f
remove some old todo's from webmail 2024-04-21 17:18:00 +02:00
Mechiel Lukkien
6c0439cf7b
webmail: when moving a single message out of/to the inbox, ask if user wants to create a rule to automatically do that server-side for future deliveries
if the message has a list-id header, we assume this is a (mailing) list
message, and we require a dkim/spf-verified domain (we prefer the shortest that
is a suffix of the list-id value). the rule we would add will mark such
messages as from a mailing list, changing filtering rules on incoming messages
(not enforcing dmarc policies). messages will be matched on list-id header and
will only match if they have the same dkim/spf-verified domain.

if the message doesn't have a list-id header, we'll ask to match based on
"message from" address.

we don't ask the user in several cases:
- if the destination/source mailbox is a special-use mailbox (e.g.
  trash,archive,sent,junk; inbox isn't included)
- if the rule already exists (no point in adding it again).
- if the user said "no, not for this list-id/from-address" in the past.
- if the user said "no, not for messages moved to this mailbox" in the past.

we'll add the rule if the message was moved out of the inbox.
if the message was moved to the inbox, we check if there is a matching rule
that we can remove.

we now remember the "no" answers (for list-id, msg-from-addr and mailbox) in
the account database.

to implement the msgfrom rules, this adds support to rulesets for matching on
message "from" address. before, we could match on smtp from address (and other
fields). rulesets now also have a field for comments. webmail adds a note that
it created the rule, with the date.

manual editing of the rulesets is still in the webaccount page. this webmail
functionality is just a convenient way to add/remove common rules.
2024-04-21 17:14:08 +02:00
Mechiel Lukkien
71c0bd2dd1
for localserve delivery from queue, use the recipient address for finding delivery rules, not sender address 2024-04-21 15:07:50 +02:00
Mechiel Lukkien
0047f09e2b
webmail: prevent new shadowed variables detected by shadow since previous commit 2024-04-20 21:33:14 +02:00
Mechiel Lukkien
0f735a1710
webmail: remember per from-address whether we should show the text/html/html-with-external-resources version of a message 2024-04-20 21:25:52 +02:00
Mechiel Lukkien
3a58b2a1f4
webmail: show all images (inline and attachment) below the text part (for the text view, not for html view)
the attachment buttons for images get some opacity for the text view, to
indicate you don't have to open them explicitly.
2024-04-20 21:17:05 +02:00
Mechiel Lukkien
41a62de4d7
webmail: with 6 or more attachments, show the first 4, and a button to show the rest.
for issue #113
2024-04-20 17:53:32 +02:00
Mechiel Lukkien
9529ae0bd4
webmail: store composed message as draft until send, ask about unsaved changes when closing compose window 2024-04-20 17:38:25 +02:00
Mechiel Lukkien
e8bbaa451b
webmail: allow resizing of compose window
in top-left direction. keep textarea filling the height.
remember size in localstorage, only apply either width and/or height when
viewport width/height was the same as when the remembered width/height was set
(independently).

no visual indicator other than a cursor indicating resizability.
2024-04-20 10:26:54 +02:00
Mechiel Lukkien
5229d01601
webmail: for replies/forwards, add button "send and archive thread" next to the "send" button, and give it a control+shift+Enter shortcut
the regular send shortcut is control+Enter. the shift enables "archive thread".
there is no configuration option, you'll always get the button, but only for
reply/forward, not for new compose.

we may do "send and move thread to thrash", but let's wait until people want it.

for github issue #135 by mattfbacon
2024-04-19 21:17:42 +02:00
Mechiel Lukkien
b54e903f01
webmail: ctrl Backspace now removes an address input field if it is empty
instead of "ctrl -". i found ctrl backspace more intuitive.
2024-04-19 18:03:56 +02:00
Mechiel Lukkien
8a1d81c29a
webmail: show link to webaccount interface in top right
only if account web interface is enabled on the same listener and same http/https scheme.
2024-04-19 18:02:30 +02:00
Mechiel Lukkien
70adf353ee
webmail: add server-side stored settings, for signature, top/bottom reply and showing the security indications below address input fields
should solve #102
2024-04-19 18:02:24 +02:00
Mechiel Lukkien
3bbd7c7d9b
website: mention "mox localserve" as a good way to get a feeling for mox 2024-04-19 11:12:17 +02:00
Mechiel Lukkien
ec967ef321
use new sherpadoc rename mechanism to remove some typename stuttering
the stuttering was introduced to make the same type name declared in multiple
packages, and used in the admin sherpa api, unique. with sherpadoc's new
rename, we can make them unique when generating the api definition/docs, and
the Go code can use nicer names.
2024-04-19 10:51:24 +02:00
Mechiel Lukkien
962575f21b
mention webhook retry intervals in webhook docs
for github issue #31, feedback from cuu508
2024-04-19 10:33:28 +02:00
Mechiel Lukkien
e702f45d32
webadmin: make remaining domain settings configurable via admin web interface
for dmarc reporting address, tls reporting address, mtasts policy, dkim keys/selectors.

should make it easier for webadmin-using admins to discover these settings.

the webadmin interface is now on par with functionality you would set through
the configuration file, let's keep it that way.
2024-04-19 10:23:53 +02:00
Mechiel Lukkien
a69887bfab
webadmin: make routes configurable: globally, per domain, per account
this simplifies some of the code that makes modifications to the config file. a
few protected functions can make changes to the dynamic config, which webadmin
can use. instead of having separate functions in mox-/admin.go for each type of
change.

this also exports the parsed full dynamic config to webadmin, so we need fewer
functions for specific config fields too.
2024-04-18 11:14:24 +02:00
Mechiel Lukkien
baf4df55a6
make more account config fields configurable through web interface
so users can change it themselves, instead of requiring an admin to change the
settings.
2024-04-17 21:31:26 +02:00
Mechiel Lukkien
8bcce40c55
webmail: recognize multiple urls in List-Post addresses
there may be a http(s)-address, which we'll ignore. the mailto may come after
that. like in google groups.
2024-04-16 20:26:37 +02:00
Mechiel Lukkien
8654a1f901
with localserve, in queue, when "delivering" to the sender account, mark domain "localhost" as dkimverified
may be useful for testing, e.g. for rulesets to deliver messages to mailboxes other than Inbox.
2024-04-16 19:26:26 +02:00
Mechiel Lukkien
0a10283de0
show separate localpart and dropdown of domains instead of full email field when adding a new account (with initial email address) 2024-04-16 19:23:00 +02:00
Mechiel Lukkien
c9451d4d06
in webmail & webapisrv, store bcc header in sent messages
when sending a message with bcc's, prepend the bcc header to the message we
store in the sent folder. still not in the message we send to the recipients.
2024-04-16 17:57:46 +02:00
Mechiel Lukkien
abd098e8c0
in more tests, after closing accounts, check the last reference is indeed gone 2024-04-16 17:33:54 +02:00
Mechiel Lukkien
afc47c8108
if webauth login cookie is missing, and forwarding was configured, hint that reverse proxy may be stripping path
the cookies are set with a specific path, because the webadmin, webaccount and
webmail cookies can be on the same domain (this is the default). if the reverse
proxy strips the path while forwarding, the browser won't set the cookie and
the login attempt will fail.

based on github issue #151 from naturalethic
2024-04-16 16:06:31 +02:00
Mechiel Lukkien
daa88480cb
fix potential endless loop during queue msg/hook pagination when environment has TZ UTC, triggered by tests introduced in previous commit
time.Now() returns a timestamp with timezone Local. if you marshal & unmarshal
it again, it'll get the Local timezone again. unless the local timezone is UTC.
then it will get the UTC timezone. the same time.Time but with explicit UTC
timezone vs explicit UTC-as-Local timezone are not the same when comparing with
==. so comparison should be done with time.Time.Equal, or comparison should be
done after having called .Local() on parsed timestamps (so the explicit UTC
timezone gets converted to the UTC-as-Local timezone). somewhat surprising that
time.Local isn't the same as time.UTC when TZ is empty or set to UTC. there are warnings
throughout the time package about handling of UTC.
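a minimal, standalone illustration of the difference between == and
time.Time.Equal:
```
package main

import (
	"fmt"
	"time"
)

func main() {
	// with TZ= or TZ=UTC, time.Local behaves like UTC but is still a different
	// *time.Location value than time.UTC, so struct comparison with == differs.
	t1 := time.Date(2024, 4, 16, 12, 0, 0, 0, time.Local)
	t2 := t1.UTC() // same instant, explicit UTC location

	fmt.Println(t1 == t2)     // false: locations differ
	fmt.Println(t1.Equal(t2)) // true: compares the instant, the right way to compare
}
```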
2024-04-16 14:18:11 +02:00
Mechiel Lukkien
09fcc49223
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.

this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need a http & json library.

unfortunately, there is no standard for these kinds of api, so mox has made up
yet another one...

matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.

a queue of webhook calls is now managed too. failures are retried similar to
message deliveries. webhooks can also be saved to the retired list after
completing. also configurable per account.

messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.

suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.

submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries. through webapi as a json object, through smtp submission
as message headers of the form "x-mox-extra-<key>: value".

to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account. unless the recipient
address has a special form, simulating a failure to deliver.

admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.

new config options have been introduced. they are editable through the admin
and/or account web interfaces.

the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.

gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.

for issue #31 by cuu508
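a hypothetical illustration of the "unique smtp mail from" construction, with
"+" as the configured catchall separator and a made-up fromid:
```
package main

import "fmt"

// mailFromWithID appends a fromid after the catchall separator, so DSNs coming
// back to this address can be matched to the original outgoing message.
func mailFromWithID(localpart, domain, separator, fromID string) string {
	return localpart + separator + fromID + "@" + domain
}

func main() {
	fmt.Println(mailFromWithID("support", "example.org", "+", "a1b2c3d4")) // support+a1b2c3d4@example.org
}
```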
2024-04-15 21:49:02 +02:00
Mechiel Lukkien
8bec5ef7d4
also trigger use of smtputf8 for utf8 localpart in Reply-To header 2024-04-15 20:47:53 +02:00
Mechiel Lukkien
d014303617
use wlock when delivering message about new mox version 2024-04-15 20:40:16 +02:00
Mechiel Lukkien
b7ed035730
add godoc to metrics/ 2024-04-15 20:33:44 +02:00
Mechiel Lukkien
e1dbc07dba
fix harmless race where the same value is written to a tls config concurrently 2024-04-15 20:07:39 +02:00
Mechiel Lukkien
11eaa8cd1a
make imapserver faster like before again
in the precis password change before the previous release, the password used in
fuzzing wasn't correct, triggering sleeps due to botched protocols often, which
made the tests run much longer.
2024-04-14 17:41:36 +02:00
Mechiel Lukkien
12e6975aa7
return smtp response/error correctly in more cases 2024-04-14 17:28:00 +02:00
Mechiel Lukkien
4012b72d96
use type config.Account in sherpa api for better typing, and update to latest sherpa lib
typescript now knows the full types, not just "any" for account config.
inline structs previously in config.Account are given their own type definition
so sherpa can generate types.

also update to latest sherpa lib that knows about time.Duration, to be used soon.
2024-04-14 17:18:20 +02:00
Mechiel Lukkien
b7d6540d51
style nit: only take address of structs when passed on 2024-04-14 12:46:24 +02:00
Mechiel Lukkien
2a949f9f79
fix typo in smtp error code 2024-04-14 12:42:47 +02:00
Mechiel Lukkien
e585a4d180
don't fail to generate apidiff when packages are introduced 2024-04-14 12:38:58 +02:00
Mechiel Lukkien
4b459af4a8
add install as target, calling "go install"
convenient for local testing, i'm often running "mox localserve", often helpful
if it's the latest.
2024-04-14 12:37:52 +02:00
Mechiel Lukkien
1ea851bb53
Merge commit 'feb8e6c37947b21baaa7dcf724ade0f2435a8280'
github PR #152, also for issue #149
2024-04-13 13:36:11 +02:00
Mechiel Lukkien
34572d14d0
regenerate apidiff/next.txt after change to smtpclient
by calling "make genapidiff"
2024-04-13 13:31:32 +02:00
Mechiel Lukkien
73381d26ed
Merge commit 'be570d1c7d3de0ddacb011b6411a302d7f7e9f9e'
from github PR #153
2024-04-13 13:31:02 +02:00
Laurent Meunier
feb8e6c379 queue: retry with another IP when first attempt fails for dualstack remote servers
mox was already giving another try for received errors after the
`HELO`/`EHLO` command. Now mox does the same for received errors when
trying to deliver the message to the remote SMTP server.

This should help to deliver messages to SMTP servers that reject
incoming messages because of bad ipv4 or ipv6 configuration (for example
servers checking reverse DNS records). mox will now try to deliver
messages over both ip families before considering the error as
permanent.

fix #149
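a simplified sketch of the idea; the real change retries at the smtp level
after a rejected transaction, not only on dial failures:
```
package example

import (
	"context"
	"net"
	"time"
)

// dialAnyAddress tries every resolved address for host in turn, so an
// ipv6-only failure can still be followed by a working ipv4 attempt.
func dialAnyAddress(ctx context.Context, host, port string) (net.Conn, error) {
	ips, err := net.DefaultResolver.LookupIP(ctx, "ip", host)
	if err != nil {
		return nil, err
	}
	d := net.Dialer{Timeout: 10 * time.Second}
	var lastErr error
	for _, ip := range ips {
		conn, err := d.DialContext(ctx, "tcp", net.JoinHostPort(ip.String(), port))
		if err == nil {
			return conn, nil
		}
		lastErr = err
	}
	return nil, lastErr
}
```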
2024-04-12 17:44:33 +02:00
Laurent Meunier
be570d1c7d add TransportDirect transport
The `TransportDirect` transport allows tweaking outgoing SMTP
connections to remote servers. Currently, it only allows selecting the
network IP family (ipv4, ipv6 or both).

For example, to disable ipv6 for all outgoing SMTP connections:
- add these lines in mox.conf to create a new transport named
"disableipv6":
```
Transports:
  disableipv6:
    Direct:
      DisableIpv6: true
```
- then add these lines in domains.conf to use this transport:
```
Routes:
  -
    Transport: disableipv6
```

fix #149
2024-04-12 17:27:39 +02:00
Mechiel Lukkien
f4b6e14cb9
quickstart: if initial address has non-ascii localpart, use "postmaster@" for registering with let's encrypt
because let's encrypt won't create an account for contact addresses with non-ascii characters.
we'll get an error message like:

	400 urn:ietf:params:acme:error:invalidContact: Error creating new account :: contact email [\"mailto:...\"] contains non-ASCII characters

found & reported by arnt, thanks!
2024-04-11 23:58:40 +02:00
Mechiel Lukkien
ad8c5616b1
do not use input type=email for email addresses
despite the name, it doesn't actually check for valid email addresses:
it doesn't allow non-ascii localparts, accepts various invalid localparts, and
rejects various valid localparts. no point in using it.
2024-04-11 23:45:47 +02:00
Mechiel Lukkien
606b915447
sync genapidiff 2024-04-11 23:28:52 +02:00
Mechiel Lukkien
00c8dacc56
fix previous commit, go fmt 2024-04-11 23:22:03 +02:00
Mechiel Lukkien
666f84edea
fix login for account names with non-ascii chars
we include the username in session cookie values. but cookie values must be ascii-only, and go's net/http drops bad values. the typical solution is to querystring-encode/decode the cookie values, which we'll now do.

problem found by arnt, thanks for reporting!
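a small sketch of the querystring-encoding approach (not the exact mox
session-cookie code):
```
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// cookie values must be ascii; query-escaping the value (which contains the
	// username) keeps net/http from dropping it.
	val := url.QueryEscape("jörg@example.org")
	fmt.Println(val) // j%C3%B6rg%40example.org
	name, err := url.QueryUnescape(val)
	fmt.Println(name, err) // jörg@example.org <nil>
}
```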
2024-04-11 23:11:31 +02:00
Mechiel Lukkien
d74610c345
bugfix: missing account close in queue direct send
found while writing new tests for upcoming functionality.
the test had an embarrassing workaround for the symptoms...
2024-04-08 20:22:52 +02:00
Mechiel Lukkien
89a9a8bc97
when we get a tls connection with an unrecognized sni hostname/ip, cause an alert "unrecognized name" rather than "internal error"
more helpful error for users trying to debug what's going on.

problem pointed out by arnt, thanks!
2024-04-08 14:22:52 +02:00
Mechiel Lukkien
ecf6163409
improve previous about using mtime from imported maildir message files
don't treat just any number in the filename as a timestamp. require that it
has 2 dots. this prevents filenames that are just a number from being seen as a
timestamp, like when you import files from a mox account's msgs directory.
2024-04-02 20:04:09 +02:00
Mechiel Lukkien
6d38a1e9a4
when reading maildirs for imports, use the file mtime as fallback for "received" time
more useful than the time.Time zero value in case the maildir filename isn't
properly formed with a timestamp. this is not too uncommon when people
reconstruct maildirs from other sources of message files to then import the
maildir.

based on message from abdul h
2024-04-02 19:43:45 +02:00
Mechiel Lukkien
96e3e5e33e
make staticcheck happy
i don't think it's actually better, but it is helpful to keep the code base
free of staticcheck findings.
2024-03-31 15:30:24 +02:00
Laurent Meunier
9c5d234162
do not require the SMTPUTF8 extension when not needed (#145)
Squashed commit of the following:

commit 11c25d727f0fff72bfb2dde5b0121d65be5cdc09
Author: Laurent Meunier <laurent@deltalima.net>
Date:   Sun Mar 31 12:37:09 2024 +0200

    Fix style issue

commit c075a8cd8bb116dc1b8ecae9880a70656d362714
Author: Laurent Meunier <laurent@deltalima.net>
Date:   Sun Mar 31 12:35:04 2024 +0200

    Also check smtputf8 for submitted messages or when in pedantic mode

commit c02328f881c653c1e84448233f6b04a6bc30bc4f
Author: Laurent Meunier <laurent@deltalima.net>
Date:   Sun Mar 31 12:33:20 2024 +0200

    Calls to `newParser` should use `c.smtputf8`

commit a0bbd13afc17e5bd7eb845d2045b8bc156c19d25
Author: Laurent Meunier <laurent@deltalima.net>
Date:   Sun Mar 31 12:32:12 2024 +0200

    Improve SMTPUTF8 tests

commit 08735690f3682e96b7f91cae2a32eaba7dc8b1f9
Author: Laurent Meunier <laurent@deltalima.net>
Date:   Sat Mar 30 17:22:33 2024 +0100

    do earlier smtputf8-check

commit 3484651691cb3a78062e5c19d5ac7046a5dfba7b
Author: Laurent Meunier <laurent@deltalima.net>
Date:   Thu Mar 28 17:47:11 2024 +0100

    do not require the SMTPUTF8 extension when not needed

    fix #145
2024-03-31 15:23:53 +02:00
Mechiel Lukkien
d34dd8aae6
update to latest bstore, with a bugfix for queries with multiple orders that were partially handled by an index
causing returned order to be incorrect.
was triggered by new code i'm working on.
2024-03-30 09:39:18 +01:00
Mechiel Lukkien
54b24931c9
add faq entry about configuring mox to send through a smart host
suggested by arnt & friend, thanks for reporting!
2024-03-27 10:23:37 +01:00
Mechiel Lukkien
6516a27689
update to latest sconf, which now gives more helpful error messages about some invalid config lines, like one with only whitespace
from arnt & friend, thanks for reporting!
2024-03-27 10:08:15 +01:00
Mechiel Lukkien
0262f4621e
in quickstart, check outgoing smtp connectivity by dialing gmail.com mx host
if connection cannot be made, warn about it and point to configuring a
smarthost and the config options.

suggested by arnt & friend
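a simplified version of such a check, assuming a plain tcp dial to the first mx
host is enough to detect a blocked outgoing port 25:
```
package example

import (
	"fmt"
	"net"
	"strings"
	"time"
)

// checkOutgoingSMTP looks up gmail.com's MX hosts and tries to connect to the
// first one on port 25. failure often means the provider blocks outgoing smtp,
// in which case configuring a smarthost transport is the way out.
func checkOutgoingSMTP() error {
	mxs, err := net.LookupMX("gmail.com")
	if err != nil || len(mxs) == 0 {
		return fmt.Errorf("looking up mx for gmail.com: %v", err)
	}
	host := strings.TrimSuffix(mxs[0].Host, ".")
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "25"), 10*time.Second)
	if err != nil {
		return fmt.Errorf("dialing %s:25: %v (outgoing smtp may be blocked)", host, err)
	}
	return conn.Close()
}
```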
2024-03-27 09:35:16 +01:00
Mechiel Lukkien
d4958732c8
add more of a "getting started with building" to develop.txt
based on #145 by lmeunier
2024-03-26 09:34:03 +01:00
Mechiel Lukkien
40ade995a5
improve queue management
- add option to put messages in the queue "on hold", preventing delivery
  attempts until taken off hold again.
- add "hold rules", to automatically mark some/all submitted messages as "on
  hold", e.g. from a specific account or to a specific domain.
- add operation to "fail" a message, causing a DSN to be delivered to the
  sender. previously we could only drop a message from the queue.
- update admin page & add new cli tools for these operations, with new
  filtering rules for selecting the messages to operate on. in the admin
  interface, add filtering and checkboxes to select a set of messages to operate
  on.
2024-03-18 08:50:42 +01:00
Mechiel Lukkien
79f1054b64
factor common typescript api call code pattern into a function 2024-03-17 08:41:33 +01:00
Mechiel Lukkien
25b2ea164f
on build page, mention that changes can be tested easily with mox localserve 2024-03-17 07:58:02 +01:00
Mechiel Lukkien
79fb72f3cd
don't show default domain on admin account page
it is a remnant from the time domains didn't have to be specified in
"Destination" addresses. we still use it as the default selection when adding a
new address to an account. but there's not much point in showing it so
prominently. it raises more questions than it answers.

for issue #142 by tabatinga0xffff
2024-03-17 07:39:00 +01:00
Mechiel Lukkien
cef83341e5
make it harder to forget to set smtputf8 on message.Composer
we should do better: first gather all headers, and only write them when we start
on the body, and then calculate smtputf8 ourselves.
2024-03-16 20:59:19 +01:00
Mechiel Lukkien
8b2c97808d
add account option to skip the first-time sender delay
useful for accounts that automatically process messages and want to process quickly
2024-03-16 20:24:07 +01:00
Mechiel Lukkien
281411c297
add styling for sticky table headers, for scrolling with long tables 2024-03-16 19:27:29 +01:00
Mechiel Lukkien
fdee24f3bd
in web interfaces, put crumbs path in document title, for more useful browser history 2024-03-16 19:13:44 +01:00
Mechiel Lukkien
dfe587fdeb
prevent the help output of the reparse subcommand from appearing as a title in the generated documentation 2024-03-14 20:31:31 +01:00
Mechiel Lukkien
2c9cb5b847
add parser of Authentication-Results, and fix bugs it found in our generated headers
we weren't always quoting the values, like dkim's header.b=abc/def. the "/"
requires that the value be quoted.
2024-03-13 17:35:53 +01:00
Mechiel Lukkien
b91480b5af
add /b/ to website that explains how to compile mox, or gives a link to gobuild
the location.hash is used as the version to link to. this can be a tag
(release, e.g. v0.0.1), branch (e.g. main), or commit hash.
2024-03-12 09:41:09 +01:00
Mechiel Lukkien
411cb8fc78
for apidiff, generate apidiff/next.txt and rotate it on release
instead of already giving it a version name before the release. the released
version could be different.
2024-03-11 15:27:25 +01:00
Mechiel Lukkien
bcf737cbec
fix the Status command on imapclient.Conn
it needs at least 1 attribute.
also make types for those attributes, so it's harder to get them wrong.
nothing was using this function.
2024-03-11 15:22:41 +01:00
Mechiel Lukkien
4dea2de343
implement imap quota extension (rfc 9208)
we only have a "storage" limit. for total disk usage. we don't have a limit on
messages (count) or mailboxes (count). also not on total annotation size, but
we don't support annotations at all at the moment.

we don't implement setquota. with rfc 9208 that's allowed. with the previous
quota rfc 2087 it wasn't.

the status command can now return "DELETED-STORAGE". which should be the disk
space that can be reclaimed by removing messages with the \Deleted flags.
however, it's not very likely clients set the \Deleted flag without expunging
the message immediately. we don't want to go through all messages to calculate
the sum of message sizes with the deleted flag. we also don't currently track
that in MailboxCount. so we just respond with "0". not compliant, but let's
wait until someone complains.

when returning quota information, it is not possible to give the current usage
when no limit is configured. clients implementing rfc 9208 should probably
conclude from the presence of QUOTA=RES-* capabilities (only in rfc 9208, not
in 2087) and the absence of those limits in quota responses (or the absence of
an untagged quota response at all) that a resource type doesn't have a limit.
thunderbird will claim there is no quota information when no limit was
configured, so we can probably conclude that it implements rfc 2087, but not
rfc 9208.

we now also show the usage & limit on the account page.

for issue #115 by pmarini
2024-03-11 14:24:32 +01:00
Mechiel Lukkien
6c92949f13
in code/rfc cross-referenced side-by-side view, link to datatracker for rfc's 2024-03-11 09:14:26 +01:00
Mechiel Lukkien
4699504c9f
show goversion and goos/goarch on admin page 2024-03-11 08:58:40 +01:00
Mechiel Lukkien
b115c7b10d
detect whitespace issues in rfc/index.txt earlier
by checking with each fetch and update.
2024-03-11 08:46:40 +01:00
Mechiel Lukkien
5f1157060e
make video work on macos safari
by mentioning mp4 first.  it seems safari doesn't understand this webm
(resolution too high?). still doesn't seem to work on iphone/ipad safari.
2024-03-10 08:47:30 +01:00
Mechiel Lukkien
6984a2ae07
fix latest release on website, tweaks to release process 2024-03-09 20:45:23 +01:00
Mechiel Lukkien
f3501b4e06
fix spacing in rfc/index.txt
genwebsite fails on it.
will make tools that run more often on that file check more strictly too.
2024-03-09 19:55:37 +01:00
Mechiel Lukkien
c6eea5e1cf
add v0.0.10 to the website 2024-03-09 19:49:16 +01:00
Mechiel Lukkien
a601814c3d
fix build after previous commit 2024-03-09 15:52:28 +01:00
Mechiel Lukkien
0c800f3d7e
update to latest sherpats fixing typo in error message, handle absent dmarc "policy override" reason 2024-03-09 15:43:49 +01:00
Mechiel Lukkien
a96493946b
sync latest adns 2024-03-09 15:32:37 +01:00
Mechiel Lukkien
71981ebf43
ensure "make build" on macos generates the same documentation output
it has been a while since i used the old macos machine...
2024-03-09 15:06:42 +01:00
Mechiel Lukkien
a5163493e7
add release process note about updating website 2024-03-09 12:04:15 +01:00
Mechiel Lukkien
7969cf002a
allow zero configured addresses for an account
this prevents writing out a domains.conf that is invalid and can't be parsed
again. this happens when the last address was removed from an account. just a
click in the admin web interface.

accounts without email address cannot log in.

for issue #133 by ally9335
2024-03-09 11:51:02 +01:00
Mechiel Lukkien
92e0d2a682
webadmin: be more helpful when adding domains/accounts/addresses
by explaining (in the titles/hovers) what the concepts and requirements are, by
using selects/dropdowns or datalist suggestions where we have a known list, by
automatically suggesting a good account name, and putting the input fields in a
more sensible order.

based on issue #132 by ally9335
2024-03-09 11:11:52 +01:00
Mechiel Lukkien
63cef8e3a5
webmail: fix for ignoring error about sending to invalid address
before, an error about an invalid address was not used, causing a delivery
attempt to an empty address (empty localpart/domain). delivery to that address
would fail, but we should've prevented that message from being queued at all.

additionally, an error in adding the message to the queue was ignored too.
2024-03-09 09:51:24 +01:00
Mechiel Lukkien
c57aeac7f0
prevent unicode-confusion in password by applying PRECIS, and username/email address by applying unicode NFC normalization
an é (e with accent) can also be written as e+\u0301. the first form is NFC,
the second NFD. when logging in, we transform usernames (email addresses) to
NFC. so both forms will be accepted. if a client is using NFD, they can log
in too.

for passwords, we apply the PRECIS "opaquestring", which (despite the name)
transforms the value too: unicode spaces are replaced with ascii spaces. the
string is also normalized to NFC. PRECIS may reject confusing passwords when
you set a password.
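a small standalone demonstration of both normalizations, using the
golang.org/x/text packages (illustrative, not mox's code):
```
package main

import (
	"fmt"

	"golang.org/x/text/secure/precis"
	"golang.org/x/text/unicode/norm"
)

func main() {
	// usernames/localparts: NFC normalization, so é and e+combining-accent compare equal.
	nfd := "jose\u0301@example.org" // "josé" written with a combining acute accent
	fmt.Println(norm.NFC.String(nfd) == "josé@example.org") // true

	// passwords: the PRECIS OpaqueString profile; despite the name it transforms
	// the value too (unicode spaces to ascii space, NFC) and may reject some input.
	pw, err := precis.OpaqueString.String("geheim\u00a0wachtwoord") // contains a no-break space
	fmt.Printf("%q %v\n", pw, err) // "geheim wachtwoord" <nil>
}
```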
2024-03-09 09:20:29 +01:00
Mechiel Lukkien
8e6fe7459b
normalize localparts with unicode nfc when parsing
both when parsing our configs, and for incoming on smtp or in messages.
so we properly compare things like é and e+accent as equal, and accept the
different encodings of that same address.
2024-03-08 21:08:40 +01:00
Mechiel Lukkien
4fbd7abb57
update to latest adns, synced with Go's net 2024-03-08 15:31:54 +01:00
Mechiel Lukkien
a00b0ba6cd
add note about testing localserve on various OSes before release 2024-03-08 15:31:34 +01:00
Mechiel Lukkien
372585de72
build before running test-upgrade 2024-03-08 09:28:39 +01:00
Mechiel Lukkien
03e220c749
update dependencies 2024-03-08 09:28:09 +01:00
Mechiel Lukkien
a9f11b8fa3
fix changing domains.conf through admin with new MonitorDNSBLs present
by not clearing the existing derived info, we would detect duplicate domains
and refuse the changed config.
2024-03-07 11:26:53 +01:00
Mechiel Lukkien
df105a028c
unbreak enforcing dane since previous commits
by using the correct variable.
should have automated tests for this.
found it by manual test through email-security-scans.org, useful service!
2024-03-07 11:19:08 +01:00
Mechiel Lukkien
484ffa67d1
fix new reference to smtp limits rfc 2024-03-07 10:56:58 +01:00
Mechiel Lukkien
85f72582c6
mention matrix channel, add moxtools to things to check for a release 2024-03-07 10:51:48 +01:00
Mechiel Lukkien
b541646275
be more helpful about instructions for installing unbound and dnssec
by mentioning the dnssec root keys, mentioning which unbound version has EDE,
giving a "dig" invocation to check for dnssec results.

based on issue #131 by romner-set, thanks for reporting
2024-03-07 10:47:48 +01:00
Mechiel Lukkien
4db1f5593c
better check for dnssec-verifying resolver
check the authentic data bit for the NS records of "com.", not for ".": some
dnssec-verifying resolvers return unauthentic data for ".".

for issue #139 by triatic, thanks!
2024-03-07 10:34:13 +01:00
Mechiel Lukkien
9e7d6b85b7
queue: deliver to multiple recipients in a single smtp transaction
transferring the data only once. we only do this when the recipient domains
are the same. when queuing, we now take care to set the same NextAttempt
timestamp, so queued messages are actually eligible for combined delivery.

this adds a DeliverMultiple to the smtp client. for pipelined requests, it will
send all RCPT TO (and MAIL and DATA) in one go, and handles the various
responses and error conditions, returning either an overall error, or per
recipient smtp responses. the results of the smtp LIMITS extension are also
available in the smtp client now.

this also takes the "LIMITS RCPTMAX" smtp extension into account: if the server
only accepts a single recipient, we won't send multiple.
if a server doesn't announce a RCPTMAX limit, but still has one (like mox does
for non-spf-verified transactions), we'll recognize code 452 and 552 (for
historic reasons) as a temporary error, and try again in a separate transaction
immediately after. we don't yet implement "LIMITS MAILMAX", doesn't seem likely
in practice.
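mox's smtpclient now has DeliverMultiple for this; purely as a conceptual
sketch, the shape of such a transaction with the standard library's net/smtp:
```
package example

import "net/smtp"

// deliverToAll sends one message to multiple recipients in a single
// transaction: one MAIL FROM, several RCPT TO, and the DATA only once.
// per-recipient handling of 452/552 ("too many recipients") is omitted here.
func deliverToAll(c *smtp.Client, from string, recipients []string, msg []byte) error {
	if err := c.Mail(from); err != nil {
		return err
	}
	for _, rcpt := range recipients {
		if err := c.Rcpt(rcpt); err != nil {
			return err
		}
	}
	w, err := c.Data()
	if err != nil {
		return err
	}
	if _, err := w.Write(msg); err != nil {
		return err
	}
	return w.Close()
}
```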
2024-03-07 10:07:53 +01:00
Mechiel Lukkien
8550a5af45
don't expose functions on the prng that aren't mutex-protected
the current Intn calls in queue could be called concurrently, found by the race
detector with upcoming new tests.  best to just prevent any possible concurrent
access.
2024-03-07 10:05:35 +01:00
Mechiel Lukkien
47ebfa8152
queue: implement adding a message to the queue that gets sent to multiple recipients
and in a way that allows us to send that message to multiple recipients in a
single smtp transaction.
2024-03-05 20:10:28 +01:00
Mechiel Lukkien
15e450df61
implement only monitoring dns blocklists, without using them for incoming deliveries
so you can still know when someone has put you on their blocklist (which may
affect delivery), without using them.

also query dnsbls for our ips more often when we do more outgoing connections
for delivery: once every 100 messages, but at least 5 mins and at most 3 hours
since the previous check.
2024-03-05 19:37:48 +01:00
Mechiel Lukkien
e0c36edb8f
accept tls reports with both host & recipient domains, and with multiple recipient domains
embarrassingly, we didn't accept all reports we generated. after the changed
handling of reports about mx/mail host vs recipient domains, we would send reports
to mail hosts about multiple recipient domains + the mail host. and we included
a policy domain of the mail host when sending to a recipient domain. we were
still being strict in what we accepted: only a single domain in total in the
entire report, and we still enforced that a report sent to the mx host tlsrpt
address only contained the mx host as policy domain. and likewise for recipient
domains and their tls reporting addresses. those checks would reject reports
generated by a mox instance. this probably only happens with dane configured,
probably most users haven't seen it because of that.

somewhat related to issue #125
2024-03-05 11:43:49 +01:00
Mechiel Lukkien
a9cb6f9d0a
webadmin: add single-line form for looking up a cid for a received id 2024-03-05 10:50:56 +01:00
Mechiel Lukkien
5738d9e7b8
when auth fails due to missing derived secrets, don't hold it against connection
smtp & imap can only indicate which mechanisms the server software supports.
individual accounts may not have derived secrets for all those mechanisms. imap
& smtp cannot indicate that a client should try another (specific) mechanism.
but at least we shouldn't slow the connection down due to failed auth attempts
in that case.

heard from ben that this is a common source for trouble when setting up email
accounts.
2024-03-05 10:40:40 +01:00
Mechiel Lukkien
caa4931d35
tweak faq about email being rejected 2024-03-05 09:41:44 +01:00
Mechiel Lukkien
af968f7614
webmail: for junk/rejects messages, show sender address instead of name in list 2024-03-05 09:04:59 +01:00
Mechiel Lukkien
79f91ebd87
webmail: don't switch back focus after autocompleting address
actually, this fix can reduce focus changes for more operations. withStatus is
often used to show an operation in progress in the status bar, only when the
operation isn't done within 1 second. we would restore focus to the element
before the operation started. that was done because we disable elements
sometimes (preventing duplicate form submission). for things like the
autocomplete, with the tab key, which also moves focus to the next element, we
don't want that focus switched back again.
2024-03-05 08:46:56 +01:00
Mechiel Lukkien
63c3c1fd6a
webmail: leave out own address in reply all when we have addresses remaining 2024-03-04 20:21:41 +01:00
Mechiel Lukkien
26ff0c9417
increase memory limit during tests for upgrade 2024-03-04 19:11:53 +01:00
Mechiel Lukkien
13923e4b7b
better thread matching for dsns
keep track of whether a message is a dsn, and match dsn's against their sent
message by ignoring the message subject.
2024-03-04 16:40:27 +01:00
Mechiel Lukkien
f6497b1aaf
when parsing a dsn, actually set the Action field
noticed when writing dsn-processing code
2024-02-21 21:19:52 +01:00
Mechiel Lukkien
79da4faaa1
add Delivered-To header when locally delivering a DSN
so tools can pick it up and find the original "MAIL FROM", and take the encoded
destination address or message id from its localpart.
2024-02-20 16:39:49 +01:00
Mechiel Lukkien
1c934f0103
improve dsn handling
have the full smtp reply in the Diagnostic-Code field, not something that
resembles it but isn't quite the same.

include any additional error message in the Status field as comment.

before, we ended up having a Diagnostic-Code that didn't include the original
smtp code. it only had the enhanced error code.
2024-02-20 16:31:15 +01:00
Mechiel Lukkien
dc83ad1df5
set correct local account when adding a message to the queue
all dsns were going to the postmaster account...
2024-02-20 15:02:47 +01:00
Mechiel Lukkien
cb5097714b
add a few more rfc 2024-02-20 14:58:16 +01:00
Mechiel Lukkien
37de8de1c5
fix incorrect error about bare cr/lf when sending a message over smtp
we weren't properly tracking the cr's and lf's when being strict about message
lines when sending data.

we are reading buffered data from a Reader. if that chunk happens to start with
a newline, we weren't looking at the previously written data, which could be a
cr. instead, in that case, we would always claim the cr/lf wasn't correct.

the new test case triggered the behaviour before having the fix.

should solve issue #129 by x8x, thanks for the report!
2024-02-16 20:20:58 +01:00
Mechiel Lukkien
fd359d5973
add to previous commit, adding multiline smtp responses in dsn
also include api change.
2024-02-16 20:13:05 +01:00
Mechiel Lukkien
50c13965a7
include full smtp response in dsn on errors
we now keep track of the full smtp error responses, potentially multi-line. and
we include it in a dsn in the first free-form human-readable text.

it can have multiple lines in practice, e.g. when a destination mail server
tries to be helpful in explaining what the problem is.
2024-02-14 23:37:43 +01:00
Mechiel Lukkien
39bfa4338a
smtpclient: only obey SIZE= of server if it isn't 0
since that means there is no explicit limit.
2024-02-14 17:46:01 +01:00
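
a hedged go sketch of that rule, assuming the caller passes the bare "SIZE ..." extension line from the EHLO response (the function name is hypothetical):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// maxMessageSize parses an EHLO "SIZE" extension line and returns the limit
	// and whether there is one. per rfc 1870, "SIZE 0" means the server declares
	// no fixed maximum, so it must not be treated as a limit.
	func maxMessageSize(ehloLine string) (limit int64, ok bool) {
		s := strings.TrimPrefix(strings.ToUpper(ehloLine), "SIZE")
		s = strings.TrimSpace(s)
		if s == "" {
			return 0, false // SIZE announced without a parameter: no explicit limit
		}
		n, err := strconv.ParseInt(s, 10, 64)
		if err != nil || n == 0 {
			return 0, false // unparsable or 0: don't enforce a limit
		}
		return n, true
	}

	func main() {
		fmt.Println(maxMessageSize("SIZE 35882577")) // 35882577 true
		fmt.Println(maxMessageSize("SIZE 0"))        // 0 false
	}
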
Mechiel Lukkien
8046b323fb
fix and ensure consistent lines 2024-02-14 17:43:21 +01:00
Mechiel Lukkien
67300969c1
don't use bash if not needed
from mteege
2024-02-11 21:46:45 +01:00
Mechiel Lukkien
93c52b01a0
implement "future release"
the smtp extension, rfc 4865.
also implement in the webmail.
the queueing/delivery part hardly required changes: we just set the first
delivery time in the future instead of immediately.

still have to find the first client that implements it.
2024-02-10 17:55:56 +01:00
Mechiel Lukkien
17734196e3
add rfc 9078, "Reaction: Indicating Summary Reaction to a Message" to the list
about emoji responses to messages.

no concrete plans (lack of time), but would be fun to experiment with in the
webmail.
2024-02-10 12:14:36 +01:00
Mechiel Lukkien
49c8dbf47e
add FAQ about directly accessing mailboxes through the file system
commonly asked, again at fosdem.
2024-02-10 11:39:31 +01:00
Mechiel Lukkien
ee1db2dde7
webmail: implement registering and handling "mailto:" links
to start composing a message.

the help popup now has a button to register the "mailto:" links with the mox
webmail (typically only works over https, not all browsers support it).

the mailto links are specified in rfc 6068. we support the to/cc/bcc/subject/body
parameters. other parameters should be seen as custom headers, but we don't
support messages with custom headers at all at the moment, so we ignore them.

we now also turn text of the form "mailto:user@host" into a clickable link
(will not be too common). we could be recognizing any "x@x.x" as email address
and make them clickable in the future.

thanks to Hans-Jörg for explaining this functionality.
2024-02-09 11:21:33 +01:00
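
the webmail does this client-side in typescript; as an editorial illustration, a small go sketch of the same rfc 6068 parsing (hypothetical function, only the to/cc/bcc/subject/body parameters, other parameters ignored as described above):

	package main

	import (
		"fmt"
		"net/url"
	)

	// composeFromMailto extracts the compose fields from a "mailto:" link: the
	// address is in the opaque part, to/cc/bcc/subject/body come from the query
	// string, and any other parameters (custom headers) are ignored here.
	func composeFromMailto(link string) (map[string]string, error) {
		u, err := url.Parse(link)
		if err != nil {
			return nil, err
		}
		if u.Scheme != "mailto" {
			return nil, fmt.Errorf("unexpected scheme %q", u.Scheme)
		}
		q := u.Query()
		m := map[string]string{"to": u.Opaque}
		for _, k := range []string{"to", "cc", "bcc", "subject", "body"} {
			if v := q.Get(k); v != "" {
				if k == "to" && m["to"] != "" {
					m["to"] += "," + v // extra "to" addresses from the query are appended
					continue
				}
				m[k] = v
			}
		}
		return m, nil
	}

	func main() {
		m, err := composeFromMailto("mailto:user@example.org?subject=hi&cc=other@example.org")
		fmt.Println(m, err)
	}
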
Mechiel Lukkien
f3bf348214
webmail: show unicode for internationalized email addresses by default
before, we showed the xn-- ascii names, along with the unicode name. but users
of internationalized email don't want to see any xn-- names. we now put those
in an html title attribute for some cases, so you can still see them if you
really want to, by hovering.

after talking to arnt at fosdem.
2024-02-08 18:03:48 +01:00
Mechiel Lukkien
39f4800290
xr: unbreak following links, they were now being opened in a new window
broken in previous update. the tricky part keeps being about when browsers fire
'load' and 'hashchange' events for the outer and two inner documents. the
previous change attempted to prevent a history item being set on the first
load. that behaviour seems to be kept.
2024-02-08 16:25:33 +01:00
Mechiel Lukkien
4ea9e9e978
run more of go vet on the special-purpose tools
tools that are behind build constraints
2024-02-08 15:12:43 +01:00
Mechiel Lukkien
61836f6d00
don't shadow variables, no empty "else" blocks
from go vet and staticcheck
2024-02-08 15:12:06 +01:00
Mechiel Lukkien
5f40d23c1c
remove unused build constraint 2024-02-08 15:10:32 +01:00
Mechiel Lukkien
e75419aeaf
unbreak rfc/xr.go after changing golang.org/x/exp/maps
shouldn't have changed this one.
2024-02-08 15:08:26 +01:00
Mechiel Lukkien
d1b87cdb0d
replace packages slog and slices from golang.org/x/exp with stdlib
since we are now at go1.21 as minimum.
2024-02-08 14:49:01 +01:00
Mechiel Lukkien
c698cd07d9
apidiff: properly check against actual previous version
not hardcoded v0.0.8...
2024-02-08 14:46:31 +01:00
Mechiel Lukkien
ecf60568b4
fix: don't insert spurious \r when fixing up crlf line endings when writing a message
message.Writer.Write() adds missing \r's, but the buffer of "last bytes
written" was only being updated while writing the message headers, not while
writing the body. so for Write()'s in the body section (depending on
buffering), we were compensating based on the "last bytes written" as set
during the last write in the header section. that could cause a spurious \r to
be added when a Write starts with \n while the previous Write did properly
end with \r.

for issue #117, thanks haraldrudell for reporting and investigating
2024-02-08 12:33:19 +01:00
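
a minimal go sketch of the fixed behaviour, using a simple wrapper type (not mox's message.Writer): the last written byte is tracked across all Write calls, so a body Write starting with \n after a previous Write ending in \r doesn't get a spurious extra \r:

	package main

	import (
		"bytes"
		"fmt"
		"io"
	)

	// crlfWriter adds a missing \r before a bare \n, and keeps tracking the last
	// byte written across *all* Write calls, for headers and body alike.
	type crlfWriter struct {
		w    io.Writer
		last byte // last byte written, across Write calls
	}

	func (cw *crlfWriter) Write(p []byte) (int, error) {
		for i, b := range p {
			if b == '\n' && cw.last != '\r' {
				if _, err := cw.w.Write([]byte{'\r'}); err != nil {
					return i, err
				}
			}
			if _, err := cw.w.Write([]byte{b}); err != nil {
				return i, err
			}
			cw.last = b
		}
		return len(p), nil
	}

	func main() {
		var buf bytes.Buffer
		cw := &crlfWriter{w: &buf}
		io.WriteString(cw, "Subject: test\r")
		io.WriteString(cw, "\nbody\n")        // starts with \n right after a \r from the previous write
		fmt.Printf("%q\n", buf.String())      // "Subject: test\r\nbody\r\n", no spurious \r
	}
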
Mechiel Lukkien
dd540e401a
replace another "/bin/bash" with "/usr/bin/env bash" and remove old file 2024-02-01 09:03:32 +01:00
Pierre-Alain TORET
5f297ce54c Improve portability of build scripts 2024-02-01 09:00:21 +01:00
Mechiel Lukkien
1d9e80fd70
for domains configured only for reporting, don't reject messages to that domain during smtp submission
you can configure a domain only to accept dmarc/tls reports. those domains
won't have addresses for that domain configured (the reporting destination
address is for another domain). we already handled such domains specially in a
few places. but we were considering ourselves authoritative for such domains if
an smtp client would send a message to the domain during submit. and we would
reject all recipient addresses. but we should be trying to deliver those
messages to the actual mx hosts for the domain, which we will now do.
2024-01-26 19:51:23 +01:00
Mechiel Lukkien
a524c3a50b
clarify unicode domain names in config file 2024-01-24 10:48:44 +01:00
Mechiel Lukkien
62be829df0
when sending tls reports, ensure we use ASCII A-labels, not U-labels in the policy-domain field 2024-01-24 10:36:20 +01:00
Mechiel Lukkien
14aa85482e
imapserver: fix interpreting the first "*" in sequence/uid patterns, like "*:123" or plain "*"
in some cases, they were interpreted as meaning "the first sequence/uid", but
it should always be "the last sequence/uid", just like patterns of the form
"123:*".

this wrong interpretation was used in the "fetch" command when combined with
"changedsince", and in the search command for some parameters, and during
expunge with an explicit uid range. the form "*" and "*:123" aren't very
common.
2024-01-23 21:21:08 +01:00
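
a hedged go sketch of the corrected interpretation for a single sequence-set element (hypothetical helper, uids only): "*" always resolves to the last uid, and a reversed range like "*:123" covers the same uids as "123:*":

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// resolveUIDRange interprets one element of an IMAP sequence set: "*" always
	// means the last (largest) uid, never the first, and a range is normalized so
	// "from" is not larger than "to".
	func resolveUIDRange(elem string, lastUID uint32) (from, to uint32, err error) {
		parse := func(s string) (uint32, error) {
			if s == "*" {
				return lastUID, nil // "*" is the last uid
			}
			n, err := strconv.ParseUint(s, 10, 32)
			return uint32(n), err
		}
		a, b, found := strings.Cut(elem, ":")
		if from, err = parse(a); err != nil {
			return 0, 0, err
		}
		to = from
		if found {
			if to, err = parse(b); err != nil {
				return 0, 0, err
			}
		}
		if from > to {
			from, to = to, from // e.g. "*:123" with lastUID 1000 becomes 123:1000
		}
		return from, to, nil
	}

	func main() {
		fmt.Println(resolveUIDRange("*:123", 1000)) // 123 1000 <nil>
		fmt.Println(resolveUIDRange("*", 1000))     // 1000 1000 <nil>
	}
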
Mechiel Lukkien
d9dde0d89e
tweaks to cross-referenced html
- on the two index pages, show rows with alternating background color so the
  files in the 2nd column are more easily matched to the name in the 1st
  column.
- unbreak browser history when navigating files/line numbers. changing an
  iframe src attribute adds an entry to the history. that happens on "back" too,
  causing a 2nd "back" to go forward again. instead of replacing the iframe src,
  we now replace the iframe, as that doesn't cause an entry to be added to the
  browser history. dark browser magic...
2024-01-23 19:29:20 +01:00
Mechiel Lukkien
9cf8ee2162
webmail: don't show an age of "-<1min", drop the -
if a browser is ahead just a few seconds, we would show "-<1min", not great.
just show "<1min" in that case. we'll still show negative age if drift is more
than 1 minute, which seems like a good hint to get time fixed on either client
or server.
2024-01-23 17:01:34 +01:00
Mechiel Lukkien
ed8938c113
fix typo in config field explanation 2024-01-23 16:59:08 +01:00
Mechiel Lukkien
20812dcf62
add types for missing dmarc report values in reports
so admin frontend doesn't complain about invalid values (empty strings).
2024-01-23 16:51:05 +01:00
Mechiel Lukkien
46aacdb79b
webmail: when q/b-word-decoding attachment filenames, recognize more charset encodings
based on #113 by jsfan3
2024-01-12 15:25:23 +01:00
Mechiel Lukkien
aea8740e65
quota: fix handling negative max size when configured for an account, and clarify value is in bytes in config file
for #115 by pmarini-nc
2024-01-12 15:02:16 +01:00
Mechiel Lukkien
7b6cfcd572
add quickstart video 2024-01-11 23:01:04 +01:00
Mechiel Lukkien
0bc3072944
new website for www.xmox.nl
most content is in markdown files in website/, some is taken out of the repo
README and rfc/index.txt. a Go file generates html. static files are kept in a
separate repo due to size.
2024-01-10 17:22:03 +01:00
Mechiel Lukkien
dda0a4ced1
at "client config", mention clients should explicitly be configured with the most secure authentication mechanism supported
to prevent authentication mechanism downgrade attacks by MitM.
2024-01-09 10:50:42 +01:00
Mechiel Lukkien
2392f79aa9
for username/email input field in login form, automatically resize so longer addresses are also fully visible
feedback from jsfan3 in issue #58, thanks!
2024-01-08 22:00:42 +01:00
Mechiel Lukkien
c348834ce9
prevent firefox from autocompleting the current password in the form/fields for changing password 2024-01-05 12:15:55 +01:00
Mechiel Lukkien
9796c4539d
localserve: no longer suggest http basic auth for the web interfaces 2024-01-05 12:07:43 +01:00
Mechiel Lukkien
ac8256feb6
for errors during maildir/mbox zip/tgz import in account page, return http 400 for user errors (e.g. bad file format) and show the error message 2024-01-05 11:31:05 +01:00
Mechiel Lukkien
62db2af846
update dependencies 2024-01-05 11:17:11 +01:00
Mechiel Lukkien
0f8bf2f220
replace http basic auth for web interfaces with session cookie & csrf-based auth
the http basic auth we had was very simple to reason about, and to implement.
but it has a major downside:

there is no way to logout, browsers keep sending credentials. ideally, browsers
themselves would show a button to stop sending credentials.

a related downside: the http auth mechanism doesn't indicate for which server
paths the credentials are.

another downside: the original password is sent to the server with each
request. though sending original passwords to web servers seems to be
considered normal.

our new approach uses session cookies, along with csrf values when we can. the
sessions are server-side managed, automatically extended on each use. this
makes it easy to invalidate sessions and keeps the frontend simpler (than with
long- vs short-term sessions and refreshing). the cookies are httponly,
samesite=strict, scoped to the path of the web interface. cookies are set
"secure" when set over https. the cookie is set by a successful call to Login.
a call to Logout invalidates a session. changing a password invalidates all
sessions for a user, but keeps the session with which the password was changed
alive. the csrf value is also random, and associated with the session cookie.
the csrf must be sent as header for api calls, or as parameter for direct form
posts (where we cannot set a custom header). rest-like calls made directly by
the browser, e.g. for images, don't have a csrf protection. the csrf value is
returned by the Login api call and stored in localstorage.

api calls without credentials return code "user:noAuth", and with bad
credentials return "user:badAuth". the api client recognizes this and triggers
a login. after a login, all auth-failed api calls are automatically retried.
only for "user:badAuth" is an error message displayed in the login form (e.g.
session expired).

in an ideal world, browsers would take care of most session management. a
server would indicate authentication is needed (like http basic auth), and the
browser uses trusted ui to request credentials for the server & path. the
browser could use a safer mechanism than sending original passwords to the
server, such as scram, along with a standard way to create sessions. for now,
web developers have to do authentication themselves: from showing the login
prompt, ensuring the right session/csrf cookies/localstorage/headers/etc are
sent with each request.

webauthn is a newer way to do authentication, perhaps we'll implement it in the
future. though hardware tokens aren't an attractive option for many users, and
it may be overkill as long as we still do old-fashioned authentication in smtp
& imap where passwords can be sent to the server.

for issue #58
2024-01-05 10:48:42 +01:00
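
a compact go sketch of the scheme described above (editorial illustration with assumed names such as the "webmailsession" cookie and "x-mox-csrf" header, not mox's actual handlers): a path-scoped httponly samesite=strict cookie set on login, and a csrf value checked as a header on api calls:

	package main

	import (
		"crypto/rand"
		"encoding/base64"
		"log"
		"net/http"
	)

	var sessions = map[string]string{} // session token -> csrf token (in-memory sketch)

	func newToken() string {
		buf := make([]byte, 24)
		if _, err := rand.Read(buf); err != nil {
			panic(err)
		}
		return base64.RawURLEncoding.EncodeToString(buf)
	}

	func login(w http.ResponseWriter, r *http.Request) {
		// ... credentials would be verified here ...
		session, csrf := newToken(), newToken()
		sessions[session] = csrf
		http.SetCookie(w, &http.Cookie{
			Name:     "webmailsession", // assumed cookie name
			Value:    session,
			Path:     "/webmail/", // scoped to this web interface only
			HttpOnly: true,
			SameSite: http.SameSiteStrictMode,
			Secure:   r.TLS != nil, // "secure" when served over https
		})
		w.Write([]byte(csrf)) // frontend stores the csrf value (e.g. in localstorage)
	}

	func api(w http.ResponseWriter, r *http.Request) {
		c, err := r.Cookie("webmailsession")
		if err != nil || sessions[c.Value] == "" {
			http.Error(w, "user:noAuth", http.StatusUnauthorized)
			return
		}
		if r.Header.Get("x-mox-csrf") != sessions[c.Value] { // assumed header name
			http.Error(w, "user:badAuth", http.StatusForbidden)
			return
		}
		w.Write([]byte("ok"))
	}

	func main() {
		http.HandleFunc("/webmail/login", login)
		http.HandleFunc("/webmail/api", api)
		log.Fatal(http.ListenAndServe("localhost:8080", nil))
	}
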
Mechiel Lukkien
c930a400be
remove leftover debug print 2024-01-03 10:35:54 +01:00
Mechiel Lukkien
446726c940
quickstart: clarify that the long text is DNS records to add to a zone file
for issue #111 by jsaponara, thanks for reporting!
2024-01-01 20:27:20 +01:00
Mechiel Lukkien
1f9b640d9a
add faq for smtp smuggling, fix bug around handling "\nX\n" for any X, reject bare carriage returns and possibly smtp-smuggling attempts
mox was already strict in its "\r\n.\r\n" handling for end-of-message in an
smtp transaction.

due to a mostly unrelated bug, sequences of "\nX\n", including "\n.\n" were
rejected with a "local processing error".

the sequence "\r\n.\n" dropped the dot, not necessarily a big problem, this is
unlikely to happen in a legitimate transaction and the behaviour is not
unreasonable.

we take this opportunity to reject all bare \r.  we detect all slightly
incorrect combinations of "\r\n.\r\n" with an error mentioning smtp smuggling,
in part to appease the tools checking for it.

smtp errors are 500 "bad syntax", and mention smtp smuggling.
2024-01-01 20:11:16 +01:00
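
a hedged go sketch of the strictness described above, assuming the caller hands over one raw DATA line at a time including its line terminator (hypothetical function):

	package main

	import (
		"fmt"
		"strings"
	)

	// checkDataLine enforces strict SMTP DATA line endings: every line must end
	// in "\r\n", a lone "." ends the message, and any bare "\r" or "\n" inside a
	// line is rejected (these are the malformed sequences smtp-smuggling checks
	// probe for).
	func checkDataLine(line string) (end bool, err error) {
		if !strings.HasSuffix(line, "\r\n") {
			return false, fmt.Errorf("500 bad syntax: missing crlf line ending (possible smtp smuggling)")
		}
		body := line[:len(line)-2]
		if strings.ContainsAny(body, "\r\n") {
			return false, fmt.Errorf("500 bad syntax: bare cr or lf (possible smtp smuggling)")
		}
		return body == ".", nil
	}

	func main() {
		fmt.Println(checkDataLine(".\r\n"))      // true <nil>: proper end of message
		fmt.Println(checkDataLine("hi.\r\n"))    // false <nil>: normal data line
		fmt.Println(checkDataLine("oops\r\r\n")) // error: bare cr
		fmt.Println(checkDataLine(".\n"))        // error: a "\n.\n" style ending is rejected
	}
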
Mechiel Lukkien
4b8b53e776
fix build for windows
found with "make buildall", it was broken since the change for reusable components.
2024-01-01 16:08:50 +01:00
Mechiel Lukkien
3f5823de31
add example for sending email through external smtp provider
to serve as documentation. based on issue #105.
2024-01-01 15:12:40 +01:00
Mechiel Lukkien
fce3a5bf73
webmail: moxVersion was too similar to moxversion, choose better name 2024-01-01 14:51:17 +01:00
Mechiel Lukkien
59bffa4701
imapserver: list STATUS=SIZE as capability
we already implemented it as part of imap4rev2, but most clients are imap4rev1
and need to see the announced capability.
2024-01-01 14:32:55 +01:00
Mechiel Lukkien
b887539ee4
webmail/*.ts needed rebuild after changing tsc.sh to target es2022
hopefully last of embarrassing string of commits...
2024-01-01 14:13:05 +01:00
Mechiel Lukkien
3bfff59940
fix build with github action
must have typescript installed before building
2024-01-01 14:04:16 +01:00
Mechiel Lukkien
618e5c2aa3
add gents.sh, forgot to commit 2024-01-01 13:51:20 +01:00
Mechiel Lukkien
d84c96eca5
imapserver: allow creating mailboxes with characters &#*%, and encode mailbox names in imap with imaputf7 when needed
the imapserver started with imap4rev2-only and utf8=only.  to prevent potential
issues with imaputf7, which makes "&" special, we refused any mailbox with an
"&" in the name. we already tried decoding utf7, falling back to using a
mailbox name verbatim. that behaviour wasn't great. we now treat the enabled
extensions IMAP4rev2 and/or UTF8=ACCEPT as indication whether mailbox names are
in imaputf7. if they are, the encoding must be correct.

we now also send mailbox names in imaputf7 when imap4rev2/utf8=accept isn't
enabled.

and we now allow "*" and "%" (wildcard characters for matching) in mailbox
names. not ideal for IMAP LIST with patterns, but not enough reason to refuse
them in mailbox names. people that migrate may run into this, possibly as a
blocker.

we also allow "#" in mailbox names, but not as first character, to prevent
potential clashes with IMAP namespaces in the future.

based on report from Damian Poddebniak using
https://github.com/duesee/imap-flow and issue #110, thanks for reporting!
2024-01-01 13:27:29 +01:00
Mechiel Lukkien
a9940f9855
change javascript into typescript for webaccount and webadmin interface
all ui frontend code is now in typescript. we no longer need jshint, and we
build the frontend code during "make build".

this also changes the tlsrpt types for a Report to not encode field names with
dashes, but to keep them valid identifiers in javascript. this makes it more
convenient to work with in the frontend, and works around a sherpats
limitation.
2023-12-31 12:05:31 +01:00
Mechiel Lukkien
da3ed38a5c
assume a dns cname record mail.<domain>, pointing to the hostname of the mail server, for clients to connect to
the autoconfig/autodiscover endpoints, and the printed client settings (in
quickstart, in the admin interface) now all point to the cname record (called
"client settings domain"). it is configurable per domain, and set to
"mail.<domain>" by default. for existing mox installs, the domain can be added
by editing the config file.

this makes it easier for a domain to migrate to another server in the future.
client settings don't have to be updated, the cname can just be changed.
before, the hostname of the mail server was configured in email clients.
migrating away would require changing settings in all clients.

if a client settings domain is configured, a TLS certificate for the name will
be requested through ACME, or must be configured manually.
2023-12-24 11:06:08 +01:00
Mechiel Lukkien
e7478ed6ac
implement the plus variants of scram, to bind the authentication exchange to the tls connection
to get the security benefits (detecting mitm attempts), explicitly configure
clients to use a scram plus variant, e.g. scram-sha-256-plus. unfortunately,
not many clients support it yet.

imapserver scram plus support seems to work with the latest imtest (imap test
client) from cyrus-sasl. no success yet with mutt (with gsasl) though.
2023-12-23 23:19:36 +01:00
Mechiel Lukkien
4701857d7f
at startup, request missing acme tls certificates more quickly/silently 2023-12-22 13:41:00 +01:00
Mechiel Lukkien
dbd6773f6b
quickstart: don't print logging line about new password 2023-12-22 12:00:05 +01:00
Mechiel Lukkien
ee1094e1cb
implement ACME external account binding (EAB)
where a new acme account is created with a reference to an existing non-acme
account known by the acme provider. some acme providers require this.
2023-12-22 11:50:50 +01:00
Mechiel Lukkien
db3fef4981
when suggesting CAA records for a domain, suggest variants that bind to the account id and with validation methods used by mox
should prevent potential mitm attacks. especially when done close to the
machine itself (where a http/tls challenge is intercepted to get a valid
certificate), as seen on the internet last month.
2023-12-21 15:53:32 +01:00
Mechiel Lukkien
ca97293cb2
add last commit date to cross-reference page 2023-12-21 09:46:01 +01:00
Mechiel Lukkien
802dcef192
webmail: for messages in designated Sent mailbox, show To/Cc/Bcc in italics, and show all correspondents in collapsed thread
showing addressees for Sent messages for issue #104 by mattfbacon, thanks for the report!
2023-12-21 09:23:06 +01:00
Mechiel Lukkien
57fc37af22
if an smtp-submitted message has a return-path header, only fail in pedantic mode
some software sends messages with return-path header.

for issue #103 by Halyul, thanks for reporting!
2023-12-20 21:04:03 +01:00
Mechiel Lukkien
d73bda7511
add per-account quota for total message size disk usage
so a single user cannot fill up the disk.
by default, there is (still) no limit. a default can be set in the config file
for all accounts, and a per-account max size can be set that would override any
global setting.

this does not take into account disk usage of the index database. and also not
of any file system overhead.
2023-12-20 20:54:12 +01:00
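
a small go sketch of the quota rule as described (hypothetical function and parameter names): a per-account maximum overrides the global default, zero means no limit, and only message sizes are counted:

	package main

	import (
		"errors"
		"fmt"
	)

	// canAddMessage checks whether delivering a message of msgSize bytes would
	// exceed the quota: a per-account maximum (if > 0) overrides the global
	// default (if > 0); with both zero there is no limit. index/database and
	// file system overhead are not counted.
	func canAddMessage(usedBytes, msgSize, accountMax, globalMax int64) error {
		limit := globalMax
		if accountMax > 0 {
			limit = accountMax
		}
		if limit > 0 && usedBytes+msgSize > limit {
			return errors.New("552 5.2.2 quota exceeded")
		}
		return nil
	}

	func main() {
		fmt.Println(canAddMessage(900_000, 200_000, 1_000_000, 0)) // error: over the account max
		fmt.Println(canAddMessage(900_000, 200_000, 0, 0))         // <nil>: no limit configured
	}
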
Mechiel Lukkien
e048d0962b
small fixes
a typo, using ongoing tx instead of making a new one, don't pass literal string
to formatting function.

found while working on quota support.
2023-12-16 11:53:14 +01:00
Mechiel Lukkien
dfddf0e874
for webapi requests, make canceled contexts a user instead of server error
no need to trigger alerts for user-initiated errors
2023-12-15 15:47:54 +01:00
Mechiel Lukkien
1be0cf485e
add more short-term todo's to the roadmap 2023-12-14 20:34:44 +01:00
Mechiel Lukkien
1abadc5499
add "warn" log level
now that we are using slog, which has them.
and we already could use them for a deprecation warning.
2023-12-14 20:26:06 +01:00
Mechiel Lukkien
41e3d1af10
imapserver: only send OLDNAME in LIST responses when IMAP4rev2 was enabled
OLDNAME is included in IMAP4rev2, but not in IMAP4rev1. it is also included in
the NOTIFY extension, but we don't implement that yet.

found by Damian Poddebniak with https://github.com/duesee/imap-flow, thanks!
2023-12-14 20:20:17 +01:00
Mechiel Lukkien
fbc18d522d
smtpserver: when writing slow responses, don't take so long the remote smtp client regards it as timeout
when writing the 4xx temporary error line, we were taking 1s in between each
byte. the total line could take longer than 30 seconds, which is the timeout we
use for reading a whole line (regardless of individual bytes). so mox as
deliverer was timing out to mox as slow rejecter. this causes slow writes to
not take longer than the 30s timeout: if we are 2s before the 30s, we write the
remainder in one go.

based on a debug log from naturalethic, thanks!
2023-12-14 20:20:17 +01:00
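
a minimal go sketch of the adjusted slow writing, with assumed names and a caller-provided deadline standing in for the remote's 30 second line timeout:

	package main

	import (
		"fmt"
		"io"
		"os"
		"time"
	)

	// writeSlowly writes one byte per second to slow down the (likely spammy)
	// client, but never past the given deadline: when within 2 seconds of it,
	// the remainder of the line is written in one go.
	func writeSlowly(w io.Writer, line []byte, deadline time.Time) error {
		for i := range line {
			if time.Until(deadline) <= 2*time.Second {
				_, err := w.Write(line[i:]) // flush the rest at once
				return err
			}
			if _, err := w.Write(line[i : i+1]); err != nil {
				return err
			}
			time.Sleep(time.Second)
		}
		return nil
	}

	func main() {
		deadline := time.Now().Add(5 * time.Second)
		err := writeSlowly(os.Stdout, []byte("451 4.7.1 try again later\r\n"), deadline)
		fmt.Println(err)
	}
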
Mechiel Lukkien
2710a5b971
when generating Authentication-Results, put each method on a new line for better readability 2023-12-14 20:20:17 +01:00
Mechiel Lukkien
406fdc312d
when autocompleting, abort previous still pending request
should prevent a long list of "Autocompleting address" mentions in the status
bar at the top in case of non-responsive network
2023-12-14 20:20:17 +01:00
Mechiel Lukkien
22f46aa174
when logging version, also log go version and goos and goarch 2023-12-14 20:20:17 +01:00
Mechiel Lukkien
6d081f38fc
update to latest github.com/prometheus/common to drop dependency on github.com/golang/protobuf 2023-12-14 20:20:17 +01:00
Mechiel Lukkien
920b858da7
when logging, format timestamps more compactly, without needing quoting 2023-12-14 20:20:17 +01:00
Mechiel Lukkien
d1b66035a9
add more documentation, examples with tests to illustrate reusable components 2023-12-14 20:20:17 +01:00
Mechiel Lukkien
810cbdc61d
document that we keep some packages reusable 2023-12-14 20:20:12 +01:00
Mechiel Lukkien
19d1a8059b
smtpclient: expose entire tls connectionstate, not just whether tls was enabled
for moxtools
2023-12-14 15:39:47 +01:00
Mechiel Lukkien
f3a35a6766
keep track of the exposed api for reusable packages using apidiff 2023-12-14 15:39:47 +01:00
Mechiel Lukkien
72ac1fde29
expose fewer internals in packages, for easier software reuse
- prometheus is now behind an interface, they aren't dependencies for the
  reusable components anymore.
- some dependencies have been inverted: instead of packages importing a main
  package to get configuration, the main package now sets configuration in
  these packages. that means fewer internals are pulled in.
- some functions now have new parameters for values that were retrieved from
  package "mox-".
2023-12-14 15:39:36 +01:00
Mechiel Lukkien
fcaa504878
wrap long lines with many logging parameters to multiple lines
for improved readability
2023-12-14 13:45:52 +01:00
Mechiel Lukkien
5b20cba50a
switch to slog.Logger for logging, for easier reuse of packages by external software
we don't want external software to include internal details like mlog.
slog.Logger is/will be the standard.

we still have mlog for its helper functions, and its handler that logs in
concise logfmt used by mox.

packages that are not meant for reuse still pass around mlog.Log for
convenience.

we use golang.org/x/exp/slog because we also support the previous Go toolchain
version. with the next Go release, we'll switch to the builtin slog.
2023-12-14 13:45:52 +01:00
Mechiel Lukkien
56b2a9d980
help user run "mox localserve" using docker
based on feedback from damian poddebniak
2023-12-11 15:56:29 +01:00
Mechiel Lukkien
af5da17623
smtpserver: also allow space after "MAIL FROM:" and "RCPT TO:" command for SMTP delivery (unless in pedantic mode)
we already allowed it for (authenticated) SMTP submission. it turns out also
legitimate senders can use this invalid syntax to deliver messages.

for issue #101 by Fell, thanks for reporting & explaining!
2023-12-11 15:34:11 +01:00
Mechiel Lukkien
02eb7b5033
bugfix: imapserver "append" command: properly account for message size when bare newlines ("\n") are converted to crlf ("\r\n")
the original size, with bare newlines, was stored in the database, not the
actual adjusted file size. this caused failures when reading the message.

users may want to run "mox fixmsgsize <account>" if they imported messages from
another account over IMAP.

reported by daftaupe, thanks!
2023-12-11 15:18:06 +01:00
Mechiel Lukkien
7c1879da82
webmail: when replying to a message we sent, don't compose the reply to ourselves, but copy the original to/cc/bcc headers 2023-11-27 12:26:31 +01:00
Mechiel Lukkien
fb81effe45
webmail: for domain in From address, show if domain is dmarc(-like) validated
i'm not sure this is good enough.
this is based on field MsgFromValidation, but it doesn't hold the full DMARC information.
we also don't know mailing list-status for all historic messages.
so the red underline can occur too often.
2023-11-27 12:11:05 +01:00
Mechiel Lukkien
2ff87a0f9c
more strict junk checks for some first-time senders: when TLS isn't used and when recipient address isn't in To/Cc header
both cases are quite typical for spammers, and not for legitimate senders.
this doesn't apply to known senders. and it only requires that the content look
more like ham instead of spam. so legitimate mail can still get through with
these properties.
2023-11-27 10:34:01 +01:00
Mechiel Lukkien
8e37fadc13
webmail: in initial start (sse) event, send the version, and ask user to reload if it changes
will prevent showing errors to users about new unknown fields that may be added
in the new version.
2023-11-27 08:06:27 +01:00
Mechiel Lukkien
416113af72
webmail: do not automatically mark read messages in Rejects mailbox as nonjunk 2023-11-27 07:34:18 +01:00
Mechiel Lukkien
9d2e761494
turns out the esearch tag is a string before imap4rev2, so can't blame new outlook 2023-11-22 22:01:23 +01:00
Mechiel Lukkien
2ae121e400
work around bug in microsoft outlook "new", which fails when the tag in an esearch response doesn't have quotes 2023-11-22 21:51:04 +01:00
Mechiel Lukkien
91b7d3dda8
implement the obsolete sasl login mechanism for smtp
so microsoft outlook "new" can login. that's the "new" email client that logs
in from cloud servers.
2023-11-22 21:44:55 +01:00
Mechiel Lukkien
c66fa64b8b
wrap long dkim dns records at 100 characters instead of 255 for better display (no line-wrap) 2023-11-22 14:02:24 +01:00
Mechiel Lukkien
361bc2b516
when accepting an incoming message, turn any bare newlines (without carriage return) into crlf
because that is what most of the code expects. we could work around having bare
lf, but it would complicate too much code.

currently, a message with bare lf is accepted (in smtpserver delivery,
imapserver append, etc), but when an imap session would try to fetch parsed
parts, that would fail and could even cause an imapserver panic (closing the
connection).

in message imports we would already convert bare lf to crlf (because it is
expected those messages are all lf-only-ending).

we store messages with crlf-ending instead of lf-ending so the imapserver has
all correct information at hand (line counts, byte counts).

found by using emclient with mox. it adds a message to the inbox that can have
mixed crlf and bare lf line endings in a few header fields (in some
localization, emclient authors explained how that happened, thanks!).  we can
now convert those lines and read those messages over imap. emclient already
switched to all-crlf line endings in newer (development) versions.
2023-11-21 13:19:54 +01:00
Mechiel Lukkien
3d80c05423
webmail: for long to/cc/bcc address list (>5) show the first 4 and a button to show the rest
for issue #98 by mattfbacon, thanks
2023-11-20 21:36:40 +01:00
Mechiel Lukkien
73a2a09711
better handling of outgoing tls reports to recipient domains vs hosts
based on discussion on the uta mailing list. it seems the intention of the
tlsrpt rfc is to only send reports to recipient domains. but i was able to interpret the
tlsrpt rfc as sending reports to mx hosts too ("policy domain", and because it
makes sense given how DANE works per MX host, not recipient domain). this
change makes the behaviour of outgoing reports to recipient domains work more
in line with expectations most folks may have about tls reporting (i.e. also
include per-mx host tlsa policies in the report). this also keeps reports to mx
hosts working, and makes them more useful by including the recipient domains of
affected deliveries.
2023-11-20 11:31:46 +01:00
Mechiel Lukkien
e5f77a0411
update to latest bstore, with fix for a bug that was triggered by an upcoming commit 2023-11-20 11:01:15 +01:00
Mechiel Lukkien
bdd8fa078e
rfc/xr: tweak, committed previous too soon... 2023-11-14 14:21:02 +01:00
Mechiel Lukkien
5b62013f27
rfc/xr: be more careful about which urls we load in iframes
anything that looks like it specifies a different host should not be loaded.
www.xmox.nl also has a CSP policy that should prevent resources from other
domains from being loaded.
2023-11-14 14:09:35 +01:00
Mechiel Lukkien
51e314f65a
for external domains (for which we only accept external dmarc reports), don't try to fetch tls certificates at startup for autoconfig host 2023-11-14 00:26:18 +01:00
Mechiel Lukkien
651fa68067
webadmin: in list with dmarc evaluations, add the dispositions applied
to easily spot rejects
2023-11-13 14:44:40 +01:00
Mechiel Lukkien
bcb80c3598
tweaks to cross-referenced code/rfc html pages
- show commit hash, with a link to the commit
- highlight if this is the dev or released version page
- sort the rfc's, the list in rfc/index.txt has the major rfc's at the top, but this nuance is lost in the html page
2023-11-13 14:12:40 +01:00
Mechiel Lukkien
e24e1bee19
add suppression list for outgoing dmarc and tls reports
for reporting addresses that cause DSNs to be returned. that just adds noise.
the admin can add/remove/extend addresses through the webadmin.

in the future, we could send reports with a smtp mail from of
"postmaster+<signed-encoded-recipient>@...", and add the reporting recipient
on the suppression list automatically when a DSN comes in on that address, but
for now this will probably do.
2023-11-13 13:48:52 +01:00
Mechiel Lukkien
6ce69d5425
in starttls command in smtp & imap server, add the cid in the "ok, go ahead with tls" response
to facilitate debugging. a remote client that logs details about failing
connections can give the cid to the mox operator to find the relevant logging.
2023-11-13 10:26:31 +01:00
Mechiel Lukkien
58d84f3882
try fixing accepting incoming tls reports for mail host, again
this is another place with a check on the policy domain...
2023-11-13 08:37:10 +01:00
Mechiel Lukkien
ae37b3ed4d
webadmin: don't fail on queue page when there are no transports and the queue is non-empty (typical case) 2023-11-12 22:04:48 +01:00
Mechiel Lukkien
2265769b8e
webadmin: allow accessing tls reports for mail host policy domain (tlsa)
instead of requiring policy domains to be configured recipient domains.
when accessing TLS reports, always do it under path #tlsrpt/reports, not under #domain/.../tlsrpt.
2023-11-12 14:58:46 +01:00
Mechiel Lukkien
6e6f716e91
for tlsrpt results (for outgoing reports), after a delivery attempt, only add a no-policy-found (mta-sts) result if there wasn't also a tlsa result for the same policy domain
to prevent confusing operators with both a tlsa result and no-policy-result.
2023-11-12 14:35:47 +01:00
Mechiel Lukkien
ff4237e88a
tlsrpt improvements
- accept incoming tls reports for the host, with policy-domain the host name.
  instead of not storing the domain because it is not a configured (recipient)
  domain.
- in tlsrpt summaries, rename domain to policy domain for clarity.
- in webadmin, fix html for table that lists tls reports in case of multiple
  policies and/or multiple failure details.
2023-11-12 14:19:12 +01:00
Mechiel Lukkien
a87ac99038
for cross-referencing code/rfc, also linkify the errata 2023-11-12 12:20:40 +01:00
Mechiel Lukkien
6a39f2cc54
add a suggestion for tlsrpt no-policy-found result 2023-11-12 12:08:33 +01:00
Mechiel Lukkien
f90b802d4b
webadmin: add column with found policy types to table listing the results 2023-11-12 12:00:21 +01:00
Mechiel Lukkien
a0bae5be55
for dns errors when looking up a tlsrpt record in the admin, don't make it a server error
but a user error. so we don't generate alerts through prometheus.
2023-11-12 11:53:39 +01:00
Mechiel Lukkien
448879126d
when listing incoming tls reports, don't show "(no policy)" for tlsa policies
that hint was meant for the mode of a sts policy. for tlsa (and
no-policy-found), there is not going to be a mode.
2023-11-12 11:50:48 +01:00
Mechiel Lukkien
1d02760f66
fix incoming deliveries to the host-tlsrpt address
it was returning "550 not accepting mail for this domain" due to a missing
check in the address/account lookup function.
2023-11-12 11:37:15 +01:00
Mechiel Lukkien
8f55d0ada6
fix build, missing api build 2023-11-11 20:06:42 +01:00
Mechiel Lukkien
50c9873c2b
cross-referencing code & rfc: todo comments and html pages
- the rfc links back to the code now show any "todo" text that appears in the
  code. helps when looking at an rfc to find any work that may need to be done.
- html pages can now be generated to view code and rfc's side by side. clicking
  on links in one side opens the linked document in the other page, at the
  correct line number.

i'll be publishing the "dev" html version (latest commit on main branch) on the
mox website, updated with each commit. the dev pages will also link to the
latest released version.
2023-11-11 20:01:32 +01:00
Mechiel Lukkien
dcee0345ef
nits, removing an old todo and a stray newline 2023-11-11 19:14:19 +01:00
Mechiel Lukkien
2073db194b
when checking domain settings, check that dmarc & tls reporting addresses are present if there is a record 2023-11-10 20:25:06 +01:00
Mechiel Lukkien
61bae75228
outgoing dmarc/tls reporting improvements
- dmarc reports: add a cid to the log line about one run of sending reports, and log line for each report
- in smtpclient, also handle tls errors from the first read after a handshake. we appear to sometimes get tls alerts about bad certificates on the first read.
- for messages to dmarc/tls reporting addresses that we think should/can not be processed as reports, add an X-Mox- header explaining the reason.
- tls reports: send report messages with From address of postmaster at an actually configured domain for the mail host. and only send reports when dkim signing is configured for that domain. the domain is also the submitter domain. the rfc seems to require dkim-signing with an exact match with the message from and submitter.
- for incoming tls reports, in the smtp server, we do allow a dkim-signature domain that is higher-level (up to publicsuffix) of the message from domain. so we are stricter in what we send than what we receive.
2023-11-10 19:34:00 +01:00
Mechiel Lukkien
b2af63b3ec
update latest prometheus client dependency and its dependencies 2023-11-09 21:43:47 +01:00
Mechiel Lukkien
8c99e54ec1
update dependencies 2023-11-09 21:19:51 +01:00
Mechiel Lukkien
42f6f9cbb3
change the message composing code from webmail over to message.Composer too 2023-11-09 21:15:27 +01:00
Mechiel Lukkien
96faf4b5ec
webmail: don't select requiretls when mta-sts and dane are both not implemented (even though requiretls extension is announced) 2023-11-09 19:57:53 +01:00
Mechiel Lukkien
deb16d23b8
simplify .gitignore, just one line for ignoring all the testdata/*/data directories 2023-11-09 19:47:33 +01:00
Mechiel Lukkien
893a6f8911
implement outgoing tls reports
we were already accepting, processing and displaying incoming tls reports. now
we start tracking TLS connection and security-policy-related errors for
outgoing message deliveries as well. we send reports once a day, to the
reporting addresses specified in TLSRPT records (rua) of a policy domain. these
reports are about MTA-STS policies and/or DANE policies, and about
STARTTLS-related failures.

sending reports is enabled by default, but can be disabled through setting
NoOutgoingTLSReports in mox.conf.

only at the end of the implementation process came the realization that the
TLSRPT policy domain for DANE (MX) hosts is separate from the TLSRPT policy
for the recipient domain, and that MTA-STS and DANE TLS/policy results are
typically delivered in separate reports. so MX hosts need their own TLSRPT
policies.

config for the per-host TLSRPT policy should be added to mox.conf for existing
installs, in field HostTLSRPT. it is automatically configured by quickstart for
new installs. with a HostTLSRPT config, the "dns records" and "dns check" admin
pages now suggest the per-host TLSRPT record. by creating that record, you're
requesting TLS reports about your MX host.

gathering all the TLS/policy results is somewhat tricky. the tentacles go
throughout the code. the positive result is that the TLS/policy-related code
had to be cleaned up a bit. for example, the smtpclient TLS modes now reflect
reality better, with independent settings about whether PKIX and/or DANE
verification has to be done, and/or whether verification errors have to be
ignored (e.g. for tls-required: no header). also, cached mtasts policies of
mode "none" are now cleaned up once the MTA-STS DNS record goes away.
2023-11-09 19:47:26 +01:00
Mechiel Lukkien
df18ca3c02
refactor sending dmarc reports for upcoming implementation for sending tls reports
this also has changes to make the dmarc report sending implementation more
similar to the tls reports implementation.

- factor out code to compose a dmarc report message to the message package
  (from dmarcdb for reports), it will be shared soon.
- spread emails with dmarc reports over 45 minutes (it runs hourly), with at
  most 5 mins in between reports. to prevent bursts of messages. properly abort
  all sending attempts at mox shutdown.
- add use of missing error details in an error path.
- fix dmarc report message subject header by adding missing <>'s around report-id.
- fix dmarc report attachment filename syntax by leaving "unique-id" out.
2023-11-09 17:26:19 +01:00
Mechiel Lukkien
2535f351ed
fix bug with concurrent math/rand.Rand.Read
firstly by using crypto/rand in those cases. and secondly by putting a lock
around the Read (though it isn't used at the moment).

found while working on implementing sending tls reports.
2023-11-09 17:17:26 +01:00
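
two hedged go sketches of the fixes mentioned: crypto/rand for the code paths that need concurrency safety, and a mutex around a shared math/rand source elsewhere (names are illustrative):

	package main

	import (
		cryptorand "crypto/rand"
		"fmt"
		mathrand "math/rand"
		"sync"
		"time"
	)

	// a *math/rand.Rand created with rand.New is not safe for concurrent use, so
	// concurrent Read calls can corrupt its state. either switch to crypto/rand,
	// or guard the shared source with a lock.
	var (
		mu  sync.Mutex
		rnd = mathrand.New(mathrand.NewSource(time.Now().UnixNano()))
	)

	// lockedRead guards the shared non-cryptographic source with a mutex.
	func lockedRead(buf []byte) {
		mu.Lock()
		defer mu.Unlock()
		rnd.Read(buf)
	}

	func main() {
		buf := make([]byte, 8)
		// crypto/rand.Read is safe for concurrent use and needs no locking.
		if _, err := cryptorand.Read(buf); err != nil {
			panic(err)
		}
		fmt.Printf("crypto/rand: %x\n", buf)

		lockedRead(buf)
		fmt.Printf("locked math/rand: %x\n", buf)
	}
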
Mechiel Lukkien
d02ac0cb86
webmail: fix received date shown on message
we were trying to offset the timezone, but that makes no sense: we already
created a date in the local timezone based on (milli)seconds passed. so we can
just use that date instead of calculating a wrong date.
2023-11-04 23:35:44 +01:00
Mechiel Lukkien
2abac1a911
for dmarc reporting, be more conservative with sending reports to junky senders, and format the textual dmarc report period in the message text in utc as claimed
before this change, a message in the rejects folder that was read and marked as
notjunk (e.g. automatically by webmail), could cause a dmarc report to be sent
for another junky message from the domain. we now require positive signals to
be for messages not in the rejects mailbox.

the text/plain body of a dmarc report contains the period, but it was in local
time while claiming to be in utc. make it utc, so we often get nicely rounded
whole 24h utc days.
2023-11-04 23:24:47 +01:00
Mechiel Lukkien
c955fadb6d
fix parsing dmarc reports that come with content-type application/octet-stream
by fixing a typo in the content-type...
and by recognizing the application/x-zip that is detected as content-type.

discovered when a dmarc report from aws ses wasn't processed.

it seems aws ses was sending a dmarc report because it received a dmarc report.
2023-11-04 13:22:30 +01:00
Mechiel Lukkien
3a7ed9738a
update to latest go.etcd.io/bbolt v1.3.8 2023-11-03 08:31:30 +01:00
Mechiel Lukkien
4510e0ce78
webmail: add Delivered-To to example settingsPut call 2023-11-02 21:56:59 +01:00
Mechiel Lukkien
0200e539a9
when message is delivered, save whether it is from a mailing list; in webmail, show if message was a forward or mailing list, and don't enable requiretls when sending to a list. 2023-11-02 20:03:47 +01:00
Mechiel Lukkien
481a25f294
improvements to outgoing dmarc reports and displaying evaluations
- more eagerly report about overrides, so domain owners can better tell that
  switching from p=none to p=reject will not cause trouble for these messages.
- report multiple reasons, e.g. mailing list and sampled out
- in dmarc analysis for rejects from first-time senders (possibly spammers),
  fix the conditional check on nonjunk messages.
- in evaluations view in admin, show unaligned spf pass in yellow too and a few
  more small tweaks.
2023-11-02 17:54:24 +01:00
Mechiel Lukkien
79e522887e
change error value "fatal io error" to just "io error"
"fatal" was meant as "we need fatal for the connection, it will be dropped".
but it sounds more serious, as if something needs to be fixed.

hopefully enough for issue #39 by ArnoSen
2023-11-02 15:56:01 +01:00
Mechiel Lukkien
38694d3928
Merge remote-tracking branch 'github.com/mattfbacon/mox/message-is-text' 2023-11-02 14:41:43 +01:00
Mechiel Lukkien
81057ee685
add option -initonly to "mox localserve", to only create config files and then quit
for issue #89 by naturalethic
2023-11-02 14:10:41 +01:00
Mechiel Lukkien
9896639ff9
for incoming smtp deliveries, track whether tls and requiretls were used, and display this in the webmail
we store the tls version used, and cipher suite. we don't currently show that
in the webmail.
2023-11-02 09:12:47 +01:00
Mechiel Lukkien
186538fb57
when composing a dsn, try harder to dkim-sign it, also with higher-level domain than full mail hostname
e.g. typical setup is a hostname mail.<domain>. and dsns can be sent from
postmaster@mail.<domain>. so it helps to look for dkim keys for <domain>, and
use them when signing. instead of looking for dkim keys for mail.<domain>,
which won't typically exist.  similar to recent commit that added outgoing
dmarc aggregate reports.
2023-11-02 09:12:47 +01:00
Mechiel Lukkien
f7686b7db8
webmail: show email address instead of display name of "from" header in message listing if display name contains chars from "<@>"
it could be an attempt to confuse the reader with an email address. a classic.
2023-11-02 09:12:47 +01:00
Mechiel Lukkien
725f030d3c
webmail: add clear marker between message header and body, so if html message tries to fake ui elements, it'll be noticed (hopefully) 2023-11-02 09:12:47 +01:00
Mechiel Lukkien
ef50f4abf0
refactor common pattern of close & remove temporary file into calling the new store.CloseRemoveTempFile 2023-11-02 09:12:46 +01:00
Mechiel Lukkien
b6897d1837
add note about adns library 2023-11-02 09:12:46 +01:00
Mechiel Lukkien
e7699708ef
implement outgoing dmarc aggregate reporting
in smtpserver, we store dmarc evaluations (under the right conditions).
in dmarcdb, we periodically (hourly) send dmarc reports if there are
evaluations. for failed deliveries, we deliver the dsn quietly to a submailbox
of the postmaster mailbox.

this is on by default, but can be disabled in mox.conf.
2023-11-02 09:12:30 +01:00
Matt Fellenz
3b6e1851cb
Treat messages as text 2023-11-01 14:17:02 -07:00
Mechiel Lukkien
d1e93020d8
give delivering to mx targets with underscores in name a chance of succeeding
the underscores aren't valid, but have been seen in the wild, so we have a
workaround for them. there are limitations, it won't work with idna domains.
and if the domain has other policies, like mta-sts, the mx host won't pass
either.

after report from richard g about delivery issue, thanks!
2023-10-25 13:01:11 +02:00
Mechiel Lukkien
682f8a0904
dkim selectors shouldn't be interpreted as idna
given they are not part of the domain name (to which idna applies).
only the part after _domainkey may be idna.
found after going through code after report about mx targets with underscores
from richard g.
2023-10-25 12:49:39 +02:00
Mechiel Lukkien
34f7e04474
update roadmap 2023-10-25 12:33:22 +02:00
Mechiel Lukkien
8a866a60dc
when expunging a message, keep its threadid
we will need it for jmap, which needs history for threads
2023-10-24 13:16:00 +02:00
Mechiel Lukkien
7b047ed28d
no need for absolute path for prometheus endpoint pointing to metrics 2023-10-24 13:11:04 +02:00
Mechiel Lukkien
a6d55b7e76
add metric for number of times we fallback to plaintext delivery 2023-10-24 13:09:48 +02:00
Mechiel Lukkien
f9eb18b6a8
for mox localserve, only require that incoming messages over smtp are parsable with pedantic mode 2023-10-24 13:03:50 +02:00
Mechiel Lukkien
5b4de0523d
ignore mox.exe, since we can now build for windows 2023-10-24 13:02:06 +02:00
Mechiel Lukkien
2f5d6069bf
implement "requiretls", rfc 8689
with requiretls, the tls verification mode/rules for email deliveries can be
changed by the sender/submitter. in two ways:

1. "requiretls" smtp extension to always enforce verified tls (with mta-sts or
dnssec+dane), along the entire delivery path until delivery into the final
destination mailbox (so entire transport is verified-tls-protected).

2. "tls-required: no" message header, to ignore any tls and tls verification
errors even if the recipient domain has a policy that requires tls verification
(mta-sts and/or dnssec+dane), allowing delivery of non-sensitive messages in
case of misconfiguration/interoperability issues (at least useful for sending
tls reports).

we enable requiretls by default (only when tls is active), for smtp and
submission. it can be disabled through the config.

for each delivery attempt, we now store (per recipient domain, in the account
of the sender) whether the smtp server supports starttls and requiretls. this
support is shown (after having sent a first message) in the webmail when
sending a message (the previous 3 bars under the address input field are now 5
bars, the first for starttls support, the last for requiretls support). when
all recipient domains for a message are known to implement requiretls,
requiretls is automatically selected for sending (instead of "default" tls
behaviour). users can also select the "fallback to insecure" to add the
"tls-required: no" header.

new metrics are added for insight into requiretls errors and (some, not yet
all) cases where tls-required-no ignored a tls/verification error.

the admin can change the requiretls status for messages in the queue. so with
default delivery attempts, when verified tls is required but failing, an admin
could potentially change the field to "tls-required: no"-behaviour.

messages received (over smtp) with the requiretls option get a comment added
to their Received header line, just before "id", after "with".
2023-10-24 10:10:46 +02:00
Moritz Poldrack
0e5e16b3d0
main: remove redundant equal function 2023-10-21 16:49:28 +02:00
Mechiel Lukkien
08995c7806
webmail: when composing a message, show security status in a bar below addressee input field
the bar is currently showing 3 properties:
1. mta-sts enforced;
2. mx lookup returned dnssec-signed response;
3. first delivery destination host has dane records

the colors are: red for not-implemented, green for implemented, gray for error,
nothing for unknown/irrelevant.

the plan is to implement "requiretls" soon and start caching per domain whether
delivery can be done with starttls and whether the domain supports requiretls.
and show that in two new parts of the bar.

thanks to damian poddebniak for pointing out that security indicators should
always be visible, not only for positive/negative result. otherwise users won't
notice their absence.
2023-10-15 15:40:13 +02:00
Mechiel Lukkien
4ab3e6bc9b
webmail: autoresize address input field in compose window
so full name/email address is visible.

using a hidden grid element that gets the same content as the input element.
from https://css-tricks.com/auto-growing-inputs-textareas/

a recent commit probably also made the compose window full-screen-width on
chrome; this restores the intended behaviour of a less wide default size.

if you add multiple address fields, the compose window will still grow. not
great, in the future, we should make the compose window resizable by dragging.
2023-10-15 10:53:57 +02:00
Mechiel Lukkien
101c2703d2
do not look up a cname after looking up the txt for mta-sts, and follow cnames for mocks
because the txt lookup would already follow cnames.
the additional cname lookup didn't hurt, it just didn't do anything.
i probably didn't realize that before looking deeper into dns.
2023-10-14 22:42:26 +02:00
Mechiel Lukkien
8ca198882e
security fix: use correct domain for mta-sts, that of the email address
the original next-hop domain. not anything after resolving cname's, because
then it takes just a single injected dns cname record to lead us to an
unrelated server (that we would verify, but it's the wrong server).

also don't fallback to just strict tls when something is wrong. we must use the
policy to check if an mx host is allowed. the whole idea is that unsigned dns
records cannot be trusted.

i noticed this while implementing dane.
2023-10-14 22:30:43 +02:00
Mechiel Lukkien
42d817ef3d
quick fix for making compose window resizable by expanding/shrinking when textarea is resized
the textarea is resizable (though it's not convenient to do in firefox which
only shows a dragcorner in the bottomright, usually located in the bottom
corner of the screen, so there is little space left to drag the corner; the
workaround is to move the window temporarily).
2023-10-14 21:02:54 +02:00
Mechiel Lukkien
56956c224b
webmail: when quoting text that switches unicode blocks (as highlighted), don't lose the switched text
by using a String object as the textarea child, instead of a regular js string
that would be unicode-block-switch-highlighted, which would cause it to be
split into parts, with odd or even parts added as span elements, which the
textarea would then ignore.
2023-10-14 14:47:24 +02:00
Mechiel Lukkien
a40f5a5eb3
webmail: recognize q/b-word-encoded filenames in attachments in messages
according to the rfc's (2231, and 2047), non-ascii filenames in content-type
and content-disposition headers should be encoded like this:

	Content-Type: text/plain; name*=utf-8''hi%E2%98%BA.txt
	Content-Disposition: attachment; filename*=utf-8''hi%E2%98%BA.txt

and that is what the Go standard library mime.ParseMediaType and
mime.FormatMediaType parse and generate.

this is what thunderbird sends:

	Content-Type: text/plain; charset=UTF-8; name="=?UTF-8?B?aGnimLoudHh0?="
	Content-Disposition: attachment; filename*=UTF-8''%68%69%E2%98%BA%2E%74%78%74

(thunderbird will also correctly split long filenames over multiple parameters,
named "filename*0*", "filename*1*", etc.)

this is what gmail sends:

	Content-Type: text/plain; charset="US-ASCII"; name="=?UTF-8?B?aGnimLoudHh0?="
	Content-Disposition: attachment; filename="=?UTF-8?B?aGnimLoudHh0?="

i cannot find where the q/b-word encoded values in "name" and "filename" are
allowed. until that time, we try parsing them unless in pedantic mode.

we didn't generate correctly encoded filenames yet, this commit also fixes that.

for issue #82 by mattfbacon, thanks for reporting!
2023-10-14 14:14:13 +02:00
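
an editorial go sketch of the server-side part of that parsing (the webmail's own code is typescript): mime.ParseMediaType already handles the rfc 2231 "filename*" form, and the q/b-word form is additionally tried with mime.WordDecoder (hypothetical helper, the pedantic-mode switch is left out):

	package main

	import (
		"fmt"
		"mime"
	)

	// attachmentFilename returns the filename from a Content-Disposition value.
	// the rfc 2231 "filename*=" form is handled by mime.ParseMediaType; when the
	// value still looks like an encoded-word (as thunderbird and gmail send it),
	// we additionally try q/b-word decoding, falling back to the raw value.
	func attachmentFilename(contentDisposition string) (string, error) {
		_, params, err := mime.ParseMediaType(contentDisposition)
		if err != nil {
			return "", err
		}
		name := params["filename"]
		if name == "" {
			name = params["name"] // content-type uses "name" instead
		}
		var dec mime.WordDecoder
		if s, err := dec.DecodeHeader(name); err == nil {
			name = s // keep the raw value if decoding fails
		}
		return name, nil
	}

	func main() {
		fmt.Println(attachmentFilename(`attachment; filename="=?UTF-8?B?aGnimLoudHh0?="`))
		fmt.Println(attachmentFilename(`attachment; filename*=UTF-8''%68%69%E2%98%BA%2E%74%78%74`))
	}
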
Mechiel Lukkien
3e53343d21
remove message during delivery when we encounter an error after having placed the message in the destination path
before, we would leave the file, but rollback the delivery. future deliveries
would attempt to deliver to the same path, but would fail because a file
already exists.

encountered during testing on windows, not during actual operation. though it
could in theory have happened.
2023-10-14 11:16:39 +02:00
Mechiel Lukkien
6e391c3be0
ensure there is a space between active requests mentioned in the status bar at the top 2023-10-14 11:13:26 +02:00
Mechiel Lukkien
28fae96a9b
make mox compile on windows, without "mox serve" but with working "mox localserve"
getting mox to compile required changing code in only a few places where
package "syscall" was used: for accessing file access times and for umask
handling. an open problem is how to start a process as an unprivileged user on
windows.  that's why "mox serve" isn't implemented yet. and just finding a way
to implement it now may not be good enough in the near future: we may want to
start using a more complete privilege separation approach, with a process
handling sensitive tasks (handling private keys, authentication), where we may
want to pass file descriptors between processes. how would that work on
windows?

anyway, getting mox to compile for windows doesn't mean it works properly on
windows. the largest issue: mox would normally open a file, rename or remove
it, and finally close it. this happens during message delivery. that doesn't
work on windows, the rename/remove would fail because the file is still open.
so this commit swaps many "remove" and "close" calls. renames are a longer
story: message delivery had two ways to deliver: with "consuming" the
(temporary) message file (which would rename it to its final destination), and
without consuming (by hardlinking the file, falling back to copying). the last
delivery to a recipient of a message (and the only one in the common case of a
single recipient) would consume the message, and the earlier recipients would
not.  during delivery, the already open message file was used, to parse the
message.  we still want to use that open message file, and the caller now stays
responsible for closing it, but we no longer try to rename (consume) the file.
we always hardlink (or copy) during delivery (this works on windows), and the
caller is responsible for closing and removing (in that order) the original
temporary file. this does cost one syscall more. but it makes the delivery code
(responsibilities) a bit simpler.

there is one more obvious issue: the file system path separator. mox already
used the "filepath" package to join paths in many places, but not everywhere.
and it still used strings with slashes for local file access. with this commit,
the code now uses filepath.FromSlash for path strings with slashes, uses
"filepath" in a few more places where it previously didn't. also switches from
"filepath" to regular "path" package when handling mailbox names in a few
places, because those always use forward slashes, regardless of local file
system conventions.  windows can handle forward slashes when opening files, so
test code that passes path strings with forward slashes straight to go stdlib
file i/o functions are left unchanged to reduce code churn. the regular
non-test code, or test code that uses path strings in places other than
standard i/o functions, does have the paths converted for consistent paths
(otherwise we would end up with paths with mixed forward/backward slashes in
log messages).

windows cannot dup a listening socket. for "mox localserve", it isn't
important, and we can work around the issue. the current approach for "mox
serve" (forking a process and passing file descriptors of listening sockets on
"privileged" ports) won't work on windows. perhaps it isn't needed on windows,
and any user can listen on "privileged" ports? that would be welcome.

on windows, os.Open cannot open a directory, so we cannot call Sync on it after
message delivery. a cursory internet search indicates that directories cannot
be synced on windows. the story is probably much more nuanced than that, with
long deep technical details/discussions/disagreement/confusion, like on unix.
for "mox localserve" we can get away with making syncdir a no-op.
2023-10-14 10:54:07 +02:00
Mechiel Lukkien
96774de8d6
add workaround for windows mail authentication in smtpserver 2023-10-13 21:35:03 +02:00
Mechiel Lukkien
8640fd8cff
webmail: top-post with no text selected and add "on ... wrote"-line, keep bottom-quoting with text selected
top-posting causes "On $datetime, $sender wrote:" above the quoted text to be
added (unless there was no Date header or valid address in a From header).

in the near future we should create settings, and add a setting for adding the
"on ... wrote"-line, ideally including a template.

for issue #83 by mattfbacon, thanks!
2023-10-13 19:28:04 +02:00
Mechiel Lukkien
7d28d80191
if requesting a tls certificate through acme fails, put any validation error messages provided by the acme server in the error message
so users can understand what is going on. e.g. a CAA record that doesn't allow
a CA to sign a certificate. previously, the error message would just be "no
viable challenge type found", which doesn't help the user.
2023-10-13 09:28:01 +02:00
Mechiel Lukkien
14d09bb308
format long multi-string dkim txt records for rsa 2048 as a multi-line record, enclosed in ()'s
more easily readable, though still long
2023-10-13 09:14:42 +02:00
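
purely for illustration (selector, domain and key material are placeholders,
not output copied from mox), such a record looks roughly like this in a zone
file, with the long value split over multiple strings:

	202310._domainkey.example.org.	TXT	(
		"v=DKIM1; k=rsa; "
		"p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA..."
		"...remainder of the base64 of the rsa-2048 public key..." )
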
Mechiel Lukkien
40040542f6
for generated dkim keys, use clearer file names
with ".rsa2048.privatekey.pkcs8.pem", instead of "rsakey.pkcs8.pem". "rsakey"
doesn't say if it is a public or private key.
2023-10-13 08:59:35 +02:00
Mechiel Lukkien
4e26fd13e2
when api docs cannot be loaded, say which 2023-10-13 08:52:06 +02:00
Mechiel Lukkien
67fe88f431
change the autodiscover SRV record to point to the mail server hostname directly, not to a cname
srv targets shouldn't be cname's. bind was warning about it.
2023-10-13 08:51:02 +02:00
Mechiel Lukkien
850f4444d4
when suggesting DNS records, leave "IN" out
people will either paste the records in their zone file. in that case, the
records will inherit "IN" from earlier records, and there will always be one
record. if anyone uses a different class, they're smart enough to know they need
to add IN manually.

plenty of people will add their records through some clunky web interface of
their dns operator. they probably won't even have the choice to set the class,
it'll always be IN.
2023-10-13 08:25:35 +02:00
Mechiel Lukkien
52e71167a9
rename rfc/index.md to txt, it isn't markdown 2023-10-12 23:15:54 +02:00
Mechiel Lukkien
a93dd348fe
webmail: ensure wrap of long header lines, instead of horizontal scrollbar in message header section 2023-10-12 22:08:13 +02:00
Mechiel Lukkien
8dacc31445
webmail: for tall images (high aspect ratio), don't let the image extend beyond the window height
apparently the flex parent and flex child with grow 1 is unbounded even with a parent height of 100%
2023-10-12 21:53:05 +02:00
Mechiel Lukkien
7dce883097
simplify dns.MockResolver, changing MockReq to just a string representing the request
similar to Authentic/Inauthentic
2023-10-12 16:07:53 +02:00
Mechiel Lukkien
c095f3f39c
in "mox import ..." help output, make it more clear what should be done to make mbox/maildir archives accessible to the mox process
for issue #79 reported by mattfbacon, thanks!
2023-10-12 15:50:43 +02:00
Mechiel Lukkien
daa908e9f4
implement dnssec-awareness throughout code, and dane for incoming/outgoing mail delivery
the vendored dns resolver code is a copy of the go stdlib dns resolver, with
awareness of the "authentic data" (i.e. dnssec secure) added, as well as support
for enhanced dns errors, and looking up tlsa records (for dane). ideally it
would be upstreamed, but the chances seem slim.

dnssec-awareness is added to all packages, e.g. spf, dkim, dmarc, iprev. their
dnssec status is added to the Received message headers for incoming email.

but the main reason to add dnssec was for implementing dane. with dane, the
verification of tls certificates can be done through certificates/public keys
published in dns (in the tlsa records). this only makes sense (is trustworthy)
if those dns records can be verified to be authentic.

mox now applies dane to delivering messages over smtp. mox already implemented
mta-sts for webpki/pkix-verification of certificates against the (large) pool
of CA's, and still enforces those policies when present. but it now also checks
for dane records, and will verify those if present. if dane and mta-sts are
both absent, the regular opportunistic tls with starttls is still done. and the
fallback to plaintext is also still done.

mox also makes it easy to setup dane for incoming deliveries, so other servers
can deliver with dane tls certificate verification. the quickstart now
generates private keys that are used when requesting certificates with acme.
the private keys are pre-generated because they must be static and known during
setup, because their public keys must be published in tlsa records in dns.
autocert would generate private keys on its own, so had to be forked to add the
option to provide the private key when requesting a new certificate. hopefully
upstream will accept the change and we can drop the fork.

with this change, using the quickstart to setup a new mox instance, the checks
at internet.nl result in a 100% score, provided the domain is dnssec-signed and
the network doesn't have any issues.
2023-10-10 12:09:35 +02:00
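
as a rough sketch of what that verification amounts to for the record type the
quickstart publishes (usage DANE-EE, selector SPKI, matching SHA2-256; the
helper name is made up, mox's real code handles all usage/selector/match
types):

	package dane

	import (
		"bytes"
		"crypto/sha256"
		"crypto/x509"
	)

	// tlsaMatchEESPKI reports whether the remote's leaf certificate matches a
	// "TLSA 3 1 1" record: the sha256 of the certificate's SubjectPublicKeyInfo
	// must equal the record data, e.g. published at _25._tcp.mail.example.org.
	func tlsaMatchEESPKI(leaf *x509.Certificate, recordData []byte) bool {
		sum := sha256.Sum256(leaf.RawSubjectPublicKeyInfo)
		return bytes.Equal(sum[:], recordData)
	}
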
Mechiel Lukkien
c4324fdaa1
fix bug in fixmsgsize that makes it stop after the first batch of 10k messages
for issue #71 reported by naturalethic, thanks!

users upgrading from v0.0.6 to v0.0.7 could run into this. the release notes
have been updated with a link to the issue. the issue will stay open until at
least the next release.
2023-10-05 22:59:53 +02:00
tkivisik
3aa5026e11
fix typo in README.md (#72) 2023-10-04 07:39:44 +02:00
Mechiel Lukkien
91140da3a7
when logging about threading operations, include info about account
in verifydata, when warning about missing threading, print the db file.
otherwise it isn't clear which account this is about

when upgrading account thread storage, pass the logger that has the account
name.
2023-09-24 13:36:55 +02:00
Mechiel Lukkien
f2de89e365
shuffle sections in readme 2023-09-24 12:42:19 +02:00
Mechiel Lukkien
024c13c551
tweak readme 2023-09-24 12:34:46 +02:00
Mechiel Lukkien
55febe304e
imapserver: always send special-use attributes for mailboxes
even if not asked for with the "return (special-use)" extended list parameter.
macos x mail does not request the special-use flags, but will use them when present.

for issue #66, thanks x8x for providing the imap protocol transcript that
showed how it is done!
2023-09-23 21:00:26 +02:00
Mechiel Lukkien
f19f16bd8b
webmail: when scrolling down, don't send another parsed message that will cause one of the new messages to be selected (unexpected jump in the ui) 2023-09-23 18:36:24 +02:00
Mechiel Lukkien
d19c75559b
include all email addresses of an account in the mobileconfig profile for apple devices
after feedback from x8x, pointing out the support, thanks!

for issue #65
2023-09-23 17:50:32 +02:00
Mechiel Lukkien
f1f3135135
change "mox setaccountpassword" to use an account name as parameter, not email address
because with the name you would expect an account name.
and the email-resolving behaviour is surprising: with wildcard addresses you
can use any address, including a typo. you would change the password of the
address with the wildcard, without any warning. accounts are more precise and
less error-prone.

for issue #68 by x8x
2023-09-23 17:18:49 +02:00
Mechiel Lukkien
8c2814df89
imapserver: fix returning special-use mailbox "\Drafts" instead of "\Draft"
related to issue #66 by x8x, though this doesn't fix that (macos mail doesn't
yet request the special-use flags).
2023-09-23 14:50:02 +02:00
Mechiel Lukkien
0707f53361
in "mox uidbumpvalidity", bump to the next uidvalidity, otherwise we likely leave the uidvalidities in inconsistent state
the inconsistent state isn't really harmful, but we don't want inconsistencies.

pointed out in issue #61 by x8x
2023-09-23 12:15:13 +02:00
Mechiel Lukkien
85cef2a06c
when warning about not being able to hardlink during a backup, make it clear we continue with regular copying and that there won't be another warning
for issue #61 by x8x
2023-09-23 12:09:20 +02:00
Mechiel Lukkien
2b97c21f99
make setting up apple mail clients easier by providing .mobileconfig device management profiles
including showing a qr code to easily get the file on iphones.
the profile is currently in the "account" page.

idea by x8x in issue #65
2023-09-23 12:08:35 +02:00
Mechiel Lukkien
a0f3856e40
when moving a message out of a Rejects mailbox, mark the message as "not seen" so it stands out in the destination mailbox (e.g. inbox)
we set the flag both for move in imap and in webmail.

this also ensures the "MailboxDestinedID", used for per-mailbox reputation
analysis, is set in more reject-situations. before this change, some rejects
(such as based on DMARC reject) wouldn't result in reputation being used after
the message had been moved out of the rejects mailbox.

in the future, we need more tests for scenario's like this...

for issue #63 reported by x8x
may also help with issue #64
2023-09-22 15:53:05 +02:00
Mechiel Lukkien
2ec8c79e10
update roadmap with http auth other than http basic, and add per-domain configs
these were on the list, not as high up as before, but moved up with requests in
issues #58, #67, #62.
2023-09-22 14:30:34 +02:00
Mechiel Lukkien
3353062dbe
webmail: when moving out all messages in a thread (none remaining in view), don't cause js error but select next message
removing an item from the selected list should be done regardless of focus,
i.e. the code snippet shouldn't have been behind the "if (focus...)" condition.
2023-09-22 14:25:25 +02:00
Mechiel Lukkien
be5f804d5b
webmail: use the "threads: on" mode by default
with "threads: unread", there is a bit too much change between different times
of opening the mailbox. perhaps the mode wasn't a good idea...
2023-09-22 14:12:46 +02:00
Mechiel Lukkien
89c543f662
if there is a special-use junk flag on a mailbox, don't also look at the AutomaticJunkFlags option
the special-use flag should take precedence.
2023-09-22 10:51:42 +02:00
Mechiel Lukkien
6315d57166
ignore new pprof files from test-upgrade.sh after previous commit 2023-09-21 18:42:01 +02:00
Mechiel Lukkien
4de0af4fa5
add another automated upgrade test
for the path from v0.0.5 with lots of messages straight to the latest
development version. this can do multiple database changes in one go, so it's a
bit different than for installs where an admin has upgraded each version when
it was released.
2023-09-21 16:09:40 +02:00
Mechiel Lukkien
d618cbf918
mention funding through nlnet/eu ngi0 entrust
small line in the readme, but this means a lot for the project: continued
development for a year. expect lots of improvements and features.
2023-09-21 16:08:43 +02:00
Mechiel Lukkien
e6d8049548
webmail: in attachment viewer, for text/* content-type, show the text immediately too
instead of claiming it may be a binary file and showing a button to display the contents.
2023-09-21 15:29:38 +02:00
Mechiel Lukkien
2e16d8025d
when moving message to mailbox with special-use flag "Junk", mark the message as junk too, for retraining
i had been using the AutomaticJunkFlags option, so hadn't noticed the special use flag wasn't used.
2023-09-21 15:20:24 +02:00
Mechiel Lukkien
79774c15ec
add todo's about mime header parameter decoding
not sure what the correct approach is, would need to analyze email archive for practices.
2023-09-21 15:18:25 +02:00
Mechiel Lukkien
f87f286b80
webmail: dragging works on selected items, so tell user they cannot drag if they try to drag a non-selected message 2023-09-21 14:39:40 +02:00
Mechiel Lukkien
20f11409b6
webmail: open the first unread message of a thread by default when opening a mailbox with threading enabled and the most recent message is in a thread 2023-09-21 12:56:51 +02:00
Mechiel Lukkien
fc6e61e9a5
webmail: add arrow left/right to collapse/expand threads 2023-09-21 11:51:38 +02:00
Mechiel Lukkien
9bc860e207
webmail: make double click on mailbox expand/collapse, and make mailbox text unselectable (so the double click doesn't also select text) 2023-09-21 11:40:22 +02:00
Mechiel Lukkien
941a2311f0
webmail: try a bit harder not to get mailbox names or search queries in the potential stacktrace
we want the user to submit the stack trace. the user can still edit before
submitting, but it won't look attractive to submit stacktraces with info that
shouldn't be there. not great that firefox is including too much info and the
effort we need to make to get it out again, but well.
2023-09-21 11:31:07 +02:00
Mechiel Lukkien
d07c871f5c
webmail: better recognize URLs in text wrapped in () or <> and followed by punctuation
e.g. "text... (https://localhost)." would keep ) as part of the url before, but not anymore.
2023-09-21 11:09:27 +02:00
Mechiel Lukkien
d649cf7dc2
quickstart: recognize likely NAT setup and set up host IPs in "NATIPs" field in the public listener
for issue #59 by pmarini, thanks!
2023-09-21 10:55:15 +02:00
Mechiel Lukkien
cde54442d2
webmail: in status line about (re|dis)connecting, make error message more readable
with a space after the line, so a next line doesn't get concatenated, and starting with a capital.
2023-09-21 09:07:49 +02:00
Mechiel Lukkien
9534e464f9
add comment about the sconf config file format at the top of the config files
hopefully this helps admins editing the file and prevents mistakes with the config files.

for issue #56 by kikoreis, thanks!
2023-09-21 08:59:10 +02:00
Mechiel Lukkien
0d8603f9e1
update latest deps 2023-09-20 16:52:18 +02:00
Mechiel Lukkien
ca5ef645f3
rename Account.Deliver to Account.DeliverDestination
the name was too generic compared with the other Deliver functions
2023-09-15 17:51:28 +02:00
Mechiel Lukkien
3620d6f05e
initialize metric mox_panic_total with 0, so the alerting rule also catches the first panic for a label
increase() and rate() don't seem to assume a previous value of 0 when a vector
gets a first value for a label. you would think that an increase() on a
first-value mox_panic_total{"..."}=1 would return 1, and similar for rate(), but
that doesn't appear to be the behaviour. so we just explicitly initialize the
count to 0 for each possible label value. mox has more vector metrics, but
panics feels like the most important, and it's too much code to initialize them
all, for all combinations of label values. there is probably a better way that
fixes this for all cases...
2023-09-15 16:47:17 +02:00
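
a minimal sketch of the idea (the label name and values are illustrative, not
mox's actual metric labels):

	package metrics

	import (
		"github.com/prometheus/client_golang/prometheus"
		"github.com/prometheus/client_golang/prometheus/promauto"
	)

	var panicCounter = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "mox_panic_total",
		Help: "Unhandled panics, per package.",
	}, []string{"pkg"})

	func init() {
		// Touch each known label value once so the counter is exported as 0 and
		// increase()/rate() have a base sample to work from.
		for _, pkg := range []string{"smtpserver", "imapserver", "queue"} {
			panicCounter.WithLabelValues(pkg)
		}
	}
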
Mechiel Lukkien
af71e9855b
add package-level comments for webadmin and webaccount 2023-09-15 16:01:23 +02:00
Mechiel Lukkien
bff0131164
webmail: new shortcut "T" for showing html version of email, and t for text version
shortcut X used to be "show html version", but with threading support became
"toggle collapse", so there was a clash.
2023-09-15 15:51:59 +02:00
Mechiel Lukkien
3fb41ff073
implement message threading in backend and webmail
we match messages to their parents based on the "references" and "in-reply-to"
headers (requiring the same base subject), and in absence of those headers we
also match by base subject only (against messages received at most 4 weeks ago).

we store a threadid with messages. all messages in a thread have the same
threadid.  messages also have a "thread parent ids", which holds all id's of
parent messages up to the thread root.  then there is "thread missing link",
which is set when a referenced immediate parent wasn't found (but possibly
earlier ancestors can still be found and will be in thread parent ids).

threads can be muted: newly delivered messages are automatically marked as
read/seen.  threads can be marked as collapsed: if set, the webmail collapses
the thread to a single item in the basic threading view (default is to expand
threads).  the muted and collapsed fields are copied from their parent on
message delivery.

the threading is implemented in the webmail. the non-threading mode still works
as before. the new default threading mode "unread" automatically expands only
the threads with at least one unread (not seen) message. the basic threading
mode "on" expands all threads except when explicitly collapsed (as saved in the
thread collapsed field). new shortcuts for thread navigation/interaction have
been added, e.g. go to previous/next thread root, toggle collapse/expand of
thread (or double click), toggle mute of thread. some previous shortcuts have
changed, see the help for details.

the message threading is added with an explicit account upgrade step,
automatically started when an account is opened. the upgrade is done in the
background because it will take too long for large mailboxes to block account
operations. the upgrade takes two steps: 1. updating all message records in the
database to add a normalized message-id and thread base subject (with "re:",
"fwd:" and several other schemes stripped). 2. going through all messages in
the database again, reading the "references" and "in-reply-to" headers from
disk, and matching against their parents. this second step is also done at the
end of each import of mbox/maildir mailboxes. new deliveries are matched
immediately against other existing messages, currently no attempt is made to
rematch previously delivered messages (which could be useful for related
messages being delivered out of order).

the threading is not yet exposed over imap.
2023-09-13 15:44:57 +02:00
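
a rough sketch of the parent-matching step (types and names are illustrative,
not mox's actual code):

	package thread

	// Msg is a trimmed-down message record for this sketch.
	type Msg struct {
		ID          int64
		ThreadID    int64
		BaseSubject string // subject with "re:", "fwd:" and similar prefixes stripped
	}

	// findParent walks the referenced message-ids from most recent to oldest and
	// returns the closest ancestor with the same base subject. missingLink is true
	// when that ancestor is not the immediate parent (the last reference).
	func findParent(byMessageID map[string]Msg, refs []string, baseSubject string) (parent Msg, found, missingLink bool) {
		for i := len(refs) - 1; i >= 0; i-- {
			if p, ok := byMessageID[refs[i]]; ok && p.BaseSubject == baseSubject {
				return p, true, i != len(refs)-1
			}
		}
		return Msg{}, false, false
	}
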
Mechiel Lukkien
b754b5f9ac
fix flushing of transparently compressed gzip output
this is a problem for connections like SSE, that only send data on events.
those events would stay in the gzip buffer until lots more data was written.

bug because of automatically typing "if err != nil"...

found while testing the maildir/mbox web-based import while working on message
threading support. the import gets progress SSE events that were now hanging.
2023-09-12 21:22:08 +02:00
Mechiel Lukkien
6f1e38f2ce
add flag to mox to store execution trace, similar to cpu/memory profiling
useful for performance testing
2023-09-12 14:43:52 +02:00
Mechiel Lukkien
4a4ccb83a3
when making a message preview, also recognize []-enclosed "horizontal ellipsis" unicode character as a snip 2023-09-11 14:41:50 +02:00
Mechiel Lukkien
fc7b0cc71e
fix parsing List-Post header in webmail 2023-09-11 11:55:28 +02:00
Mechiel Lukkien
f6d03a0eab
track more unexpected panics in metrics 2023-09-11 11:43:49 +02:00
Mechiel Lukkien
a5006a9090
fix not adding duplicate domains to the list of "verified dkim domains" for incoming messages 2023-09-11 11:37:45 +02:00
Mechiel Lukkien
cb1b133e28
add more rfc's, for jmap, caldav, carddav, lemonade profile
being on the list does not mean it is implemented.
2023-09-11 11:26:40 +02:00
Mechiel Lukkien
a6ae87d7ac
webmail: fix showing attachments that are text/plain and have content-disposition: attachment
they were not added to the list of attachments when sending the message to the
webmail frontend. they were shown on the "open message in new tab" page.
2023-09-03 15:20:56 +02:00
Mechiel Lukkien
4283ceecfc
fix serving static webmail files in development mode
due to a missing return, the content was served again.
this path doesn't happen on release binaries, only during local development,
where there is a local file that can be served.
2023-09-03 15:17:09 +02:00
Mechiel Lukkien
165639cb38
smtpserver: in helo/ehlo for submission don't fail on bad domain/ip address
for submission, we don't care about the value. users typically won't be able to
easily fix the errors (of their mail client software). so we just ignore the
domain/ip address, unless in pedantic mode.

this also parses "additional information after literal addresses" more
strictly/correctly.

for issue #55 by gimpf, thanks for the report!
2023-08-25 15:29:02 +02:00
Mechiel Lukkien
f4c20673ff
don't generate duplicate spf record if hostname is equal to domain name, e.g. postmaster@mail.domain.
the assumption has been that the hostname is something like mail.<domain>, when
setting up mox with the quickstart for user@<domain>. but users can use the
quickstart for postmaster@mail.<domain> as well.

for issue #46 by x8x, thanks for reporting!
2023-08-25 14:32:28 +02:00
Mechiel Lukkien
61a5eb61a4
remove needless fmt.Sprintf
by staticcheck
2023-08-23 16:27:02 +02:00
Mechiel Lukkien
f029db3f47
imapserver bugfix: fix expunging for messages marked junk/nonjunk
such messages would be marked expunged in the database, then the junkfilter
would be retrained for the removal of the message. but during retraining, the
expunged flag would be cleared again. the on-disk message file would still be
removed. so when opening the mailbox, the message would appear to still exist,
but cannot be retrieved from disk.

if you run "mox fixmsgsize", and you get warnings about missing message files,
you could create empty files (with "touch"), run "mox fixmsgsize" again,
followed by "mox recalculatemailboxcounts <affectedaccount>" and run "mox
bumpuidvalidity <affectedaccount>".

"mox backup" would probably also complain, as would "mox verifydata".

this may have caused the "wrong mailbox counts" error i got from "mox
verifydata" on a backup.
2023-08-23 16:20:06 +02:00
Mechiel Lukkien
da9f1d9d0d
in admin pages, make the literal instruction text on the dnscheck page visible, and set a max-width for easier readability 2023-08-23 15:10:02 +02:00
Mechiel Lukkien
b3dd4a55c3
fix a spello, and reword so misspell doesn't complain about it 2023-08-23 14:59:43 +02:00
Mechiel Lukkien
affb057a0c
webmail: fix case where tree of mailboxes wasn't displayed properly
for example, when these mailboxes existed: "a", "a.b", "a/b", then "a.b" (.
before / in ascii) prevented "a/b" from being displayed in the tree below "a".
2023-08-23 14:57:05 +02:00
Mechiel Lukkien
aebfd78a9f
implement accepting dmarc & tls reports for other domains
to accept reports for another domain, first add that domain to the config,
leaving all options empty except DMARC/TLSRPT in which you configure a Domain.

the suggested DNS DMARC/TLSRPT records will show the email address with
configured domain. for DMARC, the dnscheck functionality will verify that the
destination domain has opted in to receiving reports.

there is a new command-line subcommand "mox dmarc checkreportaddrs" that
verifies if dmarc reporting destination addresses have opted in to receiving
reports.

this also changes the suggested dns records (in quickstart, and through admin
pages and cli subcommand) to take into account whether DMARC and TLSRPT are
configured, and with which localpart/domain (previously it always printed
records as if reporting was enabled for the domain). and when generating the
suggested DNS records, the dmarc.Record and tlsrpt.Record code is used, with
proper uri-escaping.
2023-08-23 14:27:21 +02:00
Mechiel Lukkien
9e248860ee
implement transparent gzip compression in the webserver
we only compress if applicable (content-type indicates likely compressible,
the client supports it, and the response doesn't already have a content-encoding).

for internal handlers, we always enable compression.  for reverse proxied and
static files, compression must be enabled per handler.

for internal & reverse proxy handlers, we do streaming compression at
"bestspeed" quality (probably level 1).

for static files, we have a cache based on mtime with fixed max size, where we
evict based on least recently used. we compress with the default level (more
cpu, better ratio).
2023-08-21 21:52:35 +02:00
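
a simplified sketch of the streaming variant (it leaves out the content-type
check and the static-file cache described above):

	package webserver

	import (
		"compress/gzip"
		"net/http"
		"strings"
	)

	type gzipResponseWriter struct {
		http.ResponseWriter
		gz *gzip.Writer
	}

	func (w gzipResponseWriter) Write(p []byte) (int, error) { return w.gz.Write(p) }

	// gzipHandler compresses responses at BestSpeed (level 1) when the client
	// accepts gzip.
	func gzipHandler(h http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
				h.ServeHTTP(w, r)
				return
			}
			gz, err := gzip.NewWriterLevel(w, gzip.BestSpeed)
			if err != nil {
				h.ServeHTTP(w, r)
				return
			}
			defer gz.Close()
			w.Header().Set("Content-Encoding", "gzip")
			h.ServeHTTP(gzipResponseWriter{w, gz}, r)
		})
	}
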
Mechiel Lukkien
4c72184b44
update link to docker image
user was being redirected to the new url
2023-08-20 18:45:19 +02:00
Mechiel Lukkien
b43529a2e9
sendmail: bugfix: set remote hostname to verify for tls connections
due to a logic bug we weren't setting it, and tls connections would fail with a
warning that either the remote hostname must be set or insecurityskipverify
must be set.
2023-08-20 18:26:20 +02:00
Mechiel Lukkien
0b9475271c
add possible future todo for working around ios messages with wrong q-encoded headers 2023-08-16 16:22:00 +02:00
Mechiel Lukkien
80547df6ee
webmail: don't have two spaces between header and address(es) (e.g. for From/To)
because outlook.com will reformat the message and then fail to verify it.
proton.me also reformats and invalidates the dkim signature, but seemingly
after it verifies the dkim signature.
2023-08-16 15:22:38 +02:00
Mechiel Lukkien
1ccc5d0177
fix message size in a message in gentestdata
and work around the message in test-upgrade.sh.
and add subcommand to open an account, triggering data upgrades.
2023-08-16 14:36:17 +02:00
Mechiel Lukkien
ddf3cb3653
mention there are now webmail screenshots, and small release process tweaks 2023-08-16 10:16:48 +02:00
Mechiel Lukkien
9f46879377
webmail: correct label for Subject in search form 2023-08-15 13:03:02 +02:00
Mechiel Lukkien
aed23d900a
update dependencies 2023-08-15 10:58:01 +02:00
Mechiel Lukkien
02a03710dc
don't try to (non-recursively) remove directories from the data tmp dir
mox only creates files there. directories could be a backup that is being
transferred elsewhere.
2023-08-15 09:51:52 +02:00
Mechiel Lukkien
fdbbfb765b
point users to spamhaus and spamcop pages and terms of use 2023-08-15 09:48:53 +02:00
Mechiel Lukkien
983002b074
with strict message parsing, don't allow lines longer than 1000 bytes 2023-08-15 09:21:36 +02:00
Mechiel Lukkien
34c2dcd49d
add strict mode when parsing messages, typically enabled for incoming special-use messages like tls/dmarc reports, subjectpass emails
and pass a logger to the message parser, so problems with message parsing get
the cid logged.
2023-08-15 08:25:56 +02:00
Mechiel Lukkien
f5f953b3ab
handle parsing message header without header/body separator
the commit before the previous added tests with a message with only 1 header
line. it's a valid message, but Go's mail.ReadMessage doesn't handle it with
go1.20 and earlier. the automated "test with previous go release" caught it.
work around it by adding the expected but absent \r\n to the parse function.
2023-08-14 15:40:27 +02:00
Mechiel Lukkien
f96310fdd5
fix checking for tls certificates, and the quickstart with the -existing-webserver flag
some time ago, the flag to ParseConfig() to do or skip checking the tls
keys/certs was inverted, but it looks like i didn't change the call sites... so
during "mox config test", and after a regular "mox quickstart" there was no
check for the tls keys/certs, and during "mox quickstart -existing-webserver"
there was a check where there shouldn't be. this made using -existing-webserver
impossible.

this became clear with the question by morki in issue #5.
2023-08-14 15:01:17 +02:00
Mechiel Lukkien
48eb530b1f
improve message parsing: allow bare carriage return (unless in pedantic mode), allow empty header, and no longer treat a message with only headers as a message with only a body 2023-08-11 14:07:49 +02:00
Mechiel Lukkien
79d06184ab
fix flaky test, event doesn't have to be set 2023-08-11 10:46:22 +02:00
Mechiel Lukkien
55d05c6bea
replace listener config option IPsNATed with NATIPs, and let autotls check NATIPs
NATIPs lists the public IPs, so we can still do the DNS checks on them. with
IPsNATed, we disabled the checks.

based on feedback by kikoreis in issue #52
2023-08-11 10:13:17 +02:00
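
roughly, in the listener config (addresses are placeholders and the exact
field layout should be checked against the config file documentation), where
10.0.0.10 is the internal address the server binds to and 198.51.100.10 is the
public address used for the dns/autotls checks:

	Listeners:
		public:
			IPs:
				- 10.0.0.10
			NATIPs:
				- 198.51.100.10
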
Mechiel Lukkien
d7df70acd8
webmail: don't lose display of additional headers when a flag/keyword changes (e.g. marked as read) 2023-08-11 08:38:57 +02:00
Mechiel Lukkien
383eb483df
webmail: for html-only messages, also show the "show html with external resources" button 2023-08-10 14:55:30 +02:00
Mechiel Lukkien
a4c6fe815f
list some maintenance commands that were previously unlisted
we refer to these commands in output of "mox verifydata", so they should be
findable other than through the code...
2023-08-10 12:29:46 +02:00
Mechiel Lukkien
7cceb3d834
add comment about not verifying Sender for submissions 2023-08-10 12:18:05 +02:00
Mechiel Lukkien
6b68920a3a
Go's LookupAddr will return non-absolute names, seemingly for single-label names from /etc/hosts, turn them into absolute names so our verifying forward lookups can succeed 2023-08-10 11:52:35 +02:00
Mechiel Lukkien
a30d8c1378
for localserve, don't special-case smtp submit
the recent webmail addition added localserve local delivery in queue.Add, so we
just use that for smtpserver too.

and don't drop incoming smtp deliveries, but deliver them as normal.
2023-08-10 11:28:57 +02:00
Mechiel Lukkien
ce91b7d23e
update roadmap 2023-08-10 11:05:38 +02:00
Mechiel Lukkien
0434e49c3a
webmail: while attachment viewer is open, don't handle global keyboard shortcuts (like search, going to inbox)
feedback from jonathan, thanks!
2023-08-10 11:02:13 +02:00
Mechiel Lukkien
c24bb063e5
webmail tweaks
- padding on small attachment download button.
- don't remember "show html" but always display text first.
- propagate modseq to message when flags/keywords change, so "show internals" shows the update.
2023-08-10 10:56:04 +02:00
Mechiel Lukkien
f48a53726e
when clearing search, open inbox
feedback from jonathan, thanks!
2023-08-10 10:42:54 +02:00
Mechiel Lukkien
038b478d16
listen/bind in deterministic order for consistent error messages, and warn if quickstart cannot find public ip's
without public ip's, the generated mox config will try to listen on 0.0.0.0 and
::, but because there is already a listener for 127.0.0.1:80 (and possibly
others), a bind for 0.0.0.0:80 will fail. explicit public ip's are needed.

the public http listener is useful for ACME validation over http.

for issue #52
2023-08-10 10:29:06 +02:00
Mechiel Lukkien
01bcd98a42
add flag to ruleset that indicates a message is forwarded, slightly modifying how junk analysis is done
part of PR #50 by bobobo1618
2023-08-09 22:31:37 +02:00
Mechiel Lukkien
9c31789c56
add option to ruleset to accept incoming spammy messages to a configured mailbox
this is based on @bobobo1618's PR #50. bobobo1618 had the right idea, i tried
including an "is forwarded email" configuration option but that indeed became
too tightly coupled. the "is forwarded" option is still planned, but it is
separate from the "accept rejects to mailbox" config option, because one could
still want to push back on forwarded spam messages.

we do an actual accept, delivering to a configured mailbox, instead of storing
to the rejects mailbox where messages can automatically be removed from.  one
of the goals of mox is to not pretend to accept email while actually junking it.
users can still configure delivery to a junk folder (as was already possible),
but those messages aren't deleted automatically. there is still an X-Mox-Reason header in the
message, and a log line about accepting the reject, but otherwise it is
registered and treated as an (smtp) accept.

the ruleset mailbox is still required to keep that explicit. users can specify
Inbox again.

hope this is good enough for PR #50, otherwise we'll change it.
2023-08-09 22:25:10 +02:00
Mechiel Lukkien
383fe4f53a
explicitly store in a Message whether it was delivered to the rejects mailbox
soon, we can have multiple rejects mailboxes.  and checking against the
configured rejects mailbox name wasn't foolproof to begin with, because it may
have changed between delivery to the rejects mailbox and the message being
moved.

after upgrading, messages currently in rejects mailboxes don't have IsReject
set, so they don't get the special rejects treatment when being moved. they are
removed from the rejects mailbox after some time though, and newly added
rejects will be treated correctly. so this means some existing messages wrongly
delivered to the rejects mailbox, and moved out, aren't used (for a positive
signal) for future deliveries.  saves a bit of complexity in the
implementation.  i think the tradeoff is worth it.

related to discussion in issue #50
2023-08-09 16:52:24 +02:00
Mechiel Lukkien
0fc59af9a8
add Deliver-To header for delivered messages
for (experimental) rfc 9228
2023-08-09 10:20:45 +02:00
Mechiel Lukkien
20ebdae8ea
in webmail, automatically mark message as nonjunk when open for 5 seconds, and prevent extraneous newlines when composing a reply to selected text 2023-08-09 09:45:54 +02:00
Mechiel Lukkien
34ede1075d
remove last remnants of treating a mailbox named "Sent" specially, in favor of special-use mailbox flags
a few places still looked at the name "Sent". but since we have special-use
flags, we should always look at those. this also changes the config so admins
can specify different names for the special-use mailboxes to create for new
accounts, e.g. in a different language. the old config option is still
understood, just deprecated.
2023-08-09 09:31:23 +02:00
Mechiel Lukkien
19b819d222
in smtpserver, don't put unrecognized smtp commands in prometheus metrics
can blow up prometheus storage.
2023-08-09 08:12:59 +02:00
Mechiel Lukkien
f5af258075
in account & admin web api's, differentiate between server errors and user errors, and add a prometheus monitoring rule for server errors 2023-08-09 08:02:58 +02:00
Mechiel Lukkien
8c3c12d96a
add message size consistency check
the bulk of a message is stored on disk. a message prefix is stored in the
database (for prefixed headers like "Received:"). this adds a check to ensure
Size = prefix length + on-disk file size.

verifydata also checks for this now.

and one older and one new (since yesterday) bug was found. the first when
appending a message without a header/body section (uncommon). the second when
sending messages from webmail with localserve (uncommon).
2023-08-08 22:10:53 +02:00
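
the invariant, as a small sketch (field and helper names are illustrative):

	package store

	import (
		"fmt"
		"os"
	)

	// Msg is a trimmed-down message record for this sketch: the first part of a
	// message (e.g. prepended headers) lives in the database, the rest on disk.
	type Msg struct {
		ID        int64
		Size      int64
		MsgPrefix []byte
		Path      string
	}

	// checkSize verifies Size == len(prefix) + on-disk file size.
	func checkSize(m Msg) error {
		fi, err := os.Stat(m.Path)
		if err != nil {
			return err
		}
		if m.Size != int64(len(m.MsgPrefix))+fi.Size() {
			return fmt.Errorf("message %d: size %d != prefix %d + file %d", m.ID, m.Size, len(m.MsgPrefix), fi.Size())
		}
		return nil
	}
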
Mechiel Lukkien
49cf16d3f2
fix race in test setup/teardown
not easily triggered, but it happened just now on a build server.
2023-08-07 23:14:31 +02:00
Mechiel Lukkien
849b4ec9e9
add webmail
it was far down on the roadmap, but implemented earlier, because it's
interesting, and to help prepare for a jmap implementation. for jmap we need to
implement more client-like functionality than with just imap. internal data
structures need to change. jmap has lots of other requirements, so it's already
a big project. by implementing a webmail now, some of the required data
structure changes become clear and can be made now, so the later jmap
implementation can do things similarly to the webmail code. the webmail
frontend and backend are written together, making their interface/api much
smaller and simpler than jmap.

one of the internal changes is that we now keep track of per-mailbox
total/unread/unseen/deleted message counts and mailbox sizes.  keeping this
data consistent after any change to the stored messages (through the code base)
is tricky, so mox now has a consistency check that verifies the counts are
correct, which runs only during tests, each time an internal account reference
is closed. we have a few more internal "changes" that are propagated for the
webmail frontend (that imap doesn't have a way to propagate on a connection),
like changes to the special-use flags on mailboxes, and used keywords in a
mailbox. more changes that will be required have revealed themselves while
implementing the webmail, and will be implemented next.

the webmail user interface is modeled after the mail clients i use or have
used: thunderbird, macos mail, mutt; and webmails i normally only use for
testing: gmail, proton, yahoo, outlook. a somewhat technical user is assumed,
but still the goal is to make this webmail client easy to use for everyone. the
user interface looks like most other mail clients: a list of mailboxes, a
search bar, a message list view, and message details. there is a top/bottom and
a left/right layout for the list/message view, default is automatic based on
screen size. the panes can be resized by the user. buttons for actions are just
text, not icons. clicking a button briefly shows the shortcut for the action in
the bottom right, helping with learning to operate quickly. any text that is
underdotted has a title attribute that causes more information to be displayed,
e.g. what a button does or a field is about. to highlight potential phishing
attempts, any text (anywhere in the webclient) that switches unicode "blocks"
(a rough approximation to (language) scripts) within a word is underlined
orange. multiple messages can be selected with familiar ui interaction:
clicking while holding control and/or shift keys.  keyboard navigation works
with arrows/page up/down and home/end keys, and also with a few basic vi-like
keys for list/message navigation. we prefer showing the text version of a
message instead of the html version (with inlined images only). html messages are shown
in an iframe served from an endpoint with CSP headers to prevent dangerous
resources (scripts, external images) from being loaded. the html is also
sanitized, with javascript removed. a user can choose to load external
resources (e.g. images for tracking purposes).

the frontend is just (strict) typescript, no external frameworks. all
incoming/outgoing data is typechecked, both the api request parameters and
response types, and the data coming in over SSE. the types and checking code
are generated with sherpats, which uses the api definitions generated by
sherpadoc based on the Go code. so types from the backend are automatically
propagated to the frontend.  since there is no framework to automatically
propagate properties and rerender components, changes coming in over the SSE
connection are propagated explicitly with regular function calls.  the ui is
separated into "views", each with a "root" dom element that is added to the
visible document. these views have additional functions for getting changes
propagated, often resulting in the view updating its (internal) ui state (dom).
we keep the frontend compilation simple, it's just a few typescript files that
get compiled (combined and types stripped) into a single js file, no additional
runtime code needed or complicated build processes used.  the webmail is served
from a compressed, cacheable html file that includes style and the
javascript, currently just over 225kb uncompressed, under 60kb compressed (not
minified, including comments). we include the generated js files in the
repository, to keep Go's easily buildable self-contained binaries.

authentication is basic http, as with the account and admin pages. most data
comes in over one long-term SSE connection to the backend. api requests signal
which mailbox/search/messages are requested over the SSE connection. fetching
individual messages, and making changes, are done through api calls. the
operations are similar to imap, so some code has been moved from package
imapserver to package store. the future jmap implementation will benefit from
these changes too. more functionality will probably be moved to the store
package in the future.

the quickstart enables webmail on the internal listener by default (for new
installs). users can enable it on the public listener if they want to. mox
localserve enables it too. to enable webmail on existing installs, add settings
like the following to the listeners in mox.conf, similar to AccountHTTP(S):

	WebmailHTTP:
		Enabled: true
	WebmailHTTPS:
		Enabled: true

special thanks to liesbeth, gerben, andrii for early user feedback.

there is plenty still to do, see the list at the top of webmail/webmail.ts.
feedback welcome as always.
2023-08-07 21:57:03 +02:00
Mechiel Lukkien
141637df43
when creating a mailbox subscription, don't just try to insert a record into the database and handle bstore.ErrUnique, the transaction will have been marked as botched
behaviour around failing DB calls that change data (insert/update) was changed
in bstore quite some time ago. the tx state in bstore would become inconsistent
when one or more (possibly unique) indexes had been modified, but then an
ErrUnique would occur for the next index. bstore doesn't know how to roll back
the partial changes during a transaction, so it marks the tx as botched and
refuses further operations. so, we cannot just try to insert, wait for a
possible ErrUnique, but then still try to continue with the transaction.
instead, we check if the record exists and only insert it if we couldn't find
it.

found while working on webmail.
2023-08-01 10:14:02 +02:00
Mechiel Lukkien
19550cc041
use Go's mail.ReadMessage instead of textproto.ReadMIMEHeader and decode RFC 2047 charsets in the subject header when parsing messages
as the recent Go patch release showed, textproto.ReadMIMEHeader parses
http headers, strictly. too strict for email message headers. valid headers,
e.g. with a slash in them, were rejected by textproto.ReadMIMEHeader.

the functions in Go's mail package handle RFC 2047 charset-encoded words in
address headers. it can do that because we tell it those headers are addresses,
where such encodings are valid. but that encoding isn't valid in all places in
all headers. for other cases, we must decode explicitly, such as for the
subject header.

with this change, some messages that could not be parsed before can now be
parsed (where headers were previously rejected for being invalid). and the
subject of parsed messages could now be properly decoded. you could run "mox
ensureparsed -all <account>" (while mox isn't running) to force reparsing all
messages. mox needs a subcommand to reparse while running...

it wasn't much of a problem before, because imap email clients typically do
their own parsing (of headers, including subject decoding) again.  but with the
upcoming webmail client, any wrong parsing quickly reveals itself.
2023-08-01 09:50:26 +02:00
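
a small self-contained example of that combination (the message is made up):

	package main

	import (
		"fmt"
		"mime"
		"net/mail"
		"strings"
	)

	func main() {
		raw := "Subject: =?utf-8?q?hello_w=C3=B6rld?=\r\nFrom: mox <mox@example.org>\r\n\r\nbody\r\n"

		// net/mail is more forgiving for message headers than textproto's
		// MIME-header parsing, which targets http.
		msg, err := mail.ReadMessage(strings.NewReader(raw))
		if err != nil {
			panic(err)
		}

		// RFC 2047 encoded-words in Subject must be decoded explicitly;
		// net/mail only does that for address headers.
		var dec mime.WordDecoder
		subject, err := dec.DecodeHeader(msg.Header.Get("Subject"))
		if err != nil {
			subject = msg.Header.Get("Subject")
		}
		fmt.Println(subject) // hello wörld
	}
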
Mechiel Lukkien
3ef1f31359
update dependencies 2023-07-28 22:47:28 +02:00
Mechiel Lukkien
01adad62b2
implement decoding charsets (other than ascii and utf-8) while reading textual message parts, and improve search
message.Part now has a ReaderUTF8OrBinary() along with the existing Reader().
the new function returns a reader of decoded content. we now use it in a few
places, including search. we only support the charsets in
golang.org/x/text/encoding/ianaindex.

search has also been changed to not read the entire message in memory. instead,
we make one 8k buffer for reading and search in that, and we keep the buffer
around for all messages. saves quite some allocations when searching large
mailboxes.
2023-07-28 22:15:23 +02:00
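
a sketch of what such a helper can do (the function name is made up), using
golang.org/x/text/encoding/ianaindex:

	package message

	import (
		"io"
		"strings"

		"golang.org/x/text/encoding/ianaindex"
	)

	// decodedReader returns a reader with the content decoded to utf-8, falling
	// back to the raw bytes for unknown or already-utf-8 charsets.
	func decodedReader(r io.Reader, charset string) io.Reader {
		switch strings.ToLower(charset) {
		case "", "us-ascii", "utf-8":
			return r
		}
		enc, err := ianaindex.MIME.Encoding(charset)
		if err != nil || enc == nil {
			return r
		}
		return enc.NewDecoder().Reader(r)
	}
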
Mechiel Lukkien
a31dfc573e
in smtpserver, allow a space after "mail from:" and "rcpt to:" commands for submission connections
the space is explicitly mentioned as not valid in rfc 5321, but clients do
send it, such as microsoft outlook 365 apps for enterprise. no need to punish
such users, we'll allow it. but only for submission, not regular smtp, because
it is normally a sign of a spammer. we still don't allow it in pedantic mode
(as used by localserve).

for issue #51 by hmfaysal, thanks for reporting and testing!
2023-07-28 20:49:19 +02:00
Mechiel Lukkien
6273afe84f
fix building fresh docker images for integration tests
i always get bitten by some caching or missing checks when i use docker...
Dockerfile.moxmail doesn't exist anymore, but that doesn't matter, it doesn't
even look at it but will just use some image that is still around (based on the
name?). i suppose that means docker-compose also doesn't rebuild an image when
the dockerfile mentioned in the build changes.
2023-07-26 21:58:12 +02:00
Mechiel Lukkien
5be4e91979
new items on roadmap, mention delivered-to rfc, fix wording in comments 2023-07-26 19:23:20 +02:00
Mechiel Lukkien
a92784b824
add missing account close, for retraining junk filter for an account with the retrain cli command 2023-07-26 19:21:58 +02:00
Mechiel Lukkien
e3d0a3a001
fix bug with cli import command: in case the mbox/maildir had keywords, future delivery to the mailbox would fail with duplicate uid's.
accounts with a mailbox with this problem can be fixed by running the "mox
fixuidmeta <account>" command.

we were resetting the mailbox uidnext after delivering messages when we were
setting new keywords on the mailbox at the end of the import. so in a future
delivery attempt to that mailbox, a uid would be chosen that was already
present.

the fix is to fetch the updated mailbox from the database before setting the
new keywords.

http/import.go doesn't have this bug because it was already fetching the
mailbox before updating keywords (because it can import into many mailboxes,
so different code).

the "mox verifydata" command (recommended with backups) also warns about this
issue (but doesn't fix it).

found while working on new functionality (webmail).
2023-07-26 10:09:36 +02:00
Mechiel Lukkien
700118dbd2
add Content-Type header to message delivered for new mox releases
at least the android gmail/mail app doesn't show messages without a content-type
header. i believe missing content-type is meant to be interpreted as
text/plain, but doesn't hurt to be explicit.
2023-07-25 08:24:05 +02:00
Mechiel Lukkien
7f1b7198a8
add condstore & qresync imap extensions
for conditional storing and quick resynchronisation (not sure if mail clients actually use it).

each message now has a "modseq". it is increased for each change. with
condstore, imap clients can request changes since a certain modseq. that
already allows quickly finding changes since a previous connection. condstore
also allows storing (e.g. setting new message flags) only when the modseq of a
message hasn't changed.

qresync should make it fast for clients to get a full list of changed messages
for a mailbox, including removals.

we now also keep basic metadata of messages that have been removed (expunged).
just enough (uid, modseq) to tell client that the messages have been removed.
this does mean we have to be careful when querying messages from the database.
we must now often filter the expunged messages out.

we also keep "createseq", the modseq when a message was created. this will be
useful for the jmap implementation.
2023-07-24 21:25:50 +02:00
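
a rough sketch of that bookkeeping (field names are illustrative, not
necessarily mox's):

	package store

	// Message keeps the modseq of its last change and of its creation; expunged
	// messages remain as tombstones so qresync can report removals.
	type Message struct {
		UID       uint32
		ModSeq    int64
		CreateSeq int64
		Expunged  bool // only uid/modseq remain meaningful
	}

	// changedSince returns messages (including tombstones) modified after the
	// client-supplied modseq, the core of condstore's CHANGEDSINCE.
	func changedSince(msgs []Message, since int64) []Message {
		var out []Message
		for _, m := range msgs {
			if m.ModSeq > since {
				out = append(out, m)
			}
		}
		return out
	}
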
Mechiel Lukkien
cc4ecf2927
imap continuations must have a space after the "+"
the missing space prevented at least the gmail/mail (?) android app from appending a sent message
to the sent mailbox.
2023-07-24 19:54:55 +02:00
Mechiel Lukkien
bc62aae0e6
in imap4rev1 search, always send an untagged search response, also without matches
required by rfc. i noticed an example doing that in the condstore/qresync rfc.
2023-07-24 15:40:04 +02:00
Mechiel Lukkien
bca33c0364
don't recurse into error checking function xcheckf when sendmail fails
found when wanting to get rid of the only non-err "shadowing" warning.
2023-07-24 14:08:27 +02:00
Mechiel Lukkien
b7a0904907
cleanup for warnings by staticcheck
the warnings that remained were either unused code that i wanted to use in the
future, or other types of todo's. i've been mentally ignoring them, assuming i
would get back to them soon enough to fix them. but that hasn't happened yet,
and it's better to have a clean list with only actual issues.
2023-07-24 13:55:36 +02:00
Mechiel Lukkien
8bc554b671
update roadmap, top items are likely to happen soon, add milter to the list (for issue #47) 2023-07-24 11:03:53 +02:00
Mechiel Lukkien
c0100f44e7
for test-upgrade, import a (hopefully large) mbox file, checking for performance/memory consumption
in the future, it would be good to actually start a mox and read
mailboxes/messages...
2023-07-24 11:00:11 +02:00
Mechiel Lukkien
840f3afb35
in domain dnscheck, also check for hostname of mail server resolving to a loopback ip
nowadays the quickstart will warn about this, but it may be missed/ignored. and
users that installed mox a few versions ago never got the warning. so now we
keep warning about it in the dns check.

based on feedback from Mendel on slack, thanks!
2023-07-24 09:23:41 +02:00
Mechiel Lukkien
2e5376d7eb
when moving/copying messages in imapserve, also ensure the message keywords make it into the destination mailbox keywords list 2023-07-24 08:49:19 +02:00
Mechiel Lukkien
f9e261e0fb
merge docker-compose-based quickstart and integration tests into a single integration test
the two were so similar it made sense to just have one that tests all. saves
building docker images.
2023-07-23 23:32:02 +02:00
Mechiel Lukkien
dcb0f0a82c
in DSNs, add a References header pointing to the message with deliverability issues
so mail user agents will show DSNs threaded/grouped with the original message.
we store the MessageID in the message queue, so we have the value within reach
when we need it.

i saw a references header in a DSN from gmail on a test account. makes sense to me.
2023-07-23 17:56:39 +02:00
Mechiel Lukkien
c5747bd656
go fmt and updated config after make build
for PR #49
2023-07-23 17:08:55 +02:00
bobobo1618
671fc5b8f1
Add a 'KeepRejects' option that disables auto-cleanup (#49)
Add a 'KeepRejects' option that disables auto cleanup of the rejects mailbox.
2023-07-23 17:03:09 +02:00
Mechiel Lukkien
e943e0c65d
fix delay with propagating mailbox changes to other imap (idle) connections
when broadcasting a change, we would try to send the changes on a channel,
non-blocking. if we couldn't send (because there was no pending blocked
receive), we would wait until the potential receiver would explicitly request
the changes. however, the imap idle handler would not explicitly request the
changes, but do a receive on the changes channel. since there was no pending
blocked send on the channel, that receive would block. only when another event
would come in, would both the pending and the new changes be sent.

we now use a channel only for signaling there are pending changes. the channel
is buffered, so when broadcasting we can just set the signal by a non-blocking
send and continue with the next listener. the receiver will get the buffered
signal. it can then get the changes directly, but lock-protected.

found when looking at a missing/delayed new message notification in thunderbird
when two messages arrive immediately after each other. this doesn't fix that
problem though: it seems thunderbird just ignores imap untagged "exists"
messages (indicating a new message arrived) during the "uid fetch" command that
it issued after notifications from an "idle" command.
2023-07-23 15:28:37 +02:00
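
the shape of the new mechanism, as a sketch (types are illustrative):

	package store

	import "sync"

	// Change is a placeholder for a mailbox/message change to propagate.
	type Change struct{}

	type listener struct {
		signal  chan struct{} // buffered with capacity 1
		mu      sync.Mutex
		pending []Change
	}

	// broadcast records the change and sets the signal without ever blocking.
	func (l *listener) broadcast(ch Change) {
		l.mu.Lock()
		l.pending = append(l.pending, ch)
		l.mu.Unlock()
		select {
		case l.signal <- struct{}{}:
		default: // already signaled, the receiver will pick everything up
		}
	}

	// take is called by a receiver (e.g. the imap idle handler) after reading
	// from the signal channel; it returns and clears the pending changes.
	func (l *listener) take() []Change {
		l.mu.Lock()
		defer l.mu.Unlock()
		p := l.pending
		l.pending = nil
		return p
	}
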
Mechiel Lukkien
3e9b4107fd
move "link or copy" functionality to moxio
and add a bit more logging for unexpected failures when closing files.
and make tests pass with a TMPDIR on a different filesystem than the testdata directory.
2023-07-23 12:15:29 +02:00
Mechiel Lukkien
4a4d337ab4
improve comments 2023-07-23 09:42:29 +02:00
Mechiel Lukkien
70806137da
for submission over IPv6, allow missing "IPv6" tag in ip address (unless in pedantic mode)
an EHLO ipv4 address looks like this: "[1.2.3.4]". for ipv6, the syntax is:
"[IPv6🔡:1]". mail user agents aren't as careful in compliance as smtp
servers. for incoming messages from smtp servers, we want to be strict (we're
eager to find a reason not to accept spam messages, and not adhering to the
standards is usually a strong spam signal), but there is no reason to punish
authenticated users.

for the syntax requirements, see ABNF rule "address-literal" in rfc 5321.

for issue #48 by @bobobo1618, thanks!
2023-07-22 14:20:50 +02:00
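
for reference, with made-up addresses:

	EHLO [192.0.2.1]           ipv4 address literal
	EHLO [IPv6:2001:db8::1]    ipv6 address literal per rfc 5321
	EHLO [2001:db8::1]         missing "IPv6" tag, now accepted for submission only
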
Mechiel Lukkien
9c25c88542
reinstate go vet ./... 2023-07-22 14:02:05 +02:00
Mechiel Lukkien
5b17fcd712
print log line about unprivileged user after having initialized the values that are printed
we were logging as if we were starting with uid=0, which wasn't the case.
2023-07-19 11:32:19 +02:00
Mechiel Lukkien
17dac99830
fix spello and link to a working build on beta.gobuilds.org
if a windows user visited beta.gobuilds.org, they would be redirected to the
windows build, which would fail. better point them to a working build that
shows links to the platform they may actually need.
2023-07-18 08:58:01 +02:00
Mechiel Lukkien
91ffa4e99b
fix progress reporting during import through the accounts web page
the import was still processed, but the SSE connection to fetch progress did
not work since adding the loggingWriter.

found while working on other functionality that uses SSE.
2023-07-05 12:54:24 +02:00
Mechiel Lukkien
785a38c8b0
improve deprecation warning about localpart-only destinations a bit
it's still not great. better to automatically change domains.conf. but that
would currently rewrite the whole file, which may not be what admins that
edit it manually expect: it would remove their comments. we need better
config-update code.

for issue #40
2023-07-03 09:48:50 +02:00
Mechiel Lukkien
c2448e5adc
update to latest dependencies 2023-07-03 09:13:19 +02:00
Mechiel Lukkien
88d063b598
don't pass git history to docker container builds
isn't needed, and faster this way
2023-07-03 09:12:25 +02:00
Mechiel Lukkien
6e5ed2e30f
add FAQ about using existing TLS cert/keys
for issue #41 by pmarini
2023-07-02 15:05:55 +02:00
Mechiel Lukkien
96326846cd
at startup, print absolute path to config files that we read
after a post on HN about how that's useful for services you haven't had to do
anything with for a while. will help with debugging in that case.
2023-07-02 14:46:20 +02:00
Mechiel Lukkien
d854bc116f
when user opens url to admin or account endpoint, but without trailing slash, redirect them to the url with trailing slash
the trailing slash is commonly forgotten. in the default setup, for the admin
endpoint, this makes you end up at the account endpoint, which won't accept
your admin credentials. with this change, users won't get confused by that
anymore.

for issue #43
2023-07-02 14:37:48 +02:00
Mechiel Lukkien
03c3f56a59
add basic tests for the ctl subcommands, and fix two small bugs
this doesn't really test the output of the ctl commands, just that they succeed
without error. better than nothing...

testing found two small bugs, that are not an issue in practice:

1. we were ack'ing streamed data from the other side of the ctl connection
before having read it. when there is no buffer space on the connection (always
the case for net.Pipe) that would cause a deadlock. only actually happened
during the new tests.

2. the generated dkim keys are relative to the directory of the dynamic
config file. mox looked it up relative to the directory of the _static_ config
file at startup. this directory is typically the same. users would have noticed
if they had triggered this.
2023-07-02 14:18:50 +02:00
Mechiel Lukkien
1469b7293e
more integration tests: start "mox localserve" and submit a message with smtpclient and with "mox sendmail", check that we receive it 2023-07-01 18:48:29 +02:00
Mechiel Lukkien
7facf9d446
when a message contains a date that we cannot marshal to json, adjust the date
found a message with a 24 hour time zone offset, which Go's json package cannot
marshal. in that case, we adjust the date to utc.
2023-07-01 17:25:10 +02:00
Mechiel Lukkien
5817e87a32
add subcommand "ximport", that is like "import" but directly access files in the datadir
so mox doesn't have to be running when you run it.
will be useful for testing in the near future.

this also moves cpuprof and memprof cli flags to top-level flag parsing, so all
commands can use them.
2023-07-01 16:43:20 +02:00
Mechiel Lukkien
faa08583c0
in integration test, don't read database index files but use imap idle to get notified of message delivery, and make integration & quickstart tests faster by making first-time sender delay configurable, and using a 1s timeout instead of the default 15s
we could make more types of delays configurable. the current approach isn't
great, as it results in a default value of "0s" in the config file, while
the actual default is 15s (which is documented just above, but still).
2023-07-01 14:24:28 +02:00
Mechiel Lukkien
3173da5497
fix bug in imapserver with rename of inbox, and add consistency checks
renaming inbox is special. the mailbox isn't renamed, but its messages moved to
a new mailbox. we weren't updating the destination mailbox uidnext with the new
messages. the fix not only sets the uidnext correctly, but also renumbers the
uids, starting at 1.

this also adds a consistency check for message uids and mailbox uidnexts, and
for mailbox uidvalidity account nextuidvalidity in "mox verifydata".

this also adds command "mox fixuidmeta" (not listed) that fixes up mailbox uidnext
and account uidvalidity. and command "mox reassignuids" that will renumber the
uids for either one or all mailboxes in an account.
2023-06-30 17:19:29 +02:00
Mechiel Lukkien
1e049a087d
fix bug in imapserver with matching if a uid is in a uidset
for a uid set, the syntax <num>:* must be interpreted as <num>:<maxuid>. a
wrong check turned the uid set into <maxuid>:<maxuid>. that check was meant for
the case where <num> is higher than <maxuid>, in which case num must be
replaced with maxuid.

this affected "uid expunge" with a uid set, possibly causing messages marked
for deletion not to be actually removed, and this affected "search" with the
uid parameter, possibly not returning all messages that were searched for.

found while writing tests for upcoming condstore/qresync extensions.
2023-06-29 21:37:17 +02:00
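
the intended interpretation, as a tiny sketch:

	package imapserver

	// uidRange resolves "<num>:*" against the mailbox's highest uid: "*" means
	// maxUID, and a num higher than maxUID is clamped to maxUID; it does not
	// turn the whole set into <maxuid>:<maxuid>.
	func uidRange(num, maxUID uint32) (first, last uint32) {
		if num > maxUID {
			num = maxUID
		}
		return num, maxUID
	}
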
Mechiel Lukkien
590ed0b81d
in "changes" email for new releases, put the "---" separator on its own line, and remove duplicate word in first sentence... 2023-06-28 19:55:31 +02:00
Mechiel Lukkien
142b2498bf
fix two parsing bugs in imapserver
these could cause the parser to reject correct commands.

the first bug is about the allowed chars for an "atom": we were accepting too
many. this probably isn't easily triggered in practice.

the second bug is about how numbers (digits) are parsed. when gathering digits
to parse as number, we didn't consider only the directly upcoming digits that
make up the number, but continued looking for digits later on in the command.
then we tried to parse a string that was too long as a number, which would fail
because of additional characters. this could have been triggered with commands
containing two numbers. this is possible with e.g. "tag search or larger 123
smaller 123": the "or" takes two search keys, each with a number. not too
common, but can happen.

found while writing tests for upcoming condstore/qresync implementation.
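As an illustration of the fix for the second bug, a hedged sketch that consumes only the digits directly at the current position (names are made up, not the actual parser):

package imapserver // illustrative

import (
	"errors"
	"strconv"
)

// parseNumber parses the digits at the start of s and returns the remaining
// input. It does not look for digits further on in the command.
func parseNumber(s string) (uint32, string, error) {
	i := 0
	for i < len(s) && s[i] >= '0' && s[i] <= '9' {
		i++
	}
	if i == 0 {
		return 0, s, errors.New("expected number")
	}
	v, err := strconv.ParseUint(s[:i], 10, 32)
	if err != nil {
		return 0, s, err
	}
	return uint32(v), s[i:], nil
}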
2023-06-28 19:41:58 +02:00
Mechiel Lukkien
4819180de1
fix fetching errata after html changed 2023-06-27 19:31:47 +02:00
Mechiel Lukkien
e58fe31dd1
add all sieve rfc's and a few recent imap rfc's to the list, and update roadmap 2023-06-24 12:07:22 +02:00
Mechiel Lukkien
5baeea4746
tweak to error message when loading configuration file
instead of saying "parsing config/mox.conf: :93: unknown key ...",
make it "parsing config/mox.conf:93: unknown key ..."
2023-06-24 10:12:25 +02:00
Mechiel Lukkien
40163bd145
implement storing non-system/well-known flags (keywords) for messages and mailboxes, with imap
the mailbox select/examine responses now return all flags used in a mailbox in
the FLAGS response. and indicate in the PERMANENTFLAGS response that clients
can set new keywords. we store these values on the new Message.Keywords field.
system/well-known flags are still in Message.Flags, so we're recognizing those
and handling them separately.

the imap store command handles the new flags. as does the append command, and
the search command.

we store keywords in a mailbox when a message in that mailbox gets the keyword.
we don't automatically remove the keywords from a mailbox. there is currently
no way at all to remove a keyword from a mailbox.

the import commands now handle non-system/well-known keywords too, when
importing from mbox/maildir.

jmap requires keyword support, so best to get it out of the way now.
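A rough sketch of the separation between well-known flags and keywords; the flag list is partial and the function is illustrative, not mox's store code (a real implementation would also compare flags case-insensitively):

package store // illustrative

// splitFlags separates system/well-known flags (kept in a field like
// Message.Flags) from other keywords (kept in Message.Keywords).
func splitFlags(all []string) (system, keywords []string) {
	wellKnown := map[string]bool{
		`\Seen`: true, `\Answered`: true, `\Flagged`: true,
		`\Deleted`: true, `\Draft`: true,
		"$Forwarded": true, "$Junk": true, "$NotJunk": true, "$Phishing": true,
	}
	for _, f := range all {
		if wellKnown[f] {
			system = append(system, f)
		} else {
			keywords = append(keywords, f)
		}
	}
	return system, keywords
}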
2023-06-24 00:24:43 +02:00
Mechiel Lukkien
afefadf2c0
in websocket data copying code, wait for other goroutine to stop before changing the connection
found while running tests
2023-06-24 00:14:14 +02:00
Mechiel Lukkien
459317097b
fix typo's and old reference 2023-06-22 21:27:52 +02:00
Mechiel Lukkien
8096441f67
new feature: when delivering messages from the queue, make it possible to use a "transport"
the default transport is still just "direct delivery", where we connect to the
destination domain's MX servers.

other transports are:

- regular smtp without authentication, this is relaying to a smarthost.
- submission with authentication, e.g. to a third party email sending service.
- direct delivery, but with connections going through a socks proxy. this
  can be helpful if your ip is blocked, you need to get email out, and you have
  another IP that isn't blocked.

keep in mind that for all of the above, appropriate SPF/DKIM settings have to
be configured. the "dnscheck" for a domain does a check for any SOCKS IP in the
SPF record. SPF for smtp/submission (ranges? includes?) and any DKIM
requirements cannot really be checked.

which transport is used can be configured through routes. routes can be set on
an account, a domain, or globally. the routes are evaluated in that order, with
the first match selecting the transport. these routes are evaluated for each
delivery attempt. common selection criteria are recipient domain and sender
domain, but also which delivery attempt this is. you could configure mox to
attempt sending through a 3rd party from the 4th attempt onwards.

routes and transports are optional. if no route matches, or an empty/zero
transport is selected, normal direct delivery is done.

we could already "submit" emails with 3rd party accounts with "sendmail". but
we now support more SASL authentication mechanisms with SMTP (not only PLAIN,
but also SCRAM-SHA-256, SCRAM-SHA-1 and CRAM-MD5), which sendmail now also
supports. sendmail will use the most secure mechanism supported by the server,
or the explicitly configured mechanism.

for issue #36 by dmikushin. also based on earlier discussion on hackernews.
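A simplified Go sketch of the route evaluation order described above; the types and field names are invented for illustration and do not match mox's config structs:

package queue // illustrative

// route is a simplified stand-in for a configured route.
type route struct {
	SenderDomain    string // empty means any sender domain
	RecipientDomain string // empty means any recipient domain
	MinimumAttempts int    // route applies from this delivery attempt onwards
	Transport       string // empty means direct delivery
}

// selectTransport evaluates account, then domain, then global routes, and
// returns the transport of the first matching route. An empty result means
// normal direct delivery.
func selectTransport(account, domain, global []route, sender, rcpt string, attempt int) string {
	for _, routes := range [][]route{account, domain, global} {
		for _, r := range routes {
			if (r.SenderDomain == "" || r.SenderDomain == sender) &&
				(r.RecipientDomain == "" || r.RecipientDomain == rcpt) &&
				attempt >= r.MinimumAttempts {
				return r.Transport
			}
		}
	}
	return ""
}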
2023-06-16 18:57:05 +02:00
Mechiel Lukkien
2eecf38842
unbreak the subcommands that talk to the mox instance over the ctl socket
broken on may 31st with the "open tls keys as root" change, 70d07c5459d8, so
broken in v0.0.4, not in v0.0.3
2023-06-16 13:27:27 +02:00
Mechiel Lukkien
f73125cbcd
restore checking integration_test.go with go vet 2023-06-16 12:55:57 +02:00
Mechiel Lukkien
e81ed7af26
in DSN, don't add a comment with a nil IP address if we don't have an IP 2023-06-16 09:55:45 +02:00
Mechiel Lukkien
b190a2cda8
mention good hosting providers may initially block outgoing smtp too 2023-06-12 16:35:03 +02:00
Mechiel Lukkien
d4d3f8ce92
add FAQ about the common misconception that you cannot run your own email server nowadays 2023-06-12 16:25:35 +02:00
Mechiel Lukkien
c561d7452b
unbreak "mox localserve"
i broke it with 70d07c5459d8, so broken in v0.0.4, not in v0.0.3
2023-06-12 14:59:40 +02:00
Mechiel Lukkien
d2f7d59fce
for dns resolve errors likely due to a missing name server in /etc/resolv.conf, point user to man page of systemd-resolved, the likely cause
it seems linux machines with systemd-resolved don't always set up
/etc/resolv.conf correctly. there may be no "nameserver" entry, causing Go's
net resolver to fall back to 127.0.0.1 and ::1. Systemd-resolved is listening on
127.0.0.53, so users will likely get a "connection refused". So point users to
the systemd-resolved manual page.

for issue #38 by ArnoSen
2023-06-12 14:53:07 +02:00
Mechiel Lukkien
64ac9872a4
in quickstart, if the host name resolves to a loopback IP, warn about it as it will likely prevent email delivery to local accounts
would have helped for issue #37, thanks @dmikushin for reporting
2023-06-12 12:19:20 +02:00
Mechiel Lukkien
0187fa0394
tweak time format for added date headers
seconds are useful, leading zeros for "day" not so much
2023-06-04 21:04:10 +02:00
Mechiel Lukkien
41167d6393
regenerate keys/certs for integration tests with expiration far in the future
don't want to have expiring tests...
2023-06-04 20:43:19 +02:00
Mechiel Lukkien
05fd5c6947
add automated test for quickstart
with tls with acme (with pebble, a small acme server for testing), and with
pregenerated keys/certs.

the two mox instances are configured on their own domain. we launch a separate
test container that connects to the first, submits a message for delivery to
the second. we check if the message is delivered with an imap connection and
the idle command.
2023-06-04 20:38:10 +02:00
Mechiel Lukkien
e53b773d04
fix bug with dkim signing messages without Date or Message-Id header
we were adding the missing date and/or message-id header, but didn't sign it.
and the default dkim signing config is to (over)sign those headers. so that was
causing errors with bad signatures.

found while setting up automated tests for quickstart, while sending a very
basic message between a fresh install.
2023-06-04 20:32:18 +02:00
Mechiel Lukkien
c9a846d019
more logging around smtp and mtasts tls connections
i wondered why self-signed mtasts certs didn't result in delivery failure. it's
because it was a first-time request of the mtasts policy (clean test
container), and in that case mta-sts should be ignored.
2023-06-04 17:55:55 +02:00
Mechiel Lukkien
5a4f35ad5f
fix delivery from/to smtp addresses with double quotes
found while adding tests for smtp and imap for addresses with empty (double
quoted) localparts.
2023-06-03 15:29:18 +02:00
Mechiel Lukkien
77d78191f8
more helpful error message when the queue tries to deliver a message but the remote host is not listed in the mta-sts policy
based on questions on irc by Nemain where this better error message would
probably have made the problem easier to find and fix.
2023-06-01 16:23:36 +02:00
Mechiel Lukkien
cafbfc5fdf
tweaks to backup & verifydata tool to make a typical backup+verifydata produce no output
for easy use in a crontab
2023-06-01 11:34:28 +02:00
Mechiel Lukkien
d25131f2f2
add missing check for err variable in test 2023-05-31 21:57:46 +02:00
Mechiel Lukkien
713d781bad
log a consistent log line for failed authentication attempts, with the remote ip
so external tools (like fail2ban) can monitor the logs and block ip's of bots.

for issue #30 by inigoserna, though i'm not sure i interpreted the suggestion correctly.
2023-05-31 20:39:00 +02:00
Mechiel Lukkien
70d07c5459
open tls keys/certificate as root, pass fd's to the unprivileged child process
makes it easier to use tls keys/certs managed by other tools, with or without
acme. the root process has access to open such files. the child process reads
the key from the file descriptor, then closes the file.

for issue #30 by inigoserna, thanks!
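A minimal Go sketch of the pattern (paths, command name and fd numbering are illustrative assumptions, not mox's actual code): the root process opens the file and hands it to the child via ExtraFiles, where it becomes fd 3.

package main

import (
	"io"
	"os"
	"os/exec"
)

// startUnprivileged runs as root: it opens the TLS key file and passes the
// open file to the unprivileged child process.
func startUnprivileged() error {
	key, err := os.Open("/etc/mox/tls/key.pem") // illustrative path
	if err != nil {
		return err
	}
	defer key.Close()
	cmd := exec.Command("/usr/local/bin/mox", "serve-child") // illustrative
	cmd.ExtraFiles = []*os.File{key} // becomes fd 3 in the child
	return cmd.Start()
}

// readKey runs in the child: it reads the key from the inherited descriptor
// and then closes it.
func readKey() ([]byte, error) {
	f := os.NewFile(3, "tls-key")
	defer f.Close()
	return io.ReadAll(f)
}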
2023-05-31 14:09:53 +02:00
Mechiel Lukkien
dd0cede4f9
after a logout command, actually close the connection
reported by inigoserna in issue #30, thanks!
2023-05-31 10:31:25 +02:00
Mechiel Lukkien
5b8efcc1d9
move "how do i upgrade"-question to just below "how do i stay up to date" question 2023-05-31 10:30:34 +02:00
Mechiel Lukkien
0971700f6c
add ios push mail on not-soon-todo list
someone asked at the recent golang rotterdam meetup if this would be added.
i looked into it, and it requires implementing an imap extension
XAPPLEPUSHSERVICE (not documented, but apple published modified dovecot
software for macos server that implemented it). to send push notifications to
the ios mail app, you need an APNS certificate. the tutorials online explain you
have to purchase macos server (a deprecated product) and extract the APNS
certificate. the certificate is valid for one year. i'm not sure it still
works, and it feels like it could stop working at any moment. but implementing
it seems doable.
2023-05-31 10:24:48 +02:00
Mechiel Lukkien
259928ab62
add reverse proxying websocket connections
if we recognize that a request for a WebForward is trying to turn the
connection into a websocket, we forward it to the backend and check if the
backend understands the websocket request. if so, we pass back the upgrade
response and get out of the way, copying bytes between the two. we do log the
total amount of bytes read from the client and written to the client. if the
backend doesn't respond with a websocket response, or an invalid one, we respond
with a regular non-websocket response. and we log details about the failed
connection, should help with debugging and any bug reports.

we don't try to parse the websocket framing, that's between the client and the
backend.  we could try to parse it, in part to protect the backend from bad
frames, but it would be a lot of work and could be brittle in the face of
extensions.

this doesn't yet handle websocket connections when a http proxy is configured.
we'll implement it when someone needs it. we do recognize it and fail the
connection.

for issue #25
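For the first step, recognizing the upgrade attempt, a small hedged sketch (not the actual webserver code):

package webserver // illustrative

import (
	"net/http"
	"strings"
)

// isWebsocketUpgrade reports whether the client is asking to turn the
// connection into a websocket: an "Upgrade: websocket" header plus a
// Connection header containing the "upgrade" token.
func isWebsocketUpgrade(r *http.Request) bool {
	if !strings.EqualFold(r.Header.Get("Upgrade"), "websocket") {
		return false
	}
	for _, tok := range strings.Split(r.Header.Get("Connection"), ",") {
		if strings.EqualFold(strings.TrimSpace(tok), "upgrade") {
			return true
		}
	}
	return false
}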
2023-05-30 22:11:31 +02:00
Mechiel Lukkien
aca64828bd
we now have an index on dkimdomains, remove the todo 2023-05-26 20:49:13 +02:00
Mechiel Lukkien
aad5a5bcb9
add a "backup" subcommand to make consistent backups, and a "verifydata" subcommand to verify a backup before restoring, and add tests for future upgrades
the backup command will make consistent snapshots of all the database files. i
had been copying the db files before, and it usually works. but if the file is
modified during the backup, it is inconsistent and is likely to generate errors
when reading (can be at any moment in the future, when reading some db page).
"mox backup" opens the database file and writes out a copy in a transaction.
it also duplicates the message files.

before doing a restore, you could run "mox verifydata" on the to-be-restored
"data" directory. it checks the database files, and compares the message files
with the database.

the new "gentestdata" subcommand generates a basic "data" directory, with a
queue and a few accounts. we will use it in the future along with "verifydata"
to test upgrades from an old version to the latest version. both when going to the
next version, and when skipping several versions. the script test-upgrades.sh
executes these tests and doesn't do anything at the moment, because no releases
have this subcommand yet.

inspired by a failed upgrade attempt of a pre-release version.
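The core trick, sketched as a standalone Go program using bbolt (which bstore uses underneath); handling of the message files and integration with the running server are left out, and this is not the actual backup code:

package main

import (
	"os"

	bolt "go.etcd.io/bbolt"
)

// snapshotDB copies a database file inside a read transaction, so the copy is
// a consistent snapshot even if the database is being modified while the
// backup runs.
func snapshotDB(srcPath, dstPath string) error {
	db, err := bolt.Open(srcPath, 0o600, nil)
	if err != nil {
		return err
	}
	defer db.Close()

	dst, err := os.Create(dstPath)
	if err != nil {
		return err
	}
	defer dst.Close()

	return db.View(func(tx *bolt.Tx) error {
		_, err := tx.WriteTo(dst)
		return err
	})
}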
2023-05-26 19:26:51 +02:00
Mechiel Lukkien
753ec56b3d
make "mailbox" parameter to unlisted command "bumpuidvalidity" optional, to update the uidvalidity for all mailboxes
useful after a database restore.
2023-05-22 18:11:42 +02:00
Mechiel Lukkien
d18983d9a6
add github workflow to build & test, exporting a coverage file 2023-05-22 18:01:21 +02:00
Mechiel Lukkien
dcc051e149
for fuzzing the imapserver and smtpserver use different config files than regular tests
otherwise they cannot run at the same time: they could overwrite each
other's files.
2023-05-22 15:37:03 +02:00
Mechiel Lukkien
1f5ab1b795
fix language in comments
found through goreportcard.com
2023-05-22 15:04:06 +02:00
Mechiel Lukkien
b0623e6038
in queue.Drop, to drop a message from the outgoing queue, not only remove it from the database, but also its contents from the file system 2023-05-22 15:03:23 +02:00
Mechiel Lukkien
88fd775ec4
if we encounter an error fetching an mta-sts policy as part of a delivery attempt, properly continue with delivery with strict tls checking 2023-05-22 14:46:20 +02:00
Mechiel Lukkien
e81930ba20
update to latest bstore (with support for an index on a []string: Message.DKIMDomains), and cyclic data types (to be used for Message.Part soon); also adds a context.Context to database operations. 2023-05-22 14:40:36 +02:00
Kohei Watanabe
f6ed860ccb
Fixed MTASTSHTTPS.NonTLS option (#29)
AutoconfigHTTPS.NonTLS option was being used.
Fixed to use MTASTSHTTPS.NonTLS option.
2023-05-03 16:26:04 +02:00
Mechiel Lukkien
70ab9a7d4c
tweak alerting rule to include that it is about authentication rate limiting 2023-05-01 14:21:02 +02:00
Mechiel Lukkien
c1753b369d
in smtpserver, accept delivery to postmaster@<hostname>, and also postmaster@ addresses for domains that don't have a postmaster address configured. 2023-04-24 12:04:46 +02:00
Mechiel Lukkien
74dab5fc39
fix sending to an address where the domain does not have an mx record (but where we should connect directly to the host, or follow cname records)
such deliveries would fail because a canceled "context" was reused, so the dns
lookups would fail.

the tests didn't catch it before because they ignored their context parameters.
2023-04-24 10:34:19 +02:00
Mechiel Lukkien
1f4df30019
remove debug print 2023-04-24 10:06:59 +02:00
1495 changed files with 475203 additions and 54964 deletions


@ -6,3 +6,4 @@
/cover.*
/.go/
/tmp/
/.git/

39
.github/workflows/build-test.yml vendored Normal file

@ -0,0 +1,39 @@
name: Build and test
on: [push, pull_request, workflow_dispatch]
jobs:
  build-test:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 1 # cannot run tests concurrently, files are created
      matrix:
        go-version: ['stable', 'oldstable']
    steps:
      - uses: actions/checkout@v3
      # Trigger rebuilding frontends, should be the same as committed.
      - uses: actions/setup-node@v3
        with:
          node-version: 16
          cache: 'npm'
      - run: 'touch */*.ts'
      - uses: actions/setup-go@v4
        with:
          go-version: ${{ matrix.go-version }}
      - run: make build
      # Need to run tests with a temp dir on same file system for os.Rename to succeed.
      - run: 'mkdir -p tmp && TMPDIR=$PWD/tmp make test'
      - uses: actions/upload-artifact@v4
        with:
          name: coverage-${{ matrix.go-version }}
          path: cover.html
      # Format code, we check below if nothing changed.
      - run: 'make fmt'
      # Enforce the steps above didn't make any changes.
      - run: git diff --exit-code

21
.gitignore vendored

@ -1,27 +1,28 @@
/mox
/mox.exe
/rfc/[0-9][0-9]*
/rfc/xr/
/local/
/testdata/check/
/testdata/*/data/
/testdata/ctl/config/dkim/
/testdata/empty/
/testdata/exportmaildir/
/testdata/exportmbox/
/testdata/httpaccount/data/
/testdata/imap/data/
/testdata/imaptest/data/
/testdata/integration/data/
/testdata/junk/*.bloom
/testdata/junk/*.db
/testdata/queue/data/
/testdata/sent/
/testdata/smtp/data/
/testdata/smtp/datajunk/
/testdata/smtp/sendlimit/data/
/testdata/smtp/catchall/data/
/testdata/store/data/
/testdata/smtp/postmaster/
/testdata/train/
/testdata/upgradetest.mbox.gz
/testdata/integration/example-integration.zone
/testdata/integration/tmp-pebble-ca.pem
/cover.out
/cover.html
/.go/
/node_modules/
/package.json
/package-lock.json
/upgrade*-verifydata.*.pprof
/upgrade*-openaccounts.*.pprof
/website/html/

179
Makefile

@ -1,79 +1,182 @@
default: build
build:
build: build0 frontend build1
build0:
# build early to catch syntax errors
CGO_ENABLED=0 go build
CGO_ENABLED=0 go vet -tags integration ./...
CGO_ENABLED=0 go vet ./...
./gendoc.sh
(cd http && CGO_ENABLED=0 go run ../vendor/github.com/mjl-/sherpadoc/cmd/sherpadoc/*.go -adjust-function-names none Admin) >http/adminapi.json
(cd http && CGO_ENABLED=0 go run ../vendor/github.com/mjl-/sherpadoc/cmd/sherpadoc/*.go -adjust-function-names none Account) >http/accountapi.json
# build again, files above are embedded
./genapidoc.sh
./gents.sh webadmin/api.json webadmin/api.ts
./gents.sh webaccount/api.json webaccount/api.ts
./gents.sh webmail/api.json webmail/api.ts
build1:
# build again, api json files above are embedded and new frontend code generated
CGO_ENABLED=0 go build
install: build0 frontend
CGO_ENABLED=0 go install
race: build0
go build -race
test:
CGO_ENABLED=0 go test -shuffle=on -coverprofile cover.out ./...
CGO_ENABLED=0 go test -fullpath -shuffle=on -coverprofile cover.out ./...
go tool cover -html=cover.out -o cover.html
test-race:
CGO_ENABLED=1 go test -race -shuffle=on -covermode atomic -coverprofile cover.out ./...
CGO_ENABLED=1 go test -fullpath -race -shuffle=on -covermode atomic -coverprofile cover.out ./...
go tool cover -html=cover.out -o cover.html
test-more:
TZ= CGO_ENABLED=0 go test -fullpath -shuffle=on -count 2 ./...
# note: if testdata/upgradetest.mbox.gz exists, its messages will be imported
# during tests. helpful for performance/resource consumption tests.
test-upgrade: build
nice ./test-upgrade.sh
# needed for "check" target
install-staticcheck:
CGO_ENABLED=0 go install honnef.co/go/tools/cmd/staticcheck@latest
install-ineffassign:
CGO_ENABLED=0 go install github.com/gordonklaus/ineffassign@v0.1.0
check:
staticcheck ./...
staticcheck -tags integration
CGO_ENABLED=0 go vet -tags integration
CGO_ENABLED=0 go vet -tags website website/website.go
CGO_ENABLED=0 go vet -tags link rfc/link.go
CGO_ENABLED=0 go vet -tags errata rfc/errata.go
CGO_ENABLED=0 go vet -tags xr rfc/xr.go
GOARCH=386 CGO_ENABLED=0 go vet ./...
CGO_ENABLED=0 ineffassign ./...
CGO_ENABLED=0 staticcheck ./...
CGO_ENABLED=0 staticcheck -tags integration
CGO_ENABLED=0 staticcheck -tags website website/website.go
CGO_ENABLED=0 staticcheck -tags link rfc/link.go
CGO_ENABLED=0 staticcheck -tags errata rfc/errata.go
CGO_ENABLED=0 staticcheck -tags xr rfc/xr.go
# needed for check-shadow
install-shadow:
CGO_ENABLED=0 go install golang.org/x/tools/go/analysis/passes/shadow/cmd/shadow@latest
# having "err" shadowed is common, best to not have others
check-shadow:
go vet -vettool=$$(which shadow) ./... 2>&1 | grep -v '"err"'
CGO_ENABLED=0 go vet -vettool=$$(which shadow) ./... 2>&1 | grep -v '"err"'
CGO_ENABLED=0 go vet -tags integration -vettool=$$(which shadow) 2>&1 | grep -v '"err"'
CGO_ENABLED=0 go vet -tags website -vettool=$$(which shadow) website/website.go 2>&1 | grep -v '"err"'
CGO_ENABLED=0 go vet -tags link -vettool=$$(which shadow) rfc/link.go 2>&1 | grep -v '"err"'
CGO_ENABLED=0 go vet -tags errata -vettool=$$(which shadow) rfc/errata.go 2>&1 | grep -v '"err"'
CGO_ENABLED=0 go vet -tags xr -vettool=$$(which shadow) rfc/xr.go 2>&1 | grep -v '"err"'
fuzz:
go test -fuzz FuzzParseSignature -fuzztime 5m ./dkim
go test -fuzz FuzzParseRecord -fuzztime 5m ./dkim
go test -fuzz . -fuzztime 5m ./dmarc
go test -fuzz . -fuzztime 5m ./dmarcrpt
go test -fuzz . -parallel 1 -fuzztime 5m ./imapserver
go test -fuzz . -parallel 1 -fuzztime 5m ./junk
go test -fuzz FuzzParseRecord -fuzztime 5m ./mtasts
go test -fuzz FuzzParsePolicy -fuzztime 5m ./mtasts
go test -fuzz . -parallel 1 -fuzztime 5m ./smtpserver
go test -fuzz . -fuzztime 5m ./spf
go test -fuzz FuzzParseRecord -fuzztime 5m ./tlsrpt
go test -fuzz FuzzParseMessage -fuzztime 5m ./tlsrpt
go test -fullpath -fuzz FuzzParseSignature -fuzztime 5m ./dkim
go test -fullpath -fuzz FuzzParseRecord -fuzztime 5m ./dkim
go test -fullpath -fuzz . -fuzztime 5m ./dmarc
go test -fullpath -fuzz . -fuzztime 5m ./dmarcrpt
go test -fullpath -fuzz . -parallel 1 -fuzztime 5m ./imapserver
go test -fullpath -fuzz . -fuzztime 5m ./imapclient
go test -fullpath -fuzz . -parallel 1 -fuzztime 5m ./junk
go test -fullpath -fuzz FuzzParseRecord -fuzztime 5m ./mtasts
go test -fullpath -fuzz FuzzParsePolicy -fuzztime 5m ./mtasts
go test -fullpath -fuzz . -fuzztime 5m ./smtp
go test -fullpath -fuzz . -parallel 1 -fuzztime 5m ./smtpserver
go test -fullpath -fuzz . -fuzztime 5m ./spf
go test -fullpath -fuzz FuzzParseRecord -fuzztime 5m ./tlsrpt
go test -fullpath -fuzz FuzzParseMessage -fuzztime 5m ./tlsrpt
integration-build:
docker-compose -f docker-compose-integration.yml build --no-cache moxmail
govendor:
go mod tidy
go mod vendor
./genlicenses.sh
integration-start:
-rm -r testdata/integration/data
-docker-compose -f docker-compose-integration.yml run moxmail /bin/bash
docker-compose -f docker-compose-integration.yml down
test-integration:
-docker compose -f docker-compose-integration.yml kill
-docker compose -f docker-compose-integration.yml down
docker image build --pull --no-cache -f Dockerfile -t mox_integration_moxmail .
docker image build --pull --no-cache -f testdata/integration/Dockerfile.test -t mox_integration_test testdata/integration
-rm -rf testdata/integration/moxacmepebble/data
-rm -rf testdata/integration/moxmail2/data
-rm -f testdata/integration/tmp-pebble-ca.pem
MOX_UID=$$(id -u) docker compose -f docker-compose-integration.yml run test
docker compose -f docker-compose-integration.yml kill
# run from within "make integration-start"
integration-test:
CGO_ENABLED=0 go test -tags integration
imaptest-build:
-docker-compose -f docker-compose-imaptest.yml build --no-cache mox
-docker compose -f docker-compose-imaptest.yml build --no-cache --pull mox
imaptest-run:
-rm -r testdata/imaptest/data
mkdir testdata/imaptest/data
docker-compose -f docker-compose-imaptest.yml run --entrypoint /usr/local/bin/imaptest imaptest host=mox port=1143 user=mjl@mox.example pass=testtest mbox=imaptest.mbox
docker-compose -f docker-compose-imaptest.yml down
docker compose -f docker-compose-imaptest.yml run --entrypoint /usr/local/bin/imaptest imaptest host=mox port=1143 user=mjl@mox.example pass=testtest mbox=imaptest.mbox
docker compose -f docker-compose-imaptest.yml down
fmt:
go fmt ./...
gofmt -w -s *.go */*.go
jswatch:
inotifywait -m -e close_write http/admin.html http/account.html | xargs -n2 sh -c 'echo changed; ./checkhtmljs http/admin.html http/account.html'
tswatch:
bash -c 'while true; do inotifywait -q -e close_write *.ts webadmin/*.ts webaccount/*.ts webmail/*.ts; make frontend; done'
jsinstall:
node_modules/.bin/tsc:
-mkdir -p node_modules/.bin
npm install jshint@2.13.2
npm ci --ignore-scripts
install-js: node_modules/.bin/tsc
install-js0:
-mkdir -p node_modules/.bin
npm install --ignore-scripts --save-dev --save-exact typescript@5.1.6
webmail/webmail.js: lib.ts webmail/api.ts webmail/lib.ts webmail/webmail.ts
./tsc.sh $@ lib.ts webmail/api.ts webmail/lib.ts webmail/webmail.ts
webmail/msg.js: lib.ts webmail/api.ts webmail/lib.ts webmail/msg.ts
./tsc.sh $@ lib.ts webmail/api.ts webmail/lib.ts webmail/msg.ts
webmail/text.js: lib.ts webmail/api.ts webmail/lib.ts webmail/text.ts
./tsc.sh $@ lib.ts webmail/api.ts webmail/lib.ts webmail/text.ts
webadmin/admin.js: lib.ts webadmin/api.ts webadmin/admin.ts
./tsc.sh $@ lib.ts webadmin/api.ts webadmin/admin.ts
webaccount/account.js: lib.ts webaccount/api.ts webaccount/account.ts
./tsc.sh $@ lib.ts webaccount/api.ts webaccount/account.ts
frontend: node_modules/.bin/tsc webadmin/admin.js webaccount/account.js webmail/webmail.js webmail/msg.js webmail/text.js
install-apidiff:
CGO_ENABLED=0 go install golang.org/x/exp/cmd/apidiff@v0.0.0-20231206192017-f3f8817b8deb
genapidiff:
./apidiff.sh
docker:
docker build -t mox:dev .
docker-release:
./docker-release.sh
genwebsite:
./genwebsite.sh
buildall:
CGO_ENABLED=0 GOOS=linux GOARCH=arm go build
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build
CGO_ENABLED=0 GOOS=linux GOARCH=386 go build
CGO_ENABLED=0 GOOS=openbsd GOARCH=amd64 go build
CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 go build
CGO_ENABLED=0 GOOS=netbsd GOARCH=amd64 go build
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build
CGO_ENABLED=0 GOOS=dragonfly GOARCH=amd64 go build
CGO_ENABLED=0 GOOS=illumos GOARCH=amd64 go build
CGO_ENABLED=0 GOOS=solaris GOARCH=amd64 go build
CGO_ENABLED=0 GOOS=aix GOARCH=ppc64 go build
CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build
# no plan9 for now

534
README.md

@ -1,102 +1,48 @@
Mox is a modern full-featured open source secure mail server for low-maintenance self-hosted email.
For more details, see the mox website, https://www.xmox.nl.
See Quickstart below to get started.
## Features
- Quick and easy to start/maintain mail server, for your own domain(s).
- SMTP (with extensions) for receiving and submitting email.
- SMTP (with extensions) for receiving, submitting and delivering email.
- IMAP4 (with extensions) for giving email clients access to email.
- Automatic TLS with ACME, for use with Let's Encrypt and other CA's.
- SPF, verifying that a remote host is allowed to send email for a domain.
- DKIM, verifying that a message is signed by the claimed sender domain,
and for signing emails sent by mox for others to verify.
- DMARC, for enforcing SPF/DKIM policies set by domains. Incoming DMARC
aggregate reports are analyzed.
- Reputation tracking, learning (per user) host- and domain-based reputation from
(Non-)Junk email.
- Webmail for reading/sending email from the browser.
- SPF/DKIM/DMARC for authenticating messages/delivery, also DMARC aggregate
reports.
- Reputation tracking, learning (per user) host-, domain- and
sender address-based reputation from (Non-)Junk email classification.
- Bayesian spam filtering that learns (per user) from (Non-)Junk email.
- Slowing down senders with no/low reputation or questionable email content
(similar to greylisting). Rejected emails are stored in a mailbox called Rejects
for a short period, helping with misclassified legitimate synchronous
signup/login/transactional emails.
- Internationalized email, with unicode names in domains and usernames
("localparts").
- TLSRPT, parsing reports about TLS usage and issues.
- MTA-STS, for ensuring TLS is used whenever it is required. Both serving of
policies, and tracking and applying policies of remote servers.
- Web admin interface that helps you set up your domains and accounts
(instructions to create DNS records, configure
SPF/DKIM/DMARC/TLSRPT/MTA-STS), for status information, managing
accounts/domains, and modifying the configuration file.
- Autodiscovery (with SRV records, Microsoft-style and Thunderbird-style) for
easy account setup (though not many clients support it).
- Internationalized email (EAI), with unicode in email address usernames
("localparts"), and in domain names (IDNA).
- Automatic TLS with ACME, for use with Let's Encrypt and other CA's.
- DANE and MTA-STS for inbound and outbound delivery over SMTP with STARTTLS,
including REQUIRETLS and with incoming/outgoing TLSRPT reporting.
- Web admin interface that helps you set up your domains, accounts and list
aliases (instructions to create DNS records, configure
SPF/DKIM/DMARC/TLSRPT/MTA-STS), for status information, and modifying the
configuration file.
- Account autodiscovery (with SRV records, Microsoft-style, Thunderbird-style,
and Apple device management profiles) for easy account setup (though client
support is limited).
- Webserver with serving static files and forwarding requests (reverse
proxy), so port 443 can also be used to serve websites.
- Simple HTTP/JSON API for sending transaction email and receiving delivery
events and incoming messages (webapi and webhooks).
- Prometheus metrics and structured logging for operational insight.
- "localserve" subcommand for running mox locally for email-related
- "mox localserve" subcommand for running mox locally for email-related
testing/developing, including pedantic mode.
- Most non-server Go packages mox consists of are written to be reusable.
Mox is available under the MIT-license and was created by Mechiel Lukkien,
mechiel@ueber.net. Mox includes the Public Suffix List by Mozilla, under Mozilla
Public License, v2.0.
# Download
You can easily (cross) compile mox if you have a recent Go toolchain installed
(see "go version", it must be >= 1.19; otherwise, see https://go.dev/dl/ or
https://go.dev/doc/manage-install and $HOME/go/bin):
GOBIN=$PWD CGO_ENABLED=0 go install github.com/mjl-/mox@latest
Or you can download a binary built with the latest Go toolchain from
https://beta.gobuilds.org/github.com/mjl-/mox, and symlink or rename it to
"mox".
Verify you have a working mox binary:
./mox version
Note: Mox only compiles for/works on unix systems, not on Plan 9 or Windows.
You can also run mox with docker image `r.xmox.nl/mox`, with tags like `v0.0.1`
and `v0.0.1-go1.20.1-alpine3.17.2`, see https://r.xmox.nl/repo/mox/. See
docker-compose.yml in this repository for instructions on starting. You must run
docker with host networking, because mox needs to find your actual public IP's
and get the remote IPs for incoming connections, not a local/internal NAT IP.
# Quickstart
The easiest way to get started with serving email for your domain is to get a
vm/machine dedicated to serving email, name it [host].[domain] (e.g.
mail.example.com), login as root, and run:
# Create mox user and homedir (or pick another name or homedir):
useradd -m -d /home/mox mox
cd /home/mox
... compile or download mox to this directory, see above ...
# Generate config files for your address/domain:
./mox quickstart you@example.com
The quickstart creates an account, generates a password and configuration
files, prints the DNS records you need to manually create and prints commands
to start mox and optionally install mox as a service.
A dedicated machine is highly recommended because modern email requires HTTPS,
and mox currently needs it for automatic TLS. You could combine mox with an
existing webserver, but it requires more configuration. If you want to serve
websites on the same machine, consider using the webserver built into mox. If
you want to run an existing webserver on port 443/80, see "mox help quickstart",
it'll tell you to run "./mox quickstart -existing-webserver you@example.com".
After starting, you can access the admin web interface on internal IPs.
# Future/development
mechiel@ueber.net. Mox includes BSD-3-clause licensed code from the Go Authors, and the
Public Suffix List by Mozilla under Mozilla Public License, v2.0.
Mox has automated tests, including for interoperability with Postfix for SMTP.
Mox is manually tested with email clients: Mozilla Thunderbird, mutt, iOS Mail,
@ -106,37 +52,137 @@ proton.me.
The code is heavily cross-referenced with the RFCs for readability/maintainability.
## Roadmap
# Quickstart
- Privilege separation, isolating parts of the application to more restricted
sandbox (e.g. new unauthenticated connections).
- DANE and DNSSEC.
- Sending DMARC and TLS reports (currently only receiving).
- OAUTH2 support, for single sign on.
The easiest way to get started with serving email for your domain is to get a
(virtual) machine dedicated to serving email, name it `[host].[domain]` (e.g.
mail.example.com). Having a DNSSEC-verifying resolver installed, such as
unbound, is highly recommended. Run as root:
# Create mox user and homedir (or pick another name or homedir):
useradd -m -d /home/mox mox
cd /home/mox
... compile or download mox to this directory, see below ...
# Generate config files for your address/domain:
./mox quickstart you@example.com
The quickstart:
- Creates configuration files mox.conf and domains.conf.
- Adds the domain and an account for the email address to domains.conf
- Generates an admin and account password.
- Prints the DNS records you need to add, for the machine and domain.
- Prints commands to start mox, and optionally install mox as a service.
A machine that doesn't already run a webserver is highly recommended because
modern email requires HTTPS, and mox currently needs to run a webserver for
automatic TLS with ACME. You could combine mox with an existing webserver, but
it requires a lot more configuration. If you want to serve websites on the same
machine, consider using the webserver built into mox. It's pretty good! If you
want to run an existing webserver on port 443/80, see `mox help quickstart`.
After starting, you can access the admin web interface on internal IPs.
# Download
Download a mox binary from
https://beta.gobuilds.org/github.com/mjl-/mox@latest/linux-amd64-latest/.
Symlink or rename it to "mox".
The URL above always resolves to the latest release for linux/amd64 built with
the latest Go toolchain. See the links at the bottom of that page for binaries
for other platforms.
# Compiling
You can easily (cross) compile mox yourself. You need a recent Go toolchain
installed. Run `go version`, it must be >= 1.23. Download the latest version
from https://go.dev/dl/ or see https://go.dev/doc/manage-install.
To download the source code of the latest release, and compile it to binary "mox":
GOBIN=$PWD CGO_ENABLED=0 go install github.com/mjl-/mox@latest
Mox only compiles for and fully works on unix systems. Mox also compiles for
Windows, but "mox serve" does not yet work, though "mox localserve" (for a
local test instance) and most other subcommands do. Mox does not compile for
Plan 9.
# Docker
Although not recommended, you can also run mox with docker image
`r.xmox.nl/mox`, with tags like `v0.0.1` and `v0.0.1-go1.20.1-alpine3.17.2`, see
https://r.xmox.nl/r/mox/. See
https://github.com/mjl-/mox/blob/main/docker-compose.yml to get started.
New docker images aren't (automatically) generated for new Go runtime/compiler
releases.
It is important to run with docker host networking, so mox can use the public
IPs and has correct remote IP information for incoming connections (important
for junk filtering and rate-limiting).
# Development
See develop.txt for instructions/tips for developing on mox.
# Sponsors
Thanks to NLnet foundation, the European Commission's NGI programme, and the
Netherlands Ministry of the Interior and Kingdom Relations for financial
support:
- 2024/2025, NLnet NGI0 Zero Core, https://nlnet.nl/project/Mox-Automation/
- 2024, NLnet e-Commons Fund, https://nlnet.nl/project/Mox-API/
- 2023/2024, NLnet NGI0 Entrust, https://nlnet.nl/project/Mox/
# Roadmap
- "mox setup" command, using admin web interface for interactive setup
- Automate DNS management, for setup and maintenance, such as DANE/DKIM key rotation
- Config options for "transactional email domains", for which mox will only
send messages
- Encrypted storage of files (email messages, TLS keys), also with per account keys
- Recognize common deliverability issues and help postmasters solve them
- JMAP, IMAP OBJECTID extension, IMAP JMAPACCESS extension
- Calendaring with CalDAV/iCal
- Introbox, to which first-time senders are delivered
- Add special IMAP mailbox ("Queue?") that contains queued but
not-yet-delivered messages.
undelivered messages, updated with IMAP flags/keywords/tags and message headers.
- External addresses in aliases/lists.
- Autoresponder (out of office/vacation)
- Mailing list manager
- IMAP extensions for "online"/non-syncing/webmail clients (SORT (including
DISPLAYFROM, DISPLAYTO), THREAD, PARTIAL, CONTEXT=SEARCH CONTEXT=SORT ESORT,
FILTERS)
- IMAP ACL support, for account sharing (interacts with many extensions and code)
- Improve support for mobile clients with extensions: IMAP URLAUTH, SMTP
CHUNKING and BINARYMIME, IMAP CATENATE
- Privilege separation, isolating parts of the application to more restricted
sandbox (e.g. new unauthenticated connections)
- Using mox as backup MX
- Sieve for filtering (for now see Rulesets in the account config)
- Calendaring
- IMAP CONDSTORE and QRESYNC extensions
- IMAP THREAD extension
- Using mox as backup MX.
- Old-style internationalization in messages.
- JMAP
- Webmail
- ARC, with forwarded email from trusted source
- Milter support, for integration with external tools
- SMTP DSN extension
- IMAP Sieve extension, to run Sieve scripts after message changes (not only
new deliveries)
- OAUTH2 support, for single sign on
- Forwarding (to an external address)
There are many smaller improvements to make as well, search for "todo" in the code.
## Not supported
## Not supported/planned
But perhaps in the future...
There is currently no plan to implement the following. Though this may
change in the future.
- HTTP-based API for sending messages and receiving delivery feedback
- Functioning as SMTP relay
- Forwarding (to an external address)
- Autoresponders
- Functioning as an SMTP relay without authentication
- POP3
- Delivery to (unix) OS system users
- Mailing list manager
- Delivery to (unix) OS system users (mbox/Maildir)
- Support for pluggable delivery mechanisms
@ -145,18 +191,26 @@ But perhaps in the future...
## Why a new mail server implementation?
Mox aims to make "running a mail server" easy and nearly effortless. Excellent
quality mail server software exists, but getting a working setup typically
requires you configure half a dozen services (SMTP, IMAP, SPF/DKIM/DMARC, spam
filtering). That seems to lead to people no longer running their own mail
servers, instead switching to one of the few centralized email providers. Email
with SMTP is a long-time decentralized messaging protocol. To keep it
decentralized, people need to run their own mail server. Mox aims to make that
easy.
quality (open source) mail server software exists, but getting a working setup
typically requires you configure half a dozen services (SMTP, IMAP,
SPF/DKIM/DMARC, spam filtering), which are often written in C (where small bugs
often have large consequences). That seems to lead to people no longer running
their own mail servers, instead switching to one of the few centralized email
providers. Email with SMTP is a long-time decentralized messaging protocol. To
keep it decentralized, people need to run their own mail server. Mox aims to
make that easy.
## Where is the documentation?
See all commands and help text at https://pkg.go.dev/github.com/mjl-/mox/, and
example config files at https://pkg.go.dev/github.com/mjl-/mox/config/.
To keep mox as a project maintainable, documentation is integrated into, and
generated from the code.
A list of mox commands, and their help output, are at
https://www.xmox.nl/commands/.
Mox is configured through configuration files, and each field comes with
documentation. See https://www.xmox.nl/config/ for config files containing all
fields and their documentation.
You can get the same information by running "mox" without arguments to list its
subcommands and usage, and "mox help [subcommand]" for more details.
@ -164,9 +218,44 @@ subcommands and usage, and "mox help [subcommand]" for more details.
The example config files are printed by "mox config describe-static" and "mox
config describe-dynamic".
Mox is still in early stages, and documentation is still limited. Please create
an issue describing what is unclear or confusing, and we'll try to improve the
documentation.
If you're missing some documentation, please create an issue describing what is
unclear or confusing, and we'll try to improve the documentation.
## Is Mox affected by SMTP smuggling?
Mox itself is not affected: it only treats "\r\n.\r\n" as SMTP end-of-message.
But read on for caveats.
SMTP smuggling exploits differences in how SMTP servers handle carriage
returns (CR, "\r") and newlines (line feeds, LF, "\n") in the context of "dot
stuffing". SMTP is a text-based protocol. An SMTP transaction to send a
message is finalized with a "\r\n.\r\n" sequence. This sequence could occur in
the message being transferred, so any verbatim "." at the start of a line in a
message is "escaped" with another dot ("dot stuffing"), to not trigger the SMTP
end-of-message. SMTP smuggling takes advantage of bugs in some mail servers
that interpret other sequences than "\r\n.\r\n" as SMTP end-of-message. For
example "\n.\n" or even "\r.\r", and perhaps even other magic character
combinations.
Before v0.0.9, mox accepted SMTP transactions with bare carriage returns
(without newline) for compatibility with real-world email messages, considering
them meaningless and therefore innocuous.
Since v0.0.9, SMTP transactions with bare carriage returns are rejected.
Sending messages with bare carriage returns to buggy mail servers can cause
those mail servers to materialize non-existent messages. Now that mox rejects
messages with bare carriage returns, sending a message through mox can no
longer be used to trigger those bugs.
Mox can still handle bare carriage returns in email messages, e.g. those
imported from mbox files or Maildirs, or from messages added over IMAP. Mox
still fixes up messages with bare newlines by adding the missing carriage
returns.
Before v0.0.9, an SMTP transaction for a message containing "\n.\n" would
result in a non-specific error message, and "\r\n.\n" would result in the dot
being dropped. Since v0.0.9, these sequences are rejected with a message
mentioning SMTP smuggling.
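A minimal sketch of the strict rule in Go (an illustration, not mox's smtpserver implementation): each DATA line must end in CRLF, bare CR or LF is rejected, and only a bare "." line, i.e. "\r\n.\r\n" on the wire, ends the message.

package smtpserver // illustrative

import (
	"bytes"
	"errors"
)

// checkDataLine takes one line of an SMTP DATA transaction, including its
// line ending, and reports whether it terminates the message.
func checkDataLine(line []byte) (end bool, err error) {
	if !bytes.HasSuffix(line, []byte("\r\n")) {
		return false, errors.New("data line does not end in CRLF")
	}
	body := line[:len(line)-2]
	if bytes.ContainsAny(body, "\r\n") {
		return false, errors.New("bare CR or LF in data line")
	}
	return bytes.Equal(body, []byte(".")), nil
}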
## How do I import/export email?
@ -178,6 +267,10 @@ and copy or move messages from one account to the other.
Similarly, see the export functionality on the accounts web page and the "mox
export maildir" and "mox export mbox" subcommands to export email.
Importing large mailboxes may require a lot of memory (a limitation of the
current database). Splitting up mailboxes in smaller parts (e.g. 100k messages)
would help.
## How can I help?
Mox needs users and testing in real-life setups! So just give it a try, send
@ -193,31 +286,33 @@ compatibility issues, limitations, anti-spam measures, specification
violations, that would be interesting to hear about.
Pull requests for bug fixes and new code are welcome too. If the changes are
large, it helps to start a discussion (create a ticket) before doing all the
work.
large, it helps to start a discussion (create an "issue") before doing all the
work. In practice, starting with a small contribution and growing from there has
the highest chance of success.
By contributing (e.g. code), you agree your contributions are licensed under the
MIT license (like mox), and have the rights to do so.
## Where can I discuss mox?
Join #mox on irc.oftc.net, or #mox on the "Gopher slack".
Join #mox on irc.oftc.net, or #mox:matrix.org (https://matrix.to/#/#mox:matrix.org),
or #mox on the "Gopher slack".
For bug reports, please file an issue at https://github.com/mjl-/mox/issues/new.
## How do I change my password?
Regular users (doing IMAP/SMTP with authentication) can change their password
at the account page, e.g. http://localhost/. Or you can set a password with "mox
at the account page, e.g. `http://localhost/`. Or you can set a password with "mox
setaccountpassword".
The admin can change the password of any account through the admin page, at
http://localhost/admin/ by default (leave username empty when logging in).
`http://localhost/admin/` by default (leave username empty when logging in).
The account and admin pages are served on localhost on your mail server.
To access these from your browser, run
The account and admin pages are served on localhost for configs created with
the quickstart. To access these from your browser, run
`ssh -L 8080:localhost:80 you@yourmachine` locally and open
http://localhost:8080/[...].
`http://localhost:8080/[...]`.
The admin password can be changed with "mox setadminpassword".
@ -226,8 +321,13 @@ The admin password can be changed with "mox setadminpassword".
Unfortunately, mox does not yet provide an option for that. Mox does spam
filtering based on reputation of received messages. It will take a good amount
of work to share that information with a backup MX. Without that information,
spammers could use a backup MX to get their spam accepted. Until mox has a
proper solution, you can simply run a single SMTP server.
spammers could use a backup MX to get their spam accepted.
Until mox has a proper solution, you can simply run a single SMTP server. The
author has run a single mail server for over a decade without issues. Machines
and network connectivity are stable nowadays, and email delivery will be
retried for many hours during temporary errors (e.g. when rebooting a machine
after updates).
## How do I stay up to date?
@ -244,9 +344,44 @@ You can also monitor newly added releases on this repository with the github
(https://github.com/mjl-/mox/tags.atom) or releases
(https://github.com/mjl-/mox/releases.atom), or monitor the docker images.
Keep in mind you have a responsibility to keep the internect-connected software
Keep in mind you have a responsibility to keep the internet-connected software
you run up to date and secure.
## How do I upgrade my mox installation?
We try to make upgrades effortless and you can typically just put a new binary
in place and restart. If manual actions are required, the release notes mention
them. Check the release notes of all versions between your current installation
and the release you're upgrading to.
Before upgrading, make a backup of the config & data directory with `mox backup
<destdir>`. This copies all files from the config directory to
`<destdir>/config`, and creates `<destdir>/data` with consistent snapshots of
the database files, and message files from the outgoing queue and accounts.
Using the new mox binary, run `mox verifydata <destdir>/data` (do NOT use the
"live" data directory!) for a dry run. If this fails, an upgrade will probably
fail too.
Important: verifydata with the new mox binary can modify the database files
(due to automatic schema upgrades). So make a fresh backup again before the
actual upgrade. See the help output of the "backup" and "verifydata" commands
for more details.
During backup, message files are hardlinked if possible, and copied otherwise.
Using a destination directory like `data/tmp/backup` increases the odds
hardlinking succeeds: the default mox systemd service file mounts
the data directory separately, so hardlinks to outside the data directory are
cross-device and will fail.
If an upgrade fails and you have to restore (parts) of the data directory, you
should run `mox verifydata <datadir>` (with the original binary) on the
restored directory before starting mox again. If problematic files are found,
for example queue or account message files that are not in the database, run
`mox verifydata -fix <datadir>` to move away those files. After a restore, you may
also want to run `mox bumpuidvalidity <account>` for each account for which
messages in a mailbox changed, to force IMAP clients to synchronize mailbox
state.
## How secure is mox?
Security is high on the priority list for mox. Mox is young, so don't expect no
@ -270,15 +405,148 @@ should account for the size of the email messages (no compression currently),
an additional 15% overhead for the meta data, and add some more headroom.
Expand as necessary.
## Can I see some screenshots?
## Won't the big email providers block my email?
Yes, see https://www.xmox.nl/screenshots/.
It is a common misconception that it is impossible to run your own email server
nowadays. The claim is that the handful big email providers will simply block
your email. However, you can run your own email server just fine, and your
email will be accepted, provided you are doing it right.
Mox has an "account" web interface where users can view their account and
manage their address configuration, such as rules for automatically delivering
certain incoming messages to a specific mailbox.
If your email is rejected, it is often because your IP address has a bad email
sending reputation. Email servers often use IP blocklists to reject email from
networks with a bad email sending reputation. These blocklists often work at
the level of whole network ranges. So if you try to run an email server from a
hosting provider with a bad reputation (which happens if they don't monitor
their network or don't act on abuse/spam reports), your IP too will have a bad
reputation and other mail servers (both large and small) may reject messages
coming from you. During the quickstart, mox checks if your IPs are on a few
often-used blocklists. It's typically not a good idea to host an email server
on the cheapest or largest cloud providers: They often don't spend the
resources necessary for a good reputation, or they simply block all outgoing
SMTP traffic. It's better to look for a technically-focused local provider.
They too may initially block outgoing SMTP connections on new machines to
prevent spam from their networks. But they will either automatically open up
outgoing SMTP traffic after a cool down period (e.g. 24 hours), or after you've
contacted their support.
Mox also has an "admin" web interface where the mox instance administrator can
make changes, e.g. add/remove/modify domains/accounts/addresses.
After you get past the IP blocklist checks, email servers use many more signals
to determine if your email message could be spam and should be rejected. Mox
helps you set up a system that doesn't trigger most of the technical signals
(e.g. with SPF/DKIM/DMARC). But there are more signals, for example: Sending to
a mail server or address for the first time. Sending from a newly registered
domain (especially if you're sending automated messages, and if you send more
messages after previous messages were rejected), domains that existed for a few
weeks to a month are treated more friendly. Sending messages with content that
resembles known spam messages.
Mox does not have a webmail yet, so there are no screenshots of actual email.
Should your email be rejected, you will typically get an error message during
the SMTP transaction that explains why. In the case of big email providers the
error message often has instructions on how to prove to them you are a
legitimate sender.
## Can mox deliver through a smarthost?
Yes, you can configure a "Transport" in mox.conf and configure "Routes" in
domains.conf to send some or all messages through the transport. A transport
can be an SMTP relay or authenticated submission, or mox can make its outgoing
connections through a SOCKS proxy.
For an example, see https://www.xmox.nl/config/#hdr-example-transport. For
details about Transports and Routes, see
https://www.xmox.nl/config/#cfg-mox-conf-Transports and
https://www.xmox.nl/config/#cfg-domains-conf-Routes.
Remember to add the IP addresses of the transport to the SPF records of your
domains. Keep in mind some 3rd party submission servers may mishandle your
messages, for example by replacing your Message-Id header and thereby
invalidating your DKIM-signatures, or rejecting messages with more than one
DKIM-signature.
## Can I use mox to send transactional email?
Yes. While you can use SMTP submission to send messages you've composed
yourself, and monitor a mailbox for DSNs, a more convenient option is to use
the mox HTTP/JSON-based webapi and webhooks.
The mox webapi can be used to send outgoing messages that mox composes. The web
api can also be used to deal with messages stored in an account, like changing
message flags, retrieving messages in parsed form or individual parts of
multipart messages, or moving messages to another mailbox or deleting messages
altogether.
Mox webhooks can be used to receive updates about incoming and outgoing
deliveries. Mox can automatically manage per account suppression lists.
See https://www.xmox.nl/features/#hdr-webapi-and-webhooks for details.
## Can I use existing TLS certificates/keys?
Yes. The quickstart command creates a config that uses ACME with Let's Encrypt,
but you can change the config file to use existing certificate and key files.
You'll see "ACME: letsencrypt" in the "TLS" section of the "public" Listener.
Remove or comment out the ACME-line, and add a "KeyCerts" section, see
https://www.xmox.nl/config/#cfg-mox-conf-Listeners-x-TLS-KeyCerts
You can have multiple certificates and keys: The line with the "-" (dash) is
the start of a list item. Duplicate that line up to and including the line with
KeyFile for each certificate/key you have. Mox makes a TLS config that holds
all specified certificates/keys, and uses it for all services for that Listener
(including a webserver), choosing the correct certificate for incoming
requests.
Keep in mind that for each email domain you host, you will need a certificate
for `mta-sts.<domain>`, `autoconfig.<domain>` and `mail.<domain>`, unless you
disable MTA-STS, autoconfig and the client-settings-domain for that domain.
Mox opens the key and certificate files during initial startup, as root (and
passes file descriptors to the unprivileged process). No special permissions
are needed on the key and certificate files.
## Can I directly access mailboxes through the file system?
No, mox only provides access to email through protocols like IMAP.
While it can be convenient for users/email clients to access email through
conventions like Maildir, providing such access puts quite a burden on the
server: The server has to continuously watch for changes made to the mail store
by external programs, and sync its internal state. By only providing access to
emails through mox, the storage/state management is simpler and easier to
implement reliably.
Not providing direct file system access also allows future improvements in the
storage mechanism. Such as encryption of all stored messages. Programs won't be
able to access such messages directly.
Mox stores metadata about delivered messages in its per-account message index
database, more than fits in a simple (filename-based) format like Maildir. The
IP address of the remote SMTP server during delivery, SPF/DKIM/DMARC domains
and validation status, and more...
For efficiency, mox doesn't prepend message headers generated during delivery
(e.g. Authentication-Results) to the on-disk message file, but only stores them
in the database. This prevents a rewrite of the entire message file. When
reading a message, mox combines the prepended headers from the database with
the message file.
Mox user accounts have no relation to operating system user accounts. Multiple
system users reading their email on a single machine is not very common
anymore. All data (for all accounts) stored by mox is accessible only by the
mox process. Messages are currently stored as individual files in standard
Internet Message Format (IMF), at `data/accounts/<account>/msg/<dir>/<msgid>`:
`msgid` is a consecutive unique integer id assigned by the per-account message
index database; `dir` groups 8k consecutive message ids into a directory,
ensuring they don't become too large. The message index database file for an
account is at `data/accounts/<account>/index.db`, accessed with the bstore
database library, which uses bbolt (formerly BoltDB) for storage, a
transactional key/value library/file format inspired by LMDB.
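For illustration, a hypothetical helper computing such a path from the description above (8k ids per directory); the helper name is an assumption and the real code may differ.

package store // illustrative

import (
	"path/filepath"
	"strconv"
)

// messagePath returns the on-disk location of a message file for an account,
// grouping 8k (8192) consecutive message ids per directory.
func messagePath(dataDir, account string, msgID int64) string {
	dir := msgID / 8192
	return filepath.Join(dataDir, "accounts", account, "msg",
		strconv.FormatInt(dir, 10), strconv.FormatInt(msgID, 10))
}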
## How do I block IPs with authentication failures with fail2ban?
Mox includes a rate limiter for IPs/networks that cause too many authentication
failures. It automatically unblocks such IPs/networks after a while. So you may
not need fail2ban. If you want to use fail2ban, you could use this snippet:
[Definition]
failregex = .*failed authentication attempt.*remote=<HOST>
ignoreregex =

1158
admin/admin.go Normal file

File diff suppressed because it is too large

175
admin/clientconfig.go Normal file

@ -0,0 +1,175 @@
package admin
import (
"fmt"
"maps"
"slices"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mox-"
)
type TLSMode uint8
const (
TLSModeImmediate TLSMode = 0
TLSModeSTARTTLS TLSMode = 1
TLSModeNone TLSMode = 2
)
type ProtocolConfig struct {
Host dns.Domain
Port int
TLSMode TLSMode
EnabledOnHTTPS bool
}
type ClientConfig struct {
IMAP ProtocolConfig
Submission ProtocolConfig
}
// ClientConfigDomain returns a single IMAP and Submission client configuration for
// a domain.
func ClientConfigDomain(d dns.Domain) (rconfig ClientConfig, rerr error) {
var haveIMAP, haveSubmission bool
domConf, ok := mox.Conf.Domain(d)
if !ok {
return ClientConfig{}, fmt.Errorf("%w: unknown domain", ErrRequest)
}
gather := func(l config.Listener) (done bool) {
host := mox.Conf.Static.HostnameDomain
if l.Hostname != "" {
host = l.HostnameDomain
}
if domConf.ClientSettingsDomain != "" {
host = domConf.ClientSettingsDNSDomain
}
if !haveIMAP && l.IMAPS.Enabled {
rconfig.IMAP.Host = host
rconfig.IMAP.Port = config.Port(l.IMAPS.Port, 993)
rconfig.IMAP.TLSMode = TLSModeImmediate
rconfig.IMAP.EnabledOnHTTPS = l.IMAPS.EnabledOnHTTPS
haveIMAP = true
}
if !haveIMAP && l.IMAP.Enabled {
rconfig.IMAP.Host = host
rconfig.IMAP.Port = config.Port(l.IMAP.Port, 143)
rconfig.IMAP.TLSMode = TLSModeSTARTTLS
if l.TLS == nil {
rconfig.IMAP.TLSMode = TLSModeNone
}
haveIMAP = true
}
if !haveSubmission && l.Submissions.Enabled {
rconfig.Submission.Host = host
rconfig.Submission.Port = config.Port(l.Submissions.Port, 465)
rconfig.Submission.TLSMode = TLSModeImmediate
rconfig.Submission.EnabledOnHTTPS = l.Submissions.EnabledOnHTTPS
haveSubmission = true
}
if !haveSubmission && l.Submission.Enabled {
rconfig.Submission.Host = host
rconfig.Submission.Port = config.Port(l.Submission.Port, 587)
rconfig.Submission.TLSMode = TLSModeSTARTTLS
if l.TLS == nil {
rconfig.Submission.TLSMode = TLSModeNone
}
haveSubmission = true
}
return haveIMAP && haveSubmission
}
// Look at the public listener first. Most likely the intended configuration.
if public, ok := mox.Conf.Static.Listeners["public"]; ok {
if gather(public) {
return
}
}
// Go through the other listeners in consistent order.
names := slices.Sorted(maps.Keys(mox.Conf.Static.Listeners))
for _, name := range names {
if gather(mox.Conf.Static.Listeners[name]) {
return
}
}
return ClientConfig{}, fmt.Errorf("%w: no listeners found for imap and/or submission", ErrRequest)
}
// ClientConfigs holds the client configuration for IMAP/Submission for a
// domain.
type ClientConfigs struct {
Entries []ClientConfigsEntry
}
type ClientConfigsEntry struct {
Protocol string
Host dns.Domain
Port int
Listener string
Note string
}
// ClientConfigsDomain returns the client configs for IMAP/Submission for a
// domain.
func ClientConfigsDomain(d dns.Domain) (ClientConfigs, error) {
domConf, ok := mox.Conf.Domain(d)
if !ok {
return ClientConfigs{}, fmt.Errorf("%w: unknown domain", ErrRequest)
}
c := ClientConfigs{}
c.Entries = []ClientConfigsEntry{}
var listeners []string
for name := range mox.Conf.Static.Listeners {
listeners = append(listeners, name)
}
slices.Sort(listeners)
note := func(tls bool, requiretls bool) string {
if !tls {
return "plain text, no STARTTLS configured"
}
if requiretls {
return "STARTTLS required"
}
return "STARTTLS optional"
}
for _, name := range listeners {
l := mox.Conf.Static.Listeners[name]
host := mox.Conf.Static.HostnameDomain
if l.Hostname != "" {
host = l.HostnameDomain
}
if domConf.ClientSettingsDomain != "" {
host = domConf.ClientSettingsDNSDomain
}
if l.Submissions.Enabled {
note := "with TLS"
if l.Submissions.EnabledOnHTTPS {
note += "; also served on port 443 with TLS ALPN \"smtp\""
}
c.Entries = append(c.Entries, ClientConfigsEntry{"Submission (SMTP)", host, config.Port(l.Submissions.Port, 465), name, note})
}
if l.IMAPS.Enabled {
note := "with TLS"
if l.IMAPS.EnabledOnHTTPS {
note += "; also served on port 443 with TLS ALPN \"imap\""
}
c.Entries = append(c.Entries, ClientConfigsEntry{"IMAP", host, config.Port(l.IMAPS.Port, 993), name, note})
}
if l.Submission.Enabled {
c.Entries = append(c.Entries, ClientConfigsEntry{"Submission (SMTP)", host, config.Port(l.Submission.Port, 587), name, note(l.TLS != nil, !l.Submission.NoRequireSTARTTLS)})
}
if l.IMAP.Enabled {
c.Entries = append(c.Entries, ClientConfigsEntry{"IMAP", host, config.Port(l.IMAPS.Port, 143), name, note(l.TLS != nil, !l.IMAP.NoRequireSTARTTLS)})
}
}
return c, nil
}
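As a usage sketch (a hypothetical caller, not code from this change, assuming
dns.ParseDomain exists and mox.Conf has been loaded), ClientConfigDomain above
could be exercised like this:

package main

import (
	"fmt"

	"github.com/mjl-/mox/admin"
	"github.com/mjl-/mox/dns"
)

// Hypothetical caller, not part of this change: print the IMAP and Submission
// endpoints for a domain. Assumes dns.ParseDomain and a loaded mox config.
func main() {
	d, err := dns.ParseDomain("example.org")
	if err != nil {
		panic(err)
	}
	cc, err := admin.ClientConfigDomain(d)
	if err != nil {
		panic(err)
	}
	fmt.Printf("imap %s:%d, submission %s:%d\n", cc.IMAP.Host.ASCII, cc.IMAP.Port, cc.Submission.Host.ASCII, cc.Submission.Port)
}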

318
admin/dnsrecords.go Normal file

@@ -0,0 +1,318 @@
package admin
import (
"crypto"
"crypto/ed25519"
"crypto/rsa"
"crypto/sha256"
"crypto/x509"
"fmt"
"net/url"
"strings"
"github.com/mjl-/adns"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dkim"
"github.com/mjl-/mox/dmarc"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/smtp"
"github.com/mjl-/mox/spf"
"github.com/mjl-/mox/tlsrpt"
"slices"
)
// todo: find a way to automatically create the dns records as it would greatly simplify setting up email for a domain. we could also dynamically make changes, e.g. providing grace periods after disabling a dkim key, only automatically removing the dkim dns key after a few days. but this requires some kind of api and authentication to the dns server. there doesn't appear to be a single commonly used api for dns management. each of the numerous cloud providers has its own API and a rather large SDK to use it. we don't want to link all of them in.
// DomainRecords returns text lines describing DNS records required for configuring
// a domain.
//
// If certIssuerDomainName is set, CAA records to limit TLS certificate issuance to
// that caID will be suggested. If acmeAccountURI is also set, CAA records also
// restricting issuance to that account ID will be suggested.
func DomainRecords(domConf config.Domain, domain dns.Domain, hasDNSSEC bool, certIssuerDomainName, acmeAccountURI string) ([]string, error) {
d := domain.ASCII
h := mox.Conf.Static.HostnameDomain.ASCII
// The first line with ";" is used by ../testdata/integration/moxacmepebble.sh and
// ../testdata/integration/moxmail2.sh for selecting DNS records
records := []string{
"; Time To Live of 5 minutes, may be recognized if importing as a zone file.",
"; Once your setup is working, you may want to increase the TTL.",
"$TTL 300",
"",
}
if public, ok := mox.Conf.Static.Listeners["public"]; ok && public.TLS != nil && (len(public.TLS.HostPrivateRSA2048Keys) > 0 || len(public.TLS.HostPrivateECDSAP256Keys) > 0) {
records = append(records,
`; DANE: These records indicate that a remote mail server trying to deliver email`,
`; with SMTP (TCP port 25) must verify the TLS certificate with DANE-EE (3), based`,
`; on the certificate public key ("SPKI", 1) that is SHA2-256-hashed (1) to the`,
`; hexadecimal hash. DANE-EE verification means only the certificate or public`,
`; key is verified, not whether the certificate is signed by a (centralized)`,
`; certificate authority (CA), is expired, or matches the host name.`,
`;`,
`; NOTE: Create the records below only once: They are for the machine, and apply`,
`; to all hosted domains.`,
)
if !hasDNSSEC {
records = append(records,
";",
"; WARNING: Domain does not appear to be DNSSEC-signed. To enable DANE, first",
"; enable DNSSEC on your domain, then add the TLSA records. Records below have been",
"; commented out.",
)
}
addTLSA := func(privKey crypto.Signer) error {
spkiBuf, err := x509.MarshalPKIXPublicKey(privKey.Public())
if err != nil {
return fmt.Errorf("marshal SubjectPublicKeyInfo for DANE record: %v", err)
}
sum := sha256.Sum256(spkiBuf)
tlsaRecord := adns.TLSA{
Usage: adns.TLSAUsageDANEEE,
Selector: adns.TLSASelectorSPKI,
MatchType: adns.TLSAMatchTypeSHA256,
CertAssoc: sum[:],
}
var s string
if hasDNSSEC {
s = fmt.Sprintf("_25._tcp.%-*s TLSA %s", 20+len(d)-len("_25._tcp."), h+".", tlsaRecord.Record())
} else {
s = fmt.Sprintf(";; _25._tcp.%-*s TLSA %s", 20+len(d)-len(";; _25._tcp."), h+".", tlsaRecord.Record())
}
records = append(records, s)
return nil
}
for _, privKey := range public.TLS.HostPrivateECDSAP256Keys {
if err := addTLSA(privKey); err != nil {
return nil, err
}
}
for _, privKey := range public.TLS.HostPrivateRSA2048Keys {
if err := addTLSA(privKey); err != nil {
return nil, err
}
}
records = append(records, "")
}
if d != h {
records = append(records,
"; For the machine, only needs to be created once, for the first domain added:",
"; ",
"; SPF-allow host for itself, resulting in relaxed DMARC pass for (postmaster)",
"; messages (DSNs) sent from host:",
fmt.Sprintf(`%-*s TXT "v=spf1 a -all"`, 20+len(d), h+"."), // ../rfc/7208:2263 ../rfc/7208:2287
"",
)
}
if d != h && mox.Conf.Static.HostTLSRPT.ParsedLocalpart != "" {
uri := url.URL{
Scheme: "mailto",
Opaque: smtp.NewAddress(mox.Conf.Static.HostTLSRPT.ParsedLocalpart, mox.Conf.Static.HostnameDomain).Pack(false),
}
tlsrptr := tlsrpt.Record{Version: "TLSRPTv1", RUAs: [][]tlsrpt.RUA{{tlsrpt.RUA(uri.String())}}}
records = append(records,
"; For the machine, only needs to be created once, for the first domain added:",
"; ",
"; Request reporting about success/failures of TLS connections to (MX) host, for DANE.",
fmt.Sprintf(`_smtp._tls.%-*s TXT "%s"`, 20+len(d)-len("_smtp._tls."), h+".", tlsrptr.String()),
"",
)
}
records = append(records,
"; Deliver email for the domain to this host.",
fmt.Sprintf("%s. MX 10 %s.", d, h),
"",
"; Outgoing messages will be signed with the first two DKIM keys. The other two",
"; configured for backup, switching to them is just a config change.",
)
var selectors []string
for name := range domConf.DKIM.Selectors {
selectors = append(selectors, name)
}
slices.Sort(selectors)
for _, name := range selectors {
sel := domConf.DKIM.Selectors[name]
dkimr := dkim.Record{
Version: "DKIM1",
Hashes: []string{"sha256"},
PublicKey: sel.Key.Public(),
}
if _, ok := sel.Key.(ed25519.PrivateKey); ok {
dkimr.Key = "ed25519"
} else if _, ok := sel.Key.(*rsa.PrivateKey); !ok {
return nil, fmt.Errorf("unrecognized private key for DKIM selector %q: %T", name, sel.Key)
}
txt, err := dkimr.Record()
if err != nil {
return nil, fmt.Errorf("making DKIM DNS TXT record: %v", err)
}
if len(txt) > 100 {
records = append(records,
"; NOTE: The following is a single long record split over several lines for use",
"; in zone files. When adding through a DNS operator web interface, combine the",
"; strings into a single string, without ().",
)
}
s := fmt.Sprintf("%s._domainkey.%s. TXT %s", name, d, mox.TXTStrings(txt))
records = append(records, s)
}
dmarcr := dmarc.DefaultRecord
dmarcr.Policy = "reject"
if domConf.DMARC != nil {
uri := url.URL{
Scheme: "mailto",
Opaque: smtp.NewAddress(domConf.DMARC.ParsedLocalpart, domConf.DMARC.DNSDomain).Pack(false),
}
dmarcr.AggregateReportAddresses = []dmarc.URI{
{Address: uri.String(), MaxSize: 10, Unit: "m"},
}
}
dspfr := spf.Record{Version: "spf1"}
for _, ip := range mox.DomainSPFIPs() {
mech := "ip4"
if ip.To4() == nil {
mech = "ip6"
}
dspfr.Directives = append(dspfr.Directives, spf.Directive{Mechanism: mech, IP: ip})
}
dspfr.Directives = append(dspfr.Directives,
spf.Directive{Mechanism: "mx"},
spf.Directive{Qualifier: "~", Mechanism: "all"},
)
dspftxt, err := dspfr.Record()
if err != nil {
return nil, fmt.Errorf("making domain spf record: %v", err)
}
records = append(records,
"",
"; Specify the MX host is allowed to send for our domain and for itself (for DSNs).",
"; ~all means softfail for anything else, which is done instead of -all to prevent older",
"; mail servers from rejecting the message because they never get to looking for a dkim/dmarc pass.",
fmt.Sprintf(`%s. TXT "%s"`, d, dspftxt),
"",
"; Emails that fail the DMARC check (without aligned DKIM and without aligned SPF)",
"; should be rejected, and request reports. If you email through mailing lists that",
"; strip DKIM-Signature headers and don't rewrite the From header, you may want to",
"; set the policy to p=none.",
fmt.Sprintf(`_dmarc.%s. TXT "%s"`, d, dmarcr.String()),
"",
)
if sts := domConf.MTASTS; sts != nil {
records = append(records,
"; Remote servers can use MTA-STS to verify our TLS certificate with the",
"; WebPKI pool of CA's (certificate authorities) when delivering over SMTP with",
"; STARTTLS.",
fmt.Sprintf(`mta-sts.%s. CNAME %s.`, d, h),
fmt.Sprintf(`_mta-sts.%s. TXT "v=STSv1; id=%s"`, d, sts.PolicyID),
"",
)
} else {
records = append(records,
"; Note: No MTA-STS to indicate TLS should be used. Either because disabled for the",
"; domain or because mox.conf does not have a listener with MTA-STS configured.",
"",
)
}
if domConf.TLSRPT != nil {
uri := url.URL{
Scheme: "mailto",
Opaque: smtp.NewAddress(domConf.TLSRPT.ParsedLocalpart, domConf.TLSRPT.DNSDomain).Pack(false),
}
tlsrptr := tlsrpt.Record{Version: "TLSRPTv1", RUAs: [][]tlsrpt.RUA{{tlsrpt.RUA(uri.String())}}}
records = append(records,
"; Request reporting about TLS failures.",
fmt.Sprintf(`_smtp._tls.%s. TXT "%s"`, d, tlsrptr.String()),
"",
)
}
if domConf.ClientSettingsDomain != "" && domConf.ClientSettingsDNSDomain != mox.Conf.Static.HostnameDomain {
records = append(records,
"; Client settings will reference a subdomain of the hosted domain, making it",
"; easier to migrate to a different server in the future by not requiring settings",
"; in all clients to be updated.",
fmt.Sprintf(`%-*s CNAME %s.`, 20+len(d), domConf.ClientSettingsDNSDomain.ASCII+".", h),
"",
)
}
records = append(records,
"; Autoconfig is used by Thunderbird. Autodiscover is (in theory) used by Microsoft.",
fmt.Sprintf(`autoconfig.%s. CNAME %s.`, d, h),
fmt.Sprintf(`_autodiscover._tcp.%s. SRV 0 1 443 %s.`, d, h),
"",
// ../rfc/6186:133 ../rfc/8314:692
"; For secure IMAP and submission autoconfig, point to mail host.",
fmt.Sprintf(`_imaps._tcp.%s. SRV 0 1 993 %s.`, d, h),
fmt.Sprintf(`_submissions._tcp.%s. SRV 0 1 465 %s.`, d, h),
"",
// ../rfc/6186:242
"; Next records specify POP3 and non-TLS ports are not to be used.",
"; These are optional and safe to leave out (e.g. if you have to click a lot in a",
"; DNS admin web interface).",
fmt.Sprintf(`_imap._tcp.%s. SRV 0 0 0 .`, d),
fmt.Sprintf(`_submission._tcp.%s. SRV 0 0 0 .`, d),
fmt.Sprintf(`_pop3._tcp.%s. SRV 0 0 0 .`, d),
fmt.Sprintf(`_pop3s._tcp.%s. SRV 0 0 0 .`, d),
)
if certIssuerDomainName != "" {
// ../rfc/8659:18 for CAA records.
records = append(records,
"",
"; Optional:",
"; You could mark Let's Encrypt as the only Certificate Authority allowed to",
"; sign TLS certificates for your domain.",
fmt.Sprintf(`%s. CAA 0 issue "%s"`, d, certIssuerDomainName),
)
if acmeAccountURI != "" {
// ../rfc/8657:99 for accounturi.
// ../rfc/8657:147 for validationmethods.
records = append(records,
";",
"; Optionally limit certificates for this domain to the account ID and methods used by mox.",
fmt.Sprintf(`;; %s. CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, d, certIssuerDomainName, acmeAccountURI),
";",
"; Or alternatively only limit for email-specific subdomains, so you can use",
"; other accounts/methods for other subdomains.",
fmt.Sprintf(`;; autoconfig.%s. CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, d, certIssuerDomainName, acmeAccountURI),
fmt.Sprintf(`;; mta-sts.%s. CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, d, certIssuerDomainName, acmeAccountURI),
)
if domConf.ClientSettingsDomain != "" && domConf.ClientSettingsDNSDomain != mox.Conf.Static.HostnameDomain {
records = append(records,
fmt.Sprintf(`;; %-*s CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, 20-3+len(d), domConf.ClientSettingsDNSDomain.ASCII, certIssuerDomainName, acmeAccountURI),
)
}
if strings.HasSuffix(h, "."+d) {
records = append(records,
";",
"; And the mail hostname.",
fmt.Sprintf(`;; %-*s CAA 0 issue "%s; accounturi=%s; validationmethods=tls-alpn-01,http-01"`, 20-3+len(d), h+".", certIssuerDomainName, acmeAccountURI),
)
}
} else {
// The string "will be suggested" is used by
// ../testdata/integration/moxacmepebble.sh and ../testdata/integration/moxmail2.sh
// as end of DNS records.
records = append(records,
";",
"; Note: After starting up, once an ACME account has been created, CAA records",
"; that restrict issuance to the account will be suggested.",
)
}
}
return records, nil
}

38
apidiff.sh Executable file

@@ -0,0 +1,38 @@
#!/bin/sh
set -e
prevversion=$(go list -mod=readonly -m -f '{{ .Version }}' github.com/mjl-/mox@latest)
if ! test -d tmp/mox-$prevversion; then
mkdir -p tmp/mox-$prevversion
git archive --format=tar $prevversion | tar -C tmp/mox-$prevversion -xf -
fi
(rm -r tmp/apidiff || exit 0)
mkdir -p tmp/apidiff/$prevversion tmp/apidiff/next
(rm apidiff/next.txt.new 2>/dev/null || exit 0)
touch apidiff/next.txt.new
for p in $(cat apidiff/packages.txt); do
if ! test -d tmp/mox-$prevversion/$p; then
continue
fi
(cd tmp/mox-$prevversion && apidiff -w ../apidiff/$prevversion/$p.api ./$p)
apidiff -w tmp/apidiff/next/$p.api ./$p
apidiff -incompatible tmp/apidiff/$prevversion/$p.api tmp/apidiff/next/$p.api >$p.diff
if test -s $p.diff; then
(
echo '#' $p
cat $p.diff
echo
) >>apidiff/next.txt.new
fi
rm $p.diff
done
if test -s apidiff/next.txt.new; then
(
echo "Below are the incompatible changes between $prevversion and next, per package."
echo
cat apidiff/next.txt.new
) >apidiff/next.txt
rm apidiff/next.txt.new
else
mv apidiff/next.txt.new apidiff/next.txt
fi

10
apidiff/README.txt Normal file

@@ -0,0 +1,10 @@
This directory lists incompatible changes between released versions for packages
intended for reuse by third party projects, as listed in packages.txt. These
files are generated using golang.org/x/exp/cmd/apidiff (see
https://pkg.go.dev/golang.org/x/exp/apidiff) and ../apidiff.sh.
There is no guarantee that there will be no breaking changes. With Go's
dependency versioning approach (minimal version selection), Go code will never
unexpectedly stop compiling. Incompatibilities will show when explicitly
updating a dependency. Making the required changes is typically fairly
straightforward.
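Regenerating these files requires the apidiff tool on PATH; something along
these lines should work (the exact invocation is an assumption, see
../apidiff.sh for the details):

go install golang.org/x/exp/cmd/apidiff@latest
./apidiff.sh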

5
apidiff/next.txt Normal file

@@ -0,0 +1,5 @@
Below are the incompatible changes between v0.0.15 and next, per package.
# smtpclient
- GatherDestinations: changed from func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain) (bool, bool, bool, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.IPDomain, bool, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain) (bool, bool, bool, github.com/mjl-/mox/dns.Domain, []HostPref, bool, error)

20
apidiff/packages.txt Normal file

@@ -0,0 +1,20 @@
dane
dmarc
dmarcrpt
dns
dnsbl
iprev
message
mtasts
publicsuffix
ratelimit
sasl
scram
smtp
smtpclient
spf
subjectpass
tlsrpt
updates
webapi
webhook

79
apidiff/v0.0.10.txt Normal file

@@ -0,0 +1,79 @@
Below are the incompatible changes between v0.0.9 and v0.0.10, per package.
# dane
- Dial: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, string, string, []github.com/mjl-/adns.TLSAUsage, *crypto/x509.CertPool) (net.Conn, github.com/mjl-/adns.TLSA, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, string, string, []github.com/mjl-/adns.TLSAUsage, *crypto/x509.CertPool) (net.Conn, github.com/mjl-/adns.TLSA, error)
- TLSClientConfig: changed from func(*golang.org/x/exp/slog.Logger, []github.com/mjl-/adns.TLSA, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.Domain, *github.com/mjl-/adns.TLSA, *crypto/x509.CertPool) crypto/tls.Config to func(*log/slog.Logger, []github.com/mjl-/adns.TLSA, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.Domain, *github.com/mjl-/adns.TLSA, *crypto/x509.CertPool) crypto/tls.Config
- Verify: changed from func(*golang.org/x/exp/slog.Logger, []github.com/mjl-/adns.TLSA, crypto/tls.ConnectionState, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.Domain, *crypto/x509.CertPool) (bool, github.com/mjl-/adns.TLSA, error) to func(*log/slog.Logger, []github.com/mjl-/adns.TLSA, crypto/tls.ConnectionState, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.Domain, *crypto/x509.CertPool) (bool, github.com/mjl-/adns.TLSA, error)
# dmarc
- Lookup: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Status, github.com/mjl-/mox/dns.Domain, *Record, string, bool, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Status, github.com/mjl-/mox/dns.Domain, *Record, string, bool, error)
- LookupExternalReportsAccepted: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, github.com/mjl-/mox/dns.Domain) (bool, Status, []*Record, []string, bool, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, github.com/mjl-/mox/dns.Domain) (bool, Status, []*Record, []string, bool, error)
- Verify: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dkim.Result, github.com/mjl-/mox/spf.Status, *github.com/mjl-/mox/dns.Domain, bool) (bool, Result) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dkim.Result, github.com/mjl-/mox/spf.Status, *github.com/mjl-/mox/dns.Domain, bool) (bool, Result)
# dmarcrpt
- ParseMessageReport: changed from func(*golang.org/x/exp/slog.Logger, io.ReaderAt) (*Feedback, error) to func(*log/slog.Logger, io.ReaderAt) (*Feedback, error)
# dns
- StrictResolver.Log: changed from *golang.org/x/exp/slog.Logger to *log/slog.Logger
# dnsbl
- CheckHealth: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) error to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) error
- Lookup: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, net.IP) (Status, string, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, net.IP) (Status, string, error)
# iprev
# message
- (*Part).ParseNextPart: changed from func(*golang.org/x/exp/slog.Logger) (*Part, error) to func(*log/slog.Logger) (*Part, error)
- (*Part).Walk: changed from func(*golang.org/x/exp/slog.Logger, *Part) error to func(*log/slog.Logger, *Part) error
- EnsurePart: changed from func(*golang.org/x/exp/slog.Logger, bool, io.ReaderAt, int64) (Part, error) to func(*log/slog.Logger, bool, io.ReaderAt, int64) (Part, error)
- From: changed from func(*golang.org/x/exp/slog.Logger, bool, io.ReaderAt) (github.com/mjl-/mox/smtp.Address, *Envelope, net/textproto.MIMEHeader, error) to func(*log/slog.Logger, bool, io.ReaderAt) (github.com/mjl-/mox/smtp.Address, *Envelope, net/textproto.MIMEHeader, error)
- Parse: changed from func(*golang.org/x/exp/slog.Logger, bool, io.ReaderAt) (Part, error) to func(*log/slog.Logger, bool, io.ReaderAt) (Part, error)
# mtasts
- FetchPolicy: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Domain) (*Policy, string, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Domain) (*Policy, string, error)
- Get: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, *Policy, string, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, *Policy, string, error)
- HTTPClientObserve: changed from func(context.Context, *golang.org/x/exp/slog.Logger, string, string, int, error, time.Time) to func(context.Context, *log/slog.Logger, string, string, int, error, time.Time)
- LookupRecord: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, string, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, string, error)
# publicsuffix
- List.Lookup: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Domain) github.com/mjl-/mox/dns.Domain to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Domain) github.com/mjl-/mox/dns.Domain
- Lookup: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Domain) github.com/mjl-/mox/dns.Domain to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Domain) github.com/mjl-/mox/dns.Domain
- ParseList: changed from func(*golang.org/x/exp/slog.Logger, io.Reader) (List, error) to func(*log/slog.Logger, io.Reader) (List, error)
# ratelimit
# sasl
# scram
# smtp
- SePol7ARCFail: removed
- SePol7MissingReqTLS: removed
# smtpclient
- Dial: changed from func(context.Context, *golang.org/x/exp/slog.Logger, Dialer, github.com/mjl-/mox/dns.IPDomain, []net.IP, int, map[string][]net.IP, []net.IP) (net.Conn, net.IP, error) to func(context.Context, *log/slog.Logger, Dialer, github.com/mjl-/mox/dns.IPDomain, []net.IP, int, map[string][]net.IP, []net.IP) (net.Conn, net.IP, error)
- Error: old is comparable, new is not
- GatherDestinations: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain) (bool, bool, bool, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.IPDomain, bool, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain) (bool, bool, bool, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.IPDomain, bool, error)
- GatherIPs: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain, map[string][]net.IP) (bool, bool, github.com/mjl-/mox/dns.Domain, []net.IP, bool, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain, map[string][]net.IP) (bool, bool, github.com/mjl-/mox/dns.Domain, []net.IP, bool, error)
- GatherTLSA: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, bool, github.com/mjl-/mox/dns.Domain) (bool, []github.com/mjl-/adns.TLSA, github.com/mjl-/mox/dns.Domain, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, bool, github.com/mjl-/mox/dns.Domain) (bool, []github.com/mjl-/adns.TLSA, github.com/mjl-/mox/dns.Domain, error)
- New: changed from func(context.Context, *golang.org/x/exp/slog.Logger, net.Conn, TLSMode, bool, github.com/mjl-/mox/dns.Domain, github.com/mjl-/mox/dns.Domain, Opts) (*Client, error) to func(context.Context, *log/slog.Logger, net.Conn, TLSMode, bool, github.com/mjl-/mox/dns.Domain, github.com/mjl-/mox/dns.Domain, Opts) (*Client, error)
# spf
- Evaluate: changed from func(context.Context, *golang.org/x/exp/slog.Logger, *Record, github.com/mjl-/mox/dns.Resolver, Args) (Status, string, string, bool, error) to func(context.Context, *log/slog.Logger, *Record, github.com/mjl-/mox/dns.Resolver, Args) (Status, string, string, bool, error)
- Lookup: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Status, string, *Record, bool, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Status, string, *Record, bool, error)
- Verify: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, Args) (Received, github.com/mjl-/mox/dns.Domain, string, bool, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, Args) (Received, github.com/mjl-/mox/dns.Domain, string, bool, error)
# subjectpass
- Generate: changed from func(*golang.org/x/exp/slog.Logger, github.com/mjl-/mox/smtp.Address, []byte, time.Time) string to func(*log/slog.Logger, github.com/mjl-/mox/smtp.Address, []byte, time.Time) string
- Verify: changed from func(*golang.org/x/exp/slog.Logger, io.ReaderAt, []byte, time.Duration) error to func(*log/slog.Logger, io.ReaderAt, []byte, time.Duration) error
# tlsrpt
- Lookup: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, string, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, string, error)
- ParseMessage: changed from func(*golang.org/x/exp/slog.Logger, io.ReaderAt) (*ReportJSON, error) to func(*log/slog.Logger, io.ReaderAt) (*ReportJSON, error)
# updates
- Check: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, Version, string, []byte) (Version, *Record, *Changelog, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, Version, string, []byte) (Version, *Record, *Changelog, error)
- FetchChangelog: changed from func(context.Context, *golang.org/x/exp/slog.Logger, string, Version, []byte) (*Changelog, error) to func(context.Context, *log/slog.Logger, string, Version, []byte) (*Changelog, error)
- HTTPClientObserve: changed from func(context.Context, *golang.org/x/exp/slog.Logger, string, string, int, error, time.Time) to func(context.Context, *log/slog.Logger, string, string, int, error, time.Time)
- Lookup: changed from func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Version, *Record, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Version, *Record, error)

45
apidiff/v0.0.11.txt Normal file

@@ -0,0 +1,45 @@
Below are the incompatible changes between v0.0.10 and v0.0.11, per package.
# dane
# dmarc
- DMARCPolicy: removed
# dmarcrpt
# dns
# dnsbl
# iprev
# message
- (*Composer).TextPart: changed from func(string) ([]byte, string, string) to func(string, string) ([]byte, string, string)
- From: changed from func(*log/slog.Logger, bool, io.ReaderAt) (github.com/mjl-/mox/smtp.Address, *Envelope, net/textproto.MIMEHeader, error) to func(*log/slog.Logger, bool, io.ReaderAt, *Part) (github.com/mjl-/mox/smtp.Address, *Envelope, net/textproto.MIMEHeader, error)
- NewComposer: changed from func(io.Writer, int64) *Composer to func(io.Writer, int64, bool) *Composer
# mtasts
- STSMX: removed
# publicsuffix
# ratelimit
# sasl
# scram
# smtp
- SeMsg6ConversoinUnsupported3: removed
# smtpclient
- GatherIPs: changed from func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain, map[string][]net.IP) (bool, bool, github.com/mjl-/mox/dns.Domain, []net.IP, bool, error) to func(context.Context, *log/slog.Logger, github.com/mjl-/mox/dns.Resolver, string, github.com/mjl-/mox/dns.IPDomain, map[string][]net.IP) (bool, bool, github.com/mjl-/mox/dns.Domain, []net.IP, bool, error)
# spf
# subjectpass
# tlsrpt
# updates

43
apidiff/v0.0.12.txt Normal file

@@ -0,0 +1,43 @@
Below are the incompatible changes between v0.0.11 and next, per package.
# dane
# dmarc
# dmarcrpt
# dns
# dnsbl
# iprev
# message
- (*HeaderWriter).AddWrap: changed from func([]byte) to func([]byte, bool)
# mtasts
# publicsuffix
# ratelimit
# sasl
# scram
# smtp
# smtpclient
# spf
# subjectpass
# tlsrpt
# updates
# webapi
# webhook

5
apidiff/v0.0.13.txt Normal file

@@ -0,0 +1,5 @@
Below are the incompatible changes between v0.0.13 and next, per package.
# webhook
- PartStructure: removed

7
apidiff/v0.0.15.txt Normal file

@@ -0,0 +1,7 @@
Below are the incompatible changes between v0.0.14 and next, per package.
# message
- Part.ContentDescription: changed from string to *string
- Part.ContentID: changed from string to *string
- Part.ContentTransferEncoding: changed from string to *string

83
apidiff/v0.0.9.txt Normal file

@@ -0,0 +1,83 @@
Below are the incompatible changes between v0.0.8 and v0.0.9, per package.
# dane
- Dial: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, string, string, []github.com/mjl-/adns.TLSAUsage) (net.Conn, github.com/mjl-/adns.TLSA, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, string, string, []github.com/mjl-/adns.TLSAUsage, *crypto/x509.CertPool) (net.Conn, github.com/mjl-/adns.TLSA, error)
- TLSClientConfig: changed from func(*github.com/mjl-/mox/mlog.Log, []github.com/mjl-/adns.TLSA, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.Domain, *github.com/mjl-/adns.TLSA) crypto/tls.Config to func(*golang.org/x/exp/slog.Logger, []github.com/mjl-/adns.TLSA, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.Domain, *github.com/mjl-/adns.TLSA, *crypto/x509.CertPool) crypto/tls.Config
- Verify: changed from func(*github.com/mjl-/mox/mlog.Log, []github.com/mjl-/adns.TLSA, crypto/tls.ConnectionState, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.Domain) (bool, github.com/mjl-/adns.TLSA, error) to func(*golang.org/x/exp/slog.Logger, []github.com/mjl-/adns.TLSA, crypto/tls.ConnectionState, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.Domain, *crypto/x509.CertPool) (bool, github.com/mjl-/adns.TLSA, error)
# dmarc
- Lookup: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Status, github.com/mjl-/mox/dns.Domain, *Record, string, bool, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Status, github.com/mjl-/mox/dns.Domain, *Record, string, bool, error)
- LookupExternalReportsAccepted: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, github.com/mjl-/mox/dns.Domain) (bool, Status, []*Record, []string, bool, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, github.com/mjl-/mox/dns.Domain) (bool, Status, []*Record, []string, bool, error)
- Verify: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dkim.Result, github.com/mjl-/mox/spf.Status, *github.com/mjl-/mox/dns.Domain, bool) (bool, Result) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dkim.Result, github.com/mjl-/mox/spf.Status, *github.com/mjl-/mox/dns.Domain, bool) (bool, Result)
# dmarcrpt
- ParseMessageReport: changed from func(*github.com/mjl-/mox/mlog.Log, io.ReaderAt) (*Feedback, error) to func(*golang.org/x/exp/slog.Logger, io.ReaderAt) (*Feedback, error)
# dns
# dnsbl
- CheckHealth: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) error to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) error
- Lookup: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, net.IP) (Status, string, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, net.IP) (Status, string, error)
# iprev
# message
- (*Part).ParseNextPart: changed from func(*github.com/mjl-/mox/mlog.Log) (*Part, error) to func(*golang.org/x/exp/slog.Logger) (*Part, error)
- (*Part).Walk: changed from func(*github.com/mjl-/mox/mlog.Log, *Part) error to func(*golang.org/x/exp/slog.Logger, *Part) error
- EnsurePart: changed from func(*github.com/mjl-/mox/mlog.Log, bool, io.ReaderAt, int64) (Part, error) to func(*golang.org/x/exp/slog.Logger, bool, io.ReaderAt, int64) (Part, error)
- From: changed from func(*github.com/mjl-/mox/mlog.Log, bool, io.ReaderAt) (github.com/mjl-/mox/smtp.Address, net/textproto.MIMEHeader, error) to func(*golang.org/x/exp/slog.Logger, bool, io.ReaderAt) (github.com/mjl-/mox/smtp.Address, *Envelope, net/textproto.MIMEHeader, error)
- Parse: changed from func(*github.com/mjl-/mox/mlog.Log, bool, io.ReaderAt) (Part, error) to func(*golang.org/x/exp/slog.Logger, bool, io.ReaderAt) (Part, error)
- TLSReceivedComment: removed
# mtasts
- FetchPolicy: changed from func(context.Context, github.com/mjl-/mox/dns.Domain) (*Policy, string, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Domain) (*Policy, string, error)
- Get: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, *Policy, string, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, *Policy, string, error)
- LookupRecord: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, string, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, string, error)
# publicsuffix
- List.Lookup: changed from func(context.Context, github.com/mjl-/mox/dns.Domain) github.com/mjl-/mox/dns.Domain to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Domain) github.com/mjl-/mox/dns.Domain
- Lookup: changed from func(context.Context, github.com/mjl-/mox/dns.Domain) github.com/mjl-/mox/dns.Domain to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Domain) github.com/mjl-/mox/dns.Domain
- ParseList: changed from func(io.Reader) (List, error) to func(*golang.org/x/exp/slog.Logger, io.Reader) (List, error)
# ratelimit
# sasl
- NewClientSCRAMSHA1: changed from func(string, string) Client to func(string, string, bool) Client
- NewClientSCRAMSHA256: changed from func(string, string) Client to func(string, string, bool) Client
# scram
- HMAC: removed
- NewClient: changed from func(func() hash.Hash, string, string) *Client to func(func() hash.Hash, string, string, bool, *crypto/tls.ConnectionState) *Client
- NewServer: changed from func(func() hash.Hash, []byte) (*Server, error) to func(func() hash.Hash, []byte, *crypto/tls.ConnectionState, bool) (*Server, error)
# smtp
# smtpclient
- (*Client).TLSEnabled: removed
- Dial: changed from func(context.Context, *github.com/mjl-/mox/mlog.Log, Dialer, github.com/mjl-/mox/dns.IPDomain, []net.IP, int, map[string][]net.IP) (net.Conn, net.IP, error) to func(context.Context, *golang.org/x/exp/slog.Logger, Dialer, github.com/mjl-/mox/dns.IPDomain, []net.IP, int, map[string][]net.IP, []net.IP) (net.Conn, net.IP, error)
- GatherDestinations: changed from func(context.Context, *github.com/mjl-/mox/mlog.Log, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain) (bool, bool, bool, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.IPDomain, bool, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain) (bool, bool, bool, github.com/mjl-/mox/dns.Domain, []github.com/mjl-/mox/dns.IPDomain, bool, error)
- GatherIPs: changed from func(context.Context, *github.com/mjl-/mox/mlog.Log, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain, map[string][]net.IP) (bool, bool, github.com/mjl-/mox/dns.Domain, []net.IP, bool, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.IPDomain, map[string][]net.IP) (bool, bool, github.com/mjl-/mox/dns.Domain, []net.IP, bool, error)
- GatherTLSA: changed from func(context.Context, *github.com/mjl-/mox/mlog.Log, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, bool, github.com/mjl-/mox/dns.Domain) (bool, []github.com/mjl-/adns.TLSA, github.com/mjl-/mox/dns.Domain, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, bool, github.com/mjl-/mox/dns.Domain) (bool, []github.com/mjl-/adns.TLSA, github.com/mjl-/mox/dns.Domain, error)
- New: changed from func(context.Context, *github.com/mjl-/mox/mlog.Log, net.Conn, TLSMode, bool, github.com/mjl-/mox/dns.Domain, github.com/mjl-/mox/dns.Domain, Opts) (*Client, error) to func(context.Context, *golang.org/x/exp/slog.Logger, net.Conn, TLSMode, bool, github.com/mjl-/mox/dns.Domain, github.com/mjl-/mox/dns.Domain, Opts) (*Client, error)
- Opts.Auth: changed from []github.com/mjl-/mox/sasl.Client to func([]string, *crypto/tls.ConnectionState) (github.com/mjl-/mox/sasl.Client, error)
# spf
- Evaluate: changed from func(context.Context, *Record, github.com/mjl-/mox/dns.Resolver, Args) (Status, string, string, bool, error) to func(context.Context, *golang.org/x/exp/slog.Logger, *Record, github.com/mjl-/mox/dns.Resolver, Args) (Status, string, string, bool, error)
- Lookup: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Status, string, *Record, bool, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Status, string, *Record, bool, error)
- Verify: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, Args) (Received, github.com/mjl-/mox/dns.Domain, string, bool, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, Args) (Received, github.com/mjl-/mox/dns.Domain, string, bool, error)
# subjectpass
- Generate: changed from func(github.com/mjl-/mox/smtp.Address, []byte, time.Time) string to func(*golang.org/x/exp/slog.Logger, github.com/mjl-/mox/smtp.Address, []byte, time.Time) string
- Verify: changed from func(*github.com/mjl-/mox/mlog.Log, io.ReaderAt, []byte, time.Duration) error to func(*golang.org/x/exp/slog.Logger, io.ReaderAt, []byte, time.Duration) error
# tlsrpt
- (*TLSRPTDateRange).UnmarshalJSON: removed
- Lookup: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, string, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (*Record, string, error)
- Parse: changed from func(io.Reader) (*Report, error) to func(io.Reader) (*ReportJSON, error)
- ParseMessage: changed from func(*github.com/mjl-/mox/mlog.Log, io.ReaderAt) (*Report, error) to func(*golang.org/x/exp/slog.Logger, io.ReaderAt) (*ReportJSON, error)
# updates
- Check: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, Version, string, []byte) (Version, *Record, *Changelog, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain, Version, string, []byte) (Version, *Record, *Changelog, error)
- FetchChangelog: changed from func(context.Context, string, Version, []byte) (*Changelog, error) to func(context.Context, *golang.org/x/exp/slog.Logger, string, Version, []byte) (*Changelog, error)
- Lookup: changed from func(context.Context, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Version, *Record, error) to func(context.Context, *golang.org/x/exp/slog.Logger, github.com/mjl-/mox/dns.Resolver, github.com/mjl-/mox/dns.Domain) (Version, *Record, error)

autotls/autotls.go

@@ -20,6 +20,7 @@ import (
"errors"
"fmt"
"io"
"log/slog"
"net"
"os"
"path/filepath"
@@ -28,19 +29,37 @@ import (
"sync"
"time"
"golang.org/x/crypto/acme"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"golang.org/x/crypto/acme"
"golang.org/x/crypto/acme/autocert"
"github.com/mjl-/autocert"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/moxvar"
)
var xlog = mlog.New("autotls")
var (
metricMissingServerName = promauto.NewCounter(
prometheus.CounterOpts{
Name: "mox_autotls_missing_servername_total",
Help: "Number of failed TLS connection attempts with missing SNI where no fallback hostname was configured.",
},
)
metricUnknownServerName = promauto.NewCounter(
prometheus.CounterOpts{
Name: "mox_autotls_unknown_servername_total",
Help: "Number of failed TLS connection attempts with an unrecognized SNI name where no fallback hostname was configured.",
},
)
metricCertRequestErrors = promauto.NewCounter(
prometheus.CounterOpts{
Name: "mox_autotls_cert_request_errors_total",
Help: "Number of errors trying to retrieve a certificate for a hostname, possibly ACME verification errors.",
},
)
metricCertput = promauto.NewCounter(
prometheus.CounterOpts{
Name: "mox_autotls_certput_total",
@@ -53,7 +72,6 @@ var (
// certificates for allowlisted hosts.
type Manager struct {
ACMETLSConfig *tls.Config // For serving HTTPS on port 443, which is required for certificate requests to succeed.
TLSConfig *tls.Config // For all TLS servers not used for validating ACME requests. Like SMTP and IMAP (including with STARTTLS) and HTTPS on ports other than 443.
Manager *autocert.Manager
shutdown <-chan struct{}
@@ -64,10 +82,19 @@ type Manager struct {
// Load returns an initialized autotls manager for "name" (used for the ACME key
// file and requested certs and their keys). All files are stored within acmeDir.
//
// contactEmail must be a valid email address to which notifications about ACME can
// be sent. directoryURL is the ACME starting point. When shutdown is closed, no
// new TLS connections can be created.
func Load(name, acmeDir, contactEmail, directoryURL string, shutdown <-chan struct{}) (*Manager, error) {
// be sent. directoryURL is the ACME starting point.
//
// eabKeyID and eabKey are for external account binding when making a new account,
// which some ACME providers require.
//
// getPrivateKey is called to get the private key for the host and key type. It
// can be used to deliver a specific (e.g. always the same) private key for a
// host, or a newly generated key.
//
// When shutdown is closed, no new TLS connections can be created.
func Load(log mlog.Log, name, acmeDir, contactEmail, directoryURL string, eabKeyID string, eabKey []byte, getPrivateKey func(host string, keyType autocert.KeyType) (crypto.Signer, error), shutdown <-chan struct{}) (*Manager, error) {
if directoryURL == "" {
return nil, fmt.Errorf("empty ACME directory URL")
}
@@ -76,11 +103,14 @@ func Load(name, acmeDir, contactEmail, directoryURL string, shutdown <-chan stru
}
// Load identity key if it exists. Otherwise, create a new key.
p := filepath.Join(acmeDir + "/" + name + ".key")
p := filepath.Join(acmeDir, name+".key")
var key crypto.Signer
f, err := os.Open(p)
if f != nil {
defer f.Close()
defer func() {
err := f.Close()
log.Check(err, "closing identify key file")
}()
}
if err != nil && os.IsNotExist(err) {
key, err = ecdsa.GenerateKey(elliptic.P256(), cryptorand.Reader)
@@ -128,7 +158,7 @@ func Load(name, acmeDir, contactEmail, directoryURL string, shutdown <-chan stru
}
m := &autocert.Manager{
Cache: dirCache(acmeDir + "/keycerts/" + name),
Cache: dirCache(filepath.Join(acmeDir, "keycerts", name)),
Prompt: autocert.AcceptTOS,
Email: contactEmail,
Client: &acme.Client{
@@ -136,57 +166,163 @@ func Load(name, acmeDir, contactEmail, directoryURL string, shutdown <-chan stru
Key: key,
UserAgent: "mox/" + moxvar.Version,
},
GetPrivateKey: getPrivateKey,
// HostPolicy set below.
}
loggingGetCertificate := func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
log := xlog.WithContext(hello.Context())
// Handle missing SNI to prevent logging an error below.
// At startup, during config initialization, we already adjust the tls config to
// inject the listener hostname if there isn't one in the TLS client hello. This is
// common for SMTP STARTTLS connections, which often do not care about the
// validation of the certificate.
if hello.ServerName == "" {
log.Debug("tls request without sni servername, rejecting", mlog.Field("localaddr", hello.Conn.LocalAddr()), mlog.Field("supportedprotos", hello.SupportedProtos))
return nil, fmt.Errorf("sni server name required")
// If external account binding key is provided, use it for registering a new account.
// todo: ideally the key and its id are provided temporarily by the admin when registering a new account. but we don't do that interactive setup yet. in the future, an interactive setup/quickstart would ask for the key once to register a new acme account.
if eabKeyID != "" {
m.ExternalAccountBinding = &acme.ExternalAccountBinding{
KID: eabKeyID,
Key: eabKey,
}
cert, err := m.GetCertificate(hello)
if err != nil {
if errors.Is(err, errHostNotAllowed) {
log.Debugx("requesting certificate", err, mlog.Field("host", hello.ServerName))
} else {
log.Errorx("requesting certificate", err, mlog.Field("host", hello.ServerName))
}
}
return cert, err
}
acmeTLSConfig := *m.TLSConfig()
acmeTLSConfig.GetCertificate = loggingGetCertificate
tlsConfig := tls.Config{
GetCertificate: loggingGetCertificate,
}
a := &Manager{
ACMETLSConfig: &acmeTLSConfig,
TLSConfig: &tlsConfig,
Manager: m,
shutdown: shutdown,
hosts: map[dns.Domain]struct{}{},
Manager: m,
shutdown: shutdown,
hosts: map[dns.Domain]struct{}{},
}
m.HostPolicy = a.HostPolicy
acmeTLSConfig := *m.TLSConfig()
acmeTLSConfig.GetCertificate = func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
return a.loggingGetCertificate(hello, dns.Domain{}, false, false)
}
a.ACMETLSConfig = &acmeTLSConfig
return a, nil
}
// loggingGetCertificate is a helper to implement crypto/tls.Config.GetCertificate,
// optionally falling back to a certificate for fallbackHostname in case SNI is
// absent or for an unknown hostname.
func (m *Manager) loggingGetCertificate(hello *tls.ClientHelloInfo, fallbackHostname dns.Domain, fallbackNoSNI, fallbackUnknownSNI bool) (*tls.Certificate, error) {
log := mlog.New("autotls", nil).WithContext(hello.Context()).With(
slog.Any("localaddr", hello.Conn.LocalAddr()),
slog.Any("supportedprotos", hello.SupportedProtos),
slog.String("servername", hello.ServerName),
)
// If we can't find a certificate (depending on fallback parameters), we return a
// nil certificate and nil error, which crypto/tls turns into a TLS alert
// "unrecognized name", which can be interpreted by clients as a hint that they are
// using the wrong hostname, or a certificate is missing. ../rfc/9325:578
// IP addresses for ServerName are not allowed, but happen in practice. If we
// should be lenient (fallbackUnknownSNI), we switch to the fallback hostname,
// otherwise we return an error. We don't want to pass IP addresses to
// GetCertificate because it will return an error for IPv6 addresses.
// ../rfc/6066:367 ../rfc/4366:535
if net.ParseIP(hello.ServerName) != nil {
if fallbackUnknownSNI {
hello.ServerName = fallbackHostname.ASCII
log = log.With(slog.String("servername", hello.ServerName))
} else {
log.Debug("tls request with ip for server name, rejecting")
return nil, fmt.Errorf("invalid ip address for sni server name")
}
}
if hello.ServerName == "" && fallbackNoSNI {
hello.ServerName = fallbackHostname.ASCII
log = log.With(slog.String("servername", hello.ServerName))
}
// Handle missing SNI to prevent logging an error below.
if hello.ServerName == "" {
metricMissingServerName.Inc()
log.Debug("tls request without sni server name, rejecting")
return nil, nil
}
cert, err := m.Manager.GetCertificate(hello)
if err != nil && errors.Is(err, errHostNotAllowed) {
if !fallbackUnknownSNI {
metricUnknownServerName.Inc()
log.Debugx("requesting certificate", err)
return nil, nil
}
// Some legitimate email deliveries over SMTP use an unknown SNI, e.g. a bare
// domain instead of the MX hostname. We "should" return an error, but that would
// break email delivery, so we use the fallback name if it is configured.
// ../rfc/9325:589
log = log.With(slog.String("servername", hello.ServerName))
log.Debug("certificate for unknown hostname, using fallback hostname")
hello.ServerName = fallbackHostname.ASCII
cert, err = m.Manager.GetCertificate(hello)
if err != nil {
metricCertRequestErrors.Inc()
log.Errorx("requesting certificate for fallback hostname", err)
} else {
log.Debug("using certificate for fallback hostname")
}
return cert, err
} else if err != nil {
metricCertRequestErrors.Inc()
log.Errorx("requesting certificate", err)
}
return cert, err
}
// TLSConfig returns a TLS server config that optionally returns a certificate for
// fallbackHostname if no SNI was done, or for an unknown hostname.
//
// If fallbackNoSNI is set, TLS connections without SNI will use a certificate for
// fallbackHostname. Otherwise, connections without SNI will fail with a message
// that no TLS certificate is available.
//
// If fallbackUnknownSNI is set, TLS connections with an SNI hostname that is not
// allowlisted will instead use a certificate for fallbackHostname. Otherwise, such
// TLS connections will fail.
func (m *Manager) TLSConfig(fallbackHostname dns.Domain, fallbackNoSNI, fallbackUnknownSNI bool) *tls.Config {
return &tls.Config{
GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
return m.loggingGetCertificate(hello, fallbackHostname, fallbackNoSNI, fallbackUnknownSNI)
},
}
}
// CertAvailable checks whether a non-expired ECDSA certificate is available in the
// cache for host. No other checks than expiration are done.
func (m *Manager) CertAvailable(ctx context.Context, log mlog.Log, host dns.Domain) (bool, error) {
ck := host.ASCII // Would be "+rsa" for rsa keys.
data, err := m.Manager.Cache.Get(ctx, ck)
if err != nil && errors.Is(err, autocert.ErrCacheMiss) {
return false, nil
} else if err != nil {
return false, fmt.Errorf("attempt to get certificate from cache: %v", err)
}
// The cached keycert is of the form: private key, leaf certificate, intermediate certificates...
privb, rem := pem.Decode(data)
if privb == nil {
return false, fmt.Errorf("missing private key in cached keycert file")
}
pubb, _ := pem.Decode(rem)
if pubb == nil {
return false, fmt.Errorf("missing certificate in cached keycert file")
} else if pubb.Type != "CERTIFICATE" {
return false, fmt.Errorf("second pem block is %q, expected CERTIFICATE", pubb.Type)
}
cert, err := x509.ParseCertificate(pubb.Bytes)
if err != nil {
return false, fmt.Errorf("parsing certificate from cached keycert file: %v", err)
}
// We assume the certificate has a matching hostname, and is properly CA-signed. We
// only check the expiration time.
if time.Until(cert.NotBefore) > 0 || time.Since(cert.NotAfter) > 0 {
return false, nil
}
return true, nil
}
// SetAllowedHostnames sets a new list of allowed hostnames for automatic TLS.
// After setting the host names, a goroutine is started to check that the new
// host names are fully served by publicIPs (only if publicIPs is non-empty and
// contains no unspecified address). If not, a warning is logged that ACME
// validation may fail.
func (m *Manager) SetAllowedHostnames(resolver dns.Resolver, hostnames map[dns.Domain]struct{}, publicIPs []string, checkHosts bool) {
func (m *Manager) SetAllowedHostnames(log mlog.Log, resolver dns.Resolver, hostnames map[dns.Domain]struct{}, publicIPs []string, checkHosts bool) {
m.Lock()
defer m.Unlock()
@@ -199,7 +335,7 @@ func (m *Manager) SetAllowedHostnames(resolver dns.Resolver, hostnames map[dns.D
return l[i].Name() < l[j].Name()
})
xlog.Debug("autotls setting allowed hostnames", mlog.Field("hostnames", l), mlog.Field("publicips", publicIPs))
log.Debug("autotls setting allowed hostnames", slog.Any("hostnames", l), slog.Any("publicips", publicIPs))
var added []dns.Domain
for h := range hostnames {
if _, ok := m.hosts[h]; !ok {
@@ -223,16 +359,20 @@ func (m *Manager) SetAllowedHostnames(resolver dns.Resolver, hostnames map[dns.D
publicIPstrs[ip] = struct{}{}
}
xlog.Debug("checking ips of hosts configured for acme tls cert validation")
log.Debug("checking ips of hosts configured for acme tls cert validation")
for _, h := range added {
ips, err := resolver.LookupIP(ctx, "ip", h.ASCII+".")
ips, _, err := resolver.LookupIP(ctx, "ip", h.ASCII+".")
if err != nil {
xlog.Errorx("warning: acme tls cert validation for host may fail due to dns lookup error", err, mlog.Field("host", h))
log.Warnx("acme tls cert validation for host may fail due to dns lookup error", err, slog.Any("host", h))
continue
}
for _, ip := range ips {
if _, ok := publicIPstrs[ip.String()]; !ok {
xlog.Error("warning: acme tls cert validation for host is likely to fail because not all its ips are being listened on", mlog.Field("hostname", h), mlog.Field("listenedips", publicIPs), mlog.Field("hostips", ips), mlog.Field("missingip", ip))
log.Warn("acme tls cert validation for host is likely to fail because not all its ips are being listened on",
slog.Any("hostname", h),
slog.Any("listenedips", publicIPs),
slog.Any("hostips", ips),
slog.Any("missingip", ip))
}
}
}
@@ -255,12 +395,12 @@ var errHostNotAllowed = errors.New("autotls: host not in allowlist")
// HostPolicy decides if a host is allowed for use with ACME, i.e. whether a
// certificate will be returned if present and/or will be requested if not yet
// present. Only hosts added with AllowHostname are allowed. During shutdown, no
// new connections are allowed.
// present. Only hosts added with SetAllowedHostnames are allowed. During shutdown,
// no new connections are allowed.
func (m *Manager) HostPolicy(ctx context.Context, host string) (rerr error) {
log := xlog.WithContext(ctx)
log := mlog.New("autotls", nil).WithContext(ctx)
defer func() {
log.WithContext(ctx).Debugx("autotls hostpolicy result", rerr, mlog.Field("host", host))
log.Debugx("autotls hostpolicy result", rerr, slog.String("host", host))
}()
// Don't request new TLS certs when we are shutting down.
@ -292,46 +432,46 @@ func (m *Manager) HostPolicy(ctx context.Context, host string) (rerr error) {
type dirCache autocert.DirCache
func (d dirCache) Delete(ctx context.Context, name string) (rerr error) {
log := xlog.WithContext(ctx)
log := mlog.New("autotls", nil).WithContext(ctx)
defer func() {
log.Debugx("dircache delete result", rerr, mlog.Field("name", name))
log.Debugx("dircache delete result", rerr, slog.String("name", name))
}()
err := autocert.DirCache(d).Delete(ctx, name)
if err != nil {
log.Errorx("deleting cert from dir cache", err, mlog.Field("name", name))
log.Errorx("deleting cert from dir cache", err, slog.String("name", name))
} else if !strings.HasSuffix(name, "+token") {
log.Info("autotls cert delete", mlog.Field("name", name))
log.Info("autotls cert delete", slog.String("name", name))
}
return err
}
func (d dirCache) Get(ctx context.Context, name string) (rbuf []byte, rerr error) {
log := xlog.WithContext(ctx)
log := mlog.New("autotls", nil).WithContext(ctx)
defer func() {
log.Debugx("dircache get result", rerr, mlog.Field("name", name))
log.Debugx("dircache get result", rerr, slog.String("name", name))
}()
buf, err := autocert.DirCache(d).Get(ctx, name)
if err != nil && errors.Is(err, autocert.ErrCacheMiss) {
log.Infox("getting cert from dir cache", err, mlog.Field("name", name))
log.Infox("getting cert from dir cache", err, slog.String("name", name))
} else if err != nil {
log.Errorx("getting cert from dir cache", err, mlog.Field("name", name))
log.Errorx("getting cert from dir cache", err, slog.String("name", name))
} else if !strings.HasSuffix(name, "+token") {
log.Debug("autotls cert get", mlog.Field("name", name))
log.Debug("autotls cert get", slog.String("name", name))
}
return buf, err
}
func (d dirCache) Put(ctx context.Context, name string, data []byte) (rerr error) {
log := xlog.WithContext(ctx)
log := mlog.New("autotls", nil).WithContext(ctx)
defer func() {
log.Debugx("dircache put result", rerr, mlog.Field("name", name))
log.Debugx("dircache put result", rerr, slog.String("name", name))
}()
metricCertput.Inc()
err := autocert.DirCache(d).Put(ctx, name, data)
if err != nil {
log.Errorx("storing cert in dir cache", err, mlog.Field("name", name))
log.Errorx("storing cert in dir cache", err, slog.String("name", name))
} else if !strings.HasSuffix(name, "+token") {
log.Info("autotls cert store", mlog.Field("name", name))
log.Info("autotls cert store", slog.String("name", name))
}
return err
}
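Since dirCache implements Get, Put and Delete with the autocert.Cache signatures, it can be used as the Manager's certificate cache in place of a bare autocert.DirCache, adding logging and the Put metric. A minimal sketch, assuming the github.com/mjl-/autocert fork keeps the upstream golang.org/x/crypto/acme/autocert API:
// Sketch only: wiring the logging cache wrapper into an autocert.Manager.
// The directory path and email address are placeholders; m is a *Manager as
// defined in this file.
func exampleACMEManager(m *Manager) *autocert.Manager {
	return &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		Cache:      dirCache("data/acme/keycerts/example"),
		HostPolicy: m.HostPolicy,
		Email:      "acme@example.org",
	}
}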

View File

@ -2,22 +2,30 @@ package autotls
import (
"context"
"crypto"
"errors"
"fmt"
"os"
"reflect"
"testing"
"golang.org/x/crypto/acme/autocert"
"github.com/mjl-/autocert"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
)
func TestAutotls(t *testing.T) {
log := mlog.New("autotls", nil)
os.RemoveAll("../testdata/autotls")
os.MkdirAll("../testdata/autotls", 0770)
shutdown := make(chan struct{})
m, err := Load("test", "../testdata/autotls", "mox@localhost", "https://localhost/", shutdown)
getPrivateKey := func(host string, keyType autocert.KeyType) (crypto.Signer, error) {
return nil, fmt.Errorf("not used")
}
m, err := Load(log, "test", "../testdata/autotls", "mox@localhost", "https://localhost/", "", nil, getPrivateKey, shutdown)
if err != nil {
t.Fatalf("load manager: %v", err)
}
@ -28,7 +36,7 @@ func TestAutotls(t *testing.T) {
if err := m.HostPolicy(context.Background(), "mox.example"); err == nil || !errors.Is(err, errHostNotAllowed) {
t.Fatalf("hostpolicy, got err %v, expected errHostNotAllowed", err)
}
m.SetAllowedHostnames(dns.StrictResolver{}, map[dns.Domain]struct{}{{ASCII: "mox.example"}: {}}, nil, false)
m.SetAllowedHostnames(log, dns.MockResolver{}, map[dns.Domain]struct{}{{ASCII: "mox.example"}: {}}, nil, false)
l = m.Hostnames()
if !reflect.DeepEqual(l, []dns.Domain{{ASCII: "mox.example"}}) {
t.Fatalf("hostnames, got %v, expected single mox.example", l)
@ -74,7 +82,7 @@ func TestAutotls(t *testing.T) {
key0 := m.Manager.Client.Key
m, err = Load("test", "../testdata/autotls", "mox@localhost", "https://localhost/", shutdown)
m, err = Load(log, "test", "../testdata/autotls", "mox@localhost", "https://localhost/", "", nil, getPrivateKey, shutdown)
if err != nil {
t.Fatalf("load manager again: %v", err)
}
@ -82,12 +90,12 @@ func TestAutotls(t *testing.T) {
t.Fatalf("private key changed after reload")
}
m.shutdown = make(chan struct{})
m.SetAllowedHostnames(dns.StrictResolver{}, map[dns.Domain]struct{}{{ASCII: "mox.example"}: {}}, nil, false)
m.SetAllowedHostnames(log, dns.MockResolver{}, map[dns.Domain]struct{}{{ASCII: "mox.example"}: {}}, nil, false)
if err := m.HostPolicy(context.Background(), "mox.example"); err != nil {
t.Fatalf("hostpolicy, got err %v, expected no error", err)
}
m2, err := Load("test2", "../testdata/autotls", "mox@localhost", "https://localhost/", shutdown)
m2, err := Load(log, "test2", "../testdata/autotls", "mox@localhost", "https://localhost/", "", nil, nil, shutdown)
if err != nil {
t.Fatalf("load another manager: %v", err)
}

698
backup.go Normal file
View File

@ -0,0 +1,698 @@
package main
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"io/fs"
"log/slog"
"os"
"path/filepath"
"runtime"
"strconv"
"strings"
"syscall"
"time"
"github.com/mjl-/bstore"
"github.com/mjl-/mox/dmarcdb"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/moxvar"
"github.com/mjl-/mox/mtastsdb"
"github.com/mjl-/mox/queue"
"github.com/mjl-/mox/store"
"github.com/mjl-/mox/tlsrptdb"
)
func xbackupctl(ctx context.Context, xctl *ctl) {
/* protocol:
> "backup"
> destdir
> "verbose" or ""
< stream
< "ok" or error
*/
// Convention in this function: variables containing "src" or "dst" are file system
// paths that can be passed to os.Open and such. Variables with dirs/paths without
// "src" or "dst" are incomplete paths relative to the source or destination data
// directories.
dstDir := xctl.xread()
verbose := xctl.xread() == "verbose"
// Set when an error is encountered. At the end, we warn if set.
var incomplete bool
// We'll be writing output, and logging both to mox and the ctl stream.
xwriter := xctl.writer()
// Format easily readable output for the user.
formatLog := func(prefix, text string, err error, attrs ...slog.Attr) []byte {
var b bytes.Buffer
fmt.Fprint(&b, prefix)
fmt.Fprint(&b, text)
if err != nil {
fmt.Fprint(&b, ": "+err.Error())
}
for _, a := range attrs {
fmt.Fprintf(&b, "; %s=%v", a.Key, a.Value)
}
fmt.Fprint(&b, "\n")
return b.Bytes()
}
// Log an error to both the mox service and the user running "mox backup".
pkglogx := func(prefix, text string, err error, attrs ...slog.Attr) {
xctl.log.Errorx(text, err, attrs...)
xwriter.Write(formatLog(prefix, text, err, attrs...))
}
// Log an error but don't mark backup as failed.
xwarnx := func(text string, err error, attrs ...slog.Attr) {
pkglogx("warning: ", text, err, attrs...)
}
// Log an error that causes the backup to be marked as failed. We typically
// continue processing though.
xerrx := func(text string, err error, attrs ...slog.Attr) {
incomplete = true
pkglogx("error: ", text, err, attrs...)
}
// If verbose is enabled, also log to the cli stream. Always log at info level.
xvlog := func(text string, attrs ...slog.Attr) {
xctl.log.Info(text, attrs...)
if verbose {
xwriter.Write(formatLog("", text, nil, attrs...))
}
}
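// Illustration (added comment, not from the source): with prefix "error: ",
// text "open source file (not backed up)", a permission error, and a srcpath
// attribute, formatLog above produces a single line like:
//   error: open source file (not backed up): permission denied; srcpath=<path>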
dstConfigDir := filepath.Join(dstDir, "config")
dstDataDir := filepath.Join(dstDir, "data")
// Warn if the directories already exist; that will likely cause failures when
// trying to write files that already exist.
if _, err := os.Stat(dstConfigDir); err == nil {
xwarnx("destination config directory already exists", nil, slog.String("configdir", dstConfigDir))
}
if _, err := os.Stat(dstDataDir); err == nil {
xwarnx("destination data directory already exists", nil, slog.String("datadir", dstDataDir))
}
os.MkdirAll(dstDir, 0770)
os.MkdirAll(dstConfigDir, 0770)
os.MkdirAll(dstDataDir, 0770)
// Copy all files in the config dir.
srcConfigDir := filepath.Clean(mox.ConfigDirPath("."))
err := filepath.WalkDir(srcConfigDir, func(srcPath string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
if srcConfigDir == srcPath {
return nil
}
// Trim directory and separator.
relPath := srcPath[len(srcConfigDir)+1:]
destPath := filepath.Join(dstConfigDir, relPath)
if d.IsDir() {
if info, err := os.Stat(srcPath); err != nil {
return fmt.Errorf("stat config dir %s: %v", srcPath, err)
} else if err := os.Mkdir(destPath, info.Mode()&0777); err != nil {
return fmt.Errorf("mkdir %s: %v", destPath, err)
}
return nil
}
if d.Type()&fs.ModeSymlink != 0 {
linkDest, err := os.Readlink(srcPath)
if err != nil {
return fmt.Errorf("reading symlink %s: %v", srcPath, err)
}
if err := os.Symlink(linkDest, destPath); err != nil {
return fmt.Errorf("creating symlink %s: %v", destPath, err)
}
return nil
}
if !d.Type().IsRegular() {
xwarnx("skipping non-regular/dir/symlink file in config dir", nil, slog.String("path", srcPath))
return nil
}
sf, err := os.Open(srcPath)
if err != nil {
return fmt.Errorf("open config file %s: %v", srcPath, err)
}
info, err := sf.Stat()
if err != nil {
return fmt.Errorf("stat config file %s: %v", srcPath, err)
}
df, err := os.OpenFile(destPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0777&info.Mode())
if err != nil {
return fmt.Errorf("create destination config file %s: %v", destPath, err)
}
defer func() {
if df != nil {
err := df.Close()
xctl.log.Check(err, "closing file")
}
}()
defer func() {
err := sf.Close()
xctl.log.Check(err, "closing file")
}()
if _, err := io.Copy(df, sf); err != nil {
return fmt.Errorf("copying config file %s to %s: %v", srcPath, destPath, err)
}
if err := df.Close(); err != nil {
return fmt.Errorf("closing destination config file %s: %v", srcPath, err)
}
df = nil
return nil
})
if err != nil {
xerrx("storing config directory", err)
}
srcDataDir := filepath.Clean(mox.DataDirPath("."))
// When creating a file in the destination, we first ensure its directory exists.
// We track which directories we created, to prevent needless syscalls.
createdDirs := map[string]struct{}{}
ensureDestDir := func(dstpath string) {
dstdir := filepath.Dir(dstpath)
if _, ok := createdDirs[dstdir]; !ok {
err := os.MkdirAll(dstdir, 0770)
if err != nil {
xerrx("creating directory", err)
}
createdDirs[dstdir] = struct{}{}
}
}
// Back up a single file by copying (never hardlinking, since the file may change).
backupFile := func(path string) {
tmFile := time.Now()
srcpath := filepath.Join(srcDataDir, path)
dstpath := filepath.Join(dstDataDir, path)
sf, err := os.Open(srcpath)
if err != nil {
xerrx("open source file (not backed up)", err, slog.String("srcpath", srcpath), slog.String("dstpath", dstpath))
return
}
defer func() {
err := sf.Close()
xctl.log.Check(err, "closing source file")
}()
ensureDestDir(dstpath)
df, err := os.OpenFile(dstpath, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0660)
if err != nil {
xerrx("creating destination file (not backed up)", err, slog.String("srcpath", srcpath), slog.String("dstpath", dstpath))
return
}
defer func() {
if df != nil {
err := df.Close()
xctl.log.Check(err, "closing destination file")
}
}()
if _, err := io.Copy(df, sf); err != nil {
xerrx("copying file (not backed up properly)", err, slog.String("srcpath", srcpath), slog.String("dstpath", dstpath))
return
}
err = df.Close()
df = nil
if err != nil {
xerrx("closing destination file (not backed up properly)", err, slog.String("srcpath", srcpath), slog.String("dstpath", dstpath))
return
}
xvlog("backed up file", slog.String("path", path), slog.Duration("duration", time.Since(tmFile)))
}
// Back up the files in a directory (by copying).
backupDir := func(dir string) {
tmDir := time.Now()
srcdir := filepath.Join(srcDataDir, dir)
dstdir := filepath.Join(dstDataDir, dir)
err := filepath.WalkDir(srcdir, func(srcpath string, d fs.DirEntry, err error) error {
if err != nil {
xerrx("walking file (not backed up)", err, slog.String("srcpath", srcpath))
return nil
}
if d.IsDir() {
return nil
}
backupFile(srcpath[len(srcDataDir)+1:])
return nil
})
if err != nil {
xerrx("copying directory (not backed up properly)", err,
slog.String("srcdir", srcdir),
slog.String("dstdir", dstdir),
slog.Duration("duration", time.Since(tmDir)))
return
}
xvlog("backed up directory", slog.String("dir", dir), slog.Duration("duration", time.Since(tmDir)))
}
// Back up a database by copying it in a read-only transaction. Wrapped by backupDB,
// which logs and returns just a bool.
backupDB0 := func(db *bstore.DB, path string) error {
dstpath := filepath.Join(dstDataDir, path)
ensureDestDir(dstpath)
df, err := os.OpenFile(dstpath, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0660)
if err != nil {
return fmt.Errorf("creating destination file: %v", err)
}
defer func() {
if df != nil {
err := df.Close()
xctl.log.Check(err, "closing destination database file")
}
}()
err = db.Read(ctx, func(tx *bstore.Tx) error {
// Using regular WriteTo seems fine, and fast. It just copies pages.
//
// bolt.Compact is slower, it writes all key/value pairs, building up new data
// structures. My compacted test database was ~60% of original size. Lz4 on the
// uncompacted database got it to 14%. Lz4 on the compacted database got it to 13%.
// Backups are likely archived somewhere with compression, so we don't compact.
//
// Tests with WriteTo and os.O_DIRECT were slower than without O_DIRECT, but
// probably because everything fit in the page cache. It may be better to use
// O_DIRECT when copying many large or inactive databases.
_, err := tx.WriteTo(df)
return err
})
if err != nil {
return fmt.Errorf("copying database: %v", err)
}
err = df.Close()
df = nil
if err != nil {
return fmt.Errorf("closing destination database after copy: %v", err)
}
return nil
}
backupDB := func(db *bstore.DB, path string) bool {
start := time.Now()
err := backupDB0(db, path)
if err != nil {
xerrx("backing up database", err, slog.String("path", path), slog.Duration("duration", time.Since(start)))
return false
}
xvlog("backed up database file", slog.String("path", path), slog.Duration("duration", time.Since(start)))
return true
}
// Try to create a hardlink. Fall back to copying the file (e.g. when the destination is on a different file system).
warnedHardlink := false // We warn once about failing to hardlink.
linkOrCopy := func(srcpath, dstpath string) (bool, error) {
ensureDestDir(dstpath)
if err := os.Link(srcpath, dstpath); err == nil {
return true, nil
} else if os.IsNotExist(err) {
// No point in trying with regular copy, we would warn twice.
return false, err
} else if !warnedHardlink {
var hardlinkHint string
if runtime.GOOS == "linux" && errors.Is(err, syscall.EXDEV) {
hardlinkHint = " (hint: if running under systemd, ReadWritePaths in mox.service may cause multiple mountpoints; consider merging paths into a single parent directory to prevent cross-device/mountpoint hardlinks)"
}
xwarnx("creating hardlink to message failed, will be doing regular file copies and not warn again"+hardlinkHint, err, slog.String("srcpath", srcpath), slog.String("dstpath", dstpath))
warnedHardlink = true
}
// Fall back to copying.
sf, err := os.Open(srcpath)
if err != nil {
return false, fmt.Errorf("open source path %s: %v", srcpath, err)
}
defer func() {
err := sf.Close()
xctl.log.Check(err, "closing copied source file")
}()
df, err := os.OpenFile(dstpath, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0660)
if err != nil {
return false, fmt.Errorf("create destination path %s: %v", dstpath, err)
}
defer func() {
if df != nil {
err := df.Close()
xctl.log.Check(err, "closing partial destination file")
}
}()
if _, err := io.Copy(df, sf); err != nil {
return false, fmt.Errorf("coping: %v", err)
}
err = df.Close()
df = nil
if err != nil {
return false, fmt.Errorf("closing destination file: %v", err)
}
return false, nil
}
// Start making the backup.
tmStart := time.Now()
xctl.log.Print("making backup", slog.String("destdir", dstDataDir))
if err := os.MkdirAll(dstDataDir, 0770); err != nil {
xerrx("creating destination data directory", err)
}
if err := os.WriteFile(filepath.Join(dstDataDir, "moxversion"), []byte(moxvar.Version), 0660); err != nil {
xerrx("writing moxversion", err)
}
backupDB(store.AuthDB, "auth.db")
backupDB(dmarcdb.ReportsDB, "dmarcrpt.db")
backupDB(dmarcdb.EvalDB, "dmarceval.db")
backupDB(mtastsdb.DB, "mtasts.db")
backupDB(tlsrptdb.ReportDB, "tlsrpt.db")
backupDB(tlsrptdb.ResultDB, "tlsrptresult.db")
backupFile("receivedid.key")
// Acme directory is optional.
srcAcmeDir := filepath.Join(srcDataDir, "acme")
if _, err := os.Stat(srcAcmeDir); err == nil {
backupDir("acme")
} else if !os.IsNotExist(err) {
xerrx("copying acme/", err)
}
// Copy the queue database and all message files.
backupQueue := func(path string) {
tmQueue := time.Now()
if !backupDB(queue.DB, path) {
return
}
dstdbpath := filepath.Join(dstDataDir, path)
opts := bstore.Options{MustExist: true, RegisterLogger: xctl.log.Logger}
db, err := bstore.Open(ctx, dstdbpath, &opts, queue.DBTypes...)
if err != nil {
xerrx("open copied queue database", err, slog.String("dstpath", dstdbpath), slog.Duration("duration", time.Since(tmQueue)))
return
}
defer func() {
if db != nil {
err := db.Close()
xctl.log.Check(err, "closing new queue db")
}
}()
// Link/copy known message files. If a message has been removed while we read the
// database, our backup is not consistent and the backup will be marked failed.
tmMsgs := time.Now()
seen := map[string]struct{}{}
var nlinked, ncopied int
var maxID int64
err = bstore.QueryDB[queue.Msg](ctx, db).ForEach(func(m queue.Msg) error {
if m.ID > maxID {
maxID = m.ID
}
mp := store.MessagePath(m.ID)
seen[mp] = struct{}{}
srcpath := filepath.Join(srcDataDir, "queue", mp)
dstpath := filepath.Join(dstDataDir, "queue", mp)
if linked, err := linkOrCopy(srcpath, dstpath); err != nil {
xerrx("linking/copying queue message", err, slog.String("srcpath", srcpath), slog.String("dstpath", dstpath))
} else if linked {
nlinked++
} else {
ncopied++
}
return nil
})
if err != nil {
xerrx("processing queue messages (not backed up properly)", err, slog.Duration("duration", time.Since(tmMsgs)))
} else {
xvlog("queue message files linked/copied",
slog.Int("linked", nlinked),
slog.Int("copied", ncopied),
slog.Duration("duration", time.Since(tmMsgs)))
}
// Read through all files in queue directory and warn about anything we haven't
// handled yet. Message files that are newer than we expect from our consistent
// database snapshot are ignored.
tmWalk := time.Now()
srcqdir := filepath.Join(srcDataDir, "queue")
err = filepath.WalkDir(srcqdir, func(srcqpath string, d fs.DirEntry, err error) error {
if err != nil {
xerrx("walking files in queue", err, slog.String("srcpath", srcqpath))
return nil
}
if d.IsDir() {
return nil
}
p := srcqpath[len(srcqdir)+1:]
if _, ok := seen[p]; ok {
return nil
}
if p == "index.db" {
return nil
}
// Skip any messages that were added since we started on our consistent snapshot.
// We don't want to cause spurious backup warnings.
if id, err := strconv.ParseInt(filepath.Base(p), 10, 64); err == nil && maxID > 0 && id > maxID && p == store.MessagePath(id) {
return nil
}
qp := filepath.Join("queue", p)
xwarnx("backing up unrecognized file in queue directory", nil, slog.String("path", qp))
backupFile(qp)
return nil
})
if err != nil {
xerrx("walking queue directory (not backed up properly)", err, slog.String("dir", "queue"), slog.Duration("duration", time.Since(tmWalk)))
} else {
xvlog("walked queue directory", slog.Duration("duration", time.Since(tmWalk)))
}
xvlog("queue backed finished", slog.Duration("duration", time.Since(tmQueue)))
}
backupQueue(filepath.FromSlash("queue/index.db"))
backupAccount := func(acc *store.Account) {
defer func() {
err := acc.Close()
xctl.log.Check(err, "closing account")
}()
tmAccount := time.Now()
// Copy database file.
dbpath := filepath.Join("accounts", acc.Name, "index.db")
backupDB(acc.DB, dbpath)
// todo: should document/check not taking a rlock on account.
// Copy junkfilter files, if configured.
if jf, _, err := acc.OpenJunkFilter(ctx, xctl.log); err != nil {
if !errors.Is(err, store.ErrNoJunkFilter) {
xerrx("opening junk filter for account (not backed up)", err)
}
} else {
db := jf.DB()
jfpath := filepath.Join("accounts", acc.Name, "junkfilter.db")
backupDB(db, jfpath)
bloompath := filepath.Join("accounts", acc.Name, "junkfilter.bloom")
backupFile(bloompath)
err := jf.Close()
xctl.log.Check(err, "closing junkfilter")
}
dstdbpath := filepath.Join(dstDataDir, dbpath)
opts := bstore.Options{MustExist: true, RegisterLogger: xctl.log.Logger}
db, err := bstore.Open(ctx, dstdbpath, &opts, store.DBTypes...)
if err != nil {
xerrx("open copied account database", err, slog.String("dstpath", dstdbpath), slog.Duration("duration", time.Since(tmAccount)))
return
}
defer func() {
if db != nil {
err := db.Close()
xctl.log.Check(err, "close account database")
}
}()
// Link/copy known message files.
tmMsgs := time.Now()
seen := map[string]struct{}{}
var maxID int64
var nlinked, ncopied int
err = bstore.QueryDB[store.Message](ctx, db).FilterEqual("Expunged", false).ForEach(func(m store.Message) error {
if m.ID > maxID {
maxID = m.ID
}
mp := store.MessagePath(m.ID)
seen[mp] = struct{}{}
amp := filepath.Join("accounts", acc.Name, "msg", mp)
srcpath := filepath.Join(srcDataDir, amp)
dstpath := filepath.Join(dstDataDir, amp)
if linked, err := linkOrCopy(srcpath, dstpath); err != nil {
xerrx("linking/copying account message", err, slog.String("srcpath", srcpath), slog.String("dstpath", dstpath))
} else if linked {
nlinked++
} else {
ncopied++
}
return nil
})
if err != nil {
xerrx("processing account messages (not backed up properly)", err, slog.Duration("duration", time.Since(tmMsgs)))
} else {
xvlog("account message files linked/copied",
slog.Int("linked", nlinked),
slog.Int("copied", ncopied),
slog.Duration("duration", time.Since(tmMsgs)))
}
eraseIDs := map[int64]struct{}{}
err = bstore.QueryDB[store.MessageErase](ctx, db).ForEach(func(me store.MessageErase) error {
eraseIDs[me.ID] = struct{}{}
return nil
})
if err != nil {
xerrx("listing erased messages", err)
}
// Read through all files in the account directory and warn about anything we
// haven't handled yet. Message files that are newer than we expect from our
// consistent database snapshot are ignored.
tmWalk := time.Now()
srcadir := filepath.Join(srcDataDir, "accounts", acc.Name)
err = filepath.WalkDir(srcadir, func(srcapath string, d fs.DirEntry, err error) error {
if err != nil {
xerrx("walking files in account", err, slog.String("srcpath", srcapath))
return nil
}
if d.IsDir() {
return nil
}
p := srcapath[len(srcadir)+1:]
l := strings.Split(p, string(filepath.Separator))
if l[0] == "msg" {
mp := filepath.Join(l[1:]...)
if _, ok := seen[mp]; ok {
return nil
}
// Skip any messages that were added since we started on our consistent snapshot,
// or messages that will be erased. We don't want to cause spurious backup
// warnings.
id, err := strconv.ParseInt(l[len(l)-1], 10, 64)
if err == nil && id > maxID && mp == store.MessagePath(id) {
return nil
} else if _, ok := eraseIDs[id]; err == nil && ok {
return nil
}
}
switch p {
case "index.db", "junkfilter.db", "junkfilter.bloom":
return nil
}
ap := filepath.Join("accounts", acc.Name, p)
if strings.HasPrefix(p, "msg"+string(filepath.Separator)) {
xwarnx("backing up unrecognized file in account message directory (should be moved away)", nil, slog.String("path", ap))
} else {
xwarnx("backing up unrecognized file in account directory", nil, slog.String("path", ap))
}
backupFile(ap)
return nil
})
if err != nil {
xerrx("walking account directory (not backed up properly)", err, slog.String("srcdir", srcadir), slog.Duration("duration", time.Since(tmWalk)))
} else {
xvlog("walked account directory", slog.Duration("duration", time.Since(tmWalk)))
}
xvlog("account backup finished", slog.String("dir", filepath.Join("accounts", acc.Name)), slog.Duration("duration", time.Since(tmAccount)))
}
// For each configured account, open it, make a copy of the database and
// hardlink/copy the messages. We track the accounts we handled, and skip the
// account directories when handling "all other files" below.
accounts := map[string]struct{}{}
for _, accName := range mox.Conf.Accounts() {
acc, err := store.OpenAccount(xctl.log, accName, false)
if err != nil {
xerrx("opening account for copying (will try to copy as regular files later)", err, slog.String("account", accName))
continue
}
accounts[accName] = struct{}{}
backupAccount(acc)
}
// Copy all other files that aren't part of the known files, databases, queue or accounts.
tmWalk := time.Now()
err = filepath.WalkDir(srcDataDir, func(srcpath string, d fs.DirEntry, err error) error {
if err != nil {
xerrx("walking path", err, slog.String("path", srcpath))
return nil
}
if srcpath == srcDataDir {
return nil
}
p := srcpath[len(srcDataDir)+1:]
if p == "queue" || p == "acme" || p == "tmp" {
return fs.SkipDir
}
l := strings.Split(p, string(filepath.Separator))
if len(l) >= 2 && l[0] == "accounts" {
name := l[1]
if _, ok := accounts[name]; ok {
return fs.SkipDir
}
}
// Only files are explicitly backed up.
if d.IsDir() {
return nil
}
switch p {
case "auth.db", "dmarcrpt.db", "dmarceval.db", "mtasts.db", "tlsrpt.db", "tlsrptresult.db", "receivedid.key", "ctl":
// Already handled.
return nil
case "lastknownversion": // Optional file, not yet handled.
default:
xwarnx("backing up unrecognized file", nil, slog.String("path", p))
}
backupFile(p)
return nil
})
if err != nil {
xerrx("walking other files (not backed up properly)", err, slog.Duration("duration", time.Since(tmWalk)))
} else {
xvlog("walking other files finished", slog.Duration("duration", time.Since(tmWalk)))
}
xvlog("backup finished", slog.Duration("duration", time.Since(tmStart)))
xwriter.xclose()
if incomplete {
xctl.xwrite("errors were encountered during backup")
} else {
xctl.xwriteok()
}
}

View File

@ -1,2 +0,0 @@
#!/bin/sh
exec ./node_modules/.bin/jshint --extract always $@ | fixjshintlines

View File

@ -5,6 +5,7 @@ import (
"crypto/tls"
"crypto/x509"
"net"
"net/http"
"net/url"
"reflect"
"regexp"
@ -19,6 +20,10 @@ import (
// todo: better default values, so less has to be specified in the config file.
// DefaultMaxMsgSize is the maximum message size for incoming and outgoing
// messages, in bytes. Can be overridden per listener.
const DefaultMaxMsgSize = 100 * 1024 * 1024
// Port returns port if non-zero, and fallback otherwise.
func Port(port, fallback int) int {
if port == 0 {
@ -30,14 +35,14 @@ func Port(port, fallback int) int {
// Static is a parsed form of the mox.conf configuration file, before converting it
// into a mox.Config after additional processing.
type Static struct {
DataDir string `sconf-doc:"Directory where all data is stored, e.g. queue, accounts and messages, ACME TLS certs/keys. If this is a relative path, it is relative to the directory of mox.conf."`
DataDir string `sconf-doc:"NOTE: This config file is in 'sconf' format. Indent with tabs. Comments must be on their own line, they don't end a line. Do not escape or quote strings. Details: https://pkg.go.dev/github.com/mjl-/sconf.\n\n\nDirectory where all data is stored, e.g. queue, accounts and messages, ACME TLS certs/keys. If this is a relative path, it is relative to the directory of mox.conf."`
LogLevel string `sconf-doc:"Default log level, one of: error, info, debug, trace, traceauth, tracedata. Trace logs SMTP and IMAP protocol transcripts, with traceauth also messages with passwords, and tracedata on top of that also the full data exchanges (full messages), which can be a large amount of data."`
PackageLogLevels map[string]string `sconf:"optional" sconf-doc:"Overrides of log level per package (e.g. queue, smtpclient, smtpserver, imapserver, spf, dkim, dmarc, dmarcdb, autotls, junk, mtasts, tlsrpt)."`
User string `sconf:"optional" sconf-doc:"User to switch to after binding to all sockets as root. Default: mox. If the value is not a known user, it is parsed as integer and used as uid and gid."`
NoFixPermissions bool `sconf:"optional" sconf-doc:"If true, do not automatically fix file permissions when starting up. By default, mox will ensure reasonable owner/permissions on the working, data and config directories (and files), and mox binary (if present)."`
Hostname string `sconf-doc:"Full hostname of system, e.g. mail.<domain>"`
HostnameDomain dns.Domain `sconf:"-" json:"-"` // Parsed form of hostname.
CheckUpdates bool `sconf:"optional" sconf-doc:"If enabled, a single DNS TXT lookup of _updates.xmox.nl is done every 24h to check for a new release. Each time a new release is found, a changelog is fetched from https://updates.xmox.nl and delivered to the postmaster mailbox."`
CheckUpdates bool `sconf:"optional" sconf-doc:"If enabled, a single DNS TXT lookup of _updates.xmox.nl is done every 24h to check for a new release. Each time a new release is found, a changelog is fetched from https://updates.xmox.nl/changelog and delivered to the postmaster mailbox."`
Pedantic bool `sconf:"optional" sconf-doc:"In pedantic mode protocol violations (that happen in the wild) for SMTP/IMAP/etc result in errors instead of accepting such behaviour."`
TLS struct {
CA *struct {
@ -52,10 +57,24 @@ type Static struct {
Postmaster struct {
Account string
Mailbox string `sconf-doc:"E.g. Postmaster or Inbox."`
} `sconf-doc:"Destination for emails delivered to postmaster address."`
DefaultMailboxes []string `sconf:"optional" sconf-doc:"Mailboxes to create when adding an account. Inbox is always created. If no mailboxes are specified, the following are automatically created: Sent, Archive, Trash, Drafts and Junk."`
} `sconf-doc:"Destination for emails delivered to postmaster addresses: a plain 'postmaster' without domain, 'postmaster@<hostname>' (also for each listener with SMTP enabled), and as fallback for each domain without explicitly configured postmaster destination."`
HostTLSRPT struct {
Account string `sconf-doc:"Account to deliver TLS reports to. Typically same account as for postmaster."`
Mailbox string `sconf-doc:"Mailbox to deliver TLS reports to. Recommended value: TLSRPT."`
Localpart string `sconf-doc:"Localpart at hostname to accept TLS reports at. Recommended value: tlsreports."`
// All IPs that were explicitly listen on for external SMTP. Only set when there
ParsedLocalpart smtp.Localpart `sconf:"-"`
} `sconf:"optional" sconf-doc:"Destination for per-host TLS reports (TLSRPT). TLS reports can be per recipient domain (for MTA-STS), or per MX host (for DANE). The per-domain TLS reporting configuration is in domains.conf. This is the TLS reporting configuration for this host. If absent, no host-based TLSRPT address is configured, and no host TLSRPT DNS record is suggested."`
InitialMailboxes InitialMailboxes `sconf:"optional" sconf-doc:"Mailboxes to create for new accounts. Inbox is always created. Mailboxes can be given a 'special-use' role, which are understood by most mail clients. If absent/empty, the following additional mailboxes are created: Sent, Archive, Trash, Drafts and Junk."`
DefaultMailboxes []string `sconf:"optional" sconf-doc:"Deprecated in favor of InitialMailboxes. Mailboxes to create when adding an account. Inbox is always created. If no mailboxes are specified, the following are automatically created: Sent, Archive, Trash, Drafts and Junk."`
Transports map[string]Transport `sconf:"optional" sconf-doc:"Transports are mechanisms for delivering messages. Transports can be referenced from Routes in accounts, domains and the global configuration. There is always an implicit/fallback delivery transport doing direct delivery with SMTP from the outgoing message queue. Transports are typically only configured when using smarthosts, i.e. when delivering through another SMTP server. Zero or one transport methods must be set in a transport, never multiple. When using an external party to send email for a domain, keep in mind you may have to add their IP address to your domain's SPF record, and possibly additional DKIM records."`
// Awkward naming of fields to get intended default behaviour for zero values.
NoOutgoingDMARCReports bool `sconf:"optional" sconf-doc:"Do not send DMARC reports (aggregate only). By default, aggregate reports on DMARC evaluations are sent to domains if their DMARC policy requests them. Reports are sent at whole hours, with a minimum of 1 hour and maximum of 24 hours, rounded up so a whole number of intervals cover 24 hours, aligned at whole days in UTC. Reports are sent from the postmaster@<mailhostname> address."`
NoOutgoingTLSReports bool `sconf:"optional" sconf-doc:"Do not send TLS reports. By default, reports about failed SMTP STARTTLS connections and related MTA-STS/DANE policies are sent to domains if their TLSRPT DNS record requests them. Reports covering a 24 hour UTC interval are sent daily. Reports are sent from the postmaster address of the configured domain the mailhostname is in. If there is no such domain, or it does not have DKIM configured, no reports are sent."`
OutgoingTLSReportsForAllSuccess bool `sconf:"optional" sconf-doc:"Also send TLS reports if there were no SMTP STARTTLS connection failures. By default, reports are only sent when at least one failure occurred. If a report is sent, it always includes the successful connection counts as well."`
QuotaMessageSize int64 `sconf:"optional" sconf-doc:"Default maximum total message size in bytes for each individual account, only applicable if greater than zero. Can be overridden per account. Attempting to add new messages to an account beyond its maximum total size will result in an error. Useful to prevent a single account from filling storage. The quota only applies to the email message files, not to any file system overhead and also not the message index database file (account for approximately 15% overhead)."`
// All IPs that were explicitly listened on for external SMTP. Only set when there
// are no unspecified external SMTP listeners and there is at most one for IPv4 and
// at most one for IPv6. Used for setting the local address when making outgoing
// connections. Those IPs are assumed to be in an SPF record for the domain,
@ -69,40 +88,79 @@ type Static struct {
GID uint32 `sconf:"-" json:"-"`
}
// InitialMailboxes are mailboxes created for a new account.
type InitialMailboxes struct {
SpecialUse SpecialUseMailboxes `sconf:"optional" sconf-doc:"Mailboxes to create for the special-use roles."`
Regular []string `sconf:"optional" sconf-doc:"Regular, non-special-use mailboxes to create."`
}
// SpecialUseMailboxes holds mailbox names for special-use roles. Mail clients
// recognize these special-use roles, e.g. appending sent messages to whichever
// mailbox has the Sent special-use flag.
type SpecialUseMailboxes struct {
Sent string `sconf:"optional"`
Archive string `sconf:"optional"`
Trash string `sconf:"optional"`
Draft string `sconf:"optional"`
Junk string `sconf:"optional"`
}
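For illustration, a hypothetical InitialMailboxes value as it could be constructed in Go; the mailbox names are made up, and in practice they come from the InitialMailboxes section of mox.conf:
// Hypothetical example; mailbox names are placeholders.
var exampleInitialMailboxes = InitialMailboxes{
	SpecialUse: SpecialUseMailboxes{
		Sent:    "Sent",
		Archive: "Archive",
		Trash:   "Trash",
		Draft:   "Drafts",
		Junk:    "Junk",
	},
	Regular: []string{"Receipts", "Mailinglists"},
}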
// Dynamic is the parsed form of domains.conf, and is automatically reloaded when changed.
type Dynamic struct {
Domains map[string]Domain `sconf-doc:"Domains for which email is accepted. For internationalized domains, use their IDNA names in UTF-8."`
Accounts map[string]Account `sconf-doc:"Accounts to which email can be delivered. An account can accept email for multiple domains, for multiple localparts, and deliver to multiple mailboxes."`
Domains map[string]Domain `sconf-doc:"NOTE: This config file is in 'sconf' format. Indent with tabs. Comments must be on their own line, they don't end a line. Do not escape or quote strings. Details: https://pkg.go.dev/github.com/mjl-/sconf.\n\n\nDomains for which email is accepted. For internationalized domains, use their IDNA names in UTF-8."`
Accounts map[string]Account `sconf-doc:"Accounts represent mox users, each with a password and email address(es) to which email can be delivered (possibly at different domains). Each account has its own on-disk directory holding its messages and index database. An account name is not an email address."`
WebDomainRedirects map[string]string `sconf:"optional" sconf-doc:"Redirect all requests from domain (key) to domain (value). Always redirects to HTTPS. For plain HTTP redirects, use a WebHandler with a WebRedirect."`
WebHandlers []WebHandler `sconf:"optional" sconf-doc:"Handle webserver requests by serving static files, redirecting or reverse-proxying HTTP(s). The first matching WebHandler will handle the request. Built-in handlers, e.g. for account, admin, autoconfig and mta-sts always run first. If no handler matches, the response status code is file not found (404). If functionality you need is missing, simply forward the requests to an application that can provide the needed functionality."`
WebHandlers []WebHandler `sconf:"optional" sconf-doc:"Handle webserver requests by serving static files, redirecting, reverse-proxying HTTP(s) or passing the request to an internal service. The first matching WebHandler will handle the request. Built-in system handlers, e.g. for ACME validation, autoconfig and mta-sts always run first. Built-in handlers for admin, account, webmail and webapi are evaluated after all handlers, including webhandlers (allowing for overrides of internal services for some domains). If no handler matches, the response status code is file not found (404). If webserver features are missing, forward the requests to an application that provides the needed functionality itself."`
Routes []Route `sconf:"optional" sconf-doc:"Routes for delivering outgoing messages through the queue. Each delivery attempt evaluates account routes, domain routes and finally these global routes. The transport of the first matching route is used in the delivery attempt. If no routes match, which is the default with no configured routes, messages are delivered directly from the queue."`
MonitorDNSBLs []string `sconf:"optional" sconf-doc:"DNS blocklists to periodically check whether the IPs we send from are listed, without using them for checking incoming deliveries. Also see DNSBLs in SMTP listeners in mox.conf, which specifies DNSBLs to use both for incoming deliveries and for checking our IPs against. Example DNSBLs: sbl.spamhaus.org, bl.spamcop.net."`
WebDNSDomainRedirects map[dns.Domain]dns.Domain `sconf:"-"`
WebDNSDomainRedirects map[dns.Domain]dns.Domain `sconf:"-" json:"-"`
MonitorDNSBLZones []dns.Domain `sconf:"-"`
ClientSettingDomains map[dns.Domain]struct{} `sconf:"-" json:"-"`
}
type ACME struct {
DirectoryURL string `sconf-doc:"For letsencrypt, use https://acme-v02.api.letsencrypt.org/directory."`
RenewBefore time.Duration `sconf:"optional" sconf-doc:"How long before expiration to renew the certificate. Default is 30 days."`
ContactEmail string `sconf-doc:"Email address to register at ACME provider. The provider can email you when certificates are about to expire. If you configure an address for which email is delivered by this server, keep in mind that TLS misconfigurations could result in such notification emails not arriving."`
Port int `sconf:"optional" sconf-doc:"TLS port for ACME validation, 443 by default. You should only override this if you cannot listen on port 443 directly. ACME will make requests to port 443, so you'll have to add an external mechanism to get the connection here, e.g. by configuring port forwarding."`
DirectoryURL string `sconf-doc:"For letsencrypt, use https://acme-v02.api.letsencrypt.org/directory."`
RenewBefore time.Duration `sconf:"optional" sconf-doc:"How long before expiration to renew the certificate. Default is 30 days."`
ContactEmail string `sconf-doc:"Email address to register at ACME provider. The provider can email you when certificates are about to expire. If you configure an address for which email is delivered by this server, keep in mind that TLS misconfigurations could result in such notification emails not arriving."`
Port int `sconf:"optional" sconf-doc:"TLS port for ACME validation, 443 by default. You should only override this if you cannot listen on port 443 directly. ACME will make requests to port 443, so you'll have to add an external mechanism to get the tls connection here, e.g. by configuring firewall-level port forwarding. Validation over the https port uses tls-alpn-01 with application-layer protocol negotiation, which essentially means the original tls connection must make it here unmodified; an https reverse proxy will not work."`
IssuerDomainName string `sconf:"optional" sconf-doc:"If set, used for suggested CAA DNS records, for restricting TLS certificate issuance to a Certificate Authority. If empty and DirectoryURL is for Let's Encrypt, this value is set automatically to letsencrypt.org."`
ExternalAccountBinding *ExternalAccountBinding `sconf:"optional" sconf-doc:"ACME providers can require that a request for a new ACME account reference an existing non-ACME account known to the provider. External account binding references that account by a key id, and authorizes new ACME account requests by signing it with a key known both by the ACME client and ACME provider."`
// ../rfc/8555:2111
Manager *autotls.Manager `sconf:"-" json:"-"`
}
type ExternalAccountBinding struct {
KeyID string `sconf-doc:"Key identifier, from ACME provider."`
KeyFile string `sconf-doc:"File containing the base64url-encoded key used to sign account requests with external account binding. The ACME provider will verify the account request is correctly signed by the key. File is evaluated relative to the directory of mox.conf."`
}
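As a sketch of how the external account binding settings could be consumed: the helper below reads the configured key file and builds an acme.ExternalAccountBinding. The helper name is hypothetical, the acme struct and field names are assumed to match golang.org/x/crypto/acme, and resolution of KeyFile relative to the mox.conf directory is omitted.
// Hypothetical helper, not from the mox source. Assumes the key file holds an
// unpadded base64url-encoded key, and that golang.org/x/crypto/acme exposes
// ExternalAccountBinding{KID, Key}.
func loadEAB(eab ExternalAccountBinding) (*acme.ExternalAccountBinding, error) {
	keyText, err := os.ReadFile(eab.KeyFile) // relative-to-mox.conf resolution omitted
	if err != nil {
		return nil, fmt.Errorf("reading eab key file: %v", err)
	}
	key, err := base64.RawURLEncoding.DecodeString(strings.TrimSpace(string(keyText)))
	if err != nil {
		return nil, fmt.Errorf("decoding base64url eab key: %v", err)
	}
	return &acme.ExternalAccountBinding{KID: eab.KeyID, Key: key}, nil
}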
type Listener struct {
IPs []string `sconf-doc:"Use 0.0.0.0 to listen on all IPv4 and/or :: to listen on all IPv6 addresses, but it is better to explicitly specify the IPs you want to use for email, as mox will make sure outgoing connections will only be made from one of those IPs."`
IPsNATed bool `sconf:"optional" sconf-doc:"Set this if the specified IPs are not the public IPs, but are NATed. This makes the DNS check skip a few checks related to IPs, such as for iprev, mx, spf, autoconfig, autodiscover."`
Hostname string `sconf:"optional" sconf-doc:"If empty, the config global Hostname is used."`
IPs []string `sconf-doc:"Use 0.0.0.0 to listen on all IPv4 and/or :: to listen on all IPv6 addresses, but it is better to explicitly specify the IPs you want to use for email, as mox will make sure outgoing connections will only be made from one of those IPs. If both outgoing IPv4 and IPv6 connectivity is possible, and only one family has explicitly configured addresses, both address families are still used for outgoing connections. Use the \"direct\" transport to limit address families for outgoing connections."`
NATIPs []string `sconf:"optional" sconf-doc:"If set, the mail server is configured behind a NAT and the IPs field lists internal addresses instead of the public IPs, while NATIPs lists the public IPs. Used during IP-related DNS self-checks, such as for iprev, mx, spf, autoconfig, autodiscover, and for autotls."`
IPsNATed bool `sconf:"optional" sconf-doc:"Deprecated, use NATIPs instead. If set, IPs are not the public IPs, but are NATed. Skips IP-related DNS self-checks."`
Hostname string `sconf:"optional" sconf-doc:"If empty, the config global Hostname is used. The internal services webadmin, webaccount, webmail and webapi only match requests to IPs, this hostname, \"localhost\". All except webadmin also match for any client settings domain."`
HostnameDomain dns.Domain `sconf:"-" json:"-"` // Set when parsing config.
TLS *TLS `sconf:"optional" sconf-doc:"For SMTP/IMAP STARTTLS, direct TLS and HTTPS connections."`
SMTPMaxMessageSize int64 `sconf:"optional" sconf-doc:"Maximum size in bytes accepted incoming and outgoing messages. Default is 100MB."`
SMTPMaxMessageSize int64 `sconf:"optional" sconf-doc:"Maximum size in bytes for incoming and outgoing messages. Default is 100MB."`
SMTP struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 25."`
NoSTARTTLS bool `sconf:"optional" sconf-doc:"Do not offer STARTTLS to secure the connection. Not recommended."`
RequireSTARTTLS bool `sconf:"optional" sconf-doc:"Do not accept incoming messages if STARTTLS is not active. Can be used in combination with a strict MTA-STS policy. A remote SMTP server may not support TLS and may not be able to deliver messages."`
DNSBLs []string `sconf:"optional" sconf-doc:"Addresses of DNS block lists for incoming messages. Block lists are only consulted for connections/messages without enough reputation to make an accept/reject decision. This prevents sending IPs of all communications to the block list provider. If any of the listed DNSBLs contains a requested IP address, the message is rejected as spam. The DNSBLs are checked for healthiness before use, at most once per 4 hours. Example DNSBLs: sbl.spamhaus.org, bl.spamcop.net"`
DNSBLZones []dns.Domain `sconf:"-"`
Port int `sconf:"optional" sconf-doc:"Default 25."`
NoSTARTTLS bool `sconf:"optional" sconf-doc:"Do not offer STARTTLS to secure the connection. Not recommended."`
RequireSTARTTLS bool `sconf:"optional" sconf-doc:"Do not accept incoming messages if STARTTLS is not active. Consider using in combination with an MTA-STS policy and/or DANE. A remote SMTP server may not support TLS and may not be able to deliver messages. Incoming messages for TLS reporting addresses ignore this setting and do not require TLS."`
NoRequireTLS bool `sconf:"optional" sconf-doc:"Do not announce the REQUIRETLS SMTP extension. Messages delivered using the REQUIRETLS extension should only be distributed onwards to servers also implementing the REQUIRETLS extension. In some situations, such as hosting mailing lists, this may not be feasible due to lack of support for the extension by mailing list subscribers."`
// Reoriginated messages (such as messages sent to mailing list subscribers) should
// keep REQUIRETLS. ../rfc/8689:412
DNSBLs []string `sconf:"optional" sconf-doc:"Addresses of DNS block lists for incoming messages. Block lists are only consulted for connections/messages without enough reputation to make an accept/reject decision. This prevents sending IPs of all communications to the block list provider. If any of the listed DNSBLs contains a requested IP address, the message is rejected as spam. The DNSBLs are checked for healthiness before use, at most once per 4 hours. IPs we can send from are periodically checked for being in the configured DNSBLs. See MonitorDNSBLs in domains.conf to only monitor IPs we send from, without using those DNSBLs for incoming messages. Example DNSBLs: sbl.spamhaus.org, bl.spamcop.net. See https://www.spamhaus.org/sbl/ and https://www.spamcop.net/ for more information and terms of use."`
FirstTimeSenderDelay *time.Duration `sconf:"optional" sconf-doc:"Delay before accepting a message from a first-time sender for the destination account. Default: 15s."`
TLSSessionTicketsDisabled *bool `sconf:"optional" sconf-doc:"Override default setting for enabling TLS session tickets. Disabling session tickets may work around TLS interoperability issues."`
DNSBLZones []dns.Domain `sconf:"-"`
} `sconf:"optional"`
Submission struct {
Enabled bool
@ -110,8 +168,9 @@ type Listener struct {
NoRequireSTARTTLS bool `sconf:"optional" sconf-doc:"Do not require STARTTLS. Since users must login, this means password may be sent without encryption. Not recommended."`
} `sconf:"optional" sconf-doc:"SMTP for submitting email, e.g. by email applications. Starts out in plain text, can be upgraded to TLS with the STARTTLS command. Prefer using Submissions which is always a TLS connection."`
Submissions struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 465."`
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 465."`
EnabledOnHTTPS bool `sconf:"optional" sconf-doc:"Additionally enable submission on HTTPS port 443 via TLS ALPN. TLS Application Layer Protocol Negotiation allows clients to request a specific protocol from the server as part of the TLS connection setup. When this setting is enabled and a client requests the 'smtp' protocol after TLS, it will be able to talk SMTP to Mox on port 443. This is meant to be useful as a censorship circumvention technique for Delta Chat."`
} `sconf:"optional" sconf-doc:"SMTP over TLS for submitting email, by email applications. Requires a TLS config."`
IMAP struct {
Enabled bool
@ -119,30 +178,19 @@ type Listener struct {
NoRequireSTARTTLS bool `sconf:"optional" sconf-doc:"Enable this only when the connection is otherwise encrypted (e.g. through a VPN)."`
} `sconf:"optional" sconf-doc:"IMAP for reading email, by email applications. Starts out in plain text, can be upgraded to TLS with the STARTTLS command. Prefer using IMAPS instead which is always a TLS connection."`
IMAPS struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 993."`
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 993."`
EnabledOnHTTPS bool `sconf:"optional" sconf-doc:"Additionally enable IMAP on HTTPS port 443 via TLS ALPN. TLS Application Layer Protocol Negotiation allows clients to request a specific protocol from the server as part of the TLS connection setup. When this setting is enabled and a client requests the 'imap' protocol after TLS, it will be able to talk IMAP to Mox on port 443. This is meant to be useful as a censorship circumvention technique for Delta Chat."`
} `sconf:"optional" sconf-doc:"IMAP over TLS for reading email, by email applications. Requires a TLS config."`
AccountHTTP struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 80."`
Path string `sconf:"optional" sconf-doc:"Path to serve account requests on, e.g. /mox/. Useful if domain serves other resources. Default is /."`
} `sconf:"optional" sconf-doc:"Account web interface, for email users wanting to change their accounts, e.g. set new password, set new delivery rulesets. Served at /."`
AccountHTTPS struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 80."`
Path string `sconf:"optional" sconf-doc:"Path to serve account requests on, e.g. /mox/. Useful if domain serves other resources. Default is /."`
} `sconf:"optional" sconf-doc:"Account web interface listener for HTTPS. Requires a TLS config."`
AdminHTTP struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 80."`
Path string `sconf:"optional" sconf-doc:"Path to serve admin requests on, e.g. /moxadmin/. Useful if domain serves other resources. Default is /admin/."`
} `sconf:"optional" sconf-doc:"Admin web interface, for managing domains, accounts, etc. Served at /admin/. Preferrably only enable on non-public IPs. Hint: use 'ssh -L 8080:localhost:80 you@yourmachine' and open http://localhost:8080/admin/, or set up a tunnel (e.g. WireGuard) and add its IP to the mox 'internal' listener."`
AdminHTTPS struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 443."`
Path string `sconf:"optional" sconf-doc:"Path to serve admin requests on, e.g. /moxadmin/. Useful if domain serves other resources. Default is /admin/."`
} `sconf:"optional" sconf-doc:"Admin web interface listener for HTTPS. Requires a TLS config. Preferrably only enable on non-public IPs."`
MetricsHTTP struct {
AccountHTTP WebService `sconf:"optional" sconf-doc:"Account web interface, for email users wanting to change their accounts, e.g. set new password, set new delivery rulesets. Default path is /."`
AccountHTTPS WebService `sconf:"optional" sconf-doc:"Account web interface listener like AccountHTTP, but for HTTPS. Requires a TLS config."`
AdminHTTP WebService `sconf:"optional" sconf-doc:"Admin web interface, for managing domains, accounts, etc. Default path is /admin/. Preferably only enable on non-public IPs. Hint: use 'ssh -L 8080:localhost:80 you@yourmachine' and open http://localhost:8080/admin/, or set up a tunnel (e.g. WireGuard) and add its IP to the mox 'internal' listener."`
AdminHTTPS WebService `sconf:"optional" sconf-doc:"Admin web interface listener like AdminHTTP, but for HTTPS. Requires a TLS config."`
WebmailHTTP WebService `sconf:"optional" sconf-doc:"Webmail client, for reading email. Default path is /webmail/."`
WebmailHTTPS WebService `sconf:"optional" sconf-doc:"Webmail client, like WebmailHTTP, but for HTTPS. Requires a TLS config."`
WebAPIHTTP WebService `sconf:"optional" sconf-doc:"Like WebAPIHTTP, but with plain HTTP, without TLS."`
WebAPIHTTPS WebService `sconf:"optional" sconf-doc:"WebAPI, a simple HTTP/JSON-based API for email, with HTTPS (requires a TLS config). Default path is /webapi/."`
MetricsHTTP struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 8010."`
} `sconf:"optional" sconf-doc:"Serve prometheus metrics, for monitoring. You should not enable this on a public IP."`
@ -161,64 +209,177 @@ type Listener struct {
NonTLS bool `sconf:"optional" sconf-doc:"If set, plain HTTP instead of HTTPS is spoken on the configured port. Can be useful when the mta-sts domain is reverse proxied."`
} `sconf:"optional" sconf-doc:"Serve MTA-STS policies describing SMTP TLS requirements. Requires a TLS config."`
WebserverHTTP struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Port for plain HTTP (non-TLS) webserver."`
Enabled bool
Port int `sconf:"optional" sconf-doc:"Port for plain HTTP (non-TLS) webserver."`
RateLimitDisabled bool `sconf:"optional" sconf-doc:"Disable rate limiting for all requests to this port."`
} `sconf:"optional" sconf-doc:"All configured WebHandlers will serve on an enabled listener."`
WebserverHTTPS struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Port for HTTPS webserver."`
Enabled bool
Port int `sconf:"optional" sconf-doc:"Port for HTTPS webserver."`
RateLimitDisabled bool `sconf:"optional" sconf-doc:"Disable rate limiting for all requests to this port."`
} `sconf:"optional" sconf-doc:"All configured WebHandlers will serve on an enabled listener. Either ACME must be configured, or for each WebHandler domain a TLS certificate must be configured."`
}
type Domain struct {
Description string `sconf:"optional" sconf-doc:"Free-form description of domain."`
LocalpartCatchallSeparator string `sconf:"optional" sconf-doc:"If not empty, only the string before the separator is used for email delivery decisions. For example, if set to \"+\", you+anything@example.com will be delivered to you@example.com."`
LocalpartCaseSensitive bool `sconf:"optional" sconf-doc:"If set, upper/lower case is relevant for email delivery."`
DKIM DKIM `sconf:"optional" sconf-doc:"With DKIM signing, a domain is taking responsibility for (content of) emails it sends, letting receiving mail servers build up a (hopefully positive) reputation of the domain, which can help with mail delivery."`
DMARC *DMARC `sconf:"optional" sconf-doc:"With DMARC, a domain publishes, in DNS, a policy on how other mail servers should handle incoming messages with the From-header matching this domain and/or subdomain (depending on the configured alignment). Receiving mail servers use this to build up a reputation of this domain, which can help with mail delivery. A domain can also publish an email address to which reports about DMARC verification results can be sent by verifying mail servers, useful for monitoring. Incoming DMARC reports are automatically parsed, validated, added to metrics and stored in the reporting database for later display in the admin web pages."`
MTASTS *MTASTS `sconf:"optional" sconf-doc:"With MTA-STS a domain publishes, in DNS, presence of a policy for using/requiring TLS for SMTP connections. The policy is served over HTTPS."`
TLSRPT *TLSRPT `sconf:"optional" sconf-doc:"With TLSRPT a domain specifies in DNS where reports about encountered SMTP TLS behaviour should be sent. Useful for monitoring. Incoming TLS reports are automatically parsed, validated, added to metrics and stored in the reporting database for later display in the admin web pages."`
// WebService is an internal web interface: webmail, webaccount, webadmin, webapi.
type WebService struct {
Enabled bool
Port int `sconf:"optional" sconf-doc:"Default 80 for HTTP and 443 for HTTPS. See Hostname at Listener for hostname matching behaviour."`
Path string `sconf:"optional" sconf-doc:"Path to serve requests on. Should end with a slash, related to cookie paths."`
Forwarded bool `sconf:"optional" sconf-doc:"If set, X-Forwarded-* headers are used for the remote IP address for rate limiting and for the \"secure\" status of cookies."`
}
Domain dns.Domain `sconf:"-" json:"-"`
// Transport is a method to deliver a message. At most one of the fields can
// be non-nil. The non-nil field represents the type of transport. For a
// transport with all fields nil, regular email delivery is done.
type Transport struct {
Submissions *TransportSMTP `sconf:"optional" sconf-doc:"Submission SMTP over a TLS connection to submit email to a remote queue."`
Submission *TransportSMTP `sconf:"optional" sconf-doc:"Submission SMTP over a plain TCP connection (possibly with STARTTLS) to submit email to a remote queue."`
SMTP *TransportSMTP `sconf:"optional" sconf-doc:"SMTP over a plain connection (possibly with STARTTLS), typically for old-fashioned unauthenticated relaying to a remote queue."`
Socks *TransportSocks `sconf:"optional" sconf-doc:"Like regular direct delivery, but makes outgoing connections through a SOCKS proxy."`
Direct *TransportDirect `sconf:"optional" sconf-doc:"Like regular direct delivery, but allows tweaking outgoing connections."`
Fail *TransportFail `sconf:"optional" sconf-doc:"Immediately fails the delivery attempt."`
}
// TransportSMTP delivers messages by "submission" (SMTP, typically
// authenticated) to the queue of a remote host (smarthost), or by relaying
// (SMTP, typically unauthenticated).
type TransportSMTP struct {
Host string `sconf-doc:"Host name to connect to and for verifying its TLS certificate."`
Port int `sconf:"optional" sconf-doc:"If unset or 0, the default port for submission(s)/smtp is used: 25 for SMTP, 465 for submissions (with TLS), 587 for submission (possibly with STARTTLS)."`
STARTTLSInsecureSkipVerify bool `sconf:"optional" sconf-doc:"If set, an unverifiable remote TLS certificate during STARTTLS is accepted."`
NoSTARTTLS bool `sconf:"optional" sconf-doc:"If set for submission or smtp transport, do not attempt STARTTLS on the connection. Authentication credentials and messages will be transferred in clear text."`
Auth *SMTPAuth `sconf:"optional" sconf-doc:"If set, authentication credentials for the remote server."`
DNSHost dns.Domain `sconf:"-" json:"-"`
}
// SMTPAuth hold authentication credentials used when delivering messages
// through a smarthost.
type SMTPAuth struct {
Username string
Password string
Mechanisms []string `sconf:"optional" sconf-doc:"Allowed authentication mechanisms. Defaults to SCRAM-SHA-256-PLUS, SCRAM-SHA-256, SCRAM-SHA-1-PLUS, SCRAM-SHA-1, CRAM-MD5. Not included by default: PLAIN. Specify the strongest mechanism known to be implemented by the server to prevent mechanism downgrade attacks."`
EffectiveMechanisms []string `sconf:"-" json:"-"`
}
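As a rough illustration of how the transport fields above fit together, here is a minimal Go sketch (not part of this diff) that builds an authenticated submissions smarthost using the config types shown here. The host name, credentials and mechanism list are made-up example values, and in a real deployment these fields live in the sconf config file rather than in Go code.

package main

import (
	"fmt"

	"github.com/mjl-/mox/config"
)

func main() {
	// Deliver through a smarthost over implicit TLS (submissions, port 465),
	// authenticating with SCRAM so no plaintext password is sent.
	t := config.Transport{
		Submissions: &config.TransportSMTP{
			Host: "smtp.example.com", // Also used to verify the remote TLS certificate.
			Port: 465,
			Auth: &config.SMTPAuth{
				Username:   "relay@example.com",
				Password:   "secret",
				Mechanisms: []string{"SCRAM-SHA-256-PLUS", "SCRAM-SHA-256"},
			},
		},
	}
	fmt.Println("smarthost:", t.Submissions.Host)
}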
type TransportSocks struct {
Address string `sconf-doc:"Address of SOCKS proxy, of the form host:port or ip:port."`
RemoteIPs []string `sconf-doc:"IP addresses connections from the SOCKS server will originate from. These IP addresses should be configured in the SPF record (keep in mind DNS record time to live (TTL) when adding a SOCKS proxy). Reverse DNS should be set up for these addresses, resolving to RemoteHostname. These are typically the IPv4 and IPv6 addresses of the host in the Address field."`
RemoteHostname string `sconf-doc:"Hostname belonging to RemoteIPs. This name is used in the SMTP EHLO command. This is typically the hostname of the host in the Address field."`
// todo: add authentication credentials?
IPs []net.IP `sconf:"-" json:"-"` // Parsed form of RemoteIPs.
Hostname dns.Domain `sconf:"-" json:"-"` // Parsed form of RemoteHostname
}
type TransportDirect struct {
DisableIPv4 bool `sconf:"optional" sconf-doc:"If set, outgoing SMTP connections will *NOT* use IPv4 addresses to connect to remote SMTP servers."`
DisableIPv6 bool `sconf:"optional" sconf-doc:"If set, outgoing SMTP connections will *NOT* use IPv6 addresses to connect to remote SMTP servers."`
IPFamily string `sconf:"-" json:"-"`
}
// TransportFail is a transport that fails all delivery attempts.
type TransportFail struct {
SMTPCode int `sconf:"optional" sconf-doc:"SMTP error code and optional enhanced error code to use for the failure. If empty, 554 is used (transaction failed)."`
SMTPMessage string `sconf:"optional" sconf-doc:"Message to include for the rejection. It will be shown in the DSN."`
// Effective values to use, set when parsing.
Code int `sconf:"-"`
Message string `sconf:"-"`
}
type Domain struct {
Disabled bool `sconf:"optional" sconf-doc:"Disabled domains can be useful during/before migrations. Domains that are disabled can still be configured like normal, including adding addresses using the domain to accounts. However, disabled domains: 1. Do not try to fetch ACME certificates. TLS connections to host names involving the email domain will fail. A TLS certificate for the hostname (that will be used as MX) itself will be requested. 2. Incoming deliveries over SMTP are rejected with a temporary error '450 4.2.1 recipient domain temporarily disabled'. 3. Submissions over SMTP using an (envelope) SMTP MAIL FROM address or message 'From' address of a disabled domain will be rejected with a temporary error '451 4.3.0 sender domain temporarily disabled'. Note that accounts with addresses at disabled domains can still log in and read email (unless the account itself is disabled)."`
Description string `sconf:"optional" sconf-doc:"Free-form description of domain."`
ClientSettingsDomain string `sconf:"optional" sconf-doc:"Hostname for client settings instead of the mail server hostname. E.g. mail.<domain>. For future migration to another mail operator without requiring all clients to update their settings, it is convenient to have client settings that reference a subdomain of the hosted domain instead of the hostname of the server where the mail is currently hosted. If empty, the hostname of the mail server is used for client configurations. Unicode name."`
LocalpartCatchallSeparator string `sconf:"optional" sconf-doc:"If not empty, only the string before the separator is used for email delivery decisions. For example, if set to \"+\", you+anything@example.com will be delivered to you@example.com."`
LocalpartCatchallSeparators []string `sconf:"optional" sconf-doc:"Similar to LocalpartCatchallSeparator, but in case multiple are needed. For example both \"+\" and \"-\". Only one of LocalpartCatchallSeparator or LocalpartCatchallSeparators can be set. If set, the first separator is used to make unique addresses for outgoing SMTP connections with FromIDLoginAddresses."`
LocalpartCaseSensitive bool `sconf:"optional" sconf-doc:"If set, upper/lower case is relevant for email delivery."`
DKIM DKIM `sconf:"optional" sconf-doc:"With DKIM signing, a domain is taking responsibility for (content of) emails it sends, letting receiving mail servers build up a (hopefully positive) reputation of the domain, which can help with mail delivery."`
DMARC *DMARC `sconf:"optional" sconf-doc:"With DMARC, a domain publishes, in DNS, a policy on how other mail servers should handle incoming messages with the From-header matching this domain and/or subdomain (depending on the configured alignment). Receiving mail servers use this to build up a reputation of this domain, which can help with mail delivery. A domain can also publish an email address to which reports about DMARC verification results can be sent by verifying mail servers, useful for monitoring. Incoming DMARC reports are automatically parsed, validated, added to metrics and stored in the reporting database for later display in the admin web pages."`
MTASTS *MTASTS `sconf:"optional" sconf-doc:"MTA-STS is a mechanism that allows publishing a policy with requirements for WebPKI-verified SMTP STARTTLS connections for email delivered to a domain. Existence of a policy is announced in a DNS TXT record (often unprotected/unverified, MTA-STS's weak spot). If a policy exists, it is fetched with a WebPKI-verified HTTPS request. The policy can indicate that WebPKI-verified SMTP STARTTLS is required, and which MX hosts (optionally with a wildcard pattern) are allowed. MX hosts to deliver to are still taken from DNS (again, not necessarily protected/verified), but messages will only be delivered to domains matching the MX hosts from the published policy. Mail servers look up the MTA-STS policy when first delivering to a domain, then keep a cached copy, periodically checking the DNS record if a new policy is available, and fetching and caching it if so. To update a policy, first serve a new policy with an updated policy ID, then update the DNS record (not the other way around). To remove an enforced policy, publish an updated policy with mode \"none\" for a long enough period so all cached policies have been refreshed (taking DNS TTL and policy max age into account), then remove the policy from DNS, wait for TTL to expire, and stop serving the policy."`
TLSRPT *TLSRPT `sconf:"optional" sconf-doc:"With TLSRPT a domain specifies in DNS where reports about encountered SMTP TLS behaviour should be sent. Useful for monitoring. Incoming TLS reports are automatically parsed, validated, added to metrics and stored in the reporting database for later display in the admin web pages."`
Routes []Route `sconf:"optional" sconf-doc:"Routes for delivering outgoing messages through the queue. Each delivery attempt evaluates account routes, these domain routes and finally global routes. The transport of the first matching route is used in the delivery attempt. If no routes match, which is the default with no configured routes, messages are delivered directly from the queue."`
Aliases map[string]Alias `sconf:"optional" sconf-doc:"Aliases that cause messages to be delivered to one or more locally configured addresses. Keys are localparts (encoded, as they appear in email addresses)."`
Domain dns.Domain `sconf:"-"`
ClientSettingsDNSDomain dns.Domain `sconf:"-" json:"-"`
// Set when DMARC and TLSRPT (when set) has an address with different domain (we're
// hosting the reporting), and there are no destination addresses configured for
// the domain. Disables some functionality related to hosting a domain.
ReportsOnly bool `sconf:"-" json:"-"`
LocalpartCatchallSeparatorsEffective []string `sconf:"-"` // Either LocalpartCatchallSeparators, the value of LocalpartCatchallSeparator, or empty.
}
// todo: allow external addresses as members of aliases. we would add messages for them to the queue for outgoing delivery. we should require an admin address to which delivery failures will be delivered (locally, and to use in smtp mail from, so dsns go there). also take care to evaluate smtputf8 (if external address requires utf8 and incoming transaction didn't).
// todo: as alternative to PostPublic, allow specifying a list of addresses (dmarc-like verified) that are (the only addresses) allowed to post to the list. if msgfrom is an external address, require a valid dkim signature to prevent dmarc-policy-related issues when delivering to remote members.
// todo: add option to require messages sent to an alias have that alias as From or Reply-To address?
type Alias struct {
Addresses []string `sconf-doc:"Expanded addresses to deliver to. These must currently be addresses of local accounts. To prevent duplicate messages, a member address that is also an explicit recipient in the SMTP transaction will only have the message delivered once. If the address in the message From header is a member, that member also won't receive the message."`
PostPublic bool `sconf:"optional" sconf-doc:"If true, anyone can send messages to the list. Otherwise only members, based on message From address, which is assumed to be DMARC-like-verified."`
ListMembers bool `sconf:"optional" sconf-doc:"If true, members can see addresses of members."`
AllowMsgFrom bool `sconf:"optional" sconf-doc:"If true, members are allowed to send messages with this alias address in the message From header."`
LocalpartStr string `sconf:"-"` // In encoded form.
Domain dns.Domain `sconf:"-"`
ParsedAddresses []AliasAddress `sconf:"-"` // Matches addresses.
}
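For concreteness, a small hedged sketch of an Alias as described by the fields above: a support address expanded to two local account addresses. The addresses are example values only.

package main

import (
	"fmt"

	"github.com/mjl-/mox/config"
)

func main() {
	// Messages to the alias are delivered to both member addresses. Only
	// members may post (PostPublic false), members may see each other
	// (ListMembers true) and may use the alias in their From header.
	a := config.Alias{
		Addresses:    []string{"mjl@mox.example", "other@mox.example"},
		PostPublic:   false,
		ListMembers:  true,
		AllowMsgFrom: true,
	}
	fmt.Println("alias members:", a.Addresses)
}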
type AliasAddress struct {
Address smtp.Address // Parsed address.
AccountName string // Looked up.
Destination Destination // Belonging to address.
}
type DMARC struct {
Localpart string `sconf-doc:"Address-part before the @ that accepts DMARC reports. Must be non-internationalized. Recommended value: dmarc-reports."`
Localpart string `sconf-doc:"Address-part before the @ that accepts DMARC reports. Must be non-internationalized. Recommended value: dmarcreports."`
Domain string `sconf:"optional" sconf-doc:"Alternative domain for reporting address, for incoming reports. Typically empty, causing the domain wherein this config exists to be used. Can be used to receive reports for domains that aren't fully hosted on this server. Configure such a domain as a hosted domain without making all the DNS changes, and configure this field with a domain that is fully hosted on this server, so the localpart and the domain of this field form a reporting address. Then only update the DMARC DNS record for the not fully hosted domain, ensuring the reporting address is specified in its \"rua\" field as shown in the suggested DNS settings. Unicode name."`
Account string `sconf-doc:"Account to deliver to."`
Mailbox string `sconf-doc:"Mailbox to deliver to, e.g. DMARC."`
ParsedLocalpart smtp.Localpart `sconf:"-"`
ParsedLocalpart smtp.Localpart `sconf:"-"` // Lower-case if case-sensitivity is not configured for domain. Not "canonical" for catchall separators for backwards compatibility.
DNSDomain dns.Domain `sconf:"-"` // Effective domain, always set based on Domain field or Domain where this is configured.
}
type MTASTS struct {
PolicyID string `sconf-doc:"Policies are versioned. The version must be specified in the DNS record. If you change a policy, first change it in mox, then update the DNS record."`
Mode mtasts.Mode `sconf-doc:"testing, enforce or none. If set to enforce, a remote SMTP server will not deliver email to us if it cannot make a TLS connection."`
PolicyID string `sconf-doc:"Policies are versioned. The version must be specified in the DNS record. If you change a policy, first change it here to update the served policy, then update the DNS record with the updated policy ID."`
Mode mtasts.Mode `sconf-doc:"If set to \"enforce\", a remote SMTP server will not deliver email to us if it cannot make a WebPKI-verified SMTP STARTTLS connection. In mode \"testing\", deliveries can be done without verified TLS, but errors will be reported through TLS reporting. In mode \"none\", verified TLS is not required, used for phasing out an MTA-STS policy."`
MaxAge time.Duration `sconf-doc:"How long a remote mail server is allowed to cache a policy. Typically 1 or several weeks."`
MX []string `sconf:"optional" sconf-doc:"List of server names allowed for SMTP. If empty, the configured hostname is set. Host names can contain a wildcard (*) as a leading label (matching a single label, e.g. *.example matches host.example, not sub.host.example)."`
// todo: parse mx as valid mtasts.Policy.MX, with dns.ParseDomain but taking wildcard into account
}
type TLSRPT struct {
Localpart string `sconf-doc:"Address-part before the @ that accepts TLSRPT reports. Recommended value: tls-reports."`
Localpart string `sconf-doc:"Address-part before the @ that accepts TLSRPT reports. Recommended value: tlsreports."`
Domain string `sconf:"optional" sconf-doc:"Alternative domain for reporting address, for incoming reports. Typically empty, causing the domain wherein this config exists to be used. Can be used to receive reports for domains that aren't fully hosted on this server. Configure such a domain as a hosted domain without making all the DNS changes, and configure this field with a domain that is fully hosted on this server, so the localpart and the domain of this field form a reporting address. Then only update the TLSRPT DNS record for the not fully hosted domain, ensuring the reporting address is specified in its \"rua\" field as shown in the suggested DNS settings. Unicode name."`
Account string `sconf-doc:"Account to deliver to."`
Mailbox string `sconf-doc:"Mailbox to deliver to, e.g. TLSRPT."`
ParsedLocalpart smtp.Localpart `sconf:"-"`
ParsedLocalpart smtp.Localpart `sconf:"-"` // Lower-case if case-sensitivity is not configured for domain. Not "canonical" for catchall separators for backwards compatibility.
DNSDomain dns.Domain `sconf:"-"` // Effective domain, always set based on Domain field or Domain where this is configured.
}
type Canonicalization struct {
HeaderRelaxed bool `sconf-doc:"If set, some modifications to the headers (mostly whitespace) are allowed."`
BodyRelaxed bool `sconf-doc:"If set, some whitespace modifications to the message body are allowed."`
}
type Selector struct {
Hash string `sconf:"optional" sconf-doc:"sha256 (default) or (older, not recommended) sha1"`
HashEffective string `sconf:"-"`
Canonicalization struct {
HeaderRelaxed bool `sconf-doc:"If set, some modifications to the headers (mostly whitespace) are allowed."`
BodyRelaxed bool `sconf-doc:"If set, some whitespace modifications to the message body are allowed."`
} `sconf:"optional"`
Headers []string `sconf:"optional" sconf-doc:"Headers to sign with DKIM. If empty, a reasonable default set of headers is selected."`
HeadersEffective []string `sconf:"-"`
DontSealHeaders bool `sconf:"optional" sconf-doc:"If set, don't prevent duplicate headers from being added. Not recommended."`
Expiration string `sconf:"optional" sconf-doc:"Period a signature is valid after signing, as duration, e.g. 72h. The period should be enough for delivery at the final destination, potentially with several hops/relays. In the order of days at least."`
PrivateKeyFile string `sconf-doc:"Either an RSA or ed25519 private key file in PKCS8 PEM form."`
Hash string `sconf:"optional" sconf-doc:"sha256 (default) or (older, not recommended) sha1."`
HashEffective string `sconf:"-"`
Canonicalization Canonicalization `sconf:"optional"`
Headers []string `sconf:"optional" sconf-doc:"Headers to sign with DKIM. If empty, a reasonable default set of headers is selected."`
HeadersEffective []string `sconf:"-"` // Used when signing. Based on Headers from config, or the reasonable default.
DontSealHeaders bool `sconf:"optional" sconf-doc:"If set, don't prevent duplicate headers from being added. Not recommended."`
Expiration string `sconf:"optional" sconf-doc:"Period a signature is valid after signing, as duration, e.g. 72h. The period should be enough for delivery at the final destination, potentially with several hops/relays. In the order of days at least."`
PrivateKeyFile string `sconf-doc:"Either an RSA or ed25519 private key file in PKCS8 PEM form."`
Algorithm string `sconf:"-"` // "ed25519", "rsa-*", based on private key.
ExpirationSeconds int `sconf:"-" json:"-"` // Parsed from Expiration.
Key crypto.Signer `sconf:"-" json:"-"` // As parsed with x509.ParsePKCS8PrivateKey.
Domain dns.Domain `sconf:"-" json:"-"` // Of selector only, not FQDN.
@ -229,28 +390,81 @@ type DKIM struct {
Sign []string `sconf:"optional" sconf-doc:"List of selectors that emails will be signed with."`
}
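The Selector.PrivateKeyFile doc above asks for an RSA or ed25519 private key in PKCS#8 PEM form. Below is a hedged sketch, using only the Go standard library, that generates an ed25519 key in that form; the output filename is an arbitrary example, not a name mox requires.

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
)

func main() {
	// Generate an ed25519 key and write it as PKCS#8 PEM, the form expected
	// by the PrivateKeyFile field above.
	_, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatalf("generate key: %v", err)
	}
	der, err := x509.MarshalPKCS8PrivateKey(priv)
	if err != nil {
		log.Fatalf("marshal pkcs8: %v", err)
	}
	pemData := pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: der})
	if err := os.WriteFile("dkim.ed25519.pkcs8.pem", pemData, 0600); err != nil {
		log.Fatalf("write key file: %v", err)
	}
}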
type Account struct {
Domain string `sconf-doc:"Default domain for account. Deprecated behaviour: If a destination is not a full address but only a localpart, this domain is added to form a full address."`
Description string `sconf:"optional" sconf-doc:"Free form description, e.g. full name or alternative contact info."`
Destinations map[string]Destination `sconf-doc:"Destinations, keys are email addresses (with IDNA domains). If the address is of the form '@domain', i.e. with localpart missing, it serves as a catchall for the domain, matching all messages that are not explicitly configured. Deprecated behaviour: If the address is not a full address but a localpart, it is combined with Domain to form a full address."`
SubjectPass struct {
Period time.Duration `sconf-doc:"How long unique values are accepted after generating, e.g. 12h."` // todo: have a reasonable default for this?
} `sconf:"optional" sconf-doc:"If configured, messages classified as weakly spam are rejected with instructions to retry delivery, but this time with a signed token added to the subject. During the next delivery attempt, the signed token will bypass the spam filter. Messages with a clear spam signal, such as a known bad reputation, are rejected/delayed without a signed token."`
RejectsMailbox string `sconf:"optional" sconf-doc:"Mail that looks like spam will be rejected, but a copy can be stored temporarily in a mailbox, e.g. Rejects. If mail isn't coming in when you expect, you can look there. The mail still isn't accepted, so the remote mail server may retry (hopefully, if legitimate), or give up (hopefully, if indeed a spammer). Messages are automatically removed from this mailbox, so do not set it to a mailbox that has messages you want to keep."`
AutomaticJunkFlags struct {
Enabled bool `sconf-doc:"If enabled, flags will be set automatically if they match a regular expression below. When two of the three mailbox regular expressions are set, the remaining one will match all unmatched messages. Messages are matched in the order specified and the search stops on the first match. Mailboxes are lowercased before matching."`
JunkMailboxRegexp string `sconf:"optional" sconf-doc:"Example: ^(junk|spam)."`
NeutralMailboxRegexp string `sconf:"optional" sconf-doc:"Example: ^(inbox|neutral|postmaster|dmarc|tlsrpt|rejects), and you may wish to add trash depending on how you use it, or leave this empty."`
NotJunkMailboxRegexp string `sconf:"optional" sconf-doc:"Example: .* or an empty string."`
} `sconf:"optional" sconf-doc:"Automatically set $Junk and $NotJunk flags based on the mailbox that messages are delivered/moved/copied to. Email clients typically have too limited functionality to conveniently set these flags, especially $NonJunk, but they can all move messages to a different mailbox, so this helps them."`
JunkFilter *JunkFilter `sconf:"optional" sconf-doc:"Content-based filtering, using the junk-status of individual messages to rank words in such messages as spam or ham. It is recommended you always set the applicable (non)-junk status on messages, and that you do not empty your Trash because those messages contain valuable ham/spam training information."` // todo: sane defaults for junkfilter
MaxOutgoingMessagesPerDay int `sconf:"optional" sconf-doc:"Maximum number of outgoing messages for this account in a 24 hour window. This limits the damage to recipients and the reputation of this mail server in case of account compromise. Default 1000."`
MaxFirstTimeRecipientsPerDay int `sconf:"optional" sconf-doc:"Maximum number of first-time recipients in outgoing messages for this account in a 24 hour window. This limits the damage to recipients and the reputation of this mail server in case of account compromise. Default 200."`
type Route struct {
FromDomain []string `sconf:"optional" sconf-doc:"Matches if the envelope from domain matches one of the configured domains, or if the list is empty. If a domain starts with a dot, prefixes of the domain also match."`
ToDomain []string `sconf:"optional" sconf-doc:"Like FromDomain, but matching against the envelope to domain."`
MinimumAttempts int `sconf:"optional" sconf-doc:"Matches if at least this many deliveries have already been attempted. This can be used to attempt sending through a smarthost when direct delivery has failed several times."`
Transport string `sconf-doc:"The transport used for delivering the message that matches requirements of the above fields."`
DNSDomain dns.Domain `sconf:"-"` // Parsed form of Domain.
JunkMailbox *regexp.Regexp `sconf:"-" json:"-"`
NeutralMailbox *regexp.Regexp `sconf:"-" json:"-"`
NotJunkMailbox *regexp.Regexp `sconf:"-" json:"-"`
// todo future: add ToMX, where we look up the MX record of the destination domain and check (the first, any, all?) mx host against the values in ToMX.
FromDomainASCII []string `sconf:"-"`
ToDomainASCII []string `sconf:"-"`
ResolvedTransport Transport `sconf:"-" json:"-"`
}
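To tie the Route fields to the transports defined earlier, here is a brief illustrative sketch (values assumed, not from the diff) of a route that switches to a transport named "smarthost" after two failed direct delivery attempts to example.com.

package main

import (
	"fmt"

	"github.com/mjl-/mox/config"
)

func main() {
	// After two failed direct attempts for example.com, deliver through the
	// transport named "smarthost", which would be defined elsewhere in the
	// config (for instance as a TransportSMTP like the sketch further above).
	r := config.Route{
		ToDomain:        []string{"example.com"},
		MinimumAttempts: 2,
		Transport:       "smarthost",
	}
	fmt.Printf("route to %v via %q after %d attempts\n", r.ToDomain, r.Transport, r.MinimumAttempts)
}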
// todo: move RejectsMailbox to store.Mailbox.SpecialUse, possibly with "X" prefix?
// note: outgoing hook events are in ../queue/hooks.go, ../mox-/config.go, ../queue.go and ../webapi/gendoc.sh. keep in sync.
type OutgoingWebhook struct {
URL string `sconf-doc:"URL to POST webhooks."`
Authorization string `sconf:"optional" sconf-doc:"If not empty, value of Authorization header to add to HTTP requests."`
Events []string `sconf:"optional" sconf-doc:"Events to send outgoing delivery notifications for. If absent, all events are sent. Valid values: delivered, suppressed, delayed, failed, relayed, expanded, canceled, unrecognized."`
}
type IncomingWebhook struct {
URL string `sconf-doc:"URL to POST webhooks to for incoming deliveries over SMTP."`
Authorization string `sconf:"optional" sconf-doc:"If not empty, value of Authorization header to add to HTTP requests."`
}
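The webhook structs above only specify a URL and an optional Authorization header value; the payload schema is not part of this diff. A hedged sketch of a receiving endpoint that checks that header and reads the body as opaque bytes could look like this (path, port and token are assumptions):

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	const wantAuth = "Bearer s3cret" // Must match the Authorization value in the webhook config.
	http.HandleFunc("/mox-hooks", func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") != wantAuth {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "read error", http.StatusBadRequest)
			return
		}
		log.Printf("webhook received: %d bytes", len(body))
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8488", nil))
}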
type SubjectPass struct {
Period time.Duration `sconf-doc:"How long unique values are accepted after generating, e.g. 12h."` // todo: have a reasonable default for this?
}
type AutomaticJunkFlags struct {
Enabled bool `sconf-doc:"If enabled, junk/nonjunk flags will be set automatically if they match some of the regular expressions. When two of the three mailbox regular expressions are set, the remaining one will match all unmatched messages. Messages are matched in the order 'junk', 'neutral', 'not junk', and the search stops on the first match. Mailboxes are lowercased before matching."`
JunkMailboxRegexp string `sconf:"optional" sconf-doc:"Example: ^(junk|spam)."`
NeutralMailboxRegexp string `sconf:"optional" sconf-doc:"Example: ^(inbox|neutral|postmaster|dmarc|tlsrpt|rejects), and you may wish to add trash depending on how you use it, or leave this empty."`
NotJunkMailboxRegexp string `sconf:"optional" sconf-doc:"Example: .* or an empty string."`
}
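The Enabled doc string above describes the matching order: the destination mailbox name is lowercased and tested against the junk, neutral and not-junk patterns in that order, stopping at the first match. A rough standalone sketch of that order (not mox's implementation), using the example patterns from the doc strings:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func classify(mailbox string) string {
	// Example patterns taken from the doc strings above.
	junk := regexp.MustCompile(`^(junk|spam)`)
	neutral := regexp.MustCompile(`^(inbox|neutral|postmaster|dmarc|tlsrpt|rejects)`)
	notJunk := regexp.MustCompile(`.*`)

	name := strings.ToLower(mailbox) // Mailboxes are lowercased before matching.
	switch {
	case junk.MatchString(name):
		return "$Junk"
	case neutral.MatchString(name):
		return "no flag"
	case notJunk.MatchString(name):
		return "$NotJunk"
	}
	return "no flag"
}

func main() {
	for _, mb := range []string{"Spam", "Inbox", "Archive/2024"} {
		fmt.Printf("%s -> %s\n", mb, classify(mb))
	}
}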
type Account struct {
OutgoingWebhook *OutgoingWebhook `sconf:"optional" sconf-doc:"Webhooks for events about outgoing deliveries."`
IncomingWebhook *IncomingWebhook `sconf:"optional" sconf-doc:"Webhooks for events about incoming deliveries over SMTP."`
FromIDLoginAddresses []string `sconf:"optional" sconf-doc:"Login addresses that cause outgoing email to be sent with SMTP MAIL FROM addresses with a unique id after the localpart catchall separator (which must be enabled when addresses are specified here). Any delivery status notifications (DSN, e.g. for bounces) can be related to the original message and recipient with unique ids. You can log in to an account with any valid email address, including variants with the localpart catchall separator. You can use this mechanism to both send outgoing messages with and without unique fromid for a given email address. With the webapi and webmail, a unique id will be generated. For submission, the id from the SMTP MAIL FROM command is used if present, and a unique id is generated otherwise."`
KeepRetiredMessagePeriod time.Duration `sconf:"optional" sconf-doc:"Period to keep messages retired from the queue (delivered or failed) around. Keeping retired messages is useful for maintaining the suppression list for transactional email, for matching incoming DSNs to sent messages, and for debugging. The time at which to clean up (remove) is calculated at retire time. E.g. 168h (1 week)."`
KeepRetiredWebhookPeriod time.Duration `sconf:"optional" sconf-doc:"Period to keep webhooks retired from the queue (delivered or failed) around. Useful for debugging. The time at which to clean up (remove) is calculated at retire time. E.g. 168h (1 week)."`
LoginDisabled string `sconf:"optional" sconf-doc:"If non-empty, login attempts on all protocols (e.g. SMTP/IMAP, web interfaces) are rejected with this error message. Useful during migrations. Incoming deliveries for addresses of this account are still accepted as normal."`
Domain string `sconf-doc:"Default domain for account. Deprecated behaviour: If a destination is not a full address but only a localpart, this domain is added to form a full address."`
Description string `sconf:"optional" sconf-doc:"Free form description, e.g. full name or alternative contact info."`
FullName string `sconf:"optional" sconf-doc:"Full name, to use in message From header when composing messages in webmail. Can be overridden per destination."`
Destinations map[string]Destination `sconf:"optional" sconf-doc:"Destinations, keys are email addresses (with IDNA domains). All destinations are allowed for logging in with IMAP/SMTP/webmail. If no destinations are configured, the account cannot log in. If the address is of the form '@domain', i.e. with localpart missing, it serves as a catchall for the domain, matching all messages that are not explicitly configured. Deprecated behaviour: If the address is not a full address but a localpart, it is combined with Domain to form a full address."`
SubjectPass SubjectPass `sconf:"optional" sconf-doc:"If configured, messages classified as weakly spam are rejected with instructions to retry delivery, but this time with a signed token added to the subject. During the next delivery attempt, the signed token will bypass the spam filter. Messages with a clear spam signal, such as a known bad reputation, are rejected/delayed without a signed token."`
QuotaMessageSize int64 `sconf:"optional" sconf-doc:"Default maximum total message size in bytes for the account, overriding any globally configured default maximum size if non-zero. A negative value can be used to have no limit in case there is a limit by default. Attempting to add new messages to an account beyond its maximum total size will result in an error. Useful to prevent a single account from filling storage."`
RejectsMailbox string `sconf:"optional" sconf-doc:"Mail that looks like spam will be rejected, but a copy can be stored temporarily in a mailbox, e.g. Rejects. If mail isn't coming in when you expect, you can look there. The mail still isn't accepted, so the remote mail server may retry (hopefully, if legitimate), or give up (hopefully, if indeed a spammer). Messages are automatically removed from this mailbox, so do not set it to a mailbox that has messages you want to keep."`
KeepRejects bool `sconf:"optional" sconf-doc:"Don't automatically delete mail in the RejectsMailbox listed above. This can be useful, e.g. for future spam training. It can also cause storage to fill up."`
AutomaticJunkFlags AutomaticJunkFlags `sconf:"optional" sconf-doc:"Automatically set $Junk and $NotJunk flags based on the mailbox that messages are delivered/moved/copied to. Email clients typically have too limited functionality to conveniently set these flags, especially $NonJunk, but they can all move messages to a different mailbox, so this helps them."`
JunkFilter *JunkFilter `sconf:"optional" sconf-doc:"Content-based filtering, using the junk-status of individual messages to rank words in such messages as spam or ham. It is recommended you always set the applicable (non)-junk status on messages, and that you do not empty your Trash because those messages contain valuable ham/spam training information."` // todo: sane defaults for junkfilter
MaxOutgoingMessagesPerDay int `sconf:"optional" sconf-doc:"Maximum number of outgoing messages for this account in a 24 hour window. This limits the damage to recipients and the reputation of this mail server in case of account compromise. Default 1000."`
MaxFirstTimeRecipientsPerDay int `sconf:"optional" sconf-doc:"Maximum number of first-time recipients in outgoing messages for this account in a 24 hour window. This limits the damage to recipients and the reputation of this mail server in case of account compromise. Default 200."`
NoFirstTimeSenderDelay bool `sconf:"optional" sconf-doc:"Do not apply a delay to SMTP connections before accepting an incoming message from a first-time sender. Can be useful for accounts that send automated responses and want instant replies."`
NoCustomPassword bool `sconf:"optional" sconf-doc:"If set, this account cannot set a password of their own choice, but can only set a new randomly generated password, preventing password reuse across services and use of weak passwords. Custom account passwords can be set by the admin."`
Routes []Route `sconf:"optional" sconf-doc:"Routes for delivering outgoing messages through the queue. Each delivery attempt evaluates these account routes, domain routes and finally global routes. The transport of the first matching route is used in the delivery attempt. If no routes match, which is the default with no configured routes, messages are delivered directly from the queue."`
DNSDomain dns.Domain `sconf:"-"` // Parsed form of Domain.
JunkMailbox *regexp.Regexp `sconf:"-" json:"-"`
NeutralMailbox *regexp.Regexp `sconf:"-" json:"-"`
NotJunkMailbox *regexp.Regexp `sconf:"-" json:"-"`
ParsedFromIDLoginAddresses []smtp.Address `sconf:"-" json:"-"`
Aliases []AddressAlias `sconf:"-"`
}
type AddressAlias struct {
SubscriptionAddress string
Alias Alias // Without members.
MemberAddresses []string // Only if allowed to see.
}
type JunkFilter struct {
@ -259,11 +473,19 @@ type JunkFilter struct {
}
type Destination struct {
Mailbox string `sconf:"optional" sconf-doc:"Mailbox to deliver to if none of Rulesets match. Default: Inbox."`
Rulesets []Ruleset `sconf:"optional" sconf-doc:"Delivery rules based on message and SMTP transaction. You may want to match each mailing list by SMTP MailFrom address, VerifiedDomain and/or List-ID header (typically <listname.example.org> if the list address is listname@example.org), delivering them to their own mailbox."`
SMTPError string `sconf:"optional" sconf-doc:"If non-empty, incoming delivery attempts to this destination will be rejected during SMTP RCPT TO with this error response line. Useful when a catchall address is configured for the domain and messages to some addresses should be rejected. The response line must start with an error code. Currently the following error response codes are allowed: 421 (temporary local error), 550 (user not found). If the line consists of only an error code, an appropriate error message is added. Rejecting messages with a 4xx code invites later retries by the remote, while 5xx codes should prevent further delivery attempts."`
MessageAuthRequiredSMTPError string `sconf:"optional" sconf-doc:"If non-empty, an additional DMARC-like message authentication check is done for incoming messages, validating the domain in the From-header of the message. Messages without either an aligned SPF or aligned DKIM pass are rejected during the SMTP DATA command with a permanent error code followed by the message in this field. The domain in the message 'From' header is matched in relaxed or strict mode according to the domain's DMARC policy if present, or relaxed mode (organizational instead of exact domain match) otherwise. Useful for autoresponders that don't want to accept messages they don't want to send an automated reply to."`
FullName string `sconf:"optional" sconf-doc:"Full name to use in message From header when composing messages coming from this address with webmail."`
DMARCReports bool `sconf:"-" json:"-"`
TLSReports bool `sconf:"-" json:"-"`
HostTLSReports bool `sconf:"-" json:"-"`
DomainTLSReports bool `sconf:"-" json:"-"`
// Ready to use in SMTP responses.
SMTPErrorCode int `sconf:"-" json:"-"`
SMTPErrorSecode string `sconf:"-" json:"-"`
SMTPErrorMsg string `sconf:"-" json:"-"`
}
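As the Rulesets doc string suggests, a common use is filing a mailing list into its own mailbox by matching the List-ID header. A hedged sketch of such a Destination, with an example list and mailbox name:

package main

import (
	"fmt"

	"github.com/mjl-/mox/config"
)

func main() {
	d := config.Destination{
		Mailbox: "Inbox", // Fallback when no ruleset matches.
		Rulesets: []config.Ruleset{
			{
				// Match the List-ID header of an example list and deliver it to
				// its own mailbox, skipping further spam checks for that domain.
				HeadersRegexp:   map[string]string{"^list-id$": `<golang-nuts\.googlegroups\.com>`},
				ListAllowDomain: "googlegroups.com",
				Mailbox:         "Lists/golang-nuts",
			},
		},
	}
	fmt.Println("default mailbox:", d.Mailbox)
}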
// Equal returns whether d and o are equal, only looking at their user-changeable fields.
@ -280,16 +502,22 @@ func (d Destination) Equal(o Destination) bool {
}
type Ruleset struct {
SMTPMailFromRegexp string `sconf:"optional" sconf-doc:"Matches if this regular expression matches (a substring of) the SMTP MAIL FROM address (not the message From-header). E.g. user@example.org."`
SMTPMailFromRegexp string `sconf:"optional" sconf-doc:"Matches if this regular expression matches (a substring of) the SMTP MAIL FROM address (not the message From-header). E.g. '^user@example\\.org$'."`
MsgFromRegexp string `sconf:"optional" sconf-doc:"Matches if this regular expression matches (a substring of) the single address in the message From header."`
VerifiedDomain string `sconf:"optional" sconf-doc:"Matches if this domain matches an SPF- and/or DKIM-verified (sub)domain."`
HeadersRegexp map[string]string `sconf:"optional" sconf-doc:"Matches if these header field/value regular expressions all match (substrings of) the message headers. Header fields and values are converted to lower case before matching. Whitespace is trimmed from the value before matching. A header field can occur multiple times in a message, only one instance has to match. For mailing lists, you could match on ^list-id$ with the value typically the mailing list address in angled brackets with @ replaced with a dot, e.g. <name\\.lists\\.example\\.org>."`
// todo: add a SMTPRcptTo check, and MessageFrom that works on a properly parsed From header.
// todo: add a SMTPRcptTo check
ListAllowDomain string `sconf:"optional" sconf-doc:"Influence the spam filtering, this does not change whether this ruleset applies to a message. If this domain matches an SPF- and/or DKIM-verified (sub)domain, the message is accepted without further spam checks, such as a junk filter or DMARC reject evaluation. DMARC rejects should not apply for mailing lists that are not configured to rewrite the From-header of messages that don't have a passing DKIM signature of the From-domain. Otherwise, by rejecting messages, you may be automatically unsubscribed from the mailing list. The assumption is that mailing lists do their own spam filtering/moderation."`
// todo: once we implement ARC, we can use dkim domains that we cannot verify but that the arc-verified forwarding mail server was able to verify.
IsForward bool `sconf:"optional" sconf-doc:"Influences spam filtering only, this option does not change whether a message matches this ruleset. Can only be used together with SMTPMailFromRegexp and VerifiedDomain. SMTPMailFromRegexp must be set to the address used to deliver the forwarded message, e.g. '^user(|\\+.*)@forward\\.example$'. Changes to junk analysis: 1. Messages are not rejected for failing a DMARC policy: a legitimate forwarded message without a valid/intact/aligned DKIM signature would otherwise be rejected, because any verified SPF domain will be that of the forwarding mail server and therefore 'unaligned'. 2. The sending mail server IP address, and sending EHLO and MAIL FROM domains and matching DKIM domain aren't used in future reputation-based spam classifications (but other verified DKIM domains are) because the forwarding server is not a useful spam signal for future messages."`
ListAllowDomain string `sconf:"optional" sconf-doc:"Influences spam filtering only, this option does not change whether a message matches this ruleset. If this domain matches an SPF- and/or DKIM-verified (sub)domain, the message is accepted without further spam checks, such as a junk filter or DMARC reject evaluation. DMARC rejects should not apply for mailing lists that are not configured to rewrite the From-header of messages that don't have a passing DKIM signature of the From-domain. Otherwise, by rejecting messages, you may be automatically unsubscribed from the mailing list. The assumption is that mailing lists do their own spam filtering/moderation."`
AcceptRejectsToMailbox string `sconf:"optional" sconf-doc:"Influences spam filtering only, this option does not change whether a message matches this ruleset. If a message is classified as spam, it isn't rejected during the SMTP transaction (the normal behaviour), but accepted during the SMTP transaction and delivered to the specified mailbox. The specified mailbox is not automatically cleaned up like the account global Rejects mailbox, unless set to that Rejects mailbox."`
Mailbox string `sconf-doc:"Mailbox to deliver to if this ruleset matches."`
Comment string `sconf:"optional" sconf-doc:"Free-form comments."`
SMTPMailFromRegexpCompiled *regexp.Regexp `sconf:"-" json:"-"`
MsgFromRegexpCompiled *regexp.Regexp `sconf:"-" json:"-"`
VerifiedDNSDomain dns.Domain `sconf:"-"`
HeadersRegexpCompiled [][2]*regexp.Regexp `sconf:"-" json:"-"`
ListAllowDNSDomain dns.Domain `sconf:"-"`
@ -297,7 +525,7 @@ type Ruleset struct {
// Equal returns whether r and o are equal, only looking at their user-changeable fields.
func (r Ruleset) Equal(o Ruleset) bool {
if r.SMTPMailFromRegexp != o.SMTPMailFromRegexp || r.VerifiedDomain != o.VerifiedDomain || r.ListAllowDomain != o.ListAllowDomain || r.Mailbox != o.Mailbox {
if r.SMTPMailFromRegexp != o.SMTPMailFromRegexp || r.MsgFromRegexp != o.MsgFromRegexp || r.VerifiedDomain != o.VerifiedDomain || r.IsForward != o.IsForward || r.ListAllowDomain != o.ListAllowDomain || r.AcceptRejectsToMailbox != o.AcceptRejectsToMailbox || r.Mailbox != o.Mailbox || r.Comment != o.Comment {
return false
}
if !reflect.DeepEqual(r.HeadersRegexp, o.HeadersRegexp) {
@ -312,22 +540,31 @@ type KeyCert struct {
}
type TLS struct {
ACME string `sconf:"optional" sconf-doc:"Name of provider from top-level configuration to use for ACME, e.g. letsencrypt."`
KeyCerts []KeyCert `sconf:"optional"`
MinVersion string `sconf:"optional" sconf-doc:"Minimum TLS version. Default: TLSv1.2."`
ACME string `sconf:"optional" sconf-doc:"Name of provider from top-level configuration to use for ACME, e.g. letsencrypt."`
KeyCerts []KeyCert `sconf:"optional" sconf-doc:"Keys and certificates to use for this listener. The files are opened by the privileged root process and passed to the unprivileged mox process, so no special permissions are required on the files. If the private key will not be replaced when refreshing certificates, also consider adding the private key to HostPrivateKeyFiles and configuring DANE TLSA DNS records."`
MinVersion string `sconf:"optional" sconf-doc:"Minimum TLS version. Default: TLSv1.2."`
HostPrivateKeyFiles []string `sconf:"optional" sconf-doc:"Private keys used for ACME certificates. Specified explicitly so DANE TLSA DNS records can be generated, even before the certificates are requested. DANE is a mechanism to authenticate remote TLS certificates based on a public key or certificate specified in DNS, protected with DNSSEC. DANE is opportunistic and attempted when delivering SMTP with STARTTLS. The private key files must be in PEM format. PKCS8 is recommended, but PKCS1 and EC private keys are recognized as well. Only RSA 2048 bit and ECDSA P-256 keys are currently used. The first of each is used when requesting new certificates through ACME."`
ClientAuthDisabled bool `sconf:"optional" sconf-doc:"Disable TLS client authentication with certificates/keys, preventing the TLS server from requesting a TLS certificate from clients. Useful for working around clients that don't handle TLS client authentication well."`
Config *tls.Config `sconf:"-" json:"-"` // TLS config for non-ACME-verification connections, i.e. SMTP and IMAP, and not port 443.
ACMEConfig *tls.Config `sconf:"-" json:"-"` // TLS config that handles ACME verification, for serving on port 443.
Config *tls.Config `sconf:"-" json:"-"` // TLS config for non-ACME-verification connections, i.e. SMTP and IMAP, and not port 443. Connections without SNI will use a certificate for the hostname of the listener, connections with an SNI hostname that isn't allowed will be rejected.
ConfigFallback *tls.Config `sconf:"-" json:"-"` // Like Config, but uses the certificate for the listener hostname when the requested SNI hostname is not allowed, instead of causing the connection to fail.
ACMEConfig *tls.Config `sconf:"-" json:"-"` // TLS config that handles ACME verification, for serving on port 443.
HostPrivateRSA2048Keys []crypto.Signer `sconf:"-" json:"-"` // Private keys for new TLS certificates for listener host name, for new certificates with ACME, and for DANE records.
HostPrivateECDSAP256Keys []crypto.Signer `sconf:"-" json:"-"`
}
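HostPrivateKeyFiles exists so DANE TLSA records can be generated from the listener's keys. Below is a hedged sketch of computing TLSA data for a freshly generated ECDSA P-256 key, assuming the commonly used combination of usage 3 (DANE-EE), selector 1 (SPKI) and matching type 1 (SHA2-256); whether mox publishes exactly this combination is not shown in this diff, and the record owner name is an example.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"fmt"
	"log"
)

func main() {
	// Generate an ECDSA P-256 key (one of the key types mentioned above) and
	// hash its DER-encoded SubjectPublicKeyInfo for the TLSA record data.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatalf("generate key: %v", err)
	}
	spki, err := x509.MarshalPKIXPublicKey(&priv.PublicKey)
	if err != nil {
		log.Fatalf("marshal public key: %v", err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("_25._tcp.mail.example.com. TLSA 3 1 1 %s\n", hex.EncodeToString(sum[:]))
}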
// todo: we could implement matching WebHandler.Domain as IPs too
type WebHandler struct {
LogName string `sconf:"optional" sconf-doc:"Name to use in logging and metrics."`
Domain string `sconf-doc:"Both Domain and PathRegexp must match for this WebHandler to match a request. Exactly one of WebStatic, WebRedirect, WebForward must be set."`
Domain string `sconf-doc:"Both Domain and PathRegexp must match for this WebHandler to match a request. Exactly one of WebStatic, WebRedirect, WebForward, WebInternal must be set."`
PathRegexp string `sconf-doc:"Regular expression matched against request path, must always start with ^ to ensure matching from the start of the path. The matching prefix can optionally be stripped by WebForward. The regular expression does not have to end with $."`
DontRedirectPlainHTTP bool `sconf:"optional" sconf-doc:"If set, plain HTTP requests are not automatically permanently redirected (308) to HTTPS. If you don't have a HTTPS webserver configured, set this to true."`
Compress bool `sconf:"optional" sconf-doc:"Transparently compress responses (currently with gzip) if the client supports it, the status is 200 OK, no Content-Encoding is set on the response yet and the Content-Type of the response hints that the data is compressible (text/..., specific application/... and .../...+json and .../...+xml). For static files only, a cache with compressed files is kept."`
WebStatic *WebStatic `sconf:"optional" sconf-doc:"Serve static files."`
WebRedirect *WebRedirect `sconf:"optional" sconf-doc:"Redirect requests to configured URL."`
WebForward *WebForward `sconf:"optional" sconf-doc:"Forward requests to another webserver, i.e. reverse proxy."`
WebInternal *WebInternal `sconf:"optional" sconf-doc:"Pass request to internal service, like webmail, webapi, etc."`
Name string `sconf:"-"` // Either LogName, or numeric index if LogName was empty. Used instead of LogName in logging/metrics.
DNSDomain dns.Domain `sconf:"-"`
@ -343,6 +580,7 @@ func (wh WebHandler) Equal(o WebHandler) bool {
x.WebStatic = nil
x.WebRedirect = nil
x.WebForward = nil
x.WebInternal = nil
return x
}
cwh := clean(wh)
@ -350,7 +588,7 @@ func (wh WebHandler) Equal(o WebHandler) bool {
if cwh != co {
return false
}
if (wh.WebStatic == nil) != (o.WebStatic == nil) || (wh.WebRedirect == nil) != (o.WebRedirect == nil) || (wh.WebForward == nil) != (o.WebForward == nil) {
if (wh.WebStatic == nil) != (o.WebStatic == nil) || (wh.WebRedirect == nil) != (o.WebRedirect == nil) || (wh.WebForward == nil) != (o.WebForward == nil) || (wh.WebInternal == nil) != (o.WebInternal == nil) {
return false
}
if wh.WebStatic != nil {
@ -362,6 +600,9 @@ func (wh WebHandler) Equal(o WebHandler) bool {
if wh.WebForward != nil {
return wh.WebForward.equal(*o.WebForward)
}
if wh.WebInternal != nil {
return wh.WebInternal.equal(*o.WebInternal)
}
return true
}
@ -393,7 +634,7 @@ func (wr WebRedirect) equal(o WebRedirect) bool {
type WebForward struct {
StripPath bool `sconf:"optional" sconf-doc:"Strip the matching WebHandler path from the WebHandler before forwarding the request."`
URL string `sconf-doc:"URL to forward HTTP requests to, e.g. http://127.0.0.1:8123/base. If StripPath is false the full request path is added to the URL. Host headers are sent unmodified. New X-Forwarded-{For,Host,Proto} headers are set. Any query string in the URL is ignored. Requests are made using Go's net/http.DefaultTransport that takes environment variables HTTP_PROXY and HTTPS_PROXY into account."`
URL string `sconf-doc:"URL to forward HTTP requests to, e.g. http://127.0.0.1:8123/base. If StripPath is false the full request path is added to the URL. Host headers are sent unmodified. New X-Forwarded-{For,Host,Proto} headers are set. Any query string in the URL is ignored. Requests are made using Go's net/http.DefaultTransport that takes environment variables HTTP_PROXY and HTTPS_PROXY into account. Websocket connections are forwarded and data is copied between client and backend without looking at the framing. The websocket 'version' and 'key'/'accept' headers are verified during the handshake, but other websocket headers, including 'origin', 'protocol' and 'extensions' headers, are not inspected and the backend is responsible for verifying/interpreting them."`
ResponseHeaders map[string]string `sconf:"optional" sconf-doc:"Headers to add to the response. Useful for adding security- and cache-related headers."`
TargetURL *url.URL `sconf:"-" json:"-"`
@ -404,3 +645,16 @@ func (wf WebForward) equal(o WebForward) bool {
o.TargetURL = nil
return reflect.DeepEqual(wf, o)
}
type WebInternal struct {
BasePath string `sconf-doc:"Path to use as root of internal service, e.g. /webmail/."`
Service string `sconf-doc:"Name of the service, values: admin, account, webmail, webapi."`
Handler http.Handler `sconf:"-" json:"-"`
}
func (wi WebInternal) equal(o WebInternal) bool {
wi.Handler = nil
o.Handler = nil
return reflect.DeepEqual(wi, o)
}
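Finally, an illustrative sketch of a WebHandler that reverse-proxies a path to a local backend using the WebForward fields above; the domain, path and backend URL are example values.

package main

import (
	"fmt"

	"github.com/mjl-/mox/config"
)

func main() {
	// Match requests for www.example.com with paths under /app/ and forward
	// them to a local backend, stripping the matched prefix first.
	h := config.WebHandler{
		LogName:    "app",
		Domain:     "www.example.com",
		PathRegexp: "^/app/",
		WebForward: &config.WebForward{
			StripPath: true,
			URL:       "http://127.0.0.1:8123/",
			ResponseHeaders: map[string]string{
				"X-Frame-Options": "deny",
			},
		},
	}
	fmt.Printf("forwarding %s%s to %s\n", h.Domain, h.PathRegexp, h.WebForward.URL)
}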

File diff suppressed because it is too large

1579 ctl.go

File diff suppressed because it is too large

558 ctl_test.go Normal file

@ -0,0 +1,558 @@
//go:build !integration
package main
import (
"context"
"crypto/ed25519"
cryptorand "crypto/rand"
"crypto/x509"
"flag"
"fmt"
"log/slog"
"math/big"
"net"
"os"
"path/filepath"
"testing"
"time"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dmarcdb"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/imapclient"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/mtastsdb"
"github.com/mjl-/mox/queue"
"github.com/mjl-/mox/smtp"
"github.com/mjl-/mox/store"
"github.com/mjl-/mox/tlsrptdb"
)
var ctxbg = context.Background()
var pkglog = mlog.New("ctl", nil)
func tcheck(t *testing.T, err error, errmsg string) {
if err != nil {
t.Helper()
t.Fatalf("%s: %v", errmsg, err)
}
}
// TestCtl executes commands through ctl. This tests at least the protocol (who
// sends what and when). We often don't check the actual results, but
// unhandled errors would cause a panic.
func TestCtl(t *testing.T) {
os.RemoveAll("testdata/ctl/data")
mox.ConfigStaticPath = filepath.FromSlash("testdata/ctl/config/mox.conf")
mox.ConfigDynamicPath = filepath.FromSlash("testdata/ctl/config/domains.conf")
if errs := mox.LoadConfig(ctxbg, pkglog, true, false); len(errs) > 0 {
t.Fatalf("loading mox config: %v", errs)
}
err := store.Init(ctxbg)
tcheck(t, err, "store init")
defer store.Close()
defer store.Switchboard()()
err = queue.Init()
tcheck(t, err, "queue init")
defer queue.Shutdown()
var cid int64
testctl := func(fn func(clientxctl *ctl)) {
t.Helper()
cconn, sconn := net.Pipe()
clientxctl := ctl{conn: cconn, log: pkglog}
serverxctl := ctl{conn: sconn, log: pkglog}
done := make(chan struct{})
go func() {
cid++
servectlcmd(ctxbg, &serverxctl, cid, func() {})
close(done)
}()
fn(&clientxctl)
cconn.Close()
<-done
sconn.Close()
}
// "deliver"
testctl(func(xctl *ctl) {
ctlcmdDeliver(xctl, "mjl@mox.example")
})
// "setaccountpassword"
testctl(func(xctl *ctl) {
ctlcmdSetaccountpassword(xctl, "mjl", "test4321")
})
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesList(xctl)
})
// All messages.
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesAdd(xctl, "", "", "")
})
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesAdd(xctl, "mjl", "", "")
})
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesAdd(xctl, "", "☺.mox.example", "")
})
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesAdd(xctl, "mox", "☺.mox.example", "example.com")
})
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesRemove(xctl, 1)
})
// Queue a message to list/change/dump.
msg := "Subject: subject\r\n\r\nbody\r\n"
msgFile, err := store.CreateMessageTemp(pkglog, "queuedump-test")
tcheck(t, err, "temp file")
_, err = msgFile.Write([]byte(msg))
tcheck(t, err, "write message")
_, err = msgFile.Seek(0, 0)
tcheck(t, err, "rewind message")
defer os.Remove(msgFile.Name())
defer msgFile.Close()
addr, err := smtp.ParseAddress("mjl@mox.example")
tcheck(t, err, "parse address")
qml := []queue.Msg{queue.MakeMsg(addr.Path(), addr.Path(), false, false, int64(len(msg)), "<random@localhost>", nil, nil, time.Now(), "subject")}
queue.Add(ctxbg, pkglog, "mjl", msgFile, qml...)
qmid := qml[0].ID
// Has entries now.
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesList(xctl)
})
// "queuelist"
testctl(func(xctl *ctl) {
ctlcmdQueueList(xctl, queue.Filter{}, queue.Sort{})
})
// "queueholdset"
testctl(func(xctl *ctl) {
ctlcmdQueueHoldSet(xctl, queue.Filter{}, true)
})
testctl(func(xctl *ctl) {
ctlcmdQueueHoldSet(xctl, queue.Filter{}, false)
})
// "queueschedule"
testctl(func(xctl *ctl) {
ctlcmdQueueSchedule(xctl, queue.Filter{}, true, time.Minute)
})
// "queuetransport"
testctl(func(xctl *ctl) {
ctlcmdQueueTransport(xctl, queue.Filter{}, "socks")
})
// "queuerequiretls"
testctl(func(xctl *ctl) {
ctlcmdQueueRequireTLS(xctl, queue.Filter{}, nil)
})
// "queuedump"
testctl(func(xctl *ctl) {
ctlcmdQueueDump(xctl, fmt.Sprintf("%d", qmid))
})
// "queuefail"
testctl(func(xctl *ctl) {
ctlcmdQueueFail(xctl, queue.Filter{})
})
// "queuedrop"
testctl(func(xctl *ctl) {
ctlcmdQueueDrop(xctl, queue.Filter{})
})
// "queueholdruleslist"
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesList(xctl)
})
// "queueholdrulesadd"
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesAdd(xctl, "mjl", "", "")
})
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesAdd(xctl, "mjl", "localhost", "")
})
// "queueholdrulesremove"
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesRemove(xctl, 2)
})
testctl(func(xctl *ctl) {
ctlcmdQueueHoldrulesList(xctl)
})
// "queuesuppresslist"
testctl(func(xctl *ctl) {
ctlcmdQueueSuppressList(xctl, "mjl")
})
// "queuesuppressadd"
testctl(func(xctl *ctl) {
ctlcmdQueueSuppressAdd(xctl, "mjl", "base@localhost")
})
testctl(func(xctl *ctl) {
ctlcmdQueueSuppressAdd(xctl, "mjl", "other@localhost")
})
// "queuesuppresslookup"
testctl(func(xctl *ctl) {
ctlcmdQueueSuppressLookup(xctl, "mjl", "base@localhost")
})
// "queuesuppressremove"
testctl(func(xctl *ctl) {
ctlcmdQueueSuppressRemove(xctl, "mjl", "base@localhost")
})
testctl(func(xctl *ctl) {
ctlcmdQueueSuppressList(xctl, "mjl")
})
// "queueretiredlist"
testctl(func(xctl *ctl) {
ctlcmdQueueRetiredList(xctl, queue.RetiredFilter{}, queue.RetiredSort{})
})
// "queueretiredprint"
testctl(func(xctl *ctl) {
ctlcmdQueueRetiredPrint(xctl, "1")
})
// "queuehooklist"
testctl(func(xctl *ctl) {
ctlcmdQueueHookList(xctl, queue.HookFilter{}, queue.HookSort{})
})
// "queuehookschedule"
testctl(func(xctl *ctl) {
ctlcmdQueueHookSchedule(xctl, queue.HookFilter{}, true, time.Minute)
})
// "queuehookprint"
testctl(func(xctl *ctl) {
ctlcmdQueueHookPrint(xctl, "1")
})
// "queuehookcancel"
testctl(func(xctl *ctl) {
ctlcmdQueueHookCancel(xctl, queue.HookFilter{})
})
// "queuehookretiredlist"
testctl(func(xctl *ctl) {
ctlcmdQueueHookRetiredList(xctl, queue.HookRetiredFilter{}, queue.HookRetiredSort{})
})
// "queuehookretiredprint"
testctl(func(xctl *ctl) {
ctlcmdQueueHookRetiredPrint(xctl, "1")
})
// "importmbox"
testctl(func(xctl *ctl) {
ctlcmdImport(xctl, true, "mjl", "inbox", "testdata/importtest.mbox")
})
// "importmaildir"
testctl(func(xctl *ctl) {
ctlcmdImport(xctl, false, "mjl", "inbox", "testdata/importtest.maildir")
})
// "domainadd"
testctl(func(xctl *ctl) {
ctlcmdConfigDomainAdd(xctl, false, dns.Domain{ASCII: "mox2.example"}, "mjl", "")
})
// "accountadd"
testctl(func(xctl *ctl) {
ctlcmdConfigAccountAdd(xctl, "mjl2", "mjl2@mox2.example")
})
// "addressadd"
testctl(func(xctl *ctl) {
ctlcmdConfigAddressAdd(xctl, "mjl3@mox2.example", "mjl2")
})
// Add a message.
testctl(func(xctl *ctl) {
ctlcmdDeliver(xctl, "mjl3@mox2.example")
})
// "retrain", retrain junk filter.
testctl(func(xctl *ctl) {
ctlcmdRetrain(xctl, "mjl2")
})
// "addressrm"
testctl(func(xctl *ctl) {
ctlcmdConfigAddressRemove(xctl, "mjl3@mox2.example")
})
// "accountdisabled"
testctl(func(xctl *ctl) {
ctlcmdConfigAccountDisabled(xctl, "mjl2", "testing")
})
// "accountlist"
testctl(func(xctl *ctl) {
ctlcmdConfigAccountList(xctl)
})
testctl(func(xctl *ctl) {
ctlcmdConfigAccountDisabled(xctl, "mjl2", "")
})
// "accountrm"
testctl(func(xctl *ctl) {
ctlcmdConfigAccountRemove(xctl, "mjl2")
})
// "domaindisabled"
testctl(func(xctl *ctl) {
ctlcmdConfigDomainDisabled(xctl, dns.Domain{ASCII: "mox2.example"}, true)
})
testctl(func(xctl *ctl) {
ctlcmdConfigDomainDisabled(xctl, dns.Domain{ASCII: "mox2.example"}, false)
})
// "domainrm"
testctl(func(xctl *ctl) {
ctlcmdConfigDomainRemove(xctl, dns.Domain{ASCII: "mox2.example"})
})
// "aliasadd"
testctl(func(xctl *ctl) {
ctlcmdConfigAliasAdd(xctl, "support@mox.example", config.Alias{Addresses: []string{"mjl@mox.example"}})
})
// "aliaslist"
testctl(func(xctl *ctl) {
ctlcmdConfigAliasList(xctl, "mox.example")
})
// "aliasprint"
testctl(func(xctl *ctl) {
ctlcmdConfigAliasPrint(xctl, "support@mox.example")
})
// "aliasupdate"
testctl(func(xctl *ctl) {
ctlcmdConfigAliasUpdate(xctl, "support@mox.example", "true", "true", "true")
})
// "aliasaddaddr"
testctl(func(xctl *ctl) {
ctlcmdConfigAliasAddaddr(xctl, "support@mox.example", []string{"mjl2@mox.example"})
})
// "aliasrmaddr"
testctl(func(xctl *ctl) {
ctlcmdConfigAliasRmaddr(xctl, "support@mox.example", []string{"mjl2@mox.example"})
})
// "aliasrm"
testctl(func(xctl *ctl) {
ctlcmdConfigAliasRemove(xctl, "support@mox.example")
})
// accounttlspubkeyadd
certDER := fakeCert(t)
testctl(func(xctl *ctl) {
ctlcmdConfigTlspubkeyAdd(xctl, "mjl@mox.example", "testkey", false, certDER)
})
// "accounttlspubkeylist"
testctl(func(xctl *ctl) {
ctlcmdConfigTlspubkeyList(xctl, "")
})
testctl(func(xctl *ctl) {
ctlcmdConfigTlspubkeyList(xctl, "mjl")
})
tpkl, err := store.TLSPublicKeyList(ctxbg, "")
tcheck(t, err, "list tls public keys")
if len(tpkl) != 1 {
t.Fatalf("got %d tls public keys, expected 1", len(tpkl))
}
fingerprint := tpkl[0].Fingerprint
// "accounttlspubkeyget"
testctl(func(xctl *ctl) {
ctlcmdConfigTlspubkeyGet(xctl, fingerprint)
})
// "accounttlspubkeyrm"
testctl(func(xctl *ctl) {
ctlcmdConfigTlspubkeyRemove(xctl, fingerprint)
})
tpkl, err = store.TLSPublicKeyList(ctxbg, "")
tcheck(t, err, "list tls public keys")
if len(tpkl) != 0 {
t.Fatalf("got %d tls public keys, expected 0", len(tpkl))
}
// "loglevels"
testctl(func(xctl *ctl) {
ctlcmdLoglevels(xctl)
})
// "setloglevels"
testctl(func(xctl *ctl) {
ctlcmdSetLoglevels(xctl, "", "debug")
})
testctl(func(xctl *ctl) {
ctlcmdSetLoglevels(xctl, "smtpserver", "debug")
})
// Export data, import it again
xcmdExport(true, false, []string{filepath.FromSlash("testdata/ctl/data/tmp/export/mbox/"), filepath.FromSlash("testdata/ctl/data/accounts/mjl")}, &cmd{log: pkglog})
xcmdExport(false, false, []string{filepath.FromSlash("testdata/ctl/data/tmp/export/maildir/"), filepath.FromSlash("testdata/ctl/data/accounts/mjl")}, &cmd{log: pkglog})
testctl(func(xctl *ctl) {
ctlcmdImport(xctl, true, "mjl", "inbox", filepath.FromSlash("testdata/ctl/data/tmp/export/mbox/Inbox.mbox"))
})
testctl(func(xctl *ctl) {
ctlcmdImport(xctl, false, "mjl", "inbox", filepath.FromSlash("testdata/ctl/data/tmp/export/maildir/Inbox"))
})
// "recalculatemailboxcounts"
testctl(func(xctl *ctl) {
ctlcmdRecalculateMailboxCounts(xctl, "mjl")
})
// "fixmsgsize"
testctl(func(xctl *ctl) {
ctlcmdFixmsgsize(xctl, "mjl")
})
testctl(func(xctl *ctl) {
acc, err := store.OpenAccount(xctl.log, "mjl", false)
tcheck(t, err, "open account")
defer func() {
acc.Close()
acc.WaitClosed()
}()
content := []byte("Subject: hi\r\n\r\nbody\r\n")
deliver := func(m *store.Message) {
t.Helper()
m.Size = int64(len(content))
msgf, err := store.CreateMessageTemp(xctl.log, "ctltest")
tcheck(t, err, "create temp file")
defer os.Remove(msgf.Name())
defer msgf.Close()
_, err = msgf.Write(content)
tcheck(t, err, "write message file")
acc.WithWLock(func() {
err = acc.DeliverMailbox(xctl.log, "Inbox", m, msgf)
tcheck(t, err, "deliver message")
})
}
var msgBadSize store.Message
deliver(&msgBadSize)
msgBadSize.Size = 1
err = acc.DB.Update(ctxbg, &msgBadSize)
tcheck(t, err, "update message to bad size")
mb := store.Mailbox{ID: msgBadSize.MailboxID}
err = acc.DB.Get(ctxbg, &mb)
tcheck(t, err, "get db")
mb.Size -= int64(len(content))
mb.Size += 1
err = acc.DB.Update(ctxbg, &mb)
tcheck(t, err, "update mailbox size")
// Fix up the size.
ctlcmdFixmsgsize(xctl, "")
err = acc.DB.Get(ctxbg, &msgBadSize)
tcheck(t, err, "get message")
if msgBadSize.Size != int64(len(content)) {
t.Fatalf("after fixing, message size is %d, should be %d", msgBadSize.Size, len(content))
}
})
// "reparse"
testctl(func(xctl *ctl) {
ctlcmdReparse(xctl, "mjl")
})
testctl(func(xctl *ctl) {
ctlcmdReparse(xctl, "")
})
// "reassignthreads"
testctl(func(xctl *ctl) {
ctlcmdReassignthreads(xctl, "mjl")
})
testctl(func(xctl *ctl) {
ctlcmdReassignthreads(xctl, "")
})
// "backup", backup account.
err = dmarcdb.Init()
tcheck(t, err, "dmarcdb init")
defer dmarcdb.Close()
err = mtastsdb.Init(false)
tcheck(t, err, "mtastsdb init")
defer mtastsdb.Close()
err = tlsrptdb.Init()
tcheck(t, err, "tlsrptdb init")
defer tlsrptdb.Close()
testctl(func(xctl *ctl) {
os.RemoveAll("testdata/ctl/data/tmp/backup")
err := os.WriteFile("testdata/ctl/data/receivedid.key", make([]byte, 16), 0600)
tcheck(t, err, "writing receivedid.key")
ctlcmdBackup(xctl, filepath.FromSlash("testdata/ctl/data/tmp/backup"), false)
})
// Verify the backup.
xcmd := cmd{
flag: flag.NewFlagSet("", flag.ExitOnError),
flagArgs: []string{filepath.FromSlash("testdata/ctl/data/tmp/backup/data")},
}
cmdVerifydata(&xcmd)
// IMAP connection.
testctl(func(xctl *ctl) {
a, b := net.Pipe()
go func() {
opts := imapclient.Opts{
Logger: slog.Default().With("cid", mox.Cid()),
Error: func(err error) { panic(err) },
}
client, err := imapclient.New(a, &opts)
tcheck(t, err, "new imapclient")
client.Select("inbox")
client.Logout()
defer a.Close()
}()
ctlcmdIMAPServe(xctl, "mjl@mox.example", b, b)
})
}
func fakeCert(t *testing.T) []byte {
t.Helper()
seed := make([]byte, ed25519.SeedSize)
privKey := ed25519.NewKeyFromSeed(seed) // Fake key, don't use this for real!
template := &x509.Certificate{
SerialNumber: big.NewInt(1), // Required field...
}
localCertBuf, err := x509.CreateCertificate(cryptorand.Reader, template, template, privKey.Public(), privKey)
tcheck(t, err, "making certificate")
return localCertBuf
}

curves.go (new file, 14 lines added)

@ -0,0 +1,14 @@
//go:build !go1.24

package main
import (
"crypto/tls"
)
var curvesList = []tls.CurveID{
tls.CurveP256,
tls.CurveP384,
tls.CurveP521,
tls.X25519,
}

curves_go124.go (new file, 15 lines added)

@ -0,0 +1,15 @@
//go:build go1.24

package main
import (
"crypto/tls"
)
var curvesList = []tls.CurveID{
tls.CurveP256,
tls.CurveP384,
tls.CurveP521,
tls.X25519,
tls.X25519MLKEM768,
}

dane/dane.go (new file, 516 lines added)

@ -0,0 +1,516 @@
// Package dane verifies TLS certificates through DNSSEC-verified TLSA records.
//
// On the internet, TLS certificates are commonly verified by checking if they are
// signed by one of many commonly trusted Certificate Authorities (CAs). This is
// PKIX or WebPKI. With DANE, TLS certificates are verified through
// DNSSEC-protected DNS records of type TLSA. These TLSA records specify the rules
// for verification ("usage") and whether a full certificate ("selector" cert) is
// checked or only its "subject public key info" ("selector" spki). The (hash of)
// the certificate or "spki" is included in the TLSA record ("matchtype").
//
// DANE SMTP connections have two allowed "usages" (verification rules):
// - DANE-EE, which only checks if the certificate or spki match, without the
// WebPKI verification of expiration, name or signed-by-trusted-party verification.
// - DANE-TA, which does verification similar to PKIX/WebPKI, but verifies against
// a certificate authority ("trust anchor", or "TA") specified in the TLSA record
// instead of the CA pool.
//
// DANE has two more "usages", that may be used with protocols other than SMTP:
// - PKIX-EE, which matches the certificate or spki, and also verifies the
// certificate against the CA pool.
// - PKIX-TA, which verifies the certificate or spki against a "trust anchor"
// specified in the TLSA record, that also has to be trusted by the CA pool.
//
// TLSA records are looked up for a specific port number, protocol (tcp/udp) and
// host name. Each port can have different TLSA records. TLSA records must be
// signed and verified with DNSSEC before they can be trusted and used.
//
// TLSA records are looked up under "TLSA candidate base domains". The domain
// where the TLSA records are found is the "TLSA base domain". If the host to
// connect to is a CNAME that can be followed with DNSSEC protection, it is the
// first TLSA candidate base domain. If no protected records are found, the
// original host name is the second TLSA candidate base domain.
//
// For TLS connections, the TLSA base domain is used with SNI during the
// handshake.
//
// For TLS certificate verification that requires PKIX/WebPKI/trusted-anchor
// verification (all except DANE-EE), the potential second TLSA candidate base
// domain name is also a valid hostname. With SMTP, additionally for hosts found in
// MX records for a "next-hop domain", the "original next-hop domain" (domain of an
// email address to deliver to) is also a valid name, as is the "CNAME-expanded
// original next-hop domain", bringing the potential total allowed names to four
// (if CNAMEs are followed for the MX hosts).
package dane
// todo: why is https://datatracker.ietf.org/doc/html/draft-barnes-dane-uks-00 not in use? sounds reasonable.
// todo: add a DialSRV function that accepts a domain name, looks up srv records, dials the service, verifies dane certificate and returns the connection. for ../rfc/7673
import (
"bytes"
"context"
"crypto/sha256"
"crypto/sha512"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"log/slog"
"net"
"strings"
"time"
"github.com/mjl-/adns"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/stub"
"slices"
)
var (
MetricVerify stub.Counter = stub.CounterIgnore{}
MetricVerifyErrors stub.Counter = stub.CounterIgnore{}
)
var (
// ErrNoRecords means no TLSA records were found and host has not opted into DANE.
ErrNoRecords = errors.New("dane: no tlsa records")
// ErrInsecure indicates insecure DNS responses were encountered while looking up
// the host, CNAME records, or TLSA records.
ErrInsecure = errors.New("dane: dns lookups insecure")
// ErrNoMatch means some TLSA records were found, but none can be verified against
// the remote TLS certificate.
ErrNoMatch = errors.New("dane: no match between certificate and tlsa records")
)
// VerifyError is an error encountered while verifying a DANE TLSA record. For
// example, an error encountered with x509 certificate trusted-anchor verification.
// A TLSA record that does not match a TLS certificate is not a VerifyError.
type VerifyError struct {
Err error // Underlying error, possibly from crypto/x509.
Record adns.TLSA // Cause of error.
}
// Error returns a string explaining this is a dane verify error along with the
// underlying error.
func (e VerifyError) Error() string {
return fmt.Sprintf("dane verify error: %s", e.Err)
}
// Unwrap returns the underlying error.
func (e VerifyError) Unwrap() error {
return e.Err
}
// Dial looks up DNSSEC-protected DANE TLSA records for the domain name and
// port/service in address, checks for allowed usages, makes a network connection
// and verifies the remote certificate against the TLSA records. If verification
// succeeds, the verified record is returned.
//
// Different protocols require different usages. For example, SMTP with STARTTLS
// for delivery only allows usages DANE-TA and DANE-EE. If allowedUsages is
// non-nil, only the specified usages are taken into account when verifying, and
// any others ignored.
//
// Errors that can be returned, possibly in wrapped form:
// - ErrNoRecords, also in case the DNS response indicates "not found".
// - adns.DNSError, potentially wrapping adns.ExtendedError of which some can
// indicate DNSSEC errors.
// - ErrInsecure
// - VerifyError, potentially wrapping errors from crypto/x509.
func Dial(ctx context.Context, elog *slog.Logger, resolver dns.Resolver, network, address string, allowedUsages []adns.TLSAUsage, pkixRoots *x509.CertPool) (net.Conn, adns.TLSA, error) {
log := mlog.New("dane", elog)
// Split host and port.
host, portstr, err := net.SplitHostPort(address)
if err != nil {
return nil, adns.TLSA{}, fmt.Errorf("parsing address: %w", err)
}
port, err := resolver.LookupPort(ctx, network, portstr)
if err != nil {
return nil, adns.TLSA{}, fmt.Errorf("parsing port: %w", err)
}
hostDom, err := dns.ParseDomain(strings.TrimSuffix(host, "."))
if err != nil {
return nil, adns.TLSA{}, fmt.Errorf("parsing host: %w", err)
}
// ../rfc/7671:1015
// First follow CNAMEs for host. If the path to the final name is secure, we must
// lookup TLSA there first, then fallback to the original name. If the final name
// is secure that's also the SNI server name we must use, with the original name as
// allowed host during certificate name checks (for all TLSA usages other than
// DANE-EE).
cnameDom := hostDom
cnameAuthentic := true
for i := 0; ; i += 1 {
if i == 10 {
return nil, adns.TLSA{}, fmt.Errorf("too many cname lookups")
}
cname, cnameResult, err := resolver.LookupCNAME(ctx, cnameDom.ASCII+".")
cnameAuthentic = cnameAuthentic && cnameResult.Authentic
if !cnameResult.Authentic && i == 0 {
return nil, adns.TLSA{}, fmt.Errorf("%w: cname lookup insecure", ErrInsecure)
} else if dns.IsNotFound(err) {
break
} else if err != nil {
return nil, adns.TLSA{}, fmt.Errorf("resolving cname %s: %w", cnameDom, err)
} else if d, err := dns.ParseDomain(strings.TrimSuffix(cname, ".")); err != nil {
return nil, adns.TLSA{}, fmt.Errorf("parsing cname: %w", err)
} else {
cnameDom = d
}
}
// We lookup the IP.
ipnetwork := "ip"
if strings.HasSuffix(network, "4") {
ipnetwork += "4"
} else if strings.HasSuffix(network, "6") {
ipnetwork += "6"
}
ips, _, err := resolver.LookupIP(ctx, ipnetwork, cnameDom.ASCII+".")
// note: For SMTP with opportunistic DANE we would stop here with an insecure
// response. But as long as we have a verified original tlsa base name, we
// can continue with regular DANE.
if err != nil {
return nil, adns.TLSA{}, fmt.Errorf("resolving ips: %w", err)
} else if len(ips) == 0 {
return nil, adns.TLSA{}, &adns.DNSError{Err: "no ips for host", Name: cnameDom.ASCII, IsNotFound: true}
}
// Lookup TLSA records. If resolving CNAME was secure, we try that first. Otherwise
// we try at the secure original domain.
baseDom := hostDom
if cnameAuthentic {
baseDom = cnameDom
}
var records []adns.TLSA
var result adns.Result
for {
var err error
records, result, err = resolver.LookupTLSA(ctx, port, network, baseDom.ASCII+".")
// If no (secure) records can be found at the final cname, and there is an original
// name, try at original name.
// ../rfc/7671:1015
if baseDom != hostDom && (dns.IsNotFound(err) || !result.Authentic) {
baseDom = hostDom
continue
}
if !result.Authentic {
return nil, adns.TLSA{}, ErrInsecure
} else if dns.IsNotFound(err) {
return nil, adns.TLSA{}, ErrNoRecords
} else if err != nil {
return nil, adns.TLSA{}, fmt.Errorf("lookup dane tlsa records: %w", err)
}
break
}
// Keep only the allowed usages.
if allowedUsages != nil {
o := 0
for _, r := range records {
if slices.Contains(allowedUsages, r.Usage) {
records[o] = r
o++
}
}
records = records[:o]
if len(records) == 0 {
// No point in dialing when we know we won't be able to verify the remote TLS
// certificate.
return nil, adns.TLSA{}, fmt.Errorf("no usable tlsa records remaining: %w", ErrNoMatch)
}
}
// We use the base domain for SNI, allowing the original domain as well.
// ../rfc/7671:1021
var moreAllowedHosts []dns.Domain
if baseDom != hostDom {
moreAllowedHosts = []dns.Domain{hostDom}
}
// Dial the remote host.
timeout := 30 * time.Second
if deadline, ok := ctx.Deadline(); ok && len(ips) > 0 {
timeout = time.Until(deadline) / time.Duration(len(ips))
}
dialer := &net.Dialer{Timeout: timeout}
var conn net.Conn
var dialErrs []error
for _, ip := range ips {
addr := net.JoinHostPort(ip.String(), portstr)
c, err := dialer.DialContext(ctx, network, addr)
if err != nil {
dialErrs = append(dialErrs, err)
continue
}
conn = c
break
}
if conn == nil {
return nil, adns.TLSA{}, errors.Join(dialErrs...)
}
var verifiedRecord adns.TLSA
config := TLSClientConfig(log.Logger, records, baseDom, moreAllowedHosts, &verifiedRecord, pkixRoots)
tlsConn := tls.Client(conn, &config)
if err := tlsConn.HandshakeContext(ctx); err != nil {
xerr := conn.Close()
log.Check(xerr, "closing connection")
return nil, adns.TLSA{}, err
}
return tlsConn, verifiedRecord, nil
}
// TLSClientConfig returns a tls.Config to be used for dialing/handshaking a
// TLS connection with DANE verification.
//
// Callers should only pass records that are allowed for the intended use. DANE
// with SMTP only allows DANE-EE and DANE-TA usages, not the PKIX-usages.
//
// The config has InsecureSkipVerify set to true, with a custom VerifyConnection
// function for verifying DANE. Its VerifyConnection can return ErrNoMatch and
// additionally one or more (wrapped) errors of type VerifyError.
//
// The TLS config uses allowedHost for SNI.
//
// If verifiedRecord is not nil, it is set to the record that was successfully
// verified, if any.
func TLSClientConfig(elog *slog.Logger, records []adns.TLSA, allowedHost dns.Domain, moreAllowedHosts []dns.Domain, verifiedRecord *adns.TLSA, pkixRoots *x509.CertPool) tls.Config {
log := mlog.New("dane", elog)
return tls.Config{
ServerName: allowedHost.ASCII, // For SNI.
InsecureSkipVerify: true,
VerifyConnection: func(cs tls.ConnectionState) error {
verified, record, err := Verify(log.Logger, records, cs, allowedHost, moreAllowedHosts, pkixRoots)
log.Debugx("dane verification", err, slog.Bool("verified", verified), slog.Any("record", record))
if verified {
if verifiedRecord != nil {
*verifiedRecord = record
}
return nil
} else if err == nil {
return ErrNoMatch
}
return fmt.Errorf("%w, and error(s) encountered during verification: %w", ErrNoMatch, err)
},
MinVersion: tls.VersionTLS12, // ../rfc/8996:31 ../rfc/8997:66
}
}
// Verify checks if the TLS connection state can be verified against DANE TLSA
// records.
//
// allowedHost along with the optional moreAllowedHosts are the host names that are
// allowed during certificate verification (as used by PKIX-TA, PKIX-EE, DANE-TA,
// but not DANE-EE). A typical connection would allow just one name, but some uses
// of DANE allow multiple, like SMTP which allow up to four valid names for a TLS
// certificate based on MX/CNAME/TLSA/DNSSEC lookup results.
//
// When one of the records matches, Verify returns true, along with the matching
// record and a nil error.
// If there is no match, then in the typical case Verify returns: false, a zero
// record value and a nil error.
// If an error is encountered while verifying a record, e.g. for x509
// trusted-anchor verification, an error may be returned, typically one or more
// (wrapped) errors of type VerifyError.
//
// Verify is useful when DANE verification and its results has to be done
// separately from other validation, e.g. for MTA-STS. The caller can create a
// tls.Config with a VerifyConnection function that checks DANE and MTA-STS
// separately.
func Verify(elog *slog.Logger, records []adns.TLSA, cs tls.ConnectionState, allowedHost dns.Domain, moreAllowedHosts []dns.Domain, pkixRoots *x509.CertPool) (verified bool, matching adns.TLSA, rerr error) {
log := mlog.New("dane", elog)
MetricVerify.Inc()
if len(records) == 0 {
MetricVerifyErrors.Inc()
return false, adns.TLSA{}, fmt.Errorf("verify requires at least one tlsa record")
}
var errs []error
for _, r := range records {
ok, err := verifySingle(log, r, cs, allowedHost, moreAllowedHosts, pkixRoots)
if err != nil {
errs = append(errs, VerifyError{err, r})
} else if ok {
return true, r, nil
}
}
MetricVerifyErrors.Inc()
return false, adns.TLSA{}, errors.Join(errs...)
}
// verifySingle verifies the TLS connection against a single DANE TLSA record.
//
// If the remote TLS certificate matches with the TLSA record, true is
// returned. Errors may be encountered while verifying, e.g. when checking one
// of the allowed hosts against a TLSA record. A typical non-matching/verified
// TLSA record returns a nil error. But in some cases, e.g. when encountering
// errors while verifying certificates against a trust-anchor, an error can be
// returned with one or more underlying x509 verification errors. A nil-nil error
// is only returned when verified is false.
func verifySingle(log mlog.Log, tlsa adns.TLSA, cs tls.ConnectionState, allowedHost dns.Domain, moreAllowedHosts []dns.Domain, pkixRoots *x509.CertPool) (verified bool, rerr error) {
if len(cs.PeerCertificates) == 0 {
return false, fmt.Errorf("no server certificate")
}
match := func(cert *x509.Certificate) bool {
var buf []byte
switch tlsa.Selector {
case adns.TLSASelectorCert:
buf = cert.Raw
case adns.TLSASelectorSPKI:
buf = cert.RawSubjectPublicKeyInfo
default:
return false
}
switch tlsa.MatchType {
case adns.TLSAMatchTypeFull:
case adns.TLSAMatchTypeSHA256:
d := sha256.Sum256(buf)
buf = d[:]
case adns.TLSAMatchTypeSHA512:
d := sha512.Sum512(buf)
buf = d[:]
default:
return false
}
return bytes.Equal(buf, tlsa.CertAssoc)
}
pkixVerify := func(host dns.Domain) ([][]*x509.Certificate, error) {
// Default Verify checks for expiration. We pass the host name to check. And we
// configure the intermediates. The roots are filled in by the x509 package.
opts := x509.VerifyOptions{
DNSName: host.ASCII,
Intermediates: x509.NewCertPool(),
Roots: pkixRoots,
}
for _, cert := range cs.PeerCertificates[1:] {
opts.Intermediates.AddCert(cert)
}
chains, err := cs.PeerCertificates[0].Verify(opts)
return chains, err
}
switch tlsa.Usage {
case adns.TLSAUsagePKIXTA:
// We cannot get at the system trusted ca certificates to look for the trusted
// anchor. So we just ask Go to verify, then see if any of the chains include the
// ca certificate.
var errs []error
for _, host := range append([]dns.Domain{allowedHost}, moreAllowedHosts...) {
chains, err := pkixVerify(host)
log.Debugx("pkix-ta verify", err)
if err != nil {
errs = append(errs, err)
continue
}
// The chains returned by x509's Verify should include the longest possible match, so it is
// sure to include the trusted anchor. ../rfc/7671:835
for _, chain := range chains {
// If pkix verified, check if any of the certificates match.
for i := len(chain) - 1; i >= 0; i-- {
if match(chain[i]) {
return true, nil
}
}
}
}
return false, errors.Join(errs...)
case adns.TLSAUsagePKIXEE:
// Check for a certificate match.
if !match(cs.PeerCertificates[0]) {
return false, nil
}
// And do regular pkix checks, ../rfc/7671:799
var errs []error
for _, host := range append([]dns.Domain{allowedHost}, moreAllowedHosts...) {
_, err := pkixVerify(host)
log.Debugx("pkix-ee verify", err)
if err == nil {
return true, nil
}
errs = append(errs, err)
}
return false, errors.Join(errs...)
case adns.TLSAUsageDANETA:
// We set roots, so the system defaults don't get used. Verify checks the host name
// (set below) and checks for expiration.
opts := x509.VerifyOptions{
Intermediates: x509.NewCertPool(),
Roots: x509.NewCertPool(),
}
// If the full certificate was included, we must add it to the valid roots, the TLS
// server may not send it. ../rfc/7671:692
var found bool
if tlsa.Selector == adns.TLSASelectorCert && tlsa.MatchType == adns.TLSAMatchTypeFull {
cert, err := x509.ParseCertificate(tlsa.CertAssoc)
if err != nil {
log.Debugx("parsing full exact certificate from tlsa record to use as root for usage dane-trusted-anchor", err)
// Continue anyway, perhaps the server sends it again in a way that the tls package can parse? (unlikely)
} else {
opts.Roots.AddCert(cert)
found = true
}
}
for i, cert := range cs.PeerCertificates {
if match(cert) {
opts.Roots.AddCert(cert)
found = true
break
} else if i > 0 {
opts.Intermediates.AddCert(cert)
}
}
if !found {
// Trusted anchor was not found in TLS certificates so we won't be able to
// verify.
return false, nil
}
// Trusted anchor was found, still need to verify.
var errs []error
for _, host := range append([]dns.Domain{allowedHost}, moreAllowedHosts...) {
opts.DNSName = host.ASCII
_, err := cs.PeerCertificates[0].Verify(opts)
if err == nil {
return true, nil
}
errs = append(errs, err)
}
return false, errors.Join(errs...)
case adns.TLSAUsageDANEEE:
// ../rfc/7250 is about raw public keys instead of x.509 certificates in tls
// handshakes. Go's crypto/tls does not implement the extension (see
// crypto/tls/common.go, the extensions values don't appear in the
// rfc, but have values 19 and 20 according to
// https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-1
// ../rfc/7671:1148 mentions the raw public keys are allowed. It's still
// questionable whether this is commonly implemented. For now the world can probably
// live with an ignored certificate wrapped around the subject public key info.
// We don't verify host name in certificate, ../rfc/7671:489
// And we don't check for expiration. ../rfc/7671:527
// The whole point of this type is to have simple secure infrastructure that
// doesn't automatically expire (at the most inconvenient times).
return match(cs.PeerCertificates[0]), nil
default:
// Unknown, perhaps defined in the future. Not an error.
log.Debug("unrecognized tlsa usage, skipping", slog.Any("tlsausage", tlsa.Usage))
return false, nil
}
}
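The Verify documentation above suggests doing DANE verification separately from other policy checks (such as MTA-STS) inside a custom VerifyConnection callback. Below is a minimal sketch of that pattern; it is not part of dane.go, and the checkMTASTS callback is a hypothetical placeholder for whatever additional validation the caller wants to combine with DANE.

```
// Sketch: combine dane.Verify with another policy check in one VerifyConnection.
// Not part of the repository; checkMTASTS is a hypothetical callback.
package danesketch

import (
	"crypto/tls"
	"crypto/x509"
	"log/slog"

	"github.com/mjl-/adns"
	"github.com/mjl-/mox/dane"
	"github.com/mjl-/mox/dns"
)

func daneAndMTASTSConfig(records []adns.TLSA, baseDom dns.Domain, pkixRoots *x509.CertPool, checkMTASTS func(tls.ConnectionState) error) *tls.Config {
	return &tls.Config{
		ServerName:         baseDom.ASCII, // TLSA base domain is used for SNI.
		InsecureSkipVerify: true,          // All verification happens in VerifyConnection below.
		MinVersion:         tls.VersionTLS12,
		VerifyConnection: func(cs tls.ConnectionState) error {
			// DANE check against the DNSSEC-verified TLSA records.
			verified, _, err := dane.Verify(slog.Default(), records, cs, baseDom, nil, pkixRoots)
			if !verified {
				if err != nil {
					return err
				}
				return dane.ErrNoMatch
			}
			// Independent additional check, e.g. MTA-STS policy validation.
			return checkMTASTS(cs)
		},
	}
}
```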

dane/dane_test.go (new file, 476 lines added)

@ -0,0 +1,476 @@
package dane
import (
"context"
"crypto/ecdsa"
"crypto/elliptic"
cryptorand "crypto/rand"
"crypto/sha256"
"crypto/sha512"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"errors"
"fmt"
"math/big"
"net"
"reflect"
"strconv"
"sync/atomic"
"testing"
"time"
"github.com/mjl-/adns"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
)
func tcheckf(t *testing.T, err error, format string, args ...any) {
t.Helper()
if err != nil {
t.Fatalf("%s: %s", fmt.Sprintf(format, args...), err)
}
}
// Test dialing and DANE TLS verification.
func TestDial(t *testing.T) {
log := mlog.New("dane", nil)
// Create fake CA/trusted-anchor certificate.
taTempl := x509.Certificate{
SerialNumber: big.NewInt(1), // Required field.
Subject: pkix.Name{CommonName: "fake ca"},
Issuer: pkix.Name{CommonName: "fake ca"},
NotBefore: time.Now().Add(-1 * time.Hour),
NotAfter: time.Now().Add(1 * time.Hour),
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
ExtKeyUsage: []x509.ExtKeyUsage{
x509.ExtKeyUsageServerAuth,
x509.ExtKeyUsageClientAuth,
},
BasicConstraintsValid: true,
IsCA: true,
MaxPathLen: 1,
}
taPriv, err := ecdsa.GenerateKey(elliptic.P256(), cryptorand.Reader)
tcheckf(t, err, "generating trusted-anchor ca private key")
taCertBuf, err := x509.CreateCertificate(cryptorand.Reader, &taTempl, &taTempl, taPriv.Public(), taPriv)
tcheckf(t, err, "create trusted-anchor ca certificate")
taCert, err := x509.ParseCertificate(taCertBuf)
tcheckf(t, err, "parsing generated trusted-anchor ca certificate")
tacertsha256 := sha256.Sum256(taCert.Raw)
taCertSHA256 := tacertsha256[:]
// Generate leaf private key & 2 certs, one expired and one valid, both signed by
// trusted-anchor cert.
leafPriv, err := ecdsa.GenerateKey(elliptic.P256(), cryptorand.Reader)
tcheckf(t, err, "generating leaf private key")
makeLeaf := func(expired bool) (tls.Certificate, []byte, []byte) {
now := time.Now()
if expired {
now = now.Add(-2 * time.Hour)
}
leafTempl := x509.Certificate{
SerialNumber: big.NewInt(1), // Required field.
Issuer: taTempl.Subject,
NotBefore: now.Add(-1 * time.Hour),
NotAfter: now.Add(1 * time.Hour),
DNSNames: []string{"localhost"},
}
leafCertBuf, err := x509.CreateCertificate(cryptorand.Reader, &leafTempl, taCert, leafPriv.Public(), taPriv)
tcheckf(t, err, "create trusted-anchor ca certificate")
leafCert, err := x509.ParseCertificate(leafCertBuf)
tcheckf(t, err, "parsing generated trusted-anchor ca certificate")
leafSPKISHA256 := sha256.Sum256(leafCert.RawSubjectPublicKeyInfo)
leafSPKISHA512 := sha512.Sum512(leafCert.RawSubjectPublicKeyInfo)
tlsLeafCert := tls.Certificate{
Certificate: [][]byte{leafCertBuf, taCertBuf},
PrivateKey: leafPriv, // .(crypto.PrivateKey),
Leaf: leafCert,
}
return tlsLeafCert, leafSPKISHA256[:], leafSPKISHA512[:]
}
tlsLeafCert, leafSPKISHA256, leafSPKISHA512 := makeLeaf(false)
tlsLeafCertExpired, _, _ := makeLeaf(true)
// Set up loopback tls server.
listenConn, err := net.Listen("tcp", "127.0.0.1:0")
tcheckf(t, err, "listen for test server")
addr := listenConn.Addr().String()
_, portstr, err := net.SplitHostPort(addr)
tcheckf(t, err, "get localhost port")
uport, err := strconv.ParseUint(portstr, 10, 16)
tcheckf(t, err, "parse localhost port")
port := int(uport)
defer listenConn.Close()
// Config for server, replaced during tests.
var tlsConfig atomic.Pointer[tls.Config]
tlsConfig.Store(&tls.Config{
Certificates: []tls.Certificate{tlsLeafCert},
})
// Loop handling incoming TLS connections.
go func() {
for {
conn, err := listenConn.Accept()
if err != nil {
return
}
tlsConn := tls.Server(conn, tlsConfig.Load())
tlsConn.Handshake()
tlsConn.Close()
}
}()
dialHost := "localhost"
var allowedUsages []adns.TLSAUsage
pkixRoots := x509.NewCertPool()
// Helper function for dialing with DANE.
test := func(resolver dns.Resolver, expRecord adns.TLSA, expErr any) {
t.Helper()
conn, record, err := Dial(context.Background(), log.Logger, resolver, "tcp", net.JoinHostPort(dialHost, portstr), allowedUsages, pkixRoots)
if err == nil {
conn.Close()
}
if (err == nil) != (expErr == nil) || err != nil && !errors.Is(err, expErr.(error)) && !errors.As(err, expErr) {
t.Fatalf("got err %v (%#v), expected %#v", err, err, expErr)
}
if !reflect.DeepEqual(record, expRecord) {
t.Fatalf("got verified record %v, expected %v", record, expRecord)
}
}
tlsaName := fmt.Sprintf("_%d._tcp.localhost.", port)
// Make all kinds of records, some invalid or non-matching.
var zeroRecord adns.TLSA
recordDANEEESPKISHA256 := adns.TLSA{
Usage: adns.TLSAUsageDANEEE,
Selector: adns.TLSASelectorSPKI,
MatchType: adns.TLSAMatchTypeSHA256,
CertAssoc: leafSPKISHA256,
}
recordDANEEESPKISHA512 := adns.TLSA{
Usage: adns.TLSAUsageDANEEE,
Selector: adns.TLSASelectorSPKI,
MatchType: adns.TLSAMatchTypeSHA512,
CertAssoc: leafSPKISHA512,
}
recordDANEEESPKIFull := adns.TLSA{
Usage: adns.TLSAUsageDANEEE,
Selector: adns.TLSASelectorSPKI,
MatchType: adns.TLSAMatchTypeFull,
CertAssoc: tlsLeafCert.Leaf.RawSubjectPublicKeyInfo,
}
mismatchRecordDANEEESPKISHA256 := adns.TLSA{
Usage: adns.TLSAUsageDANEEE,
Selector: adns.TLSASelectorSPKI,
MatchType: adns.TLSAMatchTypeSHA256,
CertAssoc: make([]byte, sha256.Size), // Zero, no match.
}
malformedRecordDANEEESPKISHA256 := adns.TLSA{
Usage: adns.TLSAUsageDANEEE,
Selector: adns.TLSASelectorSPKI,
MatchType: adns.TLSAMatchTypeSHA256,
CertAssoc: leafSPKISHA256[:16], // Too short.
}
unknownparamRecordDANEEESPKISHA256 := adns.TLSA{
Usage: adns.TLSAUsage(10), // Unrecognized value.
Selector: adns.TLSASelectorSPKI,
MatchType: adns.TLSAMatchTypeSHA256,
CertAssoc: leafSPKISHA256,
}
recordDANETACertSHA256 := adns.TLSA{
Usage: adns.TLSAUsageDANETA,
Selector: adns.TLSASelectorCert,
MatchType: adns.TLSAMatchTypeSHA256,
CertAssoc: taCertSHA256,
}
recordDANETACertFull := adns.TLSA{
Usage: adns.TLSAUsageDANETA,
Selector: adns.TLSASelectorCert,
MatchType: adns.TLSAMatchTypeFull,
CertAssoc: taCert.Raw,
}
malformedRecordDANETACertFull := adns.TLSA{
Usage: adns.TLSAUsageDANETA,
Selector: adns.TLSASelectorCert,
MatchType: adns.TLSAMatchTypeFull,
CertAssoc: taCert.Raw[1:], // Cannot parse certificate.
}
mismatchRecordDANETACertSHA256 := adns.TLSA{
Usage: adns.TLSAUsageDANETA,
Selector: adns.TLSASelectorCert,
MatchType: adns.TLSAMatchTypeSHA256,
CertAssoc: make([]byte, sha256.Size), // Zero, no match.
}
recordPKIXEESPKISHA256 := adns.TLSA{
Usage: adns.TLSAUsagePKIXEE,
Selector: adns.TLSASelectorSPKI,
MatchType: adns.TLSAMatchTypeSHA256,
CertAssoc: leafSPKISHA256,
}
recordPKIXTACertSHA256 := adns.TLSA{
Usage: adns.TLSAUsagePKIXTA,
Selector: adns.TLSASelectorCert,
MatchType: adns.TLSAMatchTypeSHA256,
CertAssoc: taCertSHA256,
}
resolver := dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordDANEEESPKISHA256}},
AllAuthentic: true,
}
// DANE-EE SPKI SHA2-256 record.
test(resolver, recordDANEEESPKISHA256, nil)
// Check that record isn't used if not allowed.
allowedUsages = []adns.TLSAUsage{adns.TLSAUsagePKIXTA}
test(resolver, zeroRecord, ErrNoMatch)
allowedUsages = nil // Restore.
// Mixed allowed/not allowed usages are fine.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {mismatchRecordDANETACertSHA256, recordDANEEESPKISHA256}},
AllAuthentic: true,
}
allowedUsages = []adns.TLSAUsage{adns.TLSAUsageDANEEE}
test(resolver, recordDANEEESPKISHA256, nil)
allowedUsages = nil // Restore.
// DANE-TA CERT SHA2-256 record.
resolver.TLSA = map[string][]adns.TLSA{
tlsaName: {recordDANETACertSHA256},
}
test(resolver, recordDANETACertSHA256, nil)
// No TLSA record.
resolver.TLSA = nil
test(resolver, zeroRecord, ErrNoRecords)
// Insecure TLSA record.
resolver.TLSA = map[string][]adns.TLSA{
tlsaName: {recordDANEEESPKISHA256},
}
resolver.Inauthentic = []string{"tlsa " + tlsaName}
test(resolver, zeroRecord, ErrInsecure)
// Insecure CNAME.
resolver.Inauthentic = []string{"cname localhost."}
test(resolver, zeroRecord, ErrInsecure)
// Insecure TLSA
resolver.Inauthentic = []string{"tlsa " + tlsaName}
test(resolver, zeroRecord, ErrInsecure)
// Insecure CNAME should not look at TLSA records under that name, only under original.
// Initial name/cname is secure. And it has secure TLSA records. But the lookup for
// example1 is not secure, though the final example2 records are.
resolver = dns.MockResolver{
A: map[string][]string{"example2.": {"127.0.0.1"}},
CNAME: map[string]string{"localhost.": "example1.", "example1.": "example2."},
TLSA: map[string][]adns.TLSA{
fmt.Sprintf("_%d._tcp.example2.", port): {mismatchRecordDANETACertSHA256}, // Should be ignored.
tlsaName: {recordDANEEESPKISHA256}, // Should match.
},
AllAuthentic: true,
Inauthentic: []string{"cname example1."},
}
test(resolver, recordDANEEESPKISHA256, nil)
// Matching records after following cname.
resolver = dns.MockResolver{
A: map[string][]string{"example.": {"127.0.0.1"}},
CNAME: map[string]string{"localhost.": "example."},
TLSA: map[string][]adns.TLSA{fmt.Sprintf("_%d._tcp.example.", port): {recordDANETACertSHA256}},
AllAuthentic: true,
}
test(resolver, recordDANETACertSHA256, nil)
// Fallback to original name for TLSA records if cname-expanded name doesn't have records.
resolver = dns.MockResolver{
A: map[string][]string{"example.": {"127.0.0.1"}},
CNAME: map[string]string{"localhost.": "example."},
TLSA: map[string][]adns.TLSA{tlsaName: {recordDANETACertSHA256}},
AllAuthentic: true,
}
test(resolver, recordDANETACertSHA256, nil)
// Invalid DANE-EE record.
resolver = dns.MockResolver{
A: map[string][]string{
"localhost.": {"127.0.0.1"},
},
TLSA: map[string][]adns.TLSA{
tlsaName: {mismatchRecordDANEEESPKISHA256},
},
AllAuthentic: true,
}
test(resolver, zeroRecord, ErrNoMatch)
// DANE-EE SPKI SHA2-512 record.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordDANEEESPKISHA512}},
AllAuthentic: true,
}
test(resolver, recordDANEEESPKISHA512, nil)
// DANE-EE SPKI Full record.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordDANEEESPKIFull}},
AllAuthentic: true,
}
test(resolver, recordDANEEESPKIFull, nil)
// DANE-TA with full certificate.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordDANETACertFull}},
AllAuthentic: true,
}
test(resolver, recordDANETACertFull, nil)
// DANE-TA for cert not in TLS handshake.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {mismatchRecordDANETACertSHA256}},
AllAuthentic: true,
}
test(resolver, zeroRecord, ErrNoMatch)
// DANE-TA with leaf cert for other name.
resolver = dns.MockResolver{
A: map[string][]string{"example.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{fmt.Sprintf("_%d._tcp.example.", port): {recordDANETACertSHA256}},
AllAuthentic: true,
}
origDialHost := dialHost
dialHost = "example."
test(resolver, zeroRecord, ErrNoMatch)
dialHost = origDialHost
// DANE-TA with expired cert.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordDANETACertSHA256}},
AllAuthentic: true,
}
tlsConfig.Store(&tls.Config{
Certificates: []tls.Certificate{tlsLeafCertExpired},
})
test(resolver, zeroRecord, ErrNoMatch)
test(resolver, zeroRecord, &VerifyError{})
test(resolver, zeroRecord, &x509.CertificateInvalidError{})
// Restore.
tlsConfig.Store(&tls.Config{
Certificates: []tls.Certificate{tlsLeafCert},
})
// Malformed TLSA record is unusable, resulting in failure if none left.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {malformedRecordDANEEESPKISHA256}},
AllAuthentic: true,
}
test(resolver, zeroRecord, ErrNoMatch)
// Malformed TLSA record is unusable and skipped, other verified record causes Dial to succeed.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {malformedRecordDANEEESPKISHA256, recordDANEEESPKISHA256}},
AllAuthentic: true,
}
test(resolver, recordDANEEESPKISHA256, nil)
// Record with unknown parameters (usage in this case) is unusable, resulting in failure if none left.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {unknownparamRecordDANEEESPKISHA256}},
AllAuthentic: true,
}
test(resolver, zeroRecord, ErrNoMatch)
// Unknown parameter does not prevent other valid record to verify.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {unknownparamRecordDANEEESPKISHA256, recordDANEEESPKISHA256}},
AllAuthentic: true,
}
test(resolver, recordDANEEESPKISHA256, nil)
// Malformed full TA certificate.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {malformedRecordDANETACertFull}},
AllAuthentic: true,
}
test(resolver, zeroRecord, ErrNoMatch)
// Full TA certificate without getting it from TLS server.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordDANETACertFull}},
AllAuthentic: true,
}
tlsLeafOnlyCert := tlsLeafCert
tlsLeafOnlyCert.Certificate = tlsLeafOnlyCert.Certificate[:1]
tlsConfig.Store(&tls.Config{
Certificates: []tls.Certificate{tlsLeafOnlyCert},
})
test(resolver, recordDANETACertFull, nil)
// Restore.
tlsConfig.Store(&tls.Config{
Certificates: []tls.Certificate{tlsLeafCert},
})
// PKIXEE, will fail due to not being CA-signed.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordPKIXEESPKISHA256}},
AllAuthentic: true,
}
test(resolver, zeroRecord, &x509.UnknownAuthorityError{})
// PKIXTA, will fail due to not being CA-signed.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordPKIXTACertSHA256}},
AllAuthentic: true,
}
test(resolver, zeroRecord, &x509.UnknownAuthorityError{})
// Now we add the TA to the "pkix" trusted roots and try again.
pkixRoots.AddCert(taCert)
// PKIXEE, will now succeed.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordPKIXEESPKISHA256}},
AllAuthentic: true,
}
test(resolver, recordPKIXEESPKISHA256, nil)
// PKIXTA, will now succeed.
resolver = dns.MockResolver{
A: map[string][]string{"localhost.": {"127.0.0.1"}},
TLSA: map[string][]adns.TLSA{tlsaName: {recordPKIXTACertSHA256}},
AllAuthentic: true,
}
test(resolver, recordPKIXTACertSHA256, nil)
}

dane/examples_test.go (new file, 32 lines added)

@ -0,0 +1,32 @@
package dane_test
import (
"context"
"crypto/x509"
"log"
"log/slog"
"github.com/mjl-/adns"
"github.com/mjl-/mox/dane"
"github.com/mjl-/mox/dns"
)
func ExampleDial() {
ctx := context.Background()
resolver := dns.StrictResolver{}
usages := []adns.TLSAUsage{adns.TLSAUsageDANETA, adns.TLSAUsageDANEEE}
pkixRoots, err := x509.SystemCertPool()
if err != nil {
log.Fatalf("system pkix roots: %v", err)
}
// Connect to SMTP server, use STARTTLS, and verify TLS certificate with DANE.
conn, verifiedRecord, err := dane.Dial(ctx, slog.Default(), resolver, "tcp", "mx.example.com", usages, pkixRoots)
if err != nil {
log.Fatalf("dial: %v", err)
}
defer conn.Close()
log.Printf("connected, conn %v, verified record %s", conn, verifiedRecord)
}
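As the package documentation explains, a TLSA record carries (a hash of) the certificate or its SubjectPublicKeyInfo. The standalone sketch below, not from the repository, computes the association data for a DANE-EE/SPKI/SHA2-256 record (usage 3, selector 1, matchtype 1), the same value the tests above derive from RawSubjectPublicKeyInfo; the certificate path is a placeholder.

```
// Sketch: derive the certificate association data for a "3 1 1" TLSA record.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"fmt"
	"os"
)

func main() {
	der, err := os.ReadFile("leaf.der") // DER-encoded leaf certificate; placeholder path.
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	// Selector SPKI takes the SubjectPublicKeyInfo, matchtype SHA2-256 hashes it.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println(hex.EncodeToString(sum[:])) // The hex value published in the TLSA record.
}
```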


@ -1,5 +1,125 @@
This file has notes useful for mox developers.
# Building & testing
For a full build, you'll need a recent Go compiler/toolchain and nodejs/npm for
the frontend. Run "make build" to do a full build. Run "make test" to run the
test suite. With docker installed, you can run "make test-integration" to start
up a few mox instances, a dns server, a postfix instance, and send email
between them.
The mox localserve command is a convenient way to test locally. Most of the
code paths are reachable/testable with mox localserve, but some use cases will
require a full setup.
Before committing, run at least "make fmt" and "make check" (which requires
staticcheck and ineffassign, run "make install-staticcheck install-ineffassign"
once). Also run "make check-shadow" and fix any shadowed variables other than
"err" (which are filtered out, but causes the command to always exit with an
error code; run "make install-shadow" once to install the shadow command). If
you've updated RFC references, run "make" in rfc/, it verifies the referenced
files exist.
When making changes to the public API of a package listed in
apidiff/packages.txt, run "make genapidiff" to update the list of changes in
the upcoming release (run "make install-apidiff" once to install the apidiff
command).
New features may be worth mentioning on the website, see website/ and
instructions below.
# Code style, guidelines, notes
- Keep the same style as existing code.
- For Windows: use package "path/filepath" when dealing with files/directories.
Test code can pass forward-slashed paths directly to standard library functions,
but use proper filepath functions when parameters are passed and in non-test
code. Mailbox names always use forward slash, so use package "path" for mailbox
name/path manipulation. Do not remove/rename files that are still open.
- Not all code uses adns, the DNSSEC-aware resolver, such as code that makes
http requests, like mtasts and autotls/autocert.
- We don't have an internal/ directory, really just to prevent long paths in
the repo, and to keep all Go code matching *.go */*.go (without matching
vendor/). Part of the packages are reusable by other software. Those reusable
packages must not cause mox implementation details (such as bstore) to get out,
which would cause unexpected dependencies. Those packages also only expose the
standard slog package for logging, not our mlog package. Packages not intended
for reuse do use mlog as it is more convenient. Internally, we always use
mlog.Log to do the logging, wrapping an slog.Logger.
- The code uses panic for error handling in quite a few places, including
smtpserver, imapserver and web API calls. Functions/methods, variables, struct
fields and types that begin with an "x" indicate they can panic on errors. Both
for i/o errors that are fatal for a connection, and also often for user-induced
errors, for example bad IMAP commands or invalid web API requests. These panics
are caught again at the top of a command or top of the connection. Write code
that is panic-safe, using defer to clean up and release resources; see the
sketch after this list.
- Try to check all errors, at the minimum using mlog.Log.Check() to log an error
at the appropriate level. Also when just closing a file. Log messages sometimes
unexpectedly point out latent issues. Only when there is no point in logging,
for example when previous writes to stderr failed, can error logging be skipped.
Test code is less strict about checking errors.
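A minimal sketch, not from the repository, of the panic-based error handling convention described above: an "x"-prefixed helper panics on error, and the panic is recovered at the top of the command handler. The xcheckf and handleCommand names are illustrative only.

```
package main

import (
	"errors"
	"fmt"
	"log/slog"
)

// xcheckf panics with a wrapped, formatted error when err is non-nil.
func xcheckf(err error, format string, args ...any) {
	if err != nil {
		panic(fmt.Errorf("%s: %w", fmt.Sprintf(format, args...), err))
	}
}

// handleCommand recovers panics raised by x-functions and logs them, so one
// failing command does not take down the whole connection.
func handleCommand(log *slog.Logger, run func()) {
	defer func() {
		if x := recover(); x != nil {
			if err, ok := x.(error); ok {
				log.Error("command failed", "err", err)
				return
			}
			panic(x) // Not an error value: re-raise.
		}
	}()
	run()
}

func main() {
	handleCommand(slog.Default(), func() {
		xcheckf(errors.New("bad request"), "parsing command")
	})
}
```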
# Reusable packages
Most non-server Go packages are meant to be reusable. This means internal
details are not exposed in the API, and we don't make unneeded changes. We can
still make breaking changes when it improves mox: We don't want to be stuck
with bad API. Third party users aren't affected too seriously due to Go's
minimal version selection. The reusable packages are in apidiff/packages.txt.
We generate the incompatible changes with each release.
# Web interfaces/frontend
The web interface frontends (for webmail/, webadmin/ and webaccount/) are
written in strict TypeScript. The web API is a simple self-documenting
HTTP/JSON RPC API mechanism called sherpa,
https://www.ueber.net/who/mjl/sherpa/. The web API exposes types and functions
as implemented in Go, using https://github.com/mjl-/sherpa. API definitions in
JSON form are generated with https://github.com/mjl-/sherpadoc. Those API
definitions are used to generate TypeScript clients with by
https://github.com/mjl-/sherpats/.
The JavaScript that is generated from the TypeScript is included in the
repository. This makes it available for inclusion in the binary, which is
practical for users, and desirable given Go's reproducible builds. When
developing, run "make" to also build the frontend code. Run "make
install-frontend" once to install the TypeScript compiler into ./node_modules/.
There are no other external (runtime or devtime) frontend dependencies. A
light-weight abstraction over the DOM is provided by ./lib.ts. A bit more
manual UI state management must be done compared to "frameworks", but it is
little code, and this allows JavaScript/TypeScript developers to quickly get
started. UI state is often encapsulated in a JavaScript object with a
TypeScript interface exposing a "root" HTMLElement that is added to the DOM,
and functions for accessing/changing the internal state, keeping the UI
manageable.
# Website
The content of the public website at https://www.xmox.nl is in website/, as
markdown files. The website HTML is generated with "make genwebsite", which
writes to website/html/ (files not committed). The FAQ is taken from
README.md, the protocol support table is generated from rfc/index.txt. The
website is kept in this repository so a commit can change both the
implementation and the documentation on the website. Some of the info in
README.md is duplicated on the website, often more elaborate and possibly with
a slightly less technical audience. The website should also mostly be readable
through the markdown in the git repo.
Large files (images/videos) are in https://github.com/mjl-/mox-website-files to
keep the repository reasonably sized.
The public website may serve the content from the "website" branch. After a
release, the main branch (with latest development code and corresponding
changes to the website about new features) is merged into the website branch.
Commits to the website branch (e.g. for a news item, or any other change
unrelated to a new release) are merged back into the main branch.
# TLS certificates
https://github.com/cloudflare/cfssl is useful for testing with TLS
@ -80,12 +200,13 @@ Listeners:
KeyFile: ../../cfssl/wildcard.$domain-key.pem
```
# ACME
https://github.com/letsencrypt/pebble is useful for testing with ACME. Start a
pebble instance that uses the localhost TLS cert/key created by cfssl for its
TLS serving. Pebble generates a new CA certificate for its own use each time it
is started. Fetch it from https://localhost:14000/root, write it to a file, and
is started. Fetch it from https://localhost:15000/roots/0, write it to a file, and
add it to mox.conf TLS.CA.CertFiles. See below.
Setup pebble, run once:
@ -122,7 +243,7 @@ Write new CA bundle that includes pebble's temporary CA cert:
export CURL_CA_BUNDLE=local/ca-bundle.pem # for curl
export SSL_CERT_FILE=local/ca-bundle.pem # for go apps
cat /etc/ssl/certs/ca-certificates.crt local/cfssl/ca.pem >local/ca-bundle.pem
curl https://localhost:14000/root >local/pebble/ca.pem # fetch temp pebble ca, DO THIS EVERY TIME PEBBLE IS RESTARTED!
curl https://localhost:15000/roots/0 >local/pebble/ca.pem # fetch temp pebble ca, DO THIS EVERY TIME PEBBLE IS RESTARTED!
cat /etc/ssl/certs/ca-certificates.crt local/cfssl/ca.pem local/pebble/ca.pem >local/ca-bundle.pem # create new list that includes cfssl ca and temp pebble ca.
rm -r local/*/data/acme/keycerts/pebble # remove existing pebble-signed certs in acme cert/key cache, they are invalid due to newly generated temp pebble ca.
```
@ -158,24 +279,67 @@ non-testing purposes. Unfortunately, this also makes it inconvenient to use for
testing purposes.
# Messages for testing
For compatibility and performance testing, it helps to have many messages,
created a long time ago and recently, by different mail user agents. A helpful
source is the Linux kernel mailing list. Archives are available as multiple git
repositories (split due to size) at
https://lore.kernel.org/lkml/_/text/mirror/. The git repo's can be converted
to compressed mbox files (about 800MB each) with:
```
# 0 is the first epoch (with over half a million messages), 12 is last
# already-complete epoch at the time of writing (with a quarter million
# messages). The archives are large, converting will take some time.
for i in 0 12; do
git clone --mirror http://lore.kernel.org/lkml/$i lkml-$i.git
(cd lkml-$i.git && time ./tombox.sh | gzip >../lkml-$i.mbox.gz)
done
```
With the following "tombox.sh" script:
```
#!/bin/sh
pre=''
for rev in $(git rev-list --reverse master); do
printf "$pre"
echo "From sender@host $(date '+%a %b %e %H:%M:%S %Y' -d @$(git show -s --format=%ct $rev))"
git show ${rev}:m | sed 's/^>*From />&/'
pre='\n'
done
```
# Release process
- Gather feedback on recent changes.
- Check if dependencies need updates.
- Update to latest publicsuffix/ list.
- Check code if there are deprecated features that can be removed.
- Update features & roadmap in README.md
- Write release notes, use instructions from updating.txt.
- Build and run tests with previous major Go release.
- Run all (integration) tests, including with race detector.
- Generate apidiff and check if breaking changes can be prevented. Update moxtools.
- Update features & roadmap in README.md and website.
- Write release notes, copy from previous.
- Build and run tests with previous major Go release, run "make docker-release" to test building images.
- Run tests, including with race detector, also with TZ= for UTC-behaviour, and with -count 2.
- Run integration and upgrade tests.
- Run fuzzing tests for a while.
- Deploy to test environment. Test the update instructions.
- Generate a config with quickstart, check if it results in a working setup.
- Test mox localserve on various OSes (linux, bsd, macos, windows).
- Send and receive email through the major webmail providers, check headers.
- Send and receive email with imap4/smtp clients.
- Check DNS check admin page.
- Check with https://internet.nl
- Clear updating.txt.
- Create git tag, push code.
- Publish new docker image.
- Publish signed release notes for updates.xmox.nl and update DNS record.
- Check with https://internet.nl.
- Move apidiff/next.txt to apidiff/<version>.txt, and create empty next.txt.
- Add release to the Latest release & News sections of website/index.md.
- Create git tag (note: "#" is comment, not title/header), push code.
- Build and publish new docker image.
- Deploy update to website.
- Create new release on the github page, so watchers get a notification.
Copy/paste it manually from the tag text, and add link to download/compile
instructions to prevent confusion about "assets" github links to.
- Publish new cross-referenced code/rfc to www.xmox.nl/xr/.
- Update moxtools with latest version.
- Update implementations support matrix.
- Publish signed release notes for updates.xmox.nl and update DNS record.


@ -21,43 +21,25 @@ import (
"fmt"
"hash"
"io"
"log/slog"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/moxio"
"github.com/mjl-/mox/publicsuffix"
"github.com/mjl-/mox/smtp"
"github.com/mjl-/mox/stub"
"slices"
)
var xlog = mlog.New("dkim")
// If set, signatures for top-level domain "localhost" are accepted.
var Localserve bool
var (
metricDKIMSign = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "mox_dkim_sign_total",
Help: "DKIM messages signings.",
},
[]string{
"key",
},
)
metricDKIMVerify = promauto.NewHistogramVec(
prometheus.HistogramOpts{
Name: "mox_dkim_verify_duration_seconds",
Help: "DKIM verify, including lookup, duration and result.",
Buckets: []float64{0.001, 0.005, 0.01, 0.05, 0.100, 0.5, 1, 5, 10, 20},
},
[]string{
"algorithm",
"status",
},
)
MetricSign stub.CounterVec = stub.CounterVecIgnore{}
MetricVerify stub.HistogramVec = stub.HistogramVecIgnore{}
)
var timeNow = time.Now // Replaced during tests.
@ -113,20 +95,45 @@ var (
// To decide what to do with a message, both the signature parameters and the DNS
// TXT record have to be consulted.
type Result struct {
Status Status
Sig *Sig // Parsed form of DKIM-Signature header. Can be nil for invalid DKIM-Signature header.
Record *Record // Parsed form of DKIM DNS record for selector and domain in Sig. Optional.
Err error // If Status is not StatusPass, this error holds the details and can be checked using errors.Is.
Status Status
Sig *Sig // Parsed form of DKIM-Signature header. Can be nil for invalid DKIM-Signature header.
Record *Record // Parsed form of DKIM DNS record for selector and domain in Sig. Optional.
RecordAuthentic bool // Whether DKIM DNS record was DNSSEC-protected. Only valid if Sig is non-nil.
Err error // If Status is not StatusPass, this error holds the details and can be checked using errors.Is.
}
// todo: use some io.Writer to hash the body and the header.
// Selector holds selectors and key material to generate DKIM signatures.
type Selector struct {
Hash string // "sha256" or the older "sha1".
HeaderRelaxed bool // If the header is canonicalized in relaxed instead of simple mode.
BodyRelaxed bool // If the body is canonicalized in relaxed instead of simple mode.
Headers []string // Headers to include in signature.
// Whether to "oversign" headers, ensuring additional/new values of existing
// headers cannot be added.
SealHeaders bool
// If > 0, period a signature is valid after signing, as duration, e.g. 72h. The
// period should be enough for delivery at the final destination, potentially with
// several hops/relays. In the order of days at least.
Expiration time.Duration
PrivateKey crypto.Signer // Either an *rsa.PrivateKey or ed25519.PrivateKey.
Domain dns.Domain // Of selector only, not FQDN.
}
// Sign returns line(s) with DKIM-Signature headers, generated according to the configuration.
func Sign(ctx context.Context, localpart smtp.Localpart, domain dns.Domain, c config.DKIM, smtputf8 bool, msg io.ReaderAt) (headers string, rerr error) {
log := xlog.WithContext(ctx)
func Sign(ctx context.Context, elog *slog.Logger, localpart smtp.Localpart, domain dns.Domain, selectors []Selector, smtputf8 bool, msg io.ReaderAt) (headers string, rerr error) {
log := mlog.New("dkim", elog)
start := timeNow()
defer func() {
log.Debugx("dkim sign result", rerr, mlog.Field("localpart", localpart), mlog.Field("domain", domain), mlog.Field("smtputf8", smtputf8), mlog.Field("duration", time.Since(start)))
log.Debugx("dkim sign result", rerr,
slog.Any("localpart", localpart),
slog.Any("domain", domain),
slog.Bool("smtputf8", smtputf8),
slog.Duration("duration", time.Since(start)))
}()
hdrs, bodyOffset, err := parseHeaders(bufio.NewReader(&moxio.AtReader{R: msg}))
@ -150,26 +157,25 @@ func Sign(ctx context.Context, localpart smtp.Localpart, domain dns.Domain, c co
var bodyHashes = map[hashKey][]byte{}
for _, sign := range c.Sign {
sel := c.Selectors[sign]
for _, sel := range selectors {
sig := newSigWithDefaults()
sig.Version = 1
switch sel.Key.(type) {
switch sel.PrivateKey.(type) {
case *rsa.PrivateKey:
sig.AlgorithmSign = "rsa"
metricDKIMSign.WithLabelValues("rsa").Inc()
MetricSign.IncLabels("rsa")
case ed25519.PrivateKey:
sig.AlgorithmSign = "ed25519"
metricDKIMSign.WithLabelValues("ed25519").Inc()
MetricSign.IncLabels("ed25519")
default:
return "", fmt.Errorf("internal error, unknown pivate key %T", sel.Key)
return "", fmt.Errorf("internal error, unknown pivate key %T", sel.PrivateKey)
}
sig.AlgorithmHash = sel.HashEffective
sig.AlgorithmHash = sel.Hash
sig.Domain = domain
sig.Selector = sel.Domain
sig.Identity = &Identity{&localpart, domain}
sig.SignedHeaders = append([]string{}, sel.HeadersEffective...)
if !sel.DontSealHeaders {
sig.SignedHeaders = slices.Clone(sel.Headers)
if sel.SealHeaders {
// ../rfc/6376:2156
// Each time a header name is added to the signature, the next unused value is
// signed (in reverse order as they occur in the message). So we can add each
@ -179,23 +185,23 @@ func Sign(ctx context.Context, localpart smtp.Localpart, domain dns.Domain, c co
for _, h := range hdrs {
counts[h.lkey]++
}
for _, h := range sel.HeadersEffective {
for _, h := range sel.Headers {
for j := counts[strings.ToLower(h)]; j > 0; j-- {
sig.SignedHeaders = append(sig.SignedHeaders, h)
}
}
}
sig.SignTime = timeNow().Unix()
if sel.ExpirationSeconds > 0 {
sig.ExpireTime = sig.SignTime + int64(sel.ExpirationSeconds)
if sel.Expiration > 0 {
sig.ExpireTime = sig.SignTime + int64(sel.Expiration/time.Second)
}
sig.Canonicalization = "simple"
if sel.Canonicalization.HeaderRelaxed {
if sel.HeaderRelaxed {
sig.Canonicalization = "relaxed"
}
sig.Canonicalization += "/"
if sel.Canonicalization.BodyRelaxed {
if sel.BodyRelaxed {
sig.Canonicalization += "relaxed"
} else {
sig.Canonicalization += "simple"
@ -212,12 +218,12 @@ func Sign(ctx context.Context, localpart smtp.Localpart, domain dns.Domain, c co
// DKIM-Signature header.
// ../rfc/6376:1700
hk := hashKey{!sel.Canonicalization.BodyRelaxed, strings.ToLower(sig.AlgorithmHash)}
hk := hashKey{!sel.BodyRelaxed, strings.ToLower(sig.AlgorithmHash)}
if bh, ok := bodyHashes[hk]; ok {
sig.BodyHash = bh
} else {
br := bufio.NewReader(&moxio.AtReader{R: msg, Offset: int64(bodyOffset)})
bh, err = bodyHash(h.New(), !sel.Canonicalization.BodyRelaxed, br)
bh, err = bodyHash(h.New(), !sel.BodyRelaxed, br)
if err != nil {
return "", err
}
@ -231,12 +237,12 @@ func Sign(ctx context.Context, localpart smtp.Localpart, domain dns.Domain, c co
}
verifySig := []byte(strings.TrimSuffix(sigh, "\r\n"))
dh, err := dataHash(h.New(), !sel.Canonicalization.HeaderRelaxed, sig, hdrs, verifySig)
dh, err := dataHash(h.New(), !sel.HeaderRelaxed, sig, hdrs, verifySig)
if err != nil {
return "", err
}
switch key := sel.Key.(type) {
switch key := sel.PrivateKey.(type) {
case *rsa.PrivateKey:
sig.Signature, err = key.Sign(cryptorand.Reader, dh, h)
if err != nil {
@ -267,22 +273,29 @@ func Sign(ctx context.Context, localpart smtp.Localpart, domain dns.Domain, c co
//
// A requested record is <selector>._domainkey.<domain>. Exactly one valid DKIM
// record should be present.
func Lookup(ctx context.Context, resolver dns.Resolver, selector, domain dns.Domain) (rstatus Status, rrecord *Record, rtxt string, rerr error) {
log := xlog.WithContext(ctx)
//
// authentic indicates if DNS results were DNSSEC-verified.
func Lookup(ctx context.Context, elog *slog.Logger, resolver dns.Resolver, selector, domain dns.Domain) (rstatus Status, rrecord *Record, rtxt string, authentic bool, rerr error) {
log := mlog.New("dkim", elog)
start := timeNow()
defer func() {
log.Debugx("dkim lookup result", rerr, mlog.Field("selector", selector), mlog.Field("domain", domain), mlog.Field("status", rstatus), mlog.Field("record", rrecord), mlog.Field("duration", time.Since(start)))
log.Debugx("dkim lookup result", rerr,
slog.Any("selector", selector),
slog.Any("domain", domain),
slog.Any("status", rstatus),
slog.Any("record", rrecord),
slog.Duration("duration", time.Since(start)))
}()
name := selector.ASCII + "._domainkey." + domain.ASCII + "."
records, err := dns.WithPackage(resolver, "dkim").LookupTXT(ctx, name)
records, lookupResult, err := dns.WithPackage(resolver, "dkim").LookupTXT(ctx, name)
if dns.IsNotFound(err) {
// ../rfc/6376:2608
// We must return StatusPermerror. We may want to return StatusTemperror because in
// practice someone will start using a new key before DNS changes have propagated.
return StatusPermerror, nil, "", fmt.Errorf("%w: dns name %q", ErrNoRecord, name)
return StatusPermerror, nil, "", lookupResult.Authentic, fmt.Errorf("%w: dns name %q", ErrNoRecord, name)
} else if err != nil {
return StatusTemperror, nil, "", fmt.Errorf("%w: dns name %q: %s", ErrDNS, name, err)
return StatusTemperror, nil, "", lookupResult.Authentic, fmt.Errorf("%w: dns name %q: %s", ErrDNS, name, err)
}
// ../rfc/6376:2612
@ -298,7 +311,7 @@ func Lookup(ctx context.Context, resolver dns.Resolver, selector, domain dns.Dom
var isdkim bool
r, isdkim, err = ParseRecord(s)
if err != nil && isdkim {
return StatusPermerror, nil, txt, fmt.Errorf("%w: %s", ErrSyntax, err)
return StatusPermerror, nil, txt, lookupResult.Authentic, fmt.Errorf("%w: %s", ErrSyntax, err)
} else if err != nil {
// Hopefully the remote MTA admin discovers the configuration error and fixes it for
// an upcoming delivery attempt, in case we rejected with temporary status.
@ -310,7 +323,7 @@ func Lookup(ctx context.Context, resolver dns.Resolver, selector, domain dns.Dom
// ../rfc/6376:1609
// ../rfc/6376:2584
if record != nil {
return StatusTemperror, nil, "", fmt.Errorf("%w: dns name %q", ErrMultipleRecords, name)
return StatusTemperror, nil, "", lookupResult.Authentic, fmt.Errorf("%w: dns name %q", ErrMultipleRecords, name)
}
record = r
txt = s
@ -318,9 +331,9 @@ func Lookup(ctx context.Context, resolver dns.Resolver, selector, domain dns.Dom
}
if record == nil {
return status, nil, "", err
return status, nil, "", lookupResult.Authentic, err
}
return StatusNeutral, record, txt, nil
return StatusNeutral, record, txt, lookupResult.Authentic, nil
}
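A sketch of a standalone Lookup call with the new signature, for a hypothetical selector and domain (ctx assumed to exist):

status, record, txt, authentic, err := Lookup(ctx, slog.Default(), dns.StrictResolver{}, dns.Domain{ASCII: "sel2024"}, dns.Domain{ASCII: "mail.example"})
// Queries the TXT record at sel2024._domainkey.mail.example.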
// Verify parses the DKIM-Signature headers in a message and verifies each of them.
@ -335,8 +348,8 @@ func Lookup(ctx context.Context, resolver dns.Resolver, selector, domain dns.Dom
// verification failure is treated as actual failure. With ignoreTestMode
// false, such verification failures are treated as if there is no signature by
// returning StatusNone.
func Verify(ctx context.Context, resolver dns.Resolver, smtputf8 bool, policy func(*Sig) error, r io.ReaderAt, ignoreTestMode bool) (results []Result, rerr error) {
log := xlog.WithContext(ctx)
func Verify(ctx context.Context, elog *slog.Logger, resolver dns.Resolver, smtputf8 bool, policy func(*Sig) error, r io.ReaderAt, ignoreTestMode bool) (results []Result, rerr error) {
log := mlog.New("dkim", elog)
start := timeNow()
defer func() {
duration := float64(time.Since(start)) / float64(time.Second)
@ -346,14 +359,19 @@ func Verify(ctx context.Context, resolver dns.Resolver, smtputf8 bool, policy fu
alg = r.Sig.Algorithm()
}
status := string(r.Status)
metricDKIMVerify.WithLabelValues(alg, status).Observe(duration)
MetricVerify.ObserveLabels(duration, alg, status)
}
if len(results) == 0 {
log.Debugx("dkim verify result", rerr, mlog.Field("smtputf8", smtputf8), mlog.Field("duration", time.Since(start)))
log.Debugx("dkim verify result", rerr, slog.Bool("smtputf8", smtputf8), slog.Duration("duration", time.Since(start)))
}
for _, result := range results {
log.Debugx("dkim verify result", result.Err, mlog.Field("smtputf8", smtputf8), mlog.Field("status", result.Status), mlog.Field("sig", result.Sig), mlog.Field("record", result.Record), mlog.Field("duration", time.Since(start)))
log.Debugx("dkim verify result", result.Err,
slog.Bool("smtputf8", smtputf8),
slog.Any("status", result.Status),
slog.Any("sig", result.Sig),
slog.Any("record", result.Record),
slog.Duration("duration", time.Since(start)))
}
}()
@ -373,33 +391,33 @@ func Verify(ctx context.Context, resolver dns.Resolver, smtputf8 bool, policy fu
if err != nil {
// ../rfc/6376:2503
err := fmt.Errorf("parsing DKIM-Signature header: %w", err)
results = append(results, Result{StatusPermerror, nil, nil, err})
results = append(results, Result{StatusPermerror, nil, nil, false, err})
continue
}
h, canonHeaderSimple, canonDataSimple, err := checkSignatureParams(ctx, sig)
h, canonHeaderSimple, canonDataSimple, err := checkSignatureParams(ctx, log, sig)
if err != nil {
results = append(results, Result{StatusPermerror, nil, nil, err})
results = append(results, Result{StatusPermerror, sig, nil, false, err})
continue
}
// ../rfc/6376:2560
if err := policy(sig); err != nil {
err := fmt.Errorf("%w: %s", ErrPolicy, err)
results = append(results, Result{StatusPolicy, nil, nil, err})
results = append(results, Result{StatusPolicy, sig, nil, false, err})
continue
}
br := bufio.NewReader(&moxio.AtReader{R: r, Offset: int64(bodyOffset)})
status, txt, err := verifySignature(ctx, resolver, sig, h, canonHeaderSimple, canonDataSimple, hdrs, verifySig, br, ignoreTestMode)
results = append(results, Result{status, sig, txt, err})
status, txt, authentic, err := verifySignature(ctx, log.Logger, resolver, sig, h, canonHeaderSimple, canonDataSimple, hdrs, verifySig, br, ignoreTestMode)
results = append(results, Result{status, sig, txt, authentic, err})
}
return results, nil
}
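A minimal sketch of Verify with the new signature, assuming msg is an io.ReaderAt holding the message and ctx exists:

results, err := Verify(ctx, slog.Default(), dns.StrictResolver{}, false, DefaultPolicy, msg, false)
// err is typically non-nil only when the message itself could not be processed;
// per-signature outcomes, including failures, are returned in results.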
// check if signature is acceptable.
// Only looks at the signature parameters, not at the DNS record.
func checkSignatureParams(ctx context.Context, sig *Sig) (hash crypto.Hash, canonHeaderSimple, canonBodySimple bool, rerr error) {
func checkSignatureParams(ctx context.Context, log mlog.Log, sig *Sig) (hash crypto.Hash, canonHeaderSimple, canonBodySimple bool, rerr error) {
// "From" header is required, ../rfc/6376:2122 ../rfc/6376:2546
var from bool
for _, h := range sig.SignedHeaders {
@ -428,7 +446,7 @@ func checkSignatureParams(ctx context.Context, sig *Sig) (hash crypto.Hash, cano
if subdom.Unicode != "" {
subdom.Unicode = "x." + subdom.Unicode
}
if orgDom := publicsuffix.Lookup(ctx, subdom); subdom.ASCII == orgDom.ASCII {
if orgDom := publicsuffix.Lookup(ctx, log.Logger, subdom); subdom.ASCII == orgDom.ASCII && !(Localserve && sig.Domain.ASCII == "localhost") {
return 0, false, false, fmt.Errorf("%w: %s", ErrTLD, sig.Domain)
}
@ -477,15 +495,15 @@ func checkSignatureParams(ctx context.Context, sig *Sig) (hash crypto.Hash, cano
}
// lookup the public key in the DNS and verify the signature.
func verifySignature(ctx context.Context, resolver dns.Resolver, sig *Sig, hash crypto.Hash, canonHeaderSimple, canonDataSimple bool, hdrs []header, verifySig []byte, body *bufio.Reader, ignoreTestMode bool) (Status, *Record, error) {
func verifySignature(ctx context.Context, elog *slog.Logger, resolver dns.Resolver, sig *Sig, hash crypto.Hash, canonHeaderSimple, canonDataSimple bool, hdrs []header, verifySig []byte, body *bufio.Reader, ignoreTestMode bool) (Status, *Record, bool, error) {
// ../rfc/6376:2604
status, record, _, err := Lookup(ctx, resolver, sig.Selector, sig.Domain)
status, record, _, authentic, err := Lookup(ctx, elog, resolver, sig.Selector, sig.Domain)
if err != nil {
// todo: for temporary errors, we could pass on information so caller returns a 4.7.5 ecode, ../rfc/6376:2777
return status, nil, err
return status, nil, authentic, err
}
status, err = verifySignatureRecord(record, sig, hash, canonHeaderSimple, canonDataSimple, hdrs, verifySig, body, ignoreTestMode)
return status, record, err
return status, record, authentic, err
}
// verify a DKIM signature given the record from dns and signature from the email message.
@ -531,7 +549,7 @@ func verifySignatureRecord(r *Record, sig *Sig, hash crypto.Hash, canonHeaderSim
if r.PublicKey == nil {
return StatusPermerror, ErrKeyRevoked
} else if rsaKey, ok := r.PublicKey.(*rsa.PublicKey); ok && rsaKey.N.BitLen() < 1024 {
// todo: find a reference that supports this.
// ../rfc/8301:157
return StatusPermerror, ErrWeakKey
}
@ -822,8 +840,8 @@ func parseHeaders(br *bufio.Reader) ([]header, int, error) {
return nil, 0, fmt.Errorf("empty header key")
}
lkey = strings.ToLower(key)
value = append([]byte{}, t[1]...)
raw = append([]byte{}, line...)
value = slices.Clone(t[1])
raw = slices.Clone(line)
}
if key != "" {
l = append(l, header{key, lkey, value, raw})

View File

@ -15,10 +15,12 @@ import (
"strings"
"testing"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
)
var pkglog = mlog.New("dkim", nil)
func policyOK(sig *Sig) error {
return nil
}
@ -143,7 +145,7 @@ test
},
}
results, err := Verify(context.Background(), resolver, false, policyOK, strings.NewReader(message), false)
results, err := Verify(context.Background(), pkglog.Logger, resolver, false, policyOK, strings.NewReader(message), false)
if err != nil {
t.Fatalf("dkim verify: %v", err)
}
@ -190,7 +192,7 @@ Joe.
},
}
results, err := Verify(context.Background(), resolver, false, policyOK, strings.NewReader(message), false)
results, err := Verify(context.Background(), pkglog.Logger, resolver, false, policyOK, strings.NewReader(message), false)
if err != nil {
t.Fatalf("dkim verify: %v", err)
}
@ -219,50 +221,42 @@ test
rsaKey := getRSAKey(t)
ed25519Key := ed25519.NewKeyFromSeed(make([]byte, 32))
selrsa := config.Selector{
HashEffective: "sha256",
Key: rsaKey,
HeadersEffective: strings.Split("From,To,Cc,Bcc,Reply-To,References,In-Reply-To,Subject,Date,Message-ID,Content-Type", ","),
Domain: dns.Domain{ASCII: "testrsa"},
selrsa := Selector{
Hash: "sha256",
PrivateKey: rsaKey,
Headers: strings.Split("From,To,Cc,Bcc,Reply-To,References,In-Reply-To,Subject,Date,Message-ID,Content-Type", ","),
Domain: dns.Domain{ASCII: "testrsa"},
}
// Now with sha1 and relaxed canonicalization.
selrsa2 := config.Selector{
HashEffective: "sha1",
Key: rsaKey,
HeadersEffective: strings.Split("From,To,Cc,Bcc,Reply-To,References,In-Reply-To,Subject,Date,Message-ID,Content-Type", ","),
Domain: dns.Domain{ASCII: "testrsa2"},
selrsa2 := Selector{
Hash: "sha1",
PrivateKey: rsaKey,
Headers: strings.Split("From,To,Cc,Bcc,Reply-To,References,In-Reply-To,Subject,Date,Message-ID,Content-Type", ","),
Domain: dns.Domain{ASCII: "testrsa2"},
}
selrsa2.Canonicalization.HeaderRelaxed = true
selrsa2.Canonicalization.BodyRelaxed = true
selrsa2.HeaderRelaxed = true
selrsa2.BodyRelaxed = true
// Ed25519 key.
seled25519 := config.Selector{
HashEffective: "sha256",
Key: ed25519Key,
HeadersEffective: strings.Split("From,To,Cc,Bcc,Reply-To,References,In-Reply-To,Subject,Date,Message-ID,Content-Type", ","),
Domain: dns.Domain{ASCII: "tested25519"},
seled25519 := Selector{
Hash: "sha256",
PrivateKey: ed25519Key,
Headers: strings.Split("From,To,Cc,Bcc,Reply-To,References,In-Reply-To,Subject,Date,Message-ID,Content-Type", ","),
Domain: dns.Domain{ASCII: "tested25519"},
}
// Again ed25519, but without sealing headers. Use sha256 again, for reusing the body hash from the previous dkim-signature.
seled25519b := config.Selector{
HashEffective: "sha256",
Key: ed25519Key,
HeadersEffective: strings.Split("From,To,Cc,Bcc,Reply-To,Subject,Date", ","),
DontSealHeaders: true,
Domain: dns.Domain{ASCII: "tested25519b"},
}
dkimConf := config.DKIM{
Selectors: map[string]config.Selector{
"testrsa": selrsa,
"testrsa2": selrsa2,
"tested25519": seled25519,
"tested25519b": seled25519b,
},
Sign: []string{"testrsa", "testrsa2", "tested25519", "tested25519b"},
seled25519b := Selector{
Hash: "sha256",
PrivateKey: ed25519Key,
Headers: strings.Split("From,To,Cc,Bcc,Reply-To,Subject,Date", ","),
SealHeaders: true,
Domain: dns.Domain{ASCII: "tested25519b"},
}
selectors := []Selector{selrsa, selrsa2, seled25519, seled25519b}
ctx := context.Background()
headers, err := Sign(ctx, "mjl", dns.Domain{ASCII: "mox.example"}, dkimConf, false, strings.NewReader(message))
headers, err := Sign(ctx, pkglog.Logger, "mjl", dns.Domain{ASCII: "mox.example"}, selectors, false, strings.NewReader(message))
if err != nil {
t.Fatalf("sign: %v", err)
}
@ -293,7 +287,7 @@ test
nmsg := headers + message
results, err := Verify(ctx, resolver, false, policyOK, strings.NewReader(nmsg), false)
results, err := Verify(ctx, pkglog.Logger, resolver, false, policyOK, strings.NewReader(nmsg), false)
if err != nil {
t.Fatalf("verify: %s", err)
}
@ -304,31 +298,31 @@ test
//log.Infof("nmsg\n%s", nmsg)
// Multiple From headers.
_, err = Sign(ctx, "mjl", dns.Domain{ASCII: "mox.example"}, dkimConf, false, strings.NewReader("From: <mjl@mox.example>\r\nFrom: <mjl@mox.example>\r\n\r\ntest"))
_, err = Sign(ctx, pkglog.Logger, "mjl", dns.Domain{ASCII: "mox.example"}, selectors, false, strings.NewReader("From: <mjl@mox.example>\r\nFrom: <mjl@mox.example>\r\n\r\ntest"))
if !errors.Is(err, ErrFrom) {
t.Fatalf("sign, got err %v, expected ErrFrom", err)
}
// No From header.
_, err = Sign(ctx, "mjl", dns.Domain{ASCII: "mox.example"}, dkimConf, false, strings.NewReader("Brom: <mjl@mox.example>\r\n\r\ntest"))
_, err = Sign(ctx, pkglog.Logger, "mjl", dns.Domain{ASCII: "mox.example"}, selectors, false, strings.NewReader("Brom: <mjl@mox.example>\r\n\r\ntest"))
if !errors.Is(err, ErrFrom) {
t.Fatalf("sign, got err %v, expected ErrFrom", err)
}
// Malformed headers.
_, err = Sign(ctx, "mjl", dns.Domain{ASCII: "mox.example"}, dkimConf, false, strings.NewReader(":\r\n\r\ntest"))
_, err = Sign(ctx, pkglog.Logger, "mjl", dns.Domain{ASCII: "mox.example"}, selectors, false, strings.NewReader(":\r\n\r\ntest"))
if !errors.Is(err, ErrHeaderMalformed) {
t.Fatalf("sign, got err %v, expected ErrHeaderMalformed", err)
}
_, err = Sign(ctx, "mjl", dns.Domain{ASCII: "mox.example"}, dkimConf, false, strings.NewReader(" From:<mjl@mox.example>\r\n\r\ntest"))
_, err = Sign(ctx, pkglog.Logger, "mjl", dns.Domain{ASCII: "mox.example"}, selectors, false, strings.NewReader(" From:<mjl@mox.example>\r\n\r\ntest"))
if !errors.Is(err, ErrHeaderMalformed) {
t.Fatalf("sign, got err %v, expected ErrHeaderMalformed", err)
}
_, err = Sign(ctx, "mjl", dns.Domain{ASCII: "mox.example"}, dkimConf, false, strings.NewReader("Frøm:<mjl@mox.example>\r\n\r\ntest"))
_, err = Sign(ctx, pkglog.Logger, "mjl", dns.Domain{ASCII: "mox.example"}, selectors, false, strings.NewReader("Frøm:<mjl@mox.example>\r\n\r\ntest"))
if !errors.Is(err, ErrHeaderMalformed) {
t.Fatalf("sign, got err %v, expected ErrHeaderMalformed", err)
}
_, err = Sign(ctx, "mjl", dns.Domain{ASCII: "mox.example"}, dkimConf, false, strings.NewReader("From:<mjl@mox.example>"))
_, err = Sign(ctx, pkglog.Logger, "mjl", dns.Domain{ASCII: "mox.example"}, selectors, false, strings.NewReader("From:<mjl@mox.example>"))
if !errors.Is(err, ErrHeaderMalformed) {
t.Fatalf("sign, got err %v, expected ErrHeaderMalformed", err)
}
@ -355,9 +349,9 @@ test
var record *Record
var recordTxt string
var msg string
var sel config.Selector
var dkimConf config.DKIM
var policy func(*Sig) error
var sel Selector
var selectors []Selector
var signed bool
var signDomain dns.Domain
@ -386,18 +380,13 @@ test
},
}
sel = config.Selector{
HashEffective: "sha256",
Key: key,
HeadersEffective: strings.Split("From,To,Cc,Bcc,Reply-To,References,In-Reply-To,Subject,Date,Message-ID,Content-Type", ","),
Domain: dns.Domain{ASCII: "test"},
}
dkimConf = config.DKIM{
Selectors: map[string]config.Selector{
"test": sel,
},
Sign: []string{"test"},
sel = Selector{
Hash: "sha256",
PrivateKey: key,
Headers: strings.Split("From,To,Cc,Bcc,Reply-To,References,In-Reply-To,Subject,Date,Message-ID,Content-Type", ","),
Domain: dns.Domain{ASCII: "test"},
}
selectors = []Selector{sel}
msg = message
signed = false
@ -408,7 +397,7 @@ test
msg = strings.ReplaceAll(msg, "\n", "\r\n")
headers, err := Sign(context.Background(), "mjl", signDomain, dkimConf, false, strings.NewReader(msg))
headers, err := Sign(context.Background(), pkglog.Logger, "mjl", signDomain, selectors, false, strings.NewReader(msg))
if err != nil {
t.Fatalf("sign: %v", err)
}
@ -425,7 +414,7 @@ test
sign()
}
results, err := Verify(context.Background(), resolver, true, policy, strings.NewReader(msg), false)
results, err := Verify(context.Background(), pkglog.Logger, resolver, true, policy, strings.NewReader(msg), false)
if (err == nil) != (expErr == nil) || err != nil && !errors.Is(err, expErr) {
t.Fatalf("got verify error %v, expected %v", err, expErr)
}
@ -460,8 +449,8 @@ test
})
// DNS request is failing temporarily.
test(nil, StatusTemperror, ErrDNS, func() {
resolver.Fail = map[dns.Mockreq]struct{}{
{Type: "txt", Name: "test._domainkey.mox.example."}: {},
resolver.Fail = []string{
"txt test._domainkey.mox.example.",
}
})
// Claims to be DKIM through v=, but cannot be parsed. ../rfc/6376:2621
@ -512,11 +501,9 @@ test
})
// Unknown canonicalization.
test(nil, StatusPermerror, ErrCanonicalizationUnknown, func() {
sel.Canonicalization.HeaderRelaxed = true
sel.Canonicalization.BodyRelaxed = true
dkimConf.Selectors = map[string]config.Selector{
"test": sel,
}
sel.HeaderRelaxed = true
sel.BodyRelaxed = true
selectors = []Selector{sel}
sign()
msg = strings.ReplaceAll(msg, "relaxed/relaxed", "bogus/bogus")
@ -574,10 +561,8 @@ test
resolver.TXT = map[string][]string{
"test._domainkey.mox.example.": {txt},
}
sel.Key = key
dkimConf.Selectors = map[string]config.Selector{
"test": sel,
}
sel.PrivateKey = key
selectors = []Selector{sel}
})
// Key not allowed for email by DNS record. ../rfc/6376:1541
test(nil, StatusPermerror, ErrKeyNotForEmail, func() {
@ -600,18 +585,14 @@ test
// Check that last-occurring header field is used.
test(nil, StatusFail, ErrSigVerify, func() {
sel.DontSealHeaders = true
dkimConf.Selectors = map[string]config.Selector{
"test": sel,
}
sel.SealHeaders = false
selectors = []Selector{sel}
sign()
msg = strings.ReplaceAll(msg, "\r\n\r\n", "\r\nsubject: another\r\n\r\n")
})
test(nil, StatusPass, nil, func() {
sel.DontSealHeaders = true
dkimConf.Selectors = map[string]config.Selector{
"test": sel,
}
sel.SealHeaders = false
selectors = []Selector{sel}
sign()
msg = "subject: another\r\n" + msg
})

View File

@ -6,11 +6,15 @@ import (
"strconv"
"strings"
"golang.org/x/text/unicode/norm"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/moxvar"
"github.com/mjl-/mox/smtp"
)
// Pedantic enables stricter parsing.
var Pedantic bool
type parseErr string
func (e parseErr) Error() string {
@ -200,18 +204,18 @@ func (p *parser) xdomainselector(isselector bool) dns.Domain {
// domain names must always be a-labels, ../rfc/6376:1115 ../rfc/6376:1187 ../rfc/6376:1303
// dkim selectors with underscores happen in the wild, accept them when not in
// pedantic mode. ../rfc/6376:581 ../rfc/5321:2303
return isalphadigit(c) || (i > 0 && (c == '-' || isselector && !moxvar.Pedantic && c == '_') && p.o+1 < len(p.s))
return isalphadigit(c) || (i > 0 && (c == '-' || isselector && !Pedantic && c == '_') && p.o+1 < len(p.s))
}
s := p.xtakefn1(false, subdomain)
for p.hasPrefix(".") {
s += p.xtake(".") + p.xtakefn1(false, subdomain)
}
if isselector {
// Not to be interpreted as IDNA.
return dns.Domain{ASCII: strings.ToLower(s)}
}
d, err := dns.ParseDomain(s)
if err != nil {
// ParseDomain does not allow underscore, work around it.
if strings.Contains(s, "_") && isselector && !moxvar.Pedantic {
return dns.Domain{ASCII: strings.ToLower(s)}
}
p.xerrorf("parsing domain %q: %s", s, err)
}
return d
@ -273,11 +277,11 @@ func (p *parser) xlocalpart() smtp.Localpart {
}
}
// In the wild, some services use large localparts for generated (bounce) addresses.
if moxvar.Pedantic && len(s) > 64 || len(s) > 128 {
if Pedantic && len(s) > 64 || len(s) > 128 {
// ../rfc/5321:3486
p.xerrorf("localpart longer than 64 octets")
}
return smtp.Localpart(s)
return smtp.Localpart(norm.NFC.String(s))
}
func (p *parser) xquotedString() string {

View File

@ -117,7 +117,7 @@ func (s *Sig) Header() (string, error) {
} else if i == len(s.SignedHeaders)-1 {
v += ";"
}
w.Addf(sep, v)
w.Addf(sep, "%s", v)
}
}
if len(s.CopiedHeaders) > 0 {
@ -139,7 +139,7 @@ func (s *Sig) Header() (string, error) {
} else if i == len(s.CopiedHeaders)-1 {
v += ";"
}
w.Addf(sep, v)
w.Addf(sep, "%s", v)
}
}
@ -147,7 +147,7 @@ func (s *Sig) Header() (string, error) {
w.Addf(" ", "b=")
if len(s.Signature) > 0 {
w.AddWrap([]byte(base64.StdEncoding.EncodeToString(s.Signature)))
w.AddWrap([]byte(base64.StdEncoding.EncodeToString(s.Signature)), false)
}
w.Add("\r\n")
return w.String(), nil

View File

@ -91,7 +91,7 @@ func TestSig(t *testing.T) {
BodyHash: xbase64("LjkN2rUhrS3zKXfH2vNgUzz5ERRJkgP9CURXBX0JP0Q="),
Domain: xdomain("xn--mx-lka.example"), // møx.example
SignedHeaders: []string{"from"},
Selector: xdomain("xn--tst-bma"), // tést
Selector: dns.Domain{ASCII: "xn--tst-bma"},
Identity: &Identity{&ulp, xdomain("xn--tst-bma.xn--mx-lka.example")}, // tést.møx.example
Canonicalization: "simple/simple",
Length: -1,

View File

@ -32,7 +32,7 @@ func TestParseRecord(t *testing.T) {
}
if r != nil {
pk := r.Pubkey
for i := 0; i < 2; i++ {
for range 2 {
ntxt, err := r.Record()
if err != nil {
t.Fatalf("making record: %v", err)

View File

@ -14,34 +14,20 @@ import (
"context"
"errors"
"fmt"
mathrand "math/rand"
"log/slog"
mathrand2 "math/rand/v2"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/mjl-/mox/dkim"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/publicsuffix"
"github.com/mjl-/mox/spf"
"github.com/mjl-/mox/stub"
)
var xlog = mlog.New("dmarc")
var (
metricDMARCVerify = promauto.NewHistogramVec(
prometheus.HistogramOpts{
Name: "mox_dmarc_verify_duration_seconds",
Help: "DMARC verify, including lookup, duration and result.",
Buckets: []float64{0.001, 0.005, 0.01, 0.05, 0.100, 0.5, 1, 5, 10, 20},
},
[]string{
"status",
"reject", // yes/no
"use", // yes/no, if policy is used after random selection
},
)
MetricVerify stub.HistogramVec = stub.HistogramVecIgnore{}
)
// link errata:
@ -71,16 +57,21 @@ const (
// Result is a DMARC policy evaluation.
type Result struct {
// Whether to reject the message based on policies. If false, the message should
// not necessarily be accepted, e.g. due to reputation or content-based analysis.
// not necessarily be accepted: other checks such as reputation-based and
// content-based analysis may still lead to rejection of the message.
Reject bool
// Result of DMARC validation. A message can fail validation, but still
// not be rejected, e.g. if the policy is "none".
Status Status
Status Status
AlignedSPFPass bool
AlignedDKIMPass bool
// Domain with the DMARC DNS record. May be the organizational domain instead of
// the domain in the From-header.
Domain dns.Domain
// Parsed DMARC record.
Record *Record
// Whether DMARC DNS response was DNSSEC-signed, regardless of whether SPF/DKIM records were DNSSEC-signed.
RecordAuthentic bool
// Details about possible error condition, e.g. when parsing the DMARC record failed.
Err error
}
@ -93,36 +84,45 @@ type Result struct {
// domain is determined using the public suffix list. E.g. for
// "sub.example.com", the organizational domain is "example.com". The returned
// domain is the domain with the DMARC record.
func Lookup(ctx context.Context, resolver dns.Resolver, from dns.Domain) (status Status, domain dns.Domain, record *Record, txt string, rerr error) {
log := xlog.WithContext(ctx)
//
// rauthentic indicates if the DNS results were DNSSEC-verified.
func Lookup(ctx context.Context, elog *slog.Logger, resolver dns.Resolver, msgFrom dns.Domain) (status Status, domain dns.Domain, record *Record, txt string, rauthentic bool, rerr error) {
log := mlog.New("dmarc", elog)
start := time.Now()
defer func() {
log.Debugx("dmarc lookup result", rerr, mlog.Field("fromdomain", from), mlog.Field("status", status), mlog.Field("domain", domain), mlog.Field("record", record), mlog.Field("duration", time.Since(start)))
log.Debugx("dmarc lookup result", rerr,
slog.Any("fromdomain", msgFrom),
slog.Any("status", status),
slog.Any("domain", domain),
slog.Any("record", record),
slog.Duration("duration", time.Since(start)))
}()
// ../rfc/7489:859 ../rfc/7489:1370
domain = from
status, record, txt, err := lookupRecord(ctx, resolver, domain)
domain = msgFrom
status, record, txt, authentic, err := lookupRecord(ctx, resolver, domain)
if status != StatusNone {
return status, domain, record, txt, err
return status, domain, record, txt, authentic, err
}
if record == nil {
// ../rfc/7489:761 ../rfc/7489:1377
domain = publicsuffix.Lookup(ctx, from)
if domain == from {
return StatusNone, domain, nil, txt, err
domain = publicsuffix.Lookup(ctx, log.Logger, msgFrom)
if domain == msgFrom {
return StatusNone, domain, nil, txt, authentic, err
}
status, record, txt, err = lookupRecord(ctx, resolver, domain)
var xauth bool
status, record, txt, xauth, err = lookupRecord(ctx, resolver, domain)
authentic = authentic && xauth
}
return status, domain, record, txt, err
return status, domain, record, txt, authentic, err
}
func lookupRecord(ctx context.Context, resolver dns.Resolver, domain dns.Domain) (Status, *Record, string, error) {
func lookupRecord(ctx context.Context, resolver dns.Resolver, domain dns.Domain) (Status, *Record, string, bool, error) {
name := "_dmarc." + domain.ASCII + "."
txts, err := dns.WithPackage(resolver, "dmarc").LookupTXT(ctx, name)
txts, result, err := dns.WithPackage(resolver, "dmarc").LookupTXT(ctx, name)
if err != nil && !dns.IsNotFound(err) {
return StatusTemperror, nil, "", fmt.Errorf("%w: %s", ErrDNS, err)
return StatusTemperror, nil, "", result.Authentic, fmt.Errorf("%w: %s", ErrDNS, err)
}
var record *Record
var text string
@ -133,17 +133,82 @@ func lookupRecord(ctx context.Context, resolver dns.Resolver, domain dns.Domain)
// ../rfc/7489:1374
continue
} else if err != nil {
return StatusPermerror, nil, text, fmt.Errorf("%w: %s", ErrSyntax, err)
return StatusPermerror, nil, text, result.Authentic, fmt.Errorf("%w: %s", ErrSyntax, err)
}
if record != nil {
// ../ ../rfc/7489:1388
return StatusNone, nil, "", ErrMultipleRecords
// ../rfc/7489:1388
return StatusNone, nil, "", result.Authentic, ErrMultipleRecords
}
text = txt
record = r
rerr = nil
}
return StatusNone, record, text, rerr
return StatusNone, record, text, result.Authentic, rerr
}
func lookupReportsRecord(ctx context.Context, resolver dns.Resolver, dmarcDomain, extDestDomain dns.Domain) (Status, []*Record, []string, bool, error) {
// ../rfc/7489:1566
name := dmarcDomain.ASCII + "._report._dmarc." + extDestDomain.ASCII + "."
txts, result, err := dns.WithPackage(resolver, "dmarc").LookupTXT(ctx, name)
if err != nil && !dns.IsNotFound(err) {
return StatusTemperror, nil, nil, result.Authentic, fmt.Errorf("%w: %s", ErrDNS, err)
}
var records []*Record
var texts []string
var rerr error = ErrNoRecord
for _, txt := range txts {
r, isdmarc, err := ParseRecordNoRequired(txt)
// Examples in the RFC use "v=DMARC1", even though it isn't a valid DMARC record.
// Accept the specific example.
// ../rfc/7489-eid5440
if !isdmarc && txt == "v=DMARC1" {
xr := DefaultRecord
r, isdmarc, err = &xr, true, nil
}
if !isdmarc {
// ../rfc/7489:1586
continue
}
texts = append(texts, txt)
records = append(records, r)
if err != nil {
return StatusPermerror, records, texts, result.Authentic, fmt.Errorf("%w: %s", ErrSyntax, err)
}
// Multiple records are allowed for the _report record, unlike for policies. ../rfc/7489:1593
rerr = nil
}
return StatusNone, records, texts, result.Authentic, rerr
}
// LookupExternalReportsAccepted returns whether the extDestDomain has opted in
// to receiving dmarc reports for dmarcDomain (where the dmarc record was found),
// through a "._report._dmarc." DNS TXT DMARC record.
//
// accepts is true if the external domain has opted in.
// If a temporary error occurred, the returned status is StatusTemperror, and a
// later retry may give an authoritative result.
// The returned error is ErrNoRecord if no opt-in DNS record exists, which is
// not a failure condition.
//
// The normally invalid "v=DMARC1" record is accepted since it is used as
// example in RFC 7489.
//
// authentic indicates if the DNS results were DNSSEC-verified.
func LookupExternalReportsAccepted(ctx context.Context, elog *slog.Logger, resolver dns.Resolver, dmarcDomain dns.Domain, extDestDomain dns.Domain) (accepts bool, status Status, records []*Record, txts []string, authentic bool, rerr error) {
log := mlog.New("dmarc", elog)
start := time.Now()
defer func() {
log.Debugx("dmarc externalreports result", rerr,
slog.Bool("accepts", accepts),
slog.Any("dmarcdomain", dmarcDomain),
slog.Any("extdestdomain", extDestDomain),
slog.Any("records", records),
slog.Duration("duration", time.Since(start)))
}()
status, records, txts, authentic, rerr = lookupReportsRecord(ctx, resolver, dmarcDomain, extDestDomain)
accepts = rerr == nil
return accepts, status, records, txts, authentic, rerr
}
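A sketch of the opt-in check, using hypothetical domains (ctx and resolver assumed to exist):

accepts, status, _, _, _, err := LookupExternalReportsAccepted(ctx, slog.Default(), resolver, dns.Domain{ASCII: "sender.example"}, dns.Domain{ASCII: "reports.example"})
// Queries the TXT record at sender.example._report._dmarc.reports.example.
// accepts is true when reports.example has opted in to receive reports for sender.example.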
// Verify evaluates the DMARC policy for the domain in the From-header of a
@ -157,9 +222,10 @@ func lookupRecord(ctx context.Context, resolver dns.Resolver, domain dns.Domain)
// Verify always returns the result of verifying the DMARC policy
// against the message (for inclusion in Authentication-Result headers).
//
// useResult indicates if the result should be applied in a policy decision.
func Verify(ctx context.Context, resolver dns.Resolver, from dns.Domain, dkimResults []dkim.Result, spfResult spf.Status, spfIdentity *dns.Domain, applyRandomPercentage bool) (useResult bool, result Result) {
log := xlog.WithContext(ctx)
// useResult indicates if the result should be applied in a policy decision,
// based on the "pct" field in the DMARC record.
func Verify(ctx context.Context, elog *slog.Logger, resolver dns.Resolver, msgFrom dns.Domain, dkimResults []dkim.Result, spfResult spf.Status, spfIdentity *dns.Domain, applyRandomPercentage bool) (useResult bool, result Result) {
log := mlog.New("dmarc", elog)
start := time.Now()
defer func() {
use := "no"
@ -170,25 +236,33 @@ func Verify(ctx context.Context, resolver dns.Resolver, from dns.Domain, dkimRes
if result.Reject {
reject = "yes"
}
metricDMARCVerify.WithLabelValues(string(result.Status), reject, use).Observe(float64(time.Since(start)) / float64(time.Second))
log.Debugx("dmarc verify result", result.Err, mlog.Field("fromdomain", from), mlog.Field("dkimresults", dkimResults), mlog.Field("spfresult", spfResult), mlog.Field("status", result.Status), mlog.Field("reject", result.Reject), mlog.Field("use", useResult), mlog.Field("duration", time.Since(start)))
MetricVerify.ObserveLabels(float64(time.Since(start))/float64(time.Second), string(result.Status), reject, use)
log.Debugx("dmarc verify result", result.Err,
slog.Any("fromdomain", msgFrom),
slog.Any("dkimresults", dkimResults),
slog.Any("spfresult", spfResult),
slog.Any("status", result.Status),
slog.Bool("reject", result.Reject),
slog.Bool("use", useResult),
slog.Duration("duration", time.Since(start)))
}()
status, recordDomain, record, _, err := Lookup(ctx, resolver, from)
status, recordDomain, record, _, authentic, err := Lookup(ctx, log.Logger, resolver, msgFrom)
if record == nil {
return false, Result{false, status, recordDomain, record, err}
return false, Result{false, status, false, false, recordDomain, record, authentic, err}
}
result.Domain = recordDomain
result.Record = record
result.RecordAuthentic = authentic
// Record can request sampling of messages to apply policy.
// See ../rfc/7489:1432
useResult = !applyRandomPercentage || record.Percentage == 100 || mathrand.Intn(100) < record.Percentage
useResult = !applyRandomPercentage || record.Percentage == 100 || mathrand2.IntN(100) < record.Percentage
// We reject treat "quarantine" and "reject" the same. Thus, we also don't
// "downgrade" from reject to quarantine if this message was sampled out.
// We treat "quarantine" and "reject" the same. Thus, we also don't "downgrade"
// from reject to quarantine if this message was sampled out.
// ../rfc/7489:1446 ../rfc/7489:1024
if recordDomain != from && record.SubdomainPolicy != PolicyEmpty {
if recordDomain != msgFrom && record.SubdomainPolicy != PolicyEmpty {
result.Reject = record.SubdomainPolicy != PolicyNone
} else {
result.Reject = record.Policy != PolicyNone
@ -208,17 +282,15 @@ func Verify(ctx context.Context, resolver dns.Resolver, from dns.Domain, dkimRes
if r, ok := pubsuffixes[name]; ok {
return r
}
r := publicsuffix.Lookup(ctx, name)
r := publicsuffix.Lookup(ctx, log.Logger, name)
pubsuffixes[name] = r
return r
}
// ../rfc/7489:1319
// ../rfc/7489:544
if spfResult == spf.StatusPass && spfIdentity != nil && (*spfIdentity == from || result.Record.ASPF == "r" && pubsuffix(from) == pubsuffix(*spfIdentity)) {
result.Reject = false
result.Status = StatusPass
return
if spfResult == spf.StatusPass && spfIdentity != nil && (*spfIdentity == msgFrom || result.Record.ASPF == "r" && pubsuffix(msgFrom) == pubsuffix(*spfIdentity)) {
result.AlignedSPFPass = true
}
for _, dkimResult := range dkimResults {
@ -228,12 +300,16 @@ func Verify(ctx context.Context, resolver dns.Resolver, from dns.Domain, dkimRes
continue
}
// ../rfc/7489:511
if dkimResult.Status == dkim.StatusPass && dkimResult.Sig != nil && (dkimResult.Sig.Domain == from || result.Record.ADKIM == "r" && pubsuffix(from) == pubsuffix(dkimResult.Sig.Domain)) {
if dkimResult.Status == dkim.StatusPass && dkimResult.Sig != nil && (dkimResult.Sig.Domain == msgFrom || result.Record.ADKIM == "r" && pubsuffix(msgFrom) == pubsuffix(dkimResult.Sig.Domain)) {
// ../rfc/7489:535
result.Reject = false
result.Status = StatusPass
return
result.AlignedDKIMPass = true
break
}
}
if result.AlignedSPFPass || result.AlignedDKIMPass {
result.Reject = false
result.Status = StatusPass
}
return
}

View File

@ -8,9 +8,12 @@ import (
"github.com/mjl-/mox/dkim"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/spf"
)
var pkglog = mlog.New("dmarc", nil)
func TestLookup(t *testing.T) {
resolver := dns.MockResolver{
TXT: map[string][]string{
@ -21,15 +24,15 @@ func TestLookup(t *testing.T) {
"_dmarc.malformed.example.": {"v=DMARC1; p=none; bogus;"},
"_dmarc.example.com.": {"v=DMARC1; p=none;"},
},
Fail: map[dns.Mockreq]struct{}{
{Type: "txt", Name: "_dmarc.temperror.example."}: {},
Fail: []string{
"txt _dmarc.temperror.example.",
},
}
test := func(d string, expStatus Status, expDomain string, expRecord *Record, expErr error) {
t.Helper()
status, dom, record, _, err := Lookup(context.Background(), resolver, dns.Domain{ASCII: d})
status, dom, record, _, _, err := Lookup(context.Background(), pkglog.Logger, resolver, dns.Domain{ASCII: d})
if (err == nil) != (expErr == nil) || err != nil && !errors.Is(err, expErr) {
t.Fatalf("got err %#v, expected %#v", err, expErr)
}
@ -50,6 +53,45 @@ func TestLookup(t *testing.T) {
test("sub.example.com", StatusNone, "example.com", &r, nil) // Policy published at organizational domain, public suffix.
}
func TestLookupExternalReportsAccepted(t *testing.T) {
resolver := dns.MockResolver{
TXT: map[string][]string{
"example.com._report._dmarc.simple.example.": {"v=DMARC1"},
"example.com._report._dmarc.simple2.example.": {"v=DMARC1;"},
"example.com._report._dmarc.one.example.": {"v=DMARC1; p=none;", "other"},
"example.com._report._dmarc.temperror.example.": {"v=DMARC1; p=none;"},
"example.com._report._dmarc.multiple.example.": {"v=DMARC1; p=none;", "v=DMARC1"},
"example.com._report._dmarc.malformed.example.": {"v=DMARC1; p=none; bogus;"},
},
Fail: []string{
"txt example.com._report._dmarc.temperror.example.",
},
}
test := func(dom, extdom string, expStatus Status, expAccepts bool, expErr error) {
t.Helper()
accepts, status, _, _, _, err := LookupExternalReportsAccepted(context.Background(), pkglog.Logger, resolver, dns.Domain{ASCII: dom}, dns.Domain{ASCII: extdom})
if (err == nil) != (expErr == nil) || err != nil && !errors.Is(err, expErr) {
t.Fatalf("got err %#v, expected %#v", err, expErr)
}
if status != expStatus || accepts != expAccepts {
t.Fatalf("got status %s, accepts %v, expected %v, %v", status, accepts, expStatus, expAccepts)
}
}
r := DefaultRecord
r.Policy = PolicyNone
test("example.com", "simple.example", StatusNone, true, nil)
test("example.org", "simple.example", StatusNone, false, ErrNoRecord)
test("example.com", "simple2.example", StatusNone, true, nil)
test("example.com", "one.example", StatusNone, true, nil)
test("example.com", "absent.example", StatusNone, false, ErrNoRecord)
test("example.com", "multiple.example", StatusNone, true, nil)
test("example.com", "malformed.example", StatusPermerror, false, ErrSyntax)
test("example.com", "temperror.example", StatusTemperror, false, ErrDNS)
}
func TestVerify(t *testing.T) {
resolver := dns.MockResolver{
TXT: map[string][]string{
@ -61,8 +103,8 @@ func TestVerify(t *testing.T) {
"_dmarc.malformed.example.": {"v=DMARC1; p=none; bogus"},
"_dmarc.example.com.": {"v=DMARC1; p=reject"},
},
Fail: map[dns.Mockreq]struct{}{
{Type: "txt", Name: "_dmarc.temperror.example."}: {},
Fail: []string{
"txt _dmarc.temperror.example.",
},
}
@ -85,7 +127,7 @@ func TestVerify(t *testing.T) {
if err != nil {
t.Fatalf("parsing domain: %v", err)
}
useResult, result := Verify(context.Background(), resolver, from, dkimResults, spfResult, spfIdentity, true)
useResult, result := Verify(context.Background(), pkglog.Logger, resolver, from, dkimResults, spfResult, spfIdentity, true)
if useResult != expUseResult || !equalResult(result, expResult) {
t.Fatalf("verify: got useResult %v, result %#v, expected %v %#v", useResult, result, expUseResult, expResult)
}
@ -98,7 +140,7 @@ func TestVerify(t *testing.T) {
[]dkim.Result{},
spf.StatusNone,
nil,
true, Result{true, StatusFail, dns.Domain{ASCII: "reject.example"}, &reject, nil},
true, Result{true, StatusFail, false, false, dns.Domain{ASCII: "reject.example"}, &reject, false, nil},
)
// Accept with spf pass.
@ -106,7 +148,7 @@ func TestVerify(t *testing.T) {
[]dkim.Result{},
spf.StatusPass,
&dns.Domain{ASCII: "sub.reject.example"},
true, Result{false, StatusPass, dns.Domain{ASCII: "reject.example"}, &reject, nil},
true, Result{false, StatusPass, true, false, dns.Domain{ASCII: "reject.example"}, &reject, false, nil},
)
// Accept with dkim pass.
@ -122,7 +164,7 @@ func TestVerify(t *testing.T) {
},
spf.StatusFail,
&dns.Domain{ASCII: "reject.example"},
true, Result{false, StatusPass, dns.Domain{ASCII: "reject.example"}, &reject, nil},
true, Result{false, StatusPass, false, true, dns.Domain{ASCII: "reject.example"}, &reject, false, nil},
)
// Reject due to spf and dkim "strict".
@ -142,7 +184,7 @@ func TestVerify(t *testing.T) {
},
spf.StatusPass,
&dns.Domain{ASCII: "sub.strict.example"},
true, Result{true, StatusFail, dns.Domain{ASCII: "strict.example"}, &strict, nil},
true, Result{true, StatusFail, false, false, dns.Domain{ASCII: "strict.example"}, &strict, false, nil},
)
// No dmarc policy, nothing to say.
@ -150,7 +192,7 @@ func TestVerify(t *testing.T) {
[]dkim.Result{},
spf.StatusNone,
nil,
false, Result{false, StatusNone, dns.Domain{ASCII: "absent.example"}, nil, ErrNoRecord},
false, Result{false, StatusNone, false, false, dns.Domain{ASCII: "absent.example"}, nil, false, ErrNoRecord},
)
// No dmarc policy, spf pass does nothing.
@ -158,7 +200,7 @@ func TestVerify(t *testing.T) {
[]dkim.Result{},
spf.StatusPass,
&dns.Domain{ASCII: "absent.example"},
false, Result{false, StatusNone, dns.Domain{ASCII: "absent.example"}, nil, ErrNoRecord},
false, Result{false, StatusNone, false, false, dns.Domain{ASCII: "absent.example"}, nil, false, ErrNoRecord},
)
none := DefaultRecord
@ -168,7 +210,7 @@ func TestVerify(t *testing.T) {
[]dkim.Result{},
spf.StatusPass,
&dns.Domain{ASCII: "none.example"},
true, Result{false, StatusPass, dns.Domain{ASCII: "none.example"}, &none, nil},
true, Result{false, StatusPass, true, false, dns.Domain{ASCII: "none.example"}, &none, false, nil},
)
// No actual reject due to pct=0.
@ -179,7 +221,7 @@ func TestVerify(t *testing.T) {
[]dkim.Result{},
spf.StatusNone,
nil,
false, Result{true, StatusFail, dns.Domain{ASCII: "test.example"}, &testr, nil},
false, Result{true, StatusFail, false, false, dns.Domain{ASCII: "test.example"}, &testr, false, nil},
)
// No reject if subdomain has "none" policy.
@ -190,7 +232,7 @@ func TestVerify(t *testing.T) {
[]dkim.Result{},
spf.StatusFail,
&dns.Domain{ASCII: "sub.subnone.example"},
true, Result{false, StatusFail, dns.Domain{ASCII: "subnone.example"}, &sub, nil},
true, Result{false, StatusFail, false, false, dns.Domain{ASCII: "subnone.example"}, &sub, false, nil},
)
// No reject if spf temperror and no other pass.
@ -198,7 +240,7 @@ func TestVerify(t *testing.T) {
[]dkim.Result{},
spf.StatusTemperror,
&dns.Domain{ASCII: "mail.reject.example"},
true, Result{false, StatusTemperror, dns.Domain{ASCII: "reject.example"}, &reject, nil},
true, Result{false, StatusTemperror, false, false, dns.Domain{ASCII: "reject.example"}, &reject, false, nil},
)
// No reject if dkim temperror and no other pass.
@ -214,7 +256,7 @@ func TestVerify(t *testing.T) {
},
spf.StatusNone,
nil,
true, Result{false, StatusTemperror, dns.Domain{ASCII: "reject.example"}, &reject, nil},
true, Result{false, StatusTemperror, false, false, dns.Domain{ASCII: "reject.example"}, &reject, false, nil},
)
// No reject if spf temperror but still dkim pass.
@ -230,7 +272,7 @@ func TestVerify(t *testing.T) {
},
spf.StatusTemperror,
&dns.Domain{ASCII: "mail.reject.example"},
true, Result{false, StatusPass, dns.Domain{ASCII: "reject.example"}, &reject, nil},
true, Result{false, StatusPass, false, true, dns.Domain{ASCII: "reject.example"}, &reject, false, nil},
)
// No reject if dkim temperror but still spf pass.
@ -246,7 +288,7 @@ func TestVerify(t *testing.T) {
},
spf.StatusPass,
&dns.Domain{ASCII: "mail.reject.example"},
true, Result{false, StatusPass, dns.Domain{ASCII: "reject.example"}, &reject, nil},
true, Result{false, StatusPass, true, false, dns.Domain{ASCII: "reject.example"}, &reject, false, nil},
)
// Bad DMARC record results in permerror without reject.
@ -254,7 +296,7 @@ func TestVerify(t *testing.T) {
[]dkim.Result{},
spf.StatusNone,
nil,
false, Result{false, StatusPermerror, dns.Domain{ASCII: "malformed.example"}, nil, ErrSyntax},
false, Result{false, StatusPermerror, false, false, dns.Domain{ASCII: "malformed.example"}, nil, false, ErrSyntax},
)
// DKIM domain that is higher-level than organizational can not result in a pass. ../rfc/7489:525
@ -270,6 +312,6 @@ func TestVerify(t *testing.T) {
},
spf.StatusNone,
nil,
true, Result{true, StatusFail, dns.Domain{ASCII: "example.com"}, &reject, nil},
true, Result{true, StatusFail, false, false, dns.Domain{ASCII: "example.com"}, &reject, false, nil},
)
}

85
dmarc/examples_test.go Normal file
View File

@ -0,0 +1,85 @@
package dmarc_test
import (
"context"
"log"
"log/slog"
"net"
"strings"
"github.com/mjl-/mox/dkim"
"github.com/mjl-/mox/dmarc"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/message"
"github.com/mjl-/mox/spf"
)
func ExampleLookup() {
ctx := context.Background()
resolver := dns.StrictResolver{}
msgFrom, err := dns.ParseDomain("sub.example.com")
if err != nil {
log.Fatalf("parsing from domain: %v", err)
}
// Lookup DMARC DNS record for domain.
status, domain, record, txt, authentic, err := dmarc.Lookup(ctx, slog.Default(), resolver, msgFrom)
if err != nil {
log.Fatalf("dmarc lookup: %v", err)
}
log.Printf("status %s, domain %s, record %v, txt %q, dnssec %v", status, domain, record, txt, authentic)
}
func ExampleVerify() {
ctx := context.Background()
resolver := dns.StrictResolver{}
// Message to verify.
msg := strings.NewReader("From: <sender@example.com>\r\nMore: headers\r\n\r\nBody\r\n")
msgFrom, _, _, err := message.From(slog.Default(), true, msg, nil)
if err != nil {
log.Fatalf("parsing message for from header: %v", err)
}
// Verify SPF, for use with DMARC.
args := spf.Args{
RemoteIP: net.ParseIP("10.11.12.13"),
MailFromDomain: dns.Domain{ASCII: "sub.example.com"},
}
spfReceived, spfDomain, _, _, err := spf.Verify(ctx, slog.Default(), resolver, args)
if err != nil {
log.Printf("verifying spf: %v", err)
}
// Verify DKIM-Signature headers, for use with DMARC.
smtputf8 := false
ignoreTestMode := false
dkimResults, err := dkim.Verify(ctx, slog.Default(), resolver, smtputf8, dkim.DefaultPolicy, msg, ignoreTestMode)
if err != nil {
log.Printf("verifying dkim: %v", err)
}
// Verify DMARC, based on DKIM and SPF results.
applyRandomPercentage := true
useResult, result := dmarc.Verify(ctx, slog.Default(), resolver, msgFrom.Domain, dkimResults, spfReceived.Result, &spfDomain, applyRandomPercentage)
// Print results.
log.Printf("dmarc status: %s", result.Status)
log.Printf("use result: %v", useResult)
if useResult && result.Reject {
log.Printf("should reject message")
}
log.Printf("result: %#v", result)
}
func ExampleParseRecord() {
txt := "v=DMARC1; p=reject; rua=mailto:postmaster@mox.example"
record, isdmarc, err := dmarc.ParseRecord(txt)
if err != nil {
log.Fatalf("parsing dmarc record: %v (isdmarc: %v)", err, isdmarc)
}
log.Printf("parsed record: %v", record)
}

View File

@ -19,7 +19,22 @@ func (e parseErr) Error() string {
// for easy comparison.
//
// DefaultRecord provides default values for tags not present in s.
//
// isdmarc indicates if the record starts with tag "v" and value "DMARC1", and should
// be treated as a valid DMARC record. Used to detect possibly multiple DMARC
// records (invalid) for a domain with multiple TXT records (quite common).
func ParseRecord(s string) (record *Record, isdmarc bool, rerr error) {
return parseRecord(s, true)
}
// ParseRecordNoRequired is like ParseRecord, but doesn't check for required fields
// for regular DMARC records. Useful for checking the _report._dmarc record,
// used for opting into receiving reports for other domains.
func ParseRecordNoRequired(s string) (record *Record, isdmarc bool, rerr error) {
return parseRecord(s, false)
}
func parseRecord(s string, checkRequired bool) (record *Record, isdmarc bool, rerr error) {
defer func() {
x := recover()
if x == nil {
@ -77,9 +92,9 @@ func ParseRecord(s string) (record *Record, isdmarc bool, rerr error) {
// ../rfc/7489:1105
p.xerrorf("p= (policy) must be first tag")
}
r.Policy = DMARCPolicy(p.xtakelist("none", "quarantine", "reject"))
r.Policy = Policy(p.xtakelist("none", "quarantine", "reject"))
case "sp":
r.SubdomainPolicy = DMARCPolicy(p.xkeyword())
r.SubdomainPolicy = Policy(p.xkeyword())
// note: we check if the value is valid before returning.
case "rua":
r.AggregateReportAddresses = append(r.AggregateReportAddresses, p.xuri())
@ -134,7 +149,7 @@ func ParseRecord(s string) (record *Record, isdmarc bool, rerr error) {
// ../rfc/7489:1106 says "p" is required, but ../rfc/7489:1407 implies we must be
// able to parse a record without a "p" or with invalid "sp" tag.
sp := r.SubdomainPolicy
if !seen["p"] || sp != PolicyEmpty && sp != PolicyNone && sp != PolicyQuarantine && sp != PolicyReject {
if checkRequired && (!seen["p"] || sp != PolicyEmpty && sp != PolicyNone && sp != PolicyQuarantine && sp != PolicyReject) {
if len(r.AggregateReportAddresses) > 0 {
r.Policy = PolicyNone
r.SubdomainPolicy = PolicyEmpty

View File

@ -5,25 +5,23 @@ import (
"strings"
)
// todo: DMARCPolicy should be named just Policy, but this is causing conflicting types in sherpadoc output. should somehow get the dmarc-prefix only in the sherpadoc.
// Policy as used in DMARC DNS record for "p=" or "sp=".
type DMARCPolicy string
type Policy string
// ../rfc/7489:1157
const (
PolicyEmpty DMARCPolicy = "" // Only for the optional Record.SubdomainPolicy.
PolicyNone DMARCPolicy = "none"
PolicyQuarantine DMARCPolicy = "quarantine"
PolicyReject DMARCPolicy = "reject"
PolicyEmpty Policy = "" // Only for the optional Record.SubdomainPolicy.
PolicyNone Policy = "none"
PolicyQuarantine Policy = "quarantine"
PolicyReject Policy = "reject"
)
// URI is a destination address for reporting.
type URI struct {
Address string // Should start with "mailto:".
MaxSize uint64 // Optional maximum message size, subject to Unit.
Unit string // "" (b), "k", "g", "t" (case insensitive), unit size, where k is 2^10 etc.
Unit string // "" (b), "k", "m", "g", "t" (case insensitive), unit size, where k is 2^10 etc.
}
// String returns a string representation of the URI for inclusion in a DMARC
@ -33,7 +31,7 @@ func (u URI) String() string {
s = strings.ReplaceAll(s, ",", "%2C")
s = strings.ReplaceAll(s, "!", "%21")
if u.MaxSize > 0 {
s += fmt.Sprintf("%d", u.MaxSize)
s += fmt.Sprintf("!%d", u.MaxSize)
}
s += u.Unit
return s
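The effect of the added "!" separator, sketched with a hypothetical reporting URI (assuming the rest of String is unchanged):

u := URI{Address: "mailto:dmarcrpt@mox.example", MaxSize: 10, Unit: "m"}
// u.String() now renders the size limit as "...@mox.example!10m";
// before this fix the "!" between address and size was missing.
s := u.String()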
@ -55,17 +53,17 @@ const (
//
// v=DMARC1; p=reject; rua=mailto:postmaster@mox.example
type Record struct {
Version string // "v=DMARC1"
Policy DMARCPolicy // Required, for "p=".
SubdomainPolicy DMARCPolicy // Like policy but for subdomains. Optional, for "sp=".
AggregateReportAddresses []URI // Optional, for "rua=".
FailureReportAddresses []URI // Optional, for "ruf="
ADKIM Align // "r" (default) for relaxed or "s" for simple. For "adkim=".
ASPF Align // "r" (default) for relaxed or "s" for simple. For "aspf=".
AggregateReportingInterval int // Default 86400. For "ri="
FailureReportingOptions []string // "0" (default), "1", "d", "s". For "fo=".
ReportingFormat []string // "afrf" (default). Ffor "rf=".
Percentage int // Between 0 and 100, default 100. For "pct=".
Version string // "v=DMARC1", fixed.
Policy Policy // Required, for "p=".
SubdomainPolicy Policy // Like policy but for subdomains. Optional, for "sp=".
AggregateReportAddresses []URI // Optional, for "rua=". Destination addresses for aggregate reports.
FailureReportAddresses []URI // Optional, for "ruf=". Destination addresses for failure reports.
ADKIM Align // Alignment: "r" (default) for relaxed or "s" for simple. For "adkim=".
ASPF Align // Alignment: "r" (default) for relaxed or "s" for simple. For "aspf=".
AggregateReportingInterval int // In seconds, default 86400. For "ri="
FailureReportingOptions []string // "0" (default), "1", "d", "s". For "fo=".
ReportingFormat []string // "afrf" (default). For "rf=".
Percentage int // Between 0 and 100, default 100. For "pct=". Policy applies randomly to this percentage of messages.
}
// DefaultRecord holds the defaults for a DMARC record.
@ -109,13 +107,13 @@ func (r Record) String() string {
s := strings.Join(l, ",")
write(true, "ruf", s)
}
write(r.ADKIM != "", "adkim", string(r.ADKIM))
write(r.ASPF != "", "aspf", string(r.ASPF))
write(r.ADKIM != "" && r.ADKIM != "r", "adkim", string(r.ADKIM))
write(r.ASPF != "" && r.ASPF != "r", "aspf", string(r.ASPF))
write(r.AggregateReportingInterval != DefaultRecord.AggregateReportingInterval, "ri", fmt.Sprintf("%d", r.AggregateReportingInterval))
if len(r.FailureReportingOptions) > 1 || (len(r.FailureReportingOptions) == 1 && r.FailureReportingOptions[0] != "0") {
if len(r.FailureReportingOptions) > 1 || len(r.FailureReportingOptions) == 1 && r.FailureReportingOptions[0] != "0" {
write(true, "fo", strings.Join(r.FailureReportingOptions, ":"))
}
if len(r.ReportingFormat) > 1 || (len(r.ReportingFormat) == 1 && strings.EqualFold(r.ReportingFormat[0], "afrf")) {
if len(r.ReportingFormat) > 1 || len(r.ReportingFormat) == 1 && !strings.EqualFold(r.ReportingFormat[0], "afrf") {
write(true, "rf", strings.Join(r.FailureReportingOptions, ":"))
}
write(r.Percentage != 100, "pct", fmt.Sprintf("%d", r.Percentage))
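A sketch of how these String changes affect output, starting from DefaultRecord (exact formatting approximated):

r := DefaultRecord
r.Policy = PolicyReject
r.AggregateReportAddresses = []URI{{Address: "mailto:dmarcrpt@mox.example"}}
// Default alignments (adkim=r, aspf=r) are no longer written, so the record
// stays minimal, along the lines of "v=DMARC1; p=reject; rua=mailto:dmarcrpt@mox.example".
txt := r.String()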

77
dmarcdb/dmarcdb.go Normal file
View File

@ -0,0 +1,77 @@
// Package dmarcdb stores incoming DMARC aggregate reports and evaluations for outgoing aggregate reports.
//
// With DMARC, a domain can request reports with DMARC evaluation results to be
// sent to a specified address. Mox parses such reports, stores them in its
// database and makes them available through its admin web interface. Mox also
// keeps track of the evaluations it does for incoming messages and sends reports
// to mail servers that request reports.
//
// Only aggregate reports are stored and sent. Failure reports about individual
// messages are not implemented.
package dmarcdb
import (
"context"
"fmt"
"os"
"path/filepath"
"time"
"github.com/mjl-/bstore"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/moxvar"
)
// Init opens the databases.
//
// The incoming reports and evaluations for outgoing reports are kept in separate
// databases, for simpler file-based handling.
func Init() error {
if ReportsDB != nil || EvalDB != nil {
return fmt.Errorf("already initialized")
}
log := mlog.New("dmarcdb", nil)
var err error
ReportsDB, err = openReportsDB(mox.Shutdown, log)
if err != nil {
return fmt.Errorf("open reports db: %v", err)
}
EvalDB, err = openEvalDB(mox.Shutdown, log)
if err != nil {
return fmt.Errorf("open eval db: %v", err)
}
return nil
}
func Close() error {
if err := ReportsDB.Close(); err != nil {
return fmt.Errorf("closing reports db: %w", err)
}
ReportsDB = nil
if err := EvalDB.Close(); err != nil {
return fmt.Errorf("closing eval db: %w", err)
}
EvalDB = nil
return nil
}
func openReportsDB(ctx context.Context, log mlog.Log) (*bstore.DB, error) {
p := mox.DataDirPath("dmarcrpt.db")
os.MkdirAll(filepath.Dir(p), 0770)
opts := bstore.Options{Timeout: 5 * time.Second, Perm: 0660, RegisterLogger: moxvar.RegisterLogger(p, log.Logger)}
return bstore.Open(ctx, p, &opts, ReportsDBTypes...)
}
func openEvalDB(ctx context.Context, log mlog.Log) (*bstore.DB, error) {
p := mox.DataDirPath("dmarceval.db")
os.MkdirAll(filepath.Dir(p), 0770)
opts := bstore.Options{Timeout: 5 * time.Second, Perm: 0660, RegisterLogger: moxvar.RegisterLogger(p, log.Logger)}
return bstore.Open(ctx, p, &opts, EvalDBTypes...)
}

1064
dmarcdb/eval.go Normal file

File diff suppressed because it is too large

403
dmarcdb/eval_test.go Normal file
View File

@ -0,0 +1,403 @@
package dmarcdb
import (
"context"
"encoding/json"
"encoding/xml"
"fmt"
"io"
"os"
"path/filepath"
"reflect"
"strings"
"testing"
"time"
"github.com/mjl-/mox/dmarcrpt"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/moxio"
"github.com/mjl-/mox/queue"
"slices"
)
func tcheckf(t *testing.T, err error, format string, args ...any) {
t.Helper()
if err != nil {
t.Fatalf("%s: %s", fmt.Sprintf(format, args...), err)
}
}
func tcompare(t *testing.T, got, expect any) {
t.Helper()
if !reflect.DeepEqual(got, expect) {
t.Fatalf("got:\n%v\nexpected:\n%v", got, expect)
}
}
func TestEvaluations(t *testing.T) {
os.RemoveAll("../testdata/dmarcdb/data")
mox.Context = ctxbg
mox.ConfigStaticPath = filepath.FromSlash("../testdata/dmarcdb/mox.conf")
mox.MustLoadConfig(true, false)
os.Remove(mox.DataDirPath("dmarceval.db"))
err := Init()
tcheckf(t, err, "init")
defer func() {
err := Close()
tcheckf(t, err, "close")
}()
parseJSON := func(s string) (e Evaluation) {
t.Helper()
err := json.Unmarshal([]byte(s), &e)
tcheckf(t, err, "unmarshal")
return
}
packJSON := func(e Evaluation) string {
t.Helper()
buf, err := json.Marshal(e)
tcheckf(t, err, "marshal")
return string(buf)
}
e0 := Evaluation{
PolicyDomain: "sender1.example",
Evaluated: time.Now().Round(0),
IntervalHours: 1,
PolicyPublished: dmarcrpt.PolicyPublished{
Domain: "sender1.example",
ADKIM: dmarcrpt.AlignmentRelaxed,
ASPF: dmarcrpt.AlignmentRelaxed,
Policy: dmarcrpt.DispositionReject,
SubdomainPolicy: dmarcrpt.DispositionReject,
Percentage: 100,
},
SourceIP: "10.1.2.3",
Disposition: dmarcrpt.DispositionNone,
AlignedDKIMPass: true,
AlignedSPFPass: true,
EnvelopeTo: "mox.example",
EnvelopeFrom: "sender1.example",
HeaderFrom: "sender1.example",
DKIMResults: []dmarcrpt.DKIMAuthResult{
{
Domain: "sender1.example",
Selector: "test",
Result: dmarcrpt.DKIMPass,
},
},
SPFResults: []dmarcrpt.SPFAuthResult{
{
Domain: "sender1.example",
Scope: dmarcrpt.SPFDomainScopeMailFrom,
Result: dmarcrpt.SPFPass,
},
},
}
e1 := e0
e2 := parseJSON(strings.ReplaceAll(packJSON(e0), "sender1.example", "sender2.example"))
e3 := parseJSON(strings.ReplaceAll(packJSON(e0), "10.1.2.3", "10.3.2.1"))
e3.Optional = true
for i, e := range []*Evaluation{&e0, &e1, &e2, &e3} {
e.Evaluated = e.Evaluated.Add(time.Duration(i) * time.Second)
err = AddEvaluation(ctxbg, 3600, e)
tcheckf(t, err, "add evaluation")
}
expStats := map[string]EvaluationStat{
"sender1.example": {
Domain: dns.Domain{ASCII: "sender1.example"},
Dispositions: []string{"none"},
Count: 3,
SendReport: true,
},
"sender2.example": {
Domain: dns.Domain{ASCII: "sender2.example"},
Dispositions: []string{"none"},
Count: 1,
SendReport: true,
},
}
stats, err := EvaluationStats(ctxbg)
tcheckf(t, err, "evaluation stats")
tcompare(t, stats, expStats)
// EvaluationsDomain
evals, err := EvaluationsDomain(ctxbg, dns.Domain{ASCII: "sender1.example"})
tcheckf(t, err, "get evaluations for domain")
tcompare(t, evals, []Evaluation{e0, e1, e3})
evals, err = EvaluationsDomain(ctxbg, dns.Domain{ASCII: "sender2.example"})
tcheckf(t, err, "get evaluations for domain")
tcompare(t, evals, []Evaluation{e2})
evals, err = EvaluationsDomain(ctxbg, dns.Domain{ASCII: "bogus.example"})
tcheckf(t, err, "get evaluations for domain")
tcompare(t, evals, []Evaluation{})
// RemoveEvaluationsDomain
err = RemoveEvaluationsDomain(ctxbg, dns.Domain{ASCII: "sender1.example"})
tcheckf(t, err, "remove evaluations")
expStats = map[string]EvaluationStat{
"sender2.example": {
Domain: dns.Domain{ASCII: "sender2.example"},
Dispositions: []string{"none"},
Count: 1,
SendReport: true,
},
}
stats, err = EvaluationStats(ctxbg)
tcheckf(t, err, "evaluation stats")
tcompare(t, stats, expStats)
}
func TestSendReports(t *testing.T) {
os.RemoveAll("../testdata/dmarcdb/data")
mox.Context = ctxbg
mox.ConfigStaticPath = filepath.FromSlash("../testdata/dmarcdb/mox.conf")
mox.MustLoadConfig(true, false)
os.Remove(mox.DataDirPath("dmarceval.db"))
err := Init()
tcheckf(t, err, "init")
defer func() {
err := Close()
tcheckf(t, err, "close")
}()
resolver := dns.MockResolver{
TXT: map[string][]string{
"_dmarc.sender.example.": {
"v=DMARC1; rua=mailto:dmarcrpt@sender.example; ri=3600",
},
},
}
end := nextWholeHour(time.Now())
eval := Evaluation{
PolicyDomain: "sender.example",
Evaluated: end.Add(-time.Hour / 2),
IntervalHours: 1,
PolicyPublished: dmarcrpt.PolicyPublished{
Domain: "sender.example",
ADKIM: dmarcrpt.AlignmentRelaxed,
ASPF: dmarcrpt.AlignmentRelaxed,
Policy: dmarcrpt.DispositionReject,
SubdomainPolicy: dmarcrpt.DispositionReject,
Percentage: 100,
},
SourceIP: "10.1.2.3",
Disposition: dmarcrpt.DispositionNone,
AlignedDKIMPass: true,
AlignedSPFPass: true,
EnvelopeTo: "mox.example",
EnvelopeFrom: "sender.example",
HeaderFrom: "sender.example",
DKIMResults: []dmarcrpt.DKIMAuthResult{
{
Domain: "sender.example",
Selector: "test",
Result: dmarcrpt.DKIMPass,
},
},
SPFResults: []dmarcrpt.SPFAuthResult{
{
Domain: "sender.example",
Scope: dmarcrpt.SPFDomainScopeMailFrom,
Result: dmarcrpt.SPFPass,
},
},
}
expFeedback := &dmarcrpt.Feedback{
XMLName: xml.Name{Local: "feedback"},
Version: "1.0",
ReportMetadata: dmarcrpt.ReportMetadata{
OrgName: "mail.mox.example",
Email: "postmaster@mail.mox.example",
DateRange: dmarcrpt.DateRange{
Begin: end.Add(-1 * time.Hour).Unix(),
End: end.Add(-time.Second).Unix(),
},
},
PolicyPublished: dmarcrpt.PolicyPublished{
Domain: "sender.example",
ADKIM: dmarcrpt.AlignmentRelaxed,
ASPF: dmarcrpt.AlignmentRelaxed,
Policy: dmarcrpt.DispositionReject,
SubdomainPolicy: dmarcrpt.DispositionReject,
Percentage: 100,
},
Records: []dmarcrpt.ReportRecord{
{
Row: dmarcrpt.Row{
SourceIP: "10.1.2.3",
Count: 1,
PolicyEvaluated: dmarcrpt.PolicyEvaluated{
Disposition: dmarcrpt.DispositionNone,
DKIM: dmarcrpt.DMARCPass,
SPF: dmarcrpt.DMARCPass,
},
},
Identifiers: dmarcrpt.Identifiers{
EnvelopeTo: "mox.example",
EnvelopeFrom: "sender.example",
HeaderFrom: "sender.example",
},
AuthResults: dmarcrpt.AuthResults{
DKIM: []dmarcrpt.DKIMAuthResult{
{
Domain: "sender.example",
Selector: "test",
Result: dmarcrpt.DKIMPass,
},
},
SPF: []dmarcrpt.SPFAuthResult{
{
Domain: "sender.example",
Scope: dmarcrpt.SPFDomainScopeMailFrom,
Result: dmarcrpt.SPFPass,
},
},
},
},
},
}
// Replace jitteredTimeUntil with a step-locked version so the actual sleep returns immediately when we want it to.
wait := make(chan struct{})
step := make(chan time.Duration)
jitteredTimeUntil = func(_ time.Time) time.Duration {
wait <- struct{}{}
return <-step
}
sleepBetween = func(ctx context.Context, between time.Duration) (ok bool) { return true }
test := func(evals []Evaluation, expAggrAddrs map[string]struct{}, expErrorAddrs map[string]struct{}, optExpReport *dmarcrpt.Feedback) {
t.Helper()
mox.Shutdown, mox.ShutdownCancel = context.WithCancel(ctxbg)
for _, e := range evals {
err := EvalDB.Insert(ctxbg, &e)
tcheckf(t, err, "inserting evaluation")
}
aggrAddrs := map[string]struct{}{}
errorAddrs := map[string]struct{}{}
queueAdd = func(ctx context.Context, log mlog.Log, senderAccount string, msgFile *os.File, qml ...queue.Msg) error {
if len(qml) != 1 {
return fmt.Errorf("queued %d messages, expected 1", len(qml))
}
qm := qml[0]
// Read message file. Also write copy to disk for inspection.
buf, err := io.ReadAll(&moxio.AtReader{R: msgFile})
tcheckf(t, err, "read report message")
err = os.WriteFile("../testdata/dmarcdb/data/report.eml", slices.Concat(qm.MsgPrefix, buf), 0600)
tcheckf(t, err, "write report message")
var feedback *dmarcrpt.Feedback
addr := qm.Recipient().String()
isErrorReport := strings.Contains(string(buf), "DMARC aggregate reporting error report")
if isErrorReport {
errorAddrs[addr] = struct{}{}
} else {
aggrAddrs[addr] = struct{}{}
feedback, err = dmarcrpt.ParseMessageReport(log.Logger, msgFile)
tcheckf(t, err, "parsing generated report message")
}
if optExpReport != nil {
// Parse report in message and compare with expected.
optExpReport.ReportMetadata.ReportID = feedback.ReportMetadata.ReportID
tcompare(t, feedback, expFeedback)
}
return nil
}
Start(resolver)
// Run first loop.
<-wait
step <- 0
<-wait
tcompare(t, aggrAddrs, expAggrAddrs)
tcompare(t, errorAddrs, expErrorAddrs)
// Second loop. Evaluations cleaned, should not result in report messages.
aggrAddrs = map[string]struct{}{}
errorAddrs = map[string]struct{}{}
step <- 0
<-wait
tcompare(t, aggrAddrs, map[string]struct{}{})
tcompare(t, errorAddrs, map[string]struct{}{})
// Cause Start to stop.
mox.ShutdownCancel()
step <- time.Minute
}
// Typical case, with a single address that receives an aggregate report.
test([]Evaluation{eval}, map[string]struct{}{"dmarcrpt@sender.example": {}}, map[string]struct{}{}, expFeedback)
// Only optional evaluations, no report at all.
evalOpt := eval
evalOpt.Optional = true
test([]Evaluation{evalOpt}, map[string]struct{}{}, map[string]struct{}{}, nil)
// Address is suppressed.
sa := SuppressAddress{ReportingAddress: "dmarcrpt@sender.example", Until: time.Now().Add(time.Minute)}
err = EvalDB.Insert(ctxbg, &sa)
tcheckf(t, err, "insert suppress address")
test([]Evaluation{eval}, map[string]struct{}{}, map[string]struct{}{}, nil)
// Suppression has expired.
sa.Until = time.Now().Add(-time.Minute)
err = EvalDB.Update(ctxbg, &sa)
tcheckf(t, err, "update suppress address")
test([]Evaluation{eval}, map[string]struct{}{"dmarcrpt@sender.example": {}}, map[string]struct{}{}, expFeedback)
// Two RUAs, one with a size limit that doesn't pass, and one that does.
resolver.TXT["_dmarc.sender.example."] = []string{"v=DMARC1; rua=mailto:dmarcrpt1@sender.example!1,mailto:dmarcrpt2@sender.example!10t; ri=3600"}
test([]Evaluation{eval}, map[string]struct{}{"dmarcrpt2@sender.example": {}}, map[string]struct{}{}, nil)
// Redirect to external domain, without permission, no report sent.
resolver.TXT["_dmarc.sender.example."] = []string{"v=DMARC1; rua=mailto:unauthorized@other.example"}
test([]Evaluation{eval}, map[string]struct{}{}, map[string]struct{}{}, nil)
// Redirect to external domain, with basic permission.
resolver.TXT = map[string][]string{
"_dmarc.sender.example.": {"v=DMARC1; rua=mailto:authorized@other.example"},
"sender.example._report._dmarc.other.example.": {"v=DMARC1"},
}
test([]Evaluation{eval}, map[string]struct{}{"authorized@other.example": {}}, map[string]struct{}{}, nil)
// Redirect to authorized external domain, with 2 allowed replacements and 1 invalid and 1 refusing due to size.
resolver.TXT = map[string][]string{
"_dmarc.sender.example.": {"v=DMARC1; rua=mailto:authorized@other.example"},
"sender.example._report._dmarc.other.example.": {"v=DMARC1; rua=mailto:good1@other.example,mailto:bad1@yetanother.example,mailto:good2@other.example,mailto:badsize@other.example!1"},
}
test([]Evaluation{eval}, map[string]struct{}{"good1@other.example": {}, "good2@other.example": {}}, map[string]struct{}{}, nil)
// Without RUA, we send no message.
resolver.TXT = map[string][]string{
"_dmarc.sender.example.": {"v=DMARC1;"},
}
test([]Evaluation{eval}, map[string]struct{}{}, map[string]struct{}{}, nil)
// If the message size limit is reached, an error report is sent.
resolver.TXT = map[string][]string{
"_dmarc.sender.example.": {"v=DMARC1; rua=mailto:dmarcrpt@sender.example!1"},
}
test([]Evaluation{eval}, map[string]struct{}{}, map[string]struct{}{"dmarcrpt@sender.example": {}}, nil)
}
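For reference, two shapes of DNS TXT records exercised above, in standard DMARC syntax (a sketch using the test's own domains): the "!1" and "!10t" suffixes are per-address report size limits (1 byte and 10 terabytes), and the second record is the authorization an external domain must publish before it may receive reports about sender.example.
_dmarc.sender.example.                        TXT "v=DMARC1; rua=mailto:dmarcrpt1@sender.example!1,mailto:dmarcrpt2@sender.example!10t; ri=3600"
sender.example._report._dmarc.other.example.  TXT "v=DMARC1"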

dmarcdb/main_test.go (new file, 17 lines)

@ -0,0 +1,17 @@
package dmarcdb
import (
"fmt"
"os"
"testing"
"github.com/mjl-/mox/metrics"
)
func TestMain(m *testing.M) {
m.Run()
if metrics.Panics.Load() > 0 {
fmt.Println("unhandled panics encountered")
os.Exit(2)
}
}


@ -1,17 +1,8 @@
// Package dmarcdb stores incoming DMARC reports.
//
// With DMARC, a domain can request emails with DMARC verification results by
// remote mail servers to be sent to a specified address. Mox parses such
// reports, stores them in its database and makes them available through its
// admin web interface.
package dmarcdb
import (
"context"
"fmt"
"os"
"path/filepath"
"sync"
"time"
"github.com/prometheus/client_golang/prometheus"
@ -21,15 +12,11 @@ import (
"github.com/mjl-/mox/dmarcrpt"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
)
var xlog = mlog.New("dmarcdb")
var (
dmarcDB *bstore.DB
mutex sync.Mutex
ReportsDBTypes = []any{DomainFeedback{}} // Types stored in DB.
ReportsDB *bstore.DB // Exported for backups.
)
var (
@ -67,44 +54,18 @@ type DomainFeedback struct {
dmarcrpt.Feedback
}
func database() (rdb *bstore.DB, rerr error) {
mutex.Lock()
defer mutex.Unlock()
if dmarcDB == nil {
p := mox.DataDirPath("dmarcrpt.db")
os.MkdirAll(filepath.Dir(p), 0770)
db, err := bstore.Open(p, &bstore.Options{Timeout: 5 * time.Second, Perm: 0660}, DomainFeedback{})
if err != nil {
return nil, err
}
dmarcDB = db
}
return dmarcDB, nil
}
// Init opens the database.
func Init() error {
_, err := database()
return err
}
// AddReport adds a DMARC aggregate feedback report from an email to the database,
// and updates prometheus metrics.
//
// fromDomain is the domain in the report message From header.
func AddReport(ctx context.Context, f *dmarcrpt.Feedback, fromDomain dns.Domain) error {
db, err := database()
if err != nil {
return err
}
d, err := dns.ParseDomain(f.PolicyPublished.Domain)
if err != nil {
return fmt.Errorf("parsing domain in report: %v", err)
}
df := DomainFeedback{0, d.Name(), fromDomain.Name(), *f}
if err := db.Insert(&df); err != nil {
if err := ReportsDB.Insert(ctx, &df); err != nil {
return err
}
@ -143,38 +104,23 @@ func AddReport(ctx context.Context, f *dmarcrpt.Feedback, fromDomain dns.Domain)
// Records returns all reports in the database.
func Records(ctx context.Context) ([]DomainFeedback, error) {
db, err := database()
if err != nil {
return nil, err
}
return bstore.QueryDB[DomainFeedback](db).List()
return bstore.QueryDB[DomainFeedback](ctx, ReportsDB).List()
}
// RecordID returns the report for the ID.
func RecordID(ctx context.Context, id int64) (DomainFeedback, error) {
db, err := database()
if err != nil {
return DomainFeedback{}, err
}
e := DomainFeedback{ID: id}
err = db.Get(&e)
err := ReportsDB.Get(ctx, &e)
return e, err
}
// RecordsPeriodDomain returns the reports overlapping start and end, for the given
// domain. If domain is empty, all records match for domain.
func RecordsPeriodDomain(ctx context.Context, start, end time.Time, domain string) ([]DomainFeedback, error) {
db, err := database()
if err != nil {
return nil, err
}
s := start.Unix()
e := end.Unix()
q := bstore.QueryDB[DomainFeedback](db)
q := bstore.QueryDB[DomainFeedback](ctx, ReportsDB)
if domain != "" {
q.FilterNonzero(DomainFeedback{Domain: domain})
}


@ -13,17 +13,20 @@ import (
"github.com/mjl-/mox/mox-"
)
var ctxbg = context.Background()
func TestDMARCDB(t *testing.T) {
mox.ConfigStaticPath = "../testdata/dmarcdb/fake.conf"
mox.Conf.Static.DataDir = "."
mox.Shutdown = ctxbg
mox.ConfigStaticPath = filepath.FromSlash("../testdata/dmarcdb/mox.conf")
mox.MustLoadConfig(true, false)
dbpath := mox.DataDirPath("dmarcrpt.db")
os.MkdirAll(filepath.Dir(dbpath), 0770)
defer os.Remove(dbpath)
if err := Init(); err != nil {
t.Fatalf("init database: %s", err)
}
os.Remove(mox.DataDirPath("dmarcrpt.db"))
err := Init()
tcheckf(t, err, "init")
defer func() {
err := Close()
tcheckf(t, err, "close")
}()
feedback := &dmarcrpt.Feedback{
ReportMetadata: dmarcrpt.ReportMetadata{
@ -76,32 +79,32 @@ func TestDMARCDB(t *testing.T) {
},
},
}
if err := AddReport(context.Background(), feedback, dns.Domain{ASCII: "google.com"}); err != nil {
if err := AddReport(ctxbg, feedback, dns.Domain{ASCII: "google.com"}); err != nil {
t.Fatalf("adding report: %s", err)
}
records, err := Records(context.Background())
records, err := Records(ctxbg)
if err != nil || len(records) != 1 || !reflect.DeepEqual(&records[0].Feedback, feedback) {
t.Fatalf("records: got err %v, records %#v, expected no error, single record with feedback %#v", err, records, feedback)
}
record, err := RecordID(context.Background(), records[0].ID)
record, err := RecordID(ctxbg, records[0].ID)
if err != nil || !reflect.DeepEqual(&record.Feedback, feedback) {
t.Fatalf("record id: got err %v, record %#v, expected feedback %#v", err, record, feedback)
}
start := time.Unix(1596412800, 0)
end := time.Unix(1596499199, 0)
records, err = RecordsPeriodDomain(context.Background(), start, end, "example.org")
records, err = RecordsPeriodDomain(ctxbg, start, end, "example.org")
if err != nil || len(records) != 1 || !reflect.DeepEqual(&records[0].Feedback, feedback) {
t.Fatalf("records: got err %v, records %#v, expected no error, single record with feedback %#v", err, records, feedback)
}
records, err = RecordsPeriodDomain(context.Background(), end, end, "example.org")
records, err = RecordsPeriodDomain(ctxbg, end, end, "example.org")
if err != nil || len(records) != 0 {
t.Fatalf("records: got err %v, records %#v, expected no error and no records", err, records)
}
records, err = RecordsPeriodDomain(context.Background(), start, end, "other.example")
records, err = RecordsPeriodDomain(ctxbg, start, end, "other.example")
if err != nil || len(records) != 0 {
t.Fatalf("records: got err %v, records %#v, expected no error and no records", err, records)
}


@ -1,9 +1,14 @@
package dmarcrpt
import (
"encoding/xml"
)
// Initially generated by xsdgen, then modified.
// Feedback is the top-level XML field returned.
type Feedback struct {
XMLName xml.Name `xml:"feedback" json:"-"` // todo: removing the json tag triggers bug in sherpadoc, should fix.
Version string `xml:"version"`
ReportMetadata ReportMetadata `xml:"report_metadata"`
PolicyPublished PolicyPublished `xml:"policy_published"`
@ -26,6 +31,9 @@ type DateRange struct {
// PolicyPublished is the policy as found in DNS for the domain.
type PolicyPublished struct {
// Domain is where DMARC record was found, not necessarily message From. Reports we
// generate use unicode names, incoming reports may have either ASCII-only or
// Unicode domains.
Domain string `xml:"domain"`
ADKIM Alignment `xml:"adkim,omitempty"`
ASPF Alignment `xml:"aspf,omitempty"`
@ -39,6 +47,8 @@ type PolicyPublished struct {
type Alignment string
const (
AlignmentAbsent Alignment = ""
AlignmentRelaxed Alignment = "r" // Subdomains match the DMARC from-domain.
AlignmentStrict Alignment = "s" // Only exact from-domain match.
)
@ -48,6 +58,8 @@ const (
type Disposition string
const (
DispositionAbsent Disposition = ""
DispositionNone Disposition = "none"
DispositionQuarantine Disposition = "quarantine"
DispositionReject Disposition = "reject"
@ -79,6 +91,8 @@ type PolicyEvaluated struct {
type DMARCResult string
const (
DMARCAbsent DMARCResult = ""
DMARCPass DMARCResult = "pass"
DMARCFail DMARCResult = "fail"
)
@ -93,6 +107,8 @@ type PolicyOverrideReason struct {
type PolicyOverride string
const (
PolicyOverrideAbsent PolicyOverride = ""
PolicyOverrideForwarded PolicyOverride = "forwarded"
PolicyOverrideSampledOut PolicyOverride = "sampled_out"
PolicyOverrideTrustedForwarder PolicyOverride = "trusted_forwarder"
@ -122,6 +138,8 @@ type DKIMAuthResult struct {
type DKIMResult string
const (
DKIMAbsent DKIMResult = ""
DKIMNone DKIMResult = "none"
DKIMPass DKIMResult = "pass"
DKIMFail DKIMResult = "fail"
@ -140,6 +158,8 @@ type SPFAuthResult struct {
type SPFDomainScope string
const (
SPFDomainScopeAbsent SPFDomainScope = ""
SPFDomainScopeHelo SPFDomainScope = "helo" // SMTP EHLO
SPFDomainScopeMailFrom SPFDomainScope = "mfrom" // SMTP "MAIL FROM".
)
@ -147,6 +167,8 @@ const (
type SPFResult string
const (
SPFAbsent SPFResult = ""
SPFNone SPFResult = "none"
SPFNeutral SPFResult = "neutral"
SPFPass SPFResult = "pass"


@ -9,14 +9,16 @@ import (
"errors"
"fmt"
"io"
"log/slog"
"net/http"
"strings"
"github.com/mjl-/mox/message"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/moxio"
)
var ErrNoReport = errors.New("no dmarc report found in message")
var ErrNoReport = errors.New("no dmarc aggregate report found in message")
// ParseReport parses an XML aggregate feedback report.
// The maximum report size is 20MB.
@ -33,34 +35,35 @@ func ParseReport(r io.Reader) (*Feedback, error) {
// ParseMessageReport parses an aggregate feedback report from a mail message. The
// maximum message size is 15MB, the maximum report size after decompression is
// 20MB.
func ParseMessageReport(r io.ReaderAt) (*Feedback, error) {
func ParseMessageReport(elog *slog.Logger, r io.ReaderAt) (*Feedback, error) {
log := mlog.New("dmarcrpt", elog)
// ../rfc/7489:1801
p, err := message.Parse(&moxio.LimitAtReader{R: r, Limit: 15 * 1024 * 1024})
p, err := message.Parse(log.Logger, true, &moxio.LimitAtReader{R: r, Limit: 15 * 1024 * 1024})
if err != nil {
return nil, fmt.Errorf("parsing mail message: %s", err)
}
return parseMessageReport(p)
return parseMessageReport(log, p)
}
func parseMessageReport(p message.Part) (*Feedback, error) {
func parseMessageReport(log mlog.Log, p message.Part) (*Feedback, error) {
// Pretty much any mime structure is allowed. ../rfc/7489:1861
// In practice, some parties will send the report as the only (non-multipart)
// content of the message.
if p.MediaType != "MULTIPART" {
return parseReport(p)
return parseReport(log, p)
}
for {
sp, err := p.ParseNextPart()
sp, err := p.ParseNextPart(log.Logger)
if err == io.EOF {
return nil, ErrNoReport
}
if err != nil {
return nil, err
}
report, err := parseMessageReport(*sp)
report, err := parseMessageReport(log, *sp)
if err == ErrNoReport {
continue
} else if err != nil || report != nil {
@ -69,12 +72,12 @@ func parseMessageReport(p message.Part) (*Feedback, error) {
}
}
func parseReport(p message.Part) (*Feedback, error) {
func parseReport(log mlog.Log, p message.Part) (*Feedback, error) {
ct := strings.ToLower(p.MediaType + "/" + p.MediaSubType)
r := p.Reader()
// If no (useful) content-type is set, try to detect it.
if ct == "" || ct == "application/octect-stream" {
if ct == "" || ct == "application/octet-stream" {
data := make([]byte, 512)
n, err := io.ReadFull(r, data)
if err == io.EOF {
@ -90,8 +93,8 @@ func parseReport(p message.Part) (*Feedback, error) {
switch ct {
case "application/zip":
// Google sends messages with direct application/zip content-type.
return parseZip(r)
case "application/gzip":
return parseZip(log, r)
case "application/gzip", "application/x-gzip":
gzr, err := gzip.NewReader(r)
if err != nil {
return nil, fmt.Errorf("decoding gzip xml report: %s", err)
@ -103,7 +106,7 @@ func parseReport(p message.Part) (*Feedback, error) {
return nil, ErrNoReport
}
func parseZip(r io.Reader) (*Feedback, error) {
func parseZip(log mlog.Log, r io.Reader) (*Feedback, error) {
buf, err := io.ReadAll(r)
if err != nil {
return nil, fmt.Errorf("reading feedback: %s", err)
@ -119,6 +122,9 @@ func parseZip(r io.Reader) (*Feedback, error) {
if err != nil {
return nil, fmt.Errorf("opening file in zip: %s", err)
}
defer f.Close()
defer func() {
err := f.Close()
log.Check(err, "closing report file in zip file")
}()
return ParseReport(f)
}
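A minimal sketch of calling the new logger-taking API on a stored report message; the file path is hypothetical and an *os.File satisfies the io.ReaderAt parameter:

package main

import (
	"fmt"
	"log"
	"log/slog"
	"os"

	"github.com/mjl-/mox/dmarcrpt"
)

func main() {
	f, err := os.Open("report.eml") // hypothetical path to a DMARC report email
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	feedback, err := dmarcrpt.ParseMessageReport(slog.Default(), f)
	if err != nil {
		log.Fatalf("parse dmarc aggregate report: %v", err)
	}
	fmt.Println(feedback.ReportMetadata.OrgName, feedback.PolicyPublished.Domain)
}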


@ -1,12 +1,18 @@
package dmarcrpt
import (
"encoding/xml"
"os"
"path/filepath"
"reflect"
"strings"
"testing"
"github.com/mjl-/mox/mlog"
)
var pkglog = mlog.New("dmarcrpt", nil)
const reportExample = `<?xml version="1.0" encoding="UTF-8" ?>
<feedback>
<report_metadata>
@ -57,6 +63,7 @@ const reportExample = `<?xml version="1.0" encoding="UTF-8" ?>
func TestParseReport(t *testing.T) {
var expect = &Feedback{
XMLName: xml.Name{Local: "feedback"},
ReportMetadata: ReportMetadata{
OrgName: "google.com",
Email: "noreply-dmarc-support@google.com",
@ -118,19 +125,19 @@ func TestParseReport(t *testing.T) {
}
func TestParseMessageReport(t *testing.T) {
const dir = "../testdata/dmarc-reports"
dir := filepath.FromSlash("../testdata/dmarc-reports")
files, err := os.ReadDir(dir)
if err != nil {
t.Fatalf("listing dmarc report emails: %s", err)
t.Fatalf("listing dmarc aggregate report emails: %s", err)
}
for _, file := range files {
p := dir + "/" + file.Name()
p := filepath.Join(dir, file.Name())
f, err := os.Open(p)
if err != nil {
t.Fatalf("open %q: %s", p, err)
}
_, err = ParseMessageReport(f)
_, err = ParseMessageReport(pkglog.Logger, f)
if err != nil {
t.Fatalf("ParseMessageReport: %q: %s", p, err)
}
@ -138,7 +145,7 @@ func TestParseMessageReport(t *testing.T) {
}
// No report in a non-multipart message.
_, err = ParseMessageReport(strings.NewReader("From: <mjl@mox.example>\r\n\r\nNo report.\r\n"))
_, err = ParseMessageReport(pkglog.Logger, strings.NewReader("From: <mjl@mox.example>\r\n\r\nNo report.\r\n"))
if err != ErrNoReport {
t.Fatalf("message without report, got err %#v, expected ErrNoreport", err)
}
@ -164,7 +171,7 @@ MIME-Version: 1.0
--===============5735553800636657282==--
`, "\n", "\r\n")
_, err = ParseMessageReport(strings.NewReader(multipartNoreport))
_, err = ParseMessageReport(pkglog.Logger, strings.NewReader(multipartNoreport))
if err != ErrNoReport {
t.Fatalf("message without report, got err %#v, expected ErrNoreport", err)
}


@ -9,19 +9,31 @@ import (
"strings"
"golang.org/x/net/idna"
"github.com/mjl-/adns"
)
var errTrailingDot = errors.New("dns name has trailing dot")
// Pedantic enables stricter parsing.
var Pedantic bool
var (
errTrailingDot = errors.New("dns name has trailing dot")
errUnderscore = errors.New("domain name with underscore")
errIDNA = errors.New("idna")
errIPNotName = errors.New("ip address while name required")
)
// Domain is a domain name, with one or more labels, with at least an ASCII
// representation, and for IDNA non-ASCII domains a unicode representation.
// The ASCII string must be used for DNS lookups.
// The ASCII string must be used for DNS lookups. The strings do not have a
// trailing dot. When using with StrictResolver, add the trailing dot.
type Domain struct {
// A non-unicode domain, e.g. with A-labels (xn--...) or NR-LDH (non-reserved
// letters/digits/hyphens) labels. Always in lower case.
// letters/digits/hyphens) labels. Always in lower case. No trailing dot.
ASCII string
// Name as U-labels. Empty if this is an ASCII-only domain.
// Name as U-labels, in Unicode NFC. Empty if this is an ASCII-only domain. No
// trailing dot.
Unicode string
}
@ -60,7 +72,8 @@ func (d Domain) String() string {
}
// LogString returns a domain for logging.
// For IDNA names, the string contains both the unicode and ASCII name.
// For IDNA names, the string is the slash-separated Unicode and ASCII name.
// For ASCII-only domain names, just the ASCII string is returned.
func (d Domain) LogString() string {
if d.Unicode == "" {
return d.ASCII
@ -77,18 +90,26 @@ func (d Domain) IsZero() bool {
// labels (unicode).
// Names are IDN-canonicalized and lower-cased.
// Characters in unicode can be replaced by equivalents. E.g. "Ⓡ" to "r". This
// means you should only compare parsed domain names, never strings directly.
// means you should only compare parsed domain names, never unparsed strings
// directly.
func ParseDomain(s string) (Domain, error) {
if strings.HasSuffix(s, ".") {
return Domain{}, errTrailingDot
}
// IPv4 addresses would be accepted by idna lookups. TLDs cannot be all numerical,
// so IP addresses are not valid DNS names.
if net.ParseIP(s) != nil {
return Domain{}, errIPNotName
}
ascii, err := idna.Lookup.ToASCII(s)
if err != nil {
return Domain{}, fmt.Errorf("to ascii: %w", err)
return Domain{}, fmt.Errorf("%w: to ascii: %v", errIDNA, err)
}
unicode, err := idna.Lookup.ToUnicode(s)
if err != nil {
return Domain{}, fmt.Errorf("to unicode: %w", err)
return Domain{}, fmt.Errorf("%w: to unicode: %w", errIDNA, err)
}
// todo: should we cause errors for unicode domains that were not in
// canonical form? we are now accepting all kinds of obscure spellings
@ -100,16 +121,54 @@ func ParseDomain(s string) (Domain, error) {
return Domain{ascii, unicode}, nil
}
// IsNotFound returns whether an error is a net.DNSError with IsNotFound set.
// ParseDomainLax parses a domain like ParseDomain, but allows labels with
// underscores if the entire domain name is ASCII-only non-IDNA and Pedantic mode
// is not enabled. Used for interoperability, e.g. domains may specify MX
// targets with underscores.
func ParseDomainLax(s string) (Domain, error) {
if Pedantic || !strings.Contains(s, "_") {
return ParseDomain(s)
}
// If there is any non-ASCII, this is certainly not an A-label-only domain.
s = strings.ToLower(s)
for _, c := range s {
if c >= 0x80 {
return Domain{}, fmt.Errorf("%w: underscore and non-ascii not allowed", errUnderscore)
}
}
// Try parsing with underscores replaced with allowed ASCII character.
// If that's not valid, the version with underscore isn't either.
repl := strings.ReplaceAll(s, "_", "a")
d, err := ParseDomain(repl)
if err != nil {
return Domain{}, fmt.Errorf("%w: %v", errUnderscore, err)
}
// If we found an IDNA domain, we're not going to allow it.
if d.Unicode != "" {
return Domain{}, fmt.Errorf("%w: idna domain with underscores not allowed", errUnderscore)
}
// Just to be safe, ensure no unexpected conversions happened.
if d.ASCII != repl {
return Domain{}, fmt.Errorf("%w: underscores and non-canonical names not allowed", errUnderscore)
}
return Domain{ASCII: s}, nil
}
// IsNotFound returns whether an error is an adns.DNSError or net.DNSError with
// IsNotFound set.
//
// IsNotFound means the requested type does not exist for the given domain (a
// nodata or nxdomain response). It doesn't not necessarily mean no other types
// for that name exist.
// nodata or nxdomain response). It doesn't necessarily mean no other types for
// that name exist.
//
// A DNS server can respond to a lookup with an error "nxdomain" to indicate a
// name does not exist (at all), or with a success status with an empty list.
// The Go resolver returns an IsNotFound error for both cases, there is no need
// to explicitly check for zero entries.
// The adns resolver (just like the Go resolver) returns an IsNotFound error for
// both cases, there is no need to explicitly check for zero entries.
func IsNotFound(err error) bool {
var adnsErr *adns.DNSError
var dnsErr *net.DNSError
return err != nil && errors.As(err, &dnsErr) && dnsErr.IsNotFound
return err != nil && (errors.As(err, &adnsErr) && adnsErr.IsNotFound || errors.As(err, &dnsErr) && dnsErr.IsNotFound)
}


@ -6,9 +6,15 @@ import (
)
func TestParseDomain(t *testing.T) {
test := func(s string, exp Domain, expErr error) {
test := func(lax bool, s string, exp Domain, expErr error) {
t.Helper()
dom, err := ParseDomain(s)
var dom Domain
var err error
if lax {
dom, err = ParseDomainLax(s)
} else {
dom, err = ParseDomain(s)
}
if (err == nil) != (expErr == nil) || expErr != nil && !errors.Is(err, expErr) {
t.Fatalf("parse domain %q: err %v, expected %v", s, err, expErr)
}
@ -18,10 +24,15 @@ func TestParseDomain(t *testing.T) {
}
// We rely on normalization of names throughout the code base.
test("xmox.nl", Domain{"xmox.nl", ""}, nil)
test("XMOX.NL", Domain{"xmox.nl", ""}, nil)
test("TEST☺.XMOX.NL", Domain{"xn--test-3o3b.xmox.nl", "test☺.xmox.nl"}, nil)
test("TEST☺.XMOX.NL", Domain{"xn--test-3o3b.xmox.nl", "test☺.xmox.nl"}, nil)
test("ℂᵤⓇℒ。𝐒🄴", Domain{"curl.se", ""}, nil) // https://daniel.haxx.se/blog/2022/12/14/idn-is-crazy/
test("xmox.nl.", Domain{}, errTrailingDot)
test(false, "xmox.nl", Domain{"xmox.nl", ""}, nil)
test(false, "XMOX.NL", Domain{"xmox.nl", ""}, nil)
test(false, "TEST☺.XMOX.NL", Domain{"xn--test-3o3b.xmox.nl", "test☺.xmox.nl"}, nil)
test(false, "TEST☺.XMOX.NL", Domain{"xn--test-3o3b.xmox.nl", "test☺.xmox.nl"}, nil)
test(false, "ℂᵤⓇℒ。𝐒🄴", Domain{"curl.se", ""}, nil) // https://daniel.haxx.se/blog/2022/12/14/idn-is-crazy/
test(false, "xmox.nl.", Domain{}, errTrailingDot)
test(false, "_underscore.xmox.nl", Domain{}, errIDNA)
test(true, "_underscore.xmox.NL", Domain{ASCII: "_underscore.xmox.nl"}, nil)
test(true, "_underscore.☺.xmox.nl", Domain{}, errUnderscore)
test(true, "_underscore.xn--test-3o3b.xmox.nl", Domain{}, errUnderscore)
}

dns/examples_test.go (new file, 36 lines)

@ -0,0 +1,36 @@
package dns_test
import (
"fmt"
"log"
"github.com/mjl-/mox/dns"
)
func ExampleParseDomain() {
// ASCII-only domain.
basic, err := dns.ParseDomain("example.com")
if err != nil {
log.Fatalf("parse domain: %v", err)
}
fmt.Printf("%s\n", basic)
// IDNA domain xn--74h.example.
smile, err := dns.ParseDomain("☺.example")
if err != nil {
log.Fatalf("parse domain: %v", err)
}
fmt.Printf("%s\n", smile)
// ASCII only domain curl.se in surprisingly allowed spelling.
surprising, err := dns.ParseDomain("ℂᵤⓇℒ。𝐒🄴")
if err != nil {
log.Fatalf("parse domain: %v", err)
}
fmt.Printf("%s\n", surprising)
// Output:
// example.com
// ☺.example/xn--74h.example
// curl.se
}
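A companion sketch, not part of the diff, showing the new lax parser next to the strict one; it reuses the imports of the example file above, and the outputs follow from the tests earlier in this diff:

func ExampleParseDomainLax() {
	// Underscores are accepted for interoperability, but only in ASCII-only,
	// non-IDNA names (and not in Pedantic mode).
	lax, err := dns.ParseDomainLax("_underscore.xmox.nl")
	if err != nil {
		log.Fatalf("parse domain: %v", err)
	}
	fmt.Printf("%s\n", lax)

	// The strict parser rejects the same name.
	_, err = dns.ParseDomain("_underscore.xmox.nl")
	fmt.Printf("%v\n", err != nil)

	// Output:
	// _underscore.xmox.nl
	// true
}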


@ -4,153 +4,249 @@ import (
"context"
"fmt"
"net"
"slices"
"github.com/mjl-/adns"
)
// MockResolver is a Resolver used for testing.
// Set DNS records in the fields, which map FQDNs (with trailing dot) to values.
type MockResolver struct {
PTR map[string][]string
A map[string][]string
AAAA map[string][]string
TXT map[string][]string
MX map[string][]*net.MX
CNAME map[string]string
Fail map[Mockreq]struct{}
PTR map[string][]string
A map[string][]string
AAAA map[string][]string
TXT map[string][]string
MX map[string][]*net.MX
TLSA map[string][]adns.TLSA // Keys are e.g. _25._tcp.<host>.
CNAME map[string]string
Fail []string // Records of the form "type name", e.g. "cname localhost." that will return a servfail.
AllAuthentic bool // Default value for authentic in responses. Overridden with Authentic and Inauthentic
Authentic []string // Like Fail, but records that cause the response to be authentic.
Inauthentic []string // Like Authentic, but making response inauthentic.
}
type Mockreq struct {
type mockReq struct {
Type string // E.g. "cname", "txt", "mx", "ptr", etc.
Name string
Name string // Name of request. For TLSA, the full requested DNS name, e.g. _25._tcp.<host>.
}
func (mr mockReq) String() string {
return mr.Type + " " + mr.Name
}
var _ Resolver = MockResolver{}
func (r MockResolver) nxdomain(s string) *net.DNSError {
return &net.DNSError{
func (r MockResolver) result(ctx context.Context, mr mockReq) (string, adns.Result, error) {
result := adns.Result{Authentic: r.AllAuthentic}
if err := ctx.Err(); err != nil {
return "", result, err
}
updateAuthentic := func(mock string) {
if slices.Contains(r.Authentic, mock) {
result.Authentic = true
}
if slices.Contains(r.Inauthentic, mock) {
result.Authentic = false
}
}
for {
if slices.Contains(r.Fail, mr.String()) {
updateAuthentic(mr.String())
return mr.Name, adns.Result{}, r.servfail(mr.Name)
}
cname, ok := r.CNAME[mr.Name]
if !ok {
updateAuthentic(mr.String())
break
}
updateAuthentic("cname " + mr.Name)
if mr.Type == "cname" {
return mr.Name, result, nil
}
mr.Name = cname
}
return mr.Name, result, nil
}
func (r MockResolver) nxdomain(s string) error {
return &adns.DNSError{
Err: "no record",
Name: s,
Server: "localhost",
Server: "mock",
IsNotFound: true,
}
}
func (r MockResolver) servfail(s string) *net.DNSError {
return &net.DNSError{
func (r MockResolver) servfail(s string) error {
return &adns.DNSError{
Err: "temp error",
Name: s,
Server: "localhost",
Server: "mock",
IsTemporary: true,
}
}
func (r MockResolver) LookupCNAME(ctx context.Context, name string) (string, error) {
if _, ok := r.Fail[Mockreq{"cname", name}]; ok {
return "", r.servfail(name)
func (r MockResolver) LookupPort(ctx context.Context, network, service string) (port int, err error) {
if err := ctx.Err(); err != nil {
return 0, err
}
if cname, ok := r.CNAME[name]; ok {
return cname, nil
}
return "", r.nxdomain(name)
return net.LookupPort(network, service)
}
func (r MockResolver) LookupAddr(ctx context.Context, ip string) ([]string, error) {
if _, ok := r.Fail[Mockreq{"ptr", ip}]; ok {
return nil, r.servfail(ip)
func (r MockResolver) LookupCNAME(ctx context.Context, name string) (string, adns.Result, error) {
mr := mockReq{"cname", name}
name, result, err := r.result(ctx, mr)
if err != nil {
return name, result, err
}
cname, ok := r.CNAME[name]
if !ok {
return cname, result, r.nxdomain(name)
}
return cname, result, nil
}
func (r MockResolver) LookupAddr(ctx context.Context, ip string) ([]string, adns.Result, error) {
mr := mockReq{"ptr", ip}
_, result, err := r.result(ctx, mr)
if err != nil {
return nil, result, err
}
l, ok := r.PTR[ip]
if !ok {
return nil, r.nxdomain(ip)
return nil, result, r.nxdomain(ip)
}
return l, nil
return l, result, nil
}
func (r MockResolver) LookupNS(ctx context.Context, name string) ([]*net.NS, error) {
return nil, r.servfail("ns not implemented")
}
func (r MockResolver) LookupPort(ctx context.Context, network, service string) (port int, err error) {
return 0, r.servfail("port not implemented")
}
func (r MockResolver) LookupSRV(ctx context.Context, service, proto, name string) (string, []*net.SRV, error) {
return "", nil, r.servfail("srv not implemented")
}
func (r MockResolver) LookupIPAddr(ctx context.Context, host string) ([]net.IPAddr, error) {
if _, ok := r.Fail[Mockreq{"ipaddr", host}]; ok {
return nil, r.servfail(host)
}
addrs, err := r.LookupHost(ctx, host)
func (r MockResolver) LookupNS(ctx context.Context, name string) ([]*net.NS, adns.Result, error) {
mr := mockReq{"ns", name}
_, result, err := r.result(ctx, mr)
if err != nil {
return nil, err
return nil, result, err
}
return nil, result, r.servfail("ns not implemented")
}
func (r MockResolver) LookupSRV(ctx context.Context, service, proto, name string) (string, []*net.SRV, adns.Result, error) {
xname := fmt.Sprintf("_%s._%s.%s", service, proto, name)
mr := mockReq{"srv", xname}
name, result, err := r.result(ctx, mr)
if err != nil {
return name, nil, result, err
}
return name, nil, result, r.servfail("srv not implemented")
}
func (r MockResolver) LookupIPAddr(ctx context.Context, host string) ([]net.IPAddr, adns.Result, error) {
// todo: make closer to resolver, doing a & aaaa lookups, including their error/(in)secure status.
mr := mockReq{"ipaddr", host}
_, result, err := r.result(ctx, mr)
if err != nil {
return nil, result, err
}
addrs, result1, err := r.LookupHost(ctx, host)
result.Authentic = result.Authentic && result1.Authentic
if err != nil {
return nil, result, err
}
ips := make([]net.IPAddr, len(addrs))
for i, a := range addrs {
ip := net.ParseIP(a)
if ip == nil {
return nil, fmt.Errorf("malformed ip %q", a)
return nil, result, fmt.Errorf("malformed ip %q", a)
}
ips[i] = net.IPAddr{IP: ip}
}
return ips, nil
return ips, result, nil
}
func (r MockResolver) LookupHost(ctx context.Context, host string) (addrs []string, err error) {
if _, ok := r.Fail[Mockreq{"host", host}]; ok {
return nil, r.servfail(host)
func (r MockResolver) LookupHost(ctx context.Context, host string) ([]string, adns.Result, error) {
// todo: make closer to resolver, doing a & aaaa lookups, including their error/(in)secure status.
mr := mockReq{"host", host}
_, result, err := r.result(ctx, mr)
if err != nil {
return nil, result, err
}
var addrs []string
addrs = append(addrs, r.A[host]...)
addrs = append(addrs, r.AAAA[host]...)
if len(addrs) > 0 {
return addrs, nil
if len(addrs) == 0 {
return nil, result, r.nxdomain(host)
}
if cname, ok := r.CNAME[host]; ok {
return []string{cname}, nil
}
return nil, r.nxdomain(host)
return addrs, result, nil
}
func (r MockResolver) LookupIP(ctx context.Context, network, host string) ([]net.IP, error) {
if _, ok := r.Fail[Mockreq{"ip", host}]; ok {
return nil, r.servfail(host)
func (r MockResolver) LookupIP(ctx context.Context, network, host string) ([]net.IP, adns.Result, error) {
mr := mockReq{"ip", host}
name, result, err := r.result(ctx, mr)
if err != nil {
return nil, result, err
}
var ips []net.IP
switch network {
case "ip", "ip4":
for _, ip := range r.A[host] {
for _, ip := range r.A[name] {
ips = append(ips, net.ParseIP(ip))
}
}
switch network {
case "ip", "ip6":
for _, ip := range r.AAAA[host] {
for _, ip := range r.AAAA[name] {
ips = append(ips, net.ParseIP(ip))
}
}
if len(ips) == 0 {
return nil, r.nxdomain(host)
return nil, result, r.nxdomain(host)
}
return ips, nil
return ips, result, nil
}
func (r MockResolver) LookupMX(ctx context.Context, name string) ([]*net.MX, error) {
if _, ok := r.Fail[Mockreq{"mx", name}]; ok {
return nil, r.servfail(name)
func (r MockResolver) LookupMX(ctx context.Context, name string) ([]*net.MX, adns.Result, error) {
mr := mockReq{"mx", name}
name, result, err := r.result(ctx, mr)
if err != nil {
return nil, result, err
}
l, ok := r.MX[name]
if !ok {
return nil, r.nxdomain(name)
return nil, result, r.nxdomain(name)
}
return l, nil
return l, result, nil
}
func (r MockResolver) LookupTXT(ctx context.Context, name string) ([]string, error) {
if _, ok := r.Fail[Mockreq{"txt", name}]; ok {
return nil, r.servfail(name)
func (r MockResolver) LookupTXT(ctx context.Context, name string) ([]string, adns.Result, error) {
mr := mockReq{"txt", name}
name, result, err := r.result(ctx, mr)
if err != nil {
return nil, result, err
}
l, ok := r.TXT[name]
if !ok {
return nil, r.nxdomain(name)
return nil, result, r.nxdomain(name)
}
return l, nil
return l, result, nil
}
func (r MockResolver) LookupTLSA(ctx context.Context, port int, protocol string, host string) ([]adns.TLSA, adns.Result, error) {
var name string
if port == 0 && protocol == "" {
name = host
} else {
name = fmt.Sprintf("_%d._%s.%s", port, protocol, host)
}
mr := mockReq{"tlsa", name}
name, result, err := r.result(ctx, mr)
if err != nil {
return nil, result, err
}
l, ok := r.TLSA[name]
if !ok {
return nil, result, r.nxdomain(name)
}
return l, result, nil
}
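A small sketch of using MockResolver in a test, with hypothetical names; it also shows the intended use of dns.IsNotFound to distinguish "no record" from a configured servfail:

package main

import (
	"context"
	"fmt"

	"github.com/mjl-/mox/dns"
)

func main() {
	ctx := context.Background()
	resolver := dns.MockResolver{
		TXT: map[string][]string{
			"_dmarc.sender.example.": {"v=DMARC1; p=reject"},
		},
		Fail:         []string{"txt _dmarc.broken.example."},
		AllAuthentic: true,
	}

	txts, result, err := resolver.LookupTXT(ctx, "_dmarc.sender.example.")
	fmt.Println(txts, result.Authentic, err) // [v=DMARC1; p=reject] true <nil>

	_, _, err = resolver.LookupTXT(ctx, "_dmarc.absent.example.")
	fmt.Println(dns.IsNotFound(err)) // true: nxdomain/nodata, not a lookup failure

	_, _, err = resolver.LookupTXT(ctx, "_dmarc.broken.example.")
	fmt.Println(dns.IsNotFound(err), err != nil) // false true: configured servfail
}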


@ -3,50 +3,45 @@ package dns
import (
"context"
"errors"
"fmt"
"log/slog"
"net"
"os"
"runtime"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/mjl-/adns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/stub"
)
// todo future: replace with a dnssec capable resolver
// todo future: change to interface that is closer to DNS. 1. expose nxdomain vs success with zero entries: nxdomain means the name does not exist for any dns resource record type, success with zero records means the name exists for other types than the requested type; 2. add ability to not follow cname records when resolving. the net resolver automatically follows cnames for LookupHost, LookupIP, LookupIPAddr. when resolving names found in mx records, we explicitly must not follow cnames. that seems impossible at the moment. 3. when looking up a cname, actually lookup the record? "net" LookupCNAME will return the requested name with no error if there is no CNAME record. because it returns the canonical name.
// todo future: add option to not use anything in the cache, for the admin pages where you check the latest DNS settings, ignoring old cached info.
var xlog = mlog.New("dns")
func init() {
net.DefaultResolver.StrictErrors = true
}
var (
metricLookup = promauto.NewHistogramVec(
prometheus.HistogramOpts{
Name: "mox_dns_lookup_duration_seconds",
Help: "DNS lookups.",
Buckets: []float64{0.001, 0.005, 0.01, 0.05, 0.100, 0.5, 1, 5, 10, 20, 30},
},
[]string{
"pkg",
"type", // Lower-case Resolver method name without leading Lookup.
"result", // ok, nxdomain, temporary, timeout, canceled, error
},
)
MetricLookup stub.HistogramVec = stub.HistogramVecIgnore{}
)
// Resolver is the interface strict resolver implements.
type Resolver interface {
LookupAddr(ctx context.Context, addr string) ([]string, error)
LookupCNAME(ctx context.Context, host string) (string, error) // NOTE: returns an error if no CNAME record is present.
LookupHost(ctx context.Context, host string) (addrs []string, err error)
LookupIP(ctx context.Context, network, host string) ([]net.IP, error)
LookupIPAddr(ctx context.Context, host string) ([]net.IPAddr, error)
LookupMX(ctx context.Context, name string) ([]*net.MX, error)
LookupNS(ctx context.Context, name string) ([]*net.NS, error)
LookupPort(ctx context.Context, network, service string) (port int, err error)
LookupSRV(ctx context.Context, service, proto, name string) (string, []*net.SRV, error)
LookupTXT(ctx context.Context, name string) ([]string, error)
LookupAddr(ctx context.Context, addr string) ([]string, adns.Result, error) // Always returns absolute names, with trailing dot.
LookupCNAME(ctx context.Context, host string) (string, adns.Result, error) // NOTE: returns an error if no CNAME record is present.
LookupHost(ctx context.Context, host string) ([]string, adns.Result, error)
LookupIP(ctx context.Context, network, host string) ([]net.IP, adns.Result, error)
LookupIPAddr(ctx context.Context, host string) ([]net.IPAddr, adns.Result, error)
LookupMX(ctx context.Context, name string) ([]*net.MX, adns.Result, error)
LookupNS(ctx context.Context, name string) ([]*net.NS, adns.Result, error)
LookupSRV(ctx context.Context, service, proto, name string) (string, []*net.SRV, adns.Result, error)
LookupTXT(ctx context.Context, name string) ([]string, adns.Result, error)
LookupTLSA(ctx context.Context, port int, protocol, host string) ([]adns.TLSA, adns.Result, error)
}
// WithPackage sets Pkg on resolver if it is a StrictResolve and does not have a package set yet.
@ -63,8 +58,17 @@ func WithPackage(resolver Resolver, name string) Resolver {
// StrictResolver is a net.Resolver that enforces that DNS names end with a dot,
// preventing "search"-relative lookups.
type StrictResolver struct {
Pkg string // Name of subsystem that is making DNS requests, for metrics.
Resolver *net.Resolver // Where the actual lookups are done. If nil, net.DefaultResolver is used for lookups.
Pkg string // Name of subsystem that is making DNS requests, for metrics.
Resolver *adns.Resolver // Where the actual lookups are done. If nil, adns.DefaultResolver is used for lookups.
Log *slog.Logger
}
func (r StrictResolver) log() mlog.Log {
pkg := r.Pkg
if pkg == "" {
pkg = "dns"
}
return mlog.New(pkg, r.Log)
}
var _ Resolver = StrictResolver{}
@ -73,7 +77,7 @@ var ErrRelativeDNSName = errors.New("dns: host to lookup must be absolute, endin
func metricLookupObserve(pkg, typ string, err error, start time.Time) {
var result string
var dnsErr *net.DNSError
var dnsErr *adns.DNSError
switch {
case err == nil:
result = "ok"
@ -88,7 +92,7 @@ func metricLookupObserve(pkg, typ string, err error, start time.Time) {
default:
result = "error"
}
metricLookup.WithLabelValues(pkg, typ, result).Observe(float64(time.Since(start)) / float64(time.Second))
MetricLookup.ObserveLabels(float64(time.Since(start))/float64(time.Second), pkg, typ, result)
}
func (r StrictResolver) WithPackage(name string) Resolver {
@ -99,37 +103,91 @@ func (r StrictResolver) WithPackage(name string) Resolver {
func (r StrictResolver) resolver() Resolver {
if r.Resolver == nil {
return net.DefaultResolver
return adns.DefaultResolver
}
return r.Resolver
}
func (r StrictResolver) LookupAddr(ctx context.Context, addr string) (resp []string, err error) {
func resolveErrorHint(err *error) {
e := *err
if e == nil {
return
}
dnserr, ok := e.(*adns.DNSError)
if !ok {
return
}
// If the dns server is not running, and it is one of the default/fallback IPs,
// hint at where to look.
if dnserr.IsTemporary && runtime.GOOS == "linux" && (dnserr.Server == "127.0.0.1:53" || dnserr.Server == "[::1]:53") && strings.HasSuffix(dnserr.Err, "connection refused") {
*err = fmt.Errorf("%w (hint: does /etc/resolv.conf point to a running nameserver? in case of systemd-resolved, see systemd-resolved.service(8); better yet, install a proper dnssec-verifying recursive resolver like unbound)", *err)
}
}
func (r StrictResolver) LookupPort(ctx context.Context, network, service string) (resp int, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "port", err, start)
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "port"),
slog.String("network", network),
slog.String("service", service),
slog.Int("resp", resp),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
resp, err = r.resolver().LookupPort(ctx, network, service)
return
}
func (r StrictResolver) LookupAddr(ctx context.Context, addr string) (resp []string, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "addr", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "addr"), mlog.Field("addr", addr), mlog.Field("resp", resp), mlog.Field("duration", time.Since(start)))
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "addr"),
slog.String("addr", addr),
slog.Any("resp", resp),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
resp, err = r.resolver().LookupAddr(ctx, addr)
resp, result, err = r.resolver().LookupAddr(ctx, addr)
// For addresses from /etc/hosts without dot, we add the missing trailing dot.
for i, s := range resp {
if !strings.HasSuffix(s, ".") {
resp[i] = s + "."
}
}
return
}
// LookupCNAME looks up a CNAME. Unlike "net" LookupCNAME, it returns a "not found"
// error if there is no CNAME record.
func (r StrictResolver) LookupCNAME(ctx context.Context, host string) (resp string, err error) {
func (r StrictResolver) LookupCNAME(ctx context.Context, host string) (resp string, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "cname", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "cname"), mlog.Field("host", host), mlog.Field("resp", resp), mlog.Field("duration", time.Since(start)))
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "cname"),
slog.String("host", host),
slog.String("resp", resp),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
if !strings.HasSuffix(host, ".") {
return "", ErrRelativeDNSName
return "", result, ErrRelativeDNSName
}
resp, err = r.resolver().LookupCNAME(ctx, host)
resp, result, err = r.resolver().LookupCNAME(ctx, host)
if err == nil && resp == host {
return "", &net.DNSError{
return "", result, &adns.DNSError{
Err: "no cname record",
Name: host,
Server: "",
@ -138,111 +196,177 @@ func (r StrictResolver) LookupCNAME(ctx context.Context, host string) (resp stri
}
return
}
func (r StrictResolver) LookupHost(ctx context.Context, host string) (resp []string, err error) {
func (r StrictResolver) LookupHost(ctx context.Context, host string) (resp []string, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "host", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "host"), mlog.Field("host", host), mlog.Field("resp", resp), mlog.Field("duration", time.Since(start)))
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "host"),
slog.String("host", host),
slog.Any("resp", resp),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
if !strings.HasSuffix(host, ".") {
return nil, ErrRelativeDNSName
return nil, result, ErrRelativeDNSName
}
resp, err = r.resolver().LookupHost(ctx, host)
resp, result, err = r.resolver().LookupHost(ctx, host)
return
}
func (r StrictResolver) LookupIP(ctx context.Context, network, host string) (resp []net.IP, err error) {
func (r StrictResolver) LookupIP(ctx context.Context, network, host string) (resp []net.IP, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "ip", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "ip"), mlog.Field("network", network), mlog.Field("host", host), mlog.Field("resp", resp), mlog.Field("duration", time.Since(start)))
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "ip"),
slog.String("network", network),
slog.String("host", host),
slog.Any("resp", resp),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
if !strings.HasSuffix(host, ".") {
return nil, ErrRelativeDNSName
return nil, result, ErrRelativeDNSName
}
resp, err = r.resolver().LookupIP(ctx, network, host)
resp, result, err = r.resolver().LookupIP(ctx, network, host)
return
}
func (r StrictResolver) LookupIPAddr(ctx context.Context, host string) (resp []net.IPAddr, err error) {
func (r StrictResolver) LookupIPAddr(ctx context.Context, host string) (resp []net.IPAddr, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "ipaddr", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "ipaddr"), mlog.Field("host", host), mlog.Field("resp", resp), mlog.Field("duration", time.Since(start)))
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "ipaddr"),
slog.String("host", host),
slog.Any("resp", resp),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
if !strings.HasSuffix(host, ".") {
return nil, ErrRelativeDNSName
return nil, result, ErrRelativeDNSName
}
resp, err = r.resolver().LookupIPAddr(ctx, host)
resp, result, err = r.resolver().LookupIPAddr(ctx, host)
return
}
func (r StrictResolver) LookupMX(ctx context.Context, name string) (resp []*net.MX, err error) {
func (r StrictResolver) LookupMX(ctx context.Context, name string) (resp []*net.MX, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "mx", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "mx"), mlog.Field("name", name), mlog.Field("resp", resp), mlog.Field("duration", time.Since(start)))
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "mx"),
slog.String("name", name),
slog.Any("resp", resp),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
if !strings.HasSuffix(name, ".") {
return nil, ErrRelativeDNSName
return nil, result, ErrRelativeDNSName
}
resp, err = r.resolver().LookupMX(ctx, name)
resp, result, err = r.resolver().LookupMX(ctx, name)
return
}
func (r StrictResolver) LookupNS(ctx context.Context, name string) (resp []*net.NS, err error) {
func (r StrictResolver) LookupNS(ctx context.Context, name string) (resp []*net.NS, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "ns", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "ns"), mlog.Field("name", name), mlog.Field("resp", resp), mlog.Field("duration", time.Since(start)))
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "ns"),
slog.String("name", name),
slog.Any("resp", resp),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
if !strings.HasSuffix(name, ".") {
return nil, ErrRelativeDNSName
return nil, result, ErrRelativeDNSName
}
resp, err = r.resolver().LookupNS(ctx, name)
resp, result, err = r.resolver().LookupNS(ctx, name)
return
}
func (r StrictResolver) LookupPort(ctx context.Context, network, service string) (resp int, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "port", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "port"), mlog.Field("network", network), mlog.Field("service", service), mlog.Field("resp", resp), mlog.Field("duration", time.Since(start)))
}()
resp, err = r.resolver().LookupPort(ctx, network, service)
return
}
func (r StrictResolver) LookupSRV(ctx context.Context, service, proto, name string) (resp0 string, resp1 []*net.SRV, err error) {
func (r StrictResolver) LookupSRV(ctx context.Context, service, proto, name string) (resp0 string, resp1 []*net.SRV, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "srv", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "srv"), mlog.Field("service", service), mlog.Field("proto", proto), mlog.Field("name", name), mlog.Field("resp0", resp0), mlog.Field("resp1", resp1), mlog.Field("duration", time.Since(start)))
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "srv"),
slog.String("service", service),
slog.String("proto", proto),
slog.String("name", name),
slog.String("resp0", resp0),
slog.Any("resp1", resp1),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
if !strings.HasSuffix(name, ".") {
return "", nil, ErrRelativeDNSName
return "", nil, result, ErrRelativeDNSName
}
resp0, resp1, err = r.resolver().LookupSRV(ctx, service, proto, name)
resp0, resp1, result, err = r.resolver().LookupSRV(ctx, service, proto, name)
return
}
func (r StrictResolver) LookupTXT(ctx context.Context, name string) (resp []string, err error) {
func (r StrictResolver) LookupTXT(ctx context.Context, name string) (resp []string, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "txt", err, start)
xlog.WithContext(ctx).Debugx("dns lookup result", err, mlog.Field("pkg", r.Pkg), mlog.Field("type", "txt"), mlog.Field("name", name), mlog.Field("resp", resp), mlog.Field("duration", time.Since(start)))
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "txt"),
slog.String("name", name),
slog.Any("resp", resp),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
if !strings.HasSuffix(name, ".") {
return nil, ErrRelativeDNSName
return nil, result, ErrRelativeDNSName
}
resp, err = r.resolver().LookupTXT(ctx, name)
resp, result, err = r.resolver().LookupTXT(ctx, name)
return
}
func (r StrictResolver) LookupTLSA(ctx context.Context, port int, protocol, host string) (resp []adns.TLSA, result adns.Result, err error) {
start := time.Now()
defer func() {
metricLookupObserve(r.Pkg, "tlsa", err, start)
r.log().WithContext(ctx).Debugx("dns lookup result", err,
slog.String("type", "tlsa"),
slog.Int("port", port),
slog.String("protocol", protocol),
slog.String("host", host),
slog.Any("resp", resp),
slog.Bool("authentic", result.Authentic),
slog.Duration("duration", time.Since(start)),
)
}()
defer resolveErrorHint(&err)
if !strings.HasSuffix(host, ".") {
return nil, result, ErrRelativeDNSName
}
resp, result, err = r.resolver().LookupTLSA(ctx, port, protocol, host)
return
}
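The StrictResolver lookups above now also return an adns.Result, whose Authentic field tells whether the answer was DNSSEC-verified. A minimal caller-side sketch (not part of the diff; the Pkg value is just a metrics/logging label and the queried name is only an example) of the changed LookupTXT signature:

	package main

	import (
		"context"
		"log"

		"github.com/mjl-/mox/dns"
	)

	func main() {
		resolver := dns.StrictResolver{Pkg: "example"}
		// StrictResolver only accepts absolute names (with trailing dot),
		// otherwise ErrRelativeDNSName is returned.
		txts, result, err := resolver.LookupTXT(context.Background(), "xmox.nl.")
		if err != nil {
			log.Fatalf("lookup txt: %v", err)
		}
		log.Printf("txt records %v, dnssec-authentic %v", txts, result.Authentic)
	}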

View File

@ -1,39 +1,39 @@
// Package dnsbl implements DNS block lists (RFC 5782), for checking incoming messages from sources without reputation.
//
// A DNS block list contains IP addresses that should be blocked. The DNSBL is
// queried using DNS "A" lookups. The DNSBL starts at a "zone", e.g.
// "dnsbl.example". To look up whether an IP address is listed, a DNS name is
// composed: For 10.11.12.13, that name would be "13.12.11.10.dnsbl.example". If
// the lookup returns "record does not exist", the IP is not listed. If an IP
// address is returned, the IP is listed. If an IP is listed, an additional TXT
// lookup is done for more information about the block. IPv6 addresses are also
// looked up with a DNS "A" lookup of a name similar to an IPv4 address, but with
// 4-bit hexadecimal dot-separated characters, in reverse.
//
// The health of a DNSBL "zone" can be checked through a lookup of 127.0.0.1
// (must not be present) and 127.0.0.2 (must be present).
package dnsbl
import (
"context"
"errors"
"fmt"
"log/slog"
"net"
"strconv"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/stub"
)
var xlog = mlog.New("dnsbl")
var (
metricLookup = promauto.NewHistogramVec(
prometheus.HistogramOpts{
Name: "mox_dnsbl_lookup_duration_seconds",
Help: "DNSBL lookup",
Buckets: []float64{0.001, 0.005, 0.01, 0.05, 0.100, 0.5, 1, 5, 10, 20},
},
[]string{
"zone",
"status",
},
)
MetricLookup stub.HistogramVec = stub.HistogramVecIgnore{}
)
var ErrDNS = errors.New("dnsbl: dns error")
var ErrDNS = errors.New("dnsbl: dns error") // Temporary error.
// Status is the result of a DNSBL lookup.
type Status string
@ -45,12 +45,17 @@ var (
)
// Lookup checks if "ip" occurs in the DNS block list "zone" (e.g. dnsbl.example.org).
func Lookup(ctx context.Context, resolver dns.Resolver, zone dns.Domain, ip net.IP) (rstatus Status, rexplanation string, rerr error) {
log := xlog.WithContext(ctx)
func Lookup(ctx context.Context, elog *slog.Logger, resolver dns.Resolver, zone dns.Domain, ip net.IP) (rstatus Status, rexplanation string, rerr error) {
log := mlog.New("dnsbl", elog)
start := time.Now()
defer func() {
metricLookup.WithLabelValues(zone.Name(), string(rstatus)).Observe(float64(time.Since(start)) / float64(time.Second))
log.Debugx("dnsbl lookup result", rerr, mlog.Field("zone", zone), mlog.Field("ip", ip), mlog.Field("status", rstatus), mlog.Field("explanation", rexplanation), mlog.Field("duration", time.Since(start)))
MetricLookup.ObserveLabels(float64(time.Since(start))/float64(time.Second), zone.Name(), string(rstatus))
log.Debugx("dnsbl lookup result", rerr,
slog.Any("zone", zone),
slog.Any("ip", ip),
slog.Any("status", rstatus),
slog.String("explanation", rexplanation),
slog.Duration("duration", time.Since(start)))
}()
b := &strings.Builder{}
@ -82,18 +87,18 @@ func Lookup(ctx context.Context, resolver dns.Resolver, zone dns.Domain, ip net.
addr := b.String()
// ../rfc/5782:175
_, err := dns.WithPackage(resolver, "dnsbl").LookupIP(ctx, "ip4", addr)
_, _, err := dns.WithPackage(resolver, "dnsbl").LookupIP(ctx, "ip4", addr)
if dns.IsNotFound(err) {
return StatusPass, "", nil
} else if err != nil {
return StatusTemperr, "", fmt.Errorf("%w: %s", ErrDNS, err)
}
txts, err := dns.WithPackage(resolver, "dnsbl").LookupTXT(ctx, addr)
txts, _, err := dns.WithPackage(resolver, "dnsbl").LookupTXT(ctx, addr)
if dns.IsNotFound(err) {
return StatusFail, "", nil
} else if err != nil {
log.Debugx("looking up txt record from dnsbl", err, mlog.Field("addr", addr))
log.Debugx("looking up txt record from dnsbl", err, slog.String("addr", addr))
return StatusFail, "", nil
}
return StatusFail, strings.Join(txts, "; "), nil
@ -104,16 +109,16 @@ func Lookup(ctx context.Context, resolver dns.Resolver, zone dns.Domain, ip net.
// Users of a DNSBL should periodically check if the DNSBL is still operating
// properly.
// For temporary errors, ErrDNS is returned.
func CheckHealth(ctx context.Context, resolver dns.Resolver, zone dns.Domain) (rerr error) {
log := xlog.WithContext(ctx)
func CheckHealth(ctx context.Context, elog *slog.Logger, resolver dns.Resolver, zone dns.Domain) (rerr error) {
log := mlog.New("dnsbl", elog)
start := time.Now()
defer func() {
log.Debugx("dnsbl healthcheck result", rerr, mlog.Field("zone", zone), mlog.Field("duration", time.Since(start)))
log.Debugx("dnsbl healthcheck result", rerr, slog.Any("zone", zone), slog.Duration("duration", time.Since(start)))
}()
// ../rfc/5782:355
status1, _, err1 := Lookup(ctx, resolver, zone, net.IPv4(127, 0, 0, 1))
status2, _, err2 := Lookup(ctx, resolver, zone, net.IPv4(127, 0, 0, 2))
status1, _, err1 := Lookup(ctx, log.Logger, resolver, zone, net.IPv4(127, 0, 0, 1))
status2, _, err2 := Lookup(ctx, log.Logger, resolver, zone, net.IPv4(127, 0, 0, 2))
if status1 == StatusPass && status2 == StatusFail {
return nil
} else if status1 == StatusFail {
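As a minimal illustration (not part of the diff; the helper name is hypothetical) of the query-name composition described in the package documentation above, reversing the IPv4 octets under the zone:

	package main

	import (
		"fmt"
		"net"
	)

	// dnsblName composes the DNSBL query name for an IPv4 address: octets in
	// reverse order, followed by the zone. For 10.11.12.13 and zone
	// dnsbl.example this yields "13.12.11.10.dnsbl.example.".
	func dnsblName(ip net.IP, zone string) string {
		ip4 := ip.To4()
		return fmt.Sprintf("%d.%d.%d.%d.%s.", ip4[3], ip4[2], ip4[1], ip4[0], zone)
	}

	func main() {
		fmt.Println(dnsblName(net.ParseIP("10.11.12.13"), "dnsbl.example"))
		// IPv6 addresses are composed similarly, from reversed 4-bit
		// hexadecimal nibbles, as described in the package documentation.
	}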

View File

@ -6,10 +6,12 @@ import (
"testing"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
)
func TestDNSBL(t *testing.T) {
ctx := context.Background()
log := mlog.New("dnsbl", nil)
resolver := dns.MockResolver{
A: map[string][]string{
@ -23,7 +25,7 @@ func TestDNSBL(t *testing.T) {
},
}
if status, expl, err := Lookup(ctx, resolver, dns.Domain{ASCII: "example.com"}, net.ParseIP("10.0.0.1")); err != nil {
if status, expl, err := Lookup(ctx, log.Logger, resolver, dns.Domain{ASCII: "example.com"}, net.ParseIP("10.0.0.1")); err != nil {
t.Fatalf("lookup: %v", err)
} else if status != StatusFail {
t.Fatalf("lookup, got status %v, expected fail", status)
@ -31,7 +33,7 @@ func TestDNSBL(t *testing.T) {
t.Fatalf("lookup, got explanation %q", expl)
}
if status, expl, err := Lookup(ctx, resolver, dns.Domain{ASCII: "example.com"}, net.ParseIP("2001:db8:1:2:3:4:567:89ab")); err != nil {
if status, expl, err := Lookup(ctx, log.Logger, resolver, dns.Domain{ASCII: "example.com"}, net.ParseIP("2001:db8:1:2:3:4:567:89ab")); err != nil {
t.Fatalf("lookup: %v", err)
} else if status != StatusFail {
t.Fatalf("lookup, got status %v, expected fail", status)
@ -39,17 +41,17 @@ func TestDNSBL(t *testing.T) {
t.Fatalf("lookup, got explanation %q", expl)
}
if status, _, err := Lookup(ctx, resolver, dns.Domain{ASCII: "example.com"}, net.ParseIP("10.0.0.2")); err != nil {
if status, _, err := Lookup(ctx, log.Logger, resolver, dns.Domain{ASCII: "example.com"}, net.ParseIP("10.0.0.2")); err != nil {
t.Fatalf("lookup: %v", err)
} else if status != StatusPass {
t.Fatalf("lookup, got status %v, expected pass", status)
}
// ../rfc/5782:357
if err := CheckHealth(ctx, resolver, dns.Domain{ASCII: "example.com"}); err != nil {
if err := CheckHealth(ctx, log.Logger, resolver, dns.Domain{ASCII: "example.com"}); err != nil {
t.Fatalf("dnsbl not healthy: %v", err)
}
if err := CheckHealth(ctx, resolver, dns.Domain{ASCII: "example.org"}); err == nil {
if err := CheckHealth(ctx, log.Logger, resolver, dns.Domain{ASCII: "example.org"}); err == nil {
t.Fatalf("bad dnsbl is healthy")
}
@ -58,7 +60,7 @@ func TestDNSBL(t *testing.T) {
"1.0.0.127.example.com.": {"127.0.0.2"}, // Should not be present in healthy dnsbl.
},
}
if err := CheckHealth(ctx, unhealthyResolver, dns.Domain{ASCII: "example.com"}); err == nil {
if err := CheckHealth(ctx, log.Logger, unhealthyResolver, dns.Domain{ASCII: "example.com"}); err == nil {
t.Fatalf("bad dnsbl is healthy")
}
}

30
dnsbl/examples_test.go Normal file
View File

@ -0,0 +1,30 @@
package dnsbl_test
import (
"context"
"log"
"log/slog"
"net"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/dnsbl"
)
func ExampleLookup() {
ctx := context.Background()
resolver := dns.StrictResolver{}
// Lookup if ip 127.0.0.2 is in spamhaus blocklist at zone sbl.spamhaus.org.
status, explanation, err := dnsbl.Lookup(ctx, slog.Default(), resolver, dns.Domain{ASCII: "sbl.spamhaus.org"}, net.ParseIP("127.0.0.2"))
if err != nil {
log.Fatalf("dnsbl lookup: %v", err)
}
switch status {
case dnsbl.StatusTemperr:
log.Printf("dnsbl lookup, temporary dns error: %v", err)
case dnsbl.StatusPass:
log.Printf("dnsbl lookup, ip not listed")
case dnsbl.StatusFail:
log.Printf("dnsbl lookup, ip listed: %s", explanation)
}
}

1213
doc.go

File diff suppressed because it is too large

View File

@ -1,16 +1,15 @@
version: '3.7'
services:
mox:
build:
context: .
dockerfile: Dockerfile.moximaptest
volumes:
- ./testdata/imaptest/config:/mox/config
- ./testdata/imaptest/data:/mox/data
- ./testdata/imaptest/imaptest.mbox:/mox/imaptest.mbox
- ./testdata/imaptest/config:/mox/config:z
- ./testdata/imaptest/data:/mox/data:z
- ./testdata/imaptest/imaptest.mbox:/mox/imaptest.mbox:z
working_dir: /mox
tty: true # For job control with set -m.
command: sh -c 'set -m; mox serve & sleep 1; echo testtest | mox setaccountpassword mjl@mox.example; fg'
command: sh -c 'set -m; mox serve & sleep 1; echo testtest | mox setaccountpassword mjl; fg'
healthcheck:
test: netstat -nlt | grep ':1143 '
interval: 1s
@ -24,7 +23,7 @@ services:
command: host=mox port=1143 'user=mjl@mox.example' pass=testtest mbox=/imaptest/imaptest.mbox
working_dir: /imaptest
volumes:
- ./testdata/imaptest:/imaptest
- ./testdata/imaptest:/imaptest:z
depends_on:
mox:
condition: service_healthy

View File

@ -1,18 +1,47 @@
version: '3.7'
services:
moxmail:
# todo: understand why hostname and/or domainname don't have any influence on the reverse dns set up by docker, requiring us to use our own /etc/resolv.conf...
hostname: moxmail1.mox1.example
domainname: mox1.example
build:
dockerfile: Dockerfile.moxmail
context: testdata/integration
# We run integration_test.go from this container, it connects to the other mox instances.
test:
hostname: test.mox1.example
image: mox_integration_test
# We add our cfssl-generated CA (which is in the repo) and acme pebble CA
# (generated each time pebble starts) to the list of trusted CA's, so the TLS
# dials in integration_test.go succeed.
command: ["sh", "-c", "set -ex; cat /integration/tmp-pebble-ca.pem /integration/tls/ca.pem >>/etc/ssl/certs/ca-certificates.crt; go test -tags integration"]
volumes:
- ./.go:/.go
- ./testdata/integration/resolv.conf:/etc/resolv.conf
- .:/mox
- ./.go:/.go:z
- ./testdata/integration/resolv.conf:/etc/resolv.conf:z
- ./testdata/integration:/integration:z
- ./testdata/integration/moxsubmit.conf:/etc/moxsubmit.conf:z
- .:/mox:z
environment:
GOCACHE: /.go/.cache/go-build
depends_on:
dns:
condition: service_healthy
# moxmail2 depends on moxacmepebble, we connect to both.
moxmail2:
condition: service_healthy
postfixmail:
condition: service_healthy
localserve:
condition: service_healthy
moxacmepebblealpn:
condition: service_healthy
networks:
mailnet1:
ipv4_address: 172.28.1.50
# First mox instance that uses ACME with pebble.
moxacmepebble:
hostname: moxacmepebble.mox1.example
domainname: mox1.example
image: mox_integration_moxmail
environment:
MOX_UID: "${MOX_UID}"
command: ["sh", "-c", "/integration/moxacmepebble.sh"]
volumes:
- ./testdata/integration/resolv.conf:/etc/resolv.conf:z
- ./testdata/integration:/integration:z
healthcheck:
test: netstat -nlt | grep ':25 '
interval: 1s
@ -21,15 +50,87 @@ services:
depends_on:
dns:
condition: service_healthy
postfixmail:
acmepebble:
condition: service_healthy
networks:
mailnet1:
ipv4_address: 172.28.1.10
mailnet2:
ipv4_address: 172.28.2.10
mailnet3:
ipv4_address: 172.28.3.10
# Second mox instance, with TLS cert/keys from files.
moxmail2:
hostname: moxmail2.mox2.example
domainname: mox2.example
image: mox_integration_moxmail
environment:
MOX_UID: "${MOX_UID}"
command: ["sh", "-c", "/integration/moxmail2.sh"]
volumes:
- ./testdata/integration/resolv.conf:/etc/resolv.conf:z
- ./testdata/integration:/integration:z
healthcheck:
test: netstat -nlt | grep ':25 '
interval: 1s
timeout: 1s
retries: 10
depends_on:
dns:
condition: service_healthy
acmepebble:
condition: service_healthy
# moxacmepebble creates tmp-pebble-ca.pem, needed by moxmail2 to trust the certificates offered by moxacmepebble.
moxacmepebble:
condition: service_healthy
networks:
mailnet1:
ipv4_address: 172.28.1.20
# Third mox instance that uses ACME with pebble and has ALPN enabled.
moxacmepebblealpn:
hostname: moxacmepebblealpn.mox1.example
domainname: mox1.example
image: mox_integration_moxmail
environment:
MOX_UID: "${MOX_UID}"
command: ["sh", "-c", "/integration/moxacmepebblealpn.sh"]
volumes:
- ./testdata/integration/resolv.conf:/etc/resolv.conf:z
- ./testdata/integration:/integration:z
healthcheck:
test: netstat -nlt | grep ':25 '
interval: 1s
timeout: 1s
retries: 10
depends_on:
dns:
condition: service_healthy
acmepebble:
condition: service_healthy
networks:
mailnet1:
ipv4_address: 172.28.1.80
localserve:
hostname: localserve.mox1.example
domainname: mox1.example
image: mox_integration_moxmail
command: ["sh", "-c", "set -e; chmod o+r /etc/resolv.conf; mox -checkconsistency localserve -ip 172.28.1.60"]
volumes:
- ./.go:/.go:z
- ./testdata/integration/resolv.conf:/etc/resolv.conf:z
- .:/mox:z
environment:
GOCACHE: /.go/.cache/go-build
healthcheck:
test: netstat -nlt | grep ':1025 '
interval: 1s
timeout: 1s
retries: 10
depends_on:
dns:
condition: service_healthy
networks:
mailnet1:
ipv4_address: 172.28.1.60
postfixmail:
hostname: postfixmail.postfix.example
@ -39,8 +140,8 @@ services:
context: testdata/integration
volumes:
# todo: figure out how to mount files with a uid that the process in the container can read...
- ./testdata/integration/resolv.conf:/etc/resolv.conf
command: ["sh", "-c", "set -e; chmod o+r /etc/resolv.conf; (echo 'maillog_file = /dev/stdout'; echo 'mydestination = $$myhostname, localhost.$$mydomain, localhost, $$mydomain') >>/etc/postfix/main.cf; echo 'root: moxtest1@mox1.example' >>/etc/postfix/aliases; newaliases; postfix start-fg"]
- ./testdata/integration/resolv.conf:/etc/resolv.conf:z
command: ["sh", "-c", "set -e; chmod o+r /etc/resolv.conf; (echo 'maillog_file = /dev/stdout'; echo 'mydestination = $$myhostname, localhost.$$mydomain, localhost, $$mydomain'; echo 'smtp_tls_security_level = may') >>/etc/postfix/main.cf; echo 'root: postfix@mox1.example' >>/etc/postfix/aliases; newaliases; postfix start-fg"]
healthcheck:
test: netstat -nlt | grep ':25 '
interval: 1s
@ -51,7 +152,7 @@ services:
condition: service_healthy
networks:
mailnet1:
ipv4_address: 172.28.1.20
ipv4_address: 172.28.1.70
dns:
hostname: dns.example
@ -60,9 +161,11 @@ services:
# todo: figure out how to build from dockerfile with empty context without creating empty dirs in file system.
context: testdata/integration
volumes:
- ./testdata/integration/resolv.conf:/etc/resolv.conf
- ./testdata/integration:/integration
command: ["sh", "-c", "set -e; chmod o+r /etc/resolv.conf; install -m 640 -o unbound /integration/unbound.conf /integration/*.zone /etc/unbound/; unbound -d -p -v"]
- ./testdata/integration/resolv.conf:/etc/resolv.conf:z
- ./testdata/integration:/integration:z
# We start with a base example.zone, but moxacmepebble appends its records,
# followed by moxmail2. They restart unbound after appending records.
command: ["sh", "-c", "set -ex; ls -l /etc/resolv.conf; chmod o+r /etc/resolv.conf; install -m 640 -o unbound /integration/unbound.conf /etc/unbound/; chmod 755 /integration; chmod 644 /integration/*.zone; cp /integration/example.zone /integration/example-integration.zone; ls -ld /integration /integration/reverse.zone; unbound -d -p -v"]
healthcheck:
test: netstat -nlu | grep '172.28.1.30:53 '
interval: 1s
@ -72,6 +175,31 @@ services:
mailnet1:
ipv4_address: 172.28.1.30
# pebble is a small acme server useful for testing. It creates a new CA
# certificate each time it starts, so we go through some trouble to configure the
# certificate in moxacmepebble and moxmail2.
acmepebble:
hostname: acmepebble.example
image: docker.io/letsencrypt/pebble:v2.3.1@sha256:fc5a537bf8fbc7cc63aa24ec3142283aa9b6ba54529f86eb8ff31fbde7c5b258
volumes:
- ./testdata/integration/resolv.conf:/etc/resolv.conf:z
- ./testdata/integration:/integration:z
command: ["sh", "-c", "set -ex; mount; ls -l /etc/resolv.conf; chmod o+r /etc/resolv.conf; pebble -config /integration/pebble-config.json"]
ports:
- 14000:14000 # ACME port
- 15000:15000 # Management port
healthcheck:
test: netstat -nlt | grep ':14000 '
interval: 1s
timeout: 1s
retries: 10
depends_on:
dns:
condition: service_healthy
networks:
mailnet1:
ipv4_address: 172.28.1.40
networks:
mailnet1:
driver: bridge
@ -79,15 +207,3 @@ networks:
driver: default
config:
- subnet: "172.28.1.0/24"
mailnet2:
driver: bridge
ipam:
driver: default
config:
- subnet: "172.28.2.0/24"
mailnet3:
driver: bridge
ipam:
driver: default
config:
- subnet: "172.28.3.0/24"

View File

@ -10,11 +10,26 @@
# After following the quickstart instructions you can start mox:
#
# docker-compose up
#
#
# If you want to run "mox localserve", you could start it like this:
#
# docker run \
# -p 127.0.0.1:25:1025 \
# -p 127.0.0.1:465:1465 \
# -p 127.0.0.1:587:1587 \
# -p 127.0.0.1:993:1993 \
# -p 127.0.0.1:143:1143 \
# -p 127.0.0.1:443:1443 \
# -p 127.0.0.1:80:1080 \
# r.xmox.nl/mox:latest mox localserve -ip 0.0.0.0
#
# The -ip flag ensures connections to the published ports make it to mox, and it
# prevents listening on ::1 (IPv6 is not enabled in docker by default).
version: '3.7'
services:
mox:
# Replace "latest" with the version you want to run, see https://r.xmox.nl/repo/mox/.
# Replace "latest" with the version you want to run, see https://r.xmox.nl/r/mox/.
# Include the @sha256:... digest to ensure you get the listed image.
image: r.xmox.nl/mox:latest
environment:
@ -23,11 +38,11 @@ services:
# machine, and the IPs of incoming connections for spam filtering.
network_mode: 'host'
volumes:
- ./config:/mox/config
- ./data:/mox/data
- ./config:/mox/config:z
- ./data:/mox/data:z
# web is optional but recommended to bind in, useful for serving static files with
# the webserver.
- ./web:/mox/web
- ./web:/mox/web:z
working_dir: /mox
restart: on-failure
healthcheck:

View File

@ -5,21 +5,17 @@ package dsn
import (
"bufio"
"bytes"
"context"
"encoding/base64"
"errors"
"fmt"
"io"
"mime/multipart"
"net/textproto"
"strconv"
"strings"
"time"
"github.com/mjl-/mox/dkim"
"github.com/mjl-/mox/message"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/smtp"
)
@ -45,6 +41,18 @@ type Message struct {
// Message subject header, e.g. describing mail delivery failure.
Subject string
MessageID string
// References header, with Message-ID of original message this DSN is about. So
// mail user-agents will thread the DSN with the original message.
References string
// For messages submitted with the FUTURERELEASE SMTP extension. Value is either "for;"
// plus original interval in seconds or "until;" plus original UTC RFC3339
// date-time.
FutureReleaseRequest string
// ../rfc/4865:315
// Human-readable text explaining the failure. Line endings should be
// bare newlines, not \r\n. They are converted to \r\n when composing.
TextBody string
@ -91,9 +99,10 @@ type Recipient struct {
Action Action
// Enhanced status code. First digit indicates permanent or temporary
// error. If the string contains more than just a status, that
// additional text is added as comment when composing a DSN.
// error.
Status string
// For additional details, included in comment.
StatusComment string
// Optional fields.
// Original intended recipient of message. Used with the DSN extensions ORCPT
@ -105,10 +114,10 @@ type Recipient struct {
// deliveries.
RemoteMTA NameIP
// If RemoteMTA is present, DiagnosticCode is from remote. When
// creating a DSN, additional text in the string will be added to the
// DSN as comment.
DiagnosticCode string
// DiagnosticCodeSMTP are the full SMTP response lines, space separated. The marshaled
// form starts with "smtp; ", this value does not.
DiagnosticCodeSMTP string
LastAttemptDate time.Time
FinalLogID string
@ -126,8 +135,8 @@ type Recipient struct {
// supports smtputf8. This influences the message media (sub)types used for the
// DSN.
//
// DKIM signatures are added if DKIM signing is configured for the "from" domain.
func (m *Message) Compose(log *mlog.Log, smtputf8 bool) ([]byte, error) {
// Callers may want to add DKIM-Signature headers.
func (m *Message) Compose(log mlog.Log, smtputf8 bool) ([]byte, error) {
// ../rfc/3462:119
// ../rfc/3464:377
// We'll make a multipart/report with 2 or 3 parts:
@ -158,7 +167,13 @@ func (m *Message) Compose(log *mlog.Log, smtputf8 bool) ([]byte, error) {
header("From", fmt.Sprintf("<%s>", m.From.XString(smtputf8))) // todo: would be good to have a local ascii-only name for this address.
header("To", fmt.Sprintf("<%s>", m.To.XString(smtputf8))) // todo: we could just leave this out if it has utf-8 and remote does not support utf-8.
header("Subject", m.Subject)
header("Message-Id", fmt.Sprintf("<%s>", mox.MessageIDGen(smtputf8)))
if m.MessageID == "" {
return nil, fmt.Errorf("missing message-id")
}
header("Message-Id", fmt.Sprintf("<%s>", m.MessageID))
if m.References != "" {
header("References", m.References)
}
header("Date", time.Now().Format(message.RFC5322Z))
header("MIME-Version", "1.0")
mp := multipart.NewWriter(msgw)
@ -221,6 +236,10 @@ func (m *Message) Compose(log *mlog.Log, smtputf8 bool) ([]byte, error) {
status("Received-From-MTA", fmt.Sprintf("dns;%s (%s)", m.ReceivedFromMTA.Name, smtp.AddressLiteral(m.ReceivedFromMTA.ConnIP)))
}
status("Arrival-Date", m.ArrivalDate.Format(message.RFC5322Z)) // ../rfc/3464:758
if m.FutureReleaseRequest != "" {
// ../rfc/4865:320
status("Future-Release-Request", m.FutureReleaseRequest)
}
// Then per-recipient fields. ../rfc/3464:769
// todo: should also handle other address types. at least recognize "unknown". Probably just store this field. ../rfc/3464:819
@ -253,26 +272,23 @@ func (m *Message) Compose(log *mlog.Log, smtputf8 bool) ([]byte, error) {
st = "2.0.0"
}
}
var rest string
st, rest = codeLine(st)
statusLine := st
if rest != "" {
statusLine += " (" + rest + ")"
if r.StatusComment != "" {
statusLine += " (" + r.StatusComment + ")"
}
status("Status", statusLine) // ../rfc/3464:975
if !r.RemoteMTA.IsZero() {
// ../rfc/3464:1015
status("Remote-MTA", fmt.Sprintf("dns;%s (%s)", r.RemoteMTA.Name, smtp.AddressLiteral(r.RemoteMTA.IP)))
s := "dns;" + r.RemoteMTA.Name
if len(r.RemoteMTA.IP) > 0 {
s += " (" + smtp.AddressLiteral(r.RemoteMTA.IP) + ")"
}
status("Remote-MTA", s)
}
// Presence of Diagnostic-Code indicates the code is from Remote-MTA. ../rfc/3464:1053
if r.DiagnosticCode != "" {
diagCode, rest := codeLine(r.DiagnosticCode)
diagLine := diagCode
if rest != "" {
diagLine += " (" + rest + ")"
}
// ../rfc/6533:589
status("Diagnostic-Code", "smtp; "+diagLine)
if r.DiagnosticCodeSMTP != "" {
// ../rfc/3461:1342 ../rfc/6533:589
status("Diagnostic-Code", "smtp; "+r.DiagnosticCodeSMTP)
}
if !r.LastAttemptDate.IsZero() {
status("Last-Attempt-Date", r.LastAttemptDate.Format(message.RFC5322Z)) // ../rfc/3464:1076
@ -295,10 +311,8 @@ func (m *Message) Compose(log *mlog.Log, smtputf8 bool) ([]byte, error) {
headers = m.Original
} else if err != nil {
return nil, err
} else {
// This is a whole message. We still only include the headers.
// todo: include the whole body.
}
// Else, this is a whole message. We still only include the headers. todo: include the whole body.
origHdr := textproto.MIMEHeader{}
if smtputf8 {
@ -326,10 +340,7 @@ func (m *Message) Compose(log *mlog.Log, smtputf8 bool) ([]byte, error) {
data := base64.StdEncoding.EncodeToString(headers)
for len(data) > 0 {
line := data
n := len(line)
if n > 78 {
n = 78
}
n := min(len(line), 76) // ../rfc/2045:1372
line, data = data[:n], data[n:]
if _, err := origp.Write([]byte(line + "\r\n")); err != nil {
return nil, err
@ -351,17 +362,6 @@ func (m *Message) Compose(log *mlog.Log, smtputf8 bool) ([]byte, error) {
}
data := msgw.w.Bytes()
fd := m.From.IPDomain.Domain
confDom, _ := mox.Conf.Domain(fd)
if len(confDom.DKIM.Sign) > 0 {
if dkimHeaders, err := dkim.Sign(context.Background(), m.From.Localpart, fd, confDom.DKIM, smtputf8, bytes.NewReader(data)); err != nil {
log.Errorx("dsn: dkim sign for domain, returning unsigned dsn", err, mlog.Field("domain", fd))
} else {
data = append([]byte(dkimHeaders), data...)
}
}
return data, nil
}
@ -378,34 +378,3 @@ func (w *errWriter) Write(buf []byte) (int, error) {
w.err = err
return n, err
}
// split a line into enhanced status code and rest.
func codeLine(s string) (string, string) {
t := strings.SplitN(s, " ", 2)
l := strings.Split(t[0], ".")
if len(l) != 3 {
return "", s
}
for i, e := range l {
_, err := strconv.ParseInt(e, 10, 32)
if err != nil {
return "", s
}
if i == 0 && len(e) != 1 {
return "", s
}
}
var rest string
if len(t) == 2 {
rest = t[1]
}
return t[0], rest
}
// HasCode returns whether line starts with an enhanced SMTP status code.
func HasCode(line string) bool {
// ../rfc/3464:986
ecode, _ := codeLine(line)
return ecode != ""
}
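Since Compose no longer does DKIM signing itself and now requires a Message-ID, a caller provides the Message-ID up front and may add DKIM-Signature headers to the result afterwards. A minimal sketch (hypothetical values, not part of the diff) of composing a failure DSN with the changed API:

	package main

	import (
		"log"
		"net"
		"time"

		"github.com/mjl-/mox/dns"
		"github.com/mjl-/mox/dsn"
		"github.com/mjl-/mox/mlog"
		"github.com/mjl-/mox/smtp"
	)

	func main() {
		local := dns.IPDomain{Domain: dns.Domain{ASCII: "mox.example"}}
		remote := dns.IPDomain{Domain: dns.Domain{ASCII: "remote.example"}}
		m := dsn.Message{
			From:      smtp.Path{Localpart: "postmaster", IPDomain: local},
			To:        smtp.Path{Localpart: "mjl", IPDomain: remote},
			Subject:   "mail delivery failure",
			MessageID: "unique@mox.example", // Now required, Compose returns an error without it.
			TextBody:  "delivery failed\n",

			ReportingMTA:    "mox.example",
			ReceivedFromMTA: smtp.Ehlo{Name: remote, ConnIP: net.ParseIP("10.10.10.10")},
			ArrivalDate:     time.Now(),

			Recipients: []dsn.Recipient{
				{
					FinalRecipient: smtp.Path{Localpart: "mjl", IPDomain: remote},
					Action:         dsn.Failed,
					Status:         "5.0.0",
				},
			},
		}
		buf, err := m.Compose(mlog.New("dsn", nil), false)
		if err != nil {
			log.Fatalf("composing dsn: %v", err)
		}
		// The caller can now add DKIM-Signature headers to buf before queueing it.
		_ = buf
	}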

View File

@ -2,7 +2,6 @@ package dsn
import (
"bytes"
"context"
"fmt"
"io"
"net"
@ -11,14 +10,14 @@ import (
"testing"
"time"
"github.com/mjl-/mox/dkim"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/message"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/smtp"
)
var pkglog = mlog.New("dsn", nil)
func xparseDomain(s string) dns.Domain {
d, err := dns.ParseDomain(s)
if err != nil {
@ -33,7 +32,7 @@ func xparseIPDomain(s string) dns.IPDomain {
func tparseMessage(t *testing.T, data []byte, nparts int) (*Message, *message.Part) {
t.Helper()
m, p, err := Parse(bytes.NewReader(data))
m, p, err := Parse(pkglog.Logger, bytes.NewReader(data))
if err != nil {
t.Fatalf("parsing dsn: %v", err)
}
@ -51,8 +50,8 @@ func tcheckType(t *testing.T, p *message.Part, mt, mst, cte string) {
if !strings.EqualFold(p.MediaSubType, mst) {
t.Fatalf("got mediasubtype %q, expected %q", p.MediaSubType, mst)
}
if !strings.EqualFold(p.ContentTransferEncoding, cte) {
t.Fatalf("got content-transfer-encoding %q, expected %q", p.ContentTransferEncoding, cte)
if !(cte == "" && p.ContentTransferEncoding == nil || cte != "" && p.ContentTransferEncoding != nil && strings.EqualFold(cte, *p.ContentTransferEncoding)) {
t.Fatalf("got content-transfer-encoding %v, expected %v", p.ContentTransferEncoding, cte)
}
}
@ -72,7 +71,7 @@ func tcompareReader(t *testing.T, r io.Reader, exp []byte) {
}
func TestDSN(t *testing.T) {
log := mlog.New("dsn")
log := mlog.New("dsn", nil)
now := time.Now()
@ -80,14 +79,16 @@ func TestDSN(t *testing.T) {
m := Message{
SMTPUTF8: false,
From: smtp.Path{Localpart: "postmaster", IPDomain: xparseIPDomain("mox.example")},
To: smtp.Path{Localpart: "mjl", IPDomain: xparseIPDomain("remote.example")},
Subject: "dsn",
TextBody: "delivery failure\n",
From: smtp.Path{Localpart: "postmaster", IPDomain: xparseIPDomain("mox.example")},
To: smtp.Path{Localpart: "mjl", IPDomain: xparseIPDomain("remote.example")},
Subject: "dsn",
MessageID: "test@localhost",
TextBody: "delivery failure\n",
ReportingMTA: "mox.example",
ReceivedFromMTA: smtp.Ehlo{Name: xparseIPDomain("relay.example"), ConnIP: net.ParseIP("10.10.10.10")},
ArrivalDate: now,
ReportingMTA: "mox.example",
ReceivedFromMTA: smtp.Ehlo{Name: xparseIPDomain("relay.example"), ConnIP: net.ParseIP("10.10.10.10")},
ArrivalDate: now,
FutureReleaseRequest: "for;123",
Recipients: []Recipient{
{
@ -104,6 +105,7 @@ func TestDSN(t *testing.T) {
if err != nil {
t.Fatalf("composing dsn: %v", err)
}
pmsg, part := tparseMessage(t, msgbuf, 3)
tcheckType(t, part, "multipart", "report", "")
tcheckType(t, &part.Parts[0], "text", "plain", "7bit")
@ -127,35 +129,15 @@ func TestDSN(t *testing.T) {
tcompareReader(t, part.Parts[2].Reader(), m.Original)
tcompare(t, pmsg.Recipients[0].FinalRecipient, m.Recipients[0].FinalRecipient)
// Test for valid DKIM signature.
mox.Context = context.Background()
mox.ConfigStaticPath = "../testdata/dsn/mox.conf"
mox.MustLoadConfig(false)
msgbuf, err = m.Compose(log, false)
if err != nil {
t.Fatalf("composing utf-8 dsn with utf-8 support: %v", err)
}
resolver := &dns.MockResolver{
TXT: map[string][]string{
"testsel._domainkey.mox.example.": {"v=DKIM1;h=sha256;t=s;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3ZId3ys70VFspp/VMFaxMOrNjHNPg04NOE1iShih16b3Ex7hHBOgC1UvTGSmrMlbCB1OxTXkvf6jW6S4oYRnZYVNygH6zKUwYYhaSaGIg1xA/fDn+IgcTRyLoXizMUgUgpTGyxhNrwIIWv+i7jjbs3TKpP3NU4owQ/rxowmSNqg+fHIF1likSvXvljYS" + "jaFXXnWfYibW7TdDCFFpN4sB5o13+as0u4vLw6MvOi59B1tLype1LcHpi1b9PfxNtznTTdet3kL0paxIcWtKHT0LDPUos8YYmiPa5nGbUqlC7d+4YT2jQPvwGxCws1oo2Tw6nj1UaihneYGAyvEky49FBwIDAQAB"},
},
}
results, err := dkim.Verify(context.Background(), resolver, false, func(*dkim.Sig) error { return nil }, bytes.NewReader(msgbuf), false)
if err != nil {
t.Fatalf("dkim verify: %v", err)
}
if len(results) != 1 || results[0].Status != dkim.StatusPass {
t.Fatalf("dkim result not pass, %#v", results)
}
// An utf-8 message.
m = Message{
SMTPUTF8: true,
From: smtp.Path{Localpart: "postmæster", IPDomain: xparseIPDomain("møx.example")},
To: smtp.Path{Localpart: "møx", IPDomain: xparseIPDomain("remøte.example")},
Subject: "dsn¡",
TextBody: "delivery failure¿\n",
From: smtp.Path{Localpart: "postmæster", IPDomain: xparseIPDomain("møx.example")},
To: smtp.Path{Localpart: "møx", IPDomain: xparseIPDomain("remøte.example")},
Subject: "dsn¡",
MessageID: "test@localhost",
TextBody: "delivery failure¿\n",
ReportingMTA: "mox.example",
ReceivedFromMTA: smtp.Ehlo{Name: xparseIPDomain("reläy.example"), ConnIP: net.ParseIP("10.10.10.10")},
@ -210,34 +192,3 @@ func TestDSN(t *testing.T) {
tcheckType(t, &part.Parts[1], "message", "global-delivery-status", "8bit")
tcompare(t, pmsg.Recipients[0].FinalRecipient, m.Recipients[0].FinalRecipient)
}
func TestCode(t *testing.T) {
testCodeLine := func(line, ecode, rest string) {
t.Helper()
e, r := codeLine(line)
if e != ecode || r != rest {
t.Fatalf("codeLine %q: got %q %q, expected %q %q", line, e, r, ecode, rest)
}
}
testCodeLine("4.0.0", "4.0.0", "")
testCodeLine("4.0.0 more", "4.0.0", "more")
testCodeLine("other", "", "other")
testCodeLine("other more", "", "other more")
testHasCode := func(line string, exp bool) {
t.Helper()
got := HasCode(line)
if got != exp {
t.Fatalf("HasCode %q: got %v, expected %v", line, got, exp)
}
}
testHasCode("4.0.0", true)
testHasCode("5.7.28", true)
testHasCode("10.0.0", false) // first number must be single digit.
testHasCode("4.1.1 more", true)
testHasCode("other ", false)
testHasCode("4.2.", false)
testHasCode("4.2. ", false)
testHasCode(" 4.2.4", false)
testHasCode(" 4.2.4 ", false)
}

View File

@ -4,6 +4,7 @@ import (
"bufio"
"fmt"
"io"
"log/slog"
"net/textproto"
"strconv"
"strings"
@ -11,7 +12,9 @@ import (
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/message"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/smtp"
"slices"
)
// Parse reads a DSN message.
@ -22,17 +25,19 @@ import (
// The first return value is the machine-parsed DSN message. The second value is
// the entire MIME multipart message. Use its Parts field to access the
// human-readable text and optional original message/headers.
func Parse(r io.ReaderAt) (*Message, *message.Part, error) {
func Parse(elog *slog.Logger, r io.ReaderAt) (*Message, *message.Part, error) {
log := mlog.New("dsn", elog)
// DSNs can mix and match subtypes with and without utf-8. ../rfc/6533:441
part, err := message.Parse(r)
part, err := message.Parse(log.Logger, false, r)
if err != nil {
return nil, nil, fmt.Errorf("parsing message: %v", err)
}
if part.MediaType != "MULTIPART" || part.MediaSubType != "REPORT" {
return nil, nil, fmt.Errorf(`message has content-type %q, must have "message/report"`, strings.ToLower(part.MediaType+"/"+part.MediaSubType))
}
err = part.Walk(nil)
err = part.Walk(log.Logger, nil)
if err != nil {
return nil, nil, fmt.Errorf("parsing message parts: %v", err)
}
@ -61,7 +66,11 @@ func Parse(r io.ReaderAt) (*Message, *message.Part, error) {
if err != nil {
return smtp.Path{}, fmt.Errorf("parsing domain: %v", err)
}
return smtp.Path{Localpart: smtp.Localpart(a.User), IPDomain: dns.IPDomain{Domain: d}}, nil
lp, err := smtp.ParseLocalpart(a.User)
if err != nil {
return smtp.Path{}, fmt.Errorf("parsing localpart: %v", err)
}
return smtp.Path{Localpart: lp, IPDomain: dns.IPDomain{Domain: d}}, nil
}
if len(part.Envelope.From) == 1 {
m.From, err = addressPath(part.Envelope.From[0])
@ -76,7 +85,7 @@ func Parse(r io.ReaderAt) (*Message, *message.Part, error) {
}
}
m.Subject = part.Envelope.Subject
buf, err := io.ReadAll(p0.Reader())
buf, err := io.ReadAll(p0.ReaderUTF8OrBinary())
if err != nil {
return nil, nil, fmt.Errorf("reading human-readable text part: %v", err)
}
@ -209,19 +218,21 @@ func parseRecipientHeader(mr *textproto.Reader, utf8 bool) (Recipient, error) {
case "Action":
a := Action(strings.ToLower(v))
actions := []Action{Failed, Delayed, Delivered, Relayed, Expanded}
var ok bool
for _, x := range actions {
if a == x {
ok = true
break
}
}
if !ok {
if slices.Contains(actions, a) {
r.Action = a
} else {
err = fmt.Errorf("unrecognized action %q", v)
}
case "Status":
// todo: parse the enhanced status code?
r.Status = v
t := strings.SplitN(v, "(", 2)
v = strings.TrimSpace(v)
if len(t) == 2 && strings.HasSuffix(v, ")") {
r.Status = strings.TrimSpace(t[0])
r.StatusComment = strings.TrimSpace(strings.TrimSuffix(t[1], ")"))
}
case "Remote-Mta":
r.RemoteMTA = NameIP{Name: v}
case "Diagnostic-Code":
@ -233,7 +244,7 @@ func parseRecipientHeader(mr *textproto.Reader, utf8 bool) (Recipient, error) {
} else if len(t) != 2 {
err = fmt.Errorf("missing semicolon to separate diagnostic-type from code")
} else {
r.DiagnosticCode = strings.TrimSpace(t[1])
r.DiagnosticCodeSMTP = strings.TrimSpace(t[1])
}
case "Last-Attempt-Date":
r.LastAttemptDate, err = parseDateTime(v)
@ -306,17 +317,18 @@ func parseAddress(s string, utf8 bool) (smtp.Path, error) {
}
}
// todo: more proper parser
t = strings.SplitN(s, "@", 2)
if len(t) != 2 || t[0] == "" || t[1] == "" {
t = strings.Split(s, "@")
if len(t) == 1 {
return smtp.Path{}, fmt.Errorf("invalid email address")
}
d, err := dns.ParseDomain(t[1])
d, err := dns.ParseDomain(t[len(t)-1])
if err != nil {
return smtp.Path{}, fmt.Errorf("parsing domain: %v", err)
}
var lp string
var esc string
for _, c := range t[0] {
lead := strings.Join(t[:len(t)-1], "@")
for _, c := range lead {
if esc == "" && c == '\\' || esc == `\` && (c == 'x' || c == 'X') || esc == `\x` && c == '{' {
if c == 'X' {
c = 'x'
@ -340,7 +352,11 @@ func parseAddress(s string, utf8 bool) (smtp.Path, error) {
if esc != "" {
return smtp.Path{}, fmt.Errorf("parsing localpart: unfinished embedded unicode char")
}
p := smtp.Path{Localpart: smtp.Localpart(lp), IPDomain: dns.IPDomain{Domain: d}}
localpart, err := smtp.ParseLocalpart(lp)
if err != nil {
return smtp.Path{}, fmt.Errorf("parsing localpart: %v", err)
}
p := smtp.Path{Localpart: localpart, IPDomain: dns.IPDomain{Domain: d}}
return p, nil
}
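Parse now takes an *slog.Logger as its first parameter. A minimal sketch (hypothetical file name, not part of the diff) of parsing a stored DSN with the new signature:

	package main

	import (
		"log"
		"log/slog"
		"os"

		"github.com/mjl-/mox/dsn"
	)

	func main() {
		f, err := os.Open("dsn.eml") // Hypothetical path to a stored DSN message.
		if err != nil {
			log.Fatalf("open: %v", err)
		}
		defer f.Close()

		// First return value is the machine-parsed DSN, second the full MIME part tree.
		m, part, err := dsn.Parse(slog.Default(), f)
		if err != nil {
			log.Fatalf("parsing dsn: %v", err)
		}
		log.Printf("dsn %q with %d recipient(s), %d MIME part(s)", m.Subject, len(m.Recipients), len(part.Parts))
	}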

325
examples.go Normal file
View File

@ -0,0 +1,325 @@
package main
import (
"bytes"
"encoding/base64"
"encoding/json"
"fmt"
"log"
"reflect"
"strings"
"time"
"github.com/mjl-/sconf"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/smtp"
"github.com/mjl-/mox/webhook"
)
func cmdExample(c *cmd) {
c.params = "[name]"
c.help = `List available examples, or print a specific example.`
args := c.Parse()
if len(args) > 1 {
c.Usage()
}
var match func() string
for _, ex := range examples {
if len(args) == 0 {
fmt.Println(ex.Name)
} else if args[0] == ex.Name {
match = ex.Get
}
}
if len(args) == 0 {
return
}
if match == nil {
log.Fatalln("not found")
}
fmt.Print(match())
}
func cmdConfigExample(c *cmd) {
c.params = "[name]"
c.help = `List available config examples, or print a specific example.`
args := c.Parse()
if len(args) > 1 {
c.Usage()
}
var match func() string
for _, ex := range configExamples {
if len(args) == 0 {
fmt.Println(ex.Name)
} else if args[0] == ex.Name {
match = ex.Get
}
}
if len(args) == 0 {
return
}
if match == nil {
log.Fatalln("not found")
}
fmt.Print(match())
}
var configExamples = []struct {
Name string
Get func() string
}{
{
"webhandlers",
func() string {
const webhandlers = `# Snippet of domains.conf to configure WebDomainRedirects and WebHandlers.
# Redirect all requests for mox.example to https://www.mox.example.
WebDomainRedirects:
mox.example: www.mox.example
# Each request is matched against these handlers until one matches and serves it.
WebHandlers:
-
# Redirect all plain http requests to https, leaving path, query strings, etc
# intact. When the request is already to https, the destination URL would have the
# same scheme, host and path, causing this redirect handler to not match the
# request (and not cause a redirect loop) and the webserver to serve the request
# with a later handler.
LogName: redirhttps
Domain: www.mox.example
PathRegexp: ^/
# Could leave DontRedirectPlainHTTP at false if it wasn't for this being an
# example for doing this redirect.
DontRedirectPlainHTTP: true
WebRedirect:
BaseURL: https://www.mox.example
-
# The name of the handler, used in logging and metrics.
LogName: staticmjl
# With ACME configured, each configured domain will automatically get a TLS
# certificate on first request.
Domain: www.mox.example
PathRegexp: ^/who/mjl/
WebStatic:
StripPrefix: /who/mjl
# Requested path /who/mjl/inferno/ resolves to local web/mjl/inferno.
# If a directory contains an index.html, it is served when a directory is requested.
Root: web/mjl
# With ListFiles true, if a directory does not contain an index.html, the contents are listed.
ListFiles: true
ResponseHeaders:
X-Mox: hi
-
LogName: redir
Domain: www.mox.example
PathRegexp: ^/redir/a/b/c
# Don't redirect from plain HTTP to HTTPS.
DontRedirectPlainHTTP: true
WebRedirect:
# Just change the domain, add a query string and set a fragment. No change to scheme.
# Path will start with /redir/a/b/c (and whatever came after) because no
# OrigPathRegexp+ReplacePath is set.
BaseURL: //moxest.example?q=1#frag
# Default redirection is 308 - Permanent Redirect.
StatusCode: 307
-
LogName: oldnew
Domain: www.mox.example
PathRegexp: ^/old/
WebRedirect:
# Replace path, leaving rest of URL intact.
OrigPathRegexp: ^/old/(.*)
ReplacePath: /new/$1
-
LogName: app
Domain: www.mox.example
PathRegexp: ^/app/
WebForward:
# Strip the path matched by PathRegexp before forwarding the request. So original
# request /app/api become just /api.
StripPath: true
# URL of backend, where requests are forwarded to. The path in the URL is kept,
# so for incoming request URL /app/api, the outgoing request URL has path /app-v2/api.
# Requests are made with Go's net/http DefaultTransporter, including using
# HTTP_PROXY and HTTPS_PROXY environment variables.
URL: http://127.0.0.1:8900/app-v2/
# Add headers to response.
ResponseHeaders:
X-Frame-Options: deny
X-Content-Type-Options: nosniff
`
// Parse just so we know we have the syntax right.
// todo: ideally we would have a complete config file and parse it fully.
var conf struct {
WebDomainRedirects map[string]string
WebHandlers []config.WebHandler
}
err := sconf.Parse(strings.NewReader(webhandlers), &conf)
xcheckf(err, "parsing webhandlers example")
return webhandlers
},
},
{
"transport",
func() string {
const moxconf = `# Snippet for mox.conf, defining a transport called Example that connects on the
# SMTP submission with TLS port 465 ("submissions"), authenticating with
# SCRAM-SHA-256-PLUS (other providers may not support SCRAM-SHA-256-PLUS, but they
# typically do support the older CRAM-MD5).
# Transports are mechanisms for delivering messages. Transports can be referenced
# from Routes in accounts, domains and the global configuration. There is always
# an implicit/fallback delivery transport doing direct delivery with SMTP from the
# outgoing message queue. Transports are typically only configured when using
# smarthosts, i.e. when delivering through another SMTP server. Zero or one
# transport methods must be set in a transport, never multiple. When using an
# external party to send email for a domain, keep in mind you may have to add
# their IP address to your domain's SPF record, and possibly additional DKIM
# records. (optional)
Transports:
Example:
# Submission SMTP over a TLS connection to submit email to a remote queue.
# (optional)
Submissions:
# Host name to connect to and for verifying its TLS certificate.
Host: smtp.example.com
# If set, authentication credentials for the remote server. (optional)
Auth:
Username: user@example.com
Password: test1234
Mechanisms:
# Allowed authentication mechanisms. Defaults to SCRAM-SHA-256-PLUS,
# SCRAM-SHA-256, SCRAM-SHA-1-PLUS, SCRAM-SHA-1, CRAM-MD5. Not included by default:
# PLAIN. Specify the strongest mechanism known to be implemented by the server to
# prevent mechanism downgrade attacks. (optional)
- SCRAM-SHA-256-PLUS
`
const domainsconf = `# Snippet for domains.conf, specifying a route that sends through the transport:
# Routes for delivering outgoing messages through the queue. Each delivery attempt
# evaluates account routes, domain routes and finally these global routes. The
# transport of the first matching route is used in the delivery attempt. If no
# routes match, which is the default with no configured routes, messages are
# delivered directly from the queue. (optional)
Routes:
-
Transport: Example
`
var static struct {
Transports map[string]config.Transport
}
var dynamic struct {
Routes []config.Route
}
err := sconf.Parse(strings.NewReader(moxconf), &static)
xcheckf(err, "parsing moxconf example")
err = sconf.Parse(strings.NewReader(domainsconf), &dynamic)
xcheckf(err, "parsing domainsconf example")
return moxconf + "\n\n" + domainsconf
},
},
}
var exampleTime = time.Date(2024, time.March, 27, 0, 0, 0, 0, time.UTC)
var examples = []struct {
Name string
Get func() string
}{
{
"webhook-outgoing-delivered",
func() string {
v := webhook.Outgoing{
Version: 0,
Event: webhook.EventDelivered,
QueueMsgID: 101,
FromID: base64.RawURLEncoding.EncodeToString([]byte("0123456789abcdef")),
MessageID: "<QnxzgulZK51utga6agH_rg@mox.example>",
Subject: "subject of original message",
WebhookQueued: exampleTime,
Extra: map[string]string{},
SMTPCode: smtp.C250Completed,
}
return "Example webhook HTTP POST JSON body for successful outgoing delivery:\n\n\t" + formatJSON(v)
},
},
{
"webhook-outgoing-dsn-failed",
func() string {
v := webhook.Outgoing{
Version: 0,
Event: webhook.EventFailed,
DSN: true,
Suppressing: true,
QueueMsgID: 102,
FromID: base64.RawURLEncoding.EncodeToString([]byte("0123456789abcdef")),
MessageID: "<QnxzgulZK51utga6agH_rg@mox.example>",
Subject: "subject of original message",
WebhookQueued: exampleTime,
Extra: map[string]string{"userid": "456"},
Error: "timeout connecting to host",
SMTPCode: smtp.C554TransactionFailed,
SMTPEnhancedCode: "5." + smtp.SeNet4Other0,
}
return `Example webhook HTTP POST JSON body for failed delivery based on incoming DSN
message, with custom extra data fields (from the original submission), and adding the address to the suppression list:
` + formatJSON(v)
},
},
{
"webhook-incoming-basic",
func() string {
v := webhook.Incoming{
Version: 0,
From: []webhook.NameAddress{{Address: "mox@localhost"}},
To: []webhook.NameAddress{{Address: "mjl@localhost"}},
Subject: "hi",
MessageID: "<QnxzgulZK51utga6agH_rg@mox.example>",
Date: &exampleTime,
Text: "hello world ☺\n",
Structure: webhook.Structure{
ContentType: "text/plain",
ContentTypeParams: map[string]string{"charset": "utf-8"},
DecodedSize: int64(len("hello world ☺\r\n")),
Parts: []webhook.Structure{},
},
Meta: webhook.IncomingMeta{
MsgID: 201,
MailFrom: "mox@localhost",
MailFromValidated: false,
MsgFromValidated: true,
RcptTo: "mjl@localhost",
DKIMVerifiedDomains: []string{"localhost"},
RemoteIP: "127.0.0.1",
Received: exampleTime.Add(3 * time.Second),
MailboxName: "Inbox",
Automated: false,
},
}
return "Example JSON body for webhooks for incoming delivery of basic message:\n\n\t" + formatJSON(v)
},
},
}
func formatJSON(v any) string {
nv, _ := mox.FillNil(reflect.ValueOf(v))
v = nv.Interface()
var b bytes.Buffer
enc := json.NewEncoder(&b)
enc.SetIndent("\t", "\t")
enc.SetEscapeHTML(false)
err := enc.Encode(v)
xcheckf(err, "encoding to json")
return b.String()
}

View File

@ -1,31 +1,33 @@
package main
import (
"context"
"log"
"path/filepath"
"time"
"github.com/mjl-/bstore"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/store"
)
func cmdExportMaildir(c *cmd) {
c.params = "dst-dir account-path [mailbox]"
c.params = "[-single] dst-dir account-path [mailbox]"
c.help = `Export one or all mailboxes from an account in maildir format.
Export bypasses a running mox instance. It opens the account mailbox/message
database file directly. This may block if a running mox instance also has the
database open, e.g. for IMAP connections. To export from a running instance, use
the accounts web page.
the accounts web page or webmail.
`
var single bool
c.flag.BoolVar(&single, "single", false, "export single mailbox, without any children. disabled if mailbox isn't specified.")
args := c.Parse()
xcmdExport(false, args, c)
xcmdExport(false, single, args, c)
}
func cmdExportMbox(c *cmd) {
c.params = "dst-dir account-path [mailbox]"
c.params = "[-single] dst-dir account-path [mailbox]"
c.help = `Export messages from one or all mailboxes in an account in mbox format.
Using mbox is not recommended. Maildir is a better format.
@ -33,17 +35,19 @@ Using mbox is not recommended. Maildir is a better format.
Export bypasses a running mox instance. It opens the account mailbox/message
database file directly. This may block if a running mox instance also has the
database open, e.g. for IMAP connections. To export from a running instance, use
the accounts web page.
the accounts web page or webmail.
For mbox export, "mboxrd" is used where message lines starting with the magic
"From " string are escaped by prepending a >. All ">*From " are escaped,
otherwise reconstructing the original could lose a ">".
`
var single bool
c.flag.BoolVar(&single, "single", false, "export single mailbox, without any children. disabled if mailbox isn't specified.")
args := c.Parse()
xcmdExport(true, args, c)
xcmdExport(true, single, args, c)
}
func xcmdExport(mbox bool, args []string, c *cmd) {
func xcmdExport(mbox, single bool, args []string, c *cmd) {
if len(args) != 2 && len(args) != 3 {
c.Usage()
}
@ -53,10 +57,13 @@ func xcmdExport(mbox bool, args []string, c *cmd) {
var mailbox string
if len(args) == 3 {
mailbox = args[2]
} else {
single = false
}
dbpath := filepath.Join(accountDir, "index.db")
db, err := bstore.Open(dbpath, &bstore.Options{Timeout: 5 * time.Second, Perm: 0660}, store.Message{}, store.Recipient{}, store.Mailbox{})
opts := bstore.Options{Timeout: 5 * time.Second, Perm: 0660, RegisterLogger: c.log.Logger}
db, err := bstore.Open(context.Background(), dbpath, &opts, store.DBTypes...)
xcheckf(err, "open database %q", dbpath)
defer func() {
if err := db.Close(); err != nil {
@ -65,7 +72,7 @@ func xcmdExport(mbox bool, args []string, c *cmd) {
}()
a := store.DirArchiver{Dir: dst}
err = store.ExportMessages(mlog.New("export"), db, accountDir, a, !mbox, mailbox)
err = store.ExportMessages(context.Background(), c.log, db, accountDir, a, !mbox, mailbox, nil, !single)
xcheckf(err, "exporting messages")
err = a.Close()
xcheckf(err, "closing archiver")

10
genapidoc.sh Executable file
View File

@ -0,0 +1,10 @@
#!/bin/sh
set -eu
# we rewrite some dmarcrpt and tlsrpt enums into untyped strings: real-world
# reports have invalid values, and our loose Go typed strings accept all values,
# but we don't want the typescript runtime checker to fail on those unrecognized
# values.
(cd webadmin && CGO_ENABLED=0 go run ../vendor/github.com/mjl-/sherpadoc/cmd/sherpadoc/*.go -adjust-function-names none -rename 'config Domain ConfigDomain,dmarc Policy DMARCPolicy,mtasts MX STSMX,tlsrptdb Record TLSReportRecord,tlsrptdb SuppressAddress TLSRPTSuppressAddress,dmarcrpt DKIMResult string,dmarcrpt SPFResult string,dmarcrpt SPFDomainScope string,dmarcrpt DMARCResult string,dmarcrpt PolicyOverride string,dmarcrpt Alignment string,dmarcrpt Disposition string,tlsrpt PolicyType string,tlsrpt ResultType string' Admin) >webadmin/api.json
(cd webaccount && CGO_ENABLED=0 go run ../vendor/github.com/mjl-/sherpadoc/cmd/sherpadoc/*.go -adjust-function-names none Account) >webaccount/api.json
(cd webmail && CGO_ENABLED=0 go run ../vendor/github.com/mjl-/sherpadoc/cmd/sherpadoc/*.go -adjust-function-names none Webmail) >webmail/api.json

101
gendoc.sh
View File

@ -1,36 +1,34 @@
#!/bin/sh
#!/usr/bin/env sh
# ./doc.go
(
cat <<EOF
/*
Command mox is a modern full-featured open source secure mail server for
Command mox is a modern, secure, full-featured, open source mail server for
low-maintenance self-hosted email.
- Quick and easy to set up with quickstart and automatic TLS with ACME and
Let's Encrypt.
- IMAP4 with extensions for accessing email.
- SMTP with SPF, DKIM, DMARC, DNSBL, MTA-STS, TLSRPT for exchanging email.
- Reputation-based and content-based spam filtering.
- Internationalized email.
- Admin web interface.
Mox is started with the "serve" subcommand, but mox also has many other
subcommands.
# Commands
Many of those commands talk to a running mox instance, through the ctl file in
the data directory. Specify the configuration file (that holds the path to the
data directory) through the -config flag or MOXCONF environment variable.
Commands that don't talk to a running mox instance are often for
testing/debugging email functionality, for example parsing an email message or
looking up SPF/DKIM/DMARC records.
Below is the usage information as printed by the command when started without
any parameters. Followed by the help and usage information for each command.
# Usage
EOF
./mox 2>&1 | sed 's/^\( *\|usage: \)/\t/'
cat <<EOF
Many commands talk to a running mox instance, through the ctl file in the data
directory. Specify the configuration file (that holds the path to the data
directory) through the -config flag or MOXCONF environment variable.
EOF
# setting XDG_CONFIG_HOME ensures "mox localserve" has reasonable default
# values in its help output.
XDG_CONFIG_HOME='$userconfigdir' ./mox helpall 2>&1
./mox 2>&1 | sed -e 's/^usage: */ /' -e 's/^ */ /'
echo
./mox helpall 2>&1
cat <<EOF
*/
@ -41,47 +39,70 @@ EOF
)>doc.go
gofmt -w doc.go
# ./config/doc.go
(
cat <<EOF
/*
Package config holds the configuration file definitions for mox.conf (Static)
and domains.conf (Dynamic).
Package config holds the configuration file definitions.
These config files are in "sconf" format. Summarized: Indent with tabs, "#" as
first non-whitespace character makes the line a comment (you cannot have a line
with both a value and a comment), strings are not quoted/escaped and can never
span multiple lines. See https://pkg.go.dev/github.com/mjl-/sconf for details.
Mox uses two config files:
1. mox.conf, also called the static configuration file.
2. domains.conf, also called the dynamic configuration file.
The static configuration file is never reloaded during the lifetime of a
running mox instance. After changes to mox.conf, mox must be restarted for the
changes to take effect.
The dynamic configuration file is reloaded automatically when it changes.
If the file contains an error after the change, the reload is aborted and the
previous version remains active.
Below are "empty" config files, generated from the config file definitions in
the source code, along with comments explaining the fields. Fields named "x" are
placeholders for user-chosen map keys.
# sconf
The config files are in "sconf" format. Properties of sconf files:
- Indentation with tabs only.
- "#" as first non-whitespace character makes the line a comment. Lines with a
value cannot also have a comment.
- Values don't have syntax indicating their type. For example, strings are
not quoted/escaped and can never span multiple lines.
- Fields that are optional can be left out completely. But the value of an
optional field may itself have required fields.
See https://pkg.go.dev/github.com/mjl-/sconf for details.
Annotated empty/default configuration files you could use as a starting point
for your mox.conf and domains.conf, as generated by "mox config
describe-static" and "mox config describe-domains":
# mox.conf
EOF
./mox config describe-static | sed 's/^/\t/'
./mox config describe-static | sed 's/^/ /'
cat <<EOF
# domains.conf
EOF
./mox config describe-domains | sed 's/^/\t/'
./mox config describe-domains | sed 's/^/ /'
cat <<EOF
# Examples
Mox includes configuration files to illustrate common setups. You can see these
examples with "mox example", and print a specific example with "mox example
<name>". Below are all examples included in mox.
examples with "mox config example", and print a specific example with "mox
config example <name>". Below are all examples included in mox.
EOF
for ex in $(./mox example); do
for ex in $(./mox config example); do
echo '# Example '$ex
echo
./mox example $ex | sed 's/^/\t/'
./mox config example $ex | sed 's/^/ /'
echo
done
@ -93,3 +114,7 @@ package config
EOF
)>config/doc.go
gofmt -w config/doc.go
# ./webapi/doc.go
./webapi/gendoc.sh >webapi/doc.go
gofmt -w webapi/doc.go

7
genlicenses.sh Executable file
View File

@ -0,0 +1,7 @@
#!/bin/sh
rm -r licenses
set -e
for p in $(cd vendor && find . -iname '*license*' -or -iname '*licence*' -or -iname '*notice*' -or -iname '*patent*'); do
(set +e; mkdir -p $(dirname licenses/$p))
cp vendor/$p licenses/$p
done

376
gentestdata.go Normal file
View File

@ -0,0 +1,376 @@
package main
import (
"context"
"fmt"
"log"
"os"
"path/filepath"
"strings"
"time"
"github.com/mjl-/bstore"
"github.com/mjl-/sconf"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dmarcdb"
"github.com/mjl-/mox/dmarcrpt"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/moxvar"
"github.com/mjl-/mox/mtasts"
"github.com/mjl-/mox/mtastsdb"
"github.com/mjl-/mox/queue"
"github.com/mjl-/mox/smtp"
"github.com/mjl-/mox/store"
"github.com/mjl-/mox/tlsrpt"
"github.com/mjl-/mox/tlsrptdb"
)
func cmdGentestdata(c *cmd) {
c.unlisted = true
c.params = "destdir"
c.help = `Generate a populated data directory, for testing upgrades.`
args := c.Parse()
if len(args) != 1 {
c.Usage()
}
destDataDir, err := filepath.Abs(args[0])
xcheckf(err, "making destination directory an absolute path")
if _, err := os.Stat(destDataDir); err == nil {
log.Fatalf("destination directory already exists, refusing to generate test data")
}
err = os.MkdirAll(destDataDir, 0770)
xcheckf(err, "creating destination data directory")
err = os.MkdirAll(filepath.Join(destDataDir, "tmp"), 0770)
xcheckf(err, "creating tmp directory")
tempfile := func() *os.File {
f, err := os.CreateTemp(filepath.Join(destDataDir, "tmp"), "temp")
xcheckf(err, "creating temp file")
return f
}
ctxbg := context.Background()
mox.Conf.Log[""] = mlog.LevelInfo
mlog.SetConfig(mox.Conf.Log)
const domainsConf = `
Domains:
mox.example: nil
.example: nil
Accounts:
test0:
Domain: mox.example
Destinations:
test0@mox.example: nil
test1:
Domain: mox.example
Destinations:
test1@mox.example: nil
test2:
Domain: .example
Destinations:
@.example: nil
JunkFilter:
Threshold: 0.95
Params:
Twograms: true
MaxPower: 0.1
TopWords: 10
IgnoreWords: 0.1
`
mox.ConfigStaticPath = filepath.FromSlash("/tmp/mox-bogus/mox.conf")
mox.ConfigDynamicPath = filepath.FromSlash("/tmp/mox-bogus/domains.conf")
mox.Conf.DynamicLastCheck = time.Now() // Should prevent warning.
mox.Conf.Static = config.Static{
DataDir: destDataDir,
}
err = sconf.Parse(strings.NewReader(domainsConf), &mox.Conf.Dynamic)
xcheckf(err, "parsing domains config")
const dmarcReport = `<?xml version="1.0" encoding="UTF-8" ?>
<feedback>
<report_metadata>
<org_name>google.com</org_name>
<email>noreply-dmarc-support@google.com</email>
<extra_contact_info>https://support.google.com/a/answer/2466580</extra_contact_info>
<report_id>10051505501689795560</report_id>
<date_range>
<begin>1596412800</begin>
<end>1596499199</end>
</date_range>
</report_metadata>
<policy_published>
<domain>mox.example</domain>
<adkim>r</adkim>
<aspf>r</aspf>
<p>reject</p>
<sp>reject</sp>
<pct>100</pct>
</policy_published>
<record>
<row>
<source_ip>127.0.0.1</source_ip>
<count>1</count>
<policy_evaluated>
<disposition>none</disposition>
<dkim>pass</dkim>
<spf>pass</spf>
</policy_evaluated>
</row>
<identifiers>
<header_from>example.org</header_from>
</identifiers>
<auth_results>
<dkim>
<domain>example.org</domain>
<result>pass</result>
<selector>example</selector>
</dkim>
<spf>
<domain>example.org</domain>
<result>pass</result>
</spf>
</auth_results>
</record>
</feedback>
`
const tlsReport = `{
"organization-name": "Company-X",
"date-range": {
"start-datetime": "2016-04-01T00:00:00Z",
"end-datetime": "2016-04-01T23:59:59Z"
},
"contact-info": "sts-reporting@company-x.example",
"report-id": "5065427c-23d3-47ca-b6e0-946ea0e8c4be",
"policies": [{
"policy": {
"policy-type": "sts",
"policy-string": ["version: STSv1","mode: testing",
"mx: *.mail.company-y.example","max_age: 86400"],
"policy-domain": "mox.example",
"mx-host": ["*.mail.company-y.example"]
},
"summary": {
"total-successful-session-count": 5326,
"total-failure-session-count": 303
},
"failure-details": [{
"result-type": "certificate-expired",
"sending-mta-ip": "2001:db8:abcd:0012::1",
"receiving-mx-hostname": "mx1.mail.company-y.example",
"failed-session-count": 100
}, {
"result-type": "starttls-not-supported",
"sending-mta-ip": "2001:db8:abcd:0013::1",
"receiving-mx-hostname": "mx2.mail.company-y.example",
"receiving-ip": "203.0.113.56",
"failed-session-count": 200,
"additional-information": "https://reports.company-x.example/report_info ? id = 5065427 c - 23 d3# StarttlsNotSupported "
}, {
"result-type": "validation-failure",
"sending-mta-ip": "198.51.100.62",
"receiving-ip": "203.0.113.58",
"receiving-mx-hostname": "mx-backup.mail.company-y.example",
"failed-session-count": 3,
"failure-reason-code": "X509_V_ERR_PROXY_PATH_LENGTH_EXCEEDED"
}]
}]
}`
err = os.WriteFile(filepath.Join(destDataDir, "moxversion"), []byte(moxvar.Version), 0660)
xcheckf(err, "writing moxversion")
// Populate auth.db
err = store.Init(ctxbg)
xcheckf(err, "store init")
err = store.TLSPublicKeyAdd(ctxbg, &store.TLSPublicKey{Name: "testkey", Fingerprint: "...", Type: "ecdsa-p256", CertDER: []byte("..."), Account: "test0", LoginAddress: "test0@mox.example"})
xcheckf(err, "adding tlspubkey")
// Populate dmarc.db.
err = dmarcdb.Init()
xcheckf(err, "dmarcdb init")
report, err := dmarcrpt.ParseReport(strings.NewReader(dmarcReport))
xcheckf(err, "parsing dmarc aggregate report")
err = dmarcdb.AddReport(ctxbg, report, dns.Domain{ASCII: "mox.example"})
xcheckf(err, "adding dmarc aggregate report")
// Populate mtasts.db.
err = mtastsdb.Init(false)
xcheckf(err, "mtastsdb init")
mtastsPolicy := mtasts.Policy{
Version: "STSv1",
Mode: mtasts.ModeTesting,
MX: []mtasts.MX{
{Domain: dns.Domain{ASCII: "mx1.example.com"}},
{Domain: dns.Domain{ASCII: "mx2.example.com"}},
{Domain: dns.Domain{ASCII: "backup-example.com"}, Wildcard: true},
},
MaxAgeSeconds: 1296000,
}
err = mtastsdb.Upsert(ctxbg, dns.Domain{ASCII: "mox.example"}, "123", &mtastsPolicy, mtastsPolicy.String())
xcheckf(err, "adding mtastsdb report")
// Populate tlsrpt.db.
err = tlsrptdb.Init()
xcheckf(err, "tlsrptdb init")
tlsreportJSON, err := tlsrpt.Parse(strings.NewReader(tlsReport))
xcheckf(err, "parsing tls report")
tlsr := tlsreportJSON.Convert()
err = tlsrptdb.AddReport(ctxbg, c.log, dns.Domain{ASCII: "mox.example"}, "tlsrpt@mox.example", false, &tlsr)
xcheckf(err, "adding tls report")
// Populate queue, with a message.
err = queue.Init()
xcheckf(err, "queue init")
mailfrom := smtp.Path{Localpart: "other", IPDomain: dns.IPDomain{Domain: dns.Domain{ASCII: "other.example"}}}
rcptto := smtp.Path{Localpart: "test0", IPDomain: dns.IPDomain{Domain: dns.Domain{ASCII: "mox.example"}}}
prefix := []byte{}
mf := tempfile()
xcheckf(err, "temp file for queue message")
defer store.CloseRemoveTempFile(c.log, mf, "test message")
const qmsg = "From: <test0@mox.example>\r\nTo: <other@remote.example>\r\nSubject: test\r\n\r\nthe message...\r\n"
_, err = fmt.Fprint(mf, qmsg)
xcheckf(err, "writing message")
qm := queue.MakeMsg(mailfrom, rcptto, false, false, int64(len(qmsg)), "<test@localhost>", prefix, nil, time.Now(), "test")
err = queue.Add(ctxbg, c.log, "test0", mf, qm)
xcheckf(err, "enqueue message")
// Create three accounts.
// First account without messages.
accTest0, err := store.OpenAccount(c.log, "test0", false)
xcheckf(err, "open account test0")
err = accTest0.ThreadingWait(c.log)
xcheckf(err, "wait for threading to finish")
err = accTest0.Close()
xcheckf(err, "close account")
// Second account with one message.
accTest1, err := store.OpenAccount(c.log, "test1", false)
xcheckf(err, "open account test1")
err = accTest1.ThreadingWait(c.log)
xcheckf(err, "wait for threading to finish")
err = accTest1.DB.Write(ctxbg, func(tx *bstore.Tx) error {
inbox, err := bstore.QueryTx[store.Mailbox](tx).FilterNonzero(store.Mailbox{Name: "Inbox"}).Get()
xcheckf(err, "looking up inbox")
const msg = "From: <other@remote.example>\r\nTo: <test1@mox.example>\r\nSubject: test\r\n\r\nthe message...\r\n"
m := store.Message{
MailboxID: inbox.ID,
MailboxOrigID: inbox.ID,
RemoteIP: "1.2.3.4",
RemoteIPMasked1: "1.2.3.4",
RemoteIPMasked2: "1.2.3.0",
RemoteIPMasked3: "1.2.0.0",
EHLODomain: "other.example",
MailFrom: "other@remote.example",
MailFromLocalpart: smtp.Localpart("other"),
MailFromDomain: "remote.example",
RcptToLocalpart: "test1",
RcptToDomain: "mox.example",
MsgFromLocalpart: "other",
MsgFromDomain: "remote.example",
MsgFromOrgDomain: "remote.example",
EHLOValidated: true,
MailFromValidated: true,
MsgFromValidated: true,
EHLOValidation: store.ValidationStrict,
MailFromValidation: store.ValidationPass,
MsgFromValidation: store.ValidationStrict,
DKIMDomains: []string{"other.example"},
Size: int64(len(msg)),
}
mf := tempfile()
xcheckf(err, "creating temp file for delivery")
defer store.CloseRemoveTempFile(c.log, mf, "test message")
_, err = fmt.Fprint(mf, msg)
xcheckf(err, "writing deliver message to file")
err = accTest1.MessageAdd(c.log, tx, &inbox, &m, mf, store.AddOpts{})
xcheckf(err, "deliver message")
err = tx.Update(&inbox)
xcheckf(err, "update inbox")
return nil
})
xcheckf(err, "write transaction with new message")
err = accTest1.Close()
xcheckf(err, "close account")
// Third account with two messages and junkfilter.
accTest2, err := store.OpenAccount(c.log, "test2", false)
xcheckf(err, "open account test2")
err = accTest2.ThreadingWait(c.log)
xcheckf(err, "wait for threading to finish")
err = accTest2.DB.Write(ctxbg, func(tx *bstore.Tx) error {
inbox, err := bstore.QueryTx[store.Mailbox](tx).FilterNonzero(store.Mailbox{Name: "Inbox"}).Get()
xcheckf(err, "looking up inbox")
const msg0 = "From: <other@remote.example>\r\nTo: <☹@xn--74h.example>\r\nSubject: test\r\n\r\nthe message...\r\n"
m0 := store.Message{
MailboxID: inbox.ID,
MailboxOrigID: inbox.ID,
RemoteIP: "::1",
RemoteIPMasked1: "::",
RemoteIPMasked2: "::",
RemoteIPMasked3: "::",
EHLODomain: "other.example",
MailFrom: "other@remote.example",
MailFromLocalpart: smtp.Localpart("other"),
MailFromDomain: "remote.example",
RcptToLocalpart: "☹",
RcptToDomain: "☺.example",
MsgFromLocalpart: "other",
MsgFromDomain: "remote.example",
MsgFromOrgDomain: "remote.example",
EHLOValidated: true,
MailFromValidated: true,
MsgFromValidated: true,
EHLOValidation: store.ValidationStrict,
MailFromValidation: store.ValidationPass,
MsgFromValidation: store.ValidationStrict,
DKIMDomains: []string{"other.example"},
Size: int64(len(msg0)),
}
mf0 := tempfile()
xcheckf(err, "creating temp file for delivery")
defer store.CloseRemoveTempFile(c.log, mf0, "test message")
_, err = fmt.Fprint(mf0, msg0)
xcheckf(err, "writing deliver message to file")
err = accTest2.MessageAdd(c.log, tx, &inbox, &m0, mf0, store.AddOpts{})
xcheckf(err, "add message to account test2")
err = tx.Update(&inbox)
xcheckf(err, "update inbox")
sent, err := bstore.QueryTx[store.Mailbox](tx).FilterNonzero(store.Mailbox{Name: "Sent"}).Get()
xcheckf(err, "looking up inbox")
const prefix1 = "Extra: test\r\n"
const msg1 = "From: <other@remote.example>\r\nTo: <☹@xn--74h.example>\r\nSubject: test\r\n\r\nthe message...\r\n"
m1 := store.Message{
MailboxID: sent.ID,
MailboxOrigID: sent.ID,
Flags: store.Flags{Seen: true, Junk: true},
Size: int64(len(prefix1) + len(msg1)),
MsgPrefix: []byte(prefix1),
}
mf1 := tempfile()
xcheckf(err, "creating temp file for delivery")
defer store.CloseRemoveTempFile(c.log, mf1, "test message")
_, err = fmt.Fprint(mf1, msg1)
xcheckf(err, "writing deliver message to file")
err = accTest2.MessageAdd(c.log, tx, &sent, &m1, mf1, store.AddOpts{})
xcheckf(err, "add message to account test2")
err = tx.Update(&sent)
xcheckf(err, "update sent")
return nil
})
xcheckf(err, "write transaction with new message")
err = accTest2.Close()
xcheckf(err, "close account")
}
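The destination directory must not exist yet. A hedged invocation sketch, assuming the command is registered under the name "gentestdata" (the registration is not part of this hunk) and using a hypothetical destination path:

	./mox gentestdata /tmp/mox-upgrade-data

The resulting directory can then be used as the DataDir of a newer mox version to exercise the upgrade path, as the help text above describes.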

gents.sh (new executable file, 11 lines)

@@ -0,0 +1,11 @@
#!/bin/sh
set -eu
# generate new typescript client, only install it when it is different, so we
# don't trigger frontend builds needlessly.
go run vendor/github.com/mjl-/sherpats/cmd/sherpats/main.go -bytes-to-string -slices-nullable -maps-nullable -nullable-optional -namespace api api <$1 >$2.tmp
if cmp -s $2 $2.tmp; then
rm $2.tmp
else
mv $2.tmp $2
fi
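A hedged usage sketch: $1 is a sherpa API definition (JSON) and $2 is the TypeScript output file; the paths below are hypothetical, not taken from this diff:

	./gents.sh webadmin/api.json webadmin/api.ts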

genwebsite.sh (new executable file, 117 lines)

@@ -0,0 +1,117 @@
#!/usr/bin/env bash
mkdir website/html 2>/dev/null
rm -r website/html/* 2>/dev/null
set -euo pipefail
commithash=$(git rev-parse --short HEAD)
commitdate=$(git log -1 --date=format:"%Y-%m-%d" --format="%ad")
export commithash
export commitdate
# Link to static files and cross-references.
ln -sf ../../../mox-website-files/files website/html/files
ln -sf ../../rfc/xr website/html/xr
# All commands below are executed relative to ./website/
cd website
go run website.go -root -title 'Mox: modern, secure, all-in-one mail server' 'Mox' < index.md >html/index.html
mkdir html/features
(
cat features/index.md
echo
sed -n -e 's/^# Roadmap/## Roadmap/' -e '/# FAQ/q' -e '/# Roadmap/,/# FAQ/p' < ../README.md
echo
echo 'Also see the [Protocols](../protocols/) page for implementation status, and (non)-plans.'
) | go run website.go 'Features' >html/features/index.html
mkdir html/screenshots
go run website.go 'Screenshots' < screenshots/index.md >html/screenshots/index.html
mkdir html/install
go run website.go 'Install' < install/index.md >html/install/index.html
mkdir html/faq
sed -n '/# FAQ/,//p' < ../README.md | go run website.go 'FAQ' >html/faq/index.html
mkdir html/config
(
echo '# Config reference'
echo
sed -n '/^Package config holds /,/\*\//p' < ../config/doc.go | grep -v -E '^(Package config holds |\*/)' | sed 's/^# /## /'
) | go run website.go 'Config reference' >html/config/index.html
mkdir html/commands
(
echo '# Command reference'
echo
sed -n '/^Mox is started /,/\*\//p' < ../doc.go | grep -v '\*/' | sed 's/^# /## /'
) | go run website.go 'Command reference' >html/commands/index.html
mkdir html/protocols
go run website.go -protocols 'Protocols' <../rfc/index.txt >html/protocols/index.html
mkdir html/b
cat <<'EOF' >html/b/index.html
<!doctype html>
<html>
<head>
<meta charset="utf-8" />
<title>mox build</title>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="icon" href="noNeedlessFaviconRequestsPlease:" />
<style>
body { padding: 1em; }
* { font-size: 18px; font-family: ubuntu, lato, sans-serif; margin: 0; padding: 0; box-sizing: border-box; }
p { max-width: 50em; margin-bottom: 2ex; }
pre { font-family: 'ubuntu mono', monospace; }
pre, blockquote { padding: 1em; background-color: #eee; border-radius: .25em; display: inline-block; margin-bottom: 1em; }
h1 { margin: 1em 0 .5em 0; }
</style>
</head>
<body>
<script>
const elem = (name, ...s) => {
const e = document.createElement(name)
e.append(...s)
return e
}
const link = (url, anchor) => {
const e = document.createElement('a')
e.setAttribute('href', url)
e.setAttribute('rel', 'noopener')
e.append(anchor || url)
return e
}
let h = location.hash.substring(1)
const ok = /^[a-zA-Z0-9_\.]+$/.test(h)
if (!ok) {
h = '<tag-or-branch-or-commithash>'
}
const init = () => {
document.body.append(
elem('p', 'Compile or download any version of mox, by tag (release), branch or commit hash.'),
elem('h1', 'Compile'),
elem('p', 'Run:'),
elem('pre', 'CGO_ENABLED=0 GOBIN=$PWD go install github.com/mjl-/mox@'+h),
elem('p', 'Mox is tested with the Go toolchain versions that still have support: the most recent version, and the version before.'),
elem('h1', 'Download'),
elem('p', 'Download a binary for your platform:'),
elem('blockquote', ok ?
link('https://beta.gobuilds.org/github.com/mjl-/mox@'+h) :
'https://beta.gobuilds.org/github.com/mjl-/mox@'+h
),
elem('p', 'Because mox is written in Go, builds are reproducible, also when cross-compiling. Gobuilds.org is a service that builds Go applications on-demand with the latest Go toolchain/runtime.'),
elem('h1', 'Localserve'),
elem('p', 'Changes to mox can often be most easily tested locally with ', link('../features/#hdr-localserve', '"mox localserve"'), ', without having to update your running mail server.'),
)
}
window.addEventListener('load', init)
</script>
</body>
</html>
EOF
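A hedged sketch for previewing the result: the script writes everything under website/html, which can be served with any static file server, for example (assuming Python 3 is available):

	./genwebsite.sh
	(cd website/html && python3 -m http.server 8000)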

go.mod (48 changed lines)

@@ -1,31 +1,37 @@
module github.com/mjl-/mox
go 1.18
go 1.23.0
require (
github.com/mjl-/bstore v0.0.0-20230211204415-a9899ef6e782
github.com/mjl-/sconf v0.0.4
github.com/mjl-/sherpa v0.6.5
github.com/mjl-/sherpadoc v0.0.10
github.com/mjl-/adns v0.0.0-20250321173553-ab04b05bdfea
github.com/mjl-/autocert v0.0.0-20250321204043-abab2b936e31
github.com/mjl-/bstore v0.0.9
github.com/mjl-/flate v0.0.0-20250221133712-6372d09eb978
github.com/mjl-/sconf v0.0.7
github.com/mjl-/sherpa v0.6.7
github.com/mjl-/sherpadoc v0.0.16
github.com/mjl-/sherpaprom v0.0.2
github.com/prometheus/client_golang v1.14.0
go.etcd.io/bbolt v1.3.7
golang.org/x/crypto v0.8.0
golang.org/x/net v0.9.0
golang.org/x/text v0.9.0
github.com/mjl-/sherpats v0.0.6
github.com/prometheus/client_golang v1.18.0
github.com/russross/blackfriday/v2 v2.1.0
go.etcd.io/bbolt v1.3.11
golang.org/x/crypto v0.37.0
golang.org/x/net v0.39.0
golang.org/x/sys v0.32.0
golang.org/x/text v0.24.0
rsc.io/qr v0.2.0
)
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/mjl-/xfmt v0.0.0-20190521151243-39d9c00752ce // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
golang.org/x/mod v0.8.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/tools v0.6.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
github.com/mjl-/xfmt v0.0.2 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.45.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/sync v0.13.0 // indirect
golang.org/x/tools v0.32.0 // indirect
google.golang.org/protobuf v1.31.0 // indirect
)
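After dependency and toolchain bumps like these, a minimal verification sketch using standard Go tooling (nothing mox-specific):

	go mod tidy
	go mod verify
	go build ./...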

go.sum (505 changed lines)

@@ -1,510 +1,117 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mjl-/bstore v0.0.0-20230211204415-a9899ef6e782 h1:dVwJA/wXzXXUROM9oM3Stg3cmqixiFh4Zi1Xumvtj74=
github.com/mjl-/bstore v0.0.0-20230211204415-a9899ef6e782/go.mod h1:/cD25FNBaDfvL/plFRxI3Ba3E+wcB0XVOS8nJDqndg0=
github.com/mjl-/sconf v0.0.4 h1:uyfn4vv5qOULSgiwQsPbbgkiONKnMFMsSOhsHfAiYwI=
github.com/mjl-/sconf v0.0.4/go.mod h1:ezf7YOn7gtClo8y71SqgZKaEkyMQ5Te7vkv4PmTTfwM=
github.com/mjl-/sherpa v0.6.5 h1:d90uG/j8fw+2M+ohCTAcVwTSUURGm8ktYDScJO1nKog=
github.com/mjl-/sherpa v0.6.5/go.mod h1:dSpAOdgpwdqQZ72O4n3EHo/tR68eKyan8tYYraUMPNc=
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 h1:jWpvCLoY8Z/e3VKvlsiIGKtc+UG6U5vzxaoagmhXfyg=
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0/go.mod h1:QUyp042oQthUoa9bqDv0ER0wrtXnBruoNd7aNjkbP+k=
github.com/mjl-/adns v0.0.0-20250321173553-ab04b05bdfea h1:8dftsVL1tHhRksXzFZRhSJ7gSlcy/t87Nvucs3JnTGE=
github.com/mjl-/adns v0.0.0-20250321173553-ab04b05bdfea/go.mod h1:rWZMqGA2HoBm5b5q/A5J8u1sSVuEYh6zBz9tMoVs+RU=
github.com/mjl-/autocert v0.0.0-20250321204043-abab2b936e31 h1:6MFGOLPGf6VzHWkKv8waSzJMMS98EFY2LVKPRHffCyo=
github.com/mjl-/autocert v0.0.0-20250321204043-abab2b936e31/go.mod h1:taMFU86abMxKLPV4Bynhv8enbYmS67b8LG80qZv2Qus=
github.com/mjl-/bstore v0.0.9 h1:j8HVXL10Arbk4ujeRGwns8gipH1N1TZn853inQ42FgY=
github.com/mjl-/bstore v0.0.9/go.mod h1:xzIpSfcFosgPJ6h+vsdIt0pzCq4i8hhMuHPQJ0aHQhM=
github.com/mjl-/flate v0.0.0-20250221133712-6372d09eb978 h1:Eg5DfI3/00URzGErujKus6a3O0kyXzF8vjoDZzH/gig=
github.com/mjl-/flate v0.0.0-20250221133712-6372d09eb978/go.mod h1:QBkFtjai3AiQQuUu7pVh6PA06Vd3oa68E+vddf/UBOs=
github.com/mjl-/sconf v0.0.7 h1:bdBcSFZCDFMm/UdBsgNCsjkYmKrSgYwp7rAOoufwHe4=
github.com/mjl-/sconf v0.0.7/go.mod h1:uF8OdWtLT8La3i4ln176i1pB0ps9pXGCaABEU55ZkE0=
github.com/mjl-/sherpa v0.6.7 h1:C5F8XQdV5nCuS4fvB+ye/ziUQrajEhOoj/t2w5T14BY=
github.com/mjl-/sherpa v0.6.7/go.mod h1:dSpAOdgpwdqQZ72O4n3EHo/tR68eKyan8tYYraUMPNc=
github.com/mjl-/sherpadoc v0.0.0-20190505200843-c0a7f43f5f1d/go.mod h1:5khTKxoKKNXcB8bkVUO6GlzC7PFtMmkHq578lPbmnok=
github.com/mjl-/sherpadoc v0.0.10 h1:tvRVd37IIGg70ZmNkNKNnjDSPtKI5/DdEIukMkWtZYE=
github.com/mjl-/sherpadoc v0.0.10/go.mod h1:vh5zcsk3j/Tvm725EY+unTZb3EZcZcpiEQzrODSa6+I=
github.com/mjl-/sherpadoc v0.0.16 h1:BdlFNXfnTaA7qO54kof4xpNFJxYBTY0cIObRk7QAP6M=
github.com/mjl-/sherpadoc v0.0.16/go.mod h1:vh5zcsk3j/Tvm725EY+unTZb3EZcZcpiEQzrODSa6+I=
github.com/mjl-/sherpaprom v0.0.2 h1:1dlbkScsNafM5jURI44uiWrZMSwfZtcOFEEq7vx2C1Y=
github.com/mjl-/sherpaprom v0.0.2/go.mod h1:cl5nMNOvqhzMiQJ2FzccQ9ReivjHXe53JhOVkPfSvw4=
github.com/mjl-/xfmt v0.0.0-20190521151243-39d9c00752ce h1:oyFmIHo3GLWZzb0odAzN9QUy0MTW6P8JaNRnNVGCBCk=
github.com/mjl-/xfmt v0.0.0-20190521151243-39d9c00752ce/go.mod h1:DIEOLmETMQHHr4OgwPG7iC37rDiN9MaZIZxNm5hBtL8=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/mjl-/sherpats v0.0.6 h1:2lSoJbb+jkjLOdlvoMxItq0QQrrnkH+rnm3PMRfpbmA=
github.com/mjl-/sherpats v0.0.6/go.mod h1:MoNZJtLmu8oCZ4Ocv5vZksENN4pp6/SJMlg9uTII4KA=
github.com/mjl-/xfmt v0.0.2 h1:6dLgd6U3bmDJKtTxsaSYYyMaORoO4hKBAJo4XKkPRko=
github.com/mjl-/xfmt v0.0.2/go.mod h1:DIEOLmETMQHHr4OgwPG7iC37rDiN9MaZIZxNm5hBtL8=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
github.com/prometheus/client_golang v1.18.0 h1:HzFfmkOzH5Q8L8G+kSJKUx5dtG87sewO+FoDDqP5Tbk=
github.com/prometheus/client_golang v1.18.0/go.mod h1:T+GXkCk5wSJyOqMIzVgvvjFDlkOQntgjkJWKrN5txjA=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw=
github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI=
github.com/prometheus/common v0.3.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.37.0 h1:ccBbHCgIiT9uSoFY0vX8H3zsNR5eLt17/RQLUvn8pXE=
github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/common v0.45.0 h1:2BGz0eBc2hdMDLnO/8n0jeB3oPrt2D08CekT0lneoxM=
github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGyv9MZjVOJsY=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190503130316-740c07785007/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo=
github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.7 h1:j+zJOnnEjF/kyHlDDgGnVL/AIqIJPq8UoB2GSNfkUfQ=
go.etcd.io/bbolt v1.3.7/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
go.etcd.io/bbolt v1.3.11 h1:yGEzV1wPz2yVCLsD8ZAiGHhHVlczyC9d1rP43/VCRJ0=
go.etcd.io/bbolt v1.3.11/go.mod h1:dksAq7YMXoljX0xu6VF5DMZGbhYYoLUalEiSySYAS4I=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.8.0 h1:pd9TJtTueMTVQXzk8E2XESSMQDj/U7OUu0PqJqPXQjQ=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/mod v0.8.0 h1:LUYupSeNrTNCGzR/hVBk2NHZO4hXcVaW1k4Qx7rjPx8=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.9.0 h1:aWJ/m6xSmxWBx+V0XRHTlrYrPG56jKsLdTFmsSsCzOM=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0 h1:3jlCCIQZPdOYu1h8BkNvLz8Kgwtae2cagcG/VamtZRU=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.6.0 h1:BOw41kyTf3PuCW1pVQf8+Cyg8pMlkYB1oo9iJ6D/lKM=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.32.0 h1:Q7N1vhpkQv7ybVzLFtTjvQya2ewbwNDZzUgfXGqtMWU=
golang.org/x/tools v0.32.0/go.mod h1:ZxrU41P/wAbZD8EDa6dDCa6XfpkhJ7HFMjHJXfBDu8s=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
rsc.io/qr v0.2.0 h1:6vBLea5/NRMVTz8V66gipeLycZMl/+UlFmk8DvqQ6WY=
rsc.io/qr v0.2.0/go.mod h1:IF+uZjkb9fqyeF/4tlBoynqmQxUoPfWEKh921coOuXs=

View File

@ -1,356 +0,0 @@
package http
import (
"archive/tar"
"archive/zip"
"compress/gzip"
"context"
"encoding/base64"
"encoding/json"
"errors"
"io"
"net"
"net/http"
"os"
"strings"
"time"
_ "embed"
"github.com/mjl-/sherpa"
"github.com/mjl-/sherpaprom"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/metrics"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/moxvar"
"github.com/mjl-/mox/store"
)
//go:embed accountapi.json
var accountapiJSON []byte
//go:embed account.html
var accountHTML []byte
var accountDoc = mustParseAPI(accountapiJSON)
var accountSherpaHandler http.Handler
func init() {
collector, err := sherpaprom.NewCollector("moxaccount", nil)
if err != nil {
xlog.Fatalx("creating sherpa prometheus collector", err)
}
accountSherpaHandler, err = sherpa.NewHandler("/api/", moxvar.Version, Account{}, &accountDoc, &sherpa.HandlerOpts{Collector: collector, AdjustFunctionNames: "none"})
if err != nil {
xlog.Fatalx("sherpa handler", err)
}
}
// Account exports web API functions for the account web interface. All its
// methods are exported under api/. Function calls require valid HTTP
// Authentication credentials of a user.
type Account struct{}
// check http basic auth, returns account name if valid, and writes http response
// and returns empty string otherwise.
func checkAccountAuth(ctx context.Context, log *mlog.Log, w http.ResponseWriter, r *http.Request) string {
authResult := "error"
start := time.Now()
var addr *net.TCPAddr
defer func() {
metrics.AuthenticationInc("httpaccount", "httpbasic", authResult)
if authResult == "ok" && addr != nil {
mox.LimiterFailedAuth.Reset(addr.IP, start)
}
}()
var err error
addr, err = net.ResolveTCPAddr("tcp", r.RemoteAddr)
if err != nil {
log.Errorx("parsing remote address", err, mlog.Field("addr", r.RemoteAddr))
}
if addr != nil && !mox.LimiterFailedAuth.Add(addr.IP, start, 1) {
metrics.AuthenticationRatelimitedInc("httpaccount")
http.Error(w, "429 - too many auth attempts", http.StatusTooManyRequests)
return ""
}
// store.OpenEmailAuth has an auth cache, so we don't bcrypt for every auth attempt.
if auth := r.Header.Get("Authorization"); auth == "" || !strings.HasPrefix(auth, "Basic ") {
} else if authBuf, err := base64.StdEncoding.DecodeString(strings.TrimPrefix(auth, "Basic ")); err != nil {
log.Debugx("parsing base64", err)
} else if t := strings.SplitN(string(authBuf), ":", 2); len(t) != 2 {
log.Debug("bad user:pass form")
} else if acc, err := store.OpenEmailAuth(t[0], t[1]); err != nil {
if errors.Is(err, store.ErrUnknownCredentials) {
authResult = "badcreds"
}
log.Errorx("open account", err)
} else {
authResult = "ok"
accName := acc.Name
err := acc.Close()
log.Check(err, "closing account")
return accName
}
// note: browsers don't display the realm to prevent users getting confused by malicious realm messages.
w.Header().Set("WWW-Authenticate", `Basic realm="mox account - login with email address and password"`)
http.Error(w, "http 401 - unauthorized - mox account - login with email address and password", http.StatusUnauthorized)
return ""
}
func accountHandle(w http.ResponseWriter, r *http.Request) {
ctx := context.WithValue(r.Context(), mlog.CidKey, mox.Cid())
log := xlog.WithContext(ctx).Fields(mlog.Field("userauth", ""))
// Without authentication. The token is unguessable.
if r.URL.Path == "/importprogress" {
if r.Method != "GET" {
http.Error(w, "405 - method not allowed - get required", http.StatusMethodNotAllowed)
return
}
q := r.URL.Query()
token := q.Get("token")
if token == "" {
http.Error(w, "400 - bad request - missing token", http.StatusBadRequest)
return
}
flusher, ok := w.(http.Flusher)
if !ok {
log.Error("internal error: ResponseWriter not a http.Flusher")
http.Error(w, "500 - internal error - cannot sync to http connection", 500)
return
}
l := importListener{token, make(chan importEvent, 100), make(chan bool, 1)}
importers.Register <- &l
ok = <-l.Register
if !ok {
http.Error(w, "400 - bad request - unknown token, import may have finished more than a minute ago", http.StatusBadRequest)
return
}
defer func() {
importers.Unregister <- &l
}()
h := w.Header()
h.Set("Content-Type", "text/event-stream")
h.Set("Cache-Control", "no-cache")
_, err := w.Write([]byte(": keepalive\n\n"))
if err != nil {
return
}
flusher.Flush()
cctx := r.Context()
for {
select {
case e := <-l.Events:
_, err := w.Write(e.SSEMsg)
flusher.Flush()
if err != nil {
return
}
case <-cctx.Done():
return
}
}
}
accName := checkAccountAuth(ctx, log, w, r)
if accName == "" {
// Response already sent.
return
}
switch r.URL.Path {
case "/":
if r.Method != "GET" {
http.Error(w, "405 - method not allowed - post required", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "text/html; charset=utf-8")
w.Header().Set("Cache-Control", "no-cache; max-age=0")
// We typically return the embedded account.html, but during development it's handy
// to load from disk.
f, err := os.Open("http/account.html")
if err == nil {
defer f.Close()
_, _ = io.Copy(w, f)
} else {
_, _ = w.Write(accountHTML)
}
case "/mail-export-maildir.tgz", "/mail-export-maildir.zip", "/mail-export-mbox.tgz", "/mail-export-mbox.zip":
maildir := strings.Contains(r.URL.Path, "maildir")
tgz := strings.Contains(r.URL.Path, ".tgz")
acc, err := store.OpenAccount(accName)
if err != nil {
log.Errorx("open account for export", err)
http.Error(w, "500 - internal server error", http.StatusInternalServerError)
return
}
defer func() {
err := acc.Close()
log.Check(err, "closing account")
}()
var archiver store.Archiver
if tgz {
// Don't tempt browsers to "helpfully" decompress.
w.Header().Set("Content-Type", "application/octet-stream")
gzw := gzip.NewWriter(w)
defer func() {
_ = gzw.Close()
}()
archiver = store.TarArchiver{Writer: tar.NewWriter(gzw)}
} else {
w.Header().Set("Content-Type", "application/zip")
archiver = store.ZipArchiver{Writer: zip.NewWriter(w)}
}
defer func() {
err := archiver.Close()
log.Check(err, "exporting mail close")
}()
if err := store.ExportMessages(log, acc.DB, acc.Dir, archiver, maildir, ""); err != nil {
log.Errorx("exporting mail", err)
}
case "/import":
if r.Method != "POST" {
http.Error(w, "405 - method not allowed - post required", http.StatusMethodNotAllowed)
return
}
f, _, err := r.FormFile("file")
if err != nil {
if errors.Is(err, http.ErrMissingFile) {
http.Error(w, "400 - bad request - missing file", http.StatusBadRequest)
} else {
http.Error(w, "500 - internal server error - "+err.Error(), http.StatusInternalServerError)
}
return
}
defer func() {
err := f.Close()
log.Check(err, "closing form file")
}()
skipMailboxPrefix := r.FormValue("skipMailboxPrefix")
tmpf, err := os.CreateTemp("", "mox-import")
if err != nil {
http.Error(w, "500 - internal server error - "+err.Error(), http.StatusInternalServerError)
return
}
defer func() {
if tmpf != nil {
err := tmpf.Close()
log.Check(err, "closing uploaded file")
}
}()
if err := os.Remove(tmpf.Name()); err != nil {
log.Errorx("removing temporary file", err)
http.Error(w, "500 - internal server error - "+err.Error(), http.StatusInternalServerError)
return
}
if _, err := io.Copy(tmpf, f); err != nil {
log.Errorx("copying import to temporary file", err)
http.Error(w, "500 - internal server error - "+err.Error(), http.StatusInternalServerError)
return
}
token, err := importStart(log, accName, tmpf, skipMailboxPrefix)
if err != nil {
log.Errorx("starting import", err)
http.Error(w, "500 - internal server error - "+err.Error(), http.StatusInternalServerError)
return
}
tmpf = nil // importStart is now responsible for closing.
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]string{"ImportToken": token})
default:
if strings.HasPrefix(r.URL.Path, "/api/") {
accountSherpaHandler.ServeHTTP(w, r.WithContext(context.WithValue(ctx, authCtxKey, accName)))
return
}
http.NotFound(w, r)
}
}
type ctxKey string
var authCtxKey ctxKey = "account"
// SetPassword saves a new password for the account, invalidating the previous password.
// Sessions are not interrupted, and will keep working. New login attempts must use the new password.
// Password must be at least 8 characters.
func (Account) SetPassword(ctx context.Context, password string) {
if len(password) < 8 {
panic(&sherpa.Error{Code: "user:error", Message: "password must be at least 8 characters"})
}
accountName := ctx.Value(authCtxKey).(string)
acc, err := store.OpenAccount(accountName)
xcheckf(ctx, err, "open account")
defer func() {
err := acc.Close()
xlog.Check(err, "closing account")
}()
err = acc.SetPassword(password)
xcheckf(ctx, err, "setting password")
}
// Destinations returns the default domain, and the destinations (keys are email
// addresses, or localparts to the default domain).
// todo: replace with a function that returns the whole account, when sherpadoc understands unnamed struct fields.
func (Account) Destinations(ctx context.Context) (dns.Domain, map[string]config.Destination) {
accountName := ctx.Value(authCtxKey).(string)
accConf, ok := mox.Conf.Account(accountName)
if !ok {
xcheckf(ctx, errors.New("not found"), "looking up account")
}
return accConf.DNSDomain, accConf.Destinations
}
// DestinationSave updates a destination.
// OldDest is compared against the current destination. If it does not match, an
// error is returned. Otherwise newDest is saved and the configuration reloaded.
func (Account) DestinationSave(ctx context.Context, destName string, oldDest, newDest config.Destination) {
accountName := ctx.Value(authCtxKey).(string)
accConf, ok := mox.Conf.Account(accountName)
if !ok {
xcheckf(ctx, errors.New("not found"), "looking up account")
}
curDest, ok := accConf.Destinations[destName]
if !ok {
xcheckf(ctx, errors.New("not found"), "looking up destination")
}
if !curDest.Equal(oldDest) {
xcheckf(ctx, errors.New("modified"), "checking stored destination")
}
// Keep fields we manage.
newDest.DMARCReports = curDest.DMARCReports
newDest.TLSReports = curDest.TLSReports
err := mox.DestinationSave(ctx, accountName, destName, newDest)
xcheckf(ctx, err, "saving destination")
}
// ImportAbort aborts an import that is in progress. If the import exists and isn't
// finished, no changes will have been made by the import.
func (Account) ImportAbort(ctx context.Context, importToken string) error {
req := importAbortRequest{importToken, make(chan error)}
importers.Abort <- req
return <-req.Response
}
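
Aside (illustration, not part of the change above): the Account methods are registered with the sherpa handler in init() and, as the doc comment on Account notes, are reachable over HTTP with Basic authentication. A minimal client sketch, assuming the usual sherpa convention of POSTing a JSON body {"params": [...]} to api/<FunctionName>, and assuming the account interface is served at https://mox.example/ (both the base URL and the request shape are assumptions here, not taken from this diff):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Call Account.SetPassword through the sherpa API. The {"params": [...]}
	// body shape and the base URL are assumptions for this sketch.
	body, err := json.Marshal(map[string]any{"params": []any{"new-password-1234"}})
	if err != nil {
		panic(err)
	}
	req, err := http.NewRequest("POST", "https://mox.example/api/SetPassword", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("mjl@mox.example", "current-password") // email address + current password
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // a JSON "error" field in the body indicates a sherpa-level error
}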

View File

@ -1,673 +0,0 @@
<!doctype html>
<html>
<head>
<title>Mox Account</title>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<style>
body, html { padding: 1em; font-size: 16px; }
* { font-size: inherit; font-family: ubuntu, lato, sans-serif; margin: 0; padding: 0; box-sizing: border-box; }
h1, h2, h3, h4 { margin-bottom: 1ex; }
h1 { font-size: 1.2rem; }
h2 { font-size: 1.1rem; }
h3, h4 { font-size: 1rem; }
ul { padding-left: 1rem; }
.literal { background-color: #fdfdfd; padding: .5em 1em; border: 1px solid #eee; border-radius: 4px; white-space: pre-wrap; font-family: monospace; font-size: 15px; tab-size: 4; }
table td, table th { padding: .2em .5em; }
table > tbody > tr:nth-child(odd) { background-color: #f8f8f8; }
.text { max-width: 50em; }
p { margin-bottom: 1em; max-width: 50em; }
[title] { text-decoration: underline; text-decoration-style: dotted; }
fieldset { border: 0; }
#page { opacity: 1; animation: fadein 0.15s ease-in; }
#page.loading { opacity: 0.1; animation: fadeout 1s ease-out; }
@keyframes fadein { 0% { opacity: 0 } 100% { opacity: 1 } }
@keyframes fadeout { 0% { opacity: 1 } 100% { opacity: 0.1 } }
</style>
<script src="api/sherpa.js"></script>
<script>api._sherpa.baseurl = 'api/'</script>
</head>
<body>
<div id="page">Loading...</div>
<script>
const [dom, style, attr, prop] = (function() {
function _domKids(e, ...kl) {
kl.forEach(k => {
if (typeof k === 'string' || k instanceof String) {
e.appendChild(document.createTextNode(k))
} else if (k instanceof Node) {
e.appendChild(k)
} else if (Array.isArray(k)) {
_domKids(e, ...k)
} else if (typeof k === 'function') {
if (!k.name) {
throw new Error('function without name', k)
}
e.addEventListener(k.name, k)
} else if (typeof k === 'object' && k !== null) {
if (k.root) {
e.appendChild(k.root)
return
}
for (const key in k) {
const value = k[key]
if (key === '_prop') {
for (const prop in value) {
e[prop] = value[prop]
}
} else if (key === '_attr') {
for (const prop in value) {
e.setAttribute(prop, value[prop])
}
} else if (key === '_listen') {
e.addEventListener(...value)
} else {
e.style[key] = value
}
}
} else {
console.log('bad kid', k)
throw new Error('bad kid')
}
})
}
const _dom = (kind, ...kl) => {
const t = kind.split('.')
const e = document.createElement(t[0])
for (let i = 1; i < t.length; i++) {
e.classList.add(t[i])
}
_domKids(e, kl)
return e
}
_dom._kids = function(e, ...kl) {
while(e.firstChild) {
e.removeChild(e.firstChild)
}
_domKids(e, kl)
}
const dom = new Proxy(_dom, {
get: function(dom, prop) {
if (prop in dom) {
return dom[prop]
}
const fn = (...kl) => _dom(prop, kl)
dom[prop] = fn
return fn
},
apply: function(target, that, args) {
if (args.length === 1 && typeof args[0] === 'object' && !Array.isArray(args[0])) {
return {_attr: args[0]}
}
return _dom(...args)
},
})
const style = x => x
const attr = x => { return {_attr: x} }
const prop = x => { return {_prop: x} }
return [dom, style, attr, prop]
})()
const link = (href, anchorOpt) => dom.a(attr({href: href, rel: 'noopener noreferrer'}), anchorOpt || href)
const crumblink = (text, link) => dom.a(text, attr({href: link}))
const crumbs = (...l) => [dom.h1(l.map((e, index) => index === 0 ? e : [' / ', e])), dom.br()]
const footer = dom.div(
style({marginTop: '6ex', opacity: 0.75}),
link('https://github.com/mjl-/mox', 'mox'),
' ',
api._sherpa.version,
)
const domainName = d => {
return d.Unicode || d.ASCII
}
const domainString = d => {
if (d.Unicode) {
return d.Unicode+" ("+d.ASCII+")"
}
return d.ASCII
}
const box = (color, ...l) => [
dom.div(
style({
display: 'inline-block',
padding: '.25em .5em',
backgroundColor: color,
borderRadius: '3px',
margin: '.5ex 0',
}),
l,
),
dom.br(),
]
const green = '#1dea20'
const yellow = '#ffe400'
const red = '#ff7443'
const blue = '#8bc8ff'
const index = async () => {
const [domain, destinations] = await api.Destinations()
let passwordForm, passwordFieldset, password1, password2, passwordHint
let importForm, importFieldset, mailboxFile, mailboxFileHint, mailboxPrefix, mailboxPrefixHint, importProgress, importAbortBox, importAbort
const importTrack = async (token) => {
const importConnection = dom.div('Waiting for updates...')
importProgress.appendChild(importConnection)
let countsTbody
let counts = {} // mailbox -> elem
let problems // element
await new Promise((resolve, reject) => {
const eventSource = new window.EventSource('importprogress?token=' + encodeURIComponent(token))
eventSource.addEventListener('open', function(e) {
console.log('eventsource open', {e})
dom._kids(importConnection, dom.div('Waiting for updates, connected...'))
dom._kids(importAbortBox,
importAbort=dom.button('Abort import', attr({title: 'If the import is not yet finished, it can be aborted and no messages will have been imported.'}), async function click(e) {
try {
await api.ImportAbort(token)
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
}
// On success, the event source will get an aborted notification and shut down the connection.
})
)
})
eventSource.addEventListener('error', function(e) {
console.log('eventsource error', {e})
dom._kids(importConnection, box(red, 'Connection error'))
reject({message: 'Connection error'})
})
eventSource.addEventListener('count', (e) => {
const data = JSON.parse(e.data) // {Mailbox: ..., Count: ...}
console.log('import count event', {e, data})
if (!countsTbody) {
importProgress.appendChild(
dom.div(
dom.br(),
dom.h3('Importing mailboxes and messages...'),
dom.table(
dom.thead(
dom.tr(dom.th('Mailbox'), dom.th('Messages')),
),
countsTbody=dom.tbody(),
),
)
)
}
let elem = counts[data.Mailbox]
if (!elem) {
countsTbody.appendChild(
dom.tr(
dom.td(data.Mailbox),
elem=dom.td(style({textAlign: 'right'}), ''+data.Count),
),
)
counts[data.Mailbox] = elem
}
dom._kids(elem, ''+data.Count)
})
eventSource.addEventListener('problem', (e) => {
const data = JSON.parse(e.data) // {Message: ...}
console.log('import problem event', {e, data})
if (!problems) {
importProgress.appendChild(
dom.div(
dom.br(),
dom.h3('Problems during import'),
problems=dom.div(),
),
)
}
problems.appendChild(dom.div(box(yellow, data.Message)))
})
eventSource.addEventListener('done', (e) => {
console.log('import done event', {e})
importProgress.appendChild(dom.div(dom.br(), box(blue, 'Import finished')))
eventSource.close()
dom._kids(importConnection)
dom._kids(importAbortBox)
window.sessionStorage.removeItem('ImportToken')
resolve()
})
eventSource.addEventListener('aborted', function(e) {
console.log('import aborted event', {e})
importProgress.appendChild(dom.div(dom.br(), box(red, 'Import aborted, no messages imported')))
eventSource.close()
dom._kids(importConnection)
dom._kids(importAbortBox)
window.sessionStorage.removeItem('ImportToken')
reject({message: 'Import aborted'})
})
})
}
const page = document.getElementById('page')
dom._kids(page,
crumbs('Mox Account'),
dom.p('NOTE: Not all account settings can be configured through these pages yet. See the configuration file for more options.'),
dom.div(
'Default domain: ',
domain.ASCII ? domainString(domain) : '(none)',
),
dom.br(),
dom.h2('Addresses'),
dom.ul(
Object.entries(destinations).sort().map(t =>
dom.li(
dom.a(t[0], attr({href: '#destinations/'+t[0]})),
t[0].startsWith('@') ? ' (catchall)' : [],
),
),
),
dom.br(),
dom.h2('Change password'),
passwordForm=dom.form(
passwordFieldset=dom.fieldset(
dom.label(
style({display: 'inline-block'}),
'New password',
dom.br(),
password1=dom.input(attr({type: 'password', required: ''}), function focus() {
passwordHint.style.display = ''
}),
),
' ',
dom.label(
style({display: 'inline-block'}),
'New password repeat',
dom.br(),
password2=dom.input(attr({type: 'password', required: ''})),
),
' ',
dom.button('Change password'),
),
passwordHint=dom.div(
style({display: 'none', marginTop: '.5ex'}),
dom.button('Generate random password', attr({type: 'button'}), function click(e) {
e.preventDefault()
let b = new Uint8Array(1)
let s = ''
const chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*-_;:,<.>/'
while (s.length < 12) {
self.crypto.getRandomValues(b)
if (Math.ceil(b[0]/chars.length)*chars.length > 255) {
continue // Prevent bias.
}
s += chars[b[0]%chars.length]
}
password1.type = 'text'
password2.type = 'text'
password1.value = s
password2.value = s
}),
dom('div.text',
box(yellow, 'Important: Bots will try to brute-force your password. Connections with failed authentication attempts will be rate limited, but attackers WILL find weak passwords. If your account is compromised, spammers are likely to abuse your system, spamming your address and the wider internet in your name. So please pick a random, unguessable password, preferably at least 12 characters.'),
),
),
async function submit(e) {
e.stopPropagation()
e.preventDefault()
if (!password1.value || password1.value !== password2.value) {
window.alert('Passwords do not match.')
return
}
passwordFieldset.disabled = true
try {
await api.SetPassword(password1.value)
window.alert('Password has been changed.')
passwordForm.reset()
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
} finally {
passwordFieldset.disabled = false
}
},
),
dom.br(),
dom.h2('Export'),
dom.p('Export all messages in all mailboxes. In maildir or mbox format, as .zip or .tgz file.'),
dom.ul(
dom.li(dom.a('mail-export-maildir.tgz', attr({href: 'mail-export-maildir.tgz'}))),
dom.li(dom.a('mail-export-maildir.zip', attr({href: 'mail-export-maildir.zip'}))),
dom.li(dom.a('mail-export-mbox.tgz', attr({href: 'mail-export-mbox.tgz'}))),
dom.li(dom.a('mail-export-mbox.zip', attr({href: 'mail-export-mbox.zip'}))),
),
dom.br(),
dom.h2('Import'),
dom.p('Import messages from a .zip or .tgz file with maildirs and/or mbox files.'),
importForm=dom.form(
async function submit(e) {
e.preventDefault()
e.stopPropagation()
const request = () => {
return new Promise((resolve, reject) => {
// Browsers can do everything. Except show a progress bar while uploading...
let progressBox, progressPercentage, progressBar
dom._kids(importProgress,
progressBox=dom.div(
dom.div('Uploading... ', progressPercentage=dom.span()),
),
)
importProgress.style.display = ''
const xhr = new window.XMLHttpRequest()
xhr.open('POST', 'import', true)
xhr.upload.addEventListener('progress', (e) => {
if (!e.lengthComputable) {
return
}
const pct = Math.floor(100*e.loaded/e.total)
dom._kids(progressPercentage, pct+'%')
})
xhr.addEventListener('load', () => {
console.log('upload done', {xhr: xhr, status: xhr.status})
if (xhr.status !== 200) {
reject({message: 'status '+xhr.status})
return
}
let resp
try {
resp = JSON.parse(xhr.responseText)
} catch (err) {
reject({message: 'parsing response json: '+err.message})
return
}
resolve(resp)
})
xhr.addEventListener('error', (e) => reject({message: 'upload error', event: e}))
xhr.addEventListener('abort', (e) => reject({message: 'upload aborted', event: e}))
xhr.send(new window.FormData(importForm))
})
}
try {
const p = request()
importFieldset.disabled = true
const result = await p
try {
window.sessionStorage.setItem('ImportToken', result.ImportToken)
} catch (err) {
console.log('storing import token in session storage', {err})
// Ignore error, could be some browser security thing like private browsing.
}
await importTrack(result.ImportToken)
} catch (err) {
console.log({err})
window.alert('Error: '+err.message)
} finally {
importFieldset.disabled = false
}
},
importFieldset=dom.fieldset(
dom.div(
style({marginBottom: '1ex'}),
dom.label(
dom.div(style({marginBottom: '.5ex'}), 'File'),
mailboxFile=dom.input(attr({type: 'file', required: '', name: 'file'}), function focus() {
mailboxFileHint.style.display = ''
}),
),
mailboxFileHint=dom.p(style({display: 'none', fontStyle: 'italic', marginTop: '.5ex'}), 'This file must either be a zip file or a gzipped tar file with mbox and/or maildir mailboxes. For maildirs, an optional file "dovecot-keywords" is read for additional keywords, like Forwarded/Junk/NotJunk. If an imported mailbox already exists by name, messages are added to the existing mailbox. If a mailbox does not yet exist it will be created.'),
),
dom.div(
style({marginBottom: '1ex'}),
dom.label(
dom.div(style({marginBottom: '.5ex'}), 'Skip mailbox prefix (optional)'),
mailboxPrefix=dom.input(attr({name: 'skipMailboxPrefix'}), function focus() {
mailboxPrefixHint.style.display = ''
}),
),
mailboxPrefixHint=dom.p(style({display: 'none', fontStyle: 'italic', marginTop: '.5ex'}), 'If set, any mbox/maildir path with this prefix will have it stripped before importing. For example, if all mailboxes are in a directory "Takeout", specify that path in the field above so mailboxes like "Takeout/Inbox.mbox" are imported into a mailbox called "Inbox" instead of "Takeout/Inbox".'),
),
dom.div(
dom.button('Upload and import'),
dom.p(style({fontStyle: 'italic', marginTop: '.5ex'}), 'The file is uploaded first, then its messages are imported. Importing is done in a transaction; you can abort the entire import before it is finished.'),
),
),
),
importAbortBox=dom.div(), // Outside the fieldset because the fieldset gets disabled; above the progress area because problems may quickly scroll it down.
importProgress=dom.div(
style({display: 'none'}),
),
footer,
)
// Try to show the progress of an earlier import session. The user may have just
// refreshed the browser.
let importToken
try {
importToken = window.sessionStorage.getItem('ImportToken')
} catch (err) {
console.log('looking up ImportToken in session storage', {err})
return
}
if (!importToken) {
return
}
importFieldset.disabled = true
dom._kids(importProgress,
dom.div(
dom.div('Reconnecting to import...'),
),
)
importProgress.style.display = ''
importTrack(importToken)
.catch((err) => {
if (window.confirm('Error reconnecting to import. Remove this import session?')) {
window.sessionStorage.removeItem('ImportToken')
dom._kids(importProgress)
importProgress.style.display = 'none'
}
})
.finally(() => {
importFieldset.disabled = false
})
}
const destination = async (name) => {
const [domain, destinations] = await api.Destinations()
let dest = destinations[name]
if (!dest) {
throw new Error('destination not found')
}
let rulesetsTbody = dom.tbody()
let rulesetsRows = []
const addRulesetsRow = (rs) => {
let headersCell = dom.td()
let headers = [] // Holds objects: {key, value, root}
const addHeader = (k, v) => {
let h = {}
h.root = dom.div(
h.key=dom.input(attr({value: k})),
' ',
h.value=dom.input(attr({value: v})),
' ',
dom.button('-', style({width: '1.5em'}), function click(e) {
h.root.remove()
headers = headers.filter(x => x !== h)
if (headers.length === 0) {
const b = dom.button('+', style({width: '1.5em'}), function click(e) {
e.target.remove()
addHeader('', '')
})
headersCell.appendChild(dom.div(style({textAlign: 'right'}), b))
}
}),
' ',
dom.button('+', style({width: '1.5em'}), function click(e) {
addHeader('', '')
}),
)
headers.push(h)
headersCell.appendChild(h.root)
}
Object.entries(rs.HeadersRegexp || {}).sort().forEach(t =>
addHeader(t[0], t[1])
)
if (Object.entries(rs.HeadersRegexp || {}).length === 0) {
const b = dom.button('+', style({width: '1.5em'}), function click(e) {
e.target.remove()
addHeader('', '')
})
headersCell.appendChild(dom.div(style({textAlign: 'right'}), b))
}
let row = {headers}
row.root=dom.tr(
dom.td(row.SMTPMailFromRegexp=dom.input(attr({value: rs.SMTPMailFromRegexp || ''}))),
dom.td(row.VerifiedDomain=dom.input(attr({value: rs.VerifiedDomain || ''}))),
headersCell,
dom.td(row.ListAllowDomain=dom.input(attr({value: rs.ListAllowDomain || ''}))),
dom.td(row.Mailbox=dom.input(attr({value: rs.Mailbox || ''}))),
dom.td(
dom.button('Remove ruleset', function click(e) {
row.root.remove()
rulesetsRows = rulesetsRows.filter(e => e !== row)
}),
),
)
rulesetsRows.push(row)
rulesetsTbody.appendChild(row.root)
}
(dest.Rulesets || []).forEach(rs => {
addRulesetsRow(rs)
})
let defaultMailbox
let saveButton
const page = document.getElementById('page')
dom._kids(page,
crumbs(
crumblink('Mox Account', '#'),
'Destination ' + name,
),
dom.div(
dom.span('Default mailbox', attr({title: 'Default mailbox where email for this recipient is delivered to if it does not match any ruleset. Default is Inbox.'})),
dom.br(),
defaultMailbox=dom.input(attr({value: dest.Mailbox, placeholder: 'Inbox'})),
),
dom.br(),
dom.h2('Rulesets'),
dom.p('Incoming messages are checked against the rulesets. If a ruleset matches, the message is delivered to the mailbox configured for the ruleset instead of to the default mailbox.'),
dom.p('The "List allow domain" does not affect the matching, but skips the regular spam checks if one of the verified domains is a (sub)domain of the domain mentioned here.'),
dom.table(
dom.thead(
dom.tr(
dom.th('SMTP "MAIL FROM" regexp', attr({title: 'Matches if this regular expression matches (a substring of) the SMTP MAIL FROM address (not the message From-header). E.g. user@example.org.'})),
dom.th('Verified domain', attr({title: 'Matches if this domain matches an SPF- and/or DKIM-verified (sub)domain.'})),
dom.th('Headers regexp', attr({title: 'Matches if these header field/value regular expressions all match (substrings of) the message headers. Header fields and values are converted to lower case before matching. Whitespace is trimmed from the value before matching. A header field can occur multiple times in a message; only one instance has to match. For mailing lists, you could match on ^list-id$ with the value typically the mailing list address in angled brackets with @ replaced with a dot, e.g. <name\\.lists\\.example\\.org>.'})),
dom.th('List allow domain', attr({title: "Influence the spam filtering, this does not change whether this ruleset applies to a message. If this domain matches an SPF- and/or DKIM-verified (sub)domain, the message is accepted without further spam checks, such as a junk filter or DMARC reject evaluation. DMARC rejects should not apply for mailing lists that are not configured to rewrite the From-header of messages that don't have a passing DKIM signature of the From-domain. Otherwise, by rejecting messages, you may be automatically unsubscribed from the mailing list. The assumption is that mailing lists do their own spam filtering/moderation."})),
dom.th('Mailbox', attr({title: 'Mailbox to deliver to if this ruleset matches.'})),
dom.th('Action'),
)
),
rulesetsTbody,
dom.tfoot(
dom.tr(
dom.td(attr({colspan: '5'})),
dom.td(
dom.button('Add ruleset', function click(e) {
addRulesetsRow({})
}),
),
),
),
),
dom.br(),
saveButton=dom.button('Save', async function click(e) {
saveButton.disabled = true
try {
const newDest = {
Mailbox: defaultMailbox.value,
Rulesets: rulesetsRows.map(row => {
return {
SMTPMailFromRegexp: row.SMTPMailFromRegexp.value,
VerifiedDomain: row.VerifiedDomain.value,
HeadersRegexp: Object.fromEntries(row.headers.map(h => [h.key.value, h.value.value])),
ListAllowDomain: row.ListAllowDomain.value,
Mailbox: row.Mailbox.value,
}
}),
}
page.classList.add('loading')
await api.DestinationSave(name, dest, newDest)
dest = newDest // Set new dest, for if user edits again. Without this, they would get an error that the config has been modified.
} catch (err) {
console.log({err})
window.alert('Error: '+err.message)
return
} finally {
saveButton.disabled = false
page.classList.remove('loading')
}
}),
)
}
const init = async () => {
let curhash
const page = document.getElementById('page')
const hashChange = async () => {
if (curhash === window.location.hash) {
return
}
let h = decodeURIComponent(window.location.hash)
if (h !== '' && h.substring(0, 1) == '#') {
h = h.substring(1)
}
const t = h.split('/')
page.classList.add('loading')
try {
if (h === '') {
await index()
} else if (t[0] === 'destinations' && t.length === 2) {
await destination(t[1])
} else {
dom._kids(page, 'page not found')
}
} catch (err) {
console.log({err})
window.alert('Error: ' + err.message)
window.location.hash = curhash
curhash = window.location.hash
return
}
curhash = window.location.hash
page.classList.remove('loading')
}
window.addEventListener('hashchange', hashChange)
hashChange()
}
window.addEventListener('load', init)
</script>
</body>
</html>
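
Aside (illustration, not part of the change above): the ruleset help text in the page ("SMTP MAIL FROM regexp", "Headers regexp", "List allow domain", "Mailbox") maps onto the Ruleset fields that DestinationSave persists. A hedged sketch of one such destination as a Go value; the field names follow the Destination/Ruleset structs used in this change, while the concrete domain, list and mailbox names are made up for the example:

package main

import (
	"fmt"

	"github.com/mjl-/mox/config"
)

func main() {
	// Example only: deliver a mailing list to "Lists/mox" by matching its
	// List-Id header, and skip spam checks for DKIM/SPF-verified mail from
	// lists.example.org. Values are illustrative assumptions.
	dest := config.Destination{
		Mailbox: "Inbox",
		Rulesets: []config.Ruleset{
			{
				HeadersRegexp:   map[string]string{"^list-id$": `<mox-dev\.lists\.example\.org>`},
				ListAllowDomain: "lists.example.org",
				Mailbox:         "Lists/mox",
			},
		},
	}
	fmt.Printf("%+v\n", dest)
}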

View File

@ -1,181 +0,0 @@
package http
import (
"archive/tar"
"archive/zip"
"bytes"
"compress/gzip"
"context"
"encoding/json"
"io"
"mime/multipart"
"net/http"
"net/http/httptest"
"os"
"path"
"path/filepath"
"strings"
"testing"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/store"
)
func tcheck(t *testing.T, err error, msg string) {
t.Helper()
if err != nil {
t.Fatalf("%s: %s", msg, err)
}
}
func TestAccount(t *testing.T) {
os.RemoveAll("../testdata/httpaccount/data")
mox.ConfigStaticPath = "../testdata/httpaccount/mox.conf"
mox.ConfigDynamicPath = filepath.Join(filepath.Dir(mox.ConfigStaticPath), "domains.conf")
mox.MustLoadConfig(false)
acc, err := store.OpenAccount("mjl")
tcheck(t, err, "open account")
defer acc.Close()
switchDone := store.Switchboard()
defer close(switchDone)
log := mlog.New("store")
test := func(authHdr string, expect string) {
t.Helper()
w := httptest.NewRecorder()
r := httptest.NewRequest("GET", "/ignored", nil)
if authHdr != "" {
r.Header.Add("Authorization", authHdr)
}
ok := checkAccountAuth(context.Background(), log, w, r)
if ok != expect {
t.Fatalf("got %v, expected %v", ok, expect)
}
}
const authOK = "Basic bWpsQG1veC5leGFtcGxlOnRlc3QxMjM0" // mjl@mox.example:test1234
const authBad = "Basic bWpsQG1veC5leGFtcGxlOmJhZHBhc3N3b3Jk" // mjl@mox.example:badpassword
authCtx := context.WithValue(context.Background(), authCtxKey, "mjl")
test(authOK, "") // No password set yet.
Account{}.SetPassword(authCtx, "test1234")
test(authOK, "mjl")
test(authBad, "")
_, dests := Account{}.Destinations(authCtx)
Account{}.DestinationSave(authCtx, "mjl@mox.example", dests["mjl@mox.example"], dests["mjl@mox.example"]) // todo: save modified value and compare it afterwards
go importManage()
// Import mbox/maildir tgz/zip.
testImport := func(filename string, expect int) {
t.Helper()
var reqBody bytes.Buffer
mpw := multipart.NewWriter(&reqBody)
part, err := mpw.CreateFormFile("file", path.Base(filename))
tcheck(t, err, "creating form file")
buf, err := os.ReadFile(filename)
tcheck(t, err, "reading file")
_, err = part.Write(buf)
tcheck(t, err, "write part")
err = mpw.Close()
tcheck(t, err, "close multipart writer")
r := httptest.NewRequest("POST", "/import", &reqBody)
r.Header.Add("Content-Type", mpw.FormDataContentType())
r.Header.Add("Authorization", authOK)
w := httptest.NewRecorder()
accountHandle(w, r)
if w.Code != http.StatusOK {
t.Fatalf("import, got status code %d, expected 200: %s", w.Code, w.Body.Bytes())
}
m := map[string]string{}
if err := json.Unmarshal(w.Body.Bytes(), &m); err != nil {
t.Fatalf("parsing import response: %v", err)
}
token := m["ImportToken"]
l := importListener{token, make(chan importEvent, 100), make(chan bool)}
importers.Register <- &l
if !<-l.Register {
t.Fatalf("register failed")
}
defer func() {
importers.Unregister <- &l
}()
count := 0
loop:
for {
e := <-l.Events
switch x := e.Event.(type) {
case importCount:
count += x.Count
case importProblem:
t.Fatalf("unexpected problem: %q", x.Message)
case importDone:
break loop
case importAborted:
t.Fatalf("unexpected aborted import")
default:
panic("missing case")
}
}
if count != expect {
t.Fatalf("imported %d messages, expected %d", count, expect)
}
}
testImport("../testdata/importtest.mbox.zip", 2)
testImport("../testdata/importtest.maildir.tgz", 2)
testExport := func(httppath string, iszip bool, expectFiles int) {
t.Helper()
r := httptest.NewRequest("GET", httppath, nil)
r.Header.Add("Authorization", authOK)
w := httptest.NewRecorder()
accountHandle(w, r)
if w.Code != http.StatusOK {
t.Fatalf("export, got status code %d, expected 200: %s", w.Code, w.Body.Bytes())
}
var count int
if iszip {
buf := w.Body.Bytes()
zr, err := zip.NewReader(bytes.NewReader(buf), int64(len(buf)))
tcheck(t, err, "reading zip")
for _, f := range zr.File {
if !strings.HasSuffix(f.Name, "/") {
count++
}
}
} else {
gzr, err := gzip.NewReader(w.Body)
tcheck(t, err, "gzip reader")
tr := tar.NewReader(gzr)
for {
h, err := tr.Next()
if err == io.EOF {
break
}
tcheck(t, err, "next file in tar")
if !strings.HasSuffix(h.Name, "/") {
count++
}
_, err = io.Copy(io.Discard, tr)
tcheck(t, err, "reading from tar")
}
}
if count != expectFiles {
t.Fatalf("export, has %d files, expected %d", count, expectFiles)
}
}
testExport("/mail-export-maildir.tgz", false, 6) // 2 mailboxes, each with 2 messages and a dovecot-keyword file
testExport("/mail-export-maildir.zip", true, 6)
testExport("/mail-export-mbox.tgz", false, 2)
testExport("/mail-export-mbox.zip", true, 2)
}

View File

@ -1,181 +0,0 @@
{
"Name": "Account",
"Docs": "Account exports web API functions for the account web interface. All its\nmethods are exported under api/. Function calls require valid HTTP\nAuthentication credentials of a user.",
"Functions": [
{
"Name": "SetPassword",
"Docs": "SetPassword saves a new password for the account, invalidating the previous password.\nSessions are not interrupted, and will keep working. New login attempts must use the new password.\nPassword must be at least 8 characters.",
"Params": [
{
"Name": "password",
"Typewords": [
"string"
]
}
],
"Returns": []
},
{
"Name": "Destinations",
"Docs": "Destinations returns the default domain, and the destinations (keys are email\naddresses, or localparts to the default domain).\ntodo: replace with a function that returns the whole account, when sherpadoc understands unnamed struct fields.",
"Params": [],
"Returns": [
{
"Name": "r0",
"Typewords": [
"Domain"
]
},
{
"Name": "r1",
"Typewords": [
"{}",
"Destination"
]
}
]
},
{
"Name": "DestinationSave",
"Docs": "DestinationSave updates a destination.\nOldDest is compared against the current destination. If it does not match, an\nerror is returned. Otherwise newDest is saved and the configuration reloaded.",
"Params": [
{
"Name": "destName",
"Typewords": [
"string"
]
},
{
"Name": "oldDest",
"Typewords": [
"Destination"
]
},
{
"Name": "newDest",
"Typewords": [
"Destination"
]
}
],
"Returns": []
},
{
"Name": "ImportAbort",
"Docs": "ImportAbort aborts an import that is in progress. If the import exists and isn't\nfinished, no changes will have been made by the import.",
"Params": [
{
"Name": "importToken",
"Typewords": [
"string"
]
}
],
"Returns": []
}
],
"Sections": [],
"Structs": [
{
"Name": "Domain",
"Docs": "Domain is a domain name, with one or more labels, with at least an ASCII\nrepresentation, and for IDNA non-ASCII domains a unicode representation.\nThe ASCII string must be used for DNS lookups.",
"Fields": [
{
"Name": "ASCII",
"Docs": "A non-unicode domain, e.g. with A-labels (xn--...) or NR-LDH (non-reserved letters/digits/hyphens) labels. Always in lower case.",
"Typewords": [
"string"
]
},
{
"Name": "Unicode",
"Docs": "Name as U-labels. Empty if this is an ASCII-only domain.",
"Typewords": [
"string"
]
}
]
},
{
"Name": "Destination",
"Docs": "",
"Fields": [
{
"Name": "Mailbox",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Rulesets",
"Docs": "",
"Typewords": [
"[]",
"Ruleset"
]
}
]
},
{
"Name": "Ruleset",
"Docs": "",
"Fields": [
{
"Name": "SMTPMailFromRegexp",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "VerifiedDomain",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "HeadersRegexp",
"Docs": "",
"Typewords": [
"{}",
"string"
]
},
{
"Name": "ListAllowDomain",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "Mailbox",
"Docs": "",
"Typewords": [
"string"
]
},
{
"Name": "VerifiedDNSDomain",
"Docs": "",
"Typewords": [
"Domain"
]
},
{
"Name": "ListAllowDNSDomain",
"Docs": "",
"Typewords": [
"Domain"
]
}
]
}
],
"Ints": [],
"Strings": [],
"SherpaVersion": 0,
"SherpadocVersion": 1
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,133 +0,0 @@
package http
import (
"context"
"crypto/ed25519"
"net"
"net/http/httptest"
"os"
"testing"
"time"
"golang.org/x/crypto/bcrypt"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mox-"
)
func init() {
mox.LimitersInit()
}
func TestAdminAuth(t *testing.T) {
test := func(passwordfile, authHdr string, expect bool) {
t.Helper()
w := httptest.NewRecorder()
r := httptest.NewRequest("GET", "/ignored", nil)
if authHdr != "" {
r.Header.Add("Authorization", authHdr)
}
ok := checkAdminAuth(context.Background(), passwordfile, w, r)
if ok != expect {
t.Fatalf("got %v, expected %v", ok, expect)
}
}
const authOK = "Basic YWRtaW46bW94dGVzdDEyMw==" // admin:moxtest123
const authBad = "Basic YWRtaW46YmFkcGFzc3dvcmQ=" // admin:badpassword
const path = "../testdata/http-passwordfile"
os.Remove(path)
defer os.Remove(path)
test(path, authOK, false) // Password file does not exist.
adminpwhash, err := bcrypt.GenerateFromPassword([]byte("moxtest123"), bcrypt.DefaultCost)
if err != nil {
t.Fatalf("generate bcrypt hash: %v", err)
}
if err := os.WriteFile(path, adminpwhash, 0660); err != nil {
t.Fatalf("write password file: %v", err)
}
// We loop to also exercise the auth cache.
for i := 0; i < 2; i++ {
test(path, "", false) // Empty/missing header.
test(path, "Malformed ", false) // Not "Basic"
test(path, "Basic malformed ", false) // Bad base64.
test(path, "Basic dGVzdA== ", false) // base64 is ok, but wrong tokens inside.
test(path, authBad, false) // Wrong password.
test(path, authOK, true)
}
}
func TestCheckDomain(t *testing.T) {
// NOTE: we aren't currently looking at the results; having the code paths executed is better than nothing.
resolver := dns.MockResolver{
MX: map[string][]*net.MX{
"mox.example.": {{Host: "mail.mox.example.", Pref: 10}},
},
A: map[string][]string{
"mail.mox.example.": {"127.0.0.2"},
},
AAAA: map[string][]string{
"mail.mox.example.": {"127.0.0.2"},
},
TXT: map[string][]string{
"mox.example.": {"v=spf1 mx -all"},
"test._domainkey.mox.example.": {"v=DKIM1;h=sha256;k=ed25519;p=ln5zd/JEX4Jy60WAhUOv33IYm2YZMyTQAdr9stML504="},
"_dmarc.mox.example.": {"v=DMARC1; p=reject; rua=mailto:mjl@mox.example"},
"_smtp._tls.mox.example": {"v=TLSRPTv1; rua=mailto:tlsrpt@mox.example;"},
"_mta-sts.mox.example": {"v=STSv1; id=20160831085700Z"},
},
CNAME: map[string]string{},
}
listener := config.Listener{
IPs: []string{"127.0.0.2"},
Hostname: "mox.example",
HostnameDomain: dns.Domain{ASCII: "mox.example"},
}
listener.SMTP.Enabled = true
listener.AutoconfigHTTPS.Enabled = true
listener.MTASTSHTTPS.Enabled = true
mox.Conf.Static.Listeners = map[string]config.Listener{
"public": listener,
}
domain := config.Domain{
DKIM: config.DKIM{
Selectors: map[string]config.Selector{
"test": {
HashEffective: "sha256",
HeadersEffective: []string{"From", "Date", "Subject"},
Key: ed25519.NewKeyFromSeed(make([]byte, 32)), // warning: fake zero key, do not copy this code.
Domain: dns.Domain{ASCII: "test"},
},
"missing": {
HashEffective: "sha256",
HeadersEffective: []string{"From", "Date", "Subject"},
Key: ed25519.NewKeyFromSeed(make([]byte, 32)), // warning: fake zero key, do not copy this code.
Domain: dns.Domain{ASCII: "missing"},
},
},
Sign: []string{"test", "test2"},
},
}
mox.Conf.Dynamic.Domains = map[string]config.Domain{
"mox.example": domain,
}
// Make a dialer that fails immediately before actually connecting.
done := make(chan struct{})
close(done)
dialer := &net.Dialer{Deadline: time.Now().Add(-time.Second), Cancel: done}
checkDomain(context.Background(), resolver, dialer, "mox.example")
// todo: check returned data
Admin{}.Domains(context.Background()) // todo: check results
dnsblsStatus(context.Background(), resolver) // todo: check results
}

File diff suppressed because it is too large

16
http/atime.go Normal file
View File

@ -0,0 +1,16 @@
//go:build !netbsd && !freebsd && !darwin && !windows
package http
import (
"fmt"
"syscall"
)
func statAtime(sys any) (int64, error) {
x, ok := sys.(*syscall.Stat_t)
if !ok {
return 0, fmt.Errorf("sys is a %T, expected *syscall.Stat_t", sys)
}
return int64(x.Atim.Sec)*1000*1000*1000 + int64(x.Atim.Nsec), nil
}

16
http/atime_bsd.go Normal file
View File

@ -0,0 +1,16 @@
//go:build netbsd || freebsd || darwin
package http
import (
"fmt"
"syscall"
)
func statAtime(sys any) (int64, error) {
x, ok := sys.(*syscall.Stat_t)
if !ok {
return 0, fmt.Errorf("stat sys is a %T, expected *syscall.Stat_t", sys)
}
return int64(x.Atimespec.Sec)*1000*1000*1000 + int64(x.Atimespec.Nsec), nil
}

16
http/atime_windows.go Normal file
View File

@ -0,0 +1,16 @@
//go:build windows
package http
import (
"fmt"
"syscall"
)
func statAtime(sys any) (int64, error) {
x, ok := sys.(*syscall.Win32FileAttributeData)
if !ok {
return 0, fmt.Errorf("sys is a %T, expected *syscall.Win32FileAttributeData", sys)
}
return x.LastAccessTime.Nanoseconds(), nil
}
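The three statAtime variants above let the gzip cache further below recover a file's access time portably. A minimal usage sketch, assuming it lives in the same package and that "os" and "time" are imported (the helper name is made up):

// fileAtime returns the last access time of the file at path, using the
// platform-specific statAtime defined above.
func fileAtime(path string) (time.Time, error) {
	fi, err := os.Stat(path)
	if err != nil {
		return time.Time{}, err
	}
	ns, err := statAtime(fi.Sys())
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(0, ns), nil
}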


@ -3,14 +3,16 @@ package http
import (
"encoding/xml"
"fmt"
"log/slog"
"net/http"
"strings"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"rsc.io/qr"
"github.com/mjl-/mox/config"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/admin"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/smtp"
)
@ -36,7 +38,9 @@ var (
// - Thunderbird will request an "autoconfig" xml file.
// - Microsoft tools will request an "autodiscovery" xml file.
// - In my tests on an internal domain, iOS mail only talks to Apple servers, then
// does not attempt autoconfiguration. Possibly due to them being private DNS names.
// does not attempt autoconfiguration. Possibly due to them being private DNS
// names. Apple software can be provisioned with "mobileconfig" profile files,
// which users can download after logging in.
//
// DNS records seem optional, but autoconfig.<domain> and autodiscover.<domain>
// (both CNAME or A) are useful, and so is SRV _autodiscover._tcp.<domain> 0 0 443
@ -52,7 +56,7 @@ var (
// User should create a DNS record: autoconfig.<domain> (CNAME or A).
// See https://wiki.mozilla.org/Thunderbird:Autoconfiguration:ConfigFileFormat
func autoconfHandle(w http.ResponseWriter, r *http.Request) {
log := xlog.WithContext(r.Context())
log := pkglog.WithContext(r.Context())
var addrDom string
defer func() {
@ -60,99 +64,123 @@ func autoconfHandle(w http.ResponseWriter, r *http.Request) {
}()
email := r.FormValue("emailaddress")
log.Debug("autoconfig request", mlog.Field("email", email))
addr, err := smtp.ParseAddress(email)
log.Debug("autoconfig request", slog.String("email", email))
var domain dns.Domain
if email == "" {
email = "%EMAILADDRESS%"
// Declare this here rather than using := to avoid shadowing domain from
// the outer scope.
var err error
domain, err = dns.ParseDomain(r.Host)
if err != nil {
http.Error(w, fmt.Sprintf("400 - bad request - invalid domain: %s", r.Host), http.StatusBadRequest)
return
}
domain.ASCII = strings.TrimPrefix(domain.ASCII, "autoconfig.")
domain.Unicode = strings.TrimPrefix(domain.Unicode, "autoconfig.")
} else {
addr, err := smtp.ParseAddress(email)
if err != nil {
http.Error(w, "400 - bad request - invalid parameter emailaddress", http.StatusBadRequest)
return
}
domain = addr.Domain
}
socketType := func(tlsMode admin.TLSMode) (string, error) {
switch tlsMode {
case admin.TLSModeImmediate:
return "SSL", nil
case admin.TLSModeSTARTTLS:
return "STARTTLS", nil
case admin.TLSModeNone:
return "plain", nil
default:
return "", fmt.Errorf("unknown tls mode %v", tlsMode)
}
}
var imapTLS, submissionTLS string
config, err := admin.ClientConfigDomain(domain)
if err == nil {
imapTLS, err = socketType(config.IMAP.TLSMode)
}
if err == nil {
submissionTLS, err = socketType(config.Submission.TLSMode)
}
if err != nil {
http.Error(w, "400 - bad request - invalid parameter emailaddress", http.StatusBadRequest)
http.Error(w, "400 - bad request - "+err.Error(), http.StatusBadRequest)
return
}
if _, ok := mox.Conf.Domain(addr.Domain); !ok {
http.Error(w, "400 - bad request - unknown domain", http.StatusBadRequest)
return
}
addrDom = addr.Domain.Name()
hostname := mox.Conf.Static.HostnameDomain
// Thunderbird doesn't seem to allow U-labels, always return ASCII names.
var resp autoconfigResponse
resp.Version = "1.1"
resp.EmailProvider.ID = addr.Domain.ASCII
resp.EmailProvider.Domain = addr.Domain.ASCII
resp.EmailProvider.ID = domain.ASCII
resp.EmailProvider.Domain = domain.ASCII
resp.EmailProvider.DisplayName = email
resp.EmailProvider.DisplayShortName = addr.Domain.ASCII
var imapPort int
var imapSocket string
for _, l := range mox.Conf.Static.Listeners {
if l.IMAPS.Enabled {
imapSocket = "SSL"
imapPort = config.Port(l.IMAPS.Port, 993)
} else if l.IMAP.Enabled {
if l.TLS != nil && imapSocket != "SSL" {
imapSocket = "STARTTLS"
imapPort = config.Port(l.IMAP.Port, 143)
} else if imapSocket == "" {
imapSocket = "plain"
imapPort = config.Port(l.IMAP.Port, 143)
}
}
}
if imapPort == 0 {
log.Error("autoconfig: no imap configured?")
}
resp.EmailProvider.DisplayShortName = domain.ASCII
// todo: specify SCRAM-SHA-256 once thunderbird and autoconfig support it. or perhaps that will fall under "password-encrypted" by then.
// todo: let user configure whether they prefer or require tls client auth and specify "TLS-client-cert"
resp.EmailProvider.IncomingServer.Type = "imap"
resp.EmailProvider.IncomingServer.Hostname = hostname.ASCII
resp.EmailProvider.IncomingServer.Port = imapPort
resp.EmailProvider.IncomingServer.SocketType = imapSocket
resp.EmailProvider.IncomingServer.Username = email
resp.EmailProvider.IncomingServer.Authentication = "password-encrypted"
var smtpPort int
var smtpSocket string
for _, l := range mox.Conf.Static.Listeners {
if l.Submissions.Enabled {
smtpSocket = "SSL"
smtpPort = config.Port(l.Submissions.Port, 465)
} else if l.Submission.Enabled {
if l.TLS != nil && smtpSocket != "SSL" {
smtpSocket = "STARTTLS"
smtpPort = config.Port(l.Submission.Port, 587)
} else if smtpSocket == "" {
smtpSocket = "plain"
smtpPort = config.Port(l.Submission.Port, 587)
}
incoming := incomingServer{
"imap",
config.IMAP.Host.ASCII,
config.IMAP.Port,
imapTLS,
email,
"password-encrypted",
}
resp.EmailProvider.IncomingServers = append(resp.EmailProvider.IncomingServers, incoming)
if config.IMAP.EnabledOnHTTPS {
tlsMode, _ := socketType(admin.TLSModeImmediate)
incomingALPN := incomingServer{
"imap",
config.IMAP.Host.ASCII,
443,
tlsMode,
email,
"password-encrypted",
}
}
if smtpPort == 0 {
log.Error("autoconfig: no smtp submission configured?")
resp.EmailProvider.IncomingServers = append(resp.EmailProvider.IncomingServers, incomingALPN)
}
resp.EmailProvider.OutgoingServer.Type = "smtp"
resp.EmailProvider.OutgoingServer.Hostname = hostname.ASCII
resp.EmailProvider.OutgoingServer.Port = smtpPort
resp.EmailProvider.OutgoingServer.SocketType = smtpSocket
resp.EmailProvider.OutgoingServer.Username = email
resp.EmailProvider.OutgoingServer.Authentication = "password-encrypted"
outgoing := outgoingServer{
"smtp",
config.Submission.Host.ASCII,
config.Submission.Port,
submissionTLS,
email,
"password-encrypted",
}
resp.EmailProvider.OutgoingServers = append(resp.EmailProvider.OutgoingServers, outgoing)
if config.Submission.EnabledOnHTTPS {
tlsMode, _ := socketType(admin.TLSModeImmediate)
outgoingALPN := outgoingServer{
"smtp",
config.Submission.Host.ASCII,
443,
tlsMode,
email,
"password-encrypted",
}
resp.EmailProvider.OutgoingServers = append(resp.EmailProvider.OutgoingServers, outgoingALPN)
}
// todo: should we put the email address in the URL?
resp.ClientConfigUpdate.URL = fmt.Sprintf("https://%s/mail/config-v1.1.xml", hostname.ASCII)
resp.ClientConfigUpdate.URL = fmt.Sprintf("https://autoconfig.%s/mail/config-v1.1.xml", domain.ASCII)
w.Header().Set("Content-Type", "application/xml; charset=utf-8")
enc := xml.NewEncoder(w)
enc.Indent("", "\t")
fmt.Fprint(w, xml.Header)
if err := enc.Encode(resp); err != nil {
log.Errorx("marshal autoconfig response", err)
}
err = enc.Encode(resp)
log.Check(err, "write autoconfig xml response")
}
// Autodiscover from Microsoft, also used by Thunderbird.
// User should create a DNS record: _autodiscover._tcp.<domain> IN SRV 0 0 443 <hostname or autodiscover.<domain>>
// User should create a DNS record: _autodiscover._tcp.<domain> SRV 0 0 443 <hostname>
//
// In practice, autodiscover does not seem to work with Microsoft clients. A
// connectivity test tool for outlook is available on
@ -162,7 +190,7 @@ func autoconfHandle(w http.ResponseWriter, r *http.Request) {
//
// Thunderbird does understand autodiscover.
func autodiscoverHandle(w http.ResponseWriter, r *http.Request) {
log := xlog.WithContext(r.Context())
log := pkglog.WithContext(r.Context())
var addrDom string
defer func() {
@ -180,7 +208,7 @@ func autodiscoverHandle(w http.ResponseWriter, r *http.Request) {
return
}
log.Debug("autodiscover request", mlog.Field("email", req.Request.EmailAddress))
log.Debug("autodiscover request", slog.String("email", req.Request.EmailAddress))
addr, err := smtp.ParseAddress(req.Request.EmailAddress)
if err != nil {
@ -188,13 +216,33 @@ func autodiscoverHandle(w http.ResponseWriter, r *http.Request) {
return
}
if _, ok := mox.Conf.Domain(addr.Domain); !ok {
http.Error(w, "400 - bad request - unknown domain", http.StatusBadRequest)
// tlsmode returns the "ssl" and "encryption" fields.
tlsmode := func(tlsMode admin.TLSMode) (string, string, error) {
switch tlsMode {
case admin.TLSModeImmediate:
return "on", "TLS", nil
case admin.TLSModeSTARTTLS:
return "on", "", nil
case admin.TLSModeNone:
return "off", "", nil
default:
return "", "", fmt.Errorf("unknown tls mode %v", tlsMode)
}
}
var imapSSL, imapEncryption string
var submissionSSL, submissionEncryption string
config, err := admin.ClientConfigDomain(addr.Domain)
if err == nil {
imapSSL, imapEncryption, err = tlsmode(config.IMAP.TLSMode)
}
if err == nil {
submissionSSL, submissionEncryption, err = tlsmode(config.Submission.TLSMode)
}
if err != nil {
http.Error(w, "400 - bad request - "+err.Error(), http.StatusBadRequest)
return
}
addrDom = addr.Domain.Name()
hostname := mox.Conf.Static.HostnameDomain
// The docs are generated and fragmented in many tiny pages, hard to follow.
// High-level starting point, https://learn.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxdscli/78530279-d042-4eb0-a1f4-03b18143cd19
@ -205,49 +253,10 @@ func autodiscoverHandle(w http.ResponseWriter, r *http.Request) {
// use. See
// https://learn.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxdscli/21fd2dd5-c4ee-485b-94fb-e7db5da93726
var imapPort int
imapSSL := "off"
var imapEncryption string
var smtpPort int
smtpSSL := "off"
var smtpEncryption string
for _, l := range mox.Conf.Static.Listeners {
if l.IMAPS.Enabled {
imapPort = config.Port(l.IMAPS.Port, 993)
imapSSL = "on"
imapEncryption = "TLS" // Assuming this means direct TLS.
} else if l.IMAP.Enabled {
if l.TLS != nil && imapEncryption != "TLS" {
imapSSL = "on"
imapPort = config.Port(l.IMAP.Port, 143)
} else if imapSSL == "" {
imapPort = config.Port(l.IMAP.Port, 143)
}
}
if l.Submissions.Enabled {
smtpPort = config.Port(l.Submissions.Port, 465)
smtpSSL = "on"
smtpEncryption = "TLS" // Assuming this means direct TLS.
} else if l.Submission.Enabled {
if l.TLS != nil && smtpEncryption != "TLS" {
smtpSSL = "on"
smtpPort = config.Port(l.Submission.Port, 587)
} else if smtpSSL == "" {
smtpPort = config.Port(l.Submission.Port, 587)
}
}
}
if imapPort == 0 {
log.Error("autoconfig: no smtp submission configured?")
}
if smtpPort == 0 {
log.Error("autoconfig: no imap configured?")
}
w.Header().Set("Content-Type", "application/xml; charset=utf-8")
// todo: let user configure they prefer or require tls client auth and add "AuthPackage" with value "certificate" to Protocol? see https://learn.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxdscli/21fd2dd5-c4ee-485b-94fb-e7db5da93726
resp := autodiscoverResponse{}
resp.XMLName.Local = "Autodiscover"
resp.XMLName.Space = "http://schemas.microsoft.com/exchange/autodiscover/responseschema/2006"
@ -259,8 +268,8 @@ func autodiscoverHandle(w http.ResponseWriter, r *http.Request) {
Protocol: []autodiscoverProtocol{
{
Type: "IMAP",
Server: hostname.ASCII,
Port: imapPort,
Server: config.IMAP.Host.ASCII,
Port: config.IMAP.Port,
LoginName: req.Request.EmailAddress,
SSL: imapSSL,
Encryption: imapEncryption,
@ -269,11 +278,11 @@ func autodiscoverHandle(w http.ResponseWriter, r *http.Request) {
},
{
Type: "SMTP",
Server: hostname.ASCII,
Port: smtpPort,
Server: config.Submission.Host.ASCII,
Port: config.Submission.Port,
LoginName: req.Request.EmailAddress,
SSL: smtpSSL,
Encryption: smtpEncryption,
SSL: submissionSSL,
Encryption: submissionEncryption,
SPA: "off", // Override default "on"; this is Microsoft's proprietary authentication protocol.
AuthRequired: "on",
},
@ -282,9 +291,8 @@ func autodiscoverHandle(w http.ResponseWriter, r *http.Request) {
enc := xml.NewEncoder(w)
enc.Indent("", "\t")
fmt.Fprint(w, xml.Header)
if err := enc.Encode(resp); err != nil {
log.Errorx("marshal autodiscover response", err)
}
err = enc.Encode(resp)
log.Check(err, "marshal autodiscover xml response")
}
// Thunderbird requests these URLs for autoconfig/autodiscover:
@ -292,6 +300,22 @@ func autodiscoverHandle(w http.ResponseWriter, r *http.Request) {
// https://autodiscover.example.org/autodiscover/autodiscover.xml
// https://example.org/.well-known/autoconfig/mail/config-v1.1.xml?emailaddress=user%40example.org
// https://example.org/autodiscover/autodiscover.xml
type incomingServer struct {
Type string `xml:"type,attr"`
Hostname string `xml:"hostname"`
Port int `xml:"port"`
SocketType string `xml:"socketType"`
Username string `xml:"username"`
Authentication string `xml:"authentication"`
}
type outgoingServer struct {
Type string `xml:"type,attr"`
Hostname string `xml:"hostname"`
Port int `xml:"port"`
SocketType string `xml:"socketType"`
Username string `xml:"username"`
Authentication string `xml:"authentication"`
}
type autoconfigResponse struct {
XMLName xml.Name `xml:"clientConfig"`
Version string `xml:"version,attr"`
@ -302,23 +326,8 @@ type autoconfigResponse struct {
DisplayName string `xml:"displayName"`
DisplayShortName string `xml:"displayShortName"`
IncomingServer struct {
Type string `xml:"type,attr"`
Hostname string `xml:"hostname"`
Port int `xml:"port"`
SocketType string `xml:"socketType"`
Username string `xml:"username"`
Authentication string `xml:"authentication"`
} `xml:"incomingServer"`
OutgoingServer struct {
Type string `xml:"type,attr"`
Hostname string `xml:"hostname"`
Port int `xml:"port"`
SocketType string `xml:"socketType"`
Username string `xml:"username"`
Authentication string `xml:"authentication"`
} `xml:"outgoingServer"`
IncomingServers []incomingServer `xml:"incomingServer"`
OutgoingServers []outgoingServer `xml:"outgoingServer"`
} `xml:"emailProvider"`
ClientConfigUpdate struct {
@ -360,3 +369,72 @@ type autodiscoverProtocol struct {
SPA string
AuthRequired string
}
// Serve a .mobileconfig file. This endpoint is not a standard place where Apple
// devices look. We point to it from the account page.
func mobileconfigHandle(w http.ResponseWriter, r *http.Request) {
log := pkglog.WithContext(r.Context())
if r.Method != "GET" {
http.Error(w, "405 - method not allowed - get required", http.StatusMethodNotAllowed)
return
}
addresses := r.FormValue("addresses")
fullName := r.FormValue("name")
var buf []byte
var err error
if addresses == "" {
err = fmt.Errorf("missing/empty field addresses")
}
l := strings.Split(addresses, ",")
if err == nil {
buf, err = MobileConfig(l, fullName)
}
if err != nil {
http.Error(w, "400 - bad request - "+err.Error(), http.StatusBadRequest)
return
}
h := w.Header()
filename := l[0]
filename = strings.ReplaceAll(filename, ".", "-")
filename = strings.ReplaceAll(filename, "@", "-at-")
filename = "email-account-" + filename + ".mobileconfig"
h.Set("Content-Disposition", fmt.Sprintf(`attachment; filename="%s"`, filename))
_, err = w.Write(buf)
log.Check(err, "writing mobileconfig response")
}
// Serve a png file with a QR code linking to the .mobileconfig file; should be
// helpful for mobile devices.
func mobileconfigQRCodeHandle(w http.ResponseWriter, r *http.Request) {
log := pkglog.WithContext(r.Context())
if r.Method != "GET" {
http.Error(w, "405 - method not allowed - get required", http.StatusMethodNotAllowed)
return
}
if !strings.HasSuffix(r.URL.Path, ".qrcode.png") {
http.NotFound(w, r)
return
}
// Compose the full URL; scheme and host are not set on r.URL.
u := *r.URL
if r.TLS == nil {
u.Scheme = "http"
} else {
u.Scheme = "https"
}
u.Host = r.Host
u.Path = strings.TrimSuffix(u.Path, ".qrcode.png")
code, err := qr.Encode(u.String(), qr.L)
if err != nil {
http.Error(w, "500 - internal server error - generating qr-code: "+err.Error(), http.StatusInternalServerError)
return
}
h := w.Header()
h.Set("Content-Type", "image/png")
_, err = w.Write(code.PNG())
log.Check(err, "writing mobileconfig qr code")
}

BIN
http/favicon.ico Normal file (binary, 823 B; content not shown)

429
http/gzcache.go Normal file

@ -0,0 +1,429 @@
package http
import (
"compress/gzip"
"encoding/base64"
"errors"
"fmt"
"io"
"io/fs"
"log/slog"
"net/http"
"os"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"time"
"github.com/mjl-/mox/mlog"
)
// todo: consider caching gzipped responses from forward handlers too. we would need to read the responses (handle up to perhaps 2mb), hash the data (blake2b seems fast), check if we have the gzip content for that hash, cache it on second request. keep around entries for non-yet-cached hashes, with some limit and lru eviction policy. we have to recognize some content-types as not applicable and do direct streaming compression, e.g. for text/event-stream. and we need to detect when backend server could be slowly sending out data and abort the caching attempt. downside is always that we need to read the whole response before and hash it before we can send our response. it is best if the backend just responds with gzip itself though. compression needs more cpu than hashing (at least 10x), but it's only worth it with enough hits.
// Cache for gzipped static files.
var staticgzcache gzcache
type gzcache struct {
dir string // Where all files are stored.
// Max total size of combined files in cache. When adding a new entry, the least
// recently used entries are evicted to stay below this size.
maxSize int64
sync.Mutex
// Total on-disk size of compressed data. Not larger than maxSize. We can
// temporarily have more bytes in use because while/after evicting, a writer may
// still have the old removed file open.
size int64
// Indexed by effective path, based on handler.
paths map[string]gzfile
// Only with files we completed compressing, kept ordered by atime. We evict from
// oldest. On use, we take entries out and put them at newest.
oldest, newest *pathUse
}
type gzfile struct {
// Whether compressing is in progress. If a new request comes in while we are already
// compressing, for simplicity of code we just compress again for that client.
compressing bool
mtime int64 // If mtime changes, we remove entry from cache.
atime int64 // For LRU.
gzsize int64 // Compressed size, used in Content-Length header.
use *pathUse // Only set after compressing finished.
}
type pathUse struct {
prev, next *pathUse // Double-linked list.
path string
}
// Initialize staticgzcache from on-disk directory.
// The path and mtime are in the filename, the atime is in the file itself.
func loadStaticGzipCache(dir string, maxSize int64) {
staticgzcache = gzcache{
dir: dir,
maxSize: maxSize,
paths: map[string]gzfile{},
}
// todo future: should we split cached files in sub directories, so we don't end up with one huge directory?
os.MkdirAll(dir, 0700)
entries, err := os.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
pkglog.Errorx("listing static gzip cache files", err, slog.String("dir", dir))
}
for _, e := range entries {
name := e.Name()
var err error
if !strings.HasSuffix(name, ".gz") {
err = errors.New("missing .gz suffix")
}
var path, xpath, mtimestr string
if err == nil {
var ok bool
xpath, mtimestr, ok = strings.Cut(strings.TrimRight(name, ".gz"), "+")
if !ok {
err = fmt.Errorf("missing + in filename")
}
}
if err == nil {
var pathbuf []byte
pathbuf, err = base64.RawURLEncoding.DecodeString(xpath)
if err == nil {
path = string(pathbuf)
}
}
var mtime int64
if err == nil {
mtime, err = strconv.ParseInt(mtimestr, 16, 64)
}
var fi fs.FileInfo
if err == nil {
fi, err = e.Info()
}
var atime int64
if err == nil {
atime, err = statAtime(fi.Sys())
}
if err != nil {
pkglog.Infox("removing unusable/unrecognized file in static gzip cache dir", err)
xerr := os.Remove(filepath.Join(dir, name))
pkglog.Check(xerr, "removing unusable file in static gzip cache dir",
slog.Any("error", err),
slog.String("dir", dir),
slog.String("filename", name))
continue
}
staticgzcache.paths[path] = gzfile{
mtime: mtime,
atime: atime,
gzsize: fi.Size(),
use: &pathUse{path: path},
}
staticgzcache.size += fi.Size()
}
pathatimes := make([]struct {
path string
atime int64
}, len(staticgzcache.paths))
i := 0
for k, gf := range staticgzcache.paths {
pathatimes[i].path = k
pathatimes[i].atime = gf.atime
i++
}
sort.Slice(pathatimes, func(i, j int) bool {
return pathatimes[i].atime < pathatimes[j].atime
})
for _, pa := range pathatimes {
staticgzcache.push(staticgzcache.paths[pa.path].use)
}
// Ensure cache size is OK for current config.
staticgzcache.evictFor(0)
}
// Evict entries so size bytes are available.
// Must be called with lock held.
func (c *gzcache) evictFor(size int64) {
for c.size+size > c.maxSize && c.oldest != nil {
c.evictPath(c.oldest.path)
}
}
// remove path from cache.
// Must be called with lock held.
func (c *gzcache) evictPath(path string) {
gf := c.paths[path]
delete(c.paths, path)
c.unlink(gf.use)
c.size -= gf.gzsize
err := os.Remove(staticCachePath(c.dir, path, gf.mtime))
pkglog.Check(err, "removing cached gzipped static file", slog.String("path", path))
}
// Open cached file for path, requiring it has mtime. If there is no usable cached
// file, a nil file is returned and the caller should compress and add to the cache
// with startPath and finishPath. No usable cached file means the path isn't in the
// cache, or its mtime is different, or there is an entry but it is new and being
// compressed at the moment. If a usable cached file was found, it is opened and
// returned, along with its compressed/on-disk size.
func (c *gzcache) openPath(path string, mtime int64) (*os.File, int64) {
c.Lock()
defer c.Unlock()
gf, ok := c.paths[path]
if !ok || gf.compressing {
return nil, 0
}
if gf.mtime != mtime {
// File has changed, remove old entry. Caller will add to cache again.
c.evictPath(path)
return nil, 0
}
p := staticCachePath(c.dir, path, gf.mtime)
f, err := os.Open(p)
if err != nil {
pkglog.Errorx("open static cached gzip file, removing from cache", err, slog.String("path", path))
// Perhaps someone removed the file? Remove from cache, it will be recreated.
c.evictPath(path)
return nil, 0
}
gf.atime = time.Now().UnixNano()
c.unlink(gf.use)
c.push(gf.use)
c.paths[path] = gf
return f, gf.gzsize
}
// startPath attempts to add an entry to the cache for a new cached compressed
// file. If there is already an entry but it isn't done compressing yet, false is
// returned and the caller can still compress and respond but the entry cannot be
// added to the cache. If the entry is being added, the caller must call finishPath
// or abortPath.
func (c *gzcache) startPath(path string, mtime int64) bool {
c.Lock()
defer c.Unlock()
if _, ok := c.paths[path]; ok {
return false
}
// note: no "use" yet, we only set that when we finish, so we don't have to clean up on abort.
c.paths[path] = gzfile{compressing: true, mtime: mtime}
return true
}
// finishPath completes adding an entry to the cache, marking the entry as
// compressed, accounting for its size, and marking its atime.
func (c *gzcache) finishPath(path string, gzsize int64) {
c.Lock()
defer c.Unlock()
c.evictFor(gzsize)
gf := c.paths[path]
gf.compressing = false
gf.gzsize = gzsize
gf.atime = time.Now().UnixNano()
gf.use = &pathUse{path: path}
c.paths[path] = gf
c.size += gzsize
c.push(gf.use)
}
// abortPath marks an entry as no longer being added to the cache.
func (c *gzcache) abortPath(path string) {
c.Lock()
defer c.Unlock()
delete(c.paths, path)
// note: gzfile.use isn't set yet.
}
// push inserts the "pathUse" to the head of the LRU doubly-linked list, unlinking
// it first if needed.
func (c *gzcache) push(u *pathUse) {
c.unlink(u)
u.prev = c.newest
if c.newest != nil {
c.newest.next = u
}
if c.oldest == nil {
c.oldest = u
}
c.newest = u
}
// unlink removes the "pathUse" from the LRU doubly-linked list.
func (c *gzcache) unlink(u *pathUse) {
if c.oldest == u {
c.oldest = u.next
}
if c.newest == u {
c.newest = u.prev
}
if u.prev != nil {
u.prev.next = u.next
}
if u.next != nil {
u.next.prev = u.prev
}
u.prev = nil
u.next = nil
}
// Return path to the on-disk gzipped cached file.
func staticCachePath(dir, path string, mtime int64) string {
p := base64.RawURLEncoding.EncodeToString([]byte(path))
return filepath.Join(dir, fmt.Sprintf("%s+%x.gz", p, mtime))
}
// staticgzcacheReplacer intercepts responses for cacheable static files,
// responding with the cached content if appropriate and failing further writes so
// the regular response writer stops.
type staticgzcacheReplacer struct {
w http.ResponseWriter
r *http.Request // For its context, or logging.
uncomprPath string
uncomprFile *os.File
uncomprMtime time.Time
uncomprSize int64
statusCode int
// Set during WriteHeader to indicate a compressed file has been written, further
// Writes result in an error to stop the writer of the uncompressed content.
handled bool
}
func (w *staticgzcacheReplacer) logger() mlog.Log {
return pkglog.WithContext(w.r.Context())
}
// Header returns the header of the underlying ResponseWriter.
func (w *staticgzcacheReplacer) Header() http.Header {
return w.w.Header()
}
// WriteHeader checks whether the response is eligible for compressing. If not,
// WriteHeader on the underlying ResponseWriter is called. If so, headers for gzip
// content are set and the gzip content is written, either from disk or compressed
// and stored in the cache.
func (w *staticgzcacheReplacer) WriteHeader(statusCode int) {
if w.statusCode != 0 {
return
}
w.statusCode = statusCode
if statusCode != http.StatusOK {
w.w.WriteHeader(statusCode)
return
}
gzf, gzsize := staticgzcache.openPath(w.uncomprPath, w.uncomprMtime.UnixNano())
if gzf == nil {
// Not in cache, or work in progress.
started := staticgzcache.startPath(w.uncomprPath, w.uncomprMtime.UnixNano())
if !started {
// Another request is already compressing and storing this file.
// todo: we should just wait for the other compression to finish, then use its result.
w.w.(*loggingWriter).UncompressedSize = w.uncomprSize
h := w.w.Header()
h.Set("Content-Encoding", "gzip")
h.Del("Content-Length") // We don't know this, we compress streamingly.
gzw, _ := gzip.NewWriterLevel(w.w, gzip.BestSpeed)
_, err := io.Copy(gzw, w.uncomprFile)
if err == nil {
err = gzw.Close()
}
w.handled = true
if err != nil {
w.w.(*loggingWriter).error(err)
}
return
}
// Compress and write to cache.
p := staticCachePath(staticgzcache.dir, w.uncomprPath, w.uncomprMtime.UnixNano())
ngzf, err := os.OpenFile(p, os.O_CREATE|os.O_EXCL|os.O_RDWR, 0600)
if err != nil {
w.logger().Errorx("create new static gzip cache file", err, slog.String("requestpath", w.uncomprPath), slog.String("fspath", p))
staticgzcache.abortPath(w.uncomprPath)
return
}
defer func() {
if ngzf != nil {
staticgzcache.abortPath(w.uncomprPath)
err := ngzf.Close()
w.logger().Check(err, "closing failed static gzip cache file", slog.String("requestpath", w.uncomprPath), slog.String("fspath", p))
err = os.Remove(p)
w.logger().Check(err, "removing failed static gzip cache file", slog.String("requestpath", w.uncomprPath), slog.String("fspath", p))
}
}()
gzw := gzip.NewWriter(ngzf)
_, err = io.Copy(gzw, w.uncomprFile)
if err == nil {
err = gzw.Close()
}
if err == nil {
err = ngzf.Sync()
}
if err == nil {
gzsize, err = ngzf.Seek(0, 1)
}
if err == nil {
_, err = ngzf.Seek(0, 0)
}
if err != nil {
w.w.(*loggingWriter).error(err)
return
}
staticgzcache.finishPath(w.uncomprPath, gzsize)
gzf = ngzf
ngzf = nil
}
defer func() {
if gzf != nil {
err := gzf.Close()
if err != nil {
w.logger().Errorx("closing static gzip cache file", err)
}
}
}()
// Signal to Write that we already (attempted to) write the response.
w.handled = true
w.w.(*loggingWriter).UncompressedSize = w.uncomprSize
h := w.w.Header()
h.Set("Content-Encoding", "gzip")
h.Set("Content-Length", fmt.Sprintf("%d", gzsize))
w.w.WriteHeader(statusCode)
if _, err := io.Copy(w.w, gzf); err != nil {
w.w.(*loggingWriter).error(err)
}
}
var errHandledCompressed = errors.New("response written with compression")
func (w *staticgzcacheReplacer) Write(buf []byte) (int, error) {
if w.statusCode == 0 {
w.WriteHeader(http.StatusOK)
}
if w.handled {
// For 200 OK, we already wrote the response and just want the caller to stop processing.
return 0, errHandledCompressed
}
return w.w.Write(buf)
}
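To make the on-disk layout above concrete: staticCachePath stores each entry under the URL-safe base64 of its path plus the mtime in hex, and loadStaticGzipCache parses that back on startup. An illustrative sketch with made-up values, assuming the same package:

func exampleCachePath() string {
	// Made-up cache dir, path and mtime; real callers pass the file's mtime in nanoseconds.
	p := staticCachePath("data/tmp/gzipcache", "/static/index.html", 0x17a2b3c4)
	// p == "data/tmp/gzipcache/L3N0YXRpYy9pbmRleC5odG1s+17a2b3c4.gz" on a Unix-like
	// system: URL-safe base64 of the path, then "+", the mtime in hex, then ".gz".
	return p
}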

17
http/main_test.go Normal file

@ -0,0 +1,17 @@
package http
import (
"fmt"
"os"
"testing"
"github.com/mjl-/mox/metrics"
)
func TestMain(m *testing.M) {
m.Run()
if metrics.Panics.Load() > 0 {
fmt.Println("unhandled panics encountered")
os.Exit(2)
}
}

205
http/mobileconfig.go Normal file

@ -0,0 +1,205 @@
package http
import (
"bytes"
"crypto/hmac"
"crypto/sha256"
"encoding/xml"
"fmt"
"maps"
"slices"
"strings"
"github.com/mjl-/mox/admin"
"github.com/mjl-/mox/smtp"
)
// Apple software isn't good at autoconfig/autodiscovery, but it can import a
// device management profile containing account settings.
//
// See https://developer.apple.com/documentation/devicemanagement/mail.
type deviceManagementProfile struct {
XMLName xml.Name `xml:"plist"`
Version string `xml:"version,attr"`
Dict dict `xml:"dict"`
}
type array []dict
type dict map[string]any
// MarshalXML marshals as <dict> with multiple pairs of <key> and a value of various types.
func (m dict) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
// The plist format isn't that easy to generate with Go's xml package: it leaves
// out reasonable structure, instead just concatenating key/value pairs. Perhaps
// there is a better way?
if err := e.EncodeToken(xml.StartElement{Name: xml.Name{Local: "dict"}}); err != nil {
return err
}
l := slices.Sorted(maps.Keys(m))
for _, k := range l {
tokens := []xml.Token{
xml.StartElement{Name: xml.Name{Local: "key"}},
xml.CharData([]byte(k)),
xml.EndElement{Name: xml.Name{Local: "key"}},
}
for _, t := range tokens {
if err := e.EncodeToken(t); err != nil {
return err
}
}
tokens = nil
switch v := m[k].(type) {
case string:
tokens = []xml.Token{
xml.StartElement{Name: xml.Name{Local: "string"}},
xml.CharData([]byte(v)),
xml.EndElement{Name: xml.Name{Local: "string"}},
}
case int:
tokens = []xml.Token{
xml.StartElement{Name: xml.Name{Local: "integer"}},
xml.CharData(fmt.Appendf(nil, "%d", v)),
xml.EndElement{Name: xml.Name{Local: "integer"}},
}
case bool:
tag := "false"
if v {
tag = "true"
}
tokens = []xml.Token{
xml.StartElement{Name: xml.Name{Local: tag}},
xml.EndElement{Name: xml.Name{Local: tag}},
}
case array:
if err := e.EncodeToken(xml.StartElement{Name: xml.Name{Local: "array"}}); err != nil {
return err
}
for _, d := range v {
if err := d.MarshalXML(e, xml.StartElement{Name: xml.Name{Local: "array"}}); err != nil {
return err
}
}
if err := e.EncodeToken(xml.EndElement{Name: xml.Name{Local: "array"}}); err != nil {
return err
}
default:
return fmt.Errorf("unexpected dict value of type %T", v)
}
for _, t := range tokens {
if err := e.EncodeToken(t); err != nil {
return err
}
}
}
if err := e.EncodeToken(xml.EndElement{Name: xml.Name{Local: "dict"}}); err != nil {
return err
}
return nil
}
// MobileConfig returns a device profile for a macOS Mail email account. The file
// should have a .mobileconfig extension. Opening the file adds it to Profiles in
// System Preferences, where it can be installed. This profile does not contain a
// password because sending opaque files containing passwords around to users seems
// like bad security practice.
//
// Multiple addresses can be passed, the first is used for IMAP/submission login,
// and likely seen as primary account by Apple software.
//
// The config is not signed, so users must ignore warnings about unsigned profiles.
func MobileConfig(addresses []string, fullName string) ([]byte, error) {
if len(addresses) == 0 {
return nil, fmt.Errorf("need at least 1 address")
}
addr, err := smtp.ParseAddress(addresses[0])
if err != nil {
return nil, fmt.Errorf("parsing address: %v", err)
}
config, err := admin.ClientConfigDomain(addr.Domain)
if err != nil {
return nil, fmt.Errorf("getting config for domain: %v", err)
}
// Apple software wants identifiers...
t := strings.Split(addr.Domain.Name(), ".")
slices.Reverse(t)
reverseAddr := strings.Join(t, ".") + "." + addr.Localpart.String()
// Apple software wants UUIDs... We generate them deterministically based on address
// and our code (through key, which we must change if code changes).
const key = "mox0"
uuid := func(prefix string) string {
mac := hmac.New(sha256.New, []byte(key))
mac.Write([]byte(prefix + "\n" + "\n" + strings.Join(addresses, ",")))
sum := mac.Sum(nil)
uuid := fmt.Sprintf("%x-%x-%x-%x-%x", sum[0:4], sum[4:6], sum[6:8], sum[8:10], sum[10:16])
return uuid
}
uuidConfig := uuid("config")
uuidAccount := uuid("account")
// The "UseSSL" fields are underspecified in Apple's format. They say "If true,
// enables SSL for authentication on the incoming mail server.". I'm assuming they
// want to know if they should start immediately with a handshake, instead of
// starting out plain. There is no way to require STARTTLS though. You could even
// interpret their wording as this field enabling authentication through client-side
// TLS certificates, given their "on the incoming mail server", instead of "of the
// incoming mail server".
var w bytes.Buffer
p := deviceManagementProfile{
Version: "1.0",
Dict: dict(map[string]any{
"PayloadDisplayName": fmt.Sprintf("%s email account", addresses[0]),
"PayloadIdentifier": reverseAddr + ".email",
"PayloadType": "Configuration",
"PayloadUUID": uuidConfig,
"PayloadVersion": 1,
"PayloadContent": array{
dict(map[string]any{
"EmailAccountDescription": addresses[0],
"EmailAccountName": fullName,
"EmailAccountType": "EmailTypeIMAP",
// Comma-separated multiple addresses are not documented at Apple, but seem to
// work.
"EmailAddress": strings.Join(addresses, ","),
"IncomingMailServerAuthentication": "EmailAuthCRAMMD5", // SCRAM not an option at time of writing..
"IncomingMailServerUsername": addresses[0],
"IncomingMailServerHostName": config.IMAP.Host.ASCII,
"IncomingMailServerPortNumber": config.IMAP.Port,
"IncomingMailServerUseSSL": config.IMAP.TLSMode == admin.TLSModeImmediate,
"OutgoingMailServerAuthentication": "EmailAuthCRAMMD5", // SCRAM not an option at time of writing...
"OutgoingMailServerHostName": config.Submission.Host.ASCII,
"OutgoingMailServerPortNumber": config.Submission.Port,
"OutgoingMailServerUsername": addresses[0],
"OutgoingMailServerUseSSL": config.Submission.TLSMode == admin.TLSModeImmediate,
"OutgoingPasswordSameAsIncomingPassword": true,
"PayloadIdentifier": reverseAddr + ".email.account",
"PayloadType": "com.apple.mail.managed",
"PayloadUUID": uuidAccount,
"PayloadVersion": 1,
}),
},
}),
}
if _, err := fmt.Fprint(&w, xml.Header); err != nil {
return nil, err
}
if _, err := fmt.Fprint(&w, "<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n"); err != nil {
return nil, err
}
enc := xml.NewEncoder(&w)
enc.Indent("", "\t")
if err := enc.Encode(p); err != nil {
return nil, err
}
if _, err := fmt.Fprintln(&w); err != nil {
return nil, err
}
return w.Bytes(), nil
}
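MobileConfig is exported, so it can also be called outside the HTTP handler, for example from a small admin tool. A hedged sketch (made-up address and file name; assumes the mox configuration for the domain is loaded, and that "log" and "os" are imported):

buf, err := MobileConfig([]string{"user@example.org"}, "Example User")
if err != nil {
	log.Fatalf("generating profile: %v", err)
}
// Restrictive permissions: the profile identifies the account, even though it carries no password.
if err := os.WriteFile("email-account-user-at-example-org.mobileconfig", buf, 0600); err != nil {
	log.Fatalf("writing profile: %v", err)
}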


@ -1,6 +1,7 @@
package http
import (
"log/slog"
"net"
"net/http"
"strings"
@ -13,8 +14,8 @@ import (
)
func mtastsPolicyHandle(w http.ResponseWriter, r *http.Request) {
log := func() *mlog.Log {
return xlog.WithContext(r.Context())
log := func() mlog.Log {
return pkglog.WithContext(r.Context())
}
host := strings.ToLower(r.Host)
@ -30,7 +31,7 @@ func mtastsPolicyHandle(w http.ResponseWriter, r *http.Request) {
}
domain, err := dns.ParseDomain(host)
if err != nil {
log().Errorx("mtasts policy request: bad domain", err, mlog.Field("host", host))
log().Errorx("mtasts policy request: bad domain", err, slog.String("host", host))
http.NotFound(w, r)
return
}
@ -42,16 +43,16 @@ func mtastsPolicyHandle(w http.ResponseWriter, r *http.Request) {
return
}
var mxs []mtasts.STSMX
var mxs []mtasts.MX
for _, s := range sts.MX {
var mx mtasts.STSMX
var mx mtasts.MX
if strings.HasPrefix(s, "*.") {
mx.Wildcard = true
s = s[2:]
}
d, err := dns.ParseDomain(s)
if err != nil {
log().Errorx("bad domain in mtasts config", err, mlog.Field("domain", s))
log().Errorx("bad domain in mtasts config", err, slog.String("domain", s))
http.Error(w, "500 - internal server error - invalid domain in configuration", http.StatusInternalServerError)
return
}
@ -59,7 +60,7 @@ func mtastsPolicyHandle(w http.ResponseWriter, r *http.Request) {
mxs = append(mxs, mx)
}
if len(mxs) == 0 {
mxs = []mtasts.STSMX{{Domain: mox.Conf.Static.HostnameDomain}}
mxs = []mtasts.MX{{Domain: mox.Conf.Static.HostnameDomain}}
}
policy := mtasts.Policy{

File diff suppressed because it is too large


@ -6,33 +6,19 @@ import (
"net/http/httptest"
"os"
"path/filepath"
"strings"
"testing"
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mox-"
)
func TestServeHTTP(t *testing.T) {
os.RemoveAll("../testdata/web/data")
mox.ConfigStaticPath = "../testdata/web/mox.conf"
mox.ConfigStaticPath = filepath.FromSlash("../testdata/web/mox.conf")
mox.ConfigDynamicPath = filepath.Join(filepath.Dir(mox.ConfigStaticPath), "domains.conf")
mox.MustLoadConfig(false)
mox.MustLoadConfig(true, false)
srv := &serve{
PathHandlers: []pathHandler{
{
HostMatch: func(dom dns.Domain) bool {
return strings.HasPrefix(dom.ASCII, "mta-sts.")
},
Path: "/.well-known/mta-sts.txt",
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("mta-sts!"))
}),
},
},
Webserver: true,
}
portSrvs := portServes("local", mox.Conf.Static.Listeners["local"])
srv := portSrvs[80]
test := func(method, target string, expCode int, expContent string, expHeaders map[string]string) {
t.Helper()
@ -43,22 +29,22 @@ func TestServeHTTP(t *testing.T) {
srv.ServeHTTP(rw, req)
resp := rw.Result()
if resp.StatusCode != expCode {
t.Fatalf("got statuscode %d, expected %d", resp.StatusCode, expCode)
t.Errorf("got statuscode %d, expected %d", resp.StatusCode, expCode)
}
if expContent != "" {
s := rw.Body.String()
if s != expContent {
t.Fatalf("got response data %q, expected %q", s, expContent)
t.Errorf("got response data %q, expected %q", s, expContent)
}
}
for k, v := range expHeaders {
if xv := resp.Header.Get(k); xv != v {
t.Fatalf("got %q for header %q, expected %q", xv, k, v)
t.Errorf("got %q for header %q, expected %q", xv, k, v)
}
}
}
test("GET", "http://mta-sts.mox.example/.well-known/mta-sts.txt", http.StatusOK, "mta-sts!", nil)
test("GET", "http://mta-sts.mox.example/.well-known/mta-sts.txt", http.StatusOK, "version: STSv1\nmode: enforce\nmax_age: 86400\nmx: mox.example\n", nil)
test("GET", "http://mox.example/.well-known/mta-sts.txt", http.StatusNotFound, "", nil) // mta-sts endpoint not in this domain.
test("GET", "http://mta-sts.mox.example/static/", http.StatusNotFound, "", nil) // static not served on this domain.
test("GET", "http://mta-sts.mox.example/other", http.StatusNotFound, "", nil)
@ -66,4 +52,24 @@ func TestServeHTTP(t *testing.T) {
test("GET", "http://mox.example/static/index.html", http.StatusOK, "html\n", map[string]string{"X-Test": "mox"})
test("GET", "http://mox.example/static/dir/", http.StatusOK, "", map[string]string{"X-Test": "mox"}) // Dir listing.
test("GET", "http://mox.example/other", http.StatusNotFound, "", nil)
// Webmail on IP, localhost, mail host, clientsettingsdomain, not others.
test("GET", "http://127.0.0.1/webmail/", http.StatusOK, "", nil)
test("GET", "http://localhost/webmail/", http.StatusOK, "", nil)
test("GET", "http://mox.example/webmail/", http.StatusOK, "", nil)
test("GET", "http://mail.mox.example/webmail/", http.StatusOK, "", nil)
test("GET", "http://mail.other.example/webmail/", http.StatusNotFound, "", nil)
test("GET", "http://remotehost/webmail/", http.StatusNotFound, "", nil)
// admin on IP, localhost, mail host, not clientsettingsdomain.
test("GET", "http://127.0.0.1/admin/", http.StatusOK, "", nil)
test("GET", "http://localhost/admin/", http.StatusOK, "", nil)
test("GET", "http://mox.example/admin/", http.StatusPermanentRedirect, "", nil) // Override by WebHandler.
test("GET", "http://mail.mox.example/admin/", http.StatusNotFound, "", nil)
// account is off.
test("GET", "http://127.0.0.1/", http.StatusNotFound, "", nil)
test("GET", "http://localhost/", http.StatusNotFound, "", nil)
test("GET", "http://mox.example/", http.StatusNotFound, "", nil)
test("GET", "http://mail.mox.example/", http.StatusNotFound, "", nil)
}


@ -1,14 +1,24 @@
package http
import (
"bufio"
"bytes"
"context"
"crypto/sha1"
"crypto/tls"
"encoding/base64"
"errors"
"fmt"
htmltemplate "html/template"
"io"
"io/fs"
golog "log"
"log/slog"
"net"
"net/http"
"net/http/httputil"
"net/textproto"
"net/url"
"os"
"path/filepath"
"sort"
@ -20,19 +30,28 @@ import (
"github.com/mjl-/mox/dns"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/mox-"
"github.com/mjl-/mox/moxio"
)
func recvid(r *http.Request) string {
cid := mox.CidFromCtx(r.Context())
if cid <= 0 {
return ""
}
return " (id " + mox.ReceivedID(cid) + ")"
}
// WebHandle serves an HTTP request by going through the list of WebHandlers,
// checking if there is a domain+path match, and running the handler if so.
// WebHandle runs after the built-in handlers for mta-sts, autoconfig, etc.
// If no handler matched, false is returned.
// WebHandle sets w.Name to that of the matching handler.
func WebHandle(w *loggingWriter, r *http.Request, host dns.Domain) (handled bool) {
redirects, handlers := mox.Conf.WebServer()
func WebHandle(w *loggingWriter, r *http.Request, host dns.IPDomain) (handled bool) {
conf := mox.Conf.DynamicConfig()
redirects := conf.WebDNSDomainRedirects
handlers := conf.WebHandlers
for from, to := range redirects {
if host != from {
if host.Domain != from {
continue
}
u := r.URL
@ -44,7 +63,7 @@ func WebHandle(w *loggingWriter, r *http.Request, host dns.Domain) (handled bool
}
for _, h := range handlers {
if host != h.DNSDomain {
if host.Domain != h.DNSDomain {
continue
}
loc := h.Path.FindStringIndex(r.URL.Path)
@ -60,11 +79,14 @@ func WebHandle(w *loggingWriter, r *http.Request, host dns.Domain) (handled bool
u.Scheme = "https"
u.Host = h.DNSDomain.Name()
w.Handler = h.Name
w.Compress = h.Compress
http.Redirect(w, r, u.String(), http.StatusPermanentRedirect)
return true
}
if h.WebStatic != nil && HandleStatic(h.WebStatic, w, r) {
// We don't want the loggingWriter to override the static handler's decisions to compress.
w.Compress = h.Compress
if h.WebStatic != nil && HandleStatic(h.WebStatic, h.Compress, w, r) {
w.Handler = h.Name
return true
}
@ -76,7 +98,12 @@ func WebHandle(w *loggingWriter, r *http.Request, host dns.Domain) (handled bool
w.Handler = h.Name
return true
}
if h.WebInternal != nil && HandleInternal(h.WebInternal, w, r) {
w.Handler = h.Name
return true
}
}
w.Compress = false
return false
}
@ -127,18 +154,10 @@ table > tbody > tr:nth-child(odd) { background-color: #f8f8f8; }
// slash is written. If a directory is requested and an index.html exists, that
// file is returned. Otherwise, for directories with ListFiles configured, a
// directory listing is returned.
func HandleStatic(h *config.WebStatic, w http.ResponseWriter, r *http.Request) (handled bool) {
log := func() *mlog.Log {
return xlog.WithContext(r.Context())
func HandleStatic(h *config.WebStatic, compress bool, w http.ResponseWriter, r *http.Request) (handled bool) {
log := func() mlog.Log {
return pkglog.WithContext(r.Context())
}
recvid := func() string {
cid := mox.CidFromCtx(r.Context())
if cid <= 0 {
return ""
}
return " (id " + mox.ReceivedID(cid) + ")"
}
if r.Method != "GET" && r.Method != "HEAD" {
if h.ContinueNotFound {
// Give another handler that is presumably configured, for the same path, a chance.
@ -166,13 +185,24 @@ func HandleStatic(h *config.WebStatic, w http.ResponseWriter, r *http.Request) (
// fspath will not have a trailing slash anymore, we'll correct for it
// later when the path turns out to be file instead of a directory.
serveFile := func(name string, mtime time.Time, content *os.File) {
serveFile := func(name string, fi fs.FileInfo, content *os.File) {
// ServeContent only sets a content-type if not already present in the response headers.
hdr := w.Header()
for k, v := range h.ResponseHeaders {
hdr.Add(k, v)
}
http.ServeContent(w, r, name, mtime, content)
// We transparently compress here, but still use ServeContent, because it handles
// conditional requests and range requests. It's a bit of a hack, but on first write
// to staticgzcacheReplacer where we are compressing, we write the full compressed
// file instead, and return an error to ServeContent so it stops. We still have all
// the useful behaviour (status code and headers) from ServeContent.
xw := w
if compress && acceptsGzip(r) && compressibleContent(content) {
xw = &staticgzcacheReplacer{w, r, content.Name(), content, fi.ModTime(), fi.Size(), 0, false}
} else {
w.(*loggingWriter).Compress = false
}
http.ServeContent(xw, r, name, fi.ModTime(), content)
}
f, err := os.Open(fspath)
@ -184,36 +214,46 @@ func HandleStatic(h *config.WebStatic, w http.ResponseWriter, r *http.Request) (
}
http.NotFound(w, r)
return true
} else if errors.Is(err, syscall.ENAMETOOLONG) {
http.NotFound(w, r)
return true
} else if os.IsPermission(err) {
// If we tried opening a directory, we may not have permission to read it, but
// still access files inside it (execute bit), such as index.html. So try to serve it.
index, err := os.Open(filepath.Join(fspath, "index.html"))
if err == nil {
defer index.Close()
var ifi os.FileInfo
ifi, err = index.Stat()
if err != nil {
log().Errorx("stat index.html in directory we cannot list", err, mlog.Field("url", r.URL), mlog.Field("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(), http.StatusInternalServerError)
return true
}
w.Header().Set("Content-Type", "text/html; charset=utf-8")
serveFile("index.html", ifi.ModTime(), index)
if err != nil {
http.Error(w, "403 - permission denied", http.StatusForbidden)
return true
}
http.Error(w, "403 - permission denied", http.StatusForbidden)
defer func() {
err := index.Close()
log().Check(err, "closing index file for serving")
}()
var ifi os.FileInfo
ifi, err = index.Stat()
if err != nil {
log().Errorx("stat index.html in directory we cannot list", err, slog.Any("url", r.URL), slog.String("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(r), http.StatusInternalServerError)
return true
}
w.Header().Set("Content-Type", "text/html; charset=utf-8")
serveFile("index.html", ifi, index)
return true
}
log().Errorx("open file for static file serving", err, mlog.Field("url", r.URL), mlog.Field("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(), http.StatusInternalServerError)
log().Errorx("open file for static file serving", err, slog.Any("url", r.URL), slog.String("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(r), http.StatusInternalServerError)
return true
}
defer f.Close()
defer func() {
if err := f.Close(); err != nil {
log().Check(err, "closing file for static file serving")
}
}()
fi, err := f.Stat()
if err != nil {
log().Errorx("stat file for static file serving", err, mlog.Field("url", r.URL), mlog.Field("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(), http.StatusInternalServerError)
log().Errorx("stat file for static file serving", err, slog.Any("url", r.URL), slog.String("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(r), http.StatusInternalServerError)
return true
}
// Redirect if the local path is a directory.
@ -240,18 +280,23 @@ func HandleStatic(h *config.WebStatic, w http.ResponseWriter, r *http.Request) (
http.Error(w, "403 - permission denied", http.StatusForbidden)
return true
} else if err == nil {
defer index.Close()
defer func() {
if err := index.Close(); err != nil {
log().Check(err, "closing index file for serving")
}
}()
var ifi os.FileInfo
ifi, err = index.Stat()
if err == nil {
w.Header().Set("Content-Type", "text/html; charset=utf-8")
serveFile("index.html", ifi.ModTime(), index)
serveFile("index.html", ifi, index)
return true
}
}
if !os.IsNotExist(err) {
log().Errorx("stat for static file serving", err, mlog.Field("url", r.URL), mlog.Field("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(), http.StatusInternalServerError)
log().Errorx("stat for static file serving", err, slog.Any("url", r.URL), slog.String("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(r), http.StatusInternalServerError)
return true
}
@ -291,8 +336,8 @@ func HandleStatic(h *config.WebStatic, w http.ResponseWriter, r *http.Request) (
if err == io.EOF {
break
} else if err != nil {
log().Errorx("reading directory for file listing", err, mlog.Field("url", r.URL), mlog.Field("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(), http.StatusInternalServerError)
log().Errorx("reading directory for file listing", err, slog.Any("url", r.URL), slog.String("fspath", fspath))
http.Error(w, "500 - internal server error"+recvid(r), http.StatusInternalServerError)
return true
}
}
@ -307,13 +352,13 @@ func HandleStatic(h *config.WebStatic, w http.ResponseWriter, r *http.Request) (
}
}
err = lsTemplate.Execute(w, map[string]any{"Files": files})
if err != nil && !moxio.IsClosed(err) {
log().Errorx("executing directory listing template", err)
if err != nil {
log().Check(err, "executing directory listing template")
}
return true
}
serveFile(fspath, fi.ModTime(), f)
serveFile(fspath, fi, f)
return true
}
@ -369,18 +414,19 @@ func HandleRedirect(h *config.WebRedirect, w http.ResponseWriter, r *http.Reques
return true
}
// HandleInternal passes the request to an internal service.
func HandleInternal(h *config.WebInternal, w http.ResponseWriter, r *http.Request) (handled bool) {
h.Handler.ServeHTTP(w, r)
return true
}
// HandleForward handles a request by forwarding it to another webserver and
// passing the response on. I.e. a reverse proxy.
// passing the response on. I.e. a reverse proxy. It handles websocket
// connections by monitoring the websocket handshake and then just passing along the
// websocket frames.
func HandleForward(h *config.WebForward, w http.ResponseWriter, r *http.Request, path string) (handled bool) {
log := func() *mlog.Log {
return xlog.WithContext(r.Context())
}
recvid := func() string {
cid := mox.CidFromCtx(r.Context())
if cid <= 0 {
return ""
}
return " (id " + mox.ReceivedID(cid) + ")"
log := func() mlog.Log {
return pkglog.WithContext(r.Context())
}
xr := *r
@ -388,6 +434,9 @@ func HandleForward(h *config.WebForward, w http.ResponseWriter, r *http.Request,
if h.StripPath {
u := *r.URL
u.Path = r.URL.Path[len(path):]
if !strings.HasPrefix(u.Path, "/") {
u.Path = "/" + u.Path
}
u.RawPath = ""
r.URL = &u
}
@ -409,22 +458,45 @@ func HandleForward(h *config.WebForward, w http.ResponseWriter, r *http.Request,
proto = "https"
}
r.Header["X-Forwarded-Proto"] = []string{proto}
// note: We are not using "ws" or "wss" for websocket. The request we are
// forwarding is http(s), and we don't yet know if the backend even supports
// websockets.
// todo: add Forwarded header? is anyone using it?
// If we see an Upgrade: websocket, we're going to assume the client needs
// websocket and only attempt to talk websocket with the backend. If the backend
// doesn't do websocket, we'll send back a "bad request" response. For other values
// of Upgrade, we don't do anything special.
// https://www.iana.org/assignments/http-upgrade-tokens/http-upgrade-tokens.xhtml
// Upgrade: ../rfc/9110:2798
// Upgrade headers are not for http/1.0, ../rfc/9110:2880
// Websocket client "handshake" is described at ../rfc/6455:1134
upgrade := r.Header.Get("Upgrade")
if upgrade != "" && !(r.ProtoMajor == 1 && r.ProtoMinor == 0) {
// Websockets have case-insensitive string "websocket".
for _, s := range strings.Split(upgrade, ",") {
if strings.EqualFold(textproto.TrimString(s), "websocket") {
forwardWebsocket(h, w, r, path)
return true
}
}
}
// ReverseProxy will append any remaining path to the configured target URL.
proxy := httputil.NewSingleHostReverseProxy(h.TargetURL)
proxy.FlushInterval = time.Duration(-1) // Flush after each write.
proxy.ErrorLog = golog.New(mlog.ErrWriter(mlog.New("net/http/httputil").WithContext(r.Context()), mlog.LevelDebug, "reverseproxy error"), "", 0)
proxy.ErrorLog = golog.New(mlog.LogWriter(mlog.New("net/http/httputil", nil).WithContext(r.Context()), mlog.LevelDebug, "reverseproxy error"), "", 0)
proxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
if errors.Is(err, context.Canceled) {
log().Debugx("forwarding request to backend webserver", err, mlog.Field("url", r.URL))
log().Debugx("forwarding request to backend webserver", err, slog.Any("url", r.URL))
return
}
log().Errorx("forwarding request to backend webserver", err, mlog.Field("url", r.URL))
log().Errorx("forwarding request to backend webserver", err, slog.Any("url", r.URL))
if os.IsTimeout(err) {
http.Error(w, "504 - gateway timeout"+recvid(), http.StatusGatewayTimeout)
http.Error(w, "504 - gateway timeout"+recvid(r), http.StatusGatewayTimeout)
} else {
http.Error(w, "502 - bad gateway"+recvid(), http.StatusBadGateway)
http.Error(w, "502 - bad gateway"+recvid(r), http.StatusBadGateway)
}
}
whdr := w.Header()
@ -434,3 +506,365 @@ func HandleForward(h *config.WebForward, w http.ResponseWriter, r *http.Request,
proxy.ServeHTTP(w, r)
return true
}
var errResponseNotWebsocket = errors.New("not a valid websocket response to request")
var errNotImplemented = errors.New("functionality not yet implemented")
// Request has an Upgrade: websocket header. Check more websocketiness about the
// request. If it looks good, we forward it to the backend. If the backend responds
// with a valid websocket response, indicating it is indeed a websocket server, we
// pass the response along and start copying data between the client and the
// backend. We don't look at the frames and payloads. The backend already needs to
// know enough websocket to handle the frames. It wouldn't necessarily hurt to
// monitor the frames too, and check if they are valid, but it's quite a bit of
// work for little benefit. Besides, the whole point of websockets is to exchange
// bytes without HTTP being in the way, so let's do that.
func forwardWebsocket(h *config.WebForward, w http.ResponseWriter, r *http.Request, path string) (handled bool) {
log := func() mlog.Log {
return pkglog.WithContext(r.Context())
}
lw := w.(*loggingWriter)
lw.WebsocketRequest = true // For correct protocol in metrics.
// We check the requested websocket version first. A future websocket version may
// have different request requirements.
// ../rfc/6455:1160
wsversion := r.Header.Get("Sec-WebSocket-Version")
if wsversion != "13" {
// Indicate we only support version 13. This should get a client from the future to fall back to version 13.
// ../rfc/6455:1435
w.Header().Set("Sec-WebSocket-Version", "13")
http.Error(w, "400 - bad request - websockets only supported with version 13"+recvid(r), http.StatusBadRequest)
lw.error(fmt.Errorf("Sec-WebSocket-Version %q not supported", wsversion))
return true
}
// ../rfc/6455:1143
if r.Method != "GET" {
http.Error(w, "400 - bad request - websockets only allowed with method GET"+recvid(r), http.StatusBadRequest)
lw.error(fmt.Errorf("websocket request only allowed with method GET"))
return true
}
// ../rfc/6455:1153
var connectionUpgrade bool
for _, s := range strings.Split(r.Header.Get("Connection"), ",") {
if strings.EqualFold(textproto.TrimString(s), "upgrade") {
connectionUpgrade = true
break
}
}
if !connectionUpgrade {
http.Error(w, "400 - bad request - connection header must be \"upgrade\""+recvid(r), http.StatusBadRequest)
lw.error(fmt.Errorf(`connection header is %q, must be "upgrade"`, r.Header.Get("Connection")))
return true
}
// ../rfc/6455:1156
wskey := r.Header.Get("Sec-WebSocket-Key")
key, err := base64.StdEncoding.DecodeString(wskey)
if err != nil || len(key) != 16 {
http.Error(w, "400 - bad request - websockets requires Sec-WebSocket-Key with 16 bytes base64-encoded value"+recvid(r), http.StatusBadRequest)
lw.error(fmt.Errorf("bad Sec-WebSocket-Key %q, must be 16 byte base64-encoded value", wskey))
return true
}
// ../rfc/6455:1162
// We don't look at the origin header. The backend needs to handle it, if it thinks
// that helps...
// We also don't look at Sec-WebSocket-Protocol and Sec-WebSocket-Extensions. The
// backend can set them, but it doesn't influence our forwarding of the data.
// If this is not a hijacker, there is no point in connecting to the backend.
hj, ok := lw.W.(http.Hijacker)
var cbr *bufio.ReadWriter
if !ok {
log().Info("cannot turn http connection into tcp connection (http.Hijacker)")
http.Error(w, "501 - not implemented - cannot turn this connection into websocket"+recvid(r), http.StatusNotImplemented)
lw.error(fmt.Errorf("connection not a http.Hijacker (%T)", lw.W))
return
}
freq := *r
freq.Proto = "HTTP/1.1"
freq.ProtoMajor = 1
freq.ProtoMinor = 1
fresp, beconn, err := websocketTransact(r.Context(), h.TargetURL, &freq)
if err != nil {
if errors.Is(err, errResponseNotWebsocket) {
http.Error(w, "400 - bad request - websocket not supported"+recvid(r), http.StatusBadRequest)
} else if errors.Is(err, errNotImplemented) {
http.Error(w, "501 - not implemented - "+err.Error()+recvid(r), http.StatusNotImplemented)
} else if os.IsTimeout(err) {
http.Error(w, "504 - gateway timeout"+recvid(r), http.StatusGatewayTimeout)
} else {
http.Error(w, "502 - bad gateway"+recvid(r), http.StatusBadGateway)
}
lw.error(err)
return
}
defer func() {
if beconn != nil {
if err := beconn.Close(); err != nil {
log().Check(err, "closing backend websocket connection")
}
}
}()
// Hijack the client connection so we can write the response ourselves, and start
// copying the websocket frames.
var cconn net.Conn
cconn, cbr, err = hj.Hijack()
if err != nil {
log().Debugx("cannot turn http transaction into websocket connection", err)
http.Error(w, "501 - not implemented - cannot turn this connection into websocket"+recvid(r), http.StatusNotImplemented)
lw.error(err)
return
}
defer func() {
if cconn != nil {
if err := cconn.Close(); err != nil {
log().Check(err, "closing client websocket connection")
}
}
}()
// Below this point, we can no longer write to the ResponseWriter.
// Mark as websocket response, for logging.
lw.WebsocketResponse = true
lw.setStatusCode(fresp.StatusCode)
for k, v := range h.ResponseHeaders {
fresp.Header.Add(k, v)
}
// Write the response to the client, completing its websocket handshake.
if err := fresp.Write(cconn); err != nil {
lw.error(fmt.Errorf("writing websocket response to client: %w", err))
return
}
errc := make(chan error, 1)
// Copy from client to backend.
go func() {
buf, err := cbr.Peek(cbr.Reader.Buffered())
if err != nil {
errc <- err
return
}
if len(buf) > 0 {
n, err := beconn.Write(buf)
if err != nil {
errc <- err
return
}
lw.SizeFromClient += int64(n)
}
n, err := io.Copy(beconn, cconn)
lw.SizeFromClient += n
errc <- err
}()
// Copy from backend to client.
go func() {
n, err := io.Copy(cconn, beconn)
lw.SizeToClient = n
errc <- err
}()
// Stop and close connection on first error from either side, typically a closed
// connection whose closing was already announced with a websocket frame.
lw.error(<-errc)
// Close connections so other goroutine stops as well.
if err := cconn.Close(); err != nil {
log().Check(err, "closing client websocket connection")
}
if err := beconn.Close(); err != nil {
log().Check(err, "closing backend websocket connection")
}
// Wait for goroutine so it has updated the logWriter.Size*Client fields before we
// continue with logging.
<-errc
cconn = nil
return true
}
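// For reference, a minimal client request that would pass the checks in
// forwardWebsocket above could look like this (a sketch, not taken from this
// change; host and path are placeholders):
//
//	GET /ws/ HTTP/1.1
//	Host: mox.example
//	Connection: Upgrade
//	Upgrade: websocket
//	Sec-WebSocket-Version: 13
//	Sec-WebSocket-Key: AAAAAAAAAAAAAAAAAAAAAA==
//
// The key must be a base64-encoded 16-byte value; the Connection header may
// contain several comma-separated tokens as long as one of them is "upgrade".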
func websocketTransact(ctx context.Context, targetURL *url.URL, r *http.Request) (rresp *http.Response, rconn net.Conn, rerr error) {
log := func() mlog.Log {
return pkglog.WithContext(r.Context())
}
// Dial the backend, possibly doing TLS. We assume the net/http DefaultTransport is
// unmodified.
transport := http.DefaultTransport.(*http.Transport)
// We haven't implemented using a proxy for websocket requests yet. If we need one,
// return an error instead of trying to connect directly, which would be a
// potential security issue.
treq := *r
treq.URL = targetURL
if purl, err := transport.Proxy(&treq); err != nil {
return nil, nil, fmt.Errorf("determining proxy for websocket backend connection: %w", err)
} else if purl != nil {
return nil, nil, fmt.Errorf("%w: proxy required for websocket connection to backend", errNotImplemented) // todo: implement?
}
host, port, err := net.SplitHostPort(targetURL.Host)
if err != nil {
host = targetURL.Host
if targetURL.Scheme == "https" {
port = "443"
} else {
port = "80"
}
}
addr := net.JoinHostPort(host, port)
conn, err := transport.DialContext(r.Context(), "tcp", addr)
if err != nil {
return nil, nil, fmt.Errorf("dial: %w", err)
}
if targetURL.Scheme == "https" {
tlsconn := tls.Client(conn, transport.TLSClientConfig)
ctx, cancel := context.WithTimeout(r.Context(), transport.TLSHandshakeTimeout)
defer cancel()
if err := tlsconn.HandshakeContext(ctx); err != nil {
return nil, nil, fmt.Errorf("tls handshake: %w", err)
}
conn = tlsconn
}
defer func() {
if rerr != nil {
if xerr := conn.Close(); xerr != nil {
log().Check(xerr, "cleaning up websocket connection")
}
}
}()
// todo: make timeout configurable?
if err := conn.SetDeadline(time.Now().Add(30 * time.Second)); err != nil {
log().Check(err, "set deadline for websocket request to backend")
}
// Set clean connection headers.
removeHopByHopHeaders(r.Header)
r.Header.Set("Connection", "Upgrade")
r.Header.Set("Upgrade", "websocket")
// Write the websocket request to the backend.
if err := r.Write(conn); err != nil {
return nil, nil, fmt.Errorf("writing request to backend: %w", err)
}
// Read response from backend.
br := bufio.NewReader(conn)
resp, err := http.ReadResponse(br, r)
if err != nil {
return nil, nil, fmt.Errorf("reading response from backend: %w", err)
}
defer func() {
if rerr != nil {
if xerr := resp.Body.Close(); xerr != nil {
log().Check(xerr, "closing response body after error")
}
}
}()
if err := conn.SetDeadline(time.Time{}); err != nil {
log().Check(err, "clearing deadline on websocket connection to backend")
}
// Check that the response from the backend server indicates it is websocket. If
// not, don't pass the backend response, but an error that websocket is not
// appropriate.
if err := checkWebsocketResponse(resp, r); err != nil {
return resp, nil, err
}
// note: net/http.Response.Body documents that it implements io.Writer for a
// status: 101 response. But that's not the case when the response has been read
// with http.ReadResponse. We'll write to the connection directly.
buf, err := br.Peek(br.Buffered())
if err != nil {
return resp, nil, fmt.Errorf("peek at buffered data written by backend: %w", err)
}
return resp, websocketConn{io.MultiReader(bytes.NewReader(buf), conn), conn}, nil
}
// A net.Conn but with reads coming from an io multireader (due to buffered reader
// needed for http.ReadResponse).
type websocketConn struct {
r io.Reader
net.Conn
}
func (c websocketConn) Read(buf []byte) (int, error) {
return c.r.Read(buf)
}
// Check that an HTTP response (from a backend) is a valid websocket response, i.e.
// that it accepts the WebSocket "upgrade".
// ../rfc/6455:1299
func checkWebsocketResponse(resp *http.Response, req *http.Request) error {
if resp.StatusCode != 101 {
return fmt.Errorf("%w: response http status not 101 but %s", errResponseNotWebsocket, resp.Status)
}
if upgrade := resp.Header.Get("Upgrade"); !strings.EqualFold(upgrade, "websocket") {
return fmt.Errorf(`%w: response http status is 101, but Upgrade header is %q, should be "websocket"`, errResponseNotWebsocket, upgrade)
}
if connection := resp.Header.Get("Connection"); !strings.EqualFold(connection, "upgrade") {
return fmt.Errorf(`%w: response http status is 101, Upgrade is websocket, but Connection header is %q, should be "Upgrade"`, errResponseNotWebsocket, connection)
}
accept, err := base64.StdEncoding.DecodeString(resp.Header.Get("Sec-WebSocket-Accept"))
if err != nil {
return fmt.Errorf(`%w: response http status, Upgrade and Connection header are websocket, but Sec-WebSocket-Accept header is not valid base64: %v`, errResponseNotWebsocket, err)
}
exp := sha1.Sum([]byte(req.Header.Get("Sec-WebSocket-Key") + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"))
if !bytes.Equal(accept, exp[:]) {
return fmt.Errorf(`%w: response http status, Upgrade and Connection header are websocket, but backend Sec-WebSocket-Accept value does not match`, errResponseNotWebsocket)
}
// We don't have requirements for the other Sec-WebSocket headers. ../rfc/6455:1340
return nil
}
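// deriveWebsocketAccept is a standalone sketch (not part of this change) of how
// a backend computes the Sec-WebSocket-Accept value that checkWebsocketResponse
// verifies above. The key/accept pair in the trailing comment is the sample
// from RFC 6455.
func deriveWebsocketAccept(key string) string {
	sum := sha1.Sum([]byte(key + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"))
	return base64.StdEncoding.EncodeToString(sum[:])
}

// deriveWebsocketAccept("dGhlIHNhbXBsZSBub25jZQ==") == "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="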
// From Go 1.20.4 src/net/http/httputil/reverseproxy.go:
// Hop-by-hop headers. These are removed when sent to the backend.
// As of RFC 7230, hop-by-hop headers are required to appear in the
// Connection header field. These are the headers defined by the
// obsoleted RFC 2616 (section 13.5.1) and are used for backward
// compatibility.
// ../rfc/2616:5128
var hopHeaders = []string{
"Connection",
"Proxy-Connection", // non-standard but still sent by libcurl and rejected by e.g. google
"Keep-Alive",
"Proxy-Authenticate",
"Proxy-Authorization",
"Te", // canonicalized version of "TE"
"Trailer", // not Trailers per URL above; https://www.rfc-editor.org/errata_search.php?eid=4522
"Transfer-Encoding",
"Upgrade",
}
// From Go 1.20.4 src/net/http/httputil/reverseproxy.go:
// removeHopByHopHeaders removes hop-by-hop headers.
func removeHopByHopHeaders(h http.Header) {
// RFC 7230, section 6.1: Remove headers listed in the "Connection" header.
// ../rfc/7230:2817
for _, f := range h["Connection"] {
for _, sf := range strings.Split(f, ",") {
if sf = textproto.TrimString(sf); sf != "" {
h.Del(sf)
}
}
}
// RFC 2616, section 13.5.1: Remove a set of known hop-by-hop headers.
// This behavior is superseded by the RFC 7230 Connection header, but
// preserve it for backwards compatibility.
// ../rfc/2616:5128
for _, f := range hopHeaders {
h.Del(f)
}
}
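// Sketch (not part of this change) of the effect of removeHopByHopHeaders:
// header names listed in the Connection header are removed first, then the
// fixed hopHeaders list, while end-to-end headers such as Authorization remain.
func removeHopByHopExample() http.Header {
	h := http.Header{}
	h.Set("Connection", "Upgrade, X-Custom")
	h.Set("Upgrade", "websocket")
	h.Set("X-Custom", "1")
	h.Set("Authorization", "placeholder")
	removeHopByHopHeaders(h)
	// Only Authorization remains; Connection, Upgrade and X-Custom are removed.
	return h
}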

View File

@@ -2,6 +2,9 @@ package http
import (
"bytes"
"fmt"
"io"
"net"
"net/http"
"net/http/httptest"
"net/url"
@@ -10,14 +13,25 @@ import (
"strings"
"testing"
"golang.org/x/net/websocket"
"github.com/mjl-/mox/mox-"
)
func tcheck(t *testing.T, err error, msg string) {
t.Helper()
if err != nil {
t.Fatalf("%s: %s", msg, err)
}
}
func TestWebserver(t *testing.T) {
os.RemoveAll("../testdata/webserver/data")
mox.ConfigStaticPath = "../testdata/webserver/mox.conf"
mox.ConfigStaticPath = filepath.FromSlash("../testdata/webserver/mox.conf")
mox.ConfigDynamicPath = filepath.Join(filepath.Dir(mox.ConfigStaticPath), "domains.conf")
mox.MustLoadConfig(false)
mox.MustLoadConfig(true, false)
loadStaticGzipCache(mox.DataDirPath("tmp/httpstaticcompresscache"), 1024*1024)
srv := &serve{Webserver: true}
@@ -54,10 +68,12 @@ func TestWebserver(t *testing.T) {
test("GET", "http://schemeredir.example", nil, http.StatusPermanentRedirect, "", map[string]string{"Location": "https://schemeredir.example/"})
test("GET", "https://schemeredir.example", nil, http.StatusNotFound, "", nil)
test("GET", "http://mox.example/static/", nil, http.StatusOK, "", map[string]string{"X-Test": "mox"}) // index.html
test("GET", "http://mox.example/static/dir/", nil, http.StatusOK, "", map[string]string{"X-Test": "mox"}) // listing
test("GET", "http://mox.example/static/dir", nil, http.StatusTemporaryRedirect, "", map[string]string{"Location": "/static/dir/"}) // redirect to dir
test("GET", "http://mox.example/static/bogus", nil, http.StatusNotFound, "", nil)
accgzip := map[string]string{"Accept-Encoding": "gzip"}
test("GET", "http://mox.example/static/", accgzip, http.StatusOK, "", map[string]string{"X-Test": "mox", "Content-Encoding": "gzip"}) // index.html
test("GET", "http://mox.example/static/dir/hi.txt", accgzip, http.StatusOK, "", map[string]string{"X-Test": "mox", "Content-Encoding": ""}) // too small to compress
test("GET", "http://mox.example/static/dir/", accgzip, http.StatusOK, "", map[string]string{"X-Test": "mox", "Content-Encoding": "gzip"}) // listing
test("GET", "http://mox.example/static/dir", accgzip, http.StatusTemporaryRedirect, "", map[string]string{"Location": "/static/dir/"}) // redirect to dir
test("GET", "http://mox.example/static/bogus", accgzip, http.StatusNotFound, "", map[string]string{"Content-Encoding": ""})
test("GET", "http://mox.example/nolist/", nil, http.StatusOK, "", nil) // index.html
test("GET", "http://mox.example/nolist/dir/", nil, http.StatusForbidden, "", nil) // no listing
@@ -118,4 +134,209 @@ func TestWebserver(t *testing.T) {
test("GET", "http://mox.example/bogus", nil, http.StatusNotFound, "", nil) // path not registered.
test("GET", "http://bogus.mox.example/static/", nil, http.StatusNotFound, "", nil) // domain not registered.
test("GET", "http://mox.example/xadmin/", nil, http.StatusOK, "", nil) // internal admin service
test("GET", "http://mox.example/xaccount/", nil, http.StatusOK, "", nil) // internal account service
test("GET", "http://mox.example/xwebmail/", nil, http.StatusOK, "", nil) // internal webmail service
test("GET", "http://mox.example/xwebapi/v0/", nil, http.StatusOK, "", nil) // internal webapi service
npaths := len(staticgzcache.paths)
if npaths != 1 {
t.Fatalf("%d file(s) in staticgzcache, expected 1", npaths)
}
loadStaticGzipCache(mox.DataDirPath("tmp/httpstaticcompresscache"), 1024*1024)
npaths = len(staticgzcache.paths)
if npaths != 1 {
t.Fatalf("%d file(s) in staticgzcache after loading from disk, expected 1", npaths)
}
loadStaticGzipCache(mox.DataDirPath("tmp/httpstaticcompresscache"), 0)
npaths = len(staticgzcache.paths)
if npaths != 0 {
t.Fatalf("%d file(s) in staticgzcache after setting max size to 0, expected 0", npaths)
}
loadStaticGzipCache(mox.DataDirPath("tmp/httpstaticcompresscache"), 0)
npaths = len(staticgzcache.paths)
if npaths != 0 {
t.Fatalf("%d file(s) in staticgzcache after setting max size to 0 and reloading from disk, expected 0", npaths)
}
}
func TestWebsocket(t *testing.T) {
os.RemoveAll("../testdata/websocket/data")
mox.ConfigStaticPath = filepath.FromSlash("../testdata/websocket/mox.conf")
mox.ConfigDynamicPath = filepath.Join(filepath.Dir(mox.ConfigStaticPath), "domains.conf")
mox.MustLoadConfig(true, false)
srv := &serve{Webserver: true}
var handler http.Handler // Active handler during test.
backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
handler.ServeHTTP(w, r)
}))
defer backend.Close()
backendURL, err := url.Parse(backend.URL)
if err != nil {
t.Fatalf("parsing backend url: %v", err)
}
backendURL.Path = "/"
// warning: it is not normally allowed to access the dynamic config without lock. don't propagate accesses like this!
mox.Conf.Dynamic.WebHandlers[len(mox.Conf.Dynamic.WebHandlers)-1].WebForward.TargetURL = backendURL
server := httptest.NewServer(srv)
defer server.Close()
serverURL, err := url.Parse(server.URL)
tcheck(t, err, "parsing server url")
_, port, err := net.SplitHostPort(serverURL.Host)
tcheck(t, err, "parsing host port in server url")
wsurl := fmt.Sprintf("ws://%s/ws/", net.JoinHostPort("localhost", port))
handler = websocket.Handler(func(c *websocket.Conn) {
io.Copy(c, c)
})
// Test a correct websocket connection.
wsconn, err := websocket.Dial(wsurl, "ignored", "http://ignored.example")
tcheck(t, err, "websocket dial")
_, err = fmt.Fprint(wsconn, "test")
tcheck(t, err, "write to websocket")
buf := make([]byte, 128)
n, err := wsconn.Read(buf)
tcheck(t, err, "read from websocket")
if string(buf[:n]) != "test" {
t.Fatalf(`got websocket data %q, expected "test"`, buf[:n])
}
err = wsconn.Close()
tcheck(t, err, "closing websocket connection")
// Test with server.ServeHTTP directly.
test := func(method string, reqhdrs map[string]string, expCode int, expHeaders map[string]string) {
t.Helper()
req := httptest.NewRequest(method, wsurl, nil)
for k, v := range reqhdrs {
req.Header.Add(k, v)
}
rw := httptest.NewRecorder()
rw.Body = &bytes.Buffer{}
srv.ServeHTTP(rw, req)
resp := rw.Result()
if resp.StatusCode != expCode {
t.Fatalf("got statuscode %d, expected %d", resp.StatusCode, expCode)
}
for k, v := range expHeaders {
if xv := resp.Header.Get(k); xv != v {
t.Fatalf("got %q for header %q, expected %q", xv, k, v)
}
}
}
wsreqhdrs := map[string]string{
"Upgrade": "keep-alive, websocket",
"Connection": "X, Upgrade",
"Sec-Websocket-Version": "13",
"Sec-Websocket-Key": "AAAAAAAAAAAAAAAAAAAAAA==",
}
test("POST", wsreqhdrs, http.StatusBadRequest, nil)
clone := func(m map[string]string) map[string]string {
r := map[string]string{}
for k, v := range m {
r[k] = v
}
return r
}
hdrs := clone(wsreqhdrs)
hdrs["Sec-Websocket-Version"] = "14"
test("GET", hdrs, http.StatusBadRequest, map[string]string{"Sec-Websocket-Version": "13"})
httpurl := fmt.Sprintf("http://%s/ws/", net.JoinHostPort("localhost", port))
// Must now do actual HTTP requests and read the HTTP response. Cannot call
// ServeHTTP because ResponseRecorder is not a http.Hijacker.
test = func(method string, reqhdrs map[string]string, expCode int, expHeaders map[string]string) {
t.Helper()
req, err := http.NewRequest(method, httpurl, nil)
tcheck(t, err, "http newrequest")
for k, v := range reqhdrs {
req.Header.Add(k, v)
}
resp, err := http.DefaultClient.Do(req)
tcheck(t, err, "http transaction")
if resp.StatusCode != expCode {
t.Fatalf("got statuscode %d, expected %d", resp.StatusCode, expCode)
}
for k, v := range expHeaders {
if xv := resp.Header.Get(k); xv != v {
t.Fatalf("got %q for header %q, expected %q", xv, k, v)
}
}
}
hdrs = clone(wsreqhdrs)
hdrs["Sec-Websocket-Key"] = "malformed"
test("GET", hdrs, http.StatusBadRequest, nil)
hdrs = clone(wsreqhdrs)
hdrs["Sec-Websocket-Key"] = "c2hvcnQK" // "short"
test("GET", hdrs, http.StatusBadRequest, nil)
// Not responding with a 101, but with regular 200 OK response.
handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.Error(w, "bad", http.StatusOK)
})
test("GET", wsreqhdrs, http.StatusBadRequest, nil)
// Respond with 101, but other websocket response headers missing.
handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusSwitchingProtocols)
})
test("GET", wsreqhdrs, http.StatusBadRequest, nil)
// With Upgrade: websocket, without Connection: Upgrade
handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Upgrade", "websocket")
w.WriteHeader(http.StatusSwitchingProtocols)
})
test("GET", wsreqhdrs, http.StatusBadRequest, nil)
// With malformed Sec-WebSocket-Accept response header.
handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
h := w.Header()
h.Set("Upgrade", "websocket")
h.Set("Connection", "Upgrade")
h.Set("Sec-WebSocket-Accept", "malformed")
w.WriteHeader(http.StatusSwitchingProtocols)
})
test("GET", wsreqhdrs, http.StatusBadRequest, nil)
// With valid base64 but non-matching Sec-WebSocket-Accept response header.
handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
h := w.Header()
h.Set("Upgrade", "websocket")
h.Set("Connection", "Upgrade")
h.Set("Sec-WebSocket-Accept", "YmFk") // "bad"
w.WriteHeader(http.StatusSwitchingProtocols)
})
test("GET", wsreqhdrs, http.StatusBadRequest, nil)
// All good.
wsresphdrs := map[string]string{
"Connection": "Upgrade",
"Upgrade": "websocket",
"Sec-Websocket-Accept": "ICX+Yqv66kxgM0FcWaLWlFLwTAI=",
"X-Test": "mox",
}
handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
h := w.Header()
h.Set("Upgrade", "websocket")
h.Set("Connection", "Upgrade")
h.Set("Sec-WebSocket-Accept", "ICX+Yqv66kxgM0FcWaLWlFLwTAI=")
w.WriteHeader(http.StatusSwitchingProtocols)
})
test("GET", wsreqhdrs, http.StatusSwitchingProtocols, wsresphdrs)
}

View File

@@ -1,39 +1,102 @@
/*
Package imapclient provides an IMAP4 client, primarily for testing the IMAP4 server.
Package imapclient provides an IMAP4 client implementing IMAP4rev1 (RFC 3501),
IMAP4rev2 (RFC 9051) and various extensions.
Commands can be sent to the server free-form, but responses are parsed strictly.
Behaviour that may not be required by the IMAP4 specification may be expected by
this client.
Warning: Currently primarily for testing the mox IMAP4 server. Behaviour that
may not be required by the IMAP4 specification may be expected by this client.
See [Conn] for a high-level client for executing IMAP commands. Use its embedded
[Proto] for lower-level writing of commands and reading of responses.
*/
package imapclient
/*
- Try to keep the parsing method names and the types similar to the ABNF names in the RFCs.
- todo: have mode for imap4rev1 vs imap4rev2, refusing what is not allowed. we are accepting too much now.
- todo: stricter parsing. xnonspace() and xword() should be replaced by proper parsers.
*/
import (
"bufio"
"crypto/tls"
"fmt"
"io"
"log/slog"
"net"
"reflect"
"strings"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/moxio"
)
// Conn is an IMAP connection to a server.
// Conn is a connection to an IMAP server.
//
// Method names on Conn are the names of IMAP commands. CloseMailbox, which
// executes the IMAP CLOSE command, is an exception. The Close method closes the
// connection.
//
// The methods starting with MSN are the original (old) IMAP commands. The variants
// starting with UID should almost always be used instead, if available.
//
// The methods on Conn typically return errors of type Error or Response. Error
// represents protocol and i/o level errors, including io.ErrDeadlineExceeded and
// various errors for closed connections. Response is returned as error if the IMAP
// result is NO or BAD instead of OK. The responses returned by the IMAP command
// methods can also be non-zero on errors. Callers may wish to process any untagged
// responses.
//
// The IMAP command methods defined on Conn don't interpret the untagged responses
// except for untagged CAPABILITY and untagged ENABLED responses, and the
// CAPABILITY response code. Fields CapAvailable and CapEnabled are updated when
// those untagged responses are received.
//
// Capabilities indicate which optional IMAP functionality is supported by a
// server. Capabilities are typically implicitly enabled when the client sends a
// command using syntax of an optional extension. For extensions without new
// syntax from client to server, but with new behaviour or syntax from server to
// client, the client needs to explicitly enable the capability with the ENABLE
// command; see the Enable method.
type Conn struct {
conn net.Conn
r *bufio.Reader
panic bool
// If true, server sent a PREAUTH tag and the connection is already authenticated,
// e.g. based on TLS certificate authentication.
Preauth bool
// Capabilities available at server, from CAPABILITY command or response code.
CapAvailable []Capability
// Capabilities marked as enabled by the server, typically after an ENABLE command.
CapEnabled []Capability
// Proto provides lower-level functions for interacting with the IMAP connection,
// such as reading and writing individual lines/commands/responses.
Proto
}
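// connectSketch is a minimal usage sketch of Conn (not part of this change);
// the host, credentials and mailbox below are placeholders.
func connectSketch() error {
	conn, err := tls.Dial("tcp", "imap.example.com:993", nil)
	if err != nil {
		return err
	}
	c, err := New(conn, &Opts{Logger: slog.Default()})
	if err != nil {
		return err
	}
	defer c.Close()
	if _, err := c.Login("mjl@example.com", "password"); err != nil {
		return err
	}
	if _, err := c.Select("Inbox"); err != nil {
		return err
	}
	_, err = c.Logout()
	return err
}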
// Proto provides low-level operations for writing requests and reading responses
// on an IMAP connection.
//
// To implement the IDLE command, write "IDLE" using [Proto.WriteCommandf], then
// read a line with [Proto.Readline]. If it starts with "+ ", the connection is in
// idle mode and untagged responses can be read using [Proto.ReadUntagged]. If the
// line doesn't start with "+ ", use [ParseResult] to interpret it as a response to
// IDLE, which should be a NO or BAD. To abort idle mode, write "DONE" using
// [Proto.Writelinef] and wait until a result line has been read.
type Proto struct {
// Connection, may be original TCP or TLS connection. Reads go through c.br, and
// writes through c.xbw. The "x" for the writes indicates that failed writes cause
// an i/o panic, which is either turned into a returned error, or passed on (see
// boolean panic). The reader and writer wrap a tracing reader/writer and may wrap
// flate compression.
conn net.Conn
connBroken bool // If connection is broken, we won't flush (and write) again.
br *bufio.Reader
tr *moxio.TraceReader
xbw *bufio.Writer
compress bool // If compression is enabled, we must flush flateWriter and its target original bufio writer.
xflateWriter *moxio.FlateWriter
xflateBW *bufio.Writer
xtw *moxio.TraceWriter
log mlog.Log
errHandle func(err error) // If set, called for all errors. Can panic. Used for imapserver tests.
tagGen int
record bool // If true, bytes read are added to recordBuf. recorded() resets.
recordBuf []byte
LastTag string
CapAvailable map[Capability]struct{} // Capabilities available at server, from CAPABILITY command or response code.
CapEnabled map[Capability]struct{} // Capabilities enabled through ENABLE command.
lastTag string
}
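// idleSketch is a rough sketch (not part of this change) of the IDLE flow
// described in the Proto documentation above: write IDLE, check for the "+ "
// continuation, read an untagged response, then write DONE and read the result.
func idleSketch(c *Conn) error {
	if err := c.WriteCommandf("", "idle"); err != nil {
		return err
	}
	line, err := c.Readline()
	if err != nil {
		return err
	}
	if !strings.HasPrefix(line, "+ ") {
		_, result, err := ParseResult(line)
		if err != nil {
			return err
		}
		return fmt.Errorf("idle rejected, status %q", result.Status)
	}
	// Read a single untagged response, e.g. EXISTS for a newly delivered message.
	if _, err := c.ReadUntagged(); err != nil {
		return err
	}
	// Abort idle mode and wait for the tagged result line.
	if err := c.Writelinef("DONE"); err != nil {
		return err
	}
	_, err = c.ReadResponse()
	return err
}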
// Error is a parse or other protocol error.
@@ -47,22 +110,52 @@ func (e Error) Unwrap() error {
return e.err
}
// New creates a new client on conn.
// Opts has optional fields that influence behaviour of a Conn.
type Opts struct {
Logger *slog.Logger
// Error is called for IMAP-level and connection-level errors during the IMAP
// command methods on Conn, not for errors in calls on Proto. Error is allowed to
// call panic.
Error func(err error)
}
// New initializes a new IMAP client on conn.
//
// If xpanic is true, functions that would return an error instead panic. For parse
// errors, the resulting stack traces typically show what was being parsed.
// Conn should normally be a TLS connection, typically connected to port 993 of an
// IMAP server. Alternatively, conn can be a plain TCP connection to port 143. TLS
// should be enabled on plain TCP connections with the [Conn.StartTLS] method.
//
// The initial untagged greeting response is read and must be "OK".
func New(conn net.Conn, xpanic bool) (client *Conn, rerr error) {
// The initial untagged greeting response is read and must be "OK" or
// "PREAUTH". If preauth, the connection is already in authenticated state,
// typically through TLS client certificate. This is indicated in Conn.Preauth.
//
// Logging is written to opts.Logger. In particular, IMAP protocol traces are
// written with prefixes "CR: " and "CW: " (client read/write) as quoted strings at
// levels Debug-4, with authentication messages at Debug-6 and (user) data at level
// Debug-8.
func New(conn net.Conn, opts *Opts) (client *Conn, rerr error) {
c := Conn{
conn: conn,
r: bufio.NewReader(conn),
panic: xpanic,
CapAvailable: map[Capability]struct{}{},
CapEnabled: map[Capability]struct{}{},
Proto: Proto{conn: conn},
}
defer c.recover(&rerr)
var clog *slog.Logger
if opts != nil {
c.errHandle = opts.Error
clog = opts.Logger
} else {
clog = slog.Default()
}
c.log = mlog.New("imapclient", clog)
c.tr = moxio.NewTraceReader(c.log, "CR: ", &c)
c.br = bufio.NewReader(c.tr)
// Writes are buffered and write to Conn, which may panic.
c.xtw = moxio.NewTraceWriter(c.log, "CW: ", &c)
c.xbw = bufio.NewWriter(c.xtw)
defer c.recoverErr(&rerr)
tag := c.xnonspace()
if tag != "*" {
c.xerrorf("expected untagged *, got %q", tag)
@@ -74,9 +167,15 @@ func New(conn net.Conn, xpanic bool) (client *Conn, rerr error) {
if x.Status != OK {
c.xerrorf("greeting, got status %q, expected OK", x.Status)
}
if x.Code != nil {
if caps, ok := x.Code.(CodeCapability); ok {
c.CapAvailable = caps
}
}
return &c, nil
case UntaggedPreauth:
c.xerrorf("greeting: unexpected preauth")
c.Preauth = true
return &c, nil
case UntaggedBye:
c.xerrorf("greeting: server sent bye")
default:
@@ -85,8 +184,16 @@ func New(conn net.Conn, xpanic bool) (client *Conn, rerr error) {
panic("not reached")
}
func (c *Conn) recover(rerr *error) {
if c.panic {
func (c *Conn) recoverErr(rerr *error) {
c.recover(rerr, nil)
}
func (c *Conn) recover(rerr *error, resp *Response) {
if *rerr != nil {
if r, ok := (*rerr).(Response); ok && resp != nil {
*resp = r
}
c.errHandle(*rerr)
return
}
@@ -94,200 +201,431 @@ func (c *Conn) recover(rerr *error) {
if x == nil {
return
}
err, ok := x.(Error)
if !ok {
var err error
switch e := x.(type) {
case Error:
err = e
case Response:
err = e
if resp != nil {
*resp = e
}
default:
panic(x)
}
if c.errHandle != nil {
c.errHandle(err)
}
*rerr = err
}
func (c *Conn) xerrorf(format string, args ...any) {
panic(Error{fmt.Errorf(format, args...)})
}
func (p *Proto) recover(rerr *error) {
if *rerr != nil {
return
}
func (c *Conn) xcheckf(err error, format string, args ...any) {
if err != nil {
c.xerrorf("%s: %w", fmt.Sprintf(format, args...), err)
x := recover()
if x == nil {
return
}
switch e := x.(type) {
case Error:
*rerr = e
default:
panic(x)
}
}
func (c *Conn) xcheck(err error) {
func (p *Proto) xerrorf(format string, args ...any) {
panic(Error{fmt.Errorf(format, args...)})
}
func (p *Proto) xcheckf(err error, format string, args ...any) {
if err != nil {
p.xerrorf("%s: %w", fmt.Sprintf(format, args...), err)
}
}
func (p *Proto) xcheck(err error) {
if err != nil {
panic(err)
}
}
// Commandf writes a free-form IMAP command to the server.
// If tag is empty, a next unique tag is assigned.
func (c *Conn) Commandf(tag string, format string, args ...any) (rerr error) {
defer c.recover(&rerr)
if tag == "" {
tag = c.nextTag()
// xresponse sets resp if err is a Response and resp is not nil.
func (p *Proto) xresponse(err error, resp *Response) {
if err == nil {
return
}
c.LastTag = tag
if r, ok := err.(Response); ok && resp != nil {
*resp = r
}
panic(err)
}
_, err := fmt.Fprintf(c.conn, "%s %s\r\n", tag, fmt.Sprintf(format, args...))
c.xcheckf(err, "write command")
// Write writes directly to underlying connection (TCP, TLS). For internal use
// only, to implement io.Writer. Write errors do take the connection's panic mode
// into account, i.e. Write can panic.
func (p *Proto) Write(buf []byte) (n int, rerr error) {
defer p.recover(&rerr)
n, rerr = p.conn.Write(buf)
if rerr != nil {
p.connBroken = true
}
p.xcheckf(rerr, "write")
return n, nil
}
// Read reads directly from the underlying connection (TCP, TLS). For internal use
// only, to implement io.Reader.
func (p *Proto) Read(buf []byte) (n int, err error) {
return p.conn.Read(buf)
}
func (p *Proto) xflush() {
// Not writing any more when connection is broken.
if p.connBroken {
return
}
err := p.xbw.Flush()
p.xcheckf(err, "flush")
// If compression is active, we need to flush the deflate stream.
if p.compress {
err := p.xflateWriter.Flush()
p.xcheckf(err, "flush deflate")
err = p.xflateBW.Flush()
p.xcheckf(err, "flush deflate buffer")
}
}
func (p *Proto) xtraceread(level slog.Level) func() {
if p.tr == nil {
// For ParseUntagged and other parse functions.
return func() {}
}
p.tr.SetTrace(level)
return func() {
p.tr.SetTrace(mlog.LevelTrace)
}
}
func (p *Proto) xtracewrite(level slog.Level) func() {
if p.xtw == nil {
// For ParseUntagged and other parse functions.
return func() {}
}
p.xflush()
p.xtw.SetTrace(level)
return func() {
p.xflush()
p.xtw.SetTrace(mlog.LevelTrace)
}
}
// Close closes the connection, flushing and closing any compression and TLS layer.
//
// You may want to call Logout first. Closing a connection with a mailbox with
// deleted messages not yet expunged will not expunge those messages.
//
// Closing a TLS connection that is logged out, or closing a TLS connection with
// compression enabled (i.e. two layered streams), may cause spurious errors
// because the server may immediately close the underlying connection when it sees
// the connection is being closed.
func (c *Conn) Close() (rerr error) {
defer c.recoverErr(&rerr)
if c.conn == nil {
return nil
}
if !c.connBroken && c.xflateWriter != nil {
err := c.xflateWriter.Close()
c.xcheckf(err, "close deflate writer")
err = c.xflateBW.Flush()
c.xcheckf(err, "flush deflate buffer")
c.xflateWriter = nil
c.xflateBW = nil
}
err := c.conn.Close()
c.xcheckf(err, "close connection")
c.conn = nil
return
}
func (c *Conn) nextTag() string {
c.tagGen++
return fmt.Sprintf("x%03d", c.tagGen)
// TLSConnectionState returns the TLS connection state if the connection uses TLS,
// either because the conn passed to [New] was a TLS connection, or because
// [Conn.StartTLS] was called.
func (c *Conn) TLSConnectionState() *tls.ConnectionState {
if conn, ok := c.conn.(*tls.Conn); ok {
cs := conn.ConnectionState()
return &cs
}
return nil
}
// Response reads from the IMAP server until a tagged response line is found.
// WriteCommandf writes a free-form IMAP command to the server. An ending \r\n is
// written too.
//
// If tag is empty, a next unique tag is assigned.
func (p *Proto) WriteCommandf(tag string, format string, args ...any) (rerr error) {
defer p.recover(&rerr)
if tag == "" {
p.nextTag()
} else {
p.lastTag = tag
}
fmt.Fprintf(p.xbw, "%s %s\r\n", p.lastTag, fmt.Sprintf(format, args...))
p.xflush()
return
}
func (p *Proto) nextTag() string {
p.tagGen++
p.lastTag = fmt.Sprintf("x%03d", p.tagGen)
return p.lastTag
}
// LastTag returns the tag last used for a command. For checking against a command
// completion result.
func (p *Proto) LastTag() string {
return p.lastTag
}
// LastTagSet sets a new last tag, as used for checking against a command completion result.
func (p *Proto) LastTagSet(tag string) {
p.lastTag = tag
}
// ReadResponse reads from the IMAP server until a tagged response line is found.
// The tag must be the same as the tag for the last written command.
// Result holds the status of the command. The caller must check if the status is OK.
func (c *Conn) Response() (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
//
// If an error is returned, resp can still be non-empty, and a caller may wish to
// process resp.Untagged.
//
// Caller should check resp.Status for the result of the command too.
//
// Common types for the return error:
// - Error, for protocol errors
// - Various I/O errors from the underlying connection, including os.ErrDeadlineExceeded
func (p *Proto) ReadResponse() (resp Response, rerr error) {
defer p.recover(&rerr)
for {
tag := c.xnonspace()
c.xspace()
tag := p.xnonspace()
p.xspace()
if tag == "*" {
untagged = append(untagged, c.xuntagged())
resp.Untagged = append(resp.Untagged, p.xuntagged())
continue
}
if tag != c.LastTag {
c.xerrorf("got tag %q, expected %q", tag, c.LastTag)
if tag != p.lastTag {
p.xerrorf("got tag %q, expected %q", tag, p.lastTag)
}
status := c.xstatus()
c.xspace()
result = c.xresult(status)
c.xcrlf()
status := p.xstatus()
p.xspace()
resp.Result = p.xresult(status)
p.xcrlf()
return
}
}
// ReadUntagged reads a single untagged response line.
// Useful for reading lines from IDLE.
func (c *Conn) ReadUntagged() (untagged Untagged, rerr error) {
defer c.recover(&rerr)
tag := c.xnonspace()
if tag != "*" {
c.xerrorf("got tag %q, expected untagged", tag)
// ParseCode parses a response code. The string must not have enclosing brackets.
//
// Example:
//
// "APPENDUID 123 10"
func ParseCode(s string) (code Code, rerr error) {
p := Proto{br: bufio.NewReader(strings.NewReader(s + "]"))}
defer p.recover(&rerr)
code = p.xrespCode()
p.xtake("]")
buf, err := io.ReadAll(p.br)
p.xcheckf(err, "read")
if len(buf) != 0 {
p.xerrorf("leftover data %q", buf)
}
c.xspace()
ut := c.xuntagged()
return code, nil
}
// ParseResult parses a line, including required crlf, as a command result line.
//
// Example:
//
// "tag1 OK [APPENDUID 123 10] message added\r\n"
func ParseResult(s string) (tag string, result Result, rerr error) {
p := Proto{br: bufio.NewReader(strings.NewReader(s))}
defer p.recover(&rerr)
tag = p.xnonspace()
p.xspace()
status := p.xstatus()
p.xspace()
result = p.xresult(status)
p.xcrlf()
return
}
// ReadUntagged reads a single untagged response line.
func (p *Proto) ReadUntagged() (untagged Untagged, rerr error) {
defer p.recover(&rerr)
return p.readUntagged()
}
// ParseUntagged parses a line, including required crlf, as untagged response.
//
// Example:
//
// "* BYE shutting down connection\r\n"
func ParseUntagged(s string) (untagged Untagged, rerr error) {
p := Proto{br: bufio.NewReader(strings.NewReader(s))}
defer p.recover(&rerr)
untagged, rerr = p.readUntagged()
return
}
func (p *Proto) readUntagged() (untagged Untagged, rerr error) {
defer p.recover(&rerr)
tag := p.xnonspace()
if tag != "*" {
p.xerrorf("got tag %q, expected untagged", tag)
}
p.xspace()
ut := p.xuntagged()
return ut, nil
}
// Readline reads a line, including CRLF.
// Used with IDLE and synchronous literals.
func (c *Conn) Readline() (line string, rerr error) {
defer c.recover(&rerr)
func (p *Proto) Readline() (line string, rerr error) {
defer p.recover(&rerr)
line, err := c.r.ReadString('\n')
c.xcheckf(err, "read line")
line, err := p.br.ReadString('\n')
p.xcheckf(err, "read line")
return line, nil
}
// ReadContinuation reads a line. If it is a continuation, i.e. starts with a +, it
// is returned without leading "+ " and without trailing crlf. Otherwise, a command
// response is returned. A successfully read continuation can return an empty line.
// Callers should check rerr and result.Status being empty to check if a
// continuation was read.
func (c *Conn) ReadContinuation() (line string, untagged []Untagged, result Result, rerr error) {
if !c.peek('+') {
untagged, result, rerr = c.Response()
c.xcheckf(rerr, "reading non-continuation response")
c.xerrorf("response status %q, expected OK", result.Status)
func (c *Conn) readContinuation() (line string, rerr error) {
defer c.recover(&rerr, nil)
line, rerr = c.ReadContinuation()
if rerr != nil {
if resp, ok := rerr.(Response); ok {
c.processUntagged(resp.Untagged)
c.processResult(resp.Result)
}
}
c.xtake("+ ")
line, err := c.Readline()
c.xcheckf(err, "read line")
return
}
// ReadContinuation reads a line. If it is a continuation, i.e. starts with "+", it
// is returned without leading "+ " and without trailing crlf. Otherwise, an error
// is returned, which can be a Response with Untagged that a caller may wish to
// process. A successfully read continuation can return an empty line.
func (p *Proto) ReadContinuation() (line string, rerr error) {
defer p.recover(&rerr)
if !p.peek('+') {
var resp Response
resp, rerr = p.ReadResponse()
if rerr == nil {
rerr = resp
}
return "", rerr
}
p.xtake("+ ")
line, err := p.Readline()
p.xcheckf(err, "read line")
line = strings.TrimSuffix(line, "\r\n")
return
}
// Writelinef writes the formatted format and args as a single line, adding CRLF.
// Used with IDLE and synchronous literals.
func (c *Conn) Writelinef(format string, args ...any) (rerr error) {
defer c.recover(&rerr)
func (p *Proto) Writelinef(format string, args ...any) (rerr error) {
defer p.recover(&rerr)
s := fmt.Sprintf(format, args...)
_, err := fmt.Fprintf(c.conn, "%s\r\n", s)
c.xcheckf(err, "writeline")
fmt.Fprintf(p.xbw, "%s\r\n", s)
p.xflush()
return nil
}
// Write writes directly to the connection. Write errors do take the connections
// panic mode into account, i.e. Write can panic.
func (c *Conn) Write(buf []byte) (n int, rerr error) {
defer c.recover(&rerr)
// WriteSyncLiteral first writes the synchronous literal size, then reads the
// continuation "+" and finally writes the data. If the literal is not accepted, an
// error is returned, which may be a Response.
func (p *Proto) WriteSyncLiteral(s string) (rerr error) {
defer p.recover(&rerr)
n, rerr = c.conn.Write(buf)
c.xcheckf(rerr, "write")
return n, nil
}
fmt.Fprintf(p.xbw, "{%d}\r\n", len(s))
p.xflush()
// WriteSyncLiteral first writes the synchronous literal size, then reads the
// continuation "+" and finally writes the data.
func (c *Conn) WriteSyncLiteral(s string) (rerr error) {
defer c.recover(&rerr)
plus, err := p.br.Peek(1)
p.xcheckf(err, "read continuation")
if plus[0] == '+' {
_, err = p.Readline()
p.xcheckf(err, "read continuation line")
_, err := fmt.Fprintf(c.conn, "{%d}\r\n", len(s))
c.xcheckf(err, "write sync literal size")
line, err := c.Readline()
c.xcheckf(err, "read line")
if !strings.HasPrefix(line, "+") {
c.xerrorf("no continuation received for sync literal")
defer p.xtracewrite(mlog.LevelTracedata)()
_, err = p.xbw.Write([]byte(s))
p.xcheckf(err, "write literal data")
p.xtracewrite(mlog.LevelTrace)
return nil
}
_, err = c.conn.Write([]byte(s))
c.xcheckf(err, "write literal data")
return nil
var resp Response
resp, rerr = p.ReadResponse()
if rerr == nil {
rerr = resp
}
return
}
// Transactf writes format and args as an IMAP command, using Commandf with an
func (c *Conn) processUntagged(l []Untagged) {
for _, ut := range l {
switch e := ut.(type) {
case UntaggedCapability:
c.CapAvailable = []Capability(e)
case UntaggedEnabled:
c.CapEnabled = append(c.CapEnabled, e...)
}
}
}
func (c *Conn) processResult(r Result) {
if r.Code == nil {
return
}
switch e := r.Code.(type) {
case CodeCapability:
c.CapAvailable = []Capability(e)
}
}
// transactf writes format and args as an IMAP command, using WriteCommandf with
// an empty tag. I.e. format must not contain a tag. transactf then reads a
// response using ReadResponse and checks the result status is OK.
func (c *Conn) Transactf(format string, args ...any) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
func (c *Conn) transactf(format string, args ...any) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
err := c.Commandf("", format, args...)
err := c.WriteCommandf("", format, args...)
if err != nil {
return nil, Result{}, err
return Response{}, err
}
return c.ResponseOK()
return c.responseOK()
}
func (c *Conn) ResponseOK() (untagged []Untagged, result Result, rerr error) {
untagged, result, rerr = c.Response()
if rerr != nil {
return nil, Result{}, rerr
}
if result.Status != OK {
c.xerrorf("response status %q, expected OK", result.Status)
}
return untagged, result, rerr
}
func (c *Conn) responseOK() (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
func (c *Conn) xgetUntagged(l []Untagged, dst any) {
if len(l) != 1 {
c.xerrorf("got %d untagged, expected 1: %v", len(l), l)
resp, rerr = c.ReadResponse()
c.processUntagged(resp.Untagged)
c.processResult(resp.Result)
if rerr == nil && resp.Status != OK {
rerr = resp
}
got := l[0]
gotv := reflect.ValueOf(got)
dstv := reflect.ValueOf(dst)
if gotv.Type() != dstv.Type().Elem() {
c.xerrorf("got %v, expected %v", gotv.Type(), dstv.Type().Elem())
}
dstv.Elem().Set(gotv)
}
// Close closes the connection without writing anything to the server.
// You may want to call Logout. Closing a connection with a mailbox with deleted
// messages not yet expunged will not expunge those messages.
func (c *Conn) Close() error {
var err error
if c.conn != nil {
err = c.conn.Close()
c.conn = nil
}
return err
return
}

View File

@@ -6,81 +6,139 @@ import (
"encoding/base64"
"fmt"
"hash"
"io"
"strings"
"time"
"github.com/mjl-/flate"
"github.com/mjl-/mox/mlog"
"github.com/mjl-/mox/moxio"
"github.com/mjl-/mox/scram"
)
// Capability requests a list of capabilities from the server. They are returned in
// an UntaggedCapability response. The server also sends capabilities in initial
// server greeting, in the response code.
func (c *Conn) Capability() (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("capability")
// Capability writes the IMAP4 "CAPABILITY" command, requesting a list of
// capabilities from the server. They are returned in an UntaggedCapability
// response. The server also sends capabilities in initial server greeting, in the
// response code.
func (c *Conn) Capability() (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("capability")
}
// Noop does nothing on its own, but a server will return any pending untagged
// responses for new message delivery and changes to mailboxes.
func (c *Conn) Noop() (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("noop")
// Noop writes the IMAP4 "NOOP" command, which does nothing on its own, but a
// server will return any pending untagged responses for new message delivery and
// changes to mailboxes.
func (c *Conn) Noop() (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("noop")
}
// Logout ends the IMAP session by writing a LOGOUT command. Close must still be
// called on this client to close the socket.
func (c *Conn) Logout() (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("logout")
// Logout ends the IMAP4 session by writing an IMAP "LOGOUT" command. [Conn.Close]
// must still be called on this client to close the socket.
func (c *Conn) Logout() (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("logout")
}
// Starttls enables TLS on the connection with the STARTTLS command.
func (c *Conn) Starttls(config *tls.Config) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
untagged, result, rerr = c.Transactf("starttls")
// StartTLS enables TLS on the connection with the IMAP4 "STARTTLS" command.
func (c *Conn) StartTLS(config *tls.Config) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
resp, rerr = c.transactf("starttls")
c.xcheckf(rerr, "starttls command")
conn := tls.Client(c.conn, config)
err := conn.Handshake()
conn := c.xprefixConn()
tlsConn := tls.Client(conn, config)
err := tlsConn.Handshake()
c.xcheckf(err, "tls handshake")
c.conn = conn
c.r = bufio.NewReader(conn)
return untagged, result, nil
}
// Login authenticates with username and password
func (c *Conn) Login(username, password string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("login %s %s", astring(username), astring(password))
}
// Authenticate with plaintext password using AUTHENTICATE PLAIN.
func (c *Conn) AuthenticatePlain(username, password string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
untagged, result, rerr = c.Transactf("authenticate plain %s", base64.StdEncoding.EncodeToString(fmt.Appendf(nil, "\u0000%s\u0000%s", username, password)))
c.conn = tlsConn
return
}
// Authenticate with SCRAM-SHA-1 or SCRAM-SHA-256, where the password is not
// exchanged in original plaintext form, but only derived hashes are exchanged by
// both parties as proof of knowledge of password.
func (c *Conn) AuthenticateSCRAM(method string, h func() hash.Hash, username, password string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
// Login authenticates using the IMAP4 "LOGIN" command, sending the plain text
// password to the server.
//
// Authentication is not allowed while the "LOGINDISABLED" capability is announced.
// Call [Conn.StartTLS] first.
//
// See [Conn.AuthenticateSCRAM] for a better authentication mechanism.
func (c *Conn) Login(username, password string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
sc := scram.NewClient(h, username, "")
fmt.Fprintf(c.xbw, "%s login %s ", c.nextTag(), astring(username))
defer c.xtracewrite(mlog.LevelTraceauth)()
fmt.Fprintf(c.xbw, "%s\r\n", astring(password))
c.xtracewrite(mlog.LevelTrace) // Restore.
return c.responseOK()
}
// AuthenticatePlain executes the AUTHENTICATE command with SASL mechanism "PLAIN",
// sending the password in plain text to the server.
//
// Required capability: "AUTH=PLAIN"
//
// Authentication is not allowed while the "LOGINDISABLED" capability is announced.
// Call [Conn.StartTLS] first.
//
// See [Conn.AuthenticateSCRAM] for a better authentication mechanism.
func (c *Conn) AuthenticatePlain(username, password string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
err := c.WriteCommandf("", "authenticate plain")
c.xcheckf(err, "writing authenticate command")
_, rerr = c.readContinuation()
c.xresponse(rerr, &resp)
defer c.xtracewrite(mlog.LevelTraceauth)()
xw := base64.NewEncoder(base64.StdEncoding, c.xbw)
fmt.Fprintf(xw, "\u0000%s\u0000%s", username, password)
xw.Close()
c.xtracewrite(mlog.LevelTrace) // Restore.
fmt.Fprintf(c.xbw, "\r\n")
c.xflush()
return c.responseOK()
}
// todo: implement cram-md5, write its credentials as traceauth.
// AuthenticateSCRAM executes the IMAP4 "AUTHENTICATE" command with one of the
// following SASL mechanisms: SCRAM-SHA-256(-PLUS) or SCRAM-SHA-1(-PLUS).
//
// With SCRAM, the password is not sent to the server in plain text, but only
// derived hashes are exchanged by both parties as proof of knowledge of password.
//
// Authentication is not allowed while the "LOGINDISABLED" capability is announced.
// Call [Conn.StartTLS] first.
//
// Required capability: SCRAM-SHA-256-PLUS, SCRAM-SHA-256, SCRAM-SHA-1-PLUS,
// SCRAM-SHA-1.
//
// The PLUS variants bind the authentication exchange to the TLS connection,
// detecting MitM attacks.
func (c *Conn) AuthenticateSCRAM(mechanism string, h func() hash.Hash, username, password string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
var cs *tls.ConnectionState
lmech := strings.ToLower(mechanism)
if strings.HasSuffix(lmech, "-plus") {
tlsConn, ok := c.conn.(*tls.Conn)
if !ok {
c.xerrorf("cannot use scram plus without tls")
}
xcs := tlsConn.ConnectionState()
cs = &xcs
}
sc := scram.NewClient(h, username, "", false, cs)
clientFirst, err := sc.ClientFirst()
c.xcheckf(err, "scram clientFirst")
c.LastTag = c.nextTag()
err = c.Writelinef("%s authenticate %s %s", c.LastTag, method, base64.StdEncoding.EncodeToString([]byte(clientFirst)))
// todo: only send clientFirst if server has announced SASL-IR
err = c.Writelinef("%s authenticate %s %s", c.nextTag(), mechanism, base64.StdEncoding.EncodeToString([]byte(clientFirst)))
c.xcheckf(err, "writing command line")
xreadContinuation := func() []byte {
var line string
line, untagged, result, rerr = c.ReadContinuation()
c.xcheckf(err, "read continuation")
if result.Status != "" {
c.xerrorf("unexpected status %q", result.Status)
}
line, rerr = c.readContinuation()
c.xresponse(rerr, &resp)
buf, err := base64.StdEncoding.DecodeString(line)
c.xcheckf(err, "parsing base64 from remote")
return buf
@@ -100,83 +158,131 @@ func (c *Conn) AuthenticateSCRAM(method string, h func() hash.Hash, username, pa
err = c.Writelinef("%s", base64.StdEncoding.EncodeToString(nil))
c.xcheckf(err, "scram client end")
return c.ResponseOK()
return c.responseOK()
}
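// A usage sketch of AuthenticateSCRAM (not part of this change), with
// placeholder credentials and assuming "crypto/sha256" is imported where this
// is called:
//
//	_, err := c.AuthenticateSCRAM("SCRAM-SHA-256", sha256.New, "mjl@example.com", "password")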
// Enable enables capabilities for use with the connection, verifying the server has indeed enabled them.
func (c *Conn) Enable(capabilities ...string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
// CompressDeflate enables compression with deflate on the connection by executing
// the IMAP4 "COMPRESS=DEFAULT" command.
//
// Required capability: "COMPRESS=DEFLATE".
//
// State: Authenticated or selected.
func (c *Conn) CompressDeflate() (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
untagged, result, rerr = c.Transactf("enable %s", strings.Join(capabilities, " "))
resp, rerr = c.transactf("compress deflate")
c.xcheck(rerr)
var enabled UntaggedEnabled
c.xgetUntagged(untagged, &enabled)
got := map[string]struct{}{}
for _, cap := range enabled {
got[cap] = struct{}{}
}
for _, cap := range capabilities {
if _, ok := got[cap]; !ok {
c.xerrorf("capability %q not enabled by server", cap)
}
}
c.xflateBW = bufio.NewWriter(c)
fw0, err := flate.NewWriter(c.xflateBW, flate.DefaultCompression)
c.xcheckf(err, "deflate") // Cannot happen.
fw := moxio.NewFlateWriter(fw0)
c.compress = true
c.xflateWriter = fw
c.xtw = moxio.NewTraceWriter(mlog.New("imapclient", nil), "CW: ", fw)
c.xbw = bufio.NewWriter(c.xtw)
rc := c.xprefixConn()
fr := flate.NewReaderPartial(rc)
c.tr = moxio.NewTraceReader(mlog.New("imapclient", nil), "CR: ", fr)
c.br = bufio.NewReader(c.tr)
return
}
// Select opens mailbox as active mailbox.
func (c *Conn) Select(mailbox string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("select %s", astring(mailbox))
// Enable enables capabilities for use with the connection by executing the IMAP4 "ENABLE" command.
//
// Required capability: "ENABLE" or "IMAP4rev2"
func (c *Conn) Enable(capabilities ...Capability) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
var caps strings.Builder
for _, c := range capabilities {
caps.WriteString(" " + string(c))
}
return c.transactf("enable%s", caps.String())
}
// Examine opens mailbox as active mailbox read-only.
func (c *Conn) Examine(mailbox string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("examine %s", astring(mailbox))
// Select opens the mailbox with the IMAP4 "SELECT" command.
//
// If a mailbox is selected/active, it is automatically deselected before
// selecting the mailbox, without permanently removing ("expunging") messages
// marked \Deleted.
//
// If the mailbox cannot be opened, the connection is left in Authenticated state,
// not Selected.
func (c *Conn) Select(mailbox string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("select %s", astring(mailbox))
}
// Create makes a new mailbox on the server.
func (c *Conn) Create(mailbox string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("create %s", astring(mailbox))
// Examine opens the mailbox like [Conn.Select], but read-only, with the IMAP4
// "EXAMINE" command.
func (c *Conn) Examine(mailbox string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("examine %s", astring(mailbox))
}
// Delete removes an entire mailbox and its messages.
func (c *Conn) Delete(mailbox string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("delete %s", astring(mailbox))
// Create makes a new mailbox on the server using the IMAP4 "CREATE" command.
//
// SpecialUse can only be used on servers that announced the "CREATE-SPECIAL-USE"
// capability. Specify flags like \Archive, \Drafts, \Junk, \Sent, \Trash, \All.
func (c *Conn) Create(mailbox string, specialUse []string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
var useStr string
if len(specialUse) > 0 {
useStr = fmt.Sprintf(" USE (%s)", strings.Join(specialUse, " "))
}
return c.transactf("create %s%s", astring(mailbox), useStr)
}
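// A usage sketch (not part of this change): creating an archive mailbox with a
// special-use flag, on a server announcing "CREATE-SPECIAL-USE":
//
//	_, err := c.Create("Archive", []string{`\Archive`})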
// Rename changes the name of a mailbox and all its child mailboxes.
func (c *Conn) Rename(omailbox, nmailbox string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("rename %s %s", astring(omailbox), astring(nmailbox))
// Delete removes an entire mailbox and its messages using the IMAP4 "DELETE"
// command.
func (c *Conn) Delete(mailbox string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("delete %s", astring(mailbox))
}
// Subscribe marks a mailbox as subscribed. The mailbox does not have to exist. It
// is not an error if the mailbox is already subscribed.
func (c *Conn) Subscribe(mailbox string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("subscribe %s", astring(mailbox))
// Rename changes the name of a mailbox and all its child mailboxes
// using the IMAP4 "RENAME" command.
func (c *Conn) Rename(omailbox, nmailbox string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("rename %s %s", astring(omailbox), astring(nmailbox))
}
// Unsubscribe marks a mailbox as unsubscribed.
func (c *Conn) Unsubscribe(mailbox string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf("unsubscribe %s", astring(mailbox))
// Subscribe marks a mailbox as subscribed using the IMAP4 "SUBSCRIBE" command.
//
// The mailbox does not have to exist. It is not an error if the mailbox is already
// subscribed.
func (c *Conn) Subscribe(mailbox string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("subscribe %s", astring(mailbox))
}
// List lists mailboxes with the basic LIST syntax.
// Unsubscribe marks a mailbox as unsubscribed using the IMAP4 "UNSUBSCRIBE"
// command.
func (c *Conn) Unsubscribe(mailbox string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("unsubscribe %s", astring(mailbox))
}
// List lists mailboxes using the IMAP4 "LIST" command with the basic LIST syntax.
// Pattern can contain * (match any) or % (match any except hierarchy delimiter).
func (c *Conn) List(pattern string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
return c.Transactf(`list "" %s`, astring(pattern))
func (c *Conn) List(pattern string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf(`list "" %s`, astring(pattern))
}
// ListFull lists mailboxes with the extended LIST syntax requesting all supported data.
// ListFull lists mailboxes using the LIST command with the extended LIST
// syntax requesting all supported data.
//
// Required capability: "LIST-EXTENDED". If "IMAP4rev2" is announced, the command
// is also available but only with a single pattern.
//
// Pattern can contain * (match any) or % (match any except hierarchy delimiter).
func (c *Conn) ListFull(subscribedOnly bool, patterns ...string) (untagged []Untagged, result Result, rerr error) {
defer c.recover(&rerr)
func (c *Conn) ListFull(subscribedOnly bool, patterns ...string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
var subscribedStr string
if subscribedOnly {
subscribedStr = "subscribed recursivematch"
@@ -184,110 +290,313 @@ func (c *Conn) ListFull(subscribedOnly bool, patterns ...string) (untagged []Unt
for i, s := range patterns {
patterns[i] = astring(s)
}
return c.Transactf(`list (%s) "" (%s) return (subscribed children special-use status (messages uidnext uidvalidity unseen deleted size recent appendlimit))`, subscribedStr, strings.Join(patterns, " "))
return c.transactf(`list (%s) "" (%s) return (subscribed children special-use status (messages uidnext uidvalidity unseen deleted size recent appendlimit))`, subscribedStr, strings.Join(patterns, " "))
}
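
// Illustrative sketch (not part of the package): how List and ListFull might be
// used on an already authenticated *Conn. The Response values are not inspected
// here; callers would typically look at the untagged LIST/STATUS responses they
// carry. Error handling is reduced to returning the error.
func exampleListMailboxes(c *Conn) error {
	// Basic LIST: all mailboxes matching the pattern.
	if _, err := c.List("*"); err != nil {
		return err
	}
	// Extended LIST (requires LIST-EXTENDED or IMAP4rev2): only subscribed
	// mailboxes, with the additional status data requested by ListFull.
	_, err := c.ListFull(true, "*")
	return err
}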

// Namespace requests the hierarchy separator using the IMAP4 "NAMESPACE" command.
//
// Required capability: "NAMESPACE" or "IMAP4rev2".
//
// Server will return an UntaggedNamespace response with personal/shared/other
// namespaces if present.
func (c *Conn) Namespace() (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
	return c.transactf("namespace")
}

// Status requests information about a mailbox using the IMAP4 "STATUS" command. For
// example, number of messages, size, etc. At least one attribute required.
func (c *Conn) Status(mailbox string, attrs ...StatusAttr) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
	l := make([]string, len(attrs))
	for i, a := range attrs {
		l[i] = string(a)
	}
	return c.transactf("status %s (%s)", astring(mailbox), strings.Join(l, " "))
}
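
// Illustrative sketch (not part of the package): requesting mailbox counters with
// Status. StatusAttr is convertible from a string (see the string(a) conversion
// above); whether named constants exist for the attributes is not shown in this
// listing, so plain conversions are used here. "Inbox" is an example mailbox name.
func exampleStatus(c *Conn) error {
	_, err := c.Status("Inbox", StatusAttr("messages"), StatusAttr("unseen"), StatusAttr("uidnext"))
	return err
}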

// Append represents a parameter to the IMAP4 "APPEND" or "REPLACE" commands, for
// adding a message to a mailbox, or replacing a message with a new version in a
// mailbox.
type Append struct {
	Flags    []string   // Optional, flags for the new message.
	Received *time.Time // Optional, the INTERNALDATE field, typically the time at which the message was received.
	Size     int64      // Required, the exact number of bytes that Data will return.
	Data     io.Reader  // Required, must return Size bytes.
}

// Append adds message to mailbox with flags and optional receive time using the
// IMAP4 "APPEND" command.
func (c *Conn) Append(mailbox string, message Append) (resp Response, rerr error) {
	return c.MultiAppend(mailbox, message)
}

// MultiAppend atomically adds multiple messages to the mailbox.
//
// Required capability: "MULTIAPPEND".
func (c *Conn) MultiAppend(mailbox string, message Append, more ...Append) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
fmt.Fprintf(c.xbw, "%s append %s", c.nextTag(), astring(mailbox))
msgs := append([]Append{message}, more...)
for _, m := range msgs {
var date string
if m.Received != nil {
date = ` "` + m.Received.Format("_2-Jan-2006 15:04:05 -0700") + `"`
}
// todo: use literal8 if needed, with "UTF8()" if required.
// todo: for larger messages, use a synchronizing literal.
fmt.Fprintf(c.xbw, " (%s)%s {%d+}\r\n", strings.Join(m.Flags, " "), date, m.Size)
defer c.xtracewrite(mlog.LevelTracedata)()
_, err := io.Copy(c.xbw, m.Data)
c.xcheckf(err, "write message data")
c.xtracewrite(mlog.LevelTrace) // Restore
}
fmt.Fprintf(c.xbw, "\r\n")
c.xflush()
return c.responseOK()
}
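
// Illustrative sketch (not part of the package): adding a message with Append. The
// message is supplied as an Append value; Size must match the number of bytes Data
// yields. Mailbox name and message content are made up for the example.
func exampleAppend(c *Conn) error {
	msg := "Subject: hello\r\n\r\nbody\r\n"
	a := Append{
		Flags: []string{`\Seen`},
		Size:  int64(len(msg)),
		Data:  strings.NewReader(msg),
	}
	_, err := c.Append("Inbox", a)
	return err
}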

// note: No Idle or Notify command. Idle/Notify is better implemented by
// writing the request and reading and handling the responses as they come in.

// CloseMailbox closes the selected/active mailbox using the IMAP4 "CLOSE" command,
// permanently removing ("expunging") any messages marked with \Deleted.
//
// See [Conn.Unselect] for closing a mailbox without permanently removing messages.
func (c *Conn) CloseMailbox() (resp Response, rerr error) {
return c.transactf("close")
}

// Unselect closes the selected/active mailbox using the IMAP4 "UNSELECT" command,
// but unlike [Conn.CloseMailbox] does not permanently remove ("expunge") any
// messages marked with \Deleted.
//
// Required capability: "UNSELECT" or "IMAP4rev2".
//
// If Unselect is not available, call [Conn.Select] with a non-existent mailbox for
// the same effect: Deselecting a mailbox without permanently removing messages
// marked \Deleted.
func (c *Conn) Unselect() (resp Response, rerr error) {
return c.transactf("unselect")
}
// Expunge removes all messages marked as deleted for the selected mailbox using
// the IMAP4 "EXPUNGE" command. If other sessions marked messages as deleted, even
// if they aren't visible in the session, they are removed as well.
//
// [Conn.UIDExpunge] gives more control over which messages are removed.
func (c *Conn) Expunge() (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("expunge")
}
// UIDExpunge is like Expunge, but only removes messages matching the UID set,
// using the IMAP4 "UID EXPUNGE" command.
//
// Required capability: "UIDPLUS" or "IMAP4rev2".
func (c *Conn) UIDExpunge(uidSet NumSet) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("uid expunge %s", uidSet.String())
}

// MSNStoreFlagsSet stores a new set of flags for messages matching message
// sequence numbers (MSNs) from the sequence set with the IMAP4 "STORE" command.
//
// If silent, no untagged responses with the updated flags will be sent by the
// server.
//
// Method [Conn.UIDStoreFlagsSet], which operates on a uid set, should be
// preferred.
func (c *Conn) MSNStoreFlagsSet(seqset string, silent bool, flags ...string) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
item := "flags"
if silent {
item += ".silent"
}
return c.Transactf("store %s %s (%s)", seqset, item, strings.Join(flags, " "))
return c.transactf("store %s %s (%s)", seqset, item, strings.Join(flags, " "))
}

// MSNStoreFlagsAdd is like [Conn.MSNStoreFlagsSet], but only adds flags, leaving
// current flags on the message intact.
//
// Method [Conn.UIDStoreFlagsAdd], which operates on a uid set, should be
// preferred.
func (c *Conn) MSNStoreFlagsAdd(seqset string, silent bool, flags ...string) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
item := "+flags"
if silent {
item += ".silent"
}
return c.Transactf("store %s %s (%s)", seqset, item, strings.Join(flags, " "))
return c.transactf("store %s %s (%s)", seqset, item, strings.Join(flags, " "))
}

// MSNStoreFlagsClear is like [Conn.MSNStoreFlagsSet], but only removes flags,
// leaving other flags on the message intact.
//
// Method [Conn.UIDStoreFlagsClear], which operates on a uid set, should be
// preferred.
func (c *Conn) MSNStoreFlagsClear(seqset string, silent bool, flags ...string) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
item := "-flags"
if silent {
item += ".silent"
}
return c.Transactf("store %s %s (%s)", seqset, item, strings.Join(flags, " "))
return c.transactf("store %s %s (%s)", seqset, item, strings.Join(flags, " "))
}

// UIDStoreFlagsSet stores a new set of flags for messages matching UIDs from
// uidSet with the IMAP4 "UID STORE" command.
//
// If silent, no untagged responses with the updated flags will be sent by the
// server.
//
// Required capability: "UIDPLUS" or "IMAP4rev2".
func (c *Conn) UIDStoreFlagsSet(uidSet string, silent bool, flags ...string) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
item := "flags"
if silent {
item += ".silent"
}
return c.transactf("uid store %s %s (%s)", uidSet, item, strings.Join(flags, " "))
}

// UIDStoreFlagsAdd is like UIDStoreFlagsSet, but only adds flags, leaving
// current flags on the message intact.
//
// Required capability: "UIDPLUS" or "IMAP4rev2".
func (c *Conn) UIDStoreFlagsAdd(uidSet string, silent bool, flags ...string) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
item := "+flags"
if silent {
item += ".silent"
}
return c.transactf("uid store %s %s (%s)", uidSet, item, strings.Join(flags, " "))
}

// UIDStoreFlagsClear is like UIDStoreFlagsSet, but only removes flags, leaving
// other flags on the message intact.
//
// Required capability: "UIDPLUS" or "IMAP4rev2".
func (c *Conn) UIDStoreFlagsClear(uidSet string, silent bool, flags ...string) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
item := "-flags"
if silent {
item += ".silent"
}
return c.transactf("uid store %s %s (%s)", uidSet, item, strings.Join(flags, " "))
}
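
// Illustrative sketch (not part of the package): the typical delete flow using the
// UID-based flag methods: mark messages \Deleted, then expunge. Expunge() is used
// here because it takes no arguments; UIDExpunge would restrict removal to a UID
// set, but constructing its NumSet argument is not shown in this listing.
func exampleDelete(c *Conn, uidSet string) error {
	if _, err := c.UIDStoreFlagsAdd(uidSet, true, `\Deleted`); err != nil {
		return err
	}
	_, err := c.Expunge()
	return err
}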

// MSNCopy adds messages from the sequences in the sequence set in the
// selected/active mailbox to destMailbox using the IMAP4 "COPY" command.
//
// Method [Conn.UIDCopy], operating on UIDs instead of sequence numbers, should be
// preferred.
func (c *Conn) MSNCopy(seqSet string, destMailbox string) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
	return c.transactf("copy %s %s", seqSet, astring(destMailbox))
}
// UIDCopy is like [Conn.MSNCopy], but operates on UIDs, using the IMAP4 "UID COPY" command.
//
// Required capability: "UIDPLUS" or "IMAP4rev2".
func (c *Conn) UIDCopy(uidSet string, destMailbox string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("uid copy %s %s", uidSet, astring(destMailbox))
}

// MSNSearch returns messages from the sequence set in the selected/active mailbox
// that match the search criteria, using the IMAP4 "SEARCH" command.
//
// Method [Conn.UIDSearch], operating on UIDs instead of sequence numbers, should be
// preferred.
func (c *Conn) MSNSearch(seqSet string, criteria string) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
	return c.transactf("search %s %s", seqSet, criteria)
}

// UIDSearch returns messages from the UID set in the selected/active mailbox that
// match the search criteria, using the IMAP4 "UID SEARCH" command.
//
// Criteria is a search program, see RFC 9051 and RFC 3501 for details.
//
// Required capability: "UIDPLUS" or "IMAP4rev2".
func (c *Conn) UIDSearch(uidSet string, criteria string) (resp Response, rerr error) {
	defer c.recover(&rerr, &resp)
	return c.transactf("uid search %s %s", uidSet, criteria)
}
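
// Illustrative sketch (not part of the package): searching for unseen messages by
// UID. The arguments follow the function's own format string (a UID set followed
// by criteria); "1:*" and "unseen" are example values, see RFC 9051 for the full
// search grammar.
func exampleSearchUnseen(c *Conn) error {
	_, err := c.UIDSearch("1:*", "unseen")
	return err
}
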
// MSNMove moves messages from the sequence set in the selected/active mailbox to
// destMailbox using the IMAP4 "MOVE" command.
//
// Required capability: "MOVE" or "IMAP4rev2".
//
// Method [Conn.UIDMove], operating on UIDs instead of sequence numbers, should be
// preferred.
func (c *Conn) MSNMove(seqSet string, destMailbox string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("move %s %s", seqSet, astring(destMailbox))
}
// UIDMove is like [Conn.MSNMove], but operates on UIDs, using the IMAP4 "UID MOVE" command.
//
// Required capability: "MOVE" or "IMAP4rev2".
func (c *Conn) UIDMove(uidSet string, destMailbox string) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
return c.transactf("uid move %s %s", uidSet, astring(destMailbox))
}
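
// Illustrative sketch (not part of the package): copying and then moving messages
// by UID. "Backup" and "Archive" are example destination mailbox names.
func exampleCopyMove(c *Conn, uidSet string) error {
	// Copy leaves the originals in place.
	if _, err := c.UIDCopy(uidSet, "Backup"); err != nil {
		return err
	}
	// Move removes them from the selected mailbox.
	_, err := c.UIDMove(uidSet, "Archive")
	return err
}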

// MSNReplace is like [Conn.UIDReplace], but operates on a message sequence number
// (MSN) instead of a UID. Method [Conn.UIDReplace] should be preferred.
//
// Required capability: "REPLACE".
func (c *Conn) MSNReplace(msgseq string, mailbox string, msg Append) (resp Response, rerr error) {
// todo: parse msgseq, must be nznumber, with a known msgseq. or "*" with at least one message.
return c.replace("replace", msgseq, mailbox, msg)
}
// UIDReplace uses the IMAP4 "UID REPLACE" command to replace a message from the
// selected/active mailbox with a new/different version of the message in the named
// mailbox, which may be the same as or different from the selected mailbox.
//
// The replaced message is indicated by uid.
//
// Required capability: "REPLACE".
func (c *Conn) UIDReplace(uid string, mailbox string, msg Append) (resp Response, rerr error) {
// todo: parse uid, must be nznumber, with a known uid. or "*" with at least one message.
return c.replace("uid replace", uid, mailbox, msg)
}
func (c *Conn) replace(cmd string, num string, mailbox string, msg Append) (resp Response, rerr error) {
defer c.recover(&rerr, &resp)
// todo: use synchronizing literal for larger messages.
var date string
if msg.Received != nil {
date = ` "` + msg.Received.Format("_2-Jan-2006 15:04:05 -0700") + `"`
}
// todo: only use literal8 if needed, possibly with "UTF8()"
// todo: encode mailbox
err := c.WriteCommandf("", "%s %s %s (%s)%s ~{%d+}", cmd, num, astring(mailbox), strings.Join(msg.Flags, " "), date, msg.Size)
c.xcheckf(err, "writing replace command")
defer c.xtracewrite(mlog.LevelTracedata)()
_, err = io.Copy(c.xbw, msg.Data)
c.xcheckf(err, "write message data")
c.xtracewrite(mlog.LevelTrace)
fmt.Fprintf(c.xbw, "\r\n")
c.xflush()
return c.responseOK()
}
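
// Illustrative sketch (not part of the package): replacing a stored draft with a
// new version using UIDReplace (requires the REPLACE capability). The UID, mailbox
// name and message content are made up for the example.
func exampleReplaceDraft(c *Conn, uid string) error {
	msg := "Subject: updated draft\r\n\r\nnew body\r\n"
	_, err := c.UIDReplace(uid, "Drafts", Append{
		Flags: []string{`\Draft`},
		Size:  int64(len(msg)),
		Data:  strings.NewReader(msg),
	})
	return err
}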
