12 Commits

Author SHA1 Message Date
Mechiel Lukkien
a2c79e25c1
check and log errors more often in deferred cleanup calls, and log remote-induced errors at lower priority
We normally check errors for all operations. But for some cleanup calls, eg
"defer file.Close()", we didn't. Now we also check and log most of those.
Partially because those errors can point to some mishandling or unexpected code
paths (eg a file unexpectedly already closed). And in part to make it easier to
use "errcheck" to find the real missing error checks; currently there is too
much noise for that.

The log.Check function can now be used unconditionally for checking and logging
errors. It adjusts the log level if the error is caused by a network connection
being closed, a context being canceled or reaching its deadline, or a socket
deadline being reached.
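
A minimal sketch of the pattern, with a stand-in check helper; the real
log.Check signature and level handling may differ:

    package main

    import (
        "context"
        "errors"
        "log/slog"
        "net"
        "os"
    )

    // check logs err with msg. Remote-induced errors (closed connection,
    // canceled/expired context, socket deadline) are logged at a lower level.
    // This is a stand-in for log.Check, not the actual mox implementation.
    func check(err error, msg string) {
        if err == nil {
            return
        }
        level := slog.LevelError
        if errors.Is(err, net.ErrClosed) || errors.Is(err, context.Canceled) ||
            errors.Is(err, context.DeadlineExceeded) || errors.Is(err, os.ErrDeadlineExceeded) {
            level = slog.LevelDebug
        }
        slog.Log(context.Background(), level, msg, "err", err)
    }

    func process(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        // Deferred cleanup: the Close error is now checked and logged too.
        defer func() {
            check(f.Close(), "closing file")
        }()
        // ... read from f ...
        return nil
    }

    func main() {
        check(process("/etc/hostname"), "processing file")
    }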
2025-03-24 14:06:05 +01:00
Mechiel Lukkien
c7354cc22b
also unicode-normalize usernames (email addresses) when logging into the imapserver and webapps
and don't do needless normalization for the username we got from scram: the
scram package would have failed if the name wasn't already normalized.

unicode may not be specified for sasl with imap (i'm not sure), but there's no
point in accepting it in the smtpserver but not in the imapserver.
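
for reference, a minimal sketch of nfc-normalizing a username with
golang.org/x/text (not the exact mox code):

    package main

    import (
        "fmt"

        "golang.org/x/text/unicode/norm"
    )

    func main() {
        // "é" typed as nfd (e + combining acute) normalizes to the nfc form,
        // so both spellings of an address compare equal after normalization.
        username := "rene\u0301@example.org"
        fmt.Println(norm.NFC.String(username) == "rené@example.org") // true
    }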
2025-02-06 15:38:45 +01:00
Mechiel Lukkien
1277d78cb1
keep track of login attempts, both successful and failed
and show them in the account and admin interfaces. this should help with
debugging, to find misconfigured clients, and potentially find attackers trying
to log in.

we include details like login name, account name, protocol, authentication
mechanism, ip addresses, tls connection properties, user-agent. and of course
the result.

we group entries by their details. repeat connections don't cause new records
in the database, they just increase the count on the existing record.
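
as a sketch, a grouped record could look like the struct below; the field
names are illustrative, not the actual mox database schema:

    package store

    import "time"

    // loginAttempt is an illustrative record. identical details map onto a
    // single record; repeat attempts only increment Count and update Last.
    type loginAttempt struct {
        First        time.Time
        Last         time.Time
        Count        int64
        LoginAddress string // login name, typically an email address
        AccountName  string
        Protocol     string // eg "imap", "submission", "webmail"
        AuthMech     string // eg "scram-sha-256", "plain"
        RemoteIP     string
        TLS          string // summary of tls connection properties
        UserAgent    string
        Result       string // eg "ok", "badCredentials"
    }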

we keep data for at most 30 days, and at most 10k entries per account, to
prevent unbounded growth. for successful login attempts, we store them all for
30 days. if a bad user causes so many entries that this becomes a problem, it
will be time to talk to the user...

there is no pagination/searching yet in the admin/account interfaces. so the
list may be long. we only show the 10 most recent login attempts by default.
the rest is only shown on a separate page.

there is no way yet to disable this. may come later, either as global setting
or per account.
2025-02-06 14:16:13 +01:00
Mechiel Lukkien
2d3d726f05
add config options to disable a domain and to disable logins for an account
to facilitate migrations from/to other mail setups.

a domain can be added in "disabled" mode (or can be disabled/enabled later on).
you can configure a disabled domain, but incoming/outgoing messages involving
the domain are rejected with temporary error codes (as this may occur during a
migration, remote servers will try again, hopefully to the correct machine or
after this machine has been configured correctly). also, no acme tls certs will
be requested for disabled domains (the autoconfig/mta-sts dns records may still
point to the current/previous machine). accounts with addresses at disabled
domains can still log in, unless logins are disabled for their accounts.

an account now has an option to disable logins. you can specify an error
message to show. this will be shown in smtp, imap and the web interfaces. it
could contain a message about migrations, and possibly a URL to a page with
information about how to migrate. incoming/outgoing email involving accounts
with login disabled is still accepted/delivered as normal (unless the domain
involved in the messages is disabled too). account operations by the admin,
such as importing/exporting messages, still work.

in the admin web interface, listings of domains/accounts show if they are disabled.
domains & accounts can be enabled/disabled through the config file, cli
commands and admin web interface.
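
as a purely hypothetical sketch, the options could look roughly like this in
domains.conf; the option names are assumptions based on the description above,
check the config reference for the real ones:

    Domains:
        example.org:
            # hypothetical option: reject messages with temporary errors and
            # skip acme certificate requests while migrating.
            Disabled: true
    Accounts:
        alice:
            Domain: example.org
            # hypothetical option: error message shown on smtp/imap/web logins.
            LoginDisabled: account moved, see https://www.example.org/migration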

for issue #175 by RobSlgm
2025-01-25 20:39:20 +01:00
Mechiel Lukkien
7647264a72
web interfaces: when there is no login session, and a non-existent path is requested, mention the web interface this is about
may help users understand when /admin/ isn't enabled on a hostname but the
account web interface is at /. the error will now say: no session for "account"
web interface. it hopefully tells users that their request isn't going to an
admin interface, but ends up at the account web interface.

for issue #268
2025-01-22 20:15:14 +01:00
Mechiel Lukkien
afc47c8108
if webauth login cookie is missing, and forwarding was configured, hint that reverse proxy may be stripping path
the cookies are set with a specific path, because the webadmin, webaccount and
webmail cookies can be on the same domain (this is the default). if the reverse
proxy strips the path while forwarding, the browser won't set the cookie and
the login attempt will fail.
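
a minimal sketch of the mechanism with plain net/http; the cookie name and
path are illustrative:

    package main

    import (
        "log"
        "net/http"
    )

    func login(w http.ResponseWriter, r *http.Request) {
        http.SetCookie(w, &http.Cookie{
            Name:  "webadminsession", // illustrative name
            Value: "opaque-session-token",
            // scoped to the admin interface. if a reverse proxy rewrites
            // /admin/... to /... while forwarding, the path the browser sees
            // no longer matches and the cookie won't be used.
            Path:     "/admin/",
            HttpOnly: true,
            SameSite: http.SameSiteStrictMode,
        })
    }

    func main() {
        http.HandleFunc("/admin/login", login)
        log.Fatal(http.ListenAndServe("localhost:8080", nil))
    }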

based on github issue #151 from naturalethic
2024-04-16 16:06:31 +02:00
Mechiel Lukkien
09fcc49223
add a webapi and webhooks for a simple http/json-based api
for applications to compose/send messages, receive delivery feedback, and
maintain suppression lists.

this is an alternative to applications using a library to compose messages,
submitting those messages using smtp, and monitoring a mailbox with imap for
DSNs, which can be processed into the equivalent of suppression lists. but you
need to know about all these standards/protocols and find libraries. by using
the webapi & webhooks, you just need an http & json library.

unfortunately, there is no standard for this kind of api, so mox has made up
yet another one...
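
as an impression, sending a message could look roughly like the sketch below;
the endpoint path, authentication and request fields are assumptions for
illustration, see the webapi documentation for the real interface:

    package main

    import (
        "bytes"
        "encoding/json"
        "log"
        "net/http"
    )

    func main() {
        // hypothetical request shape; real field names may differ.
        body, _ := json.Marshal(map[string]any{
            "From":    "noreply@example.org",
            "To":      []string{"user@example.com"},
            "Subject": "hello",
            "Text":    "sent via the webapi",
        })
        // hypothetical endpoint and authentication.
        req, _ := http.NewRequest("POST", "http://localhost:1080/webapi/v0/Send", bytes.NewReader(body))
        req.SetBasicAuth("noreply@example.org", "password")
        req.Header.Set("Content-Type", "application/json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println(resp.Status)
    }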

matching incoming DSNs about deliveries to original outgoing messages requires
keeping history of "retired" messages (delivered from the queue, either
successfully or failed). this can be enabled per account. history is also
useful for debugging deliveries. we now also keep history of each delivery
attempt, accessible while still in the queue, and kept when a message is
retired. the queue webadmin pages now also have pagination, to show potentially
large history.

a queue of webhook calls is now managed too. failures are retried similarly to
message deliveries. webhooks can also be saved to the retired list after
completing; this too is configurable per account.

messages can be sent with a "unique smtp mail from" address. this can only be
used if the domain is configured with a localpart catchall separator such as
"+". when enabled, a queued message gets assigned a random "fromid", which is
added after the separator when sending. when DSNs are returned, they can be
related to previously sent messages based on this fromid. in the future, we can
implement matching on the "envid" used in the smtp dsn extension, or on the
"message-id" of the message. using a fromid can be triggered by authenticating
with a login email address that is configured as enabling fromid.
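
a small sketch of relating a dsn back to its original message via the fromid
in the bounced recipient address; the separator and address are examples:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // the unique MAIL FROM used for the original outgoing delivery, with
        // the random fromid after the catchall separator "+".
        rcpt := "newsletter+3f9ab27c@example.org" // recipient in the incoming dsn
        localpart, _, _ := strings.Cut(rcpt, "@")
        base, fromID, ok := strings.Cut(localpart, "+")
        fmt.Println(base, fromID, ok) // newsletter 3f9ab27c true
        // fromID can then be matched against retired outgoing messages.
    }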

suppression lists are automatically managed per account. if a delivery attempt
results in certain smtp errors, the destination address is added to the
suppression list. future messages queued for that recipient will immediately
fail without a delivery attempt. suppression lists protect your mail server
reputation.

submitted messages can carry "extra" data through the queue and webhooks for
outgoing deliveries: through the webapi as a json object, and through smtp
submission as message headers of the form "x-mox-extra-<key>: value".
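for example, an smtp submission carrying a header like "X-Mox-Extra-OrderID:
12345" (the key name is just an illustration) makes that key/value available
again in the webhooks for that delivery.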

to make it easy to test webapi/webhooks locally, the "localserve" mode actually
puts messages in the queue. when it's time to deliver, it still won't do a full
delivery attempt, but just delivers to the sender account, unless the recipient
address has a special form, in which case a delivery failure is simulated.

admins now have more control over the queue. "hold rules" can be added to mark
newly queued messages as "on hold", pausing delivery. rules can be about
certain sender or recipient domains/addresses, or apply to all messages pausing
the entire queue. also useful for (local) testing.

new config options have been introduced. they are editable through the admin
and/or account web interfaces.

the webapi http endpoints are enabled for newly generated configs with the
quickstart, and in localserve. existing configurations must explicitly enable
the webapi in mox.conf.

gopherwatch.org was created to dogfood this code. it initially used just the
compose/smtpclient/imapclient mox packages to send messages and process
delivery feedback. it will get a config option to use the mox webapi/webhooks
instead. the gopherwatch code to use webapi/webhook is smaller and simpler, and
developing that shaped development of the mox webapi/webhooks.

for issue #31 by cuu508
2024-04-15 21:49:02 +02:00
Mechiel Lukkien
00c8dacc56
fix previous commit, go fmt
2024-04-11 23:22:03 +02:00
Mechiel Lukkien
666f84edea
fix login for account names with non-ascii chars
we include the username in session cookie values. but cookie values must be ascii-only, and go's net/http drops bad values. the typical solution is to querystring-encode/decode the cookie values, which we'll now do.
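
a minimal illustration of the encode/decode step with net/url:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // cookie values must stay within ascii token characters; query-escaping
        // makes any username safe to store and exactly recoverable.
        name := "rené@example.org"
        enc := url.QueryEscape(name)
        dec, err := url.QueryUnescape(enc)
        fmt.Println(enc, dec, err) // ren%C3%A9%40example.org rené@example.org <nil>
    }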

problem found by arnt, thanks for reporting!
2024-04-11 23:11:31 +02:00
Mechiel Lukkien
c57aeac7f0
prevent unicode-confusion in passwords by applying PRECIS, and in usernames/email addresses by applying unicode NFC normalization
an é (e with accent) can also be written as e+\u0301. the first form is NFC,
the second NFD. when logging in, we transform usernames (email addresses) to
NFC. so both forms will be accepted. if a client is using NFD, they can log
in too.

for passwords, we apply the PRECIS "opaquestring" profile, which (despite the name)
transforms the value too: unicode spaces are replaced with ascii spaces. the
string is also normalized to NFC. PRECIS may reject confusing passwords when
you set a password.
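
a minimal sketch with golang.org/x/text/secure/precis (not the exact mox code):

    package main

    import (
        "fmt"

        "golang.org/x/text/secure/precis"
    )

    func main() {
        // the no-break space becomes an ascii space and "e\u0301" is normalized
        // to the nfc "é"; disallowed or confusing strings return an error instead.
        pw, err := precis.OpaqueString.String("correct\u00a0horse e\u0301")
        fmt.Printf("%q %v\n", pw, err) // "correct horse é" <nil>
    }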
2024-03-09 09:20:29 +01:00
Mechiel Lukkien
d1b87cdb0d
replace packages slog and slices from golang.org/x/exp with stdlib
since we are now at go1.21 as minimum.
2024-02-08 14:49:01 +01:00
Mechiel Lukkien
0f8bf2f220
replace http basic auth for web interfaces with session cookie & csrf-based auth
the http basic auth we had was very simple to reason about, and to implement.
but it has a major downside:

there is no way to log out, browsers keep sending credentials. ideally, browsers
themselves would show a button to stop sending credentials.

a related downside: the http auth mechanism doesn't indicate which server
paths the credentials are for.

another downside: the original password is sent to the server with each
request. though sending original passwords to web servers seems to be
considered normal.

our new approach uses session cookies, along with csrf values when we can. the
sessions are server-side managed, automatically extended on each use. this
makes it easy to invalidate sessions and keeps the frontend simpler (than with
long- vs short-term sessions and refreshing). the cookies are httponly,
samesite=strict, scoped to the path of the web interface. cookies are set
"secure" when set over https. the cookie is set by a successful call to Login.
a call to Logout invalidates a session. changing a password invalidates all
sessions for a user, but keeps the session with which the password was changed
alive. the csrf value is also random, and associated with the session cookie.
the csrf must be sent as a header for api calls, or as a parameter for direct
form posts (where we cannot set a custom header). rest-like calls made directly
by the browser, e.g. for images, don't have csrf protection. the csrf value is
returned by the Login api call and stored in localstorage.
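
a minimal sketch of the check on api calls; the cookie name, header name and
session store are illustrative, not the mox code:

    package main

    import (
        "log"
        "net/http"
    )

    // requireSession verifies the session cookie and the csrf header tied to it,
    // answering with the error codes described below when either is missing or wrong.
    func requireSession(csrfBySession map[string]string, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            c, err := r.Cookie("webaccountsession") // illustrative cookie name
            if err != nil {
                http.Error(w, "user:noAuth", http.StatusUnauthorized)
                return
            }
            csrf := r.Header.Get("x-mox-csrf") // illustrative header name
            if want, ok := csrfBySession[c.Value]; !ok || csrf == "" || csrf != want {
                http.Error(w, "user:badAuth", http.StatusUnauthorized)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        sessions := map[string]string{} // session cookie value -> csrf token
        api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })
        log.Fatal(http.ListenAndServe("localhost:8080", requireSession(sessions, api)))
    }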

api calls without credentials return code "user:noAuth", and with bad
credentials return "user:badAuth". the api client recognizes this and triggers
a login. after a login, all auth-failed api calls are automatically retried.
only for "user:badAuth" is an error message displayed in the login form (e.g.
session expired).

in an ideal world, browsers would take care of most session management. a
server would indicate authentication is needed (like http basic auth), and the
browser uses trusted ui to request credentials for the server & path. the
browser could use a safer mechanism than sending original passwords to the
server, such as scram, along with a standard way to create sessions. for now,
web developers have to do authentication themselves: from showing the login
prompt to ensuring the right session/csrf cookies/localstorage/headers/etc are
sent with each request.

webauthn is a newer way to do authentication, perhaps we'll implement it in the
future. though hardware tokens aren't an attractive option for many users, and
it may be overkill as long as we still do old-fashioned authentication in smtp
& imap where passwords can be sent to the server.

for issue #58
2024-01-05 10:48:42 +01:00