
failed to send email: gomail: could not send email 1: 550 5.2.0 Spam message rejected #4313

Open
theAkito opened this issue Mar 22, 2025 · 12 comments


@theAkito commented Mar 22, 2025

It's easy to blame the mail provider here, but finish reading this issue first.

  • I use Prometheus + Alertmanager, mostly through Grafana.
  • I have data source managed alert rules, set up via rules.yml and alertmanager.yml.
  • I also have alert rules managed through the Grafana GUI.
  • All alerts use the exact same SMTP configuration.
  • When Grafana sends alerts, they arrive.
  • When data source managed alerts, i.e. those handled by Alertmanager, fire using the exact same SMTP configuration, the error above appears in the logs and no e-mail is ever received.

Therefore, Alertmanager specifically does something different from Grafana that causes the e-mails to be rejected.

What is the cause? How can this be solved?

I'm very surprised I did not find any other issue on this topic so far.
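For context, the data source managed e-mail configuration described above lives in alertmanager.yml. A minimal sketch with placeholder values, not the reporter's actual config, might look like:

  global:
    smtp_smarthost: 'mail.example.com:587'
    smtp_from: 'alerts@example.com'
    smtp_auth_username: 'alerts@example.com'
    smtp_auth_password: 'secret'
    smtp_require_tls: true

  route:
    receiver: email

  receivers:
    - name: email
      email_configs:
        - to: 'ops@example.com'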

@grobinson-grafana (Collaborator)

I'm afraid that's something you will need to figure out as there is not enough information here to know what the problem might be. Your SMTP server should be able to log the headers of the outgoing email and also show you the reason it was rejected by the recipient mailbox.

@theAkito (Author)

> I'm afraid that's something you will need to figure out as there is not enough information here to know what the problem might be. Your SMTP server should be able to log the headers of the outgoing email and also show you the reason it was rejected by the recipient mailbox.

Then what can I do to leverage, for example, Alertmanager's logging to find out more? A third-party mail provider is in use here for a variety of purposes, including these alerts, and the only mails that do not arrive are the ones from Alertmanager. Even the alerts sent by Grafana arrive. So Alertmanager must be doing something special here.
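(One way to get more detail out of a standalone Alertmanager is its --log.level flag. A minimal docker-compose sketch, assuming a service named alertmanager and a locally mounted config file:

  services:
    alertmanager:
      image: prom/alertmanager
      command:
        - '--config.file=/etc/alertmanager/alertmanager.yml'
        - '--log.level=debug'
      volumes:
        - ./alertmanager.yml:/etc/alertmanager/alertmanager.yml
      ports:
        - '9093:9093'

With debug logging enabled, Alertmanager should log each notification attempt in more detail, which narrows down whether the rejection happens on its side or at the mail provider.)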

@grobinson-grafana (Collaborator)

If the email is rejected by your SMTP server then you'll see an error in your Alertmanager logs with either notify retry canceled or Notify attempt failed. If the email is accepted by your SMTP server, but then rejected by the recipient, you'll need to check with your SMTP server to find out why this happened. Their mail server should return headers explaining why the email was identified as spam.

You could also run a local SMTP server to see the full contents of the email, including headers, sent from Alertmanager, and compare that to the emails sent from Grafana. Any differences could be what is causing the recipient's spam filter to reject the email.
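(A minimal sketch of such a local debug SMTP server, using the third-party aiosmtpd package; any message-dumping SMTP sink works equally well. Point Alertmanager's smtp_smarthost, and Grafana's [smtp] host, at localhost:1025 with TLS disabled (smtp_require_tls: false), then diff the two captured messages:

  # pip install aiosmtpd
  from aiosmtpd.controller import Controller
  from aiosmtpd.handlers import Debugging

  # The Debugging handler prints every received message, headers included, to stdout.
  controller = Controller(Debugging(), hostname="127.0.0.1", port=1025)
  controller.start()
  input("Debug SMTP server on 127.0.0.1:1025 - press Enter to stop.\n")
  controller.stop()
)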

@theAkito (Author) commented Mar 22, 2025

> If the email is rejected by your SMTP server then you'll see an error in your Alertmanager logs with either notify retry canceled or Notify attempt failed. If the email is accepted by your SMTP server, but then rejected by the recipient, you'll need to check with your SMTP server to find out why this happened. Their mail server should return headers explaining why the email was identified as spam.
>
> You could also run a local SMTP server to see the full contents of the email, including headers, sent from Alertmanager, and compare that to the emails sent from Grafana. Any differences could be what is causing the recipient's spam filter to reject the email.

Yes, this precedes the error message in the title: notify retry canceled due to unrecoverable error after 1 attempts.

Does this mean that the sending SMTP server is already rejecting the outgoing e-mails?

@grobinson-grafana (Collaborator)

Can you share the full log line?

@theAkito (Author)

> Can you share the full log line?

logger=ngalert.notifier.alertmanager org=1 level=error component=alertmanager orgID=1 component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="Target/email[0]: notify retry canceled due to unrecoverable error after 1 attempts: failed to send email: gomail: could not send email 1: 550 5.2.0 Spam message rejected"

@grobinson-grafana (Collaborator)

> Can you share the full log line?
>
> logger=ngalert.notifier.alertmanager org=1 level=error component=alertmanager orgID=1 component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="Target/email[0]: notify retry canceled due to unrecoverable error after 1 attempts: failed to send email: gomail: could not send email 1: 550 5.2.0 Spam message rejected"

This is a log line from Grafana. Isn't the issue in Prometheus Alertmanager? In that case there shouldn't be an ngalert.notifier.alertmanager logger.

@theAkito (Author) commented Mar 23, 2025

> Can you share the full log line?
>
> logger=ngalert.notifier.alertmanager org=1 level=error component=alertmanager orgID=1 component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="Target/email[0]: notify retry canceled due to unrecoverable error after 1 attempts: failed to send email: gomail: could not send email 1: 550 5.2.0 Spam message rejected"
>
> This is a log line from Grafana. Isn't the issue in Prometheus Alertmanager? In that case there shouldn't be an ngalert.notifier.alertmanager logger.

As far as I can see, ngalert.notifier.alertmanager contains the string alertmanager. When I watch the debug logs of the container running the actual Alertmanager, there is nothing relevant in there. That said, the e-mails sent from Grafana work fine, as already mentioned. It's just the data source managed alerts, i.e. those managed by Prometheus and Alertmanager, which encounter this error. That is why I opened this issue in the first place.

Maybe I need to be clearer.

  • When an alert set by Grafana is "firing", everything works, including the e-mail notifications.
  • When an alert set by Prometheus Alertmanager is "firing", the alert itself seems to work, but the notification e-mails are blocked, as discussed earlier.
  • The log lines shared are the only ones corresponding to the issue at hand. There are no other related log lines.

@grobinson-grafana (Collaborator)

I'm afraid your explanation is not making sense for a number of reasons:

> those managed by Prometheus and Alertmanager, which encounter this error. That is why I opened this issue in the first place.

Datasource Managed Alerts in Grafana cannot (to my knowledge) use Grafana's internal Alertmanager. However, the logs you have shared are from Grafana's internal Alertmanager, as they start with ngalert, so they cannot be from Datasource Managed Alerts; they must be from Grafana Managed Alerts.

If you are running Prometheus Alertmanager as a separate program, can you please share the logs from it that show this error?

If it only happens in Grafana's internal Alertmanager, and you are not actually running Prometheus Alertmanager as a separate program, you need to open an issue at github.com/grafana/grafana. Grafana uses its own email implementation for sending emails, swapping out the default implementation in Alertmanager.
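(Grafana's separate email path is configured in grafana.ini rather than alertmanager.yml, which is one way to tell the two senders apart. A sketch with placeholder values:

  [smtp]
  enabled = true
  host = mail.example.com:587
  user = alerts@example.com
  password = secret
  from_address = alerts@example.com
  from_name = Grafana
)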

@theAkito (Author) commented Mar 23, 2025

> I'm afraid your explanation is not making sense for a number of reasons:
>
> those managed by Prometheus and Alertmanager, which encounter this error. That is why I opened this issue in the first place.
>
> Datasource Managed Alerts in Grafana cannot (to my knowledge) use Grafana's internal Alertmanager. However, the logs you have shared are from Grafana's internal Alertmanager, as they start with ngalert, so they cannot be from Datasource Managed Alerts; they must be from Grafana Managed Alerts.
>
> If you are running Prometheus Alertmanager as a separate program, can you please share the logs from it that show this error?

If Grafana's Alertmanager supposedly does not work, why do I get notifications from Grafana for the Grafana alerts, but no notifications from data source managed alerts, even though, according to all UIs, they are firing?

Grafana's alerts work fine. The data source managed ones do not send anything.

So how does Grafana's Alertmanager not work, when in fact it is the other way around?

> If it only happens in Grafana's internal Alertmanager, and you are not actually running Prometheus Alertmanager as a separate program, you need to open an issue at github.com/grafana/grafana. Grafana uses its own email implementation for sending emails, swapping out the default implementation in Alertmanager.

No, Grafana's Alertmanager runs fine. I don't know what I need to show or explain to make that clear. I can forward you the mail notifications coming from Grafana, if you want to see them. And I cannot forward you any mail from the separate Alertmanager, because it never arrives.

Or do I need to show you the three different docker-compose.yml files, where one is for Grafana, the second for Prometheus and the third for Alertmanager? I created them myself and I fully administer everything myself. Therefore I am sure that I have a separate Alertmanager, as I set it up with my own hands, and it is set up properly, or else I wouldn't see the data source managed alerts inside Grafana.

The error does not make any sense for Grafana's internal Alertmanager, as Grafana's e-mail notifications work fine. So why would that error appear with regard to Grafana, when Grafana's notifications work? It does not make sense.

@grobinson-grafana (Collaborator)

> If Grafana's Alertmanager supposedly does not work, why do I get notifications from Grafana for the Grafana alerts, but no notifications from data source managed alerts, even though, according to all UIs, they are firing?

The log line you shared shows Grafana alerts not working. The reason is that the log line for the failed email notification starts with ngalert.notifier.alertmanager, and this logger can only come from Grafana alerts.

If you are running Prometheus Alertmanager, and emails from that are not working either, can you please share those logs? It seems to me that you have shared the logs from your Grafana pod instead of your Alertmanager pod.
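(Assuming the compose service is simply named alertmanager, the standalone Alertmanager's own logs could be pulled with, for example:

  docker compose logs alertmanager

or, for a container started outside compose:

  docker logs <container-name>
)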

@grobinson-grafana (Collaborator)

Did you have any luck with getting logs from your Alertmanager pod?
