Carbon Language community transparency report through 2025-09-30 #6206
CelineausBerlin announced in Transparency reports
The Carbon community works to be welcoming and kind, both among ourselves and toward others, with a deep commitment to psychological safety, and we want to make sure that doesn’t change as we grow and evolve.
The following summary is intended to help the community understand what kinds of Code of Conduct incidents were recently brought to our attention, and how we handled them.
Publishing such transparency reports on a regular basis helps us track progress and hold ourselves accountable to high standards of community culture.
Summary
It’s been yet another peaceful quarter in our community spaces since July 2025. We’d like to extend an extra thanks to everyone who put effort into it and helped remind everyone that internet spaces can be this way. Let’s keep it up!
With enough hindsight, we can now tell that we are seeing a very slight decrease in crowd size on Discord (4,823 members to date): the group of active members is consolidating, while our GitHub project keeps growing. 175 users had contributed to the project as of the day this report was published. This makes sense: many of those who joined us on Discord when we went public in 2022 left without ever getting involved, while more folks have started actively joining our efforts toward 0.1, expected to ship in 2026, and contributing to our GitHub repository.
To help with moderation and potential conduct issues, we could count on 11 trained people, including our community lead, whose role is to guide, train, and coach the team.
Our moderation and conduct team members are spread across three continents.
Our AutoMod bots help us on Discord by automatically catching some harmful language as well as spam. The idea behind these bots is to let us focus on the educational and conversational parts of our moderation role, and to automate what does not need human attention. Some things are best handled by our carefully crafted AutoMod rules, and some by us humans. So we keep checking what the right balance between automated and manual moderation is, and make adjustments along the way when appropriate.
Please note that some incidents may have escaped our attention. You can help us keep our spaces welcoming and collaborative by reporting any situation that may require our intervention:
https://github.com/carbon-language/carbon-lang/blob/trunk/CODE_OF_CONDUCT.md
Our interventions from 2025-07-01 through 2025-09-30
In Q3 2025, our Discord AutoMod bots caught and blocked 7 spam messages from an account that we also reported to Discord. In addition, 3 messages containing harmful language from 2 other users were automatically blocked. One of those users got the message and found a more mindful wording; the other had sent 2 such messages in a row and did not come back after their posts were blocked.
Our AutoMod bots also blocked some messages that looked like spam but weren’t; the affected users figured out how to rephrase their messages so they could get through.
We also handled some spam manually, beyond the aforementioned AutoMod activity:
2 accounts were banned, one of which we also reported to Discord, and their spam messages were deleted. We then updated our AutoMod rules, which may be why our bots started flagging non-spam messages as spam, as mentioned just above. We probably have some more tweaking of our automatic spam detection to do.
We also hid some off-topic comments by one user on GitHub. There was no further follow-up needed on this one at that point.
Other activity
Otherwise, we gathered insights into how individual team members felt about their current moderation skills, so we could look for ways to close any skill gaps. Continuous learning is key to any professional skill, moderation included.
Further, we discussed how to handle duplicate work on GitHub as moderators, which can happen when someone starts working on an issue without checking whether others are already on it. We decided to encourage collaboration and to put contributors working on the same issue in touch, so they can decide together how to move forward.
Finally, we agreed that when voicing criticism of what had been done before, we should demonstrate an understanding of past discussions and perspectives on the topic, even if the person offering the critique wasn’t involved in those discussions.
Closing observations
As of September 2025, our community seemed to keep stabilizing in size and working pace, with contributors consistently on their best behavior, leading by example.
We can be proud of how safe it is to join and contribute to the Carbon Language Project so far!
We’ll keep watching and providing support when needed, and we hope for more contributions from new members, while keeping things friendly and kind at all times.
Thank you all for your contributions, and see you soon!
The Carbon Code of Conduct team, 2025-10-13