A number of studies have examined the impact of AI code generation on code quality metrics, and the results are concerning. A study from [Bilkent University](https://arxiv.org/pdf/2304.10778) found that Copilot produced correct code less than 50% of the time, and another study from [Stanford](https://arxiv.org/pdf/2211.03622) showed that developers using AI assistants wrote less secure code while tending to believe their code was more secure. Findings from [GitClear](https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality) show a doubling of code churn in 2024 due to AI-generated code that has to be reverted or patched within two weeks of creation. This correlates with data from the 2024 [DORA State of DevOps](https://dora.dev/research/2024/dora-report/) report, which indicates widespread adoption of AI for writing code and summarising data, yet shows 39% of respondents having low trust in the quality of such generated code.