In April 2017, OWASP released the first draft of its updated Top 10, i.e. OWASP Top 10 2017 RC1. While most items on the list were well founded and backed by sufficient data, two were not: A7 and A10. Brian Glas, in a post on nVisium's blog, tried to find the reasoning behind their inclusion.
The OWASP Top 10-2017 is based primarily on 40+ data submissions from firms that specialize in application security and an industry survey that was completed by over 500 individuals. This data spans vulnerabilities gathered from hundreds of organizations and over 100,000 real-world applications and APIs. The Top 10 items are selected and prioritized according to this prevalence data, in combination with consensus estimates of exploitability, detectability, and impact.
On diving deeper into the dataset available for the RC1 draft, Brian Glas and the community found that there wasn't enough data to back A7 and A10. He further mentions that the only references he found to them were recommendations from a private company named 'Contrast'.
The only references I could find to A7 and A9 were recommendations from Contrast to add them to the Top 10.
He went on to check the mailing list and Slack channels for any discussion on the matter, but found none. This raised the suspicion that the OWASP Top 10 was being influenced by private entities.
Some blogs went so far as to describe it this way:
So now companies like Contrast Security can use OWASP to literally add “A7. Not enough of Contrast Security”.
As it happens, Contrast Security offers a product called Contrast Protect, which could deal with the situations covered by A7. In addition, Contrast Security was one of the vendors who made suggestions leading to the creation of A7 (the others were Shape Security and Network Test Labs Inc.). The outline of A7 even mentions Runtime Application Self Protection (RASP) directly, which is what Contrast Security offers.
Hence the controversy.
Contrast clarified their stance in a post on their own blog.
Eventually, neither item made it into the final version of the OWASP Top 10 2017.