Have you ever looked at the comments section of a website and wondered how many of the commenters are bots? Have you ever thought about buying a 5-star rated item but hesitated because the reviews didn't look quite right? Do you get angry when people game the system to get an unfair advantage? If you're interested in solving these sorts of problems, the Community Trust team could be a great fit.
We're responsible for upholding trust in Amazon's Community features, which include Reviews, Questions & Answers, and other experiences where shoppers contribute content. We solve abuse detection at scale, building rules and ML models to ensure contributions meet guidelines, and we share our expertise with other teams across the company that have similar needs.
Our team includes engineers, product managers, research scientists, and business intelligence engineers. We're looking for an applied scientist to join us and help design abuse-detection models.
We're a diverse team passionate about operational excellence and high-quality code. We value transparency, empowerment, experimentation, and appreciation. We carry a lighter operational burden than many other teams, which lets us iterate in a low-stress environment. Our solutions influence the products and plans of multiple internal teams, and we design and build systems that improve the experience of millions of customers.
Amazon operates a marketplace for consumers, sellers, and content creators.