It’s important to note that this is a dynamic situation. Some of the rewrites my research flagged had changed when I went back to check them by hand, including quite a few that had reverted to simple truncation. It appears that Google is adjusting to feedback. What left me most uncomfortable in this research were the “so-so” examples. Many of the bad examples can be fixed with better algorithms, but ultimately I believe that the bar for rewriting titles should be relatively high.
There’s nothing wrong with most of the original <title> tags in the so-so examples, and it appears Google has set the rewrite threshold pretty low. You might argue that Google has all of the data (and that I don’t), so maybe they know what they’re doing. Maybe so, but I have two problems with this argument. First, as a data scientist, I worry about the scale of Google’s data. Let’s assume that Google A/B tests rewrites against some kind of engagement metric or metrics.
At Google scale (i.e. massive data), it’s possible to reach statistical significance with very small differences. The problem is that statistical significance alone doesn’t tell us whether a change is meaningful enough to offset the consequences of making it. Is a 1% lift in some engagement metric worth it when a rewrite might alter the author’s original intent, or even pose branding or legal problems for companies in limited cases? If you’re comparing two machine learning models to each other, then it makes sense to go with the one that performs better on average, even if the difference is small.
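To make the scale point concrete, here’s a quick back-of-the-envelope sketch. The traffic numbers and click-through rates are invented for illustration (they are not Google’s data): with a few million impressions per variant, even a roughly 1% relative lift blows past conventional significance thresholds.

```python
import math

def two_proportion_ztest(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical numbers: 30.3% CTR for rewritten titles vs. 30.0% for originals
# (about a 1% relative lift), with 5 million impressions per arm.
n = 5_000_000
z, p = two_proportion_ztest(int(0.303 * n), n, int(0.300 * n), n)
print(f"z = {z:.1f}, p = {p:.2e}")  # tiny p-value: "significant," but is it meaningful?
```

The point isn’t the exact arithmetic; it’s that at this sample size “statistically significant” is nearly automatic, so the real decision is whether the lift outweighs the downside of overriding an author’s title.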