Google wants us to use a "Google certified" CMP
Google is now forcing AdSense publishers to use a certified CMP whether they want it or not.
By: Jacob
Edited: 2023-07-21 09:59
My comments on: New Consent Management Platform requirements for serving ads in the EEA and UK - blog.google
When logging into your AdSense account, you may have seen the following notification:
Later this year, Google will require all publishers serving ads to EEA and UK users to use a Google-certified Consent Management Platform (CMP). You can use any Google-certified CMP for this purpose, including Google's own consent management solution. If you are interested in using Google's consent management solution, start by setting up your GDPR message.
We will now be required to use a "Google certified" CMP (Consent Management Platform). If I understand this correctly, those of us relying on a custom consent implementation, for example one using the pauseAdRequests property, will be forced to adopt yet another external dependency – unless we choose to use Google's own consent dialog. The built-in AdSense consent, however, lacks customization options: it is a fixed modal window that covers the screen and prevents scrolling, which can be bad for UX.
Previously we have been able to record our own user consents and simply pause ads until we obtain consent. For example:
window.addEventListener("DOMContentLoaded", function() {
    (adsbygoogle = window.adsbygoogle || []).pauseAdRequests = 1; // Pause loading of ads until consent is obtained
});
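Resuming ads after consent works by setting the same property back to 0. Here is a minimal sketch of that pattern; note that the "ad_consent" storage key and the helper function names are placeholders I have made up for illustration – only pauseAdRequests itself (1 = pause, 0 = resume) is part of the AdSense API:

```javascript
// Decide the pauseAdRequests flag from a stored consent choice.
// "ad_consent" / hasStoredConsent() are placeholder names, not AdSense APIs.
function hasStoredConsent(storage) {
    return storage.getItem("ad_consent") === "granted";
}

function adRequestPauseFlag(storage) {
    return hasStoredConsent(storage) ? 0 : 1; // 0 resumes ad requests, 1 pauses them
}

// Browser usage (in a page with the AdSense script loaded):
// window.addEventListener("DOMContentLoaded", function () {
//     (adsbygoogle = window.adsbygoogle || []).pauseAdRequests =
//         adRequestPauseFlag(window.localStorage);
// });
```

The point of splitting the decision into a pure function is that the consent logic can be tested without a browser, while the one line that touches adsbygoogle stays trivial.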
I had wanted for some time to change Beamtic's consent dialog by giving it a static placement in the normal flow of the page rather than keeping it fixed – but given the new circumstances, I have instead been forced to use Google's consent mechanism. This is against my will; I would not have done so had the choice been my own. Google is forcing publishers (website owners) to pick between a so-called "certified" external dependency and Google's own consent mechanism. Luckily, we can also gather our own additional consent through their dialog if needed, so we do not need to maintain a separate implementation for our own purposes; still, Google's consent dialog is annoying to users.
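Certified CMPs, including Google's, expose their consent signals through the IAB TCF v2 API (window.__tcfapi), which is how our own code can read the user's choice instead of maintaining a parallel dialog. A sketch under stated assumptions: the helper below inspects a TCF tcData object, and the purpose IDs checked (1 = storage, 3 and 4 = personalized ads) follow the IAB purpose list – which purposes your site actually needs is an assumption you must verify for your setup:

```javascript
// Decide, from a TCF v2 tcData object, whether personalized ads may load.
// Purpose IDs 1, 3 and 4 are assumptions about the purposes needed here.
function hasAdConsent(tcData) {
    if (tcData.gdprApplies === false) {
        return true; // GDPR does not apply to this user; no consent needed
    }
    var consents = (tcData.purpose && tcData.purpose.consents) || {};
    return Boolean(consents[1] && consents[3] && consents[4]);
}

// Browser usage (only works on a page where a TCF-compliant CMP is loaded):
// window.__tcfapi("addEventListener", 2, function (tcData, success) {
//     if (success && tcData.eventStatus === "useractioncomplete") {
//         (adsbygoogle = window.adsbygoogle || []).pauseAdRequests =
//             hasAdConsent(tcData) ? 0 : 1;
//     }
// });
```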
Consent management platforms
The idea of external CMPs is a bad one: it introduces more dependencies, and there is no reason why a website cannot have its own built-in consent function. However, if regulators keep changing the implementation requirements, maintaining our own implementation naturally becomes more difficult. At some point the confusion has to end and things must stabilize.
The current GDPR rules lack a balance between allowing app and website owners to monetize their creations with personalized ads and protecting users' privacy, and they still do not seem to account for the anonymization techniques used by ad networks, which should in theory make consent redundant.
AdSense built-in consent is an improvement
Although it is a step forward that consent is now integrated with AdSense, it still leaves much to be desired in terms of styling customization. For example, some sites might prefer a statically positioned consent button located within the normal flow of the page. The current dialog is a modal window that covers the page, preventing users from reading text or even scrolling. That is certainly not ideal UX-wise, because it forces users to make a decision.
Of course, one of the aims of the new requirement is to create a more uniform consent experience – something that really should be built into browsers but is instead implemented on top with extra JavaScript, markup, and styling. Although minimal, this adds to websites' overall size and thereby affects performance. In fact, just showing a consent dialog is a massive nuisance to many users, so a statically positioned consent prompt might be better – not in terms of obtaining consent, but at least in terms of usability and not annoying our users too much.
Browser based consent still wanted
The perfect solution, as I see it, is still browser-based, and this is another area where regulators failed: they came up with the regulations without ensuring a centralized browser API was in place to handle a smooth transition. This pushed unnecessary implementation costs onto individual owners of apps and websites, almost as if the regulators did not care about, or understand, the consequences.
In the current environment, regulators have also failed to account for the fact that we have little or no chance of identifying users' approximate location or nationality. This, too, could have been avoided with a browser-based solution, since the check could happen silently on the client, without revealing the information to the website owner or third parties.
Currently, since the responsibility is entirely ours, there needs to be a way for users to tell us their location so we can present them with the relevant legal messages: at minimum which region they are from, but ideally also their state, if applicable, and their citizenship, since we are legally required to obtain consent based on citizenship regardless of a user's location. This introduces additional privacy issues and is highly unlikely ever to be implemented, but it is a situation created by carelessly drafted GDPR rules; I would even argue that regulators have acted recklessly – they "legally broke the internet" in a sense.
We need privacy regulation that aims to strike a balance between protecting users' privacy and still allowing owners to monetize their apps and websites. What we do not need is regulators who are ideologically motivated to destroy a fairly harmless business model.
Sometimes I suspect that regulators have no interest in such a balance and perhaps just want to kill targeted advertising entirely. Given the alternative options, especially the anonymization techniques already employed, that would be foolish.
Regulators seem myopically obsessed with users' privacy while failing to adequately demonstrate real-world risks – for example, what specifically will happen when users are targeted with ads. Perhaps this is not about privacy after all, but about avoiding another Cambridge Analytica abuse scenario? That is a fair argument, but heavily regulating political ads would be a better and more balanced approach: ads that aim to shift beliefs are probably always unnecessary, and they are sure to be abused by malicious parties that wish to manipulate the political landscape.