The original paper might have other issues, e.g. https://statmodeling.stat.columbia.edu/2022/01/07/pnas-gigo-qrp-wtf-approaching-the-platonic-ideal-of-junk-science/
But I'm not here to discuss effect sizes or the quality of sources. I think it is much more important to understand that there is no solid evidence that nudging enables people to make good, lasting changes, while at the same time it offers policymakers a cheap and easy way out of applying uncontested, proven methods that would be far more beneficial.
Yes, but many things can be mapped to a "language", for instance a grammar describing state machines, so such a model can be used to generate control actions.
Transformer models and the like are not only useful for conversational AI and translation.
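To make the point concrete, here is a minimal sketch (a hypothetical toy example, not from the original comment): a traffic-light controller's valid action sequences form a "language" over a small token alphabet, defined by a DFA. Any sequence model trained on strings of this language would emit control actions rather than prose.

```python
# Hypothetical toy DFA: states are light colors, tokens are control actions.
# Valid action sequences are exactly the strings of this machine's language.
TRANSITIONS = {
    ("red", "GO"): "green",
    ("green", "CAUTION"): "yellow",
    ("yellow", "STOP"): "red",
}

def accepts(actions, state="red"):
    """Return True if the token sequence is a valid string of the DFA's language."""
    for action in actions:
        key = (state, action)
        if key not in TRANSITIONS:
            return False
        state = TRANSITIONS[key]
    return True

print(accepts(["GO", "CAUTION", "STOP"]))  # valid cycle -> True
print(accepts(["GO", "STOP"]))             # illegal jump -> False
```

A sequence model trained on such strings learns the grammar implicitly; sampling from it then amounts to generating (mostly) valid control sequences.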
I'd be fine with the approach as part of research advancing the field, but unfortunately, that's not what we're seeing.