Yes, LLMs are perfect for making sense of intended and unintended bullshit (bureaucracy). If you don't know what I mean, here's a summary from ChatGPT (sorry):
Here are a couple of examples:
Humans need this nuanced signaling system to navigate complex social structures, classify one another, and maintain hierarchy or cohesion within groups. That signaling system is bullshit. Only sufficiently advanced humans, and now AI, can learn to interpret the bullshit rules and act accordingly. To go beyond the bullshit, you need superhuman capabilities or superhuman AI.