In the debate on artificial intelligence, I increasingly see a contrast that Umberto Eco had already pinpointed in 1964 when writing about the mass media. “Apocalyptic and integrated” was born as an essay devoted to communication and theories of mass culture, but its value today goes far beyond the historical context in which it was written. Eco does not limit himself to describing two opposing attitudes; he also shows how sterile it is to turn them into two ideologies. On the one hand, the radical rejection of the new medium; on the other, enthusiastic and uncritical acceptance. In the middle lies the most difficult and most interesting terrain: that of analysis.
That is exactly where we are today with artificial intelligence. On the one hand, there are those who describe it as a dangerous shortcut, a machine that flattens everything, that empties skills, that produces cold and interchangeable texts, that replaces thought with a simulation of thought. On the other hand, there are those who welcome it as a saving revolution: more productivity, more speed, more access, more possibilities. Both reactions contain some truth. The problem is that, taken alone, they explain little.
That is why I think Eco’s lens is still useful. Not because artificial intelligence is like television or newspapers, but because every important technological step generates enthusiasm on one side and fear on the other. And almost always the public debate slips into easy slogans before anyone has even tried to really understand what is changing.
What did Umberto Eco really mean by “apocalyptic and integrated”?
Umberto Eco’s strength also lies in the fact that he does not leave you an easy way out. He does not tell you to pick a side. Instead, he forces you to recognize that both the apocalyptic and the integrated risk being superficial when they stop analyzing the medium and begin to use it as a pretext to confirm a ready-made worldview.
The apocalyptic see the new media as a form of impoverishment. They fear simplification, loss of depth, the standardization of taste and thought. The integrated, on the contrary, see innovation as a form of openness: more access, more diffusion, more democratization, more opportunities. Eco observes both positions and highlights their shared limitation: total rejection is just as passive and uncritical as optimistic acceptance.
If we translate this logic to AI, the picture is surprisingly current. Even today, the risk is not only that of making wrong judgments about the technology. It is skipping the most important phase altogether: the one in which you observe it for what it actually does, for the way it changes processes, for what it makes simpler, for what it makes more dangerous, for the skills it amplifies and for those it risks letting atrophy.
The approach of “Apocalyptic and integrated” is one I largely share, partly out of personal temperament. I feel closer to the integrated, but not in a blindly faithful sense. Not because I think AI is good in itself, but because I use it every day, I see concretely how useful it can be, and precisely for this reason it is even clearer to me where the advantages end and where the limits begin.
From mass media to artificial intelligence: what has changed and what has not
Of course, the comparison between mass media and AI should not be taken literally. The mass media of the twentieth century were instruments of dissemination: they spoke to many, and often in one direction only. Generative artificial intelligence, on the other hand, has an apparently more dialogic relationship with the user: it responds, reformulates, synthesizes, proposes, completes, translates, produces. But this technical difference does not dissolve the cultural knot. If anything, it makes it even more delicate.
With the mass media, the problem was above all the formation of collective taste and imagination. With AI, the issue expands and does not only concern what we consume, but also what we produce. AI enters writing, programming, studying, daily work, translations, research, personnel selection, even the way we organize thought. It is not simply a channel, but a technology that infiltrates cognitive and operational processes.
And this is where the comparison with Eco becomes really interesting. Because the point is no longer to ask whether AI is a wonder or a threat. The point is to understand what kind of practical culture it is creating. Is it making us more lucid or lazier? More autonomous or more dependent? Faster or more superficial? The answer, inconveniently, is not a single one.
AI and content creation: creative acceleration or homologation?
I use AI daily for content production. I use it to find ideas, to explore different perspectives, to stress-test an idea and see if it holds up, to look for angles I might not have considered at first. In this sense, for me it is not a machine that replaces creativity. It is a tool that puts creativity to the test.
And that, if you do content marketing, journalism, copywriting, or editorial SEO, is a huge advantage. Because one of the most useful things such a system can do is not write for you, but force you to take a position. It gives you back possibilities, but you are the one who has to choose. It suggests alternatives, but it is up to you to figure out which one really makes sense. It offers you a draft, but you have to decide whether there is a thought inside it or not.
For this reason, I find the criticism that AI-generated content is cold and all the same by nature too simplistic. This is not always the case, although unfortunately it happens very often through misuse. The problem is not the AI that writes, but the absence of a direction. If you use it to produce text without ideas, without experience, without intention and without control, the result inevitably tends toward standardization. If you use it instead to enhance a thought that is already there, then everything changes.
YMYL content: be careful about the quality of answers
If there is one area where my enthusiasm for AI becomes much more cautious, it is YMYL content. When it comes to health, finance, safety or well-being, a mistake is not just an inaccuracy: it can have concrete, serious consequences. Here, in fact, I feel much less integrated and much more cautious. I still use AI to orient myself or get an initial idea, but I do not consider it a point of arrival. At most it is a starting point.
In these cases, the difference is made by verification, reliable sources, context and, often, consultation with real professionals. Prudence, therefore, is not technophobia but responsibility. Because in the most delicate content it is not enough for an answer to be fluent and convincing; it must also be well-founded, correct and treated with the right level of attention.
Here, in my opinion, a decisive point is at stake. AI does not replace voice. At most, it puts it in crisis. And this can be a good thing. Because if you really have a voice, a point of view, a sensibility, then AI can help you develop them faster. If, on the other hand, you don’t have them, AI risks making the void even more apparent.
AI and SEO: Google rewards useful content either way
Even in the SEO world I often see a basic misunderstanding. There are those who talk about AI as if it had suddenly introduced a new problem, the pollution of the SERP. But those who have been doing this job for a while know very well that pollution has always been there. Keyword stuffing, texts written for engines and not for people, duplicate content, pages created only to oversee queries. It is certainly not AI that invented all this.
AI, rather, changes the scale. It makes it much easier to produce large volumes of content in a short time, and can therefore multiply the noise. But amplifying a problem is not the same as creating it.
Google, for its part, continues to take a fairly clear line: it does not judge content on the basis of whether or not it was produced with AI tools, but on its usefulness, quality and ability to really respond to people’s needs. At the same time, it warns that using generative AI to create many pages without adding value can fall under its spam policies on scaled content abuse. So the problem is not AI-generated content per se; it is useless content.
And that is where, as someone who has worked for years between SEO and content, I feel very integrated. Because I see AI as a very powerful accelerator, not a qualitative criterion in itself. It can help you with clustering, preliminary research, synthesis, outlines, semantic exploration, updating. But it cannot create on its own what really makes the difference: experience, selection, editorial judgment, the ability to understand what truly deserves to be published.
AI and programming: a coding aid
Programming is one of the fields in which AI has personally impressed me the most. I am developing a complex personal project, and AI has helped me turn WordPress into a multi-user platform far more advanced than anything I could have built on my own, given my background. Not because it gave me a result at the push of a button, but because it allowed me to tackle problems that, without this support, would have remained out of reach or would have taken me enormously longer. It allowed me to think about the code, to iterate, to correct, to test solutions, to reach results that for me were, honestly, enormous.
However, this does not lead me to say that everyone is now a developer. On the contrary, it leads me to say that AI is lowering the threshold of access but is not eliminating complexity. It is a fundamental difference. Being able to build something does not automatically mean mastering everything behind it. AI can help you write code, understand logic, solve errors, and prototype much faster. But it does not magically deliver the architecture, the security, the scalability, the system vision. Those remain real skills.
This is why I do not see AI-assisted programming as a devaluation of technical competence. I see it rather as a redistribution of possibilities. Those with strong foundations can go even faster. Those with weaker technical foundations but good reasoning skills can go much further than before. But in both cases, the way the tool is used remains decisive. AI can raise your operational level; it does not automatically replace depth.
AI and translation: very fast, but oversight is needed
My position on translation is also very clear, because I use AI for it often and I am well aware of the concrete advantage it offers. It speeds up the activity a great deal. It allows me to work faster, to unlock texts in other languages, to have an immediate base to work on.
But precisely because I use it, I also know that it is not enough to blindly trust the output. I always double-check. And I correct very often, especially when it is necessary to better contextualize, localize, make the text sound right for the real reader. Because translating is not just replacing words. It means conveying tone, intention, register, implicit culture. It means localizing. And there, automation, however impressive, is still not enough.
Here too, therefore, my position is integrated but not naive. AI is an efficiency multiplier. However, the highest level of translation, the one in which a text is not only correct but really suitable, continues to depend on human sensitivity.
AI and work: more productivity, but not by magic
On the subject of work, the debate becomes even more serious, because here ideological simplification is very strong. On one side, panic: “it will steal our jobs”. On the other, the ready-made line for a motivational post: “you have to adapt”.
My position is more demanding. I see every day how much AI can improve productivity when it is used to lighten repetitive, executive, low-ingenuity tasks. And it is a real advantage: If I delegate part of the mechanical work to the tool, I can devote more time to what really requires thought, strategy, sensitivity, evaluation. From this point of view, for me AI does not reduce the value of human labor, but moves it higher.
But it would be naïve to stop here. The OECD points out that AI can bring important benefits, such as increased productivity, better quality of work and even better safety conditions. At the same time, however, it recalls equally concrete risks: automation, loss of agency, bias, discrimination, privacy issues and lack of transparency. In other words, the benefits are there, but they do not distribute themselves evenly and do not automatically produce a positive outcome for everyone.
This is why I am not convinced by the formula “those who do not adapt will be left behind” when it is used as a total explanation. As an individual principle it works: studying, updating, understanding the tools will be increasingly important. But as a social reading it falls short. Transitions are never neutral. They depend on business contexts, access to training, the quality of leadership, the way organizations decide to use technology. In short, adaptation matters, but it is not enough to close the discussion.
AI and education: help to understand better or shortcut to not thinking?
If there is one field in which the tension between apocalyptic and integrated is particularly evident, it is that of education.
On the one hand, it is undeniable that AI can help. It can explain, synthesize, reformulate, build schemas, put things in order. In some cases, it can even be an excellent ally in making complex topics more accessible. UNESCO, for its part, recognizes that artificial intelligence has the potential to innovate teaching and learning, but also insists that its risks and challenges are outpacing regulatory frameworks and educational policies. The direction indicated is clear: human-centered integration, attentive to critical thinking, equity, rights and the agency of students and teachers.
And this is where, in my opinion, everything is at stake. AI can be a lever to understand better. But it can also become a shortcut to not thinking. And the boundary is very thin. It is not enough to wonder if the student will copy. The more serious question is whether AI is supporting the cognitive process or replacing it. Whether it is helping to reason or is offering the illusion of having understood.
For this reason, even here, polarization does not help. AI should be neither banned nor mythologized. It must be embedded in a culture of use. And that culture, quite simply, is often missing today.
CVs and recruitment: how AI helps you stand out from the crowd
CVs written with AI are one of the most interesting cases, because they expose a small hypocrisy in the labor market. On the one hand, there is talk of authenticity and the human side of the candidacy; on the other, selection increasingly passes through ATS, automatic filters and decidedly inhuman readability criteria.
For this reason, I feel very integrated here. If companies use automated tools to screen candidates, it is only natural for candidates to use intelligent tools to be read better. I see nothing wrong with using AI to clarify a profile, bring order to one’s experience and make a CV more consistent with a job description.
The real distinction is not between using AI and not using it, but between using it well and using it badly. There is a clear difference between valuing one’s own experience and building an artificial version of oneself that does not stand the test of facts. A good recruiter, in my opinion, should not worry about catching those who use AI, but about understanding whether there is a real person behind that CV, with solid skills, told effectively.
What Umberto Eco’s “Apocalyptic and integrated” really teaches us about AI
In the end, I believe that Eco’s most current point is not the distinction between apocalyptic and integrated in itself. It is the method behind it. Eco reminds us that media cannot be understood through slogans. It is not enough to say that they are the future or that they are ruining everything. We need to observe how they enter real life, what automatisms they generate, what habits they produce, what shortcuts they encourage, what possibilities they open up.
Applied to AI, this means a very concrete thing: you can be in favor of it without being naive. And you can be prudent without being catastrophic.
By attitude and experience, I feel more integrated. I use it every day to write, to think, to translate, to explore, to build. I use it to lighten repetitive work and free up time for smarter, more strategic tasks. I also use it to test my ideas, not to stop having them. Precisely for this reason, however, I have no desire to turn it into a religion.
AI is a very powerful tool. But the more powerful a tool is, the more the quality of those who use it matters. And it is here, perhaps, that “Apocalyptic and integrated” ceases to be a learned reference and becomes a very practical key to reading the present. The problem is not deciding whether to be on one side or the other. The problem is avoiding becoming superficial precisely when we think we are being modern.