Dr. Dirk Frese, Director of Sales, Marketing and Service, JULABO USA.
The Biblical story of the Golden Calf describes how people, facing uncertainty and a loss of control, fashioned an object of worship from their own resources and placed ultimate value in the work of their own hands. In the same way, when we treat AI as something to worship or depend upon, it becomes the "golden calf" of our time: a creation of collective intelligence that some are tempted to endow with wisdom, authority, or even a quasi-divine status.
People are turning to AI tools to find information and knowledge, accepting the results as the final answer to their problem. Seeking answers this way is nothing new; we did the same with encyclopedias before.
Although I am an early adopter of AI myself and fully appreciate the incredible power of these tools, I am concerned about the authority people invest in their results. More and more people cut an argument short by pointing to the answer they received from AI. Some go so far as to deflect responsibility for the claim entirely: "I'm not saying this, but the AI said it, so it must be true." But is it always?
Wisdom is in us, not in AI
Spiritual traditions emphasize that divine and true wisdom resides in every human being. AI can process data, but it cannot express wisdom, consciousness or divinity. As the philosopher Karl Popper argued, knowledge grows through human creativity, criticism and problem-solving, not through facts accepted on blind trust.
When we outsource discernment to AI, we neglect this essential, participatory search for meaning and truth.
AI is not the final truth: even peer-reviewed science can be distorted
AI is trained on information that contains both truths and falsehoods. It can amplify prevailing narratives, biases and even outright lies, especially when it produces seemingly authoritative output.
Scandals involving AI systems, such as biased hiring algorithms or fabricated study citations, underscore how easily AI can reflect and amplify social illusions or errors, especially when we accept scientific- or objective-sounding outputs unquestioned.
Impact on leadership: knowledge and judgment for all
Leaders in the AI era face two pitfalls. The first is outsourcing judgment and decision-making to AI. This leads to abandoning the responsibility to deliberate and lead, because sound decision-making is the cornerstone of good leadership.
The second is forgetting that everyone around you now has almost infinite access to information. This can democratize insight, but it can also create fragmented realities, as each person finds their own "truth" through AI-assisted filters and silos. Harmonizing these views and moving in the same direction demands more of leaders than ever before.
Genuine leadership is rooted in discernment, humility and the ability to facilitate dialogue, not in simply repeating what AI predicts or prescribes.
Dangers: feedback loops, confirmation bias and distorted reality
When AI "feeds" on existing content on the internet, it perpetuates both truths and errors. Over time, AI outputs can become a new reality, even if they began as a mistake, a rumor or a bias.
I have experienced confirmation bias firsthand. I recently published a few paragraphs on various topics, and later, when I asked an AI questions about those topics, it repeatedly returned results that suspiciously mimicked my own statements. As I dug into the sources the AI cited, it became clear that the tool was quoting me back to myself.
In spreadsheet calculations, Excel calls this a circular reference. But while Excel warns me about circular references by flagging the formula as unresolvable, AI does not. In such cases, AI feeds my own opinions back to me and thereby reinforces what I said earlier, tempting me to believe it is absolute truth. Yet, like many other AI practitioners, I am usually looking for critical perspectives on a subject, not an echo chamber.
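The feedback loop described above can be sketched in a few lines of code. This is a deliberately toy model, not how any real AI system works: the "AI" here simply echoes whichever claim dominates its corpus, and each echoed answer is "republished" back into that corpus. The claim strings and starting counts are invented for illustration.

```python
from collections import Counter

def toy_answer(corpus):
    """Stand-in for an AI that echoes whatever claim dominates its training data."""
    return Counter(corpus).most_common(1)[0][0]

# Hypothetical starting corpus on a niche topic: my opinion already slightly
# dominates the few documents that exist.
corpus = ["my opinion"] * 3 + ["other view"] * 2

shares = []
for _ in range(5):
    answer = toy_answer(corpus)   # the AI echoes the majority claim
    corpus.append(answer)         # the answer gets republished online
    shares.append(corpus.count("my opinion") / len(corpus))

print(shares)  # the opinion's share of the corpus grows every round
```

Unlike Excel, nothing in this loop ever flags the circularity: each pass simply makes the original claim look better supported than before.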
The other downside is that this elevates AI into a final authority, and people may surrender personal and communal responsibility, creativity and adaptability to it.
Impact on personal and professional relationships
"It's not me; it's what the AI says." This sentence marks a dangerous shift in which people delegate personal responsibility and authenticity to an external agent, weakening trust and intimacy in relationships.
Hiding behind AI's authority, in both our professional and private lives, can weaken accountability and create confusion about the source of our values, judgments and decisions. It is a transfer of blame.
We need to remain loyal to our own values and beliefs, keep challenging them, and be ready to defend them when needed, while always staying genuine and open to discussion.
Solution: Use AI as a tool, not as an idol
AI should be used like a hammer or a wrench, nothing more. Use it to speed up tasks, surface information or test ideas, but never as a source of ultimate meaning or purpose. AI tools create the sense of validation we long for, especially compared with simpler search engines that present results as a list of borrowed sources. That is precisely why we tend to give AI more credibility than it deserves.
The temptation to treat AI as a god is seductive precisely because it promises control, certainty and safety in uncertain times. Yet navigating our personal lives and shaping our collective future remains the ongoing work of building relationships, humility and creative inquiry. Rejecting technological idols means reclaiming our rightful place as stewards, not worshipers, of our own creations.
Forbes Communications Council is an invitation-only community for executives in successful public relations, media strategy, creative and advertising agencies. Do I qualify?