AI tools for everyone. But with responsibility.

Responsible artificial intelligence in relation to generative AI
Advances in artificial intelligence (AI), particularly in generative AI, have opened up entirely new possibilities in recent years that were previously out of reach. The ability to interact with AI models in natural language has made AI significantly more accessible to society. Even people with no prior experience with AI can now apply these models and discover new application scenarios.
In practice, numerous solutions have already been built with generative AI models. One example is the pharmaceutical company Boehringer Ingelheim, which supports its scientists in researching new drugs with a specially developed knowledge management solution. The researchers use the Azure OpenAI Service to search and compare scientific documents in their database. As a result, they were able to save an impressive 150,000 working hours within 70 days, time that could in turn be invested in drug research.
Guide to AI
It is gratifying to see that the general public is increasingly recognizing the potential that AI holds for our society. However, we must not lose sight of the risks and dangers associated with its use. In 2019, Microsoft formulated six basic principles for the responsible development of AI systems and published a guide that provides orientation for developers and organizations.
These principles include:
- Fairness: AI systems should be fair and non-discriminatory. It's important to ensure that AI applications don't reinforce existing biases or create new ones.
- Reliability and safety: The system should function reliably in a wide range of application scenarios, including those for which it was not originally intended.
- Privacy and security: Protecting personal data is paramount. AI systems must be developed securely and in compliance with data protection regulations; data must not be leaked or disclosed.
- Inclusiveness: The system should include people with different abilities. To achieve this, minorities should be involved in planning, testing, and developing AI systems.
- Transparency: Those who develop AI systems should speak openly about how and why they use AI and communicate the system's limitations. Beyond that, the way AI models work should be transparent: users should be able to understand how decisions were made.
- Responsibility/Accountability: Developers of AI solutions must be aware of their responsibility and ensure that AI systems are used ethically and responsibly.
Numerous tools and processes have been developed in recent years to support the implementation of these principles. Generative AI is a particularly fascinating technology because it can generate completely new content and thus be creative. However, this new territory also brings a variety of challenges, particularly with regard to implementing the principles above.
“Generative AI is a particularly fascinating technology.”
RAG against hallucinations
One of them concerns the non-deterministic nature of this technology. Generative AI models return answers without confidence scores, a traditional means of evaluating the output of machine learning models. This changes the steps needed to ensure the principles of transparency and fairness.
When a large language model makes erroneous statements, we speak of “hallucinations”: the model produces information that sounds plausible and coherent at first glance. To counteract this problem, the concept of “Retrieval Augmented Generation” (RAG) can be used. Rather than relying solely on the knowledge baked into the model, RAG uses the model's ability to understand and generate natural language over information retrieved from external sources.
The approach combines the best of two worlds: information retrieval and creative text generation. The process starts with the retrieval step, which searches existing sources for information that matches the specific request. This “raw material” can come from a database, a corpus of texts, or even the internet. Generation then follows: once relevant information has been found, the model formulates a coherent answer from it, putting it into its own words, structuring the text, and adapting it to the context.
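The two phases described above can be sketched in a few lines. This is a deliberately minimal toy, not a production RAG pipeline: retrieval here is simple word-overlap scoring (a real system would use vector embeddings and a search index such as the one in the Azure OpenAI scenario), and the grounded prompt would be sent to a language model for the generation step. All names and the mini-corpus are made up for illustration.

```python
def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Retrieval step: rank passages by shared words with the query (toy scoring)."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda passage: len(query_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, passages: list[str]) -> str:
    """Generation step (prepared): assemble the grounded prompt an LLM would receive."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )


# Hypothetical mini-corpus standing in for a document database.
corpus = [
    "Aspirin inhibits the enzyme cyclooxygenase.",
    "The warehouse inventory is updated nightly.",
    "Cyclooxygenase is an enzyme involved in inflammation.",
]

query = "Which enzyme does aspirin target?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
```

Because the model is instructed to answer only from the retrieved sources, its output can be checked against that “raw material”, which is exactly what makes RAG a countermeasure against hallucinations.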
Active process
In fact, the development of generative AI has created new challenges in the area of responsible AI. At the same time, however, there are also many opportunities. The ability to interact with generative AI models in natural language makes AI more accessible to anyone who has had no direct contact with this technology before. More and more people can now use AI and become part of development teams for AI solutions that build on sound domain knowledge rather than purely technological know-how.
Another advantage is that AI is now accessible to people without years of programming experience. With code generation via GitHub Copilot, they can create program code or better understand existing code. AI is no longer a niche topic for technology experts, but a relevant and tangible field for everyone.
“Using AI responsibly is a continuous process.”
Finally, I would like to stress that using AI responsibly is a continuous process in which companies should actively participate. As technologies constantly evolve, our control mechanisms must be expanded and adapted accordingly. We should welcome new technologies with open arms while ensuring that they are used responsibly. Only in this way can we realize the full potential of AI while maintaining ethical standards.
This article first appeared in a similar form in edition 01/24 of data! You can find all issues and articles of our biannual magazine here: