ChatGPT can create any kind of text, starting with letters that include the standard phrases and politeness everyone expects. In business applications, ChatGPT can automate tasks that are time-consuming, repetitive, and do not require specific expertise. For example, it can summarise and fill in documents, boosting the efficiency of businesses. This is especially relevant in heavily regulated sectors such as the financial services industry, where people spend a lot of time on document processing.
Author: Prof. Radu State, head of the SErvices and Data mANagement (SEDAN) research group.
ChatGPT levels the playing field for applying natural language processing in businesses. While in the past only specialised companies could apply language models, ChatGPT and its competitors now lower the entry barrier and make these models available to practically everyone. With ChatGPT, we see an application of many years of research on neural networks and transformers that is now revolutionising the market. ChatGPT will help automate tedious tasks, allowing people to focus more on conceptual work. Companies that rely on repetitive, manual processes and miss out on this efficiency will lose their competitive advantage and may be at risk of disappearing.
In our National Centre of Excellence in Financial Technologies (NCER-FT), we are trying to bring cutting-edge research closer to industry. I remember my natural language processing classes with Prof. Eric Brill at Johns Hopkins University, 25 years ago, and I am excited about the progress in this area. At that time, we did not have today's powerful hardware, and deep learning had yet to be invented. Nowadays, OpenAI has made a pioneering dream a reality by leveraging huge computational power and funding to train models on an enormous amount of data. ChatGPT is now trained on more information than millions of users will be able to see in their lifetimes. At SnT, we built a research prototype based on ChatGPT and were quite amazed by how much we could achieve in just two weeks of software development. The text analysis we implemented provided results close to what we can expect from manual work done by humans.
Today, ChatGPT can be used to write, autocorrect, and rewrite texts. Tomorrow, ChatGPT or its competitors will also be able to process voice or video; in fact, some projects are already doing this. Thinking ahead, ChatGPT will evolve in a direction where it can model more complex lines of thought, as we humans do. When we reason, we do not only generate text; we think through processes step by step. In the future, this technology will simulate human thought. This will, for example, revolutionise computer programming: while in the past programming required describing each and every step, now you can give ChatGPT the input specification and automatically obtain code, documentation, and test suites. The big winners here are development speed and the cost of writing software.
However, research needs to address major challenges for ChatGPT and other large language models from the perspectives of cybersecurity and privacy, business, and ethics. From a cybersecurity perspective, granting a language model the right to execute a program on a computer is a security nightmare. Regarding privacy, we currently cannot control where the private data we share with ChatGPT ends up. This can be problematic depending on the legal framework and the data that is used for training. We already know such concerns from companies that, for example, do not allow the use of services such as Dropbox for reasons of confidentiality. To apply ChatGPT in businesses, we need to be able to customise language models with more precise data for business use cases and to ensure security and privacy from both a technological and a legal point of view. Further, we need to improve the technology to reduce the cost of training large language models, which today is far too high.
Research also needs to address ethical questions around ChatGPT. If training data contains biases, e.g., regarding race or sexual orientation, a model can amplify these biases. Thus, research needs to be able to assess whether there is any bias in the training data. Furthermore, while ChatGPT can be helpful in many contexts, we should not rely on it entirely. ChatGPT can be an efficient tool for learning, but it should not be a substitute for learning.
ChatGPT is an amazing development, which has led to an incredible number of start-ups and projects in only six months. We therefore also need a European version of ChatGPT, operated over a sovereign cloud and network infrastructure. It will be critical to have the competencies here in Europe to make full use of the technology. Currently, the EU is discussing its AI Act, which is intended to ensure that AI technology complies with our norms. Europe will need its own version of ChatGPT from both a strategic and an ethical perspective.
Prof. Dr. Radu State is the head of the SErvices and Data mANagement (SEDAN) research group at the University of Luxembourg’s Interdisciplinary Centre for Security, Reliability and Trust (SnT). Prof. State contributes to the National Centre of Excellence in Financial Technologies (NCER-FT).
How does ChatGPT work? Imagine that you take every piece of text available on the Internet, feed it into a computer, and let the computer learn what “text” looks like. “Looks like” means: what are words, what are the relationships between words, and what are the relationships between sentences. To achieve this, the computer builds a model called an artificial neural network. The model assigns coordinates to pieces of text, placing them in a multidimensional space. Distances between points in this space describe similarities between the pieces of text. If you ask this computer a question, it will look at the coordinates of the question and then generate its answer from there.
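The idea of coordinates and distances can be sketched in a few lines of code. This is a toy illustration only: the 2-D vectors below are invented for the example, whereas real language models learn embeddings with thousands of dimensions from data. It shows how cosine similarity lets a system find the stored text closest to a question.

```python
import math

# Hypothetical, hand-picked 2-D "embeddings": each text gets coordinates,
# and texts with similar meaning sit close together in the space.
embeddings = {
    "What is a loan?":        (0.9, 0.1),
    "How do I repay credit?": (0.8, 0.2),
    "What is the weather?":   (0.1, 0.9),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec):
    """Return the stored text whose embedding is most similar to the query."""
    return max(embeddings, key=lambda t: cosine_similarity(embeddings[t], query_vec))

# A question embedded near the "loan" region of the space is matched
# to the loan-related texts, not to the weather question.
query = (0.85, 0.15)
print(nearest(query))
```

A real model does far more than nearest-neighbour lookup, of course; it generates new text token by token. But the geometric intuition, similar meaning as nearby coordinates, is the same.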