ChatGPT’s Flaws Mean It Cannot Be A Universal Tool For Serious Writing – Analysis

ChatGPT, artificial intelligence and the implications of self-generated content and analysis raise many fundamental questions about the role of an AI processor. ChatGPT models are designed for natural language processing tasks such as text generation and language understanding, but anyone assembling data with this new “toy” should keep several important considerations in mind.

AI chatbots like ChatGPT can be used to construct manuscripts, and a growing number of professional firms are using such tools to generate papers, often with disastrous results when corporate quality assurance fails.

Chatbots such as ChatGPT are computer programs trained on extensive “libraries” of internet text to process written or spoken human communication. In essence, they are conversational tools that can perform a range of functions, depending on what they are designed for, and are meant to follow human conversational instructions and respond to them in detail.
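In practice, interacting with such a chatbot typically means sending a conversational instruction to a hosted model and reading back its reply. The sketch below, assuming OpenAI's Python client with an API key in the environment, and using a placeholder model name and prompt, illustrates that request-and-response loop.

```python
# A minimal sketch of a chatbot request, assuming OpenAI's Python SDK
# ("pip install openai") and an API key in the OPENAI_API_KEY environment
# variable. The model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain, in two sentences, what a chatbot is."},
    ],
)

# The reply comes back as structured data; the text lives in the first choice.
print(response.choices[0].message.content)
```

The point of the sketch is simply that the model produces whatever text the instruction elicits; nothing in the loop verifies that the reply is accurate.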

ChatGPT has been extremely popular since its appearance on the market last November, reaching 1 million users in only five days, a record for this industry. ChatGPT can generate text that closely resembles human writing, although the content it produces still requires heavy editing. Outside of document production, ChatGPT has other uses. It can engage in multiple ongoing conversations, understand and respond to natural language inputs, and offer customized and interactive assistance. This makes ChatGPT a promising tool for open education, as it can improve the independence and autonomy of autodidactic learners while being both practical and adaptable. It provides personalized support, direction and feedback, and has the potential to increase motivation and engagement among students.

However, the present generation of chatbots has academic and policy limitations. First, chatbots and AI are not conscious; they can only produce content based on the libraries on which they were trained. Second, chatbots can produce factually incorrect answers that nonetheless sound credible. Third, the information chatbots draw on can be outdated, frozen at the time the AI software was developed, rather than current. Fourth, chatbots such as ChatGPT can respond to harmful instructions because they lack judgment. To be sure, the potential misuse of the many AI chatbots being developed by multiple companies is a concern. Overall, there are open questions about the scientific integrity of the content that chatbots produce in their present form.

Researchers are noting a real uptick in individuals producing scholarly work with ChatGPT, and documents generated this way are being submitted for peer review, often to be rejected immediately. Academic stakeholders are weighing the role of ChatGPT, and of AI generally, in scientific publications. Scientific institutions have concluded that AI tools cannot meet the requirements for authorship because they cannot take responsibility for the submitted work; they can only be used to generate material that a human author then submits. AI-generated data has no legal status, and difficulties will arise around conflict-of-interest declarations and the management of copyright and license agreements.

ChatGPT receives mixed reviews when applied to the health and medical field. With its ability to generate human-like text from large amounts of data, ChatGPT has the potential to offer individuals and communities recommendations about their health, though recommendations are not the same as informed decisions. However, as with any technology, there are limitations and challenges to consider when using ChatGPT in public health.

Clinical research shows that ChatGPT can provide information about the various types of community health programs and services available, the populations they serve and the specific health outcomes they aim to achieve. It can also describe the eligibility criteria for accessing these programs and services, the costs involved and the insurance coverage available. Here, however, ChatGPT may offer options that are not medically sound, so its suggestions should be checked with an actual human being holding a medical doctorate.

Finally, when considering data processing and AI, it should not be forgotten that futures studies, which project trajectories by feeding content into a computer processing program, can produce useful outputs. Software can scan text, process all of its words and numbers, and produce a trajectory for, say, the year 2050. This type of analysis reviews scientific content and produces reports that can be very useful, while involving no AI: for example, collecting 20 primary-source articles and scanning them into a program that takes the data and produces a paper based on the programming software. Measured against this research methodology, the ChatGPT-AI mix seems to have missed a step.
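To make that non-AI workflow concrete, the sketch below assumes a folder of scanned source articles and a hypothetical “in <year> were <value>” text pattern; it extracts year-and-number pairs, fits a simple linear trend, and projects it to 2050. The folder name and pattern are illustrative assumptions, not a real methodology.

```python
# A minimal sketch of the non-AI "futures studies" workflow described above:
# scan a folder of source articles, pull out year/value pairs, fit a linear
# trend, and project it to 2050. The file layout and regex are hypothetical.
import re
from pathlib import Path

import numpy as np

PATTERN = re.compile(r"in (\d{4}) were ([\d.]+)")

years, values = [], []
for article in Path("primary_sources").glob("*.txt"):  # e.g. 20 scanned articles
    for match in PATTERN.finditer(article.read_text()):
        years.append(int(match.group(1)))
        values.append(float(match.group(2)))

if len(years) < 2:
    raise SystemExit("Not enough data points to fit a trend.")

# Fit a first-degree polynomial (a straight line) to the extracted data.
slope, intercept = np.polyfit(years, values, 1)

projection = slope * 2050 + intercept
print(f"Projected value for 2050: {projection:.2f}")
```

Nothing in this pipeline requires a language model; the trajectory comes entirely from the numbers already present in the source texts.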

Overall, ChatGPT and AI suffer from a lack of accuracy, from the biases and limitations of their training data and, in the public arena, from limited engagement owing to the absence of direct interaction with human beings. That limits AI chatbots as a functional research tool. In the education and medical fields there are many uses, but they come with drawbacks that need to be addressed in regulatory law. ChatGPT has its particular uses, but it is not a panacea for writing a serious academic or policy paper.
