WHAT ARE LLMs (Large Language Models, e.g., ChatGPT)? Applications And Their Potential Risks
Large Language Models, Their Applications And How They Will Affect The Work Industry

Large Language Models (LLMs) are deep learning models trained to understand and generate natural language. They are trained on huge amounts of text data, which allows them to produce human-like responses. These models were created with the goal of getting machines to understand, interact, and think like humans.
In our quest to build machines that think like humans, several types of language models have been explored:
- Convolutional neural network (CNN)-based language models
- Recurrent neural network (RNN)-based language models
- Transformer-based language models
LLMs are now largely synonymous with Transformer-based models due to the widespread adoption and popularity of Transformers.
Transformers have shown several advantages over RNN- and CNN-based language models thanks to these characteristics: parallelization, handling of long-range dependencies, transfer learning, global context, and better overall performance.
Current LLMs are effective at generating appropriate responses due to two factors:
- They are built on transformers, a state-of-the-art neural network architecture with many parameters. The principal novelty of the transformer lies in its self-attention mechanism, which enables the model to better understand the relationships between different elements of the input (see the sketch after this list).
- Transformer-based LLMs use a two-stage training pipeline to learn efficiently from data. In the pretraining stage, LLMs employ a self-supervised learning approach that enables them to learn from large amounts of unannotated data (raw, unlabeled text from which the algorithm extracts its own patterns) without requiring manual annotation. This is a considerable advantage over traditional fully supervised deep learning models, as it eliminates the need for external manual annotation and enables greater scalability. In the subsequent fine-tuning stage, LLMs are trained on small, task-specific, annotated datasets to leverage the knowledge acquired during pretraining and perform specific tasks as intended by end users (a minimal fine-tuning sketch also follows below). As a result, LLMs achieve high accuracy on various tasks with minimal human-provided labels.
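To make the self-attention idea concrete, here is a minimal sketch in plain NumPy. It is an illustration of scaled dot-product self-attention, not any particular model's implementation; the matrix names and sizes are made up for the example.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention: every token attends to every token."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v            # project inputs to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise relevance between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                             # each output is a weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                        # toy input: 4 tokens, embedding size 8
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)      # (4, 8)
```

Because the whole attention-weight matrix is computed in one shot, the sequence is processed in parallel rather than token by token, which is one of the advantages over RNNs listed above.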
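And here is what the second, fine-tuning stage of the pipeline can look like in practice: a hedged sketch using the Hugging Face transformers and datasets libraries (assumed installed). The checkpoint, dataset, and hyperparameters are illustrative choices for the example, not a recipe from any particular model's training.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# start from a pretrained checkpoint and attach a task-specific classification head
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# a small annotated dataset for the target task (sentiment classification here)
dataset = load_dataset("imdb")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=0).select(range(2000)),  # keep the demo small
)
trainer.train()  # a few labeled examples adapt the knowledge acquired in pretraining
```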

TYPES OF LLMs
- GPT (Generative Pre-trained Transformer) is a family of models developed by OpenAI, the company that also created ChatGPT, the most popular conversational chatbot. It is a general-purpose LLM trained to perform various NLP tasks. The model has several versions, including GPT-1, GPT-2, GPT-3, and the recently announced GPT-4. Each version is significantly more powerful than the previous one: GPT-2 has 1.5 billion parameters, GPT-3 has a staggering 175 billion, and GPT-4 is larger and more powerful still, trained on more data than its predecessors. GPT serves as the foundation for interfaces like ChatSonic (built on GPT-3), ChatGPT (built on GPT-3.5), Bing Chat (built on GPT-4), HuggingGPT, and many others.
- LaMDA (Language Model for Dialogue Applications): A conversational model developed by Google, specifically trained on dialogue and designed to generate natural conversation, unlike GPT, which handles a broad range of NLP tasks. LaMDA has the notable ability to quickly and accurately turn unstructured input into structured output, such as data points or actionable responses. It is the same model a Google engineer famously claimed was sentient. LaMDA has up to 137 billion parameters, which makes it extremely good at holding conversations. A more lightweight, optimized version of LaMDA powers Google Bard, a competitor of ChatGPT.
- PaLM (Pathways Language Model): Also created by Google, PaLM is a more recent and more advanced model than LaMDA. It is a much closer contender to GPT-3 and GPT-4 and, unlike the dialogue-specific LaMDA, has a wide array of uses. PaLM has up to 540 billion parameters and outperforms GPT-3 on some tasks. At the 2023 Google I/O event, Google released PaLM 2, a much more advanced, super-charged version that is highly competitive with GPT-4.
- BERT (Bidirectional Encoder Representations from Transformers): Created by Google researchers in 2018, it performs well at 11+ of the most common language tasks, such as sentiment analysis and named entity recognition (a quick masked-word demo follows this list). The transformer-based language model RoBERTa builds on BERT's language-masking strategy and modifies key hyperparameters, including removing BERT's next-sentence pretraining objective and training with much larger mini-batches and learning rates.
- LLaMA (Large Language Model Meta AI): Created by Meta, it is also a transformer-based model that supports multiple languages, including Spanish, English, French, German, Italian, Portuguese, and Dutch. It is designed to be highly efficient and lightweight, making it well suited to low-latency NLP tasks that require real-time responses. Unlike ChatGPT or Bard, it is not conversational out of the box; it is fundamentally a research tool that needs prompt engineering and is designed to help researchers advance their work in this subfield of AI.
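As a quick illustration of the masked-word objective behind BERT, here is a short sketch using the Hugging Face transformers pipeline API (assumed installed); the public bert-base-uncased checkpoint stands in for any masked language model.

```python
from transformers import pipeline

# fill-mask asks a BERT-style model to predict the hidden token
fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("Large language models can [MASK] human-like text."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```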
LLMs can also be classified into general-purpose LLMs, fine-tuned LLMs, and edge LLMs.
General-purpose LLMs are versatile models trained on a diverse set of data that can perform a wide range of tasks, while fine-tuned LLMs are pre-trained models further trained on specific tasks to generate more accurate responses. Edge LLMs are optimized for fast inference and are designed to run on edge devices (smartphones, IoT devices, laptops, and other hardware equipped with processors, sensors, and network connectivity) without relying on cloud-based resources, as in the sketch below.
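A hedged sketch of what on-device inference can look like, using the llama-cpp-python bindings (assumed installed); the model file path is illustrative, and any small quantized checkpoint in a compatible format would do.

```python
from llama_cpp import Llama

# a 4-bit quantized checkpoint small enough to run in laptop RAM, no cloud round-trip
llm = Llama(model_path="models/llama-2-7b.Q4_0.gguf", n_ctx=512)

out = llm("Q: What is an edge device? A:", max_tokens=48, stop=["Q:"])
print(out["choices"][0]["text"])
```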
How LLMs Will Affect The Work Industry
Everything in this section is a subjective view and should not be taken as an unquestionable submission.
AI displacing humans at work has always been a much-talked-about topic, even more so since ChatGPT came into the picture. For centuries we have thought of ourselves as special and placed ourselves at the centre of the universe. Now, for the first time in history, we feel threatened by machine intelligence, and perhaps we should. Fundamentally, our computational capability is no different from that of any other intelligent body in our ecosystem. AI, on the other hand, has proven beyond doubt that it is computationally superior to us; it processes and learns things orders of magnitude faster than we do. Until recently, it seemed that only humans could write meaningful, novel essays. The introduction of LLMs has changed that: chatbots like ChatGPT can now write human-like essays with no trace of grammatical errors.
So we have to ask: where does this leave writers and workers in industries where the use of AI is gradually proliferating? Personally, I think we don't know how this will affect work in the long run, whether we will lose our jobs to AI or not. What I am sure of is that AI is going to make us more productive as a complementary tool in our work for the foreseeable future.
I believe it will be really hard to replace people who work in creative fields like writing and music, as well as social workers and health workers. In short, I see it as practically impossible to do away with workers whose work revolves around interacting with humans.
For as long as we are the dominant species on Earth, there is always going to be work for us, and we will always find a way to make ourselves useful. I can say with little uncertainty that I don't see AIs taking over from us soon; until then, we are responsible for the role AIs will play in our communities.
Also, the possibility of AIs becoming an existential threat does not seem improbable, though I still believe we should not stop creating powerful models, as they hold the key to rapid breakthroughs in health, industry, space exploration, quantum physics, and other areas that still seem like black boxes to humans.
The potential of LLMs calls for immediate action from governments to enact laws that ensure the transparency, safety, and ethical use of AIs.
CHALLENGES AND ETHICAL CONSIDERATIONS
For all the advances we have made in getting LLMs to understand humans and provide us with factual, error-free information, they can sometimes be overconfident or even make things up.
- Limited knowledge of the world: As powerful as LLMs are, they are not informed about every event around the world, as they are limited by current computing power and by their training data. This limitation is partly addressed by fine-tuning (after pretraining, a model is trained on a specific task to make it more adept at it). As computational power increases, LLMs are bound to become more robust and powerful.
- Hallucinations: LLMs can in some cases generate information that is factually untrue or inconsistent with reality. This happens when a prompt goes beyond the model's training data, leading to nonsensical statements. According to ChatGPT, there is a phone called the Tecno Phantom X20 Ultra, which is the phone I use in my dreams; I certainly made it up, but ChatGPT thinks Tecno makes it.
- Misinformation or harmful content generation: LLMs have the potential to generate harmful content such as hate speech, propaganda, racist or sexist language, and other forms of content that could cause harm to individuals, and they sometimes show biases on sensitive topics like race and politics. LLMs like ChatGPT and ChatSonic may seem like hubs of all knowledge, but even in academic research they can contrive information and sometimes produce non-existent citations.
- Privacy issues: This powerful technology also raises questions about privacy. These models depend on access to training data, which may include the personal data of individuals. They can also raise copyright issues, as they can sample copyrighted material without consent.
APPLICATIONS OF LLMs

- Natural Language Processing (NLP): LLMs are widely used in NLP tasks such as language translation, sentiment analysis, text summarization, and copywriting. They can generate coherent, fluent sentences, which makes them useful for chatbots, virtual assistants, and voice assistants (see the first sketch after this list).
- Chatbot applications: LLMs can be used to create personalized conversational AI chatbots that understand natural language and interact with humans, as in the second sketch below.
- Code generation: LLMs can generate in seconds code that would take a human a considerable amount of time to write, increasing productivity.
- Healthcare: LLMs can assist medical practitioners in medical diagnosis and treatment planning. They can analyze patient data, suggest treatment options, and even generate medical reports.
- Education: LLMs can be used in education to create personalized learning experiences for students. They can analyze student data to identify knowledge gaps and provide relevant content to fill those gaps.
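Here is a minimal sketch of the first application above, using Hugging Face transformers pipelines (assumed installed); the task names are real pipeline tags, while the input strings are of course made up.

```python
from transformers import pipeline

# sentiment analysis: classify the feeling behind a sentence
classify = pipeline("sentiment-analysis")
print(classify("I love how fast this phone charges!"))  # e.g. [{'label': 'POSITIVE', ...}]

# summarization: condense a longer passage into a few sentences
summarize = pipeline("summarization", model="facebook/bart-large-cnn")
article = (
    "Large language models are deep learning models trained on huge amounts of "
    "text data. They are pretrained on unannotated text and then fine-tuned on "
    "small, task-specific datasets, which lets them perform many NLP tasks."
)
print(summarize(article, max_length=40, min_length=10)[0]["summary_text"])
```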
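And a hedged sketch of a chatbot built on top of an LLM API, here the 2023-era openai Python SDK (v0.x); the model name and system prompt are illustrative, and an API key is assumed to be set in the environment.

```python
import openai  # pip install "openai<1.0" for this older interface

history = [{"role": "system", "content": "You are a friendly personal assistant."}]

while True:
    user = input("You: ")
    history.append({"role": "user", "content": user})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    answer = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})  # keep context across turns
    print("Bot:", answer)
```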
There are a lot of challenges and ethical issues that need to be addressed as these technologies become more incorporated into spaces like industry, academia, media, and politics. As AIs are not cognitively autonomous yet, we are solely responsible for creating responsible, ethical AIs that will be valuable participants in, and complements to, work and society.
As researchers continue to improve LLMs, we can expect even more powerful language models with enhanced capabilities, such as the ability to understand sarcasm, irony, and other nuanced aspects of language. LLMs will likely become even more adept at handling multilingual text and understanding different dialects and regional variations of languages. The future looks very promising; we can expect endless developments in LLMs in the years to come, though they might end up taking your job.
Rest assured that your jobs are safe from Transformers until the arrival of OPTIMUS PRIME😎😎
Follow me for more content on Artificial Intelligence, Machine Learning, Data Science, and other interesting topics. ☝️☝️☝️