ChatGPT and other widely popular AI bots like it generate responses using a class of machine learning models called Large Language Models (LLMs).
An LLM is a machine learning model trained on a large body of text to perform natural language processing (NLP) tasks – like generating the responses ChatGPT produces. LLMs are built on deep neural network architectures, most notably the Transformer, and are trained on massive datasets – literally billions of words.
Not surprisingly, the computer science 'garbage in, garbage out' rule – poor input data produces poor output – applies to LLMs as well. That's why companies working on LLMs are scrambling to collect the massive yet high-quality datasets needed to train them. In this post, we'll take a look at the challenges of building LLM datasets and how companies can address them.
The state of LLMs today
There is an ongoing ‘race of giants’ to develop and implement the world’s most cutting-edge AI-driven bot. Alongside OpenAI’s ChatGPT, which has now been integrated into Microsoft’s Bing search engine, Google is heavily promoting its own Bard – which is still available only to beta testers but is expected to roll out in the coming weeks or months.
As this race heats up, LLM architectures are growing in scale and complexity. OpenAI’s GPT-3 LLM, released in June 2020, had 175 billion parameters. More recently, Nvidia and Microsoft announced the Megatron-Turing Natural Language Generation model (MT-NLG) – the largest monolithic transformer language model, with some 530 billion parameters. Not to be left behind, Google announced (but has not yet released) PaLM – a 540 billion-parameter model.
What all LLMs – past and present, large and small – have in common is that they need training data. Whereas LLMs were once trained on text from online sources like news sites, Wikipedia, scientific papers, and even novels, today data scientists prefer to train models on more carefully curated datasets. The reason? Higher quality – and not just quantity – of training data produces more versatile, more accurate LLMs.
AI researchers broadly divide the data used to train LLMs into low-quality and high-quality data. High-quality data comes from vetted sources – meaning it has been reviewed for quality, whether professionally or via peer review. Low-quality data encompasses unfiltered, user-generated text like social media posts, website comments, and similar sources. Training LLMs on low-quality datasets can result in the following issues (a rough filtering sketch follows the list):
- Data bias – A dataset with text from unbalanced sources can make the model perform poorly on particular inputs.
- Spurious correlations – Datasets that use language in only one particular way can teach an LLM to rely on incorrect shortcuts, leading to mistakes in real-world scenarios.
- Mislabeled examples – By introducing noise into training, mislabeled data can confuse the LLM and lower output quality.
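To make this concrete, here is a minimal sketch of the kind of heuristic filters data teams apply to weed out low-quality documents before training. The specific rules and thresholds below are illustrative assumptions, not values from any published pipeline:

```python
def passes_quality_filters(text: str,
                           min_words: int = 50,
                           max_symbol_ratio: float = 0.1,
                           max_dup_line_ratio: float = 0.3) -> bool:
    """Rough heuristic quality gate for a single document (illustrative thresholds)."""
    words = text.split()
    if len(words) < min_words:  # very short pages are usually stubs or error pages
        return False

    # A high share of non-alphanumeric characters often signals markup debris or spam
    symbols = sum(1 for ch in text if not ch.isalnum() and not ch.isspace())
    if symbols / max(len(text), 1) > max_symbol_ratio:
        return False

    # Many repeated lines suggest navigation menus, footers, or scraped templates
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if lines:
        dup_ratio = 1 - len(set(lines)) / len(lines)
        if dup_ratio > max_dup_line_ratio:
            return False

    return True

# raw_docs stands in for a batch of scraped documents loaded elsewhere
raw_docs = ["Example scraped page text goes here ..."]
clean_docs = [d for d in raw_docs if passes_quality_filters(d)]
```

Real pipelines layer many more checks on top (language identification, deduplication across documents, toxicity filtering), but the principle is the same: cheap heuristics remove the bulk of obviously low-quality text before more expensive curation steps.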
All this means that there’s a huge demand for high-quality, high-volume data. Yet here’s where data scientists are running into challenges: there’s not enough quality data out there. In fact, some researchers estimate that data for training language models may actually be depleted by 2026. Even as developers create ever more sophisticated and powerful LLMs, they’re scrambling to find quality and cost-effective datasets to train them on.
The challenges of collecting data to train LLMs
Why is it so difficult to find quality datasets to train LLMs? Data quality depends not only on the size and diversity of the datasets but also on the time and expense of pre-processing, cleaning, and filtering the data.
For example, public datasets like Common Crawl contain petabytes of data – raw web page data, extracted metadata, and plain text extractions. Hosted by Amazon Web Services through its Public Datasets Program since 2012, Common Crawl crawls the web and freely provides archives and datasets to the public. There are numerous other public datasets that are similarly created and hosted, such as:
- WebText2
- Kaggle
- Google Dataset Search
- Hugging Face
- Data.gov
- Wikipedia database
All these public datasets, however, have one thing in common: they are unfiltered or only lightly filtered, and tend to be of lower quality than more curated datasets. Today, AI researchers and data scientists are turning to higher-quality datasets. In fact, they’ve found that training on a smaller amount of high-quality data, paired with a larger number of model parameters, can actually produce better LLMs.
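For instance, here is a rough sketch of how a data scientist might stream one of these Common Crawl-derived corpora and keep only records that pass a basic quality check. It assumes the publicly available allenai/c4 dataset on the Hugging Face Hub and its "text" field; the dataset name, fields, and thresholds may need adjusting for other corpora:

```python
from datasets import load_dataset

# Stream the corpus instead of downloading it: C4 alone is hundreds of gigabytes
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

def looks_usable(record: dict) -> bool:
    # Keep only records with a reasonable amount of text (illustrative cutoff)
    return len(record["text"].split()) >= 50

kept = 0
for record in stream:
    if looks_usable(record):
        kept += 1
    if kept >= 1000:  # stop after a small sample for illustration
        break
print(f"Collected {kept} usable records")
```

Even this toy example shows where the cost goes: every record has to be pulled, parsed, and checked before it can be used, which is exactly the pre-processing burden described above.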
Powerful LLMs: Three data must-haves
Companies looking to train an LLM on high-quality data should look for three key parameters when choosing a dataset:
- Rich metadata – Most dataset providers deliver text and very basic metadata (URL, content length, etc.). Make sure the provider you choose (like Webz.io) offers dozens of metadata parameters – descriptive, structural, administrative, and text embedded into images – for each article. The provider should use advanced web scraping techniques to deliver cleaned, structured data extracted from HTML, without the raw HTML itself.
- Data freshness – Most datasets are fairly static; even Common Crawl is only updated about once a month. Unlike low-quality, high-latency public datasets, Webz.io offers near real-time dataset updates – giving you access to the latest information as soon as it becomes available.
- Filtering options – Datasets can be expensive to analyze because they come as a bulk dump of text, so isolating certain keywords, languages, or topics means reviewing vast amounts of text. With Webz.io you can filter the data by the many fields we automatically extract – language, location, keywords, sentiment, and more – to improve its relevance and quality.
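The exact query syntax varies by provider, so the sketch below only illustrates the kind of metadata-level filtering described above, applied locally to records that already carry rich fields. The field names and values are hypothetical and not taken from Webz.io's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Article:
    text: str
    language: str   # hypothetical metadata fields for illustration
    country: str
    sentiment: str  # e.g. "positive", "neutral", "negative"

def matches(article: Article, language: str, countries: set[str], keywords: set[str]) -> bool:
    """Keep an article only if it matches the language, location, and keyword filters."""
    return (
        article.language == language
        and article.country in countries
        and any(kw.lower() in article.text.lower() for kw in keywords)
    )

articles = [
    Article("New LLM benchmark results released ...", "english", "US", "neutral"),
    Article("Résultats du nouveau modèle ...", "french", "FR", "positive"),
]
subset = [a for a in articles if matches(a, "english", {"US", "GB"}, {"llm", "benchmark"})]
```

The point of rich metadata is precisely this: the narrower and cleaner the slice you can request up front, the less raw text you pay to store, process, and review.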
Ethical considerations in LLM data collection
The development of LLMs has transformed natural language processing, but it also raises significant ethical considerations around data collection and usage. Because LLM data collection involves sourcing vast amounts of text from diverse origins, it demands careful attention to ethical standards to avoid potential pitfalls.
Some of the ethical concerns involved in LLM data collection include:
- Bias and representation:
- Companies need to ensure that training data is balanced and representative to avoid discriminatory models.
- Data scientists must curate datasets that reflect balanced and diverse perspectives to prevent perpetuating stereotypes and inequalities.
- Privacy concerns:
- Personal information may inadvertently be included in datasets during LLM data collection.
- Implementing robust anonymization techniques and adhering to data protection regulations (e.g., GDPR, CCPA) is essential to protect individuals’ privacy (see the redaction sketch after this list).
- Transparent data usage policies build trust and ensure accountability.
- Data quality and accuracy:
- Data preparation for LLMs must involve filtering out biased content and ensuring the training data is accurate.
- Sources like social media and news articles can potentially introduce biases and inaccuracies that need to be monitored.
- Transparency and consent:
- Users should be informed about how their data is being utilized and given the option to opt out.
- Transparency in data collection practices enhances trust and accountability.
- Environmental impact:
- Training large LLMs consumes significant computational resources, contributing to carbon emissions.
- Ethical data preparation should include efforts to minimize environmental footprints, such as optimizing algorithms and using sustainable energy sources.
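As one concrete example for the privacy point above, here is a minimal sketch of regex-based redaction of obvious email addresses and phone numbers. The patterns are illustrative only; production pipelines would layer dedicated PII-detection tooling and compliance review on top of anything this simple:

```python
import re

# Simple patterns for two common PII types; real systems use far more robust detection
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(redact_pii("Contact jane.doe@example.com or call +1 (555) 010-2525."))
# -> Contact [EMAIL] or call [PHONE].
```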
The ethical considerations at every stage – from collecting LLM training data to preparing and processing it – are multifaceted. Addressing these concerns requires a concerted effort to ensure fairness, privacy, transparency, and environmental responsibility, ultimately leading to more ethical and trustworthy AI systems.
The bottom line
Even as researchers create ever more sophisticated, complex, and powerful LLMs, quality, cost-effective datasets to train them on remain in short supply. Choosing an LLM training web dataset provider like Webz.io enables you to cost-effectively collect the massive yet high-quality datasets needed to train your LLM – maximizing training budgets and optimizing training results.
Ready to generate more accurate and relevant models? Talk to one of our experts today!