Text Generation (LLMs)
The core idea here is to model the probability of the next token (word, subword, or character) given all the preceding tokens. In essence, the model is a sophisticated next-word predictor. For example, in “The cat sat on the [mat]”, the model assigns a probability to every possible next token and selects one according to the chosen decoding strategy. Repeating this step, one token at a time, produces coherent prose.
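The decoding step above can be sketched without a neural network: a softmax turns raw scores (logits) into a probability distribution over candidate tokens, and greedy decoding picks the most likely one. The candidate tokens and scores below are illustrative assumptions, not real model outputs.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution that sums to 1."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to candidate next tokens
# after "The cat sat on the" (made-up numbers for illustration).
logits = {"mat": 4.0, "floor": 2.5, "roof": 1.0, "banana": -2.0}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding: take the argmax
print(next_token)  # "mat"
```

A real LLM does exactly this, except the distribution covers its entire vocabulary (tens of thousands of tokens) and the scores come from the network's forward pass.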
We’ll use the Hugging Face ecosystem, which is the industry standard for deploying and experimenting with GenAI models. Specifically, we’ll use the transformers library and a small but capable LLM to get our feet wet with text generation.
Python Code
from transformers import pipeline

# Load a small pretrained causal language model for text generation
generator = pipeline("text-generation", model="distilgpt2")

prompt = "The most popular clothing brand will"

output = generator(
    prompt,
    max_new_tokens=100,      # generate up to 100 tokens beyond the prompt
    num_return_sequences=1,  # return a single completion
    do_sample=True,          # sample instead of always taking the top token
    top_k=50,                # consider only the 50 most likely tokens
    top_p=0.95,              # nucleus sampling: keep tokens covering 95% of probability mass
    temperature=0.75,        # <1 sharpens the distribution (less random output)
)

print("Prompt")
print(prompt)
print("\nGenerated output")
print(output[0]["generated_text"])
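The sampling parameters in the listing interact as follows: top_k first restricts the candidates, temperature rescales their scores, and then one token is drawn at random in proportion to the resulting probabilities. A minimal sketch of that logic, using made-up scores rather than real model logits:

```python
import math
import random

def sample_next(logits, top_k=2, temperature=0.75, seed=0):
    """Toy top-k sampling: filter to the k best tokens, then draw one
    from a temperature-scaled softmax over the survivors."""
    # Keep only the top_k highest-scoring candidates
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Divide scores by temperature: <1 exaggerates gaps, >1 flattens them
    scaled = [score / temperature for _, score in top]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # unnormalized softmax
    random.seed(seed)  # fixed seed here only to make the sketch repeatable
    return random.choices([tok for tok, _ in top], weights=weights, k=1)[0]

# Hypothetical candidate scores (illustrative numbers only)
logits = {"mat": 4.0, "floor": 2.5, "roof": 1.0}
print(sample_next(logits))
```

With top_k=2, only "mat" and "floor" can ever be drawn; lowering the temperature makes "mat" increasingly dominant, which is why temperature=0.75 in the pipeline call yields more focused output than the default of 1.0.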
References
Kharwal, Aman. Hands-On GenAI, LLMs and AI Agents (p. 16). BlueRose Publishers. Kindle Edition.