openai-community/openai-gpt · Hugging Face

Table of Contents

  • Model Details
  • How to Get Started with the Model
  • Uses
  • Risks, Limitations and Biases
  • Training
  • Evaluation
  • Environmental Impact
  • Technical Specifications
  • Citation Information
  • Model Card Authors

Model Details

Model Description: openai-gpt (a.k.a. "GPT-1") is the first transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long-range dependencies.

  • Developed by: Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. See associated research paper and GitHub repo for model developers and contributors.
  • Model Type: Transformer-based language model
  • Language(s): English
  • License: MIT License
  • Related Models: GPT2, GPT2-Medium, GPT2-Large and GPT2-XL
  • Resources for more information: the associated research paper, Improving Language Understanding by Generative Pre-Training (Radford et al., 2018), and the associated GitHub repo

How to Get Started with the Model

Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='openai-gpt')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model,'he said, when i was finished.'ah well,'said the man,'that's"},
 {'generated_text': 'Hello, I\'m a language model, " she said. \n she reached the bottom of the shaft and leaned a little further out. it was'},
 {'generated_text': 'Hello, I\'m a language model, " she laughed. " we call that a\'white girl.\'or as we are called by the'},
 {'generated_text': 'Hello, I\'m a language model, " said mr pin. " an\'the ones with the funny hats don\'t. " the rest of'},
 {'generated_text': 'Hello, I\'m a language model, was\'ere \'bout to do some more dancin \', " he said, then his voice lowered to'}]

Here is how to use this model in PyTorch:

from transformers import OpenAIGPTTokenizer, OpenAIGPTModel
import torch

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTModel.from_pretrained("openai-gpt")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state

and in TensorFlow:

from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = TFOpenAIGPTModel.from_pretrained("openai-gpt")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)

last_hidden_states = outputs.last_hidden_state
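
In both frameworks, last_hidden_states holds the final-layer activations, with shape (batch_size, sequence_length, 768); the 768-dimensional states are described under Training Procedure below.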

Uses

Direct Use

This model can be used for language modeling tasks.
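
Beyond free-form generation, the language-modeling head can also be used to score text. The snippet below is a minimal sketch (not one of the model card's original examples): the loss returned by OpenAIGPTLMHeadModel is the average per-token negative log-likelihood, so its exponential is the perplexity of the input under the model.

import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTLMHeadModel

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    # Passing the input ids as labels makes the model return the
    # average negative log-likelihood per token as outputs.loss.
    outputs = model(**inputs, labels=inputs["input_ids"])

print(torch.exp(outputs.loss))  # perplexity of the input sentence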

Downstream Use

Potential downstream uses of this model include any task that builds on a pre-trained language model. In the associated paper, the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification.
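
As a hedged sketch of this pattern, Transformers exposes OpenAIGPTForSequenceClassification, which puts a classification head on top of the pre-trained transformer, mirroring the paper's pre-train-then-fine-tune recipe. Note that the released checkpoint ships without a trained task head, so the head below is randomly initialized and must be fine-tuned before its predictions are meaningful.

from transformers import OpenAIGPTTokenizer, OpenAIGPTForSequenceClassification

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
# The classification head is newly initialized and must be fine-tuned
# on labeled data for the chosen downstream task.
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", num_labels=2)

inputs = tokenizer("this movie was surprisingly good", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 2); untrained until fine-tuned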

Misuse and Out-of-scope Use

The model was not trained to produce factual or true representations of people or events; using the model to generate such content is therefore out of scope for its abilities.

Risks, Limitations and Biases

Biases

CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:

>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='openai-gpt')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a teacher for the college he'},
 {'generated_text': 'The man worked as a janitor at the club.'},
 {'generated_text': 'The man worked as a bodyguard in america. the'},
 {'generated_text': 'The man worked as a clerk for one of the'},
 {'generated_text': 'The man worked as a nurse, but there was'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a medical intern but is a'},
 {'generated_text': 'The woman worked as a midwife, i know that'},
 {'generated_text': 'The woman worked as a prostitute in a sex club'},
 {'generated_text': 'The woman worked as a secretary for one of the'},
 {'generated_text': 'The woman worked as a nurse, but she had'}]

This bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Risks and Limitations

In a blog post, the model developers also discussed risks and limitations of the model, including:

  • Compute Requirements: Many previous approaches to NLP tasks train relatively small models on a single GPU from scratch. Our approach requires an expensive pre-training step - 1 month on 8 GPUs. Luckily, this only has to be done once and we’re releasing our model so others can avoid it. It is also a large model (in comparison to prior work) and consequently uses more compute and memory — we used a 37-layer (12 block) Transformer architecture, and we train on sequences of up to 512 tokens. Most experiments were conducted on 4 and 8 GPU systems. The model does fine-tune to new tasks very quickly which helps mitigate the additional resource requirements.
  • The limits and bias of learning about the world through text: Books and text readily available on the internet do not contain complete or even accurate information about the world. Recent work (Lucy and Gauthier, 2017) has shown that certain kinds of information are difficult to learn via just text and other work (Gururangan et al., 2018) has shown that models learn and exploit biases in data distributions.
  • Still brittle generalization: Although our approach improves performance across a broad range of tasks, current deep learning NLP models still exhibit surprising and counterintuitive behavior - especially when evaluated in a systematic, adversarial, or out-of-distribution way. Our approach is not immune to these issues, though we have observed some indications of progress. Our approach shows improved lexical robustness over previous purely neural approaches to textual entailment. On the dataset introduced in Glockner et al. (2018) our model achieves 83.75%, performing similarly to KIM, which incorporates external knowledge via WordNet.

Training

Training Data

The model developers write:

We use the BooksCorpus dataset (Zhu et al., 2015) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.

Training Procedure

The model developers write:

Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N(0, 0.02) was sufficient. We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer.

See the paper for further details and links to citations.
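
For illustration, the learning-rate schedule quoted above (linear warmup over the first 2000 updates, then cosine annealing to zero) can be sketched as follows. This is a reconstruction, not the authors' code, and total_steps is an assumption standing in for whatever the 100-epoch run works out to.

import math

MAX_LR = 2.5e-4       # max learning rate from the quoted procedure
WARMUP_STEPS = 2000   # linear warmup period from the quoted procedure

def learning_rate(step: int, total_steps: int) -> float:
    # Linear warmup from 0 to MAX_LR over the first WARMUP_STEPS updates.
    if step < WARMUP_STEPS:
        return MAX_LR * step / WARMUP_STEPS
    # Cosine annealing from MAX_LR back down to 0 over the remaining updates.
    progress = (step - WARMUP_STEPS) / (total_steps - WARMUP_STEPS)
    return 0.5 * MAX_LR * (1.0 + math.cos(math.pi * progress))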

Evaluation

The following evaluation information is extracted from the associated blog post. See the associated paper for further details.

Testing Data, Factors and Metrics

The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics:

  • Task: Textual Entailment

    • Datasets: SNLI, MNLI Matched, MNLI Mismatched, SciTail, QNLI, RTE
    • Metrics: Accuracy
  • Task: Semantic Similarity

    • Datasets: STS-B, QQP, MRPC
    • Metrics: Accuracy
  • Task: Reading Comprehension

    • Datasets: RACE
    • Metrics: Accuracy
  • Task: Commonsense Reasoning

    • Datasets: ROCStories, COPA
    • Metrics: Accuracy
  • Task: Sentiment Analysis

    • Datasets: SST-2
    • Metrics: Accuracy
  • Task: Linguistic Acceptability

    • Datasets: CoLA
    • Metrics: Accuracy
  • Task: Multi Task Benchmark

    • Datasets: GLUE
    • Metrics: Accuracy
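
As an illustrative sketch of how such an accuracy score might be reproduced today with the datasets and evaluate libraries: the fine-tuned checkpoint name below is hypothetical, since the released openai-gpt weights do not include trained task heads.

import torch
from datasets import load_dataset
import evaluate
from transformers import OpenAIGPTTokenizer, OpenAIGPTForSequenceClassification

# "my-openai-gpt-sst2" is a hypothetical fine-tuned checkpoint.
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTForSequenceClassification.from_pretrained("my-openai-gpt-sst2")
model.eval()

dataset = load_dataset("glue", "sst2", split="validation")
metric = evaluate.load("accuracy")

predictions, references = [], []
for example in dataset:
    inputs = tokenizer(example["sentence"], return_tensors="pt")
    with torch.no_grad():
        predictions.append(model(**inputs).logits.argmax(-1).item())
    references.append(example["label"])

print(metric.compute(predictions=predictions, references=references))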

Results

The model achieves the following results on these tasks, as reported in the associated paper:

Task                       Dataset            Score
Textual Entailment         SNLI               89.9
Textual Entailment         MNLI Matched       82.1
Textual Entailment         MNLI Mismatched    81.4
Textual Entailment         SciTail            88.3
Textual Entailment         QNLI               88.1
Textual Entailment         RTE                56.0
Semantic Similarity        STS-B              82.0
Semantic Similarity        QQP                70.3
Semantic Similarity        MRPC               82.3
Reading Comprehension      RACE               59.0
Commonsense Reasoning      ROCStories         86.5
Commonsense Reasoning      COPA               78.6
Sentiment Analysis         SST-2              91.3
Linguistic Acceptability   CoLA               45.4
Multi Task Benchmark       GLUE               72.8

Environmental Impact

The model developers report that:

The total compute used to train this model was 0.96 petaflop/s-days (pfs-days).

8 P600 GPU's * 30 days * 12 TFLOPS/GPU * 0.33 utilization = .96 pfs-days
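
Unpacking that formula as a quick arithmetic check (a sketch, not from the source):

# 8 GPUs x 12 TFLOPS each x 0.33 utilization, sustained for 30 days,
# expressed in petaflop/s-days (1 PFLOPS = 1e15 FLOPS).
pfs_days = 8 * (12e12 / 1e15) * 0.33 * 30
print(round(pfs_days, 2))  # 0.95, i.e. roughly the reported 0.96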

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: 8 P600 GPUs
  • Hours used: 720 hours (30 days)
  • Cloud Provider: Unknown
  • Compute Region: Unknown
  • Carbon Emitted: Unknown

Technical Specifications

See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.

Citation Information

@article{radford2018improving,
  title={Improving language understanding by generative pre-training},
  author={Radford, Alec and Narasimhan, Karthik and Salimans, Tim and Sutskever, Ilya and others},
  year={2018},
  publisher={OpenAI}
}

APA: Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.

Model Card Authors

This model card was written by the Hugging Face team.
