sahilrajput.com

Learn ChatGPT

TODO: Continue reading docs from here ..

ChatGPT was released publicly on 30 November 2022.

Quick Links:

Courses I found on Coursera:

Docs Links:

"Be concise" vs. "in brief" (TL;DR: "Be concise" works better as a prompt)

image

image

ChatGPT can now hear and speak - Official Docs

NOTE: Only for Plus and Enterprise users.

Click here

When I asked ChatGPT to generate 10,000-word and 2,000-word articles

| Query | wordcounter.net | platform.openai.com/tokenizer |
| --- | --- | --- |
| 10,000-word article | 998 words, 6,438 characters | Tokens: 1,285, Characters: 6,484 |
| 2,000-word article | 1,128 words, 7,292 characters | Tokens: 1,491, Characters: 7,358 |

When I asked it to count up to 2,500, 5,000 and 10,000.

Date: 7 Sep, 2023

Quickstart tutorial - OpenAI end notes

Source: Click here

image

Completions

Correct

image

Incorrect:

image

List of gpt-3.5-turbo models from api - /models

gpt-3.5-turbo-16k-0613
gpt-3.5-turbo
gpt-3.5-turbo-16k
gpt-3.5-turbo-0613
gpt-3.5-turbo-0301
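The list above comes from the `/models` endpoint. A minimal sketch of how you might filter that endpoint's response down to the gpt-3.5-turbo family — here the live API call is replaced with a hard-coded sample of model IDs, so the filtering logic is what's shown, not a real request:

```python
# Sketch: pick out the gpt-3.5-turbo family from a /models-style list.
# In a real script you would fetch the IDs via the openai package with an
# API key; `sample` below is a stand-in for that response.

def filter_turbo_models(model_ids):
    """Return only gpt-3.5-turbo* model IDs, sorted alphabetically."""
    return sorted(m for m in model_ids if m.startswith("gpt-3.5-turbo"))

sample = ["gpt-3.5-turbo", "text-davinci-003", "gpt-3.5-turbo-16k-0613", "gpt-4"]
print(filter_turbo_models(sample))  # -> ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k-0613']
```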

General Terminology

Source: Official Quickstart Guide from OpenAI: Click here

Source of below image: Click here

image

DEEP DIVE - Understanding tokens and probabilities

Source: Official Quickstart Guide from OpenAI: Click here

image

Pricing - 1/2 - Most cost-effective model

image

Pricing - 2/2

Source Pricing: Click here

image
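Since pricing is quoted per 1,000 tokens, the cost of a request is easy to compute once you know the input and output token counts. A minimal sketch — the rates used here are placeholders, not real prices; check the official pricing page for current numbers:

```python
# Sketch: cost of one request given per-1K-token prices.
# The example rates below are placeholders for illustration only.

def request_cost(input_tokens, output_tokens, in_price_per_1k, out_price_per_1k):
    """Dollar cost of a request priced per 1,000 tokens."""
    return (input_tokens / 1000) * in_price_per_1k + (output_tokens / 1000) * out_price_per_1k

# 1,000 input + 500 output tokens at placeholder rates:
print(request_cost(1000, 500, 0.0015, 0.002))  # -> 0.0025
```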

Image - 1/2 - Free trial gives you $5 of credit (Date: 5 September, 2023).

image

Image - 2/2 - Free trial gives you $5 of credit (Date: 5 September, 2023).

image

❤️ ❤️ ❤️ Personalized model training ❤️ ❤️ ❤️ :

image

Rate Limits

Source: Click here

image
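When you hit a rate limit, the usual fix is to retry with exponential backoff. A minimal sketch of that pattern — `RateLimitError` and the wrapped function are stand-ins here, not the real OpenAI client types:

```python
# Sketch: retry a rate-limited call with exponential backoff.
# RateLimitError is a placeholder; with the real API you would catch the
# client library's rate-limit exception instead.
import time

class RateLimitError(Exception):
    pass

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying on RateLimitError with delays 1s, 2s, 4s, ..."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))
```

Usage: wrap the API request in a zero-argument function (or `functools.partial`) and pass it to `with_backoff`.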

Tokenizer

Source: Official Tokenizer Page from ChatGPT: platform.openai.com/tokenizer

The GPT family of models process text using tokens, which are common sequences of characters found in text. The models understand the statistical relationships between these tokens, and excel at producing the next token in a sequence of tokens.

You can use the tool below to understand how a piece of text would be tokenized by the API, and the total count of tokens in that piece of text.

A helpful rule of thumb is that one token generally corresponds to ~4 characters of text for common English text. This translates to roughly ¾ of a word (so 100 tokens ~= 75 words).

If you need a programmatic interface for tokenizing text, check out our tiktoken package for Python. For JavaScript, the gpt-3-encoder package for node.js works for most GPT-3 models.
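The ~4 characters/token and 100 tokens ≈ 75 words rules of thumb above are easy to turn into a quick estimator. This is only the heuristic from the docs, not a real tokenizer — for exact counts use tiktoken (`pip install tiktoken`):

```python
# Sketch: rough token/word estimates from the tokenizer page's rules of
# thumb (~4 chars per token, 100 tokens ~= 75 words). Not exact; use
# tiktoken for real counts.

def estimate_tokens(text: str) -> int:
    """Approximate token count: ~4 characters per token."""
    return max(1, round(len(text) / 4))

def estimate_words(token_count: int) -> float:
    """Approximate word count: 100 tokens ~= 75 words."""
    return token_count * 0.75

print(estimate_tokens("Hello, how are you doing today?"))  # 31 chars -> 8
print(estimate_words(100))  # -> 75.0
```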

Compatible models for each endpoint

Source - Docs: Click here

image

Zero Retention

Source - Docs: Click here

image