Prompt Engineering & AI Application deployment

This work is licensed under a Creative Commons Attribution 4.0 International License.

Go to 8/15/2023 Workshop Agenda

Welcome to our documentation on generative Artificial Intelligence (AI) prompt engineering and application integration in academic research and education. Generative AI is transforming the way we work, learn, and teach at the University of Arizona. To make the most of this revolutionary technology, you'll need to master the art of writing effective "prompts": the messages or requests that guide ChatGPT's (or Bard's and Bing Chat's) responses.

Learning Objectives

After the lesson, you should be able to:

* Explain why generative AI matters in education, research, and society
* Create effective prompts in ChatGPT, Bing, Bard, and other GPTs
* Understand how and when to use AI assistants in your daily work

Getting Started

Getting Started with ChatGPT

Getting Started with Bing Chat

OpenAI's GPT technology is integrated into Microsoft's Edge browser via Bing Chat.

Getting Started with Bard Chat

Google's competing LLM, LaMDA, powers Bard.

General Productivity

Go to our lesson on Daily Productivity with GPTs

The most likely interaction you will have (or have already had) with generative AI, Transformers, and Large Language Models (LLMs) is with OpenAI's ChatGPT.

Predictive text and autocompletion are becoming more common in productivity software. Generative AI-powered features are making their way into everyday software like word processors, SMS text messaging, and spreadsheets. LLMs are also being built into productivity suites like Microsoft 365 (with CoPilot) and Google Workspace (Docs and Sheets).

Education

Go to our lesson on AI in the Classroom

Go to our lesson on Ethics

Research

Research applications of generative AI and LLMs are broad. We obviously can't teach all of them here, but hopefully this is an effective jumping-off point:

Programming

Go to our lesson on GitHub CoPilot

Go to our lesson on the OpenAI API

Applications

Go to our lesson on OpenAI API Powered Extensions

Go to our lesson on 🤗 HuggingFace Models

Go to our lesson on 🤗 HuggingFace Datasets

Go to our lesson on 🤗 Gradio UI

Glossary

Google's Machine Learning Glossary

NVIDIA's Data Science Glossary

Bard - Google's general purpose AI chat assistant, powered by LaMDA

Bidirectional Encoder Representations from Transformers (BERT) - a family of masked-language models introduced in 2018 by researchers at Google (Devlin et al., 2018)

ChatGPT - OpenAI's general purpose LLM

CoPilot - GitHub's (Microsoft/OpenAI) AI co-programmer, natively integrated as an extension in VS Code or GitHub Codespaces

Embeddings - the process of transforming high-dimensional data, such as text or images, into low-dimensional vectors. Embeddings allow us to quantify the meaning of data and the relationships within it.
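
To make this concrete, here is a minimal sketch using the open-source sentence-transformers library (the model name is illustrative; any sentence-embedding model behaves similarly). Sentences with similar meanings map to nearby vectors, which we can measure with cosine similarity:

```python
# Minimal embedding sketch; assumes `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

sentences = [
    "The cat sat on the mat.",
    "A kitten rested on the rug.",
    "Quarterly earnings exceeded expectations.",
]

# Each sentence becomes a fixed-length vector (384 dimensions for this model).
vectors = model.encode(sentences)

# Cosine similarity quantifies how close two meanings are in vector space.
print(util.cos_sim(vectors[0], vectors[1]))  # high: similar meaning
print(util.cos_sim(vectors[0], vectors[2]))  # low: unrelated topics
```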

Generative Pretrained Transformer (GPT) - a family of large language models introduced in 2018 by the American artificial intelligence organization OpenAI (Radford et al., 2018)

GitHub - the most widely used version control platform, owned by Microsoft and natively integrated with OpenAI-powered tools like CoPilot

DALL·E - OpenAI's text-to-image generation model

HuggingFace - a platform and library ecosystem for open source AI models, datasets, and apps

Large Language Models (LLMs) - language models consisting of neural networks with many parameters (typically billions of weights or more), trained on large quantities of unlabelled text using self-supervised learning

Language Model for Dialogue Applications (LaMDA) - Google's general purpose LLM

Latent Diffusion Model (LDM) - machine learning models designed to learn the underlying structure of a dataset by mapping it to a lower-dimensional latent space

Large Language Model Meta AI (LLaMA) - Meta's general purpose LLM

Midjourney - popular (proprietary) image generation platform, accessed via Discord

Neural networks - similar to their biological counterparts in the sense that they have interconnected nodes. Rather than the neurons and synapses of biology, artificial networks are made of nodes connected by 'weights', which can take positive or negative values.
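
As a rough illustration, a single artificial node multiplies each input by a weight, sums the results, and passes the sum through an activation function. This is a toy sketch, not any particular library's API:

```python
# Toy sketch of a single artificial node (neuron); assumes NumPy is installed.
import numpy as np

def node(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    z = np.dot(inputs, weights) + bias   # weighted sum of the incoming signals
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid activation squashes to (0, 1)

x = np.array([0.5, -1.2, 3.0])   # incoming signals
w = np.array([0.8, 0.1, -0.4])   # weights may be positive or negative
print(node(x, w, bias=0.2))      # the node's output signal
```

A full network chains many such nodes in layers; training adjusts the weights so the network's outputs better match the training data.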

OpenAI - the private company behind ChatGPT, DALL·E, and the GPT family of LLMs

Parameter - a value that the model adjusts independently as it is trained. Parameters are derived from the training data upon which the model is trained. The number of parameters in the newest LLMs is typically counted in the billions to trillions.

Segment-Anything (Meta) - a recently released image and video segmentation technology that allows you to 'clip' a feature from an image with a single click

Stable Diffusion - computer vision models for creating images from text

Token - a fundamental unit of text that GPT models use to process and generate language. A token can represent an individual character, a word, or a subword depending on the specific tokenization approach.
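
For example, OpenAI's open-source tiktoken library exposes the tokenizer used by its GPT models, so you can see exactly how a prompt is split into tokens:

```python
# Minimal tokenization sketch; assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4

tokens = enc.encode("Prompt engineering is fun!")
print(tokens)                              # integer token IDs
print(len(tokens))                         # how many tokens the model "sees"
print([enc.decode([t]) for t in tokens])   # the text piece behind each ID
```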

Tuning - the process of refining a pretrained model, typically by further training it on task-specific data, to make it more accurate

Weights - the values by which a model multiplies its inputs and internal signals. Weights are learned during training and signify how much importance the model assigns to particular connections.

Zero-shot - a learning setting in which the AI must make predictions about classes (or tasks) it never observed during training
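
HuggingFace's transformers pipeline offers a convenient zero-shot classifier: the model below was trained on natural language inference, not on our candidate labels, yet it can still rank them (the model name is one common choice, not the only option):

```python
# Minimal zero-shot classification sketch; assumes `pip install transformers`.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "The telescope captured images of a distant galaxy.",
    candidate_labels=["astronomy", "cooking", "politics"],  # unseen in training
)
print(result["labels"][0])  # highest-scoring label: "astronomy"
```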


Last update: 2024-02-06