Before writing this GPT-3 reflection, I want to jot down a memory from four years ago to set a baseline. In 2016, I attended an IBM Watson conference. There were a couple of demos no less interesting than what I saw at CES 2019, with quantum computers and flying cars.
For example: a personality quiz matching you with celebrity types, a Harry Potter sorting hat, voice commands in virtual reality, and controlling a robot using only thoughts.
IBM Watson was long known as a prominent name in the NLP community, famous as the million-dollar winner of the television quiz show Jeopardy!. In 2017, it was the platform recommended by the education company Udacity for their debut AI Nanodegree program.
The hot trend of this year is GPT-3, produced by OpenAI, soon to be a product company with their fine-tuned API. The demo day from Pioneer was quite down to earth, with some promising no-code use cases:
- Create a full e-commerce website with no code, only a brief of less than 100 words. Reactions differed among the panelists. Karpathy, once a well-known AI hacker in graduate school and now Director of AI at Tesla, could hardly believe the demo.
- Create email content from bullet points. The idea sounds very appealing at first. But then a new need for summarization appears, which rotates the whole story back into short text and bullet points. There is also a concern about writing in a malicious style.
- Generate creative writing from very little input, and upgrade the generation process by letting GPT-3 instances create content for each other.
- Contextual reference recommendation based on content and context.
- Create a fictional character from less than 100 words of text, with a mock interactive conversation on LinkedIn. Another concern is misuse in politics.
- Create a pitch narrative from 100 words of text and embed it into a ready-to-go presentation.
- Generate graphs, charts, and bars to illustrate statistics from a text description in natural language.
- A playground for prompts (an alternative way to interact with the GPT-3 beta, though it still needs an API key).
- Generate insurance bills with headache-inducing billing codes. One question arose about ethics, between the human input and the parser.
- Generate dungeons for a new kind of game design: a hybrid game co-designed by humans and AI, with NPC dialogue.
- Generate a melody from a brief text description via GPT-3.
- Expand text in a loop, applying GPT-3 repeatedly: (GPT-3)^(number of sentences).
- Create a prompt for regex.
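Most of the demos above share the same mechanics: a task description plus a few examples packed into one text prompt, sent to the model unchanged. A minimal sketch of that few-shot pattern, where `build_prompt` is my own illustration and `complete` is a stub standing in for a real GPT-3 API call (the actual beta endpoint requires an API key):

```python
# Sketch of the few-shot prompt pattern behind most demos above.
# `complete` is a stand-in for the GPT-3 completion API; here it is
# a stub so the example is self-contained.

def build_prompt(task, examples, query):
    """Pack a task description and few-shot examples into one prompt string."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

def complete(prompt):
    """Stub: pretend the model returned a regex for an ISO date."""
    return r"\d{4}-\d{2}-\d{2}"

prompt = build_prompt(
    "Write a regular expression for the described pattern.",
    [("a US zip code", r"\d{5}"), ("a 3-letter lowercase word", r"[a-z]{3}")],
    "an ISO date like 2020-07-20",
)
regex = complete(prompt)
```

No weights change between tasks: swapping the examples and the query is the entire programming model.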
TL;DR: with $5-10M and minGPT, you could replicate the work at home.
Discussion with the panelists about Andrej Karpathy's open-source minGPT.
The source code includes the most basic structure of GPT, based on the previous versions. Across the demos, the overall perception is that people change the prompt to get different answers, but never the weights of the model. In one scenario, different kinds of GPT-3 could be trained by anyone at home.
One concern is the complexity of training such a super-large model: optimization, distributed computing, data centers, and data processing with ETL. Nevertheless, people can still fine-tune the pre-trained model on other datasets. It would be a shift from feature engineering to prompt engineering, and it opens the possibility of an open Wikipedia 2.0.
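The shift from feature engineering to prompt engineering can be made concrete with a toy contrast (all names here are my own, and `frozen_model` is a stub standing in for a pre-trained model with fixed weights): in feature engineering you redesign the function mapping raw input to numeric features and retrain; in prompt engineering the weights stay frozen and you only rewrite the text wrapped around the input.

```python
# Feature engineering: behaviour changes by redesigning the feature
# extractor (and then retraining a model on the new features).
def extract_features_v1(text):
    return [len(text)]

def extract_features_v2(text):
    return [len(text), text.count(" ") + 1]  # add a word-count feature

# Prompt engineering: the model is fixed; behaviour changes by
# rewriting the surrounding text template instead.
def frozen_model(prompt):
    # Stub: echoes the last line back, as if completing it.
    return prompt.splitlines()[-1]

def with_prompt(template, query):
    return frozen_model(template.format(query=query))

sentiment_prompt = "Classify the sentiment.\nReview: {query}"
summary_prompt = "Summarize in one line.\nText: {query}"
```

The same frozen model serves both the sentiment and the summary "task"; only the template differs, which is exactly why the demos above could be built without touching training infrastructure.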
I still remember a conversation with top researchers and engineers at that conference. It was about a new kind of programming without code, built from blocks of cognitive services, like playing with Lego. The progress was still slow and open only to a small, selective community in North America.
The future is approaching the majority very quickly; it is probably the brightest highlight of an economic downturn caused by a pandemic. We are likely entering a new dawn of a real AI economy, emerging while people can't go to work or travel, with universal basic income and most businesses moving online, with a global effect.
When you fail, make it a dance.
Tagged #reflection, #gpt3, #ai.