Saturday, November 25, 2023

Post 4 咁乜嘢係生成式人工智能? What is generative AI?

幾個月前,Google擴大咗Bard嘅開放使用。Bard係一個早期嘅實驗,可以讓我哋同生成式AI協作。 根據參考文獻,Bard由一個大型語言模型提供支持,呢種機器學習模型以生成自然語言嘅能力而聞名。 呢個就係點解我哋經常聽到佢被互換咁描述為“生成式人工智能”。 咁乜嘢係生成式人工智能?

A few months ago, Google expanded access to Bard, an early experiment that lets you collaborate with generative AI. According to the reference, Bard is powered by a large language model, a type of machine learning model that has become known for its ability to generate natural-sounding language. That’s why we often hear it described interchangeably as “generative AI.” So what exactly is generative AI?

參考文章嘅作者採訪咗Google高級研究總監道格拉斯·埃克( Douglas Eck ),以了解更多關於生成式人工智能嘅信息。 Doug喺人工智能嘅最前沿工作,佢擁有文學同音樂研究嘅背景。 首先,等我哋了解人工智能究竟係乜嘢。

The author of the reference article interviewed Douglas Eck, a senior research director at Google, to learn more about generative AI. Doug works at the forefront of AI and has a background in literature and music research. First, let us understand what exactly AI is.

人工智能係一個廣義嘅術語,通常用於描述各種先進嘅電腦系統,尤其是“機器學習”。 我哋喺人工智能中睇到嘅大部分實際上係機器學習:賦予電腦系統由示例中學習嘅能力。

AI is a broad term often used to describe all sorts of advanced computer systems, particularly “machine learning.” Most of what we see in AI is in fact machine learning: endowing computer systems with the ability to learn from examples.

作者將編程為從示例中學習的機器稱為“神經網絡”。 他們學習的一個主要方式是得到很多例子來學習,比如被告知圖像中有什麼——專家稱之為分類。 如果我哋想教一個網絡如何識別大象,將涉及人類向網絡介紹大象長乜嘢樣嘅好多例子,並相應咁標記呢啲相。 就係模型如何學會區分大象和圖像中嘅其他細節。

The author calls machines programmed to learn from examples “neural networks.” One main way they learn is by being given lots of examples to learn from, like being told what’s in an image — experts call this classification. If we want to teach a network how to recognize an elephant, that would involve a human introducing the network to lots of examples of what an elephant looks like and tagging those photos accordingly. That’s how the model learns to distinguish between an elephant and other details in an image.
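
To make “learning from labelled examples” concrete, here is a minimal Python sketch, not Google’s actual training pipeline: a single logistic unit is trained on invented feature vectors standing in for “elephant” and “not elephant” photos. The two numeric features are hypothetical stand-ins for what a real vision network would extract from raw pixels.

```python
# Toy supervised classification: learn "elephant" vs "not elephant" from
# labelled examples.  The features and data are invented; a real system
# would learn its own features from raw pixels.
import numpy as np

rng = np.random.default_rng(0)

# Labelled examples: each row is [trunk_length_m, ear_width_m]; label 1 = elephant.
elephants = rng.normal(loc=[1.8, 1.2], scale=0.2, size=(50, 2))
others = rng.normal(loc=[0.3, 0.2], scale=0.2, size=(50, 2))
X = np.vstack([elephants, others])
y = np.array([1] * 50 + [0] * 50)

# A single logistic "neuron": nudge the weights until predictions match the labels.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "elephant"
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient step on the cross-entropy loss
    b -= 0.5 * np.mean(p - y)

# Classify a new, unlabelled animal by its (made-up) features.
new_animal = np.array([1.7, 1.1])
prob = 1.0 / (1.0 + np.exp(-(new_animal @ w + b)))
print(f"P(elephant) = {prob:.2f}")  # close to 1.0 for elephant-like features
```

The point of the toy is only the workflow: show labelled examples, adjust the model until its predictions match the labels, then ask it about something it has not seen before.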

語言模型係另一種類型嘅神經網絡。 語言模型基本上可以預測單詞序列中接下來出現嘅單詞。 專家們喺大量文本上訓練呢啲模型,以便佢哋仲好咁理解接下來可能出現嘅單詞。 改進語言模型的一種方式(但不是唯一的方法)是給它更多的“閱讀”——或者用更多的數據訓練它——有點像我們從我們研究的材料中學習的方式。  如果我們開始輸入短語“Mary kicked a...”,那麼在足夠數據上訓練的語言模型可以預測“Mary kicked a ball”。 如果沒有足夠的訓練,它可能只會想出一個“圓形物體”,或者只想出它的顏色“黃色”。 訓練語言模型所涉及的數據越多,它就越微妙,它就越有可能準確地知道瑪麗最有可能踢咗乜嘢。

Language models are another type of neural network. A language model basically predicts what word comes next in a sequence of words. Experts train these models on large volumes of text so that they better understand what word is likely to come next. One way (but not the only way) to improve a language model is to give it more “reading,” or to train it on more data, kind of like how we learn from the materials we study. If we started to type the phrase, “Mary kicked a…,” a language model trained on enough data could predict, “Mary kicked a ball.” Without enough training, it might only come up with a “round object,” or just its color, “yellow.” The more data involved in training a language model, the more nuanced it becomes and the better its chances of knowing exactly what Mary is most likely to have kicked.
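
As a rough illustration of “predict the next word” (a deliberately tiny sketch, not how Bard or any real LLM is built), the snippet below counts which word follows each two-word context in an invented mini-corpus and then predicts the continuation of “Mary kicked a”. With more and more varied text, these statistics sharpen, which is the intuition behind “more data, more nuance.”

```python
# Toy next-word prediction: count which word follows each two-word context
# in an invented mini-corpus, then predict the most likely continuation.
from collections import Counter, defaultdict

corpus = (
    "mary kicked a ball in the park . "
    "the boy kicked a ball over the fence . "
    "mary kicked a stone down the road . "
    "the girl threw a yellow ball ."
).split()

# counts[("kicked", "a")] ends up as Counter({"ball": 2, "stone": 1})
counts = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    counts[(w1, w2)][w3] += 1

def predict(w1: str, w2: str) -> str | None:
    """Return the word most often seen after the context (w1, w2)."""
    following = counts.get((w1, w2))
    return following.most_common(1)[0][0] if following else None

print(predict("kicked", "a"))  # -> "ball", the most frequent continuation
```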

喺過去幾年,專家們喺提升語言模型性能方面取得咗重大突破,由擴大模型規模到減少某些任務所需嘅數據量。

In the last several years, experts have made major breakthroughs in improving the performance of language models, from scaling up their size to reducing the amount of data required for certain tasks.
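
One concrete example of “reducing the amount of data required for certain tasks” is few-shot prompting, where a handful of examples are placed directly in the prompt instead of retraining the model on thousands of labelled items. The sketch below only builds such a prompt; the sentiment task, the example reviews, and the call_llm() function are hypothetical placeholders, not a real API.

```python
# Sketch of few-shot prompting: a handful of labelled examples are placed in
# the prompt itself, and the language model is asked to continue the pattern.
# The task, the example reviews, and call_llm() are hypothetical placeholders.
EXAMPLES = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked after a week.", "negative"),
    ("Setup took five minutes.", "positive"),
]

def build_prompt(new_review: str) -> str:
    lines = ["Classify each review as positive or negative.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Review: {new_review}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_prompt("The speaker stopped working on day two.")
print(prompt)
# In practice the prompt would be sent to a model endpoint, e.g.:
#   label = call_llm(prompt)   # hypothetical call, not a real API
```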

語言模型已經喺嗰度幫助人們-例如,我哋睇到它們喺Gmail中嘅Smart Compose同Smart Reply中出現。 語言模型都為Bard提供支持。

Language models are already out there helping people — we see them show up with Smart Compose and Smart Reply in Gmail, for instance. And language models power Bard as well.
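
A Smart Reply-style feature can be pictured as ranking a small set of candidate replies with a model score and suggesting the top few. The sketch below fakes that scoring with a trivial word-overlap heuristic purely to show the ranking loop; real products use learned language models, and the candidate replies here are invented.

```python
# Toy "suggested replies": rank a fixed set of candidate replies for an
# incoming message and suggest the top ones.  The overlap-based score is a
# stand-in for a learned language model's score; all text is invented.
CANDIDATES = [
    "Sounds good, see you then!",
    "Thanks for the update.",
    "Can we move it to next week?",
    "Congratulations!",
]

def score(message: str, reply: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(message.lower().split()) & set(reply.lower().split()))

def suggest(message: str, k: int = 3) -> list[str]:
    """Return the k candidates with the highest score for this message."""
    return sorted(CANDIDATES, key=lambda r: score(message, r), reverse=True)[:k]

print(suggest("Are we still on for lunch next week?"))
```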

生成模型可以利用它由所展示嘅示例中學到嘅嘢,並根據呢啲信息創建全新嘅嘢。 因此,“生成”呢個詞! 大型語言模型( LLM )係一種生成式AI,因為它們以自然發音語言嘅形式生成新穎嘅文本組合。 專家甚至可以構建語言模型嚟生成其他類型嘅輸出,例如新圖像、音頻甚至視頻,例如Imagen、AudioLM同Phenaki。

A generative model can take what it has learned from the examples it’s been shown and create something entirely new based on that information. Hence the word “generative!” Large language models (LLMs) are one type of generative AI since they generate novel combinations of text in the form of natural-sounding language. And experts can even build language models to generate other types of outputs, such as new images, audio and even video, like with Imagen, AudioLM and Phenaki.
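
To show the “generative” step itself, the sketch below extends the earlier counting idea: instead of returning only the single most likely next word, it repeatedly samples a next word and stitches together a sequence that never appears verbatim in its training text. This is a toy bigram sampler over invented data, not how LLMs, Imagen, AudioLM or Phenaki actually work, but it captures “create something new from learned statistics.”

```python
# Toy "generative" model: learn word-to-word transition counts from an
# invented mini-corpus, then *sample* a new sequence instead of returning
# only the single most likely next word.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased a ball ."
).split()

counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def generate(start: str, length: int = 10) -> str:
    """Sample a continuation word by word, weighted by the learned counts."""
    words = [start]
    for _ in range(length):
        options = counts.get(words[-1])
        if not options:
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

random.seed(0)
print(generate("the"))  # prints a short, newly sampled word sequence
```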

創意領域有巨大嘅潛力:可以把它看作係消除一啲重複性日常任務(例如生成草稿)嘅苦差事,而唔係侵犯創作者與生俱來嘅創造力。生成式人工智能嘅作用,就好似幾十年前鼓機嘅出現一樣。鼓機產生嘅節奏與人類鼓手唔同,從而推動咗全新嘅音樂流派。

There’s huge potential for the creative field. Think of it as removing some of the drudgery of repetitive, mundane tasks like generating drafts, rather than encroaching on creators’ innate creativity. Generative AI functions much like the arrival of the drum machine decades ago: the drum machine generated a rhythm that was different from what human drummers sounded like, and that fueled entirely new genres of music.





Wednesday, November 22, 2023

Post 3 讀物:生成式人工智慧簡介

以下是有關生成式人工智慧的彙編讀物:

Readings: Introduction to Generative AI

Here are the assembled readings on generative AI:

● Ask a Techspert: What is generative AI? 

https://blog.google/inside-google/googlers/ask-a-techspert/what-is-generative-ai/

● Build new generative AI powered search & conversational experiences with Gen App Builder:

https://cloud.google.com/blog/products/ai-machine-learning/create-generative-apps-in-minutes-with-gen-app-builder

● What is generative AI?

https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai

● Google Research, 2022 & beyond: Generative models:

https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html#GenerativeModels

● Building the most open and innovative AI ecosystem:

https://cloud.google.com/blog/products/ai-machine-learning/building-an-open-generative-ai-partner-ecosystem

● Generative AI is here. Who Should Control It?

https://www.nytimes.com/2022/10/21/podcasts/hard-fork-generative-artificial-intelligence.html

● Stanford U & Google’s Generative Agents Produce Believable Proxies of Human Behaviors:

https://syncedreview.com/2023/04/12/stanford-u-googles-generative-agents-produce-believable-proxies-of-human-behaviours/

● Generative AI: Perspectives from Stanford HAI:

https://hai.stanford.edu/sites/default/files/2023-03/Generative_AI_HAI_Perspectives

● Generative AI at Work:

https://www.nber.org/system/files/working_papers/w31161/w31161.pdf

● The future of generative AI is niche, not generalized:

https://www.technologyreview.com/2023/04/27/1072102/the-future-of-generative-ai-is-niche-not-generalized/

● The implications of Generative AI for businesses:

https://www2.deloitte.com/us/en/pages/consulting/articles/generative-artificial-intelligence.html

● Proactive Risk Management in Generative AI:

https://www2.deloitte.com/us/en/pages/consulting/articles/responsible-use-of-generative-ai.html

● How Generative AI Is Changing Creative Work:

https://hbr.org/2022/11/how-generative-ai-is-changing-creative-work

Here are the assembled readings on large language models:

● NLP's ImageNet moment has arrived: https://thegradient.pub/nlp-imagenet/

● LaMDA: our breakthrough conversation technology:

https://blog.google/technology/ai/lamda/

● Language Models are Few-Shot Learners:

https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf

● PaLM-E: An embodied multimodal language model:

https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html

● PaLM API & MakerSuite: an approachable way to start prototyping and building generative AI applications:

https://developers.googleblog.com/2023/03/announcing-palm-api-and-makersuite.html

● The Power of Scale for Parameter-Efficient Prompt Tuning:

https://arxiv.org/pdf/2104.08691.pdf

● Google Research, 2022 & beyond: Language models:

https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html#LanguageModels

● Solving a machine-learning mystery:

https://news.mit.edu/2023/large-language-models-in-context-learning-0207

Additional Resources:

● Attention is All You Need: https://research.google/pubs/pub46201/

● Transformer: A Novel Neural Network Architecture for Language Understanding:

https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html

● Transformer on Wikipedia:

https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)#:~:text=Transformers%20were%20introduced%20in%202017,allowing%20training%20on%20larger%20datasets.

● What is Temperature in NLP? https://lukesalamone.github.io/posts/what-is-temperature/

● Model Garden: https://cloud.google.com/model-garden

● Auto-generated Summaries in Google Docs:

https://ai.googleblog.com/2022/03/auto-generated-summaries-in-google-docs.html




Post 2 睇吓有關於AI嘅資訊 Taking a look at information about AI

 

每一個人都在講AI,而AI股票蒸蒸日上。 喺呢度,我哋一齊睇吓有關於AI嘅資訊啦。

Everyone is talking about AI, and AI stocks are soaring. Here, let's take a look at some information about AI together.

What is Generative AI and how does it work? What are common applications for Generative AI? Watch this video to learn all about Generative AI, including common applications, model types, and the fundamentals for how to use it. Here are some clips for reference.

乜嘢係生成式人工智能,佢係如何工作嘅? 生成式AI有哪些常見應用? 觀看此視頻,瞭解有關生成式AI嘅所有信息,包括常見應用程序、模型類型以及如何使用它的基礎知識。這裡有一些片段供參考。