Introduction to Gen AI

Quiz questions with answers

1. Which of the following is the best way to generate more creative or unexpected content by adjusting the model parameters in Generative AI Studio?

  • Setting the temperature to a high value ✔
  • Setting the temperature to a low value
  • Setting the top K to 1
  • Setting the top P to 25%
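
Note: a minimal NumPy sketch of how these parameters change next-token sampling (the vocabulary and logit values are invented for illustration; this is not how Generative AI Studio is implemented internally):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    """Toy decoding step: temperature reshapes the distribution,
    top_k / top_p restrict sampling to the most likely tokens."""
    scaled = np.asarray(logits, dtype=float) / temperature      # temperature scaling
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                                        # softmax
    order = np.argsort(probs)[::-1]                             # most likely first
    if top_k is not None:
        order = order[:top_k]                                   # keep only the K most likely tokens
    if top_p is not None:
        keep = np.cumsum(probs[order]) <= top_p
        keep[0] = True                                          # always keep at least the top token
        order = order[keep]
    kept = probs[order] / probs[order].sum()                    # renormalize over what survived
    return order[rng.choice(len(order), p=kept)]

vocab = ["the", "a", "moon", "banana", "quantum"]
logits = [2.0, 1.8, 0.5, -0.5, -1.0]                            # invented model scores
print(vocab[sample_next_token(logits, temperature=0.2)])        # low temperature: safe, likely words
print(vocab[sample_next_token(logits, temperature=2.0)])        # high temperature: more unexpected words
print(vocab[sample_next_token(logits, top_k=1)])                # top K = 1: always the single best word
```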

2. Which of the following is a type of prompt that allows a large language model to perform a task with only a few examples?

  • Few-shot prompt ✔
  • Unsupervised prompt
  • One-shot prompt
  • Zero-shot prompt
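
Note: a few-shot prompt simply packs a handful of worked examples into the prompt itself; no fine-tuning is involved. A small illustrative sketch (the task and reviews are made up):

```python
# A few-shot prompt: the model sees a handful of input/output examples
# before the new input it should complete.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: Positive

Review: It stopped working after a week and support never replied.
Sentiment: Negative

Review: Setup took five minutes and everything just worked.
Sentiment:"""

print(few_shot_prompt)  # this string would be sent as-is to the LLM
```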

3. What is a prompt?

  • A prompt is a short piece of text that is used to guide a large language model to generate content. ✔
  • A prompt is a long piece of text that explains how a large language model generates text.
  • A prompt is a piece of text that is used to train a large language model.
  • A prompt is a piece of text that is used to evaluate a large language model.

4. How does generative AI generate new content?

  • It learns from a massive amount of existing content. ✔
  • The training leads to a foundation model that cannot be further tuned with a new dataset.
  • It is programmed based on predetermined algorithms that can not be altered.
  • It is a random process.

5. What is Generative AI Studio?

  • A tool that helps you use Generative AI capabilities in your application. ✔
  • A type of artificial intelligence that writes emails for you.
  • A machine learning model that is trained on text only.
  • A technology that lets you code programming languages without learning them.

6. What action does Google recommend organizations take to ensure that AI is used responsibly?

  • Seek participation from a diverse range of people ✔
  • Follow a top-down approach to increase AI adoption
  • Use a checklist to evaluate responsible AI
  • Focus on being efficient

7. Which of the following is recommended by Google as most important in establishing AI governance?

  • Building practices around ethical decision-making ✔
  • Encouraging people to make decisions based on their own values
  • Creating decision trees to address common ethical situations
  • Developing technical tools to evaluate ML models

8. According to Google's AI Principles, bias can enter into the system at only specific points in the ML lifecycle.

  • False ✔
  • True

9. Google's approach to responsible AI is based on which of the following commitments? (Select 4)

  • It respects privacy. ✔
  • It's built for everyone. ✔
  • It's accountable and safe. ✔
  • It's driven by scientific excellence. ✔
  • It eliminates team disagreements.
  • It speeds up product development processes.

10. You want to create a chatbot by using generative AI. Which application should you use?

  • Generative AI Studio
  • PaLM API
  • Vertex AI ✔
  • Bard

11. Which of the following tools can be used to help developers train ML models on their data by using different algorithms?

  • PaLM API with MakerSuite
  • Generative AI App Builder
  • Generative AI Studio ✔
  • Bard

12. What are some benefits of prompt tuning? (Select 2)

  • Enables large language models (LLMs) to adapt to a wide range of tasks ✔
  • Helps large language models (LLMs) generate more accurate responses ✔
  • Enables large language models (LLMs) to be trained on small amounts of data
  • Increases large language models (LLMs) production of unbiased responses
  • Generalizes large language models (LLMs) commands to conduct versatile tasks

13. Prompt tuning is a technique for:

  • Improving the output quality of large language models. ✔
  • Making large language models more accurate.
  • Making large language models more versatile.
  • Training large language models.

14. Your company has a large language model (LLM) to help employees with their everyday tasks. You want to teach your colleagues about prompt design. Which of the following is a good example of a well-designed prompt?

  • Translate the following sentence into French: Hello, how are you today? ✔
  • Can you tell me how long a really long paragraph can be before it's too long?
  • When does my manager expect me to complete my project?
  • What is the meaning of life?

15. The performance of large language models (LLMs) generally improves as more data and parameters are added.

  • True ✔
  • False

16. Most organizations have the capability to train large language models (LLMs) from scratch.

  • False ✔
  • True

17. Which of the following are examples of pre-training for a large language model (LLM)? (Select 3)

  • Text classification ✔
  • Document summarization ✔
  • Financial forecasting
  • Question answering ✔

18. Which are examples of tasks that Bard code generation can perform? (Select 3)

  • Debug your lines of source code ✔
  • Detect emerging security vulnerabilities
  • Explain your code to you line by line ✔
  • Translate code from one language to another ✔

19. Your teammate is using Generative AI App Builder to create an app for generating images. Which benefit of Generative AI App Builder most likely led your teammate to choose this approach?

  • Creating app content without writing code ✔
  • Access to existing creative works
  • A built-in forum for developers to ask questions
  • Monitoring app performance across different platforms

20. You want to organize documents into distinct groups, without predefining the groups. Which type of machine learning model should you use?

  • Unsupervised learning model ✔
  • Supervised learning model
  • Natural language processing (NLP) model
  • Discriminative deep learning model
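
Note: as a concrete illustration, here is a minimal sketch with scikit-learn (the library choice is an assumption, not part of the course) that groups documents by k-means clustering without any predefined labels:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Quarterly revenue and profit margins grew",
    "The team shipped a new release of the API",
    "Earnings beat analyst forecasts this quarter",
    "Bug fixes and performance improvements landed in the SDK",
]

X = TfidfVectorizer().fit_transform(docs)           # documents -> TF-IDF vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                                       # e.g. [0 1 0 1]: groups emerge from the data alone
```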

21. You are planning a trip to Germany in a few months. You've booked a hotel that includes dinner. You want to email the hotel so they are aware of your food preferences. Using gen AI, you can easily generate an email in German using English prompts. Which gen AI training model enables you to do this?

  • Text-to-text ✔
  • Text-to-image
  • Text-to-video
  • Text-to-3D
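
Note: a text-to-text model takes text in and returns text out, so the whole request can live in one prompt. An illustrative prompt (the details and name are invented placeholders):

```python
# A text-to-text request: instructions in English, output in German.
prompt = (
    "Write a short, polite email in German to the hotel I have booked. "
    "Mention that my stay includes dinner, and that I am vegetarian "
    "and allergic to nuts. Sign it with my first name, Alex."
)
# `prompt` would be sent to a text-to-text model, which returns the German email.
```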

22. How does generative AI work?

  • It uses a neural network to learn from a large dataset. ✔
  • It uses a generic algorithm to evolve a population of models until it finds one that can generate the desired output.
  • It uses a rule-based system to generate output based on a set of predefined rules.
  • It uses the internet to repeat answers for common questions.

23. You want to explain to a colleague how gen AI works. Which of the following would be a good explanation?

  • Gen AI learns from existing data and then creates new content that is similar to the data it was trained on. ✔
  • Gen AI determines the relationship between datasets and classifies data according to existing data sets.
  • Gen AI randomly generates new content without any input from existing data.
  • Gen AI uses a set of rules to generate new content that is always unique and original.

24. What is machine learning?

  • Programs or systems that learn from data instead of being explicitly programmed ✔
  • Artificial intelligence technology that can produce various types of content
  • Large, general purpose language models
  • Algorithms used to describe and create new data

25. Which of the following is a potential use of generative AI?

  • A customer service chatbot ✔
  • A robot that performs mechanical tasks 
  • Automatic reading of X-rays 
  • Predicting future sales 

26. What are some of the challenges of using LLMs? Select three options.

  • They can be expensive to train. ✔
  • They can be biased. ✔
  • After being developed, they only change when they are fed new data.
  • They can be used to generate harmful content. ✔

27. What are some of the applications of LLMs?

  • LLMs can be used for many tasks, including: writing, translating, coding, answering questions, summarizing text, and generating non-creative discrete probabilities.
  • LLMs can be used for many tasks, including: writing, translating, coding, answering questions, summarizing text, and generating creative content. ✔
  • LLMs can be used for many tasks, including: writing, translating, coding, answering questions, summarizing text, and generating non-creative discrete probabilities, classes, and predictions.
  • LLMs can be used for many tasks, including: writing, translating, coding, answering questions, summarizing text, and generating non-creative discrete predictions.
  • LLMs can be used for many tasks, including: writing, translating, coding, answering questions, summarizing text, and generating non-creative discrete classes.

28. What are some of the benefits of using large language models (LLMs)?

  • LLMs have many benefits, including: (1) they can generate probabilities and human-quality text, (2) they can be used for many tasks, such as text summarization and code generation, (3) they can be trained on massive datasets of text and code, and (4) they are constantly being improved. ✔
  • LLMs have many benefits, including: (1) they can generate discrete classes and human-quality text, (2) they can be used for many tasks, such as text summarization and code generation, (3) they can be trained on massive datasets of text and code, and (4) they are constantly improving.
  • LLMs have a number of benefits, including: (1) they can generate non-probabilities and human-quality text, (2) they can be used for many tasks, such as text summarization and code generation, (3) they can be trained on massive datasets of text, image, and code, and (4) they are constantly improving.
  • LLMs have many benefits, including: (1) they can generate human-quality text, (2) they can be used for a variety of tasks, (3) they can be trained on massive datasets of text and code, and (4) they are constantly improved.

29. What are large language models (LLMs)?

  • Generative AI is a type of artificial intelligence (AI) that only can create new content, such as text, images, audio, and video by learning from new data and then using that knowledge to predict a classification output.
  • Generative AI is a type of artificial intelligence (AI) that only can create new content, such as text, images, audio, and video by learning from new data and then using that knowledge to predict a discrete, supervised learning output.
  • An LLM is a type of artificial intelligence (AI) that can generate human-quality text. LLMs are trained on massive datasets of text and code, and they can be used for many tasks, such as writing, translating, and coding. ✔
  • Generative AI is a type of artificial intelligence (AI) that can create new content, such as discrete numbers, classes, and probabilities. It does this by learning from existing data and then using that knowledge to generate new and unique outputs.

30. Why is responsible AI practice important to an organization?

  • Responsible AI practice can help drive revenue.
  • Responsible AI practice can improve communication efficiency.
  • Responsible AI practice can help improve operational efficiency.
  • Responsible AI practice can help build trust with customers and stakeholders. ✔

31. Organizations are developing their own AI principles that reflect their mission and values. What are the common themes among these principles?

  • A consistent set of ideas about transparency, fairness, and equity.
  • A consistent set of ideas about fairness, accountability, and inclusion.
  • A consistent set of ideas about transparency, fairness, accountability, and privacy. ✔
  • A consistent set of ideas about transparency, fairness, and diversity.

32. Which of the following is one of Google's 7 AI principles?

  • AI should create unfair bias.
  • AI should gather or use information for surveillance.
  • AI should uphold high standards of operational excellence.
  • AI should uphold high standards of scientific excellence. ✔

33. Which of these is correct with regard to applying responsible AI practices?

  • Decisions made at all stages in a project make an impact on responsible AI. ✔
  • Only decisions made by the project owner at any stage in a project make an impact on responsible AI.
  • Decisions made at an early stage in a project do not make an impact on responsible AI.
  • Decisions made at a late stage in a project do not make an impact on responsible AI.

34. What is the name of the machine learning architecture that takes a sequence of words as input and outputs a sequence of words?

  • Collaborative natural network
  • Regressive neural networking
  • Encoder-decoder ✔
  • Large stream text manipulation

35. What is the purpose of the decoder in an encoder-decoder architecture?

  • To generate the output sequence from the vector representation ✔
  • To convert the input sequence into a vector representation
  • To predict the next word in the output sequence
  • To learn the relationship between the input and output sequences

36. What is the difference between greedy search and beam search?

  • Greedy search always selects the word with the lowest probability, whereas beam search considers multiple possible words and selects the one with the lowest combined probability.
  • Greedy search considers multiple possible words and selects the one with the lowest combined probability, whereas beam search always selects the word with the lowest probability.
  • Greedy search considers multiple possible words and selects the one with the highest combined probability, whereas beam search always selects the word with the highest probability.
  • Greedy search always selects the word with the highest probability, whereas beam search considers multiple possible words and selects the one with the highest combined probability. ✔
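
Note: a self-contained sketch of the difference, using a made-up next-token probability table instead of a real decoder:

```python
import numpy as np

# Made-up next-token probabilities: P[previous token][next token].
P = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.4, "dog": 0.6},
    "a":   {"cat": 0.95, "dog": 0.05},
    "cat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
}

def greedy(start="<s>"):
    seq, tok = [], start
    while tok != "</s>":
        tok = max(P[tok], key=P[tok].get)        # always take the single most likely next word
        seq.append(tok)
    return seq

def beam_search(start="<s>", width=2):
    beams = [([start], 0.0)]                     # (sequence, log probability)
    while any(seq[-1] != "</s>" for seq, _ in beams):
        candidates = []
        for seq, logp in beams:
            if seq[-1] == "</s>":
                candidates.append((seq, logp))   # finished hypotheses carry over
                continue
            for tok, p in P[seq[-1]].items():
                candidates.append((seq + [tok], logp + np.log(p)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:width]
    return beams[0][0][1:]                       # best combined probability, drop "<s>"

print(greedy())       # ['the', 'dog', '</s>']  -> probability 0.6 * 0.6 = 0.36
print(beam_search())  # ['a', 'cat', '</s>']    -> probability 0.4 * 0.95 = 0.38
```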

37. What is the purpose of the encoder in an encoder-decoder architecture?

  • To predict the next word in the output sequence
  • To convert the input sequence into a vector representation ✔
  • To generate the output sequence from the vector representation
  • To learn the relationship between the input and output sequences
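
Note: questions 34, 35, and 37 all describe the same architecture. A minimal PyTorch sketch (a toy model, not the one from the course): the encoder compresses the input sequence into a vector representation, and the decoder unrolls that vector into the output sequence.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab=100, tgt_vocab=100, d=32):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d)
        self.tgt_emb = nn.Embedding(tgt_vocab, d)
        self.encoder = nn.GRU(d, d, batch_first=True)    # input sequence -> vector representation
        self.decoder = nn.GRU(d, d, batch_first=True)    # vector representation -> output sequence
        self.out = nn.Linear(d, tgt_vocab)                # scores over the target vocabulary

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))    # final hidden state summarises the input
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)                          # one distribution per output position

model = Seq2Seq()
src = torch.randint(0, 100, (1, 6))   # a batch of one source sentence, 6 token ids
tgt = torch.randint(0, 100, (1, 5))   # the (shifted) target sentence used during training
print(model(src, tgt).shape)          # torch.Size([1, 5, 100])
```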

38. What are two ways to generate text from a trained encoder-decoder model at serving time?

  • Greedy search and beam search ✔
  • Teacher forcing and attention
  • Teacher forcing and beam search
  • Greedy search and attention

39. What is the name of the model that is used to generate text captions for images?

  • Image classification model
  • Encoder-decoder model ✔
  • Bidirectional Encoder Representations from Transformers (BERT) model
  • Image generation model

40. What is the purpose of the encoder in an encoder-decoder model?

  • To extract information from the image. ✔
  • To translate text from one language to another.
  • To generate text captions for the image.
  • To answer your questions in an informative way, even if they are open ended, challenging, or strange.

41. What is the purpose of the attention mechanism in an encoder-decoder model?

  • To allow the decoder to focus on specific parts of the image when generating text captions. ✔
  • To generate text captions for the image.
  • To extract information from the image.
  • To translate text from one language to another.

42. What is the purpose of the decoder in an encoder-decoder model?

  • To learn the relationship between the input and output data
  • To extract information from the input data
  • To store the output data
  • To generate output data from the information extracted by the encoder ✔

43. What is the name of the dataset the video uses to train the encoder-decoder model?

  • MNIST dataset
  • COCO dataset ✔
  • ImageNet dataset
  • Fashion-MNIST dataset

44. What is the goal of the image captioning task?

  • To write different creative text formats, like poems, code, scripts, musical pieces, email, letters, etc.
  • To translate text from one language to another
  • To generate text captions for images ✔
  • To answer your questions in an informative way, even if they are open ended, challenging, or strange.

45. Which process involves a model learning to remove noise from images?

  • Forward diffusion
  • Sampling
  • GANs
  • Reverse diffusion ✔

46. What is the process of forward diffusion?

  • Start with a clean image and add noise randomly
  • Start with a noisy image and remove noise iteratively
  • Start with a clean image and add noise iteratively ✔
  • Start with a noisy image and remove noise randomly
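
Note: a tiny NumPy sketch of the forward process using a standard DDPM-style noise schedule (the "image" is random data standing in for a real picture):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))             # stand-in for a clean image
betas = np.linspace(1e-4, 0.2, 50)              # amount of noise added at each of 50 steps

x = image
for beta in betas:                              # forward diffusion: add noise iteratively
    noise = rng.standard_normal(x.shape)
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

print(np.corrcoef(image.ravel(), x.ravel())[0, 1])  # close to 0: the original structure is destroyed
# Reverse diffusion trains a model to undo these steps, i.e. to predict and remove the noise.
```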

47. What is the goal of diffusion models?

  • To learn the latent structure of a dataset by modeling the way in which data points diffuse through the latent space ✔
  • To pit two neural networks against each other
  • To generate images by treating an image as a sequence of vectors
  • To encode images to a compressed size, then decode back to the original size

48. What is the name of the model family that draws inspiration from physics and thermodynamics?

  • Variational autoencoders
  • Diffusion models ✔
  • Generative adversarial networks
  • Autoregressive models

49. How does an attention model differ from a traditional model?

  • Attention models pass a lot more information to the decoder. ✔
  • The decoder does not use any additional information.
  • The decoder only uses the final hidden state from the encoder.
  • The traditional model uses the input embedding directly in the decoder to get more context.

50. What is the name of the machine learning technique that allows a neural network to focus on specific parts of an input sequence?

  • Convolutional neural network (CNN)
  • Encoder-decoder
  • Long Short-Term Memory (LSTM)
  • Attention mechanism ✔

51. What is the name of the machine learning architecture that can be used to translate text from one language to another?

  • Convolutional neural network (CNN)
  • Neural network
  • Encoder-decoder ✔
  • Long Short-Term Memory (LSTM)

52. What is the advantage of using the attention mechanism over a traditional recurrent neural network (RNN) encoder-decoder?

  • The attention mechanism lets the decoder focus on specific parts of the input sequence, which can improve the accuracy of the translation. ✔
  • The attention mechanism requires fewer CPU threads than a traditional RNN encoder-decoder.
  • The attention mechanism is faster than a traditional RNN encoder-decoder.
  • The attention mechanism is more cost-effective than a traditional RNN encoder-decoder.

53. What are the two main steps of the attention mechanism?

  • Calculating the context vector and generating the attention weights
  • Calculating the attention weights and generating the context vector ✔
  • Calculating the context vector and generating the output word
  • Calculating the attention weights and generating the output word
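
Note: a minimal NumPy sketch of those two steps for a single decoder position (shapes and values are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 4
encoder_states = np.random.default_rng(0).standard_normal((5, d))  # one vector per input word
decoder_query = np.random.default_rng(1).standard_normal(d)        # current decoder state

# Step 1: calculate the attention weights (how relevant each input word is).
scores = encoder_states @ decoder_query / np.sqrt(d)
weights = softmax(scores)               # sums to 1; the most important word gets the highest weight

# Step 2: generate the context vector as the weighted sum of encoder states.
context = weights @ encoder_states

print(weights.round(2), context.shape)  # e.g. [0.1 0.4 ...] (4,)
```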

54. What is the advantage of using the attention mechanism over a traditional sequence-to-sequence model?

  • The attention mechanism lets the model formulate parallel outputs.
  • The attention mechanism lets the model focus on specific parts of the input sequence. ✔
  • The attention mechanism lets the model learn only short term dependencies.
  • The attention mechanism reduces the computation time of prediction.

55. What is the purpose of the attention weights?

  • To incrementally apply noise to the input data.
  • To assign weights to different parts of the input sequence, with the most important parts receiving the highest weights. ✔
  • To calculate the context vector by averaging words embedding in the context.
  • To generate the output word based on the input data alone.

56. BERT is a transformer model that was developed by Google in 2018. What is BERT used for?

  • It is used to diagnose and treat diseases.
  • It is used to train other machine learning models, such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks.
  • It is used to solve many natural language processing tasks, such as question answering, text classification, and natural language inference. ✔
  • It is used to generate text, translate languages, and write different kinds of creative content.

57. What is the name of the language modeling technique that is used in Bidirectional Encoder Representations from Transformers (BERT)?

  • Gated Recurrent Unit (GRU)
  • Long Short-Term Memory (LSTM)
  • Transformer ✔
  • Recurrent Neural Network (RNN)

58. What is a transformer model?

  • A computer vision model that uses fully connected layers to learn relationships between different parts of an image.
  • A deep learning model that uses self-attention to learn relationships between different parts of a sequence. ✔
  • A natural language processing model that uses convolutions to learn relationships between different parts of a sequence.
  • A machine learning model that uses recurrent neural networks to learn relationships between different parts of a sequence.

59. What kind of transformer model is BERT?

  • Encoder-decoder model
  • Decoder-only model
  • Encoder-only model ✔
  • Recurrent Neural Network (RNN) encoder-decoder model

60. What is the attention mechanism?

  • A way of determining the importance of each word in a sentence for the translation of another sentence ✔
  • A way of identifying the topic of a sentence
  • A way of predicting the next word in a sentence
  • A way of determining the similarity between two sentences

61. What are the three different embeddings that are generated from an input sentence in a Transformer model?

  • Token, segment, and position embeddings ✔
  • Recurrent, feedforward, and attention embeddings
  • Convolution, pooling, and recurrent embeddings
  • Embedding, classification, and next sentence embeddings
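
Note: a NumPy sketch of how the three embeddings combine into BERT's input representation (sizes and token ids are made up; real BERT uses a roughly 30k-token vocabulary and 768 dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_segments, max_len, d = 1000, 2, 16, 8   # tiny made-up sizes

token_emb = rng.standard_normal((vocab_size, d))
segment_emb = rng.standard_normal((n_segments, d))
position_emb = rng.standard_normal((max_len, d))

token_ids = np.array([2, 57, 301, 3, 845, 12, 3])     # invented ids for "[CLS] ... [SEP] ... [SEP]"
segment_ids = np.array([0, 0, 0, 0, 1, 1, 1])         # sentence A vs. sentence B
positions = np.arange(len(token_ids))                 # 0, 1, 2, ...

# BERT's input representation is the element-wise sum of the three embeddings.
x = token_emb[token_ids] + segment_emb[segment_ids] + position_emb[positions]
print(x.shape)  # (7, 8): one d-dimensional vector per input token
```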

62. What does fine-tuning a BERT model mean?

  • Training the model on a specific task and not updating the pre-trained weights
  • Training the hyper-parameters of the models on a specific task
  • Training the model on a specific task by using a large amount of unlabeled data
  • Training the model and updating the pre-trained weights on a specific task by using labeled data ✔
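
Note: one common way to do this in practice is with the Hugging Face transformers library (a library choice assumed here, not prescribed by the course); the checkpoint, texts, and labels below are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A toy labeled batch for a downstream task (sentiment classification).
texts = ["Loved the new release", "The update broke everything"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)   # loss on the labeled downstream task
outputs.loss.backward()                   # gradients flow into the pre-trained weights...
optimizer.step()                          # ...which are updated, not frozen
```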

63. What are the two sublayers of each encoder in a Transformer model?

  • Recurrent and feedforward
  • Convolution and pooling
  • Self-attention and feedforward ✔
  • Embedding and classification
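
Note: a compact PyTorch sketch of one encoder block with exactly those two sublayers, plus the residual connections and layer normalization used in the original Transformer (dimensions are arbitrary):

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.feedforward = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.self_attn(x, x, x)    # sublayer 1: self-attention over the sequence
        x = self.norm1(x + attn_out)             # residual connection + layer norm
        x = self.norm2(x + self.feedforward(x))  # sublayer 2: position-wise feedforward
        return x

x = torch.randn(2, 10, 64)          # (batch, sequence length, model dimension)
print(EncoderBlock()(x).shape)      # torch.Size([2, 10, 64])
```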

64. What are the encoder and decoder components of a transformer model?

  • The encoder ingests an input sequence and produces a sequence of images. The decoder takes in the images from the encoder and produces an output sequence.
  • The encoder ingests an input sequence and produces a sequence of tokens. The decoder takes in the tokens from the encoder and produces an output sequence.
  • The encoder ingests an input sequence and produces a sequence of hidden states. The decoder takes in the hidden states from the encoder and produces an output sequence. ✔
  • The encoder ingests an input sequence and produces a single hidden state. The decoder takes in the hidden state from the encoder and produces an output sequence.

65. What are foundation models in Generative AI?

  • A foundation model is a large AI model pretrained on a vast quantity of data that was "designed to be adapted" (or fine-tuned) to a wide range of downstream tasks, such as sentiment analysis, image captioning, and object recognition. ✔
  • A foundation model is a large AI model both post and pre-trained on a vast quantity of data that was "designed to be adapted" (or fine-tuned) to a wide range of downstream tasks, such as sentiment analysis, image captioning, and object recognition.
  • A foundation model is a large AI model post-trained on a vast quantity of data that was "designed to be adapted" (or fine-tuned) to a wide range of downstream tasks, such as sentiment analysis, image captioning, and object recognition.
  • A foundation model is a small AI model pretrained on a small quantity of data that was "designed to be adapted" (or fine-tuned) to a wide range of downstream tasks, such as sentiment analysis, image captioning, and object recognition.
  • A foundation model is a large AI model pretrained on a vast quantity of data that was "designed to be adapted" (or fine-tuned) to a wide range of upstream tasks, such as sentiment analysis, image captioning, and object recognition.

66. What is a prompt?

  • A prompt is a long piece of text that is given to the large language model as input, and it cannot be used to control the output of the model.
  • A prompt is a short piece of code that is given to the large language model as input, and it can be used to control the output of the model in many ways.
  • A prompt is a short piece of text that is given to the large language model as input, and it can be used to control the input of the model in many ways.
  • A prompt is a short piece of text that is given to the small language model (SLM) as input, and it can be used to control the output of the model in many ways.
  • A prompt is a short piece of text that is given to the large language model as input, and it can be used to control the output of the model in many ways. ✔

67. What is Generative AI?

  • Generative AI is a type of artificial intelligence (AI) that can only create new content, such as text, images, audio, and video by learning from new data and then using that knowledge to predict a classification output.
  • Generative AI is a type of artificial intelligence (AI) that can create new content, such as text, images, audio, and video. It does this by learning from existing data and then using that knowledge to generate new and unique outputs. ✔
  • Generative AI is a type of artificial intelligence (AI) that can only create new content, such as text, images, audio, and video by learning from new data and then using that knowledge to predict a discrete, supervised learning output.
  • Generative AI is a type of artificial intelligence (AI) that can create new content, such as discrete numbers, classes, and probabilities. It does this by learning from existing data and then using that knowledge to generate new and unique outputs.

68. What is an example of both a generative AI model and a discriminative AI model?

  • A generative AI model could be trained on a dataset of images of cats and then used to generate new images of cats. A discriminative AI model could be trained on a dataset of images of cats and dogs and then used to classify new images as either cats or dogs. ✔
  • A generative AI model could be trained on a dataset of images of cats and then used to classify new images of cats. A discriminative AI model could be trained on a dataset of images of cats and dogs and then used to predict new images as either cats or dogs.
  • A generative AI model does not need to be trained on a dataset of images of cats and then used to generate new images of cats, because the Images were already generated by using AI. A discriminative AI model could be trained on a dataset of images of cats and dogs and then used to classify new images as either cats or dogs.

69. Hallucinations are words or phrases that are generated by the model that are often nonsensical or grammatically incorrect. What are some factors that can cause hallucinations? Select three options.

  • The model is not given enough context. ✔
  • The model is trained on noisy or dirty data. ✔
  • The model is trained on too much data.
  • The model is not trained on enough data. ✔