Goal
Use the Gemini API to generate text and to analyze video content.
1. Generate text using Gemini
In Cloud Shell, use the Gemini API to generate an answer to a question.
First, set the environment variables specified in the lab instructions.

The lab says to enable the Gemini APIs, but the Vertex AI dashboard in the Cloud Console showed they were already enabled, so there was nothing to turn on.
Then call the model with a curl command, asking the question "Why is the sky blue?".
curl \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://${API_ENDPOINT}:generateContent" \
  -d '{
    "contents": {
      "role": "user",
      "parts": {
        "text": "Why is the sky blue?"
      }
    }
  }'
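The JSON body passed to curl's -d flag can also be built and sanity-checked in Python before sending it. A small local sketch (no API call, just the payload):

```python
import json

# Hand-building the same JSON body that curl sends with -d above,
# so it can be inspected before the request goes out.
payload = {
    "contents": {
        "role": "user",
        "parts": {"text": "Why is the sky blue?"},
    }
}

body = json.dumps(payload, indent=2)
print(body)
```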

2. Create a function call using Gemini
Next, work in the Vertex AI > Workbench environment.
A chronic problem with this Workbench: when JupyterLab opens, no files show up, so you have to tick the checkbox and hit Reset.

Define the model as below.
# Task 3.1
# use the following documentation to assist you complete this cell
# https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/function-calling
# Load Gemini 2.0 Flash 001 Model
model_id = "gemini-2.0-flash-001"
Declare the function with FunctionDeclaration so the model can generate its input arguments in JSON format.
# Task 3.2
# use the following documentation to assist you complete this cell
# https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/function-calling
get_current_weather_func = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "Location"
            }
        }
    },
)
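The parameters argument above is an OpenAPI-style JSON schema, so the whole declaration can be mirrored as a plain Python dict and inspected before it is handed to the SDK. A quick local sketch (no SDK required):

```python
import json

# Plain-dict mirror of the FunctionDeclaration defined above.
get_current_weather_schema = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "Location"},
        },
    },
}

schema_json = json.dumps(get_current_weather_schema, indent=2)
print(schema_json)
```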
# Task 3.3
# use the following documentation to assist you complete this cell
# https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/function-calling
weather_tool = Tool(
    function_declarations=[get_current_weather_func],
)
# Task 3.4
# use the following documentation to assist you complete this cell
# https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/function-calling
prompt = "What is the weather like in Boston?"
response = client.models.generate_content(
    model=model_id,
    contents=prompt,
    config=GenerateContentConfig(
        tools=[weather_tool],
        temperature=0,
    ),
)
response
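When the model decides to use the tool, the response carries a function_call part (a function name plus JSON arguments) instead of plain text, and your code is expected to run the function and return the result. A hedged sketch of that dispatch step with a made-up local stub; in the real notebook the name and arguments would come from the response's function_call part, but here they are hard-coded so the sketch runs on its own:

```python
# Hypothetical stub standing in for a real weather lookup (not part of the lab).
def get_current_weather(location: str) -> dict:
    return {"location": location, "temperature_c": 21, "conditions": "sunny"}

# In the real notebook these come from the model's function_call part;
# hard-coded here to keep the sketch self-contained.
function_name = "get_current_weather"
function_args = {"location": "Boston"}

# Map declared function names to local implementations and dispatch.
handlers = {"get_current_weather": get_current_weather}
result = handlers[function_name](**function_args)
print(result)
```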
Running everything up to this point produces a response like the one below.

3. Describe video contents using Gemini
This part uses the Gemini API to describe the contents of a video.
As before, in the JupyterLab environment, edit the spots in each cell marked UPDATE and run them.
# Run the following cell to import required libraries
from google.genai.types import (
    GenerationConfig,
    Image,
    Part,
)
# Task 4.1
# Load the correct Gemini model use the following documentation to assist:
# https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/overview#supported-use-cases
# Load Gemini 2.0 Flash 001 Model
multimodal_model = "gemini-2.0-flash-001"
import http.client
import typing
import urllib.request
import IPython.display
from PIL import Image as PIL_Image
from PIL import ImageOps as PIL_ImageOps
def display_images(
    images: typing.Iterable[Image],
    max_width: int = 600,
    max_height: int = 350,
) -> None:
    for image in images:
        pil_image = typing.cast(PIL_Image.Image, image._pil_image)
        if pil_image.mode != "RGB":
            # RGB is supported by all Jupyter environments (e.g. RGBA is not yet)
            pil_image = pil_image.convert("RGB")
        image_width, image_height = pil_image.size
        if max_width < image_width or max_height < image_height:
            # Resize to display a smaller notebook image
            pil_image = PIL_ImageOps.contain(pil_image, (max_width, max_height))
        IPython.display.display(pil_image)

def get_image_bytes_from_url(image_url: str) -> bytes:
    with urllib.request.urlopen(image_url) as response:
        response = typing.cast(http.client.HTTPResponse, response)
        image_bytes = response.read()
    return image_bytes

def load_image_from_url(image_url: str) -> Image:
    image_bytes = get_image_bytes_from_url(image_url)
    return Image.from_bytes(image_bytes)

def display_content_as_image(content: str | Image | Part) -> bool:
    if not isinstance(content, Image):
        return False
    display_images([content])
    return True

def display_content_as_video(content: str | Image | Part) -> bool:
    if not isinstance(content, Part):
        return False
    part = typing.cast(Part, content)
    file_path = part.file_data.file_uri.removeprefix("gs://")
    video_url = f"https://storage.googleapis.com/{file_path}"
    IPython.display.display(IPython.display.Video(video_url, width=600))
    return True

def print_multimodal_prompt(contents: list[str | Image | Part]):
    """
    Given contents that would be sent to Gemini,
    output the full multimodal prompt for ease of readability.
    """
    for content in contents:
        if display_content_as_image(content):
            continue
        if display_content_as_video(content):
            continue
        print(content)
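The video helper above relies on the fact that a publicly readable Cloud Storage object is also reachable over HTTPS at storage.googleapis.com/<bucket>/<object>; the URI conversion it performs can be checked in isolation:

```python
# Same conversion display_content_as_video performs on part.file_data.file_uri.
file_uri = "gs://github-repo/img/gemini/multimodality_usecases_overview/mediterraneansea.mp4"
video_url = f"https://storage.googleapis.com/{file_uri.removeprefix('gs://')}"
print(video_url)
```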
# Task 4.2 Generate a video description
# In this cell, update the prompt to ask Gemini to describe the video URL referenced.
# You can use the documentation at the following link to assist.
# https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/sdk-for-gemini/gemini-sdk-overview-reference#generate-content-from-video
# https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference#sample-requests-text-stream-response
# Video URI: gs://github-repo/img/gemini/multimodality_usecases_overview/mediterraneansea.mp4
prompt = """
What is shown in this video?
Where should I go to see it?
What are the top 5 places in the world that look like this?
"""
video = Part.from_uri(
    file_uri="gs://github-repo/img/gemini/multimodality_usecases_overview/mediterraneansea.mp4",
    mime_type="video/mp4",
)
contents = [prompt, video]
responses = client.models.generate_content_stream(
    model=multimodal_model,
    contents=contents
)
print("-------Prompt--------")
print_multimodal_prompt(contents)
print("\n-------Response--------")
for response in responses:
    print(response.text, end="")
The part that tripped me up most was the following.
When I called client.models.generate_content as below and then ran the responses-printing loop that was already written underneath, nothing was printed; instead it raised an AttributeError.
responses = client.models.generate_content(
    model=multimodal_model,
    contents=contents
)
print("\n-------Response--------")
for response in responses:
    print(response.text, end="")

So I tried printing responses directly in one go, and it printed fine with no error at all??
print("\n-------Response--------")
print(responses, "check")

Still, with that code the lab's progress-check button would not pass.
It just kept showing the error message "Please ensure that you have completed the python code in cells of a Jupyter notebook to describe contents of a video of Gemini LLM."
Apparently you should not touch anything other than the parts explicitly marked UPDATE..
The answer was to call the generate_content_stream method. With the streaming call, response.text printed correctly inside the for loop, and the task passed without issue!


With that, as shown above, the response is printed incrementally, line by line, in real time.
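The error makes sense once you compare the two return types: generate_content returns a single response object, while generate_content_stream returns an iterator of chunk objects that each carry a .text attribute. A toy illustration with simplified stand-in classes (these are not the SDK's real types, and the exact AttributeError the SDK raises depends on its internals):

```python
class FakeChunk:
    """Stand-in for one streamed chunk; has .text like the SDK's chunks."""
    def __init__(self, text: str):
        self.text = text

def fake_stream():
    # Stand-in for generate_content_stream: yields chunks one by one.
    for piece in ["The video shows ", "the Mediterranean ", "coastline."]:
        yield FakeChunk(piece)

class FakeResponse:
    """Stand-in for the single response generate_content returns.
    Iterating it yields internal field tuples, not chunks with .text."""
    def __init__(self, text: str):
        self.text = text
    def __iter__(self):
        return iter([("text", self.text)])  # tuples have no .text attribute

# Streaming: each chunk has .text, so the lab's loop works.
streamed = "".join(chunk.text for chunk in fake_stream())
print(streamed)

# Non-streaming: the same loop hits items without .text -> AttributeError.
got_attribute_error = False
try:
    for item in FakeResponse("full answer"):
        print(item.text)
except AttributeError:
    got_attribute_error = True
print("AttributeError raised:", got_attribute_error)
```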
Refs
- Generate text | Generative AI on Vertex AI | Google Cloud
- Function calling reference | Generative AI on Vertex AI | Google Cloud
- Generate content with the Gemini API in Vertex AI | Generative AI on Vertex AI | Google Cloud