Category: google-cloud-vertex-ai

I'm trying to run a simple Ada-boosted Decision Tree regressor on GCP Vertex AI. To parse hyperparams and other arguments I use Click for Python, a very simple CLI library. Here's the setup for my task function: @click.command() @click.argument("input_path", type=str) @click.option("--output-path", type=str, envvar='AIP_MODEL_DIR') @click.option('--gcloud', is_flag=True, help='Run as if in Google Cloud Vertex AI Pipeline') @click.option('--grid', ..

Read more
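For reference, a minimal runnable sketch of that Click setup, with the option names spelled using double dashes (the en-dashes in the excerpt are a rendering artifact). Vertex AI custom training containers set the AIP_MODEL_DIR environment variable, which Click's envvar argument picks up automatically; the training body below is a placeholder, and the truncated --grid option is left out:

```python
# Minimal sketch of the Click CLI described in the question; the actual
# training logic is elided.
import click

@click.command()
@click.argument("input_path", type=str)
@click.option("--output-path", type=str, envvar="AIP_MODEL_DIR")
@click.option("--gcloud", is_flag=True,
              help="Run as if in Google Cloud Vertex AI Pipeline")
def task(input_path: str, output_path: str, gcloud: bool) -> None:
    # Placeholder body: echo the parsed values instead of training.
    click.echo(f"input={input_path} output={output_path} gcloud={gcloud}")

if __name__ == "__main__":
    task()
```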

I want to get command-line parameters in Flask. I have tried many times and still haven't solved this problem. The expectation is that parameters are passed on the command line when the service is started, and Flask accepts them. sever.py: from flask import Flask, jsonify, request import pickle import argparse app = Flask(__name__) # load model def load_pickle(gcs_bucket_name, ..

Read more
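Flask itself does not parse custom command-line flags, so a common pattern for this scenario is to parse them with argparse before calling app.run() and stash the values on app.config. A hedged sketch, with flag and route names assumed rather than taken from the truncated post:

```python
# sever.py (sketch): accept CLI parameters at startup and expose them to
# Flask view functions via app.config. Flag names are illustrative.
import argparse
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/bucket")
def bucket():
    # Read the startup parameter stored on the application config.
    return jsonify(bucket=app.config["GCS_BUCKET_NAME"])

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--gcs-bucket-name", required=True)
    parser.add_argument("--port", type=int, default=5000)
    args = parser.parse_args()
    app.config["GCS_BUCKET_NAME"] = args.gcs_bucket_name
    app.run(host="0.0.0.0", port=args.port)
```

Started as python sever.py --gcs-bucket-name my-bucket --port 8080, the service serves the parameter back at /bucket. Note this only works when launching via python directly, not via the flask run CLI, which swallows unknown flags.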

I'm attempting to run a Vertex AI custom training job using the Python SDK, following the general instructions laid out in this readme. My code is as follows (sensitive data removed): job = aiplatform.CustomContainerTrainingJob( display_name='python_api_test', container_uri='{URI FOR CUSTOM CONTAINER IN GOOGLE ARTIFACT REGISTRY}', staging_bucket='{GCS BUCKET PATH IN "gs://" FORMAT}', model_serving_container_image_uri='us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-4:latest', ) job.run( model_display_name='python_api_model', args='{ARG PASSED ..

Read more
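One detail worth noting when reproducing this: CustomContainerTrainingJob.run() expects args as a list of strings, not a single string. A sketch under that assumption, keeping the question's placeholders verbatim; the container args and project values below are hypothetical:

```python
# Sketch of the custom training call above; placeholder URIs are kept
# as-is and the args passed to the container are made-up examples.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # assumed

job = aiplatform.CustomContainerTrainingJob(
    display_name="python_api_test",
    container_uri="{URI FOR CUSTOM CONTAINER IN GOOGLE ARTIFACT REGISTRY}",
    staging_bucket="{GCS BUCKET PATH IN 'gs://' FORMAT}",
    model_serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-4:latest"
    ),
)

model = job.run(
    model_display_name="python_api_model",
    args=["--epochs=10", "--lr=0.01"],  # must be a list, not one string
    replica_count=1,
)
```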

I am using Vertex AI's Python SDK, which is built on top of Kubeflow Pipelines. In it, you supposedly can do this: train_op = (sklearn_classification_train( train_data = data_op.outputs['train_out'] ). set_cpu_limit(training_cpu_limit). set_memory_limit(training_memory_limit). add_node_selector_constraint(training_node_selector). set_gpu_limit(training_gpu_limit) ) where you can add these functions (set_cpu_limit, set_memory_limit, add_node_selector_constraint, and set_gpu_limit) onto your component. I haven't used this syntax before. How ..

Read more
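The chaining works because each of those setters returns the task object itself (a fluent interface), so every call applies to the result of the previous one. A sketch assuming kfp 2.x, where add_node_selector_constraint and set_gpu_limit have been superseded by set_accelerator_type and set_accelerator_limit; the component and resource values are illustrative:

```python
# Fluent-style resource configuration on a kfp pipeline task: each
# setter returns the task, which is what makes the chaining possible.
from kfp import dsl

@dsl.component
def sklearn_classification_train(train_data: str):
    print(f"training on {train_data}")  # placeholder body

@dsl.pipeline(name="chaining-demo")
def pipeline(train_data: str = "gs://bucket/train.csv"):
    train_op = (
        sklearn_classification_train(train_data=train_data)
        .set_cpu_limit("4")
        .set_memory_limit("16G")
        .set_accelerator_type("NVIDIA_TESLA_T4")
        .set_accelerator_limit(1)
    )
```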

I have deployed an index in a Vertex AI IndexEndpoint. Following the docs for DeployedIndex, I have set the attribute enable_access_logging to True to enable the private endpoint's access logs. The docs describe the field: "Optional. If true, private endpoint's access logs are sent to StackDriver Logging. These logs are like standard server access logs, containing information like timestamp and .."

Read more
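For completeness, a sketch of how that flag can be set at deploy time using the low-level client, where DeployedIndex carries the enable_access_logging field; all resource names and IDs below are placeholders:

```python
# Deploy an index with access logging enabled; once deployed, the
# private endpoint's access logs should land in Cloud Logging.
from google.cloud import aiplatform_v1

client = aiplatform_v1.IndexEndpointServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)

deployed_index = aiplatform_v1.DeployedIndex(
    id="my_deployed_index",
    index="projects/PROJECT/locations/us-central1/indexes/INDEX_ID",
    enable_access_logging=True,  # the attribute from the docs quote
)

operation = client.deploy_index(
    index_endpoint=(
        "projects/PROJECT/locations/us-central1/indexEndpoints/ENDPOINT_ID"
    ),
    deployed_index=deployed_index,
)
operation.result()  # blocks until the deployment completes
```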

I'm trying to write the Python code for a pipeline in Vertex AI using kfp components. I have a step where I create a system.Dataset object, which is the following: @component(base_image="python:3.9", packages_to_install=["google-cloud-bigquery","pandas","pyarrow","fsspec","gcsfs"]) def create_dataframe( project: str, region: str, destination_dataset: str, destination_table_name: str, dataset: Output[Dataset], ): from google.cloud import bigquery client = bigquery.Client(project=project, location=region) dataset_ref = bigquery.DatasetReference(project, ..

Read more
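The excerpt cuts off mid-component; a completed sketch under assumptions (the query and the way the DataFrame is persisted are mine, not from the post) might look like the following. Writing to dataset.path materializes the Output[Dataset] artifact under the pipeline root, which is why gcsfs appears in the package list:

```python
# Completed sketch of the component above (kfp 2.x imports assumed):
# query BigQuery into a DataFrame and write it to the Dataset artifact.
from kfp.dsl import Dataset, Output, component

@component(
    base_image="python:3.9",
    packages_to_install=[
        "google-cloud-bigquery", "pandas", "pyarrow", "fsspec", "gcsfs"
    ],
)
def create_dataframe(
    project: str,
    region: str,
    destination_dataset: str,
    destination_table_name: str,
    dataset: Output[Dataset],
):
    from google.cloud import bigquery

    client = bigquery.Client(project=project, location=region)
    table_id = f"{project}.{destination_dataset}.{destination_table_name}"
    # Hypothetical query; the original post's SQL is not shown.
    df = client.query(f"SELECT * FROM `{table_id}`").to_dataframe()
    # dataset.path resolves to a GCS-backed local path at pipeline runtime.
    df.to_csv(dataset.path, index=False)
```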

I have this code to start a Vertex AI pipeline job: import google.cloud.aiplatform as vertexai vertexai.init(project=PROJECT_ID, staging_bucket=PIPELINE_ROOT) job = vertexai.PipelineJob( display_name='pipeline-test-1', template_path='xgb_pipe.json' ) job.run() which works nicely, but the run name label is a random number. How can I specify the run name? Source: Python..

Read more
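PipelineJob also accepts a job_id argument, and that value is used as the run name instead of a generated one. A sketch, with the project values and custom ID as assumed examples; job IDs must be unique and stick to lowercase letters, digits, and hyphens:

```python
# Same job as above, but with an explicit job_id to control the run name.
import google.cloud.aiplatform as vertexai

PROJECT_ID = "my-project"                       # assumed placeholder
PIPELINE_ROOT = "gs://my-bucket/pipeline-root"  # assumed placeholder

vertexai.init(project=PROJECT_ID, staging_bucket=PIPELINE_ROOT)

job = vertexai.PipelineJob(
    display_name="pipeline-test-1",
    template_path="xgb_pipe.json",
    job_id="pipeline-test-1-run-001",  # hypothetical custom run name
)
job.run()
```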