I am trying to build a small app on Google Cloud Platform (I am new to it). It involves collecting and storing files in Google Cloud Storage buckets. To do that I need to use from google.cloud import storage, but when I deploy the app from the terminal, I get the following error. File "/home/vmagent/app/application.py", line ..
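An ImportError at this point usually means the google-cloud-storage package is not listed in the app's requirements.txt, so App Engine never installs it at deploy time. A minimal sketch of the dependency file (the exact version pin is illustrative, not required):

```
# requirements.txt — must sit next to app.yaml and application.py
google-cloud-storage
```

After adding it, redeploy with gcloud app deploy; installing the package locally with pip alone does not affect the deployed environment.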
I’ve created this Python script in PyCharm that uses a number of packages (pandas, smtplib, etc.). The script needs to pull specific Google Analytics data through the API, which we then transform in the script itself. The script works like a charm. Now we want to let this script run every day at a specific ..
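One common pattern is to wrap the script in a Cloud Functions HTTP entry point and have a Cloud Scheduler job call its trigger URL once a day. A hedged sketch — the function name and the stub `fetch_and_transform` are placeholders for the real GA pull and pandas transform:

```python
# main.py — hypothetical HTTP entry point for Cloud Functions; a Cloud
# Scheduler HTTP job pointed at the trigger URL runs it on a daily cron.

def fetch_and_transform():
    # Placeholder for the real Google Analytics API call + pandas transform.
    return [{"date": "2024-01-01", "sessions": 42}]

def daily_report(request):
    """Entry point Cloud Scheduler invokes; `request` is a Flask request."""
    rows = fetch_and_transform()
    # The smtplib email step from the original script would go here.
    return f"processed {len(rows)} rows", 200
```

The alternative is a plain cron entry on a small Compute Engine VM, but Cloud Scheduler avoids keeping a machine running for one daily job.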
I have a situation where my data lies in a different GCP project, say "data-pro", and my compute project is set up as a separate GCP project that has access to "data-pro"'s tables. Is there a way to specify the default project ID under which the queries must run? I can see that there ..
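With the BigQuery client library, the project passed to the client is where the query jobs run (and are billed), while tables in other projects can be referenced by fully qualifying them. A hedged sketch — the dataset and table names are made up:

```python
def qualified_table(project: str, dataset: str, table: str) -> str:
    """BigQuery can read another project's table via `project.dataset.table`."""
    return f"`{project}.{dataset}.{table}`"

def run_query():
    from google.cloud import bigquery  # assumes google-cloud-bigquery is installed
    # Client(project=...) sets the default project the jobs run under.
    client = bigquery.Client(project="compute-pro")  # hypothetical compute project
    sql = f"SELECT COUNT(*) AS n FROM {qualified_table('data-pro', 'my_dataset', 'my_table')}"
    return client.query(sql).result()
```

The compute project's service account still needs at least BigQuery Data Viewer on "data-pro" for this to succeed.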
This is for a project involving the deployment of a series of web scrapers. I’m looking to receive URLs in one "load-balancer" cloud function and then invoke several identical web-scraper cloud functions to handle the workload of URLs. I’ve attempted designs using HTTP requests and Pub/Sub between the functions. Does anyone with more experience see a good ..
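The Pub/Sub route is the usual fan-out pattern here: the "load-balancer" function publishes one message per batch of URLs, and each scraper is a Pub/Sub-triggered function, so Cloud Functions handles the parallelism and retries. A hedged sketch — the topic name is illustrative:

```python
import json

def batch_urls(urls, batch_size):
    """Split the incoming URL list into fixed-size batches, one per message."""
    return [urls[i:i + batch_size] for i in range(0, len(urls), batch_size)]

def publish_batches(project_id, urls, batch_size=10):
    from google.cloud import pubsub_v1  # assumes google-cloud-pubsub is installed
    publisher = pubsub_v1.PublisherClient()
    topic = publisher.topic_path(project_id, "scrape-jobs")  # hypothetical topic
    for batch in batch_urls(urls, batch_size):
        publisher.publish(topic, json.dumps(batch).encode("utf-8"))
```

Direct HTTP calls between functions also work but make you handle timeouts, retries, and concurrency limits yourself.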
I’m trying to generate artificial big datasets of 1 billion rows (observations) and 1,000 columns and export them as a CSV file. My code for generating the data works as I want, but it breaks once it has generated 1.8 million observations. I’ve been running this code in a virtual machine on GCP ..
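Crashing partway through at this scale usually points to building the whole dataset in memory before writing. A sketch of the chunked alternative: generate and append a fixed-size batch at a time, so memory stays flat regardless of row count (column layout and chunk size here are illustrative):

```python
import csv
import random

def write_random_csv(path, n_rows, n_cols, chunk_size=100_000):
    """Stream random rows to a CSV in batches instead of materializing them all."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([f"col_{i}" for i in range(n_cols)])
        written = 0
        while written < n_rows:
            batch = min(chunk_size, n_rows - written)
            for _ in range(batch):
                writer.writerow([random.random() for _ in range(n_cols)])
            written += batch
    return written
```

At 1 billion rows by 1,000 columns the file itself will be multiple terabytes, so disk size on the VM (or writing sharded files straight to a GCS bucket) becomes the next constraint.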
I would like to use Cloud Scheduler to run my Python script on an hourly basis to get data from an API request. I would also like to then save this data in some format. The amount of data I use takes relatively little storage space, so it would be best if I could ..
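For small hourly payloads, one cheap arrangement is a Cloud Scheduler job triggering a Cloud Function that writes each run's response as a timestamped object in a GCS bucket. A hedged sketch — the bucket name and API URL are placeholders:

```python
from datetime import datetime, timezone

def object_name(prefix="api-data"):
    """One object per run, named by UTC timestamp so runs never collide."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    return f"{prefix}/{stamp}.json"

def fetch_and_store(bucket_name, api_url):
    import requests                     # assumes requests is installed
    from google.cloud import storage    # assumes google-cloud-storage is installed
    payload = requests.get(api_url, timeout=30).text
    bucket = storage.Client().bucket(bucket_name)
    bucket.blob(object_name()).upload_from_string(
        payload, content_type="application/json")
```

GCS storage at this volume costs effectively nothing, and the timestamped prefix keeps the full history queryable later (e.g. via BigQuery external tables).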
Within my code, I am attempting to gather the Application Default Credentials from the associated service account in Cloud Build: from google.auth import default; credentials, project_id = default(). This works fine locally because I have set the GOOGLE_APPLICATION_CREDENTIALS environment variable appropriately. However, when this line is executed (via a test step in ..
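Inside Cloud Build there is no GOOGLE_APPLICATION_CREDENTIALS file; google.auth.default() is expected to fall back to the build's service account via the metadata server, so failures here usually mean the build step's environment can't reach it (common with some custom builder images) or the service account lacks the needed roles. A minimal defensive sketch of the same call:

```python
def load_credentials():
    """ADC search order: env-var key file, gcloud user creds, metadata server."""
    import google.auth
    from google.auth.exceptions import DefaultCredentialsError
    try:
        credentials, project_id = google.auth.default()
    except DefaultCredentialsError as exc:
        raise RuntimeError(
            "No Application Default Credentials found; check that the Cloud Build "
            "service account is available to this step") from exc
    return credentials, project_id
```

The fix is normally granting roles to the Cloud Build service account rather than baking a key file into the build.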
I am working on a script that takes files from a GCP bucket and uploads them to another server. Currently my script downloads all of the files from the GCP bucket into my local storage using blob.download_to_filename and then sends a POST request (using the requests library) to upload those files to my server. I know ..
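The local round-trip can be skipped: blob.open("rb") returns a file-like object that requests will stream straight into the POST body, so nothing touches local disk. A hedged sketch — the upload URL and form field name are placeholders for whatever the receiving server expects:

```python
def forward_blob(bucket_name, blob_name, upload_url):
    import requests                    # assumes requests is installed
    from google.cloud import storage   # assumes google-cloud-storage is installed
    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    # blob.open("rb") streams from GCS; requests streams it into the POST.
    with blob.open("rb") as stream:
        resp = requests.post(upload_url, files={"file": (blob_name, stream)})
    resp.raise_for_status()
    return resp.status_code
```

For small files, blob.download_as_bytes() into the request body is an even simpler variant, at the cost of holding each file in memory.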
As of now I do kubectl --context <cluster context> get pod -A to get pods in a specific cluster. Is there a Python way to set the Kubernetes context for a virtual env, so we can use multiple contexts at the same time? Example: Terminal 1: (cluster context1) [email protected] # Terminal 2: (cluster context2) [email protected] ..
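With the official kubernetes Python client you don't need per-terminal state at all: new_client_from_config(context=...) builds an API client pinned to one kubeconfig context without touching any global state, so two contexts can be used side by side in the same process. A hedged sketch — the context names are illustrative:

```python
def pods_for_context(context_name):
    from kubernetes import client, config  # assumes `kubernetes` is installed
    # Unlike config.load_kube_config(), this does not mutate a global default,
    # so each call can target a different cluster.
    api = client.CoreV1Api(
        api_client=config.new_client_from_config(context=context_name))
    return api.list_pod_for_all_namespaces()

# Equivalent of `kubectl --context <ctx> get pod -A`, one client per cluster:
# pods_a = pods_for_context("cluster-context1")
# pods_b = pods_for_context("cluster-context2")
```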
I know how to download the file from Cloud Storage within the Cloud Run instance. But I can’t find the syntax for reading the file in Python. I’m looking to immediately convert the CSV file into a pandas dataframe, just by using pd.read_csv('testing.csv'). So my code looks like download_blob(bucket_name, source_blob_name, 'testing.csv'). So shouldn't I ..
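The download step to a local file can be skipped entirely: pd.read_csv accepts any file-like object, so the blob's bytes can be wrapped in an in-memory buffer. A sketch of that route (bucket and blob names are whatever the existing download_blob call used):

```python
import io
import pandas as pd

def blob_to_dataframe(raw_bytes: bytes) -> pd.DataFrame:
    """pd.read_csv reads from any file-like object, including an in-memory one."""
    return pd.read_csv(io.BytesIO(raw_bytes))

def read_csv_from_gcs(bucket_name, blob_name):
    from google.cloud import storage  # assumes google-cloud-storage is installed
    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    return blob_to_dataframe(blob.download_as_bytes())
```

If the gcsfs package is installed, pd.read_csv("gs://bucket/testing.csv") also works directly; otherwise, writing to /tmp (the only writable path on Cloud Run) and then pd.read_csv('/tmp/testing.csv') is the file-based fallback.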