I have a SQL statement that I want to run against an Oracle database using a JDBC driver in Databricks. I can get this to run successfully if the SQL statement is quite short, for example if it is just selecting all of the data from a table with no filters (e.g. select * from ..
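A minimal sketch of one way to pass a longer statement through the JDBC reader; the URL, credentials, table names, and SQL below are placeholders, and the query option assumes Spark 2.4 or later:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

sql = """
SELECT col_a, col_b
FROM some_schema.some_table
WHERE col_a > 100
"""

df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//hostname:1521/service_name")  # placeholder URL
      .option("query", sql)                 # 'query' avoids wrapping the SQL as a subquery
      .option("user", "my_user")            # placeholder credentials
      .option("password", "my_password")
      .option("driver", "oracle.jdbc.driver.OracleDriver")
      .load())

On older Spark versions the same statement can be wrapped as a subquery in the dbtable option instead, e.g. .option("dbtable", "(SELECT ...) t") (Oracle expects the alias without the AS keyword).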
Category: apache-spark
Coming from here, I'm trying to read the correct values from this dataset in PySpark. I made good progress using df = spark.read.csv("hashtag_donaldtrump.csv", header=True, multiLine=True), but now I have some weird values in some cells, as you can see in this picture (last lines). Do you know how I could get rid of them? ..
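Tweet text often contains commas, quotes, and newlines that break the default CSV parser. A sketch of one common fix, which tells Spark that quote characters inside quoted fields are escaped with a quote:

# escape='"' handles embedded quotes; multiLine=True handles embedded newlines
df = spark.read.csv(
    "hashtag_donaldtrump.csv",
    header=True,
    multiLine=True,
    escape='"',
)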
So I have a given PySpark DataFrame, say df, looking like below:

df.show()
+------------+-------+
| series     | value |
+------------+-------+
| XXXX-AAAA  | 1     |
| XXXX-BB    | 2     |
| XXXX-CCCCC | 3     |
+------------+-------+

In the series column, I would like to get rid of the XXXX- substring (i.e. a prefix of 5 characters), which ..
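Two sketches of how the prefix could be stripped, one by fixed position and one by matching the literal prefix:

from pyspark.sql import functions as F

# Option 1: drop a fixed-length prefix of 5 characters (substring is 1-indexed)
df = df.withColumn("series", F.expr("substring(series, 6, length(series))"))

# Option 2: remove the literal 'XXXX-' only when it appears at the start
df = df.withColumn("series", F.regexp_replace("series", r"^XXXX-", ""))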
I have a dataset which contains multiple columns and rows. Currently the Date_Time column is of string type, and I want to convert it to a timestamp format for further work. I tried the code below, which returns null: df = df.withColumn('Date_Time', df['Date_Time'].cast(TimestampType())) df.show() I tried some of the solutions from here, but none of them is working at all, in ..
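A plain cast returns null when the string does not match Spark's default timestamp layout; passing the actual pattern to to_timestamp usually fixes it. The pattern below is an assumption and should be replaced with the real layout of the Date_Time strings:

from pyspark.sql import functions as F

# assumed pattern, e.g. for strings like '25/12/2020 13:45:00'
df = df.withColumn("Date_Time", F.to_timestamp("Date_Time", "dd/MM/yyyy HH:mm:ss"))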
I am new to Spark. I am trying to build a recommendation system, and in order to obtain implicit weights I want to count how many times a user has ordered a product. I am struggling with this. I have a table with user_id, product_id and weight. These IDs are not unique; I would like to ..
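A sketch of the counting step, assuming the order history is in a DataFrame df with the columns named above:

from pyspark.sql import functions as F

# implicit weight = number of times each (user, product) pair occurs
weights = (df.groupBy("user_id", "product_id")
             .agg(F.count("*").alias("weight")))

The resulting weights table is the usual input for an implicit-feedback recommender such as pyspark.ml's ALS with implicitPrefs=True.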
Say you have data in Hive. How do you write a Spark program that reads the data from this Hive table, then adds a column to it, and finally transfers the data into another Hive table? You can use any sample data set. Source: Python..
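A minimal sketch of the read / transform / write flow; the database and table names are placeholders:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("hive-copy")
         .enableHiveSupport()       # required so spark.table resolves Hive tables
         .getOrCreate())

src = spark.table("source_db.source_table")           # read from Hive
src = src.withColumn("load_date", F.current_date())   # add a derived column
src.write.mode("overwrite").saveAsTable("target_db.target_table")  # write to Hive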
Hi, I am new to Spark and I am trying to write my first program. When I import from pyspark.mllib.linalg import Vectors I get the following error. I am using Anaconda and a Jupyter notebook; I have installed Spark and I am able to execute it from the terminal. TypeError Traceback (most recent call last) <ipython-input-1-da13502db94b> in ..
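One common cause is that the notebook's Python environment cannot locate the Spark installation even though the terminal can. A sketch using the findspark package (pip install findspark) to patch sys.path; the SPARK_HOME path below is a placeholder:

import findspark
findspark.init("/opt/spark")   # or findspark.init() if SPARK_HOME is set in the environment

from pyspark.mllib.linalg import Vectors
v = Vectors.dense([1.0, 2.0, 3.0])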
How do I save a bytes string to Hadoop HDFS in PySpark? In Python, a bytes string can simply be saved to a single XML file:

xml = b'<Value>1</Value>'
file_path = '/home/user/file.xml'
with open(file_path, 'wb') as f:
    f.write(xml)

Expected output: 'hdfs://hostname:9000/file.xml' Source: Python..
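A sketch of one way to write a single file, going through the JVM Hadoop FileSystem exposed by the Spark gateway; note that _jvm and _jsc are internal handles, and the hostname/port are placeholders for the actual NameNode:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
xml = b"<Value>1</Value>"

jvm = spark.sparkContext._jvm
conf = spark.sparkContext._jsc.hadoopConfiguration()
path = jvm.org.apache.hadoop.fs.Path("hdfs://hostname:9000/file.xml")
fs = path.getFileSystem(conf)
out = fs.create(path, True)     # overwrite if the file already exists
out.write(bytearray(xml))       # py4j converts bytearray to a Java byte[]
out.close()

This avoids the directory-of-part-files output you would get from saveAsTextFile.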
I would like to do some performance testing on Databricks. To do this I would like to log which cluster (VM type, e.g. Standard_DS3_v2) I was using during the test (we can assume that the driver and worker nodes are the same). I know I could log the number of workers and the number of cores (on ..
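On Databricks the node type is typically exposed through Spark configuration tags. The keys below are Databricks-specific (not part of open-source Spark) and are an assumption that may vary by runtime version:

node_type = spark.conf.get("spark.databricks.clusterUsageTags.clusterNodeType")
driver_type = spark.conf.get("spark.databricks.clusterUsageTags.driverNodeType")
cluster_name = spark.conf.get("spark.databricks.clusterUsageTags.clusterName")
print(node_type, driver_type, cluster_name)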
I am working on a project that predicts whether a person is sick with a certain disease based on some entered information. I have to work with PySpark, so my idea is to train a logistic regression model with PySpark and then use it in a Flask app so that a patient can enter his parameters ..
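A sketch of the train-and-persist side; the feature columns, label column, training DataFrame (train_df), and save path are all placeholders:

from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.ml import Pipeline, PipelineModel

# assemble the assumed patient features into a single vector column
assembler = VectorAssembler(inputCols=["age", "bmi", "glucose"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

model = Pipeline(stages=[assembler, lr]).fit(train_df)   # train_df is assumed to exist
model.write().overwrite().save("/models/disease_lr")

# in the Flask app, load the pipeline once at startup and score single-row DataFrames:
loaded = PipelineModel.load("/models/disease_lr")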