Problem with lambda function: apply not applicable

  apache-spark, dataframe, pyspark, python

I am trying to do a calculation on a dataframe in Databricks, using a ‘lambda’ function. I have done this before with Pandas; now I am using PySpark.

What I do is the following:

df_conversion_pre = df_conversion
df_conversion_pre['RENTA_2'] = df_conversion_pre.apply(lambda x: x['RENTA'] * float(precio),axis=1)
df_conversion_pre.head()

where ‘precio’ is just a value of type float.

The error returned is: AttributeError: 'DataFrame' object has no attribute 'apply'

I really don’t understand why this happens. Can you explain it to me, please?
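
For reference, my guess is that the Spark-native way to do this calculation would look roughly like the sketch below (assuming df_conversion is a pyspark.sql.DataFrame and precio is a Python float), but I am not sure whether this is the right approach:

from pyspark.sql import functions as F

# Sketch: build RENTA_2 as a column expression instead of a row-wise apply
df_conversion_pre = df_conversion.withColumn(
    "RENTA_2", F.col("RENTA") * F.lit(float(precio))
)
df_conversion_pre.show(5)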

Thank you very much in advance.

Regards!

