scala - How to catch exceptions when using DataFrameWriter to write a dataframe with JDBC
I am writing a dataframe to a MySQL database, and the database is throwing java.sql.BatchUpdateException: Duplicate entry 'qwer1234' for key 'transaction_id_unique'.
The code I am using to write to the database is:
    txnsdf.write.mode("append").jdbc(connprop.getProperty("url"), query, connprop)
Now I can't simply put a try/catch around this, because it is not at the row level; looking at the org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils class, it inserts batches of 1000 rows at a time.
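To illustrate what I mean, here is a minimal sketch of wrapping the whole write in a try/catch (reusing txnsdf, query and connprop from above); it only reacts after Spark has already aborted the batch, so there is no way to skip the offending rows from here:

    import org.apache.spark.SparkException
    import scala.util.{Failure, Success, Try}

    // Job-level try/catch around the write: by the time this fires, the batch
    // insert has already failed, so the duplicate rows cannot be skipped.
    Try {
      txnsdf.write.mode("append").jdbc(connprop.getProperty("url"), query, connprop)
    } match {
      case Success(_) =>
        println("write succeeded")
      case Failure(e: SparkException) =>
        // the JDBC exception (e.g. BatchUpdateException) usually arrives wrapped as the cause
        println(s"write failed: ${Option(e.getCause).getOrElse(e)}")
      case Failure(e) =>
        println(s"write failed: $e")
    }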
Even if I remove the constraint on the database, that doesn't solve the problem of handling the exceptions coming from this class.
My last option is to partially rewrite the savePartition method in JdbcUtils to handle the exception, and I would rather not do that. Any suggestions?
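For reference, the only workaround sketch I can think of that avoids touching JdbcUtils is to filter out rows whose key already exists before appending. This assumes a SparkSession named spark and that the unique column is called transaction_id (hypothetical names, adjust to the real schema):

    // Pre-filter workaround sketch: read back the keys that already exist in the
    // target table and drop matching rows before the append.
    val existingIds = spark.read
      .jdbc(connprop.getProperty("url"), query, connprop)
      .select("transaction_id")

    // keep only rows whose transaction_id is not already in the table
    val newRows = txnsdf.join(existingIds, Seq("transaction_id"), "left_anti")

    newRows.write.mode("append").jdbc(connprop.getProperty("url"), query, connprop)

This reads the whole target table back just to get the IDs, which may be too expensive for a large table, and it still doesn't let me catch the exception itself.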