Solving the _corrupt_record and name errors when Spark Streaming reads JSON data from Kafka and saves it to MySQL
Software versions used:
Spark 2.3.0
IDEA 2019.1
kafka_2.11-0.10.2.2
spark-streaming-kafka-0-10_2.11-2.3.0
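For reference, here is a minimal sbt sketch of the dependencies these versions imply (Maven users would use the same coordinates; the MySQL connector version below is a placeholder):

// build.sbt (sketch) -- Spark 2.3.0 artifacts for Scala 2.11
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.3.0",
  "org.apache.spark" %% "spark-sql" % "2.3.0",
  "org.apache.spark" %% "spark-streaming" % "2.3.0",
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.3.0",
  "mysql" % "mysql-connector-java" % "5.1.47"  // JDBC sink; version is a placeholder
)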
First, the code:
package com.bd.spark
import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.dstream.InputDStream
object kafkaSparkStreaming_10version {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-spark-demo").setMaster("local[4]")
    // The original listing breaks off here; the rest of main() below is a hedged
    // reconstruction from the imports, with placeholder broker, group id and topic values.
    val ssc = new StreamingContext(conf, Seconds(5))
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",        // placeholder
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "kafka-spark-demo",               // placeholder
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean))
    val stream: InputDStream[ConsumerRecord[String, String]] =
      KafkaUtils.createDirectStream[String, String](
        ssc, PreferConsistent, Subscribe[String, String](Array("test"), kafkaParams))
    // JSON parsing and the MySQL sink go here; see the sketch below.
    ssc.start()
    ssc.awaitTermination()
  }
}
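The _corrupt_record and name errors from the title show up at this point: when spark.read.json is handed record values that are not strict JSON (for example, single-quoted keys), the unparseable rows are collected into a _corrupt_record column, and a later lookup of an expected field such as name then fails to resolve. Below is a hedged sketch of the processing step that would sit before ssc.start(); the JDBC URL, table name, and credentials are placeholders.

// Hedged sketch of the missing processing step (placed before ssc.start()).
stream.map(_.value).foreachRDD { rdd =>
  if (!rdd.isEmpty) {
    val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
    import spark.implicits._
    // Parse each Kafka record value as one JSON object. Records that are not
    // valid JSON land in the _corrupt_record column instead of data columns,
    // which is also why a later select by field name can fail.
    // cache() first: since Spark 2.3, filtering on _corrupt_record alone is
    // disallowed on an uncached frame.
    val df = spark.read.json(spark.createDataset(rdd)).cache()
    val clean =
      if (df.columns.contains("_corrupt_record"))
        df.filter($"_corrupt_record".isNull).drop("_corrupt_record")
      else df
    val props = new Properties()
    props.put("user", "root")            // placeholder
    props.put("password", "123456")      // placeholder
    props.put("driver", "com.mysql.jdbc.Driver")
    clean.write.mode(SaveMode.Append)
      .jdbc("jdbc:mysql://localhost:3306/test", "json_table", props)
  }
}

If you control the producer, emitting strict JSON (double-quoted keys and string values) avoids the corrupt rows in the first place; the filter above is the consumer-side safety net.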