Spark 2.0 Cassandra Scala Shell Error: java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
By : user5830960
Date : March 29 2020, 07:55 AM
I finally got this working. I've added a gist for reference: https://gist.github.com/ghafran/19d0067d88dc074413422d4cae4cc344 code :
# install java
sudo apt-get update -y
sudo apt-get install software-properties-common -y
sudo add-apt-repository -y ppa:openjdk-r/ppa
sudo apt-get install wget -y
sudo apt-get install openjdk-8-jdk -y
sudo apt-get update -y
# make serve directory
sudo mkdir -p /srv
cd /srv
# install scala 2.11
sudo wget http://downloads.lightbend.com/scala/2.11.7/scala-2.11.7.deb
sudo dpkg -i scala-2.11.7.deb
# get spark 2.0
sudo wget http://d3kbcqa49mib13.cloudfront.net/spark-2.0.0-bin-hadoop2.7.tgz
sudo tar -zxf spark-2.0.0-bin-hadoop2.7.tgz
sudo mv spark-2.0.0-bin-hadoop2.7 spark
# build spark cassandra connector
echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823
sudo apt-get install apt-transport-https -y
sudo apt-get update -y
sudo apt-get install sbt -y
git clone https://github.com/datastax/spark-cassandra-connector.git
cd spark-cassandra-connector
git checkout v2.0.0-M2
sudo sbt assembly -Dscala-2.11=true
# move spark cassandra connector to spark jar directory
find . -iname "*.jar" -type f -exec /bin/cp {} /srv/spark/jars/ \;
# start master
/srv/spark/sbin/start-master.sh --host 0.0.0.0
# start slave
/srv/spark/sbin/start-slave.sh --host 0.0.0.0 spark://localhost:7077
# start shell
/srv/spark/bin/spark-shell --driver-class-path $(echo /srv/spark/jars/*.jar | sed 's/ /:/g')
# test
sc.stop
import org.apache.spark
import org.apache.spark._
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.cassandra
import org.apache.spark.sql.cassandra._
import com.datastax.spark
import com.datastax.spark._
import com.datastax.spark.connector
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql
import com.datastax.spark.connector.cql._
import com.datastax.spark.connector.cql.CassandraConnector
import com.datastax.spark.connector.cql.CassandraConnector._
val conf = new SparkConf(true).set("spark.cassandra.connection.host", "cassandraserver")
val sc = new SparkContext("spark://localhost:7077", "test", conf)
val table = sc.cassandraTable("keyspace", "users")
println(table.count)
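For a quick sanity check beyond count, the same table can also be read through the DataFrame API; a minimal sketch, assuming the same keyspace and table names as in the test above: code :
// read the same Cassandra table as a DataFrame (connector 2.0 / Spark 2.0 API)
val sqlContext = new SQLContext(sc)
val df = sqlContext.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "keyspace", "table" -> "users"))
  .load()
df.printSchema()
df.show(10)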
sbt test gives: java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
By : Elle Ameli
Date : March 29 2020, 07:55 AM
This line is the problem: libraryDependencies += "com.typesafe.akka" % "akka-actor_2.10" % "2.1.3". It hard-codes an artifact built against Scala 2.10, which is binary-incompatible with a Scala 2.11 project and produces exactly this NoClassDefFoundError. Either spell out the suffix for your Scala version, or let sbt add it via the %% operator; the two lines below are equivalent: code :
libraryDependencies += "org.scala-lang.modules" % "scala-xml_2.11" % "1.0.5"
libraryDependencies += "org.scala-lang.modules" %% "scala-xml" % "1.0.5"
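More generally, pin scalaVersion and use %% for every Scala library so sbt always resolves artifacts for the right binary version; a minimal build.sbt sketch (version numbers are illustrative): code :
// build.sbt (sketch only; adjust versions to your project)
scalaVersion := "2.11.7"
// %% appends the Scala binary suffix (_2.11) automatically,
// so the resolved artifact always matches scalaVersion
libraryDependencies += "org.scala-lang.modules" %% "scala-xml" % "1.0.5"
// akka 2.1.x was never published for Scala 2.11; a 2.11-compatible line is 2.3.x
libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.3.16"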
Spark : java.lang.NoClassDefFoundError: scala/collection/mutable/ArraySeq$ofRef
By : user3233928
Date : March 29 2020, 07:55 AM
As stated in the comments, the solution is to build your application with the same Scala version that the Spark installation on the cluster uses; mixing Scala binary versions between the two is what triggers this NoClassDefFoundError.
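In build.sbt terms that means pinning scalaVersion to whatever Scala version the cluster's Spark build uses and marking Spark itself as provided; a sketch with illustrative versions: code :
// build.sbt (sketch; use the exact Scala/Spark versions your cluster runs)
scalaVersion := "2.12.15"  // e.g. Spark 3.x pre-built binaries use Scala 2.12
// "provided" compiles against Spark but lets the cluster supply it at runtime,
// so no mismatched Scala library ends up in the assembly
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.1.2" % "provided"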
Run spray-servlet war gives java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
By : silentkriz
Date : March 29 2020, 07:55 AM
I ran into the very same error message with a different setup: I was using spray-client in a standalone app (no JBoss). The cause was that although I was using Spray 1.3.1, which is supposed to run on Scala 2.11, the artifact I was actually pulling from the Maven repo (I use Maven in my project) was built against Scala 2.10. Check the POM if you want to see for yourself.
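For comparison, sbt's %% operator sidesteps that trap by always appending the current Scala binary suffix; a sketch with hypothetical coordinates (verify the exact spray version published for 2.11 against the repository): code :
// under Scala 2.11 this resolves spray-client_2.11; the bare
// "spray-client" artifact (no suffix) was the Scala 2.10 build
libraryDependencies += "io.spray" %% "spray-client" % "1.3.1"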
Apache hive exception noClassDefFoundError: scala/collection/iterable
By : Rafael Ribeiro
Date : March 29 2020, 07:55 AM
This issue is usually caused by a classpath configuration problem. Try the following. Step 1: find this code in $HIVE_HOME/bin/hive (backing the file up first is a good idea):