
A user reported that a Spark program they wrote occasionally fails when run on YARN with the error java.lang.IllegalArgumentException: Illegal pattern component: XXX. The log shows the exception being thrown while creating a FastDateFormat object, and only intermittently. Could it be data-dependent? The user says the same data sometimes succeeds and sometimes fails. Do the failures always land on particular cluster nodes? That is not clear either, so the first step is to read the error log carefully.

View the Spark log with vim, or fetch it with yarn logs -applicationId $appId:

23/02/08 10:15:06 ERROR Executor: Exception in task 5.3 in stage 0.0 (TID 4)
java.lang.IllegalArgumentException: Illegal pattern component: XXX
at org.apache.commons.lang3.time.FastDatePrinter.parsePattern(FastDatePrinter.java:282)
at org.apache.commons.lang3.time.FastDatePrinter.init(FastDatePrinter.java:149)
at org.apache.commons.lang3.time.FastDatePrinter.<init>(FastDatePrinter.java:142)
at org.apache.commons.lang3.time.FastDateFormat.<init>(FastDateFormat.java:384)
at org.apache.commons.lang3.time.FastDateFormat.<init>(FastDateFormat.java:369)
at org.apache.commons.lang3.time.FastDateFormat$1.createInstance(FastDateFormat.java:91)
at org.apache.commons.lang3.time.FastDateFormat$1.createInstance(FastDateFormat.java:88)
at org.apache.commons.lang3.time.FormatCache.getInstance(FormatCache.java:82)
at org.apache.commons.lang3.time.FastDateFormat.getInstance(FastDateFormat.java:165)
at org.apache.spark.sql.catalyst.json.JSONOptions.<init>(JSONOptions.scala:83)
at org.apache.spark.sql.catalyst.json.JSONOptions.<init>(JSONOptions.scala:43)
at org.apache.spark.sql.Dataset$$anonfun$toJSON$1.apply(Dataset.scala:3146)
at org.apache.spark.sql.Dataset$$anonfun$toJSON$1.apply(Dataset.scala:3142)
at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$5.apply(objects.scala:188)
at org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$5.apply(objects.scala:185)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:836)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:836)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)

The log shows the error coming out of Spark Catalyst, specifically line 83 of org.apache.spark.sql.catalyst.json.JSONOptions, where a FastDateFormat instance is created. Looking at the JSONOptions source, the argument passed on that line happens to contain the string XXX. To see whether that is the cause, follow the FastDateFormat creation path into commons-lang3, down to the org.apache.commons.lang3.time.FastDatePrinter class.

// Spark source code
val timestampFormat: FastDateFormat =
  FastDateFormat.getInstance(
    parameters.getOrElse("timestampFormat", "yyyy-MM-dd'T'HH:mm:ss.SSSXXX"), timeZone, Locale.US)
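
If the XXX suffix is the trigger, it should be reproducible outside Spark. Below is a minimal standalone probe (my own sketch, not from the original investigation; the object name is made up): on commons-lang3 3.8.1 it prints a formatted timestamp, while on old copies of commons-lang3 that predate support for the X pattern letter it throws the same Illegal pattern component: XXX.

import java.util.{Locale, TimeZone}

import org.apache.commons.lang3.time.FastDateFormat

object TimestampPatternProbe {
  def main(args: Array[String]): Unit = {
    // Same default pattern that Spark's JSONOptions uses.
    val pattern = "yyyy-MM-dd'T'HH:mm:ss.SSSXXX"
    // Throws IllegalArgumentException("Illegal pattern component: XXX") on old commons-lang3 copies.
    val fmt = FastDateFormat.getInstance(pattern, TimeZone.getTimeZone("UTC"), Locale.US)
    println(fmt.format(System.currentTimeMillis()))
  }
}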

Open the commons-lang3-3.8.1.jar that the Spark source depends on and search the FastDatePrinter class for the keyword Illegal pattern component. It is found, and in exactly one place, but on closer inspection the line number does not match the stack trace, which means the FastDateFormat that Spark loaded is not the 3.8.1 version. So where does it come from?

protected List<Rule> parsePattern() {
    // ... omitted
            case 'M': // month in year (text and number)
                if (tokenLen >= 4) {
                    rule = new TextField(Calendar.MONTH, months);
                } else if (tokenLen == 3) {
                    rule = new TextField(Calendar.MONTH, shortMonths);
                } else if (tokenLen == 2) {
                    rule = TwoDigitMonthField.INSTANCE;
                } else {
                    rule = UnpaddedMonthField.INSTANCE;
                }
                break;
            case 'd': // day in month (number)
                rule = selectNumberRule(Calendar.DAY_OF_MONTH, tokenLen);
                break;
            // ... omitted
            default:
                throw new IllegalArgumentException("Illegal pattern component: " + token);
        }

        rules.add(rule);
    }

    return rules;
}
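
As an aside, instead of diffing line numbers against a decompiled class, a quicker check (a sketch of my own, not what was done here) is to ask the JVM which jar the loaded FastDateFormat actually came from and what version its manifest claims:

// Where was FastDateFormat loaded from, and what version does the bundling jar declare?
val clazz = classOf[org.apache.commons.lang3.time.FastDateFormat]
println(clazz.getProtectionDomain.getCodeSource.getLocation)
// May print null if the jar's manifest carries no Implementation-Version (common for fat jars).
println(clazz.getPackage.getImplementationVersion)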

First, check whether the user's own code pulls in commons-lang3. Unpacking their fat jar does show org.apache.commons.lang3.time.FastDateFormat, but it is also 3.8.1, so the problem does not come from there. Next, search the Spark installation with find . -name "commons-lang3*": there is only one commons-lang3-3.8.1.jar, under ${SPARK_HOME}/jars/. Searching the Hadoop directories also turns up a commons-lang3, but after downloading and decompiling it the line numbers still do not match, so that is not the jar being loaded either. Strange that the source of the class could not be found. I wanted to inspect it with arthas, but there was no telling which node the executor would start on, and the job failed and exited soon after starting anyway.

Next attempt: add a JVM option to print the classes loaded by the Spark driver and executors. The driver log shows it loading the correct commons-lang3-3.8.1.jar; the executor log does not contain the entry I was looking for, perhaps because FastDateFormat had not been used yet when the verbose output was produced? Here is how the JVM option is passed via spark-submit:

spark-submit \
  --master yarn \
  --driver-memory 4G \
  --name 'AppName' \
  --conf 'spark.driver.extraJavaOptions=-verbose:class' \
  --conf 'spark.executor.extraJavaOptions=-verbose:class' \
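
In hindsight, another way to get this information without patching anything is to run a throwaway job from the application itself and have each executor report where it loads FastDateFormat from. A rough sketch (assumes a SparkSession named spark is in scope; the element and partition counts are arbitrary, just enough to touch every executor):

// Ask the executors (not the driver) which jar supplies FastDateFormat on their classpath.
val origins = spark.sparkContext
  .parallelize(1 to 1000, 50)
  .mapPartitions { _ =>
    val clazz = Class.forName("org.apache.commons.lang3.time.FastDateFormat")
    Iterator(clazz.getProtectionDomain.getCodeSource.getLocation.toString)
  }
  .distinct()
  .collect()
origins.foreach(println)

If the executors report a different jar than the driver, that alone would explain why only tasks fail.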

The user was pressing and there was no time left, so I changed the code to report where FastDateFormat comes from, and at the same time, when the exception occurs, to recreate the FastDateFormat object without the XXX component, so that missing support for XXX no longer makes the program fail and exit.

// Modify the org.apache.spark.sql.catalyst.json.JSONOptions source to catch the exception
// thrown while instantiating timestampFormat, and log the message and where the class was loaded from
logWarning("==============>>>" + e.getMessage)
val clazz = FastDateFormat.getInstance().getClass
val location = clazz.getResource('/' + clazz.getName.replace('.', '/') + ".class")
logWarning("resource location: " + location.toString)

After rebuilding and replacing the corresponding Spark jar, the job ran successfully and the exception was captured: FastDateFormat comes from hive-exec-1.2.1.spark2.jar. Only then did I remember that I had forgotten to search for the class inside the jars; besides matching jar names against commons-lang3*, the jar contents need to be searched as well, and hive-exec happens to be a fat jar.

23/02/08 17:12:39 WARN JSONOptions: ==============>>>Illegal pattern component: XXX
23/02/08 17:12:39 WARN JSONOptions: resource location: jar:file:/data/hadoop/yarn/local/usercache/…/__spark_libs__1238265929018908261.zip/hive-exec-1.2.1.spark2.jar!/org/apache/commons/lang3/time/FastDateFormat.class
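
The lesson is to search jar contents, not just jar file names. A small sketch of my own (the object name and default path are examples; assumes local read access to the jars directory):

import java.io.File
import java.util.jar.JarFile

// List every jar in a directory that bundles a given class entry, fat jars included.
object FindClassInJars {
  def main(args: Array[String]): Unit = {
    val dir = new File(args.headOption.getOrElse("/opt/spark/jars"))  // example path
    val entry = "org/apache/commons/lang3/time/FastDateFormat.class"
    val jars = Option(dir.listFiles()).getOrElse(Array.empty[File]).filter(_.getName.endsWith(".jar"))
    for (jarFile <- jars) {
      val jar = new JarFile(jarFile)
      try {
        if (jar.getEntry(entry) != null) println(jarFile.getAbsolutePath)
      } finally {
        jar.close()
      }
    }
  }
}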

Spark's commons-lang3-3.8.1.jar and hive-exec-1.2.1.spark2.jar both sit in ${SPARK_HOME}/jars/. A quick web search suggests that in this situation the class load order depends on the order of the jars in CLASSPATH and even on their creation time (I could not find an authoritative article). I tried the spark-submit option --conf 'spark.executor.extraClassPath=commons-lang3-3.8.1.jar' to load that jar first, but the job still failed. Later, while reading the logs, I happened to notice that the path in CLASSPATH was wrong (see the log below); it appears that all spark.executor.extraClassPath does is put the given jar at the front of the classpath so that it gets loaded first.

export SPARK_YARN_STAGING_DIR="hdfs://nnHA/user/p55_u34_tsp_caihong/.sparkStaging/application_1670726876109_157924"
export CLASSPATH="commons-lang3-3.8.1.jar:$PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*:/etc/hadoop/conf:/usr/lib/hadoop/libs/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*:$PWD/__spark_conf__/__hadoop_conf__"

spark.executor.extraClassPath must be an absolute path. After rewriting the spark-submit command with the full path, the job ran again with no more errors. One question remains: why did it only fail intermittently? Even with both jars in the jars directory, one would expect a fixed load order. If you know the answer, please leave a comment; I would be grateful. Either patching the source or setting spark.executor.extraClassPath solves this problem.

spark-submit \
  --master yarn \
  --driver-memory 4G \
  --name 'AppName' \
  --conf 'spark.driver.extraJavaOptions=-verbose:class' \
  --conf 'spark.executor.extraJavaOptions=-verbose:class' \
  --conf "spark.executor.extraClassPath=${SPARK_HOME}/jars/commons-lang3-3.8.1.jar" \
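
To double-check that extraClassPath took effect, and to see every copy of the class the executor classloader can reach in the order it searches them, something like the following can be run inside a task (a sketch; with the default parent-first delegation, the first URL printed is the copy that wins):

import scala.collection.JavaConverters._

// Every copy of FastDateFormat visible to the current classloader, in lookup order.
val copies = Thread.currentThread().getContextClassLoader
  .getResources("org/apache/commons/lang3/time/FastDateFormat.class")
  .asScala
  .toList
copies.foreach(println)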
