Do I need a special version of OpenJDK to support the new Apple Silicon chips?
I see that there are currently JDK downloads for macOS/OS X, but these appear to be for x86 processors only. Is that correct? If so, where can I download an OpenJDK build for the M1?
Current answer
I got JDK 16 running successfully on my MacBook Air (M1) by following these steps:
1. Go to Oracle.com.
2. Navigate to Products → Software → Java.
3. Click "Download Java now", then "JDK Download".
4. Select the macOS installer.
5. Install the JDK.
6. Try any sample Java program (see the sketch below); it should work for you.
It installed and ran successfully on my MacBook Air (M1).
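If you also want to confirm that the installed JDK runs natively on Apple Silicon rather than being translated by Rosetta 2, a minimal sketch like the one below can help (the class name HelloArch is just an example, not part of the Oracle instructions):

public class HelloArch {
    public static void main(String[] args) {
        // "aarch64" indicates a native Apple Silicon JVM;
        // "x86_64" indicates an Intel build running under Rosetta 2.
        System.out.println("os.arch      = " + System.getProperty("os.arch"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
        System.out.println("java.version = " + System.getProperty("java.version"));
    }
}

Compile and run it with javac HelloArch.java && java HelloArch; a native M1 build should print aarch64.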
Other answers
Here are the steps to install Oracle JDK 8 and run it under Rosetta: https://www.oracle.com/in/java/technologies/javase/javase-jdk8-downloads.html
1. Download the macOS x64 version.
2. When you try to install the package, you will be prompted to install Rosetta if it is not already present.
3. The remaining installation steps are the same as for any other package.
You can verify that it works by opening a terminal and typing the following command:
java -version
Go to the Azul website and download the .dmg file:
https://www.azul.com/downloads/zulu-community/?os=macos&architecture=arm-64-bit&package=jdk
It will be placed in the Library folder; once IntelliJ IDEA recognizes it, you should be good to go.
I have tried the Azul JDK 8.
I just want to say that although the Azul JDK runs natively on Apple M1 and is very fast, there are still problems, particularly when Java code needs to call into C++ code.
For example, I am a big data developer. I started using the Azul JDK for my development workflow, but I noticed that certain tests began failing after the switch. For example, tests that write Parquet/Avro files fail. I believe this is because some native parts of Parquet/Avro are written in C++ and were not compiled for the M1.
For this particular reason I was forced to use the slower non-M1 JDK, which works without any problems.
Here is an example of an error I got with Azul that I did not get with a non-M1 JDK:
- convert Base64 JSON back to rpo Avro *** FAILED ***
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task 0.0 in stage 10.0 (TID 14, localhost, executor driver): org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Mac and os.arch=aarch64
at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
at org.apache.avro.file.SnappyCodec.compress(SnappyCodec.java:43)
at org.apache.avro.file.DataFileStream$DataBlock.compressUsing(DataFileStream.java:358)
at org.apache.avro.file.DataFileWriter.writeBlock(DataFileWriter.java:382)
at org.apache.avro.file.DataFileWriter.sync(DataFileWriter.java:401)
at org.apache.avro.file.DataFileWriter.flush(DataFileWriter.java:410)
at org.apache.avro.file.DataFileWriter.close(DataFileWriter.java:433)
at org.apache.avro.mapred.AvroOutputFormat$1.close(AvroOutputFormat.java:170)
at org.apache.spark.internal.io.SparkHadoopWriter.close(SparkHadoopWriter.scala:101)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12$$anonfun$apply$5.apply$mcV$sp(PairRDDFunctions.scala:1145)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1393)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1145)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1125)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1499)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1487)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1486)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1486)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814)
...
Cause: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Mac and os.arch=aarch64
at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
at org.apache.avro.file.SnappyCodec.compress(SnappyCodec.java:43)
at org.apache.avro.file.DataFileStream$DataBlock.compressUsing(DataFileStream.java:358)
at org.apache.avro.file.DataFileWriter.writeBlock(DataFileWriter.java:382)
at org.apache.avro.file.DataFileWriter.sync(DataFileWriter.java:401)
at org.apache.avro.file.DataFileWriter.flush(DataFileWriter.java:410)
at org.apache.avro.file.DataFileWriter.close(DataFileWriter.java:433)
As you can see, it says: Cause: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Mac and os.arch=aarch64
I googled the issue, and apparently the native library is only compiled for this architecture in later versions of Spark, unfortunately.
This frustrated me so much that I now want a Windows laptop, haha. Running an Intel JDK on the M1 chip can sometimes be slow, and I don't want that.
Be careful!
Update: They released new versions of their libraries with support for the M1. I updated them in my projects and everything works, thank God. Sometimes these "native code errors" manifest themselves as different exceptions, which is an additional P.I.T.A. that I have to deal with while my colleagues on Windows laptops don't. The errors can be a bit unclear at times, but if you see something about native code in the error log, or words like "jna" or "jni", it's an M1 chip problem.
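If you want to check whether the snappy-java version on your classpath ships a native aarch64 binary, a minimal sketch like the following forces the native library to load (the class name SnappyCheck is just an example; it assumes the org.xerial.snappy:snappy-java dependency is on the classpath):

import java.nio.charset.StandardCharsets;
import org.xerial.snappy.Snappy;

public class SnappyCheck {
    public static void main(String[] args) throws Exception {
        byte[] input = "hello snappy".getBytes(StandardCharsets.UTF_8);
        // The first Snappy call loads the native library; on an aarch64 JVM
        // with an old snappy-java this throws the SnappyError shown above,
        // while releases that bundle an Apple Silicon binary succeed.
        byte[] compressed = Snappy.compress(input);
        byte[] restored = Snappy.uncompress(compressed);
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}

If it throws FAILED_TO_LOAD_NATIVE_LIBRARY, upgrading snappy-java (and whatever pulls it in transitively) to a release that bundles an aarch64 macOS binary should fix it.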
Simply use the Java downloads page: in the menu, select the "Java 18" tab and then the "macOS" tab, and download the Arm 64 DMG installer.
You can verify that it works by opening a terminal and typing:
java -version
brew install openjdk
In my case, even after successfully installing OpenJDK on a MacBook Air (M1), the java command still didn't work. I ran
brew info openjdk
and the output contained a command like this:
For the system Java wrappers to find this JDK, symlink it with
sudo ln -sfn /opt/homebrew/opt/openjdk/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk.jdk
After executing that command, the java command worked.
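To double-check which installation the java command now resolves to, a short sketch like this prints the runtime's home directory (PrintHome is just an example name); with the setup above it should point at the Homebrew OpenJDK:

public class PrintHome {
    public static void main(String[] args) {
        // Shows the JDK installation backing the current java command.
        System.out.println("java.home = " + System.getProperty("java.home"));
    }
}

On JDK 11+ you can run it directly with java PrintHome.java, without a separate compile step.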