How can I link PyCharm with PySpark?

I'm new to Apache Spark, and apparently I installed apache-spark with Homebrew on my MacBook:

Last login: Fri Jan  8 12:52:04 on console
user@MacBook-Pro-de-User-2:~$ pyspark
Python 2.7.10 (default, Jul 13 2015, 12:05:58)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/01/08 14:46:44 INFO SparkContext: Running Spark version 1.5.1
16/01/08 14:46:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/01/08 14:46:47 INFO SecurityManager: Changing view acls to: user
16/01/08 14:46:47 INFO SecurityManager: Changing modify acls to: user
16/01/08 14:46:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); users with modify permissions: Set(user)
16/01/08 14:46:50 INFO Slf4jLogger: Slf4jLogger started
16/01/08 14:46:50 INFO Remoting: Starting remoting
16/01/08 14:46:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.64:50199]
16/01/08 14:46:51 INFO Utils: Successfully started service 'sparkDriver' on port 50199.
16/01/08 14:46:51 INFO SparkEnv: Registering MapOutputTracker
16/01/08 14:46:51 INFO SparkEnv: Registering BlockManagerMaster
16/01/08 14:46:51 INFO DiskBlockManager: Created local directory at /private/var/folders/5x/k7n54drn1csc7w0j7vchjnmc0000gn/T/blockmgr-769e6f91-f0e7-49f9-b45d-1b6382637c95
16/01/08 14:46:51 INFO MemoryStore: MemoryStore started with capacity 530.0 MB
16/01/08 14:46:52 INFO HttpFileServer: HTTP File server directory is /private/var/folders/5x/k7n54drn1csc7w0j7vchjnmc0000gn/T/spark-8e4749ea-9ae7-4137-a0e1-52e410a8e4c5/httpd-1adcd424-c8e9-4e54-a45a-a735ade00393
16/01/08 14:46:52 INFO HttpServer: Starting HTTP Server
16/01/08 14:46:52 INFO Utils: Successfully started service 'HTTP file server' on port 50200.
16/01/08 14:46:52 INFO SparkEnv: Registering OutputCommitCoordinator
16/01/08 14:46:52 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/01/08 14:46:52 INFO SparkUI: Started SparkUI at http://192.168.1.64:4040
16/01/08 14:46:53 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/01/08 14:46:53 INFO Executor: Starting executor ID driver on host localhost
16/01/08 14:46:53 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50201.
16/01/08 14:46:53 INFO NettyBlockTransferService: Server created on 50201
16/01/08 14:46:53 INFO BlockManagerMaster: Trying to register BlockManager
16/01/08 14:46:53 INFO BlockManagerMasterEndpoint: Registering block manager localhost:50201 with 530.0 MB RAM, BlockManagerId(driver, localhost, 50201)
16/01/08 14:46:53 INFO BlockManagerMaster: Registered BlockManager
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.5.1
      /_/


Using Python version 2.7.10 (default, Jul 13 2015 12:05:58)
SparkContext available as sc, HiveContext available as sqlContext.
>>>

I want to start playing with it in order to learn more about MLlib. However, I write my scripts in Python using PyCharm. The problem is: when I go to PyCharm and try to call pyspark, PyCharm cannot find the module. I tried adding the path to PyCharm as follows:

[screenshot: can't link PyCharm with Spark]

Then, following a blog post, I tried this:

import os
import sys

# Path for spark source folder
os.environ['SPARK_HOME'] = "/Users/user/Apps/spark-1.5.2-bin-hadoop2.4"

# Append pyspark to Python Path
sys.path.append("/Users/user/Apps/spark-1.5.2-bin-hadoop2.4/python/pyspark")

try:
    from pyspark import SparkContext
    from pyspark import SparkConf
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
    sys.exit(1)

And I still can't get started with PySpark in PyCharm. Is there any way to "link" PyCharm with apache-pyspark?

UPDATE:

Then I searched for the apache-spark and python paths in order to set the environment variables in PyCharm:

apache-spark path:

user@MacBook-Pro-User-2:~$ brew info apache-spark
apache-spark: stable 1.6.0, HEAD
Engine for large-scale data processing
https://spark.apache.org/
/usr/local/Cellar/apache-spark/1.5.1 (649 files, 302.9M) *
Poured from bottle
From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/apache-spark.rb

Python path:

user@MacBook-Pro-User-2:~$ brew info python
python: stable 2.7.11 (bottled), HEAD
Interpreted, interactive, object-oriented programming language
https://www.python.org
/usr/local/Cellar/python/2.7.10_2 (4,965 files, 66.9M) *

Then, with the information above, I tried to set the environment variables as follows:

[screenshot: configuration 1]

Is there any way to link PyCharm with pyspark correctly?

Then, when I run a python script with the configuration above, I get this exception:

/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/user/PycharmProjects/spark_examples/test_1.py
Traceback (most recent call last):
File "/Users/user/PycharmProjects/spark_examples/test_1.py", line 1, in <module>
from pyspark import SparkContext
ImportError: No module named pyspark

UPDATE: Then I tried the configurations proposed by @zero323

Configuration 1:

/usr/local/Cellar/apache-spark/1.5.1/

[screenshot: conf 1]

Output:

 user@MacBook-Pro-de-User-2:/usr/local/Cellar/apache-spark/1.5.1$ ls
CHANGES.txt           NOTICE                libexec/
INSTALL_RECEIPT.json  README.md
LICENSE               bin/

Configuration 2:

/usr/local/Cellar/apache-spark/1.5.1/libexec

[screenshot: configuration 2]

Output:

user@MacBook-Pro-de-User-2:/usr/local/Cellar/apache-spark/1.5.1/libexec$ ls
R/        bin/      data/     examples/ python/
RELEASE   conf/     ec2/      lib/      sbin/

From the documentation:

To run Spark applications in Python, use the bin/spark-submit script located in the Spark directory. This script will load Spark’s Java/Scala libraries and allow you to submit applications to a cluster. You can also use bin/pyspark to launch an interactive Python shell.

You are invoking your script directly with the CPython interpreter, which I think is causing problems.

Try running your script with:

"${SPARK_HOME}"/bin/spark-submit test_1.py

If that works, you should be able to get it working in PyCharm by setting the project's interpreter to spark-submit.
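
For reference, here is a minimal sketch of what such a test_1.py could look like (only the SparkContext import comes from the question's traceback; the RDD example is just an illustration):

# test_1.py - a minimal script to try with spark-submit (illustrative sketch)
from pyspark import SparkConf, SparkContext

# spark-submit puts pyspark and Py4J on the path, so no sys.path hacks are needed here
conf = SparkConf().setAppName("test_1").setMaster("local[*]")
sc = SparkContext(conf=conf)

rdd = sc.parallelize(range(100))
print(rdd.sum())  # should print 4950

sc.stop()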

With PySpark package (Spark 2.2.0 and later)

With SPARK-1267 being merged you should be able to simplify the process by pip installing Spark in the environment you use for PyCharm development.

  1. Go to File -> Settings -> Project Interpreter
  2. Click on install button and search for PySpark


  3. Click on install package button.
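
Once the package is installed, a quick way to check that everything is wired up is a small script like the sketch below (assuming Spark 2.2+, where SparkSession is the entry point; the app name is arbitrary):

# sanity check after pip-installing pyspark (illustrative only)
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")
         .appName("pycharm-check")
         .getOrCreate())

df = spark.range(5)   # tiny DataFrame with ids 0..4
print(df.count())     # expected output: 5

spark.stop()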

Manually with user provided Spark installation

Create Run configuration:

  1. Go to Run -> Edit configurations
  2. Add new Python configuration
  3. Set Script path so it points to the script you want to execute
  4. Edit Environment variables field so it contains at least:

    • SPARK_HOME - it should point to the directory with Spark installation. It should contain directories such as bin (with spark-submit, spark-shell, etc.) and conf (with spark-defaults.conf, spark-env.sh, etc.)
    • PYTHONPATH - it should contain $SPARK_HOME/python and optionally $SPARK_HOME/python/lib/py4j-some-version.src.zip if not available otherwise. some-version should match Py4J version used by a given Spark installation (0.8.2.1 - 1.5, 0.9 - 1.6, 0.10.3 - 2.0, 0.10.4 - 2.1, 0.10.4 - 2.2, 0.10.6 - 2.3, 0.10.7 - 2.4)


  5. Apply the settings

Add PySpark library to the interpreter path (required for code completion):

  1. Go to File -> Settings -> Project Interpreter
  2. Open settings for an interpreter you want to use with Spark
  3. Edit interpreter paths so it contains the path to $SPARK_HOME/python (and Py4J if required)
  4. Save the settings

Optionally

  1. Install, or add to the interpreter path, type annotations matching the installed Spark version to get better completion and static error detection (Disclaimer - I am an author of the project).

Finally

Use newly created configuration to run your script.
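
If you want the script to fail fast when the Run configuration is wrong, a small sanity check at the top can help. This is only a sketch of the idea, checking for the directories mentioned above:

import os
import sys

# Fail early with a readable message if SPARK_HOME is missing or malformed
spark_home = os.environ.get("SPARK_HOME")
if not spark_home:
    sys.exit("SPARK_HOME is not set in the Run configuration")
for sub in ("bin", "python"):
    if not os.path.isdir(os.path.join(spark_home, sub)):
        sys.exit("SPARK_HOME does not look like a Spark installation: missing " + sub)

from pyspark import SparkContext  # resolves only if PYTHONPATH was set as described

sc = SparkContext(master="local[*]", appName="run-config-check")
print(sc.version)
sc.stop()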

I used the following page as a reference and was able to get pyspark/Spark 1.6.1 (installed via homebrew) imported in PyCharm 5.

http://renien.com/blog/accessing-pyspark-pycharm/

import os
import sys

# Path for spark source folder
os.environ['SPARK_HOME'] = "/usr/local/Cellar/apache-spark/1.6.1"

# Append pyspark to Python Path
sys.path.append("/usr/local/Cellar/apache-spark/1.6.1/libexec/python")

try:
    from pyspark import SparkContext
    from pyspark import SparkConf
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
    sys.exit(1)

With the above, pyspark loads, but I get a gateway error when I try to create a SparkContext. There's some issue with Spark from Homebrew, so I just grabbed Spark from the Spark website (download the Pre-built for Hadoop 2.6 and later) and pointed to the spark and py4j directories under that. Here's the code in PyCharm that works!

import os
import sys

# Path for spark source folder
os.environ['SPARK_HOME'] = "/Users/myUser/Downloads/spark-1.6.1-bin-hadoop2.6"

# Need to explicitly point to python3 if you are using Python 3.x
os.environ['PYSPARK_PYTHON'] = "/usr/local/Cellar/python3/3.5.1/bin/python3"

# You might need to enter your local IP
# os.environ['SPARK_LOCAL_IP'] = "192.168.2.138"

# Path for pyspark and py4j
sys.path.append("/Users/myUser/Downloads/spark-1.6.1-bin-hadoop2.6/python")
sys.path.append("/Users/myUser/Downloads/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip")

try:
    from pyspark import SparkContext
    from pyspark import SparkConf
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
    sys.exit(1)

sc = SparkContext('local')
words = sc.parallelize(["scala", "java", "hadoop", "spark", "akka"])
print(words.count())

I had a lot of help from these instructions, which helped me troubleshoot in PyDev and then get it working in PyCharm - https://enahwe.wordpress.com/2015/11/25/how-to-configure-eclipse-for-developing-with-python-and-spark-on-hadoop/

I'm sure somebody has spent a few hours bashing their head against their monitor trying to get this working, so hopefully this helps save their sanity!

Here's how I solved this on mac osx.

  1. brew install apache-spark
  2. Add this to ~/.bash_profile

    export SPARK_VERSION=`ls /usr/local/Cellar/apache-spark/ | sort | tail -1`
    export SPARK_HOME="/usr/local/Cellar/apache-spark/$SPARK_VERSION/libexec"
    export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
    export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH
    
  3. Add pyspark and py4j to content root (use the correct Spark version):

    /usr/local/Cellar/apache-spark/1.6.1/libexec/python/lib/py4j-0.9-src.zip
    /usr/local/Cellar/apache-spark/1.6.1/libexec/python/lib/pyspark.zip
    


Check out this video.

Assume your spark python directory is: /home/user/spark/python

Assume your Py4j source is: /home/user/spark/python/lib/py4j-0.9-src.zip

Basically, you add the spark python directory and the py4j directory within it to the interpreter paths. I don't have enough reputation to post a screenshot or I would.
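
If you cannot change the interpreter paths, the same two entries can also be appended to sys.path at the top of the script; here is an equivalent sketch using the example paths above (adjust them to your installation):

import sys

# Same two entries the interpreter-path approach adds, done in code instead
spark_python = "/home/user/spark/python"
py4j_zip = "/home/user/spark/python/lib/py4j-0.9-src.zip"
sys.path[:0] = [spark_python, py4j_zip]

from pyspark import SparkContext  # should now resolve

Note that this only helps at runtime; for code completion inside PyCharm you still want the interpreter paths set as described above.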

In the video, the user creates a virtual environment within PyCharm itself; however, you can create the virtual environment outside of PyCharm (or activate a pre-existing one), start PyCharm with it, and then add those paths to that virtual environment's interpreter paths from within PyCharm.

I had used other methods to add Spark via bash environment variables, which works great outside of PyCharm, but for some reason those variables weren't recognized within PyCharm; this method worked perfectly.

I followed the tutorials on-line and added the env variables to .bashrc:

# add pyspark to python
export SPARK_HOME=/home/lolo/spark-1.6.1
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH

I then just copied the values of SPARK_HOME and PYTHONPATH into PyCharm:

(srz-reco)lolo@K:~$ echo $SPARK_HOME
/home/lolo/spark-1.6.1
(srz-reco)lolo@K:~$ echo $PYTHONPATH
/home/lolo/spark-1.6.1/python/lib/py4j-0.9-src.zip:/home/lolo/spark-1.6.1/python/:/home/lolo/spark-1.6.1/python/lib/py4j-0.9-src.zip:/home/lolo/spark-1.6.1/python/:/python/lib/py4j-0.8.2.1-src.zip:/python/:

Then I copied it to Run/Debug Configurations -> Environment variables of the script.

You need to set up PYTHONPATH and SPARK_HOME before you launch the IDE or Python.

On Windows, edit the environment variables and add the Spark python directory and py4j to PYTHONPATH:

PYTHONPATH=%PYTHONPATH%;{py4j};{spark python}

On Unix:

export PYTHONPATH=${PYTHONPATH}:{py4j}:{spark/python}

Configure pyspark in pycharm (windows)

File menu - settings - project interpreter - (gear shape) - more - (tree below funnel) - (+) - [add the python folder from the spark installation and then py4j-*.zip] - click ok

Ensure SPARK_HOME is set in the Windows environment; PyCharm will pick it up from there. To confirm:

Run menu - edit configurations - environment variables - [...] - show

Optionally set SPARK_CONF_DIR in environment variables.
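
A quick way to confirm from inside PyCharm that the variables were actually picked up is to print them from a script run with that configuration (illustrative check only):

import os

# Print what the interpreter launched by PyCharm actually sees
for name in ("SPARK_HOME", "PYTHONPATH", "SPARK_CONF_DIR"):
    print("{0}={1}".format(name, os.environ.get(name, "<not set>")))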

Here is the setup that works for me (Win7 64bit, PyCharm2017.3CE)

Set up Intellisense:

  1. Click File -> Settings -> Project: -> Project Interpreter

  2. Click the gear icon to the right of the Project Interpreter dropdown

  3. Click More... from the context menu

  4. Choose the interpreter, then click the "Show Paths" icon (bottom right)

  5. Click the + icon to add the following paths:

    \python\lib\py4j-0.9-src.zip

    \bin\python\lib\pyspark.zip

  6. Click OK, OK, OK

Go ahead and test your new intellisense capabilities.

The easiest way is:

Go to the site-packages folder of your anaconda/python installation and copy-paste the pyspark and pyspark.egg-info folders there.

Restart PyCharm to update the index. The two folders mentioned above are present in the spark/python folder of your Spark installation. This way you'll also get code completion suggestions from PyCharm.

The site-packages folder can easily be found in your Python installation. In Anaconda it's under anaconda/lib/pythonx.x/site-packages.
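
If you are not sure where that site-packages folder is for the interpreter PyCharm uses, Python itself can tell you (illustrative one-liner, assuming a standard CPython/Anaconda install):

import site
import sys

# Directories where the copied pyspark folders will be picked up from
print(sys.prefix)
print(site.getsitepackages())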

I use conda to manage my Python packages. So all I did in a terminal outside PyCharm was:

conda install pyspark

or, if you want an earlier version, say 2.2.0, then do:

conda install pyspark=2.2.0

This automatically pulls in py4j as well. PyCharm then no longer complained about import pyspark... and code completion also worked. Note my PyCharm project was already configured to use the Python interpreter that comes with Anaconda.

The simplest way is to install PySpark through project interpreter.

  1. Go to File - Settings - Project - Project Interpreter
  2. Click on the + icon on top right.
  3. Search for PySpark and other packages you want to install
  4. Finally click install package
  5. It's done!!

I tried to add the pyspark module via the Project Interpreter menu, but that was not enough... there are a number of system environment variables that need to be set, like SPARK_HOME and a path to /hadoop/bin/winutils.exe, in order to read local data files. You also need to be using the correct versions of Python, JRE, and JDK, all available in the system environment variables and PATH. After googling a lot, the instructions in these videos worked.

I used PyCharm to link Python and Spark. I had Java and Spark pre-installed on my PC.

These are the steps I followed

  1. Create New project

  2. In Settings for New Project --> I selected Python3.7(venv) as my Python. This is the python.exe file present in the venv folder inside my new project. You can use any Python available on your PC.

  3. In settings --> Project structure --> Add Content_Root

    I added these two zip files from the Spark installation as content roots:

    1. C:\Users\USER\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\py4j-0.10.8.1-src.zip
    2. C:\Users\USER\spark-3.0.0-preview2-bin-hadoop2.7\python\lib\pyspark.zip
  4. Create a python file inside the new project. Then go to Edit Configurations (in the upper-right dropdown) and select Environment Variables

  5. I used the below environment variables and it worked fine for me

    1. PYTHONUNBUFFERED 1
    2. JAVA_HOME C:\Program Files\Java\jre1.8.0_251
    3. PYSPARK_PYTHON C:\Users\USER\PycharmProjects\pyspark\venv\Scripts\python.exe
    4. SPARK_HOME C:\Users\USER\spark-3.0.0-preview2-bin-hadoop2.7
    5. HADOOP_HOME C:\Users\USER\winutils

    you may want to additionally download winutils.exe and place it in the path C:\Users\USER\winutils\bin

  6. Give the same environment variables inside Edit Configurations--> Templates

  7. Go to Settings--> Project Interpreter --> import pyspark

  8. Run your first pyspark program!
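
As an illustration of such a first program (the data is made up; any small example will do once the environment variables above are in place):

from pyspark.sql import SparkSession

# SPARK_HOME / HADOOP_HOME / PYSPARK_PYTHON come from the Run configuration above
spark = (SparkSession.builder
         .master("local[*]")
         .appName("first-pyspark-program")
         .getOrCreate())

df = spark.createDataFrame(
    [("scala", 1), ("java", 2), ("pyspark", 3)],
    ["name", "rank"],
)
df.show()

spark.stop()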

This tutorial from pyspark_xray, a tool that enables debugging pyspark code on PyCharm, can answer your question. It covers both Windows and Mac.

Preparation

  • Open a command line and kick off the java command; if you get an error, download and install Java (version 1.8.0_221 as of April 2020)
  • If you don't have it, download and install PyCharm Community edition (version 2020.1 as of April 2020)
  • If you don't have it, download and install Anaconda Python 3.7 runtime
  • Download and install the latest Spark, Pre-built for Apache Hadoop (spark-2.4.5-bin-hadoop2.7 as of April 2020, 200+ MB), locally
    • Windows:
      • if you don't have unzip tool, please download and install 7zip, a free tool to zip/unzip files
      • extract contents of spark tgz file to c:\spark-x.x.x-bin-hadoopx.x folder
      • follow the steps in this tutorial
        • install winutils.exe into the c:\spark-x.x.x-bin-hadoopx.x\bin folder; without this executable, you will run into errors when writing engine output
    • Mac:
      • extract contents of the spark tgz file to the /Users/[USERNAME]/spark-x.x.x-bin-hadoopx.x folder
  • install pyspark by pip install pyspark or conda install pyspark

Run Configuration

You run a Spark application on a cluster from the command line by issuing the spark-submit command, which submits a Spark job to the cluster. But from PyCharm or another IDE on a local laptop or PC, spark-submit cannot be used to kick off a Spark job. Instead, follow these steps to set up a Run Configuration for pyspark_xray's demo_app in PyCharm

  • Set Environment Variables:
    • set HADOOP_HOME value to C:\spark-2.4.5-bin-hadoop2.7
    • set SPARK_HOME value to C:\spark-2.4.5-bin-hadoop2.7
  • use Github Desktop or other git tools to clone pyspark_xray from Github
  • PyCharm > Open pyspark_xray as project
  • Open PyCharm > Run > Edit Configurations > Defaults > Python and enter the following values:
    • Environment variables (Windows): PYTHONUNBUFFERED=1;PYSPARK_PYTHON=python;PYTHONPATH=$SPARK_HOME/python;PYSPARK_SUBMIT_ARGS=pyspark-shell;
  • Open PyCharm > Run > Edit Configurations, create a new Python configuration, point the script to the path of driver.py of pyspark_xray > demo_app

[screenshot: driver run configuration]

Go to Project Structure:

Option 1: File -> Settings -> Project: -> Project Structure

Option 2: PyCharm -> Preferences -> Project: -> Project Structure

Add Content Root: all ZIP files from $SPARK_HOME/python/lib
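
To see exactly which ZIP files that means on your machine, a small helper like this can list them (it assumes SPARK_HOME is already set in your shell or Run configuration):

import glob
import os

spark_home = os.environ["SPARK_HOME"]

# These archives (typically pyspark.zip and py4j-*-src.zip) are the ones to add as Content Roots
for zip_path in sorted(glob.glob(os.path.join(spark_home, "python", "lib", "*.zip"))):
    print(zip_path)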

For the latest Spark and Python versions on macOS, as below:

SPARK_VERSION=3.1.1
PY4J=0.10.9
PYTHON=3.8.12

Add the environment variables below for SPARK_HOME and PYTHONPATH, as well as PYENV_ROOT, to ~/.bash_profile. In addition, SPARK_HOME and PYENV_ROOT are added to PATH.

export SPARK_VERSION=`ls /usr/local/Cellar/apache-spark/ | sort | tail -1`
export SPARK_HOME=/usr/local/Cellar/apache-spark/$SPARK_VERSION/libexec
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.10.9-src.zip:$PYTHONPATH


export PYENV_ROOT=/usr/local/opt/pyenv
export PATH=$PYENV_ROOT/bin:$PATH


if command -v pyenv 1>/dev/null 2>&1; then
  eval "$(pyenv init -)"
fi

Under Project -> Preferences -> Python Interpreter, add the PyEnv Python as a new Interpreter and use it instead of the default interpreter.

Under Add Python Interpreter, go to Virtual Environment -> Under Existing Environment -> select /usr/local/opt/pyenv/versions/3.8.12/bin/python as the Python interpreter for the PySpark project.

In the Python code, add the code block below at the beginning (NOTE: pyspark, findspark and py4j need to be installed as packages beforehand):

import findspark
findspark.init("/usr/local/Cellar/apache-spark/3.1.1/libexec")

from pyspark import SparkContext
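
From there a context can be created as usual; here is a minimal, self-contained continuation of the block above (the small RDD is just an example):

import findspark
findspark.init("/usr/local/Cellar/apache-spark/3.1.1/libexec")  # same path as above

from pyspark import SparkContext

sc = SparkContext(master="local[*]", appName="pyenv-pyspark-check")
print(sc.version)                        # 3.1.1 for the setup above
print(sc.parallelize([1, 2, 3]).sum())   # 6
sc.stop()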