
This article covers DataGrip, which is neither provided nor supported by Databricks. To contact the provider, see DataGrip Support.

DataGrip is an integrated development environment (IDE) for database developers that provides a query console, schema navigation, explain plans, smart code completion, real-time analysis and quick fixes, refactorings, version control integration, and other features.

This article describes how to use your local development machine to install, configure, and use DataGrip to work with databases in Azure Databricks.

This article was tested with macOS, Databricks JDBC Driver version 2.6.25, and DataGrip version 2021.1.1.

Requirements

Before you install DataGrip, your local development machine must meet the following requirements:

  • A Linux, macOS, or Windows operating system.
  • The Databricks JDBC Driver, downloaded onto your local development machine. Extract the DatabricksJDBC42.jar file from the downloaded DatabricksJDBC42-<version>.zip file.
  • An Azure Databricks cluster or SQL warehouse to connect with DataGrip.
Step 1: Install DataGrip

Download and install DataGrip.

  • Linux: Download the .zip file, extract its contents, and then follow the instructions in the Install-Linux-tar.txt file.
  • macOS: Download and run the .dmg file.
  • Windows: Download and run the .exe file.

For more information, see Install DataGrip on the DataGrip website.

Step 2: Configure the Databricks JDBC Driver for DataGrip

Set up DataGrip with information about the Databricks JDBC Driver that you downloaded earlier.

  • Start DataGrip.
  • Click File > Data Sources.
  • In the Data Sources and Drivers dialog box, click the Drivers tab.
  • Click the + (Driver) button to add a driver.
  • For Name, enter Databricks.
  • On the General tab, in the Driver Files list, click the + (Add) button.
  • Click Custom JARs.
  • Browse to and select the DatabricksJDBC42.jar file that you extracted earlier, and then click Open.
  • For Class, select com.databricks.client.jdbc.Driver.
  • Click OK.
Step 3: Connect DataGrip to your Azure Databricks databases

Use DataGrip to connect to the cluster or SQL warehouse that you want to use to access the databases in your Azure Databricks workspace.

  • In DataGrip, click File > Data Sources.

  • On the Data Sources tab, click the + (Add) button.

  • Select the Databricks driver that you added in the preceding step.

  • On the General tab, for URL, enter the value of the JDBC URL field for your Azure Databricks resource as follows:

    Cluster

  • Find the JDBC URL field value on the JDBC/ODBC tab within the Advanced Options area for your cluster. The JDBC URL should look similar to this one:

    jdbc:databricks://adb-1234567890123456.7.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/1234567890123456/1234-567890-reef123;AuthMech=3;UID=token;PWD=<personal-access-token>
    

    Important

    If the JDBC URL starts with jdbc:spark:, you must change it to jdbc:databricks: or else you will get a connection error later.

  • Replace <personal-access-token> with your personal access token for the Azure Databricks workspace.

If you do not want to store your personal access token on your local development machine, omit UID=token;PWD=<personal-access-token> from the JDBC URL and, in the Save list, select Never. You will be prompted for your User (the word token) and Password (your personal access token) each time you try to connect.

  • For Name, enter Databricks cluster.

    For more information, see Data sources and drivers dialog on the DataGrip website.

SQL warehouse

  • Find the JDBC URL field value on the Connection Details tab for your SQL warehouse. The JDBC URL should look similar to this one:

    jdbc:databricks://adb-1234567890123456.7.azuredatabricks.net:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/a123456bcde7f890;
    

    Important

    If the JDBC URL starts with jdbc:spark:, you must change it to jdbc:databricks: or else you will get a connection error later.

  • For User, enter token.

  • For Password, enter your personal access token.

    If you do not want to store your personal access token on your local development machine, leave User and Password blank and, in the Save list, select Never. You will be prompted for your User (the word token) and Password (your personal access token) each time you try to connect.

  • For Name, enter Databricks SQL warehouse.

    For more information, see Data sources and drivers dialog on the DataGrip website.

  • Click Test Connection.

Start your resource before you test the connection. Otherwise, the test might take several minutes to complete while the resource starts.

  • If the connection succeeds, on the Schemas tab, check the boxes for the schemas that you want to be able to access, for example default.

  • Click OK.

    Repeat the instructions in this step for each resource that you want DataGrip to access.
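
To confirm that a new data source works end to end, you can open a query console for it (right-click the data source and look for a New > Query Console option, or use the Jump to Query Console button described later in this article) and run a trivial statement. A minimal SQL sanity check, assuming the connection you just configured:

    -- Verify that statements execute over the connection; no tables required.
    SELECT 1;

    -- List the schemas this connection can see.
    SHOW SCHEMAS;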

Step 4: Use DataGrip to browse tables

Use DataGrip to access tables in your Azure Databricks workspace.

  • In DataGrip, in the Database window, expand your resource node, expand the schema you want to browse, and then expand tables.
  • Double-click a table. The first set of rows from the table is displayed.
  • Repeat the instructions in this step to access additional tables.

    To access tables in other schemas, in the Database window’s toolbar, click the Data Source Properties icon. In the Data Sources and Drivers dialog box, on the Schemas tab, check the box for each additional schema you want to access, and then click OK.
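
If you prefer to inspect tables from a query console rather than the Database window, Databricks SQL has equivalent statements. A short sketch; my_table is a hypothetical name, so substitute one of your own tables:

    -- List the tables in the default schema.
    SHOW TABLES IN default;

    -- Show the columns and data types of a table.
    -- (my_table is a placeholder for one of your own tables.)
    DESCRIBE TABLE default.my_table;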

Step 5: Use DataGrip to run SQL statements

Use DataGrip to load the sample diamonds table from the Sample datasets into the default database in your workspace and then query the table. For more information, see Create a table in Tutorial: Query data with notebooks. If you do not want to load a sample table, skip ahead to Next steps.

  • In DataGrip, in the Database window, with the default schema expanded, click File > New > SQL File.

  • Enter a name for the file, for example create_diamonds.

  • In the file tab, enter these SQL statements, which delete a table named diamonds if it exists, and then create a table named diamonds based on the contents of the CSV file within the specified Databricks File System (DBFS) mount point:

    DROP TABLE IF EXISTS diamonds;
    CREATE TABLE diamonds USING CSV OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", header "true");
    
  • Select the DROP TABLE statement.

  • On the file tab’s toolbar, click the Execute icon.

  • Select DROP TABLE IF EXISTS diamonds; CREATE TABLE diamon... from the drop-down list.

    To change what happens when you click the Execute icon, select Customize in the drop-down list.

  • In the Database window, double-click the diamonds table to see its data. If the diamonds table is not displayed, click the Refresh button in the window’s toolbar.
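
With the table created, you can run ordinary queries against it from the same file tab. For example, this query (a sketch; it assumes the diamonds table created above, whose CSV columns load as strings because the CREATE TABLE statement does not infer a schema) summarizes the average price for each cut:

    -- Average price per cut; cast because the CSV columns load as strings.
    SELECT cut,
           COUNT(*) AS num_diamonds,
           ROUND(AVG(CAST(price AS DOUBLE)), 2) AS avg_price
    FROM diamonds
    GROUP BY cut
    ORDER BY avg_price DESC;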

    To delete the diamonds table:

  • In DataGrip, in the Database window’s toolbar, click the Jump to Query Console button.

  • Select console (Default).

  • In the console tab, enter this SQL statement:

    DROP TABLE diamonds;
    
  • Select the DROP TABLE statement.

  • On the console tab’s toolbar, click the Execute icon. The diamonds table disappears from the list of tables. If the diamonds table does not disappear, click the Refresh button in the Database window’s toolbar.
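
If you would rather confirm the drop from SQL than from the Database window, a quick check from the same console:

    -- diamonds should no longer appear in this list after the drop.
    SHOW TABLES IN default;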

Next steps

  • Learn more about the Query console in DataGrip.
  • Learn about the Data editor in DataGrip.
  • Learn more about the various tool windows in DataGrip.
  • Learn how to search in DataGrip.
  • Learn how to export data in DataGrip.
  • Learn how to find and replace text using regular expressions in DataGrip.
Additional resources

  • DataGrip documentation
  • DataGrip Support