FAQ

1. What is Databricks in Capillary?

Databricks is Capillary’s enterprise data warehouse, where you can run SQL queries, schedule automated data exports over FTP, and build interactive visualizations in notebooks. It also lets you share data with BI tools and other database platforms.
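As a rough sketch, here is what running a query from a notebook might look like, using the `spark` session and `display` helper that the Databricks notebook runtime provides. The table and column names below are hypothetical placeholders, not real Capillary tables:

```python
# Minimal sketch of a SQL query from a Databricks notebook.
# `spark` and `display` are provided by the notebook runtime;
# `loyalty_transactions` and its columns are hypothetical placeholders.
df = spark.sql("""
    SELECT customer_id, SUM(points) AS total_points
    FROM loyalty_transactions
    GROUP BY customer_id
    ORDER BY total_points DESC
    LIMIT 10
""")
display(df)  # renders a sortable table with built-in chart options
```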

2. What are Databricks notebooks used for?

In Capillary’s Databricks environment, notebooks are your primary workspace for building data-science and machine-learning workflows. They support real-time co-authoring in languages such as Python, SQL, Scala, and R, with built-in version control and inline visualizations, so you can collaborate and present results without leaving the notebook.
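To make that concrete, the sketch below shows what a small model-training cell might look like. It uses Spark MLlib, which ships with the Databricks runtime; the table name, feature columns, and label are all assumptions for illustration:

```python
# Illustrative sketch only: the table, feature columns, and label
# below are hypothetical placeholders. `spark` is provided by the
# notebook runtime.
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

df = spark.table("customer_features")  # hypothetical table

# Combine raw columns into the single vector column MLlib expects.
assembled = VectorAssembler(
    inputCols=["visits", "avg_basket_value"],  # hypothetical features
    outputCol="features",
).transform(df)

# Fit a simple regression predicting a hypothetical spend label.
model = LinearRegression(featuresCol="features", labelCol="annual_spend").fit(assembled)
print(model.coefficients, model.intercept)
```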

3. Which languages can I use in Databricks notebooks?

Databricks notebooks support Python, SQL, Scala, R, and Markdown. This flexible mix lets you write code, document your process, and embed visualizations all in one place, making it easier to explore data and share insights with your team.
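In practice, each cell runs in the notebook’s default language, and a magic command on the first line of a cell (such as `%sql`, `%md`, `%scala`, or `%r`) switches that cell to another language. Here is a sketch of a mixed notebook, written as Python with the non-Python cells shown in comments; the SQL table is a hypothetical placeholder:

```python
# Cell 1 - default language (Python here):
print("hello from Python")

# Cell 2 - a %sql magic on the first line switches the cell to SQL:
# %sql
# SELECT COUNT(*) FROM loyalty_transactions  -- hypothetical table

# Cell 3 - a %md magic renders the cell as Markdown documentation:
# %md
# ## Findings
# Top customers are concentrated in two regions.
```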

4. How do I get access to Capillary’s Databricks workspace?

To gain access, reach out to the Capillary Access Team. If you need organization-level permissions or additional roles, your manager can raise those requests on your behalf.

5. Who should I contact with questions about Databricks?

For general access or troubleshooting, the Capillary Access Team is your first point of contact. If you encounter issues related to organization-wide settings or permissions, please loop in your direct manager.

6. Does internet connectivity impact my ability to load or run Databricks notebooks?

An active internet connection is required to load the notebook UI, enable real-time collaboration, and display visualizations. However, once you start a run, all compute executes on Capillary’s AWS-hosted clusters, so a brief network interruption won’t halt jobs already in progress.