prefect_databricks.flows
Module containing flows for interacting with Databricks.
Functions
jobs_runs_submit_and_wait_for_completion
Args:
- databricks_credentials: Credentials to use for authentication with Databricks.
- tasks: Tasks to run.
- run_name: An optional name for the run. The default value is Untitled, e.g. A multitask job run.
- git_source: This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. Key-values:
    - git_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. https://github.com/databricks/databricks-cli.
    - git_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. github.
    - git_branch: Name of the branch to be checked out and used by this job. This field cannot be specified in conjunction with git_tag or git_commit. The maximum length is 255 characters, e.g. main.
    - git_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_commit. The maximum length is 255 characters, e.g. release-1.0.0.
    - git_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_tag. The maximum length is 64 characters, e.g. e0056d01.
    - git_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs.
- timeout_seconds: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. 86400.
- idempotency_token: An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see How to ensure idempotency for jobs, e.g. 8f018174-4792-40d5-bcbc-3e6a527352c8.
- access_control_list: List of permissions to set on the job.
- max_wait_seconds: Maximum number of seconds to wait for the entire flow to complete.
- poll_frequency_seconds: Number of seconds to wait in between checks for run completion.
- return_metadata: When True, the method returns a tuple of the notebook output as well as the job run metadata; by default, the method returns only the notebook output.
- job_submission_handler: An optional callable to intercept job submission.
- **jobs_runs_submit_kwargs: Additional keyword arguments to pass to jobs_runs_submit.
Returns:
- Either a dict or a tuple, depending on return_metadata:
    - task_notebook_outputs: dictionary of task keys to their corresponding notebook output; this is the only object returned by default from this method.
    - jobs_runs_metadata: dictionary containing IDs of the jobs runs tasks; this is only returned if return_metadata=True.
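The interplay of max_wait_seconds and poll_frequency_seconds can be sketched as a stdlib-only polling loop. This is an illustrative model of the flow's waiting behavior, not the library's internals; wait_for_terminal_state and get_run_state are hypothetical names, and the terminal life cycle states follow the Databricks Jobs API.

```python
import time


def wait_for_terminal_state(
    get_run_state,
    max_wait_seconds=900,
    poll_frequency_seconds=10,
):
    """Poll a run until it reaches a terminal life cycle state.

    get_run_state is a hypothetical callable standing in for the
    Databricks runs/get API call; it returns the current
    life_cycle_state string.
    """
    terminal_states = {"TERMINATED", "SKIPPED", "INTERNAL_ERROR"}
    waited = 0
    while waited <= max_wait_seconds:
        state = get_run_state()
        if state in terminal_states:
            return state
        # Sleep between checks, as poll_frequency_seconds describes.
        time.sleep(poll_frequency_seconds)
        waited += poll_frequency_seconds
    raise TimeoutError(
        f"Run did not reach a terminal state within {max_wait_seconds} seconds"
    )
```

With a fast poll frequency and a low maximum wait, this surfaces a timeout error instead of blocking a flow run indefinitely.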
jobs_runs_wait_for_completion
Args:
- run_name: The name of the jobs runs task.
- multi_task_jobs_run_id: The ID of the jobs runs task to watch.
- databricks_credentials: Credentials to use for authentication with Databricks.
- max_wait_seconds: Maximum number of seconds to wait for the entire flow to complete.
- poll_frequency_seconds: Number of seconds to wait in between checks for run completion.

Returns:
- A dict containing the jobs runs life cycle state and message.
- A dict containing IDs of the jobs runs tasks.
jobs_runs_submit_by_id_and_wait_for_completion
Args:
- databricks_credentials: Credentials to use for authentication with Databricks.
- job_id: ID of the Databricks job.
- idempotency_token: An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see How to ensure idempotency for jobs, e.g. 8f018174-4792-40d5-bcbc-3e6a527352c8.
- jar_params: A list of parameters for jobs with Spark JAR tasks, for example "jar_params": ["john doe", "35"]. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params. The JSON representation of this field (for example {"jar_params": ["john doe", "35"]}) cannot exceed 10,000 bytes.
- max_wait_seconds: Maximum number of seconds to wait for the entire flow to complete.
- poll_frequency_seconds: Number of seconds to wait in between checks for run completion.
- notebook_params: A map from keys to values for jobs with notebook task, for example "notebook_params": {"name": "john doe", "age": "35"}. The map is passed to the notebook and is accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job's base parameters. notebook_params cannot be specified in conjunction with jar_params. Use Task parameter variables to set parameters containing information about job runs. The JSON representation of this field (for example {"notebook_params": {"name": "john doe", "age": "35"}}) cannot exceed 10,000 bytes.
- python_params: A list of parameters for jobs with Python tasks, for example "python_params": ["john doe", "35"]. The parameters are passed to the Python file as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in the job setting. The JSON representation of this field (for example {"python_params": ["john doe", "35"]}) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis.
- spark_submit_params: A list of parameters for jobs with spark submit task, for example "spark_submit_params": ["--class", "org.apache.spark.examples.SparkPi"]. The parameters are passed to the spark-submit script as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in the job setting. The JSON representation of this field (for example {"spark_submit_params": ["--class", "org.apache.spark.examples.SparkPi"]}) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis.
- python_named_params: A map from keys to values for jobs with Python wheel task, for example "python_named_params": {"name": "task", "data": "dbfs:/path/to/data.json"}.
- pipeline_params: If full_refresh is set to true, trigger a full refresh on the delta live table.
- sql_params: A map from keys to values for SQL tasks, for example "sql_params": {"name": "john doe", "age": "35"}. The SQL alert task does not support custom parameters.
- dbt_commands: An array of commands to execute for jobs with the dbt task, for example "dbt_commands": ["dbt deps", "dbt seed", "dbt run"].
- job_submission_handler: An optional callable to intercept job submission.
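The constraints on these parameters (jar_params and notebook_params are mutually exclusive, and each field's JSON representation is capped at 10,000 bytes) can be checked client-side before submitting a run. A minimal sketch; validate_run_now_params is a hypothetical helper, not part of the library:

```python
import json


def validate_run_now_params(jar_params=None, notebook_params=None):
    """Hypothetical client-side check mirroring the documented constraints."""
    # jar_params cannot be specified in conjunction with notebook_params.
    if jar_params is not None and notebook_params is not None:
        raise ValueError(
            "jar_params cannot be specified in conjunction with notebook_params"
        )
    # The JSON representation of each field cannot exceed 10,000 bytes.
    for name, value in (("jar_params", jar_params), ("notebook_params", notebook_params)):
        if value is None:
            continue
        encoded = json.dumps({name: value}).encode("utf-8")
        if len(encoded) > 10_000:
            raise ValueError(f"JSON representation of {name} exceeds 10,000 bytes")
```

Failing fast locally avoids a round trip to the Databricks API for requests the service would reject anyway.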
Raises:
- DatabricksJobTerminated: Raised when the Databricks job run is terminated with a non-successful result state.
- DatabricksJobSkipped: Raised when the Databricks job run is skipped.
- DatabricksJobInternalError: Raised when the Databricks job run encounters an internal error.
Returns:
- A dictionary containing information about the completed job run.
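The mapping from a run's terminal state to the exceptions listed above can be sketched as follows. The exception classes here are local stand-ins for the ones the library raises, and raise_for_run_state is a hypothetical helper; the life cycle and result state names follow the Databricks Jobs API.

```python
# Stand-in classes for the exceptions named in the Raises section;
# the real ones are defined by prefect_databricks.
class DatabricksJobTerminated(Exception):
    pass


class DatabricksJobSkipped(Exception):
    pass


class DatabricksJobInternalError(Exception):
    pass


def raise_for_run_state(life_cycle_state, result_state=None, state_message=""):
    """Raise the documented exception for a terminal run state.

    Hypothetical dispatch: a TERMINATED run only raises when its
    result state is non-successful.
    """
    if life_cycle_state == "INTERNAL_ERROR":
        raise DatabricksJobInternalError(state_message)
    if life_cycle_state == "SKIPPED":
        raise DatabricksJobSkipped(state_message)
    if life_cycle_state == "TERMINATED" and result_state != "SUCCESS":
        raise DatabricksJobTerminated(state_message)
```

Callers of the flow can catch these exception types to distinguish a failed run from a skipped one when deciding whether to retry.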