diff --git a/examples/foundation-model-examples/chronos/01_chronos_load_inference.py b/examples/foundation-model-examples/chronos/01_chronos_load_inference.py
index 51f82f0..71e10d5 100644
--- a/examples/foundation-model-examples/chronos/01_chronos_load_inference.py
+++ b/examples/foundation-model-examples/chronos/01_chronos_load_inference.py
@@ -7,7 +7,7 @@
 # MAGIC %md
 # MAGIC ## Cluster setup
 # MAGIC
-# MAGIC We recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above. The cluster can be single-node or multi-node with one or more GPU instances on each worker: e.g. [g5.12xlarge [A10G]](https://aws.amazon.com/ec2/instance-types/g5/) on AWS or [Standard_NV72ads_A10_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nva10v5-series) on Azure. MMF leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) for distributing the inference tasks and utilizing all the available resource.
+# MAGIC We recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above. The cluster can be single-node or multi-node with one or more GPU instances on each worker: e.g. [g5.12xlarge [A10G]](https://aws.amazon.com/ec2/instance-types/g5/) on AWS or [Standard_NV72ads_A10_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nva10v5-series) on Azure. This notebook leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) to distribute the inference tasks and utilize all the available resources.
 
 # COMMAND ----------
diff --git a/examples/foundation-model-examples/moirai/01_moirai_load_inference.py b/examples/foundation-model-examples/moirai/01_moirai_load_inference.py
index 0df2de5..4229978 100644
--- a/examples/foundation-model-examples/moirai/01_moirai_load_inference.py
+++ b/examples/foundation-model-examples/moirai/01_moirai_load_inference.py
@@ -7,7 +7,7 @@
 # MAGIC %md
 # MAGIC ## Cluster setup
 # MAGIC
-# MAGIC We recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above. The cluster can be single-node or multi-node with one or more GPU instances on each worker: e.g. [g5.12xlarge [A10G]](https://aws.amazon.com/ec2/instance-types/g5/) on AWS or [Standard_NV72ads_A10_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nva10v5-series) on Azure. MMF leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) for distributing the inference tasks and utilizing all the available resource.
+# MAGIC We recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above. The cluster can be single-node or multi-node with one or more GPU instances on each worker: e.g. [g5.12xlarge [A10G]](https://aws.amazon.com/ec2/instance-types/g5/) on AWS or [Standard_NV72ads_A10_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nva10v5-series) on Azure. This notebook leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) to distribute the inference tasks and utilize all the available resources.
 
 # COMMAND ----------
diff --git a/examples/foundation-model-examples/moment/01_moment_load_inference.py b/examples/foundation-model-examples/moment/01_moment_load_inference.py
index e80dc36..3a0ae3c 100644
--- a/examples/foundation-model-examples/moment/01_moment_load_inference.py
+++ b/examples/foundation-model-examples/moment/01_moment_load_inference.py
@@ -7,7 +7,7 @@
 # MAGIC %md
 # MAGIC ## Cluster setup
 # MAGIC
-# MAGIC We recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above. The cluster can be single-node or multi-node with one or more GPU instances on each worker: e.g. [g5.12xlarge [A10G]](https://aws.amazon.com/ec2/instance-types/g5/) on AWS or [Standard_NV72ads_A10_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nva10v5-series) on Azure. MMF leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) for distributing the inference tasks and utilizing all the available resource.
+# MAGIC We recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above. The cluster can be single-node or multi-node with one or more GPU instances on each worker: e.g. [g5.12xlarge [A10G]](https://aws.amazon.com/ec2/instance-types/g5/) on AWS or [Standard_NV72ads_A10_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nva10v5-series) on Azure. This notebook leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) to distribute the inference tasks and utilize all the available resources.
 
 # COMMAND ----------
diff --git a/examples/foundation-model-examples/timegpt/01_timegpt_load_inference.py b/examples/foundation-model-examples/timegpt/01_timegpt_load_inference.py
index 87444ef..44e1555 100644
--- a/examples/foundation-model-examples/timegpt/01_timegpt_load_inference.py
+++ b/examples/foundation-model-examples/timegpt/01_timegpt_load_inference.py
@@ -22,7 +22,7 @@
 # MAGIC %md
 # MAGIC ## Cluster setup
 # MAGIC
-# MAGIC TimeGPT is accessible through an API as a service, so the actual compute for inference or fine-tuning will not take place on Databricks. For this reason a GPU cluster is not necessary and we recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above with CPUs. MMF leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) for distributing the inference tasks and utilizing all the available resource.
+# MAGIC TimeGPT is accessible through an API as a service, so the actual compute for inference or fine-tuning will not take place on Databricks. For this reason, a GPU cluster is not necessary, and we recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above with CPUs. This notebook leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) to distribute the inference tasks and utilize all the available resources.
 
 # COMMAND ----------
diff --git a/examples/foundation-model-examples/timegpt/02_timegpt_fine_tune.py b/examples/foundation-model-examples/timegpt/02_timegpt_fine_tune.py
index c299ad6..985b8b3 100644
--- a/examples/foundation-model-examples/timegpt/02_timegpt_fine_tune.py
+++ b/examples/foundation-model-examples/timegpt/02_timegpt_fine_tune.py
@@ -21,7 +21,7 @@
 # MAGIC %md
 # MAGIC ## Cluster setup
 # MAGIC
-# MAGIC TimeGPT is accessible through an API as a service, so the actual compute for inference or fine-tuning will not take place on Databricks. For this reason a GPU cluster is not necessary and we recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above with CPUs. MMF leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) for distributing the inference tasks and utilizing all the available resource.
+# MAGIC TimeGPT is accessible through an API as a service, so the actual compute for inference or fine-tuning will not take place on Databricks. For this reason, a GPU cluster is not necessary, and we recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html) or above with CPUs. This notebook leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) to distribute the inference tasks and utilize all the available resources.
 
 # COMMAND ----------
diff --git a/examples/foundation-model-examples/timesfm/01_timesfm_load_inference.py b/examples/foundation-model-examples/timesfm/01_timesfm_load_inference.py
index 68197b9..72c8c4a 100644
--- a/examples/foundation-model-examples/timesfm/01_timesfm_load_inference.py
+++ b/examples/foundation-model-examples/timesfm/01_timesfm_load_inference.py
@@ -9,7 +9,7 @@
 # MAGIC
 # MAGIC **As of June 5, 2024, TimesFM supports python version below 3.10. So make sure your cluster is below DBR ML 14.3.**
 # MAGIC
-# MAGIC We recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html). The cluster can be single-node or multi-node with one or more GPU instances on each worker: e.g. [g5.12xlarge [A10G]](https://aws.amazon.com/ec2/instance-types/g5/) on AWS or [Standard_NV72ads_A10_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nva10v5-series) on Azure. MMF leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) for distributing the inference tasks and utilizing all the available resource.
+# MAGIC We recommend using a cluster with [Databricks Runtime 14.3 LTS for ML](https://docs.databricks.com/en/release-notes/runtime/14.3lts-ml.html). The cluster can be single-node or multi-node with one or more GPU instances on each worker: e.g. [g5.12xlarge [A10G]](https://aws.amazon.com/ec2/instance-types/g5/) on AWS or [Standard_NV72ads_A10_v5](https://learn.microsoft.com/en-us/azure/virtual-machines/nva10v5-series) on Azure. This notebook leverages [Pandas UDF](https://docs.databricks.com/en/udf/pandas.html) to distribute the inference tasks and utilize all the available resources.
 
 # COMMAND ----------
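Every cluster-setup cell touched above describes the same mechanism: a grouped Pandas UDF, where Spark partitions the time series by identifier and hands each group to a worker as a plain pandas DataFrame. Below is a minimal sketch of that pattern — not code from these notebooks. The column names (`unique_id`, `ds`, `y`) and the naive last-value "model" standing in for a foundation-model call are illustrative assumptions, and the grouped execution is emulated with plain pandas so the snippet runs without a Spark cluster; the commented `applyInPandas` call shows how the same function would be registered on Databricks.

```python
import pandas as pd

def forecast_udf(pdf: pd.DataFrame) -> pd.DataFrame:
    # One group (one series) arrives as an ordinary pandas DataFrame.
    # A real notebook would call a foundation model here (Chronos, Moirai,
    # MOMENT, TimesFM, TimeGPT); a naive last-value forecast stands in.
    last = pdf.sort_values("ds")["y"].iloc[-1]
    return pd.DataFrame({"unique_id": [pdf["unique_id"].iloc[0]],
                         "forecast": [last]})

# On Databricks this function would be distributed by Spark, e.g.:
#   df.groupBy("unique_id").applyInPandas(
#       forecast_udf, schema="unique_id string, forecast double")
# Here we emulate the per-group execution locally:
data = pd.DataFrame({
    "unique_id": ["a", "a", "b", "b"],
    "ds": pd.to_datetime(["2024-01-01", "2024-01-02",
                          "2024-01-01", "2024-01-02"]),
    "y": [1.0, 2.0, 10.0, 20.0],
})
result = pd.concat(
    [forecast_udf(g) for _, g in data.groupby("unique_id")],
    ignore_index=True,
)
print(result)
```

Because the UDF body only sees pandas objects, the same function can be unit-tested locally and then handed to `applyInPandas` unchanged, which is what makes this pattern convenient for scaling model inference across many series.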