rename master branch to main
* part two

Signed-off-by: Alex Aizman <[email protected]>
alex-aizman committed Dec 12, 2023
1 parent ba2b9d1 commit d5c82e5
Showing 53 changed files with 199 additions and 199 deletions.
24 changes: 12 additions & 12 deletions .gitlab-ci.yml
@@ -59,7 +59,7 @@ variables:

.default_only_template: &default_only_def
only:
- master
- main
- merge_requests
- schedules
- webs
@@ -82,7 +82,7 @@ variables:
- ais
timeout: 25m
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main"'
when: manual
allow_failure: true
<<: *gather_logs_def
@@ -95,7 +95,7 @@ variables:
timeout: 25m
rules:
- if: '$CI_PIPELINE_SOURCE == "schedule" || $CI_PIPELINE_SOURCE == "web"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main"'
when: manual
allow_failure: true
<<: *gather_logs_def
@@ -107,7 +107,7 @@ variables:
timeout: 3h
rules:
- if: '$CI_PIPELINE_SOURCE == "schedule"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master" || $CI_PIPELINE_SOURCE == "web"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main" || $CI_PIPELINE_SOURCE == "web"'
when: manual
allow_failure: true
<<: *gather_logs_def
@@ -118,7 +118,7 @@ variables:
- ais
timeout: 3h
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main"'
when: manual
allow_failure: true
<<: *gather_logs_def
@@ -320,7 +320,7 @@ test:long:aisloader:
- FLAGS="--duration=5m" make test-aisloader
- cd ./python; make PYAISLOADER_TEST_TYPE=long test-pyaisloader
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main"'
when: manual
allow_failure: true

@@ -374,7 +374,7 @@ test:short:assorted:k8s:
- ais-k8s
timeout: 30m
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main"'
when: manual
allow_failure: true
variables:
@@ -398,7 +398,7 @@ test:long:k8s:
rules:
- if: '$CI_PIPELINE_SOURCE == "schedule"'
- if: '$CI_MERGE_REQUEST_LABELS =~ /.*k8s-ci.*/'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master" || $CI_PIPELINE_SOURCE == "web"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main" || $CI_PIPELINE_SOURCE == "web"'
when: manual
allow_failure: true
script:
@@ -415,7 +415,7 @@ test:long:k8s:single-target:
timeout: 3h
rules:
- if: '$CI_MERGE_REQUEST_LABELS =~ /.*k8s-ci.*/'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main"'
when: manual
allow_failure: true
script:
@@ -432,7 +432,7 @@ test:long:k8s:aisloader:
timeout: 15m
rules:
- if: '$CI_MERGE_REQUEST_LABELS =~ /.*k8s-ci.*/'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main"'
when: manual
allow_failure: true
script:
@@ -450,7 +450,7 @@ test:long:k8s:all:
timeout: 5h
rules:
- if: '$CI_MERGE_REQUEST_LABELS =~ /.*k8s-ci.*/'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "main"'
when: manual
allow_failure: true
before_script:
@@ -482,4 +482,4 @@ checkmarx-scan-csv:
stage: security
rules:
- if: '$CI_PIPELINE_SOURCE == "schedule" || $CI_PIPELINE_SOURCE == "web"'
allow_failure: true
allow_failure: true
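The 199 removals and 199 additions in this commit are one-for-one `master` to `main` rewrites like the hunks above. A sweep of that shape can be sketched with GNU `grep`/`sed`; the commands below run against a stand-in file and are an illustration, not the tooling actually used for the commit:

```shell
set -e
tmp=$(mktemp -d)
# stand-in for a docs tree that still carries old-style links
cat > "$tmp/README.md" <<'EOF'
See https://github.com/NVIDIA/aistore/blob/master/docs/overview.md for details.
EOF

# rewrite master -> main only inside NVIDIA/aistore GitHub URLs, leaving
# unrelated occurrences of the word "master" untouched (GNU grep/sed syntax)
grep -rlE 'github\.com/NVIDIA/aistore/(blob|tree)/master/' "$tmp" |
  xargs sed -i -E 's#(github\.com/NVIDIA/aistore/(blob|tree)/)master/#\1main/#g'

cat "$tmp/README.md"
```

Scoping the pattern to the repository's own URLs is what keeps legitimate uses of the word "master" (and links into other repositories, such as `ais-k8s`) intact.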
26 changes: 13 additions & 13 deletions README.md
@@ -18,7 +18,7 @@ AIS consistently shows balanced I/O distribution and **linear scalability** acro
* **ETL offload**. The capability to run I/O intensive custom data transformations *close to data* - offline (dataset to dataset) and inline (on-the-fly).
* **File datasets**. AIS can be immediately populated from any file-based data source (local or remote, ad-hoc/on-demand or via asynchronous batch).
* **Read-after-write consistency**. Reading and writing (as well as all other control and data plane operations) can be performed via any (random, selected, or load-balanced) AIS gateway (a.k.a. "proxy"). Once the first replica of an object is written and _finalized_, subsequent reads are guaranteed to view the same content. Additional copies and/or EC slices, if configured, are added asynchronously via `put-copies` and `ec-put` jobs, respectively.
* **Write-through**. In presence of any [remote backend](/docs/providers.md), AIS executes remote write (e.g., using vendor's SDK) as part of the [transaction](https://github.com/NVIDIA/aistore/blob/master/docs/overview.md#read-after-write-consistency) that places and _finalizes_ the first replica.
* **Write-through**. In presence of any [remote backend](/docs/providers.md), AIS executes remote write (e.g., using vendor's SDK) as part of the [transaction](https://github.com/NVIDIA/aistore/blob/main/docs/overview.md#read-after-write-consistency) that places and _finalizes_ the first replica.
* **Small file datasets.** To serialize small files and facilitate batch processing, AIS supports TAR, TAR.GZ (or TGZ), ZIP, and TAR.LZ4 formatted objects (often called _shards_). Resharding (for optimal sorting and sizing), listing contained files (samples), appending to existing shards, and generating new ones from existing objects and/or client-side files are also fully supported.
* **Kubernetes**. Provides for easy Kubernetes deployment via a separate GitHub [repo](https://github.com/NVIDIA/ais-k8s) and [AIS/K8s Operator](https://github.com/NVIDIA/ais-k8s/tree/master/operator).
* **Command line management**. Integrated powerful [CLI](/docs/cli.md) for easy management and monitoring.
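Since the shard formats mentioned above (TAR, TGZ, ZIP, TAR.LZ4) are plain archives, the serialization step itself can be illustrated with nothing but `tar(1)`; a minimal sketch with made-up sample names:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
# a couple of small "samples", as they might sit on a client machine
mkdir samples
printf 'jpeg-bytes'   > samples/0001.jpg
printf '{"label": 7}' > samples/0001.cls

# serialize them into a shard - an ordinary tar archive
tar -cf shard-000000.tar -C samples .

tar -tf shard-000000.tar                  # list the contained samples
```

A shard produced this way can be uploaded to AIS as a regular object and then resharded, listed, or appended to as described above.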
@@ -29,11 +29,11 @@ AIS consistently shows balanced I/O distribution and **linear scalability** acro
AIS runs natively on Kubernetes and features an open format - thus, the freedom to copy or move your data from AIS at any time using the familiar Linux `tar(1)`, `scp(1)`, `rsync(1)`, and similar.

For developers and data scientists, there's also:
* native [Go (language) API](https://github.com/NVIDIA/aistore/tree/master/api) that we utilize in a variety of tools including [CLI](/docs/cli.md) and [Load Generator](/docs/aisloader.md);
* native [Python SDK](https://github.com/NVIDIA/aistore/tree/master/python/aistore/sdk)
* native [Go (language) API](https://github.com/NVIDIA/aistore/tree/main/api) that we utilize in a variety of tools including [CLI](/docs/cli.md) and [Load Generator](/docs/aisloader.md);
* native [Python SDK](https://github.com/NVIDIA/aistore/tree/main/python/aistore/sdk)
- [Python SDK reference guide](/docs/python_sdk.md)
* [PyTorch integration](https://github.com/NVIDIA/aistore/tree/master/python/aistore/pytorch) and usage examples
* [Boto3 support](https://github.com/NVIDIA/aistore/tree/master/python/aistore/botocore_patch) for interoperability with AWS SDK for Python (aka [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html)) client
* [PyTorch integration](https://github.com/NVIDIA/aistore/tree/main/python/aistore/pytorch) and usage examples
* [Boto3 support](https://github.com/NVIDIA/aistore/tree/main/python/aistore/botocore_patch) for interoperability with AWS SDK for Python (aka [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html)) client
  - and other [Botocore](https://github.com/boto/botocore) derivatives.

For the original AIStore **white paper** and design philosophy, for introduction to large-scale deep learning and the most recently added features, please see [AIStore Overview](/docs/overview.md) (where you can also find six alternative ways to work with existing datasets). Videos and **animated presentations** can be found at [videos](/docs/videos.md).
@@ -50,12 +50,12 @@ Since prerequisites boil down to, essentially, having Linux with a disk the depl

| Option | Objective |
| --- | ---|
| [Local playground](https://github.com/NVIDIA/aistore/blob/master/docs/getting_started.md#local-playground) | AIS developers and development, Linux or Mac OS |
| [Local playground](https://github.com/NVIDIA/aistore/blob/main/docs/getting_started.md#local-playground) | AIS developers and development, Linux or Mac OS |
| Minimal production-ready deployment | This option utilizes a preinstalled Docker image and targets first-time users or researchers (who can immediately start training their models on smaller datasets) |
| [Easy automated GCP/GKE deployment](https://github.com/NVIDIA/aistore/blob/master/docs/getting_started.md#kubernetes-deployments) | Developers, first-time users, AI researchers |
| [Easy automated GCP/GKE deployment](https://github.com/NVIDIA/aistore/blob/main/docs/getting_started.md#kubernetes-deployments) | Developers, first-time users, AI researchers |
| [Large-scale production deployment](https://github.com/NVIDIA/ais-k8s) | Requires Kubernetes and is provided via a separate repository: [ais-k8s](https://github.com/NVIDIA/ais-k8s) |

Further, there's the capability referred to as [global namespace](https://github.com/NVIDIA/aistore/blob/master/docs/providers.md#remote-ais-cluster): given HTTP(S) connectivity, AIS clusters can be easily interconnected to "see" each other's datasets. Hence, the idea to start "small" to gradually and incrementally build high-performance shared capacity.
Further, there's the capability referred to as [global namespace](https://github.com/NVIDIA/aistore/blob/main/docs/providers.md#remote-ais-cluster): given HTTP(S) connectivity, AIS clusters can be easily interconnected to "see" each other's datasets. Hence, the idea to start "small" to gradually and incrementally build high-performance shared capacity.

> For detailed discussion on supported deployments, please refer to [Getting Started](/docs/getting_started.md).
@@ -98,19 +98,19 @@ With a little effort, they all could be extracted and used outside.
- [Getting Started](/docs/getting_started.md)
- [Technical Blog](https://aiatscale.org/blog)
- API and SDK
- [Go (language) API](https://github.com/NVIDIA/aistore/tree/master/api)
- [Python SDK](https://github.com/NVIDIA/aistore/tree/master/python/aistore), and also:
- [Go (language) API](https://github.com/NVIDIA/aistore/tree/main/api)
- [Python SDK](https://github.com/NVIDIA/aistore/tree/main/python/aistore), and also:
- [pip package](https://pypi.org/project/aistore/)
- [reference guide](/docs/python_sdk.md)
- [REST API](/docs/http_api.md)
- [Easy URL](/docs/easy_url.md)
- Amazon S3
- [`s3cmd` client](/docs/s3cmd.md)
- [S3 compatibility](/docs/s3compat.md)
- [Boto3 support](https://github.com/NVIDIA/aistore/tree/master/python/aistore/botocore_patch)
- [Boto3 support](https://github.com/NVIDIA/aistore/tree/main/python/aistore/botocore_patch)
- [CLI](/docs/cli.md)
- [`ais help`](/docs/cli/help.md)
- [Reference guide](https://github.com/NVIDIA/aistore/blob/master/docs/cli.md#cli-reference)
- [Reference guide](https://github.com/NVIDIA/aistore/blob/main/docs/cli.md#cli-reference)
- [Monitoring](/docs/cli/show.md)
- [`ais show cluster`](/docs/cli/show.md)
- [`ais show performance`](/docs/cli/show.md)
@@ -183,7 +183,7 @@ With a little effort, they all could be extracted and used outside.
- [Start/stop maintenance mode, shutdown, decommission, and related operations](/docs/lifecycle_node.md)
- [Downloader](/docs/downloader.md)
- [On-disk layout](/docs/on_disk_layout.md)
- [Buckets: definition, operations, properties](https://github.com/NVIDIA/aistore/blob/master/docs/bucket.md#bucket)
- [Buckets: definition, operations, properties](https://github.com/NVIDIA/aistore/blob/main/docs/bucket.md#bucket)
- [Validate Warm GET: a quick synopsis](/docs/validate_warm_get.md)

## License
4 changes: 2 additions & 2 deletions api/env/README.md
@@ -11,5 +11,5 @@ As such, the `env` package is, effectively, part of the API: the names defined h

## See also

* List of _all_ [environment variables](https://github.com/NVIDIA/aistore/blob/master/docs/environment-vars.md)
* List of [system filenames ("filename constants")](https://github.com/NVIDIA/aistore/blob/master/cmn/fname/fname.go)
* List of _all_ [environment variables](https://github.com/NVIDIA/aistore/blob/main/docs/environment-vars.md)
* List of [system filenames ("filename constants")](https://github.com/NVIDIA/aistore/blob/main/cmn/fname/fname.go)
2 changes: 1 addition & 1 deletion deploy/conf/README.md
@@ -2,4 +2,4 @@ You need to increase the maximum number of open file descriptors in all docker i

Copy/replace [`limits.conf`](/deploy/conf/limits.conf) to/with `/etc/security/limits.conf`.

For more information, read [performance/maximum-number-of-open-files](https://github.com/NVIDIA/aistore/blob/master/docs/performance.md#maximum-number-of-open-files).
For more information, read [performance/maximum-number-of-open-files](https://github.com/NVIDIA/aistore/blob/main/docs/performance.md#maximum-number-of-open-files).
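A quick way to see what the `limits.conf` change affects is to query the current per-process descriptor limits from a shell; the `65536` figure in the comment below is an illustrative target, not a value mandated by the linked document:

```shell
# soft and hard per-process limits on open file descriptors
ulimit -Sn
ulimit -Hn

# a limits.conf fragment raising both might look roughly like:
#   *  soft  nofile  65536
#   *  hard  nofile  65536
```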
2 changes: 1 addition & 1 deletion deploy/prod/docker/single/README.md
@@ -56,7 +56,7 @@ Or, you can also download the latest released CLI binary from the [release asset
Further references:

* [AIS CLI](/docs/cli.md)
* [CLI Documentation](https://github.com/NVIDIA/aistore/tree/master/docs/cli)
* [CLI Documentation](https://github.com/NVIDIA/aistore/tree/main/docs/cli)

## How to Build

4 changes: 2 additions & 2 deletions docs/_posts/2021-08-10-tar-append.md
@@ -126,5 +126,5 @@ fh.Close()

For the latest code, please see:

- The function `OpenTarForAppend` in ["cos" package](https://github.com/NVIDIA/aistore/blob/master/cmn/cos/archive.go).
- Example of how to use `OpenTarForAppend` in the implementation of the function `appendToArch` in the [core package](https://github.com/NVIDIA/aistore/blob/master/ais/tgtobj.go).
- The function `OpenTarForAppend` in ["cos" package](https://github.com/NVIDIA/aistore/blob/main/cmn/cos/archive.go).
- Example of how to use `OpenTarForAppend` in the implementation of the function `appendToArch` in the [core package](https://github.com/NVIDIA/aistore/blob/main/ais/tgtobj.go).
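The operation behind `OpenTarForAppend` (seek back over the archive's trailing zero blocks, then write the new entry in place) is the same one `tar -r` performs, and appending only works on uncompressed tar in both cases. It can therefore be demonstrated from the command line; a self-contained sketch:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
printf 'first'  > a.txt
printf 'second' > b.txt
tar -cf archive.tar a.txt        # initial archive with a single entry

# -r appends: tar seeks past the existing entries, overwrites the
# end-of-archive marker, and writes b.txt - no full rewrite needed
tar -rf archive.tar b.txt

tar -tf archive.tar              # now lists both entries
```

The in-place append is what makes the operation cheap even for large shards: cost is proportional to the appended data, not to the archive size.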
2 changes: 1 addition & 1 deletion docs/_posts/2021-10-21-ais-etl-1.md
@@ -40,7 +40,7 @@ In addition, there's a locally running ETL - locally as far as *transforming* da

1. **High Performance I/O For Large Scale Deep Learning**, https://arxiv.org/abs/2001.01858
2. **Efficient PyTorch I/O library for Large Datasets, Many Files, Many GPUs**, https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus
3. **AIS ETL: Getting Started, Tutorial, Inline and Offline examples, Kubernetes deployment**, https://github.com/NVIDIA/aistore/blob/master/docs/etl.md
3. **AIS ETL: Getting Started, Tutorial, Inline and Offline examples, Kubernetes deployment**, https://github.com/NVIDIA/aistore/blob/main/docs/etl.md
4. **GitHub open source**:
- [AIStore](https://github.com/NVIDIA/aistore)
- [AIS/Kubernetes Operator, AIS on bare-metal, Deployment Playbooks, Helm](https://github.com/NVIDIA/ais-k8s)
2 changes: 1 addition & 1 deletion docs/_posts/2021-10-22-ais-etl-2.md
@@ -250,6 +250,6 @@ Complete code is available here:
- [AIS-ETL containers and specs](https://github.com/NVIDIA/ais-etl)
2. Documentation, blogs, videos:
- https://aiatscale.org
- https://github.com/NVIDIA/aistore/tree/master/docs
- https://github.com/NVIDIA/aistore/tree/main/docs

PS. Note that we have omitted setting-up ETL for the validation loader - leaving it as an exercise for the reader. To be continued...
4 changes: 2 additions & 2 deletions docs/_posts/2021-10-29-ais-etl-3.md
@@ -52,7 +52,7 @@ Pre-shared ImageNet will be stored in a Google Cloud bucket that we'll also call

Thus, in terms of its internal structure, this dataset is identical to what we've had in the [previous article](https://aiatscale.org/blog/2021/10/22/ais-etl-2), with one distinct difference: shards (formatted as .tar files).

Further, we assume (and require) that AIStore can "see" this GCP bucket. Covering the corresponding AIStore configuration would be outside the scope, but the main point is that AIS *self-populates* on demand. When getting user data from any [remote location](https://github.com/NVIDIA/aistore/blob/master/docs/providers.md), AIS always stores it (ie., the data), acting simultaneously as a fast-cache tier and a high-performance reliable-and-scalable storage.
Further, we assume (and require) that AIStore can "see" this GCP bucket. Covering the corresponding AIStore configuration would be outside the scope, but the main point is that AIS *self-populates* on demand. When getting user data from any [remote location](https://github.com/NVIDIA/aistore/blob/main/docs/providers.md), AIS always stores it (i.e., the data), acting simultaneously as a fast-cache tier and a high-performance reliable-and-scalable storage.

## Client-side transformation with WebDataset, and with AIStore acting as a traditional (dumb) storage

@@ -267,4 +267,4 @@ Other references include:
- [AIS-ETL containers and specs](https://github.com/NVIDIA/ais-etl)
3. Documentation, blogs, videos:
- [https://aiatscale.org](https://aiatscale.org/docs)
- [https://github.com/NVIDIA/aistore/tree/master/docs](https://github.com/NVIDIA/aistore/tree/master/docs)
- [https://github.com/NVIDIA/aistore/tree/main/docs](https://github.com/NVIDIA/aistore/tree/main/docs)