```diff
-See our [installation guide](https://docs.cortex.dev/install), then deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.20/examples) or bring your own models to build [realtime APIs](https://docs.cortex.dev/deployments/realtime-api) and [batch APIs](https://docs.cortex.dev/deployments/batch-api).
+See our [installation guide](https://docs.cortex.dev/install), then deploy one of our [examples](https://github.com/cortexlabs/cortex/tree/0.21/examples) or bring your own models to build [realtime APIs](https://docs.cortex.dev/deployments/realtime-api) and [batch APIs](https://docs.cortex.dev/deployments/batch-api).
```
```diff
 msg += fmt.Sprintf(" → run `cortex cluster info --env %s` to update your environment (include `--config <cluster.yaml>` if you have a cluster configuration file)\n", envName)
 // CORTEX_VERSION_MINOR
-msg += " → if you set `operator_load_balancer_scheme: internal` in your cluster configuration file, your CLI must run from within a VPC that has access to your cluster's VPC (see https://docs.cortex.dev/v/master/guides/vpc-peering)\n"
+msg += " → if you set `operator_load_balancer_scheme: internal` in your cluster configuration file, your CLI must run from within a VPC that has access to your cluster's VPC (see https://docs.cortex.dev/v/0.21/guides/vpc-peering)\n"
```
**docs/cluster-management/config.md** (19 additions, 19 deletions)
```diff
@@ -39,7 +39,7 @@ instance_volume_type: gp2
 
 # whether the subnets used for EC2 instances should be public or private (default: "public")
 # if "public", instances will be assigned public IP addresses; if "private", instances won't have public IPs and a NAT gateway will be created to allow outgoing network requests
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.21/miscellaneous/security#private-cluster for more information
 subnet_visibility: public # must be "public" or "private"
 
 # whether to include a NAT gateway with the cluster (a NAT gateway is necessary when using private subnets)
@@ -48,12 +48,12 @@ nat_gateway: none # must be "none", "single", or "highly_available" (highly_ava
 
 # whether the API load balancer should be internet-facing or internal (default: "internet-facing")
 # note: if using "internal", APIs will still be accessible via the public API Gateway endpoint unless you also disable API Gateway in your API's configuration (if you do that, you must configure VPC Peering to connect to your APIs)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-cluster for more information
+# see https://docs.cortex.dev/v/0.21/miscellaneous/security#private-cluster for more information
 api_load_balancer_scheme: internet-facing # must be "internet-facing" or "internal"
 
 # whether the operator load balancer should be internet-facing or internal (default: "internet-facing")
-# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/master/guides/vpc-peering)
-# see https://docs.cortex.dev/v/master/miscellaneous/security#private-operator for more information
+# note: if using "internal", you must configure VPC Peering to connect your CLI to your cluster operator (https://docs.cortex.dev/v/0.21/guides/vpc-peering)
+# see https://docs.cortex.dev/v/0.21/miscellaneous/security#private-operator for more information
 operator_load_balancer_scheme: internet-facing # must be "internet-facing" or "internal"
 
 # whether to disable API gateway cluster-wide
@@ -65,10 +65,10 @@ api_gateway: public # must be "public" or "none"
 tags: # <string>: <string> map of key/value pairs
 
 # whether to use spot instances in the cluster (default: false)
-# see https://docs.cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://docs.cortex.dev/v/0.21/cluster-management/spot-instances for additional details on spot configuration
 spot: false
 
-# see https://docs.cortex.dev/v/master/guides/custom-domain for instructions on how to set up a custom domain
+# see https://docs.cortex.dev/v/0.21/guides/custom-domain for instructions on how to set up a custom domain
 ssl_certificate_arn:
 
 # primary CIDR block for the cluster's VPC (default: 192.168.0.0/16)
@@ -82,17 +82,17 @@ The docker images used by the Cortex cluster can also be overridden, although th
```
**docs/deployments/batch-api/predictors.md** (8 additions, 8 deletions)
````diff
@@ -95,7 +95,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/master/examples/batch/image-classifier).
+You can find an example of a BatchAPI using a PythonPredictor in [examples/batch/image-classifier](https://github.com/cortexlabs/cortex/tree/0.21/examples/batch/image-classifier).
 
 ### Pre-installed packages
 
@@ -166,7 +166,7 @@ torchvision==0.6.1
 ```
 
 <!-- CORTEX_VERSION_MINOR x3 -->
-The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-predictor-inf/Dockerfile) (for Inferentia).
+The pre-installed system packages are listed in [images/python-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/python-predictor-cpu/Dockerfile) (for CPU), [images/python-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/python-predictor-gpu/Dockerfile) (for GPU), or [images/python-predictor-inf/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/python-predictor-inf/Dockerfile) (for Inferentia).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
 
````
````diff
@@ -223,7 +223,7 @@ class TensorFlowPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.21/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#tensorflow-predictor) for more information.
 
````
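To make the pattern in the hunk above concrete, here is a minimal sketch of a batch TensorFlow Predictor. The constructor arguments (`tensorflow_client`, `config`) and the `self.client.predict(...)` calls follow the paragraph above; the `batch_id` parameter, the item loop, and the preprocessing are assumptions for illustration, not taken from this diff.

```python
# Minimal sketch of a batch TensorFlowPredictor, based on the paragraph above.
# Assumptions (not from the diff): predict() receives a batch_id, and the
# payload is a list of JSON items; adapt to your own API definition.

class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        # save the client as an instance variable so predict() can reach
        # the TensorFlow Serving container
        self.client = tensorflow_client
        self.config = config

    def predict(self, payload, batch_id):
        for item in payload:
            # preprocess the JSON item here if needed
            prediction = self.client.predict(item)
            # when multiple models are defined in the Predictor's `models`
            # field, pass the model name as the second argument instead:
            #   prediction = self.client.predict(item, "text-generator")
            # postprocess/store the prediction here (e.g. write to S3)
```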
````diff
@@ -232,7 +232,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/master/examples/batch/tensorflow).
+You can find an example of a BatchAPI using a TensorFlowPredictor in [examples/batch/tensorflow](https://github.com/cortexlabs/cortex/tree/0.21/examples/batch/tensorflow).
 
 ### Pre-installed packages
 
@@ -253,7 +253,7 @@ tensorflow==2.3.0
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tensorflow-predictor/Dockerfile).
+The pre-installed system packages are listed in [images/tensorflow-predictor/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/tensorflow-predictor/Dockerfile).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
 
````
````diff
@@ -310,7 +310,7 @@ class ONNXPredictor:
 ```
 
 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.21/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
 
 When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`). See the [multi model guide](../../guides/multi-model.md#onnx-predictor) for more information.
 
````
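The same structure applies to the ONNX case. A minimal sketch under the same assumptions (the batch-style `predict()` signature with `batch_id` and the pre/postprocessing steps are illustrative; the `onnx_client`/`config` constructor and `self.client.predict(...)` usage come from the paragraph above):

```python
# Minimal sketch of a batch ONNXPredictor, mirroring the paragraph above.
# Assumption (not from the diff): the batch predict() signature takes batch_id.

class ONNXPredictor:
    def __init__(self, onnx_client, config):
        # the client wraps an ONNX Runtime session for your exported model
        self.client = onnx_client
        self.config = config

    def predict(self, payload, batch_id):
        for item in payload:
            model_input = item  # preprocess the JSON item here if needed
            prediction = self.client.predict(model_input)
            # for multiple models, name the one to run as the second argument:
            #   prediction = self.client.predict(model_input, "text-generator")
            # postprocess/store the prediction here
```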
````diff
@@ -319,7 +319,7 @@ For proper separation of concerns, it is recommended to use the constructor's `c
 ### Examples
 
 <!-- CORTEX_VERSION_MINOR -->
-You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/master/examples/batch/onnx).
+You can find an example of a BatchAPI using an ONNXPredictor in [examples/batch/onnx](https://github.com/cortexlabs/cortex/tree/0.21/examples/batch/onnx).
 
 ### Pre-installed packages
 
@@ -337,6 +337,6 @@ requests==2.24.0
 ```
 
 <!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-predictor-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in [images/onnx-predictor-cpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/onnx-predictor-cpu/Dockerfile) (for CPU) or [images/onnx-predictor-gpu/Dockerfile](https://github.com/cortexlabs/cortex/tree/0.21/images/onnx-predictor-gpu/Dockerfile) (for GPU).
 
 If your application requires additional dependencies, you can install additional [Python packages](../python-packages.md) and [system packages](../system-packages.md).
````