--context
- Description: Specify the configuration context name.
- Status: Optional.
- Syntax: --context=<catalog-project-or-space-id>
- Default value: Not applicable.
- Valid values: A valid configuration context name.

--cpd-config
- Description: The Cloud Pak for Data configuration location. For example, $HOME/.cpd-cli/config.
- Status: Required.
- Syntax: --cpd-config=<cpd-config-location>
- Default value: $HOME/.cpd-cli/config
- Valid values: A valid Cloud Pak for Data configuration location.

--cpd-scope
- Description: The Cloud Pak for Data space, project, or catalog scope. For example, cpd://default-context/spaces/7bccdda4-9752-4f37-868e-891de6c48135.
- Status: Optional.
- Syntax: --cpd-scope=<cpd-scope>
- Default value: No default.
- Valid values: A valid Cloud Pak for Data space, project, or catalog scope.

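The scope value in the example above appears to follow the pattern cpd://<context>/<resource-type>/<id>. A minimal Python sketch that splits such a URI into its parts; the layout is inferred from the single documented example, so treat it as an assumption rather than an official format:

```python
# Split a Cloud Pak for Data scope URI into its parts.
# The cpd://<context>/<resource-type>/<id> layout is inferred from the
# documented example and is an assumption, not an official format.
from urllib.parse import urlparse

def parse_cpd_scope(scope: str) -> dict:
    parts = urlparse(scope)
    if parts.scheme != "cpd":
        raise ValueError("scope must start with cpd://")
    resource_type, _, resource_id = parts.path.lstrip("/").partition("/")
    return {"context": parts.netloc, "type": resource_type, "id": resource_id}

scope = "cpd://default-context/spaces/7bccdda4-9752-4f37-868e-891de6c48135"
print(parse_cpd_scope(scope))
```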
--custom
- Description: Specify user-defined properties as key-value pairs.
- Status: Optional.
- Syntax: --custom=<map<key,value>>
- Default value: No default.
- Valid values: Valid key-value pairs.

--decision-optimization
- Description: Provide details about the input and output data, and other properties, that are used for a batch deployment job of a decision optimization problem.
- Status: Optional.
- Syntax: --decision-optimization=<input-output-data-properties>
- Default value: No default.
- Valid values:
  - input_data: Use the 'input_data' value to specify the input data for batch processing as part of the job's payload. The 'input_data' value is mutually exclusive with the 'input_data_references' value. When 'input_data' is specified, the processed output of the batch deployment job is available in the 'scoring.predictions' parameter of the deployment job response.
  - input_data_references: Use the 'input_data_references' value to specify the details of the remote source where the input data for the batch deployment job is available. The 'input_data_references' value must be used with the 'output_data_references' value and is mutually exclusive with the 'input_data' value.
  - output_data_references: Use the 'output_data_references' value to specify the details of the remote source where the output data of the batch deployment job is written. The 'output_data_references' value must be used with the 'input_data_references' value.

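The two mutually exclusive input shapes described above can be illustrated with a small Python sketch. The field names ('input_data', 'input_data_references', 'output_data_references') come from this reference; the surrounding payload structure and the connection details are placeholders, not a documented schema:

```python
import json

# Shape 1: inline input data in the job payload; results come back in the
# job response's 'scoring.predictions' field. Keys inside each entry are
# illustrative placeholders.
inline_payload = {
    "decision_optimization": {
        "input_data": [
            {"id": "diet_food.csv", "values": [["food", "cost"], ["bread", 2.0]]}
        ]
    }
}

# Shape 2: input and output both reference a remote source. The two
# *_references values must be used together, and 'input_data_references'
# cannot be combined with 'input_data'.
remote_payload = {
    "decision_optimization": {
        "input_data_references": [{"type": "data_asset", "location": {"href": "<input-ref>"}}],
        "output_data_references": [{"type": "data_asset", "location": {"href": "<output-ref>"}}],
    }
}

opts = remote_payload["decision_optimization"]
assert not ("input_data" in opts and "input_data_references" in opts)
print(json.dumps(inline_payload, indent=2))
```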
--deployment
- Description: Specify a reference to a resource.
- Status: Required.
- Syntax: --deployment=<resource-reference>
- Default value: No default.
- Valid values: A valid resource reference.

--description
- Description: Specify a resource description.
- Status: Optional.
- Syntax: --description=<resource-description>
- Default value: No default.
- Valid values: A valid resource description.

--hardware-spec
- Description: Specify a hardware specification.
- Status: Optional.
- Syntax: --hardware-spec=<hardware-specification>
- Default value: No default.
- Valid values: A valid hardware specification.

--help, -h
- Description: Display command help.
- Status: Optional.
- Syntax: --help
- Default value: No default.
- Valid values: Not applicable.

--hybrid-pipeline-hardware-specs
- Description: Specify a hybrid pipeline hardware specification.
- Status: Optional.
- Syntax: --hybrid-pipeline-hardware-specs=<hybrid-pipeline-hardware-specification>
- Default value: No default.
- Valid values: A valid hybrid pipeline hardware specification.

--jmes-query
- Description: Provide a JMESPath query to customize the output.
- Status: Optional.
- Syntax: --jmes-query=<jmespath-query>
- Default value: No default.
- Valid values: A valid JMESPath query.

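JMESPath is a query language for JSON, so a query passed here selects a subset of the command's JSON output. The following tiny Python sketch only mimics the simplest projection a JMESPath query can do (picking one field from each element of a list); the response shape is invented for the example, and real JMESPath is far more capable:

```python
import json

# Hand-rolled stand-in for what a simple JMESPath projection such as
# "[].name" does: pick one field out of each element of a JSON array.
# This is only an illustration of the idea, not a JMESPath implementation.
def project(items: list, key: str) -> list:
    return [item[key] for item in items]

# Invented example response; the real command output shape may differ.
response = json.loads('[{"name": "job-1", "state": "queued"},'
                      ' {"name": "job-2", "state": "completed"}]')
names = project(response, "name")
print(names)
```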
--name
- Description: Specify a resource name.
- Status: Required.
- Syntax: --name=<resource-name>
- Default value: No default.
- Valid values: A valid resource name.

--output
- Description: Specify an output format.
- Status: Optional.
- Syntax: --output=json|yaml|text
- Default value: text
- Valid values: json, yaml, or text (the default format).

--output-file
- Description: Specify a file path where all output is redirected.
- Status: Optional.
- Syntax: --output-file=<output-file-location>
- Default value: No default.
- Valid values: A valid output file path location.

--profile
- Description: The name of the profile that you created to store information about an instance of Cloud Pak for Data and your credentials for the instance.
- Status: Required.
- Syntax: --profile=<cpd-profile-name>
- Default value: No default.
- Valid values: The name of the profile that you created.

--quiet
- Description: Suppress verbose messages.
- Status: Optional.
- Syntax: --quiet
- Default value: No default.
- Valid values: Not applicable.

--raw-output
- Description: When set to true, single values are not surrounded by quotation marks in JSON output mode.
- Status: Optional.
- Syntax: --raw-output=true|false
- Default value: false
- Valid values:
  - false: Single values in JSON output mode are surrounded by quotation marks.
  - true: Single values in JSON output mode are not surrounded by quotation marks.

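The quoting behavior that --raw-output toggles can be seen with Python's json module: a string serialized as JSON carries surrounding quotation marks, while raw output drops them (analogous to jq's -r flag):

```python
import json

value = "7bccdda4-9752-4f37-868e-891de6c48135"

# JSON output mode (--raw-output=false): the value is quoted.
quoted = json.dumps(value)

# Raw output mode (--raw-output=true): the bare value, no quotation marks.
raw = value

print(quoted)
print(raw)
```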
--scoring
- Description: Specify details about the input and output data, and other properties, that are used for a model's batch deployment job (Python Functions or Python Scripts).
- Status: Optional.
- Syntax: --scoring=<input-output-data-details>
- Default value: No default.
- Valid values:
  - input_data: Use the 'input_data' value to specify the input data for batch processing as part of the job's payload. The 'input_data' value is mutually exclusive with the 'input_data_references' value. When 'input_data' is specified, the processed output of the batch deployment job is available in the 'scoring.predictions' parameter of the deployment job response. The 'input_data' value is not supported for batch deployment of Python Scripts.
  - input_data_references: Use the 'input_data_references' value to specify the details of the remote source where the input data for the batch deployment job is available. The 'input_data_references' value must be used with the 'output_data_references' value and is mutually exclusive with the 'input_data' value. The 'input_data_references' value is not supported for batch deployment jobs of Spark models and Python Functions.
  - output_data_references: Use the 'output_data_references' value to specify the details of the remote source where the output data of the batch deployment job is written. The 'output_data_references' value must be used with the 'input_data_references' value. The 'output_data_references' value is not supported for batch deployment jobs of Spark models and Python Functions.

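The pairing and exclusivity rules above can be captured in a small validator. This is only a sketch: the rule logic comes from this reference, while the payload shape passed in is assumed:

```python
# Validate the input/output rules for a --scoring payload, as described
# above: 'input_data' and 'input_data_references' are mutually exclusive,
# and the two *_references values must be used together.
def validate_scoring(scoring: dict) -> list:
    errors = []
    has_inline = "input_data" in scoring
    has_in_ref = "input_data_references" in scoring
    has_out_ref = "output_data_references" in scoring
    if has_inline and has_in_ref:
        errors.append("'input_data' and 'input_data_references' are mutually exclusive")
    if has_in_ref != has_out_ref:
        errors.append("'input_data_references' and 'output_data_references' "
                      "must be used together")
    return errors

print(validate_scoring({"input_data": []}))             # valid: no errors
print(validate_scoring({"input_data_references": []}))  # missing output refs
```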
--space-id
- Description: Specify a space identifier.
- Status: Required.
- Syntax: --space-id=<space-identifier>
- Default value: No default.
- Valid values: A valid space identifier.

--tags
- Description: Specify the data asset tags. Multiple tags can be specified.
- Status: Optional.
- Syntax: --tags=<tag1,tag2,...>
- Default value: No default.
- Valid values: A valid list of comma-separated data asset tags.

--verbose
- Description: Logs include more detailed messages.
- Status: Optional.
- Syntax: --verbose
- Default value: No default.
- Valid values: Not applicable.