Gpfs/trace/set: PUT
Configures the GPFS trace options before the tracing process starts.
Availability
Available on all IBM Storage Scale editions.
Description
The PUT scalemgmt/v2/gpfs/trace/set request configures the GPFS trace options before the tracing begins. For more information about the fields in the data structures that are returned, see the mmtracectl command.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/gpfs/trace/set
where:
- trace/set
- The GPFS trace that is configured.
Request headers
Content-Type: application/json
Accept: application/json
Request data
{
  "trace": "io | all | def",
  "traceRecycle": "off | local | global | globalOnShutdown",
  "tracedevWriteMode": "blocking | overwrite",
  "tracedevBufferSize": Size,
  "traceFileSize": "Size",
  "node": "Node name | all",
  "tracedevOverwriteBufferSize": Size,
  "format": true | false,
  "noFormat": true | false
}
For more information about the fields in the following data structures, see the links at the end of the topic.
- "trace":"io |all |def"
- The predefined and user-specified trace levels.
- "traceRecycle": "off | local | global | globalOnShutdown"
- The control mode for trace recycling during daemon termination.
- "tracedevWriteMode": "blocking | overwrite"
- Specifies when to overwrite the old data.
- "tracedevBufferSize": "Size"
- Specifies the trace buffer size for Linux® trace in blocking mode. If
--tracedev-write-mode
is set toblocking
, this parameter is used. It must not be less than 4 K and not more than 64 M. The default is 4 M. - "node": "Node name |all"
- The node that participates in the tracing of the file system. This option supports all defined node classes (except for mount). The default value is all.
- "traceFileSize": "File size"
- The size of the trace file. The default is 128 M on Linux and 64 M on other platforms.
- "tracedevOverwriteBufferSize": "File size"
- Specifies the trace buffer size for Linux trace in overwrite mode. If
--tracedev-write-mode
is set tooverwrite
, this parameter is used . It must not be less than 16 M. The default is 64 M. - "format": "true |false"
- Specifies whether formatting is enabled.
- "noformat": "true |false"
- Specifies whether formatting is disabled.
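The request body above can be assembled programmatically. The following is a minimal sketch in Python that builds the payload and checks the documented limits before sending it; the helper name `build_trace_settings` and the single `fmt` flag (from which `format` and `noFormat` are derived, since they are opposites) are assumptions for illustration, not part of the REST API.

```python
# Hypothetical helper: build and sanity-check the trace-settings payload.
# Field names and size limits are taken from the field descriptions above.
import json

KIB, MIB = 1024, 1024 * 1024

def build_trace_settings(trace="def", trace_recycle="off",
                         tracedev_write_mode="blocking",
                         tracedev_buffer_size=4 * MIB,
                         trace_file_size="134217728",
                         node="all",
                         tracedev_overwrite_buffer_size=64 * MIB,
                         fmt=True):
    # Documented limits: blocking-mode buffer 4 K .. 64 M,
    # overwrite-mode buffer at least 16 M.
    if not 4 * KIB <= tracedev_buffer_size <= 64 * MIB:
        raise ValueError("tracedevBufferSize must be between 4 K and 64 M")
    if tracedev_overwrite_buffer_size < 16 * MIB:
        raise ValueError("tracedevOverwriteBufferSize must be at least 16 M")
    return {
        "trace": trace,
        "traceRecycle": trace_recycle,
        "tracedevWriteMode": tracedev_write_mode,
        "tracedevBufferSize": tracedev_buffer_size,
        "traceFileSize": trace_file_size,
        "node": node,
        "tracedevOverwriteBufferSize": tracedev_overwrite_buffer_size,
        # format and noFormat are opposites; derive both from one flag.
        "format": fmt,
        "noFormat": not fmt,
    }

payload = build_trace_settings(trace="io", tracedev_buffer_size=1048576)
print(json.dumps(payload, indent=2))
```

The resulting JSON can then be sent as the body of the PUT request, for example with curl as shown in the Examples section.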
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of the topic.
- "jobs":
- An array of elements that describe jobs. Each element describes one job.
- "status":
- Return status.
- "message": "ReturnMessage",
- The return message.
- "code": ReturnCode
- The return code.
- "result"
-
- "commands":"String'
- Array of commands that are run in this job.
- "progress":"String'
- Progress information for the request.
- "exitCode":"Exit code"
- Exit code of command. Zero is success and nonzero denotes failure.
- "stderr":"Error"
- CLI messages from stderr.
- "stdout":"String"
- CLI messages from stdout.
- "request"
-
- "type":"{GET | POST | PUT | DELETE}"
- HTTP request type.
- "url":"URL"
- The URL through which the job is submitted.
- "data":" "
- Optional.
- "jobId":"ID",
- The unique ID of the job.
- "submitted":"Time"
- The time at which the job was submitted.
- "completed":"Time"
- The time at which the job was completed.
- "status":"RUNNING | COMPLETED | FAILED"
- Status of the job.
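The response fields above can be read with a few lines of client code. The following is a minimal sketch; the helper name `summarize_job` and the sample values are illustrative assumptions, not real server output.

```python
# Hypothetical helper: pull the most commonly needed fields out of the
# job response documented above (jobId, job status, command exit code).
def summarize_job(response):
    job = response["jobs"][0]
    return {
        "jobId": job["jobId"],
        "status": job["status"],
        "exitCode": job.get("result", {}).get("exitCode"),
    }

# Sample response shaped like the structure documented above.
sample = {
    "status": {"code": 202, "message": "The request was accepted for processing."},
    "jobs": [{
        "jobId": "1000000000001",
        "status": "RUNNING",
        "result": {"exitCode": "0", "stderr": "", "stdout": ""},
        "submitted": "Time",
        "completed": "N/A",
    }],
}
print(summarize_job(sample))
```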
Examples
The following example shows how to configure the GPFS trace options.
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json'
-d '{
"trace": "io",
"traceRecycle": "off",
"tracedevWriteMode": "blocking",
"tracedevBufferSize": 1048576,
"traceFileSize": "1048576",
"tracedevOverwriteBufferSize": 67108864,
"node": "all",
"format": true,
"noFormat": false
}' 'https://198.51.100.1:443/scalemgmt/v2/gpfs/trace/set'
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful. The response code 200 indicates that the command successfully retrieved the information. Error code 400 represents an invalid request and 500 represents an internal server error.
"jobs": [
{
"jobId": 1000000000001,
"status": "RUNNING",
"submitted": "2021-10-27 06:30:42,763",
"completed": "N/A",
"runtime": 18,
],
"status": {
"code": 202,
"message": "The request was accepted for processing."
}
}
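Because the request is accepted with code 202 and the job starts in the RUNNING state, a client typically polls the job until it leaves RUNNING. The following is a minimal polling sketch; the `fetch_job` callback is an assumption standing in for whatever GET request your client uses to look up a job by its jobId (for example, a jobs endpoint of the REST API), and the simulated responses are illustrative only.

```python
# Hypothetical polling loop for the asynchronous job returned by this request.
# fetch_job(job_id) is assumed to return a job dict shaped like the
# response documented above ({"jobId": ..., "status": ...}).
import time

def wait_for_job(fetch_job, job_id, interval=1.0, max_polls=30):
    """Poll until the job leaves the RUNNING state; return its final status."""
    for _ in range(max_polls):
        job = fetch_job(job_id)
        if job["status"] != "RUNNING":
            return job["status"]
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still RUNNING after {max_polls} polls")

# Simulated lookups: RUNNING twice, then COMPLETED.
states = iter(["RUNNING", "RUNNING", "COMPLETED"])
fake_fetch = lambda job_id: {"jobId": job_id, "status": next(states)}
print(wait_for_job(fake_fetch, 1000000000001, interval=0.0))  # prints COMPLETED
```

Injecting the lookup as a callback keeps the loop testable without a live API server.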