Filesystems/{filesystemName}/watch: PUT
Enables or disables clustered watch for a file system.
Availability
Available on all IBM Storage Scale editions.
Description
The PUT filesystems/{filesystemName}/watch request enables or disables clustered file system watch. For more information about the fields in the returned data structures, see the mmwatch command.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/watch
where
- filesystems/filesystemName
- The file system for which clustered watch is to be enabled or disabled.
- watch
- Specifies the action to be performed on the file system.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the
request:
| Parameter name | Description and applicable keywords | Required/optional |
| --- | --- | --- |
| filesystemName | The file system name. You can also use keywords such as :all:, :all_local:, or :all_remote:. | Required |
| body | Body of the request that contains the required parameters to be passed on to the IBM Storage Scale system to perform the requested operation. | Required |
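As a minimal sketch, the request URL can be assembled from its parts. The host name and port below are placeholders for illustration, not values defined by this documentation.

```python
# Sketch: building the watch request URL. "server" and "port" are
# placeholders; "filesystem" may be a file system name such as "gpfs0"
# or one of the keywords :all:, :all_local:, or :all_remote:.
def watch_url(server: str, port: int, filesystem: str) -> str:
    return (f"https://{server}:{port}/scalemgmt/v2/"
            f"filesystems/{filesystem}/watch")

print(watch_url("api.example.com", 443, "gpfs0"))
```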
Request data
{
"action": "Enable or disable file system watch",
"watchfolderConfig":
{
"description": "Description",
"eventTypes": "Event types",
"eventHandler": "Event handler",
"sinkBrokers": "Sink broker",
"sinkTopic": "Sink topic",
"sinkAuthConfig": "Sink authentication config"
},
"watchId": "Watch ID"
}
For more information about the fields in the following data structures, see the links at the end of this topic.
- "action": "Enable or disable file system watch"
- Whether to enable or disable file system watch. Possible values are
enable | disable
. - "watchfolderConfig":
-
- "description": "Description"
- Description of the clustered watch.
- "eventTypes": "Event types"
- Types of events that need to be watched.
- "eventHandler": "Event handler"
- Type of event handler. Only Kafkasink is supported as the evens handler.
- "sinkBrokers": "Sink broker"
- Includes a comma-separated list of broker:port pairs for the sink Kafka cluster, which is the external Kafka cluster where the events are sent.
- "sinkTopic": "Sink topic"
- The topic that producers write to in the sink Kafka cluster.
- "sinkAuthConfig": "Sink authentication config"
- The path to the sink authentication configuration file.
- watchId": "Watch ID"
- Remote cluster name from where the file system must be watched.
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this topic.
- "jobs":
- An array of elements that describe jobs. Each element describes one job.
- "status":
- Return status.
- "message": "ReturnMessage",
- The return message.
- "code": ReturnCode
- The return code.
- "result"
-
- "commands":"String'
- Array of commands that are run in this job.
- "progress":"String'
- Progress information for the request.
- "exitCode":"Exit code"
- Exit code of command. Zero is success, nonzero denotes failure.
- "stderr":"Error"
- CLI messages from stderr.
- "stdout":"String"
- CLI messages from stdout.
- "request"
-
- "type":"{GET | POST | PUT | DELETE}"
- HTTP request type.
- "url":"URL"
- The URL through which the job is submitted.
- "data":" "
- Optional.
- "jobId":"ID",
- The unique ID of the job.
- "submitted":"Time"
- The time at which the job was submitted.
- "completed":"Time"
- The time at which the job was completed.
- "status":"RUNNING | COMPLETED | FAILED"
- Status of the job.
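The response fields above can be read programmatically. The following sketch extracts the overall return code and each job's ID, status, and exit code from a response shaped like the template in this topic; the sample document uses placeholder values, not real command output.

```python
# Sketch: summarizing a response shaped like the template above.
import json

def summarize(response_text: str):
    doc = json.loads(response_text)
    code = doc["status"]["code"]
    # One tuple per job: (jobId, job status, command exit code)
    jobs = [(j["jobId"], j["status"], j["result"]["exitCode"])
            for j in doc.get("jobs", [])]
    return code, jobs

# Placeholder sample following the documented response structure
sample = json.dumps({
    "status": {"code": "200", "message": "..."},
    "jobs": [{
        "result": {"commands": "", "progress": "", "exitCode": "0",
                   "stderr": "", "stdout": ""},
        "request": {"type": "PUT",
                    "url": "/scalemgmt/v2/filesystems/gpfs0/watch",
                    "data": ""},
        "jobId": "12345",
        "submitted": "2016-11-14 10.35.56",
        "completed": "2016-11-14 10.35.56",
        "status": "COMPLETED",
    }],
})
print(summarize(sample))
```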
Examples
The following example shows how to enable clustered file system watch for the file system gpfs0.
Request data:
{
  "action": "enable",
  "watchfolderConfig": {
    "description": "description",
    "eventTypes": "IN_ACCESS,IN_ATTRIB",
    "eventHandler": "kafkasink",
    "sinkBrokers": "Broker1:Port,Broker2:Port",
    "sinkTopic": "topic",
    "sinkAuthConfig": "/mnt/gpfs0/sink-auth-config"
  },
  "watchId": "string"
}
Corresponding request URL:
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' \
  --header 'accept:application/json' \
  -d '{
    "action": "enable",
    "watchfolderConfig": {
      "description": "description",
      "eventTypes": "IN_ACCESS,IN_ATTRIB",
      "eventHandler": "kafkasink",
      "sinkBrokers": "Broker1:Port,Broker2:Port",
      "sinkTopic": "topic",
      "sinkAuthConfig": "/mnt/gpfs0/sink-auth-config"
    },
    "watchId": "string"
  }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/watch'
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful. The response code 200 indicates that the command successfully retrieved the information. Error code 400 represents an invalid request and 500 represents an internal server error.
{
  "status": {
    "code": "200",
    "message": "..."
  },
  "jobs": [
    {
      "result": {
        "commands": "['mmcrfileset gpfs0 restfs1001', ...]",
        "progress": "['(2/3) Linking fileset']",
        "exitCode": "0",
        "stderr": "['EFSSG0740C There are not enough resources available to create a new independent file set.', ...]",
        "stdout": "['EFSSG4172I The file set {0} must be independent.', ...]"
      },
      "request": {
        "type": "PUT",
        "url": "/scalemgmt/v2/filesystems/gpfs0/watch",
        "data": "nodesDesc: ['mari-16:manager-quorum', 'mari-17::mari-17_admin']"
      },
      "jobId": "12345",
      "submitted": "2016-11-14 10.35.56",
      "completed": "2016-11-14 10.35.56",
      "status": "COMPLETED"
    }
  ]
}