Execute using multiple GPUs

After analyzing the error while trying to run this on my own GPU, I found that this imageai library/code is trying to use multiple GPUs. Is there any way to specify that only one GPU (and the CPU) should be used? The error is shown below:

2019-12-04 10:39:23.543430: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
/job:localhost/replica:0/task:0/device:GPU:0
/job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:1' assigned_device_name_='' resource_device_name_='/device:GPU:1' supported_device_types_=[GPU, CPU] possible_devices_=[]
VariableV2: GPU CPU
Assign: GPU CPU
Identity: GPU CPU
AssignAdd: GPU CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
replica_1_1/model_4/yolo_layer_6/Variable (VariableV2) /device:GPU:1
replica_1_1/model_4/yolo_layer_6/Variable/Assign (Assign) /device:GPU:1
replica_1_1/model_4/yolo_layer_6/Variable/read (Identity) /device:GPU:1
replica_1_1/model_4/yolo_layer_6/AssignAdd (AssignAdd) /device:GPU:1

2019-12-04 10:39:23.611797: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
/job:localhost/replica:0/task:0/device:GPU:0
/job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:1' assigned_device_name_='' resource_device_name_='/device:GPU:1' supported_device_types_=[GPU, CPU] possible_devices_=[]
VariableV2: GPU CPU
Assign: GPU CPU
Identity: GPU CPU
AssignAdd: GPU CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
replica_1_1/model_4/yolo_layer_5/Variable (VariableV2) /device:GPU:1
replica_1_1/model_4/yolo_layer_5/Variable/Assign (Assign) /device:GPU:1
replica_1_1/model_4/yolo_layer_5/Variable/read (Identity) /device:GPU:1
replica_1_1/model_4/yolo_layer_5/AssignAdd (AssignAdd) /device:GPU:1

2019-12-04 10:39:23.667680: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
/job:localhost/replica:0/task:0/device:GPU:0
/job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:1' assigned_device_name_='' resource_device_name_='/device:GPU:1' supported_device_types_=[GPU, CPU] possible_devices_=[]
VariableV2: GPU CPU
Assign: GPU CPU
Identity: GPU CPU
AssignAdd: GPU CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
replica_1_1/model_4/yolo_layer_4/Variable (VariableV2) /device:GPU:1
replica_1_1/model_4/yolo_layer_4/Variable/Assign (Assign) /device:GPU:1
replica_1_1/model_4/yolo_layer_4/Variable/read (Identity) /device:GPU:1
replica_1_1/model_4/yolo_layer_4/AssignAdd (AssignAdd) /device:GPU:1


@aqiff12 You can specify how many GPUs are used by calling the function setGpuUsage before your .trainModel call (see the sketch below), passing either of the following values:

  • the number of GPUs you want to use (int)
  • a list of integers giving the ids of the GPUs to be used
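
For example, a minimal sketch of what that could look like with imageai's custom YOLOv3 detection trainer (the DetectionModelTrainer setup, dataset directory, and training parameters below are illustrative placeholders, not taken from the original post):

from imageai.Detection.Custom import DetectionModelTrainer

trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory="my_dataset")  # placeholder dataset directory

# Restrict training to a single GPU, per the description above:
trainer.setGpuUsage(1)       # an int: number of GPUs to use
# trainer.setGpuUsage([0])   # or a list of GPU ids, e.g. only GPU 0

trainer.setTrainConfig(object_names_array=["my_object"],  # placeholder class name
                       batch_size=4,
                       num_experiments=100,
                       train_from_pretrained_model="pretrained-yolov3.h5")
trainer.trainModel()

If the process still tries to place ops on the second GPU, a general TensorFlow/CUDA-level workaround (independent of imageai) is to hide that GPU entirely by setting the CUDA_VISIBLE_DEVICES environment variable (e.g. CUDA_VISIBLE_DEVICES=0) before launching the script, so that only one GPU plus the CPU is visible to TensorFlow.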