ResourceExhaustedError: OOM error when training detection Model

Hi, my name is Choi Eo Jin, and I am from South Korea.

I am using your library now :slight_smile:

Training code:

from imageai.Detection.Custom import DetectionModelTrainer

trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory="Kim")
trainer.setTrainConfig(object_names_array=["Kim"], batch_size=4, num_experiments=200, train_from_pretrained_model="pretrained-yolov3.h5")
trainer.trainModel()

But when I run the training code,
I get this error message:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\pycharm\PyCharm Community Edition 2019.2.1\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "D:\pycharm\PyCharm Community Edition 2019.2.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "C:/Users/User/PycharmProjects/CAPSTONE/detection_training.py", line 10, in <module>
    trainer.trainModel()
  File "C:\Users\User\PycharmProjects\CAPSTONE\venv\lib\site-packages\imageai\Detection\Custom\__init__.py", line 301, in trainModel
    max_queue_size=8
  File "C:\Users\User\PycharmProjects\CAPSTONE\venv\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\User\PycharmProjects\CAPSTONE\venv\lib\site-packages\keras\engine\training.py", line 1658, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\User\PycharmProjects\CAPSTONE\venv\lib\site-packages\keras\engine\training_generator.py", line 215, in fit_generator
    class_weight=class_weight)
  File "C:\Users\User\PycharmProjects\CAPSTONE\venv\lib\site-packages\keras\engine\training.py", line 1449, in train_on_batch
    outputs = self.train_function(ins)
  File "C:\Users\User\PycharmProjects\CAPSTONE\venv\lib\site-packages\keras\backend\tensorflow_backend.py", line 2979, in __call__
    return self._call(inputs)
  File "C:\Users\User\PycharmProjects\CAPSTONE\venv\lib\site-packages\keras\backend\tensorflow_backend.py", line 2937, in _call
    fetched = self._callable_fn(*array_vals)
  File "C:\Users\User\PycharmProjects\CAPSTONE\venv\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
    run_metadata_ptr)
  File "C:\Users\User\PycharmProjects\CAPSTONE\venv\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[2,26,26,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
  [[{{node replica_1/model_1/leaky_42/LeakyRelu}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
  [[{{node replica_0/model_1/bnorm_62/FusedBatchNorm}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

In this situation, how can I fix it?

Thank you.

@chldjwls123 What type of GPU does your machine have? You are experiencing this issue because the batch_size=4 set in your code exceeds the capacity of your GPU memory, as you can see in the error line tensorflow.python.framework.errors_impl.ResourceExhaustedError.

Change the batch_size to 2 and let me know if the problem persists.
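To see why a smaller batch helps, here is a rough back-of-the-envelope sketch (not TensorFlow's exact memory accounting): the activation tensor named in the error has shape [batch, 26, 26, 512] in float32, and its footprint scales linearly with batch size. YOLOv3 keeps many such activations (plus gradients) alive per step, so halving the batch roughly halves activation memory:

```python
# Rough footprint of one activation tensor from the error message:
# shape [batch, 26, 26, 512], float32 (4 bytes per element).
def activation_bytes(batch_size, height=26, width=26, channels=512, dtype_bytes=4):
    return batch_size * height * width * channels * dtype_bytes

for bs in (4, 2, 1):
    mib = activation_bytes(bs) / (1024 ** 2)
    print(f"batch_size={bs}: {mib:.2f} MiB for this single tensor")
```

Multiply that by the hundreds of layers in YOLOv3 and it is easy to exhaust a small GPU at batch_size=4.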


Thanks for all your help :slight_smile:

Actually, I have another question about live-stream video detection using a webcam.

If I want to use a custom model, can I just use the code below, or do I need to change something?


from imageai.Detection import VideoObjectDetection
import os
import cv2

execution_path = os.getcwd()

camera = cv2.VideoCapture(0)

detector = VideoObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join(execution_path, "resnet50_coco_best_v2.0.1.h5"))
detector.loadModel()

video_path = detector.detectObjectsFromVideo(
    camera_input=camera,
    output_file_path=os.path.join(execution_path, "camera_detected_video"),
    frames_per_second=20, log_progress=True, minimum_percentage_probability=40)

To detect and analyze objects in video using your custom detection model, use the CustomVideoObjectDetection class:

from imageai.Detection.Custom import CustomVideoObjectDetection

See the link below for full documentation and sample code.

Thank you again!

I want to build live-stream video detection.
But this code's output is a saved file.
I don't want a saved one.
I just want to see the output in real time, as soon as I press the "run" button. :disappointed_relieved:

  • Ensure you run your detection code on a system with an NVIDIA GPU and Tensorflow-GPU 1.13.1 installed
  • Adapt the per_second function in the sample code linked below to retrieve each detected frame of your video and display it in a Matplotlib or OpenCV window

Thank you! :slight_smile:
