Region Based Convolutional Neural Networks

Region-based Convolutional Neural Networks (R-CNN) are a family of machine learning models for computer vision and specifically object detection.

History

The original goal of R-CNN was to take an input image and produce a set of bounding boxes as output, where each bounding box contains an object together with the category of that object (e.g. car or pedestrian). More recently, R-CNN has been extended to perform other computer vision tasks. The following covers some of the versions of R-CNN that have been developed.

  • November 2013: R-CNN. Given an input image, R-CNN begins by applying a mechanism called Selective Search to extract regions of interest (ROIs), where each ROI is a rectangle that may represent the boundary of an object in the image. Depending on the scenario, there may be as many as two thousand ROIs. Each ROI is then fed through a convolutional neural network to produce output features, and for each ROI's output features a collection of support-vector machine classifiers is used to determine what type of object (if any) is contained within the ROI; a code sketch of this pipeline follows the list.[1]
  • April 2015: Fast R-CNN. While the original R-CNN independently computed the neural network features on each of as many as two thousand regions of interest, Fast R-CNN runs the neural network once on the whole image. At the end of the network is a novel method called ROIPooling, which slices each ROI out of the network's output tensor, pools it to a fixed shape, and classifies it (ROIPooling is illustrated in a sketch following this list). As in the original R-CNN, Fast R-CNN uses Selective Search to generate its region proposals.[2]
  • June 2015: Faster R-CNN. While Fast R-CNN used Selective Search to generate ROIs, Faster R-CNN integrates the ROI generation into the neural network itself by means of a region proposal network.[2]
  • March 2017: Mask R-CNN. While previous versions of R-CNN focused on object detection, Mask R-CNN adds instance segmentation. Mask R-CNN also replaced ROIPooling with a new method called ROIAlign, which can represent fractions of a pixel; both ROIAlign and a pretrained Mask R-CNN appear in the sketches following this list.[3][4]
  • June 2019: Mesh R-CNN adds the ability to generate a 3D mesh from a 2D image.[5]
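
Below is a minimal Python sketch of the original R-CNN pipeline described above, intended as an illustration rather than a reference implementation: selective_search is a placeholder for any region-proposal routine, the AlexNet backbone from torchvision stands in for the network used in the paper, and scikit-learn's LinearSVC stands in for the per-class SVMs (the weights argument assumes a recent torchvision release).

    # Illustrative sketch of the original R-CNN pipeline (not a reference implementation).
    import torch
    import torchvision
    from torchvision import transforms
    from sklearn.svm import LinearSVC

    backbone = torchvision.models.alexnet(weights="DEFAULT").features  # CNN feature extractor
    backbone.eval()

    warp = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize((227, 227)),   # every ROI is warped to a fixed input size
        transforms.ToTensor(),
    ])

    def rcnn_features(image, rois):
        """Crop each ROI from the image, warp it, and run the CNN on it independently."""
        feats = []
        with torch.no_grad():
            for (x1, y1, x2, y2) in rois:
                crop = image[:, y1:y2, x1:x2]          # image is a CHW uint8 tensor
                feats.append(backbone(warp(crop).unsqueeze(0)).flatten(1))
        return torch.cat(feats)                        # one feature vector per ROI

    # rois = selective_search(image)       # placeholder: up to ~2000 (x1, y1, x2, y2) proposals
    # X = rcnn_features(image, rois).numpy()
    # svm = LinearSVC().fit(X_labeled, y_labeled)      # in R-CNN, one such SVM per object class
    # scores = svm.decision_function(X)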
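
The pooling step that separates Fast R-CNN from Mask R-CNN can be illustrated with the roi_pool and roi_align operations shipped in torchvision.ops. The feature map, proposal box, and assumed 800-pixel image size below are synthetic values chosen only for the example.

    # Sketch contrasting ROIPooling (Fast R-CNN) with ROIAlign (Mask R-CNN),
    # using the implementations in torchvision.ops.
    import torch
    from torchvision.ops import roi_pool, roi_align

    feature_map = torch.randn(1, 256, 50, 50)   # the CNN is run once on the whole image
    # One region proposal: (batch index, x1, y1, x2, y2) in image coordinates.
    boxes = torch.tensor([[0.0, 120.4, 80.7, 360.2, 250.9]])
    scale = 50 / 800                            # maps an assumed 800-pixel image onto the 50-cell map

    # ROIPooling snaps the box to whole feature-map cells before max-pooling to 7x7.
    pooled = roi_pool(feature_map, boxes, output_size=(7, 7), spatial_scale=scale)

    # ROIAlign keeps the fractional coordinates and samples with bilinear interpolation.
    aligned = roi_align(feature_map, boxes, output_size=(7, 7), spatial_scale=scale)

    print(pooled.shape, aligned.shape)          # both: torch.Size([1, 256, 7, 7])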
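
Faster R-CNN and Mask R-CNN are also available as pretrained models in torchvision. The sketch below runs a pretrained Mask R-CNN on a random stand-in image; the weights argument again assumes a recent torchvision release (older releases use pretrained=True).

    # Running a pretrained Mask R-CNN from torchvision; Faster R-CNN is available
    # analogously via fasterrcnn_resnet50_fpn.
    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    model = maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = torch.rand(3, 600, 800)             # stand-in for an RGB image scaled to [0, 1]
    with torch.no_grad():
        (prediction,) = model([image])

    # Each prediction holds bounding boxes, class labels, confidence scores,
    # and per-instance segmentation masks.
    print(prediction["boxes"].shape, prediction["scores"].shape, prediction["masks"].shape)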

Applications

Region-based convolutional neural networks have been used for tracking objects from a drone-mounted camera,[6] locating text in an image,[7] and enabling object detection in Google Lens.[8] Mask R-CNN serves as one of seven tasks in the MLPerf Training Benchmark, which is a competition to speed up the training of neural networks.[9]

References

  1. Gandhi, Rohith (July 9, 2018). "R-CNN, Fast R-CNN, Faster R-CNN, YOLO — Object Detection Algorithms". Towards Data Science. https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e. 
  2. Bhatia, Richa (September 10, 2018). "What is region of interest pooling?". Analytics India. https://analyticsindiamag.com/what-is-region-of-interest-pooling/. 
  3. Farooq, Umer (February 15, 2018). "From R-CNN to Mask R-CNN". Medium. https://medium.com/@umerfarooq_26378/from-r-cnn-to-mask-r-cnn-d6367b196cfd. 
  4. Weng, Lilian (December 31, 2017). "Object Detection for Dummies Part 3: R-CNN Family". Lil'Log. https://lilianweng.github.io/lil-log/2017/12/31/object-recognition-for-dummies-part-3.html. 
  5. Wiggers, Kyle (October 29, 2019). "Facebook highlights AI that converts 2D objects into 3D shapes". VentureBeat. https://venturebeat.com/2019/10/29/facebook-highlights-ai-that-converts-2d-objects-into-3d-shapes/. 
  6. Nene, Vidi (Aug 2, 2019). "Deep Learning-Based Real-Time Multiple-Object Detection and Tracking via Drone". Drone Below. https://dronebelow.com/2019/08/02/deep-learning-based-real-time-multiple-object-detection-and-tracking-via-drone/. 
  7. Ray, Tiernan (Sep 11, 2018). "Facebook pumps up character recognition to mine memes". ZDnet. https://www.zdnet.com/article/facebook-pumps-up-character-recognition-to-mine-memes/. 
  8. Sagar, Ram (Sep 9, 2019). "These machine learning methods make google lens a success". Analytics India. https://analyticsindiamag.com/these-machine-learning-techniques-make-google-lens-a-success/. 
  9. "MLPerf Training Benchmark". 2019. arXiv:1910.01500v3 [math.LG].