Robotic sensing

Robotic sensing is a subarea of robotics intended to give robots sensing capabilities. Robotic sensing lets robots perceive their environments and is typically used as feedback that enables robots to adjust their behavior in response to sensed input. Robot sensing includes the ability to see,[1][2][3] touch,[4][5][6] hear[7] and move,[8][9][10] together with the algorithms that process and make use of environmental feedback and sensory data. Robot sensing is important in applications such as vehicular automation, robotic prosthetics, and industrial, medical, entertainment and educational robots.

Vision

Main pages: Computer vision and Machine vision

Method

Visual sensing systems can be based on a variety of technologies and methods, including cameras, sonar, laser and radio frequency identification (RFID).[1] All four methods aim at three procedures: sensation, estimation, and matching.
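
These three procedures can be pictured as a simple feedback loop. The Python sketch below is purely illustrative: the frame normalization, the coarse 4x4 feature grid and the distance-based matching are invented stand-ins, not methods from the cited literature.

    import numpy as np

    def sense(frame):
        """Sensation: normalize a raw camera frame to [0, 1]."""
        return frame.astype(float) / 255.0

    def estimate(image):
        """Estimation: summarize the scene as a coarse 4x4 intensity grid."""
        h, w = image.shape
        trimmed = image[: h - h % 4, : w - w % 4]
        return trimmed.reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3))

    def match(features, goal_features):
        """Matching: distance to the goal's features; smaller means closer."""
        return np.linalg.norm(features - goal_features)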

Image processing

Image quality is important in applications that require excellent robotic vision. Wavelet-transform-based algorithms for fusing images of different spectra or different foci improve image quality.[2] Robots can gather more accurate information from the resulting fused image.
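
As a rough illustration of wavelet-based fusion, the sketch below merges two aligned grayscale images with PyWavelets; averaging the approximation coefficients and keeping the larger-magnitude detail coefficients is one common fusion rule, assumed here rather than taken from the cited method.

    import numpy as np
    import pywt  # PyWavelets

    def fuse_images(img_a, img_b, wavelet="db2"):
        """Fuse two aligned grayscale images in the wavelet domain."""
        ca, details_a = pywt.dwt2(img_a, wavelet)
        cb, details_b = pywt.dwt2(img_b, wavelet)
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
        fused_details = tuple(pick(a, b) for a, b in zip(details_a, details_b))
        # Average low-frequency content; keep the stronger detail coefficients.
        return pywt.idwt2(((ca + cb) / 2, fused_details), wavelet)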

Usage

Visual sensors help robots identify the surrounding environment and take appropriate action.[3] The robot analyzes the image of the immediate environment from the visual sensor's data and compares the result to an ideal, intermediate or end image, so that appropriate movement or action can be determined to reach the intermediate or final goal.

Touch

Tactile sensing technologies and systems give robots a sense of touch.[11]

Robot skin

Types and examples

Examples of progress in the field of robot skins as of mid-2022 include:

  • a robotic finger covered in a type of manufactured living human skin,[12][13]
  • an electronic skin that gives a robotic hand biological skin-like haptic sensations and touch/pain sensitivity,[14][15]
  • a combined electronic skin and human-machine interface that enables remote tactile perception as well as wearable or robotic sensing of many hazardous substances and pathogens,[16][17]
  • a multilayer, hydrogel-based robot skin with tactile sensing.[18][19]

Tactile discrimination

Signal processing

Touch sensory signals can be generated by the robot's own movements, so for accurate operation it is important to isolate the external tactile signals. Previous solutions employed the Wiener filter, which relies on prior knowledge of signal statistics that are assumed to be stationary. A more recent solution applies an adaptive filter to the robot's logic.[4] It enables the robot to predict the sensor signals resulting from its internal motions and screen these self-generated signals out. The adaptive method improves contact detection and reduces false interpretation.
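
A minimal sketch of this idea follows, using a least-mean-squares (LMS) adaptive filter with the motor command stream as the reference input; the filter length, step size and variable names are assumptions, not details of the cited whisking-robot system.

    import numpy as np

    def cancel_self_signal(motor_ref, sensor, n_taps=16, mu=0.01):
        """Remove the predictable, self-generated part of `sensor`
        using `motor_ref`; returns the external residual signal."""
        w = np.zeros(n_taps)                     # adaptive filter weights
        residual = np.zeros_like(sensor, dtype=float)
        for t in range(n_taps, len(sensor)):
            x = motor_ref[t - n_taps:t][::-1]    # recent reference samples
            e = sensor[t] - w @ x                # unpredicted (external) part
            w += 2 * mu * e * x                  # LMS weight update
            residual[t] = e
        return residual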

Usage

Touch patterns enable robots to interpret human emotions in interactive applications.[20] Four measurable features (force, contact time, repetition, and change in contact area) can effectively categorize touch patterns: a temporal decision tree classifier, which accounts for the time delay, associates them with human emotions with up to 83% accuracy.[5] The Consistency Index[5] is applied at the end to evaluate the system's level of confidence and prevent inconsistent reactions.
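
For illustration only, the sketch below trains a plain (non-temporal) decision tree on the four features named above; the sample touches and labels are invented, and the cited work uses a temporal variant of the classifier.

    from sklearn.tree import DecisionTreeClassifier

    # Feature vector per touch: [force, contact_time, repetition, area_change]
    X = [[2.1, 0.3, 1, 0.05],   # light tap
         [6.8, 1.2, 1, 0.40],   # firm press
         [1.5, 2.5, 6, 0.10]]   # repeated stroking
    y = ["tap", "press", "stroke"]

    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([[2.0, 0.4, 1, 0.06]]))   # likely ['tap'] on this toy data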

Robots use touch signals to map the profile of a surface in a hostile environment such as a water pipe. Traditionally, a predetermined path was programmed into the robot. Now, with the integration of touch sensors, the robot first acquires a random data point; its algorithm[6] then determines the ideal position of the next measurement according to a set of predefined geometric primitives. This improves efficiency by 42%.[5]
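
The following toy heuristic conveys the flavor of choosing the next measurement point, probing wherever the surface estimate is least constrained; it is a hypothetical stand-in, not the primitive-fitting algorithm of the cited work.

    import numpy as np

    def next_probe(candidates, sampled):
        """Greedy coverage: pick the candidate point farthest from all
        previously sampled points (both arrays are N x 3)."""
        d = np.linalg.norm(candidates[:, None, :] - sampled[None, :, :], axis=2)
        return candidates[np.argmax(d.min(axis=1))]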

In recent years, using touch as a stimulus for interaction has been the subject of much study. In 2010, the robot seal PARO was built, which reacts to many stimuli from human interaction, including touch. The therapeutic benefits of such human-robot interaction are still being studied, but have shown very positive results.[21]

Hearing

Signal processing

Accurate audio sensors require a low internal noise contribution. Traditionally, audio sensors have combined acoustical arrays and microphones to reduce the internal noise level. Recent solutions also incorporate piezoelectric devices.[7] These passive devices use the piezoelectric effect to transform force into voltage, so that the vibration causing the internal noise can be eliminated. On average, internal noise can be reduced by up to about 7 dB.[7]
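
As a back-of-envelope illustration of the force-to-voltage transformation, the sketch below uses a simple d33-mode disc model; the charge coefficient and capacitance are typical assumed values, not figures from the cited work.

    # Charge Q = d33 * F; open-circuit voltage V = Q / C.
    d33 = 300e-12   # piezoelectric charge coefficient, C/N (assumed PZT value)
    cap = 1e-9      # device capacitance, F (assumed)

    def piezo_voltage(force_newtons):
        """Open-circuit voltage of a d33-mode disc under axial force."""
        return d33 * force_newtons / cap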

Robots may interpret stray noise as speech instructions. Current voice activity detection (VAD) systems use the complex spectrum circle centroid (CSCC) method and a maximum signal-to-noise ratio (SNR) beamformer.[22] Because humans usually look at their partners when conducting conversations, a VAD system with two microphones enables the robot to locate instructional speech by comparing the signal strengths of the two microphones. Current systems can cope with background noise generated by televisions and sound-producing devices located to the sides.
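
A much-simplified sketch of the two-microphone idea follows: frontal speech excites both microphones about equally, while side sources produce an energy imbalance. The balance tolerance is an invented placeholder, and the real system's CSCC method and beamformer are not modeled here.

    import numpy as np

    def is_frontal_speech(mic_left, mic_right, balance_tol=0.2):
        """Return True when both channels carry similar energy,
        suggesting a source in front of the robot."""
        rms_l = np.sqrt(np.mean(mic_left.astype(float) ** 2))
        rms_r = np.sqrt(np.mean(mic_right.astype(float) ** 2))
        imbalance = abs(rms_l - rms_r) / max(rms_l, rms_r)
        return imbalance < balance_tol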

Usage

Robots can perceive emotions through the way people talk and the associated characteristics and features. Acoustic and linguistic features are generally used to characterize emotions. Combining seven acoustic features and four linguistic features improves recognition performance compared with using only one set of features[23] (a minimal fusion sketch follows the feature lists below).

Acoustic features

Linguistic features

  • Bag of words
  • Part-of-speech
  • Higher semantics
  • Varia
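
The sketch below shows one way to fuse the two feature families (early fusion by concatenation); the acoustic values, word counts, labels and classifier choice are all invented placeholders, not the setup of the cited study.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    acoustic = np.array([[0.7, 210.0], [0.2, 120.0]])  # e.g. energy, mean pitch
    linguistic = np.array([[1, 0, 0], [0, 1, 1]])      # e.g. bag-of-words counts
    X = np.hstack([acoustic, linguistic])              # early feature fusion
    y = ["angry", "neutral"]

    clf = LogisticRegression().fit(X, y)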

Olfaction

Taste

For example, robot cooks may be able to taste food for dynamic cooking.[24]

Motion perception


Usage

Automated robots require a guidance system to determine the ideal path for performing their tasks. At the molecular scale, however, nano-robots lack such guidance systems because individual molecules cannot store complex motions and programs. The only way to achieve motion in such an environment is therefore to replace sensors with chemical reactions. Currently, a molecular spider with one streptavidin molecule as an inert body and three catalytic legs can start, follow, turn and stop as it encounters different DNA origami.[8] Such DNA-based nano-robots can move over 100 nm at a speed of 3 nm/min.[8]

In a TSI operation, an effective way to identify tumors and potentially cancer by measuring the distributed pressure at the sensor's contacting surface, excessive force may inflict damage and risks destroying the tissue. Applying robotic control to determine the ideal path of operation can reduce the maximum forces by 35% and yield a 50% gain in accuracy[9] compared with human doctors.

Performance

Efficient robotic exploration saves time and resources. Efficiency is measured by optimality and competitiveness. Optimal boundary exploration is possible only when a robot has a square sensing area, starts at the boundary, and uses the Manhattan metric.[10] In complicated geometries and settings, a square sensing area is more efficient and achieves better competitiveness regardless of the metric and of the starting point.[10]
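
As a loose illustration of exploration with a square sensing footprint under the Manhattan metric, the toy sweep below covers a rectangular grid row by row; it is not the optimal algorithm analyzed in the cited paper.

    def boustrophedon_sweep(rows, cols):
        """Visit every cell of a rows x cols grid; consecutive cells are
        at Manhattan distance 1, so the path length is rows * cols - 1."""
        path = []
        for r in range(rows):
            cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
            path.extend((r, c) for c in cs)
        return path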

Non-human senses

Robots may not only be equipped with senses of higher sensitivity and capability than all or most[25] non-cyborg humans, such as "seeing" more of the electromagnetic spectrum, including ultraviolet, and with higher fidelity and granularity,[additional citation(s) needed] but may also have additional senses,[additional citation(s) needed] such as sensing magnetic fields (magnetoreception)[26] or various hazardous air components.[17]

Collective sensing and sensemaking

Robots may share,[27] store, and transmit sensory data, as well as data derived from it. They may learn from, or interpret, the same or related data in different ways, and some robots may have remote senses (e.g. without local interpretation, processing or computation, as with common types of telerobotics or with embedded[28] or mobile "sensor nodes").[additional citation(s) needed] Processing of sensory data may include processes such as facial recognition,[29] facial expression recognition,[30] gesture recognition, and the integration of interpretative abstract knowledge.[additional citation(s) needed]

See also

References

  1. Roh SG, Choi HR (Jan 2009). "3-D Tag-Based RFID System for Recognition of Object." IEEE Transactions on Automation Science and Engineering 6 (1): 55–65.
  2. Arivazhagan S, Ganesan L, Kumar TGS (Jun 2009). "A modified statistical approach for image fusion using wavelet transform." Signal, Image and Video Processing 3 (2): 137–144.
  3. Jafar FA, et al (Mar 2011). "An Environmental Visual Features Based Navigation Method for Autonomous Mobile Robots." International Journal of Innovative Computing, Information and Control 7 (3): 1341–1355.
  4. Anderson S, et al (Dec 2010). "Adaptive Cancelation of Self-Generated Sensory Signals in a Whisking Robot." IEEE Transactions on Robotics 26 (6): 1065–1076.
  5. Kim YM, et al (Aug 2010). "A Robust Online Touch Pattern Recognition for Dynamic Human-robot Interaction." IEEE Transactions on Consumer Electronics 56 (3): 1979–1987.
  6. Mazzini F, et al (Feb 2011). "Tactile Robotic Mapping of Unknown Surfaces, with Application to Oil Wells." IEEE Transactions on Instrumentation and Measurement 60 (2): 420–429.
  7. Matsumoto M, Hashimoto S (2010). "Internal Noise Reduction Using Piezoelectric Device under Blind Condition."
  8. Lund K, et al (May 2010). "Molecular robots guided by prescriptive landscapes." Nature 465 (7295): 206–210.
  9. Trejos AL, et al (Sep 2009). "Robot-assisted Tactile Sensing for Minimally Invasive Tumor Localization." International Journal of Robotics Research 28 (9): 1118–1133.
  10. Czyzowicz J, Labourel A, Pelc A (Jan 2011). "Optimality and Competitiveness of Exploring Polygons by Mobile Robots." Information and Computation 209 (1): 74–88.
  11. Dahiya, Ravinder S.; Valle, Maurizio (2013). Robotic Tactile Sensing: Technologies and System. Springer. doi:10.1007/978-94-007-0579-1. ISBN 9789400705784. https://www.springer.com/engineering/robotics/book/978-94-007-0578-4. 
  12. Temming, Maria (9 June 2022). "Scientists grew living human skin around a robotic finger". Science News. https://www.sciencenews.org/article/robotic-finger-human-skin-self-healing. 
  13. Kawai, Michio; Nie, Minghao; Oda, Haruka; Morimoto, Yuya; Takeuchi, Shoji (6 July 2022). "Living skin on a robot" (in English). Matter 5 (7): 2190–2208. doi:10.1016/j.matt.2022.05.019. ISSN 2590-2393. 
  14. Barker, Ross (June 1, 2022). "Artificial skin capable of feeling pain could lead to new generation of touch-sensitive robots" (in en). University of Glasgow. https://techxplore.com/news/2022-06-artificial-skin-capable-pain-touch-sensitive.html. 
  15. Liu, Fengyuan; Deswal, Sweety; Christou, Adamos; Shojaei Baghini, Mahdieh; Chirila, Radu; Shakthivel, Dhayalan; Chakraborty, Moupali; Dahiya, Ravinder (June 2022). "Printed synaptic transistor–based electronic skin for robots to feel and learn" (in en). Science Robotics 7 (67): eabl7286. doi:10.1126/scirobotics.abl7286. ISSN 2470-9476. PMID 35648845. http://eprints.gla.ac.uk/270490/1/270490.pdf. 
  16. Velasco, Emily (June 2, 2022). "Artificial skin gives robots sense of touch and beyond" (in en). California Institute of Technology. https://techxplore.com/news/2022-06-artificial-skin-robots.html. 
  17. Yu, You; Li, Jiahong; Solomon, Samuel A.; Min, Jihong; Tu, Jiaobing; Guo, Wei; Xu, Changhao; Song, Yu et al. (June 1, 2022). "All-printed soft human-machine interface for robotic physicochemical sensing" (in en). Science Robotics 7 (67): eabn0495. doi:10.1126/scirobotics.abn0495. ISSN 2470-9476. PMID 35648844. 
  18. Yirka, Bob (June 9, 2022). "Biomimetic elastomeric robot skin has tactile sensing abilities" (in en). Tech Xplore. https://techxplore.com/news/2022-06-biomimetic-elastomeric-robot-skin-tactile.html. 
  19. Park, K.; Yuk, H.; Yang, M.; Cho, J.; Lee, H.; Kim, J. (8 June 2022). "A biomimetic elastomeric robot skin using electrical impedance and acoustic tomography for tactile sensing" (in en). Science Robotics 7 (67): eabm7187. doi:10.1126/scirobotics.abm7187. ISSN 2470-9476. PMID 35675452. 
  20. http://www.robotcub.org/misc/papers/10_Dahiya_etal.pdf [bare URL PDF]
  21. Archived at Ghostarchive and the Wayback Machine: "Cute Baby Seal Robot - PARO Theraputic Robot #DigInfo". https://www.youtube.com/watch?v=oJq5PQZHU-I. 
  22. Kim HD, et al (2009). "Target Speech Detection and Separation for Communication with Humanoid Robots in Noisy Home Environments." Advanced Robotics 23 (15): 2093–2111.
  23. Batliner A, et al (Jan 2011). "Searching for the most important feature types signalling emotion-related user states in speech." Computer Speech and Language 25 (1): 4–28.
  24. Sochacki, Grzegorz; Abdulali, Arsen; Iida, Fumiya (2022). "Mastication-Enhanced Taste-Based Classification of Multi-Ingredient Dishes for Robotic Cooking". Frontiers in Robotics and AI 9: 886074. doi:10.3389/frobt.2022.886074. ISSN 2296-9144. PMID 35603082. 
  25. "Super seers: why some people can see ultraviolet light". New Scientist. 4 December 2019. https://www.newscientist.com/lastword/mg24432591-000-super-seers-why-some-people-can-see-ultraviolet-light/. 
  26. Cañón Bermúdez, Gilbert Santiago; Fuchs, Hagen; Bischoff, Lothar; Fassbender, Jürgen; Makarov, Denys (November 2018). "Electronic-skin compasses for geomagnetic field-driven artificial magnetoreception and interactive electronics" (in en). Nature Electronics 1 (11): 589–595. doi:10.1038/s41928-018-0161-6. ISSN 2520-1131. 
  27. Varadharajan, Vivek Shankar; St-Onge, David; Adams, Bram; Beltrame, Giovanni (1 March 2020). "SOUL: data sharing for robot swarms" (in en). Autonomous Robots 44 (3): 377–394. doi:10.1007/s10514-019-09855-2. ISSN 1573-7527. https://espace2.etsmtl.ca/id/eprint/18906/1/St-Onge%20D%202019%2018906.pdf. 
  28. Scholl, Philipp M.; Brachmann, Martina; Santini, Silvia; Van Laerhoven, Kristof (2014). "Integrating Wireless Sensor Nodes in the Robot Operating System" (in en). Cooperative Robots and Sensor Networks 2014. Studies in Computational Intelligence. 554. Springer. pp. 141–157. doi:10.1007/978-3-642-55029-4_7. ISBN 978-3-642-55028-7. 
  29. Vincent, James (14 November 2019). "Security robots are mobile surveillance devices, not human replacements" (in en). The Verge. https://www.theverge.com/2019/11/14/20964584/knightscope-security-robot-guards-surveillance-devices-facial-recognition-numberplate-mobile-phone. 
  30. Melinte, Daniel Octavian; Vladareanu, Luige (23 April 2020). "Facial Expressions Recognition for Human–Robot Interaction Using Deep Convolutional Neural Networks with Rectified Adam Optimizer". Sensors 20 (8): 2393. doi:10.3390/s20082393. PMID 32340140. Bibcode2020Senso..20.2393M. 

External links