The article can be downloaded here: AES E-Library
The article is also freely available under the AES "Green" Open Access policy: ResearchGate / Research portal of Aalto University
Whilst room acoustic measurements can accurately capture the sound field of real rooms, they are usually time-consuming and tedious if many positions need to be measured. Therefore, this contribution presents the Autonomous Robot Twin System for Room Acoustic Measurements (ARTSRAM) to autonomously capture large sets of room impulse responses with variable sound source and receiver positions. The proposed implementation of the system consists of two robots, one of which is equipped with a loudspeaker, while the other one is equipped with a microphone array. Each robot contains collision sensors, thus enabling it to move autonomously within the room. The robots move according to a random walk procedure to ensure a large variability between measured positions. A tracking system provides position data matching the respective measurements. After outlining the robot system, this paper presents a validation in which the anechoic responses of the robots are analyzed and the movement paths resulting from the random walk procedure are investigated. Additionally, the quality of the obtained room impulse responses is demonstrated with a sound field visualization. In summary, the evaluation of the robot system indicates that large sets of diverse and high-quality room impulse responses can be captured with the system in an automated way. Such large sets of measurements will benefit research in the fields of room acoustics and acoustic virtual reality.
The Autonomous Robot Twin System for Room Acoustic Measurements (ARTSRAM) is capable of measuring room impulse responses (RIRs) with variable sound source and receiver positions. It consists of two independent robots that are able to move freely in a room. Both robots are equipped with collision sensors, thus allowing them to explore the room autonomously. The measurements of RIRs are complemented with corresponding position information of the robots.
The base for each robot is an iRobot Create 2 Roomba robot, which is controlled by a Raspberry Pi single-board computer. Second-generation HTC Vive trackers are used to track the position of the robots. Although it would be possible to mount several microphone arrays and loudspeakers on each of the robots to enable multi-way RIR measurements, we chose to clearly distinguish between a source and a receiver robot in this paper. The source robot uses a Minirig MRBT-2 portable loudspeaker to play back excitation signals, which are recorded by the receiver robot with a Zoom H3-VR first-order microphone array. The resulting RIRs are stored on a microSD card inserted into the receiver robot's Raspberry Pi.
The entire measurement procedure is implemented in Python. It is controlled by a main measurement script running on a separate measurement laptop. The main measurement script sends commands over a TCP network socket to the Raspberry Pis of the source and receiver robot. Subsequently, the corresponding server scripts running on the Raspberry Pis handle the commands. For example, the server scripts can trigger robot movements or an RIR measurement. The HTC Vive trackers directly communicate with the measurement laptop over another wireless connection, thus allowing the main measurement script to immediately access and store the positions of both robots.
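The command exchange between the measurement laptop and the robots' Raspberry Pis can be sketched with a minimal TCP client/server pair. The paper does not specify the wire format or the command names, so the newline-terminated text commands below (`MOVE`, `MEASURE`, `STOP`) and the `OK`/`ERR` replies are assumptions for illustration only; the demo runs over localhost rather than the robots' wireless network.

```python
import socket
import threading

# Hypothetical command set; the actual protocol of ARTSRAM is not published here.
COMMANDS = {"MOVE", "MEASURE", "STOP"}

def robot_server(sock):
    """Minimal server loop, as a server script on a robot's Raspberry Pi might run it.

    Accepts one connection, reads one newline-terminated command,
    and answers with OK or ERR.
    """
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024).decode().strip()
        reply = "OK" if data and data.split()[0] in COMMANDS else "ERR"
        conn.sendall((reply + "\n").encode())

def send_command(host, port, command):
    """Client side, as the main measurement script on the laptop might send commands."""
    with socket.create_connection((host, port)) as s:
        s.sendall((command + "\n").encode())
        return s.makefile().readline().strip()

# Demo: start a throwaway server on localhost and send it one command.
server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server_sock.listen(1)
port = server_sock.getsockname()[1]
t = threading.Thread(target=robot_server, args=(server_sock,))
t.start()
print(send_command("127.0.0.1", port, "MEASURE"))
t.join()
server_sock.close()
```

In the real system, one such server script would run persistently on each robot, dispatching movement or measurement routines instead of merely acknowledging the command.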
This video shows an exemplary measurement. Please note that the movement parameters (i.e., maximum spin time, maximum drive time, and drive backwards time) differ from those described in the paper.
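The random walk behind the movement parameters above can be simulated in 2-D. The real robots time their spin, drive, and backwards phases; mapping those timed phases to angles and distances, as well as the room dimensions and parameter values used below, are assumptions made purely for this sketch.

```python
import math
import random

def random_walk(n_steps, room=(6.0, 8.0), seed=1,
                max_spin=math.pi, max_drive=1.0, backup=0.2):
    """Simulate a spin-then-drive random walk inside a rectangular room.

    max_spin / max_drive / backup stand in for the timed movement
    parameters (maximum spin time, maximum drive time, drive backwards
    time); all values here are illustrative placeholders.
    """
    rng = random.Random(seed)
    w, h = room
    x, y = w / 2.0, h / 2.0      # start in the center of the room
    heading = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        heading += rng.uniform(-max_spin, max_spin)   # random spin phase
        d = rng.uniform(0.0, max_drive)               # random drive phase
        nx = x + d * math.cos(heading)
        ny = y + d * math.sin(heading)
        if not (0.0 <= nx <= w and 0.0 <= ny <= h):
            # a wall hit would trigger the collision sensor:
            # back up a short distance instead of driving forward
            nx = min(max(x - backup * math.cos(heading), 0.0), w)
            ny = min(max(y - backup * math.sin(heading), 0.0), h)
        x, y = nx, ny
        path.append((x, y))
    return path

path = random_walk(330)
print(f"visited {len(path)} positions, all inside the room")
```

Each visited position would correspond to one RIR measurement with its tracked coordinates, which is how a set such as the 330 measurements shown below could be accumulated.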
This video shows a visualization of the sound energy inside a room, based on 330 ARTSRAM measurements. The measurement procedure followed the proposed random walk. The static sound source is depicted in orange. The receiver positions are omitted for clarity, but they can be found in the figure below.
Source and receiver positions during the measurements that were used to produce the animation above.