Ritsumeikan University
  • Offer Profile
  • We investigate machine intelligence based on mechanics and related technologies, including sensors and actuators.
Product Portfolio
  • Hirai Lab: Lab for integrated machine intelligence

  • Current Projects

    • Soft-fingered Manipulation

    • The goal of this research is to perform dexterous and stable object manipulation using soft-fingered mechanical hands. The location of the manipulated object, measured by a real-time vision system, and the grasping force, measured by a tactile sensor, are fed back to the hand motion to realize stable grasping and manipulation. We are interested in the modeling of soft fingertips and in control laws for grasping and manipulation.
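A minimal sketch of the kind of feedback loop described above, reduced to one dimension. The gains, the desired force, and the function interface are illustrative assumptions, not the lab's actual controller.

```python
# One control cycle of the vision + tactile feedback loop, 1-D for clarity.
K_POSE = 0.5      # gain on the object-position error (assumed)
K_FORCE = 0.02    # gain on the grasping-force error (assumed)
F_DESIRED = 1.5   # desired grasping force in newtons (assumed)

def control_step(x_obj, x_des, f_grip, finger_pos, finger_gap):
    """Update the finger commands from one vision and one tactile measurement."""
    pose_error = x_des - x_obj         # object location fed back from the real-time vision system
    force_error = F_DESIRED - f_grip   # grasping force fed back from the tactile sensor
    finger_pos += K_POSE * pose_error     # move the grasped object toward the goal
    finger_gap -= K_FORCE * force_error   # squeeze harder when the grasp is too loose
    return finger_pos, finger_gap

# Example cycle: the object is 2 mm short of the goal and the grasp is slightly loose.
print(control_step(x_obj=0.048, x_des=0.050, f_grip=1.2, finger_pos=0.048, finger_gap=0.015))
```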
    • Non-Uniform Biological Object Modeling

    • This research establishes a method to build deformation models of non-uniform biological objects based on internal measurements. We obtain the deformation field inside the object using CT and MRI in order to estimate the non-uniform deformation parameters.
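A toy illustration of the estimation idea, assuming a 1-D compression test and a linear elastic model for each region; the actual work estimates parameters from full 3-D deformation fields measured by CT and MRI.

```python
import numpy as np

# The object is divided into regions along the loading axis; the imaged
# deformation field gives each region's strain, and a per-region elastic
# parameter is recovered from the known applied stress.
applied_stress = 2.0e3                                 # Pa, assumed known external load
region_strain = np.array([0.05, 0.12, 0.08, 0.20])     # strains read off the deformation field

# Linear elastic assumption per region: stress = E_i * strain_i,
# so the non-uniform parameter field follows directly.
youngs_modulus = applied_stress / region_strain
print(youngs_modulus)   # stiffer regions deform less, softer regions more
```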
    • Tensegrity Robots

    • In this research, we investigate a robot that moves over terrain via the deformation of a tensegrity structure. The body consists of rigid elements connected by tensional members, and deformation of the tensegrity structure yields locomotion over terrain.
       
    • Crawling and Jumping Soft Robots

    • In this research, we develop a robot capable of rough-terrain locomotion by rolling and jumping. A robot consisting of a deformable soft body and flexible actuators can roll and jump on the ground through the deformation of its body.
    • Belt Object Manipulation

    • The goal of this research is to realize the manipulation of belt objects such as flat cables and flexible circuit boards. Deformation properties are estimated through visual observation of the object's deformation and used to determine the trajectory of the manipulator handling the object.
    • CMOS+FPGA Vision

    • We are developing a CMOS+FPGA vision system that performs fast (1,000 fps) and high-resolution (1,000 x 1,000 pixels) visual feedback. A CMOS image sensor captures successive images at high speed, and an FPGA on which the vision algorithms are implemented computes the feedback features in real time.
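A back-of-the-envelope check, using only the figures quoted above (the 8-bit pixel depth is an assumption), of why the feature extraction is placed in FPGA logic next to the sensor rather than on a CPU:

```python
width, height, fps = 1000, 1000, 1000   # resolution and frame rate quoted above
bits_per_pixel = 8                       # assumed raw output depth
pixel_rate = width * height * fps        # pixels per second
data_rate_gbit = pixel_rate * bits_per_pixel / 1e9
print(f"{pixel_rate:.1e} pixel/s, {data_rate_gbit:.0f} Gbit/s raw, {1e3 / fps:.0f} ms per frame")
```

Roughly 1 Gpixel/s (about 8 Gbit/s raw) with a 1 ms budget per frame leaves little room for streaming images to a general-purpose processor, which motivates hard-wired, pixel-parallel feature computation in the FPGA.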
    • Micro Parts Feeding

    • The goal of this project is to realize vibratory feeding of micro electric parts such as chip capacitors and resistors. We combine an asymmetric (saw-tooth) surface with symmetric (sinusoidal) vibration to realize one-directional motion of the micro parts, and we are analyzing the dynamics of the parts during feeding.
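A toy point-mass simulation of this mechanism, in which the saw-tooth surface is represented only by a direction-dependent friction coefficient; all parameter values are illustrative assumptions, and stick phases are ignored.

```python
import numpy as np

# The part mass cancels out of the Coulomb friction acceleration, so it is omitted.
g = 9.81                     # gravity [m/s^2]
A, f = 20e-6, 200.0          # symmetric sinusoidal vibration: amplitude [m], frequency [Hz] (assumed)
mu_fwd, mu_back = 0.2, 0.6   # friction differs with sliding direction because of the saw-tooth (assumed)
dt, T = 2e-6, 0.2            # time step and simulated duration [s]

t = np.arange(0.0, T, dt)
surface_vel = 2 * np.pi * f * A * np.cos(2 * np.pi * f * t)  # velocity of the vibrating surface

v, x = 0.0, 0.0
for vs in surface_vel:
    v_rel = v - vs                            # sliding velocity of the part relative to the surface
    mu = mu_fwd if v_rel > 0 else mu_back     # asymmetric friction from the saw-tooth profile
    v += -np.sign(v_rel) * mu * g * dt        # Coulomb friction is the only horizontal force
    x += v * dt
print(f"net displacement after {T} s: {x * 1e3:.2f} mm")  # symmetric vibration + asymmetric friction -> one-directional drift
```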
    • Micro Pneumatic Valve

    • We are developing a micro pneumatic proportional valve that can be embedded in pneumatic muscles and control the 0.5 MPa air flow that drives them.
    • Cloth Manipulation

    • This project aims at the development of a mechanical system that unfolds clothes. The unfolding consists of grasping, expansion, and placing operations. We analyze dynamic expansion by a pinching slip motion.
    • Soft Interface


    • The goal of this research is to control mechanisms that include a soft interface. Through the simultaneous control of the motion and deformation of a soft object, and through the control of a loosely coupled joint, we reveal the interaction between mechanics and control in mechanisms with a soft interface.
    • Manipulation of Deformable Linear Objects

    • In this research, we explore the manipulation of deformable linear objects such as cables, cords, and tubes. Based on linear object modeling, we establish control strategies for manipulating linear objects.
    • Belt Object Modeling

    • The goal of this research is to establish a modeling method for deformable belt objects such as flat cables and flexible circuit boards. A method based on differential geometry is developed to describe the bend and twist of a belt object.
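A minimal framed-curve sketch of the differential-geometry idea: given bend and twist rates along the arc length, the belt's centerline and cross-section orientation are recovered by integrating a moving frame. The parameterization and first-order integrator below are illustrative choices, not the lab's actual formulation.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v equals np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def reconstruct_belt(kappa1, kappa2, tau, ds=1e-3):
    """Integrate a moving frame along arc length.

    kappa1, kappa2 : bending rates about the two cross-section axes [rad/m]
    tau            : twist rate about the centerline tangent [rad/m]
    Returns the centerline points and the final frame.
    """
    R = np.eye(3)          # cross-section frame; third column is the tangent
    p = np.zeros(3)        # centerline position
    points = [p.copy()]
    for k1, k2, tw in zip(kappa1, kappa2, tau):
        omega = np.array([k1, k2, tw])            # body-frame rotation rate per unit length
        R = R @ (np.eye(3) + skew(omega) * ds)    # first-order frame update (orthonormality drifts slightly)
        p = p + R[:, 2] * ds                      # advance along the tangent
        points.append(p.copy())
    return np.array(points), R

# Example: constant bend about one axis plus constant twist gives a helix-like belt.
n = 500
pts, _ = reconstruct_belt(np.full(n, 2.0), np.zeros(n), np.full(n, 1.0))
print(pts[-1])   # endpoint of the reconstructed 0.5 m centerline
```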
  • Integrated Sensors and Intelligence Lab.

  • The Integrated Sensors and Intelligence Laboratory was established in 2009 and aims at developing intelligent sensing systems for autonomous and adaptive robotic systems by integrating sensory, intelligent, and motor systems. Our research topics include intelligent sensors, sensor fusion, neuromorphic systems, and vision-based robot control.
  • Intelligent vision systems for autonomous and adaptive robots

    • Neuromorphic vision chips


    • A silicon retina is an intelligent vision sensor that executes real-time image pre-processing using a parallel analog circuit that mimics the structure of the neuronal circuits in the vertebrate retina. To enhance robustness against changes in lighting conditions, we designed and fabricated a frame-based, wide-dynamic-range silicon retina with a logarithmic illumination-to-voltage transfer characteristic. The chip achieves a dynamic range wide enough to perceive objects in both indoor and outdoor environments.
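A numerical illustration of why a logarithmic transfer characteristic widens the usable range; the constants are arbitrary placeholders, not the chip's actual parameters.

```python
import numpy as np

# Logarithmic illumination-to-voltage characteristic: V = V0 + k * log10(I / I0).
V0, k, I0 = 1.0, 0.1, 1e-3   # offset [V], slope [V/decade], reference illuminance [lx] (all assumed)

def retina_output(lux):
    return V0 + k * np.log10(lux / I0)

# An indoor scene (~100 lx) and a sunlit outdoor scene (~100,000 lx) differ by
# three orders of magnitude in illuminance but only by ~0.3 V at the pixel
# output, so both fit within an ordinary analog voltage swing.
for lux in (100.0, 100_000.0):
    print(f"{lux:>9.0f} lx -> {retina_output(lux):.2f} V")
```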
    • Binocular robot vision that emulates neural mechanism of stereopsis

    • We have developed a binocular vision system that emulates disparity computation in the neuronal circuit of the primary visual cortex (V1). The system consists of two sets of silicon retinas and simple-cell chips, corresponding to the two eyes, together with a field-programmable gate array (FPGA) circuit; this arrangement mimics the hierarchical architecture of the visual system of the brain. By combining the parallel analog computation of the VLSIs with pixel-wise computation in hard-wired digital circuits, the system computes binocular disparity in real time with compact hardware and low power dissipation.
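For orientation, here is a toy 1-D software version of one standard V1-inspired formulation, the binocular energy model: Gabor-filtered "simple cell" responses from the two eyes are combined into a disparity-tuned energy, and the winning disparity is read out per pixel. It only illustrates the kind of computation involved; whether it matches the chips' exact circuits is not stated here.

```python
import numpy as np

def gabor_pair(size=21, sigma=3.0, freq=0.25):
    """Quadrature pair of 1-D Gabor kernels (even/odd 'simple cell' filters)."""
    x = np.arange(size) - size // 2
    env = np.exp(-x**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * x), env * np.sin(2 * np.pi * freq * x)

def disparity_map(left_row, right_row, max_d=8, pool=15):
    """Pixel-wise disparity from a position-shift binocular energy model (toy)."""
    even, odd = gabor_pair()
    Le, Lo = np.convolve(left_row, even, "same"), np.convolve(left_row, odd, "same")
    Re, Ro = np.convolve(right_row, even, "same"), np.convolve(right_row, odd, "same")
    energies = []
    for d in range(-max_d, max_d + 1):
        Re_s, Ro_s = np.roll(Re, d), np.roll(Ro, d)           # shift right-eye responses by a candidate disparity
        e = (Le + Re_s) ** 2 + (Lo + Ro_s) ** 2                # binocular 'complex cell' energy
        energies.append(np.convolve(e, np.ones(pool) / pool, "same"))  # local pooling
    return np.argmax(np.array(energies), axis=0) - max_d       # winning disparity per pixel

# Example: the right image row is the left row shifted by 3 pixels.
rng = np.random.default_rng(0)
left = rng.standard_normal(200)
right = np.roll(left, -3)
print(np.median(disparity_map(left, right)))   # ~3
```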
    • Vision-based navigation of small mobile robot

    • We designed a low-power, compact binocular robot vision system. The system consists of two silicon retinas and FPGA circuits, and it calculates depth and velocity maps in real time. The computation algorithm we developed is inspired by the hierarchical architecture of the neuronal network of the primary visual cortex. We applied the system to the vision-based navigation of a mobile robot, developed by the Ishii Lab. at Kyushu Institute of Technology, in a real environment.
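As a purely illustrative aside (not the actual navigation algorithm, which is not described above), a reactive rule like the following shows how a real-time depth map can be turned directly into a steering command:

```python
import numpy as np

def steer_from_depth(depth_row, fov_deg=60.0):
    """Toy reactive rule: steer toward the most open (deepest) direction.

    depth_row : 1-D array of depths across the image width, left to right,
                e.g. one row of the depth map from the binocular system.
    Returns a steering angle in degrees (negative = turn left).
    """
    smoothed = np.convolve(depth_row, np.ones(9) / 9, "same")  # suppress pixel noise
    best_col = int(np.argmax(smoothed))                        # most open direction
    width = len(depth_row)
    return (best_col - width / 2) / width * fov_deg

# Example: an obstacle fills the right half of the view, so the rule steers left.
depth = np.concatenate([np.full(160, 3.0), np.full(160, 0.8)])
print(round(steer_from_depth(depth), 1))   # negative angle -> turn left
```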
  • Collaborations

  • Control / Operation of Underwater Robot with Multi-jointed Dual-arm

  • In this research, we have developed a human-sized underwater robot (length: 700 mm, diameter: 200 mm) with two arms (total length: 600 mm) intended to work in place of expert divers (Figure 1). Each serially linked arm has 5 DOF, including 2 DOF for twisting and gripping of the hand. The single-handed operating device we have developed allows the arms and the body to be driven simultaneously (Figure 2). The arms account for 20% of the whole body weight, so the attitude of the robot may change while it works. Movable flotation blocks keep or change the attitude to support the work by shifting the center of buoyancy with respect to the center of gravity (Figure 2, Figure 3); a numerical sketch of this principle follows the figure list. Through field trials, we have confirmed that this robot can perform several underwater tasks in place of humans (see movies).
    • Fig.1: Prototype (Coco)
    • Fig.2: Concept
    • Fig.3: Principle of changing the attitude by shifting movable flotation blocks
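A toy calculation of the principle in Fig. 3: shifting the movable flotation blocks moves the center of buoyancy relative to the center of gravity, and the resulting couple re-orients the robot. The displaced volume is derived from the hull dimensions quoted above assuming a neutrally buoyant cylinder; the block volume and offsets are assumptions.

```python
RHO, G = 1000.0, 9.81            # water density [kg/m^3], gravity [m/s^2]
volume = 3.1416 * 0.1**2 * 0.7   # ~0.022 m^3 cylinder from the 700 mm x 200 mm hull (neutral buoyancy assumed)
block_volume = 0.002             # volume of the movable flotation blocks [m^3] (assumed)
hull_cob = 0.0                   # longitudinal center of buoyancy with blocks centered [m]
cog = 0.0                        # longitudinal center of gravity [m]

def pitch_moment(block_offset):
    """Pitching moment [N*m] produced by sliding the blocks 'block_offset' meters."""
    buoyancy = RHO * G * volume
    # Sliding the blocks shifts the overall center of buoyancy by a volume-weighted
    # fraction of the block displacement.
    cob = hull_cob + (block_volume / volume) * block_offset
    return buoyancy * (cob - cog)   # couple between buoyancy (up at CoB) and weight (down at CoG)

for offset in (0.0, 0.05, 0.10):
    print(f"block offset {offset * 100:.0f} cm -> pitch moment {pitch_moment(offset):.2f} N*m")
```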