Couger Inc. released “Dimension” – an AI simulator for creating environments that include full human-like behavior
Couger Co., Ltd. (Head office: Shibuya-ku, Tokyo) has begun providing "Dimension," an AI 3DCG training simulator capable of modelling human behavior in towns, commercial facilities, and other environments, and is now opening the platform for public participation. The technology has been introduced to Honda R & D Co., Ltd. (Head office: Minato-ku, Tokyo; President & CEO: Takahiro Hachigo; hereinafter "Honda R & D Lab"), where it is being used for research purposes.
In the near future, robots, drones, and autonomous vehicles will be part of people's daily lives, yet most current AIs are trained in simulation environments that contain only simple elements such as buildings and roads; they do not represent human behavior at all. In the real world, AIs will have to navigate environments full of different patterns of human action, so it is very important to include these patterns in training, and they are impossible to reproduce at scale in real space. "Dimension" makes it easy to reproduce any place where people are acting and gives AIs an optimal, safe environment in which to learn.
Differences from existing simulators
- Dimension does not just model fixed assets such as roads and buildings; it is an integrated model of space and human behavior;
- Dimension can create sensor data via LIDAR. No matter how accurate 3DCG becomes, it still differs from a photographed image, which can cause problems in the learning process. LIDAR's drawing approach leaves no difference between sensing 3DCG and sensing reality, which makes learning possible; it also reduces the cost of bringing 3DCG closer to real-world imagery;
- Although many existing AI-related simulators have been created for the purpose of "testing," "Dimension" can be used for more than machine learning: in human learning, for example, it can serve as both the textbook and the exam questions.
Differences from learning with real-shot images
Reproducing real-life scenes for AI learning is similar to shooting a movie with extras. In practice, however, training an AI requires innumerable situation patterns, which makes it impossible to reproduce each of them.
With Dimension, you can easily create a scene like Shibuya with 50 people in front, 10 coming from the right, 20 from the left, and 30 behind.
Giving the human-model AIs a certain amount of behavioral freedom produces a wide range of movement; for example, dozens of people can walk around at the same moment without bumping into each other. Scripting the movements of every individual would take an enormous amount of time, while making everything random would make it impossible to reproduce good learning situations, so this controlled "wide movement" is essential.
With real-shot images, training an AI to segment people, trees, and other objects takes time because their depths must be set by hand; Dimension produces this data almost instantaneously.
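The reason a simulator can produce these labels instantly is that the engine already knows which object generated each pixel and how far away it is. The sketch below illustrates the idea with a toy renderer; it is a hypothetical example for explanation only, and none of the names reflect Dimension's actual interface.

```python
# Toy illustration: in a simulator, segmentation and depth ground truth
# come for free, because the engine knows which object sits at each pixel.
from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str    # e.g. "person", "tree"
    x0: int       # top-left corner in image coordinates
    y0: int
    x1: int       # bottom-right corner (exclusive)
    y1: int
    depth: float  # distance from the camera in metres

def render_ground_truth(objects, width, height, far=1000.0):
    """Return (segmentation, depth) maps; the nearest object wins per pixel."""
    seg = [["background"] * width for _ in range(height)]
    dep = [[far] * width for _ in range(height)]
    for obj in objects:
        for y in range(max(0, obj.y0), min(height, obj.y1)):
            for x in range(max(0, obj.x0), min(width, obj.x1)):
                if obj.depth < dep[y][x]:   # keep only the closest surface
                    dep[y][x] = obj.depth
                    seg[y][x] = obj.label
    return seg, dep

scene = [
    SceneObject("tree", 0, 0, 6, 6, depth=12.0),
    SceneObject("person", 3, 3, 8, 8, depth=4.0),  # closer, occludes the tree
]
seg, dep = render_ground_truth(scene, width=10, height=10)
```

A real engine rasterizes meshes rather than rectangles, but the principle is the same: the per-pixel label and depth that a human annotator must painstakingly produce for a photograph are a byproduct of rendering.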
Our unique technologies:
Creating dynamic human models requires far more advanced technology than creating roads and buildings. Having developed games since its foundation, Couger has accumulated knowledge about autonomously acting characters and is now implementing it in Dimension. As a result, the humans shown in Dimension are not just figures placed in the scene, but characters that behave like humans.
(Example of learning application)
An autonomously driven car navigating a crossroad full of people
Robots understanding and behaving properly in places such as offices or schools
Robots carrying belongings through a shopping mall without hitting other people
An empty autonomously driven taxi stopping at the sidewalk when a person raises their hand
Creation of data for machine learning: RGB images, segmentation images, depth images, LIDAR point cloud data
Image composite background: creating a 3D human model from random images
Assets of 3D modeling background: data of a city
Continuous data generation: creating data of random movements of people, cars, and cameras, and creating time-series data
Behavior scenario tools: a scenario tool for creating continuous behavior data
AI control of characters: collision avoidance, road recognition
People/environment random generation and configuration: lighting, random character creation, mass production of configured data
Character customization function: body type, hair color, clothes, skin color
Character belongings: color and form of the items characters carry
Various character actions
Combining AI, IoT, AR, and blockchain in a unique way, our company is developing "Connectome" technology for smart spaces.
Connectome Pte. Ltd. (Head office: Singapore; Yasunori Motani, Executive Director) is promoting "Connectome," a virtual human agent (VHA) development project. Technology development for the project is handled by Couger Inc. (Head office: Shibuya-ku, Tokyo; Atsushi Ishii, CEO), which is developing the VHA.