Skip to 0 minutes and 14 seconds The ultimate goal of VIVID is to simulate real-life events. As we know, many campus shootings have happened in the United States. If we could develop an intelligent drone that identifies a gunman automatically, many tragedies could be prevented. To train such a drone, we first need training data. In fact, video games contain plenty of shooting actions. Using VIVID, we can train intelligent security drones that identify the shooter, engage it, and prevent tragedy in advance. It may sound like science fiction, but I do believe that, with the accelerating pace of technological advancement, we will see such security drones within a decade. So let me show you a preliminary example of shooting-event simulation.
Skip to 1 minute and 6 seconds Some of you may remember the Las Vegas shooting. The gunman was hiding on a high floor and shooting at the crowd below. We tried to simulate this event and train an AI to identify the gunman. There is still a long way to go, but our lab is working on it, and VIVID is improving every day. Please visit our GitHub for the latest results. Here are more event simulations that we can run in VIVID. The left one is wildfire escape: a robot is running and trying to find a way out. The right one is earthquake rescue: a drone is flying through a ruined school to search for survivors. More examples can be found on our GitHub.
Skip to 1 minute and 51 seconds To conclude our work: we created a virtual environment, VIVID, which can be used for learning different computer vision tasks. VIVID provides a deep-learning-friendly API and is easy to use, with Python examples and pre-compiled binaries. Finally, we want to initiate research on real-life event simulation and train intelligent robots for human welfare.
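To give a concrete sense of what a "deep-learning-friendly API" for a simulated environment typically looks like, here is a minimal Gym-style interaction-loop sketch. The class and method names (`SimulatedEnv`, `reset`, `step`, `collect_episode`) are illustrative assumptions, not VIVID's actual API — see the project's GitHub for the real interface.

```python
# Hypothetical sketch of a Gym-style loop for a simulator like VIVID.
# All names below are illustrative assumptions, not VIVID's real API.

class SimulatedEnv:
    """Toy stand-in for a simulator that returns image observations."""

    def __init__(self, width=84, height=84, max_steps=10):
        self.width, self.height = width, height
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        """Start a new episode and return the first observation."""
        self.steps = 0
        return self._observe()

    def step(self, action):
        """Advance the simulation one tick; return (obs, reward, done)."""
        self.steps += 1
        reward = 1.0 if action == 0 else 0.0   # dummy reward signal
        done = self.steps >= self.max_steps
        return self._observe(), reward, done

    def _observe(self):
        # A real simulator would render an RGB frame here; we return
        # a blank height x width "image" as a placeholder.
        return [[0] * self.width for _ in range(self.height)]


def collect_episode(env, policy):
    """Run one episode under `policy` and return the total reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total


env = SimulatedEnv(max_steps=10)
total = collect_episode(env, policy=lambda obs: 0)  # always take action 0
print(total)  # 10.0: ten steps, each rewarded 1.0
```

A drone-control or gunman-detection agent would plug in at the `policy` function, mapping each rendered frame to an action; the same loop then serves both for collecting training data and for evaluating the trained model.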
Applications of VIVID
Prof. Lai continues with examples of VIVID: Virtual Environment for Visual Deep Learning. The ultimate goal of VIVID is to simulate real-life events. One example is campus shooting events: how can the shooter be identified?
Another application of VIVID is wildfire escape. In the slide, the left example shows a robot running and trying to find a way out; the right one is earthquake rescue, where a drone flies through a ruined school to search for survivors. More examples are shown on our GitHub.
This also concludes this week's speech. In conclusion, VIVID was created for learning various computer vision tasks. It provides a deep-learning-friendly API and is easy to use, with Python examples and pre-compiled binaries. Finally, it can initiate research on real-life event simulation.