A weekly review of news articles discussing the use of AI to create video teasers
13 May 2019
This week we look at how IBM's Watson generated a trailer for a Hollywood movie, and how Google Photos captures special moments using AI.
AI has now stepped into the world of film: it is being used to create movie teasers from analysed footage, with little human effort. In 2016, 20th Century Fox came up with the idea of creating a teaser for its upcoming movie ‘Morgan’ with the help of IBM’s tool, Watson. It takes a person around 20 to 30 days to create a trailer, but Watson took about 24 hours after processing the entire movie. The big problem is that the system first has to be taught ‘what is scary?’ so that it can create a trailer full of thriller and suspense. These are the basic patterns that help it recognise the emotion of a movie.
AI is also being used in Google Photos to create the best video from your raw footage, and to share those wonderful moments with your friends or family. The first step is to identify the magical moments in a video by analysing it and applying crowd-sourced localisation to each moment. Temporal action localisation helps to identify the right moments. Every day, AI is stepping into new fields. The power of AI rests with the person who decides to use it, whether for progressive purposes or otherwise.
How do you create a movie trailer about an artificially enhanced human? You turn to the real thing – artificial intelligence. 20th Century Fox has partnered with IBM Research to develop the first-ever “cognitive movie trailer” for its upcoming suspense/horror film, “Morgan”. Fox wanted to explore using artificial intelligence (AI) to create a horror movie…
You can now create a suspense-filled horror movie trailer with the help of AI. 20th Century Fox has partnered with IBM Research to develop the first movie trailer created using AI.
Everyone’s reaction to horror movies is different, but there are common patterns in how people react to different scenes.
Intricacies and interrelations are the two things AI needs to understand in order to create a stunning movie trailer filled with suspense and horror. But first the team needs to teach the system ‘what is scary?’, and then have it create a trailer filled with suspense for the majority of viewers.
The first step in creating a trailer with AI is to train the system using deep learning.
The system performs the following tasks: a visual analysis, an audio analysis and an analysis of each scene’s composition. It carries out this analysis on each scene separately in order to understand the footage and pick out appropriate scenes.
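The per-scene analysis might look something like the following sketch. IBM has not published Watson’s actual feature extractors, so the three channels here are illustrative stand-in heuristics over hypothetical pre-computed scene attributes (`darkness`, `face_closeups`, and so on):

```python
# Hypothetical sketch of per-scene analysis: each scene is scored on three
# channels (visual, audio, composition). The attribute names and weightings
# are assumptions for illustration, not IBM's actual method.

def analyse_scene(scene):
    """Score one scene on three channels; every score lies in [0, 1]."""
    visual = min(1.0, scene["darkness"] * 0.6 + scene["face_closeups"] * 0.1)
    audio = min(1.0, scene["volume_spikes"] * 0.2 + scene["dissonance"] * 0.5)
    # Shorter average shot length reads as more frantic, hence "scarier"
    composition = min(1.0, 1.0 / max(scene["avg_shot_seconds"], 0.5))
    return {"visual": visual, "audio": audio, "composition": composition}

scenes = [
    {"darkness": 0.9, "face_closeups": 3, "volume_spikes": 2,
     "dissonance": 0.8, "avg_shot_seconds": 1.5},   # tense chase scene
    {"darkness": 0.2, "face_closeups": 0, "volume_spikes": 0,
     "dissonance": 0.1, "avg_shot_seconds": 6.0},   # calm dialogue scene
]
for i, s in enumerate(scenes):
    print(i, analyse_scene(s))
```

Running each scene through the three channels independently, as described above, gives the system a per-channel emotional profile it can later rank scenes by.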
The full movie was fed into the system, after which the AI identified the 10 segments most relevant and suitable for a trailer. Had it been a comedy, the segments would have been chosen differently.
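The selection step above could be sketched as a simple weighted ranking. The weights and scores here are made up for illustration; the point is that the same per-channel scores would be combined differently for a horror film than for a comedy:

```python
# Hypothetical continuation: combine per-channel scores into one "trailer
# suitability" score and keep the ten highest-scoring segments.

def rank_segments(segments, weights, k=10):
    """Return the k segments with the highest weighted emotion score."""
    def score(seg):
        return sum(weights[ch] * seg["scores"][ch] for ch in weights)
    return sorted(segments, key=score, reverse=True)[:k]

# A horror trailer might weight visuals and audio heavily (illustrative values)
horror_weights = {"visual": 0.4, "audio": 0.4, "composition": 0.2}

segments = [{"id": i, "scores": {"visual": i / 10, "audio": 0.5, "composition": 0.3}}
            for i in range(15)]
top = rank_segments(segments, horror_weights, k=10)
print([seg["id"] for seg in top])  # → [14, 13, 12, 11, 10, 9, 8, 7, 6, 5]
```

Swapping in a different weight dictionary is the one-line change that would make the same pipeline favour, say, bright fast-paced scenes for a comedy.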
Normally, making a trailer takes around 10 to 30 days and involves human effort and expert supervision.
From the 90-minute movie, the system produced six minutes of clips, taking nearly 24 hours to complete the job. AI reduced weeks of work to a single day. This is the true power of AI.
Taking a video with a camera and sharing it with friends is a common activity, but going through all the raw footage to find the perfect moments to share is time-consuming.
Now Google Photos makes this simple by automatically generating the magical moments you might want to share, or creating a new animation. These clips can be shared with your friends and family.
In “Rethinking the Faster R-CNN Architecture for Temporal Action Localization”, the authors tackle the difficult problem of identifying the right moment in a highly varied array of footage. Their Temporal Action Localisation Network (TALNet) achieves state-of-the-art performance at identifying the right moments, and helps Google Photos surface the best parts of a video.
The initial step in identifying the relevant moments is to assemble a list of actions that appear in the video. Using crowd-sourced annotations, the video is segmented into several moments, and the resulting data is used to train models that find such moments in new videos.
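At its simplest, the localisation step turns per-frame predictions into time segments. TALNet itself is a far more sophisticated Faster R-CNN-style network; the sketch below only illustrates the final idea, assuming a trained classifier has already produced a per-frame probability that an action of interest is happening:

```python
# Minimal sketch of temporal localisation: group consecutive frames whose
# action probability clears a threshold into (start_sec, end_sec) segments.
# This is an illustration of the concept, not TALNet's actual architecture.

def localise(frame_probs, threshold=0.5, fps=30):
    """Turn per-frame action probabilities into a list of time segments."""
    segments, start = [], None
    for i, p in enumerate(frame_probs):
        if p >= threshold and start is None:
            start = i                                  # segment opens
        elif p < threshold and start is not None:
            segments.append((start / fps, i / fps))    # segment closes
            start = None
    if start is not None:                              # action runs to the end
        segments.append((start / fps, len(frame_probs) / fps))
    return segments

probs = [0.1, 0.2, 0.8, 0.9, 0.7, 0.3, 0.1, 0.6, 0.9, 0.2]
print(localise(probs, fps=1))  # → [(2.0, 5.0), (7.0, 9.0)]
```

In a real system the hard part is producing good frame-level (or proposal-level) scores from varied footage, which is exactly what the paper’s architecture addresses.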
That’s all for the week. We’d love to hear your thoughts on these articles and anything else data related! Email us at email@example.com