The IP.com Prior Art Database
System to create a post-synopsis of a computer-based presentation.
This invention builds a data set from a recorded presentation so that the presenter or presenters can see which parts attracted the most audience interest and which did not, and can jump directly to key points instead of replaying the entire presentation. This lets a reviewer quickly see where the audience showed the greatest interest and where it showed little or none. The resulting data supports page- or section-selective navigation through a lengthy presentation rather than listening to it in full. Feedback of this kind is usually gathered through surveys or by manually reviewing the whole presentation again; this invention makes that feedback faster to obtain and presents it visually.
1. The presenter activates the presentation. Once active, all audio is recorded.
2. As the presentation is delivered, the application records the screens and audio and links them together in a file.
3. Once the presentation is complete, the application scans through the slides and records the following:
Time spent on each slide, and whether a particular slide was returned to.
Ambient noises within the slide. For example, coughing or applause.
The number of speakers, differentiating the presenter from the audience within each slide. This is done by audio matching of the presenter's voice against anyone speaking from the audience.
If the presenter clicks through bullet points, those clicks are also used as reference points in the recorded audio.
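The per-slide statistics in step 3 could be computed from a timestamped log of slide-change events along the following lines. This is a minimal sketch, not part of the disclosure; the event format and function name are assumptions.

```python
from collections import defaultdict

def summarize_slides(slide_events, total_duration):
    """Compute dwell time and revisit sets per slide from a list of
    (timestamp_seconds, slide_number) slide-change events."""
    dwell = defaultdict(float)   # seconds spent on each slide
    visits = defaultdict(int)    # how many times each slide was shown
    # Pair each event with the next one to get the interval on that slide;
    # a sentinel event at total_duration closes the final interval.
    for (t, slide), (t_next, _) in zip(
        slide_events, slide_events[1:] + [(total_duration, None)]
    ):
        dwell[slide] += t_next - t
        visits[slide] += 1
    revisited = {s for s, n in visits.items() if n > 1}
    return dict(dwell), revisited

# Example: the presenter shows slides 1 and 2, then returns to slide 1.
events = [(0.0, 1), (30.0, 2), (90.0, 1)]
dwell, revisited = summarize_slides(events, total_duration=120.0)
# dwell -> {1: 60.0, 2: 60.0}; slide 1 was revisited
```

Ambient-noise and speaker counts could be attached to the same per-slide intervals once the audio has been segmented.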
4. The user is given a visual display of the presentation breakdown, showing the number of distinct voices detected within each slide and the length of time spent on the slide. Each voice is matched across slides, using the recording device used and audio matching of voices, to determine whether it is the same speaker. A visual cue is shown to the user at each point of the discussion.
5. The user can click on one of the cues and assign it a tag. This tag is then assigned to all other instances of the voice found. T...
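The tag propagation in step 5 could work as sketched below, assuming audio matching has already labeled each recorded segment with a voice identifier. The segment structure and function name are illustrative assumptions, not part of the disclosure.

```python
def tag_voice(segments, segment_index, tag):
    """Assign a tag to the voice in one segment and propagate it to
    every other segment attributed to the same voice.
    Each segment is a dict with 'slide', 'voice_id', and 'tag' keys."""
    voice = segments[segment_index]["voice_id"]
    for seg in segments:
        if seg["voice_id"] == voice:
            seg["tag"] = tag
    return segments

# Example: tagging one segment of voice "v1" tags all of its segments.
segments = [
    {"slide": 1, "voice_id": "v1", "tag": None},
    {"slide": 1, "voice_id": "v2", "tag": None},
    {"slide": 3, "voice_id": "v1", "tag": None},
]
tag_voice(segments, 0, "presenter")
# both "v1" segments now carry the "presenter" tag; "v2" is untouched
```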