<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Vubble News]]></title><description><![CDATA[Vubble and industry news.]]></description><link>https://news.vubblepop.com/</link><image><url>https://news.vubblepop.com/favicon.png</url><title>Vubble News</title><link>https://news.vubblepop.com/</link></image><generator>Ghost 4.5</generator><lastBuildDate>Mon, 02 Feb 2026 12:44:57 GMT</lastBuildDate><atom:link href="https://news.vubblepop.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Top 3 Tips To Turn Your Newsletter Into a Revenue-Generating Machine (and take back your relationship with your audience/subscribers)]]></title><description><![CDATA[<p><em>Brought to you by Vubble&apos;s new <a href="https://news.vubblepop.com/introducing-the-vubble-revletter/">RevLetter</a></em></p><ol><li><strong>Get your data in order</strong><br>What data do you <em>really</em> need? Do you need a subscriber&apos;s name? Probably not. Their location? That might be relevant. How about the engagement data (e.g. 
what they click on in the newsletter)</li></ol>]]></description><link>https://news.vubblepop.com/how-to-turn-your-newsletter-into-a-revenue-generating-machine-and-take-back-your-relationship-with-your-audience-subscribers/</link><guid isPermaLink="false">60d246018f8b2403e1beab6c</guid><dc:creator><![CDATA[Tessa Sproule]]></dc:creator><pubDate>Tue, 22 Jun 2021 20:39:15 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2021/06/Vubble-Top3Tips-Newletter-Header-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2021/06/Vubble-Top3Tips-Newletter-Header-1.png" alt="Top 3 Tips To Turn Your Newsletter Into a Revenue-Generating Machine (and take back your relationship with your audience/subscribers)"><p><em>Brought to you by Vubble&apos;s new <a href="https://news.vubblepop.com/introducing-the-vubble-revletter/">RevLetter</a></em></p><ol><li><strong>Get your data in order</strong><br>What data do you <em>really</em> need? Do you need a subscriber&apos;s name? Probably not. Their location? That might be relevant. How about the engagement data (e.g. what they click on in the newsletter)? That may be enormously helpful&#x2014;and you can anonymize that data.<br><br>Don&apos;t collect data you don&#x2019;t actually need. Be frank and transparent with your subscribers about what data you collect and how you use it. Anonymize the data that you&#x2019;re using (be sure there&#x2019;s absolutely no personally identifiable information). Compile it (spreadsheets work) and analyze it to see what kinds of patterns emerge.<br><br></li><li><strong>Learn from the data you collect</strong><br>Get to know your audience better. What kinds of things are they interested in? Are there certain days and times of delivery that work better? What happens when you switch things up in your newsletter (lots of text vs. little text, use of photos and video, subject headings, etc.)? 
<br><br>Is your audience open to advertising or sponsorship, or are they willing to pay a subscription fee to receive your newsletter? Ask them! <br><br>Help your subscribers find what they&#x2019;re looking for &#x2014; and delight them by suggesting content they didn&#x2019;t even know they were interested in. <br><br>At Vubble, we use machine learning to automate curation, to find patterns and make superb recommendations from our vast library of the world&#x2019;s best video. It saves our subscribers from being deluged with too much information, and our machine gets smarter with every click (or non-click).<br><br></li><li><strong>Create a monetization strategy</strong><br>Smart data collection and analysis enables you to constantly learn about your subscribers&apos; content preferences and interests&#x2014;the most important first-party data for ad and sponsorship targeting.<br><br>- Meet with your organization&#x2019;s sales team to talk about ad targeting.<br><br>- Don&apos;t have a sales team? No problem! It&apos;s never been easier to place your own call for ads in your newsletter. <br><br>- Feel more comfortable going the sponsorship route? Reach out to your existing sponsors, or pitch your newsletter to new sponsors looking to help fund the creation of more great content.<br></li></ol><p><strong>At Vubble, we help quality information get to people who need it.</strong><br>Our team of journalists assesses, data-labels and curates the world&apos;s best video. Our data is used by media and education publishers to distribute quality information content. Learn more at <strong><a href="https://www.vubblepop.com">www.vubblepop.com</a></strong>.</p>]]></content:encoded></item><item><title><![CDATA[Introducing: The Vubble RevLetter]]></title><description><![CDATA[<p>Our team of journalists and engineers has been working on ways to add monetization opportunities into our Smart Newsletter &#x2013; and we&apos;ve cracked it! 
</p><p>The Vubble RevLetter generates privacy-respecting first-party data, is proven to increase video views (from your library, or our dataset of 13.5+ million minutes of</p>]]></description><link>https://news.vubblepop.com/introducing-the-vubble-revletter/</link><guid isPermaLink="false">60d20fab8f8b2403e1beab1a</guid><dc:creator><![CDATA[Tessa Sproule]]></dc:creator><pubDate>Tue, 22 Jun 2021 16:51:41 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2021/06/Vubble-RevLetter-Header.png" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2021/06/Vubble-RevLetter-Header.png" alt="Introducing: The Vubble RevLetter"><p>Our team of journalists and engineers has been working on ways to add monetization opportunities into our Smart Newsletter &#x2013; and we&apos;ve cracked it! </p><p>The Vubble RevLetter generates privacy-respecting first-party data, is proven to increase video views (from your library, or our dataset of 13.5+ million minutes of the world&apos;s best video) and the best part is &#x2013;<em> <strong>it allows you to monetize your newsletter on your terms. </strong></em></p><figure class="kg-card kg-image-card"><img src="https://news.vubblepop.com/content/images/2021/06/Vubble-RevLetter.png" class="kg-image" alt="Introducing: The Vubble RevLetter" loading="lazy" width="800" height="1034" srcset="https://news.vubblepop.com/content/images/size/w600/2021/06/Vubble-RevLetter.png 600w, https://news.vubblepop.com/content/images/2021/06/Vubble-RevLetter.png 800w" sizes="(min-width: 720px) 720px"></figure><p>Contact one of our co-CEOs (<a href="mailto:tessa@vubblepop.com">Tessa Sproule</a> or <a href="mailto:katie@vubblepop.com">Katie MacGuire</a>) today for a demo! 
</p><p><strong>(If you&apos;re coming from #ONA21, be sure to let us know so we can apply your 15% discount!)</strong></p><p>Best,<br>Tessa</p>]]></content:encoded></item><item><title><![CDATA[Vubble is at #ONA21 - come say 'Hi'!]]></title><description><![CDATA[Vubble is presenting at #ONA21! We'll discuss how the new cookie-less ad reality creates a unique opportunity for publishers and their newsletters.]]></description><link>https://news.vubblepop.com/how-to-turn-your-newsletter-into-a-revenue-generating-machine-using-content-and-first-party-datatled/</link><guid isPermaLink="false">60cb9e7f8f8b2403e1beaa7f</guid><dc:creator><![CDATA[Vubble News]]></dc:creator><pubDate>Fri, 18 Jun 2021 14:01:38 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2021/06/Vubble-banner-1500x680-3.png" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2021/06/Vubble-banner-1500x680-3.png" alt="Vubble is at #ONA21 - come say &apos;Hi&apos;!"><p><strong>When?</strong> <a href="https://ona21.journalists.org/sessions/vubble-newsletter-revenue-generator/">Thursday, Jun 24 &#x2013; 11:30 AM &#x2013; 11:50 AM ET (15:30 &#x2013; 15:50 UTC)</a></p><p><strong>Where?</strong> <a href="https://ona21.journalists.org/">ONA21</a> (Online News Association)</p><p><strong>Who?</strong> <a href="https://www.vubblepop.com/">Vubble</a> (&#x201C;video bubble&#x201D;) Co-CEO <a href="https://www.linkedin.com/in/tessasproule/">Tessa Sproule</a> will moderate a discussion with <a href="https://www.linkedin.com/in/flynnnicole/">Nicole Flynn</a> (CMO, <a href="https://tinyurl.com/ygp2f35q">cielo24</a>) and Vubble data scientist, <a href="https://www.linkedin.com/in/sana-f/">Sana Farooqui</a>.</p><p><strong>What&apos;s the presentation about?</strong></p><p>The way digital content creators work with sponsors and advertisers is changing dramatically. 
We believe the new cookie-less, tracking-less world is going to level the playing field between publishers and platforms. </p><p><strong>How? </strong>News organizations have a new opportunity to generate privacy-respecting, truly useful data with their newsletters. We&apos;ll talk about how adding personalization to your newsletter garners first-party data that both respects audience privacy and drives new revenue opportunities.</p><p>Let&apos;s peek through the narrow window of opportunity that has opened to learn how creators big and small are finding ways to target ads, predict optimal subscription calls-to-action, and increase sustainable sponsorship revenue.</p><p><a href="https://ona21.journalists.org/sessions/vubble-newsletter-revenue-generator/"><strong>Join us!</strong></a></p>]]></content:encoded></item><item><title><![CDATA[We tested our automated video Categorizer on the 5G network. Here’s what we discovered.]]></title><description><![CDATA[<h3 id="first-why-we-do-what-we-do-at-vubble">First, why we do what we do at Vubble</h3><p>Too much content exists online. We are drowning in a sea of information, misinformation and disinformation, an indecipherable blend of fact and opinion. Too little of the information we access is reliable, well-sourced, and relevant to us. 
Vubble creates solutions for</p>]]></description><link>https://news.vubblepop.com/untitlewe-tested-our-automated-video-categorizer-on-the-5g-network-heres-what-we-discovered-d/</link><guid isPermaLink="false">60a523bd8f8b2403e1bea931</guid><dc:creator><![CDATA[Katie Macguire]]></dc:creator><pubDate>Wed, 19 May 2021 20:00:16 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2021/05/5g-header-1.jpg" medium="image"/><content:encoded><![CDATA[<h3 id="first-why-we-do-what-we-do-at-vubble">First, why we do what we do at Vubble</h3><img src="https://news.vubblepop.com/content/images/2021/05/5g-header-1.jpg" alt="We tested our automated video Categorizer on the 5G network. Here&#x2019;s what we discovered."><p>Too much content exists online. We are drowning in a sea of information, misinformation and disinformation, an indecipherable blend of fact and opinion. Too little of the information we access is reliable, well-sourced, and relevant to us. Vubble creates solutions for this new information economy. We boost and strengthen the capacity of organizations that create and distribute evidence-based quality information.</p><h3 id="our-computer-vision-challenge">Our computer vision challenge</h3><p>This problem of too much information online is especially true for video - the dominant medium for information consumption online. This creates a significant technical challenge - understanding what&#x2019;s happening in a video. </p><p>Automatic video categorization is a hard and active challenge in the computer vision research community. Scenes from the same video category are recorded at different lighting conditions, in different environments, with various backgrounds, from different viewpoints, or at various temporal or spatial resolutions. On the other hand, sometimes video sequences from different categories may be very similar and hard to distinguish from each other. 
Each of these challenges raises the level of complexity faced by the research and development community in designing and building automated systems to enable video categorization.</p><p>Currently, companies across industry sectors use data farms, usually offshore centres, with minimally-paid people to tag and label videos, often with violent and offensive content. This current solution fails in correctly identifying ambiguous information. For example: is a particular video providing political information? Is it humorous? Is it propaganda? Is the information credible? Current solutions also fail in correctly identifying contextual information - e.g. is a video of a breastfeeding mother pornography or maternal health advice? </p><p>Through an ongoing multi-year collaboration with <a href="https://ict.senecacollege.ca/">Seneca College</a>, Vubble has developed and implemented a video categorizer system, which we call the Vubble Video Categorizer. Customized for video curators, the Categorizer automates the data labelling of ambiguous and contextual information based on both visual and audio cues. Vubble&#x2019;s solution can categorize ambiguous and contextual information, and it does that at scale. </p><p>That&#x2019;s where 5G comes in. </p><h3 id="vubble%E2%80%99s-5g-experiments">Vubble&#x2019;s 5G experiments</h3><p>5G networks allow high-quality broadcast and reliable transmission of massive amounts of data. This has the potential to improve the precision and speed of Vubble&#x2019;s automatic categorization pipelines. Vubble&apos;s technical team has optimized the video Categorizer system for the 5G testbed in order to run a series of pre-recorded and live video categorization tests. These tests were funded in part by <a href="https://www.oc-innovation.ca/">Ontario Centre of Innovation (OCI</a>) and supported by the <a href="https://www.communitech.ca/future-facing/platforms/5g-networks/">Communitech ENCQOR</a> team. 
They include:</p><p><strong>Test 1 - In this experiment, we installed the Vubble Categorizer system on a personal computer and connected the computer to the 5G network. Then we put the Categorizer to work, automating the data labelling of several hundred videos we downloaded from YouTube.</strong></p><p>The objective of this test was to provide a baseline for future tests. The 5G network allowed the processing of multiple videos simultaneously for a period of two hours. Video categorization requests ran at intervals of two to 20 seconds. This performance speed is similar to that of physical infrastructure, like a cable connection. One limiting factor was the performance of the computer.</p><p>This test provided valuable performance data on the simultaneous download (parallelization) of videos through a 5G network. It also helped the Vubble team identify areas of the Categorizer system that needed improvement, specifically, parts of the code that needed to be rewritten to avoid a situation where the system ran out of RAM because of the increased download speed. Finally, the test helped to identify the best balance of the Categorizer system components (the optimum number of downloaders, audio categorizers and video categorizers). <br></p><p><strong>Test 2 - In this experiment, a client accessed the Vubble Categorizer system in the cloud and requested remote categorizations of hundreds of videos.</strong></p><p>The objective of this test was to gather comparison data for Test 1 and to identify the limit of parallel categorizations of the Categorizer system in the cloud infrastructure, when compared with a local deployment of the Categorizer system code on a laptop.</p><p>Again, the 5G network allowed a high communication speed - sending video categorization requests and receiving the output quickly. With this test, the Vubble team identified the limit of the number of requests that could be managed by the Vubble Categorizer system. 
This limit was caused by the longer processing time of the component that produced video transcriptions. The team solved this problem by increasing the number of replicas of the API and the video Categorizer pods to three and eight, respectively. This reduced wait time, improving clients&apos; experience of the system.</p><p><strong>Test 3 - In this experiment, a client with video footage on a device pushed the video to the cloud-based Vubble storage and then requested categorization of that video.</strong></p><p>The objective of this test was to simulate a situation in which the client records videos locally (e.g., on surveillance or motion-detection cameras) and then pushes the video to the cloud-based Vubble storage prior to categorization.</p><p>The 5G performance for pushing videos to Vubble storage was comparable to the speed of upload experiences using a cable network. This experiment revealed that it is possible to provide an online categorization service for clients connected to a 5G network. The bottleneck problem frequently encountered when uploading huge files in 3G and 4G networks disappeared on the 5G network.</p><p><strong>Test 4 - In this experiment, a client on the 5G network streamed video from Vubble storage.</strong></p><p>The objective of this test was to compare the performance of video streaming for videos stored in Vubble storage to other storage services such as YouTube&#x2019;s. The test ran one-hour videos stored both on YouTube and on Vubble to evaluate the quality of transmission on a 5G network.</p><p>The test result proved that it is possible for Vubble to offer private video streaming to clients connected to 5G networks.</p><p><strong>Test 5 - In this experiment, a client streamed a video to Vubble&#x2019;s storage for live categorization.</strong></p><p>In this final test, the Vubble team evaluated the categorization of live video. 
The idea was to evaluate the performance of streaming a local camera to the Vubble cloud infrastructure. In a second step, these streamed frames were processed and categorized.</p><p>The 5G network allowed the streaming of live video without any issues. However, the test showed that the Vubble categorization system did not have the capacity to receive a transmission of more than six frames per second. The 5G network delivered frames faster than the Vubble system could process them live.</p><p>The team concluded that the Vubble Video Categorizer needs improvements to be able to categorize live video on a 5G network. </p><p><strong>Our conclusions</strong></p><ol><li>The Vubble Categorizer system can be deployed on a stand-alone device and connected to the 5G network.<br></li><li>A client on the 5G network can request remote categorization of videos from the cloud-based Vubble categorization system without having to deploy the system on their device.<br></li><li>Vubble can store clients&#x2019; videos and provide video streaming of those videos.<br></li><li>Given the speed of the 5G network, a client can push videos to Vubble&#x2019;s storage for categorization without a local deployment of the Vubble Categorizer on their device.<br></li><li>A client can send live footage from remote cameras to the Vubble system for categorization in the cloud. However, the Vubble categorization system code must be modified to address a problem where the code kept reading and writing files, slowing performance.<br></li></ol><h3 id="whats-next-for-the-vubble-categorizer">What&apos;s next for the Vubble Categorizer?</h3><p>This 5G test allowed the Vubble team to create two new solutions: The first was the creation of a stand-alone Categorizer system that can be deployed in the cloud and receive videos from cameras and then categorize those videos without a client installing the Vubble Categorizer system on their local device. 
The second was the creation of Vubble video storage so that Vubble can host and stream videos without the need for third-party services such as YouTube. </p><p>The performance on the 5G network demonstrated that it is possible to process more videos than previously expected. With this in mind, the Vubble research team will:</p><ul><li>Scale up the current deployment of the Categorizer system by adding more replicas of the audio Categorizer and more nodes (servers) to the current cluster.</li><li>Test the use of GPU-capable machines to improve the speed of video data labelling.</li><li>Improve the code of the camera live-stream to increase the number of frames available for processing in the Vubble Categorizer. </li></ul><p>Photo by <a href="https://unsplash.com/@umby?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Umberto</a> on <a href="https://unsplash.com/collections/12224688/5g?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a><br></p>]]></content:encoded></item><item><title><![CDATA[Popping Bubbles: How to Categorize and Recommend Information Video]]></title><description><![CDATA[Vubble’s Director of Machine Learning talks about how Vubble leverages our massive information video dataset to develop automated tools for video categorization and recommendation]]></description><link>https://news.vubblepop.com/popping-bubbles-how-to-categorize-and-recommend-information-video/</link><guid isPermaLink="false">6048ed124d81fa68ecd16d9c</guid><category><![CDATA[artificial intelligence]]></category><category><![CDATA[machine learning]]></category><category><![CDATA[ai]]></category><dc:creator><![CDATA[Mariah Martin Shein]]></dc:creator><pubDate>Wed, 10 Mar 2021 16:01:53 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2021/03/vubble-blog-popping-bubbles-3.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: html--><div style="width: 100%; 
font-size: 11px !important; line-height: 15px !important; color: #818181; text-align: center; margin: -32px auto 25px auto !important; padding: 0 !important;">Photo by <a href="https://unsplash.com/@jancanty?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank">Jan Canty</a> on <a href="https://unsplash.com/s/photos/bubble-pop?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" target="_blank">Unsplash</a></div><!--kg-card-end: html--><img src="https://news.vubblepop.com/content/images/2021/03/vubble-blog-popping-bubbles-3.jpg" alt="Popping Bubbles: How to Categorize and Recommend Information Video"><p>In this presentation, Mariah Martin Shein, Vubble&#x2019;s Director of Machine Learning, talks about how Vubble leverages our massive information video dataset to develop automated tools for video categorization and recommendation. For categorization, our goal is not just to produce high-quality predictions, but also to continuously provide support to our journalist editors, our journalists-in-the-loop, by offloading repetitive tasks to machines. For recommendation, Vubble&#x2019;s algorithms suggest videos that balance personalization with showing serendipitous alternate viewpoints. This blend provides an engaging experience that helps &#x2018;pop&#x2019; individuals&#x2019; news bubbles and encourages critical thinking. This talk was presented at the Waterloo Data Science and Data Engineering Meetup Group, Feb. 4, 2021.</p><!--kg-card-begin: html--><div style="overflow: hidden; padding-top: 56.25%; position: relative;">
	<iframe style="position: absolute; top: 0; left: 0; height: 100%; width: 100%;" src="https://www.youtube.com/embed/WLhRyXAABYk?start=58" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Unpacking the ‘black box’: Vubble’s Recommender Engine]]></title><description><![CDATA[Some people talk about artificial intelligence as a ‘black box’. At Vubble, there is no such thing. Here’s Mariah Martin Shein, Vubble’s Director of Machine Learning, with an inside view.
]]></description><link>https://news.vubblepop.com/unpacking-the-black-box-vubbles-recommender-engine/</link><guid isPermaLink="false">5f80a304cdea3405af33b9be</guid><dc:creator><![CDATA[Mariah Martin Shein]]></dc:creator><pubDate>Tue, 22 Sep 2020 17:55:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/ai-header-keyboard.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2020/10/ai-header-keyboard.jpeg" alt="Unpacking the &#x2018;black box&#x2019;: Vubble&#x2019;s Recommender Engine"><p>The Vubble Recommender Engine is a versatile, modular and robust system for generating relevant, interesting and surprising content suggestions. It can provide recommendations within any of our Vubble products; it can also hook up to other platforms to provide recommendations for a diverse array of content, whether video or text, entertaining/educational, heavy or uplifting&#x2026; <em><em>(our journalist-annotated list of data labels runs more than 550 elements long)</em></em>.</p><p>The Recommender Engine is a critical component of the Vubble platform. It extends the functionality of our base A.I. 
system, and the way we&#x2019;ve built it makes it easier for our technical team to develop, test improvements and iterate based on what we learn.</p><p>Looking at the Recommender Engine from an outside perspective: it&#x2019;s simply a system that you can ask to make recommendations for a specific user, and it gives you back a list of content suggestions (any media content is possible; Vubble&#x2019;s current focus is news/information video).</p><p><em><em>Here&#x2019;s what that looks like:</em></em></p><figure class="kg-card kg-image-card"><img src="https://news.vubblepop.com/content/images/2020/10/overall-diagram.png" class="kg-image" alt="Unpacking the &#x2018;black box&#x2019;: Vubble&#x2019;s Recommender Engine" loading="lazy" width="1050" height="632" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/overall-diagram.png 600w, https://news.vubblepop.com/content/images/size/w1000/2020/10/overall-diagram.png 1000w, https://news.vubblepop.com/content/images/2020/10/overall-diagram.png 1050w" sizes="(min-width: 720px) 720px"></figure><p>Simple, right? 
Let&#x2019;s take the lid off and see how those recommendations are made (things are going to get a little complicated; grab a cup and stay with me).</p><h2 id="the-components">The components</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/vubble-structure.png" class="kg-image" alt="Unpacking the &#x2018;black box&#x2019;: Vubble&#x2019;s Recommender Engine" loading="lazy" width="1050" height="603" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/vubble-structure.png 600w, https://news.vubblepop.com/content/images/size/w1000/2020/10/vubble-structure.png 1000w, https://news.vubblepop.com/content/images/2020/10/vubble-structure.png 1050w" sizes="(min-width: 720px) 720px"><figcaption>The basic structure of the Vubble Recommender Engine</figcaption></figure><p>Think about the inside of the box in three parts: the data preprocessing, the machine learning (ML) models, and the bias spread algorithm.</p><p>Information flows through the system in that order: first preparing the data for use by the ML models, then the models generating recommendations, and finally the bias spread algorithm combining the recommendations in a way that aims to lift critical thinking <em><em>(In some use cases, particularly news and information video content, it&#x2019;s important that recommendations not be aligned only with previous user behaviour; Vubble&#x2019;s &#x2018;bias spread&#x2019; algorithm corrects for this by nudging users towards content that may be slightly outside their past interest sphere)</em></em>.</p><p>I&#x2019;m going to talk about the three parts of Vubble&#x2019;s (not-black) box in a slightly different order, however, because it makes more sense to explain the whole system by starting with the most fundamental component, the machine&#x2019;s core.</p><p><strong><strong><em><em>1/ Machine Learning Models</em></em></strong></strong></p><p>The middle box of 
the diagram holds the machine learning (ML) models. This is where the artificial intelligence of the Vubble system resides. The ML models are the most essential components because without them, we can&#x2019;t have any recommendations.</p><p>The Recommender Engine is designed to be able to use any kind of ML algorithm that creates models capable of making recommendations &#x2014; and it can have more than one such model making recommendations in parallel.</p><p>These models tend to fall into one of two categories: <em><em>collaborative filtering</em></em> and <em><em>content-based filtering</em></em>.</p><p><strong><strong>Content-based filtering</strong></strong> makes recommendations based on a user&#x2019;s explicit preferences. When a user first starts looking for video recommendations in any of our Vubble products, they are asked to pick some categories that they&#x2019;re interested in. Then, a content-based filtering model picks out video content that best matches these categories. This type of model makes the best use of Vubble&#x2019;s large, journalist-annotated data set.</p><p>Currently, we have implementations of content-based filtering algorithms that create a model by representing every video in our dataset, and every user&#x2019;s preferences, as a vector of categories. To get recommendations for a specific user, it calculates the similarity between that user&#x2019;s vector and all the video content vectors using a metric called <a href="https://en.wikipedia.org/wiki/Cosine_similarity" rel="noopener nofollow">cosine similarity</a>. Then it simply picks the top-most-similar videos to recommend. 
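</p><p><em>To make that concrete, here is a toy sketch of the cosine-similarity matching described above, in Python. The category axes, video titles and vectors are invented for illustration; this is not Vubble&#x2019;s production code.</em></p>

```python
import math

def cosine_similarity(u, v):
    # cos(u, v) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented category axes: [politics, science, sports]
videos = {
    "election-explainer": [1.0, 0.0, 0.0],
    "vaccine-qa": [0.2, 1.0, 0.0],
    "olympics-recap": [0.0, 0.0, 1.0],
}

def recommend(user_vector, top_n=2):
    # Rank every video by its similarity to the user's preference vector,
    # then keep the top-most-similar titles.
    ranked = sorted(videos,
                    key=lambda t: cosine_similarity(user_vector, videos[t]),
                    reverse=True)
    return ranked[:top_n]

# A user who is mostly interested in science, slightly in politics:
print(recommend([0.3, 1.0, 0.0]))  # -> ['vaccine-qa', 'election-explainer']
```

<p>Swapping in a different similarity metric (dot product, Euclidean distance) only means changing <code>cosine_similarity</code>; the ranking step stays the same.</p><p>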
Of course, there are many other ways to do content-based filtering, and with this new Recommender Engine, other content-based filtering models can be easily added to the system.</p><p><strong><strong>Collaborative filtering</strong></strong> makes recommendations based on a user&#x2019;s past behaviour &#x2014; which videos did the user actually watch and enjoy? When a returning user is looking for recommendations, a collaborative filtering model compares the videos they&#x2019;ve seen to the videos watched by all the other users in order to predict what the user might want to watch next. For example, check out the table below. &#x2018;1&#x2019; means the user is interested in that video, and &#x2018;0&#x2019; means they are <em><em>not</em></em> interested in it. Spaces with &#x2018;?&#x2019; indicate that we don&#x2019;t know if that user would like that content or not. If you were trying to guess whether Casey would be interested in &#x201C;Kodak tries to reinvent after struggling to adapt&#x201D;, would you recommend it?</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/vubble-casey.jpeg" class="kg-image" alt="Unpacking the &#x2018;black box&#x2019;: Vubble&#x2019;s Recommender Engine" loading="lazy" width="800" height="296" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/vubble-casey.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/vubble-casey.jpeg 800w" sizes="(min-width: 720px) 720px"><figcaption>Would Casey be interested in the Kodak story?</figcaption></figure><p>If you guessed that Casey would want to see it, you probably came to that conclusion in a similar way to a collaborative filtering model: by comparing what you know about Casey&#x2019;s likes and dislikes to those of Alice and Bob.</p><p>Currently, at Vubble we&#x2019;re using an implementation of collaborative filtering that makes use of <em><em>implicit</em></em> ratings. 
When a user interacts with a recommender system, they can indicate interest in content either through explicit ratings (e.g., giving it a score out of 5 stars) or through implicit behaviour, like which video out of a list they actually chose to watch. Using implicit ratings to make recommendations is generally considered to be at least as good as using explicit ratings, and has the added advantage of requiring less work from the user.</p><p><strong><strong>2/ <em><em>Data Preprocessing</em></em></strong></strong></p><p>Each of these ML models needs data to learn, and this data needs to be in a specific format, based on the model. Content-based filtering needs to know content categories, but doesn&#x2019;t care about user behaviour, and collaborative filtering is the opposite. So the first box in the diagram (&#x201C;Data preprocessing&#x201D;) represents the parts of the Recommender Engine that load in data from the main database and make sure it is in the right format for each model to use.</p><p><strong><strong>3/ <em><em>Bias Spread</em></em></strong></strong></p><p>The final part of Vubble&#x2019;s Recommender Engine is our bias spread algorithm. This algorithm does a lot of heavy lifting. It&#x2019;s responsible for <em><em>combining the recommendations</em></em> made by the different ML models into one final list, and does so in a way that <em><em>balances recommendations</em></em> from each model while <em><em>maximizing the overall diversity</em></em> of the suggestions.</p><p>I claimed earlier that the ML models are the most essential parts of the Recommender Engine, for basic practical reasons. However, while it is not essential to making recommendations, the Bias Spread algorithm is arguably the most important part of the Recommender Engine, for two qualitative reasons.</p><p>First, one of the goals of Vubble is to &#x201C;lift critical thinking&#x201D;. 
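To make the combine-and-diversify idea concrete, here is a toy sketch, entirely my own construction rather than Vubble&apos;s actual Bias Spread algorithm: it round-robins across the models&apos; ranked lists so each model is represented, then greedily favours videos that add a category not yet covered in the final list:

```python
from itertools import zip_longest

def combine_and_diversify(model_lists, categories, k=4):
    # Step 1: round-robin merge, so every model's top picks are balanced.
    merged, seen = [], set()
    for tier in zip_longest(*model_lists):
        for vid in tier:
            if vid is not None and vid not in seen:
                seen.add(vid)
                merged.append(vid)
    # Step 2: greedy diversification. Prefer the earliest video that
    # introduces a category not yet in the final list; otherwise just
    # take the next video in merge order.
    final, covered = [], set()
    pool = list(merged)
    while pool and len(final) < k:
        pick = next((v for v in pool if categories[v] - covered), pool[0])
        final.append(pick)
        covered |= categories[pick]
        pool.remove(pick)
    return final

content_based = ["v1", "v2", "v3"]   # one model's ranking (invented data)
collaborative = ["v4", "v2", "v5"]   # another model's ranking
cats = {"v1": {"tech"}, "v2": {"tech"}, "v3": {"climate"},
        "v4": {"politics"}, "v5": {"arts"}}
print(combine_and_diversify([content_based, collaborative], cats))
# → ['v1', 'v4', 'v3', 'v5']: both models represented, four distinct categories
```

Note how "v2" is skipped in the final list even though both models ranked it highly: its category is already covered, so a video from an unrepresented category takes its slot.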
A <a href="https://www.sciencedirect.com/science/article/pii/S1871187120301759" rel="noopener nofollow">recent article</a> by a researcher at Maastricht University suggests that encouraging people to engage with a wide variety of perspectives improves their critical thinking skills. Vubble&#x2019;s Bias Spread algorithm maximizes the diversity of the suggestions in the final list in order to introduce users to ideas they may not have considered before.</p><p>Second, maximizing diversity in recommendations is also <a href="https://papers-gamma.link/static/memory/pdfs/153-Kunaver_Diversity_in_Recommender_Systems_2017.pdf" rel="noopener nofollow">well-known</a> to increase users&#x2019; interest and engagement in a recommender system. So the Bias Spread algorithm helps make the Recommender Engine&#x2019;s suggestions more interesting to the user, while also encouraging them to push themselves outside of their regular thought bubbles.</p><hr><p>The Recommender Engine is just one part of the Vubble platform. It works closely with our Recommendation Queue Manager, <a href="https://www.mongodb.com/" rel="noopener nofollow">MongoDB</a> database, and our backend API, all fully containerized and managed with <a href="https://kubernetes.io/" rel="noopener nofollow">Kubernetes</a>.</p><p>I hope you&#x2019;ve enjoyed this peek inside the box of the Vubble Recommender Engine. If you want to know more, feel free to contact me (<a href="mailto:mariah@vubblepop.com" rel="noopener nofollow">mariah@vubblepop.com</a>).</p><p><em><em>Stay tuned for another post that will take a look at the inner workings of the Recommendation Queue Manager!</em></em></p>]]></content:encoded></item><item><title><![CDATA[Our information ecosystem is in trouble. Here’s how we can fix it.]]></title><description><![CDATA[I failed to recognize BigTech’s Trojan Horse as it ambled through the gates of our information ecosystem in the days after September 11, 2001. 
But I strongly believe we can fix things now, if we work together.]]></description><link>https://news.vubblepop.com/our-information-ecosystem-is-in-trouble-heres-how-we-can-fix-it/</link><guid isPermaLink="false">5f80b028cdea3405af33b9f0</guid><dc:creator><![CDATA[Tessa Sproule]]></dc:creator><pubDate>Thu, 09 Jan 2020 20:01:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/ecosystem_header.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2020/10/ecosystem_header.jpeg" alt="Our information ecosystem is in trouble. Here&#x2019;s how we can fix it."><p>September 11, 2001. Our world was reeling. We were drawn to our TVs, showing horrible scenes from <a href="https://en.wikipedia.org/wiki/World_Trade_Center_site" rel="noopener nofollow">Ground Zero</a> in New York. Expert &#x2018;talking heads&#x2019; gasped, nodded and filled the <a href="https://www.cnn.com/videos/cnnmoney/2016/09/11/rs-aaron-brown-intv.cnn" rel="noopener nofollow">24 hour news channels</a>; car radios provided updates from political leaders as the &#x2018;<a href="https://en.wikipedia.org/wiki/War_on_terror" rel="noopener nofollow">War On Terror</a>&#x2019; began to take shape (<a href="https://youtu.be/z3YnAsOeUpA" rel="noopener nofollow">&#x201C;I can hear you!&#x201D;</a> U.S. President George W. Bush shouted into a megaphone, to a crowd gathered on the pile. &#x201C;The rest of the world hears you! And the people who knocked these buildings down will hear all of us soon!&#x201D;); newspapers printed extra editions, spilling barrels of ink to detail the latest investigations into the terror that had rained down on American soil.</p><p>For the news media, it was a pivotal moment, marking a shift in which the internet became the most powerful utility in our information ecosystem.</p><p><em><em>Re-read the first paragraph. I didn&#x2019;t mention the web once up there. 
</em></em>Not because it wasn&#x2019;t already a dominant force in 2001, but because, for the most part, legacy media failed to take its &#x201C;pivot to digital&#x201D; seriously from the very beginning.</p><p>It is one of the greatest regrets of my journalism career that I failed to recognize BigTech&#x2019;s Trojan Horse as it ambled through the gates of our information ecosystem in the days after September 11, 2001. But I strongly believe we can fix things now, if we work together.</p><h2 id="yes-the-internet-was-here-already-in-2001-but-it-was-different-">Yes, the internet was here already in 2001 &#x2014; but it was different.</h2><p>Most of us didn&#x2019;t have cellphones (it would take another year before RIM launched its first smartphone, the <a href="https://biztechmagazine.com/article/2016/11/blackberry-5810-kickstarted-mobile-work-era" rel="noopener nofollow">BlackBerry 5810</a>, and Steve Jobs was still six years away from launching the <a href="https://www.businessinsider.com/first-phone-anniversary-2016-12" rel="noopener nofollow">first iPhone</a>). Those of us who were trying to use the web to follow the news of 9/11 were probably using our desktop computers at work. The majority of us had our browser (<a href="https://en.wikipedia.org/wiki/Timeline_of_web_browsers" rel="noopener nofollow">probably Microsoft&#x2019;s Internet Explorer</a>) pointing to our favourite news organization as our <a href="https://qz.com/209950/the-homepage-is-dead-and-the-social-web-has-won-even-at-the-new-york-times/" rel="noopener nofollow">&#x2018;home page&#x2019; of the web</a>.</p><p>We would start there.</p><p>It&#x2019;s kind of quaint, when you look back with our 2020 vision today. On September 11, 2001, there was no Facebook for people to &#x2018;check in&#x2019; on their loved-ones. 
None of it was captured on Facebook Live, because that didn&#x2019;t exist (Mark Zuckerberg was just 17, not yet building the world&#x2019;s largest <a href="https://www.reuters.com/article/us-facebook-ai/facebook-labels-posts-by-hand-posing-privacy-questions-idUSKCN1SC01T" rel="noopener nofollow">data training set based on the actions of billions of people around the globe in real time</a>).</p><p>There was no Twitter to scroll. User-uploaded video was not a thing &#x2014; <a href="https://en.wikipedia.org/wiki/YouTube" rel="noopener nofollow">YouTube was still 3 years away</a>. If you were a web-head, maybe you had a <a href="https://alejandrorioja.com/blog/history-of-blogging/" rel="noopener nofollow">&#x2018;weblog&#x2019;</a>, but most of us wouldn&#x2019;t visit one of those until 2005.</p><p><a href="https://www.thoughtco.com/who-invented-google-1991852" rel="noopener nofollow">Google was all about search</a>, on the cusp of becoming the world&#x2019;s predominant search engine (that would happen in 2002, 2004 if you&#x2019;re a purist); but you had to know what keywords to search in order to find what you were looking for (<a href="https://www.theguardian.com/technology/2015/jul/23/panopticon-digital-surveillance-jeremy-bentham" rel="noopener nofollow">over to you, Foucault</a>).</p><p>Early search engines like Google&#x2019;s literally indexed the web, which in 2001 was just over a billion pages of mostly HTML-coded text. Put into context, a media organization today probably has more than a billion &#x201C;pages&#x201D; of &#x201C;content&#x201D; within its own digital ecosystem.</p><p>Those early engines also made the choice to rank results by how many &#x2018;other sites&#x2019; linked to them, placing the most &#x2018;popular&#x2019; at the top.</p><p><em><em>&lt;snark&gt;Surely an infallible system! 
No one would create fake sites/people to inflate and skew this approach!&lt;/snark&gt;</em></em></p><p>&#x2018;Popularity&#x2019; (or, data we agree is a signal of popularity?) won the day then, as it does now. The headlines we clicked on, the stuff we now &#x201C;like&#x201D; or retweet, <em><em>our literal behavior and life on the web&#x2014;where we go, what we linger over, how much we despise that actor&#x2019;s haircut&#x2014; </em></em>these are all popularity data signals that BigTech has collected and built its business on.</p><p>Technologists and storytellers are alike in that &#x2014; we know there is tremendous power packed in our big emotions: anger, fear, jealousy and love.</p><p>That day, September 11th, 2001, woke Silicon Valley up to what we in the media business have always known: humans, with all our messy, beautiful, tragic, terrible and brilliant selves &#x2014; when we get together, <em><em>you can feel it</em></em>.</p><p>If you&#x2019;re a journalist at heart, I bet you want to use that energy in those big moments to help your fellow citizen better understand the world around them. I bet on those big news days, like September 11, your primary concern is how you tell the story as honestly and fairly as possible. The number of people you reach is secondary &#x2014; <em><em>but it is hard to resist the &#x201C;views&#x201D; count.</em></em></p><h2 id="because-advertising-">Because, advertising.</h2><p>Back in 2001, as now, we humans did most of the work when using the technology platforms.
We were, all of us, the real workers of the information economy, serving BigTech&#x2019;s machines the minutiae of our moments&#x2014; what we&#x2019;re up to, where we&#x2019;re going, what we&#x2019;re interested in, what we&#x2019;d like to spend our time (and money) doing &#x2014; all enormously valuable popularity data signals when it comes to advertising.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ecosystem_graph_advertising.jpeg" class="kg-image" alt="Our information ecosystem is in trouble. Here&#x2019;s how we can fix it." loading="lazy" width="750" height="636" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ecosystem_graph_advertising.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/ecosystem_graph_advertising.jpeg 750w" sizes="(min-width: 720px) 720px"><figcaption>The tech takeover of advertising began when companies like Google, Facebook and Twitter realized there was significant revenue to be made in the digital distribution of news information content. Today, 97% of Facebook&#x2019;s revenue comes from ads.</figcaption></figure><p>Our problematic relationship with BigTech&#x2019;s approach to predicting what information we need to make sense of the complicated world around us &#x2026; it started long before anyone had even imagined Instagram, TikTok, Facebook or the <a href="https://www.wired.com/story/russia-ira-propaganda-senate-report/" rel="noopener nofollow">Russian Internet Research Agency</a>.</p><h2 id="the-early-days-of-digital-news-in-conventional-media">The early days of digital news in conventional media</h2><p>On 9/11, I was in my mid-twenties, working as a digital producer for the Canadian Broadcasting Corporation for an investigative show on CBC TV called <em><em>Disclosure</em></em>.</p><p>As Canada&#x2019;s national public broadcaster, on September 11th &#x2014; and for weeks after &#x2014; CBC had a massive problem. 
Canadians could not access our homepage, <a href="http://www.cbc.ca/" rel="noopener nofollow">CBC.ca</a>, because too many people were trying to open it at once. Remember, this is before social media and <a href="https://en.wikipedia.org/wiki/Google_News" rel="noopener nofollow">Google News</a>. People came to the news brand they trusted and made it their homepage; our servers at CBC.ca were simply overwhelmed.</p><p>Despite being the oldest, most trusted and farthest-reaching network in Canada, and despite the fact that the internet was already the dominant news delivery platform for my generation, we couldn&#x2019;t make the internet work for us or our audience &#x2014; Canadian citizens who needed quality, trustworthy information. We were journalists failing to get the story out.</p><p>If you&#x2019;ve ever seen a movie about journalism, you know that&#x2019;s not how the story ever ends.</p><p>Looking back, I believe September 11th was a watershed moment, when the media ecosystem moved from an analog, broadcast model towards a digital &#x2014; and ultimately AI-reliant &#x2014; pillar of the information ecosystem.</p><p>The trouble is, BigTech got there first. And we helped them, by not rolling up our sleeves and doing the difficult business of figuring out how to effectively use technology to distribute our content, ourselves.</p><p>Today, we&#x2019;re facing an even more powerful entrant in the newsroom, AI, and we must get it right or risk the information economy being defined, <em><em>again</em></em>, by BigTech &#x2014; not to mention authoritarian tech.
When I say that, I&#x2019;m not just talking about what&#x2019;s going on in <a href="https://en.wikipedia.org/wiki/Social_Credit_System" rel="noopener nofollow">China</a>; there are many examples closer to home in places like the<a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing" rel="noopener nofollow"> criminal justice system</a>,<a href="https://www.cbc.ca/radio/thesundayedition/november-18-2018-the-sunday-edition-1.4907270/how-artificial-intelligence-could-change-canada-s-immigration-and-refugee-system-1.4908587" rel="noopener nofollow"> migration policy</a> and<a href="https://www.theglobeandmail.com/business/article-sidewalk-labs-document-reveals-companys-early-plans-for-data/" rel="noopener nofollow"> &#x201C;smart city&#x201D; building</a>. We need to get much better at defining how, where, why and when we enlist the help of AI in the crucial decision-making of our lives.</p><hr><h2 id="how-september-11th-sparked-the-origins-of-google-news-and-bigtech-s-takeover-of-distribution">How September 11th sparked the origins of Google News &#x2014; and BigTech&#x2019;s takeover of distribution</h2><p>From the mid-90s until September 2001, the web was largely a &#x2018;nice to have&#x2019;. When it came to infrastructure spending at most media companies, faced with a decision to invest in a new suite of cameras for the field or a robust server farm, investing in the tools of creation would win every time.</p><p>At CBC, we had managed traffic in the thousands, occasionally hundreds of thousands. In the days after September 11th, millions were trying to squeeze through a pipe that simply wasn&#x2019;t equipped to handle it. 
(The nostalgic in me imagines some of that traffic getting stuck in the original server built by a CBC Radio technician in a closet in the early &#x2019;90s &#x2014;<a href="https://www.cbc.ca/10th/columns/prehistory_gorbould.html" rel="noopener nofollow"> the original host of CBC.ca</a>.)</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ecosystem_google.jpeg" class="kg-image" alt="Our information ecosystem is in trouble. Here&#x2019;s how we can fix it." loading="lazy" width="750" height="506" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ecosystem_google.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/ecosystem_google.jpeg 750w" sizes="(min-width: 720px) 720px"><figcaption>Screen grab from Google.com (October 9, 2001, <a href="https://web.archive.org/web/20010801000000*/google.com" rel="noopener nofollow">Wayback Machine</a>). While Google did not agree to post links directly on its (very tidy!) homepage, the company did create a &#x201C;News and Resources&#x201D; section on the site to point to news providers and list direct links to articles suggested by media companies such as CBC.</figcaption></figure><p>The CBC&#x2019;s patchwork online server infrastructure was no match for the traffic from the events of September 11, so we called Google.<br><br>CBC News joined other media organizations &#x2014; Canadian, American and international &#x2014; with a simple ask: if we send Google direct links to the most important articles on our websites, could Google list direct links on its homepage? (It&#x2019;s a process I&#x2019;d initiated with Yahoo!
a few years earlier when I ran CBC&#x2019;s first online arts and entertainment news portal, <em><em>Infoculture</em></em>.)</p><p>&#x201C;<a href="https://en.wikipedia.org/wiki/Human-in-the-loop" rel="noopener nofollow">Human-in-the-loop</a>&#x201D; intervention by journalists was something the tech platforms needed and wanted. BigTech needed help putting context around the relentless barrage of news; they needed us to tell them what our stories were about, feeding data signals into their systems.</p><p>Our servers were melting. Google had the server heft, and 9/11 search queries dominated their interactions &#x2014; everyone would win if we could just work together.</p><p><strong><strong>And there you have it &#x2014; the origins of Google News.</strong></strong></p><p>It was an invention of the best kind &#x2014; made out of necessity. At CBC, we had digital line-up producers emailing contacts at Google with suggestions for news articles they should link to (my now Co-CEO/Co-Founder at Vubble called home to tell her parents when her biography of Osama bin Laden was featured on Google.com). The headlines and links went up, and Google came back for more.</p><p>For a while, it worked like that &#x2014; we&#x2019;d send links and mostly Google would list them. Then, as the news cycle pushed on, we went back to what we do, as journalists: getting the information, putting it together, getting it out there.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ecosystem_algorithms.jpeg" class="kg-image" alt="Our information ecosystem is in trouble. Here&#x2019;s how we can fix it." 
loading="lazy" width="750" height="567" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ecosystem_algorithms.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/ecosystem_algorithms.jpeg 750w" sizes="(min-width: 720px) 720px"><figcaption>Algorithms, machine learning, the functioning processes of AI &#x2014; they can only work with the signals we provide them.</figcaption></figure><p>The trouble is, legacy broadcast networks like CBC were so invested in old infrastructure (and for some, bountiful advertising revenue), we let go of the &#x2018;getting it out there&#x2019; piece.</p><p>This lack of innovation, and of imagination about how audience behaviour was changing, was exploited with tremendous effect by the social media platforms that emerged in the years that followed with the rise of &#x201C;Web 2.0&#x201D;. And it was exploited again by dark players who would later weaponize the frailties of those same platforms to wreak havoc on elections, human rights and ultimately our understanding of ourselves.</p><p>Getting trustworthy, quality news content in front of citizens who need to see and understand it is not a new thing. It is a core mission of journalists and the information media ecosystems of every democracy on Earth. We should have done better back on 9/11, and we must carefully consider our approaches to digital distribution now as artificial intelligence automates editorial functions in the newsroom today and tomorrow.</p><h2 id="how-bigtech-won-the-first-round">How BigTech won the first round</h2><p>In the fall of 2002, Google officially launched <a href="https://news.google.com/" rel="noopener nofollow">Google News </a>&#x2014; a technological solution to the problem of &#x2018;how can we help people find meaningful, relevant information on the web?&#x2019; It was a problem legacy media had failed to address, handing it to BigTech to manage.
And they did.</p><p>An interesting side-note: Google News was followed by Gmail (2004), Google Maps (2005) and YouTube (2005). Google News was a precursor to all of it. Tech companies like Google could see where conventional media was failing the emerging digital audience in 2001, and they filled that void.</p><p>To their credit, they made it better &#x2014; and for a time, the tools Google (later joined by Facebook, Twitter and the whole FAANG squad) came up with were irresistible to legacy media as we tried to sort out this &#x201C;internet thing&#x201D;.</p><p>Google News was digital publishing on steroids &#x2014; it was super efficient. If you could get your article listed (it was pretty much only text back then), you got an audience for it. Exponentially bigger audience numbers were coming directly from the BigTech platform referrals, eclipsing the traffic a media company could achieve alone.</p><p>Then, Google did what all tech platforms do &#x2014; they began automating their systems; at CBC, we had fewer and fewer opportunities to connect with the Google people to pitch for our articles to show up &#x2014; and eventually they just stopped answering the phone and responding to our emails. Eventually those friendly Google people we&#x2019;d dealt with in the early days after September 11 were replaced by algorithms to read, prioritize and distribute our articles &#x2014; serving millions of people at the click of a mouse, something we simply couldn&#x2019;t do with our &#x201C;legacy&#x201D; systems.</p><h2 id="content-creators-publishers-and-distributors-must-win-the-next-round">Content creators, publishers and distributors must win the next round</h2><p>Algorithms, machine learning, the functioning processes of AI &#x2014; they can only work with the signals we provide them.
We give a few instructions, and based on those signals, the machines make predictions &#x2014; and ultimately they will make <em><em>decisions</em></em> (if we let them &#x2014; we really should be more <a href="https://blogs.scientificamerican.com/observations/ethics-in-the-age-of-artificial-intelligence/" rel="noopener nofollow">careful about when that should happen</a>).</p><p>The earliest versions of Google&#x2019;s own search system relied on &#x201C;regular people&#x201D; like you or me, helping to identify the content of a web page and the topics it covered. For example, <a href="https://en.wikipedia.org/wiki/DMOZ" rel="noopener nofollow">DMOZ</a>, a collaborative editorial project, played a significant role in helping Google understand what websites of the day were really about, by enlisting an army of human volunteers to annotate and validate each page&#x2019;s content and context. <em><em>Because computers can&#x2019;t really think.</em></em></p><p>Today, the world&#x2019;s biggest technology companies are using thousands of human workers around the world to tell computers what to &#x201C;think&#x201D;. It is not exactly futuristic work. It is mundane but necessary data grunt-work: the manual annotation of content. Data-tagging has <a href="https://www.ft.com/content/56dde36c-aa40-11e9-984c-fac8325aaa04" rel="noopener nofollow">exploded as an industry</a>. Most tech executives don&#x2019;t discuss the labor-intensive process that goes into its creation. But I will &#x2014; and I will tell you that AI is learning from humans. Lots and lots of humans.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ecosystem_structured_data.jpeg" class="kg-image" alt="Our information ecosystem is in trouble. Here&#x2019;s how we can fix it." loading="lazy" width="400" height="394"><figcaption>Data tagging accounts for 80 percent of the time spent building AI technology.
At Vubble, we insist on a &#x201C;journalist-in-the-loop&#x201D; approach to this work when dealing with news/information content.</figcaption></figure><p>Before an AI system like my company&#x2019;s can learn, <a href="https://www.economist.com/business/2019/10/17/data-labelling-startups-want-to-help-improve-corporate-ai" rel="noopener nofollow">people have to label the data it learns from</a>. This work is vital to the creation of artificial intelligence used in systems for <a href="https://www.nytimes.com/2018/01/04/technology/self-driving-cars-aurora.html" rel="noopener nofollow">self-driving cars</a>, <a href="https://www.nytimes.com/2019/01/24/technology/satellites-artificial-intelligence.html" rel="noopener nofollow">surveillance</a> and <a href="https://www.nytimes.com/2019/03/10/technology/artificial-intelligence-eye-hospital-india.html" rel="noopener nofollow">automated health care</a>.</p><p>The market for data labeling passed $500 million in 2018 and it will reach $1.2 billion by 2023, <a href="https://www.cognilytica.com/2019/03/06/report-data-engineering-preparation-and-labeling-for-ai-2019/" rel="noopener nofollow">according to the research firm Cognilytica</a>. Data tagging accounts for 80 percent of the time spent building AI technology.</p><p>BigTech keeps quiet about this work; they face growing concerns about privacy and the mental health of &#x201C;taggers&#x201D; (cousins of the &#x201C;content moderators&#x201D;). At Vubble, we insist on using local journalists to data tag news video for some of the world&#x2019;s leading news organizations. 
That&#x2019;s because, as journalists, we know that context is everything, and humans still beat today&#x2019;s earthworm-brain AI.</p><p>AI is great when we use it to spot a cancerous mole &#x2014; it is mind-blowing that a machine can spot anomalies in thousands of images in a millisecond, something a human doctor, no matter how brilliant, could never do.</p><p><em><em>Er&#x2026; hold on!</em></em> <a href="https://youtu.be/PFUgwqOsbl8?t=122" rel="noopener nofollow">Here&#x2019;s an AI engineer from Google itself, telling us</a> that the human doctors out-performed the AI in some cases too. He ends by saying AI is best as a complement to the human brain. We lift each other up.</p><p>Today, we need to work together, with machines, in ways we have never worked before. If you stop reading here, I just ask that you keep this in mind: We all need to think carefully about how we work with AI going forward. The decisions we let AI make on our behalf are only as good as the signals we give it.</p><p>(2/4<em><em> &#x2014; In the next section, we&#x2019;ll dig into the weaknesses of AI, and how that&#x2019;s opening up new opportunities for the news media industry</em></em>)</p><hr><h2 id="take-what-ai-knows-with-a-grain-of-salt">Take what AI &#x201C;knows&#x201D; with a grain of salt</h2><p>The way we talk about AI is full of hype. We&#x2019;re really only at the <em><em>beginnings</em></em> of the beginning of AI and its intersection with humanity. It is doing some incredible things right now. It will do absolutely remarkable things in the future, I&#x2019;m sure.</p><p>But right now it&#x2019;s about as smart as an earthworm.
(That&#x2019;s a favourite analogy of AI researcher <a href="https://aiweirdness.com/aboutme" rel="noopener nofollow">Janelle Shane</a>, author of the hilarious and telling book, &#x201C;<a href="https://aiweirdness.com/books" rel="noopener nofollow"><em><em>You Look Like a Thing and I Love You: How AI Works and Why It&#x2019;s Making the World a Weirder Place</em></em></a>&#x201D;, which documents how AI can do some ridiculously awful things &#x2014; while showing us who we are in the process.)</p><p>In my couple of decades as a journalist, and as a human who watched the computer go from a &#x201C;business machine&#x201D; (<a href="https://en.wikipedia.org/wiki/ICON_(microcomputer)" rel="noopener nofollow">this was the first computer I used in my grade 6 &#x2018;computer lab&#x2019;</a>) to a personal AI butler in your pocket in less than three decades, I&#x2019;ve learned that technology really only moves as fast as we do. But sometimes it feels like we&#x2019;re not going in the same direction. We all need to pay attention and know when our paths diverge, especially if you&#x2019;re already using AI in your newsroom or as a tool for content recommendation.</p><h2 id="ai-doesn-t-know-what-you-re-talking-about">AI doesn&#x2019;t &#x201C;know&#x201D; what you&#x2019;re talking about</h2><p>It&#x2019;s difficult, but not impossible, to come up with signals about complex and evolving news stories. But we need humans, <em><em>journalists</em></em>, to teach the machines when it comes to information content.</p><p>This has become the core issue of our lives today, because the unexpected is our new normal. The world has gone from being complicated to being complex. There are patterns (which machines can spot and identify), but they don&#x2019;t repeat themselves with regularity (confounding those same machines). And so much of our world defies forecasting now.
Maybe Iran will retaliate against the USA, but we don&#x2019;t know why or when and whether it will be physical or cyber or something else. Climate change is real, but we can&#x2019;t predict what will happen in Australia&#x2019;s bush-fire crisis, and what the impact will be when climate migrants begin to move in significant numbers. Brexit may finally happen. Or not. And we&#x2019;re all at the mercy of one guy&#x2019;s Twitter quips from the White House (remember when &#x201C;microblogging&#x201D; sounded cute?).</p><p>Uncertainty rules the day. The &#x201C;news&#x201D; &#x2014; what is happening and what might happen next &#x2014; defies so much forecasting that efficiency doesn&#x2019;t help us; it actively undermines and erodes our capacity to adapt and respond.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ecosystem_notre_dame.png" class="kg-image" alt="Our information ecosystem is in trouble. Here&#x2019;s how we can fix it." loading="lazy" width="750" height="647" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ecosystem_notre_dame.png 600w, https://news.vubblepop.com/content/images/2020/10/ecosystem_notre_dame.png 750w" sizes="(min-width: 720px) 720px"><figcaption>Google&#x2019;s own algorithm misinterpreted what was going on and began recommending stories about 9/11 to citizens watching footage of Paris&#x2019; Notre Dame cathedral ablaze on April 15, 2019.</figcaption></figure><p>I have had this feeling before. Maybe you have too. To me it feels exactly like the days in and around 9/11 &#x2014; and I&#x2019;m feeling a warning coming on: <em><em>when we abdicate responsibility for understanding the complex issues of our day to technology, it makes mistakes. We make mistakes. 
</em></em>Like Google&#x2019;s own algorithm that began <a href="https://www.theguardian.com/world/2019/apr/15/notre-dame-fire-youtube-panels-show-9-11-attacks" rel="noopener nofollow">recommending stories about 9/11 to citizens watching footage of Paris&#x2019; Notre Dame cathedral ablaze on April 15, 2019</a>.</p><p>What irony &#x2014; watching Google&#x2019;s own tagging training set provide the basis for inaccurate and outright stupid recommendations on Google&#x2019;s own video platform, at a time when people need access to reliable, factual information.</p><p>But hey &#x2014; it&#x2019;s going to be okay. That&#x2019;s why we&#x2019;re in this business. We journalists roll with uncertainty. We work to find the facts amid ambiguous noise to help our fellow citizens understand what&#x2019;s going on today. That&#x2019;s our job. We do the work that AI needs most now. Structured, reliable, dependable data that a machine can learn from: it is the foundation of AI.</p><p>And the great news is that when it comes to &#x201C;ambiguous&#x201D; information content &#x2014; that structured data belongs to all of us.</p><p>We just have to keep it that way.</p><h2 id="a-call-for-collaboration-between-ai-and-journalism">A call for collaboration between AI and journalism</h2><p>Our world is awesome, chaotic and confusing. The complexity of human life is more than the 1s and 0s a machine can understand. When it works well, AI can very quickly perform repetitive, narrow and defined tasks.</p><p>When you&#x2019;re working with AI, it&#x2019;s not like working with another human; it&#x2019;s more like working with some weird force of nature. It is really easy to give AI the wrong problem to solve. We as humans aren&#x2019;t always great at defining a narrow problem, because our brains are wildly complex. Our brains do a lot of really broad, advanced problem-solving without us even noticing.</p><p>Chess is a complicated game. 
But it&#x2019;s also based on rules, logic and probability. Machine learning can handle that, and while it surprised many when a supercomputer named <a href="https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov" rel="noopener nofollow">Deep Blue beat world chess champion Garry Kasparov in 1997</a>, it makes perfect sense that a machine could learn from our moves and mistakes and ultimately <a href="https://www.theverge.com/2019/11/27/20985260/ai-go-alphago-lee-se-dol-retired-deepmind-defeat" rel="noopener nofollow">kick our pants in an even more complex game, Go, in 2016</a>.</p><p>Is playing a game of chess more complex than doing the laundry? You might say yes, but let&#x2019;s dive in for a moment. What about the different fabrics? Can they all be washed the same way? Sure, you might be super high-tech with your smart-labelled clothes, but what of the items that aren&#x2019;t? What about the colours? Your kid&#x2019;s tie-dyed shirt from camp, can that go in? Where did that other blue sock go?</p><p>What we might consider the simple chore of doing laundry is actually a much more complicated task than it appears at first glance. (Incidentally, I would be remiss if I didn&#x2019;t take a moment to flag that there are some problems tech just <a href="https://www.theverge.com/2019/4/23/18512529/laundroid-laundry-folding-robot-seven-dreamers-bankrupt-ces" rel="noopener nofollow">doesn&#x2019;t need to solve for us</a>. We need to get better at deciding when that is.)</p><p>This is why it&#x2019;s so hard to design a problem that AI can understand and make dependable predictions and recommendations on. 
This problem gets infinitely more complicated when we&#x2019;re dealing with video.</p><p>The AI used to recommend video content on YouTube, and now by some media publishers for their own information video, is optimized to bias in favour of clicks and views. Popularity signals are the main drivers of recommendations, because more clicks and views mean more exposure to advertisements, the revenue source of most content publishers and BigTech.</p><p>But here&#x2019;s something we know about humans. Content that is sensational, that makes us angry, really fires us up: we click, we comment, we share, we give that content a lot of our attention. Our engagement behaviour around that content, in turn, provides signals to the machines recommending it, amplifying its spread. This is why, within a few clicks, you&#x2019;ll likely be recommended misinformation, conspiracy theories, and worse. The AI itself doesn&#x2019;t have a concept of what this content is, or what the consequences might be of recommending it. It&#x2019;s just recommending what we&#x2019;ve told it to.</p><p>(3/4 <em><em>&#x2014; In the final section we&#x2019;ll explore a narrow opportunity we have at this very moment to bolster the world&#x2019;s information ecosystem, putting us, curious, thoughtful human thinkers, at the centre again.</em></em>)</p><p>We as humans have to learn how to communicate with AI. We have to learn what it&#x2019;s good at helping us with, and what it might mess up if we&#x2019;re not watching it.</p><p>The role of the modern digital citizen of a democracy has become similar to that of the old-school editor &#x2014; knowing where a piece of information came from, assessing its credibility and potential biases, framing it within the context of the rest of the information of the day.</p><p>There is simply no substitute for human judgement. 
Algorithms making decisions need to be audited to help us uncover biases (unintentional and overt) and, where biases exist, to show how our AI systems can be adjusted to limit their impact.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ecosystem_bulb.jpeg" class="kg-image" alt="Our information ecosystem is in trouble. Here&#x2019;s how we can fix it." loading="lazy" width="500" height="500"><figcaption>We do the work that AI needs most now. Structured, reliable, dependable data that a machine can learn from: it is the foundation of AI.</figcaption></figure><p>The role of the modern provider of news and information media is to know how AI is being used to distribute your content. It is our absolute responsibility to know exactly what instructions our machine learning systems are basing their predictions on. We must also know what training data set our AI is using to learn from; where that training set may be thin, where it may lack diversity in its examples; how it can be improved upon to deliver the right content recommendation to the person who needs it when they need it.</p><p>There is simply too much at risk, and tremendous opportunity missed, if we don&#x2019;t.</p><h2 id="what-we-need-to-do-next">What we need to do next</h2><p>Beware of the hype. Today&#x2019;s AI is not super-competent and all-knowing. Everything AI knows is what we&#x2019;ve told it. And right now, the media industry has fallen behind in helping AI help us when it comes to news and information content.</p><p>BigTech marketers would like us to believe that AI systems are neutral, highly intelligent and sophisticated. But we simply aren&#x2019;t there yet. The tech world gets excited about things like &#x201C;big data&#x201D; and &#x201C;data as the new oil&#x201D;. Data kind of is, if you think of it purely as a resource. 
But to my mind, we need to be thinking about it as a <em><em>public</em></em> resource.</p><p>Data signals can be used for good and for bad (intentionally and not). The same data training set could be used by medical researchers to uncover better diagnostic symptoms for a form of breast cancer &#x2014; or used by an insurance provider to identify those customers more likely to develop that breast cancer. One machine learning system could use a training set to hire the best candidate for the job; another could unintentionally ignore female applicants because of a biased weighting in its machine learning logic.</p><p>Instead of throwing up our hands, we have a narrow opportunity at this very moment to bolster the world&#x2019;s information ecosystem, putting us, curious, thoughtful human thinkers, at the centre again.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ecosystem_vubble.jpeg" class="kg-image" alt="Our information ecosystem is in trouble. Here&#x2019;s how we can fix it." loading="lazy" width="750" height="570" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ecosystem_vubble.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/ecosystem_vubble.jpeg 750w" sizes="(min-width: 720px) 720px"><figcaption>In 2020, it is Vubble&#x2019;s mission to help Canada&#x2019;s news media break through the barrier of weak and unstructured data, while building the world&#x2019;s largest, context-rich data training set for teaching machines to provide top-quality, reliable recommendations at a mass scale.</figcaption></figure><p>We need to start with some grunt work: data-tagging. Creating structured data for information video content is not glamorous, but it is the building block of effective machine learning. 
Since 2014, my company, Vubble, has been doing the critical work of data-tagging news video from the world&#x2019;s leading news organizations (including CTV News in Canada and Channel 4 News in the UK).</p><p>Using our unique &#x2018;journalist-in-the-loop&#x2019; approach to annotation and our proprietary taxonomy created by journalists and library scientists, Vubble has created what we believe to be the world&#x2019;s largest data training set for &#x201C;ambiguous&#x201D; information video content &#x2014; the key to unlocking AI that can help us understand what&#x2019;s happening in video and moving images, and even predict what is happening in <em><em>real life</em></em>. (&#x201C;Ambiguous&#x201D; content is an AI term that refers to complex information that requires context for comprehension by humans, and is particularly opaque to the earthworm mind of current AI systems.)</p><p>The AI systems that exist today, including Vubble&#x2019;s, can only predict with slightly better-than-random certainty what is actually happening in a news video. But our AI is getting smarter every day, thanks in large part to the priority we put on transparency, human (journalistic) insight and oversight, and what&#x2019;s called in the industry &#x201C;explainable AI&#x201D; (an emerging field in machine learning that aims to provide overt transparency, accountability and trustworthiness in AI systems).</p><p>Our AI has more to learn from every day as we continue to annotate information video from the world&#x2019;s leading media publishers. 
In 2019, the Canadian government joined us to help, providing Vubble with funding via the <a href="https://www.canada.ca/en/canadian-heritage/services/online-disinformation.html" rel="noopener nofollow">Digital Citizen Initiative</a> to subsidize our cloud-based data-tagging of the long-tail information video from Canada&#x2019;s major news media companies.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ecosystem_discovery_box.jpeg" class="kg-image" alt="Our information ecosystem is in trouble. Here&#x2019;s how we can fix it." loading="lazy" width="1050" height="614" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ecosystem_discovery_box.jpeg 600w, https://news.vubblepop.com/content/images/size/w1000/2020/10/ecosystem_discovery_box.jpeg 1000w, https://news.vubblepop.com/content/images/2020/10/ecosystem_discovery_box.jpeg 1050w" sizes="(min-width: 720px) 720px"><figcaption>Discovery Box: Canada, a bilingual database of news video, filterable in three ways: linear feed, keyword search and an opt-in algorithm.</figcaption></figure><p>This month, we will launch a public-facing version of this effort, <a href="https://www.vubblepop.com/media" rel="noopener nofollow">Discovery Box: Canada</a> (scroll down a bit), a bilingual database of news video, filterable in three ways: linear feed, keyword search and an opt-in algorithm.</p><p>What we&#x2019;re trying to do with Discovery Box: Canada:</p><ul><li>standardize the annotation of news video content among Canadian creators and publishers</li><li>expand the diversity of content distribution via Vubble&#x2019;s proprietary &#x2018;bias spread&#x2019; algorithmic approach</li><li>lift critical thinking among the Canadian public through explainable AI and our content assessment tool, the Vubble Credibility Meter.</li></ul><p>In return, Vubble is providing Canada&#x2019;s main news media publishers with the structured data our editors 
have generated around their news video. A must-have for quality, reliable AI recommendations, this structured data, if used thoughtfully and effectively, will help Canada&#x2019;s news media as they move from conventional print and broadcast distribution towards AI distribution, ready to make powerful, reliable content recommendations and get the right information in front of the people who need it.</p><p>In 2020, it is Vubble&#x2019;s mission to help Canada&#x2019;s news media break through the barrier of weak and unstructured data, while building the world&#x2019;s largest, context-rich data training set for teaching machines to provide top-quality, reliable recommendations at a mass scale.</p><p>We&#x2019;re passionate about the innovation, research and development possibilities that promise to grow from here, from using the Vubble training data set to help companies predict changes in audience usage behaviour, to automating the real-time mass delivery of critical news information across platforms and devices. We&#x2019;re rolling up our sleeves, developing new distribution tools to meet Canadians where they are, with their needs at the core of our decision-making.</p><h2 id="we-re-not-doing-this-because-we-can-we-re-doing-this-because-we-must">We&#x2019;re not doing this because we can, we&#x2019;re doing this because we must</h2><p>Not long ago, the media industry woke up and realized that we no longer own our relationship with our customers. We no longer run the distribution business that generates profits from our work, and most of us don&#x2019;t own the relationship with the technology to get our stories out there. 
When that first Trojan Horse rolled into our industry in the days after September 11, 2001, we began to cede virtually every facet of our industry to BigTech.</p><p>No more.</p><p>The news media&#x2019;s relationship with the citizens of our democracies is a partnership &#x2014; one that requires trust, respect and transparency as AI enters the newsroom. We have a common goal: to help citizens access trustworthy, factual information when and how they need to receive it.</p><p>As automation advances into the media business, particularly in the distribution space, it is the responsibility of our entire industry to ensure that we move forward together, in meaningful cooperation, to counter the power and influence of BigTech in the information ecosystem.</p><p>At Vubble, we&#x2019;re committed to building strong and lasting collaborations within the news media around three things: providing structured data around your large libraries of information video content; continuing to build the world&#x2019;s largest journalist-annotated information video training dataset; and being a &#x2018;sandbox&#x2019; of AI R&amp;D, where we can all work together to test new ML methods, try out new training models, and share new learning.</p><p>In 2020, if we can find ways to work together, the entire Canadian media industry will be better prepared for AI&#x2019;s advance into the newsroom. The future hasn&#x2019;t been written yet &#x2014; a free and informed society depends on us rolling up our sleeves and getting into the hard work of rewriting our industry&#x2019;s relationship with AI.</p><p>Because a healthy information ecosystem is the lifeblood of a functioning democracy.</p><p>(4/4 &#x2014; <em><em>Thanks for reading. If you have thoughts, </em></em><a href="mailto:tessa@vubblepop.com" rel="noopener nofollow"><em><em>get in touch</em></em></a><em><em>. 
I&#x2019;d love to hear from you!</em></em>)</p><hr><p><em><em>Tessa Sproule is the Co-Founder and Co-CEO of </em></em><a href="https://www.vubblepop.com/" rel="noopener nofollow"><em><em>Vubble</em></em></a><em><em>, a media technology company based in Toronto and Waterloo, Canada. Vubble helps media and educational groups (like CTV News, Channel 4 News, Let&#x2019;s Talk Science) by cloud-annotating news video, building tools for digital distribution and generating deeply personalized recommendations via Vubble&#x2019;s machine-learning platform.</em></em></p>]]></content:encoded></item><item><title><![CDATA[Watching the watchers: How the Big Tech platforms are working for — and against — you]]></title><description><![CDATA[This week, we want to draw your attention to a remarkably insightful series on social media manipulation from Destin Sandlin at his YouTube channel "Smarter Every Day".]]></description><link>https://news.vubblepop.com/watching-the-watchers-how-the-big-tech-platforms-are-working-for-and-against-you/</link><guid isPermaLink="false">5f80b4d0cdea3405af33ba54</guid><dc:creator><![CDATA[Tessa Sproule]]></dc:creator><pubDate>Fri, 26 Apr 2019 19:08:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/watching_header.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2020/10/watching_header.jpeg" alt="Watching the watchers: How the Big Tech platforms are working for &#x2014; and against &#x2014; you"><p>We&#x2019;ve been watching the mechanics of the Big Tech platforms since before we founded Vubble Inc. in August, 2014.</p><p>From our backgrounds in the legacy broadcast media industry, we knew first-hand how powerful and potent platforms like Facebook and Twitter had become in the distribution of content in the information economy. 
We hypothesized that advertising revenue was driving most of the internal decision-making coded into those platforms&#x2019; algorithms, in a deeply problematic way.</p><p>A lot of our early investigating has had a direct impact on how we formed Vubble and the company&#x2019;s values (not using advertising as a revenue model, for example). It&#x2019;s been a big challenge, bucking the conventions of tech business-building, but world events like the 2016 US election, and the manipulation of democracy itself by dark players, have accumulated and proven to us that we are on the right path.</p><p>This week, we want to draw your attention to a remarkably insightful series on social media manipulation from <a href="https://en.wikipedia.org/wiki/Destin_Sandlin" rel="noopener nofollow">Destin Sandlin</a> at his YouTube channel <a href="https://www.youtube.com/channel/UC6107grRI4m0o2-emgoDnAA" rel="noopener nofollow">Smarter Every Day</a>. The series is presented in three parts (watch below) and shows just how difficult the public&#x2019;s relationship with the Big Tech platforms has become.</p><p>If you use social media or any of the Big Tech platforms, we encourage you to check it out.</p><h2 id="part-one-manipulating-the-youtube-algorithm">Part One: Manipulating the YouTube algorithm</h2><!--kg-card-begin: html--><iframe width="612" height="344" src="https://www.youtube-nocookie.com/embed/1PGm8LslEb4" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><!--kg-card-end: html--><h2 id="part-two-twitter-platform-manipulation">Part Two: Twitter platform manipulation</h2><!--kg-card-begin: html--><iframe width="612" height="344" src="https://www.youtube-nocookie.com/embed/V-1RhQ1uuQ4" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><!--kg-card-end: html--><h2 
id="part-three-people-are-manipulating-you-on-facebook">Part Three: People are manipulating you on Facebook</h2><!--kg-card-begin: html--><iframe width="612" height="344" src="https://www.youtube-nocookie.com/embed/FY_NtO7SIrY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><!--kg-card-end: html--><p>We want to say a personal thanks to Destin Sandlin for the enormous effort that went into this series. We know how challenging it must have been, not just in the research and production, but in gaining access to the platforms.</p><p>Until next time &#x2014; keep watching the watchers.</p><p>Tessa + Katie</p>]]></content:encoded></item><item><title><![CDATA[It’s Time to Talk About Ethics in Artificial Intelligence]]></title><description><![CDATA[In the media and other industries, automation is presented as a neutral process, the straightforward consequence of technological progress. It is not.]]></description><link>https://news.vubblepop.com/its-time-to-talk-about-ethics-in-artificial-intelligence/</link><guid isPermaLink="false">5f779342be2c5560f57e1238</guid><category><![CDATA[ai]]></category><category><![CDATA[machine learning]]></category><category><![CDATA[ethics]]></category><category><![CDATA[technology]]></category><category><![CDATA[tech]]></category><category><![CDATA[artificial intelligence]]></category><category><![CDATA[regulation]]></category><dc:creator><![CDATA[Tessa Sproule]]></dc:creator><pubDate>Tue, 04 Dec 2018 21:00:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/ai-ethics-banner-1-1.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2020/10/ai-ethics-banner-1-1.jpeg" alt="It&#x2019;s Time to Talk About Ethics in Artificial Intelligence"><p><strong>Tessa Sproule &#x2014; Co-Founder and Co-CEO, Vubble<br><em>Based on <a 
href="https://data-hub.hubs.vidyard.com/watch/tqCparDiN31KgTn1m2Fzwg">a presentation to Communitech</a>, Data Hub in Waterloo, Ontario, Canada &#x2014; December 5, 2018</em><br></strong><br></p><p>Whether we build it or use it or both, a lot of us think of technology as a neutral thing &#x2014; something that&#x2019;s working in the background: helping us by automating the annoying stuff in our day; amplifying our brightest ideas by making them even bigger, faster and more awesome; keeping us on the right track on our drive home. </p><p>But technology is not neutral. </p><p>We cannot simply engineer perfect solutions to the messy problems that come with human living. Just ask Mark Zuckerberg.</p><h2 id="artificial-intelligence-ai-defined">Artificial Intelligence (AI) Defined</h2><p>AI isn&#x2019;t just one thing. It&#x2019;s a constellation of things. It&#x2019;s machine learning. It&#x2019;s algorithms. It&#x2019;s data collection. It&#x2019;s data processing. It doesn&#x2019;t just happen inside a computer; it happens in conjunction with real people. People who sort things. People who decide things. People who want things.</p><p>Artificial intelligence is also personal. Without our inputs, without our information, it doesn&#x2019;t work. It&#x2019;s not something happening by itself in a computer somewhere &#x2014; it&#x2019;s happening because of how you&#x2019;ve interacted with it. What you&#x2019;ve shared with it, knowingly or not &#x2014; and what the people behind it think is important.</p><h2 id="the-impact-of-ai-on-the-information-industry">The Impact of AI on the Information Industry</h2><p>For me, it got very personal with information media content. In 2013, I was Director of Digital at CBC and increasingly worried about the path ahead for media as we moved quickly into a post-broadcast world.<br><br>I was worried about filter bubbles. Algorithms, largely on Facebook, were defining what most adult Canadians were watching and hearing about online. 
I was worried about whether the public sphere would have enough shared knowledge about issues of the day to make sound policy advances. I was worried we were heading fast down a dark tunnel of clickbait, misinformation and popularity governing the information industry.</p><figure class="kg-card kg-image-card"><img src="https://news.vubblepop.com/content/images/2020/10/ai-ethics-laptop-bed.jpeg" class="kg-image" alt="It&#x2019;s Time to Talk About Ethics in Artificial Intelligence" loading="lazy" width="1500" height="897" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ai-ethics-laptop-bed.jpeg 600w, https://news.vubblepop.com/content/images/size/w1000/2020/10/ai-ethics-laptop-bed.jpeg 1000w, https://news.vubblepop.com/content/images/2020/10/ai-ethics-laptop-bed.jpeg 1500w" sizes="(min-width: 720px) 720px"></figure><p>AI technology is not a tool when it comes to media. It is the master of our attention. &#x201C;Attentional control&#x201D; is something unique to us humans. It&#x2019;s embedded in our psychology and largely automatic &#x2014; it helps you understand what is important, what you should pay attention to, what to allocate your brain power to. In our earlier times, &#x2018;attentional control&#x2019; was especially helpful if you encountered a bear while you were out foraging.<br><br>Today, we still possess this unique, innate trigger. Today, the media&#x2019;s use of AI technology is driven by the &#x201C;attention economy,&#x201D; in which we buy with our likes, our views, our shares. In the &#x201C;attention economy,&#x201D; the most viral wins.<br><br>And the most viral tends to be the extreme stuff &#x2014; the things that really anger us travel farthest and fastest. 
The single <a href="https://www.nbcnews.com/politics/politics-news/fake-news-went-viral-2016-expert-studied-who-clicked-n836581">most popular news story of the entire 2016 US election</a> &#x2014; &#x201C;Pope Francis Shocks World, Endorses Donald Trump for President&#x201D; &#x2014; was a lie fabricated by teenagers in Macedonia. Three times as many Americans read and shared it on their social media accounts as they did the top-performing article from the New York Times. In the final three months of the 2016 election, <a href="https://www.bbc.com/news/blogs-trending-42724320">more fake political headlines were shared on Facebook than real ones</a>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ai-ethics-phones-on-platform.jpeg" class="kg-image" alt="It&#x2019;s Time to Talk About Ethics in Artificial Intelligence" loading="lazy" width="750" height="500" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ai-ethics-phones-on-platform.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/ai-ethics-phones-on-platform.jpeg 750w" sizes="(min-width: 720px) 720px"><figcaption>Today, the media&#x2019;s use of AI technology is driven by the &#x201C;attention economy,&#x201D; in which we buy with our likes, our views, our shares. In the &#x201C;attention economy,&#x201D; the most viral wins. 
Photo by <a href="https://unsplash.com/photos/gRsBNSKgfII?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">rawpixel</a> on <a href="https://unsplash.com/search/photos/cellphone?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>When we pay more attention to something that has more likes at the expense of really important things, we&#x2019;re making decisions on what&#x2019;s a priority, rewiring our brains, determining whose voices are heard, whose facts dominate.<br><br>Now we carry a propaganda megaphone in our pockets, and algorithms engineered towards deeply personalized experiences that are designed to ratchet up our attention consumption &#x2014; feeding us more and more of the stuff we like, the things that push our buttons, the things that grab our attention. Meanwhile, we are starving for quality information in the midst of plenty.</p><h2 id="technology-is-not-neutral">Technology is Not Neutral</h2><p>In the media and other industries, automation is presented as a neutral process, the straightforward consequence of technological progress. It is not.<br><br>Online, just as offline, attention and influence largely gather around those who already have plenty of both. A few giant companies remain the gatekeepers, while the worst habits of the old media model &#x2014; the pressure towards quick celebrity, to be sensational above all &#x2014; have proliferated in the ad-driven system.<br><br>Tech&#x2019;s posture is to deny its own impact. (&#x201C;We just run a platform.&#x201D;) But the effects are deep, real and troubling. 
Just because the Internet is open doesn&#x2019;t mean it&#x2019;s equal; offline hierarchies carry over to the online world and are even amplified there.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ai-ethics-robot-human.jpeg" class="kg-image" alt="It&#x2019;s Time to Talk About Ethics in Artificial Intelligence" loading="lazy" width="750" height="648" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ai-ethics-robot-human.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/ai-ethics-robot-human.jpeg 750w" sizes="(min-width: 720px) 720px"><figcaption>In media and other industries, automation is presented as a neutral process, the straightforward consequence of technological progress. It is not. Photo by <a href="https://unsplash.com/photos/YKW0JjP7rlU?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" rel="noopener nofollow">Franck V.</a> on <a href="https://unsplash.com/search/photos/robot?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" rel="noopener nofollow">Unsplash</a></figcaption></figure><p>A profoundly troubling area where bias is obvious and problematic is AI-driven facial recognition. For darker-skinned women, existing AI image software has a 35% error rate. For darker-skinned men, it has a 12% error rate. The Caucasian error rate is much lower than either of those.<br><br>And guess what happens when you add something like predictive policing to that scenario? 
Non-profit journalists at ProPublica audited risk-assessment software used by American courts and the justice system to determine the likelihood that a convicted criminal will re-offend and <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">found the system routinely and wrongly predicted black convicts would re-offend</a>.<br><br>Work is being done to fix this, but even if biases are addressed and facial recognition systems operate in a way we all deem fair, there&#x2019;s still a problem. Facial recognition, like many AI technologies, has a rate of error even when it operates in an unbiased way. Who wants to be in the unlucky group on the wrong end of a false positive?</p><h2 id="overselling-ai">Overselling AI</h2><p>Here&#x2019;s a secret. Big tech already knows about the limits of the technology &#x2014; that humans are needed to intervene at points. What they don&#x2019;t show, and we fail to see, is the labor of our fellow human beings behind the curtain.<br><br><a href="https://www.youtube.com/watch?v=k9m0axUDpro">The Moderators</a> is a 2017 documentary directed by Adrian Chen and Ciar&#xE1;n Cassidy. It offers a look into the lives of workers who screen and censor digital content. Hundreds of thousands of people work in this field, staring at beheadings, rape and animal torture, and other terrible images in order to filter what appears in our social media feeds.<br><br>More people work in the shadow mines of content moderation than are officially employed by Facebook or Google. 
These are the people who keep our Disneyland version of the web spic and span.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ai-ethics-camera-pole.jpeg" class="kg-image" alt="It&#x2019;s Time to Talk About Ethics in Artificial Intelligence" loading="lazy" width="1500" height="1000" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ai-ethics-camera-pole.jpeg 600w, https://news.vubblepop.com/content/images/size/w1000/2020/10/ai-ethics-camera-pole.jpeg 1000w, https://news.vubblepop.com/content/images/2020/10/ai-ethics-camera-pole.jpeg 1500w" sizes="(min-width: 720px) 720px"><figcaption>What role do we want this type of technology to play in everyday society? Photo by <a href="https://unsplash.com/photos/16pOau3hBMY?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" rel="noopener nofollow">Nathaniel dahan</a> on <a href="https://unsplash.com/search/photos/surveillance?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" rel="noopener nofollow">Unsplash</a></figcaption></figure><h2 id="a-tool-for-good-and-bad">A Tool for Good and Bad</h2><p>Yes, AI can be used in positive and profound ways. It can be used to track a heartbeat and predict health issues and even mental health episodes before they happen. It can be used to identify a missing child. It can alert the police about a terrorist wandering in a crowd. It can help the blind understand what is happening around them in real time. It can predict &#x2014; with better accuracy than human doctors &#x2014; the likelihood of skin cancer based on an image of a mole.<br></p><p>But it can also be used in deeply damaging ways. Damaging to freedoms, human rights and our understanding of ourselves. It can be used to track you without your permission or knowledge. 
It might sell marketers your interest in the shoes you looked at in a store window; it could tell your potential life insurance provider that you&#x2019;re probably going to get cancer; it could tell your boss you&#x2019;re buying too much beer; it could tell the police you&#x2019;re probably going to do something bad. Look to what&#x2019;s happening in China right now with the euphemistically named &#x201C;Social Credit System&#x201D; for a stark and sobering example in action.<br><br>It all raises a critical question: what role do we want this type of technology to play in everyday society?</p><h2 id="regulation-enter-the-big-r-word">Regulation &#x2014; Enter the Big R Word</h2><p>If we want the Internet to truly be a people&#x2019;s platform, we have to work to make it so.<br><br>We are privileged to live in an advanced democratic country; we need to call on our elected representatives on issues that require the balancing of public safety with our democratic freedoms. Artificial intelligence requires the public and private sectors alike to step up &#x2014; and to act.<br><br>I&#x2019;m not saying the Internet needs to be regulated &#x2014; but that these big tech corporations need to be subject to governmental oversight. They are reaching farther into our private moments. They are watching us. We need to watch them.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ai-ethics-big-tech.jpeg" class="kg-image" alt="It&#x2019;s Time to Talk About Ethics in Artificial Intelligence" loading="lazy" width="750" height="500" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ai-ethics-big-tech.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/ai-ethics-big-tech.jpeg 750w" sizes="(min-width: 720px) 720px"><figcaption>Big tech corporations need to be subject to governmental oversight. They are reaching farther into our private moments. They are watching us. 
We need to watch them. Photo by <a href="https://unsplash.com/photos/ra4vJwxnvAo?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" rel="noopener nofollow">Glen Carrie</a> on <a href="https://unsplash.com/search/photos/facebook?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" rel="noopener nofollow">Unsplash</a></figcaption></figure><p>I&#x2019;m for regulating specific things, like Internet access, and stronger protections and restrictions on data gathering, retention, and use. The sooner we get out in front of the social consequences of AI, the better for all.<br><br>Yes, it&#x2019;s unusual for a company to ask for government regulation of its products, but at Vubble we believe thoughtful regulation contributes to a healthier ecosystem for consumers and producers alike. We advocate for a &#x201C;technocracy&#x201D; approach: the production of technology that doesn&#x2019;t just feed our business, and that of our customers, but that does good and makes society a better place for us all.<br><br>Consider this: the auto industry spent decades in the 20th century resisting calls for regulation, but today we all appreciate the role regulations have played in making us safer. As Zeynep Tufekci put it:</p><blockquote>&#x201C;Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones.&#x201D;</blockquote><p>The issue at stake is nothing less than what kind of society we want to be living in in the future &#x2014; the society we want our children to be living in. It is not enough to just build it. We need to think it through. 
What does it mean to be a leader in the responsible development and use of artificial intelligence &#x2014; will you join us?</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/ai-ethics-auto-industry.jpeg" class="kg-image" alt="It&#x2019;s Time to Talk About Ethics in Artificial Intelligence" loading="lazy" width="1050" height="589" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/ai-ethics-auto-industry.jpeg 600w, https://news.vubblepop.com/content/images/size/w1000/2020/10/ai-ethics-auto-industry.jpeg 1000w, https://news.vubblepop.com/content/images/2020/10/ai-ethics-auto-industry.jpeg 1050w" sizes="(min-width: 720px) 720px"><figcaption>The auto industry spent decades in the 20th century resisting calls for regulation, but today we all appreciate the role regulations have played in making us all safer. Photo by <a href="https://unsplash.com/photos/C4pbVC4VptI?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" rel="noopener nofollow">Gabriel Jimenez</a> on <a href="https://unsplash.com/search/photos/old-chevrolet?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText" rel="noopener nofollow">Unsplash</a></figcaption></figure><h2 id="things-we-re-considering-at-vubble-">Things We&#x2019;re Considering at Vubble:</h2><ol><li>Consult with or hire an ethicist to work with our corporate decision makers. They should be able to help answer things like: is this AI levelling the playing field? What are negative potential consequences of using it? What are the consequences of not using it?</li><li>Develop an ethics code that lays out how issues will be handled.</li><li>Have an AI review board that audits how our AI is working and addresses ethical questions on an ongoing basis.</li><li>Reward AI for &#x201C;showing its workings&#x201D;. 
Invest in &#x2018;explainable AI&#x2019;; make clear what parameters are being assessed by our machine learning and how those parameters are being weighted.</li><li>Develop annotated coding trails that show how programming decisions have been made.</li><li>Implement AI training programs for our employees that operationalize ethical considerations.</li><li>Build a diverse team that&#x2019;s enabled and invited to interrogate decision making in AI. AI should reflect the diversity of the users it serves.</li><li>What is our plan for remediation in those cases where AI ends up inflicting harm or damage on people? AI must be held to account &#x2014; so must its developers and builders.</li><li>How does our AI both replace and also create?</li></ol><p>This list, which is by no means exhaustive, illustrates the breadth and importance of the issues involved. It&#x2019;s a start, and we invite input.<br><br>There&#x2019;s a role for you, the public, to play in all of this: critiquing us, and voting with your ballots, wallets and attention to hold us all to account. Self-regulation is no substitute for public judgement when it comes to decision making. So please, interrogate the AI in your lives. Speak up. 
Let us know how we&#x2019;re doing and hold us to account.<br><br><em>Based on <a href="https://data-hub.hubs.vidyard.com/watch/tqCparDiN31KgTn1m2Fzwg">a presentation by Tessa Sproule to Communitech, Data Hub in Waterloo, Ontario, Canada</a>, December 5, 2018.</em></p>]]></content:encoded></item><item><title><![CDATA[5 things you need to know about the fight against misinformation today]]></title><description><![CDATA[How regulation, Facebook's data, cool tools, media literacy and provenance relate to the fight against media misinformation.]]></description><link>https://news.vubblepop.com/5-things-you-need-to-know-about-the-fight-against-misinformation-today/</link><guid isPermaLink="false">5f85e1c3cdea3405af33ba80</guid><dc:creator><![CDATA[Katie MacGuire]]></dc:creator><pubDate>Tue, 14 Aug 2018 19:00:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/five-things-header.jpg" medium="image"/><content:encoded><![CDATA[<ol><li><strong><strong>Regulation: Americans are not keen to regulate their platforms.</strong></strong> Turns out the First Amendment (the one about freedom of speech) is really robust, and the net neutrality mentality has a lot of people studying misinformation saying that not too much can be done to regulate the platforms. Department of Justice Deputy Assistant Attorney General Adam S. Hickey explained at MisInfoCon DC, &#x201C;Transparency, not prohibition, has been the government&#x2019;s response to misinformation.&#x201D; The DOJ will pass information to the platforms but it won&#x2019;t regulate them. Check out his <a href="https://misinfocon.com/transparency-not-prohibition-is-the-u-s-government-response-to-misinformation-doj-official-says-f80042c02e75">full remarks</a>. Outside of the U.S., Germany is leading the charge in regulating the platform press. And Facebook Germany has hired scores of real people to review content, particularly for hate speech. 
This did lead to a <a href="https://www.theatlantic.com/international/archive/2018/05/germany-facebook-afd/560435/">high-profile piece of content being censored</a> and that has free-speech advocates nervous. On an interesting note, Dr. Haroon Ullah, Chief Strategy Officer from the Broadcasting Board of Governors, urges would-be platform regulators to think outside of national borders. He wants us to consider languages, not countries, when creating solutions for misinformation (think misinformation in Russian and not in Russia).</li><li><strong><strong>Facebook&#x2019;s data: Researchers from MisInfoCon who are studying misinformation really want Facebook to release its data.</strong></strong> Other platforms (e.g. Twitter) have done so. Researchers want to dig around and see what they can find in Facebook&#x2019;s treasure trove, including how many people were really exposed to the Internet Research Agency&#x2019;s US election content. There seems to be a general feeling that Facebook thinks PR first and public service second. And there are some brave tenured professors who are willing to do what they need to do to get the data they need.</li><li><strong><strong>Cool tools: There are some really cool tools that researchers at the Observatory on Social Media have developed to help people identify misinformation campaigns online.</strong></strong> Here&#x2019;s an awesome tool, the <a href="https://botometer.iuni.iu.edu/#!/">Botometer</a>, for figuring out whether a Twitter account is actually a bot. And another tool from the same team, <a href="http://hoaxy.iuni.iu.edu/">Hoaxy</a>, reconstructs the diffusion networks that allowed a lie to spread. 
Here&#x2019;s a <a href="https://www.youtube.com/watch?time_continue=25&amp;v=BIv9054dBBI">quick video overview of these tools</a>.</li><li><strong><strong>Media literacy: There&#x2019;s general agreement that media literacy is needed, but there&#x2019;s little agreement over how to deliver a media literacy campaign or even who should do it.</strong></strong> There are lots of cool small-scale experiments going on, including <a href="https://teach.kqed.org/">KQED Education</a> in California. They&#x2019;re focused on elevating teen voices through hands-on media production. I like the work that Jevin West is doing at the <a href="https://datalab.ischool.uw.edu/">DataLab</a> at the University of Washington Information School. He&#x2019;s trying to help his students be critical, not cynical, about the science information, particularly numbers, that they consume. (You can also see Vubble&#x2019;s credibility meter, our media literacy tool, on videos that appear <a href="https://www.vubblepop.com/">on our website</a>).</li><li><strong><strong>Provenance (noun): the place of origin or earliest known history of something.</strong></strong> It&#x2019;s a beautiful word, and an essential one for understanding misinformation campaigns. Knowing where information began is key to assessing its credibility. And while we&#x2019;re defining words: misinformation means false or inaccurate information. Disinformation means false or inaccurate information that is spread deliberately. The difference is the intention of the spreader. (If you want more definitions and frameworks for describing our collective <a href="https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c">disinformation disorder, this report is a brilliant place to start</a>). One final word on definitions: &#x201C;fake news&#x201D; is a useless bipartisan hammer. 
Let&#x2019;s ditch that phrase altogether.</li></ol>]]></content:encoded></item><item><title><![CDATA[Vubble and Seneca launch innovative AI Video Categorization Project]]></title><description><![CDATA[Seneca Awarded Grant from SOSCIP]]></description><link>https://news.vubblepop.com/vubble-and-seneca-launch-innovative-ai-video-categorization-project/</link><guid isPermaLink="false">5f85e41acdea3405af33ba8f</guid><dc:creator><![CDATA[Vubble News]]></dc:creator><pubDate>Sat, 30 Jun 2018 19:00:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/vubble_seneca_header.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="seneca-awarded-grant-from-soscip">Seneca Awarded Grant from SOSCIP</h2><img src="https://news.vubblepop.com/content/images/2020/10/vubble_seneca_header.jpg" alt="Vubble and Seneca launch innovative AI Video Categorization Project"><p>Today <strong><strong>Vubble</strong></strong>,&#x200B; Canada&#x2019;s leading online video discovery and distribution company, announces a partnership with <strong><strong>Seneca&#x200B; Applied Research, Innovation and Entrepreneurship</strong></strong> &#x200B;for an exciting new research project targeted at <strong><strong>Advancing Video Categorization</strong></strong>. <strong><strong>Vubble</strong></strong>&#x200B; is collaborating with Dr. Vida Movahedi and student research assistants from Seneca&#x2019;s <a href="http://www.senecacollege.ca/school/information-and-communications-technology/">School of Information and Communications Technology</a> (ICT) to develop a machine-learning algorithm that will automatically identify the subject area of the curated videos in the <strong><strong>Vubble</strong></strong>&#x200B; library.</p><p>&#x201C;We&#x2019;re excited to be working with Dr. Movahedi and her team at Seneca. 
This video categorization research project will be a significant advancement of <strong><strong>Vubble</strong></strong>&#x200B;&#x2019;s ability to automate the accurate categorization of video,&#x201D; says <strong><strong>Vubble</strong></strong>&#x200B; CEO Tessa Sproule. &#x201C;Video curation is at the core of our business and the ability to automate elements within our curation process is key to growing our company.&#x201D;</p><p>&#x201C;Seneca&#x2019;s School of ICT is a data analytics powerhouse. Working with <strong><strong>Vubble</strong></strong> is an excellent opportunity for Seneca faculty and students to apply our data analytics capabilities to video categorization, using machine learning,&#x201D; says Seneca&#x2019;s Dean, <a href="http://www.senecacollege.ca/research">Applied Research, Innovation and Entrepreneurship</a>, Vanessa Williamson. &#x201C;Seneca is the first college member of SOSCIP and we are thrilled that our collaboration with <strong><strong>Vubble</strong></strong> will be our first project funded through the consortium.&#x201D;</p><p>The year-long applied research project will be supported by SOSCIP, an R&amp;D consortium based in Ontario that helps Canadian companies drive innovation with data science. SOSCIP is supported through investment from FedDev Ontario, the Province of Ontario and others.</p><p><strong><strong>Vubble</strong></strong>&#x2019;s Advancing Video Categorization Research Project was <a href="https://www.vubblepop.com/vubble-bits/soscip-2018-impact-report?rel=web">featured in the SOSCIP 2018 Impact Report</a>.</p><h3 id="about-vubble"><strong>About Vubble</strong></h3><p>The women-led media tech company, based in Toronto and Waterloo, has created a groundbreaking platform that curates, assesses and distributes personalized feeds of video content, using advanced AI technology and human curation. 
<strong><strong>Vubble</strong></strong>&#x2019;s clients include Canada&#x2019;s top media companies and educational publishers.</p><h3 id="about-seneca"><strong>About Seneca</strong></h3><p>With campuses in Toronto, York Region and Peterborough, Seneca offers degrees, diplomas, certificates and graduate programs renowned for their quality and respected by employers. It is one of the largest comprehensive colleges in Canada, offering nearly 300 full-time, part-time and online programs. Combining the highest academic standards with work-integrated and applied learning, expert teaching faculty and the latest technology ensure Seneca graduates are career-ready.</p><h3 id="about-soscip"><strong>About SOSCIP</strong></h3><p>Launched in 2012, SOSCIP&#x2019;s mission is to pair industry with academic researchers and advanced computing tools to fuel innovation in Canada. SOSCIP is a ground-breaking collaboration between Ontario&#x2019;s research-intensive post-secondary institutions, IBM Canada Ltd., Ontario Centres of Excellence (OCE) and dozens of small and medium-sized enterprises (SMEs) across the province. SOSCIP is supported through significant funding from the federal government (FedDev Ontario), the Province of Ontario, IBM Canada and others.</p><p>For further information about <strong><strong>Vubble</strong></strong>, please contact <a href="mailto:support@vubblepop.com">support@vubblepop.com</a></p><p>For further information about Seneca, please contact: Lisa Pires, media.relations@senecacollege.ca</p>]]></content:encoded></item><item><title><![CDATA[SOSCIP 2018 Impact Report]]></title><description><![CDATA[Vubble’s Advancing Video Categorization Research Project (in partnership with Seneca) was  featured in the SOSCIP 2018 Impact Report. 
]]></description><link>https://news.vubblepop.com/soscip-2018-impact-report/</link><guid isPermaLink="false">5f85e57acdea3405af33baa1</guid><dc:creator><![CDATA[Vubble News]]></dc:creator><pubDate>Sat, 30 Jun 2018 04:00:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/soscip-header.png" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2020/10/soscip-header.png" alt="SOSCIP 2018 Impact Report"><p>Vubble&#x2019;s Advancing Video Categorization Research Project (<a href="https://www.vubblepop.com/vubble-bits/vubble-and-seneca-innovative-ai-video-categorization-project?rel=web">in partnership with Seneca</a>) was recently featured in the SOSCIP 2018 Impact Report. Check it out!</p><p>For more on SOSCIP <a href="https://www.soscip.org/">https://www.soscip.org/</a></p>]]></content:encoded></item><item><title><![CDATA[The Secret Behind Vubble’s Success]]></title><description><![CDATA[Q & A with Vubble co-founders Tessa Sproule and Katie MacGuire]]></description><link>https://news.vubblepop.com/the-secret-behind-vubbles-success/</link><guid isPermaLink="false">5f85e768cdea3405af33baaf</guid><dc:creator><![CDATA[Vubble News]]></dc:creator><pubDate>Tue, 12 Jun 2018 04:00:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/secret-success-header.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2020/10/secret-success-header.jpg" alt="The Secret Behind Vubble&#x2019;s Success"><p><em>Above: Vubble co-founders Tessa Sproule (left) and Katie MacGuire</em></p><p>The Vubble team was driving from Toronto to Waterloo, Ontario recently to attend Communitech&#x2019;s True North tech conference and while we were on the road, I interviewed Vubble&apos;s founders Tessa Sproule and Katie MacGuire on what makes Vubble tick. 
Here are my notes from our engaging chat:</p><h4 id="why-did-you-start-vubble"><strong>Why did you start Vubble?</strong></h4><p>T - We were concerned about the future of public media. Katie and I spent two decades living and working at the CBC. The idea of media being the filter through which people understand the world around them is very much at the core of how we approach everything. We wanted to find a better way for people to be exposed to information media content that they need to see or that is important for our democratic institutions &#x2014; for people to understand complex issues.</p><p>You&#x2019;ve got to get people engaging with issues and ideas that are outside of their comfort zone, outside their filter bubble, so that they see a broader, richer, fuller picture of the world.</p><h4 id="where-did-the-company-name-come-from"><strong>Where did the company name come from?</strong></h4><p>K - Vubble = video + bubble. &#x201C;How do we break through filter bubbles that are forming around people&apos;s experiences in the digital space&#x201D; was one of the first things we tackled. So video and bubble came together.</p><h4 id="how-has-the-company-grown-since-you-first-launched-in-2014-to-become-a-leading-canadian-media-tech-team"><strong>How has the company grown since you first launched in 2014 to become a leading Canadian media tech team?</strong></h4><p>K - We started Vubble as a media company curating videos from across the internet. We had the idea that eventually we could build an algorithm that would distribute those videos in a way that burst filter bubbles. We built a large audience very quickly, but we realized that the media advertising business model would not work for us.</p><p>We decided to build a software platform that could be licensed. This is a sustainable model that gives us the freedom we need to solve the problems we are after.</p><p>Our first real business breakthrough came when we started working with the CMF (Canada Media Fund). 
We received funding to build our core AI from their innovation program in 2017.</p><p>T - The CMF was a huge advantage to us. If we were an American company, we would probably be in the VC space working from the perspective of how to quickly build enormous value and sell. Eighty percent of my job would be spent trying to build up the story of what we are, rather than actually building the tools and solutions that are going to change the world.</p><h4 id="why-is-vubble-different-than-the-other-companies"><strong>Why is Vubble different from other companies?</strong></h4><p>T - I think first and foremost, it&#x2019;s that we have the human element &#x2014; our editors. A lot of companies are coming at it purely from a technological play. There is great advantage and potential scale in that. But there&apos;s also enormous risk, and that&#x2019;s come to light recently with things like the Cambridge Analytica scandal and the advertiser boycott of YouTube.</p><p>No technology is yet able to identify what&apos;s actually happening inside of a video with better accuracy than a human. And that&apos;s becoming a massive issue with things like fake news, and the deep fakes that are now starting to permeate the web, where you&apos;ve got fake video that is believable when you look at it; it&#x2019;s terrifying, to be honest. The nuance of what a human can pick up on is the thing that separates us from the robots.</p><p>K - Vubble is a values-led company and that makes us unique too. Misinformation, disinformation and fake news threaten how we gather evidence-based information on a mass scale. The technology and services we offer are solutions, in part, to this problem. 
Vubble&#x2019;s clients are also concerned with this problem and want to be part of the solution.</p><h4 id="what-s-the-biggest-business-problem-you-re-trying-to-solve-for-media"><strong>What&#x2019;s the biggest business problem you&#x2019;re trying to solve for media?</strong></h4><p>T - Content discovery and the erosion of digital revenues. Conventional broadcasters and publishers are under threat because the old models of discovery and revenue no longer exist. So they&apos;re fighting amongst themselves. Everybody is fighting for the scarce, waking hours that people have to consume content, whether it&apos;s information or entertainment content. Netflix is right up against us &#x2014; we&apos;re all up against each other, which is crazy.</p><p>Our media clients are attracted to Vubble because we are helping them break through the legacy models that are challenging the ways they do business.</p><h4 id="what-is-your-solution-for-the-media"><strong>What is your solution for the media?</strong></h4><p>K - Our media customers own and generate a lot of videos. A lot of that video is not being watched. We help our customers to bring those &#x201C;lost&#x201D; videos back to life with our proprietary AI-powered video distribution platform.</p><p>We also build distribution tools that help media publishers be where their audiences are: via email, native bots, chatbots, discovery boxes on web pages, or even feeding recommended videos directly into their video players. We are also working with media clients to get a deeper understanding of their audience through new kinds of video data and, as a result, to create premium advertising opportunities.</p><h4 id="how-does-it-work"><strong>How does it work?</strong></h4><p>T - We have reverse-engineered how some of the big platforms work with their AI. 
We asked ourselves: if you can create machine learning that understands how to deliver content based on categories to match ads, can you also do that to get people engaging with content in more meaningful ways?</p><p>So our machine learning, which is delivered via tools like our Vubble native bot, sends content to consumers in category areas that they are interested in. We also show content that differs from what that consumer would usually receive.</p><p>That&apos;s what&#x2019;s really important &#x2014; you&#x2019;ve got to show people content that is going to be serendipitous and engage them in surprising and interesting ways, and they&apos;re going to find delight in that experience. This is a powerful way to increase user engagement and keep them inside the publishers&#x2019; ecosystem.</p><h4 id="what-can-vubble-do-for-education"><strong>What can Vubble do for education?</strong></h4><p>T - I have kids and my co-founder Katie has kids. They would never watch conventional broadcast television and they&apos;ve never picked up a newspaper. How do we reach the digital citizens of tomorrow? Media companies are getting to them by publishing to YouTube and other digital channels. But as a parent, I feel very uncomfortable sitting my young daughter in front of YouTube, because god knows what she&apos;s gonna find in her &#x201C;up next&#x201D; recommendations.</p><p>Education was a natural byproduct of what we are already creating. We have human editors, journalists, doing the hard work of evaluating, vetting, and assessing the best information video worth watching every day, and one obvious place that needs that is the education space.</p><p>We curate education feeds based on topics such as STEM, history, social studies, the arts... 
We provide content that is safe and secure, and that provides great and timely value in terms of the information and stories it tells.</p><h4 id="what-can-vubble-do-about-fake-news"><strong>What can Vubble do about fake news?</strong></h4><p>K - There are great fact-checking services out there, like Snopes, but that&apos;s not really the solution. We believe the best solution is to raise critical thinking skills across the board. Our solution is the Vubble Credibility Meter. Our editors use it on every single piece of video that comes through our system. They evaluate the credibility of that information based on a 15-point metric scoring system that we developed at a hackathon at MIT in Feb. 2017. The tool is like a nutrition label for content. We know that sometimes you&apos;re going to want to have sugar in your diet and that&apos;s okay. You just want to know when you&apos;re having sugar in your diet. You need a well-balanced diet of information to be a well-informed citizen of a country like Canada, or any democracy.</p><p>It&#x2019;s interesting, because sometimes something from a fairly unknown YouTuber might have a higher score in credibility than something from a conventional media organization. So that&apos;s the other thing that&apos;s really important too &#x2014; just assessing it based on the source doesn&#x2019;t work and that&apos;s how some technology platforms like Facebook have been approaching it.</p><p>Now anybody with their phone can have the same reach as a network of the past, and the playing field is quite unfair. Just because someone is producing video from their basement doesn&#x2019;t mean it&apos;s not credible. 
It might actually be a first-hand UGC (user-generated content) piece of video that is journalistically sound.</p><h4 id="what-s-new-at-vubble"><strong>What&#x2019;s New at Vubble?</strong></h4><p>T - We just wrapped the production phase of our CMF project and have launched a suite of tools built to solve real problems that the Canadian and international media industries are wrestling with. We are developing a native video bot, which we think will change the way AI is used to engage audiences.</p><p>And we&#x2019;re excited about a new partnership with Seneca&#x2019;s Applied Research, Innovation and Entrepreneurship group for an exciting new research project that uses AI to automatically identify the subject area of the curated videos in the Vubble library. We&apos;re constantly in market research mode, looking at how people are consuming information digitally. We have our fingers on the pulse of how people&#x2019;s habits are changing.</p><p><strong><strong>We get many of these same questions at conferences and in meetings with new clients. Please let us know if you have some questions of your own. 
Thanks.</strong></strong></p><p><a href="mailto:support@vubblepop.com">support@vubblepop.com</a></p>]]></content:encoded></item><item><title><![CDATA[On Fake News, the Collapse of Content Discovery and How Canada Can Fix This]]></title><description><![CDATA[We&#x2019;re paying attention in all the wrong ways]]></description><link>https://news.vubblepop.com/on-fake-news-the-collapse-of-content-discovery-and-how-canada-can-fix-this/</link><guid isPermaLink="false">5f80b895cdea3405af33ba6b</guid><dc:creator><![CDATA[Vubble News]]></dc:creator><pubDate>Sun, 05 Mar 2017 05:00:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/fake_news_header.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2020/10/fake_news_header.jpg" alt="On Fake News, the Collapse of Content Discovery and How Canada Can Fix This"><p>Oh Canada, we are stuck between a rock and a hard place.</p><p>On one hand, it&#x2019;s never been easier for us to tell our stories and have them seen and heard by the neighbour next door and a stranger on the other side of the world.</p><p>On the other, it&#x2019;s never been so difficult, conflicting and costly to have our stories discovered by the audience who needs to see them most.</p><p>Discovery is broken in our digital age, and I believe Canada is in the best position to fix it.</p><h2 id="how-did-we-get-here">How did we get here?</h2><p>Yesterday&#x2019;s gatekeepers are crumbling. The print publisher dominated the turn of the last century, enjoying enormous profit by turning ink into information and attention into advertising. 
Those who have survived are being <a href="http://fortune.com/2017/02/28/buffett-newspapers-doomed/" rel="noopener">held over the ink barrel</a>, struggling to transform their business models to be &#x201C;digital first&#x201D;.</p><p>The golden age of television saw a medium flourish with its capacity to amuse, draw monstrous ratings (again, turning attention into advertising) and put (mostly) men in suits behind desks to anchor the day with insight into the world&#x2019;s challenges and hopes.</p><p>Today we see networks pining for even half of the audience they might have seen five years ago, news media tussling with Twitter to be &#x2018;first&#x2019; in a relentless and increasingly irrelevant 24-hour news cycle, and distributors cowering as <a href="https://www.wired.com/2017/02/youtube-tv-skinny-bundle/" rel="noopener">digital behemoths launch disruptive platforms</a> that threaten to cut them out of the picture for good.</p><p>A new era of gatekeepers is upon us &#x2014; the digital behemoths &#x2014; and they are hungry. The scale of attention they need to turn into advertising is colossal; it takes a lot of &#x2018;digital dimes&#x2019; to make a &#x2018;legacy dollar&#x2019;. The growth expectations for their latest quarter <a href="http://www.economist.com/news/business/21716070-app-company-has-pioneered-distinctive-vision-internet-snaps-ipo-will-be-largest" rel="noopener">are exponential</a>.
The pressure to find efficiencies that increase their margins is unabating.</p><p>It isn&#x2019;t surprising, then, that they have put technology at the front of the line: automating the formerly human editorial and curatorial processes of legacy media, using filters to help us sort the overwhelming plenty at the information buffet, and having their algorithms tell us what we should pay attention to today.</p><p>But paying attention to the stuff we like, it turns out, costs us a lot.</p><h2 id="on-the-rise-of-fake-news-and-filter-bubbles">On the rise of Fake News and Filter Bubbles</h2><p>Algorithms are not inherently complicated. They are simple &#x201C;if this, then that&#x201D; queries. <em><em>If you liked this, then you will more than likely like that.</em></em> The problem lies in the motivation behind that string of 1s and 0s.</p><p>For algorithmically-driven discovery platforms, from Facebook to Amazon to Netflix to Google, the aim is to deeply engage audiences inside their platforms so we stick around: provide an endless feed of delicious content for us to stuff our faces with until we&#x2019;re so bloated we can&#x2019;t leave. Hopefully, at some point during our banquet, we&#x2019;ll have noticed an ad or two (bonus points for the platform if we actually click on something).</p><p>Fair enough.
That&#x2019;s similar to our behaviour with mass media in the past, the TV dinner tray being a curious relic of our inclination to sit back and watch the world flash by.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/fake_news_misinfocon.jpeg" class="kg-image" alt="On Fake News, the Collapse of Content Discovery and How Canada Can Fix This" loading="lazy" width="750" height="1000" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/fake_news_misinfocon.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/fake_news_misinfocon.jpeg 750w" sizes="(min-width: 720px) 720px"><figcaption>Vubble attended the <a href="https://medium.com/misinfocon/misinfocon-a-summit-on-misinformation-feb-24-26-at-mit-media-lab-the-nieman-foundation-for-232507bd08a6#.hafr6vemj" rel="noopener">MisinfoCon hackathon at MIT</a>, February 25&#x2013;26, 2017.</figcaption></figure><p>But then &#x201C;fake news&#x201D; happened and suddenly we have a bad taste in our mouths.</p><p>Let&#x2019;s be clear. Fake news isn&#x2019;t new; gatekeepers exploiting our basic human emotions for the benefit of our attention (read: advertising) is not new. Propaganda is not new. But the speed and spread of misinformation today is unprecedented; the power wielded by our click-and-share distribution model is unparalleled &#x2014; even if the audience <a href="https://www.washingtonpost.com/news/the-intersect/wp/2016/06/16/six-in-10-of-you-will-share-this-link-without-reading-it-according-to-a-new-and-depressing-study/" rel="noopener">only reads the headline</a>.</p><p>Again, the root of the problem lies in the new gatekeepers&#x2019; motivation.</p><p>Facebook doesn&#x2019;t care whether or not you have a better understanding of nuclear disarmament, only that you &#x201C;liked&#x201D; (and hopefully shared) the post about Donald Trump&#x2019;s late-night tweet on the issue.
No matter your politics, no matter if the information at the end of that click is true. Just that you saw it, clicked that little &#x2018;thumbs up&#x2019; (even better if you had <a href="http://www.huffingtonpost.com/entry/facebook-weighs-your-reactions-more-than-your-likes_us_58b7044ce4b019d36d0fe13c" rel="noopener">an &#x201C;emotional&#x201D; reaction</a>), and saw the ad for that printer that&#x2019;s been following you around the web since you searched for reviews of it last week.</p><h2 id="what-can-we-do-about-it">What can we do about it?</h2><p>Canada punches above its weight class in culture. From our award-winning novelists to our stadium-filling pop stars, our big-screen celebrities to the Los Angeles brain-drained army of behind-the-scenes talent, Canadian creators have had enormous success around the world. We have the stories. We have the content. But we have to <a href="https://www.washingtonpost.com/politics/for-legacy-media-publications-facebook-experiment-is-a-tricky-one/2015/05/17/e991e06c-fc9a-11e4-8b6c-0dcce21e223d_story.html" rel="noopener">pay for access to the gatekeepers</a> that currently control the distribution and discovery of that content.</p><p>I left my job as Director of Digital at CBC just over two years ago to co-found a company that tackles the problem of content discovery in the digital age with a new business model at its core &#x2014; a new motivation.</p><p>At Vubble, we believe content discovery needs the public to be at the centre. Always.
We believe that today&#x2019;s digital gatekeepers and their technology can be augmented with a new form of public media that seeks to burst filter bubbles, advance media literacy into the digital age, and build audience agency.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/fake_news_misinfocon_tessa.jpeg" class="kg-image" alt="On Fake News, the Collapse of Content Discovery and How Canada Can Fix This" loading="lazy" width="750" height="622" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/fake_news_misinfocon_tessa.jpeg 600w, https://news.vubblepop.com/content/images/2020/10/fake_news_misinfocon_tessa.jpeg 750w" sizes="(min-width: 720px) 720px"><figcaption>A <a href="https://vimeo.com/205619223" rel="noopener">presentation about what we&#x2019;re building at Vubble</a> to combat fake news and filter bubbles (Nieman Foundation, February 24, 2017).</figcaption></figure><p>To burst filter bubbles, we&#x2019;re turning machine-learning on its head and inserting a human editorial layer into the process. 
<a href="https://vimeo.com/205619223" rel="noopener">Here I am talking about how we&#x2019;re doing that at the Nieman Foundation</a> at Harvard last week.</p><p>To advance media literacy in the digital age, <a href="https://cartt.ca/article/groupe-m%C3%A9dia-tfo-taps-start-vubble-fresh-content" rel="noopener">we&#x2019;re working with education-focused media companies like Groupe M&#xE9;dia TFO</a> to curate feeds of smart and safe video content into classrooms and homes across Ontario.</p><p>To build audience agency, we&#x2019;re creating tools like a &#x201C;Credibility Meter&#x201D; (born out of the <a href="https://medium.com/misinfocon/misinfocon-a-summit-on-misinformation-feb-24-26-at-mit-media-lab-the-nieman-foundation-for-232507bd08a6#.hafr6vemj" rel="noopener">MisinfoCon hackathon we recently attended at MIT</a>) and a Vubble Chatbot to meet digital natives&#x2019; growing appetite for sourcing information content through messaging platforms.</p><p>Meanwhile, we&#x2019;re developing insight into how content is discovered within the new gatekeepers&#x2019; platforms and how it can be discovered outside of their walled gardens. We believe there is enormous value in that, and we&#x2019;re working with brands who see that value too.
That&#x2019;s Vubble&#x2019;s business model &#x2014; in saying &#x2018;no&#x2019; to advertising to drive revenue, we&#x2019;ve opened new value chains with great growth potential.</p><p>Because that&#x2019;s what you do when you&#x2019;re stuck between a rock and a hard place &#x2014; you climb.</p><p>Whether you&#x2019;re a Canadian creator, a network executive, a brand representative or a citizen just trying to understand the complexity of a public policy shift, now is the time for us to climb out of this strange chasm we find ourselves in.</p><p>This is the time for reinvention and reimagination.</p><p>This is the time to get out of our comfort zones and challenge how we move forward, and how we see ourselves.</p><p><em><em>This article </em></em><a href="https://cartt.ca/article/commentary-fake-news-collapse-content-discovery-and-how-canada-can-fix" rel="noopener"><em><em>originally appeared in Cartt.ca</em></em></a><em><em>, March 4, 2017.</em></em></p>]]></content:encoded></item><item><title><![CDATA[Stop complaining about “the media” and let’s do something about it]]></title><description><![CDATA[The pendulum has swung from the ‘all human’ traditional media platforms of the past,t to ‘all algorithm’ platforms of the present. The future is where technology dovetails with humanity.]]></description><link>https://news.vubblepop.com/stop-complaining-about-the-media-and-lets-do-something-about-it/</link><guid isPermaLink="false">5f85e93ecdea3405af33babf</guid><dc:creator><![CDATA[Tessa Sproule]]></dc:creator><pubDate>Wed, 07 Dec 2016 05:00:00 GMT</pubDate><media:content url="https://news.vubblepop.com/content/images/2020/10/stop-complaining-header.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://news.vubblepop.com/content/images/2020/10/stop-complaining-header.jpeg" alt="Stop complaining about &#x201C;the media&#x201D; and let&#x2019;s do something about it"><p>I&#x2019;m worried about my son. 
He&#x2019;s 10, and like all kids his age, he&#x2019;s intensely curious about the world around him. He asks questions all the time. He always has great questions.</p><p>But the other day he started to ask me something. &#x201C;Mom, why did September 11th happen?&#x201D; Before I could answer, he interrupted with &#x201C;You&#x2019;re busy. I can just look it up.&#x201D;</p><p>Normally it would be great for him to show that kind of initiative. But here&#x2019;s the problem.</p><p>If he typed &#x201C;why did September 11 happen&#x201D; he&#x2019;d likely get a bunch of reasonably informative <a href="https://www.google.ca/webhp?sourceid=chrome-instant&amp;ion=1&amp;espv=2&amp;ie=UTF-8#q=why%20did%20september%2011%20happen" rel="noopener nofollow">results at the top of the list</a>.</p><p>But if he added one word, and instead asked &#x201C;why did September 11 <em><em>really</em></em> happen&#x201D; he&#x2019;d <a href="https://www.google.ca/webhp?sourceid=chrome-instant&amp;ion=1&amp;espv=2&amp;ie=UTF-8#q=why+did+september+11+really+happen" rel="noopener nofollow">enter a cesspool</a> of conspiracy theories, fake news and propaganda. A whirling, swirling mess for a young mind to make sense of at the moment he&#x2019;s figuring out what matters to him, what he believes in, who he is.</p><p><strong><strong>Media is failing. But there is a way forward.</strong></strong></p><p>Do you see the same colour red as me? Do you worry about the future of democracy? 
Do you know what&#x2019;s going on in Syria?</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/filters.jpg" class="kg-image" alt="Stop complaining about &#x201C;the media&#x201D; and let&#x2019;s do something about it" loading="lazy" width="750" height="600" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/filters.jpg 600w, https://news.vubblepop.com/content/images/2020/10/filters.jpg 750w" sizes="(min-width: 720px) 720px"><figcaption>We all see the world around us in our particular way &#x2014; these are the filters that sharpen and cloud our vision.</figcaption></figure><p>We all see the world around us in our particular way. With the things we&#x2019;ve been exposed to, the opinions we&#x2019;ve formed, and the energy we have right now as we&#x2019;re reading this &#x2014; <strong><strong>these are the filters that sharpen and cloud our vision.</strong></strong></p><p>Today, hundreds of millions of citizens go online every day, <a href="http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/" rel="noopener nofollow">62% to social media platforms</a>, to be informed about the world around them.</p><p>It started as friendly updates &#x2014; reconnecting with an old classmate, photos from a faraway cousin, video of a first dance at a wedding &#x2014; and grew into algorithms pushing streams of news content reinforcing our preexisting beliefs.</p><p>But in our easy social media life, we have abdicated editorial judgment to machine learning; we are increasingly <a href="https://www.theguardian.com/technology/2016/nov/10/facebook-fake-news-election-conspiracy-theories" rel="noopener nofollow">sheltered from opposing viewpoints</a> and <a href="https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook" rel="noopener nofollow">reliable news sources</a>, resulting in filter bubbles and a vicious polarization of
national politics across democratic states.</p><p>Tech giants like Facebook argue they <a href="https://techcrunch.com/2016/09/14/facebook-denies-its-a-media-company-despite-censorship-decisions/" rel="noopener nofollow">shouldn&#x2019;t be considered media companies</a>, and they aren&#x2019;t. But social media and search algorithms have become an outsize influence in crafting all of our understanding of the events that take place around us <strong><strong>&#x2014; <em><em>the core function of media</em></em>.</strong></strong></p><p>With traditional news and information sources <a href="http://www.mediainsight.org/PDFs/Millennials/Millennials%20Report%20FINAL.pdf" rel="noopener nofollow">declining in use and trust</a> among millennials, fake news from hyper-bipartisan sites <a href="http://qz.com/848917/facebook-fb-fake-news-data-from-jumpshot-its-the-biggest-traffic-referrer-to-fake-and-hyperpartisan-news-sites/" rel="noopener nofollow">dominating distribution</a> on digital platforms, and an informed citizenry being essential to a healthy democracy &#x2014; <strong><strong><em><em>there is no issue more critical today.</em></em></strong></strong></p><p>All of this is <a href="http://www.marketingmag.ca/media/cbc%E2%80%99s-digital-guru-tessa-sproule-contemplates-life-after-%E2%80%98public-media%E2%80%99-112462" rel="noopener nofollow">why I left a powerful role as Director of Digital at CBC</a>; because I couldn&#x2019;t do what needed to happen from within.</p><p>The entire media landscape has been displaced by the digital age and its endless barrage of crushing challenges thrown on crumbling frameworks.</p><p>The unrelenting demand for innovation is something legacy media models simply cannot address. We can blame hubris, or greed, or the stasis that comes from being comfortably &#x2018;in control&#x2019;. But we have no time to blame. We need to change.</p><p>Public broadcasting is one thing. Public media in a digital age is entirely another. 
What I am working on now is <strong><strong>public media for the next generation.</strong></strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://news.vubblepop.com/content/images/2020/10/social-media-search.jpeg" class="kg-image" alt="Stop complaining about &#x201C;the media&#x201D; and let&#x2019;s do something about it" loading="lazy" width="1050" height="590" srcset="https://news.vubblepop.com/content/images/size/w600/2020/10/social-media-search.jpeg 600w, https://news.vubblepop.com/content/images/size/w1000/2020/10/social-media-search.jpeg 1000w, https://news.vubblepop.com/content/images/2020/10/social-media-search.jpeg 1050w" sizes="(min-width: 720px) 720px"><figcaption>Social media and search algorithms have become an outsize influence in crafting all of our understanding of the events that take place around us &#x2014; <em>the core function of media</em>.</figcaption></figure><p>The world&#x2019;s public broadcasters need to deliver on their mandates through a broadcast-first posture. I get that. Many of their mandates were written before the internet went mainstream. All of them were written before platforms like Facebook became the dominant distributors of the news that forms our understanding of the world and our place in it.</p><p>There will be a large contingent of the public that will rely on broadcast and other legacy media platforms for the next 15, maybe even 20 years. 
I fully get that too, and I support the efforts of most of our public broadcasters and other legacy media, <a href="http://future.cbc.ca/" rel="noopener nofollow">including the CBC</a>, who are trying to chart a path for that period of transition.</p><p>However, as evidenced by the <a href="https://techcrunch.com/2016/11/09/rigged/" rel="noopener nofollow">recent American elections</a>, next generation platforms like Facebook have become our de facto public media channels and they have enormous influence over how we understand each other and ourselves.</p><p>Their algorithms and our narrowing filter bubbles will continue to fail to provide balanced, diverse information to our citizens, unless we, the public, do something about it. <strong><strong><em><em>Now</em></em></strong></strong><em><em>.</em></em></p><p>Filter bubbles and fake news aside, platforms like Facebook were built to <a href="http://www.nytimes.com/2016/05/19/opinion/the-real-bias-built-in-at-facebook.html" rel="noopener nofollow">maximize engagement and thus advertising revenues</a>, a business model that prevents them from <a href="https://techcrunch.com/2016/11/10/zuck-denies-facebook-news-feed-bubble-impacted-the-election/" rel="noopener nofollow">facing this new challenge</a> honestly and effectively.</p><p>I believe the near future sits in between, where technology is deployed in conjunction with human judgment, curiosity and empathy to help us better understand the world around us, and ourselves.</p><p>A new era in media is upon us: the pendulum has swung from the &#x2018;all human&#x2019; traditional media platforms of the past, to &#x2018;all algorithm&#x2019; platforms of the present. The future is where technology dovetails with humanity. This new approach is what we&#x2019;re building at <a href="http://vubblepop.com/" rel="noopener nofollow">Vubble</a>.</p><p><strong><strong>How do we plan to do that?</strong></strong><br><br>It&#x2019;s quite simple. 
We use technology to filter the web, humans to add the serendipity and empathy only we can, and machine learning to bring you news that otherwise wouldn&#x2019;t show up in your regular feeds. The result bursts your filter bubble with a machine-learning system that is transparent and routinely audited &#x2014; by humans &#x2014; to defiantly counter bias.</p><p>But we&#x2019;re not quite there yet, and we need your help.</p><p>For the past year, we&#x2019;ve been testing out our algorithm and editorial chops. You can sign up for our daily newsletter <a href="http://www.vubblepop.com/signup" rel="noopener nofollow">here</a>. That will bring you the strongest video of the day from what we&#x2019;ve selected for our paying clients (largely <a href="http://www.asapscience.com/" rel="noopener nofollow">science</a>, <a href="http://planetinfocus.org/watch-now/" rel="noopener nofollow">environment</a> and <a href="http://www.vubblepop.com/partner/CFCMedialab" rel="noopener nofollow">media</a> right now, though we&#x2019;re working hard to expand that). You can also <a href="https://www.facebook.com/vubblePOP/" rel="noopener nofollow">like us on Facebook</a> or follow us <a href="https://twitter.com/vubblepop" rel="noopener nofollow">on Twitter</a>, where we share some of the video worth shining a light on.</p><p>If you&#x2019;re a brand, you can join our network of clients and get a 24/7 dynamic feed of the best video related to your interests &#x2014; we&#x2019;ll filter out fake news, lousy and noisy content, and give you articulate, engaging and real stories to share with your customers. Help us build out the virtual library and open the doors to the public for free.</p><p><strong><strong>It&#x2019;s a win-win for us all.</strong></strong></p><p>Because that&#x2019;s the point. We need to see each other&#x2019;s point of view, especially when it&#x2019;s outside of our comfort zone. We need to understand enough of each other &#x2014; to see ourselves.
And we need to show that empathy, that common ground, to the next generation.</p><p>What else is there, if we fail?</p><p><em><em>Tessa Sproule is the CEO and Co-Founder of </em></em><a href="https://www.vubblepop.com/" rel="noopener nofollow"><em><em>Vubble Inc.</em></em></a><em><em>, a Canadian media technology company that offers a suite of products that curate, assess and distribute personalized delivery of the world&#x2019;s best video content, using our proprietary system of advanced artificial intelligence technology and human curation.</em></em></p>]]></content:encoded></item></channel></rss>