Entertainment as Code
TheJam.dev is a 2-day virtual conference focused on building modern web applications using full stack JavaScript, static site generators, serverless and more.
Live streaming enables impactful community engagement through unpredictable, dynamic, and interactive experiences. However, building reliable, low-latency streaming infrastructure with global scale poses complex challenges. Amazon Interactive Video Service (IVS) aims to simplify adding live video to your application by providing a fully-managed service built on the same network and infrastructure that powers Twitch.
In this session, I'll introduce you to Amazon IVS and show you how it can be used to create low latency streaming channels for your users that deliver video with around 2-5 seconds of latency. I'll also teach you how to create collaborative streaming experiences with Amazon IVS real-time streaming, where latency is as low as 250ms. There's no need to be a video expert to unleash the power of live streaming in your application. You won't have to think about transcoding, storage, edge delivery - or any of the many challenges that come with building a low latency global network capable of delivering high quality video around the world. As a developer, you should focus on user experience - not on building and maintaining the infrastructure that enables it.
I’m a developer who advocates and evangelizes about interactive video streaming at Twitch. I’ve been writing code since 2004 and I’m passionate about technology, code, and helping developers learn new things. As part of my role as a Developer Advocate, I write demos to show other developers how to use certain languages, frameworks, and technologies to solve the problems that they face every day. I have a background in writing and a love for theater, so I’m extra fortunate to have found a role that lets me use those skills and passions as part of my everyday routine. I’m married to my best friend and we live in the North Georgia mountains with our 2 kids, 8 chickens, 2 dogs, and a cat. When I’m not working, I love to cook, hike, travel, play video games and tinker with electronics and microcontrollers.
Todd Sharp 0:16 I appreciate y'all having me here today, and thank you for joining us for this event. It's been a great event; I've been keeping an eye on it while I've been working and preparing for this session. And Katie, amazing job on the Web Audio talk. It always amazes me to see what people can do with just raw audio on the web. I remember a few years ago there was another person I met at a conference who did a talk about MIDI and all that stuff. Totally amazing, totally impressive. So great job, Katie. And of course Lewis, with all the amazing things he was doing with images. But in this session, we're going to talk about something that accounts for about 82% of all internet traffic worldwide, and that is video. You can't have a pixelPalooza without talking about video. And because I'm a developer advocate for Amazon Interactive Video Service, I'm here to talk to you today about adding live streaming to your application. The key here, and hopefully you're all a member of this group, is that we're talking about this with a specific focus on people who are not video experts. I certainly am not a video expert. Even though I work at Twitch and on Amazon IVS, I'm not a video expert. As Brian said, I've been writing code for web, desktop, and mobile applications for the last 20 years, but I am not a video expert. I've been at Twitch for two years and I've learned a lot about video, but at the end of the day I am still a web developer, and I have a feeling that most of you probably are as well. I live in Georgia in the United States. So, let's talk about building a live streaming application. If you were to build a live streaming application by yourself from scratch today, these are the types of things you would have to consider. First, you'd need a global network of virtual machines for video ingest and transcoding. These may not be virtual machines; they may be bare metal machines. Whatever they are, you need a global network of them. After you have that global network of machines, you need to develop some sort of transcoding and packaging software. There are open source solutions and paid solutions that will do this for you, but what you'll quickly find is that, off the shelf, they're not going to solve most of your problems. They'll get you most of the way there, but if you want a really performant system for delivering high quality video at low latency, you're going to have to tweak them, and you'll probably end up rolling your own from scratch. At some point you'll have to become an expert on video codecs and protocols: things like H.264, H.265, RTMP, HLS, MP4, and WebRTC. Some of these codecs and protocols may be familiar to you if you're watching this stream; some may not. But if you want to develop live streaming infrastructure, you will need to become an expert on all of them. You will also need to maintain all of those virtual machines. That means patches, security updates, and everything else that goes along with owning a global fleet of machines.
What about a network? You will need a global CDN, and not just any global CDN: one that's optimized for video distribution. You'll also need to load balance and make sure you're distributing traffic properly when you get any kind of spiky traffic. What about broadcasting and playback? If you're developing a global live streaming application, you'll need software that optimizes playback on low end devices and bandwidth-constrained networks, because the reality is that certain regions and countries, places like India, have a lot of lower end devices, and in some areas the bandwidth is not as great. But you want all of those viewers and broadcasters to have the best possible experience. And finally (last, but not least, and of course there's much more than this), you'll have to develop some custom monitoring software for stream quality and observability. The bottom line is that live
streaming is really, really complicated. Look at Twitch, the company I work for: in order to provide really high quality and really low latency, Twitch maintains nearly 100 points of presence (PoPs) in different geographic locations around the world. Each one of those PoPs is connected to our private backbone network, and there's a really complex set of routing, processing, and distribution that happens. If you want to learn more about how Twitch handles all of this, there's a link at the bottom of the page where you can read about how Twitch ingests live video streams at global scale. Now, certainly there must be an easier way to build live streaming applications, correct? And the answer is: of course there is. That brings us to the origin story of Amazon Interactive Video Service. Amazon Interactive Video Service, which I'll call Amazon IVS from now on, was born from Twitch, the leading live streaming platform. Everyone here has heard of twitch.tv; it's been around for about 13 years now, I believe.
Twitch supports millions of viewers every day, over 2 billion hours watched every month, and in 2023 there were 1.35 trillion minutes of video watched on Twitch, obviously with ultra low, real time latency streams. IVS was born out of this Twitch network and infrastructure. We had this network for delivering Twitch video, and we decided to make it available as an AWS service so that people like you could build your own live streaming applications and take advantage of all of that experience, all of that hard work, all of those millions of hours of video watched, and all of the lessons we learned. We turned those into Amazon IVS. There are essentially three different flavors of Amazon IVS: two focused on video, one focused on chat. The first flavor is low latency streaming, which allows you to deliver highly scalable video with latency that can be under three seconds from host to viewer. We'll learn a little more about each of these as we go through the session today. We also have real time streaming, and we have a chat product, because, like chocolate and peanut butter, chat and live streams go together very, very nicely. So let's talk about low latency streaming: what we call channels on Amazon IVS.
The first thing I found really impressive the very first time I worked with Amazon IVS, before I even worked at Twitch, was the fact that there is literally no spin-up time involved. When you create a channel, whether in the console, with the CLI, or with the SDK, it's immediately ready to go. You can immediately start live streaming to it without waiting for anything to be provisioned or spun up. Another feature of low latency channels is adaptive bitrate playback. We're not going to get too deep into this today, but essentially adaptive bitrate means a player that's intelligent enough to know when a network is a little constrained, or a device can't quite handle the quality coming in, and to automatically switch to a lower rendition (a lower bitrate or resolution) in order to keep latency as low as possible. Ingest for low latency channels is handled via the RTMP protocol, and video is delivered to the web with HLS (HTTP Live Streaming). Low latency channels also have health monitoring and the ability to record to S3, which is a really nice feature if you're building a user generated content, Twitch-like experience and want to create videos on demand, or VODs. You can automatically record your low latency live streams to S3, put a CloudFront distribution in front of them, and play them back on demand. Timed metadata is another cool feature, which allows you to insert an arbitrary piece of data into the live stream at a very specific point in time, to power things like trivia games or featured products in an e-commerce live stream. And that timed metadata actually becomes part of the stream
itself. The nice thing about that is, when you go to the VOD, the video on demand that you recorded to S3, that metadata event still exists in the stream, so it will still trigger on playback and you can still handle it. You can have private channels with low latency streaming, which lets you restrict playback of a channel to, say, subscribers only or premium users. A really nice feature. To mitigate potential issues, you can geo block or domain restrict your low latency channel. Geo blocking means that if, for whatever reason, you want to limit playback to a specific region or country, or exclude one, you can do that. Domain restrictions let you limit playback to only your domain, essentially blocking people from embedding your player into a third party site and doing something with the video that you don't want them to do. Multitrack video is a brand new feature; we just launched it seven days ago, and it's really exciting. The most exciting thing to me about it, which sounds a little crazy, is that you can save up to 75% on your cloud bill by taking advantage of it. I'm not going to get too deep into it today, but the concept of multitrack video is essentially to let your broadcasters use the power of the GPU in their machines to produce multiple renditions of their video and deliver those to your viewers, instead of having our servers and data centers do the transcoding. So you can produce those renditions, broadcast them, provide a really good experience for your viewers, and save money at the same time. The latency of low latency channels is around two to five seconds, glass to glass: from the glass of your camera to the glass of your viewer's monitor. If you go to twitch.tv right now (actually, please don't go there now; please hang out here, do not run away), or rather after pixelPalooza this evening, open the player and look at the advanced settings, it'll tell you the latency. Depending on where you are, your network, and your machine, I'm guessing it'll probably be around two seconds, which is really good. It can enable some really interactive experiences with chat and things like that. Low latency channels are also really scalable: you can reach up to 1 million plus concurrent viewers. The quality is really good, at up to 1080p, and you can have unlimited channels, as many as you need. Of course some service quotas apply, but if you have a need for 10 million channels, let's talk; I'm sure our team would be willing to accommodate you with a service quota increase. So two to five seconds is really nice, and the ability to scale to all of those users is also really nice, but it doesn't solve every single problem. We'll get to that in just a little bit.
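(A quick aside on the timed metadata feature mentioned above: on the playback side it surfaces as a text cue event on the player. A minimal sketch, assuming a player instance created the same way as in the playback demo later in this session:)

```javascript
// Listen for timed metadata cues on an Amazon IVS player instance.
player.addEventListener(IVSPlayer.PlayerEventType.TEXT_METADATA_CUE, (cue) => {
  // cue.text carries whatever payload was inserted into the stream
  console.log(`Timed metadata at ${cue.startTime}s:`, cue.text);
});
```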
Some of the use cases for low latency channels, as I've already mentioned: user generated content, things like social networks, where you want to build your own Twitch. Live events: something like pixelPalooza, for example, could be delivered via low latency channels, and conferences, user groups, and concerts are really good use cases too. E-commerce: Amazon Live shopping, for example, is powered by Amazon Interactive Video Service. You can go on Amazon Live, view an influencer demonstrating products, and purchase something directly from that live stream. Sports and fitness: trainer-led live streams, where a trainer gets up on an elliptical machine, a cycle, or a treadmill and leads and motivates a class through a workout routine. A perfect use case for low latency channels. And of course gaming: esports, playing your favorite video game and letting people watch along, essentially exactly what you would see on Twitch. So I want to show you just how easy it is to get started with low latency streaming. Essentially, it's three steps. You create a channel, which you can do in the CLI, the console, or with the SDK. You broadcast to that channel, with the web SDK, the mobile SDKs, or even third party software like OBS or Streamlabs. And then you play it back. That's it. So let me show you. I'm going to exit out of my presentation, open up a console, and create a channel with the AWS CLI. To do that, I type aws ivs create-channel. As I said earlier, there's no spin-up time: I've got a channel that's ready to be used, and all of the information about that channel is returned to me by the CLI. The first thing I need to do is grab the channel ARN and copy it, and if I hop over to my editor, I can paste that into my app. I also need my ingest endpoint, so I scroll up a little, copy that, and come back over to my editor. Editor? Edit-or? Why am I saying it like that? It's editor.
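(For reference, here is roughly the same channel-creation step using the AWS SDK for JavaScript v3 instead of the CLI. A sketch, assuming credentials are configured in your environment:)

```javascript
// Create an IVS channel programmatically with the AWS SDK for JavaScript v3.
import { IvsClient, CreateChannelCommand } from '@aws-sdk/client-ivs';

const ivs = new IvsClient({ region: 'us-east-1' });
const { channel, streamKey } = await ivs.send(
  new CreateChannelCommand({ name: 'my-first-channel' })
);

console.log(channel.arn);            // the channel ARN
console.log(channel.ingestEndpoint); // where you broadcast to
console.log(channel.playbackUrl);    // where viewers watch from
console.log(streamKey.value);        // treat this like a password
```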
Even Stanley got a little upset by me saying it that way. So let's talk a little bit about this file. It's very simple, really basic. Of course, in production you're going to be using some kind of framework, like React or Angular or Svelte or whatever your favorite is, but to demonstrate this I'm going to keep it really, really simple: a plain HTML page, with even the JavaScript inline. I know, terrible practice; don't do this in reality. But for demo purposes, all of my script is in this HTML file. So what do I need here? I need the Amazon IVS web broadcast SDK (and hopefully that font is big enough; if not, somebody please holler at me). I've included it here in a script tag. You can of course also install it with npm if you have some sort of bundler, like webpack or Vite, but you just need the dependency included in your app. Scrolling down a little, I have that stream key and ingest endpoint. Actually, I do not have the stream key; I copied it incorrectly. Let me come back over here, because I need the value, not the ARN. If I paste that in, now I have my stream key and my ingest endpoint. And yes, if someone watching this stream right now were to take my stream key and ingest endpoint and start using them, they would be broadcasting to my channel. But that's okay; I'm going to delete this channel as soon as we're done here. In other words, your stream key should be treated like any other sensitive credential, like a password. You normally wouldn't paste it into your source code; you'd retrieve it from an endpoint once the user is logged in, of course.
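(That last point deserves a sketch. A hypothetical example of fetching the stream key and ingest endpoint from your own authenticated backend; the /api/my-channel endpoint and token handling are illustrative, not part of the demo:)

```javascript
// Hypothetical: retrieve broadcast credentials from your own backend
// instead of hard-coding them in the page.
const res = await fetch('/api/my-channel', {
  headers: { Authorization: `Bearer ${sessionToken}` },
});
const { streamKey, ingestEndpoint } = await res.json();
```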
So I've got my stream key and my ingest endpoint, and I have a canvas as the only HTML element on the entire page. It's a canvas called 'preview', and that's where our webcam will be previewed, so that when I go to this page and start broadcasting, I can at least see myself and know I've got good feedback: my hair looks nice, my lighting looks good, all that good stuff. The first thing we need to do is create a client. To do that, we use the IVS broadcast client: we call create and pass it an object with a stream configuration. There are parameters you can customize here, like width, height, frame rate, and aspect ratio, but we've given you the option to use an object directly from the SDK that includes standard presets for all of those, so you don't have to configure them yourself. We also pass the ingest endpoint that we pasted above, and that's it: we have a client. Once we have our client, we can get camera and mic permissions using navigator.mediaDevices.getUserMedia. For video and audio we simply pass true, and this will prompt the user to grant permission when they load the page. Now we can preview that stream in our canvas by calling client.attachPreview and passing it the DOM element where we want the preview. That can be a canvas, and it can also be a video tag if you'd rather use one.
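(Condensing those steps into code, a minimal sketch; IVSBroadcastClient is the global exposed by the web broadcast SDK script tag, and INGEST_ENDPOINT comes from create-channel:)

```javascript
// Create the broadcast client with a preset stream configuration.
const client = IVSBroadcastClient.create({
  streamConfig: IVSBroadcastClient.STANDARD_LANDSCAPE, // preset width/height/bitrate/framerate
  ingestEndpoint: INGEST_ENDPOINT,
});

// Prompt the user for camera and microphone permission.
await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

// Preview in the <canvas id="preview"> element (a <video> element works too).
client.attachPreview(document.getElementById('preview'));
```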
Now that we have our preview set up, we need to get a stream from both the camera and the microphone. To do that, I'm going to call enumerateDevices, which lists all of the video and audio devices on the user's machine. In a normal application you would probably filter these, like I've done below, into video and audio devices, and put them in a drop-down menu so your users can select which camera and microphone to use if they have more than one connected. But for this demo I'm simply grabbing the first one in the array and using it to establish a camera stream, and doing the same with the first audio device for a microphone stream. That's just another call to getUserMedia: in the video object we pass the device ID of the device we want to use, and for the video I also make sure the aspect ratio is 16 by 9. Once we have our streams, we can add a video input device and an audio input device from them. We give each a name, and for video at least, there's a configuration object that lets you, for example, set the index. This is kind of like a z-index: it lets you stack things. So if you wanted to grab a screen share using getDisplayMedia, make that your main stream, and overlay your camera video on top, just like you're seeing done on this live stream (although here that's being handled manually by Potter, the amazing behind-the-scenes AV guy), you could do that on the web with this configuration object. The index tells you what layer the video is on, and you can also pass width, height, x, and y, and get really creative and customized with it. Once we have all of that, we call startBroadcast, pass it the stream key, and we are broadcasting to our channel. So if I hop over to my browser and open that up, you can see that our broadcast has started. Ignore the 404 there; that's just for the favicon that I do not have.
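(The device selection and broadcast steps just described, as a condensed sketch; the client, the device names, and the index layering follow the demo above:)

```javascript
// Grab the first camera and microphone on the machine.
const devices = await navigator.mediaDevices.enumerateDevices();
const camera = devices.filter((d) => d.kind === 'videoinput')[0];
const mic = devices.filter((d) => d.kind === 'audioinput')[0];

const cameraStream = await navigator.mediaDevices.getUserMedia({
  video: { deviceId: camera.deviceId, aspectRatio: 16 / 9 },
});
const micStream = await navigator.mediaDevices.getUserMedia({
  audio: { deviceId: mic.deviceId },
});

// index works like a z-index, so you can layer a camera over a screen share.
client.addVideoInputDevice(cameraStream, 'camera1', { index: 0 });
client.addAudioInputDevice(micStream, 'mic1');

// Start streaming to the channel.
await client.startBroadcast(STREAM_KEY);
```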
This one is using the FaceTime camera from the laptop next to me, just to make sure it doesn't conflict with the broadcast I'm sending out to you all. But we're broadcasting: we are live on this Amazon IVS low latency channel. So if we copy this playback URL, we can talk about playback. If I hop over to my playback demo, you can see this is even easier than broadcasting. For playback, we have the Amazon IVS player SDK; I've included the latest version here. Scrolling down, we have a video tag, and then about six lines of JavaScript to play back my video. I paste in the playback URL that I got when I created the channel with the CLI, create an instance of the player using IVSPlayer.create, attach the video element by passing the DOM element to it, load the playback URL, and play. So if I save that and come back over to my browser and jump to the playback page, we can see that playback is indeed working, very nicely and very easily. Within about five minutes, I created a channel, started broadcasting to it, and have playback in my browser. Now, this is the really cool thing to me: I'm not a video expert. I didn't need any knowledge of codecs or keyframe intervals or resolutions or any of the hard work that's done behind the scenes, and I've got a live stream, both broadcasting and playback, directly in my browser. And I think the really cool thing about that is that you are all experts at creating your application, whatever your domain specialty is. If you're an e-commerce application or a social gaming company, you're experts in that. You don't need to be experts in video in order to add video to your website or application.
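(The playback side in full, a minimal sketch; IVSPlayer is the global exposed by the player SDK script tag, and PLAYBACK_URL is the URL returned by create-channel:)

```javascript
// Create the player, attach it to the <video> element, and play the channel.
if (IVSPlayer.isPlayerSupported) {
  const player = IVSPlayer.create();
  player.attachHTMLVideoElement(document.getElementById('video-player'));
  player.load(PLAYBACK_URL);
  player.play();
}
```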
So if I close that: I like to throw a quick little tip in here, and that is the fact that the Amazon IVS web broadcast SDK can also be used to broadcast to an actual Twitch channel. If you use the Twitch ingest endpoint here, and your stream key from Twitch (log into Twitch, go into the console, get your stream key; I've hidden mine here, collapsed so that you can't see it), essentially all of the other code is the same. And if I come to my browser, click on Twitch broadcast, and open up the console, we can see that my broadcast has started. And if I hop over to my Twitch channel... well, let me reload that and try again. I was going to say I might be absolutely hammering my bandwidth with all of these demos, but there we go; it looks like it's working great. So I'm actually broadcasting to Twitch with that same exact code from the IVS web broadcast SDK, right from the browser, really quickly and really easily. Put that in your back pocket: if you ever want to do some kind of integration with Twitch, the web broadcast SDK makes that a possibility. So let's talk now about real time streaming. If I do a quick time check here: we've got about 15 minutes left, I believe, so I need to move a little quicker. Real time streaming: what is it? We're talking about streaming with less than 300 milliseconds of latency. So why do you need such low latency? Well, during COVID, everybody was sitting at home, and everybody was doing something video related, right?
Live streaming, or participating in Zoom conference calls, or Google Meet, or Teams, or Slack huddles, things like that. So real time latency became very important to a lot of people, and we're really seeing this from a lot of social networks, where they're introducing things called PK mode or versus mode (same thing). Essentially, this is where two different streamers can compete in some sort of competition: a singing competition, a dance-off, a costume competition, whatever it may be. To have that real time interaction, you need really, really low latency. You can't have two to five seconds of latency where I say something and the other person has to wait two seconds before they hear me and can respond. That's why we launched Amazon IVS stages about a year and a half ago. These are WebRTC based, so a little different from the RTMP-based ingest for low latency channels, but we do have multiple ingest options; I won't go too deep into that. We have session and participant monitoring, you can record stages to S3, and we have something similar to the multitrack video I mentioned earlier for low latency, except here it's called simulcast. It's the same concept: creating multiple renditions so that viewers who don't have the device capabilities or bandwidth to stream at the highest resolution can still view your live stream at a lower one. The limits are a little different here. (Oh, wow, I've got a cursor in my screenshot. Shame on me; I just noticed that. That's terrible, I need to fix it.) Stages support up to 25,000 concurrent viewers, and you can have up to 12 people producing video to the stage at the same time. So you can't have all 25,000 people on camera broadcasting to each other at once, but that really makes sense, right? If you think about Zoom or Google Meet: even if there are 500 people from your company in a meeting at one time, you're not going to see all 500 videos; you'll only see a grid of maybe 12 tiles at a time. For stages, the current quality limit is 720p. That may increase in the future, so
keep an eye out if that's something that you're interested in. The use cases: again, PK mode; guest star style conversations; audio-only influencer and fan chats. Live auctions are becoming really popular, where things like Pokemon cards are auctioned in real time, and that's obviously something you need the lowest possible latency for, for all of your viewers. If you have that two to five second delay, and someone with a really good connection is only two seconds behind while someone else with a poor connection is five seconds behind, that's not a fair situation for people trying to bid on, say, a playing card or a pair of sneakers. So you need real time latency for live auctions. Getting started with real time is almost as easy as low latency. I won't say it's difficult, but it is a little more involved; it takes a few more steps, so let's walk through them. The first thing we need to do is create a stage. To do that, we use the IVS real-time CLI command create-stage and give it a name. Once we have that created (again, no spin-up time, immediately available), I copy the ARN, or Amazon Resource Name, hop back over to my editor, and paste it in down here. Now we can quickly walk through this demo. First, we include that web broadcast SDK. I've also included Bootstrap, because I'm horrible at styling, and yes, I know it's old. But then again, so am I. I use Bootstrap because it's easy, I can remember a lot of the classes for layout and styling, and it looks better than plain HTML, let's be honest. I have two divs: a local participant and a remote participant. Of course, if you're creating an experience where you could have up to 12 on-stage participants, you probably wouldn't hard code the video tags; you might create them with JavaScript and add them to the DOM in some kind of repeater. But for this demo, I've just got a simple local and remote. Scrolling down a little: every participant, including those who are just viewing, needs a stage token, which is just a JWT that can be created with our SDK. This exists for a couple of reasons. One is monitoring: when you look at your logs, you can see a participant ID, and that comes from the token, which allows you to do some troubleshooting. It also allows you, if you need to disconnect a participant for moderation purposes, to revoke that token at any time via the SDK. So we need a token; I've got a Lambda set up to generate one. Once we have a token, we again need our permissions, our devices, and our streams, the same as last time, but this time we create instances of a local stage stream, passing in the video and audio tracks from our camera and microphone streams. Then we have a strategy object, which is just a JavaScript object with a few properties (in this case the audio and video tracks) and three methods that are invoked throughout the stage lifecycle to determine, for example: should this participant publish? Should you subscribe to a specific participant? It's just a way to control things. Maybe you have a moderator or producer type role that doesn't need to publish video but does want to view all of the participants; this gives you a little flexibility. Most of the time, though, you're going to use a basic strategy like this for your streams.
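(A basic strategy as just described, sketched in code; cameraStream and micStream come from getUserMedia as in the earlier demo, and the class and enum names follow the web broadcast SDK:)

```javascript
const { LocalStageStream, SubscribeType } = IVSBroadcastClient;

// Wrap the camera and microphone tracks as local stage streams.
const videoTrack = new LocalStageStream(cameraStream.getVideoTracks()[0]);
const audioTrack = new LocalStageStream(micStream.getAudioTracks()[0]);

const strategy = {
  stageStreamsToPublish() {
    return [videoTrack, audioTrack];
  },
  shouldPublishParticipant() {
    return true; // a producer/moderator role might return false here
  },
  shouldSubscribeToParticipant() {
    return SubscribeType.AUDIO_VIDEO;
  },
};
```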
Then we create an instance of a stage, passing it the token and the strategy. Next, we need to know when people join and leave the stage, and for that we can subscribe to various events. For now, the only thing I have in here is a listener to be notified when streams are added, but of course there are other events, like one for when somebody leaves so you can remove their element from the DOM. In this case, I'm just listening for streams being added. The only difference here is that I need to check whether this is the local participant, that is, whether the stream being added is, for example, my own video. If so, we want to add only the video stream to the video tag, not the audio stream. That's to prevent local feedback or echo: you don't need to hear yourself coming back through the web, because typically you can already hear yourself talking in real life. Once we have the streams we want to display, we take our video element, create a MediaStream, set it as the source object of the video element, and add all of the streams we want to display to it. Finally, we call stage.join. Once we do that, if we hop over here and check the console... okay, reload the page... okay, what have I done wrong? I didn't save the page. There we go. Reload that, open it in a new browser page, and you can see that my video here is broadcasting. If I really quickly come over here, start a local server, create a QR code, and join from my mobile device, you can see that I am now in a real time conversation with myself: one side from my FaceTime camera, one from my mobile device. And the latency, as you can see, is really low. We're talking around 300 milliseconds, maybe better than that. So that's how you create a real time stream with Amazon IVS. Now, with real time streams being limited to 25,000 viewers, you may be asking yourself: is there a way to extend that reach? I'm not going to get into it, but yes, there is. If you have a real time stream where people are interacting, you can composite it, either client side or server side, and broadcast it to a low latency channel. So if you only need real time latency between the participants, say a guest conversation with another streamer who's remote, you can have that handled by the real time stage, then composite and broadcast it to a low latency channel and reach the millions of viewers that you need to reach.
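(Pulling the stage wiring together, a condensed sketch of the steps just described; the element IDs match the demo's local and remote divs, here assumed to be video tags:)

```javascript
const { Stage, StageEvents, StreamType } = IVSBroadcastClient;

// token is the stage token minted by your backend (e.g. a Lambda).
const stage = new Stage(token, strategy);

stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {
  let streamsToDisplay = streams;
  if (participant.isLocal) {
    // Render only our own video; playing our own audio back would echo.
    streamsToDisplay = streams.filter((s) => s.streamType === StreamType.VIDEO);
  }
  const el = document.getElementById(participant.isLocal ? 'local' : 'remote');
  el.srcObject = new MediaStream(streamsToDisplay.map((s) => s.mediaStreamTrack));
});

await stage.join();
```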
I'm not going to go deep into stream chat, but we do have a WebSocket based stream chat solution. It's fully managed, scalable, and performant. You can do moderation via Lambda, which essentially invokes a Lambda function with every single message that's sent, and in that function you can do any kind of moderation you need. We also have manual moderation, of course, which allows someone with the proper permissions on their chat token to do things like delete messages and disconnect other users. It's very simple to use and, as I said, highly performant.
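(To make the Lambda moderation idea concrete, a sketch of a message review handler; the ReviewResult/Content response shape follows the IVS Chat message review handler contract, and the banned-word check is purely illustrative:)

```javascript
// A message review handler Lambda for IVS Chat moderation (Node.js).
export const handler = async (event) => {
  const content = event.Content ?? '';

  if (/badword/i.test(content)) {
    return { ReviewResult: 'DENY' }; // drop the message
  }
  return { ReviewResult: 'ALLOW', Content: content }; // pass it through
};
```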
One of the other cool things you can do here, beyond the use cases we've already talked about, is stream directly from games. Because real time stages support the WHIP protocol (the WebRTC-HTTP ingestion protocol, a bit of a mouthful), you can use them from any application that supports that protocol. For example, Unity has a WebRTC plugin with WHIP support, so you can, in theory, broadcast directly from a game using Unity and Amazon IVS and have viewers watch that stream in real time. That gives you the ability to do things like integrate chat directly into the game, and even use something like AppSync to create a dynamic, interactive experience where the viewers can actually modify the game in real time. I'm going to play this video really quickly so you can see what I'm talking about. On the left-hand side of your screen is a remote video from my mobile device, and on the right-hand side is the actual game itself. As the viewer on the left watches this live stream, they can click the buttons shown on their side of the stream and actually spawn obstacles and interactions directly into the game. This allows you to create really dynamic experiences: at no point will the game ever be the same while this user is playing and broadcasting to their viewers, because the viewers are involved in the experience. You can also do things like multiple camera angles (what you see at the top of the screen on the left-hand side is actually a second camera angle from within the game being broadcast to the viewers), or dynamic cameras where only the viewer can change the viewpoint, zooming in and out. You can really create crazy interactive experiences with Amazon IVS and a few other tools. If you want to learn more, we have a ton of demos on a website called ivs.rocks: a user generated content demo; a web conference platform demo, which is kind of a Google Meet type thing; a demo of client side transcription, if you're looking to use WebGPU to do real time transcription with AI models; and an audio-only room, if you want to create something like a mobile audio chat room. We have demos and tons of GitHub repositories up there, and there's also a pricing calculator you can check out to see how much this might cost you. All of that is based on our public pricing; if you have needs for larger volume and want to discuss private pricing, those costs can obviously change. And of course, you can check out the docs on the AWS site and our blog posts over on dev.to. And with that, I was a little bit rushed, but I want to thank you all for joining, and thank Brian and the rest of the crew for having me. I know we probably don't have too much time for questions, but if you want to get hold of me, I'm recursivecodes on social websites; connect with me on LinkedIn or... yeah, that's about it. LinkedIn or nothing. I was going to say that other one, but I don't use it anymore. So, Bluesky. I have like four followers on Bluesky. Well, not anymore, man; it's been crazy lately. I've been gaining followers every few minutes, a constant stream of people joining. The exodus has officially hit, as they say. So, yeah, we did run out of time, but that was really, really cool. You know, Potter and I were backstage basically trying to figure out if we could build our own live stream setup for virtual conferences, exactly. We're ideating about it, so I'm just going to let Potter build it for me.
Brian Rinaldi 42:57 That was really, really cool. So thank you, Todd. I wish we had time for questions, but maybe I'll have you back and we'll talk some more.
Todd Sharp 43:06 I'd love to. Thank you!
Ishan Anand will divulge the secrets behind how the LLM magic works using just a spreadsheet interface and some JavaScript and web components.
Burke Holland will give us all the tips and tricks we need to get the most out of generative AI using GitHub Copilot.
Moar Serverless will give you all the information you need to take advantage of serverless in your application development, including new AI and edge capabilities.
Johannes Dienst will share his experiences revamping his product's developer documentation, along with tips and advice on what worked to enhance the quality of the documentation and examples.