Brandon Cannaday: Hello, everybody. My name is Brandon Cannaday, and welcome to another Deeper Dive webinar. As the name implies, this gives us a chance to take a deep dive into some of the more technical areas of the Losant platform. Speaking today, it's going to be me, I'm going to be acting as kind of host and emcee getting us kicked off here. Once again, I'm Brandon Cannaday, I'm the Chief Product Officer here at Losant. The person doing the majority of the speaking is going to be Dylan Schuster. Dylan was one of our lead engineers for a long time, and now is our Director of Product for the platform itself. He's going to be taking us through a lot of implementation best practices when it comes to our workflow engine. One of the biggest benefits for joining us live versus catching a replay is that you do get to ask questions, so joining us for Q&A is Heath Blandford, one of our success engineers. If you've ever been in our forums or reached out to support, you've probably had a conversation with Heath. Over the years, he's become quite an expert in all things Losant. Speaking of Q&A, I want to direct everyone's attention to the bottom middle of the Zoom webinar software. You'll see a Q&A button. At any time during the webinar, feel free to click that button and put a question out there. We're going to do all the questions at the end, but don't hesitate at any moment if something pops in your head while Dylan is speaking, to go ahead and drop a question out there. If you aren't able to stick around for Q&A, don't worry about it. The webinar is recorded. We'll be sending that out to all the attendees. They're also available on our YouTube page, and you can find all of the previous recordings on our website, losant.com/deeper-dive, so you can check out this replay and replays for all our previous deeper dives there. If you happen to be new to Losant, I want to give a quick overview of what we are so that the rest of this webinar can make sense. 
We provide the edge and cloud software foundation on which our customers develop and bring to market their own IoT products and services. You can think of us like an IoT platform or an IoT application enablement platform. We provide those building blocks on which applications are built. There's a lot of functionality in the Losant platform, but today's webinar is really going to focus on the visual workflow engine. That's our low-code development environment that provides the intelligence backing all parts of your Losant application: real-time stream processing, alerting, notifications, even those business rules that are required for a real application. If you want to get a demo of all of the rest of what Losant is, feel free to reach out, just go to losant.com, you'll find some contact forms there. We'd be more than happy to get you connected to one of our experts, and they can provide you a more thorough demo of all of the rest of Losant. We are one of the leaders in the IoT application enablement space, in the industrial, smart environment, and telecommunication industries. We provide this foundation for some of the largest organizations, and we've got customers all over the world. We recognize that while Losant provides the software foundation, there's a lot of other technology that all has to come together in order to build one of those real, complete IoT products and services. So we surrounded ourselves with an excellent partner ecosystem, which includes strategic partners, who we share business models and go-to-market strategies with; solution partners, who actually help do the development work if you want to offload that outside of your own organization; and technology partners, who provide hardware, sensors, or even other software services that augment or extend what may not be provided by Losant out of the box. 
So if you are a potential partner and you want to look at working with Losant, or you're a solution developer or a customer looking for other parts of the technology stack, I really recommend you go check out losant.com/partners, browse our partner ecosystem, and see if there's something there that might solve a problem that you have. Today's webinar is going to be all about workflows: building performant workflows that scale with your IoT solution. Before I toss it over to Dylan, I want to provide a little bit of background about why we wanted to approach this topic. Losant at its core is really a technology that you use and develop on top of, and every technology has wrong ways and right ways to use it. This example here is an analogy comparing Losant to another technology you might be familiar with, and that's SQL databases. If you've ever had to scale a SQL database, you certainly understand some of the challenges that happen as you grow and grow and grow. If you made some poor decisions early on, they'll come back and bite you later on. I've listed a couple of the best practices you might be familiar with from databases: properly configuring your indexes, using correct data types, avoiding SELECT * and using LIMIT to restrict the amount of data that you're querying. All these technologies have these best practices where, if you don't use them, what would work on a very small amount of data may start to break down when you have a large amount of data. In a lot of ways, Losant works the same way. What would work in your proof-of-concept phase with a very small number of devices may start to have challenges when you scale up into the millions of devices, and that's really what today's webinar is all about. 
Dylan's going to take us through some of these best practices so that you can kind of start your development with the right frame of mind, the right techniques, so that when you do grow these applications, the Losant platform will happily scale with it. At this point, I'm going to pass it over to Dylan. He's going to take us through the rest of this webinar. Dylan?
Brandon Cannaday: Awesome. Thank you, Dylan. Yep, we've got some questions, but while I finish up here, please once again, if anything popped in your head, there is that Q&A button along the bottom edge of the Zoom webinar software, so feel free to pop that open and ask us some questions. Okay, before we get into Q&A, I've got a little bit of final housekeeping to do. The first is a save the date. We're going to be doing another one of these deeper dive webinars with a technology partner called EnOcean. If you've never heard of EnOcean, I like to think of them as kind of magic sensors. They use energy harvesting, so a lot of their sensors, maybe all of their sensors, don't require any batteries. They use the energy in the environment. Think about a smart light switch: the energy from a human pushing the switch in actually generates enough electricity for it to send a little BLE advertisement packet. They use a lot of clever tricks like that. They've got a ton of different sensors, and not requiring batteries really opens up a lot of interesting use cases, so we will have an application template that makes it easy to get started with EnOcean, so come check that out. You can register for that now at losant.com/deeper-dive. In terms of further educational resources, we've got a lot. No feature in Losant goes out the door without being fully documented, so check out the Losant documentation. There's also Losant University. If you're new to Losant, that is a lot of video educational material walking you through most of the components of the Losant platform. You can end with a certificate of completion, that nice certificate that says you did the work and you understand Losant. Of course we've got this deeper dive series. There's a lot of deeper dives out there now covering a ton of different parts of Losant. It can be really enjoyable to explore some of those. 
They're all fairly focused on individual topics, so if there's an area of Losant you're interested in, definitely check out the deeper dive landing page and browse through what we have available. There are a lot of tutorials on our blog. We continually publish information to the blog, so check that out at losant.com/blog. And then, if you've got any questions, if you're exploring, if you're running into some performance issues, if you're looking at that Debug Timing tab and you can't quite figure out why a number is so big or bigger than you would expect, feel free to ask us on the forums. We can certainly jump in and help resolve any issues that might come up. If you do want to reach out to us and get a thorough demo of the Losant platform, all the other stuff that makes up Losant that we don't cover in these deeper dives, you can send us an email directly at email@example.com, or check us out at losant.com, browse our material there, and reach out to our team. With that, let's jump into some Q&A. We have gotten a lot of good questions. The first is actually really interesting. It's really high-level, but it could be cool to cover. The question is: what are workflows? Are they compiled to some kind of language? How are they actually executed behind the scenes? I thought that was kind of a cool, really high-level one. Dylan, you want to talk about what a workflow... I guess when you're done dragging all the nodes in there and you hit deploy, what does that look like behind the scenes, and how does that actually get executed by the Losant platform?
Dylan Schuster: It's a good question. I'm trying to figure out how much detail to go into here. When you save a workflow, that workflow and all of its configuration settings are sent into a document store, a document database, and that's where they live along with the workflow versions and things. The triggers inside of a workflow, so your device connections and disconnections and device state and messages that come in... Every time a trigger fires, we pop a message into a message queue, and it waits in that message queue until it's ready to run. It's rarely there for more than, call it, 300ms; we've got SLAs set up around that to make sure that we're processing those messages in a certain amount of time. Once it's its turn to run, it gets passed off to any number of workflow runners whose job it is to actually execute the workflow. So the runner has the workflow that you built, and it has the payload from the initial trigger, and it's basically going to pass that payload from node to node. It's going to execute one node, make any mutations to the payload, then look at the outputs of that one node and pass it on to one or more of the nodes that follow it, and it's going to keep passing that message along and along and along, doing the various mutations and taking certain actions every step of the way. It's very close to Node-RED. If you've ever used Node-RED, there's a name for what they call themselves: a flow-based editor. I actually wrote a blog post comparing Node-RED to the Losant Workflow Engine a few months ago, the positives and negatives of both. That's a way of thinking about it: individual processes per node, each doing whatever it's doing, and then passing the payload on to the next node, and eventually of course it ends. Any time it hits a Debug Node, it's going to report the timing information that it accumulated all the way back up to you along with the payload itself.
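The execution model Dylan describes, a trigger message queued and then handed to a runner that threads the payload from node to node, can be sketched roughly like this. This is an illustrative model only, not Losant's actual implementation; the node functions and graph structure here are made up for the example:

```python
from collections import deque

def run_workflow(nodes, edges, trigger_payload):
    """Walk the node graph, threading the payload through each node.

    nodes: dict of node_id -> function(payload) -> payload
    edges: dict of node_id -> list of downstream node_ids
    Returns the payload as it looked after the last node processed.
    """
    queue = deque([("trigger", dict(trigger_payload))])
    payload = dict(trigger_payload)
    while queue:
        node_id, payload = queue.popleft()
        payload = nodes[node_id](payload)           # execute node, mutating the payload
        for next_id in edges.get(node_id, []):      # fan out to connected outputs
            queue.append((next_id, dict(payload)))  # each branch gets its own copy
    return payload

# Example graph: trigger -> math node -> debug node
nodes = {
    "trigger": lambda p: p,
    "math": lambda p: {**p, "tempF": p["tempC"] * 9 / 5 + 32},
    "debug": lambda p: p,  # in Losant, this is where timing would be reported
}
edges = {"trigger": ["math"], "math": ["debug"]}
result = run_workflow(nodes, edges, {"tempC": 20})  # -> {'tempC': 20, 'tempF': 68.0}
```

The real engine also accumulates per-node timing and reports it back through the Debug Node, but the pass-the-payload-along shape is the core idea.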
Brandon Cannaday: All right, awesome. You mentioned the word payload a lot. That's a really fundamental concept within Losant. That's really what every workflow node is modifying and using along the path. Heath, got a question for you. It's related to payloads, really around payload size. A workflow can add a ton of information to the payload, and those payloads can get pretty big, so the question is: does the size of that payload have any impact on a workflow's performance?
Heath Blandford: Great question. We do get a lot of questions about payloads, and certainly they are kind of a cornerstone of what we do here at Losant. To answer the question: typically, no. The size of the payload typically does not impact performance, except for the special case when you're using a Function Node. Dylan talked about the barrier, that kind of path the data passes through. We're serializing that payload data to get it into that sandbox, then working on that data, and then pushing it back out. When you're using a Function Node and your payload is very large, you can probably expect some performance impacts. Again, it's one of those things where, just like Dylan said, the best case is to try to use those first-class nodes that we have for some of those functions, but know that if your payload does get large and you are using a Function Node, you could experience some performance impact.
Brandon Cannaday: Okay, cool, thanks. To add on to that just a little bit, there is actually a protection limit that we have on payload size. It's 5MB when it comes to Function Nodes. So if a payload is bigger than 5MB and you attempt to put it into a Function Node, that's one of those performance catches where we have tried to protect you, kind of, from yourself, because we know the time it takes to serialize all that information, push it into that little safety sandbox that we have, and get it back out. Once your payload exceeds 5MB, you're going to get in trouble, so you might see an error, something around "payload too large." If you do see that, it's likely because you're trying to push too much information into a Function Node. Related to the Function Node as well, Dylan, you did mention some concurrency limit per application, so we did get a question about what that concurrency limit is. Heath, since you're already talking about the Function Node, do you want to talk about the per-app concurrency limit for Function Nodes?
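As a rough illustration of why that 5MB limit bites, you can estimate the serialized size of a payload the same way the platform has to pay for it on the way into the sandbox. The helper names and the pre-check below are hypothetical, not a Losant API; only the 5MB figure comes from the discussion above:

```python
import json

FUNCTION_NODE_LIMIT_BYTES = 5 * 1024 * 1024  # the 5MB protection limit discussed above

def payload_size_bytes(payload):
    """Approximate the JSON-serialized size a Function Node would have to copy."""
    return len(json.dumps(payload).encode("utf-8"))

def safe_for_function_node(payload):
    # Hypothetical pre-check: would this payload clear the 5MB limit?
    return payload_size_bytes(payload) <= FUNCTION_NODE_LIMIT_BYTES

small = {"deviceId": "abc123", "tempC": 21.5}
huge = {"readings": [0.0] * 2_000_000}  # ~10MB serialized; buffered samples add up fast

safe_for_function_node(small)  # True
safe_for_function_node(huge)   # False: would hit the "payload too large" error
```

Trimming the payload down to just the values the function actually needs, before it reaches the Function Node, avoids both the error and the serialization cost.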
Heath Blandford: Sure, and just to clarify, it's not the total number of Function Nodes that you have. You could theoretically have an unlimited number of Function Nodes. This is specifically talking about the number of Function Nodes that are running concurrently. The number of Function Nodes you can have in a single application that are running at the same time is three. Not a huge number. It's why we also, again, try to tell you to not use the Function Node, but we do understand that there are some things that the Function Node just makes easier. Just know that you can only have three total Function Nodes running at a time in an application.
Brandon Cannaday: If you do the math on that, if your Function Nodes take something like 30ms each and you've got three running at once, that still equates to roughly 100 Function Node executions per second that your application can process, and many more if your functions are faster, so another reason to make sure you're keeping your Function Nodes and your workflow execution times as short as possible.
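The limit Heath and Brandon describe, at most three Function Nodes executing at once per application, behaves like a counting semaphore with three permits. The model below is a simulation of the limit's effect on throughput, not how Losant enforces it; the 30ms body and ten executions are illustrative numbers:

```python
import threading
import time

FUNCTION_NODE_SLOTS = threading.BoundedSemaphore(3)  # the three-concurrent limit

def run_function_node(body_ms, results, index):
    with FUNCTION_NODE_SLOTS:       # blocks while three are already running
        time.sleep(body_ms / 1000)  # stand-in for the sandboxed function body
        results[index] = True

# Ten simulated 30ms Function Node executions: with three slots, one slot has
# to run four of them back to back, so the batch takes at least ~120ms rather
# than 30ms, i.e. roughly 100 executions per second at this body time.
results = [False] * 10
threads = [threading.Thread(target=run_function_node, args=(30, results, i))
           for i in range(10)]
start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed_ms = (time.monotonic() - start) * 1000  # >= ~120ms
```

The takeaway matches Brandon's point: the shorter each Function Node runs, the more total executions fit through those three slots.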
Dylan Schuster: If I can add something to that very quickly, one caveat is that that concurrency limit does not apply to edge workflows, because those edge workflows are executing on your hardware itself, not, of course, in the Losant cloud platform. Bear that in mind. A lot of these limits that we're talking about apply to anything that's executing in the Losant cloud, not down on the edge. There are still benefits and drawbacks to using Function Nodes even on the edge, a lot of the same things. We still spin up the sandbox and so on. I actually don't know off the top of my head if there is a concurrency limit on the edge, but I would think you're much less likely to run into it in an edge workflow, just by the nature of how those are built and deployed. But that limit only applies to the cloud-side workflow executions.
Brandon Cannaday: Dylan, we do have a question specifically about edge workflows, because we covered just some of the throttles and limits for cloud workflows. We had a general question here: is there anything I should be aware of when it comes to edge workflows? You talked a little bit about the Function Node, but are there any other limits or issues that people should be aware of when it comes specifically to edge workflows?
Dylan Schuster: Number one, the workflow execution timeout, which is 60 seconds in the cloud environment. That is actually configurable inside of an edge compute device as an environment variable when you spin up the gateway edge agent. It is 60 seconds by default, and unless you've got a good reason, I would leave it at 60 seconds. That'd be one thing to consider. Another is that, typically speaking, an edge compute device is going to be slower to make those third-party API requests than Losant's cloud infrastructure, just because it's a lower-power device. You have no idea what the quality of your Internet connection is going to be, versus coming out of a cloud environment, where, of course, it's probably going to be one of the best in the world. So that's something else to consider. On that note, one thing that I always recommend is, if your use case allows it, instead of using those third-party services in an edge workflow itself, instead of using an HTTP Node to make a request or any of the AWS nodes and such, it's going to be better for the resiliency of your workflow to instead queue up a message to send to Losant using an MQTT Node and pass the payload up there. The reason being that you can pass it up, it goes up into the cloud, and we can make that request up there, number one. Number two, and this is more important, is that if your device does not have an Internet connection at the time that that workflow is executing, that HTTP request is never going to happen. That AWS Lambda function is never going to run. However, if you were to queue it up with an MQTT Node, we will hold on to that message in the queue locally on the device, and when it reconnects, then we can send all those messages back up to the cloud, and in the cloud, we can make those requests for you. 
I always recommend to try to pass off as much to the cloud as is reasonable inside of an edge workflow for a lot of those kind of things where an Internet connection is required for that purpose.
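The store-and-forward pattern Dylan recommends, queue the message locally and let the cloud make the third-party call once the connection returns, looks roughly like this. The class and method names are invented for illustration; in practice the Losant Edge Agent's MQTT Node provides this local queueing behavior for you:

```python
from collections import deque

class EdgeMessageQueue:
    """Sketch of store-and-forward: hold messages while offline, drain on reconnect."""

    def __init__(self, publish):
        self._pending = deque()
        self._publish = publish  # e.g. an MQTT client's publish function
        self.connected = False

    def send(self, topic, payload):
        self._pending.append((topic, payload))  # always queue first
        if self.connected:
            self.flush()

    def flush(self):
        # Drain everything that accumulated while the gateway was offline.
        while self._pending:
            topic, payload = self._pending.popleft()
            self._publish(topic, payload)

# Offline: the reading is held locally instead of being lost.
sent = []
q = EdgeMessageQueue(lambda topic, payload: sent.append((topic, payload)))
q.send("losant/dev1/state", {"tempC": 20})

# Reconnect: the queued message goes up, and a cloud workflow can make the
# HTTP or Lambda call on the device's behalf.
q.connected = True
q.flush()
```

The key property is that a dropped connection delays the third-party request instead of silently skipping it, which is exactly the resiliency argument above.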
Brandon Cannaday: Okay, cool. The only thing I would add on that is, we do get asked fairly often, how many workflows per second can my edge gateway process? There's really no out-of-the-box answer for that; it depends so much on what those workflows are doing. I can give you one actual example. We had these scales for inventory management, and you can configure the scales to basically go into broadcast mode over serial, so every scale is just shooting weight data at the edge workflow, each one about 500 times per second. In total we had two of them, both over serial on the one gateway, this little Intel Atom-based gateway, so 1,000 workflows per second trying to execute, and that really taxed the processor quite a bit. So it's really about what the workflows are doing and how much memory and CPU horsepower your specific gateway has, so the best way to know is some of that testing that Dylan talked about. I'm going to put one more question in here. I know we're over the hour, but I think it's a good one, because we didn't really get into custom nodes at all and whether custom nodes have any performance implication on workflows. The question here: "Is there a performance cost to using custom nodes versus placing all the contents of a custom node just inline in the workflow?" Heath, you want to talk about that one?
Heath Blandford: Definitely. Custom nodes are a great way for you as a Losant developer to create reusable content that you can use in your workflows. But just because they're custom doesn't mean that they have any kind of special implications on performance. They're still running those first-class nodes. Now, if you are running a Function Node inside of a custom node, the same kind of performance implications are applied. But using a custom node in a workflow does not actually have any direct implications to performance.
Brandon Cannaday: All right, awesome, Heath. Well, thank you, Heath, thank you, Dylan, and thank you, everyone else who joined us today for this deeper dive webinar. Once again, we've got another one coming up October 26 around EnOcean, so you can go to losant.com/deeper-dive, register for that one, you'll be able to go to that same location, check out the replay for this one and the replays for all other webinars. That's all we got today, and we'll see you on the next deeper dive.