What’s New in AWS: An Opinionated Recap — Part 2

Hülagü
13 min read · Feb 21, 2021


Here is Part 1 for those who are interested in the latest news about Data Analytics, container technology, remote working, and call centers.

This is my second (and final) post on the re:Invent 2020 Leadership sessions, where I discuss selected talks from a subjective point of view, focusing especially on new tools & services and evaluating them in a broader context and in terms of their place in the technosphere of the future. I’ll be covering three sessions centered on the Internet of Things, Distributed Intelligence, and Serverless applications — all technologies I’m extremely curious about.

Cloudy today, foggy tomorrow

Dirk Didascalou, Vice President at AWS, gave a talk titled “Connect today, transform tomorrow with AWS IoT” on perishable foods and orthopedics. OK, I’m half-joking — it was on IoT, but the other two were also briefly mentioned in that context. It’s an informative talk about various exciting technologies, although I was slightly disappointed when he said “I grew up in the mobile industry for the first 20 years of my career including a company called Nokia” — sure, Gen Z may not be very familiar with Nokia products, but come on, it is the company that introduced the magic of “connecting with things” to us Millennials when we were adolescents, so let’s show at least some gratitude. Anyway, enough with the nostalgia.

AWS Partner IoT solutions

Not a novel concept, but I didn’t want to skip this in case some of you find it useful to know about this promotion from 1NCE, an Advanced Technology Partner of AWS. Didascalou mentions that they offer free IoT connectivity plans for a period of 12 months, including global connectivity through Tier 1 networks and 100 free SIM cards. So, if you love free things as much as I do, you can obtain their IoT SIM and connectivity suite from AWS Marketplace and try onboarding your devices to AWS IoT Core.

IoT Greengrass 2.0

A major update (or, more accurately, a reconstruction) of the edge platform. It has become modular, so you can now choose which software components to install, optimizing for the capabilities of your devices. What’s more, it now lets you conveniently develop applications locally, without deploying to the cloud until you want to. Both are welcome improvements, if you ask me — it’s in line with this year’s re:Invent theme of “hybrid” that they don’t push you to always be embedded (no pun intended) in the cloud, or to use their solutions exactly the way they want you to.
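
To make the modularity concrete, here is a minimal sketch (my own, not from the talk) of deploying a hand-picked set of components to a core device with boto3. The thing ARN, the custom component, and the version numbers are all placeholders.

```python
import boto3

gg = boto3.client("greengrassv2")

# Deploy only the components this particular device can afford to run.
response = gg.create_deployment(
    targetArn="arn:aws:iot:eu-west-1:123456789012:thing/my-edge-device",  # hypothetical core device
    deploymentName="modular-edge-deployment",
    components={
        # Public, AWS-provided components:
        "aws.greengrass.Cli": {"componentVersion": "2.0.3"},
        "aws.greengrass.LogManager": {"componentVersion": "2.0.3"},
        # A custom component registered earlier with create_component_version:
        "com.example.SensorReader": {"componentVersion": "1.0.0"},
    },
)
print(response["deploymentId"])
```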

As a cherry on top, Greengrass is now open-source — you can take a look right now at its GitHub repository, which is mainly written in Java.

FreeRTOS

AWS has some good news for users of FreeRTOS, the leading open-source embedded operating system. They now offer FreeRTOS Long Term Support (LTS), which consists of IoT libraries and relevant AWS libraries, along with security updates and bug fixes to the kernel, maintained by AWS. In short, it’s basically less effort and more reliability, in exchange for some potential vendor lock-in. Anyway, you may take it or leave it according to your taste, and if you want to examine it more closely before you decide, here is the GitHub repository.

IoT Core for LoRaWAN

The world of IoT is messy — programmable devices don’t really have the convenient abstractions of mainstream computers, and scaling through a Serverless architecture is but a distant dream for those who struggle with resource optimization on cold bare metal. AWS looks to provide a bit of support to those enduring developers with IoT Core for LoRaWAN — a low-power network protocol that is especially useful for devices with small sensors. Thanks to this managed service, Didascalou claims, you can connect such devices “without developing or operating a LoRaWAN network server” yourself. He even goes so far as to define IoT Core for LoRaWAN as a “plug-and-play experience” which, hopefully, isn’t just marketing hyperbole.
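
For the curious, onboarding a LoRaWAN device goes through the AWS IoT Wireless API. Below is a rough, hypothetical sketch with boto3: every EUI, key, profile ID, and role ARN is a placeholder, and the device and service profiles are assumed to have been created beforehand.

```python
import boto3

wireless = boto3.client("iotwireless")

# A destination routes uplink payloads to an IoT rule defined separately.
wireless.create_destination(
    Name="sensor-uplinks",
    ExpressionType="RuleName",
    Expression="lorawan_uplink_rule",  # hypothetical IoT rule
    RoleArn="arn:aws:iam::123456789012:role/iot-wireless-destination-role",
)

# Register the device itself with its OTAA credentials.
device = wireless.create_wireless_device(
    Type="LoRaWAN",
    Name="soil-moisture-sensor-01",
    DestinationName="sensor-uplinks",
    LoRaWAN={
        "DevEui": "a1b2c3d4e5f6a7b8",  # placeholder EUI
        "DeviceProfileId": "placeholder-device-profile-id",
        "ServiceProfileId": "placeholder-service-profile-id",
        "OtaaV1_0_x": {"AppKey": "0" * 32, "AppEui": "0" * 16},  # placeholder keys
    },
)
print(device["Id"])
```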

Fleet Hub

Thanks to advancing technology you can now manage fleets that consist of millions of devices… or can you? Maintaining that tangled mess is often a nightmare, even for established businesses with dedicated teams of engineers and technicians. Fleet Hub is AWS’s answer to this problem — it lets you create a managed application, without writing any code, in which you can monitor and interact with your device fleet. And for identity management it’s fully compatible with SSO, AD, and AWS Organizations. Thus it is yet another service that helps decentralization, through centralization.

IoT SiteWise Edge

As can be inferred from the name, this service is basically IoT SiteWise brought to the edge. Not to underrate it, though — implementing cloud capabilities in a local context is no small feat. Since it shares the same asset models as SiteWise proper, your setup is directly transferable from the cloud to the edge. SiteWise Edge can read data from on-site resources by integrating with standard third-party gateways through a Greengrass connector, and it can process data before sending it to the cloud (the consumer is SiteWise by default, but it can also transmit to other targets, including S3). SiteWise Edge also includes SiteWise Monitor for local visualization of assets, whose ability to work without connecting to the cloud is once again emphasized by Didascalou.

Other new features

Let’s mention a few (relatively) minor ones before concluding. Enthusiasts who want to delve deeper into IoT may find IoT EduKit useful, which comes with actual hardware on which you can practice implementing embedded apps. IoT Core can now send data directly to Kafka in a VPC for stream processing (see the sketch below). IoT Device Defender has added ML-based detection of anomalous device behavior, and lets you add custom metrics for power usage, battery usage, and other features. Finally, you can now add alarms for IoT SiteWise and IoT Events and display them in SiteWise Monitor.
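
Here is a rough idea of what the Kafka integration might look like as an IoT rule. It is a sketch under assumptions: the destination ARN, broker address, and topic names are invented, and you would first need to create a VPC topic rule destination.

```python
import boto3

iot = boto3.client("iot")

# Route device telemetry straight into a Kafka topic running in your VPC.
iot.create_topic_rule(
    ruleName="sensor_to_kafka",
    topicRulePayload={
        "sql": "SELECT * FROM 'factory/+/telemetry'",
        "awsIotSqlVersion": "2016-03-23",
        "actions": [{
            "kafka": {
                "destinationArn": "arn:aws:iot:eu-west-1:123456789012:ruledestination/vpc/placeholder",
                "topic": "factory-telemetry",
                "clientProperties": {
                    "bootstrap.servers": "b-1.example.kafka.eu-west-1.amazonaws.com:9092",
                    "acks": "1",
                },
            }
        }],
    },
)
```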

To sum this session up, there is a clear pattern emerging here: decentralize the computation while keeping security mainly central. Cloud power is still there with all its might, but it’s now closer to where the action happens, in line with the emerging distributed pattern known as fog computing. We’ve yet to see how successful these services will be in real-life applications, but as a framework and vision I believe this to be a step in the right direction.

The queen and the swarm

Starcraft is one of my all-time favorite games (please don’t quit, this part is indeed about the speech “The extended cloud: AWS powers edge-to-cloud applications” from Bill Vass — I’ll explain). I was only 10 years old when I first played it, yet I could still appreciate this masterpiece. In contrast to many previous strategy games, where races differed only in tiny details, Starcraft had three completely different, yet somehow balanced, races. With lots of units, many unique abilities, and a myriad of interactions between them, it never let you play with template strategies — you had to use your brain and respond precisely to your opponent's moves from start to finish. And the story… with all its twists and turns, great multi-dimensional characters, deep lore, and a plot with love, hate, war, science, politics, brotherhood, sacrifice, and betrayal masterfully woven into each other, it’s still one of the best in any game genre. Given all that, it’s no wonder that it was love at first sight for me with Starcraft.

Yet it was a mystery even to me why I loved one of the races so much: the Zerg. They were a race of alien creatures you can’t easily empathize with — unlike the other two races, whose members spoke English, the Zerg didn’t even speak (at least not in a way we can understand). Their units were wild, ugly, stupid, destructive. They are continuously mutating and adapting creatures with very little form. In short, the Zerg are as “inhuman” as it gets. So why did I adore them so much? Why did I root for them even though they are presented as the “bad guys”? Why did I almost exclusively choose to play them? Years later I would see a meme with a picture of a zergling (the most basic Zerg unit) and a caption that says “We love Zerg because they look on the outside as we look on the inside”, and it immediately made sense.

A smart device… sorry, a zergling (source)

Right, we all have an ugly side that we are scared out of our minds somebody may notice. To hide this darkness we mask it with “form” — complying with etiquette, putting on make-up, wearing suits… Yet the Zerg don’t have that — they are pure “essence”, meaning they have nothing to hide; for the Zerg, inside is the same as outside, and mind is the same as matter. No beauty whatsoever, just flawless functionality. Paradoxically, this is what makes them beautiful to me.

But there is more. Zerg are actually a single creature — similar to an ant colony but even more extreme. Trillions of Zerg spread around the universe, looking like distinct organisms, are functionally cells of one giant, boundless organism. Individual creatures may have small and primitive brains, but the colony as a whole has emergent, resilient swarm intelligence. However, unlike an ant colony, their minds are also directly connected to their queen, who determines the purpose of the Zerg colony. Without the will of a queen, Zerg wander aimlessly. Because intelligence is always toward something — it needs to have an object in order to emerge. Once the colony has a purpose trillions of creatures start to behave in perfect harmony. The purpose given by the queen is the only meaning of a Zerg’s existence — it has no ego whatsoever. It will sacrifice its life without blinking an eye if that will benefit the colony. Every single Zerg processes the information they obtain with their senses, all of which come together and simultaneously flow through the mind of the queen and back to the colony, becoming a continuous, meaningful whole with the colony’s emergent intelligence and the purpose of the queen.

OK, I’m not sure all this rambling helped you visualize the concept of Distributed Machine Learning, but I hope it did. As you may have guessed, the queen is the cloud, and the individual Zerg are the devices. The Zerg as I’ve just described them represent an extreme, idealized implementation of this concept — one where devices have optimal functionality, there is real-time and lossless communication with the cloud, and data is processed where it should be, in a way that optimizes for the global target given to the system. Which, at last, brings us to this speech by Bill Vass, VP of Engineering at AWS. The frameworks and services he introduced (along with the ones from Didascalou’s speech listed above), although they don’t immediately enable such a system, are indeed small but crucial steps toward that ideal.

Edge-to-cloud architecture

Vass shows an informative slide which demonstrates the different stages and aspects of IoT technology, along with the corresponding AWS services. I’d love to share the screenshot here, but I’m slightly afraid of this copyright thing. Instead, let me give you a link to its timestamp, which can be useful for conceptualizing the IoT landscape at AWS. And if you find the myriad of individual services confusing anyway, believe me, you’re not alone. Things often get more complicated before you decide what’s actually needed and simplify accordingly.

SageMaker Edge Manager

SageMaker Neo, which came out in 2018, made it possible to develop ML models in the cloud, compile them, and export them to supported edge devices. Edge Manager takes this one step further by letting you manage those models directly at the edge at scale, have multiple models that interact with one another, and continuously monitor them. Anomaly detection with smart cameras is just one example Vass mentions. I believe Edge Manager has the potential to make implementing Federated Learning more convenient, if it works as promised.
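
A hedged sketch of the Neo-to-Edge-Manager hand-off with boto3, assuming a model has already been trained and uploaded to S3; all names, ARNs, and paths are placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# Step 1: compile the trained model with Neo for a specific target device.
sm.create_compilation_job(
    CompilationJobName="defect-detector-neo",
    RoleArn="arn:aws:iam::123456789012:role/sagemaker-execution-role",
    InputConfig={
        "S3Uri": "s3://my-bucket/models/defect-detector/model.tar.gz",
        "DataInputConfig": '{"input": [1, 3, 224, 224]}',
        "Framework": "PYTORCH",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-bucket/compiled/",
        "TargetDevice": "jetson_nano",
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)

# Step 2: once compilation succeeds, package the model for the Edge Manager agent.
sm.create_edge_packaging_job(
    EdgePackagingJobName="defect-detector-edge-v1",
    CompilationJobName="defect-detector-neo",
    ModelName="defect-detector",
    ModelVersion="1.0",
    RoleArn="arn:aws:iam::123456789012:role/sagemaker-execution-role",
    OutputConfig={"S3OutputLocation": "s3://my-bucket/edge-packages/"},
)
```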

Lookout for Equipment & Monitron

Lookout for Equipment can be nicely summarized by Vass’s words: “Corrective maintenance using Machine Learning at the edge”. In this service the actual ML model lives in the cloud, so detecting problems is a shared responsibility between the cloud and the device connected to IoT Core, but you’re able to react locally to any failure. However, the data needs to be transmitted continuously to the cloud, so network problems may cause some cases to go undetected. Monitron, on the other hand, works even when it’s disconnected. It consists of a device with embedded ML, and sensors that automatically connect to it to monitor equipment on-site using temperature and vibration. Monitron is capable of improving its ML model using feedback, and of streaming data to the cloud if you prefer.

Lookout for Vision & Panorama

This service is for cameras connected to the cloud and integrates nicely with Kinesis Video Streams. It lets you stream visual data and uses ML in the cloud to automatically detect defects, smoke, fire, and even people not wearing PPE, as claimed by Vass. According to him, using Lookout for Vision requires no ML knowledge at all, and you can easily build a model with as few as 30 images. Yet again, for those who can’t maintain a stable connection to the cloud, Panorama is the alternative solution. The Panorama Appliance is a device with its own GPU, on which you can locally analyze multiple video feeds from the existing cameras you connect to it. The Panorama Device SDK, on the other hand, is implemented directly in smart cameras with enough processing power, and you can stream to Kinesis Video when you need further analysis. Unlike most of the other services I’ve just mentioned, though, this one requires some degree of programming and ML know-how.
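
To give a sense of scale for “no ML knowledge at all”, calling a trained model boils down to something like the sketch below. The project name and model version are hypothetical, and the project, dataset, and model are assumed to exist already.

```python
import boto3

lfv = boto3.client("lookoutvision")

# Send one camera frame to a trained (and started) model and read the verdict.
with open("line_camera_frame.jpg", "rb") as image:
    result = lfv.detect_anomalies(
        ProjectName="widget-inspection",  # hypothetical project
        ModelVersion="1",
        Body=image.read(),
        ContentType="image/jpeg",
    )

detection = result["DetectAnomalyResult"]
print(detection["IsAnomalous"], detection["Confidence"])
```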

Location Service

While the other tools focus on answering your questions starting with “What”, “Why”, or “How”, Location Service is for “Where” questions. It promises a quick and cheap way of mapping and tracking your assets, as well as geofencing — monitoring whether an asset leaves an area with virtual boundaries that you determine. It’s also possible to use Location Service together with other AWS services and to integrate the data it provides into other applications.
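
To illustrate the tracking and geofencing flow, here is a small hypothetical sketch with boto3. The collection, tracker, device ID, and coordinates are all placeholders, and the tracker is assumed to be linked to the geofence collection (via associate_tracker_consumer).

```python
from datetime import datetime, timezone

import boto3

location = boto3.client("location")

# Define a polygon geofence: a closed ring of [longitude, latitude] pairs.
location.put_geofence(
    CollectionName="warehouse-perimeter",
    GeofenceId="loading-dock",
    Geometry={"Polygon": [[
        [28.97, 41.01], [28.98, 41.01], [28.98, 41.02], [28.97, 41.02], [28.97, 41.01],
    ]]},
)

# Report a device position; the linked tracker evaluates ENTER/EXIT events.
location.batch_update_device_position(
    TrackerName="forklift-tracker",
    Updates=[{
        "DeviceId": "forklift-07",
        "Position": [28.975, 41.015],
        "SampleTime": datetime.now(timezone.utc),
    }],
)
```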

Toward the end of the talk Vass also gives an example of which AWS services you can take advantage of at each stage of the IoT pipeline (connect -> deploy -> manage -> secure), and it’s pretty mind-boggling for me to imagine all the possibilities available to us even right now. All in all, you can probably see my excitement about these new tools in the IoT landscape — and I haven’t even mentioned the developments in existing services such as Wavelength and RoboMaker, which are also very promising. By the way, please don’t think I’m a starry-eyed optimist — I’m well aware of the potential malicious use cases of all these tools and technologies, such as surveillance. However, technology will continue to advance whether we want it to or not, so instead of desperately trying to stop it, it seems we need to focus on ensuring that control of those tools stays in the hands of benevolent forces.

The “less is more” is sometimes less, sometimes more

Ahh, Serverless… also known as “somebody else’s server”. At this point I’ve already accepted that it’s unavoidably the final destination of computing, and we are transitioning into a world where our sole responsibility will be writing the code for applications (and often not even that) — the only question is whether this transition will be peaceful or not. Does David Richardson, VP of Serverless at AWS, possibly have the answer in his speech “Increasing innovation with serverless applications”? Let’s see!

Lambda extensions

These enable operating Lambda functions with the tooling stack of your preference, including third-party tools. One of the tools Richardson mentions by name is Terraform. Terraform, as far as I know, is not especially loved by AWS, who (understandably) promote their own IaC tool, CloudFormation — yet this has been a re:Invent where they’ve started embracing integration instead of monopolization.
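
Under the hood, an external extension is just a separate process that talks to the Extensions API. The bare-bones sketch below (my own illustration, not AWS sample code) registers for lifecycle events and then polls for them; a real extension would ship telemetry to your tooling of choice in between polls.

```python
#!/usr/bin/env python3
# This file would ship in a layer under /opt/extensions/; its file name must
# match the name used during registration.
import json
import os
import urllib.request

API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2020-01-01/extension"

# Register the extension for INVOKE and SHUTDOWN lifecycle events.
register = urllib.request.Request(
    f"{API}/register",
    data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
    headers={"Lambda-Extension-Name": "my-extension"},
    method="POST",
)
with urllib.request.urlopen(register) as resp:
    extension_id = resp.headers["Lambda-Extension-Identifier"]

# Block on the next event; Lambda resumes the process whenever something happens.
while True:
    next_event = urllib.request.Request(
        f"{API}/event/next",
        headers={"Lambda-Extension-Identifier": extension_id},
    )
    with urllib.request.urlopen(next_event) as resp:
        event = json.load(resp)
    if event["eventType"] == "SHUTDOWN":
        break
```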

Lambda container image support

This feature had been expected for a while. According to Richardson, apart from providing native service integrations and a simple programming model, Lambda is also ten times faster than container orchestration systems (yes, Kubernetes, he’s talking about you). What’s more, again in line with the general theme, the images for the supported language runtimes, the Lambda runtime interface clients, and the emulator have been made open-source.

So, for running containers, Lambda, Fargate, and ECS/EKS (ordered from simple to complex, or rigid to flexible) now co-exist within a single cloud provider (those with masochistic tendencies may feel free to add EC2). My crystal ball says this situation can’t last long. I would say Fargate could be the first one to go (possibly merging with Lambda), though I wouldn’t bet my house on that.

Step Functions synchronous express workflows

“I can’t tell you how much engineering effort inside of AWS gets put into things like retry logic, back off algorithms, and tuning received queues”, Richardson complains. Yes, Mr. Richardson, you actually can. It’s the same for all engineers, from the gigantic conglomerate you work for to the humblest startups — we all dream of spending our working hours designing clever architectures and going to a bar to relax after work at 5 p.m., yet in practice we stay awake at night desperately trying to understand why that Lambda can’t be given permission to be triggered by an S3 event. Anyway, thanks for at least doing something to make our lives a bit more tolerable. Step Functions is indeed nice for abstracting away some of the laborious coding and letting us concentrate on the logic. Synchronous Express Workflows, in turn, are claimed to make high-volume cases requiring low latency more viable. Another benefit would be making it easier to integrate human interaction into the flow, thanks to synchronous processing.
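
The synchronous flavour is a single blocking API call. Here is a minimal sketch, assuming an Express state machine already exists; the ARN and input are placeholders.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# The call blocks until the workflow finishes and returns its output directly.
result = sfn.start_sync_execution(
    stateMachineArn="arn:aws:states:eu-west-1:123456789012:stateMachine:order-validation",
    input=json.dumps({"orderId": "12345", "amount": 42.0}),
)

if result["status"] == "SUCCEEDED":
    print(json.loads(result["output"]))
else:
    print(result.get("error"), result.get("cause"))
```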

Decoupling legacy systems

Yet another bunch of steps toward more integration that are supposed to simplify your migration to the cloud. Amazon MQ has now added support for RabbitMQ and is also capable of triggering Lambda functions. It has also become possible to integrate Amazon MSK with Lambda. Managed Workflows for Apache Airflow is a brand new service for the legacy (!) workflow management tool that is Airflow. Yet, to be fair, Richardson seems to hold Airflow in high regard, praising its “active community, large library of prebuilt integrations to third party data processing tools and the ability to use Python scripts to create workflows”. A bit unexpected to hear from an Amazon executive, no? Anyway, it’s another managed service that offers us data engineers more time to spend on research and development. I want to believe.
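
Both integrations ride on Lambda event source mappings. Here is a hedged sketch of what wiring them up might look like with boto3; all ARNs, topics, queues, and the credentials secret are invented.

```python
import boto3

lam = boto3.client("lambda")

# Amazon MSK topic -> Lambda
lam.create_event_source_mapping(
    EventSourceArn="arn:aws:kafka:eu-west-1:123456789012:cluster/legacy-events/abcd1234",
    FunctionName="process-legacy-events",
    Topics=["orders"],
    StartingPosition="LATEST",
    BatchSize=100,
)

# Amazon MQ (RabbitMQ) queue -> Lambda; broker credentials live in Secrets Manager.
lam.create_event_source_mapping(
    EventSourceArn="arn:aws:mq:eu-west-1:123456789012:broker:legacy-broker:b-abcd1234",
    FunctionName="process-legacy-queue",
    Queues=["invoices"],
    SourceAccessConfigurations=[{
        "Type": "BASIC_AUTH",
        "URI": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:mq-credentials",
    }],
)
```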

Other features

Let’s quickly run through a few more announcements. Lambda now supports up to 10 GB of memory and can take advantage of scalable EFS. Lambda functions can now be restricted so that they are only accessible from inside a VPC. There is further integration with Signer and KMS. Billing granularity has decreased to 1 ms. Lambda has additional support for stream aggregation using Kinesis and DynamoDB Streams. The talk also has a nice demonstration of using Lambda functions within a call-center architecture, which I would recommend everyone see.
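
For instance, opting an existing function into the new memory ceiling and attaching EFS is a single configuration call. A tiny sketch with placeholder names (note that the EFS mount requires the function to be attached to a VPC):

```python
import boto3

lam = boto3.client("lambda")

lam.update_function_configuration(
    FunctionName="feature-builder",
    MemorySize=10240,  # MB; the new maximum, with vCPUs scaling alongside memory
    FileSystemConfigs=[{
        "Arn": "arn:aws:elasticfilesystem:eu-west-1:123456789012:access-point/fsap-0123456789abcdef0",
        "LocalMountPath": "/mnt/shared",
    }],
)
```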

So, the developments in Serverless at AWS this year seem to be aimed at making it more convenient to adopt Lambda functions. That is indeed what you need most when working with complex, distributed systems, where Lambda is most useful for tying different parts together. It will be very interesting to see the response of the other two of the Big Three — I’m watching with my popcorn.

That’s all!

With this post I’m done with re:Invent 2020. I hope I was able to make the future of the Cloud a bit clearer for you. Yet I’m not done with writing on Medium, as I enjoy sharing knowledge and there are still a myriad of things I want to talk about with you!
