By KIM BELLARD
Perhaps you read about, or were directly impacted by, the massive, multi-hour Amazon Web Services (AWS) outage last week. Ironically, AWS’s effort to add capacity triggered the outage, although that apparently was not the root cause. It’s no surprise that AWS sought to add capacity; like most cloud service vendors these days, it has seen skyrocketing growth. Even healthcare has jumped into the cloud in a big way.
But, as the outage reminds us, having core computing functions done in far-off data centers may not always be a great idea. Still, we’re not about to go back to local mainframes or networked PCs. The compromise may be edge computing.
Definitions vary, and the concept is somewhat amorphous, but the goal is to move as much computing as possible to the “edge” of networks, primarily to reduce latency. PwC predicts: “Now, with the rise of IoT, the centralised cloud is moving down and out, and edge computing is set to take on much of the grunt work.”
As they describe it:
With edge, instead of pushing data to the cloud to be computed, processing is done by devices ‘at the edge’ of your network. The grunt work is done closer to the user, at an edge gateway server and then select or relevant data is sent to the cloud for storage (or back to your devices).
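The pattern PwC describes can be sketched in a few lines: the edge device does the grunt work locally and forwards only the select or relevant data to the cloud. This is a minimal illustrative sketch, not any real edge platform’s API; the names (`edge_gateway`, `process_locally`, `THRESHOLD`) and the threshold rule are all hypothetical assumptions.

```python
# Hypothetical sketch of the edge pattern: do the "grunt work" on the device
# and send only select/relevant data onward to the cloud.
# All names and the threshold rule here are illustrative, not a real API.

THRESHOLD = 100.0  # assumed cutoff: only readings above this go to the cloud


def process_locally(reading: float) -> float:
    """Grunt work done at the edge, e.g. cheap cleaning or smoothing."""
    return round(reading, 1)


def edge_gateway(readings: list[float]) -> list[float]:
    """Filter at the edge; return only the data destined for the cloud."""
    to_cloud = []
    for r in readings:
        value = process_locally(r)
        if value > THRESHOLD:  # relevant: escalate to cloud storage
            to_cloud.append(value)
        # otherwise the reading is handled (or discarded) locally,
        # saving both bandwidth and round-trip latency
    return to_cloud


readings = [98.6, 101.3, 99.2, 104.7]
print(edge_gateway(readings))  # only the readings above the threshold
```

The design choice is the point: most data never leaves the device, so the round trip to a far-off data center, and its latency, is paid only for the data that actually matters.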
The oft-cited example is self-driving cars; you really don’t want the AI to wait a single millisecond longer than necessary to make a potentially life-saving decision. An article in Nextgov pointed out:
Thus, a Tesla isn’t just a next-generation car; it’s an edge compute node. But even with Tesla, a relatively straightforward use case, building and deploying the edge node is just the beginning. In order to unlock the full promise of these technologies, an entire paradigm shift is required.