Way back when, companies such as Amazon and Google realised that they could leverage the large amounts of computing infrastructure developed to support their own operations by selling their spare compute and memory capacity as self-service resources.
The engineering effort used to guarantee high service quality for their core businesses could be sold on to startups and established companies alike who lacked the engineering expertise to develop and run their own scalable, resilient cloud services. (You’d know if Amazon Web Services (AWS) went down completely: so would large parts of the web that are hosted there.)
In the last couple of years, the likes of Google and IBM have moved up a level, and now offer “commodity AI” services – recognising faces and objects in photographs, performing entity extraction on the contents of large texts, generating speech from text and text from speech, and so on. (Facebook seems to prefer to remain inward looking.)
In a spate of announcements today, Amazon joined the party with the release of their own AI services, reviewed in a post by Amazon CTO, Werner Vogels, Bringing the Magic of Amazon AI and Alexa to Apps on AWS. (I’ll post my own summary review when I’ve had a chance to play with them…)
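For a flavour of what the new services look like from code, here’s a minimal sketch calling two of them via boto3 (the Python AWS SDK), assuming you have credentials configured; the bucket, image name and output file are placeholders of my own.

```python
import boto3

# Label detection with the new Rekognition service; the bucket and
# object names are placeholders for an S3-hosted image of your own.
rekognition = boto3.client("rekognition", region_name="us-east-1")
result = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-photos", "Name": "holiday.jpg"}},
    MaxLabels=10,
    MinConfidence=75,
)
for label in result["Labels"]:
    print(label["Name"], label["Confidence"])

# Text-to-speech with Polly; "Joanna" is one of the stock voices.
polly = boto3.client("polly", region_name="us-east-1")
speech = polly.synthesize_speech(
    Text="Hello from Amazon AI.", OutputFormat="mp3", VoiceId="Joanna"
)
with open("hello.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```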
But it seems that AWS have been shopping too. As well as providing a range of different server sizes and base operating systems, the machine instances that Amazon provides now include FPGAs (Field Programmable Gate Arrays; which is to say, programmable chips…) and (soon) GPUs.
The FPGA machine instance, the suitably named F1, includes one to eight [Xilinx UltraScale+ VU9P?] FPGAs dedicated to the instance, isolated for use in multi-tenant environments. To support development, the machine instance also includes 2.3 GHz Intel Broadwell E5 2686 v4 processors, up to 976 GiB of memory and up to 4 TB of NVMe SSD storage. So that looks alright, then… Gulp. (For more, see the product announcement, Developer Preview – EC2 Instances (F1) with Programmable Hardware.)
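Requesting one looks much like requesting any other EC2 instance. A sketch using the standard boto3 EC2 client; the AMI ID and key pair name below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single-FPGA F1 instance; the AMI ID is a placeholder --
# in practice you'd start from one of the FPGA developer AMIs.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder FPGA developer AMI
    InstanceType="f1.2xlarge",  # one FPGA; f1.16xlarge gets you eight
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",       # placeholder key pair
)
print(response["Instances"][0]["InstanceId"])
```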
The pre-announcement for the GPU instances (In the Works – Amazon EC2 Elastic GPUs), which have been a long time coming, suggests they will offer Windows support for OpenGL first, followed by support for other versions of OpenGL, DirectX and Vulkan. This means you’ll be able to render and stream your own 3D models, at scale. (Anyone think this may be gearing up to support AR and VR apps, as well as online streaming games? Or support for GPU-crunched Deep Learning/AI models?)
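The API for attaching an elastic GPU isn’t public yet, so the following is pure speculation on my part: going by how other launch-time options work, I’d guess at an extra parameter on run_instances (both the parameter name and the “eg1.medium” size below are assumptions):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Speculative: the ElasticGpuSpecification parameter and "eg1.medium"
# size are guesses at the eventual API -- the feature is only
# pre-announced, so treat this purely as a sketch.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",  # placeholder Windows AMI
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    ElasticGpuSpecification=[{"Type": "eg1.medium"}],
)
```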
(All the new machine instance offerings are described in the summary announcement post, EC2 Instance Type Update – T2, R4, F1, Elastic GPUs, I3, C5.)
As well as offering more physical machine types, Amazon have also upgraded their Aurora relational database product so that it is now compatible with PostgreSQL as well as MySQL (Amazon Aurora Update – PostgreSQL Compatibility).
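In practice, compatibility means a stock PostgreSQL driver should talk to an Aurora cluster endpoint as if it were any other Postgres server. A minimal sketch with psycopg2; the endpoint, database name and credentials are placeholders:

```python
import psycopg2

# Aurora speaks the PostgreSQL wire protocol, so a standard Postgres
# driver connects unchanged; swap in your own cluster endpoint and
# credentials (the values here are placeholders).
conn = psycopg2.connect(
    host="mycluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="mydb",
    user="admin",
    password="not-this",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```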
But it doesn’t stop there. For the consumer just wanting to run their own web-hosted instance of WordPress, Amazon virtual private servers are now available: Amazon Lightsail – The Power of AWS, the Simplicity of a VPS (though it looks a bit pricey compared to something like Reclaim Hosting…)
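Lightsail gets its own (much simpler) API too. A sketch using the boto3 Lightsail client; the blueprint and bundle IDs below are guesses of mine – you’d list the real ones with get_blueprints() and get_bundles() first:

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# See what application images ("blueprints") are on offer.
blueprints = lightsail.get_blueprints()["blueprints"]
print([b["blueprintId"] for b in blueprints])

# Spin up a WordPress box; both IDs below are assumed values --
# check them against get_blueprints() and get_bundles().
lightsail.create_instances(
    instanceNames=["my-wordpress-blog"],
    availabilityZone="us-east-1a",
    blueprintId="wordpress",  # assumed WordPress blueprint ID
    bundleId="nano_1_0",      # assumed smallest/cheapest bundle ID
)
```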
Back to the big commercial users: another of the benefits of using Amazon Web Services, whose resources far exceed the capacity of all but the largest technology companies, is that you can avail yourself of the large amounts of computing resource that might be required to analyse and process large datasets. Very large datasets. Huge datasets, in fact. Datasets so huge that you need a freight container to ship the data to Amazon, because you’re unlikely to have the bandwidth to get it there by any other means. Freight containers like AWS Snowmobile (H/T Les Carr for the pointer).
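A quick back-of-envelope calculation shows why the truck wins (the 10 Gbps link is just an assumption for illustration):

```python
# How long would 100 PB take over a dedicated 10 Gbps line?
capacity_bits = 100e15 * 8                     # 100 PB in bits
link_bps = 10e9                                # assumed 10 Gbps link
seconds = capacity_bits / link_bps
print(f"{seconds / (86400 * 365):.1f} years")  # ~2.5 years, ignoring overheads
```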
According to the FAQ, each Snowmobile is a secure data truck with up to 100 PB of storage capacity in a 45-foot long High Cube tamper-resistant, water-resistant, temperature-controlled and GPS-tracked shipping container. On arrival at your datacentre, it needs a 350 kW power supply (Amazon can supply a generator, if required). It connects to your datacentre using the supplied removable connector rack (up to two kilometres of networking cable are provided too).
Once you have completed the data transfer over your local network connection, the Snowmobile is returned to a designated AWS region datacentre. It’s not clear how the data is then uploaded – maybe they just wheel the container into a spare bay and hook it up?
This is all starting to get really silly now…