Tagged: AWS

New Amazon Developer/Devops Tools, Mobile Targeting

I’ve always found Amazon’s AWS tools really fiddly to use – settings all over the place, and it’s all too easy to put things into the wrong zone, forget about them, and then have to track them down as you get billed for them – but that’s partly the way of self-service, I guess.

Anyway, last week, amongst a slew of other announcements (AI services, new hardware platforms that include FPGAs), Amazon announced a range of developer/devops productivity tools that show they’re now looking at supporting workflows as well as just providing raw services.

Here’s a quick summary of the ones I spotted:

  • AWS Batch: run batch jobs on AWS;
  • AWS CodeBuild: a managed build service that “builds in a fresh, isolated, container-based environment”, incorporating (see the boto3 sketch after this list):
    • Source Repository – Source code location (AWS CodeCommit repository, GitHub repository, or S3 bucket).
    • Build Environment – Language / runtime environment (Android, Java, Python, Ruby, Go, Node.js, or Docker).
    • IAM Role – Grants CodeBuild permission to access specific AWS services and resources.
    • Build Spec – Series of build commands, in YAML form.
    • Compute Type – Amount of memory and compute power required (up to 15 GB of memory and 8 vCPUs).
  • AWS X-Ray: a debugging tool that allows you to track requests across multiple connected Amazon services. Apparently, AWS X-Ray provides:

    … follow-the-thread tracing by adding an HTTP header (including a unique ID) to requests that do not already have one, and passing the header along to additional tiers of request handlers. The data collected at each point is called a segment, and is stored as a chunk of JSON data. A segment represents a unit of work, and includes request and response timing, along with optional sub-segments that represent smaller work units (down to lines of code, if you supply the proper instrumentation). A statistically meaningful sample of the segments are routed to X-Ray (a daemon process handles this on EC2 instances and inside of containers) where it is assembled into traces (groups of segments that share a common ID). The traces and segments are further processed to create service graphs that visually depict the relationship of services to each other.

  • AWS Shield: a tool that protects your service against DDoS attacks. In waggish mood, @daveyp suggested that many DDoS attacks he’s aware of come from AWS IP addresses. This feels a bit like a twist on an operating system vendor also selling security software to make up for security deficiencies in their base O/S? That said, “AWS Shield Standard is available to all AWS customers at no extra cost” and seems to be applied in basic mode automatically. Security essentials, then?!
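
To make the CodeBuild part more concrete, here’s a rough boto3 (Python) sketch of how those elements fit together – treat it as illustrative only: the project name, repo URL, role ARN and build image below are made-up placeholders, and the curated image names may differ from what AWS actually ships:

import boto3

codebuild = boto3.client('codebuild')

# Define a build project from the elements listed above: source repository,
# build environment, IAM role and compute type. The build spec itself is
# typically read from a buildspec.yml file in the root of the repo.
codebuild.create_project(
    name='mybuildproject',                                   # placeholder
    source={'type': 'GITHUB',                                # or CODECOMMIT / S3
            'location': 'https://github.com/you/yourrepo'},  # placeholder
    artifacts={'type': 'NO_ARTIFACTS'},
    environment={'type': 'LINUX_CONTAINER',
                 'image': 'aws/codebuild/python:2.7.12',     # assumed image name
                 'computeType': 'BUILD_GENERAL1_SMALL'},     # up to ...LARGE (15 GB / 8 vCPUs)
    serviceRole='arn:aws:iam::123456789012:role/CodeBuildRole'  # placeholder ARN
)

# Kick off a build of the project
codebuild.start_build(projectName='mybuildproject')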

Amazon are also starting to offer segmented alert targeting services for your mobile apps with Amazon Pinpoint. The service lets you “define target segments from a variety of different data sources” and more:

You can identify target segments from app user data collected in Pinpoint. You can build custom target segments from user data collected in other AWS services such as Amazon S3 and Amazon Redshift, and import target user segments from third party sources such as Salesforce via S3.

Once you define your segments, Pinpoint lets you send targeted notifications with personalized messages to each user in the campaign based on custom attributes such as game level, favorite team, and news preferences for example. Amazon Pinpoint can send push notifications immediately, at a time you define, or as a recurring campaign. By scheduling campaigns, you can optimize the push notifications to be delivered at a specific time across multiple time zones. For your marketing campaigns, Pinpoint supports Rich Notifications to enable you to send images as part of your campaigns. We also support silent or data notifications, which allow you to control app behavior and app config in the background.

Once your campaign is running, Amazon Pinpoint provides metrics to track the impact of your campaign, including the number of notifications received, number of times the app was opened as a result of the campaign, time of app open, push notification opt-out rate, and revenue generated from campaigns.

One thing I didn’t spot was any announcement about significant moves into “digital manufacturing” and 3D print-on-demand (something I wondered about some time ago: Amazon “Edge Services” – Digital Manufacturing).

They do seem to be moving into surveilled, auto-checkout, real-world shopping though… Amazon Go.

Amazon Web Services Move Up a Level

Way back when, companies such as Amazon and Google realised that they could leverage the large amounts of computing infrastructure developed to support their own operations by selling their spare compute and memory capacity as self-service resources.

The engineering effort used to guarantee the high service quality levels for their core businesses could be sold on to startups, and established companies alike, who did not have the engineering expertise to develop and run their own scalable, and resilient, cloud services. (You’d know if Amazon Web Services (AWS) went down completely: so would large parts of the web that are hosted there.)

In the last couple of years, the likes of Google, Amazon and IBM have moved up a level, and now offer “commodity AI” services – recognising faces and objects in photographs, performing entity extraction on the contents of large texts, generating speech from text and text from speech, and so on. (Facebook seems to prefer to remain inward looking.)

In a spate of announcements today, Amazon joined the party with the release of their own AI services, reviewed in a post by Amazon CTO, Werner Vogels, Bringing the Magic of Amazon AI and Alexa to Apps on AWS. (I’ll post my own summary review when I’ve had a chance to play with them…)

But it seems that AWS have been shopping too. As well as providing a range of different server sizes and base operating systems, the machine instances that Amazon provides now include FPGAs (Field Programmable Gate Arrays; which is to say, programmable chips…) and (soon) GPUs.

The FPGA machine instance, the suitably named F1, includes one to eight [Xilinx UltraScale+ VU9P?] FPGAs dedicated to the instance, isolated for use in multi-tenant environments. To support development, the machine instance also includes 2.3 GHz Intel Broadwell E5 2686 v4 processors, up to 976 GiB of memory and up to 4 TB of NVMe SSD storage. So that looks alright, then… Gulp. (For more, see the product announcement, Developer Preview – EC2 Instances (F1) with Programmable Hardware.)

The pre-announcement for the GPU instances (In the Works – Amazon EC2 Elastic GPUs), which have been a long time coming, looks set to offer Windows support for OpenGL, followed by support for other versions of OpenGL, DirectX and Vulkan. This means you’ll be able to render and stream your own 3D models, at scale. (Anyone think this may be gearing up to support AR and VR apps, as well as online streaming games? Or support for GPU-crunched Deep Learning/AI models?)

(All the new machine instance offerings are described in the summary announcement post, EC2 Instance Type Update – T2, R4, F1, Elastic GPUs, I3, C5.)

As well as offering more physical machine types, Amazon have also upgraded their Aurora relational database product so that it is now compatible with PostgreSQL as well as MySQL (Amazon Aurora Update – PostgreSQL Compatibility).
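
(If the compatibility claim holds up, “PostgreSQL compatible” should mean a bog-standard Postgres client just works against an Aurora endpoint. A minimal sketch using the psycopg2 driver – the cluster endpoint and credentials below are made up:)

import psycopg2

# Hypothetical Aurora PostgreSQL cluster endpoint – substitute your own
conn = psycopg2.connect(host='mycluster.cluster-abc123.eu-west-1.rds.amazonaws.com',
                        port=5432, dbname='mydb',
                        user='awsuser', password='YOUR_PASSWORD')
cur = conn.cursor()
cur.execute('SELECT version();')
print(cur.fetchone())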

But it doesn’t stop there. For the consumer just wanting to run their own web-hosted instance of WordPress, Amazon virtual private servers are now available: Amazon Lightsail – The Power of AWS, the Simplicity of a VPS (though it looks a bit pricey compared to something like Reclaim Hosting…)

Back to the big commercial users, another of the benefits of using Amazon Web Services, whose resources far exceed the capacity of all but the largest technology operating companies, is that you can avail yourself of the large amounts of computing resource that might be required to analyse and process large datasets. Very large datasets. Huge datasets, in fact. Datasets so huge that you need a freight container to ship the data to Amazon because you’re unlikely to have the bandwidth to get it there via any other means. Freight containers like AWS Snowmobile (H/T Les Carr for the pointer).

According to the FAQ, each Snowmobile is a secure data truck with up to 100PB storage capacity in a 45-foot long High Cube tamper-resistant, water-resistant, temperature controlled and GPS-tracked shipping container. On arrival at your datacentre, it needs a 350 kW power supply (Amazon can supply a generator, if required). The physical connection to your datacentre is made using the supplied removable connector rack (up to two kilometers of networking cable are provided too).
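
A quick back-of-the-envelope calculation shows why the truck wins at this scale – even over a dedicated 10 Gbps link (a generous assumption for most datacentres), 100 PB takes years to shift:

# How long to transfer a full Snowmobile's worth of data over the network?
capacity_bits = 100e15 * 8                 # 100 PB expressed in bits
link_bps = 10e9                            # a dedicated 10 Gbps link
seconds = capacity_bits / link_bps
print(seconds / (60 * 60 * 24 * 365.25))   # ~2.5 years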

Once you have completed the data transfer using your local data connection, the Snowmobile is returned to a designated AWS region datacentre. It’s not clear how the data is then uploaded – maybe they just wheel the container into a spare bay and hook it up?

This is all starting to get really silly now…

Implementing Slack Slash Commands Using Amazon Lambda Functions – Encrypting the Slack Token

In an earlier post, Implementing Slack Slash Commands Using Amazon Lambda Functions – Getting Started, I avoided the use of an encrypted Slack token to identify the provenance of an incoming request in favour of the plaintext version, to try to simplify the “getting started with AWS Lambda functions” aspect of that recipe. In this post, I’ll describe how to step up to the mark and use the encrypted token.

Although I tried to limit myself to free tier usage, an invoice from Amazon made me realise that there’s a cost of $1 per month associated with generating and subscribing to AWS encryption keys…

To begin with, you’ll need to create an AWS encryption key. The method is described here, but I’ll walk you through it…

The key is generated from the IAM console – select the Encryption Keys element from the left hand sidebar, and then make sure you select the correct AWS region (that is, the region that the Lambda function is defined in) before creating the key:

[Image: IAM_1_Management_Console_and_Timeline]

Check again that you’re in the correct region, and then give your key an alias (I used slackslashtest):

[Image: IAM_7_Management_Console]

You then need to set various permissions for potential users of the encryption key. I avoided giving anyone administrative permissions:

[Image: IAM_2_Management_Console_and_Timeline]

but I did give usage permissions to the role I’d defined to execute my Lambda function:

[Image: IAM_3_Management_Console]

Once you’ve assigned the roles and defined the encryption key, you should be able to see it from the IAM Encryption Keys console listing:

[Image: IAM_5_Management_Console]

Select the encryption key and make a copy of the ARN that identifies it:

[Image: IAM_4_Management_Console]

You now need to add the ARN for this encryption key to a policy that defines what the role used to execute the Lambda function can do. From the IAM console, select Roles and then the role you’re interested in:

[Image: IAM_8_Management_Console]

Create a new role policy for that role:

[Image: IAM_9_Management_Console]

You can use the policy generator tool to create the policy:

[Image: IAM_13_Management_Console]

Select the AWS Key Management Service, and then select the Decrypt action. This will allow the role to use the decrypt method for the specified encryption key:

[Image: IAM_10_Management_Console]

Add the ARN for your encryption key (the one you copied above) and select Add Statement to add the decrypt action on the specified encryption key to the newly created role policy.

[Image: IAM_11_Management_Console]

You can now generate and review the policy – you may want to give it a sensible name:

[Image: IAM_12_Management_Console]

So… we’ve now created a key, with the alias slackslashtest, and given the role that executes the Lambda function permission to access it as part of the encryption key definition; we’ve then declared access to the Decrypt method via the role policy definition.
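
(For the record, all that console clickery boils down to attaching a small policy to the Lambda execution role. A boto3 sketch of the equivalent – the role name, policy name, account ID and key ARN are placeholders for your own values:)

import json
import boto3

iam = boto3.client('iam')

# Allow the Lambda execution role to call kms:Decrypt on our key, and nothing else
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["kms:Decrypt"],
        "Resource": ["arn:aws:kms:eu-west-1:123456789012:key/YOUR-KEY-ID"]
    }]
}

iam.put_role_policy(RoleName='lambda_basic_execution',
                    PolicyName='slackslashtest-decrypt',
                    PolicyDocument=json.dumps(policy))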

Now we need to use the encryption key to encrypt our Slack token. You can do this using the AWS CLI (Command Line Interface), which you first need to install on your computer. (I think I did this on a Mac using Homebrew? I’m not sure if there’s an online console way of doing the encryption?)

Once the AWS CLI is installed, you need to configure it. To do this, you need to get some more keys. From the IAM console, select Users and then your user. You now need to Create Access Key.

[Image: IAM_Management_Console_k]

Creating an access key is fraught with risk – you get one opportunity to look at the key values, and one opportunity to download the credentials, and that’s it! So make a note of the values…

[Image: IAM_Management_Console_k2]

You’re now going to use these access keys to set up the AWS CLI on your computer (you should only need to do this once). After ensuring that the AWS CLI is installed (enter the command aws on the command line and see if it takes!), run the command aws configure and provide your access key credentials. Also make sure you select the region you want to work in.

[Image: terminal session – running aws configure]

Having configured the CLI with permission to talk to the AWS servers, you can now use it to encrypt the Slack token. Run the command:

aws kms encrypt --key-id alias/YOUR_KEY_ALIAS --plaintext "YOUR_SLACK_TOKEN"

using appropriate values for the AWS encryption key alias (mine was slackslashtest) and Slack token. This calls the key encryption service and uses the specified encryption key, via its alias, to encrypt the plaintext string.
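
(If you’d rather stay in Python than use the CLI, the equivalent boto3 call looks something like the following – the CLI’s base64 CiphertextBlob output corresponds to base64-encoding the raw bytes boto3 returns:)

import base64
import boto3

kms = boto3.client('kms', region_name='eu-west-1')   # the region your key lives in

resp = kms.encrypt(KeyId='alias/slackslashtest', Plaintext=b'YOUR_SLACK_TOKEN')

# boto3 hands back raw bytes; base64-encode them to get the CLI-style string
print(base64.b64encode(resp['CiphertextBlob']))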

[Image: terminal session showing the aws kms encrypt output]

The CiphertextBlob is the encrypted version of the token. In your AWS Lambda function definition, you can use this value as the encrypted expected token from Slack that checks the provenance of whoever’s made a request to the Lambda function:

[Image: Lambda_Management_Console_and_slashtest___OUseful_Slack]

Comment out – or better, delete! – the original plaintext version of the Slack token that we used as a shortcut previously, and save the Lambda function.
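
With the encrypted token pasted in, the top of the function reverts to something like the blueprint’s original form (imports shown for completeness):

import boto3
from base64 import b64decode

ENCRYPTED_EXPECTED_TOKEN = "<kmsEncryptedToken>"  # your base-64 encoded CiphertextBlob from the previous step

kms = boto3.client('kms')
expected_token = kms.decrypt(CiphertextBlob=b64decode(ENCRYPTED_EXPECTED_TOKEN))['Plaintext']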

Now when you call the Lambda function from Slack, via the slash command, it should run as before, only this time the Slack token lookup is made against an encrypted, rather than plaintext, version of it on the AWS side.

In the final post of this short series, I’ll describe how to write a simple test event to test the Lambda function.

Implementing Slack Slash Commands Using Amazon Lambda Functions – Getting Started

The cloud, it seems, is everywhere nowadays. One way I find it useful to classify the offerings is the following crude categorisation:

  • applications, such as Google Docs or Gmail;
  • infrastructure, such as the AWS (Amazon Web Services) S3 storage service or the EC2 compute service (virtual servers and containers);
  • services, such as the AWS Simple Queue Service (SQS) or Lambda functions.

Other ways of categorising offerings are available too; for example, AWS divvy up their offerings as follows:

[Image: AWS_Management_Console]

Having recently signed back into the AWS world, I thought I’d start to try out some of the first year free tier offerings. So for this first bit of toe dipping into the AWS ocean, I thought I’d see if I could make use of Amazon Lambda functions – “serverless” computational functions executed by AWS – to implement something akin to the Slack slash command handler I described in the previous post.

In that previous post, I described how I used a hook.io Slack /slash pattern that takes an HTTP POST request from a Slack slash extension to call out to a microservice on hook.io; that service responds to an incoming callback extension on Slack. The microservice itself also makes a query request to a third party search API. The architecture looks something like this (though I wonder if I could have simplified it by just responding to the slash command request, rather than returning the response via the Slack incoming extension?):

[Diagram: slack-hook]

Amazon Lambda functions work in a similar way to the way hook.io handles the compute function definition and its execution, but the invocation needs to come either from an event triggered by another AWS source or over HTTPS using an event raised by the Amazon API Gateway (AWS Lambda Function and Event Sources). That is, we need a pattern that looks more like this (though I haven’t tried the call out to the UK Parliament API yet):

[Diagram: aws_slack]

A recent post on the AWS blog – New – Slack Integration Blueprints for AWS Lambda – described a simple blueprint for implementing a simple “echo” slash command handler running on AWS. Excellent – it took me less than half an hour to hack together the hook.io thing, so I was hoping for the same with AWS.

Hmm…

That was this morning, well before coffee, and now it’s after lunch. Having got it working, it’s a simple five minute job, but it took me a couple of hours to find the 5 minute route. (Trying to follow notes on the web is one reason I blog the way I do, and why I have such high regard (honestly!) for the majority of OU materials. Recalling the times when I used to work through maths texts, too many tutorials have a “hence” or “just” step that may be obvious to an expert, but is a huge blocker to a novice…)

So here’s the five minute version (maybe fifteen!;-), containing pictures with boxes and arrows and a paragraph associated with each one to describe what’s going on…

Step the First

You need an Amazon AWS account – which means handing over your credit card. That said, when you sign up you get access to the free tier for a year. You may even get additional credit if you sign up via the Github Student Developer Pack.

Step the Second

Go to the AWS Lambda console (you may want to change region – I’m going via Ireland) and get started…

[Image: Lambda_Management_Console]

Step the Third

We can make use of the simple template for the slack echo command using Python.

[Image: Lambda_Management_Console2]

Step the Fourth

In this step, we start naming things. Names are important, because we’ll be calling things by name to invoke them; you need to keep track of what’s called what and where so that you can make sure you’re calling it properly.

The first thing you need to do is give your Lambda function a name; I’m calling mine simpletest. This is effectively a filename – for the Python function I’m creating, we can think of this setting as saving the filename of the local/inline copy of the function code as simpletest.py.

[Image: Lambda_Management_Console_1]

The second thing you need to check is the name of the function in the code you want to invoke when the lambda function is called. In the example code, this is the function lambda_handler().

The third thing you need to check is the name of the handler that will be executed when the Lambda function is triggered. This is the function-in-the-file we want to run in the form FILENAME.FUNCTION. In this example, simpletest.lambda_handler.
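
To make the naming concrete, here’s a stripped-down sketch of the sort of thing simpletest.lambda_handler ends up doing – illustrative only, since the real blueprint adds error handling and a few more fields:

# simpletest.py – so the handler is addressed as simpletest.lambda_handler
from urlparse import parse_qs   # the blueprint targets the Python 2.7 runtime;
                                # use urllib.parse in Python 3

expected_token = 'YOUR_SLACK_TOKEN'   # see Step the Seventh below

def lambda_handler(event, context):
    # event['body'] holds the urlencoded payload POSTed by Slack
    # (courtesy of the API Gateway mapping template set up later in this post)
    params = parse_qs(event['body'])
    if params['token'][0] != expected_token:
        raise Exception("Invalid request token")

    user = params['user_name'][0]
    command = params['command'][0]
    channel = params['channel_name'][0]

    # Echo a simple acknowledgement back to Slack
    return "%s invoked %s in %s" % (user, command, channel)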

Step the Fifth

Define the Lambda function role. The suggested role is a “Basic execution role”. On first run you won’t have one of these, so you’ll need to create one (your browser will possibly need pop-ups enabling).

[Image: IAM_Management_Console]

Step the Sixth

If you now look at the guidance given in the example Lambda function code, it starts off with the following:

Follow these steps to configure the slash command in Slack:
1. Navigate to https://<your-team-domain>.slack.com/services/new
2. Search for and select "Slash Commands".
3. Enter a name for your command and click "Add Slash Command Integration".
4. Copy the token string from the integration settings and use it in the next section.
5. After you complete this blueprint, enter the provided API endpoint URL in the URL field.

This is all good advice. Except for the use it in the next section bit, because we’re going to ignore that for now.

Step the Seventh – just don’t…

In the guidance, steps are described for encrypting the token you got from the Slack slash definition page. This is Good Practice, but a real pain if you’re just trying to get started and want to check things are working in the first place, because you’ll quite possibly end up going down various ratholes. (I’ll describe what you need to do to follow those steps in another post.)

So for the instructions that begin:

Follow these steps to encrypt your Slack token for use in this function:

just ignore them. Instead, edit the code, comment out the encrypted token handler bits, and paste in a plaintext version of the token you got from Slack. (We’re just trying stuff out, remember… we can reset the token and move to an encrypted one once we know the other bits are working).

#ENCRYPTED_EXPECTED_TOKEN = "<kmsEncryptedToken>" # Enter the base-64 encoded, encrypted Slack command token (CiphertextBlob)

#kms = boto3.client('kms')
#expected_token = kms.decrypt(CiphertextBlob = b64decode(ENCRYPTED_EXPECTED_TOKEN))['Plaintext']

expected_token = 'YOUR_SLACK_TOKEN'

Step the Eighth

The next bit of guidance (the section beginning Follow these steps to complete the configuration of your command API endpoint) refers to what happens in the next step – which I’ll walk through…

Click on Next from the function definition page, and start to configure the API endpoint, specifically setting the Method to POST and the Security to Open. (You might also want to change the name of the API to something more appropriate, perhaps away from LambdMicroservice and towards something more personally recognisable, such as slacktestservice.) Leave the deployment stage set to prod.

[Image: Lambda_Management_Console_2]

Step the Ninth

Move on to the next step, and you can create your Lambda function:

[Image: Lambda_Management_Console_3]

But… we’re still not there yet…

Step the Tenth

…there’s still stuff to do with the API definition. From the API Endpoints tab, you need to go into the prod deployment stage settings:

[Image: Lambda_Management_Console_4]

This will allow us to tweak the way that the API handles requests made to it.

Step the Eleventh

From the API Gateway console, select the service associated with the Lambda function, which by default was called LambdMicroservice (if you renamed the service, for example to slacktestservice, click on that instead).

[Image: API_Gateway_1]

Step the Twelfth

Select the simpletest function, and click on the POST method. This shows the steps associated with the call handler. Click on the Integration Request setting.

[Image: API_Gateway_2]

We now need to set the API service up so it can handle the Slack POSTed content.

Step the Thirteenth

The Integration Request needs customising to handle the form-encoded data POSTed from Slack. To do this, we need to create a Mapping Template that repackages that content as JSON.

[Image: API_Gateway_3]

So create one…

Step the Fourteenth

The mapping we need to make is from the accepted application/x-www-form-urlencoded type. (Note, the official guidance currently (incorrectly) sets this as x-www-form-urlencoded).

[Image: API_Gateway_4]

Step the Fifteenth

Select the Mapping template, and define the template as follows: 

{"body": $input.json("$")}

[Image: API_Gateway_5]

Accept the template setting.
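
What this buys you: Slack POSTs an application/x-www-form-urlencoded string, and the template wraps it as the body value of a JSON object – which is exactly what the blueprint code then unpacks. A toy illustration (made-up parameter values):

from urlparse import parse_qs   # urllib.parse in Python 3

# What the mapping template hands to the Lambda function:
event = {'body': 'token=abc123&user_name=tony&command=%2Fslashtest'}

params = parse_qs(event['body'])
print(params['user_name'][0])   # -> tony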

Step the Sixteenth

Having defined the mapping template, deploy the API.

[Image: API_Gateway_6]

Make sure you deploy to the correct place (recall, we were using prod)!

[Image: API_Gateway_7]

Step the Seventeenth

From the Lambda function control panel, you should be able to see the URL for your API endpoint. Grab a copy of this URL.

[Image: Lambda_Management_Console_5]

Step the Eighteenth

Paste the API endpoint URL into the URL field of the Slack slash command definition page – making sure it points to the correct function handler (in my case, simpletest).

[Image: Slash_Commands___OUseful_Slack_2]

Make sure you save/update the settings!

Step the Nineteenth

Finally, you should be able to try out your Slack slash command…

[Image: slashtest___OUseful_Slack_aws]

Summary

Phew… got there eventually, albeit insecurely… In a later post, I’ll describe how to do the token encryption bit, because for an AWS n00b it again takes multiple, and not necessarily obvious, steps… I’ll also describe how to set up a simple test case for testing out the function.

PS If I’ve missed anything out in this tutorial, please let me know. I’d only intended to spend half an hour or so tinkering and half an hour blogging this, and it’s now getting on for six hours after I started, though a fair chunk of that time was also spent putting this post together… So if I can spare anyone else the pain…!;-)