Why did I leave it so long? Busy training, sorry. And then Christmas. And I wanted to get my head around everything I learnt in such a short space of time. Why bother bringing it up now? Because it's still relevant. I didn't want to just regurgitate all of the announcements from re:Invent 2017 as so many other people do; you can get those fresh from AWS themselves. Instead I'm going to list what I feel are the top five announcements from the conference.
This post is based on a conversation I had with a guy on the way from my hotel (where I'd stopped off to grab the purple wig that broke check-in) to re:Play, The World's Biggest Rave [that is attended solely by Geeks]. He seemed to have spent the entire conference at the gaming tables, as he didn't really even know what a VPC is [or indeed that there was a HUGE party about to start]. I basically delivered the AWS Technical Essentials overview to him in the 30 minutes or so it took to get to the three aircraft hangars that held the party. Because, frankly, yes, I can talk that fast. I even had time to answer his question: "what are your top five announcements?"
NOTE: cheesy title supplied by my boss. Blame him.
Everyone's going to have a different list, but here are my top five announcements from the conference.
Wearing my trainer's hat:
1: Inter-region VPC peering
Why's it on the list?
Because it's (one of) the most requested things. In the olden days you'd have to stand up a VPN between your VPCs to do any kind of network interconnect (or go via your corporate data centre), then a couple of years ago they launched VPC peering but it was limited to intra-region only.
Also because I predicted it in QA's annual "what are AWS going to announce this year" sweepstake. I may or may not have won a set of steak knives, depending on whether you, my loyal reader, approve of gambling or not. It was hinted at by James Hamilton at re:Invent 2016, when he talked about Amazon laying their own transoceanic cables. Further hinted at a few weeks before the conference began with global Direct Connect (DX), where nowadays if you have DX into any region, you've got it into all regions (my assumption is that their private backbone is now complete enough but that's purely an assumption and shouldn't be taken as factual truth).
Traffic between inter-region peered VPCs is encrypted and, just as importantly, stays on the AWS global backbone rather than traversing the public internet.
Only supported in some regions for now (Ohio, Virginia, Oregon and Ireland at the time of writing).
2(T): Neptune - Fully-managed graph database (limited preview)
Because you can now literally use horses for courses with managed databases and no servers to feed, clothe and water. Relational? RDS. Data warehouse? Redshift. Document / key-value store? DynamoDB (or ElastiCache for the key-value store). Graph? Neptune!
You what now?
A graph database is very good at modelling many-to-many (M:N) relationships. Think social media - you are a node (user) with lots of edges (connections) to other nodes (users, companies, apps, etc.). Those "friend suggestions" are created by traversing the graph, looking at who else has edges to things that you have edges to.
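To make the idea concrete, here's a toy friend-suggestion traversal in plain Python - an adjacency map standing in for the graph. (Purely illustrative: a real graph database does this natively, at scale, via its query language.)

```python
from collections import Counter

def suggest_friends(graph, user, top_n=3):
    """Suggest users two hops away, ranked by number of mutual connections.

    `graph` is a toy adjacency map: node -> set of directly connected nodes.
    """
    direct = graph.get(user, set())
    mutual_counts = Counter()
    for friend in direct:
        for candidate in graph.get(friend, set()):
            # Skip the user themselves and people they already know
            if candidate != user and candidate not in direct:
                mutual_counts[candidate] += 1
    return [name for name, _ in mutual_counts.most_common(top_n)]

# Toy social graph (undirected, so edges are listed both ways)
graph = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice", "dave", "erin"},
    "dave":  {"bob", "carol"},
    "erin":  {"carol"},
}

print(suggest_friends(graph, "alice"))  # ['dave', 'erin'] - dave has two mutuals
```

The ranking-by-mutual-connections trick is exactly the sort of multi-hop traversal that relational joins make painful and graph databases make trivial.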
A use case for us here at QA might be Delegates and Courses, the basis for a recommendation engine that says "people who attended CourseX also attended CourseY". I'm itching to get on the preview / convince the powers that be to let me at some anonymised data.
Relational databases aren't great at modelling lots of M:N relationships (they're better at 1:M (and M:1)). Schema-less (/ NoSQL / post-relational / whatever you prefer to call them) databases are better for high locality (say documents - it's no coincidence that Microsoft's NoSQL offering was called DocumentDB, since rebranded as Azure Cosmos DB - or non-related data like high score tables and IoT sensor data).
A native graph database rounds things out nicely.
It supports two popular graph implementations:
- Property Graph, queried via Apache TinkerPop, which supports imperative (programmatic) querying using Gremlin, and
- RDF (Resource Description Framework - see Tim Berners-Lee's Semantic Web), which is queried declaratively using an SQL (or "a SQL", depending on how you pronounce "SQL") dialect called SPARQL (pronounced "Sparkle").
It's an either/or decision of course. You won't be able to do both.
The storage engine allows for transactions and keeps six copies of your data across three AZs (as it was presented to me; I initially read that as three "facilities", but the service is in preview in Virginia, which has six AZs, so those AZs may well be genuine AZs).
Backup and restore is very interesting - as a side effect of how the storage layer has been written, point-in-time restore is a breeze. Even more interestingly, you can go back to a point in time (say, just before a node was deleted) and replay all updates except the one that deleted that node, by making it (and possibly others) "invisible".
2(T): Aurora serverless (limited preview)
If you know my predilection for cost-consciousness you'll know why.
Why am I running a db.r4.large 24/7 when I'm a 9-to-5 company? (Or in my case, why am I running a db.t2.micro 24/7 when I only need it for five minutes a month to run a demo?)
If you ain't using it, you ain't paying for it. This is huge folks.
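Back-of-an-envelope numbers show why. (The hourly rate below is made up purely for illustration - check the pricing page for real figures.)

```python
# Illustrative only: a made-up hourly rate, NOT real Aurora pricing.
RATE_PER_HOUR = 0.29  # hypothetical on-demand cost of a db.r4.large

always_on_hours = 24 * 7  # 168 hours/week, running round the clock
office_hours = 8 * 5      # 9-to-5 is 8 hours a day, Monday to Friday: 40 hours/week

always_on_cost = always_on_hours * RATE_PER_HOUR
office_cost = office_hours * RATE_PER_HOUR

print(f"Always on:    ${always_on_cost:.2f}/week")
print(f"Office hours: ${office_cost:.2f}/week")
print(f"Saving:       {100 * (1 - office_cost / always_on_cost):.0f}%")
```

Whatever the actual per-hour price, only paying for the hours you actually use knocks roughly three-quarters off the bill for a 9-to-5 workload.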
2(T): DynamoDB global tables
Well, it wasn't that hard to achieve yourself using DynamoDB Streams and Lambda, and lots of people were doing it, so why not make it a button-click? Cross-region replication for DynamoDB.
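A sketch of what the DIY version boiled down to - a change stream from one region's table replayed into replicas elsewhere. (The record shapes and names here are illustrative toys, not the real DynamoDB Streams event format.)

```python
def replicate(stream_records, replicas):
    """Apply a list of change records to every replica table (a plain dict
    standing in for a regional DynamoDB table in this toy model)."""
    for record in stream_records:
        for table in replicas.values():
            if record["event"] in ("INSERT", "MODIFY"):
                table[record["key"]] = record["new_image"]
            elif record["event"] == "REMOVE":
                table.pop(record["key"], None)

us_east_1 = {}
eu_west_1 = {}
stream = [
    {"event": "INSERT", "key": "user#1", "new_image": {"name": "Daniel"}},
    {"event": "MODIFY", "key": "user#1", "new_image": {"name": "Dan"}},
]
replicate(stream, {"us-east-1": us_east_1, "eu-west-1": eu_west_1})
print(us_east_1 == eu_west_1)  # True: both replicas converge
```

In real life you also had to handle retries, ordering and conflicts yourself - which is precisely the undifferentiated heavy lifting the button-click now removes.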
Not available in all regions (yet).
[Yes I'm cheating by densely-ranking, but the three runners-up are all database related, so, whatevs...]
3: GuardDuty - intelligent threat detection
When I deliver the Security Operations on AWS course, I invariably quote a fellow trainer who points out that security is a Big Data problem - you need to analyse potentially thousands of log events (audit and server), examine the communications of potentially thousands of hosts, etc., in real time, looking for patterns of behaviour. You can imagine there are thousands of AWS customers all trying to do pretty much the same thing with the same resources - and to keep it all available, scalable and up to date with the latest threat intelligence. So Amazon say (as always), "yeah, let's make that easier for you."
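In miniature, that kind of pattern-hunting looks something like this (made-up events and a naive threshold; the real service analyses CloudTrail, flow and DNS logs at vastly greater volume, with actual threat intelligence behind it):

```python
from collections import Counter

# Hypothetical, simplified audit-log events for illustration only
events = [
    {"source_ip": "203.0.113.9",  "action": "ConsoleLogin", "outcome": "Failure"},
    {"source_ip": "203.0.113.9",  "action": "ConsoleLogin", "outcome": "Failure"},
    {"source_ip": "203.0.113.9",  "action": "ConsoleLogin", "outcome": "Failure"},
    {"source_ip": "198.51.100.2", "action": "ConsoleLogin", "outcome": "Success"},
]

THRESHOLD = 3  # flag an IP once it racks up this many failed logins

failures = Counter(
    e["source_ip"]
    for e in events
    if e["action"] == "ConsoleLogin" and e["outcome"] == "Failure"
)
suspects = [ip for ip, count in failures.items() if count >= THRESHOLD]
print(suspects)  # ['203.0.113.9']
```

Now multiply that by every log source, every host and every known attack pattern, continuously - and you can see why handing it to a managed service is attractive.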
Arguably this should be higher up the list, but it's my list so it's third.
Not sure if it's right for you? There's a thirty-day free trial. Thereafter, you pay for the number of CloudTrail events processed and the volume of log data analysed.
4: SageMaker - fully-managed Machine Learning engine. No servers.
Couple of reasons.
Firstly, because ML is hard. Preparing your training data is hard. Preparing your testing data is even harder. Training the model is ... well, OK, not that hard, but not easy. Choosing the right libraries and algorithms is hard. Publishing the model so that you can use it is hard. AWS want it to be as easy as possible, so they've taken all the undifferentiated heavy lifting out of it and just left us with choices. MXNet or TensorFlow (or something else)? Off you go.
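Even the "easy" step of carving data into training, validation and test sets needs doing properly. A minimal sketch (toy data; real preparation also means cleaning, labelling and feature engineering, which is the genuinely hard part):

```python
import random

def split_dataset(rows, train_frac=0.7, test_frac=0.15, seed=42):
    """Shuffle rows and split them into train / validation / test sets.

    A seeded shuffle keeps the split reproducible between runs.
    """
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_test = int(len(shuffled) * test_frac)
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    validation = shuffled[n_train + n_test:]
    return train, validation, test

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

Keeping the test set untouched until the very end is what stops you fooling yourself about how good the model is.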
Secondly, it proves that AWS never sit still. Amazon ML was launched, what, a couple of years ago? And (in my experience) it came in for some criticism for various reasons - "not flexible enough", "doesn't include my favourite algorithms", etc. Rather than trying to tweak what was a perfectly adequate if not earth-shatteringly awesome offering, they rebuilt from the ground up, based on (as always) the demands of their customers.
OK three reasons. Great name! What does it do? It makes sages of us all... [...assuming we prepare our training data correctly. That's still hard.]
5: Hyper-V import via Server Migration Service (SMS [but not text messaging!])
Do I really need to spell it out? You can now import Hyper-V virtual machines into AWS. Absolutely not a surprising announcement, as it was always going to follow vCenter support, but I know it's going to make a lot of customers happy. Going after a particular user base, AWS?
Honourable mentions:
- Aurora multi-region, multi-master.
- DynamoDB backup and restore.
Yeah, I thought I was a developer, looks like I'm a DBA after all...
Wearing my fanboy hat:
- Sumerian - democratised AR / VR development tool. Looks totes amazeballs. Gives me the self-justification I need to buy an Oculus Rift. I can actually see some training applications for it, so this isn't just goshwow. It has a cool name. And I got onto the preview!
- SageMaker - makes both lists.
- DeepLens - A (fairly) cheap "toy" ML tool with camera, intended to encourage devs to learn more about ML. Use it to switch your coffee maker on when it detects your number plate outside the house? Monitor your pet while you're out to see if they're happy or not? Sounds cool, no idea how to start on that though! Which I guess makes me the target market.
- Aurora Serverless - also makes both lists.
- Fargate - Fully managed ECS; I might actually start using containers now that I don't have to faff around with feeding, clothing and watering EC2 instances!
Other takeaways from the event
Would have to include (in no particular order):
- Breaking check-in by posing for my badge photo in a purple wig (and not turning up to check-in wearing it...)
- The "Future of Sport Entertainment" part of Werner's keynote. Or rather, the demonstration part. The talky bit got a bit repetitive. Worth looking for on The You Tube.
- Colleen Manaher from U.S. Customs and Border Protection talking about the future of passport control (hint: facial recognition) at the Partner Keynote in full regalia including her sidearm!
- The obvious push for a greater diversity of speakers in the keynotes. There was a lot happening around We Power Tech throughout the conference and it was good to see this being actively supported.
- Bumping into some Alumni at the bar outside my hotel the day before the conference kicked off and spending a very Happy Hour (or three) with them. You know who you are gents. It was good to catch up.
- Not smashing my phone on arrival this year (more kind of a happier memory than last year).
- Socks! Everyone was giving out socks this year!
- Being mistaken for Jeff Barr at re:Play because I was wearing my nylon purple fire hazard wig (Jeff had a purple dye-job and I look nothing like him, although I did walk past him twice when I wasn't wearing it).
That's it for my (belated) re:Invent 2017 wrap-up! Come back next year for more. If you can be bothered and if AWS let me back and if QA let me go that is...
Daniel Ives has been helping people to build Amazing Things in the cloud for 10 years.