DynamoDB Streams is an absolutely amazing feature. It is a mixture of SQS and SNS with a built-in retry strategy, which lets you apply a Kaizen (https://en.wikipedia.org/wiki/Kaizen) approach to your solution. Review collected by and hosted on G2.com.
Lack of SQL-like querying. But I have heard that the Athena service is going to support DynamoDB as a source soon, so this should improve.
As far as getting started goes, it's hard to beat DynamoDB: open the AWS console, make a new table, and start putting data into it from the command line or one of the various client libraries. The pricing is very simple: the more throughput you want, the more you pay. The data model is very flexible, being a subset (representing a superset) of JSON, with the only requirements being that the primary key and the optional sort key must be present. Everything else is up to the user, allowing the data model to change over time.
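The create-a-table-and-start-putting-data flow above can be sketched with boto3. This is a minimal illustration, not a recommended schema: the table name and attribute names ("Reviews", "pk", "sk") are made up, and the network calls are left as comments since they need AWS credentials. What it shows is the point about the data model: only the key attributes are declared; everything else is free-form.

```python
def table_definition(name):
    """Request parameters for boto3's create_table. Only the partition
    (HASH) key and optional sort (RANGE) key are declared up front;
    all other attributes are schemaless and can evolve over time."""
    return {
        "TableName": name,
        "KeySchema": [
            {"AttributeName": "pk", "KeyType": "HASH"},   # partition key
            {"AttributeName": "sk", "KeyType": "RANGE"},  # optional sort key
        ],
        "AttributeDefinitions": [
            {"AttributeName": "pk", "AttributeType": "S"},
            {"AttributeName": "sk", "AttributeType": "S"},
        ],
        # Provisioned throughput: you pay for what you reserve.
        "ProvisionedThroughput": {
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5,
        },
    }

# With AWS credentials configured, the real calls would be:
#   import boto3
#   dynamodb = boto3.resource("dynamodb")
#   dynamodb.create_table(**table_definition("Reviews"))
#   table = dynamodb.Table("Reviews")
#   table.put_item(Item={"pk": "user#1", "sk": "2024-01-01",
#                        "rating": 5, "tags": ["nosql"]})  # free-form beyond the keys
```

The `put_item` payload illustrates the flexibility: beyond `pk` and `sk`, items in the same table can carry entirely different attributes.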
Deciding how much capacity your particular application will need can be difficult, and from memory, changing the values can take a few minutes if you get it wrong and need to respond to excess load. There are some interesting gotchas too, such as its inability to store empty strings. Empty strings should be distinct from non-existent strings, so I regard this as a serious flaw in the data model (I'm sure there are a million ways to work around it, and the problem comes from the implementation of a million independent solutions).
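One of the "million independent solutions" to the empty-string restriction is sentinel substitution: replace `""` with a marker value on write and reverse it on read. A hypothetical sketch; the sentinel string here is my own choice, and it only works if the marker can never occur in real data:

```python
# Hypothetical sentinel; must never collide with legitimate attribute values.
EMPTY_SENTINEL = "__EMPTY__"

def encode_item(item):
    """Replace empty-string values with the sentinel before put_item,
    since DynamoDB (at the time of this review) rejected them."""
    return {k: (EMPTY_SENTINEL if v == "" else v) for k, v in item.items()}

def decode_item(item):
    """Reverse the substitution after get_item, restoring empty strings."""
    return {k: ("" if v == EMPTY_SENTINEL else v) for k, v in item.items()}

item = {"pk": "user#1", "nickname": ""}
stored = encode_item(item)          # nickname becomes "__EMPTY__"
assert decode_item(stored) == item  # round-trips back to the empty string
```

The alternative workaround, dropping empty attributes entirely, loses exactly the distinction the review complains about: an empty string is no longer distinguishable from a non-existent one.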