Obtain the name of the Elastic Beanstalk autoscaling group

It is possible to export the name of the autoscaling group that Elastic Beanstalk provisions as part of each Elastic Beanstalk environment. This exported value can then be used from other CloudFormation stacks, for example to add custom scaling triggers. Values are exported by adding an Outputs section to a .ebextensions file.

If you have a large number of stacks it is very helpful to use a consistent naming scheme for the exported values. A namespace can be passed into Elastic Beanstalk by using a custom option.

Here is the CloudFormation YAML to include in a .ebextensions config file:

Parameters:
  BeanstalkASGName:
    Type: String
    Description: "The name to export the autoscaling group name under"
    Default:
      Fn::GetOptionSetting:
        OptionName: MyBeanstalkStackInfoName
        DefaultValue: unknown
Outputs:
  OutputAutoScalingGroupName:
    Description: Beanstalk AutoScalingGroup Name
    Value:
      Ref: "AWSEBAutoScalingGroup"
    Export:
      Name:
        Fn::Join:
          - "-"
          - - { "Ref" : "BeanstalkASGName" }
            - "AutoScalingGroup"

Note that Fn::GetOptionSetting does not seem to be allowed directly in the Outputs section, so instead we use it to set the default value of a parameter in the Parameters section, and then use the value indirectly via that parameter.

Here is the corresponding option setting to use in your master CloudFormation template (under the Elastic Beanstalk environment's OptionSettings):

Namespace: aws:elasticbeanstalk:customoption
OptionName: MyBeanstalkStackInfoName
Value: anyvalue
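
Another stack can then consume the exported name with Fn::ImportValue. Here is a minimal sketch using troposphere (which is also used later in this post), assuming the custom option value was anyvalue as above; the alarm resource, its name, and its thresholds are purely hypothetical examples of a custom scaling trigger you might attach:

from troposphere import ImportValue, Template
from troposphere.cloudwatch import Alarm, MetricDimension

t = Template()

# Hypothetical alarm in a separate stack, attached to the Auto Scaling group
# exported by the Elastic Beanstalk environment. "anyvalue" must match the
# value passed in via the custom option above.
t.add_resource(Alarm(
    "HighCpuAlarm",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[MetricDimension(
        Name="AutoScalingGroupName",
        Value=ImportValue("anyvalue-AutoScalingGroup"),
    )],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold="80",
    ComparisonOperator="GreaterThanThreshold",
))

print(t.to_json())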

CloudFormation example of an S3 bucket with attached SQS notifications

Creating an S3 bucket with an SQS queue attached is a simple and powerful configuration. CloudFormation allows one to express such a configuration as code and commit it to a git repository. I was not able to find a complete example of how to express this configuration using CloudFormation. What follows is written using the Troposphere library. Please do not take this post as an endorsement of Troposphere.

# Imports used in this example (troposphere plus the awacs policy helpers)
from troposphere import GetAtt, Join, Ref, Sub
from troposphere.s3 import (Bucket, BucketPolicy, NotificationConfiguration,
                            QueueConfigurations)
from troposphere.sqs import Queue, QueuePolicy
from awacs.aws import AWSPrincipal, Principal

t = self.template
 
# The queue which will handle the S3 event messages
t.add_resource(Queue(
    "MyQueue",
    VisibilityTimeout=30,
    MessageRetentionPeriod=60,
    QueueName=Sub("my-${AWS::Region}-${AWS::AccountId}")
))
 
# The bucket that will generate the S3 events. The NotificationConfiguration
# also supports SNS and Lambda. Notifications can also be filtered according
# to the S3 key of the object to which the event relates.
t.add_resource(Bucket(
    "MyBucket",
    BucketName=Sub("my-${AWS::Region}-${AWS::AccountId}"),
    # Note that the queue policy must be created first
    DependsOn="MyQueuePolicy",
    NotificationConfiguration=NotificationConfiguration(
        QueueConfigurations=[
            QueueConfigurations(
                Event="s3:ObjectCreated:*",
                Queue=GetAtt("MyQueue", "Arn"),
            )
        ]
    )
))
 
# The queue policy gives the S3 bucket permission to send messages to the queue.
# The queue policy can also be used to give permission to the message receiver.
t.add_resource(QueuePolicy(
    "MyQueuePolicy",
    Queues=[Ref("MyQueue")],
    PolicyDocument={
        "Version": "2012-10-17",
        "Statement": [
            # Allow the S3 bucket to publish to the queue
            # https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#grant
            # -destinations-permissions-to-s3
            {
                "Effect": "Allow",
                "Principal": Principal("Service", ["s3.amazonaws.com"]),
                "Action": [
                    "SQS:SendMessage"
                ],
                "Resource": GetAtt("MyQueue", "Arn"),
                "Condition": {
                    "ArnLike": {
                        # have to construct the ARN from the static bucket name to avoid
                        # the circular dependency
                        # https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-destination-s3/
                        "aws:SourceArn": Join("", [
                            "arn:aws:s3:::",
                            Sub("my-${AWS::Region}-${AWS::AccountId}")
                        ])
                    }
                }
            },
            # Allow some user to read from the queue. This is just an example,
            # please change this to match the permissions your use case requires.
            {
                "Effect": "Allow",
                "Principal": AWSPrincipal(GetAtt("MyUser", "Arn")),
                "Action": [
                    "sqs:ReceiveMessage"
                ],
                "Resource": GetAtt("MyQueue", "Arn"),
            }
        ]
    }
))
 
# Allow some user to manipulate the S3 bucket. This is just an example,
# please change this to match the permissions your use case requires.
t.add_resource(BucketPolicy(
    "MyBucketPolicy",
    Bucket=Ref("MyBucket"),
    PolicyDocument={
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": AWSPrincipal(GetAtt("MyUser", "Arn")),
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:DeleteObject"
                ],
                "Resource": Join("", [GetAtt("MyBucket", "Arn"), "/*"])
            }
        ]
    }
))
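
Once the stack is deployed, the S3 events arrive as JSON documents in the bodies of the SQS messages. Here is a minimal sketch of consuming them with boto3; the queue name is a placeholder standing in for the my-${AWS::Region}-${AWS::AccountId} pattern used above:

import json

import boto3

sqs = boto3.client("sqs")
# Placeholder queue name: substitute your region and account ID,
# or look the URL up from the stack outputs
queue_url = sqs.get_queue_url(QueueName="my-eu-west-1-123456789012")["QueueUrl"]

messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # long polling
)

for message in messages.get("Messages", []):
    body = json.loads(message["Body"])
    # Each record describes one S3 event, e.g. which object key was created.
    # The initial s3:TestEvent message has no Records, hence the .get()
    for record in body.get("Records", []):
        print(record["eventName"], record["s3"]["object"]["key"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])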

Server-to-Server authentication for the Microsoft Dynamics web API

To connect to the Microsoft Dynamics web API from another server you can use OAuth 2.0 with the client_credentials grant. Making the request to obtain the OAuth token is very simple.

The tenant ID is obtained from the Azure portal, under Manage > Properties, where it is indicated as the directory ID.

You can obtain the value for the scope from the dynamics portal by visiting Settings > Customization > Developer Resources and looking at the service root URL.

import requests

response = requests.post(
    f'https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token',
    data={
        'client_id': settings.DYNAMICS_CLIENT_ID,
        'scope': 'https://*********.api.crm*.dynamics.com/.default',
        'client_secret': settings.DYNAMICS_CLIENT_SECRET,
        'grant_type': 'client_credentials',
    },
)
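
The access token is returned in the JSON body of the response, along with its lifetime in seconds:

payload = response.json()
token = payload['access_token']      # Bearer token to send in the Authorization header
expires_in = payload['expires_in']   # lifetime in seconds; obtain a new token before this elapses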

A number of steps are required to make this work.

Register an OAuth App in Azure AD

There is a walkthrough of registering an OAuth app on Azure AD. The essential process is to use the Active Directory section of the Azure management portal to register a new application, give it the Dynamics 365 (online) permission under Delegated Permissions, and create a new secret for it.

Note that the redirect URI you enter doesn’t actually need to identify a real reachable resource.

Obtain Admin Consent for Dynamics 365

To obtain admin consent for the app, put the following URL into a web browser. In the URL you must fill in the ID of the OAuth app you registered in the previous step, and also the redirect URI you registered for that app.

https://login.microsoftonline.com/{tenant}/adminconsent?client_id={your_client_id}&state=12345&redirect_uri={your_redirect_uri}

Configure a System User on Dynamics 365

In order to use your approved OAuth client you need to create a corresponding system user on Dynamics itself. This user does not require a license.

You need to create an appropriate security role for this user. Then create the user, ensuring that you select the Application user form. The Application ID must match the one that you registered with Azure. The strange padlock icon on some of the fields means that you should not fill them in, because they will be looked up using the Application ID.

NOTE: you may find it useful to give the security role the prvActOnBehalfOfAnotherUser permission to allow your service to impersonate other users.

You will also need to give the new system user sufficient permissions to take the actions that your application will perform. Both the system user and the impersonated user must have permissions in order for the action to be permitted.

Make Requests using the token

You should now be able to make requests using the token:

    api_root = 'https://********.api.crm*.dynamics.com/api/data/v9.0/'
 
    r = requests.get(
        api_root + 'systemusers',
        params={
            f"$filter": "internalemailaddress eq '{email}'",
            "$select": "internalemailaddress",
        },
        headers={
            'Authorization': 'Bearer ' + token,
        })

To impersonate a user you have to send a custom header with the correct ID for the particular user you want to impersonate. You can find the ID of a user using the request in the example above.

'MSCRMCallerID': systemuserid
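
For example, here is a sketch of the same systemusers query made while impersonating another user; systemuserid stands for the GUID of that user, obtained from a query such as the one above:

r = requests.get(
    api_root + 'systemusers',
    params={
        "$select": "internalemailaddress",
    },
    headers={
        'Authorization': 'Bearer ' + token,
        # Impersonate the user with this systemuserid (a GUID)
        'MSCRMCallerID': systemuserid,
    })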

Connecting SNS to a Lambda function using CloudFormation

We are using Amazon CloudFormation to configure our infrastructure as code. We are doing video processing using a Lambda function triggered by a message on an SNS topic. The documentation on how to do this in CloudFormation is fairly poor. In this article I will show some troposphere code that does this.

First create the Lambda function. This one just logs its invocation event to the CloudWatch logs:

# Imports used throughout this section
from troposphere import GetAtt, Join, Ref
from troposphere.awslambda import Code, Function, Permission
from troposphere.iam import PolicyType, Role
from troposphere.sns import Subscription

def create_lambda(self):
    t = self.template
 
    code = [
        "exports.handler = function(event, context) {" +
        "    console.log(\"event: \", JSON.stringify(event, null, 4));" +
        "    context.succeed(\"success\");" +
        "}"
    ]
 
    return t.add_resource(Function(
        "LambdaFunction",
        Code=Code(
            ZipFile=Join("", code)
        ),
        Handler="index.handler",
        Role=GetAtt("LambdaExecutionRole", "Arn"),
        Runtime="nodejs4.3",
    ))

Subscribing the Lambda function to the SNS topic is straightforward; the subscription is set inline on the topic resource:

def subscribe_lambda_to_topic(self, topic, function):
    topic.Subscription = [Subscription(
        Protocol="lambda",
        Endpoint=GetAtt(function, "Arn")
    )]
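
The topic resource itself is not shown above; a minimal sketch of creating it with troposphere might look like the following (the resource and topic names are placeholders):

from troposphere.sns import Topic

def create_topic(self):
    t = self.template

    # The topic whose messages will trigger the Lambda function
    return t.add_resource(Topic(
        "NotificationTopic",
        TopicName="video-processing-notifications",  # placeholder name
    ))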

The most complicated, and least well documented, part of the configuration is granting the relevant authorisations. The Lambda function itself requires authorisation to use any resources it needs; in this example it is given authority to log to CloudWatch. It is also necessary to set a Lambda permission that allows SNS to invoke the function in response to a message on the topic. Note that this permission is not a normal IAM role and policy, but something specific to Lambda.

def give_permission_to_lambda(self, topic, function):
    t = self.template
 
    # This role gives the lambda the permissions it needs during execution
    lambda_execution_role = t.add_resource(Role(
        "LambdaExecutionRole",
        Path="/",
        AssumeRolePolicyDocument={"Version": "2012-10-17", "Statement": [
            {
                "Action": ["sts:AssumeRole"],
                "Effect": "Allow",
                "Principal": {
                    "Service": [
                        "lambda.amazonaws.com",
                    ]
                }
            }
        ]},
    ))
 
    lambda_execution_policy = t.add_resource(PolicyType(
        "LambdaExecutionPolicy",
        PolicyName="LambdaExecutionPolicy",
        PolicyDocument={
            "Version": "2012-10-17", "Statement": [
                {"Resource": "arn:aws:logs:*:*:*",
                 "Action": ["logs:*"],
                 "Effect": "Allow",
                 "Sid": "logaccess"}]},
        Roles=[Ref(lambda_execution_role)]
    ))
 
    t.add_resource(Permission(
        "InvokeLambdaPermission",
        FunctionName=GetAtt(function, "Arn"),
        Action="lambda:InvokeFunction",
        SourceArn=Ref(topic),
        Principal="sns.amazonaws.com"
    ))
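
Once the stack is deployed you can check the wiring by publishing a test message to the topic and then looking for the event in the function's CloudWatch logs. A minimal sketch with boto3, where the topic ARN is a placeholder to be replaced with the ARN output by your stack:

import boto3

sns = boto3.client("sns")

# Placeholder ARN: substitute the ARN of the topic created by the stack
sns.publish(
    TopicArn="arn:aws:sns:eu-west-1:123456789012:NotificationTopic",
    Message='{"video": "example.mp4"}',
)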