These are the docs for Azure Native v2. We recommend using the latest version, Azure Native v3.
Azure Native v2 v2.90.0 published on Thursday, Mar 27, 2025 by Pulumi

azure-native-v2.awsconnector.DynamoDbTable



A Microsoft.AwsConnector resource. Azure REST API version: 2024-12-01.

Example Usage

DynamoDbTables_CreateOrReplace

using System.Collections.Generic;
using System.Linq;
using Pulumi;
using AzureNative = Pulumi.AzureNative;

return await Deployment.RunAsync(() => 
{
    var dynamoDbTable = new AzureNative.AwsConnector.DynamoDbTable("dynamoDbTable", new()
    {
        Location = "fmkjilswdjyisfuwxuj",
        Name = "Replace this value with a string matching RegExp ^(z=.{0,259}[^zs.]$)(z!.*[zzzzzzzz])",
        Properties = new AzureNative.AwsConnector.Inputs.DynamoDBTablePropertiesArgs
        {
            Arn = "gimtbcfiznraniycjyalnwrfstm",
            AwsAccountId = "dejqcxb",
            AwsProperties = new AzureNative.AwsConnector.Inputs.AwsDynamoDBTablePropertiesArgs
            {
                Arn = "qbvqgymuxfzuwybdspdhcuvfouwnet",
                AttributeDefinitions = new[]
                {
                    new AzureNative.AwsConnector.Inputs.AttributeDefinitionArgs
                    {
                        AttributeName = "caryhpofnkqtoc",
                        AttributeType = "bcmjgzaljcemcrswr",
                    },
                },
                BillingMode = "pwxrsjcybdcidejuhvrckvxyxad",
                ContributorInsightsSpecification = new AzureNative.AwsConnector.Inputs.ContributorInsightsSpecificationArgs
                {
                    Enabled = true,
                },
                DeletionProtectionEnabled = true,
                GlobalSecondaryIndexes = new[]
                {
                    new AzureNative.AwsConnector.Inputs.GlobalSecondaryIndexArgs
                    {
                        ContributorInsightsSpecification = new AzureNative.AwsConnector.Inputs.ContributorInsightsSpecificationArgs
                        {
                            Enabled = true,
                        },
                        IndexName = "uqlzacnvsvayrvirrwwttb",
                        KeySchema = new[]
                        {
                            new AzureNative.AwsConnector.Inputs.KeySchemaArgs
                            {
                                AttributeName = "wisgqkyoouaxivtrtay",
                                KeyType = "kwkqgbxrwnoklpgmoypovxe",
                            },
                        },
                        Projection = new AzureNative.AwsConnector.Inputs.ProjectionArgs
                        {
                            NonKeyAttributes = new[]
                            {
                                "loqmvohtjsscueegam",
                            },
                            ProjectionType = "atbzepkydpgudoaqi",
                        },
                        ProvisionedThroughput = new AzureNative.AwsConnector.Inputs.ProvisionedThroughputArgs
                        {
                            ReadCapacityUnits = 10,
                            WriteCapacityUnits = 28,
                        },
                    },
                },
                ImportSourceSpecification = new AzureNative.AwsConnector.Inputs.ImportSourceSpecificationArgs
                {
                    InputCompressionType = "bjswmnwxleqmcth",
                    InputFormat = "grnhhysgejvbnecrqoynjomz",
                    InputFormatOptions = new AzureNative.AwsConnector.Inputs.InputFormatOptionsArgs
                    {
                        Csv = new AzureNative.AwsConnector.Inputs.CsvArgs
                        {
                            Delimiter = "qzowvvpwwhptthlgvrtnpyjszetrt",
                            HeaderList = new[]
                            {
                                "gminuylhgebpjx",
                            },
                        },
                    },
                    S3BucketSource = new AzureNative.AwsConnector.Inputs.S3BucketSourceArgs
                    {
                        S3Bucket = "exulhkspgmo",
                        S3BucketOwner = "pyawhaxbwqhgarz",
                        S3KeyPrefix = "ogjgqdsvu",
                    },
                },
                KeySchema = new[]
                {
                    new AzureNative.AwsConnector.Inputs.KeySchemaArgs
                    {
                        AttributeName = "wisgqkyoouaxivtrtay",
                        KeyType = "kwkqgbxrwnoklpgmoypovxe",
                    },
                },
                KinesisStreamSpecification = new AzureNative.AwsConnector.Inputs.KinesisStreamSpecificationArgs
                {
                    ApproximateCreationDateTimePrecision = AzureNative.AwsConnector.KinesisStreamSpecificationApproximateCreationDateTimePrecision.MICROSECOND,
                    StreamArn = "qldltl",
                },
                LocalSecondaryIndexes = new[]
                {
                    new AzureNative.AwsConnector.Inputs.LocalSecondaryIndexArgs
                    {
                        IndexName = "gintyosxvkjqpe",
                        KeySchema = new[]
                        {
                            new AzureNative.AwsConnector.Inputs.KeySchemaArgs
                            {
                                AttributeName = "wisgqkyoouaxivtrtay",
                                KeyType = "kwkqgbxrwnoklpgmoypovxe",
                            },
                        },
                        Projection = new AzureNative.AwsConnector.Inputs.ProjectionArgs
                        {
                            NonKeyAttributes = new[]
                            {
                                "loqmvohtjsscueegam",
                            },
                            ProjectionType = "atbzepkydpgudoaqi",
                        },
                    },
                },
                PointInTimeRecoverySpecification = new AzureNative.AwsConnector.Inputs.PointInTimeRecoverySpecificationArgs
                {
                    PointInTimeRecoveryEnabled = true,
                },
                ProvisionedThroughput = new AzureNative.AwsConnector.Inputs.ProvisionedThroughputArgs
                {
                    ReadCapacityUnits = 10,
                    WriteCapacityUnits = 28,
                },
                ResourcePolicy = null,
                SseSpecification = new AzureNative.AwsConnector.Inputs.SSESpecificationArgs
                {
                    KmsMasterKeyId = "rvwuejohzknzrntkvprgxt",
                    SseEnabled = true,
                    SseType = "osjalywculjbrystezvjojxe",
                },
                StreamArn = "xvkrzs",
                StreamSpecification = new AzureNative.AwsConnector.Inputs.StreamSpecificationArgs
                {
                    ResourcePolicy = null,
                    StreamViewType = "wemod",
                },
                TableClass = "tmbfrfbppwhjpm",
                TableName = "mqvlcdboopn",
                Tags = new[]
                {
                    new AzureNative.AwsConnector.Inputs.TagArgs
                    {
                        Key = "txipennfw",
                        Value = "dkgweupnz",
                    },
                },
                TimeToLiveSpecification = new AzureNative.AwsConnector.Inputs.TimeToLiveSpecificationArgs
                {
                    AttributeName = "sxbfejubturdtyusqywguqni",
                    Enabled = true,
                },
            },
            AwsRegion = "rdzrhtbydhmaxzuwe",
            AwsSourceSchema = "sqkkuxwamzevkp",
            AwsTags = 
            {
                { "key3791", "iikafuvbjkvnbogujm" },
            },
            PublicCloudConnectorsResourceId = "nugnoqcknmrrminwvfvloqsporjd",
            PublicCloudResourceName = "lkbwyvnzooydbnembmykhmw",
        },
        ResourceGroupName = "rgdynamoDBTable",
        Tags = 
        {
            { "key2178", "lyeternduvkobwvqhpicnxel" },
        },
    });

});
package main

import (
	awsconnector "github.com/pulumi/pulumi-azure-native-sdk/awsconnector/v2"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := awsconnector.NewDynamoDbTable(ctx, "dynamoDbTable", &awsconnector.DynamoDbTableArgs{
			Location: pulumi.String("fmkjilswdjyisfuwxuj"),
			Name:     pulumi.String("Replace this value with a string matching RegExp ^(z=.{0,259}[^zs.]$)(z!.*[zzzzzzzz])"),
			Properties: &awsconnector.DynamoDBTablePropertiesArgs{
				Arn:          pulumi.String("gimtbcfiznraniycjyalnwrfstm"),
				AwsAccountId: pulumi.String("dejqcxb"),
				AwsProperties: &awsconnector.AwsDynamoDBTablePropertiesArgs{
					Arn: pulumi.String("qbvqgymuxfzuwybdspdhcuvfouwnet"),
					AttributeDefinitions: awsconnector.AttributeDefinitionArray{
						&awsconnector.AttributeDefinitionArgs{
							AttributeName: pulumi.String("caryhpofnkqtoc"),
							AttributeType: pulumi.String("bcmjgzaljcemcrswr"),
						},
					},
					BillingMode: pulumi.String("pwxrsjcybdcidejuhvrckvxyxad"),
					ContributorInsightsSpecification: &awsconnector.ContributorInsightsSpecificationArgs{
						Enabled: pulumi.Bool(true),
					},
					DeletionProtectionEnabled: pulumi.Bool(true),
					GlobalSecondaryIndexes: awsconnector.GlobalSecondaryIndexArray{
						&awsconnector.GlobalSecondaryIndexArgs{
							ContributorInsightsSpecification: &awsconnector.ContributorInsightsSpecificationArgs{
								Enabled: pulumi.Bool(true),
							},
							IndexName: pulumi.String("uqlzacnvsvayrvirrwwttb"),
							KeySchema: awsconnector.KeySchemaArray{
								&awsconnector.KeySchemaArgs{
									AttributeName: pulumi.String("wisgqkyoouaxivtrtay"),
									KeyType:       pulumi.String("kwkqgbxrwnoklpgmoypovxe"),
								},
							},
							Projection: &awsconnector.ProjectionArgs{
								NonKeyAttributes: pulumi.StringArray{
									pulumi.String("loqmvohtjsscueegam"),
								},
								ProjectionType: pulumi.String("atbzepkydpgudoaqi"),
							},
							ProvisionedThroughput: &awsconnector.ProvisionedThroughputArgs{
								ReadCapacityUnits:  pulumi.Int(10),
								WriteCapacityUnits: pulumi.Int(28),
							},
						},
					},
					ImportSourceSpecification: &awsconnector.ImportSourceSpecificationArgs{
						InputCompressionType: pulumi.String("bjswmnwxleqmcth"),
						InputFormat:          pulumi.String("grnhhysgejvbnecrqoynjomz"),
						InputFormatOptions: &awsconnector.InputFormatOptionsArgs{
							Csv: &awsconnector.CsvArgs{
								Delimiter: pulumi.String("qzowvvpwwhptthlgvrtnpyjszetrt"),
								HeaderList: pulumi.StringArray{
									pulumi.String("gminuylhgebpjx"),
								},
							},
						},
						S3BucketSource: &awsconnector.S3BucketSourceArgs{
							S3Bucket:      pulumi.String("exulhkspgmo"),
							S3BucketOwner: pulumi.String("pyawhaxbwqhgarz"),
							S3KeyPrefix:   pulumi.String("ogjgqdsvu"),
						},
					},
					KeySchema: awsconnector.KeySchemaArray{
						&awsconnector.KeySchemaArgs{
							AttributeName: pulumi.String("wisgqkyoouaxivtrtay"),
							KeyType:       pulumi.String("kwkqgbxrwnoklpgmoypovxe"),
						},
					},
					KinesisStreamSpecification: &awsconnector.KinesisStreamSpecificationArgs{
						ApproximateCreationDateTimePrecision: pulumi.String(awsconnector.KinesisStreamSpecificationApproximateCreationDateTimePrecisionMICROSECOND),
						StreamArn:                            pulumi.String("qldltl"),
					},
					LocalSecondaryIndexes: awsconnector.LocalSecondaryIndexArray{
						&awsconnector.LocalSecondaryIndexArgs{
							IndexName: pulumi.String("gintyosxvkjqpe"),
							KeySchema: awsconnector.KeySchemaArray{
								&awsconnector.KeySchemaArgs{
									AttributeName: pulumi.String("wisgqkyoouaxivtrtay"),
									KeyType:       pulumi.String("kwkqgbxrwnoklpgmoypovxe"),
								},
							},
							Projection: &awsconnector.ProjectionArgs{
								NonKeyAttributes: pulumi.StringArray{
									pulumi.String("loqmvohtjsscueegam"),
								},
								ProjectionType: pulumi.String("atbzepkydpgudoaqi"),
							},
						},
					},
					PointInTimeRecoverySpecification: &awsconnector.PointInTimeRecoverySpecificationArgs{
						PointInTimeRecoveryEnabled: pulumi.Bool(true),
					},
					ProvisionedThroughput: &awsconnector.ProvisionedThroughputArgs{
						ReadCapacityUnits:  pulumi.Int(10),
						WriteCapacityUnits: pulumi.Int(28),
					},
					ResourcePolicy: &awsconnector.ResourcePolicyArgs{},
					SseSpecification: &awsconnector.SSESpecificationArgs{
						KmsMasterKeyId: pulumi.String("rvwuejohzknzrntkvprgxt"),
						SseEnabled:     pulumi.Bool(true),
						SseType:        pulumi.String("osjalywculjbrystezvjojxe"),
					},
					StreamArn: pulumi.String("xvkrzs"),
					StreamSpecification: &awsconnector.StreamSpecificationArgs{
						ResourcePolicy: &awsconnector.ResourcePolicyArgs{},
						StreamViewType: pulumi.String("wemod"),
					},
					TableClass: pulumi.String("tmbfrfbppwhjpm"),
					TableName:  pulumi.String("mqvlcdboopn"),
					Tags: awsconnector.TagArray{
						&awsconnector.TagArgs{
							Key:   pulumi.String("txipennfw"),
							Value: pulumi.String("dkgweupnz"),
						},
					},
					TimeToLiveSpecification: &awsconnector.TimeToLiveSpecificationArgs{
						AttributeName: pulumi.String("sxbfejubturdtyusqywguqni"),
						Enabled:       pulumi.Bool(true),
					},
				},
				AwsRegion:       pulumi.String("rdzrhtbydhmaxzuwe"),
				AwsSourceSchema: pulumi.String("sqkkuxwamzevkp"),
				AwsTags: pulumi.StringMap{
					"key3791": pulumi.String("iikafuvbjkvnbogujm"),
				},
				PublicCloudConnectorsResourceId: pulumi.String("nugnoqcknmrrminwvfvloqsporjd"),
				PublicCloudResourceName:         pulumi.String("lkbwyvnzooydbnembmykhmw"),
			},
			ResourceGroupName: pulumi.String("rgdynamoDBTable"),
			Tags: pulumi.StringMap{
				"key2178": pulumi.String("lyeternduvkobwvqhpicnxel"),
			},
		})
		if err != nil {
			return err
		}
		return nil
	})
}
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.azurenative.awsconnector.DynamoDbTable;
import com.pulumi.azurenative.awsconnector.DynamoDbTableArgs;
import com.pulumi.azurenative.awsconnector.inputs.DynamoDBTablePropertiesArgs;
import com.pulumi.azurenative.awsconnector.inputs.AwsDynamoDBTablePropertiesArgs;
import com.pulumi.azurenative.awsconnector.inputs.ContributorInsightsSpecificationArgs;
import com.pulumi.azurenative.awsconnector.inputs.ImportSourceSpecificationArgs;
import com.pulumi.azurenative.awsconnector.inputs.InputFormatOptionsArgs;
import com.pulumi.azurenative.awsconnector.inputs.CsvArgs;
import com.pulumi.azurenative.awsconnector.inputs.S3BucketSourceArgs;
import com.pulumi.azurenative.awsconnector.inputs.KinesisStreamSpecificationArgs;
import com.pulumi.azurenative.awsconnector.inputs.PointInTimeRecoverySpecificationArgs;
import com.pulumi.azurenative.awsconnector.inputs.ProvisionedThroughputArgs;
import com.pulumi.azurenative.awsconnector.inputs.ResourcePolicyArgs;
import com.pulumi.azurenative.awsconnector.inputs.SSESpecificationArgs;
import com.pulumi.azurenative.awsconnector.inputs.StreamSpecificationArgs;
import com.pulumi.azurenative.awsconnector.inputs.TimeToLiveSpecificationArgs;
import com.pulumi.azurenative.awsconnector.inputs.AttributeDefinitionArgs;
import com.pulumi.azurenative.awsconnector.inputs.GlobalSecondaryIndexArgs;
import com.pulumi.azurenative.awsconnector.inputs.KeySchemaArgs;
import com.pulumi.azurenative.awsconnector.inputs.ProjectionArgs;
import com.pulumi.azurenative.awsconnector.inputs.LocalSecondaryIndexArgs;
import com.pulumi.azurenative.awsconnector.inputs.TagArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var dynamoDbTable = new DynamoDbTable("dynamoDbTable", DynamoDbTableArgs.builder()
            .location("fmkjilswdjyisfuwxuj")
            .name("Replace this value with a string matching RegExp ^(z=.{0,259}[^zs.]$)(z!.*[zzzzzzzz])")
            .properties(DynamoDBTablePropertiesArgs.builder()
                .arn("gimtbcfiznraniycjyalnwrfstm")
                .awsAccountId("dejqcxb")
                .awsProperties(AwsDynamoDBTablePropertiesArgs.builder()
                    .arn("qbvqgymuxfzuwybdspdhcuvfouwnet")
                    .attributeDefinitions(AttributeDefinitionArgs.builder()
                        .attributeName("caryhpofnkqtoc")
                        .attributeType("bcmjgzaljcemcrswr")
                        .build())
                    .billingMode("pwxrsjcybdcidejuhvrckvxyxad")
                    .contributorInsightsSpecification(ContributorInsightsSpecificationArgs.builder()
                        .enabled(true)
                        .build())
                    .deletionProtectionEnabled(true)
                    .globalSecondaryIndexes(GlobalSecondaryIndexArgs.builder()
                        .contributorInsightsSpecification(ContributorInsightsSpecificationArgs.builder()
                            .enabled(true)
                            .build())
                        .indexName("uqlzacnvsvayrvirrwwttb")
                        .keySchema(KeySchemaArgs.builder()
                            .attributeName("wisgqkyoouaxivtrtay")
                            .keyType("kwkqgbxrwnoklpgmoypovxe")
                            .build())
                        .projection(ProjectionArgs.builder()
                            .nonKeyAttributes("loqmvohtjsscueegam")
                            .projectionType("atbzepkydpgudoaqi")
                            .build())
                        .provisionedThroughput(ProvisionedThroughputArgs.builder()
                            .readCapacityUnits(10)
                            .writeCapacityUnits(28)
                            .build())
                        .build())
                    .importSourceSpecification(ImportSourceSpecificationArgs.builder()
                        .inputCompressionType("bjswmnwxleqmcth")
                        .inputFormat("grnhhysgejvbnecrqoynjomz")
                        .inputFormatOptions(InputFormatOptionsArgs.builder()
                            .csv(CsvArgs.builder()
                                .delimiter("qzowvvpwwhptthlgvrtnpyjszetrt")
                                .headerList("gminuylhgebpjx")
                                .build())
                            .build())
                        .s3BucketSource(S3BucketSourceArgs.builder()
                            .s3Bucket("exulhkspgmo")
                            .s3BucketOwner("pyawhaxbwqhgarz")
                            .s3KeyPrefix("ogjgqdsvu")
                            .build())
                        .build())
                    .keySchema(KeySchemaArgs.builder()
                        .attributeName("wisgqkyoouaxivtrtay")
                        .keyType("kwkqgbxrwnoklpgmoypovxe")
                        .build())
                    .kinesisStreamSpecification(KinesisStreamSpecificationArgs.builder()
                        .approximateCreationDateTimePrecision("MICROSECOND")
                        .streamArn("qldltl")
                        .build())
                    .localSecondaryIndexes(LocalSecondaryIndexArgs.builder()
                        .indexName("gintyosxvkjqpe")
                        .keySchema(KeySchemaArgs.builder()
                            .attributeName("wisgqkyoouaxivtrtay")
                            .keyType("kwkqgbxrwnoklpgmoypovxe")
                            .build())
                        .projection(ProjectionArgs.builder()
                            .nonKeyAttributes("loqmvohtjsscueegam")
                            .projectionType("atbzepkydpgudoaqi")
                            .build())
                        .build())
                    .pointInTimeRecoverySpecification(PointInTimeRecoverySpecificationArgs.builder()
                        .pointInTimeRecoveryEnabled(true)
                        .build())
                    .provisionedThroughput(ProvisionedThroughputArgs.builder()
                        .readCapacityUnits(10)
                        .writeCapacityUnits(28)
                        .build())
                    .resourcePolicy(ResourcePolicyArgs.builder().build())
                    .sseSpecification(SSESpecificationArgs.builder()
                        .kmsMasterKeyId("rvwuejohzknzrntkvprgxt")
                        .sseEnabled(true)
                        .sseType("osjalywculjbrystezvjojxe")
                        .build())
                    .streamArn("xvkrzs")
                    .streamSpecification(StreamSpecificationArgs.builder()
                        .resourcePolicy(ResourcePolicyArgs.builder().build())
                        .streamViewType("wemod")
                        .build())
                    .tableClass("tmbfrfbppwhjpm")
                    .tableName("mqvlcdboopn")
                    .tags(TagArgs.builder()
                        .key("txipennfw")
                        .value("dkgweupnz")
                        .build())
                    .timeToLiveSpecification(TimeToLiveSpecificationArgs.builder()
                        .attributeName("sxbfejubturdtyusqywguqni")
                        .enabled(true)
                        .build())
                    .build())
                .awsRegion("rdzrhtbydhmaxzuwe")
                .awsSourceSchema("sqkkuxwamzevkp")
                .awsTags(Map.of("key3791", "iikafuvbjkvnbogujm"))
                .publicCloudConnectorsResourceId("nugnoqcknmrrminwvfvloqsporjd")
                .publicCloudResourceName("lkbwyvnzooydbnembmykhmw")
                .build())
            .resourceGroupName("rgdynamoDBTable")
            .tags(Map.of("key2178", "lyeternduvkobwvqhpicnxel"))
            .build());

    }
}
import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";

const dynamoDbTable = new azure_native.awsconnector.DynamoDbTable("dynamoDbTable", {
    location: "fmkjilswdjyisfuwxuj",
    name: "Replace this value with a string matching RegExp ^(z=.{0,259}[^zs.]$)(z!.*[zzzzzzzz])",
    properties: {
        arn: "gimtbcfiznraniycjyalnwrfstm",
        awsAccountId: "dejqcxb",
        awsProperties: {
            arn: "qbvqgymuxfzuwybdspdhcuvfouwnet",
            attributeDefinitions: [{
                attributeName: "caryhpofnkqtoc",
                attributeType: "bcmjgzaljcemcrswr",
            }],
            billingMode: "pwxrsjcybdcidejuhvrckvxyxad",
            contributorInsightsSpecification: {
                enabled: true,
            },
            deletionProtectionEnabled: true,
            globalSecondaryIndexes: [{
                contributorInsightsSpecification: {
                    enabled: true,
                },
                indexName: "uqlzacnvsvayrvirrwwttb",
                keySchema: [{
                    attributeName: "wisgqkyoouaxivtrtay",
                    keyType: "kwkqgbxrwnoklpgmoypovxe",
                }],
                projection: {
                    nonKeyAttributes: ["loqmvohtjsscueegam"],
                    projectionType: "atbzepkydpgudoaqi",
                },
                provisionedThroughput: {
                    readCapacityUnits: 10,
                    writeCapacityUnits: 28,
                },
            }],
            importSourceSpecification: {
                inputCompressionType: "bjswmnwxleqmcth",
                inputFormat: "grnhhysgejvbnecrqoynjomz",
                inputFormatOptions: {
                    csv: {
                        delimiter: "qzowvvpwwhptthlgvrtnpyjszetrt",
                        headerList: ["gminuylhgebpjx"],
                    },
                },
                s3BucketSource: {
                    s3Bucket: "exulhkspgmo",
                    s3BucketOwner: "pyawhaxbwqhgarz",
                    s3KeyPrefix: "ogjgqdsvu",
                },
            },
            keySchema: [{
                attributeName: "wisgqkyoouaxivtrtay",
                keyType: "kwkqgbxrwnoklpgmoypovxe",
            }],
            kinesisStreamSpecification: {
                approximateCreationDateTimePrecision: azure_native.awsconnector.KinesisStreamSpecificationApproximateCreationDateTimePrecision.MICROSECOND,
                streamArn: "qldltl",
            },
            localSecondaryIndexes: [{
                indexName: "gintyosxvkjqpe",
                keySchema: [{
                    attributeName: "wisgqkyoouaxivtrtay",
                    keyType: "kwkqgbxrwnoklpgmoypovxe",
                }],
                projection: {
                    nonKeyAttributes: ["loqmvohtjsscueegam"],
                    projectionType: "atbzepkydpgudoaqi",
                },
            }],
            pointInTimeRecoverySpecification: {
                pointInTimeRecoveryEnabled: true,
            },
            provisionedThroughput: {
                readCapacityUnits: 10,
                writeCapacityUnits: 28,
            },
            resourcePolicy: {},
            sseSpecification: {
                kmsMasterKeyId: "rvwuejohzknzrntkvprgxt",
                sseEnabled: true,
                sseType: "osjalywculjbrystezvjojxe",
            },
            streamArn: "xvkrzs",
            streamSpecification: {
                resourcePolicy: {},
                streamViewType: "wemod",
            },
            tableClass: "tmbfrfbppwhjpm",
            tableName: "mqvlcdboopn",
            tags: [{
                key: "txipennfw",
                value: "dkgweupnz",
            }],
            timeToLiveSpecification: {
                attributeName: "sxbfejubturdtyusqywguqni",
                enabled: true,
            },
        },
        awsRegion: "rdzrhtbydhmaxzuwe",
        awsSourceSchema: "sqkkuxwamzevkp",
        awsTags: {
            key3791: "iikafuvbjkvnbogujm",
        },
        publicCloudConnectorsResourceId: "nugnoqcknmrrminwvfvloqsporjd",
        publicCloudResourceName: "lkbwyvnzooydbnembmykhmw",
    },
    resourceGroupName: "rgdynamoDBTable",
    tags: {
        key2178: "lyeternduvkobwvqhpicnxel",
    },
});
import pulumi
import pulumi_azure_native as azure_native

dynamo_db_table = azure_native.awsconnector.DynamoDbTable("dynamoDbTable",
    location="fmkjilswdjyisfuwxuj",
    name="Replace this value with a string matching RegExp ^(z=.{0,259}[^zs.]$)(z!.*[zzzzzzzz])",
    properties={
        "arn": "gimtbcfiznraniycjyalnwrfstm",
        "aws_account_id": "dejqcxb",
        "aws_properties": {
            "arn": "qbvqgymuxfzuwybdspdhcuvfouwnet",
            "attribute_definitions": [{
                "attribute_name": "caryhpofnkqtoc",
                "attribute_type": "bcmjgzaljcemcrswr",
            }],
            "billing_mode": "pwxrsjcybdcidejuhvrckvxyxad",
            "contributor_insights_specification": {
                "enabled": True,
            },
            "deletion_protection_enabled": True,
            "global_secondary_indexes": [{
                "contributor_insights_specification": {
                    "enabled": True,
                },
                "index_name": "uqlzacnvsvayrvirrwwttb",
                "key_schema": [{
                    "attribute_name": "wisgqkyoouaxivtrtay",
                    "key_type": "kwkqgbxrwnoklpgmoypovxe",
                }],
                "projection": {
                    "non_key_attributes": ["loqmvohtjsscueegam"],
                    "projection_type": "atbzepkydpgudoaqi",
                },
                "provisioned_throughput": {
                    "read_capacity_units": 10,
                    "write_capacity_units": 28,
                },
            }],
            "import_source_specification": {
                "input_compression_type": "bjswmnwxleqmcth",
                "input_format": "grnhhysgejvbnecrqoynjomz",
                "input_format_options": {
                    "csv": {
                        "delimiter": "qzowvvpwwhptthlgvrtnpyjszetrt",
                        "header_list": ["gminuylhgebpjx"],
                    },
                },
                "s3_bucket_source": {
                    "s3_bucket": "exulhkspgmo",
                    "s3_bucket_owner": "pyawhaxbwqhgarz",
                    "s3_key_prefix": "ogjgqdsvu",
                },
            },
            "key_schema": [{
                "attribute_name": "wisgqkyoouaxivtrtay",
                "key_type": "kwkqgbxrwnoklpgmoypovxe",
            }],
            "kinesis_stream_specification": {
                "approximate_creation_date_time_precision": azure_native.awsconnector.KinesisStreamSpecificationApproximateCreationDateTimePrecision.MICROSECOND,
                "stream_arn": "qldltl",
            },
            "local_secondary_indexes": [{
                "index_name": "gintyosxvkjqpe",
                "key_schema": [{
                    "attribute_name": "wisgqkyoouaxivtrtay",
                    "key_type": "kwkqgbxrwnoklpgmoypovxe",
                }],
                "projection": {
                    "non_key_attributes": ["loqmvohtjsscueegam"],
                    "projection_type": "atbzepkydpgudoaqi",
                },
            }],
            "point_in_time_recovery_specification": {
                "point_in_time_recovery_enabled": True,
            },
            "provisioned_throughput": {
                "read_capacity_units": 10,
                "write_capacity_units": 28,
            },
            "resource_policy": {},
            "sse_specification": {
                "kms_master_key_id": "rvwuejohzknzrntkvprgxt",
                "sse_enabled": True,
                "sse_type": "osjalywculjbrystezvjojxe",
            },
            "stream_arn": "xvkrzs",
            "stream_specification": {
                "resource_policy": {},
                "stream_view_type": "wemod",
            },
            "table_class": "tmbfrfbppwhjpm",
            "table_name": "mqvlcdboopn",
            "tags": [{
                "key": "txipennfw",
                "value": "dkgweupnz",
            }],
            "time_to_live_specification": {
                "attribute_name": "sxbfejubturdtyusqywguqni",
                "enabled": True,
            },
        },
        "aws_region": "rdzrhtbydhmaxzuwe",
        "aws_source_schema": "sqkkuxwamzevkp",
        "aws_tags": {
            "key3791": "iikafuvbjkvnbogujm",
        },
        "public_cloud_connectors_resource_id": "nugnoqcknmrrminwvfvloqsporjd",
        "public_cloud_resource_name": "lkbwyvnzooydbnembmykhmw",
    },
    resource_group_name="rgdynamoDBTable",
    tags={
        "key2178": "lyeternduvkobwvqhpicnxel",
    })
resources:
  dynamoDbTable:
    type: azure-native:awsconnector:DynamoDbTable
    properties:
      location: fmkjilswdjyisfuwxuj
      name: Replace this value with a string matching RegExp ^(z=.{0,259}[^zs.]$)(z!.*[zzzzzzzz])
      properties:
        arn: gimtbcfiznraniycjyalnwrfstm
        awsAccountId: dejqcxb
        awsProperties:
          arn: qbvqgymuxfzuwybdspdhcuvfouwnet
          attributeDefinitions:
            - attributeName: caryhpofnkqtoc
              attributeType: bcmjgzaljcemcrswr
          billingMode: pwxrsjcybdcidejuhvrckvxyxad
          contributorInsightsSpecification:
            enabled: true
          deletionProtectionEnabled: true
          globalSecondaryIndexes:
            - contributorInsightsSpecification:
                enabled: true
              indexName: uqlzacnvsvayrvirrwwttb
              keySchema:
                - attributeName: wisgqkyoouaxivtrtay
                  keyType: kwkqgbxrwnoklpgmoypovxe
              projection:
                nonKeyAttributes:
                  - loqmvohtjsscueegam
                projectionType: atbzepkydpgudoaqi
              provisionedThroughput:
                readCapacityUnits: 10
                writeCapacityUnits: 28
          importSourceSpecification:
            inputCompressionType: bjswmnwxleqmcth
            inputFormat: grnhhysgejvbnecrqoynjomz
            inputFormatOptions:
              csv:
                delimiter: qzowvvpwwhptthlgvrtnpyjszetrt
                headerList:
                  - gminuylhgebpjx
            s3BucketSource:
              s3Bucket: exulhkspgmo
              s3BucketOwner: pyawhaxbwqhgarz
              s3KeyPrefix: ogjgqdsvu
          keySchema:
            - attributeName: wisgqkyoouaxivtrtay
              keyType: kwkqgbxrwnoklpgmoypovxe
          kinesisStreamSpecification:
            approximateCreationDateTimePrecision: MICROSECOND
            streamArn: qldltl
          localSecondaryIndexes:
            - indexName: gintyosxvkjqpe
              keySchema:
                - attributeName: wisgqkyoouaxivtrtay
                  keyType: kwkqgbxrwnoklpgmoypovxe
              projection:
                nonKeyAttributes:
                  - loqmvohtjsscueegam
                projectionType: atbzepkydpgudoaqi
          pointInTimeRecoverySpecification:
            pointInTimeRecoveryEnabled: true
          provisionedThroughput:
            readCapacityUnits: 10
            writeCapacityUnits: 28
          resourcePolicy: {}
          sseSpecification:
            kmsMasterKeyId: rvwuejohzknzrntkvprgxt
            sseEnabled: true
            sseType: osjalywculjbrystezvjojxe
          streamArn: xvkrzs
          streamSpecification:
            resourcePolicy: {}
            streamViewType: wemod
          tableClass: tmbfrfbppwhjpm
          tableName: mqvlcdboopn
          tags:
            - key: txipennfw
              value: dkgweupnz
          timeToLiveSpecification:
            attributeName: sxbfejubturdtyusqywguqni
            enabled: true
        awsRegion: rdzrhtbydhmaxzuwe
        awsSourceSchema: sqkkuxwamzevkp
        awsTags:
          key3791: iikafuvbjkvnbogujm
        publicCloudConnectorsResourceId: nugnoqcknmrrminwvfvloqsporjd
        publicCloudResourceName: lkbwyvnzooydbnembmykhmw
      resourceGroupName: rgdynamoDBTable
      tags:
        key2178: lyeternduvkobwvqhpicnxel

Create DynamoDbTable Resource

Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

Constructor syntax

new DynamoDbTable(name: string, args: DynamoDbTableArgs, opts?: CustomResourceOptions);
@overload
def DynamoDbTable(resource_name: str,
                  args: DynamoDbTableArgs,
                  opts: Optional[ResourceOptions] = None)

@overload
def DynamoDbTable(resource_name: str,
                  opts: Optional[ResourceOptions] = None,
                  resource_group_name: Optional[str] = None,
                  location: Optional[str] = None,
                  name: Optional[str] = None,
                  properties: Optional[DynamoDBTablePropertiesArgs] = None,
                  tags: Optional[Mapping[str, str]] = None)
func NewDynamoDbTable(ctx *Context, name string, args DynamoDbTableArgs, opts ...ResourceOption) (*DynamoDbTable, error)
public DynamoDbTable(string name, DynamoDbTableArgs args, CustomResourceOptions? opts = null)
public DynamoDbTable(String name, DynamoDbTableArgs args)
public DynamoDbTable(String name, DynamoDbTableArgs args, CustomResourceOptions options)
type: azure-native:awsconnector:DynamoDbTable
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.

Parameters

name This property is required. string
The unique name of the resource.
args This property is required. DynamoDbTableArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
resource_name This property is required. str
The unique name of the resource.
args This property is required. DynamoDbTableArgs
The arguments to resource properties.
opts ResourceOptions
Bag of options to control resource's behavior.
ctx Context
Context object for the current deployment.
name This property is required. string
The unique name of the resource.
args This property is required. DynamoDbTableArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.
name This property is required. string
The unique name of the resource.
args This property is required. DynamoDbTableArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
name This property is required. String
The unique name of the resource.
args This property is required. DynamoDbTableArgs
The arguments to resource properties.
options CustomResourceOptions
Bag of options to control resource's behavior.

Constructor example

The following reference example uses placeholder values for all input properties.

var dynamoDbTableResource = new AzureNative.Awsconnector.DynamoDbTable("dynamoDbTableResource", new()
{
    ResourceGroupName = "string",
    Location = "string",
    Name = "string",
    Properties = 
    {
        { "arn", "string" },
        { "awsAccountId", "string" },
        { "awsProperties", 
        {
            { "arn", "string" },
            { "attributeDefinitions", new[]
            {
                
                {
                    { "attributeName", "string" },
                    { "attributeType", "string" },
                },
            } },
            { "billingMode", "string" },
            { "contributorInsightsSpecification", 
            {
                { "enabled", false },
            } },
            { "deletionProtectionEnabled", false },
            { "globalSecondaryIndexes", new[]
            {
                
                {
                    { "contributorInsightsSpecification", 
                    {
                        { "enabled", false },
                    } },
                    { "indexName", "string" },
                    { "keySchema", new[]
                    {
                        
                        {
                            { "attributeName", "string" },
                            { "keyType", "string" },
                        },
                    } },
                    { "projection", 
                    {
                        { "nonKeyAttributes", new[]
                        {
                            "string",
                        } },
                        { "projectionType", "string" },
                    } },
                    { "provisionedThroughput", 
                    {
                        { "readCapacityUnits", 0 },
                        { "writeCapacityUnits", 0 },
                    } },
                },
            } },
            { "importSourceSpecification", 
            {
                { "inputCompressionType", "string" },
                { "inputFormat", "string" },
                { "inputFormatOptions", 
                {
                    { "csv", 
                    {
                        { "delimiter", "string" },
                        { "headerList", new[]
                        {
                            "string",
                        } },
                    } },
                } },
                { "s3BucketSource", 
                {
                    { "s3Bucket", "string" },
                    { "s3BucketOwner", "string" },
                    { "s3KeyPrefix", "string" },
                } },
            } },
            { "keySchema", new[]
            {
                
                {
                    { "attributeName", "string" },
                    { "keyType", "string" },
                },
            } },
            { "kinesisStreamSpecification", 
            {
                { "approximateCreationDateTimePrecision", "string" },
                { "streamArn", "string" },
            } },
            { "localSecondaryIndexes", new[]
            {
                
                {
                    { "indexName", "string" },
                    { "keySchema", new[]
                    {
                        
                        {
                            { "attributeName", "string" },
                            { "keyType", "string" },
                        },
                    } },
                    { "projection", 
                    {
                        { "nonKeyAttributes", new[]
                        {
                            "string",
                        } },
                        { "projectionType", "string" },
                    } },
                },
            } },
            { "pointInTimeRecoverySpecification", 
            {
                { "pointInTimeRecoveryEnabled", false },
            } },
            { "provisionedThroughput", 
            {
                { "readCapacityUnits", 0 },
                { "writeCapacityUnits", 0 },
            } },
            { "resourcePolicy", 
            {
                { "policyDocument", "any" },
            } },
            { "sseSpecification", 
            {
                { "kmsMasterKeyId", "string" },
                { "sseEnabled", false },
                { "sseType", "string" },
            } },
            { "streamArn", "string" },
            { "streamSpecification", 
            {
                { "resourcePolicy", 
                {
                    { "policyDocument", "any" },
                } },
                { "streamViewType", "string" },
            } },
            { "tableClass", "string" },
            { "tableName", "string" },
            { "tags", new[]
            {
                
                {
                    { "key", "string" },
                    { "value", "string" },
                },
            } },
            { "timeToLiveSpecification", 
            {
                { "attributeName", "string" },
                { "enabled", false },
            } },
        } },
        { "awsRegion", "string" },
        { "awsSourceSchema", "string" },
        { "awsTags", 
        {
            { "string", "string" },
        } },
        { "publicCloudConnectorsResourceId", "string" },
        { "publicCloudResourceName", "string" },
    },
    Tags = 
    {
        { "string", "string" },
    },
});
example, err := awsconnector.NewDynamoDbTable(ctx, "dynamoDbTableResource", &awsconnector.DynamoDbTableArgs{
	ResourceGroupName: "string",
	Location:          "string",
	Name:              "string",
	Properties: map[string]interface{}{
		"arn":          "string",
		"awsAccountId": "string",
		"awsProperties": map[string]interface{}{
			"arn": "string",
			"attributeDefinitions": []map[string]interface{}{
				map[string]interface{}{
					"attributeName": "string",
					"attributeType": "string",
				},
			},
			"billingMode": "string",
			"contributorInsightsSpecification": map[string]interface{}{
				"enabled": false,
			},
			"deletionProtectionEnabled": false,
			"globalSecondaryIndexes": []map[string]interface{}{
				map[string]interface{}{
					"contributorInsightsSpecification": map[string]interface{}{
						"enabled": false,
					},
					"indexName": "string",
					"keySchema": []map[string]interface{}{
						map[string]interface{}{
							"attributeName": "string",
							"keyType":       "string",
						},
					},
					"projection": map[string]interface{}{
						"nonKeyAttributes": []string{
							"string",
						},
						"projectionType": "string",
					},
					"provisionedThroughput": map[string]interface{}{
						"readCapacityUnits":  0,
						"writeCapacityUnits": 0,
					},
				},
			},
			"importSourceSpecification": map[string]interface{}{
				"inputCompressionType": "string",
				"inputFormat":          "string",
				"inputFormatOptions": map[string]interface{}{
					"csv": map[string]interface{}{
						"delimiter": "string",
						"headerList": []string{
							"string",
						},
					},
				},
				"s3BucketSource": map[string]interface{}{
					"s3Bucket":      "string",
					"s3BucketOwner": "string",
					"s3KeyPrefix":   "string",
				},
			},
			"keySchema": []map[string]interface{}{
				map[string]interface{}{
					"attributeName": "string",
					"keyType":       "string",
				},
			},
			"kinesisStreamSpecification": map[string]interface{}{
				"approximateCreationDateTimePrecision": "string",
				"streamArn":                            "string",
			},
			"localSecondaryIndexes": []map[string]interface{}{
				map[string]interface{}{
					"indexName": "string",
					"keySchema": []map[string]interface{}{
						map[string]interface{}{
							"attributeName": "string",
							"keyType":       "string",
						},
					},
					"projection": map[string]interface{}{
						"nonKeyAttributes": []string{
							"string",
						},
						"projectionType": "string",
					},
				},
			},
			"pointInTimeRecoverySpecification": map[string]interface{}{
				"pointInTimeRecoveryEnabled": false,
			},
			"provisionedThroughput": map[string]interface{}{
				"readCapacityUnits":  0,
				"writeCapacityUnits": 0,
			},
			"resourcePolicy": map[string]interface{}{
				"policyDocument": "any",
			},
			"sseSpecification": map[string]interface{}{
				"kmsMasterKeyId": "string",
				"sseEnabled":     false,
				"sseType":        "string",
			},
			"streamArn": "string",
			"streamSpecification": map[string]interface{}{
				"resourcePolicy": map[string]interface{}{
					"policyDocument": "any",
				},
				"streamViewType": "string",
			},
			"tableClass": "string",
			"tableName":  "string",
			"tags": []map[string]interface{}{
				map[string]interface{}{
					"key":   "string",
					"value": "string",
				},
			},
			"timeToLiveSpecification": map[string]interface{}{
				"attributeName": "string",
				"enabled":       false,
			},
		},
		"awsRegion":       "string",
		"awsSourceSchema": "string",
		"awsTags": map[string]interface{}{
			"string": "string",
		},
		"publicCloudConnectorsResourceId": "string",
		"publicCloudResourceName":         "string",
	},
	Tags: map[string]interface{}{
		"string": "string",
	},
})
var dynamoDbTableResource = new DynamoDbTable("dynamoDbTableResource", DynamoDbTableArgs.builder()
    .resourceGroupName("string")
    .location("string")
    .name("string")
    .properties(DynamoDBTablePropertiesArgs.builder()
        .arn("string")
        .awsAccountId("string")
        .awsRegion("string")
        .build())
    .tags(Map.of("string", "string"))
    .build());
dynamo_db_table_resource = azure_native.awsconnector.DynamoDbTable("dynamoDbTableResource",
    resource_group_name="string",
    location="string",
    name="string",
    properties={
        "arn": "string",
        "aws_account_id": "string",
        "aws_properties": {
            "arn": "string",
            "attribute_definitions": [{
                "attribute_name": "string",
                "attribute_type": "string",
            }],
            "billing_mode": "string",
            "contributor_insights_specification": {
                "enabled": False,
            },
            "deletion_protection_enabled": False,
            "global_secondary_indexes": [{
                "contributor_insights_specification": {
                    "enabled": False,
                },
                "index_name": "string",
                "key_schema": [{
                    "attribute_name": "string",
                    "key_type": "string",
                }],
                "projection": {
                    "non_key_attributes": ["string"],
                    "projection_type": "string",
                },
                "provisioned_throughput": {
                    "read_capacity_units": 0,
                    "write_capacity_units": 0,
                },
            }],
            "import_source_specification": {
                "input_compression_type": "string",
                "input_format": "string",
                "input_format_options": {
                    "csv": {
                        "delimiter": "string",
                        "header_list": ["string"],
                    },
                },
                "s3_bucket_source": {
                    "s3_bucket": "string",
                    "s3_bucket_owner": "string",
                    "s3_key_prefix": "string",
                },
            },
            "key_schema": [{
                "attribute_name": "string",
                "key_type": "string",
            }],
            "kinesis_stream_specification": {
                "approximate_creation_date_time_precision": "string",
                "stream_arn": "string",
            },
            "local_secondary_indexes": [{
                "index_name": "string",
                "key_schema": [{
                    "attribute_name": "string",
                    "key_type": "string",
                }],
                "projection": {
                    "non_key_attributes": ["string"],
                    "projection_type": "string",
                },
            }],
            "point_in_time_recovery_specification": {
                "point_in_time_recovery_enabled": False,
            },
            "provisioned_throughput": {
                "read_capacity_units": 0,
                "write_capacity_units": 0,
            },
            "resource_policy": {
                "policy_document": "any",
            },
            "sse_specification": {
                "kms_master_key_id": "string",
                "sse_enabled": False,
                "sse_type": "string",
            },
            "stream_arn": "string",
            "stream_specification": {
                "resource_policy": {
                    "policy_document": "any",
                },
                "stream_view_type": "string",
            },
            "table_class": "string",
            "table_name": "string",
            "tags": [{
                "key": "string",
                "value": "string",
            }],
            "time_to_live_specification": {
                "attribute_name": "string",
                "enabled": False,
            },
        },
        "aws_region": "string",
        "aws_source_schema": "string",
        "aws_tags": {
            "string": "string",
        },
        "public_cloud_connectors_resource_id": "string",
        "public_cloud_resource_name": "string",
    },
    tags={
        "string": "string",
    })
const dynamoDbTableResource = new azure_native.awsconnector.DynamoDbTable("dynamoDbTableResource", {
    resourceGroupName: "string",
    location: "string",
    name: "string",
    properties: {
        arn: "string",
        awsAccountId: "string",
        awsProperties: {
            arn: "string",
            attributeDefinitions: [{
                attributeName: "string",
                attributeType: "string",
            }],
            billingMode: "string",
            contributorInsightsSpecification: {
                enabled: false,
            },
            deletionProtectionEnabled: false,
            globalSecondaryIndexes: [{
                contributorInsightsSpecification: {
                    enabled: false,
                },
                indexName: "string",
                keySchema: [{
                    attributeName: "string",
                    keyType: "string",
                }],
                projection: {
                    nonKeyAttributes: ["string"],
                    projectionType: "string",
                },
                provisionedThroughput: {
                    readCapacityUnits: 0,
                    writeCapacityUnits: 0,
                },
            }],
            importSourceSpecification: {
                inputCompressionType: "string",
                inputFormat: "string",
                inputFormatOptions: {
                    csv: {
                        delimiter: "string",
                        headerList: ["string"],
                    },
                },
                s3BucketSource: {
                    s3Bucket: "string",
                    s3BucketOwner: "string",
                    s3KeyPrefix: "string",
                },
            },
            keySchema: [{
                attributeName: "string",
                keyType: "string",
            }],
            kinesisStreamSpecification: {
                approximateCreationDateTimePrecision: "string",
                streamArn: "string",
            },
            localSecondaryIndexes: [{
                indexName: "string",
                keySchema: [{
                    attributeName: "string",
                    keyType: "string",
                }],
                projection: {
                    nonKeyAttributes: ["string"],
                    projectionType: "string",
                },
            }],
            pointInTimeRecoverySpecification: {
                pointInTimeRecoveryEnabled: false,
            },
            provisionedThroughput: {
                readCapacityUnits: 0,
                writeCapacityUnits: 0,
            },
            resourcePolicy: {
                policyDocument: "any",
            },
            sseSpecification: {
                kmsMasterKeyId: "string",
                sseEnabled: false,
                sseType: "string",
            },
            streamArn: "string",
            streamSpecification: {
                resourcePolicy: {
                    policyDocument: "any",
                },
                streamViewType: "string",
            },
            tableClass: "string",
            tableName: "string",
            tags: [{
                key: "string",
                value: "string",
            }],
            timeToLiveSpecification: {
                attributeName: "string",
                enabled: false,
            },
        },
        awsRegion: "string",
        awsSourceSchema: "string",
        awsTags: {
            string: "string",
        },
        publicCloudConnectorsResourceId: "string",
        publicCloudResourceName: "string",
    },
    tags: {
        string: "string",
    },
});
type: azure-native:awsconnector:DynamoDbTable
properties:
    location: string
    name: string
    properties:
        arn: string
        awsAccountId: string
        awsProperties:
            arn: string
            attributeDefinitions:
                - attributeName: string
                  attributeType: string
            billingMode: string
            contributorInsightsSpecification:
                enabled: false
            deletionProtectionEnabled: false
            globalSecondaryIndexes:
                - contributorInsightsSpecification:
                    enabled: false
                  indexName: string
                  keySchema:
                    - attributeName: string
                      keyType: string
                  projection:
                    nonKeyAttributes:
                        - string
                    projectionType: string
                  provisionedThroughput:
                    readCapacityUnits: 0
                    writeCapacityUnits: 0
            importSourceSpecification:
                inputCompressionType: string
                inputFormat: string
                inputFormatOptions:
                    csv:
                        delimiter: string
                        headerList:
                            - string
                s3BucketSource:
                    s3Bucket: string
                    s3BucketOwner: string
                    s3KeyPrefix: string
            keySchema:
                - attributeName: string
                  keyType: string
            kinesisStreamSpecification:
                approximateCreationDateTimePrecision: string
                streamArn: string
            localSecondaryIndexes:
                - indexName: string
                  keySchema:
                    - attributeName: string
                      keyType: string
                  projection:
                    nonKeyAttributes:
                        - string
                    projectionType: string
            pointInTimeRecoverySpecification:
                pointInTimeRecoveryEnabled: false
            provisionedThroughput:
                readCapacityUnits: 0
                writeCapacityUnits: 0
            resourcePolicy:
                policyDocument: any
            sseSpecification:
                kmsMasterKeyId: string
                sseEnabled: false
                sseType: string
            streamArn: string
            streamSpecification:
                resourcePolicy:
                    policyDocument: any
                streamViewType: string
            tableClass: string
            tableName: string
            tags:
                - key: string
                  value: string
            timeToLiveSpecification:
                attributeName: string
                enabled: false
        awsRegion: string
        awsSourceSchema: string
        awsTags:
            string: string
        publicCloudConnectorsResourceId: string
        publicCloudResourceName: string
    resourceGroupName: string
    tags:
        string: string

DynamoDbTable Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

Inputs

In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
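For example, the two forms below are interchangeable. This is a minimal sketch, not a complete deployment: it assumes the pulumi-azure-native package is installed and that the code runs inside a Pulumi program; the resource group name and region values are placeholders.

```python
import pulumi_azure_native as azure_native

# Argument-class form: object inputs passed as typed Args classes.
table_a = azure_native.awsconnector.DynamoDbTable("tableA",
    resource_group_name="example-rg",  # placeholder resource group
    properties=azure_native.awsconnector.DynamoDBTablePropertiesArgs(
        aws_region="us-east-1",  # placeholder region
    ))

# Dictionary-literal form: the same input as a dict with snake_case keys.
table_b = azure_native.awsconnector.DynamoDbTable("tableB",
    resource_group_name="example-rg",
    properties={
        "aws_region": "us-east-1",
    })
```

Both forms produce the same resource inputs; the argument-class form gives IDE completion and type checking, while the dictionary form is more compact for deeply nested properties.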

The DynamoDbTable resource accepts the following input properties:

ResourceGroupName
This property is required.
Changes to this property will trigger replacement.
string
The name of the resource group. The name is case insensitive.
Location Changes to this property will trigger replacement. string
The geo-location where the resource lives
Name Changes to this property will trigger replacement. string
Name of DynamoDBTable
Properties Pulumi.AzureNative.AwsConnector.Inputs.DynamoDBTableProperties
The resource-specific properties for this resource.
Tags Dictionary<string, string>
Resource tags.
ResourceGroupName
This property is required.
Changes to this property will trigger replacement.
string
The name of the resource group. The name is case insensitive.
Location Changes to this property will trigger replacement. string
The geo-location where the resource lives
Name Changes to this property will trigger replacement. string
Name of DynamoDBTable
Properties DynamoDBTablePropertiesArgs
The resource-specific properties for this resource.
Tags map[string]string
Resource tags.
resourceGroupName
This property is required.
Changes to this property will trigger replacement.
String
The name of the resource group. The name is case insensitive.
location Changes to this property will trigger replacement. String
The geo-location where the resource lives
name Changes to this property will trigger replacement. String
Name of DynamoDBTable
properties DynamoDBTableProperties
The resource-specific properties for this resource.
tags Map<String,String>
Resource tags.
resourceGroupName
This property is required.
Changes to this property will trigger replacement.
string
The name of the resource group. The name is case insensitive.
location Changes to this property will trigger replacement. string
The geo-location where the resource lives
name Changes to this property will trigger replacement. string
Name of DynamoDBTable
properties DynamoDBTableProperties
The resource-specific properties for this resource.
tags {[key: string]: string}
Resource tags.
resource_group_name
This property is required.
Changes to this property will trigger replacement.
str
The name of the resource group. The name is case insensitive.
location Changes to this property will trigger replacement. str
The geo-location where the resource lives
name Changes to this property will trigger replacement. str
Name of DynamoDBTable
properties DynamoDBTablePropertiesArgs
The resource-specific properties for this resource.
tags Mapping[str, str]
Resource tags.
resourceGroupName
This property is required.
Changes to this property will trigger replacement.
String
The name of the resource group. The name is case insensitive.
location Changes to this property will trigger replacement. String
The geo-location where the resource lives
name Changes to this property will trigger replacement. String
Name of DynamoDBTable
properties Property Map
The resource-specific properties for this resource.
tags Map<String>
Resource tags.

Outputs

All input properties are implicitly available as output properties. Additionally, the DynamoDbTable resource produces the following output properties:

Id string
The provider-assigned unique ID for this managed resource.
SystemData Pulumi.AzureNative.AwsConnector.Outputs.SystemDataResponse
Azure Resource Manager metadata containing createdBy and modifiedBy information.
Type string
The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
Id string
The provider-assigned unique ID for this managed resource.
SystemData SystemDataResponse
Azure Resource Manager metadata containing createdBy and modifiedBy information.
Type string
The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
id String
The provider-assigned unique ID for this managed resource.
systemData SystemDataResponse
Azure Resource Manager metadata containing createdBy and modifiedBy information.
type String
The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
id string
The provider-assigned unique ID for this managed resource.
systemData SystemDataResponse
Azure Resource Manager metadata containing createdBy and modifiedBy information.
type string
The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
id str
The provider-assigned unique ID for this managed resource.
system_data SystemDataResponse
Azure Resource Manager metadata containing createdBy and modifiedBy information.
type str
The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
id String
The provider-assigned unique ID for this managed resource.
systemData Property Map
Azure Resource Manager metadata containing createdBy and modifiedBy information.
type String
The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"

Supporting Types

AttributeDefinition
, AttributeDefinitionArgs

AttributeName string
A name for the attribute.
AttributeType string
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
AttributeName string
A name for the attribute.
AttributeType string
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
attributeName String
A name for the attribute.
attributeType String
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
attributeName string
A name for the attribute.
attributeType string
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
attribute_name str
A name for the attribute.
attribute_type str
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
attributeName String
A name for the attribute.
attributeType String
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary

AttributeDefinitionResponse
, AttributeDefinitionResponseArgs

AttributeName string
A name for the attribute.
AttributeType string
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
AttributeName string
A name for the attribute.
AttributeType string
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
attributeName String
A name for the attribute.
attributeType String
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
attributeName string
A name for the attribute.
attributeType string
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
attribute_name str
A name for the attribute.
attribute_type str
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary
attributeName String
A name for the attribute.
attributeType String
The data type for the attribute, where: + S - the attribute is of type String + N - the attribute is of type Number + B - the attribute is of type Binary

AwsDynamoDBTableProperties, AwsDynamoDBTablePropertiesArgs

Arn string
The Amazon Resource Name (ARN) of the table.
AttributeDefinitions List<Pulumi.AzureNative.AwsConnector.Inputs.AttributeDefinition>
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Update requires some interruption; replacement if you edit an existing AttributeDefinition.
BillingMode string
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - recommended for predictable workloads; sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - recommended for unpredictable workloads; sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
ContributorInsightsSpecification Pulumi.AzureNative.AwsConnector.Inputs.ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
DeletionProtectionEnabled bool
Determines whether the table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Amazon DynamoDB Developer Guide.
GlobalSecondaryIndexes List<Pulumi.AzureNative.AwsConnector.Inputs.GlobalSecondaryIndex>
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. CloudFormation doesn't wait for the index to finish creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
ImportSourceSpecification Pulumi.AzureNative.AwsConnector.Inputs.ImportSourceSpecification
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property, and also specify either the StreamSpecification, the TableClass property, or the DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
KeySchema List<Pulumi.AzureNative.AwsConnector.Inputs.KeySchema>
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
KinesisStreamSpecification Pulumi.AzureNative.AwsConnector.Inputs.KinesisStreamSpecification
The Kinesis Data Streams configuration for the specified table.
LocalSecondaryIndexes List<Pulumi.AzureNative.AwsConnector.Inputs.LocalSecondaryIndex>
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
PointInTimeRecoverySpecification Pulumi.AzureNative.AwsConnector.Inputs.PointInTimeRecoverySpecification
The settings used to enable point-in-time recovery.
ProvisionedThroughput Pulumi.AzureNative.AwsConnector.Inputs.ProvisionedThroughput
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property; if you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
ResourcePolicy Pulumi.AzureNative.AwsConnector.Inputs.ResourcePolicy
A resource-based policy document that contains permissions to add to the specified table, its indexes, and stream. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In an AWS CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template.
For example, if your template contains a resource-based policy that you later update outside of the template, and you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, if your template doesn't contain a resource-based policy but you add a policy outside of the template, that policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB is updated to match the one defined in the template. For more information, see Using resource-based policies for DynamoDB, Resource-based policy examples, and Resource-based policy considerations.
SseSpecification Pulumi.AzureNative.AwsConnector.Inputs.SSESpecification
Specifies the settings used to enable server-side encryption.
StreamArn string
The ARN of the table's DynamoDB stream.
StreamSpecification Pulumi.AzureNative.AwsConnector.Inputs.StreamSpecification
The settings for the DynamoDB table stream, which capture changes to items stored in the table.
TableClass string
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
TableName string
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource; you can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
Tags List<Pulumi.AzureNative.AwsConnector.Inputs.Tag>
An array of key-value pairs to apply to this resource. For more information, see Tag.
TimeToLiveSpecification Pulumi.AzureNative.AwsConnector.Inputs.TimeToLiveSpecification
Specifies the Time to Live (TTL) settings for the table. For detailed information about the limits in DynamoDB, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
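Several of the properties above carry cross-property rules: every KeySchema attribute must appear in AttributeDefinitions, PROVISIONED billing requires ProvisionedThroughput, and PAY_PER_REQUEST forbids it. A minimal sketch of these checks, assuming plain dictionaries with `attributeName` keys (the `validate_table_args` helper is hypothetical, not part of the provider SDK):

```python
def validate_table_args(attribute_definitions, key_schema,
                        billing_mode="PROVISIONED", provisioned_throughput=None):
    """Collect violations of the cross-property rules documented above."""
    errors = []
    # Every key attribute must be declared in attributeDefinitions.
    defined = {a["attributeName"] for a in attribute_definitions}
    for key in key_schema:
        if key["attributeName"] not in defined:
            errors.append(
                f"key attribute {key['attributeName']!r} is not declared "
                "in attributeDefinitions"
            )
    # Billing mode and provisioned throughput must be consistent.
    if billing_mode == "PROVISIONED" and provisioned_throughput is None:
        errors.append("billingMode PROVISIONED requires provisionedThroughput")
    if billing_mode == "PAY_PER_REQUEST" and provisioned_throughput is not None:
        errors.append("billingMode PAY_PER_REQUEST cannot specify provisionedThroughput")
    return errors
```

Running such a check before deployment surfaces these errors locally rather than as a failed create on the AWS side.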
Arn string
The Amazon Resource Name (ARN) of the table.
AttributeDefinitions []AttributeDefinition
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Update requires some interruption; replacement if you edit an existing AttributeDefinition.
BillingMode string
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - recommended for predictable workloads; sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - recommended for unpredictable workloads; sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
ContributorInsightsSpecification ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
DeletionProtectionEnabled bool
Determines whether the table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Amazon DynamoDB Developer Guide.
GlobalSecondaryIndexes []GlobalSecondaryIndex
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. CloudFormation doesn't wait for the index to finish creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
ImportSourceSpecification ImportSourceSpecification
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property, and also specify either the StreamSpecification, the TableClass property, or the DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
KeySchema []KeySchema
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
KinesisStreamSpecification KinesisStreamSpecification
The Kinesis Data Streams configuration for the specified table.
LocalSecondaryIndexes []LocalSecondaryIndex
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
PointInTimeRecoverySpecification PointInTimeRecoverySpecification
The settings used to enable point-in-time recovery.
ProvisionedThroughput ProvisionedThroughput
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property; if you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
ResourcePolicy ResourcePolicy
A resource-based policy document that contains permissions to add to the specified table, its indexes, and stream. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In an AWS CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template.
For example, if your template contains a resource-based policy that you later update outside of the template, and you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, if your template doesn't contain a resource-based policy but you add a policy outside of the template, that policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB is updated to match the one defined in the template. For more information, see Using resource-based policies for DynamoDB, Resource-based policy examples, and Resource-based policy considerations.
SseSpecification SSESpecification
Specifies the settings used to enable server-side encryption.
StreamArn string
The ARN of the table's DynamoDB stream.
StreamSpecification StreamSpecification
The settings for the DynamoDB table stream, which capture changes to items stored in the table.
TableClass string
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
TableName string
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource; you can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
Tags []Tag
An array of key-value pairs to apply to this resource. For more information, see Tag.
TimeToLiveSpecification TimeToLiveSpecification
Specifies the Time to Live (TTL) settings for the table. For detailed information about the limits in DynamoDB, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
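The ResourcePolicy notes above state a hard 20 KB limit on the JSON policy document, with whitespace counted toward the limit. A hedged sketch of a pre-deployment size check (the `policy_size_ok` helper and the `indent=2` serialization choice are assumptions for illustration; measure the document exactly as it will be submitted):

```python
import json

# 20 KB limit on a resource-based policy document in JSON format;
# DynamoDB counts whitespace toward this limit.
POLICY_SIZE_LIMIT = 20 * 1024

def policy_size_ok(policy_document: dict) -> bool:
    """Serialize the policy the same way it will be submitted and
    compare its byte length against the 20 KB limit."""
    serialized = json.dumps(policy_document, indent=2)
    return len(serialized.encode("utf-8")) <= POLICY_SIZE_LIMIT
```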
arn String
The Amazon Resource Name (ARN) of the table.
attributeDefinitions List<AttributeDefinition>
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Update requires some interruption; replacement if you edit an existing AttributeDefinition.
billingMode String
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - recommended for predictable workloads; sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - recommended for unpredictable workloads; sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
contributorInsightsSpecification ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
deletionProtectionEnabled Boolean
Determines whether the table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Amazon DynamoDB Developer Guide.
globalSecondaryIndexes List<GlobalSecondaryIndex>
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. CloudFormation doesn't wait for the index to finish creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
importSourceSpecification ImportSourceSpecification
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property, and also specify either the StreamSpecification, the TableClass property, or the DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
keySchema List<KeySchema>
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
kinesisStreamSpecification KinesisStreamSpecification
The Kinesis Data Streams configuration for the specified table.
localSecondaryIndexes List<LocalSecondaryIndex>
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
pointInTimeRecoverySpecification PointInTimeRecoverySpecification
The settings used to enable point-in-time recovery.
provisionedThroughput ProvisionedThroughput
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property; if you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
resourcePolicy ResourcePolicy
A resource-based policy document that contains permissions to add to the specified table, its indexes, and stream. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In an AWS CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template.
For example, if your template contains a resource-based policy that you later update outside of the template, and you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, if your template doesn't contain a resource-based policy but you add a policy outside of the template, that policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB is updated to match the one defined in the template. For more information, see Using resource-based policies for DynamoDB, Resource-based policy examples, and Resource-based policy considerations.
sseSpecification SSESpecification
Specifies the settings used to enable server-side encryption.
streamArn String
The ARN of the table's DynamoDB stream.
streamSpecification StreamSpecification
The settings for the DynamoDB table stream, which capture changes to items stored in the table.
tableClass String
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
tableName String
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource; you can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
tags List<Tag>
An array of key-value pairs to apply to this resource. For more information, see Tag.
timeToLiveSpecification TimeToLiveSpecification
Specifies the Time to Live (TTL) settings for the table. For detailed information about the limits in DynamoDB, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
arn string
Property arn
attributeDefinitions AttributeDefinition[]
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DDB table. Update requires: Some interruptions. Replacement if you edit an existing AttributeDefinition.
billingMode string
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - We recommend using PROVISIONED for predictable workloads. PROVISIONED sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads. PAY_PER_REQUEST sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
contributorInsightsSpecification ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified table. The settings used to enable or disable CloudWatch Contributor Insights.
deletionProtectionEnabled boolean
Determines if a table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Developer Guide.
globalSecondaryIndexes GlobalSecondaryIndex[]
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, CFNlong initiates the index creation and then proceeds with the stack update. CFNlong doesn't wait for the index to complete creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported. The following are exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
importSourceSpecification ImportSourceSpecification
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property, and also specify either the StreamSpecification, the TableClass property, or the DeletionProtectionEnabled property, the IAM entity creating/updating stack must have UpdateTable permission. Specifies the properties of data being imported from the S3 bucket source to the table.
keySchema KeySchema[]
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
kinesisStreamSpecification KinesisStreamSpecification
The Kinesis Data Streams configuration for the specified table. The Kinesis Data Streams configuration for the specified table.
localSecondaryIndexes LocalSecondaryIndex[]
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
pointInTimeRecoverySpecification PointInTimeRecoverySpecification
The settings used to enable point in time recovery. The settings used to enable point in time recovery.
provisionedThroughput ProvisionedThroughput
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
resourcePolicy ResourcePolicy
A resource-based policy document that contains permissions to add to the specified table. In an AWS CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information, see Using resource-based policies for DynamoDB and Resource-based policy examples. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. When defining resource-based policies in CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template. If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB is updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
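The 20 KB size cap above (whitespace included) can be pre-checked before deployment. A minimal sketch in plain Python, not the Pulumi SDK; `json.dumps` with indentation is an assumption standing in for however the policy is actually rendered, since DynamoDB measures the document as submitted:

```python
import json

MAX_POLICY_BYTES = 20 * 1024  # 20 KB cap; whitespace counts toward it


def policy_within_limit(policy: dict) -> bool:
    """Serialize the policy and compare its byte size to the 20 KB cap.

    The indent=2 serialization is illustrative; use whatever rendering
    you actually submit, because whitespace counts against the limit.
    """
    raw = json.dumps(policy, indent=2)
    return len(raw.encode("utf-8")) <= MAX_POLICY_BYTES
```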
sseSpecification SSESpecification
Represents the settings used to enable server-side encryption.
streamArn string
Property streamArn
streamSpecification StreamSpecification
The settings for the DynamoDB table stream (DynamoDB Streams), which captures changes to items stored in the table.
tableClass string
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
tableName string
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
tags Tag[]
An array of key-value pairs to apply to this resource. For more information, see Tag.
timeToLiveSpecification TimeToLiveSpecification
Specifies the Time to Live (TTL) settings for the table. For detailed information about the limits in DynamoDB, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
arn str
Property arn
attribute_definitions Sequence[AttributeDefinition]
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Updates require some interruption; editing an existing AttributeDefinition requires replacement.
billing_mode str
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - We recommend using PROVISIONED for predictable workloads. PROVISIONED sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads. PAY_PER_REQUEST sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
contributor_insights_specification ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
deletion_protection_enabled bool
Determines if a table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Developer Guide.
global_secondary_indexes Sequence[GlobalSecondaryIndex]
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. CloudFormation doesn't wait for the index to finish creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
import_source_specification ImportSourceSpecification
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property and also specify the StreamSpecification, TableClass, or DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
key_schema Sequence[KeySchema]
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
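The constraint above, that every attribute named in KeySchema must also appear in AttributeDefinitions, can be sketched as a pre-flight check. This is plain Python operating on illustrative dicts, not the Pulumi SDK types:

```python
def undefined_key_attributes(key_schema, attribute_definitions):
    """Return attribute names used in key_schema but missing from
    attribute_definitions (an empty list means the schema is consistent)."""
    defined = {d["attribute_name"] for d in attribute_definitions}
    return [k["attribute_name"] for k in key_schema
            if k["attribute_name"] not in defined]
```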
kinesis_stream_specification KinesisStreamSpecification
The Kinesis Data Streams configuration for the specified table.
local_secondary_indexes Sequence[LocalSecondaryIndex]
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
point_in_time_recovery_specification PointInTimeRecoverySpecification
The settings used to enable point-in-time recovery.
provisioned_throughput ProvisionedThroughput
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
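The coupling between billing_mode and provisioned_throughput described above can be validated before a program is submitted. A minimal sketch in plain Python (the dict shape and the helper name are illustrative, not the Pulumi SDK):

```python
def validate_billing(properties: dict) -> None:
    """Enforce the BillingMode / ProvisionedThroughput coupling.

    PROVISIONED (the default when billing_mode is unset) requires a
    provisioned_throughput block; PAY_PER_REQUEST forbids one.
    """
    mode = properties.get("billing_mode", "PROVISIONED")
    throughput = properties.get("provisioned_throughput")
    if mode == "PROVISIONED" and throughput is None:
        raise ValueError("PROVISIONED billing requires provisioned_throughput")
    if mode == "PAY_PER_REQUEST" and throughput is not None:
        raise ValueError("PAY_PER_REQUEST billing forbids provisioned_throughput")


# PAY_PER_REQUEST with no throughput block is a valid combination:
validate_billing({"billing_mode": "PAY_PER_REQUEST"})
```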
resource_policy ResourcePolicy
A resource-based policy document that contains permissions to add to the specified table. In an AWS CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information, see Using resource-based policies for DynamoDB and Resource-based policy examples. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. When defining resource-based policies in CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template. If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB is updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
sse_specification SSESpecification
Represents the settings used to enable server-side encryption.
stream_arn str
Property streamArn
stream_specification StreamSpecification
The settings for the DynamoDB table stream (DynamoDB Streams), which captures changes to items stored in the table.
table_class str
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
table_name str
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
tags Sequence[Tag]
An array of key-value pairs to apply to this resource. For more information, see Tag.
time_to_live_specification TimeToLiveSpecification
Specifies the Time to Live (TTL) settings for the table. For detailed information about the limits in DynamoDB, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
arn String
Property arn
attributeDefinitions List<Property Map>
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Updates require some interruption; editing an existing AttributeDefinition requires replacement.
billingMode String
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - We recommend using PROVISIONED for predictable workloads. PROVISIONED sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads. PAY_PER_REQUEST sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
contributorInsightsSpecification Property Map
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
deletionProtectionEnabled Boolean
Determines if a table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Developer Guide.
globalSecondaryIndexes List<Property Map>
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. CloudFormation doesn't wait for the index to finish creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
importSourceSpecification Property Map
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property and also specify the StreamSpecification, TableClass, or DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
keySchema List<Property Map>
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
kinesisStreamSpecification Property Map
The Kinesis Data Streams configuration for the specified table.
localSecondaryIndexes List<Property Map>
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
pointInTimeRecoverySpecification Property Map
The settings used to enable point-in-time recovery.
provisionedThroughput Property Map
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
resourcePolicy Property Map
A resource-based policy document that contains permissions to add to the specified table. In an AWS CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information, see Using resource-based policies for DynamoDB and Resource-based policy examples. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. When defining resource-based policies in CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template. If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB is updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
sseSpecification Property Map
Represents the settings used to enable server-side encryption.
streamArn String
Property streamArn
streamSpecification Property Map
The settings for the DynamoDB table stream (DynamoDB Streams), which captures changes to items stored in the table.
tableClass String
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
tableName String
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
tags List<Property Map>
An array of key-value pairs to apply to this resource. For more information, see Tag.
timeToLiveSpecification Property Map
Specifies the Time to Live (TTL) settings for the table. For detailed information about the limits in DynamoDB, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.

AwsDynamoDBTablePropertiesResponse, AwsDynamoDBTablePropertiesResponseArgs

Arn string
Property arn
AttributeDefinitions List<Pulumi.AzureNative.AwsConnector.Inputs.AttributeDefinitionResponse>
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Updates require some interruption; editing an existing AttributeDefinition requires replacement.
BillingMode string
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - We recommend using PROVISIONED for predictable workloads. PROVISIONED sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads. PAY_PER_REQUEST sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
ContributorInsightsSpecification Pulumi.AzureNative.AwsConnector.Inputs.ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
DeletionProtectionEnabled bool
Determines if a table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Developer Guide.
GlobalSecondaryIndexes List<Pulumi.AzureNative.AwsConnector.Inputs.GlobalSecondaryIndexResponse>
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. CloudFormation doesn't wait for the index to finish creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
ImportSourceSpecification Pulumi.AzureNative.AwsConnector.Inputs.ImportSourceSpecificationResponse
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property and also specify the StreamSpecification, TableClass, or DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
KeySchema List<Pulumi.AzureNative.AwsConnector.Inputs.KeySchemaResponse>
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
KinesisStreamSpecification Pulumi.AzureNative.AwsConnector.Inputs.KinesisStreamSpecificationResponse
The Kinesis Data Streams configuration for the specified table.
LocalSecondaryIndexes List<Pulumi.AzureNative.AwsConnector.Inputs.LocalSecondaryIndexResponse>
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
PointInTimeRecoverySpecification Pulumi.AzureNative.AwsConnector.Inputs.PointInTimeRecoverySpecificationResponse
The settings used to enable point-in-time recovery.
ProvisionedThroughput Pulumi.AzureNative.AwsConnector.Inputs.ProvisionedThroughputResponse
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
ResourcePolicy Pulumi.AzureNative.AwsConnector.Inputs.ResourcePolicyResponse
A resource-based policy document that contains permissions to add to the specified table. In an AWS CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information, see Using resource-based policies for DynamoDB and Resource-based policy examples. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. When defining resource-based policies in CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template. If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB is updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
SseSpecification Pulumi.AzureNative.AwsConnector.Inputs.SSESpecificationResponse
Represents the settings used to enable server-side encryption.
StreamArn string
Property streamArn
StreamSpecification Pulumi.AzureNative.AwsConnector.Inputs.StreamSpecificationResponse
The settings for the DynamoDB table stream (DynamoDB Streams), which captures changes to items stored in the table.
TableClass string
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
TableName string
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
Tags List<Pulumi.AzureNative.AwsConnector.Inputs.TagResponse>
An array of key-value pairs to apply to this resource. For more information, see Tag.
TimeToLiveSpecification Pulumi.AzureNative.AwsConnector.Inputs.TimeToLiveSpecificationResponse
Specifies the Time to Live (TTL) settings for the table. For detailed information about the limits in DynamoDB, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
Arn string
Property arn
AttributeDefinitions []AttributeDefinitionResponse
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Updates require some interruption; editing an existing AttributeDefinition requires replacement.
BillingMode string
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - We recommend using PROVISIONED for predictable workloads. PROVISIONED sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads. PAY_PER_REQUEST sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
ContributorInsightsSpecification ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
DeletionProtectionEnabled bool
Determines if a table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Developer Guide.
GlobalSecondaryIndexes []GlobalSecondaryIndexResponse
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. CloudFormation doesn't wait for the index to finish creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
ImportSourceSpecification ImportSourceSpecificationResponse
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property and also specify the StreamSpecification, TableClass, or DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
KeySchema []KeySchemaResponse
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
KinesisStreamSpecification KinesisStreamSpecificationResponse
The Kinesis Data Streams configuration for the specified table.
LocalSecondaryIndexes []LocalSecondaryIndexResponse
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
PointInTimeRecoverySpecification PointInTimeRecoverySpecificationResponse
The settings used to enable point-in-time recovery.
ProvisionedThroughput ProvisionedThroughputResponse
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
ResourcePolicy ResourcePolicyResponse
A resource-based policy document that contains permissions to add to the specified table. In an AWS CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information, see Using resource-based policies for DynamoDB and Resource-based policy examples. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. When defining resource-based policies in CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template. If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB is updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
SseSpecification SSESpecificationResponse
Specifies the settings to enable server-side encryption. Represents the settings used to enable server-side encryption.
StreamArn string
Property streamArn
StreamSpecification StreamSpecificationResponse
The settings for the DDB table stream, which capture changes to items stored in the table. Represents the DynamoDB Streams configuration for a table in DynamoDB.
TableClass string
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
TableName string
A name for the table. If you don't specify a name, CFNlong generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
Tags []TagResponse
An array of key-value pairs to apply to this resource. For more information, see Tag.
TimeToLiveSpecification TimeToLiveSpecificationResponse
Specifies the Time to Live (TTL) settings for the table. For detailed information about the limits in DynamoDB, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide. Represents the settings used to enable or disable Time to Live (TTL) for the specified table.
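The resourcePolicy notes above mention a hard constraint worth checking before deployment: the policy document may be at most 20 KB in JSON form, and whitespace counts toward that limit. A minimal pre-flight check, sketched here as a hypothetical Python helper (not part of the Pulumi SDK or the AWS API), could look like:

```python
import json

# DynamoDB rejects resource-based policies larger than 20 KB in JSON form,
# and whitespace counts toward the limit.
POLICY_SIZE_LIMIT_BYTES = 20 * 1024

def policy_size_ok(policy_document: dict, *, compact: bool = True) -> bool:
    """Return True if the serialized policy fits under the 20 KB limit.

    compact=True serializes without extra whitespace, the cheapest
    representation to submit; pretty-printed JSON counts its indentation
    against the same limit.
    """
    if compact:
        text = json.dumps(policy_document, separators=(",", ":"))
    else:
        text = json.dumps(policy_document, indent=2)
    return len(text.encode("utf-8")) <= POLICY_SIZE_LIMIT_BYTES
```

Serializing compactly before submission buys back whatever headroom indentation would otherwise consume.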
arn String
The Amazon Resource Name (ARN) of the table.
attributeDefinitions List<AttributeDefinitionResponse>
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Update requires: Some interruptions. Replacement if you edit an existing AttributeDefinition.
billingMode String
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - We recommend using PROVISIONED for predictable workloads; it sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads; it sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
contributorInsightsSpecification ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
deletionProtectionEnabled Boolean
Determines if a table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Amazon DynamoDB Developer Guide.
globalSecondaryIndexes List<GlobalSecondaryIndexResponse>
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. AWS CloudFormation doesn't wait for the index to complete creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
importSourceSpecification ImportSourceSpecificationResponse
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property and also specify either the StreamSpecification, TableClass, or DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
keySchema List<KeySchemaResponse>
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
kinesisStreamSpecification KinesisStreamSpecificationResponse
The Kinesis Data Streams configuration for the specified table.
localSecondaryIndexes List<LocalSecondaryIndexResponse>
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
pointInTimeRecoverySpecification PointInTimeRecoverySpecificationResponse
The settings used to enable point-in-time recovery.
provisionedThroughput ProvisionedThroughputResponse
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
resourcePolicy ResourcePolicyResponse
A resource-based policy document that contains permissions to add to the specified table, its indexes, and stream. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information, see Using resource-based policies for DynamoDB and Resource-based policy examples. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum supported size for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template. If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
sseSpecification SSESpecificationResponse
Specifies the settings used to enable server-side encryption.
streamArn String
The Amazon Resource Name (ARN) of the table's DynamoDB stream.
streamSpecification StreamSpecificationResponse
The settings for the DynamoDB table stream, which captures changes to items stored in the table.
tableClass String
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
tableName String
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
tags List<TagResponse>
An array of key-value pairs to apply to this resource. For more information, see Tag.
timeToLiveSpecification TimeToLiveSpecificationResponse
Specifies the Time to Live (TTL) settings for the table. For detailed information about TTL limits, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
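The billingMode and provisionedThroughput descriptions above pair up into a consistency rule: a PROVISIONED table (the default when billingMode is omitted) must carry throughput values, while a PAY_PER_REQUEST table must not. A minimal pre-flight check, sketched as a hypothetical Python helper (not part of the Pulumi SDK or the AWS API), could look like:

```python
from typing import Optional

def validate_billing(billing_mode: Optional[str],
                     provisioned_throughput: Optional[dict]) -> None:
    """Raise ValueError when the billing mode / throughput combination is invalid."""
    mode = billing_mode or "PROVISIONED"  # unspecified defaults to PROVISIONED
    if mode not in ("PROVISIONED", "PAY_PER_REQUEST"):
        raise ValueError(f"unknown billing mode: {mode!r}")
    if mode == "PROVISIONED" and provisioned_throughput is None:
        raise ValueError("PROVISIONED tables must specify ProvisionedThroughput")
    if mode == "PAY_PER_REQUEST" and provisioned_throughput is not None:
        raise ValueError("PAY_PER_REQUEST tables cannot specify ProvisionedThroughput")
```

Running such a check before a deployment surfaces the mismatch locally instead of as a failed resource operation.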
arn string
The Amazon Resource Name (ARN) of the table.
attributeDefinitions AttributeDefinitionResponse[]
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Update requires: Some interruptions. Replacement if you edit an existing AttributeDefinition.
billingMode string
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - We recommend using PROVISIONED for predictable workloads; it sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads; it sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
contributorInsightsSpecification ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
deletionProtectionEnabled boolean
Determines if a table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Amazon DynamoDB Developer Guide.
globalSecondaryIndexes GlobalSecondaryIndexResponse[]
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. AWS CloudFormation doesn't wait for the index to complete creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
importSourceSpecification ImportSourceSpecificationResponse
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property and also specify either the StreamSpecification, TableClass, or DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
keySchema KeySchemaResponse[]
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
kinesisStreamSpecification KinesisStreamSpecificationResponse
The Kinesis Data Streams configuration for the specified table.
localSecondaryIndexes LocalSecondaryIndexResponse[]
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
pointInTimeRecoverySpecification PointInTimeRecoverySpecificationResponse
The settings used to enable point-in-time recovery.
provisionedThroughput ProvisionedThroughputResponse
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
resourcePolicy ResourcePolicyResponse
A resource-based policy document that contains permissions to add to the specified table, its indexes, and stream. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information, see Using resource-based policies for DynamoDB and Resource-based policy examples. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum supported size for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template. If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
sseSpecification SSESpecificationResponse
Specifies the settings used to enable server-side encryption.
streamArn string
The Amazon Resource Name (ARN) of the table's DynamoDB stream.
streamSpecification StreamSpecificationResponse
The settings for the DynamoDB table stream, which captures changes to items stored in the table.
tableClass string
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
tableName string
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
tags TagResponse[]
An array of key-value pairs to apply to this resource. For more information, see Tag.
timeToLiveSpecification TimeToLiveSpecificationResponse
Specifies the Time to Live (TTL) settings for the table. For detailed information about TTL limits, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
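The keySchema description above carries a cross-property invariant: every attribute named in KeySchema must also appear in AttributeDefinitions. The check is a simple set containment test, sketched here as a hypothetical Python helper (an illustration, not part of the Pulumi SDK or the AWS API):

```python
def validate_key_schema(attribute_definitions: list, key_schema: list) -> None:
    """Check that every KeySchema attribute is declared in AttributeDefinitions.

    Both arguments are lists of dicts in the shape the table properties use,
    e.g. {"AttributeName": "pk", "AttributeType": "S"} and
    {"AttributeName": "pk", "KeyType": "HASH"}.
    """
    defined = {a["AttributeName"] for a in attribute_definitions}
    missing = [k["AttributeName"] for k in key_schema
               if k["AttributeName"] not in defined]
    if missing:
        raise ValueError(
            f"KeySchema attributes not present in AttributeDefinitions: {missing}"
        )
```

The same containment check applies to the key schemas of secondary indexes, since their key attributes must also come from AttributeDefinitions.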
arn str
The Amazon Resource Name (ARN) of the table.
attribute_definitions Sequence[AttributeDefinitionResponse]
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DynamoDB table. Update requires: Some interruptions. Replacement if you edit an existing AttributeDefinition.
billing_mode str
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - We recommend using PROVISIONED for predictable workloads; it sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads; it sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
contributor_insights_specification ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified table.
deletion_protection_enabled bool
Determines if a table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Amazon DynamoDB Developer Guide.
global_secondary_indexes Sequence[GlobalSecondaryIndexResponse]
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, AWS CloudFormation initiates the index creation and then proceeds with the stack update. AWS CloudFormation doesn't wait for the index to complete creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported, with the following exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
import_source_specification ImportSourceSpecificationResponse
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property and also specify either the StreamSpecification, TableClass, or DeletionProtectionEnabled property, the IAM entity creating or updating the stack must have UpdateTable permission.
key_schema Sequence[KeySchemaResponse]
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
kinesis_stream_specification KinesisStreamSpecificationResponse
The Kinesis Data Streams configuration for the specified table.
local_secondary_indexes Sequence[LocalSecondaryIndexResponse]
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
point_in_time_recovery_specification PointInTimeRecoverySpecificationResponse
The settings used to enable point-in-time recovery.
provisioned_throughput ProvisionedThroughputResponse
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property.
resource_policy ResourcePolicyResponse
A resource-based policy document that contains permissions to add to the specified table, its indexes, and stream. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information, see Using resource-based policies for DynamoDB and Resource-based policy examples. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum supported size for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template. If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
sse_specification SSESpecificationResponse
Specifies the settings used to enable server-side encryption.
stream_arn str
The Amazon Resource Name (ARN) of the table's DynamoDB stream.
stream_specification StreamSpecificationResponse
The settings for the DynamoDB table stream, which captures changes to items stored in the table.
table_class str
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
table_name str
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
tags Sequence[TagResponse]
An array of key-value pairs to apply to this resource. For more information, see Tag.
time_to_live_specification TimeToLiveSpecificationResponse
Specifies the Time to Live (TTL) settings for the table. For detailed information about TTL limits, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide.
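The index descriptions above also state per-table count limits: up to 20 global secondary indexes and up to 5 local secondary indexes. A count check, sketched as a hypothetical Python helper (an illustration only; the actual limits are enforced service-side and the GSI quota may differ per account), could look like:

```python
MAX_GLOBAL_SECONDARY_INDEXES = 20  # limit stated in the global_secondary_indexes notes
MAX_LOCAL_SECONDARY_INDEXES = 5    # limit stated in the local_secondary_indexes notes

def validate_index_counts(global_secondary_indexes: list,
                          local_secondary_indexes: list) -> None:
    """Raise ValueError if either index list exceeds the documented limit."""
    if len(global_secondary_indexes) > MAX_GLOBAL_SECONDARY_INDEXES:
        raise ValueError(
            f"at most {MAX_GLOBAL_SECONDARY_INDEXES} global secondary indexes allowed"
        )
    if len(local_secondary_indexes) > MAX_LOCAL_SECONDARY_INDEXES:
        raise ValueError(
            f"at most {MAX_LOCAL_SECONDARY_INDEXES} local secondary indexes allowed"
        )
```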
arn String
Property arn
attributeDefinitions List<Property Map>
A list of attributes that describe the key schema for the table and indexes. This property is required to create a DDB table. Update requires: Some interruptions. Replacement if you edit an existing AttributeDefinition.
billingMode String
Specify how you are charged for read and write throughput and how you manage capacity. Valid values include: + PROVISIONED - We recommend using PROVISIONED for predictable workloads. PROVISIONED sets the billing mode to Provisioned Mode. + PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable workloads. PAY_PER_REQUEST sets the billing mode to On-Demand Mode. If not specified, the default is PROVISIONED.
contributorInsightsSpecification Property Map
The settings used to enable or disable CloudWatch Contributor Insights for the specified table. The settings used to enable or disable CloudWatch Contributor Insights.
deletionProtectionEnabled Boolean
Determines if a table is protected from deletion. When enabled, the table cannot be deleted by any user or process. This setting is disabled by default. For more information, see Using deletion protection in the Developer Guide.
globalSecondaryIndexes List<Property Map>
Global secondary indexes to be created on the table. You can create up to 20 global secondary indexes. If you update a table to include a new global secondary index, CFNlong initiates the index creation and then proceeds with the stack update. CFNlong doesn't wait for the index to complete creation because the backfilling phase can take a long time, depending on the size of the table. You can't use the index or update the table until the index's status is ACTIVE. You can track its status by using the DynamoDB DescribeTable command. If you add or delete an index during an update, we recommend that you don't update any other resources. If your stack fails to update and is rolled back while adding a new index, you must manually delete the index. Updates are not supported. The following are exceptions: + If you update either the contributor insights specification or the provisioned throughput values of global secondary indexes, you can update the table without interruption. + You can delete or add one global secondary index without interruption. If you do both in the same update (for example, by changing the index's logical ID), the update fails.
importSourceSpecification Property Map
Specifies the properties of data being imported from the S3 bucket source to the table. If you specify the ImportSourceSpecification property, and also specify either the StreamSpecification, the TableClass property, or the DeletionProtectionEnabled property, the IAM entity creating/updating stack must have UpdateTable permission. Specifies the properties of data being imported from the S3 bucket source to the table.
keySchema List<Property Map>
Specifies the attributes that make up the primary key for the table. The attributes in the KeySchema property must also be defined in the AttributeDefinitions property.
kinesisStreamSpecification Property Map
The Kinesis Data Streams configuration for the specified table. The Kinesis Data Streams configuration for the specified table.
localSecondaryIndexes List<Property Map>
Local secondary indexes to be created on the table. You can create up to 5 local secondary indexes. Each index is scoped to a given hash key value. The size of each hash key can be up to 10 gigabytes.
pointInTimeRecoverySpecification Property Map
The settings used to enable point in time recovery. The settings used to enable point in time recovery.
provisionedThroughput Property Map
Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Amazon DynamoDB Table ProvisionedThroughput. If you set BillingMode as PROVISIONED, you must specify this property. If you set BillingMode as PAY_PER_REQUEST, you cannot specify this property. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
resourcePolicy Property Map
A resource-based policy document that contains permissions to add to the specified table. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and stream. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. While defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
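As a rough illustration of the 20 KB policy-size limit noted above, the Python sketch below measures a policy document's serialized size, whitespace included, assuming the policy is submitted as compact JSON. The policy document, principal ARN, and actions are illustrative placeholders, not values from this API.

```python
import json

# Hypothetical minimal resource-based policy; all names are illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "*",
        }
    ],
}

def policy_size_ok(doc, limit_bytes=20 * 1024):
    # DynamoDB counts whitespace toward the 20 KB limit, so measure the
    # document exactly as it will be submitted (here: compact JSON).
    return len(json.dumps(doc).encode("utf-8")) <= limit_bytes

print(policy_size_ok(policy))  # a policy this small is well under 20 KB
```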
sseSpecification Property Map
Specifies the settings to enable server-side encryption. Represents the settings used to enable server-side encryption.
streamArn String
The Amazon Resource Name (ARN) of the DynamoDB stream.
streamSpecification Property Map
The settings for the DynamoDB table stream, which capture changes to items stored in the table. Represents the DynamoDB Streams configuration for a table in DynamoDB.
tableClass String
The table class of the new table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS.
tableName String
A name for the table. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the table name. For more information, see Name Type. If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.
tags List<Property Map>
An array of key-value pairs to apply to this resource. For more information, see Tag.
timeToLiveSpecification Property Map
Specifies the Time to Live (TTL) settings for the table. For detailed information about the limits in DynamoDB, see Limits in Amazon DynamoDB in the Amazon DynamoDB Developer Guide. Represents the settings used to enable or disable Time to Live (TTL) for the specified table.
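DynamoDB TTL expiry values are Unix epoch timestamps in seconds stored in the designated TTL attribute. A minimal sketch for computing one (the helper name is ours, not part of this API):

```python
import time

def ttl_epoch_seconds(days_from_now, now=None):
    """Return a Unix epoch timestamp (in seconds) suitable for a
    DynamoDB TTL attribute, `days_from_now` days in the future."""
    base = time.time() if now is None else now
    return int(base + days_from_now * 86400)

# e.g. expire items 30 days after they are written:
expiry = ttl_epoch_seconds(30)
```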

ContributorInsightsSpecification
, ContributorInsightsSpecificationArgs

Enabled bool
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
Enabled bool
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
enabled Boolean
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
enabled boolean
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
enabled bool
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
enabled Boolean
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).

ContributorInsightsSpecificationResponse
, ContributorInsightsSpecificationResponseArgs

Enabled bool
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
Enabled bool
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
enabled Boolean
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
enabled boolean
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
enabled bool
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).
enabled Boolean
Indicates whether CloudWatch Contributor Insights are to be enabled (true) or disabled (false).

Csv
, CsvArgs

Delimiter string
The delimiter used for separating items in the CSV file being imported.
HeaderList List<string>
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
Delimiter string
The delimiter used for separating items in the CSV file being imported.
HeaderList []string
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
delimiter String
The delimiter used for separating items in the CSV file being imported.
headerList List<String>
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
delimiter string
The delimiter used for separating items in the CSV file being imported.
headerList string[]
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
delimiter str
The delimiter used for separating items in the CSV file being imported.
header_list Sequence[str]
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
delimiter String
The delimiter used for separating items in the CSV file being imported.
headerList List<String>
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
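The headerList semantics described above can be sketched in Python with the standard csv module. The helper below is illustrative only, not part of this API: when a header list is supplied, every line of the file is data; otherwise the first line is the header.

```python
import csv
import io

def read_import_rows(text, header_list=None):
    """Sketch of DynamoDB import's headerList behavior for CSV sources."""
    rows = list(csv.reader(io.StringIO(text)))
    if header_list is not None:
        header = header_list
        data = rows                       # first line is data, not a header
    else:
        header, data = rows[0], rows[1:]  # first line is the header
    return [dict(zip(header, row)) for row in data]

sample = "id,name\n1,alpha\n2,beta\n"
print(read_import_rows(sample))                          # header taken from file
print(read_import_rows(sample, header_list=["a", "b"]))  # all 3 lines are data
```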

CsvResponse
, CsvResponseArgs

Delimiter string
The delimiter used for separating items in the CSV file being imported.
HeaderList List<string>
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
Delimiter string
The delimiter used for separating items in the CSV file being imported.
HeaderList []string
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
delimiter String
The delimiter used for separating items in the CSV file being imported.
headerList List<String>
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
delimiter string
The delimiter used for separating items in the CSV file being imported.
headerList string[]
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
delimiter str
The delimiter used for separating items in the CSV file being imported.
header_list Sequence[str]
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.
delimiter String
The delimiter used for separating items in the CSV file being imported.
headerList List<String>
List of the headers used to specify a common header for all source CSV files being imported. If this field is specified, the first line of each CSV file is treated as data instead of the header. If this field is not specified, the first line of each CSV file is treated as the header.

DynamoDBTableProperties
, DynamoDBTablePropertiesArgs

Arn string
Amazon Resource Name (ARN)
AwsAccountId string
AWS Account ID
AwsProperties Pulumi.AzureNative.AwsConnector.Inputs.AwsDynamoDBTableProperties
AWS Properties
AwsRegion string
AWS Region
AwsSourceSchema string
AWS Source Schema
AwsTags Dictionary<string, string>
AWS Tags
PublicCloudConnectorsResourceId string
Public Cloud Connectors Resource ID
PublicCloudResourceName string
Public Cloud Resource Name
Arn string
Amazon Resource Name (ARN)
AwsAccountId string
AWS Account ID
AwsProperties AwsDynamoDBTableProperties
AWS Properties
AwsRegion string
AWS Region
AwsSourceSchema string
AWS Source Schema
AwsTags map[string]string
AWS Tags
PublicCloudConnectorsResourceId string
Public Cloud Connectors Resource ID
PublicCloudResourceName string
Public Cloud Resource Name
arn String
Amazon Resource Name (ARN)
awsAccountId String
AWS Account ID
awsProperties AwsDynamoDBTableProperties
AWS Properties
awsRegion String
AWS Region
awsSourceSchema String
AWS Source Schema
awsTags Map<String,String>
AWS Tags
publicCloudConnectorsResourceId String
Public Cloud Connectors Resource ID
publicCloudResourceName String
Public Cloud Resource Name
arn string
Amazon Resource Name (ARN)
awsAccountId string
AWS Account ID
awsProperties AwsDynamoDBTableProperties
AWS Properties
awsRegion string
AWS Region
awsSourceSchema string
AWS Source Schema
awsTags {[key: string]: string}
AWS Tags
publicCloudConnectorsResourceId string
Public Cloud Connectors Resource ID
publicCloudResourceName string
Public Cloud Resource Name
arn str
Amazon Resource Name (ARN)
aws_account_id str
AWS Account ID
aws_properties AwsDynamoDBTableProperties
AWS Properties
aws_region str
AWS Region
aws_source_schema str
AWS Source Schema
aws_tags Mapping[str, str]
AWS Tags
public_cloud_connectors_resource_id str
Public Cloud Connectors Resource ID
public_cloud_resource_name str
Public Cloud Resource Name
arn String
Amazon Resource Name (ARN)
awsAccountId String
AWS Account ID
awsProperties Property Map
AWS Properties
awsRegion String
AWS Region
awsSourceSchema String
AWS Source Schema
awsTags Map<String>
AWS Tags
publicCloudConnectorsResourceId String
Public Cloud Connectors Resource ID
publicCloudResourceName String
Public Cloud Resource Name

DynamoDBTablePropertiesResponse
, DynamoDBTablePropertiesResponseArgs

ProvisioningState This property is required. string
The status of the last operation.
Arn string
Amazon Resource Name (ARN)
AwsAccountId string
AWS Account ID
AwsProperties Pulumi.AzureNative.AwsConnector.Inputs.AwsDynamoDBTablePropertiesResponse
AWS Properties
AwsRegion string
AWS Region
AwsSourceSchema string
AWS Source Schema
AwsTags Dictionary<string, string>
AWS Tags
PublicCloudConnectorsResourceId string
Public Cloud Connectors Resource ID
PublicCloudResourceName string
Public Cloud Resource Name
ProvisioningState This property is required. string
The status of the last operation.
Arn string
Amazon Resource Name (ARN)
AwsAccountId string
AWS Account ID
AwsProperties AwsDynamoDBTablePropertiesResponse
AWS Properties
AwsRegion string
AWS Region
AwsSourceSchema string
AWS Source Schema
AwsTags map[string]string
AWS Tags
PublicCloudConnectorsResourceId string
Public Cloud Connectors Resource ID
PublicCloudResourceName string
Public Cloud Resource Name
provisioningState This property is required. String
The status of the last operation.
arn String
Amazon Resource Name (ARN)
awsAccountId String
AWS Account ID
awsProperties AwsDynamoDBTablePropertiesResponse
AWS Properties
awsRegion String
AWS Region
awsSourceSchema String
AWS Source Schema
awsTags Map<String,String>
AWS Tags
publicCloudConnectorsResourceId String
Public Cloud Connectors Resource ID
publicCloudResourceName String
Public Cloud Resource Name
provisioningState This property is required. string
The status of the last operation.
arn string
Amazon Resource Name (ARN)
awsAccountId string
AWS Account ID
awsProperties AwsDynamoDBTablePropertiesResponse
AWS Properties
awsRegion string
AWS Region
awsSourceSchema string
AWS Source Schema
awsTags {[key: string]: string}
AWS Tags
publicCloudConnectorsResourceId string
Public Cloud Connectors Resource ID
publicCloudResourceName string
Public Cloud Resource Name
provisioning_state This property is required. str
The status of the last operation.
arn str
Amazon Resource Name (ARN)
aws_account_id str
AWS Account ID
aws_properties AwsDynamoDBTablePropertiesResponse
AWS Properties
aws_region str
AWS Region
aws_source_schema str
AWS Source Schema
aws_tags Mapping[str, str]
AWS Tags
public_cloud_connectors_resource_id str
Public Cloud Connectors Resource ID
public_cloud_resource_name str
Public Cloud Resource Name
provisioningState This property is required. String
The status of the last operation.
arn String
Amazon Resource Name (ARN)
awsAccountId String
AWS Account ID
awsProperties Property Map
AWS Properties
awsRegion String
AWS Region
awsSourceSchema String
AWS Source Schema
awsTags Map<String>
AWS Tags
publicCloudConnectorsResourceId String
Public Cloud Connectors Resource ID
publicCloudResourceName String
Public Cloud Resource Name

GlobalSecondaryIndex
, GlobalSecondaryIndexArgs

ContributorInsightsSpecification Pulumi.AzureNative.AwsConnector.Inputs.ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
IndexName string
The name of the global secondary index. The name must be unique among all other indexes on this table.
KeySchema List<Pulumi.AzureNative.AwsConnector.Inputs.KeySchema>
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
Projection Pulumi.AzureNative.AwsConnector.Inputs.Projection
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
ProvisionedThroughput Pulumi.AzureNative.AwsConnector.Inputs.ProvisionedThroughput
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
ContributorInsightsSpecification ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
IndexName string
The name of the global secondary index. The name must be unique among all other indexes on this table.
KeySchema []KeySchema
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
Projection Projection
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
ProvisionedThroughput ProvisionedThroughput
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
contributorInsightsSpecification ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
indexName String
The name of the global secondary index. The name must be unique among all other indexes on this table.
keySchema List<KeySchema>
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Projection
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
provisionedThroughput ProvisionedThroughput
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
contributorInsightsSpecification ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
indexName string
The name of the global secondary index. The name must be unique among all other indexes on this table.
keySchema KeySchema[]
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Projection
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
provisionedThroughput ProvisionedThroughput
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
contributor_insights_specification ContributorInsightsSpecification
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
index_name str
The name of the global secondary index. The name must be unique among all other indexes on this table.
key_schema Sequence[KeySchema]
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Projection
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
provisioned_throughput ProvisionedThroughput
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
contributorInsightsSpecification Property Map
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
indexName String
The name of the global secondary index. The name must be unique among all other indexes on this table.
keySchema List<Property Map>
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Property Map
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
provisionedThroughput Property Map
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
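The keySchema rows above describe the HASH/RANGE rules for a global secondary index: exactly one HASH (partition) key and at most one RANGE (sort) key. A small validation sketch, using illustrative attribute and index names that are not part of this API:

```python
def validate_key_schema(key_schema):
    """Check the key schema rules described above: exactly one HASH
    (partition) key and at most one RANGE (sort) key."""
    types = [entry["keyType"] for entry in key_schema]
    return types.count("HASH") == 1 and types.count("RANGE") <= 1

# Hypothetical GSI definition matching the camelCase shape in this listing.
gsi = {
    "indexName": "byStatus",
    "keySchema": [
        {"attributeName": "status", "keyType": "HASH"},
        {"attributeName": "createdAt", "keyType": "RANGE"},
    ],
}
print(validate_key_schema(gsi["keySchema"]))  # True
```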

GlobalSecondaryIndexResponse
, GlobalSecondaryIndexResponseArgs

ContributorInsightsSpecification Pulumi.AzureNative.AwsConnector.Inputs.ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
IndexName string
The name of the global secondary index. The name must be unique among all other indexes on this table.
KeySchema List<Pulumi.AzureNative.AwsConnector.Inputs.KeySchemaResponse>
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
Projection Pulumi.AzureNative.AwsConnector.Inputs.ProjectionResponse
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
ProvisionedThroughput Pulumi.AzureNative.AwsConnector.Inputs.ProvisionedThroughputResponse
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
ContributorInsightsSpecification ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
IndexName string
The name of the global secondary index. The name must be unique among all other indexes on this table.
KeySchema []KeySchemaResponse
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
Projection ProjectionResponse
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
ProvisionedThroughput ProvisionedThroughputResponse
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
contributorInsightsSpecification ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
indexName String
The name of the global secondary index. The name must be unique among all other indexes on this table.
keySchema List<KeySchemaResponse>
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection ProjectionResponse
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
provisionedThroughput ProvisionedThroughputResponse
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide. Throughput for the specified table, which consists of values for ReadCapacityUnits and WriteCapacityUnits. For more information about the contents of a provisioned throughput structure, see Table ProvisionedThroughput.
contributorInsightsSpecification ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index. The settings used to enable or disable CloudWatch Contributor Insights.
indexName string
The name of the global secondary index. The name must be unique among all other indexes on this table.
keySchema KeySchemaResponse[]
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection ProjectionResponse
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
provisionedThroughput ProvisionedThroughputResponse
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.
contributor_insights_specification ContributorInsightsSpecificationResponse
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index.
index_name str
The name of the global secondary index. The name must be unique among all other indexes on this table.
key_schema Sequence[KeySchemaResponse]
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection ProjectionResponse
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
provisioned_throughput ProvisionedThroughputResponse
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.
contributorInsightsSpecification Property Map
The settings used to enable or disable CloudWatch Contributor Insights for the specified global secondary index.
indexName String
The name of the global secondary index. The name must be unique among all other indexes on this table.
keySchema List<Property Map>
The complete key schema for a global secondary index, which consists of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Property Map
Represents attributes that are copied (projected) from the table into the global secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
provisionedThroughput Property Map
Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.
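Putting the properties above together, a global secondary index value can be sketched as a plain object. The interfaces below are local illustrations that mirror the documented property names; they are not the actual Pulumi SDK types, and the index and attribute names (`email-index`, `email`, `createdAt`) are made up for the example.

```typescript
// Local sketch types mirroring the documented GlobalSecondaryIndex properties.
interface KeySchemaElement {
  attributeName: string;
  keyType: "HASH" | "RANGE"; // HASH = partition key, RANGE = sort key
}

interface GlobalSecondaryIndexSketch {
  indexName: string; // must be unique among the table's indexes
  keySchema: KeySchemaElement[];
  projection: { projectionType: string; nonKeyAttributes?: string[] };
  provisionedThroughput?: { readCapacityUnits: number; writeCapacityUnits: number };
  contributorInsightsSpecification?: { enabled: boolean };
}

const emailIndex: GlobalSecondaryIndexSketch = {
  indexName: "email-index",
  keySchema: [
    { attributeName: "email", keyType: "HASH" },      // partition (hash) key
    { attributeName: "createdAt", keyType: "RANGE" }, // sort (range) key
  ],
  projection: { projectionType: "KEYS_ONLY" },
  provisionedThroughput: { readCapacityUnits: 5, writeCapacityUnits: 5 },
  contributorInsightsSpecification: { enabled: true },
};
```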

ImportSourceSpecification, ImportSourceSpecificationArgs

InputCompressionType string
Type of compression to be used on the input coming from the imported table.
InputFormat string
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
InputFormatOptions Pulumi.AzureNative.AwsConnector.Inputs.InputFormatOptions
Additional properties that specify how the input is formatted. There is one value, CsvOption.
S3BucketSource Pulumi.AzureNative.AwsConnector.Inputs.S3BucketSource
The S3 bucket that provides the source for the import.
InputCompressionType string
Type of compression to be used on the input coming from the imported table.
InputFormat string
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
InputFormatOptions InputFormatOptions
Additional properties that specify how the input is formatted. There is one value, CsvOption.
S3BucketSource S3BucketSource
The S3 bucket that provides the source for the import.
inputCompressionType String
Type of compression to be used on the input coming from the imported table.
inputFormat String
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
inputFormatOptions InputFormatOptions
Additional properties that specify how the input is formatted. There is one value, CsvOption.
s3BucketSource S3BucketSource
The S3 bucket that provides the source for the import.
inputCompressionType string
Type of compression to be used on the input coming from the imported table.
inputFormat string
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
inputFormatOptions InputFormatOptions
Additional properties that specify how the input is formatted. There is one value, CsvOption.
s3BucketSource S3BucketSource
The S3 bucket that provides the source for the import.
input_compression_type str
Type of compression to be used on the input coming from the imported table.
input_format str
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
input_format_options InputFormatOptions
Additional properties that specify how the input is formatted. There is one value, CsvOption.
s3_bucket_source S3BucketSource
The S3 bucket that provides the source for the import.
inputCompressionType String
Type of compression to be used on the input coming from the imported table.
inputFormat String
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
inputFormatOptions Property Map
Additional properties that specify how the input is formatted. There is one value, CsvOption.
s3BucketSource Property Map
The S3 bucket that provides the source for the import.
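Combining the fields above, an import source specification can be sketched as follows. The interfaces are local illustrations of the documented property names, not the SDK types; the nested `s3Bucket`/`s3KeyPrefix` field names inside `s3BucketSource` follow the AWS DynamoDB import API and are assumptions here, as are the bucket name and key prefix.

```typescript
// Local sketch types mirroring the documented ImportSourceSpecification properties.
interface CsvOptions {
  delimiter?: string;    // field delimiter for CSV input
  headerList?: string[]; // column names when the file has no header row
}

interface ImportSourceSpecificationSketch {
  inputCompressionType?: string;                // compression of the input data
  inputFormat: "CSV" | "DYNAMODB_JSON" | "ION"; // documented valid values
  inputFormatOptions?: { csv?: CsvOptions };    // only CsvOption exists
  s3BucketSource: { s3Bucket: string; s3KeyPrefix?: string };
}

const importSpec: ImportSourceSpecificationSketch = {
  inputCompressionType: "GZIP",
  inputFormat: "CSV",
  inputFormatOptions: {
    csv: { delimiter: ",", headerList: ["pk", "sk", "payload"] },
  },
  s3BucketSource: { s3Bucket: "my-import-bucket", s3KeyPrefix: "exports/2024/" },
};
```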

ImportSourceSpecificationResponse, ImportSourceSpecificationResponseArgs

InputCompressionType string
Type of compression to be used on the input coming from the imported table.
InputFormat string
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
InputFormatOptions Pulumi.AzureNative.AwsConnector.Inputs.InputFormatOptionsResponse
Additional properties that specify how the input is formatted. There is one value, CsvOption.
S3BucketSource Pulumi.AzureNative.AwsConnector.Inputs.S3BucketSourceResponse
The S3 bucket that provides the source for the import.
InputCompressionType string
Type of compression to be used on the input coming from the imported table.
InputFormat string
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
InputFormatOptions InputFormatOptionsResponse
Additional properties that specify how the input is formatted. There is one value, CsvOption.
S3BucketSource S3BucketSourceResponse
The S3 bucket that provides the source for the import.
inputCompressionType String
Type of compression to be used on the input coming from the imported table.
inputFormat String
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
inputFormatOptions InputFormatOptionsResponse
Additional properties that specify how the input is formatted. There is one value, CsvOption.
s3BucketSource S3BucketSourceResponse
The S3 bucket that provides the source for the import.
inputCompressionType string
Type of compression to be used on the input coming from the imported table.
inputFormat string
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
inputFormatOptions InputFormatOptionsResponse
Additional properties that specify how the input is formatted. There is one value, CsvOption.
s3BucketSource S3BucketSourceResponse
The S3 bucket that provides the source for the import.
input_compression_type str
Type of compression to be used on the input coming from the imported table.
input_format str
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
input_format_options InputFormatOptionsResponse
Additional properties that specify how the input is formatted. There is one value, CsvOption.
s3_bucket_source S3BucketSourceResponse
The S3 bucket that provides the source for the import.
inputCompressionType String
Type of compression to be used on the input coming from the imported table.
inputFormat String
The format of the source data. Valid values for ImportFormat are CSV, DYNAMODB_JSON or ION.
inputFormatOptions Property Map
Additional properties that specify how the input is formatted. There is one value, CsvOption.
s3BucketSource Property Map
The S3 bucket that provides the source for the import.

InputFormatOptions, InputFormatOptionsArgs

Csv Pulumi.AzureNative.AwsConnector.Inputs.Csv
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
Csv Csv
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
csv Csv
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
csv Csv
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
csv Csv
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
csv Property Map
The options for imported source files in CSV format. The values are Delimiter and HeaderList.

InputFormatOptionsResponse, InputFormatOptionsResponseArgs

Csv Pulumi.AzureNative.AwsConnector.Inputs.CsvResponse
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
Csv CsvResponse
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
csv CsvResponse
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
csv CsvResponse
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
csv CsvResponse
The options for imported source files in CSV format. The values are Delimiter and HeaderList.
csv Property Map
The options for imported source files in CSV format. The values are Delimiter and HeaderList.

KeySchema, KeySchemaArgs

AttributeName string
The name of a key attribute.
KeyType string
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
AttributeName string
The name of a key attribute.
KeyType string
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
attributeName String
The name of a key attribute.
keyType String
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
attributeName string
The name of a key attribute.
keyType string
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
attribute_name str
The name of a key attribute.
key_type str
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
attributeName String
The name of a key attribute.
keyType String
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
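Since a key schema is just an ordered list of attribute-name/key-type pairs, picking out the partition or sort key is a simple lookup. The helper below is an illustrative sketch using a local type, not part of the Pulumi SDK.

```typescript
type KeyType = "HASH" | "RANGE";

// Local sketch of a key schema element, mirroring the documented properties.
interface KeySchemaElement {
  attributeName: string; // the name of a key attribute
  keyType: KeyType;      // HASH = partition key, RANGE = sort key
}

// Returns the attribute name playing the given role, if present.
function keyAttribute(schema: KeySchemaElement[], role: KeyType): string | undefined {
  return schema.find((k) => k.keyType === role)?.attributeName;
}

const schema: KeySchemaElement[] = [
  { attributeName: "userId", keyType: "HASH" },
  { attributeName: "timestamp", keyType: "RANGE" },
];
```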

KeySchemaResponse, KeySchemaResponseArgs

AttributeName string
The name of a key attribute.
KeyType string
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
AttributeName string
The name of a key attribute.
KeyType string
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
attributeName String
The name of a key attribute.
keyType String
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
attributeName string
The name of a key attribute.
keyType string
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
attribute_name str
The name of a key attribute.
key_type str
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
attributeName String
The name of a key attribute.
keyType String
The role that this key attribute will assume: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.

KinesisStreamSpecification, KinesisStreamSpecificationArgs

ApproximateCreationDateTimePrecision string | Pulumi.AzureNative.AwsConnector.KinesisStreamSpecificationApproximateCreationDateTimePrecision
The precision for the time and date that the stream was created.
StreamArn string
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
ApproximateCreationDateTimePrecision string | KinesisStreamSpecificationApproximateCreationDateTimePrecision
The precision for the time and date that the stream was created.
StreamArn string
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
approximateCreationDateTimePrecision String | KinesisStreamSpecificationApproximateCreationDateTimePrecision
The precision for the time and date that the stream was created.
streamArn String
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
approximateCreationDateTimePrecision string | KinesisStreamSpecificationApproximateCreationDateTimePrecision
The precision for the time and date that the stream was created.
streamArn string
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
approximate_creation_date_time_precision str | KinesisStreamSpecificationApproximateCreationDateTimePrecision
The precision for the time and date that the stream was created.
stream_arn str
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
approximateCreationDateTimePrecision String | "MICROSECOND" | "MILLISECOND"
The precision for the time and date that the stream was created.
streamArn String
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
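A Kinesis stream specification pairs a stream ARN with an optional timestamp precision. The sketch below uses a local type mirroring the documented properties; the ARN is a made-up example that happens to satisfy the documented length constraint (37 to 1024 characters).

```typescript
type Precision = "MICROSECOND" | "MILLISECOND";

// Local sketch mirroring the documented KinesisStreamSpecification properties.
interface KinesisStreamSpecificationSketch {
  approximateCreationDateTimePrecision?: Precision;
  streamArn: string; // documented length constraint: 37..1024 characters
}

const kinesisSpec: KinesisStreamSpecificationSketch = {
  approximateCreationDateTimePrecision: "MILLISECOND",
  streamArn: "arn:aws:kinesis:us-east-1:123456789012:stream/table-changes",
};
```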

KinesisStreamSpecificationApproximateCreationDateTimePrecision, KinesisStreamSpecificationApproximateCreationDateTimePrecisionArgs

MICROSECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MICROSECOND
MILLISECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MILLISECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecisionMICROSECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MICROSECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecisionMILLISECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MILLISECOND
MICROSECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MICROSECOND
MILLISECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MILLISECOND
MICROSECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MICROSECOND
MILLISECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MILLISECOND
MICROSECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MICROSECOND
MILLISECOND
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MILLISECOND
"MICROSECOND"
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MICROSECOND
"MILLISECOND"
KinesisStreamSpecificationApproximateCreationDateTimePrecision enum MILLISECOND

KinesisStreamSpecificationResponse, KinesisStreamSpecificationResponseArgs

ApproximateCreationDateTimePrecision string
The precision for the time and date that the stream was created.
StreamArn string
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
ApproximateCreationDateTimePrecision string
The precision for the time and date that the stream was created.
StreamArn string
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
approximateCreationDateTimePrecision String
The precision for the time and date that the stream was created.
streamArn String
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
approximateCreationDateTimePrecision string
The precision for the time and date that the stream was created.
streamArn string
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
approximate_creation_date_time_precision str
The precision for the time and date that the stream was created.
stream_arn str
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.
approximateCreationDateTimePrecision String
The precision for the time and date that the stream was created.
streamArn String
The ARN for a specific Kinesis data stream. Length Constraints: Minimum length of 37. Maximum length of 1024.

LocalSecondaryIndex, LocalSecondaryIndexArgs

IndexName string
The name of the local secondary index. The name must be unique among all other indexes on this table.
KeySchema List<Pulumi.AzureNative.AwsConnector.Inputs.KeySchema>
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
Projection Pulumi.AzureNative.AwsConnector.Inputs.Projection
Represents attributes that are copied (projected) from the table into the local secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
IndexName string
The name of the local secondary index. The name must be unique among all other indexes on this table.
KeySchema []KeySchema
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
Projection Projection
Represents attributes that are copied (projected) from the table into the local secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
indexName String
The name of the local secondary index. The name must be unique among all other indexes on this table.
keySchema List<KeySchema>
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Projection
Represents attributes that are copied (projected) from the table into the local secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
indexName string
The name of the local secondary index. The name must be unique among all other indexes on this table.
keySchema KeySchema[]
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Projection
Represents attributes that are copied (projected) from the table into the local secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
index_name str
The name of the local secondary index. The name must be unique among all other indexes on this table.
key_schema Sequence[KeySchema]
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Projection
Represents attributes that are copied (projected) from the table into the local secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
indexName String
The name of the local secondary index. The name must be unique among all other indexes on this table.
keySchema List<Property Map>
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: + HASH - partition key + RANGE - sort key The partition key of an item is also known as its hash attribute. The term 'hash attribute' derives from DynamoDB's usage of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute. The term 'range attribute' derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Property Map
Represents attributes that are copied (projected) from the table into the local secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected. Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
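Putting the fields above together, a local secondary index input can be sketched as follows. The interfaces mirror the documented field shapes rather than the actual SDK types, and all attribute and index names are illustrative:

```typescript
// Hypothetical shapes mirroring the LocalSecondaryIndex fields documented above.
interface KeySchema {
  attributeName: string;
  keyType: "HASH" | "RANGE"; // HASH = partition key, RANGE = sort key
}

interface Projection {
  projectionType: "KEYS_ONLY" | "INCLUDE" | "ALL";
  nonKeyAttributes?: string[];
}

interface LocalSecondaryIndex {
  indexName: string;
  keySchema: KeySchema[];
  projection: Projection;
}

// A local secondary index reuses the table's partition key (HASH)
// and supplies its own sort key (RANGE).
const ordersByDate: LocalSecondaryIndex = {
  indexName: "OrdersByDate",
  keySchema: [
    { attributeName: "customerId", keyType: "HASH" },
    { attributeName: "orderDate", keyType: "RANGE" },
  ],
  projection: { projectionType: "KEYS_ONLY" },
};
```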

LocalSecondaryIndexResponse
, LocalSecondaryIndexResponseArgs

IndexName string
The name of the local secondary index. The name must be unique among all other indexes on this table.
KeySchema List<Pulumi.AzureNative.AwsConnector.Inputs.KeySchemaResponse>
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: HASH (partition key) and RANGE (sort key). The partition key of an item is also known as its hash attribute; the term derives from DynamoDB's use of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute; the term derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
Projection Pulumi.AzureNative.AwsConnector.Inputs.ProjectionResponse
Represents attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
IndexName string
The name of the local secondary index. The name must be unique among all other indexes on this table.
KeySchema []KeySchemaResponse
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: HASH (partition key) and RANGE (sort key). The partition key of an item is also known as its hash attribute; the term derives from DynamoDB's use of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute; the term derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
Projection ProjectionResponse
Represents attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
indexName String
The name of the local secondary index. The name must be unique among all other indexes on this table.
keySchema List<KeySchemaResponse>
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: HASH (partition key) and RANGE (sort key). The partition key of an item is also known as its hash attribute; the term derives from DynamoDB's use of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute; the term derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection ProjectionResponse
Represents attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
indexName string
The name of the local secondary index. The name must be unique among all other indexes on this table.
keySchema KeySchemaResponse[]
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: HASH (partition key) and RANGE (sort key). The partition key of an item is also known as its hash attribute; the term derives from DynamoDB's use of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute; the term derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection ProjectionResponse
Represents attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
index_name str
The name of the local secondary index. The name must be unique among all other indexes on this table.
key_schema Sequence[KeySchemaResponse]
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: HASH (partition key) and RANGE (sort key). The partition key of an item is also known as its hash attribute; the term derives from DynamoDB's use of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute; the term derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection ProjectionResponse
Represents attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
indexName String
The name of the local secondary index. The name must be unique among all other indexes on this table.
keySchema List<Property Map>
The complete key schema for the local secondary index, consisting of one or more pairs of attribute names and key types: HASH (partition key) and RANGE (sort key). The partition key of an item is also known as its hash attribute; the term derives from DynamoDB's use of an internal hash function to evenly distribute data items across partitions, based on their partition key values. The sort key of an item is also known as its range attribute; the term derives from the way DynamoDB stores items with the same partition key physically close together, in sorted order by the sort key value.
projection Property Map
Represents attributes that are copied (projected) from the table into the index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.

PointInTimeRecoverySpecification
, PointInTimeRecoverySpecificationArgs

PointInTimeRecoveryEnabled bool
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
PointInTimeRecoveryEnabled bool
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
pointInTimeRecoveryEnabled Boolean
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
pointInTimeRecoveryEnabled boolean
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
point_in_time_recovery_enabled bool
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
pointInTimeRecoveryEnabled Boolean
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.

PointInTimeRecoverySpecificationResponse
, PointInTimeRecoverySpecificationResponseArgs

PointInTimeRecoveryEnabled bool
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
PointInTimeRecoveryEnabled bool
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
pointInTimeRecoveryEnabled Boolean
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
pointInTimeRecoveryEnabled boolean
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
point_in_time_recovery_enabled bool
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.
pointInTimeRecoveryEnabled Boolean
Indicates whether point in time recovery is enabled (true) or disabled (false) on the table.

Projection
, ProjectionArgs

NonKeyAttributes List<string>
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
ProjectionType string
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
NonKeyAttributes []string
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
ProjectionType string
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
nonKeyAttributes List<String>
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
projectionType String
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
nonKeyAttributes string[]
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
projectionType string
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
non_key_attributes Sequence[str]
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
projection_type str
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
nonKeyAttributes List<String>
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
projectionType String
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
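The three projection types and the 100-attribute cap described above can be sketched as follows. The interface mirrors the documented fields (not the actual SDK type), and the attribute names are illustrative:

```typescript
// Hypothetical shape mirroring the Projection fields documented above.
interface Projection {
  projectionType: "KEYS_ONLY" | "INCLUDE" | "ALL";
  nonKeyAttributes?: string[]; // only meaningful with INCLUDE
}

const keysOnly: Projection = { projectionType: "KEYS_ONLY" };
const include: Projection = {
  projectionType: "INCLUDE",
  nonKeyAttributes: ["status", "total"],
};
const all: Projection = { projectionType: "ALL" };

// The same attribute projected into two indexes counts twice toward the
// documented 100-attribute limit across all local secondary indexes.
function totalProjectedAttributes(projections: Projection[]): number {
  return projections.reduce(
    (sum, p) => sum + (p.nonKeyAttributes?.length ?? 0),
    0
  );
}
```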

ProjectionResponse
, ProjectionResponseArgs

NonKeyAttributes List<string>
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
ProjectionType string
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
NonKeyAttributes []string
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
ProjectionType string
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
nonKeyAttributes List<String>
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
projectionType String
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
nonKeyAttributes string[]
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
projectionType string
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
non_key_attributes Sequence[str]
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
projection_type str
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.
nonKeyAttributes List<String>
The non-key attribute names that are projected into the index. For local secondary indexes, the total count of NonKeyAttributes summed across all local secondary indexes must not exceed 100. If you project the same attribute into two different indexes, it counts as two distinct attributes when determining the total.
projectionType String
The set of attributes that are projected into the index: KEYS_ONLY projects only the index and primary keys; INCLUDE projects the attributes in KEYS_ONLY plus the non-key attributes that you specify; ALL projects all of the table attributes. When using the DynamoDB console, ALL is selected by default.

ProvisionedThroughput
, ProvisionedThroughputArgs

ReadCapacityUnits int
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
WriteCapacityUnits int
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
ReadCapacityUnits int
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
WriteCapacityUnits int
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
readCapacityUnits Integer
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
writeCapacityUnits Integer
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
readCapacityUnits number
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
writeCapacityUnits number
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
read_capacity_units int
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
write_capacity_units int
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
readCapacityUnits Number
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
writeCapacityUnits Number
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
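The PAY_PER_REQUEST convention described above can be sketched as follows. The interface mirrors the documented fields (not the actual SDK type), and the PROVISIONED-mode defaults are illustrative:

```typescript
// Hypothetical shape mirroring the ProvisionedThroughput fields documented above.
interface ProvisionedThroughput {
  readCapacityUnits: number;
  writeCapacityUnits: number;
}

// With on-demand (PAY_PER_REQUEST) billing, the documented convention is
// that both capacity values are reported as 0.
function throughputFor(billingMode: string): ProvisionedThroughput {
  if (billingMode === "PAY_PER_REQUEST") {
    return { readCapacityUnits: 0, writeCapacityUnits: 0 };
  }
  // Illustrative values for PROVISIONED mode; real values are workload-specific.
  return { readCapacityUnits: 5, writeCapacityUnits: 5 };
}
```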

ProvisionedThroughputResponse
, ProvisionedThroughputResponseArgs

ReadCapacityUnits int
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
WriteCapacityUnits int
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
ReadCapacityUnits int
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
WriteCapacityUnits int
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
readCapacityUnits Integer
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
writeCapacityUnits Integer
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
readCapacityUnits number
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
writeCapacityUnits number
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
read_capacity_units int
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
write_capacity_units int
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
readCapacityUnits Number
The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.
writeCapacityUnits Number
The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. For more information, see Specifying Read and Write Requirements in the Amazon DynamoDB Developer Guide. If the read/write capacity mode is PAY_PER_REQUEST, the value is set to 0.

ResourcePolicy
, ResourcePolicyArgs

PolicyDocument object
A resource-based policy document that contains permissions to add to the specified DynamoDB table, index, or both. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples.
PolicyDocument interface{}
A resource-based policy document that contains permissions to add to the specified DynamoDB table, index, or both. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples.
policyDocument Object
A resource-based policy document that contains permissions to add to the specified DynamoDB table, index, or both. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples.
policyDocument any
A resource-based policy document that contains permissions to add to the specified DynamoDB table, index, or both. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples.
policy_document Any
A resource-based policy document that contains permissions to add to the specified DynamoDB table, index, or both. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples.
policyDocument Any
A resource-based policy document that contains permissions to add to the specified DynamoDB table, index, or both. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples.
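Since the policy document field is untyped (object/any), it is typically supplied as a plain IAM-style JSON document. The sketch below is illustrative only; the account ID, role, region, and table ARNs are placeholders, not values from this API:

```typescript
// Hypothetical resource policy input; the policyDocument field is untyped,
// so a plain object in IAM policy JSON shape is passed through as-is.
const resourcePolicy = {
  policyDocument: {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Principal: { AWS: "arn:aws:iam::111122223333:role/app-role" }, // placeholder
        Action: ["dynamodb:GetItem", "dynamodb:Query"],
        Resource: "arn:aws:dynamodb:us-east-1:111122223333:table/my-table", // placeholder
      },
    ],
  },
};
```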

ResourcePolicyResponse
, ResourcePolicyResponseArgs

PolicyDocument object
A resource-based policy document that contains permissions to add to the specified DDB table, index, or both. In a CFNshort template, you can provide the policy in JSON or YAML format because CFNshort converts YAML to JSON before submitting it to DDB. For more information about resource-based policies, see Using resource-based policies for and Resource-based policy examples.
PolicyDocument interface{}
A resource-based policy document that contains permissions to add to the specified DDB table, index, or both. In a CFNshort template, you can provide the policy in JSON or YAML format because CFNshort converts YAML to JSON before submitting it to DDB. For more information about resource-based policies, see Using resource-based policies for and Resource-based policy examples.
policyDocument Object
A resource-based policy document that contains permissions to add to the specified DDB table, index, or both. In a CFNshort template, you can provide the policy in JSON or YAML format because CFNshort converts YAML to JSON before submitting it to DDB. For more information about resource-based policies, see Using resource-based policies for and Resource-based policy examples.
policyDocument any
A resource-based policy document that contains permissions to add to the specified DDB table, index, or both. In a CFNshort template, you can provide the policy in JSON or YAML format because CFNshort converts YAML to JSON before submitting it to DDB. For more information about resource-based policies, see Using resource-based policies for and Resource-based policy examples.
policy_document Any
A resource-based policy document that contains permissions to add to the specified DDB table, index, or both. In a CFNshort template, you can provide the policy in JSON or YAML format because CFNshort converts YAML to JSON before submitting it to DDB. For more information about resource-based policies, see Using resource-based policies for and Resource-based policy examples.
policyDocument Any
A resource-based policy document that contains permissions to add to the specified DynamoDB table, index, or both. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples.
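As a concrete illustration, here is a minimal sketch of the kind of IAM policy document that could be supplied as the `policyDocument` value (typed `any`/`Object` above). The account ID, role name, table name, and region are placeholders, not values from this API.

```typescript
// A sketch of a resource-based policy document for a DynamoDB table.
// All ARNs and the account ID below are illustrative placeholders.
const policyDocument = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "AllowReadOnly",
      Effect: "Allow",
      Principal: { AWS: "arn:aws:iam::111122223333:role/ExampleReadRole" },
      Action: ["dynamodb:GetItem", "dynamodb:Query"],
      Resource: "arn:aws:dynamodb:us-east-1:111122223333:table/ExampleTable",
    },
  ],
};
```

Because the property is untyped, the document is passed through as-is; DynamoDB validates it on the AWS side.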

S3BucketSource
, S3BucketSourceArgs

S3Bucket string
The S3 bucket that is being imported from.
S3BucketOwner string
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
S3KeyPrefix string
The key prefix shared by all S3 Objects that are being imported.
S3Bucket string
The S3 bucket that is being imported from.
S3BucketOwner string
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
S3KeyPrefix string
The key prefix shared by all S3 Objects that are being imported.
s3Bucket String
The S3 bucket that is being imported from.
s3BucketOwner String
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
s3KeyPrefix String
The key prefix shared by all S3 Objects that are being imported.
s3Bucket string
The S3 bucket that is being imported from.
s3BucketOwner string
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
s3KeyPrefix string
The key prefix shared by all S3 Objects that are being imported.
s3_bucket str
The S3 bucket that is being imported from.
s3_bucket_owner str
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
s3_key_prefix str
The key prefix shared by all S3 Objects that are being imported.
s3Bucket String
The S3 bucket that is being imported from.
s3BucketOwner String
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
s3KeyPrefix String
The key prefix shared by all S3 Objects that are being imported.
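The three properties above identify the import source together. A minimal sketch (the bucket name, owner account, and prefix are illustrative placeholders):

```typescript
// A sketch of an S3BucketSource value using the properties documented above.
interface S3BucketSource {
  s3Bucket: string;        // the S3 bucket being imported from (required)
  s3BucketOwner?: string;  // optional if the requester owns the bucket
  s3KeyPrefix?: string;    // key prefix shared by all imported objects
}

const importSource: S3BucketSource = {
  s3Bucket: "example-import-bucket",
  s3BucketOwner: "111122223333",
  s3KeyPrefix: "exports/2024/",
};
```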

S3BucketSourceResponse
, S3BucketSourceResponseArgs

S3Bucket string
The S3 bucket that is being imported from.
S3BucketOwner string
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
S3KeyPrefix string
The key prefix shared by all S3 Objects that are being imported.
S3Bucket string
The S3 bucket that is being imported from.
S3BucketOwner string
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
S3KeyPrefix string
The key prefix shared by all S3 Objects that are being imported.
s3Bucket String
The S3 bucket that is being imported from.
s3BucketOwner String
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
s3KeyPrefix String
The key prefix shared by all S3 Objects that are being imported.
s3Bucket string
The S3 bucket that is being imported from.
s3BucketOwner string
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
s3KeyPrefix string
The key prefix shared by all S3 Objects that are being imported.
s3_bucket str
The S3 bucket that is being imported from.
s3_bucket_owner str
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
s3_key_prefix str
The key prefix shared by all S3 Objects that are being imported.
s3Bucket String
The S3 bucket that is being imported from.
s3BucketOwner String
The account number of the S3 bucket that is being imported from. If the bucket is owned by the requester this is optional.
s3KeyPrefix String
The key prefix shared by all S3 Objects that are being imported.

SSESpecification
, SSESpecificationArgs

KmsMasterKeyId string
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
SseEnabled bool
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
SseType string
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
KmsMasterKeyId string
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
SseEnabled bool
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
SseType string
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
kmsMasterKeyId String
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
sseEnabled Boolean
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
sseType String
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
kmsMasterKeyId string
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
sseEnabled boolean
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
sseType string
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
kms_master_key_id str
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
sse_enabled bool
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
sse_type str
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
kmsMasterKeyId String
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
sseEnabled Boolean
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
sseType String
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
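Putting the three properties together, here is a sketch of an SSESpecification that enables KMS server-side encryption with a customer-specified key. The key alias is a placeholder; omitting kmsMasterKeyId falls back to the default DynamoDB key, alias/aws/dynamodb.

```typescript
// A sketch of an SSESpecification value using the properties documented above.
interface SSESpecification {
  sseEnabled?: boolean;     // true => KMS encryption with an AWS managed key
  sseType?: string;         // only supported value: "KMS"
  kmsMasterKeyId?: string;  // key ID, ARN, alias name, or alias ARN (placeholder below)
}

const sse: SSESpecification = {
  sseEnabled: true,
  sseType: "KMS",
  kmsMasterKeyId: "alias/example-table-key",
};
```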

SSESpecificationResponse
, SSESpecificationResponseArgs

KmsMasterKeyId string
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
SseEnabled bool
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
SseType string
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
KmsMasterKeyId string
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
SseEnabled bool
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
SseType string
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
kmsMasterKeyId String
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
sseEnabled Boolean
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
sseType String
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
kmsMasterKeyId string
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
sseEnabled boolean
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
sseType string
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
kms_master_key_id str
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
sse_enabled bool
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
sse_type str
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).
kmsMasterKeyId String
The KMS key that should be used for the KMS encryption. To specify a key, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. Note that you should only provide this parameter if the key is different from the default DynamoDB key alias/aws/dynamodb.
sseEnabled Boolean
Indicates whether server-side encryption is done using an AWS managed key or an AWS owned key. If enabled (true), server-side encryption type is set to KMS and an AWS managed key is used (KMS charges apply). If disabled (false) or not specified, server-side encryption is set to AWS owned key.
sseType String
Server-side encryption type. The only supported value is KMS: server-side encryption that uses AWS Key Management Service (AWS KMS). The key is stored in your account and is managed by KMS (KMS charges apply).

StreamSpecification
, StreamSpecificationArgs

ResourcePolicy Pulumi.AzureNative.AwsConnector.Inputs.ResourcePolicy
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of all considerations, see Resource-based policy considerations.
StreamViewType string
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
ResourcePolicy ResourcePolicy
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of all considerations, see Resource-based policy considerations.
StreamViewType string
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
resourcePolicy ResourcePolicy
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of all considerations, see Resource-based policy considerations.
streamViewType String
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
resourcePolicy ResourcePolicy
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of all considerations, see Resource-based policy considerations.
streamViewType string
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
resource_policy ResourcePolicy
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of all considerations, see Resource-based policy considerations.
stream_view_type str
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
resourcePolicy Property Map
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of all considerations, see Resource-based policy considerations.
streamViewType String
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
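The stream specification above reduces to one choice: which item images to write to the stream. A minimal sketch:

```typescript
// A sketch of a StreamSpecification value. NEW_AND_OLD_IMAGES writes both the
// pre- and post-modification item images to the stream; the other valid
// values are KEYS_ONLY, NEW_IMAGE, and OLD_IMAGE.
const validViewTypes = [
  "KEYS_ONLY",
  "NEW_IMAGE",
  "OLD_IMAGE",
  "NEW_AND_OLD_IMAGES",
] as const;

const streamSpecification = {
  streamViewType: "NEW_AND_OLD_IMAGES" as (typeof validViewTypes)[number],
};
```

NEW_AND_OLD_IMAGES is the most general choice and is commonly used when downstream consumers need to diff items; KEYS_ONLY minimizes stream record size.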

StreamSpecificationResponse
, StreamSpecificationResponseArgs

ResourcePolicy Pulumi.AzureNative.AwsConnector.Inputs.ResourcePolicyResponse
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
StreamViewType string
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
ResourcePolicy ResourcePolicyResponse
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
StreamViewType string
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
resourcePolicy ResourcePolicyResponse
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
streamViewType String
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
resourcePolicy ResourcePolicyResponse
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
streamViewType string
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
resource_policy ResourcePolicyResponse
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
stream_view_type str
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.
resourcePolicy Property Map
Creates or updates a resource-based policy document that contains the permissions for DynamoDB resources, such as a table, its indexes, and streams. Resource-based policies let you define access permissions by specifying who has access to each resource and the actions they are allowed to perform on each resource. In a CloudFormation template, you can provide the policy in JSON or YAML format, because CloudFormation converts YAML to JSON before submitting it to DynamoDB. For more information about resource-based policies, see Using resource-based policies for DynamoDB and Resource-based policy examples. When defining resource-based policies in your CloudFormation templates, the following considerations apply: + The maximum size supported for a resource-based policy document in JSON format is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit. + Resource-based policies don't support drift detection. If you update a policy outside of the CloudFormation stack template, you'll need to update the CloudFormation stack with the changes. + Resource-based policies don't support out-of-band changes. If you add, update, or delete a policy outside of the CloudFormation template, the change won't be overwritten if there are no changes to the policy within the template. For example, say that your template contains a resource-based policy, which you later update outside of the template.
If you don't make any changes to the policy in the template, the updated policy in DynamoDB won't be synced with the policy in the template. Conversely, say that your template doesn't contain a resource-based policy, but you add a policy outside of the template. This policy won't be removed from DynamoDB as long as you don't add it to the template. When you add a policy to the template and update the stack, the existing policy in DynamoDB will be updated to match the one defined in the template. For a full list of considerations, see Resource-based policy considerations.
streamViewType String
When an item in the table is modified, StreamViewType determines what information is written to the stream for this table. Valid values for StreamViewType are: + KEYS_ONLY - Only the key attributes of the modified item are written to the stream. + NEW_IMAGE - The entire item, as it appears after it was modified, is written to the stream. + OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the stream. + NEW_AND_OLD_IMAGES - Both the new and the old item images of the item are written to the stream.

SystemDataResponse
, SystemDataResponseArgs

CreatedAt string
The timestamp of resource creation (UTC).
CreatedBy string
The identity that created the resource.
CreatedByType string
The type of identity that created the resource.
LastModifiedAt string
The timestamp of resource last modification (UTC).
LastModifiedBy string
The identity that last modified the resource.
LastModifiedByType string
The type of identity that last modified the resource.
CreatedAt string
The timestamp of resource creation (UTC).
CreatedBy string
The identity that created the resource.
CreatedByType string
The type of identity that created the resource.
LastModifiedAt string
The timestamp of resource last modification (UTC).
LastModifiedBy string
The identity that last modified the resource.
LastModifiedByType string
The type of identity that last modified the resource.
createdAt String
The timestamp of resource creation (UTC).
createdBy String
The identity that created the resource.
createdByType String
The type of identity that created the resource.
lastModifiedAt String
The timestamp of resource last modification (UTC).
lastModifiedBy String
The identity that last modified the resource.
lastModifiedByType String
The type of identity that last modified the resource.
createdAt string
The timestamp of resource creation (UTC).
createdBy string
The identity that created the resource.
createdByType string
The type of identity that created the resource.
lastModifiedAt string
The timestamp of resource last modification (UTC).
lastModifiedBy string
The identity that last modified the resource.
lastModifiedByType string
The type of identity that last modified the resource.
created_at str
The timestamp of resource creation (UTC).
created_by str
The identity that created the resource.
created_by_type str
The type of identity that created the resource.
last_modified_at str
The timestamp of resource last modification (UTC).
last_modified_by str
The identity that last modified the resource.
last_modified_by_type str
The type of identity that last modified the resource.
createdAt String
The timestamp of resource creation (UTC).
createdBy String
The identity that created the resource.
createdByType String
The type of identity that created the resource.
lastModifiedAt String
The timestamp of resource last modification (UTC).
lastModifiedBy String
The identity that last modified the resource.
lastModifiedByType String
The type of identity that last modified the resource.

Tag
, TagArgs

Key string
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
Value string
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
Key string
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
Value string
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
key String
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
value String
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
key string
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
value string
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
key str
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
value str
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
key String
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
value String
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
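The key and value constraints described above can be checked locally before a deployment. The following sketch is an illustration of those stated rules only (length bounds, no aws: prefix, and the listed character set); it is not part of this provider's API:

```python
import re

# Allowed characters per the description above: Unicode letters, digits,
# whitespace, and _ . / = + -  (\w covers letters, digits, and underscore).
_TAG_CHARS = re.compile(r"^[\w\s./=+-]*$", re.UNICODE)

def valid_tag(key: str, value: str) -> bool:
    """Check a tag against the documented constraints (illustrative)."""
    if not (1 <= len(key) <= 128) or key.lower().startswith("aws:"):
        return False
    if len(value) > 256 or value.lower().startswith("aws:"):
        return False
    return bool(_TAG_CHARS.fullmatch(key)) and bool(_TAG_CHARS.fullmatch(value))
```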

TagResponse
, TagResponseArgs

Key string
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
Value string
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
Key string
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
Value string
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
key String
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
value String
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
key string
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
value string
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
key str
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
value str
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
key String
The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
value String
The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.

TimeToLiveSpecification
, TimeToLiveSpecificationArgs

AttributeName string
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
Enabled bool
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
AttributeName string
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
Enabled bool
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
attributeName String
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
enabled Boolean
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
attributeName string
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
enabled boolean
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
attribute_name str
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
enabled bool
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
attributeName String
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
enabled Boolean
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.

TimeToLiveSpecificationResponse
, TimeToLiveSpecificationResponseArgs

AttributeName string
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
Enabled bool
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
AttributeName string
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
Enabled bool
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
attributeName String
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
enabled Boolean
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
attributeName string
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
enabled boolean
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
attribute_name str
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
enabled bool
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.
attributeName String
The name of the TTL attribute used to store the expiration time for items in the table. + The AttributeName property is required when enabling the TTL, or when TTL is already enabled. + To update this property, you must first disable TTL and then enable TTL with the new attribute name.
enabled Boolean
Indicates whether TTL is to be enabled (true) or disabled (false) on the table.

Import

An existing resource can be imported using its type token, name, and identifier, e.g.

$ pulumi import azure-native:awsconnector:DynamoDbTable wjhshaxtpxprmkvirlnkg /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AwsConnector/dynamoDBTables/{name} 

To learn more about importing existing cloud resources, see Importing resources.

Package Details

Repository
azure-native-v2 pulumi/pulumi-azure-native
License
Apache-2.0