azure-native.videoanalyzer.PipelineTopology
A pipeline topology describes the processing steps to be applied when processing content for a particular outcome. The topology should be defined according to the scenario to be achieved and can be reused across many pipeline instances which share the same processing characteristics. For instance, a pipeline topology which captures content from an RTSP camera and archives the content can be reused across many different cameras, as long as the same processing is to be applied across all the cameras. Individual instance properties can be defined through the use of user-defined parameters, which allow a topology to be parameterized. This allows individual pipelines to refer to different values, such as individual cameras’ RTSP endpoints and credentials. Overall, a topology is composed of the following:
- Parameters: list of user-defined parameters that can be referenced across the topology nodes.
- Sources: list of one or more data source nodes, such as an RTSP source, which allow content to be ingested from cameras.
- Processors: list of nodes which perform data analysis or transformations.
- Sinks: list of one or more data sinks which allow data to be stored or exported to other destinations.
Azure REST API version: 2021-11-01-preview. Prior API version in Azure Native 1.x: 2021-11-01-preview.
Example Usage
Create or update a pipeline topology with an RTSP source and a video sink.
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using AzureNative = Pulumi.AzureNative;
return await Deployment.RunAsync(() => 
{
    var pipelineTopology = new AzureNative.VideoAnalyzer.PipelineTopology("pipelineTopology", new()
    {
        AccountName = "testaccount2",
        Description = "Pipeline Topology 1 Description",
        Kind = AzureNative.VideoAnalyzer.Kind.Live,
        Parameters = new[]
        {
            new AzureNative.VideoAnalyzer.Inputs.ParameterDeclarationArgs
            {
                Default = "rtsp://microsoft.com/video.mp4",
                Description = "rtsp source url parameter",
                Name = "rtspUrlParameter",
                Type = AzureNative.VideoAnalyzer.ParameterType.String,
            },
            new AzureNative.VideoAnalyzer.Inputs.ParameterDeclarationArgs
            {
                Default = "password",
                Description = "rtsp source password parameter",
                Name = "rtspPasswordParameter",
                Type = AzureNative.VideoAnalyzer.ParameterType.SecretString,
            },
        },
        PipelineTopologyName = "pipelineTopology1",
        ResourceGroupName = "testrg",
        Sinks = new[]
        {
            new AzureNative.VideoAnalyzer.Inputs.VideoSinkArgs
            {
                Inputs = new[]
                {
                    new AzureNative.VideoAnalyzer.Inputs.NodeInputArgs
                    {
                        NodeName = "rtspSource",
                    },
                },
                Name = "videoSink",
                Type = "#Microsoft.VideoAnalyzer.VideoSink",
                VideoCreationProperties = new AzureNative.VideoAnalyzer.Inputs.VideoCreationPropertiesArgs
                {
                    Description = "Parking lot south entrance",
                    SegmentLength = "PT30S",
                    Title = "Parking Lot (Camera 1)",
                },
                VideoName = "camera001",
                VideoPublishingOptions = new AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptionsArgs
                {
                    DisableArchive = "false",
                    DisableRtspPublishing = "true",
                },
            },
        },
        Sku = new AzureNative.VideoAnalyzer.Inputs.SkuArgs
        {
            Name = AzureNative.VideoAnalyzer.SkuName.Live_S1,
        },
        Sources = new[]
        {
            new AzureNative.VideoAnalyzer.Inputs.RtspSourceArgs
            {
                Endpoint = new AzureNative.VideoAnalyzer.Inputs.UnsecuredEndpointArgs
                {
                    Credentials = new AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsArgs
                    {
                        Password = "${rtspPasswordParameter}",
                        Type = "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
                        Username = "username",
                    },
                    Type = "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
                    Url = "${rtspUrlParameter}",
                },
                Name = "rtspSource",
                Transport = AzureNative.VideoAnalyzer.RtspTransport.Http,
                Type = "#Microsoft.VideoAnalyzer.RtspSource",
            },
        },
    });
});
package main
import (
	videoanalyzer "github.com/pulumi/pulumi-azure-native-sdk/videoanalyzer/v2"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := videoanalyzer.NewPipelineTopology(ctx, "pipelineTopology", &videoanalyzer.PipelineTopologyArgs{
			AccountName: pulumi.String("testaccount2"),
			Description: pulumi.String("Pipeline Topology 1 Description"),
			Kind:        pulumi.String(videoanalyzer.KindLive),
			Parameters: videoanalyzer.ParameterDeclarationArray{
				&videoanalyzer.ParameterDeclarationArgs{
					Default:     pulumi.String("rtsp://microsoft.com/video.mp4"),
					Description: pulumi.String("rtsp source url parameter"),
					Name:        pulumi.String("rtspUrlParameter"),
					Type:        pulumi.String(videoanalyzer.ParameterTypeString),
				},
				&videoanalyzer.ParameterDeclarationArgs{
					Default:     pulumi.String("password"),
					Description: pulumi.String("rtsp source password parameter"),
					Name:        pulumi.String("rtspPasswordParameter"),
					Type:        pulumi.String(videoanalyzer.ParameterTypeSecretString),
				},
			},
			PipelineTopologyName: pulumi.String("pipelineTopology1"),
			ResourceGroupName:    pulumi.String("testrg"),
			Sinks: videoanalyzer.VideoSinkArray{
				&videoanalyzer.VideoSinkArgs{
					Inputs: videoanalyzer.NodeInputArray{
						&videoanalyzer.NodeInputArgs{
							NodeName: pulumi.String("rtspSource"),
						},
					},
					Name: pulumi.String("videoSink"),
					Type: pulumi.String("#Microsoft.VideoAnalyzer.VideoSink"),
					VideoCreationProperties: &videoanalyzer.VideoCreationPropertiesArgs{
						Description:   pulumi.String("Parking lot south entrance"),
						SegmentLength: pulumi.String("PT30S"),
						Title:         pulumi.String("Parking Lot (Camera 1)"),
					},
					VideoName: pulumi.String("camera001"),
					VideoPublishingOptions: &videoanalyzer.VideoPublishingOptionsArgs{
						DisableArchive:        pulumi.String("false"),
						DisableRtspPublishing: pulumi.String("true"),
					},
				},
			},
			Sku: &videoanalyzer.SkuArgs{
				Name: pulumi.String(videoanalyzer.SkuName_Live_S1),
			},
			Sources: pulumi.Array{
				videoanalyzer.RtspSource{
					Endpoint: videoanalyzer.UnsecuredEndpoint{
						Credentials: videoanalyzer.UsernamePasswordCredentials{
							Password: "${rtspPasswordParameter}",
							Type:     "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
							Username: "username",
						},
						Type: "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
						Url:  "${rtspUrlParameter}",
					},
					Name:      "rtspSource",
					Transport: videoanalyzer.RtspTransportHttp,
					Type:      "#Microsoft.VideoAnalyzer.RtspSource",
				},
			},
		})
		if err != nil {
			return err
		}
		return nil
	})
}
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.azurenative.videoanalyzer.PipelineTopology;
import com.pulumi.azurenative.videoanalyzer.PipelineTopologyArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.ParameterDeclarationArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.VideoSinkArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.VideoCreationPropertiesArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.VideoPublishingOptionsArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.SkuArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.NodeInputArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.RtspSourceArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.UnsecuredEndpointArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.UsernamePasswordCredentialsArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }
    public static void stack(Context ctx) {
        var pipelineTopology = new PipelineTopology("pipelineTopology", PipelineTopologyArgs.builder()
            .accountName("testaccount2")
            .description("Pipeline Topology 1 Description")
            .kind("Live")
            .parameters(            
                ParameterDeclarationArgs.builder()
                    .default_("rtsp://microsoft.com/video.mp4")
                    .description("rtsp source url parameter")
                    .name("rtspUrlParameter")
                    .type("String")
                    .build(),
                ParameterDeclarationArgs.builder()
                    .default_("password")
                    .description("rtsp source password parameter")
                    .name("rtspPasswordParameter")
                    .type("SecretString")
                    .build())
            .pipelineTopologyName("pipelineTopology1")
            .resourceGroupName("testrg")
            .sinks(VideoSinkArgs.builder()
                .inputs(NodeInputArgs.builder()
                    .nodeName("rtspSource")
                    .build())
                .name("videoSink")
                .type("#Microsoft.VideoAnalyzer.VideoSink")
                .videoCreationProperties(VideoCreationPropertiesArgs.builder()
                    .description("Parking lot south entrance")
                    .segmentLength("PT30S")
                    .title("Parking Lot (Camera 1)")
                    .build())
                .videoName("camera001")
                .videoPublishingOptions(VideoPublishingOptionsArgs.builder()
                    .disableArchive("false")
                    .disableRtspPublishing("true")
                    .build())
                .build())
            .sku(SkuArgs.builder()
                .name("Live_S1")
                .build())
            .sources(RtspSourceArgs.builder()
                .endpoint(UnsecuredEndpointArgs.builder()
                    .credentials(UsernamePasswordCredentialsArgs.builder()
                        .password("${rtspPasswordParameter}")
                        .type("#Microsoft.VideoAnalyzer.UsernamePasswordCredentials")
                        .username("username")
                        .build())
                    .type("#Microsoft.VideoAnalyzer.UnsecuredEndpoint")
                    .url("${rtspUrlParameter}")
                    .build())
                .name("rtspSource")
                .transport("Http")
                .type("#Microsoft.VideoAnalyzer.RtspSource")
                .build())
            .build());
    }
}
import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";
const pipelineTopology = new azure_native.videoanalyzer.PipelineTopology("pipelineTopology", {
    accountName: "testaccount2",
    description: "Pipeline Topology 1 Description",
    kind: azure_native.videoanalyzer.Kind.Live,
    parameters: [
        {
            "default": "rtsp://microsoft.com/video.mp4",
            description: "rtsp source url parameter",
            name: "rtspUrlParameter",
            type: azure_native.videoanalyzer.ParameterType.String,
        },
        {
            "default": "password",
            description: "rtsp source password parameter",
            name: "rtspPasswordParameter",
            type: azure_native.videoanalyzer.ParameterType.SecretString,
        },
    ],
    pipelineTopologyName: "pipelineTopology1",
    resourceGroupName: "testrg",
    sinks: [{
        inputs: [{
            nodeName: "rtspSource",
        }],
        name: "videoSink",
        type: "#Microsoft.VideoAnalyzer.VideoSink",
        videoCreationProperties: {
            description: "Parking lot south entrance",
            segmentLength: "PT30S",
            title: "Parking Lot (Camera 1)",
        },
        videoName: "camera001",
        videoPublishingOptions: {
            disableArchive: "false",
            disableRtspPublishing: "true",
        },
    }],
    sku: {
        name: azure_native.videoanalyzer.SkuName.Live_S1,
    },
    sources: [{
        endpoint: {
            credentials: {
                password: "${rtspPasswordParameter}",
                type: "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
                username: "username",
            },
            type: "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
            url: "${rtspUrlParameter}",
        },
        name: "rtspSource",
        transport: azure_native.videoanalyzer.RtspTransport.Http,
        type: "#Microsoft.VideoAnalyzer.RtspSource",
    }],
});
import pulumi
import pulumi_azure_native as azure_native
pipeline_topology = azure_native.videoanalyzer.PipelineTopology("pipelineTopology",
    account_name="testaccount2",
    description="Pipeline Topology 1 Description",
    kind=azure_native.videoanalyzer.Kind.LIVE,
    parameters=[
        {
            "default": "rtsp://microsoft.com/video.mp4",
            "description": "rtsp source url parameter",
            "name": "rtspUrlParameter",
            "type": azure_native.videoanalyzer.ParameterType.STRING,
        },
        {
            "default": "password",
            "description": "rtsp source password parameter",
            "name": "rtspPasswordParameter",
            "type": azure_native.videoanalyzer.ParameterType.SECRET_STRING,
        },
    ],
    pipeline_topology_name="pipelineTopology1",
    resource_group_name="testrg",
    sinks=[{
        "inputs": [{
            "node_name": "rtspSource",
        }],
        "name": "videoSink",
        "type": "#Microsoft.VideoAnalyzer.VideoSink",
        "video_creation_properties": {
            "description": "Parking lot south entrance",
            "segment_length": "PT30S",
            "title": "Parking Lot (Camera 1)",
        },
        "video_name": "camera001",
        "video_publishing_options": {
            "disable_archive": "false",
            "disable_rtsp_publishing": "true",
        },
    }],
    sku={
        "name": azure_native.videoanalyzer.SkuName.LIVE_S1,
    },
    sources=[{
        "endpoint": {
            "credentials": {
                "password": "${rtspPasswordParameter}",
                "type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
                "username": "username",
            },
            "type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
            "url": "${rtspUrlParameter}",
        },
        "name": "rtspSource",
        "transport": azure_native.videoanalyzer.RtspTransport.HTTP,
        "type": "#Microsoft.VideoAnalyzer.RtspSource",
    }])
resources:
  pipelineTopology:
    type: azure-native:videoanalyzer:PipelineTopology
    properties:
      accountName: testaccount2
      description: Pipeline Topology 1 Description
      kind: Live
      parameters:
        - default: rtsp://microsoft.com/video.mp4
          description: rtsp source url parameter
          name: rtspUrlParameter
          type: String
        - default: password
          description: rtsp source password parameter
          name: rtspPasswordParameter
          type: SecretString
      pipelineTopologyName: pipelineTopology1
      resourceGroupName: testrg
      sinks:
        - inputs:
            - nodeName: rtspSource
          name: videoSink
          type: '#Microsoft.VideoAnalyzer.VideoSink'
          videoCreationProperties:
            description: Parking lot south entrance
            segmentLength: PT30S
            title: Parking Lot (Camera 1)
          videoName: camera001
          videoPublishingOptions:
            disableArchive: 'false'
            disableRtspPublishing: 'true'
      sku:
        name: Live_S1
      sources:
        - endpoint:
            credentials:
              password: $${rtspPasswordParameter}
              type: '#Microsoft.VideoAnalyzer.UsernamePasswordCredentials'
              username: username
            type: '#Microsoft.VideoAnalyzer.UnsecuredEndpoint'
            url: $${rtspUrlParameter}
          name: rtspSource
          transport: Http
          type: '#Microsoft.VideoAnalyzer.RtspSource'
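The topology above declares rtspUrlParameter and rtspPasswordParameter without binding them to concrete values; each pipeline that instantiates the topology supplies its own. The following minimal TypeScript sketch assumes the companion azure_native.videoanalyzer.LivePipeline resource; the pipeline name, URL, password, and bitrate shown are illustrative placeholders, not values from the example above.
import * as azure_native from "@pulumi/azure-native";

// Sketch: instantiate the topology above for one camera by binding its
// user-defined parameters to concrete values (placeholders shown here).
const camera1Pipeline = new azure_native.videoanalyzer.LivePipeline("camera1Pipeline", {
    accountName: "testaccount2",
    resourceGroupName: "testrg",
    livePipelineName: "camera1Pipeline",
    topologyName: "pipelineTopology1", // must match pipelineTopologyName above
    bitrateKbps: 500, // maximum ingestion bitrate for this camera
    parameters: [
        { name: "rtspUrlParameter", value: "rtsp://camera1.contoso.example/stream" },
        { name: "rtspPasswordParameter", value: "camera1-password" },
    ],
});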
Create PipelineTopology Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new PipelineTopology(name: string, args: PipelineTopologyArgs, opts?: CustomResourceOptions);
@overload
def PipelineTopology(resource_name: str,
                     args: PipelineTopologyArgs,
                     opts: Optional[ResourceOptions] = None)
@overload
def PipelineTopology(resource_name: str,
                     opts: Optional[ResourceOptions] = None,
                     account_name: Optional[str] = None,
                     kind: Optional[Union[str, Kind]] = None,
                     resource_group_name: Optional[str] = None,
                     sinks: Optional[Sequence[VideoSinkArgs]] = None,
                     sku: Optional[SkuArgs] = None,
                     sources: Optional[Sequence[Union[RtspSourceArgs, VideoSourceArgs]]] = None,
                     description: Optional[str] = None,
                     parameters: Optional[Sequence[ParameterDeclarationArgs]] = None,
                     pipeline_topology_name: Optional[str] = None,
                     processors: Optional[Sequence[EncoderProcessorArgs]] = None)
func NewPipelineTopology(ctx *Context, name string, args PipelineTopologyArgs, opts ...ResourceOption) (*PipelineTopology, error)
public PipelineTopology(string name, PipelineTopologyArgs args, CustomResourceOptions? opts = null)
public PipelineTopology(String name, PipelineTopologyArgs args)
public PipelineTopology(String name, PipelineTopologyArgs args, CustomResourceOptions options)
type: azure-native:videoanalyzer:PipelineTopology
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var pipelineTopologyResource = new AzureNative.VideoAnalyzer.PipelineTopology("pipelineTopologyResource", new()
{
    AccountName = "string",
    Kind = "string",
    ResourceGroupName = "string",
    Sinks = new[]
    {
        new AzureNative.VideoAnalyzer.Inputs.VideoSinkArgs
        {
            Inputs = new[]
            {
                new AzureNative.VideoAnalyzer.Inputs.NodeInputArgs
                {
                    NodeName = "string",
                },
            },
            Name = "string",
            Type = "#Microsoft.VideoAnalyzer.VideoSink",
            VideoName = "string",
            VideoCreationProperties = new AzureNative.VideoAnalyzer.Inputs.VideoCreationPropertiesArgs
            {
                Description = "string",
                RetentionPeriod = "string",
                SegmentLength = "string",
                Title = "string",
            },
            VideoPublishingOptions = new AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptionsArgs
            {
                DisableArchive = "string",
                DisableRtspPublishing = "string",
            },
        },
    },
    Sku = new AzureNative.VideoAnalyzer.Inputs.SkuArgs
    {
        Name = "string",
    },
    Sources = new[]
    {
        new AzureNative.VideoAnalyzer.Inputs.RtspSourceArgs
        {
            Endpoint = new AzureNative.VideoAnalyzer.Inputs.TlsEndpointArgs
            {
                Credentials = new AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsArgs
                {
                    Password = "string",
                    Type = "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
                    Username = "string",
                },
                Type = "#Microsoft.VideoAnalyzer.TlsEndpoint",
                Url = "string",
                TrustedCertificates = new AzureNative.VideoAnalyzer.Inputs.PemCertificateListArgs
                {
                    Certificates = new[]
                    {
                        "string",
                    },
                    Type = "#Microsoft.VideoAnalyzer.PemCertificateList",
                },
                Tunnel = new AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnelArgs
                {
                    DeviceId = "string",
                    IotHubName = "string",
                    Type = "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
                },
                ValidationOptions = new AzureNative.VideoAnalyzer.Inputs.TlsValidationOptionsArgs
                {
                    IgnoreHostname = "string",
                    IgnoreSignature = "string",
                },
            },
            Name = "string",
            Type = "#Microsoft.VideoAnalyzer.RtspSource",
            Transport = "string",
        },
    },
    Description = "string",
    Parameters = new[]
    {
        new AzureNative.VideoAnalyzer.Inputs.ParameterDeclarationArgs
        {
            Name = "string",
            Type = "string",
            Default = "string",
            Description = "string",
        },
    },
    PipelineTopologyName = "string",
    Processors = new[]
    {
        new AzureNative.VideoAnalyzer.Inputs.EncoderProcessorArgs
        {
            Inputs = new[]
            {
                new AzureNative.VideoAnalyzer.Inputs.NodeInputArgs
                {
                    NodeName = "string",
                },
            },
            Name = "string",
            Preset = new AzureNative.VideoAnalyzer.Inputs.EncoderCustomPresetArgs
            {
                Type = "#Microsoft.VideoAnalyzer.EncoderCustomPreset",
                AudioEncoder = new AzureNative.VideoAnalyzer.Inputs.AudioEncoderAacArgs
                {
                    Type = "#Microsoft.VideoAnalyzer.AudioEncoderAac",
                    BitrateKbps = "string",
                },
                VideoEncoder = new AzureNative.VideoAnalyzer.Inputs.VideoEncoderH264Args
                {
                    Type = "#Microsoft.VideoAnalyzer.VideoEncoderH264",
                    BitrateKbps = "string",
                    FrameRate = "string",
                    Scale = new AzureNative.VideoAnalyzer.Inputs.VideoScaleArgs
                    {
                        Height = "string",
                        Mode = "string",
                        Width = "string",
                    },
                },
            },
            Type = "#Microsoft.VideoAnalyzer.EncoderProcessor",
        },
    },
});
example, err := videoanalyzer.NewPipelineTopology(ctx, "pipelineTopologyResource", &videoanalyzer.PipelineTopologyArgs{
	AccountName:       pulumi.String("string"),
	Kind:              pulumi.String("string"),
	ResourceGroupName: pulumi.String("string"),
	Sinks: videoanalyzer.VideoSinkArray{
		&videoanalyzer.VideoSinkArgs{
			Inputs: videoanalyzer.NodeInputArray{
				&videoanalyzer.NodeInputArgs{
					NodeName: pulumi.String("string"),
				},
			},
			Name:      pulumi.String("string"),
			Type:      pulumi.String("#Microsoft.VideoAnalyzer.VideoSink"),
			VideoName: pulumi.String("string"),
			VideoCreationProperties: &videoanalyzer.VideoCreationPropertiesArgs{
				Description:     pulumi.String("string"),
				RetentionPeriod: pulumi.String("string"),
				SegmentLength:   pulumi.String("string"),
				Title:           pulumi.String("string"),
			},
			VideoPublishingOptions: &videoanalyzer.VideoPublishingOptionsArgs{
				DisableArchive:        pulumi.String("string"),
				DisableRtspPublishing: pulumi.String("string"),
			},
		},
	},
	Sku: &videoanalyzer.SkuArgs{
		Name: pulumi.String("string"),
	},
	Sources: pulumi.Array{
		videoanalyzer.RtspSource{
			Endpoint: videoanalyzer.TlsEndpoint{
				Credentials: videoanalyzer.UsernamePasswordCredentials{
					Password: "string",
					Type:     "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
					Username: "string",
				},
				Type: "#Microsoft.VideoAnalyzer.TlsEndpoint",
				Url:  "string",
				TrustedCertificates: videoanalyzer.PemCertificateList{
					Certificates: []string{
						"string",
					},
					Type: "#Microsoft.VideoAnalyzer.PemCertificateList",
				},
				Tunnel: videoanalyzer.SecureIotDeviceRemoteTunnel{
					DeviceId:   "string",
					IotHubName: "string",
					Type:       "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
				},
				ValidationOptions: videoanalyzer.TlsValidationOptions{
					IgnoreHostname:  "string",
					IgnoreSignature: "string",
				},
			},
			Name:      "string",
			Type:      "#Microsoft.VideoAnalyzer.RtspSource",
			Transport: "string",
		},
	},
	Description: pulumi.String("string"),
	Parameters: videoanalyzer.ParameterDeclarationArray{
		&videoanalyzer.ParameterDeclarationArgs{
			Name:        pulumi.String("string"),
			Type:        pulumi.String("string"),
			Default:     pulumi.String("string"),
			Description: pulumi.String("string"),
		},
	},
	PipelineTopologyName: pulumi.String("string"),
	Processors: videoanalyzer.EncoderProcessorArray{
		&videoanalyzer.EncoderProcessorArgs{
			Inputs: videoanalyzer.NodeInputArray{
				&videoanalyzer.NodeInputArgs{
					NodeName: pulumi.String("string"),
				},
			},
			Name: pulumi.String("string"),
			Preset: videoanalyzer.EncoderCustomPreset{
				Type: "#Microsoft.VideoAnalyzer.EncoderCustomPreset",
				AudioEncoder: videoanalyzer.AudioEncoderAac{
					Type:        "#Microsoft.VideoAnalyzer.AudioEncoderAac",
					BitrateKbps: "string",
				},
				VideoEncoder: videoanalyzer.VideoEncoderH264{
					Type:        "#Microsoft.VideoAnalyzer.VideoEncoderH264",
					BitrateKbps: "string",
					FrameRate:   "string",
					Scale: videoanalyzer.VideoScale{
						Height: "string",
						Mode:   "string",
						Width:  "string",
					},
				},
			},
			Type: pulumi.String("#Microsoft.VideoAnalyzer.EncoderProcessor"),
		},
	},
})
var pipelineTopologyResource = new PipelineTopology("pipelineTopologyResource", PipelineTopologyArgs.builder()
    .accountName("string")
    .kind("string")
    .resourceGroupName("string")
    .sinks(VideoSinkArgs.builder()
        .inputs(NodeInputArgs.builder()
            .nodeName("string")
            .build())
        .name("string")
        .type("#Microsoft.VideoAnalyzer.VideoSink")
        .videoName("string")
        .videoCreationProperties(VideoCreationPropertiesArgs.builder()
            .description("string")
            .retentionPeriod("string")
            .segmentLength("string")
            .title("string")
            .build())
        .videoPublishingOptions(VideoPublishingOptionsArgs.builder()
            .disableArchive("string")
            .disableRtspPublishing("string")
            .build())
        .build())
    .sku(SkuArgs.builder()
        .name("string")
        .build())
    .sources(RtspSourceArgs.builder()
        .endpoint(TlsEndpointArgs.builder()
            .credentials(UsernamePasswordCredentialsArgs.builder()
                .password("string")
                .type("#Microsoft.VideoAnalyzer.UsernamePasswordCredentials")
                .username("string")
                .build())
            .type("#Microsoft.VideoAnalyzer.TlsEndpoint")
            .url("string")
            .trustedCertificates(PemCertificateListArgs.builder()
                .certificates("string")
                .type("#Microsoft.VideoAnalyzer.PemCertificateList")
                .build())
            .tunnel(SecureIotDeviceRemoteTunnelArgs.builder()
                .deviceId("string")
                .iotHubName("string")
                .type("#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel")
                .build())
            .validationOptions(TlsValidationOptionsArgs.builder()
                .ignoreHostname("string")
                .ignoreSignature("string")
                .build())
            .build())
        .name("string")
        .type("#Microsoft.VideoAnalyzer.RtspSource")
        .transport("string")
        .build())
    .description("string")
    .parameters(ParameterDeclarationArgs.builder()
        .name("string")
        .type("string")
        .default_("string")
        .description("string")
        .build())
    .pipelineTopologyName("string")
    .processors(EncoderProcessorArgs.builder()
        .inputs(NodeInputArgs.builder()
            .nodeName("string")
            .build())
        .name("string")
        .preset(EncoderCustomPresetArgs.builder()
            .type("#Microsoft.VideoAnalyzer.EncoderCustomPreset")
            .audioEncoder(AudioEncoderAacArgs.builder()
                .type("#Microsoft.VideoAnalyzer.AudioEncoderAac")
                .bitrateKbps("string")
                .build())
            .videoEncoder(VideoEncoderH264Args.builder()
                .type("#Microsoft.VideoAnalyzer.VideoEncoderH264")
                .bitrateKbps("string")
                .frameRate("string")
                .scale(VideoScaleArgs.builder()
                    .height("string")
                    .mode("string")
                    .width("string")
                    .build())
                .build())
            .build())
        .type("#Microsoft.VideoAnalyzer.EncoderProcessor")
        .build())
    .build());
pipeline_topology_resource = azure_native.videoanalyzer.PipelineTopology("pipelineTopologyResource",
    account_name="string",
    kind="string",
    resource_group_name="string",
    sinks=[{
        "inputs": [{
            "node_name": "string",
        }],
        "name": "string",
        "type": "#Microsoft.VideoAnalyzer.VideoSink",
        "video_name": "string",
        "video_creation_properties": {
            "description": "string",
            "retention_period": "string",
            "segment_length": "string",
            "title": "string",
        },
        "video_publishing_options": {
            "disable_archive": "string",
            "disable_rtsp_publishing": "string",
        },
    }],
    sku={
        "name": "string",
    },
    sources=[{
        "endpoint": {
            "credentials": {
                "password": "string",
                "type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
                "username": "string",
            },
            "type": "#Microsoft.VideoAnalyzer.TlsEndpoint",
            "url": "string",
            "trusted_certificates": {
                "certificates": ["string"],
                "type": "#Microsoft.VideoAnalyzer.PemCertificateList",
            },
            "tunnel": {
                "device_id": "string",
                "iot_hub_name": "string",
                "type": "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
            },
            "validation_options": {
                "ignore_hostname": "string",
                "ignore_signature": "string",
            },
        },
        "name": "string",
        "type": "#Microsoft.VideoAnalyzer.RtspSource",
        "transport": "string",
    }],
    description="string",
    parameters=[{
        "name": "string",
        "type": "string",
        "default": "string",
        "description": "string",
    }],
    pipeline_topology_name="string",
    processors=[{
        "inputs": [{
            "node_name": "string",
        }],
        "name": "string",
        "preset": {
            "type": "#Microsoft.VideoAnalyzer.EncoderCustomPreset",
            "audio_encoder": {
                "type": "#Microsoft.VideoAnalyzer.AudioEncoderAac",
                "bitrate_kbps": "string",
            },
            "video_encoder": {
                "type": "#Microsoft.VideoAnalyzer.VideoEncoderH264",
                "bitrate_kbps": "string",
                "frame_rate": "string",
                "scale": {
                    "height": "string",
                    "mode": "string",
                    "width": "string",
                },
            },
        },
        "type": "#Microsoft.VideoAnalyzer.EncoderProcessor",
    }])
const pipelineTopologyResource = new azure_native.videoanalyzer.PipelineTopology("pipelineTopologyResource", {
    accountName: "string",
    kind: "string",
    resourceGroupName: "string",
    sinks: [{
        inputs: [{
            nodeName: "string",
        }],
        name: "string",
        type: "#Microsoft.VideoAnalyzer.VideoSink",
        videoName: "string",
        videoCreationProperties: {
            description: "string",
            retentionPeriod: "string",
            segmentLength: "string",
            title: "string",
        },
        videoPublishingOptions: {
            disableArchive: "string",
            disableRtspPublishing: "string",
        },
    }],
    sku: {
        name: "string",
    },
    sources: [{
        endpoint: {
            credentials: {
                password: "string",
                type: "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
                username: "string",
            },
            type: "#Microsoft.VideoAnalyzer.TlsEndpoint",
            url: "string",
            trustedCertificates: {
                certificates: ["string"],
                type: "#Microsoft.VideoAnalyzer.PemCertificateList",
            },
            tunnel: {
                deviceId: "string",
                iotHubName: "string",
                type: "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
            },
            validationOptions: {
                ignoreHostname: "string",
                ignoreSignature: "string",
            },
        },
        name: "string",
        type: "#Microsoft.VideoAnalyzer.RtspSource",
        transport: "string",
    }],
    description: "string",
    parameters: [{
        name: "string",
        type: "string",
        "default": "string",
        description: "string",
    }],
    pipelineTopologyName: "string",
    processors: [{
        inputs: [{
            nodeName: "string",
        }],
        name: "string",
        preset: {
            type: "#Microsoft.VideoAnalyzer.EncoderCustomPreset",
            audioEncoder: {
                type: "#Microsoft.VideoAnalyzer.AudioEncoderAac",
                bitrateKbps: "string",
            },
            videoEncoder: {
                type: "#Microsoft.VideoAnalyzer.VideoEncoderH264",
                bitrateKbps: "string",
                frameRate: "string",
                scale: {
                    height: "string",
                    mode: "string",
                    width: "string",
                },
            },
        },
        type: "#Microsoft.VideoAnalyzer.EncoderProcessor",
    }],
});
type: azure-native:videoanalyzer:PipelineTopology
properties:
    accountName: string
    description: string
    kind: string
    parameters:
        - default: string
          description: string
          name: string
          type: string
    pipelineTopologyName: string
    processors:
        - inputs:
            - nodeName: string
          name: string
          preset:
            audioEncoder:
                bitrateKbps: string
                type: '#Microsoft.VideoAnalyzer.AudioEncoderAac'
            type: '#Microsoft.VideoAnalyzer.EncoderCustomPreset'
            videoEncoder:
                bitrateKbps: string
                frameRate: string
                scale:
                    height: string
                    mode: string
                    width: string
                type: '#Microsoft.VideoAnalyzer.VideoEncoderH264'
          type: '#Microsoft.VideoAnalyzer.EncoderProcessor'
    resourceGroupName: string
    sinks:
        - inputs:
            - nodeName: string
          name: string
          type: '#Microsoft.VideoAnalyzer.VideoSink'
          videoCreationProperties:
            description: string
            retentionPeriod: string
            segmentLength: string
            title: string
          videoName: string
          videoPublishingOptions:
            disableArchive: string
            disableRtspPublishing: string
    sku:
        name: string
    sources:
        - endpoint:
            credentials:
                password: string
                type: '#Microsoft.VideoAnalyzer.UsernamePasswordCredentials'
                username: string
            trustedCertificates:
                certificates:
                    - string
                type: '#Microsoft.VideoAnalyzer.PemCertificateList'
            tunnel:
                deviceId: string
                iotHubName: string
                type: '#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel'
            type: '#Microsoft.VideoAnalyzer.TlsEndpoint'
            url: string
            validationOptions:
                ignoreHostname: string
                ignoreSignature: string
          name: string
          transport: string
          type: '#Microsoft.VideoAnalyzer.RtspSource'
PipelineTopology Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The PipelineTopology resource accepts the following input properties:
- AccountName string
- The Azure Video Analyzer account name.
- Kind
string | Pulumi.AzureNative.VideoAnalyzer.Kind
- Topology kind.
- ResourceGroupName string
- The name of the resource group. The name is case insensitive.
- Sinks
List<Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoSink>
- List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- Sku
Pulumi.AzureNative.VideoAnalyzer.Inputs.Sku
- Describes the properties of a SKU.
- Sources
List<Union<Pulumi.AzureNative.VideoAnalyzer.Inputs.RtspSource, Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoSourceArgs>>
- List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- Description string
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- Parameters
List<Pulumi.AzureNative.VideoAnalyzer.Inputs.ParameterDeclaration>
- List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- PipelineTopologyName string
- Pipeline topology unique identifier.
- Processors
List<Pulumi.AzureNative.VideoAnalyzer.Inputs.EncoderProcessor>
- List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- AccountName string
- The Azure Video Analyzer account name.
- Kind string | Kind
- Topology kind.
- ResourceGroupName string
- The name of the resource group. The name is case insensitive.
- Sinks
[]VideoSinkArgs
- List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- Sku
SkuArgs
- Describes the properties of a SKU.
- Sources []interface{}
- List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- Description string
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- Parameters
[]ParameterDeclarationArgs
- List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- PipelineTopologyName string
- Pipeline topology unique identifier.
- Processors
[]EncoderProcessorArgs
- List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- accountName String
- The Azure Video Analyzer account name.
- kind String | Kind
- Topology kind.
- resourceGroupName String
- The name of the resource group. The name is case insensitive.
- sinks
List<VideoSink>
- List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- sku Sku
- Describes the properties of a SKU.
- sources
List<Either<RtspSource,VideoSourceArgs>>
- List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- description String
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- parameters
List<ParameterDeclaration>
- List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- pipelineTopologyName String
- Pipeline topology unique identifier.
- processors
List<EncoderProcessor> 
- List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- accountName string
- The Azure Video Analyzer account name.
- kind string | Kind
- Topology kind.
- resourceGroupName string
- The name of the resource group. The name is case insensitive.
- sinks
VideoSink[]
- List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- sku Sku
- Describes the properties of a SKU.
- sources
(RtspSource | VideoSourceArgs)[]
- List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- description string
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- parameters
ParameterDeclaration[]
- List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- pipelineTopologyName string
- Pipeline topology unique identifier.
- processors
EncoderProcessor[] 
- List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- account_name str
- The Azure Video Analyzer account name.
- kind str | Kind
- Topology kind.
- resource_group_name str
- The name of the resource group. The name is case insensitive.
- sinks
Sequence[VideoSinkArgs]
- List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- sku
SkuArgs
- Describes the properties of a SKU.
- sources
Sequence[Union[RtspSourceArgs, VideoSourceArgs]]
- List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- description str
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- parameters
Sequence[ParameterDeclarationArgs]
- List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- pipeline_topology_name str
- Pipeline topology unique identifier.
- processors
Sequence[EncoderProcessorArgs]
- List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- accountName String
- The Azure Video Analyzer account name.
- kind String | "Live" | "Batch"
- Topology kind.
- resourceGroupName String
- The name of the resource group. The name is case insensitive.
- sinks List<Property Map>
- List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- sku Property Map
- Describes the properties of a SKU.
- sources List<Property Map | Property Map>
- List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- description String
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- parameters List<Property Map>
- List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- pipelineTopologyName String
- Pipeline topology unique identifier.
- processors List<Property Map>
- List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
Outputs
All input properties are implicitly available as output properties. Additionally, the PipelineTopology resource produces the following output properties:
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The name of the resource
- SystemData Pulumi.AzureNative.VideoAnalyzer.Outputs.SystemDataResponse
- Azure Resource Manager metadata containing createdBy and modifiedBy information.
- Type string
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The name of the resource
- SystemData SystemDataResponse
- Azure Resource Manager metadata containing createdBy and modifiedBy information.
- Type string
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The name of the resource
- systemData SystemDataResponse
- Azure Resource Manager metadata containing createdBy and modifiedBy information.
- type String
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- The name of the resource
- systemData SystemDataResponse
- Azure Resource Manager metadata containing createdBy and modifiedBy information.
- type string
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- The name of the resource
- system_data SystemDataResponse
- Azure Resource Manager metadata containing createdBy and modifiedBy information.
- type str
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The name of the resource
- systemData Property Map
- Azure Resource Manager metadata containing createdBy and modifiedBy information.
- type String
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
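As a brief usage note (not part of the reference above), these outputs can be read from the resource object like any other Pulumi outputs; pipelineTopology below refers to the constant declared in the TypeScript example earlier, and the projection of systemData is illustrative.
// Provider-assigned outputs become available once the resource is created.
export const topologyId = pipelineTopology.id; // ARM resource ID
export const topologyName = pipelineTopology.name; // resource name
// systemData is an object output; project individual fields with apply().
export const topologyCreatedAt = pipelineTopology.systemData.apply(sd => sd.createdAt);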
Supporting Types
AudioEncoderAac, AudioEncoderAacArgs      
- BitrateKbps string
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- BitrateKbps string
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrateKbps String
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrateKbps string
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate_kbps str
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrateKbps String
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
AudioEncoderAacResponse, AudioEncoderAacResponseArgs        
- BitrateKbps string
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- BitrateKbps string
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrateKbps String
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrateKbps string
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate_kbps str
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrateKbps String
- Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
EncoderCustomPreset, EncoderCustomPresetArgs      
- AudioEncoder Pulumi.AzureNative.VideoAnalyzer.Inputs.AudioEncoderAac
- Describes a custom preset for encoding audio.
- VideoEncoder Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoEncoderH264
- Describes a custom preset for encoding video.
- AudioEncoder AudioEncoderAac
- Describes a custom preset for encoding audio.
- VideoEncoder VideoEncoderH264
- Describes a custom preset for encoding video.
- audioEncoder AudioEncoderAac
- Describes a custom preset for encoding audio.
- videoEncoder VideoEncoderH264
- Describes a custom preset for encoding video.
- audioEncoder AudioEncoderAac
- Describes a custom preset for encoding audio.
- videoEncoder VideoEncoderH264
- Describes a custom preset for encoding video.
- audio_encoder AudioEncoderAac
- Describes a custom preset for encoding audio.
- video_encoder VideoEncoderH264
- Describes a custom preset for encoding video.
- audioEncoder Property Map
- Describes a custom preset for encoding audio.
- videoEncoder Property Map
- Describes a custom preset for encoding video.
EncoderCustomPresetResponse, EncoderCustomPresetResponseArgs        
- AudioEncoder Pulumi.AzureNative.VideoAnalyzer.Inputs.AudioEncoderAacResponse
- Describes a custom preset for encoding audio.
- VideoEncoder Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoEncoderH264Response
- Describes a custom preset for encoding video.
- AudioEncoder AudioEncoderAacResponse
- Describes a custom preset for encoding audio.
- VideoEncoder VideoEncoderH264Response
- Describes a custom preset for encoding video.
- audioEncoder AudioEncoderAacResponse
- Describes a custom preset for encoding audio.
- videoEncoder VideoEncoderH264Response
- Describes a custom preset for encoding video.
- audioEncoder AudioEncoderAacResponse
- Describes a custom preset for encoding audio.
- videoEncoder VideoEncoderH264Response
- Describes a custom preset for encoding video.
- audio_encoder AudioEncoderAacResponse
- Describes a custom preset for encoding audio.
- video_encoder VideoEncoderH264Response
- Describes a custom preset for encoding video.
- audioEncoder Property Map
- Describes a custom preset for encoding audio.
- videoEncoder Property Map
- Describes a custom preset for encoding video.
EncoderProcessor, EncoderProcessorArgs    
- Inputs
List<Pulumi.AzureNative.VideoAnalyzer.Inputs.NodeInput>
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- Preset
Pulumi.AzureNative.VideoAnalyzer.Inputs.EncoderCustomPreset | Pulumi.AzureNative.VideoAnalyzer.Inputs.EncoderSystemPreset
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- Inputs
[]NodeInput 
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- Preset
EncoderCustomPreset | EncoderSystemPreset
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
List<NodeInput> 
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- preset
EncoderCustomPreset | EncoderSystemPreset
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
NodeInput[] 
- An array of upstream node references within the topology to be used as inputs for this node.
- name string
- Node name. Must be unique within the topology.
- preset
EncoderCustomPreset | EncoderSystemPreset
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
Sequence[NodeInput] 
- An array of upstream node references within the topology to be used as inputs for this node.
- name str
- Node name. Must be unique within the topology.
- preset
EncoderCustomPreset | EncoderSystemPreset
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs List<Property Map>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- preset Property Map | Property Map
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
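To make the node wiring concrete, the fragment below is an illustrative TypeScript object for a processors entry; the node names are hypothetical and not taken from the examples above. A processor's inputs reference the upstream node by name, and a downstream sink references the processor the same way.
// Illustrative fragment of a `processors` entry. The node graph is wired purely
// by name references: this processor reads from a source node named "rtspSource";
// a sink downstream would list "encoderProcessor" in its own inputs.
const encoderProcessorNode = {
    type: "#Microsoft.VideoAnalyzer.EncoderProcessor",
    name: "encoderProcessor",
    inputs: [{ nodeName: "rtspSource" }], // upstream node to read from
    preset: {
        type: "#Microsoft.VideoAnalyzer.EncoderSystemPreset",
        name: "SingleLayer_540p_H264_AAC", // built-in preset; see EncoderSystemPresetType below
    },
};
// A downstream sink would then reference it by name:
//   inputs: [{ nodeName: "encoderProcessor" }]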
EncoderProcessorResponse, EncoderProcessorResponseArgs      
- Inputs
List<Pulumi.AzureNative.VideoAnalyzer.Inputs.NodeInputResponse>
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- Preset
Pulumi.AzureNative.VideoAnalyzer.Inputs.EncoderCustomPresetResponse | Pulumi.AzureNative.VideoAnalyzer.Inputs.EncoderSystemPresetResponse
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- Inputs
[]NodeInputResponse
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- Preset
EncoderCustomPresetResponse | EncoderSystemPresetResponse
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
List<NodeInputResponse>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- preset
EncoderCustomPresetResponse | EncoderSystemPresetResponse
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
NodeInputResponse[]
- An array of upstream node references within the topology to be used as inputs for this node.
- name string
- Node name. Must be unique within the topology.
- preset
EncoderCustomPresetResponse | EncoderSystemPresetResponse
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
Sequence[NodeInputResponse]
- An array of upstream node references within the topology to be used as inputs for this node.
- name str
- Node name. Must be unique within the topology.
- preset
EncoderCustomPresetResponse | EncoderSystemPresetResponse
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs List<Property Map>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- preset Property Map | Property Map
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
EncoderSystemPreset, EncoderSystemPresetArgs      
- Name
string | Pulumi.AzureNative.VideoAnalyzer.EncoderSystemPresetType
- Name of the built-in encoding preset.
- Name
string | EncoderSystemPresetType
- Name of the built-in encoding preset.
- name
String | EncoderSystemPresetType
- Name of the built-in encoding preset.
- name
string | EncoderSystemPresetType
- Name of the built-in encoding preset.
- name
str | EncoderSystemPresetType
- Name of the built-in encoding preset.
- name
String | "SingleLayer_540p_H264_AAC" | "SingleLayer_720p_H264_AAC" | "SingleLayer_1080p_H264_AAC" | "SingleLayer_2160p_H264_AAC"
- Name of the built-in encoding preset.
EncoderSystemPresetResponse, EncoderSystemPresetResponseArgs        
- Name string
- Name of the built-in encoding preset.
- Name string
- Name of the built-in encoding preset.
- name String
- Name of the built-in encoding preset.
- name string
- Name of the built-in encoding preset.
- name str
- Name of the built-in encoding preset.
- name String
- Name of the built-in encoding preset.
EncoderSystemPresetType, EncoderSystemPresetTypeArgs        
- SingleLayer_540p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_720p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_1080p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_2160p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- EncoderSystemPresetType_SingleLayer_540p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- EncoderSystemPresetType_SingleLayer_720p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- EncoderSystemPresetType_SingleLayer_1080p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- EncoderSystemPresetType_SingleLayer_2160p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_540p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_720p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_1080p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_2160p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_540p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_720p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_1080p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_2160p_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SINGLE_LAYER_540P_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SINGLE_LAYER_720P_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SINGLE_LAYER_1080P_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SINGLE_LAYER_2160P_H264_AAC
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- "SingleLayer_540p_H264_AAC"
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- "SingleLayer_720p_H264_AAC"
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- "SingleLayer_1080p_H264_AAC"
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- "SingleLayer_2160p_H264_AAC"
- Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
Kind, KindArgs  
- Live
- Live pipeline topology resource.
- Batch
- Batch pipeline topology resource.
- KindLive
- Live pipeline topology resource.
- KindBatch
- Batch pipeline topology resource.
- Live
- Live pipeline topology resource.
- Batch
- Batch pipeline topology resource.
- Live
- Live pipeline topology resource.
- Batch
- Batch pipeline topology resource.
- LIVE
- Live pipeline topology resource.
- BATCH
- Batch pipeline topology resource.
- "Live"
- Live pipeline topology resource.
- "Batch"
- Batch pipeline topology resource.
NodeInput, NodeInputArgs    
- NodeName string
- The name of the upstream node in the pipeline which output is used as input of the current node.
- NodeName string
- The name of the upstream node in the pipeline which output is used as input of the current node.
- nodeName String
- The name of the upstream node in the pipeline which output is used as input of the current node.
- nodeName string
- The name of the upstream node in the pipeline which output is used as input of the current node.
- node_name str
- The name of the upstream node in the pipeline which output is used as input of the current node.
- nodeName String
- The name of the upstream node in the pipeline which output is used as input of the current node.
NodeInputResponse, NodeInputResponseArgs      
- NodeName string
- The name of the upstream node in the pipeline which output is used as input of the current node.
- NodeName string
- The name of the upstream node in the pipeline which output is used as input of the current node.
- nodeName String
- The name of the upstream node in the pipeline which output is used as input of the current node.
- nodeName string
- The name of the upstream node in the pipeline which output is used as input of the current node.
- node_name str
- The name of the upstream node in the pipeline which output is used as input of the current node.
- nodeName String
- The name of the upstream node in the pipeline which output is used as input of the current node.
ParameterDeclaration, ParameterDeclarationArgs    
- Name string
- Name of the parameter.
- Type
string | Pulumi.AzureNative.VideoAnalyzer.ParameterType
- Type of the parameter.
- Default string
- The default value for the parameter to be used if the pipeline does not specify a value.
- Description string
- Description of the parameter.
- Name string
- Name of the parameter.
- Type
string | ParameterType 
- Type of the parameter.
- Default string
- The default value for the parameter to be used if the pipeline does not specify a value.
- Description string
- Description of the parameter.
- name String
- Name of the parameter.
- type
String | ParameterType 
- Type of the parameter.
- default_ String
- The default value for the parameter to be used if the pipeline does not specify a value.
- description String
- Description of the parameter.
- name string
- Name of the parameter.
- type
string | ParameterType 
- Type of the parameter.
- default string
- The default value for the parameter to be used if the pipeline does not specify a value.
- description string
- Description of the parameter.
- name str
- Name of the parameter.
- type
str | ParameterType 
- Type of the parameter.
- default str
- The default value for the parameter to be used if the pipeline does not specify a value.
- description str
- Description of the parameter.
- name String
- Name of the parameter.
- type
String | "String" | "SecretString" | "Int" | "Double" | "Bool" 
- Type of the parameter.
- default String
- The default value for the parameter to be used if the pipeline does not specify a value.
- description String
- Description of the parameter.
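A declared parameter is referenced from other nodes by name. The sketch below declares a string parameter and shows the ${parameterName} substitution form that Video Analyzer topologies use to consume it; the parameter name and default URL are placeholders.
using AzureNative = Pulumi.AzureNative;

// Hypothetical topology parameter holding the camera's RTSP URL.
var rtspUrlParameter = new AzureNative.VideoAnalyzer.Inputs.ParameterDeclarationArgs
{
    Name = "rtspUrlParameter",
    Type = AzureNative.VideoAnalyzer.ParameterType.String,
    Description = "RTSP source URL parameter",
    Default = "rtsp://contoso.example/stream1",
};

// Elsewhere in the topology the value is consumed by substitution, for example
// by setting an endpoint's Url property to "${rtspUrlParameter}", so that each
// pipeline built from this topology can supply its own camera URL.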
ParameterDeclarationResponse, ParameterDeclarationResponseArgs      
- Name string
- Name of the parameter.
- Type string
- Type of the parameter.
- Default string
- The default value for the parameter to be used if the pipeline does not specify a value.
- Description string
- Description of the parameter.
- Name string
- Name of the parameter.
- Type string
- Type of the parameter.
- Default string
- The default value for the parameter to be used if the pipeline does not specify a value.
- Description string
- Description of the parameter.
- name String
- Name of the parameter.
- type String
- Type of the parameter.
- default_ String
- The default value for the parameter to be used if the pipeline does not specify a value.
- description String
- Description of the parameter.
- name string
- Name of the parameter.
- type string
- Type of the parameter.
- default string
- The default value for the parameter to be used if the pipeline does not specify a value.
- description string
- Description of the parameter.
- name str
- Name of the parameter.
- type str
- Type of the parameter.
- default str
- The default value for the parameter to be used if the pipeline does not specify a value.
- description str
- Description of the parameter.
- name String
- Name of the parameter.
- type String
- Type of the parameter.
- default String
- The default value for the parameter to be used if the pipeline does not specify a value.
- description String
- Description of the parameter.
ParameterType, ParameterTypeArgs    
- String
- The parameter's value is a string.
- SecretString
- The parameter's value is a string that holds sensitive information.
- Int
- The parameter's value is a 32-bit signed integer.
- Double
- The parameter's value is a 64-bit double-precision floating point.
- Bool
- The parameter's value is a boolean value that is either true or false.
- ParameterTypeString
- The parameter's value is a string.
- ParameterTypeSecretString
- The parameter's value is a string that holds sensitive information.
- ParameterTypeInt
- The parameter's value is a 32-bit signed integer.
- ParameterTypeDouble
- The parameter's value is a 64-bit double-precision floating point.
- ParameterTypeBool
- The parameter's value is a boolean value that is either true or false.
- String
- The parameter's value is a string.
- SecretString
- The parameter's value is a string that holds sensitive information.
- Int
- The parameter's value is a 32-bit signed integer.
- Double
- The parameter's value is a 64-bit double-precision floating point.
- Bool
- The parameter's value is a boolean value that is either true or false.
- String
- The parameter's value is a string.
- SecretString
- The parameter's value is a string that holds sensitive information.
- Int
- The parameter's value is a 32-bit signed integer.
- Double
- The parameter's value is a 64-bit double-precision floating point.
- Bool
- The parameter's value is a boolean value that is either true or false.
- STRING
- The parameter's value is a string.
- SECRET_STRING
- The parameter's value is a string that holds sensitive information.
- INT
- The parameter's value is a 32-bit signed integer.
- DOUBLE
- The parameter's value is a 64-bit double-precision floating point.
- BOOL
- The parameter's value is a boolean value that is either true or false.
- "String"
- The parameter's value is a string.
- "SecretString"
- The parameter's value is a string that holds sensitive information.
- "Int"
- The parameter's value is a 32-bit signed integer.
- "Double"
- The parameter's value is a 64-bit double-precision floating point.
- "Bool"
- The parameter's value is a boolean value that is either true or false.
PemCertificateList, PemCertificateListArgs      
- Certificates List<string>
- PEM formatted public certificates. One certificate per entry.
- Certificates []string
- PEM formatted public certificates. One certificate per entry.
- certificates List<String>
- PEM formatted public certificates. One certificate per entry.
- certificates string[]
- PEM formatted public certificates. One certificate per entry.
- certificates Sequence[str]
- PEM formatted public certificates. One certificate per entry.
- certificates List<String>
- PEM formatted public certificates. One certificate per entry.
PemCertificateListResponse, PemCertificateListResponseArgs        
- Certificates List<string>
- PEM formatted public certificates. One certificate per entry.
- Certificates []string
- PEM formatted public certificates. One certificate per entry.
- certificates List<String>
- PEM formatted public certificates. One certificate per entry.
- certificates string[]
- PEM formatted public certificates. One certificate per entry.
- certificates Sequence[str]
- PEM formatted public certificates. One certificate per entry.
- certificates List<String>
- PEM formatted public certificates. One certificate per entry.
RtspSource, RtspSourceArgs    
- Endpoint
Pulumi.AzureNative.VideoAnalyzer.Inputs.TlsEndpoint | Pulumi.AzureNative.VideoAnalyzer.Inputs.UnsecuredEndpoint
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- Name string
- Node name. Must be unique within the topology.
- Transport
string | Pulumi.AzureNative.VideoAnalyzer.RtspTransport
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- Endpoint
TlsEndpoint | UnsecuredEndpoint 
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- Name string
- Node name. Must be unique within the topology.
- Transport
string | RtspTransport 
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- endpoint
TlsEndpoint | UnsecuredEndpoint 
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- name String
- Node name. Must be unique within the topology.
- transport
String | RtspTransport 
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- endpoint
TlsEndpoint | UnsecuredEndpoint 
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- name string
- Node name. Must be unique within the topology.
- transport
string | RtspTransport 
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- endpoint
TlsEndpoint | UnsecuredEndpoint 
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- name str
- Node name. Must be unique within the topology.
- transport
str | RtspTransport 
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- endpoint Property Map | Property Map
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- name String
- Node name. Must be unique within the topology.
- transport String | "Http" | "Tcp"
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
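The sketch below shows an RTSP source node reading from an unsecured (rtsp://) endpoint over TCP, with the URL and credentials supplied through topology parameters. The Type discriminators and the Username/Password property names on UsernamePasswordCredentialsArgs are assumptions based on the "#Microsoft.VideoAnalyzer.*" naming convention used for VideoSink above; adjust them to match your SDK version.
using AzureNative = Pulumi.AzureNative;

// Hypothetical RTSP source over TCP, parameterized by URL, username and password.
var rtspSource = new AzureNative.VideoAnalyzer.Inputs.RtspSourceArgs
{
    Name = "rtspSource",
    Type = "#Microsoft.VideoAnalyzer.RtspSource", // assumed discriminator
    Transport = "Tcp",                            // RTP interleaved on the RTSP TCP connection
    Endpoint = new AzureNative.VideoAnalyzer.Inputs.UnsecuredEndpointArgs
    {
        Type = "#Microsoft.VideoAnalyzer.UnsecuredEndpoint", // assumed discriminator
        Url = "${rtspUrlParameter}",
        Credentials = new AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsArgs
        {
            Type = "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", // assumed discriminator
            Username = "${rtspUsernameParameter}", // assumes matching parameter declarations
            Password = "${rtspPasswordParameter}",
        },
    },
};
For cameras that require TLS (rtsps://), use a TlsEndpointArgs for the Endpoint instead; see the TlsEndpoint type below.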
RtspSourceResponse, RtspSourceResponseArgs      
- Endpoint
Pulumi.AzureNative.VideoAnalyzer.Inputs.TlsEndpointResponse | Pulumi.AzureNative.VideoAnalyzer.Inputs.UnsecuredEndpointResponse
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- Name string
- Node name. Must be unique within the topology.
- Transport string
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- Endpoint
TlsEndpointResponse | UnsecuredEndpointResponse
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- Name string
- Node name. Must be unique within the topology.
- Transport string
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- endpoint
TlsEndpointResponse | UnsecuredEndpointResponse
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- name String
- Node name. Must be unique within the topology.
- transport String
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- endpoint
TlsEndpointResponse | UnsecuredEndpointResponse
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- name string
- Node name. Must be unique within the topology.
- transport string
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- endpoint
TlsEndpointResponse | UnsecuredEndpointResponse
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- name str
- Node name. Must be unique within the topology.
- transport str
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
- endpoint Property Map | Property Map
- RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- name String
- Node name. Must be unique within the topology.
- transport String
- Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long lived HTTP connections, and the RTP packages are interleaved in the HTTP connections alongside the RTSP messages.
RtspTransport, RtspTransportArgs    
- Http
- HTTP transport. RTSP messages are exchanged over long running HTTP requests and RTP packets are interleaved within the HTTP channel.
- Tcp
- TCP transport. RTSP is used directly over TCP and RTP packets are interleaved within the TCP channel.
- RtspTransportHttp
- HTTP transport. RTSP messages are exchanged over long running HTTP requests and RTP packets are interleaved within the HTTP channel.
- RtspTransportTcp
- TCP transport. RTSP is used directly over TCP and RTP packets are interleaved within the TCP channel.
- Http
- HTTP transport. RTSP messages are exchanged over long running HTTP requests and RTP packets are interleaved within the HTTP channel.
- Tcp
- TCP transport. RTSP is used directly over TCP and RTP packets are interleaved within the TCP channel.
- Http
- HTTP transport. RTSP messages are exchanged over long running HTTP requests and RTP packets are interleaved within the HTTP channel.
- Tcp
- TCP transport. RTSP is used directly over TCP and RTP packets are interleaved within the TCP channel.
- HTTP
- HTTP transport. RTSP messages are exchanged over long running HTTP requests and RTP packets are interleaved within the HTTP channel.
- TCP
- TCP transport. RTSP is used directly over TCP and RTP packets are interleaved within the TCP channel.
- "Http"
- HTTP transport. RTSP messages are exchanged over long running HTTP requests and RTP packets are interleaved within the HTTP channel.
- "Tcp"
- TCP transport. RTSP is used directly over TCP and RTP packets are interleaved within the TCP channel.
SecureIotDeviceRemoteTunnel, SecureIotDeviceRemoteTunnelArgs          
- DeviceId string
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- IotHubName string
- Name of the IoT Hub.
- DeviceId string
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- IotHubName string
- Name of the IoT Hub.
- deviceId String
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- iotHubName String
- Name of the IoT Hub.
- deviceId string
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- iotHubName string
- Name of the IoT Hub.
- device_id str
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- iot_hub_name str
- Name of the IoT Hub.
- deviceId String
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- iotHubName String
- Name of the IoT Hub.
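When the camera or RTSP server is not directly reachable from Azure, a tunnel through an IoT Hub-registered device can be attached to the endpoint. A minimal sketch, with placeholder hub and device names:
using AzureNative = Pulumi.AzureNative;

// Hypothetical tunnel through an IoT Hub device that sits next to the camera.
var cameraTunnel = new AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnelArgs
{
    IotHubName = "myIotHub",        // placeholder IoT Hub name
    DeviceId = "camera-gateway-01", // placeholder device id (case-sensitive)
};

// The tunnel is then assigned to the Tunnel property of a TlsEndpointArgs or
// UnsecuredEndpointArgs so that Video Analyzer connects through the device.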
SecureIotDeviceRemoteTunnelResponse, SecureIotDeviceRemoteTunnelResponseArgs            
- DeviceId string
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- IotHubName string
- Name of the IoT Hub.
- DeviceId string
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- IotHubName string
- Name of the IoT Hub.
- deviceId String
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- iotHubName String
- Name of the IoT Hub.
- deviceId string
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- iotHubName string
- Name of the IoT Hub.
- device_id str
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- iot_hub_name str
- Name of the IoT Hub.
- deviceId String
- The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- iotHubName String
- Name of the IoT Hub.
Sku, SkuArgs  
- Name
string | Pulumi.AzureNative.VideoAnalyzer.SkuName
- The SKU name.
- name String | "Live_S1" | "Batch_S1"
- The SKU name.
SkuName, SkuNameArgs    
- Live_S1
- Represents the Live S1 SKU name. Using this SKU you can create live pipelines to capture, record, and stream live video from RTSP-capable cameras at bitrate settings from 0.5 Kbps to 3000 Kbps.
- Batch_S1
- Represents the Batch S1 SKU name. Using this SKU you can create pipeline jobs to process recorded content.
- SkuName_Live_S1
- Represents the Live S1 SKU name. Using this SKU you can create live pipelines to capture, record, and stream live video from RTSP-capable cameras at bitrate settings from 0.5 Kbps to 3000 Kbps.
- SkuName_Batch_S1
- Represents the Batch S1 SKU name. Using this SKU you can create pipeline jobs to process recorded content.
- Live_S1
- Represents the Live S1 SKU name. Using this SKU you can create live pipelines to capture, record, and stream live video from RTSP-capable cameras at bitrate settings from 0.5 Kbps to 3000 Kbps.
- Batch_S1
- Represents the Batch S1 SKU name. Using this SKU you can create pipeline jobs to process recorded content.
- Live_S1
- Represents the Live S1 SKU name. Using this SKU you can create live pipelines to capture, record, and stream live video from RTSP-capable cameras at bitrate settings from 0.5 Kbps to 3000 Kbps.
- Batch_S1
- Represents the Batch S1 SKU name. Using this SKU you can create pipeline jobs to process recorded content.
- LIVE_S1
- Represents the Live S1 SKU name. Using this SKU you can create live pipelines to capture, record, and stream live video from RTSP-capable cameras at bitrate settings from 0.5 Kbps to 3000 Kbps.
- BATCH_S1
- Represents the Batch S1 SKU name. Using this SKU you can create pipeline jobs to process recorded content.
- "Live_S1"
- Represents the Live S1 SKU name. Using this SKU you can create live pipelines to capture, record, and stream live video from RTSP-capable cameras at bitrate settings from 0.5 Kbps to 3000 Kbps.
- "Batch_S1"
- Represents the Batch S1 SKU name. Using this SKU you can create pipeline jobs to process recorded content.
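Kind and Sku are properties of the PipelineTopology resource itself and are typically paired: a live topology with the Live_S1 SKU, a batch topology with Batch_S1. A minimal sketch of the resource-level arguments (sources, processors and sinks omitted; the account and resource group names are placeholders):
using AzureNative = Pulumi.AzureNative;

// Hypothetical resource arguments pairing a live topology with the Live_S1 SKU.
var topologyArgs = new AzureNative.VideoAnalyzer.PipelineTopologyArgs
{
    AccountName = "myVideoAnalyzerAccount",
    ResourceGroupName = "myResourceGroup",
    Kind = AzureNative.VideoAnalyzer.Kind.Live,
    Sku = new AzureNative.VideoAnalyzer.Inputs.SkuArgs
    {
        Name = "Live_S1", // string form; the SkuName enum value can be used instead
    },
};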
SkuResponse, SkuResponseArgs    
SystemDataResponse, SystemDataResponseArgs      
- CreatedAt string
- The timestamp of resource creation (UTC).
- CreatedBy string
- The identity that created the resource.
- CreatedByType string
- The type of identity that created the resource.
- LastModifiedAt string
- The timestamp of resource last modification (UTC)
- LastModifiedBy string
- The identity that last modified the resource.
- LastModifiedByType string
- The type of identity that last modified the resource.
- CreatedAt string
- The timestamp of resource creation (UTC).
- CreatedBy string
- The identity that created the resource.
- CreatedByType string
- The type of identity that created the resource.
- LastModifiedAt string
- The timestamp of resource last modification (UTC)
- LastModifiedBy string
- The identity that last modified the resource.
- LastModifiedByType string
- The type of identity that last modified the resource.
- createdAt String
- The timestamp of resource creation (UTC).
- createdBy String
- The identity that created the resource.
- createdByType String
- The type of identity that created the resource.
- lastModifiedAt String
- The timestamp of resource last modification (UTC)
- lastModifiedBy String
- The identity that last modified the resource.
- lastModifiedByType String
- The type of identity that last modified the resource.
- createdAt string
- The timestamp of resource creation (UTC).
- createdBy string
- The identity that created the resource.
- createdByType string
- The type of identity that created the resource.
- lastModifiedAt string
- The timestamp of resource last modification (UTC)
- lastModifiedBy string
- The identity that last modified the resource.
- lastModifiedByType string
- The type of identity that last modified the resource.
- created_at str
- The timestamp of resource creation (UTC).
- created_by str
- The identity that created the resource.
- created_by_type str
- The type of identity that created the resource.
- last_modified_at str
- The timestamp of resource last modification (UTC)
- last_modified_by str
- The identity that last modified the resource.
- last_modified_by_type str
- The type of identity that last modified the resource.
- createdAt String
- The timestamp of resource creation (UTC).
- createdBy String
- The identity that created the resource.
- createdByType String
- The type of identity that created the resource.
- lastModifiedAt String
- The timestamp of resource last modification (UTC)
- lastModifiedBy String
- The identity that last modified the resource.
- lastModifiedByType String
- The type of identity that last modified the resource.
TlsEndpoint, TlsEndpointArgs    
- Credentials
Pulumi.AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- Url string
- The endpoint URL for Video Analyzer to connect to.
- TrustedCertificates Pulumi.AzureNative.VideoAnalyzer.Inputs.PemCertificateList
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- Tunnel
Pulumi.AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- ValidationOptions Pulumi.AzureNative.VideoAnalyzer.Inputs.TlsValidationOptions
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- Credentials
UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- Url string
- The endpoint URL for Video Analyzer to connect to.
- TrustedCertificates PemCertificateList
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- Tunnel
SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- ValidationOptions TlsValidationOptions
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- credentials
UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- url String
- The endpoint URL for Video Analyzer to connect to.
- trustedCertificates PemCertificateList
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- tunnel
SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- validationOptions TlsValidationOptions
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- credentials
UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- url string
- The endpoint URL for Video Analyzer to connect to.
- trustedCertificates PemCertificateList
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- tunnel
SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- validationOptions TlsValidationOptions
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- credentials
UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- url str
- The endpoint URL for Video Analyzer to connect to.
- trusted_certificates PemCertificateList
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- tunnel
SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- validation_options TlsValidationOptions
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- credentials Property Map
- Credentials to be presented to the endpoint.
- url String
- The endpoint URL for Video Analyzer to connect to.
- trustedCertificates Property Map
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- tunnel Property Map
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- validationOptions Property Map
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
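For rtsps:// cameras, the TLS endpoint can carry its own trust anchors and relaxed validation options, which is common with self-signed or privately issued device certificates. A sketch under those assumptions (the Type discriminators, the Username/Password property names, and the certificate contents are placeholders; additional discriminators may be required on nested types depending on the SDK version):
using AzureNative = Pulumi.AzureNative;

// Hypothetical TLS endpoint that trusts a private CA and skips hostname validation.
var tlsEndpoint = new AzureNative.VideoAnalyzer.Inputs.TlsEndpointArgs
{
    Type = "#Microsoft.VideoAnalyzer.TlsEndpoint", // assumed discriminator
    Url = "${rtspUrlParameter}",                   // e.g. an rtsps:// URL supplied per pipeline
    Credentials = new AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsArgs
    {
        Type = "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", // assumed discriminator
        Username = "${rtspUsernameParameter}",
        Password = "${rtspPasswordParameter}",
    },
    TrustedCertificates = new AzureNative.VideoAnalyzer.Inputs.PemCertificateListArgs
    {
        // A Type discriminator may also be required here, depending on SDK version.
        Certificates = new[]
        {
            "-----BEGIN CERTIFICATE-----\n<private CA certificate>\n-----END CERTIFICATE-----",
        },
    },
    ValidationOptions = new AzureNative.VideoAnalyzer.Inputs.TlsValidationOptionsArgs
    {
        IgnoreHostname = "true",   // values are strings, per the properties above
        IgnoreSignature = "false",
    },
};
A SecureIotDeviceRemoteTunnel (sketched earlier) can be attached through the Tunnel property when the camera sits behind a firewall.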
TlsEndpointResponse, TlsEndpointResponseArgs      
- Credentials
Pulumi.AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- Url string
- The endpoint URL for Video Analyzer to connect to.
- TrustedCertificates Pulumi.AzureNative.VideoAnalyzer.Inputs.PemCertificateListResponse
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- Tunnel
Pulumi.AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- ValidationOptions Pulumi.AzureNative.VideoAnalyzer.Inputs.TlsValidationOptionsResponse
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- Credentials
UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- Url string
- The endpoint URL for Video Analyzer to connect to.
- TrustedCertificates PemCertificateListResponse
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- Tunnel
SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- ValidationOptions TlsValidationOptionsResponse
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- credentials
UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- url String
- The endpoint URL for Video Analyzer to connect to.
- trustedCertificates PemCertificateListResponse
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- tunnel
SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- validationOptions TlsValidationOptionsResponse
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- credentials
UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- url string
- The endpoint URL for Video Analyzer to connect to.
- trustedCertificates PemCertificateListResponse
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- tunnel
SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- validationOptions TlsValidationOptionsResponse
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- credentials
UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- url str
- The endpoint URL for Video Analyzer to connect to.
- trusted_certificates PemCertificateListResponse
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- tunnel
SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- validation_options TlsValidationOptionsResponse
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
- credentials Property Map
- Credentials to be presented to the endpoint.
- url String
- The endpoint URL for Video Analyzer to connect to.
- trustedCertificates Property Map
- List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- tunnel Property Map
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- validationOptions Property Map
- Validation options to use when authenticating a TLS connection. By default, strict validation is used.
TlsValidationOptions, TlsValidationOptionsArgs      
- IgnoreHostname string
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- IgnoreSignature string
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- IgnoreHostname string
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- IgnoreSignature string
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- ignoreHostname String
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- ignoreSignature String
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- ignoreHostname string
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- ignoreSignature string
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- ignore_hostname str
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- ignore_signature str
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- ignoreHostname String
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- ignoreSignature String
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
TlsValidationOptionsResponse, TlsValidationOptionsResponseArgs        
- IgnoreHostname string
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- IgnoreSignature string
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- IgnoreHostname string
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- IgnoreSignature string
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- ignoreHostname String
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- ignoreSignature String
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- ignoreHostname string
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- ignoreSignature string
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- ignore_hostname str
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- ignore_signature str
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
- ignoreHostname String
- When set to 'true' causes the certificate subject name validation to be skipped. Default is 'false'.
- ignoreSignature String
- When set to 'true' causes the certificate chain trust validation to be skipped. Default is 'false'.
UnsecuredEndpoint, UnsecuredEndpointArgs    
- Credentials
Pulumi.AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- Url string
- The endpoint URL for Video Analyzer to connect to.
- Tunnel
Pulumi.AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- Credentials
UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- Url string
- The endpoint URL for Video Analyzer to connect to.
- Tunnel
SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- credentials
UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- url String
- The endpoint URL for Video Analyzer to connect to.
- tunnel
SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- credentials
UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- url string
- The endpoint URL for Video Analyzer to connect to.
- tunnel
SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- credentials
UsernamePasswordCredentials
- Credentials to be presented to the endpoint.
- url str
- The endpoint URL for Video Analyzer to connect to.
- tunnel
SecureIotDeviceRemoteTunnel
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- credentials Property Map
- Credentials to be presented to the endpoint.
- url String
- The endpoint URL for Video Analyzer to connect to.
- tunnel Property Map
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
UnsecuredEndpointResponse, UnsecuredEndpointResponseArgs      
- Credentials
Pulumi.AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- Url string
- The endpoint URL for Video Analyzer to connect to.
- Tunnel
Pulumi.AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- Credentials
UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- Url string
- The endpoint URL for Video Analyzer to connect to.
- Tunnel
SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- credentials
UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- url String
- The endpoint URL for Video Analyzer to connect to.
- tunnel
SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- credentials
UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- url string
- The endpoint URL for Video Analyzer to connect to.
- tunnel
SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- credentials
UsernamePasswordCredentialsResponse
- Credentials to be presented to the endpoint.
- url str
- The endpoint URL for Video Analyzer to connect to.
- tunnel
SecureIotDeviceRemoteTunnelResponse
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- credentials Property Map
- Credentials to be presented to the endpoint.
- url String
- The endpoint URL for Video Analyzer to connect to.
- tunnel Property Map
- Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
UsernamePasswordCredentials, UsernamePasswordCredentialsArgs      
UsernamePasswordCredentialsResponse, UsernamePasswordCredentialsResponseArgs        
VideoCreationProperties, VideoCreationPropertiesArgs      
- Description string
- Optional description provided by the user. Value can be up to 2048 characters long.
- RetentionPeriod string
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- SegmentLength string
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- Title string
- Optional title provided by the user. Value can be up to 256 characters long.
- Description string
- Optional description provided by the user. Value can be up to 2048 characters long.
- RetentionPeriod string
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- SegmentLength string
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- Title string
- Optional title provided by the user. Value can be up to 256 characters long.
- description String
- Optional description provided by the user. Value can be up to 2048 characters long.
- retentionPeriod String
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segmentLength String
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title String
- Optional title provided by the user. Value can be up to 256 characters long.
- description string
- Optional description provided by the user. Value can be up to 2048 characters long.
- retentionPeriod string
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segmentLength string
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title string
- Optional title provided by the user. Value can be up to 256 characters long.
- description str
- Optional description provided by the user. Value can be up to 2048 characters long.
- retention_period str
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segment_length str
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title str
- Optional title provided by the user. Value can be up to 256 characters long.
- description String
- Optional description provided by the user. Value can be up to 2048 characters long.
- retentionPeriod String
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segmentLength String
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title String
- Optional title provided by the user. Value can be up to 256 characters long.
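As a quick reference for the VideoCreationProperties table above, here is a minimal C# sketch. It assumes the AzureNative alias from the example at the top of this page; all values are illustrative.
var creationProperties = new AzureNative.VideoAnalyzer.Inputs.VideoCreationPropertiesArgs
{
    Title = "Loading Dock (Camera 7)",         // up to 256 characters
    Description = "North loading dock feed",   // up to 2048 characters
    SegmentLength = "PT30S",                   // ISO8601; 30 seconds to 5 minutes, in 30 second increments
    RetentionPeriod = "P7D",                   // ISO8601; 1 day to 10 years, omit to retain indefinitely
};
Both SegmentLength and RetentionPeriod only apply when the topology kind is "live", and changing SegmentLength after the video resource has been created can cause errors when uploading content to the archive.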
VideoCreationPropertiesResponse, VideoCreationPropertiesResponseArgs        
- Description string
- Optional description provided by the user. Value can be up to 2048 characters long.
- RetentionPeriod string
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- SegmentLength string
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- Title string
- Optional title provided by the user. Value can be up to 256 characters long.
- Description string
- Optional description provided by the user. Value can be up to 2048 characters long.
- RetentionPeriod string
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- SegmentLength string
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- Title string
- Optional title provided by the user. Value can be up to 256 characters long.
- description String
- Optional description provided by the user. Value can be up to 2048 characters long.
- retentionPeriod String
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segmentLength String
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title String
- Optional title provided by the user. Value can be up to 256 characters long.
- description string
- Optional description provided by the user. Value can be up to 2048 characters long.
- retentionPeriod string
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segmentLength string
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title string
- Optional title provided by the user. Value can be up to 256 characters long.
- description str
- Optional description provided by the user. Value can be up to 2048 characters long.
- retention_period str
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segment_length str
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title str
- Optional title provided by the user. Value can be up to 256 characters long.
- description String
- Optional description provided by the user. Value can be up to 2048 characters long.
- retentionPeriod String
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segmentLength String
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate larger volume of storage transactions. Larger segments reduce the amount of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds to 5 minutes, in 30 seconds increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title String
- Optional title provided by the user. Value can be up to 256 characters long.
VideoEncoderH264, VideoEncoderH264Args      
- BitrateKbps string
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- FrameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- Scale
Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoScale
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- BitrateKbps string
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- FrameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- Scale
VideoScale 
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps String
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- frameRate String
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale
VideoScale 
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps string
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- frameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale
VideoScale 
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrate_kbps str
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- frame_rate str
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale
VideoScale 
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps String
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- frameRate String
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale Property Map
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
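A short C# sketch of the H.264 encoder settings above, for example as the video encoder of a custom encoder preset. The Type discriminator is an assumption based on the naming pattern of the other node types; all values are illustrative.
var h264Encoder = new AzureNative.VideoAnalyzer.Inputs.VideoEncoderH264Args
{
    Type = "#Microsoft.VideoAnalyzer.VideoEncoderH264",   // assumed discriminator
    BitrateKbps = "2000",   // cap the output at roughly 2 Mbps; omit to match the input quality
    FrameRate = "15",       // frames per second; must be greater than 0 and at most 300
    Scale = new AzureNative.VideoAnalyzer.Inputs.VideoScaleArgs
    {
        Mode = AzureNative.VideoAnalyzer.VideoScaleMode.PreserveAspectRatio,
        Height = "720",     // width is derived from the input aspect ratio
    },
};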
VideoEncoderH264Response, VideoEncoderH264ResponseArgs      
- BitrateKbps string
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- FrameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- Scale
Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- BitrateKbps string
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- FrameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- Scale
VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps String
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- frameRate String
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale
VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps string
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- frameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale
VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrate_kbps str
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- frame_rate str
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale
VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps String
- The maximum bitrate, in kilobits per second or Kbps, at which video should be encoded. If omitted, encoder sets it automatically to try and match the quality of the input video.
- frameRate String
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero, and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale Property Map
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
VideoPublishingOptions, VideoPublishingOptionsArgs      
- DisableArchive string
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- DisableRtspPublishing string
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- DisableArchive string
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- DisableRtspPublishing string
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive String
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing String
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive string
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing string
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disable_archive str
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disable_rtsp_publishing str
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive String
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing String
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
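Note that both flags are strings rather than booleans, and at most one of them may be 'true'. A minimal C# sketch, assuming the AzureNative alias from the example at the top of this page:
// Archive-only publishing: keep recording, but do not publish a low-latency RTSP playback URL.
var publishingOptions = new AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptionsArgs
{
    DisableArchive = "false",
    DisableRtspPublishing = "true",
};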
VideoPublishingOptionsResponse, VideoPublishingOptionsResponseArgs        
- DisableArchive string
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- DisableRtspPublishing string
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- DisableArchive string
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- DisableRtspPublishing string
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive String
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing String
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive string
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing string
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disable_archive str
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disable_rtsp_publishing str
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive String
- When set to 'true' content will not be archived or recorded. This is used, for example, when the topology is used only for low latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing String
- When set to 'true' the RTSP playback URL will not be published, disabling low latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
VideoScale, VideoScaleArgs    
- Height string
- The desired output video height.
- Mode
string | Pulumi.AzureNative.VideoAnalyzer.VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- Width string
- The desired output video width.
- Height string
- The desired output video height.
- Mode
string | VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- Width string
- The desired output video width.
- height String
- The desired output video height.
- mode
String | VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- width String
- The desired output video width.
- height string
- The desired output video height.
- mode
string | VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- width string
- The desired output video width.
- height str
- The desired output video height.
- mode
str | VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- width str
- The desired output video width.
- height String
- The desired output video height.
- mode
String | "Pad" | "PreserveAspect Ratio" | "Stretch" 
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- width String
- The desired output video width.
VideoScaleMode, VideoScaleModeArgs      
- Pad
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- PreserveAspectRatio
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When 2 dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- Stretch
- Stretches the original video so it is resized to the specified dimensions.
- VideoScaleModePad
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- VideoScaleModePreserveAspectRatio
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When 2 dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- VideoScaleModeStretch
- Stretches the original video so it is resized to the specified dimensions.
- Pad
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- PreserveAspectRatio
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When 2 dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- Stretch
- Stretches the original video so it is resized to the specified dimensions.
- Pad
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- PreserveAspectRatio
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When 2 dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- Stretch
- Stretches the original video so it is resized to the specified dimensions.
- PAD
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- PRESERVE_ASPECT_RATIO
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When 2 dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- STRETCH
- Stretches the original video so it is resized to the specified dimensions.
- "Pad"
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- "PreserveAspectRatio"
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When 2 dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- "Stretch"
- Stretches the original video so it is resized to the specified dimensions.
VideoScaleResponse, VideoScaleResponseArgs      
- Height string
- The desired output video height.
- Mode string
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- Width string
- The desired output video width.
- Height string
- The desired output video height.
- Mode string
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- Width string
- The desired output video width.
- height String
- The desired output video height.
- mode String
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- width String
- The desired output video width.
- height string
- The desired output video height.
- mode string
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- width string
- The desired output video width.
- height str
- The desired output video height.
- mode str
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- width str
- The desired output video width.
- height String
- The desired output video height.
- mode String
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch' then both width and height must be specified. Else if the mode is 'PreserveAspectRatio' then only one of width or height need be provided.
- width String
- The desired output video width.
VideoSequenceAbsoluteTimeMarkers, VideoSequenceAbsoluteTimeMarkersArgs          
- Ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- Ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges String
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges str
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges String
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
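The ranges value is a JSON-encoded array of [start, end] UTC timestamp pairs passed as a single string. A minimal C# sketch, assuming the AzureNative alias from the example at the top of this page:
var timeMarkers = new AzureNative.VideoAnalyzer.Inputs.VideoSequenceAbsoluteTimeMarkersArgs
{
    // One ten-minute range; additional [start, end] pairs can be listed in the same JSON array.
    Ranges = "[[\"2021-10-05T03:30:00Z\", \"2021-10-05T03:40:00Z\"]]",
};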
VideoSequenceAbsoluteTimeMarkersResponse, VideoSequenceAbsoluteTimeMarkersResponseArgs            
- Ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- Ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges String
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges str
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges String
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
VideoSink, VideoSinkArgs    
- Inputs
List<Pulumi.AzureNative.VideoAnalyzer.Inputs.NodeInput>
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- VideoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- VideoCreationProperties Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- VideoPublishingOptions Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- Inputs
[]NodeInput 
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- VideoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- VideoCreationProperties VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- VideoPublishingOptions VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs
List<NodeInput> 
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- videoName String
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs
NodeInput[] 
- An array of upstream node references within the topology to be used as inputs for this node.
- name string
- Node name. Must be unique within the topology.
- videoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs
Sequence[NodeInput] 
- An array of upstream node references within the topology to be used as inputs for this node.
- name str
- Node name. Must be unique within the topology.
- video_name str
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- video_creation_properties VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- video_publishing_options VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs List<Property Map>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- videoName String
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties Property Map
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions Property Map
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
VideoSinkResponse, VideoSinkResponseArgs      
- Inputs
List<Pulumi.AzureNative.VideoAnalyzer.Inputs.NodeInputResponse>
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- VideoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- VideoCreationProperties Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- VideoPublishingOptions Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- Inputs
[]NodeInputResponse
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- VideoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- VideoCreationProperties VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- VideoPublishingOptions VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs
List<NodeInputResponse>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- videoName String
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs
NodeInputResponse[]
- An array of upstream node references within the topology to be used as inputs for this node.
- name string
- Node name. Must be unique within the topology.
- videoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs
Sequence[NodeInputResponse]
- An array of upstream node references within the topology to be used as inputs for this node.
- name str
- Node name. Must be unique within the topology.
- video_name str
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- video_creation_properties VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- video_publishing_options VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs List<Property Map>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- videoName String
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties Property Map
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions Property Map
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
VideoSource, VideoSourceArgs    
- Name string
- Node name. Must be unique within the topology.
- TimeSequences Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- VideoName string
- Name of the Video Analyzer video resource to be used as the source.
- Name string
- Node name. Must be unique within the topology.
- TimeSequences VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- VideoName string
- Name of the Video Analyzer video resource to be used as the source.
- name String
- Node name. Must be unique within the topology.
- timeSequences VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName String
- Name of the Video Analyzer video resource to be used as the source.
- name string
- Node name. Must be unique within the topology.
- timeSequences VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName string
- Name of the Video Analyzer video resource to be used as the source.
- name str
- Node name. Must be unique within the topology.
- time_sequences VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- video_name str
- Name of the Video Analyzer video resource to be used as the source.
- name String
- Node name. Must be unique within the topology.
- timeSequences Property Map
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName String
- Name of the Video Analyzer video resource to be used as the source.
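VideoSource nodes read from an existing Video Analyzer video resource, which is typically how batch topologies re-process previously recorded content. A hedged C# sketch follows; the Type discriminator and the resource name are assumptions, so adjust them to your environment.
var videoSource = new AzureNative.VideoAnalyzer.Inputs.VideoSourceArgs
{
    Type = "#Microsoft.VideoAnalyzer.VideoSource",   // assumed discriminator
    Name = "videoSource",                            // must be unique within the topology
    VideoName = "recordedVideo001",                  // existing video resource to read from (illustrative name)
    TimeSequences = new AzureNative.VideoAnalyzer.Inputs.VideoSequenceAbsoluteTimeMarkersArgs
    {
        Ranges = "[[\"2021-10-05T03:30:00Z\", \"2021-10-05T03:40:00Z\"]]",
    },
};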
VideoSourceResponse, VideoSourceResponseArgs      
- Name string
- Node name. Must be unique within the topology.
- TimeSequences Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- VideoName string
- Name of the Video Analyzer video resource to be used as the source.
- Name string
- Node name. Must be unique within the topology.
- TimeSequences VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- VideoName string
- Name of the Video Analyzer video resource to be used as the source.
- name String
- Node name. Must be unique within the topology.
- timeSequences VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName String
- Name of the Video Analyzer video resource to be used as the source.
- name string
- Node name. Must be unique within the topology.
- timeSequences VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName string
- Name of the Video Analyzer video resource to be used as the source.
- name str
- Node name. Must be unique within the topology.
- time_sequences VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- video_name str
- Name of the Video Analyzer video resource to be used as the source.
- name String
- Node name. Must be unique within the topology.
- timeSequences Property Map
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName String
- Name of the Video Analyzer video resource to be used as the source.
Import
An existing resource can be imported using its type token, name, and identifier, e.g.
$ pulumi import azure-native:videoanalyzer:PipelineTopology pipelineTopology1 /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Media/videoAnalyzers/{accountName}/pipelineTopologies/{pipelineTopologyName} 
To learn more about importing existing cloud resources, see Importing resources.
Package Details
- Repository
- Azure Native pulumi/pulumi-azure-native
- License
- Apache-2.0