Google.Cloud.VideoIntelligence.V1

Google.Cloud.VideoIntelligence.V1 is a .NET client library for the Google Cloud Video Intelligence API.
Note: This documentation is for version 3.4.0 of the library. Some samples may not work with other versions.
Installation

Install the Google.Cloud.VideoIntelligence.V1 package from NuGet. Add it to your project in the normal way (for example by right-clicking on the project in Visual Studio and choosing "Manage NuGet Packages...").
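If you prefer the command line, you can add the package with the .NET CLI instead, for example pinning to the version this documentation covers:

dotnet add package Google.Cloud.VideoIntelligence.V1 --version 3.4.0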
Authentication
When running on Google Cloud, no action needs to be taken to authenticate.
Otherwise, the simplest way of authenticating your API calls is to set up Application Default Credentials; the client library then picks up those credentials automatically. See Set up Application Default Credentials for more details.
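For local development, one common way to set up Application Default Credentials is via the Google Cloud CLI (this assumes you have the gcloud tool installed and a project configured):

gcloud auth application-default login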
Getting started

All operations are performed through VideoIntelligenceServiceClient. Create a client instance by calling the static Create or CreateAsync methods. Alternatively, use the builder class associated with each client class (e.g. VideoIntelligenceServiceClientBuilder for VideoIntelligenceServiceClient) as an easy way of specifying custom credentials, settings, or a custom endpoint. Clients are thread-safe, and we recommend using a single instance across your entire application unless you have a particular need to configure multiple client objects separately.
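As a minimal sketch of the builder approach, the following creates a client from a service account key file; the file path is a placeholder, and other builder properties such as Endpoint can be set in the same way:

VideoIntelligenceServiceClient client = new VideoIntelligenceServiceClientBuilder
{
    // Placeholder path to a service account JSON key file.
    CredentialsPath = "/path/to/service-account-key.json"
}.Build();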
Using the REST (HTTP/1.1) transport

This library defaults to performing RPCs using gRPC with the binary Protocol Buffer wire format. However, it also supports HTTP/1.1 and JSON, for situations where gRPC doesn't work as desired (typically due to an incompatible proxy or other network issue). To create a client using HTTP/1.1, specify a RestGrpcAdapter reference for the GrpcAdapter property in the client builder.
Sample code:
var client = new VideoIntelligenceServiceClientBuilder
{
    GrpcAdapter = RestGrpcAdapter.Default
}.Build();
For more details, see the transport selection page.
Performing the initial request

Perform an initial call to AnnotateVideo or AnnotateVideoAsync. This returns a long-running operation, which you can poll to check for completion and to retrieve the results.
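As a minimal sketch, the asynchronous variant of the first sample below looks like this (using the same sample video as that example):

VideoIntelligenceServiceClient client = await VideoIntelligenceServiceClient.CreateAsync();
Operation<AnnotateVideoResponse, AnnotateVideoProgress> operation = await client.AnnotateVideoAsync(
    "gs://cloud-samples-data/video/gbikes_dinosaur.mp4",
    new[] { Feature.LabelDetection });
// Wait (asynchronously) for the long-running operation to complete.
Operation<AnnotateVideoResponse, AnnotateVideoProgress> completed = await operation.PollUntilCompletedAsync();
AnnotateVideoResponse response = completed.Result;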
Sample code
Annotate labels within a video
// Create a client and start a label-detection request for a video in Cloud Storage.
VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.Create();
Operation<AnnotateVideoResponse, AnnotateVideoProgress> operation = client.AnnotateVideo(
    "gs://cloud-samples-data/video/gbikes_dinosaur.mp4",
    new[] { Feature.LabelDetection });
// Block until the long-running operation completes, then read the results for the single input video.
Operation<AnnotateVideoResponse, AnnotateVideoProgress> resultOperation = operation.PollUntilCompleted();
VideoAnnotationResults result = resultOperation.Result.AnnotationResults[0];
foreach (LabelAnnotation label in result.ShotLabelAnnotations)
{
    Console.WriteLine($"Label entity: {label.Entity.Description}");
    Console.WriteLine("Frames:");
    // Each label has one or more segments, each with a time range and a confidence score.
    foreach (LabelSegment segment in label.Segments)
    {
        Console.WriteLine($" {segment.Segment.StartTimeOffset}-{segment.Segment.EndTimeOffset}: {segment.Confidence}");
    }
}
Transcribe speech from a video
VideoIntelligenceServiceClient client = VideoIntelligenceServiceClient.Create();
// Request speech transcription, configuring the language and automatic punctuation
// through the video context.
AnnotateVideoRequest request = new AnnotateVideoRequest
{
    InputUri = "gs://cloud-samples-data/video/googlework_short.mp4",
    Features = { Feature.SpeechTranscription },
    VideoContext = new VideoContext
    {
        SpeechTranscriptionConfig = new SpeechTranscriptionConfig
        {
            LanguageCode = "en-US",
            EnableAutomaticPunctuation = true
        }
    }
};
Operation<AnnotateVideoResponse, AnnotateVideoProgress> operation = client.AnnotateVideo(request);
// Block until the long-running operation completes, then read the results for the single input video.
Operation<AnnotateVideoResponse, AnnotateVideoProgress> resultOperation = operation.PollUntilCompleted();
VideoAnnotationResults result = resultOperation.Result.AnnotationResults[0];
foreach (SpeechTranscription transcription in result.SpeechTranscriptions)
{
    Console.WriteLine($"Language code: {transcription.LanguageCode}");
    Console.WriteLine("Alternatives:");
    foreach (SpeechRecognitionAlternative alternative in transcription.Alternatives)
    {
        Console.WriteLine($"({alternative.Confidence}) {alternative.Transcript}");
    }
}