- Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.
- Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used to achieve transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));

await transcriber.ConnectAsync();

// Pseudocode for receiving audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to let developers build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

Additionally, the SDK comes with built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more information, check out the official AssemblyAI blog.

Image source: Shutterstock
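As a closing note for readers who want to try the snippets above: the SDK is distributed through NuGet, and AssemblyAI's documentation lists the package under the AssemblyAI package ID, so a sketch of the setup step would be:

```shell
dotnet add package AssemblyAI
```

After the package is referenced, the using AssemblyAI directives in the examples above resolve against it.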